"It’s difficult to get a man to understand something when his salary depends on him not understanding it." - Al Gore (shamelessly ripped from 'Beautiful Security', Mark Curphey's chapter)
I had a discussion today with two other security professionals about security education and why it fails. This is another common failing within security. I'm going to break this into two categories:
- What security guys do wrong.
- What everyone else does wrong.
What security guys do wrong
For starters, we're not prioritising well.
We try to target user behaviour because that's where the bulk of root causes stem from, right? I mean, it makes sense on a superficial level. Users are tricked by social engineering attempts, they run executables they shouldn't, they hold doors to secure areas open for total strangers, etc. You get the idea. But is this a failing of the user, or a failure of the system that allows, if not perpetuates, this behaviour?
Good question. We'll get back to that another day though.
But right now, let's say it's our fault. We blame the (l)users because it's easier than admitting the truth. Our policies are crap because they're unenforceable. We don't hold people to account. We don't embed our controls into processes or otherwise automate things to make security as automatic as possible.
But what if we start being judicious in our education and, say, educate developers on secure programming practices: focus our efforts on them understanding the vulnerabilities, how they manifest in crappy code, and demonstrating alternative methods for cutting code? Sure, you won't solve the user behaviour problem, but I bet your code will improve significantly. What if you focus on getting your administrators to harden their build environments? Yes, it won't stop stupid user behaviour - but it will help harden your environments.
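To make that concrete, here's a minimal sketch of the kind of before-and-after example I mean, using SQL injection as the vulnerability (Python with an in-memory SQLite database; the function names and schema are mine, purely for illustration):

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # DANGEROUS: user input is concatenated straight into the SQL string.
    # Input like "x' OR '1'='1" rewrites the meaning of the query.
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # The alternative: a parameterised query. The driver treats the
    # input strictly as data, never as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demo setup
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])

malicious = "x' OR '1'='1"
leaked = find_user_vulnerable(conn, malicious)  # returns every row in the table
safe = find_user_safe(conn, malicious)          # returns no rows
```

Showing a developer the leaked result next to the safe one teaches more in five minutes than a policy document ever will.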
We need to start picking the battles we can win, one day at a time.
How can you tell if you are on the winning side? Here's a simple test - if you're relying on your users to be secure by having a solid education and policy in place but lack controls to detect or enforce them, then you are doing it wrong.
I've spent massive amounts of time trying to educate users through broad campaigns on the do's and don'ts of computer security. While I was proud of the work I did in that space, I can honestly say now that those efforts were misguided and misspent. There were far better targets where that time and energy would have been better spent.
What everyone else does wrong
This comes down to one simple axiom - people screw up because they don't know any better. Or perhaps, don't want to know any better.
Let's talk about programmers (hey, they're easy to bash on). They are often flogged to churn out functioning code in short sprints, regardless of other factors. So it's only natural that security doesn't make the cut. Meeting deadlines, stability, functionality, interoperability - these are the true principles of today's programmer. Can we blame them? No - not at all. They have the same uncaring masters we do. Security isn't prioritised. And much like the user, they aren't given proper alternatives. So, to summarise - they aren't educated on security matters, they are rushed, and they aren't given reasonable alternatives.
Let's digress for a sec and take a quick peek at four comp sci undergraduate degrees. I say comp sci because for this purpose, I want to compare apples with apples. And these were picked off the top of my head, btw:
RMIT, University of Melbourne, Monash, Latrobe.
Not one of them offers a compulsory security unit in any of their bachelor degrees! (Although there are some standout electives, let's be clear that NONE of them offer a mandatory security unit.)
It is no wonder developers are churning out insecure code. We shouldn't be blaming them; we need to blame the institutions responsible for qualifying them. As a profession we need to be more involved in the education sector and press for these fundamental skill shortages to be addressed.
Similar sob stories exist for the business stakeholders out there, the project managers and the like. They are paid to get a job done, on time, on budget (or under if they can!). Quality is often a negligible factor in whether they are deemed successful. Security is barely a blip on the radar for many of them. It is hard to get them to care when they are paid not to. If you're successful at engaging them early, can obtain buy-in and organisational support, and embed your own processes into a project so that you minimise its burden and help it come in on time (thus helping the PM be successful), you might be able to win the battle.
The same goes for operations managers. They're often there to keep the ship afloat. Uptime is more important than security. Unless you have an operations manager with an ounce of security savvy, it's hard to justify applying a patch if it means downtime on core production hosts. I've heard absolute horror stories of infrastructure operations managers who refused to apply patches for fear of causing downtime. Those same patches they chose not to apply allowed Conficker onto their network. I think a lot of us have heard similar war stories.
But yes, Mark Curphey/Al Gore hit the nail on the head here.
So where to from here?
Honestly, this is a failing of our profession.
We talk about raising security education and awareness. I'm a big believer in it. However, our efforts are misguided. Nobody tells us how or where to spend the time and energy, so we learn by trial and error. It's time to stop wasting our time and start chalking up wins on the board.
We do know better. We have no excuses.