Pretty controversial topic. See here and here.
Reading the summary at first, I started getting angry. I mean, let's face it: a user who ignores sage advice and continues to open every attachment blindly deserves what's coming. It's like a person with a trashed liver who keeps drinking. It will be the death of them one day.
But then I realised that perspectives that challenge our own encourage growth. We examine our values and beliefs and what underpins them. Are our foundations made of bricks and mortar, or are they a house of cards, built on delusions, misinformation and misunderstanding?
"The advice offers to shield them from the direct costs of attacks, but burdens them with increased indirect costs, or externalities. Since the direct costs are generally small relative to the indirect ones they reject this bargain. Since victimization is rare, and imposes a one-time cost, while security advice applies to everyone and is an ongoing cost, the burden ends up being larger than that caused by the ill it addresses."
Several thoughts come to mind on this.
Firstly, I have touched in the past on pragmatic approaches to security. I immediately picture my 'holy grail' brethren preaching to the masses about the strictest defences, utterly best-practice and clearly not commensurate with the level of risk. Part of me admires their bluntness and ardour, but I digress.
However, comments like this clearly articulate that this approach is doomed to failure.
But I am oversimplifying - Cormac Herley isn't saying that these recommendations are inappropriate. He's gone beyond that: any expectation that users will change their behaviour is doomed to failure.
Secondly, the equations in the document and the risk calculations are actually missing vital information. They make no reference to the actual impact on the user - purely a figure based on cost: cost of remediation, cost of user time, lost productivity.
Mr Herley proceeds to explain the inconvenience to the user of changing their password and making it longer, for the perceived short-term benefit, multiplied by the number of users. He clearly hasn't spoken to people who have had their accounts ransacked, lost hundreds of thousands of dollars, had their identity stolen, lost their business, and so on. I bet you any money those people sure as hell learned from their mistakes. What's more, I dare say they would assign a much higher weighting in the risk calculation.
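To make the point concrete, here is a back-of-the-envelope sketch of the trade-off as I read it. Every figure below is invented purely for illustration - none of them come from the paper - but they show how the conclusion flips once you weight in the impact on actual victims rather than just direct costs:

```python
# Hedged illustration: aggregate cost of following security advice vs
# expected harm from attacks. All numbers are made up for illustration.

def aggregate_advice_cost(users, minutes_per_user_per_year, hourly_rate):
    """Ongoing cost of the advice, summed over the whole user population."""
    return users * (minutes_per_user_per_year / 60) * hourly_rate

def expected_harm(users, victimization_rate, direct_loss, indirect_loss):
    """Expected loss across the population, including victim impact."""
    return users * victimization_rate * (direct_loss + indirect_loss)

users = 1_000_000
cost = aggregate_advice_cost(users, minutes_per_user_per_year=30, hourly_rate=25)

# Counting only small direct costs, the advice looks like a bad bargain...
harm_direct_only = expected_harm(users, 0.005, direct_loss=100, indirect_loss=0)

# ...but weight in ransacked accounts, stolen identities and lost businesses:
harm_with_impact = expected_harm(users, 0.005, direct_loss=100, indirect_loss=20_000)

print(cost, harm_direct_only, harm_with_impact)
```

With these hypothetical inputs the advice costs more than the direct harm it prevents (Herley's argument), yet far less than the harm once victim impact is counted - which is exactly the term I think the calculation is missing.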
More than the actual calculation, the arguments are built on a faulty premise. The cost of security is high because of the need to provide assurance - trust - in our communications and electronic transactions. If everyone across the world tomorrow believed that nothing they did online could be trusted, e-commerce as we know it would die. Sure, many already don't trust it, but that isn't a universal belief - it is human nature to trust. So playing devil's advocate: does this mean that the cost of providing that assurance is a waste of time? Is that the point he's making? Interesting possibility. Alas, these intangibles aren't even acknowledged. To not consider, or at least identify, unquantified variables or areas for further research smacks of bad science to me.

Conversely, trying to quantify benefit is also delusional. I mean, let's face it - what is the benefit of security? Zero incidents? No investigative clean-up costs? Wow. Ok. How is a zero benefit greater than a cost? Maths ain't my strong suit, but clearly the quantification of benefit needs as much work as the definition of cost.
Thirdly, with the above point in mind, if users are not doing the correct cost-benefit analysis then they do not understand the risk! Period - and guess what? This is an education problem!! Ok, so your users don't understand what could happen if they get infected with malware. Show them some horror stories. I do believe that security education is a necessity - and I do agree that it has diminishing value. But when it comes to articulating risks - clearly pointing out the full breadth and depth of what could happen, however rare - that is where security education really counts.
If someone still chooses to accept the risk, knowing all this - and, more importantly, you know they understand - then that is a personal decision. You, as their advisor, have done all that can be done. If, however, a decision is made in ignorance - not understanding, not caring or, worse, dismissing your advice - then we must look at ourselves and ask: where did we go wrong? Was it us or them? What could we do better?
Fourthly, the examples are poor. His use of spammers neglects to mention that even moderately effective spam filters will prevent most spam from being delivered to an inbox. What is the direct cost to a user of deleting it, as they would any other email they didn't care about? A hundred percent of SSL errors are false positives? Please! He didn't even cite where he obtained these figures or how they were derived. While MITM attacks are exceedingly rare in the overarching scheme of things (being honest, a drive-by compromise is far easier), they do occur. To put the blinkers on to this reeks of ignorance. The argument that checking phishing URLs is an imposition on user time? This is analogous to saying that telling people to look both ways before crossing the road is a waste of time.
"A main finding of this paper is that we need an estimate of the victimization rate for any exploit when designing appropriate security advice."
So in other words, he is really saying that we need evidence of true likelihood in order to quantify risk (which will enable us to define appropriate controls). Fair call. Unfortunately, this is the hardest part of any risk assessment. Anyone who actually does this stuff in InfoSec will tell you: risk assessments, while vital, are the art of guesstimation. Literally. It is the art of sticking your finger into the air, trying to figure out which way the wind is blowing, and determining whether it will continue to blow that way. You provide me with stats on the likelihood that a desktop user will be hit with Conficker, and I'll give you the true value of an asset and present you with a one hundred percent accurate risk assessment. Har har.
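The guesstimation problem fits in one line of arithmetic: the standard expected-loss formula is trivial, but every input to it is a guess. A toy sketch, with interval estimates I have invented for illustration:

```python
# Toy illustration of why risk assessment is guesstimation: the formula
# is trivial, but every input is an interval, not a number.
# All ranges here are invented.

def expected_loss(likelihood, impact):
    """Annualised expected loss = probability of the event x cost if it happens."""
    return likelihood * impact

# Nobody can hand you the true likelihood of, say, a worm infection,
# so you bound it instead, and the answer spans an order of magnitude.
low = expected_loss(likelihood=0.001, impact=50_000)   # optimistic guess
high = expected_loss(likelihood=0.02, impact=50_000)   # pessimistic guess
print(low, high)
```

The control you pick can look justified at one end of that range and absurd at the other, which is exactly why demanding hard victimization rates, while fair, is easier said than done.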
"Second, user education is a cost borne by the whole population, while offering benefit only to the fraction that fall victim. Thus the cost of any security advice should be in proportion to the victimization rate."
So really, he's saying we need to include the net impact on the userbase of a control to calculate its true cost for implementation. Ok, I jive. What about benefit? Hrmm.
"Third, retiring advice that is no longer compelling is necessary. Many of the instructions with which we burden users do little to address the current harms that they face. Retiring security advice can be similar to declassifying documents, with all cost and no benefit to the person making the decision."
Such as... ? There's a difference between old advice and just plain old bad advice. I've seen more of the latter than the former. Give us an example!
"Fourth, we must prioritize advice. In trying to defend everything we end up defending nothing. In attempting to warn users of every attack, and patch every vulnerability with advice we have taken the user (our boss's boss's boss) off into the weeds. Since users cannot do everything, they must select which advice they will follow and which ignore."
This is irritating on numerous fronts. It presupposes that users have a buffer that will overflow or clobber existing information if you tell them too much. While I do agree that users can make some dumb decisions, it presumes that they really are lemmings - which is offensive. I don't dispute that we need to take a risk-based approach in prioritising our time and resources to protect against the threats that matter. However, users can learn. It might be an uphill battle, and there will always be outliers or people who refuse. But that doesn't make it a waste of time.
My approach to security isn't to focus on user education. It is to make security so simple, so embedded, automated and ingrained into everyday processes that users have to go out of their way to do the wrong thing.
To be fair, I found myself agreeing with the conclusions one hundred percent. Hey, at least they're reasonable. But are they practical? We do need to articulate the risks better. But we can only do that with the information at hand. Quantifying the likelihood of malware infection is certainly more difficult than calculating the risk of natural hazards (fire, flood, etc.). Will we ever get there? Hard to say.
In any case, while the conclusions were sound, the maths supporting them was terrible. The paper was rife with poor reasoning, questionable methods and unsubstantiated claims, with no regard for the additional, influencing factors which could spur further research.
Hey, at least I know my foundations are solid.