I don't want to delve too much into the article, but it got me thinking about where I stand on security, why I do what I do, where my beliefs lie, and so on.
For the uninitiated, essentially there are three schools of thought -
Full Disclosure - Disclose all vulnerabilities as soon as they are found, with full details to replicate them, and no regard for timing or for giving vendors a chance to fix them.
Partial Disclosure - Disclose the vulnerabilities, but with conditions attached. The conditions range from releasing enough detail to identify the flaw - not enough for script kiddies to copy/paste into Metasploit, but enough for a serious coder or someone with source code access to replicate it - to notifying the vendor first and allowing them time to fix it. Often termed "responsible disclosure" (accurately or not).
Zero Disclosure - Vulnerabilities should never be disclosed, or at the very least, not without a fee.
There are compelling arguments for and against all schools of thought.
Full Disclosure proponents argue that security is best promoted by being transparent about findings, and that releasing them into the wild forces vendors to respond to defects quickly. The idea is that vendors will release a fix quicker than attackers can exploit the flaw. The arguments against full disclosure hinge predominantly on the fact that vendors get no chance to respond to the defects, and thus security globally is compromised.
Partial Disclosure suggests that disclosing vulnerabilities is essential, but that conditions need to be attached in order to protect everyone's best interests. If a vulnerability is disclosed in full, it can be replicated by anyone; if it isn't disclosed to the vendor first, they are given no time to respond to the defect and provide a fully supported patch. Arguments against hinge predominantly on whose conditions get attached to the vulnerability - which, to be truthful, vary wildly.
Finally, there is zero disclosure. The reasoning varies from that given in the blog posts above to the view that full disclosure is little more than vulnerability researchers using public forums to trumpet their own horns and boost their own egos. Security isn't served by full disclosure because everyone is simply put at risk; the benefits are inherently outweighed by the overwhelming level of risk everyone is exposed to. Arguments against zero disclosure range from exploits being kept secret, waiting to be used by more unscrupulous hackers (a practice which already goes on), to promoting the ransoming of exploits to the highest bidder, leading to further exploitation and risk.
Honestly, there's more I could say on all three counts, but that's the issue in a nutshell.
So - what camp do I fall into?
Personally, I am a partial disclosure person.
- Disclose your vulnerabilities; give the vendor the full exploit once you've found it.
- Allow the vendor an appropriate amount of time to remediate and release a patch (bearing in mind that the gears in large enterprise environments take a while to turn, but they do turn).
- Stay in touch with the vendor, try to keep lines of communication open to track the status of your vulnerability.
- If you don't hear from them, or an appropriate time frame for the release of the patch isn't given (subjective, I know - but you want to give the vendor every opportunity), disclose it publicly but don't give all the details. Just give enough to explain the root cause and provide remediation/mitigation advice.
Disclosure should never be (primarily) about getting kudos; that should be a secondary benefit to promoting security. I'm also not saying vulnerability researchers shouldn't be paid. I think they provide a valuable, if not essential, service to the security community and the world at large. However, mechanisms need to be put in place to ensure that they are paid appropriately and that their findings don't fall into the wrong hands.
Reading posts such as the Anti Sec blog got me thinking. It also got me mad - not just for the fallacious logic, but for assuming that the motive of all security people is to make money and that our efforts (ours being the security industry's) go towards propping up undeserving corporate or government entities. Sure, that element exists, and the increasing commercialisation of the industry has attracted more than its fair share of hucksters. But I don't believe that's why we're in this.
Once upon a time, I was interested in security and held some notions of an anti-authoritarian nature. Maybe I was interested because of my anti-authoritarian nature; I don't remember. What I do remember is how that changed over time, and how my perception of information security became something entirely different.
These days, I like to think that securing, or at least working to secure, an environment is a far, far more difficult prospect than breaking into it. So much so that when I read posts like Anti Sec's, I can't help but wonder whether they have ever sat on the other side of the fence. I truly believe that if a black hat could spend a week doing what I do, they'd see there is a far greater challenge in working with businesses to fix their policies, procedures and practices - over and beyond the technology - to enhance security.
I understand that the proof is in the pudding - if software can still be exploited with a 0-day, then all your policies, procedures and so on can amount to nothing at the end of the day. But still, if our role in the world is to do what we can to make it a better place, I cannot see how compromising systems and applications, selling off sensitive data to organised crime, or supporting spammers, fraudsters and child pornographers (as Anti Sec proposed) helps to achieve that aim.
At least the traditional 'old school' hackers were in it for the exploration and learning, as much as for self-preservation.
- J.