Friday, March 12, 2010

Wall of Shame: Patch Management

I like to compare patch management to car maintenance. Nobody questions the importance of car maintenance, and it serves as a good metaphor. A car is a series of finely engineered, interwoven, interdependent parts that must run in perfect harmony in order to function. If something breaks, wears out or stops running properly and is neglected, it can cause a chain reaction that ends in the car breaking down. Caught early enough, it's usually a simple procedure to fix - a few hours in for a service and it's back on the road.

Now there are plenty of excuses for why people don't get it done -
"I'm too busy to get it serviced right now."
"I can't afford it right now."
"I'm not paying a mechanic to do it when I can do it myself"
Blah blah blah.

But everyone knows the net effect of not getting it serviced. Think of a cab. That car is on the road every day, usually 8 hours plus (most cabbies I've met work much longer hours, btw), no less than 5 days a week. Do you think they get their car serviced? I'm sure some stretch out the time between services as much as possible, but most appreciate that the car is their livelihood and they cannot function without it. Cabs can have 300,000 km, 500,000 km or even more on the clock. They don't last that long without ever being serviced.

But let's be real - that is why they get serviced. It's almost a forced tax - extortion in a way - you pay this money regularly or your car dies. Everyone understands the consequences of not servicing the car.


Nobody has made that association yet with patching. Why not? Because nobody understands the full consequences of not patching.

Your IT systems are exactly the same - a series of finely engineered, interwoven, interdependent parts that must run in total harmony in order to function properly. Likewise, a robust security architecture is made up of many controls that provide depth of coverage. If one of those controls breaks, it can lead to a total compromise of the directly affected system or application, of other interconnected systems and applications, and of all your data. Security is only as good as the weakest link in the chain - after that it all collapses like a house of cards.

Here's what I've seen:
  • A Windows desktop environment for 1,000+ users left unpatched for an 'indeterminate period' because each of the two Windows system administrators assumed the other had been doing it, and they never communicated with each other.
  • In most cases though, it's 50/50 whether Windows desktops get patched.
  • Windows servers are usually patched but again, flip a coin.
  • A Windows Group Policy that won't allow end users to enable Automatic Updates on their desktops because the central IT division wants to "control the patch management process" - in this case an XP image missing over a year's worth of patches and service packs (a minimal check for exactly this policy setting is sketched just after this list).
  • Solaris servers, once they go into production, are almost never, ever patched (in some instances 2+ years). Many of these are Internet facing. I see this almost everywhere I look, I might add.
  • Internet Explorer 6 used as the primary web browser within the enterprise for years, missing several security patches because they broke core company applications. Risk permanently accepted.
  • Network devices (routers, switches, firewalls, proxies) patched haphazardly. In some cases never.
  • Applications (e.g. desktop applications or even larger scale enterprise business applications installed on servers) are typically ignored.
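
(An aside on that Group Policy point: below is a minimal sketch - assuming a Windows desktop and Python's standard winreg module, so purely illustrative - of how you might check whether policy is blocking Automatic Updates on a given box. The registry key and the NoAutoUpdate value are the standard Windows Update policy locations; how you run this across a fleet and report on it is entirely up to you.)

```python
# Minimal sketch: has Group Policy switched Automatic Updates off on this box?
# Assumes Windows and Python's standard winreg module.
import winreg

# Standard Windows Update policy location in the registry.
AU_POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"

def automatic_updates_disabled_by_policy():
    """Return True if policy explicitly disables Automatic Updates."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, AU_POLICY_KEY) as key:
            value, _ = winreg.QueryValueEx(key, "NoAutoUpdate")
            return value == 1
    except OSError:
        # No policy key or value present: nothing is forcing updates off.
        return False

if __name__ == "__main__":
    if automatic_updates_disabled_by_policy():
        print("Group Policy is blocking Automatic Updates on this machine.")
    else:
        print("No policy here is blocking Automatic Updates.")
```
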
The state of play now is that attackers are increasingly targeting applications, and desktop applications in particular. Not just the operating system, not just the browser. Desktop applications.

See here.

This gives you an indication of what applications are being targeted TODAY. Let's take your operating system out of the equation for a moment. Are you running Adobe Acrobat? If the answer is yes and you are not patching your applications, then there is a strong chance of being compromised. Even if you are patching, guess what - even the tools you use for patching are experiencing issues of their own.
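
To make that concrete, here's a rough sketch (again assuming Windows and Python's standard winreg module, and again purely illustrative) that walks the standard Uninstall hive and lists any installed Acrobat or Adobe Reader versions. It only tells you what you're actually running - what counts as "patched" is something you'd have to check against Adobe's current advisories yourself.

```python
# Rough sketch: list installed Adobe Acrobat/Reader versions from the standard
# Windows Uninstall registry hive. Illustrative only - compare what it prints
# against Adobe's current security advisories yourself.
import winreg

UNINSTALL_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"

def installed_adobe_products():
    """Yield (display name, version) for anything that looks like Acrobat/Reader."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL_KEY) as root:
        subkey_count, _, _ = winreg.QueryInfoKey(root)
        for index in range(subkey_count):
            try:
                with winreg.OpenKey(root, winreg.EnumKey(root, index)) as entry:
                    name, _ = winreg.QueryValueEx(entry, "DisplayName")
                    version, _ = winreg.QueryValueEx(entry, "DisplayVersion")
            except OSError:
                continue  # entry is missing the values we care about
            if "acrobat" in name.lower() or "adobe reader" in name.lower():
                yield name, version

if __name__ == "__main__":
    for name, version in installed_adobe_products():
        print(f"{name}: {version}  <- check this against Adobe's latest advisory")
```

Note that on 64-bit Windows, 32-bit applications register under SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall as well, so in practice you would check both hives.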

If you want real-world examples and research on actual patch management practices, check out Project Quant on Patch Management as reported by Securosis (very illuminating, btw):

The key findings from the survey are:
  • Most companies were driven by compliance regulation, usually more than one regulation applied
  • Process maturity was generally high for operating systems, but low for other asset types such as applications and drivers (see chart)
  • Companies tend to utilize multiple vendor and 3rd-party tools in their patch management process
  • 40% of companies depend on user complaints as one factor for patch validation
My interpretation of this report (bearing in mind that it was mostly US-based, and certainly not representative of the security landscape of Australia!) -
  • It is not even the fear of their "car" (IT infrastructure) breaking down - it is the strong arm of the law and the financial consequences that drive people to patch (i.e. the point where you have already been mugged, beaten and left lying in a ditch somewhere because your car broke down).
  • Acknowledgement that not enough attention is being paid to application patching.
  • Companies that are patching are using multiple tools to patch (duh).
  • Testing involves shoving patches in - possibly into a limited test bed if you're lucky - and waiting for user complaints.
To be fair, I read this report and was somewhat surprised at how upbeat the document sounded. I firmly believe that if it were done in Australia we would come off far, far worse. Our compliance landscape is not the same as the US's. Likewise, Australians adopt a laissez-faire attitude ("She'll be right, mate!"), meaning that nobody really takes things seriously until the shit hits the fan.

If you think your patch management process is robust enough, ask yourself:
- Does your patch management include all operating systems and devices within your environment?
- Are applications both for server and desktop in scope for your patching regime?
- Can you verify that every patch you pushed was actually applied? (See the sketch after this list.)
- When patches fail (and they do) do you follow up on each instance to rectify the situation?
- Do you have proper asset management in place and are you able to ensure beyond a shadow of a doubt that no assets are left untracked and unmanaged?
- When a patch breaks a critical business application do you have a plan beyond rolling back the patch?
- Do you have a testing strategy that goes beyond user complaints when a patch breaks something?
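
On those middle questions, here is a minimal, self-contained sketch of the reconciliation step I mean: compare what you pushed against what each host reports as applied, and flag the gaps for follow-up. The patch IDs and host names below are made-up placeholders - in practice the data would come from your deployment tool and your asset inventory.

```python
# Minimal sketch of patch reconciliation: pushed vs. actually applied.
# All identifiers below are hypothetical placeholders.

pushed_patches = {"KB1111111", "KB2222222", "KB3333333"}  # what you deployed

reported_applied = {                                      # what hosts report back
    "desktop-001": {"KB1111111", "KB2222222", "KB3333333"},
    "desktop-002": {"KB1111111"},
    "server-db01": {"KB1111111", "KB3333333"},
}

def find_gaps(pushed, reported):
    """Return {host: patches that were pushed but never applied}."""
    return {host: pushed - applied
            for host, applied in reported.items()
            if pushed - applied}

if __name__ == "__main__":
    gaps = find_gaps(pushed_patches, reported_applied)
    if not gaps:
        print("Every pushed patch is reported as applied.")
    for host, missing in sorted(gaps.items()):
        print(f"{host}: missing {', '.join(sorted(missing))} - follow up, don't assume")
```

If you can't produce those two lists in the first place, that's your asset management question answered as well.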

If the answer is no to any of the above, you need to lift your game.

Lift it quickly too, I might add, before it's too late.

- J.

1 comment:

@cloudjunky said...

I must be the only one commenting on your posts today ;)

Ok, taking the car analogy a little further - that makes you the mechanic, to a certain degree.

Security professionals know why the customer should patch - because it matters more to security effectiveness than firewalls or IDS/IPS.

So as the 'mechanic', is it more important to deal with the 'how' rather than the 'why'? E.g. for that 24/7 system, could we move to a VM, snapshot it, patch the snapshot, test, then roll it into production, narrowing the production outage window? (This is a basic example.)

Also, in relation to C-level people, this is a level of detail I am sure they believe is happening. If they were faced with something like 'by only patching every 6 or 12 months you expose your entire organisation to loss that, at the end of the day, falls on your head', then they would put out an edict saying that patching happens within 15-30 days of patch release.

Patching is a weird issue because it is boring yet vital - a little like breathing, people take it for granted ;)

Patching is vital, and everyone needs to bring it as close to within 24 hours of release as possible.

Good post!