The Irony Of Preventing Security Failures

Gadi Evron muses about the possibility that a successful security program might result in difficulty justifying future spending.

The Irony Of Preventing Security Failures, Gadi Evron, Dark Reading

But what if nothing happens because we stopped it? That may be the most dangerous option in the long term […] The obvious risk is that the security industry will be accused of crying wolf and not believed next time when something serious happens.

Roll back to 2001 and the hype surrounding Code Red. The lead story on major news outlets was the impending implosion of the Internet. The Internet didn’t implode, and the hype went away. Slammer, circa 2003, snuck up on the world and wreaked havoc: major corporate networks imploded and the Internet hiccupped for a few hours. I’d like to think that Code Red was pretty good at culling out the incompetent sysadmins and raising awareness of patching and hardening amongst the competent but clueless, and that Slammer was pretty good at culling out the incompetent IT departments and raising the awareness of clueless CIOs and executives.

Do we need to fear our own success?

Here’s a proposal. Simply allow your peers (or competitors) to continue to fail at security, and use their failures to justify continued spending on your own success. You shouldn’t have too much trouble finding peer failures to use as benchmarks. I’m pretty sure that the average executive can observe the impact of security events on peers and competitors, compare it with the lack of similar events internally, and associate the difference with the competence and funding of internal IT. If corporate executives can’t figure that out on their own, I’ll bet we can come up with a couple of PowerPoint decks with impressive-looking but indecipherable charts to help them out.
