
A Simple Solution, Well Executed

I’m trying out a new mantra:
All other things being equal, a simple solution, well executed, is superior to a complex solution, poorly executed.
Since data destruction[1] discussions seem to have resurfaced, I’ll try it out on that topic.

In Imperfect, but Still Useful, Jim Graves writes:
“Almost any method of data destruction is so much better than nothing that any differences between methods are usually insignificant.”[2]
As Jim indicates, choosing a technical method or algorithm for destroying data shouldn’t be the problem that we spend significant resources solving. Ensuring that all media gets processed with any destruction method is the problem that needs to be solved. In other words, it is critical that all data on all media is destroyed, and that no media bypasses the process that destroys the media. This is a different problem that requires a different solution and a different skill set than the problem of determining the best method of destroying data. The problem that needs to be solved is one of completeness of coverage, not one of completeness of destruction. It’s a process problem, not a technology problem.

In 2002 our organization had an HGE (Headline Generating Event) related to improper disposal of media. The reaction from the technical people (me) was to research the effectiveness of various media deletion techniques, which inevitably went down the path of data remanence and magnetic force microscopes. It was pretty obvious at the time that the problem wasn’t one of determining the correct number of wipes to which to subject our media, but rather one of ensuring that all media get some form of destruction, even if the destruction isn’t perfect. We didn’t need to make the data completely unrecoverable by all known technology. We needed to make the data significantly more expensive to recover than it was worth, and we needed to make sure that of the tens of thousands of disks that we disposed of every year, as few as possible leaked through the destruction process un-destroyed.

In my opinion the real problem (completeness of coverage) needed to be addressed by making the data destruction process as simple as possible, thereby increasing the probability that the process would actually get executed on all media. That isn’t a technical problem, it’s a process and person problem. I used to work on an assembly line. For any process involving humans and repetition, simple is good, and the process that the person follows must be person-proof. In this case, a simple and person-proof process was what was needed.

Unfortunately, our internal legal staff was driving the bus, and they focused on the technical problems of numbers of passes and zeros versus ones. Attempts to steer them toward simple destruction processes with a low probability of being bypassed were not successful, nor were attempts to weigh the value of the data against the effort required to recover it. We ended up with a complex, time-consuming process that ensures the media that go through it are unrecoverable, but does little to ensure that no media escape it.

In a related discussion at Black Fist Security, the principal of the blog writes:
“What if you had one person wipe the drive with all zeros. Then have a second person run a script that randomly checks a representative sample of the disk to see if it finds anything that isn't a zero.”[3]
That’s the kind of person-process thinking that can solve security problems. It’s simple (one pass plus a sampling check) and has a good chance of being well executed on any media that is subject to the process.
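For illustration, here’s a minimal sketch of that verification step in Python. It reads a random sample of blocks from a wiped drive and flags anything that isn’t zero; the device path, block size, and sample count are my own assumptions, not details from the Black Fist Security post.

#!/usr/bin/env python3
# Spot-check a wiped drive: read a random sample of blocks and flag any
# non-zero bytes. A sketch of the "second person" verification step; the
# block size and sample count below are illustrative assumptions.
import os
import random
import sys

BLOCK_SIZE = 4096      # bytes per sampled block (assumed)
SAMPLE_COUNT = 1000    # number of random blocks to inspect (assumed)

def verify_wipe(device_path):
    """Return True if every sampled block reads back as all zeros."""
    with open(device_path, "rb") as dev:
        dev.seek(0, os.SEEK_END)
        total_blocks = dev.tell() // BLOCK_SIZE
        for _ in range(SAMPLE_COUNT):
            block_index = random.randrange(total_blocks)
            dev.seek(block_index * BLOCK_SIZE)
            if any(dev.read(BLOCK_SIZE)):   # any non-zero byte fails the check
                print("non-zero data found at block", block_index)
                return False
    return True

if __name__ == "__main__":
    device = sys.argv[1]  # e.g. a raw device such as /dev/sdX (read-only access)
    if verify_wipe(device):
        print("sample clean: wipe looks complete")
        sys.exit(0)
    print("wipe failed the spot check")
    sys.exit(1)

A single pass of zeros plus a spot check like this won’t satisfy a forensic purist, but it’s simple enough to run on every drive, which is the point.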

References:


[1] Single drive wipe protects data, research finds. Security Focus.
[2] Imperfect, but Still Useful. Jim Graves, Graves Concerns.
[3] Fear and Terror! All your data are being stolen! Black Fist Security.
