A Simple Solution, Well Executed

I’m trying out a new mantra:
All other things being equal, a simple solution, well executed, is superior to a complex solution, poorly executed.
Since data destruction discussions have resurfaced, I’ll try it out on that topic.

In Imperfect, but Still Useful, Jim Graves writes:
“Almost any method of data destruction is so much better than nothing that any differences between methods are usually insignificant.”[2]
As Jim indicates, choosing a technical method or algorithm for destroying data shouldn’t be the problem we spend significant resources solving. Ensuring that all media gets processed with some destruction method is the problem that needs solving. In other words, it is critical that all data on all media is destroyed and that no media bypasses the destruction process. This is a different problem, requiring a different solution and a different skill set, than determining the best method of destroying data. The problem that needs to be solved is completeness of coverage, not completeness of destruction. It’s a process problem, not a technology problem.

In 2002 our organization had an HGE (Headline Generating Event) related to improper disposal of media. The reaction from the technical people (me) was to research the effectiveness of various media deletion techniques, which inevitably went down the path of data remanence and magnetic force microscopes. It was pretty obvious at the time that the problem wasn’t one of determining the correct number of wipe passes for our media, but rather one of ensuring that all media got some form of destruction, even if that destruction wasn’t perfect. We didn’t need to make the data completely unrecoverable by all known technology. We needed to make the data significantly more expensive to recover than it was worth, and we needed to make sure that of the tens of thousands of disks we disposed of every year, as few as possible leaked through the destruction process undestroyed.

In my opinion the real problem (completeness of coverage) needed to be addressed by making the data destruction process as simple as possible, thereby increasing the probability that the process would actually get executed on all media. That isn’t a technical problem; it’s a process and people problem. I used to work on an assembly line, and for any process involving humans and repetition, simple is good, and the process that the person follows must be person-proof. In this case, a simple and person-proof process was what was needed.

Unfortunately, our internal legal staff were driving the bus, and they got focused on the technical problems of numbers of passes and zeros versus ones. Attempts to steer them toward simple destruction processes with a low probability of being bypassed were not successful, nor were attempts to weigh the value of the data against the effort required to recover it. We ended up with a complex, time-consuming process that ensured the media that went through it was unrecoverable but did little to ensure that no media escaped the process.

In a related discussion at Black Fist Security, the principal of the blog writes:
“What if you had one person wipe the drive with all zeros. Then have a second person run a script that randomly checks a representative sample of the disk to see if it finds anything that isn't a zero.”[3]
That’s the kind of person-process thinking that can solve security problems. It’s simple (one pass plus a sampled verification) and has a good chance of being well executed on any media that goes through the process.
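
That check is simple enough to sketch. Below is a minimal, hypothetical version of the second person’s script in Python: it reads a few thousand random blocks from the wiped drive and fails loudly if any byte isn’t zero. The device path (e.g. /dev/sdb), block size, and sample count are illustrative assumptions of mine, not details from the Black Fist Security post.

```python
#!/usr/bin/env python3
"""Spot-check a wiped drive: read random blocks and fail if any byte is non-zero.

A sketch of the 'second person' verification step; the device path and sample
count are illustrative assumptions, not part of the original post.
"""
import os
import random
import sys

BLOCK_SIZE = 4096     # bytes read per sample
SAMPLE_COUNT = 2000   # number of random blocks to inspect

def sampled_blocks_are_zero(device_path, samples=SAMPLE_COUNT):
    """Return True if every sampled block on device_path reads back as all zeros."""
    with open(device_path, "rb") as dev:
        dev.seek(0, os.SEEK_END)          # works for raw block devices on Linux
        size = dev.tell()
        for _ in range(samples):
            offset = random.randrange(0, max(size - BLOCK_SIZE, 1))
            dev.seek(offset)
            block = dev.read(BLOCK_SIZE)
            if any(block):                # any non-zero byte means the wipe missed data
                print(f"non-zero data found at offset {offset}")
                return False
    return True

if __name__ == "__main__":
    path = sys.argv[1]                    # e.g. /dev/sdb (hypothetical)
    if sampled_blocks_are_zero(path):
        print("PASS: all sampled blocks are zero")
        sys.exit(0)
    print("FAIL: wipe appears incomplete")
    sys.exit(1)
```

Logging each drive’s serial number alongside the PASS/FAIL result would also feed the completeness-of-coverage record keeping, but the important property is that the check stays simple enough that it actually gets run on every drive.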
