Design your Failure Modes

Discussing his axiom 'Everything will ultimately fail', Michael Nygard writes that in IT systems, one must:
"Accept that, no matter what, your system will have a variety of failure modes. Deny that inevitability, and you lose your power to control and contain them. [....]  If you do not design your failure modes, then you will get whatever unpredictable---and usually dangerous---ones happen to emerge."
I'm pretty sure that I've seen a whole bunch of systems and applications where that sort of thinking isn't at the top of the software architect's or developer's stack. For example, I've seen:
  • apps that spew out spurious error messages to critical log files at a rate that makes 'tail -f' useless. Do the errors mean anything? Nope - just some unhandled exceptions. Could the app be written to handle the exceptions? Yeah, but we have deadlines...
  • apps that log critical application server/database connectivity error messages back to the database that caused the error. Ummm... if the app server can't connect to the database, why would you attempt to log that back to the database? Because that's how our error handler is designed. Doesn't that result in a recursive death spiral of connection errors generating more errors that get logged over the same broken connection? Ummm... let me think about that... (A sketch of a saner fallback follows this list.)
  • apps that stop working when there are transient network errors and need to be restarted to recover. Network errors are normal. Really? We never had that problem with our ISAM files! Can you build your app to gracefully recover from them? Yeah, but we have deadlines... (See the retry sketch after this list.)
  • apps that won't start up if there are temp files left over from a previous crash. Could you clean up old temp files on startup? How would I know which ones are old? (The cleanup sketch after this list shows one answer.)
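
To make the database-logging item concrete, here is a minimal sketch (in Python) of an error handler whose fallback path does not depend on the component that just failed. The write_to_db callable and the log file name are illustrative assumptions, not anyone's real API; the point is simply that database connectivity errors get recorded somewhere local instead of being sent back over the broken connection.

    import logging

    # Minimal sketch, not a drop-in implementation. write_to_db is a
    # placeholder for whatever normally records events in the database,
    # and is assumed to raise on connection failure.
    local_fallback = logging.getLogger("fallback")
    local_fallback.addHandler(logging.FileHandler("app-errors.log"))

    def log_event(message, write_to_db):
        try:
            write_to_db(message)          # normal path: central DB log table
        except Exception as exc:          # DB unreachable, timed out, etc.
            # fall back to a local file; do NOT retry through the failing DB
            local_fallback.error("db logging failed (%s): %s", exc, message)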
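
For the transient network errors, a sketch of retrying with exponential backoff and jitter rather than dying and waiting for a restart. The fetch callable and ConnectionError are stand-ins for whatever network call and transient exception type the app actually uses.

    import random
    import time

    def call_with_retry(fetch, attempts=5, base_delay=0.5):
        # Retry a transient failure a bounded number of times, then fail loudly.
        for attempt in range(attempts):
            try:
                return fetch()
            except ConnectionError:          # stand-in for a transient network error
                if attempt == attempts - 1:
                    raise                     # out of retries
                # exponential backoff plus jitter so retries don't synchronize
                time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))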
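
And for the leftover temp files, one possible answer to "how would I know which ones are old?": at startup, sweep the app's own scratch directory and remove anything older than a cutoff. The directory path and the one-day cutoff are assumptions for illustration; a PID file or file locks would be other reasonable ways to decide.

    import os
    import time

    def clean_stale_temp_files(scratch_dir="/var/tmp/myapp", max_age_seconds=86400):
        # Remove temp files whose modification time predates the cutoff
        # (default: one day). Assumes the app owns everything in scratch_dir.
        cutoff = time.time() - max_age_seconds
        for name in os.listdir(scratch_dir):
            path = os.path.join(scratch_dir, name)
            if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
                os.remove(path)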

I suspect that mechanical engineers and metallurgists, when designing motorcycles, autos, and other things that can hurt people, have that sort of axiom embedded pretty deeply in their daily thought processes. I suspect that most software architects do not.

So the interesting question is: if there are many failure modes, how do you determine which ones you need to engineer around and which ones you can safely ignore?

On the wide area network side of things, we have a pretty good idea what the failure modes are, and it is clearly a long-tail sort of problem:

We've seen that circuit failures, mostly due to construction, backhoes, and other human/mechanical problems, are by far the largest cause of failures and are also the slowest to get fixed. Second place, for us, is power failures at sites without a building generator or UPS, and a distant third is hardware failure. In a case like that, if we care about availability, redundant hardware isn't anywhere near as important as redundant circuits and decent power.

Presumably each system has a large set of possible failure modes, and coming up with a rational response to the ones at the head of that long tail, the frequent and costly failures, is critical to building available systems. It is also important to keep in mind that not all failure modes are caused by inanimate things.

In system management, human failures, as in a human pushing the wrong button at the wrong time, are common and need to be engineered around just like mechanical or software failures. I suspect that is why we need things like change management or change control, documentation, and the other not-so-fun parts of managing systems. And humans have the interesting property of being able to compound a failure by attempting to repair the problem, which is perhaps why some form of outage/incident handling process is important.

In any case, Nygard's axiom is worth the read.