
Essential Complexity versus Accidental Complexity

This axiom by Neal Ford[1] on the 97 Things wiki:

It’s the duty of the architect to solve the problems inherent in essential complexity without introducing accidental complexity.

should be etched onto the monitor of every designer/architect.

The reference is to software architecture, but the axiom applies to designing and building systems in general. The concept it expresses is a special case of ‘build the simplest system that solves the problem’, and is related to the hypothesis I proposed in Availability, Complexity and the Person Factor[2]:

When person-resources are constrained, highest availability is achieved when the system is designed with the minimum complexity necessary to meet availability requirements.
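
One way to put numbers on that hypothesis (my illustration, not from the original post): in a serial system, every component sits in the critical path, so overall availability is the product of the component availabilities, and each added moving part lowers the ceiling. A minimal sketch:

```python
# Illustration: in a serial system, overall availability is the product of
# component availabilities, so every moving part added to the critical path
# lowers the availability ceiling.
def serial_availability(component_availabilities):
    """Availability of a system whose components are all in the critical path."""
    result = 1.0
    for a in component_availabilities:
        result *= a
    return result

# Two moving parts, each 99.9% available: the system is ~99.8% available.
print(serial_availability([0.999, 0.999]))   # 0.998001

# Add four more parts of 'accidental complexity', each also at 99.9%:
# the system drops to ~99.4%, worse than any single component.
print(serial_availability([0.999] * 6))      # ~0.994015
```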

Over the years I’ve seen systems that badly violate the essential complexity rule. They’ve tended to be systems that evolved over time without ever really being designed, or systems where non-technical business units hired consultants, contractors or vendors to deliver ‘solutions’ to their problems in an unplanned, ad hoc manner. Others are systems that I myself built (or evolved) over time, with only a vague thought as to what I was building.

The worst example I’ve seen is a fairly simple application that essentially manages lists of objects and their equivalencies (i.e., object ‘A’ is equivalent to object ‘B’). The application builds the lists of equivalencies, lets various business units set and modify them, and pushes them out to a web app. Because of the way the ‘solutions’ were evolved over the years by the random vendors and consultants, just running the application and its data integration requires COBOL/RDB on VMS; COBOL/Oracle/Java on Unix; SQL Server/ASP/.NET on Windows; Access and Java on Windows; and DCL scripts, shell scripts and DOS batch files. It’s a good week when that process works.
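
For contrast with that sprawl, the essential complexity of the underlying problem is small: maintaining and querying equivalence classes is a textbook union-find. A hypothetical sketch of the core data structure (my illustration; the real application’s data model and integration needs are of course richer):

```python
class Equivalencies:
    """Union-find over arbitrary object identifiers: the essential core of
    'object A is equivalent to object B', with updatable equivalencies."""

    def __init__(self):
        self._parent = {}

    def _find(self, obj):
        # Create singletons on first sight; halve paths as we walk up.
        self._parent.setdefault(obj, obj)
        while self._parent[obj] != obj:
            self._parent[obj] = self._parent[self._parent[obj]]
            obj = self._parent[obj]
        return obj

    def declare_equivalent(self, a, b):
        """Record that a and b are equivalent (merges their classes)."""
        self._parent[self._find(a)] = self._find(b)

    def equivalent(self, a, b):
        return self._find(a) == self._find(b)

    def classes(self):
        """The lists of equivalencies, ready to push out to a web app."""
        groups = {}
        for obj in self._parent:
            groups.setdefault(self._find(obj), []).append(obj)
        return list(groups.values())

eq = Equivalencies()
eq.declare_equivalent('A', 'B')
eq.declare_equivalent('B', 'C')
print(eq.equivalent('A', 'C'))   # True
print(eq.classes())              # [['A', 'B', 'C']]
```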

Other notable quotes from the same 97 Things entry:

…vendors are the pushers of accidental complexity.

And

Developers are drawn to complexity like moths to flame, frequently with the same result.

The first quote relates directly to a proposal in front of us right now, for a product that will cost a couple hundred grand to purchase and implement, and that likely will solve a problem we need solved. The question on the table, though, is ‘can the problem be solved without introducing a new multi-platform product, dedicated servers to run the product, and the associated person-effort to install, configure and manage the product?’

The second quote applies to people who like deploying shiny new technology without a business driver or a long-term plan. One example I’m familiar with is a virtualization project that ended up violating the essential complexity rule: an organization deployed a 20-VM, five-node VMware ESX cluster complete with every VMware option and tool, including VMotion. The new (complex) environment replaced a small number of simple, non-redundant servers, and it was introduced without an overall design, without an availability requirement, and without analysis of security, cost, complexity or maintainability.

Availability decreased significantly, cost increased dramatically. The moth met the flame.

Perhaps we can generalize the statement:

Geeks are drawn to complexity like moths to flame, frequently with the same result.

A more detailed look at essential complexity versus accidental complexity in the context of software development appears in The Art of Unix Programming[3].


  1. Neal Ford, “Simplify essential complexity; diminish accidental complexity”, 97 Things, http://97-things.near-time.net/wiki/show/simplify-essential-complexity-diminish-accidental-complexity.
  2. Michael Janke, “Availability, Complexity and the Person-Factor”, Last In - First Out, http://blog.lastinfirstout.net/2008/03/availability-complexity-and-person.html.
  3. Eric Steven Raymond, The Art of Unix Programming (2003), pp. 339-343, http://www.catb.org/esr/writings/taoup/html/graphics/taoup.pdf.

Comments

  1. Hi,
    I have a few questions.
    How do you know whether complexity is essential or accidental?

  2. Here's how I look at it:

    Essential complexity is the minimum complexity that is intentionally designed into a system to meet the requirements that drive the design of the system.

    Accidental complexity is complexity that is added to a system without a clear requirement for it.

  3. Hi Michael,

    Thanks for the clarification.

    So if we introduce new functionality based on the essential complexity (a requirement), are we introducing additional complexity?

  4. Yes, in that case you would be introducing additional complexity, but because it is part of a requirement, it is necessary (essential) complexity.

    For example, if the availability requirement for a system is 99% availability during business hours, adding clustering and load balancing would be non-essential (accidental) complexity. The availability requirement can be met without the complexity of redundant systems. (A worked comparison of the two downtime budgets appears after the comments.)

    But if, in the future, the availability requirement were to change to 99.9%, 24 x 7, then clustering and other high-availability technology would become essential complexity.

  5. This reminds me of a quote from Tristan Bernard, which roughly translates as:
    Marriage allows two people to join efforts to solve problems they would never have run into separately.

  6. Hmm!

    Now I am a little bit clearer.

    The bottom line is: try to scope, or put boundaries around, the actual requirement; that is the essential complexity. As soon as you feel you need to provide this or that functionality beyond the actual requirement, you start creating additional complexity.

    That being said, the bigger the scope, or the more you deliver beyond expectations, the more likely you are to end up with additional complexity.

    It's like the problem of choices: the more you provide, the more doors you may open to additional complexity. So before providing an additional feature or function, you have to carefully evaluate the problems it might invite in the future.

    Isn't it?

    Thanks,
    Twinkle

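A minimal sketch of the downtime arithmetic behind comment 4 (my illustration; defining ‘business hours’ as 8 hours a day, 5 days a week is an assumption, not from the comment):

```python
# Downtime budgets for the two availability requirements in comment 4.
# Assumption (mine): 'business hours' = 8 hours/day x 5 days/week x 52 weeks.
BUSINESS_HOURS_PER_YEAR = 8 * 5 * 52   # 2,080 hours
ALL_HOURS_PER_YEAR = 24 * 365          # 8,760 hours

def downtime_budget(availability, covered_hours):
    """Hours of tolerable downtime per year for a given availability target."""
    return (1 - availability) * covered_hours

# 99% during business hours: ~20.8 hours of downtime per year is tolerable.
print(downtime_budget(0.99, BUSINESS_HOURS_PER_YEAR))

# 99.9% around the clock: only ~8.8 hours per year, and any hour counts --
# which is why clustering becomes essential complexity at this requirement.
print(downtime_budget(0.999, ALL_HOURS_PER_YEAR))
```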

In both cases you’ve entrusted your bits to someone else, you’ve shared physical and logical resources with others, you’ve disassociated physical devices (circuits or servers) from logical devices (virtual circuits, virtual severs), and in exchange for what is hopefully better, faster, cheaper service, you give up visibility, manageability and control to a provider. There are differences though. In the case of networking, your cloud provider is only entrusted with your bits for the time it takes for those bits to cross the providers network, and the loss of a few bits is not catastrophic. For providers of higher layer services, the bits are entrusted to the provider for the life of the bits, and the loss of a few bits is a major problem. The…