
Firewall Complexity Versus Time

As a follow-up to Firewall Rule (Mis)Management, I created a few simple charts showing the growth of a firewall config over time. The Y-axis is simply the size of the config in bytes; the X-axis represents time from 2002 to the present.

[Figure: Firewall-Config-Size-Annotated (firewall config size over time, 2002 to present)]

This particular firewall is at a data center that is being phased out, so as applications get deprovisioned or moved, the configuration size shrinks.
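
For anyone who wants to reproduce this kind of chart, here's a minimal sketch in Python. It assumes the configs are archived as one dated file per save, named something like fw-config-YYYY-MM-DD.txt; the directory layout and naming scheme are hypothetical, not the actual archive behind these charts. The output is plain date,bytes pairs for whatever plotting tool you prefer.

#!/usr/bin/env python3
# Tally firewall config size (bytes) over time from an archive of
# dated config files. The path and file naming below are assumptions.
import glob
import os
import re

sizes = []
for path in sorted(glob.glob("archive/fw-config-*.txt")):
    m = re.search(r"(\d{4}-\d{2}-\d{2})", path)
    if m:
        sizes.append((m.group(1), os.path.getsize(path)))

for date, nbytes in sizes:
    print(f"{date},{nbytes}")  # date,bytes -- feed to a plotting tool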

The second chart is for a firewall at another data center that was spun up in 2005. The X-axis is time from 2005 through today. The steep rise at the left is the initial provisioning of the apps in the new data center.

[Figure: Firewall-Config-Growth (firewall config size growth, 2005 to present)]

The configuration size has grown continuously since 2005, and I'm expecting that it will continue to grow as more apps get hosted. There are not too many scenarios where a configuration would shrink, unless major applications were phased out or the firewall manager decided to simplify the problem with a few 'permit any any' rules.

At some point it'll be too large to manage, if it isn't already. Presumably the probability of human error increases with complexity (configuration size), so one might suppose that if firewall configuration errors decrease security, then the data center becomes less secure over time.
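
To make that intuition concrete with a toy model: if each rule independently carries some small probability p of being misconfigured, the chance that a config of n rules contains at least one error is 1 - (1 - p)^n, which climbs toward certainty as the config grows. The numbers below are illustrative guesses, not measurements:

# Toy model: probability of at least one misconfigured rule in a
# config of n rules, assuming each rule independently has error
# probability p. p = 0.01 is an illustrative guess, not a measurement.
p = 0.01
for n in (100, 500, 1000, 5000):
    print(n, round(1 - (1 - p) ** n, 4))
# ~0.63 at 100 rules, ~0.99 at 500 -- effectively certain beyond that.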

Entropy is the enemy of security.

Comments

  1. Very interesting analysis. Never thought of it that way. I can't deny the value of a firewall, but as part of an application development team, I often question it. It seems like there is a significant loss of business productivity due to issues that come up only because of the firewall. Thanks for the good post!

    Peter Edstrom
    http://www.edstrom.net/blog

  2. I don't disagree that the firewall can be a headache. Unfortunately, I haven't seen an alternative that can provide reasonable protection for the elements that make up the infrastructure that hosts the application.

    Unfortunately firewalls can't protect web applications from themselves. As soon as port 80 is open, the web app is the target, and in my experience, most web apps are not up to the challenge.

    Lately I'm batting 1.000 on demonstrating vulnerabilities in newly hosted applications in our data center. The ones written by people who view themselves as highly paid consultants/experts are the worst.



In both cases you’ve entrusted your bits to someone else, you’ve shared physical and logical resources with others, you’ve disassociated physical devices (circuits or servers) from logical devices (virtual circuits, virtual severs), and in exchange for what is hopefully better, faster, cheaper service, you give up visibility, manageability and control to a provider. There are differences though. In the case of networking, your cloud provider is only entrusted with your bits for the time it takes for those bits to cross the providers network, and the loss of a few bits is not catastrophic. For providers of higher layer services, the bits are entrusted to the provider for the life of the bits, and the loss of a few bits is a major problem. The…