Firewall Rule (Mis)management

The ISSA released an interesting study on Firewall rule (mis)management[1].

Among their conclusions are:

  • Firewalls have gotten more complex over time
  • Firewall administrators routinely make errors
  • Firewall administrators are not following best practices
  • Firewall training materials do not focus on management practices

This shouldn’t be surprising. Firewall management is clearly an error-prone process. The problem is that error detection is inherently biased in the wrong direction. If an error results in too few firewall openings, some application or process will be broken, someone will notice and start yelling, and the error will get corrected. If the firewall has too many openings, nothing will be broken, nobody will yell, and the error can only be detected and corrected by painful audits and mind-numbing configuration checking. In other words, the only self-correcting errors are the ones that result in less security. The result:

  • The natural drift over time will always be towards less security
  • Correcting the drift consumes expensive resources

We recognized the drift years ago, when the rule sets for our firewalls got large enough to be complicated. To help resolve the problem we brought in consultants to perform a firewall rule analysis. They built an Access database that could import the rules from our 70 or so firewalls and our list of a few hundred subnets, and export a report showing us all port/host combinations where less secure networks had access to more secure networks, plus similar information that we decided might be ‘interesting’. Their tool worked: we corrected some of the drift by fixing up a bunch of firewall badness, and we decided that a tool like that would be a useful addition to our vast collection of advanced network and security management tools (Perl). Unfortunately, commercial tools weren’t readily available at the time.
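For flavor, here’s a rough sketch of the core idea behind that kind of analysis (in Python rather than our Perl, with made-up subnets and trust levels): assign each subnet a trust level, then flag any permitted port/host combination where a less secure network can reach a more secure one. It’s illustrative only, and nothing like the consultants’ actual Access database.

```python
# Illustrative sketch only -- not the consultants' actual tool. Assumes the rules
# have already been exported to flat (source subnet, destination subnet, port)
# tuples and that each subnet has been assigned a numeric trust level.
import ipaddress

# Hypothetical trust levels: higher number = more secure compartment.
TRUST = {
    ipaddress.ip_network("192.0.2.0/24"): 1,     # e.g. a DMZ
    ipaddress.ip_network("198.51.100.0/24"): 3,  # e.g. an internal server network
}

# Hypothetical rule export: (source, destination, port) permitted by some firewall.
RULES = [
    ("192.0.2.0/24", "198.51.100.0/24", 1433),   # DMZ -> internal database port
    ("198.51.100.0/24", "192.0.2.0/24", 443),    # internal -> DMZ web server
]

def trust(net_str):
    """Look up the trust level of the subnet containing this network."""
    net = ipaddress.ip_network(net_str)
    for known, level in TRUST.items():
        if net.subnet_of(known):
            return level
    return 0  # unknown networks treated as least trusted

def risky_openings(rules):
    """Yield rules where a less secure network can reach a more secure one."""
    for src, dst, port in rules:
        if trust(src) < trust(dst):
            yield (src, dst, port)

for src, dst, port in risky_openings(RULES):
    print(f"review: {src} -> {dst} tcp/{port} crosses into a more secure zone")
```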

That was a long time ago. Today companies like Redseal build products that can slurp up firewall and router configs and show cross-firewall and/or cross-compartment risk much better than our primitive home-brew tool could. The experience I’ve had running both the home-made tools and Redseal against a complex network is eye opening. As in the ISSA study, we found leftover temporary rules, dead rules, overlapping rules, rules for applications that no longer exist, etc.
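As an example of the kind of check such tools automate, here’s a small illustrative sketch (hypothetical rules, same Python as above) that flags shadowed rules: rules that can never match because an earlier, broader rule already covers everything they would. This is not how Redseal works internally; it just shows the flavor of automated rule-base error detection.

```python
# Illustrative sketch of one automated check -- finding rules that are fully
# shadowed by an earlier, broader rule in an ordered rule base.
import ipaddress

# Hypothetical ordered rule base: (source, destination, port, action).
RULES = [
    ("10.0.0.0/8",  "172.16.0.0/12",  80, "permit"),
    ("10.1.2.0/24", "172.16.5.0/24",  80, "permit"),  # never matched: covered by rule 1
    ("0.0.0.0/0",   "172.16.5.10/32", 22, "deny"),
]

def covers(broad, narrow):
    """True if the broad rule matches every packet the narrow rule would."""
    b_src, b_dst, b_port, _ = broad
    n_src, n_dst, n_port, _ = narrow
    return (ipaddress.ip_network(n_src).subnet_of(ipaddress.ip_network(b_src))
            and ipaddress.ip_network(n_dst).subnet_of(ipaddress.ip_network(b_dst))
            and n_port == b_port)

def shadowed_rules(rules):
    """Yield (index, rule) for rules completely covered by an earlier rule."""
    for i, rule in enumerate(rules):
        if any(covers(earlier, rule) for earlier in rules[:i]):
            yield i, rule

for i, rule in shadowed_rules(RULES):
    print(f"rule {i + 1} is shadowed and can probably be removed: {rule}")
```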

The authors of the ISSA study recommend adequate staffing and automated rule base error detection. The lesson learned is that there is a level of complexity above which the appropriate toolkit can help keep you out of trouble.

Correct the drift.


[1] An Analysis of Firewall Rulebase (Mis)Management Practices, Chapple, D'Arcy, Striegel. (Membership now seems to be required, even though I was able to freely download the report last weekend when I wrote this post.)

Comments

  1. Very interesting article! I don't have a single firewall rule list long enough to be complicated, but I have several independent firewall rules lists that all have to agree. I may look into redseal if things continue to advance the way they have. Thanks for the link!

  2. Hmmm... I was able to link directly to the article when I wrote the post, now it tells me 'Membership Required'.

    I'm not sure how I got at it the first time.

  3. I just rolled out a new firewall and have been playing with the idea of how to keep track of the rules myself. One thing that I considered was periodically logging all matched rules and packet state, then combing the logs for ports that either have low utilization over a given time period, or which have low numbers of "established" packets.

