Among their conclusions are:
- Firewalls have gotten more complex over time
- Firewall administrators routinely make errors
- Firewall administrators are not following best practices
- Firewall training materials do not focus on management practices
This shouldn’t be surprising. Firewall management is clearly an error-prone process. The problem is that error detection is inherently biased in the wrong direction. If an error results in too few firewall openings, some application or process will break, someone will notice and start yelling, and the error will get corrected. If the firewall has too many openings, nothing breaks, nobody yells, and the error can only be detected and corrected by painful audits and mind-numbing configuration checking. In other words, the only self-correcting errors are the ones that result in less security. The result:
- The natural drift over time will always be towards less security
- Correcting the drift consumes expensive resources
We recognized the drift years ago, when the rule sets for our firewalls got large enough to be complicated. To help resolve the problem we brought in consultants to perform a firewall rule analysis. They built an Access database that could import the rules from our 70 or so firewalls and our list of a few hundred subnets, then export a report showing us all port/host combinations where less secure networks had access to more secure networks, along with similar information that we decided might be ‘interesting’. Their tool worked, we corrected some of the drift by fixing up a bunch of firewall badness, and we decided that having a tool like that available would be a useful addition to our vast collection of advanced network and security management tools (Perl). Unfortunately, at that time commercial tools weren’t readily available.
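The core of that kind of analysis is simple to sketch. Below is a minimal, hypothetical illustration (not the consultants' actual tool) of flagging rules that open a path from a less trusted zone to a more trusted one; the zone names, trust scale, and rule fields are all invented for the example:

```python
# Hypothetical trust levels: higher number = more trusted.
ZONE_TRUST = {"internet": 0, "dmz": 1, "internal": 2, "pci": 3}

def find_risky_openings(rules):
    """Return allow rules whose source zone is less trusted than the destination."""
    risky = []
    for rule in rules:
        src = ZONE_TRUST[rule["src_zone"]]
        dst = ZONE_TRUST[rule["dst_zone"]]
        if rule["action"] == "allow" and src < dst:
            risky.append(rule)
    return risky

# Toy rule set standing in for the imported firewall configs.
rules = [
    {"src_zone": "internet", "dst_zone": "dmz", "port": 443, "action": "allow"},
    {"src_zone": "dmz", "dst_zone": "internal", "port": 1433, "action": "allow"},
    {"src_zone": "internal", "dst_zone": "dmz", "port": 22, "action": "allow"},
]

for r in find_risky_openings(rules):
    print(f'{r["src_zone"]} -> {r["dst_zone"]} on port {r["port"]}')
```

A real tool has to resolve hosts and subnets into zones first, which is where the imported subnet list comes in; the interesting output is exactly the low-to-high trust openings this sketch prints.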
That was a long time ago. Today companies like Redseal build products that can slurp up firewall and router configs and show cross-firewall and/or cross-compartment risk much better than our primitive home-brew tool. The experience I’ve had running both the home-made tools and Redseal against complex networks is eye opening. As in the ISSA study, we found leftover temporary rules, dead rules, overlapping rules, rules for applications that no longer exist, etc.
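Dead rules are often shadowed rules: an earlier, broader rule already matches all of their traffic, so they can never fire. A minimal sketch of detecting that case, assuming a simplified rule model with an "any" wildcard (field names and rules are invented for illustration):

```python
def covers(broad, narrow):
    """True if `broad` matches every packet that `narrow` matches."""
    return all(
        broad[field] == "any" or broad[field] == narrow[field]
        for field in ("src", "dst", "port")
    )

def find_shadowed(rules):
    """Return rules fully covered by an earlier rule in the ordered list."""
    shadowed = []
    for i, rule in enumerate(rules):
        if any(covers(earlier, rule) for earlier in rules[:i]):
            shadowed.append(rule)
    return shadowed

rules = [
    {"src": "any", "dst": "10.0.0.5", "port": "any"},             # broad rule
    {"src": "192.168.1.0/24", "dst": "10.0.0.5", "port": "443"},  # never fires
    {"src": "192.168.1.0/24", "dst": "10.0.0.9", "port": "22"},
]
print(len(find_shadowed(rules)))  # → 1
```

Production tools do this over real address ranges and port ranges rather than exact strings, and also catch the partial-overlap cases this sketch ignores.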
The authors of the ISSA study recommend adequate staffing and automated rule base error detection. The lesson learned is that there is a level of complexity above which an appropriate toolkit can help keep you out of trouble.
Correct the drift.