As a natural fit
with running the network, my team took on the task of securing the campuses and
data centers, starting with firewalling the data centers from the rest of
the network. We began simply, segmenting enterprise-wide
servers away from user and student networks and ending unfettered access to enterprise servers, databases, and systems. This gave us the ability to
control access to the core servers and systems. As expected, this initial
segmentation was resisted most by the system managers and DBAs who ran the
individual servers and databases. They were convinced that the only way they
could possibly do their jobs was with full access to everything, all the
time, from everywhere - even when they had no idea how they were actually reaching the systems. That was a typical attitude at the time, and to me an indicator that they didn't really know how their own systems worked.
In about 2001 we
installed firewalls between each campus and the Internet. This firewall
project was one of my first exposures to vendor and consultant
FUD. Vendors told us that only certain models of firewall could
actually secure our networks. Consultants told us that we were not
capable of firewalling our own network and that only they had the necessary
skills. The consultants working most closely with various
public sector agencies were also the ones most clearly full of FUD.
One of the leading
service providers in the area quoted a minimum of $5,000 per site up front
plus an ongoing operating cost of $2,500 per site
per month. With 55 sites, that would have cost us
over $1.5M per year. Instead we decided to do the project
internally with minimal consulting help and low-cost Cisco PIX firewalls. We
hired a contractor to help us configure the first couple of firewalls, then
rolled out and managed the firewalls ourselves. I wrote a series of shell scripts
that would automatically configure firewalls, switches, routers and terminal
servers. We came up with a standard campus network edge design that was
detailed enough to specify every connection, port, and even the color of every
cable. We went around that state working with campus staff to make sure that they understood how firewalls worked, and how to work with us to manage the firewall rules.
The total cost of the project ended up far less than budgeted, and far less than any vendor or consultant quote. Years later we still had not spent as much as the first-year cost would have been had we gone with vendors and consultants.
When we firewalled the campuses we also had to go against the common wisdom that educational institutions were impossible to firewall - a stance some educational institutions maintained for many years. We proved that conventional wisdom wrong: as described below, not only did we firewall the campuses in 2001, we also implemented a strict network segmentation policy in our data centers, with outbound default deny, as early as the mid-2000s.
Over time, we hired dedicated security staff, and with their
help improved on the data center security model by segmenting servers within
the data center based on the relative importance of the data. We built on that
model for many years, eventually adding fine-grained network segmentation,
dedicated jump server networks, dedicated management networks, and dedicated
console networks. The intent was to isolate data center networks from desktops
as much as possible and to prevent propagation of security incidents through
the data center networks and across unrelated applications. As the data center
networks contained only known servers for known applications we were able to
implement a bidirectional 'default deny' network security policy. In other words,
servers within the data center could not connect to addresses on the Internet
unless specifically permitted by firewall policy.
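The bidirectional default-deny idea can be sketched in modern terms. This is illustrative only - not the original PIX-era rule set - rendered as iptables rules with invented addresses:

```shell
# Illustrative sketch of bidirectional default deny for a server
# segment: nothing in or out unless explicitly allowed. Addresses
# are hypothetical; this is not the original configuration.

# Drop all forwarded traffic by default, in both directions.
iptables -P FORWARD DROP

# Allow return traffic only for connections explicitly permitted below.
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Example explicit outbound permit: one server may reach one external
# service on HTTPS, and nothing else.
iptables -A FORWARD -s 10.20.1.15 -d 198.51.100.7 -p tcp --dport 443 -j ACCEPT
```

The key property is that no general "servers may browse the Internet" rule exists: a compromised server cannot call home unless someone has written a rule for that exact flow.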
The strict firewall
policy mitigated many of the common attack vectors to which other organizations
had succumbed. By restricting an application's ability to connect out to arbitrary
Internet IP addresses, we also lessened our dependency on application security
- something at which applications were (and still are) notorious for failing.
We developed a
strong operating principle: "If
it can surf the Internet, it cannot be secured." In other words,
when securing our applications and data we did not trust our own desktops. This
principle was, and still is, validated by even a casual following of
desktop and application security news.
By following this
principle we were able to move nearly all critical user data off of desktops
and onto data center servers, where we felt reasonably confident in our
ability to secure it. We came up with methods for allowing remote
access to data center servers and applications from what were relatively
insecure desktops. We shut off all direct desktop access to
database listeners by installing every application that required such access
onto remotely accessible servers, which ran the desktop
application and managed the data that would otherwise have been downloaded to
desktops. The data never left the data centers, which made it far easier to
secure than if it had been downloaded to desktops.
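The resulting access path can be sketched as firewall policy: desktops reach only the application-hosting servers, and only those servers reach the database listener. Again illustrative only, with invented addresses and an Oracle-style listener port:

```shell
# Illustrative sketch, not the original configuration. Desktops reach
# the remote-application servers; only those servers may reach the
# database listener on the database segment.

# Desktops (10.100.0.0/16) may reach the published-application servers
# over the remote-display protocol.
iptables -A FORWARD -s 10.100.0.0/16 -d 10.20.2.0/24 -p tcp --dport 3389 -j ACCEPT

# The application servers alone may reach the database listener.
iptables -A FORWARD -s 10.20.2.0/24 -d 10.20.3.10 -p tcp --dport 1521 -j ACCEPT

# Everything else - including any direct desktop-to-listener attempt -
# falls through to the default deny.
iptables -P FORWARD DROP
```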
This was fairly
complex and expensive to run, as it required a thorough understanding of exactly
how every application and technology worked - in many cases something that not
even the vendors who wrote the application understood. We often ran into
vendors who told us that their application or technology could not be
firewalled, or that they would not support us if we tried, or who
told us how to firewall their product and were simply wrong - they didn't know how their own app or
technology worked.
This also
required a significant effort to work
with users: we had to convince them that the inconvenience of remotely
accessing their data in the data centers reduced security risk enough to
meet their obligations and responsibilities to
the owners of the data. In most cases the user aspect of the
problem was harder to solve than the technical aspect.