One of the first lessons IT managers learn about security is that when someone asks whether or not you have a firewall, the correct answer is "Of course we have a firewall... Whaddya think, we're idiots?" Indeed, the cornerstone of most enterprise computer security plans is the border firewall. Its efficacy has been accepted to the point where firewalls have become "check-list" items on any security audit. Thus, the majority opinion --certainly in industry-- is that enterprise firewalls are at least necessary, if not sufficient, to protect computer systems, and those who ignore their obvious benefits are either uninformed or incompetent. Yet, a few organizations --even in industry-- are bucking the trend. Herewith is a sort of "Minority Report": an examination of why one might choose the road less-traveled, and what alternative approaches are available.
Our thesis is not that all firewalls are evil; rather, it is that all firewalls have significant disadvantages, often ignored, and that their advantages are often overstated. This is especially true of enterprise border firewalls, which are the focus of today's debate.
Can systems be made secure (network safe) without using external firewalls? Clearly yes. We have many examples of this. But that seems to be more the exception than the rule, both because most operating systems are not network-safe "out of the box", and because a large number of those systems are essentially unmanaged.
There is unanimous agreement that evil packets should not be permitted to reach a place where they can do harm, so the debate is not over whether to block, but rather where the blocking should be implemented, and how to deal with the fact that different people want different things blocked. ("One person's secure network is another's broken network.")
Several questions need to be asked more often:
In examining those questions, we conclude that a traditional enterprise border firewall is very often a mistake, that is, more foe than friend. In some situations it may even be true that an enterprise firewall reduces overall security.
One key premise behind this conclusion is that network-based security is maximized when the network protection perimeter is minimized, i.e. pushed as close to the end-system as possible. This is because of the "Perimeter Protection Paradox", namely: the expected value, or perceived effectiveness, of a firewall is proportional to the number of systems behind it, but the actual effectiveness of a firewall is inversely proportional to the number of systems behind it. That is true for two reasons: first, as the population of machines in the trusted "vulnerability zone" behind the firewall grows, so does the probability that at least one of them has been somehow compromised and can thus be used to launch an attack that circumvents the firewall; second, the number of "holes" that need to be poked through a firewall is usually a function of the population of systems behind it, representing diverse organizational units.
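The first half of the paradox can be made concrete with a toy probability model (the model and the per-host compromise rate are our illustration, not figures from the text): if each host behind the border has some small independent chance of already being compromised, the odds that the trusted zone contains at least one insider grow rapidly with the population.

```python
# Illustrative model: assume each host behind the firewall has an
# independent probability p of already being compromised. The chance
# that at least one compromised host sits inside the trusted
# "vulnerability zone" is then 1 - (1 - p)^n, which grows quickly
# with the population n. The value p = 0.01 is an arbitrary example.

def p_insider_threat(n: int, p: float = 0.01) -> float:
    """Probability that at least one of n hosts is compromised."""
    return 1.0 - (1.0 - p) ** n

for n in (10, 100, 1000):
    print(f"{n:5d} hosts behind the firewall -> "
          f"{p_insider_threat(n):.1%} chance an attacker is already inside")
```

Even at a 1% per-host compromise rate, a thousand-host perimeter is almost certain to contain a machine from which the firewall can be circumvented, which is the sense in which actual effectiveness falls as the protected population grows.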
In short, a border firewall is clearly neither necessary nor sufficient, much less a panacea, so its benefits have to be evaluated against the costs and the alternatives.
Each of those criteria is subjective, and in each case the answer may change over time. So far, the only thing that has clearly met all three tests is blocking packets with spoofed source addresses --and even that has had adverse network performance impact in a number of cases.
It is undeniable that the effectiveness of a perimeter defense is inversely proportional to the size of the perimeter, so the desire to push the protection boundary as close to the network edge as possible is a consequence of wanting to see real security, not just check-list security. On the other hand, it is also undeniable that managing a large number of devices, even centrally, is harder than managing a small number of devices. Yet the indirect costs of border blocking are higher than is generally acknowledged, and the value of border blocking is lower than is generally acknowledged. However, the cost/benefit analysis may result in a different conclusion regarding departmental or subnet firewalls, and most certainly for host-based firewalls.
"Firewalls are such a great idea, every host should have one." Firewalls on end-systems, if centrally managed, can provide maximum flexibility in security policy, while still not having to touch each machine after installation. Unlike personal intrusion detection systems, which are notorious for false positives and corresponding support woes, centrally managed host firewalls have been successfully deployed without undo pain.
Contemporary operating systems can all emulate basic border firewall port blocking by using IP access lists, so that if you need to run insecure protocols locally, you can still implement "border blocking" at the hosts themselves. Even more sophisticated border firewall behavior can be emulated by host-based firewalls. Thus, the case for a larger-perimeter defense boils down to three things:
All of these are legitimate arguments, notwithstanding the implication of #2 that a perimeter defense means the many end devices don't need to be managed. And that leads us to subnet-firewall options. Their goal is to make it easy for units to establish whatever perimeter defense policy they want, without interfering with the management of the network utility. Whether this is easy or hard to accomplish depends on where service boundaries have been drawn within the institution. For example, at institutions where the central IT group is responsible for networking to the individual Ethernet outlet, there are special challenges in allowing departments to manage firewalls at subnet perimeters, as when firewall reconfigurations accidentally block access to the network devices behind them. Thus, having departmentally-managed firewalls in the middle of the network core is antithetical to high-availability network management.
In such environments, providing for proper network management and at the same time allowing departments some level of autonomy on subnet firewalling requires unconventional techniques. We have identified two: first, the "logical firewall" (see RESOURCES), which plugs into any single network outlet and allows system administrators to protect hosts on an opt-in basis, and second a VLAN-based strategy where enforced firewalling is possible using a standard bridging firewall combined with network VLAN magic to make sure network management can be maintained on devices behind the firewall, regardless of the firewall configuration.
Another option is to interpose a standard firewall between the network core and a computer lab, machine room, or "server sanctuary" --a cluster of specially protected racks for critical hosts within a machine room. In the case of the academic computer lab, the firewall rules might be adjusted to protect the rest of the network from the student machines!
The key perimeter defense questions are: where to block and what to block? Those two questions are intertwined, since the closer to the end-system you are, the more restrictive your blocking policy can be, without risking limiting legitimate work. One set of options would be:
CAMPUS BORDER LEVEL
Even if enterprise/border firewalls are not the best way to achieve computer security, central IT organizations still have a major role in the solution. For example:
It's worth noting that a lot of the world's security problems would go away if Microsoft would implement a small number of very easy measures. Their new multimillion-dollar initiative to improve the security of all their code is a fine thing, but a few simple changes would make a dramatic improvement in security for customers who do not need to have server capabilities. For example:
Note that the default install of RedHat Linux gives you a system that only permits incoming DHCP packets and does not allow you to create accounts (much less root/administrator accounts) that do not have passwords. This serves as an existence proof for others. All of us should be pushing whoever will listen toward the goal of computers that are network-safe out-of-the-box.