Network Security Credo

T. Gray, et al
University of Washington
Written Jun 2000
Modified Aug 2001, Mar 2002

 


OUTLINE

  • Prologue
  • Summary
  1. Introduction
  2. Operational Security
  3. Application and Host Security
  4. Path Isolation and VPNs
  5. Perimeter Protection
  6. Action Plan
  • Acknowledgements
  • References


PROLOGUE

   

"If you think technology can solve your security problems, then you don't understand the problems and you don't understand the technology."
Bruce Schneier
Secrets and Lies

   

"It's naive to assume that just installing a firewall is going to protect you from all potential security threats. That assumption creates a false sense of security, and having a false sense of security is worse than having no security at all."
Kevin Mitnick
eWeek 28 Sep 00

   


SUMMARY

Some Key Concepts:

Some Good Ideas:

Some Bad Ideas:

In short:

A summary of our credo might be: "open networks, closed servers, protected sessions."

 



1.     INTRODUCTION


1.1   SCOPE

Incidents of unauthorized access to networked computer systems in an enterprise are growing at an alarming rate. The purpose of this document is to identify general design principles and practices for defending against these threats.

The full spectrum of security embraces several phases:

and several elements: However, we will focus here primarily on the prevention phase and technical defenses. Note that the prevention phase includes detection of vulnerabilities before they are exploited.

The theoretical goal of network security is to prevent connectivity from/to those who would do you harm. The problem is that packets from attackers don't come with warning labels to make them easy to block, nor is there any totally reliable topological boundary between sources of attacks and sources of legitimate network activity. Opinions vary on how best to be safe.

Computer system security has many dimensions, and they overlap. While this document is focused primarily on network-level security issues, it is impossible to avoid some discussion of other parts of the picture, most notably, application and OS security. Network security is a subset of general computer system security, but a rather large subset, since virtually all access to contemporary hosts is via a network connection.

Threats that are outside the scope of this document include:
(NB: These are very serious concerns; just not the focus of this document.)

  1. Application-level security threats, e.g. email viruses, attachments, IRC bots.
  2. Threats to network infrastructure devices (switches, routers).
  3. Threats to core network computing services (DNS, DHCP, NTP, Kerberos).
  4. Theft of network connectivity services by unauthorized users.

The threats we are concerned about here include:

  1. Unauthorized access to hosts (both clients and servers) via the net
  2. Unintended disclosure or modification of data sent between hosts
  3. Denial of service attacks against connected hosts

The general strategies for protecting against these threats include:

  1. Hardening the target operating systems and their applications
  2. Encrypting sensitive data sent between hosts
  3. Reducing the size of the target by disabling unneeded services
  4. Putting obstacles between the attacker and the target systems

However, perhaps the most important defensive strategy is continuous security education and awareness. Toward this goal, there are a number of very helpful web sites cited in the references section. For example, see http://www.sans.org/mistakes.htm for a number of good ideas.

One of the disappointing realities of this subject is that the most important network security measures need to be implemented at each of many end-systems (hosts) rather than at a single point within the network infrastructure itself. In other words, the conventional strategy of deploying an enterprise firewall no longer gets the job done. The concept of building moats around entire enterprises has largely been overtaken by events, though the desire to cling to strategies suitable for earlier, simpler times is very strong. Today, enterprises must serve constituents all over the world, via the Internet. An enterprise firewall can block some number of attacks, but it cannot provide security. This is bad news, because managing lots of end-systems is hard to do, but not as bad as believing one is safe but finding out otherwise after it is too late.

The ideas in this document are shaped by the special challenges of trying to secure a research university (a place where the phrase network security is often considered an oxymoron), but they apply much more generally: in particular, to any enterprise composed of many different organizations with disparate computing and networking requirements. Departments cannot assume that all of their desired security policies are compatible with those of other departments, and therefore they may not be implementable as a single campus-wide network filtering policy. At the same time, departments using the enterprise-wide network utility cannot assume they have the option of implementing a traditional firewall around their entire department.

Reaching consensus on what security measures should be deployed within the network (in addition to measures implemented at the edge) is probably the most contentious aspect of network security, since measures welcomed by one group may be considered totally unacceptable by another. Even when there is consensus on a policy, it may be a mistake to implement it at the enterprise borders. So what's a well-intentioned system manager to do? Our goal is to offer some answers to that question: some suggested defensive strategies, and the rationale behind them. In the course of doing that, several important debates in network security will be discussed; specifically:

We begin with some definitions...


1.2   DEFINITIONS

Network security is about balancing the goals of "OPEN" and "SECURE" and "COST-EFFECTIVE", where in this case OPEN means "conveniently accessible to legitimate users" and SECURE means "inaccessible to attackers".

Types of end-systems, or hosts, include: Clients, Servers, and Attackers. The following figure shows the relevant categories of host functions and their relationship to the "network" itself:

        Host 1                                      Host 2

  |------------------|                        |------------------|
  |   Application    |                        |   Application    |
  |------------------|                        |------------------|
  | Operating System |                        | Operating System |
  |------------------|   |----------------|   |------------------|
  |  Network Stack   |---|    Network     |---|   Network Stack  |
  |------------------|   | Infrastructure |   |------------------|
                         |----------------|

Security measures can be categorized according to which level of the model they are implemented in. As used herein, the term "network-level security" refers to defensive strategies implemented at the network transport layer of a system hierarchy. Sometimes the implementation is done on the host, sometimes it is part of the network infrastructure itself. So the term "network-level security" is broader than the term "security within the network".

The term "network infrastructure" refers to the links, routers, and switches which allow hosts to communicate with one another. For our purposes, the "enterprise network" is distinguished from "the Internet" in order to discuss the merits of topological or "perimeter" defenses which might be deployed at the enterprise network border, interior, or edge:

        |<----------< Network Infrastructure >-------------->|


 |------|    |------------|     |--------------------|       |------|
 | Host |----| "Internet" |-----| Enterprise Network |-------| Host |
 |------|    |------------|  ^  |--------------------|   ^   |------|

                             ^              ^            ^
                           Border        Interior       Edge

Physical devices that may embody security measures include:

Sometimes a "host proxy" or "host front-end" device will be inserted between the end-system (server) and the network in order to implement security capabilities that, for whatever reason, cannot be deployed in the end-system itself.

A "firewall" can be any device that implements "policy-based packet filtering", including dedicated security products, routers, or the hosts/host-proxies themselves.

The term "perimeter defense" is often used to describe measures designed to keep attackers who are "outside" the perimeter from accessing hosts "inside". The perimeter itself may coincide with the physical location in the network topology where the perimeter enforcement is done, or it may be a more abstract notion of perimeter, such as the logical boundary between two different sets ("trusted" and "all others") of device addresses.

A perimeter defense defines a "trust zone": the zone is trusted not to contain the very sources of attack that the perimeter is supposed to block. An important special case is where the perimeter surrounds a single end-system (probably but not necessarily implemented by the end-system itself), so that the trust zone is minimized. As we shall see, these "trust zones" should more correctly be described as "vulnerability zones".

"Defense in depth" is the time-tested concept of putting multiple barriers between attackers and their targets. If one obstacle fails, perhaps another will succeed. These obstacles typically represent perimeters encircling the resources to be protected, which are themselves assumed to be trustworthy. Hence, some additional comfort may come from establishing defensive perimeters around groups of resources, provided that the resulting "vulnerability zone" is kept sufficiently small, in addition to "hardening" those resources (especially critical servers).


1.3   DESIGN PRINCIPLES

Some system design basics:

These definitions embrace an architectural viewpoint in which access-providing and access-controlling functions are implemented in different layers. When one or more of these roles is "done poorly" there is a natural tendency to compensate by overloading one of the other roles.

The goal is "open access with reasonable security at reasonable cost", where ease of implementing the right policy is the key. For more background info along these lines, see:

http://www.atstake.com/company_info/acrobat/turning_security_on_head.pdf


1.4   NETWORK SECURITY AXIOMS

Large security perimeters mean large vulnerability zones.

Moats work best when all the bad guys are on the outside, but the likelihood of that happening is inversely proportional to the population of machines and people inside: it's a question of scale driving risk. In a large enterprise there will almost always be a compromised system or attacker somewhere inside the perimeter, because network-level defenses cannot protect against important classes of attacks. Hence, smaller is better: the primary "protection perimeter" should be made as small as possible (nearest the network "edge"), preferably implemented in the end-system itself, thereby reducing the opportunity for hostile access between the security/trust boundary and the end-system. Multiple small moats provide more security than one large one, because the large moat needs more bridges across it (more holes thru the firewall) to accommodate the needs of the larger population behind it. Small moats/firewalls can be tuned to the specific application or user needs of each small group or individual user, thus allowing tighter security.

Network security is maximized when we assume there is no such thing.

Security is maximized when the network is trusted least. It's human nature: if you believe someone else is protecting you, you may be negligent in protecting yourself. Misplaced trust in enterprise firewalls often reduces the incentive to fix end-system security problems, and that results in reduced overall security. If you assume that network links do not themselves offer data protection, there is more motivation to provide this at the application or host level, which is where it will be most effective. By treating the entire network as "untrusted" and pushing the security perimeter to the edge, overall security is enhanced.

Firewalls are such a good idea, every host should have one. Seriously.

Just because enterprise firewalls don't live up to the hype about them doesn't mean that firewalling is unimportant. If the host OS is not itself securable, it is not unreasonable to front-end the host with something that is, or to protect it with a personal or host-based firewall. Similarly, providing additional protection for a cluster of critical servers may be perfectly reasonable, especially given liability concerns. But such protection devices should be as "close" to the end-system(s) as possible in order to minimize the size of the vulnerability zone between the security perimeter and the host, and the fact that they introduce new failure modes must be acknowledged and mitigated. It is significant that all three of the major desktop operating systems (Windows XP, Linux, and Mac OS X) now come with integral firewalls. However, not all three enable the firewall capability by default.

Remote access is fraught with peril... just like local access.

Security is improved if all access is considered "remote" access. That is, whatever mechanisms are deemed necessary and sufficient for "remote" users to access sensitive resources should be applied to "local" users as well. To do otherwise implies an implicit faith in the security of the local network that is generally unwarranted.

One person's 'security perimeter' is another's 'broken network'.

Some forms of packet filtering within the network infrastructure are useful, e.g. blocking spoofed source addresses; reducing the set of ports that can be used for DOS attacks can also be useful where consensus can be reached on which ports are unneeded. However, in a research university, connectivity requirements and expectations vary widely among departments, making it difficult to achieve consensus on large-perimeter or global packet filtering policies. Moreover, managing "special cases" within a network infrastructure, e.g. department or application-specific router access lists, is extremely problematic and expensive in terms of network monitoring and management, policy database management, customer support, and mean-time-to-repair.

Private networks won't help.

Application vendors sometimes assert that data traversing the Internet must be specially protected, whereas traffic on private data networks does not. This claim misses the point that network security threats occur at the edges of the network, rarely in the core. Thus, the idea that private data networks provide more security than the Internet is vastly overstated, and approaches that only protect data between enterprise borders (e.g. enterprise VPN gateways) have little security value. "It's not the network... it's what and who are attached to it."

Network Security is about psychology as much as technology.

Aside from the attackers themselves, legitimate users, system managers and policy-makers all play a role in keeping systems safe. Achieving consensus on security requirements, priorities, funding, risk assessment, acceptable levels of inconvenience... is hard, and is not primarily a question of technology.


1.5   SECURITY TAXONOMY

In order to examine our defensive strategies in more detail, we break the general computer security problem into the following parts:

  1. Operational Security
  2. Application and Host Security
  3. Network-Level Security

Network-level security covers those protection measures which are implemented at the Physical, Data Link, and Network layers of the OSI Reference Model, i.e. Layers 1-3. We consider two general categories: path isolation and perimeter protection (or perimeter defense), each of which has some realizations suitable for use in the core of the enterprise network, and others suitable for use at the edges of the network.

The following table depicts the taxonomy:

            Path Isolation               Perimeter Defense

  CORE      • Separate physical links    • Border router filtering
            • MAC address switching      • Global router filtering
            • VLANs
            • Tag switching

  EDGE      • Host-to-Host VPNs          • Host/Personal Firewalls
            • VPN servers                • Lab Firewalls
                                         • Machine Room Firewalls
                                         • Server sanctuary Firewalls
                                         • Residential gateways
                                         • Logical Firewalls/NATs


A "server sanctuary" is a portion of a machine room --perhaps only a rack-- set aside for critical or sensitive servers, and which will typically have special protections, e.g. a small-perimeter firewall and perhaps additional physical security such as a locked cage.

 



2.     OPERATIONAL SECURITY

A comprehensive look at operational security (procedural defense) issues is beyond the scope of this document, but the subject is far too important to the problem at hand to omit entirely. So let's touch on it first by identifying a few key areas and sources for further reference.

Security Policies. Start with a security policy defining who can/cannot do what to whom. Policy drives security; lack of policy drives insecurity! This includes identification and prioritization of threats, and identification of assumptions, e.g. security perimeters, trusted systems, and infrastructure. Once policies are in place, they need to be backed up with specific standards and adequate resources for computer administration. Surprisingly, discussions about network security often begin with proposed solutions before there is any clear idea of requirements, i.e. before any kind of security policy has been defined. As an example: develop a strong network connection policy (e.g. "your machine will be disconnected if it fails minimum security requirements") backed by proactive vulnerability probing.

Education on safe/unsafe computing practices. Security is everyone's responsibility, but not everyone knows that. Reminders are needed, for example, not to leave computers in insecure areas logged in and unattended, and not to send passwords in email.

Intrusion detection. A debate continues concerning the value of intrusion detection systems. As network capacity and usage continue to escalate, it becomes increasingly difficult to believe that watching all network traffic for alarming patterns will prove to be a viable long-term solution. However, it may be reasonable for specific servers, and can provide a validity check on whatever firewall rules may be in place. On the other hand, sophisticated crackers are likely to attack via client machines rather than the servers directly, and therefore may appear as legitimate users in IDS logs for servers.

Traffic level monitoring. A more promising strategy than pervasive intrusion detection is likely to be a tool set that monitors network traffic levels and sends alerts when baseline thresholds are exceeded. This ability needs to be combined with the ability to examine specific traffic flows when an anomaly has been detected.
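
As a minimal sketch of this idea (not a production monitoring tool), the following Python fragment compares per-interval byte counts against an assumed baseline and prints an alert when a threshold multiple is exceeded. The baseline figure, the multiplier, and the counter source are all placeholders; in practice the counter would come from something like an SNMP interface statistic or a flow export.

  # Sketch of baseline threshold alerting; the byte-counter source is supplied by the
  # caller (e.g. a function that polls an SNMP interface counter). Numbers are illustrative.
  import time

  BASELINE_BYTES_PER_MIN = 50_000_000   # assumed "normal" traffic level for the link
  ALERT_MULTIPLIER = 3                  # alert when an interval exceeds 3x the baseline

  def monitor(read_byte_count, interval_seconds=60):
      """read_byte_count() must return a monotonically increasing total-bytes counter."""
      previous = read_byte_count()
      while True:
          time.sleep(interval_seconds)
          current = read_byte_count()
          delta, previous = current - previous, current
          if delta > ALERT_MULTIPLIER * BASELINE_BYTES_PER_MIN:
              print("ALERT: %d bytes in the last interval exceeds baseline" % delta)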

Proactive vulnerability probing. This is one of the most important tools available to secure a population of computers. It can be done centrally or by individual departments. Like most aspects of security, it is not a one-time activity but an ongoing, recurring effort.
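
At its simplest, such probing can be approximated by a TCP "connect" scan of a known list of hosts. The Python sketch below (standard library only) reports which of a handful of well-known ports answer on each host; the host names and port list are placeholders, and of course such a program should only be pointed at machines one is authorized to probe.

  # Minimal TCP "connect" probe; hosts and ports below are placeholders.
  import socket

  HOSTS = ["server1.example.edu", "server2.example.edu"]   # hypothetical targets
  PORTS = [22, 23, 25, 80, 111, 139, 443, 2049]            # a few well-known services

  def probe(host, port, timeout=2.0):
      """Return True if a TCP connection to (host, port) succeeds."""
      try:
          with socket.create_connection((host, port), timeout=timeout):
              return True
      except OSError:
          return False

  for host in HOSTS:
      open_ports = [p for p in PORTS if probe(host, p)]
      print(host, "answers on ports:", open_ports)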

Honeypots. These are systems that are intentionally configured to allow bad guys to penetrate, but with the intent of either tracking them down or keeping them away from actual critical systems.

Security drills. Periodic drills simulating Denial of Service, virus, or data corruption attacks can sharpen diagnostic skills and security awareness. (These might be combined with periodic "recycling days" and/or "office spring cleaning" events for staff who are not directly responsible for dealing with the emergency!) Here is an article from USA Today on the topic: http://cgi.usatoday.com/usatonline/20010524/3346958s.htm



3.     APPLICATION and HOST SECURITY

Here we consider the following application and OS security issues:

This section is about proper configuration of hosts and applications. The bad news is that there are a zillion hosts on the Internet that are essentially unmanaged. Computer ownership is not for the faint-hearted, and it comes with on-going responsibilities. An insecure host is not just a threat to its owner, but also to others who share network or server resources with the vulnerable system.

Software Updates

There is hope. The good news is that "The majority of successful attacks on computer systems via the Internet can be traced to exploitation of one of a small number of security flaws." (From "How To Eliminate The Ten Most Critical Internet Security Threats" at http://www.sans.org/topten.htm ) For example, most of the SANS top-ten are correctable by software upgrades; the others by proper configuration. Here is the SANS top-ten list as of this writing:

  1. BIND
  2. CGI Scripts
  3. RPC
  4. Microsoft IIS
  5. Sendmail
  6. Sun sadmind, mountd
  7. global file sharing
  8. user accounts with no passwords
  9. IMAP, POP vulnerabilities
  10. SNMP vulnerabilities
  11. Bonus: Internet Explorer and Office2000 scripting holes

Also good news is the fact that operating systems are getting better, security-wise. For example, Windows XP, Mac OS X, and Linux 2.4 all come with fairly flexible packet filtering (firewalling) capabilities built-in.

Client and server end-systems should be considered separately because different security measures apply to each. In some ways clients are more difficult to secure than servers, because virtually all clients run mail and web software which can --either via bugs or user deception-- introduce dangerous code onto the client. If a compromised client allows an attacker to successfully log in to a server, this may be as bad as a server compromise; however, if applications accessing sensitive data are properly protected with two-factor authentication, including a one-time key (e.g. SecurID cards or S/Key software) or challenge-response smart cards, then the risks resulting from client machine compromise are greatly diminished.

In contrast to client systems, a sensitive application server should have most services disabled, and allow access only to a specific application port. And critical/sensitive servers should avoid exporting information via insecure protocols such as NFS.

Proper Authentication

One of the greatest security risks occurs when static authentication credentials (ID and password) are transmitted from a client machine to the application server "in the clear". Therefore, one of the first steps in securing a network computing environment is to make sure that authentication credentials are always encrypted en route, either via secure application/access protocols (SSH, SSL, Kerberos) or, failing that, via transport-level encryption (VPNs).

However, one can and should do better, in cases where sensitive data is being accessed. A compromised client machine can result in static authentication credentials being captured surreptitiously. The credentials of a single legitimate user can often be used to access the entire server database, so if a client machine is compromised and legitimate static credentials captured, the server data is compromised. This observation leads to the case for using more sophisticated authentication mechanisms when server data is sensitive.

Examples of authentication mechanisms which offer improved resistance to client machine compromise include:

In many cases it will be appropriate to have additional authorization requirements for modifying data, as compared to those needed to simply read the data.
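
To make the contrast with static passwords concrete, here is a minimal Python sketch of a generic HMAC-based challenge-response exchange (not any particular vendor's protocol): the server issues a fresh random challenge, the client answers with a keyed hash of it, and a captured answer is useless for any later login because the challenge never repeats. In a real deployment the shared secret would live in a token or smart card rather than on the client's disk.

  # Conceptual challenge-response sketch; not a specific product's protocol.
  import hmac, hashlib, secrets

  SHARED_SECRET = b"per-user secret held in a token or smart card"  # illustrative only

  def server_issue_challenge():
      return secrets.token_bytes(16)          # fresh, never-reused nonce

  def client_response(secret, challenge):
      return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

  def server_verify(secret, challenge, response):
      expected = hmac.new(secret, challenge, hashlib.sha256).hexdigest()
      return hmac.compare_digest(expected, response)

  challenge = server_issue_challenge()
  response = client_response(SHARED_SECRET, challenge)
  assert server_verify(SHARED_SECRET, challenge, response)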

Host-based Packet Filtering

Host firewalling (i.e. packet filtering on the end-system itself) can provide significant security benefits. Examples of this capability include TCP wrappers and kernel filtering (e.g. Linux iptables), which provide complementary filtering services. Both allow blocking packets based on port number or IP address; TCP wrappers also allow blocking based on domain names.

This approach is consistent with the principle of moving protection boundaries as close to the edge of the network as possible, and is particularly valuable for controlling access to critical functions needed by system administrators, or use of less secure access protocols by workgroup members. Kernel-level packet filtering also defends against exploitation of buffer overruns or other exploits of higher-level services, since offending packets are dropped before being passed up the network transport stack, much less to the application. Consequently these filters provide defense-in-depth opportunities, complementing the strategy of minimizing the number of services running at all.

For those of the large-perimeter defense persuasion, note that such host-based filters can also be configured to reject unsecured connections from anywhere outside the enterprise address space, just as an enterprise firewall would do.

In the case of Windows 2000 kernel filters, it is also possible to reject connections unless they are encrypted using IPSEC. Good references for configuring Windows 2000 and Linux 2.4 packet filters are:

(The latter document concerns building Linux-based "logical firewalls" --described later-- but is also useful for configuring Linux end-systems.)

Example of an incoming filtering policy for a generic departmental server:

  POLICY                         SERVICES

  Allow from anywhere            • SSH           (22)
                                 • Kerberos     (88)
                                 • HTTP         (80)
                                 • SSL-HTTP   (443)
                                 • SSL-IMAP   (993)
                                 • SSL-SMTP   (465)
                                 • SSL-LDAP   (636)

  Allow from selected hosts      • NFS
                                 • SMB

  Block entirely                 • Everything else
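
As a purely conceptual sketch (real host filtering would be done with the kernel facilities or TCP wrappers described above, not in application code), the policy in the table can be written down as data and applied to an incoming connection attempt as follows; the "selected hosts" addresses are placeholders.

  # Conceptual model of the example policy; the addresses below are placeholders.
  ALLOW_FROM_ANYWHERE = {22, 88, 80, 443, 993, 465, 636}      # SSH, Kerberos, HTTP, SSL services
  ALLOW_FROM_SELECTED = {2049: {"192.0.2.10", "192.0.2.11"},  # NFS from selected hosts only
                         445:  {"192.0.2.10", "192.0.2.11"}}  # SMB from selected hosts only

  def permit(src_ip, dst_port):
      """Return True if an incoming connection should be accepted."""
      if dst_port in ALLOW_FROM_ANYWHERE:
          return True
      if src_ip in ALLOW_FROM_SELECTED.get(dst_port, set()):
          return True
      return False                                            # block everything else

  assert permit("203.0.113.5", 443) is True      # SSL-HTTP allowed from anywhere
  assert permit("203.0.113.5", 2049) is False    # NFS only from selected hosts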

   

Session Encryption: Secure Application Protocols

One of the best ways to protect data in transit is to use secure protocols for accessing the server application. SSL (secure socket layer) does this for web-based applications. SSH (secure shell) can be used for many other applications. Apps that use Kerberos for authentication can also encrypt their data streams using a session key provided as part of the Kerberos authentication process.

Use of SSL, SSH, or Kerberos encrypted tunnels between clients and server applications is undoubtedly one of the most important security techniques currently available. This is particularly true for web-based applications, since most recent web browsers are equipped with SSL support. Thus, no additional software or configuration of clients is required to implement this form of session/path protection. Even SSH (a secure, encrypted replacement for the telnet remote terminal protocol) is fairly easy to deploy, and can be used to tunnel various other application protocols securely from client to server.

Note that in most instances, use of SSL, SSH, or Kerberos encrypted tunnels provides path protection all the way from the client machine to the server, which is an extremely desirable attribute.
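
As an illustration of how little machinery this requires on the client side, a minimal SSL/TLS client using Python's standard library is sketched below; the server name is a placeholder. The essential point is that the encrypted tunnel runs from the client process to the server process, with no reliance on the security of the intervening links.

  # Minimal SSL/TLS client sketch; the server name is a placeholder.
  import socket, ssl

  HOST = "secure-app.example.edu"      # hypothetical application server

  context = ssl.create_default_context()           # verifies the server certificate
  with socket.create_connection((HOST, 443)) as raw:
      with context.wrap_socket(raw, server_hostname=HOST) as tls:
          tls.sendall(b"GET / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
          print(tls.version())                     # e.g. 'TLSv1.3'
          print(tls.recv(200))                     # first bytes of the reply, encrypted in transit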

Legacy Systems and Insecure Services

Older operating systems don't have advanced packet filtering capabilities built-in, and older applications may not be capable of using secure application protocols for access. Some systems may be securable in principle, but non-technical constraints may prevent upgrades or patches from being applied. (This has happened in medical centers when a particular host configuration has been certified for use in a life-critical system.)

Legacy servers that cannot be secured, or that export services via insecurable protocols, ought to be front-ended by a firewall or bastion host, or replaced by servers suitable for network use. Controlling access to insecure services via IP address is reasonable in some cases. For example, the NFS remote file access protocol should have limited access. Similarly, certain administrative protocols should be restricted to specific hosts.

Separation of Services

Separation of services and minimizing the number of visible services is another key security strategy, especially if the services are not encrypted. If servers are dedicated to a single (or small number of) services, with all other services disabled, the "entry points" into any given system are minimized, and overall security enhanced. This strategy also allows servers that must use insecure protocols to be hidden behind bastion hosts that export the service in a less vulnerable way.

Application functionality is the ultimate policy-driver. Mixing apps with different security profiles on the same host is a losing proposition.

Desktop Configuration

Perhaps the most intimidating aspect of the security problem is the prospect of securing large numbers of desktop computers. Several strategies to alleviate this problem are worth noting:

Alternatively, if an organization involved with sensitive information finds itself with a large number of client machines which are deemed "insecure" or "insecurable", perhaps the wrong kind of end-system has been chosen. For example, a contemporary "thin client" device (that can act only as an SSH secure terminal client, or SSL-enabled web browser) might meet the need as well as general-purpose PCs, which offer more flexibility but also more opportunities for compromise.



4.     PATH ISOLATION and VPNs

Path isolation is about separating different data streams that share common data links from each other, using switching, virtual circuit, and encryption technologies. (Of course separate physical links may also be used.) The goal is to keep sensitive information separated from hostile traffic that might either record or modify it. Approaches to achieving path isolation include:

With the exception of transport-level encryption, all of these techniques apply to the core network infrastructure, rather than edge/end-user deployment.

Isolation via separate physical links

Although it is possible to establish dedicated (unshared) links between trusted hosts for the exchange of sensitive information, this approach does not scale and usually doesn't offer any compelling (security) advantages over path encryption techniques. It is mentioned here only for completeness.

Isolation via Ethernet switching

On shared Ethernet segments, all stations on the segment see all the traffic. Moving away from shared Ethernet infrastructure to switched Ethernet infrastructure is an important way to improve LAN performance, but switches also limit the visibility of network traffic to just the intended source and destination end-systems, thus reducing the opportunity for exposure of sensitive data and authentication credentials. This is obviously a Good Thing (TM), security-wise. However, switching is not an adequate substitute for end-to-end encryption of sensitive data: some switches (depending on configuration) may occasionally "leak" a few packets to all ports, and even when switches within the centrally managed network infrastructure are operating as intended, there may be hostile machines on the same subnet that can fool the switches into forwarding packets to the wrong place, thus allowing session hijacking.

Isolation using VLAN and MPLS virtual circuits

MPLS (Multi-Protocol Label Switching) packet tagging and VLAN (Virtual Local Area Network) Ethernet frame tagging provide mechanisms for defining virtual circuits that prevent commingling of traffic that actually shares the same physical links. Like separate physical links and Ethernet switching, these tagging techniques are tools in the network designer's toolkit; they are not generally available to end-users or host administrators. Use of such virtual circuit approaches involves creating so-called "overlay" networks on top of the physical network topology, upon which an IP network mesh is constructed. Opinions vary within the network engineering community on when or where such overlay networks should be used. Again, they are mentioned here for completeness.

Isolation via Transport-level encryption

The objective of transport-level encryption is the same as for other forms of path isolation (as well as the session encryption technique described previously), namely: to prevent sensitive data streams from being visible to (much less modified by) an attacker.

In contrast to session encryption using SSH, SSL, or Kerberos, (which is implemented at the application layer, and is specific to the application) transport-level encryption is done at the network transport level, preferably in the host's IP stack. The advantage of transport-level encryption over session encryption is that once implemented among a set of cooperating hosts, it works for all applications. The disadvantage is that it may be harder to deploy than SSH or SSL, especially for web-based applications, where the client-side SSL software is generally available "out of the box" and requires no special configuration.

On the other hand, sometimes session encryption is not an option at all. When it is necessary to support legacy applications that cannot be adapted to secure access protocols (e.g. SSH/SSL), then it may be possible to implement transport-level encryption. IPSEC is the standard for doing this, but it is fairly new and not yet pervasively available. While IPSEC has the advantage of allowing any application to be protected, its management overhead (especially in a heterogeneous multi-vendor environment) and its limited availability suggest that use of secure application protocols is a preferable solution whenever their use is possible.

Path encryption at the transport level (as well as session/application level encryption) is best when it is "end-to-end". That way, the data is protected at all points between the client and server. However, transport-level encryption is sometimes implemented between gateways to provide "bulk encryption" between different parts of the Internet. This is a compromise solution, usually done when the end-systems are incapable of supporting IPSEC directly (or the applications can't support SSL or SSH). Ironically, in these cases the part of the path that is protected by encryption is often more secure than the "trusted" enterprise networks at each end.

In contrast to other forms of path isolation, path encryption can generally be done by end-users or system managers, and offers a secure path regardless of how insecure the underlying network links might be.

Transport-level encryption is the foundation of many (most?) Virtual Private Network (VPN) solutions, which are discussed more fully in the next section.

VPNs

VPNs or "Virtual Private Networks" provide an isolated path for some portion of a network connection. This is sometimes implemented using MPLS or VLAN tagging techniques as described in the Path Isolation overview; however, most often a VPN is based on transport-level (rather than application-level) encryption.

The term "tunnel" is often used in conjunction with the term "VPN" to help convey the idea of a separate and distinct channel being created through the vast collection of physical connections comprising the Internet.

VPNs may be implemented end-to-end, or between VPN servers. VPN services are sometimes combined with perimeter defenses. For example, a VPN server may play a role in controlling access via IP addresses, or implementing other policy-based packet filtering.

Reasons to consider a VPN include:

  1. A legacy application is unable to use secure access protocols, so transport-level encrypted tunnels are needed to protect data.
  2. Enterprise IP address ranges must be exported to (remote) clients in order to permit access to restricted national networks.
  3. A legacy application requires authorized clients to use a specific fixed IP address range; so to support remote (esp. DHCP) users, a VPN server is needed to export an accepted IP address to the remote client.
  4. Users/departments need to tunnel through a topological perimeter defense (firewall) in order to export a restricted service to their clients.

However:

  1. VPNs add operational cost and user inconvenience.
  2. VPNs are most effective if end-to-end, in order to minimize the vulnerability zones between VPN servers and hosts, but end-to-end VPNs are harder to configure and manage.

The value of VPNs between the borders of institutions is extremely limited. This follows from the risk profile of the Internet: the risk in the core of the network is low; at the edges it is high. Hence the importance of pushing the protection boundary as close to the end-system as possible.

VPNs make sense when a needed application cannot be secured, or when a firewall must be breached to support legitimate activities. That is, if it is not possible to use applications that communicate via SSH, SSL, or K5, then a VPN may be useful. However, because VPNs may be a substantial source of operational and management overhead, secure application protocols are preferable. On the other hand, in some cases VPNs may be essential to poke holes in a firewall or an ISP service that blocks certain ports. For example, if remote file-sharing protocols are blocked within an enterprise or by an ISP, remote clients requiring those protocols will need to establish VPN tunnels through the firewall and/or ISP.

Like most firewalls, VPNs that terminate in a server rather than the end-system can give a false sense of security since there remains a vulnerability zone between the VPN server and the application server. In other words, VPNs implemented by special gateway devices (rather than IPSEC tunnels all the way to the server), leave open the question of how the still-not-secure endpoints (servers) connect to the secure (VPN gateway) device; hence they are at best an interim point on the way to having secure hosts.

Finally, if security considerations suggest a VPN solution for remote users, the question must be asked: shouldn't the same level of secure access be provided to local users? Said differently, how much are you willing to bet that the local network is significantly more secure than the rest of the net, even if perimeter defenses are in place?

In summary, when do VPNs make sense?

When legacy apps cannot be accessed via secure protocols, e.g. SSH, SSL, K5.
AND
When the tunnel end-points are on or very near the end-systems.
OR
When a department needs to tunnel thru firewalls or ISP filters in order to utilize blocked application ports.

Of course, using VPNs to tunnel thru firewalls could be viewed as a breach of security, since that practice exports to machines "outside" the firewall perimeter the same protocols the firewall was configured to stop. Such are the contradictions of the network security biz. However, in the case of protocols that are widely viewed as insecurable, e.g. NFS, access should be controlled, either by client IP-address access controls or by perimeter blocking plus VPN tunnels exporting them to authorized users.



5.     PERIMETER PROTECTION

Motivation

A well-managed end-system, especially a sensitive server, will have unneeded and insecure services disabled, and it will use strong user authentication for access control. Given that fact, how important is it to block unneeded/insecure services elsewhere in the network, or use device address perimeters for access control? And if important, where should it be done? (i.e. How large should the perimeter be?)

To answer these questions, we observe that:

  1. If the critical hosts can effectively disable unneeded or insecure services themselves, it is not necessary to block them elsewhere; however, additional perimeter protection can provide some additional assurance (defense in depth), although not without some downsides.
  2. In practice, some number of hosts will be insecure. They may be running older/insecurable OS versions, or will not have been configured to do the necessary service blocking, or may need to use insecure protocols for certain purposes.
  3. Protection against DOS attacks calls for blocking packets to the victim IP address as close to the attacking source(s) as possible, in order to reduce link congestion. Of course, if all packets to the victim machine are blocked, this becomes a self-imposed DOS attack.
  4. Getting consensus on what ports/services are unneeded (or even which ones are insecure) is difficult, and becomes more so as the size of the perimeter grows.
  5. The more restrictive the border is, the more likely it is that a department or individual will create a VPN tunnel through the enterprise firewall to export one or more of the restricted services, or otherwise implement "back door" connections around the firewall.
  6. To serve a geographically diverse and growing user base, some host has to be accessible to them. We claim that this host need not be a special expensive dedicated high-support device IF front-end services are put on appropriately dedicated and monitored boxes, and back-end services are on other boxes that can be protected as required.

Our conclusion is that some forms of perimeter defense are essential while others are dangerous. To better understand why, let's examine the basic perimeter defense concept, and the tools available for implementing same...

Policy-based Packet Filtering

Perimeter protection, or perimeter defense, is about policy-based packet filtering at specific physical or logical points in a network; that is, blocking certain packets from proceeding to their intended destination. This is generally called firewalling, although it is important to realize that such packet filtering need not be implemented using a conventional, dedicated firewall device. Perimeter defenses can be implemented in a variety of ways:

The location at which the packet blocking, or filtering, occurs is called the Policy Enforcement Point (PEP). Putting aside host-based filtering, the PEP may either be at a specific physical point within the network infrastructure (topological firewalls), or it may be a topology-independent (aka logical or virtual) firewall device attached to the edge of the network.

Conventional (topological) firewalls rely on being in between, physically and topologically, the target hosts and the bad guys. In contrast, logical or virtual firewalls establish a virtual perimeter based on addresses or cryptographic associations. Both topological and logical perimeters divide the network into an "inside" and an "outside", much like a medieval moat, and --in principle-- prohibit users who are "outside" from successfully accessing the blocked services.

Filtering Options

In either conventional or logical firewalls, packets may be dropped at the chosen perimeter (policy enforcement point) based on several different criteria. The most common cases are blocking by port and blocking by address. For example, one could allow all services from a specific subset of hosts (as identified by their IP address). Alternatively, one could allow a specific subset of services to be accessed from any host. Or any combination.

In general, criteria for blocking packets include:

Perimeter defenses seek to achieve two network security goals. Blocking packets by service type effectively "reduces the size of the target" and doing device-level access control by blocking untrusted source addresses puts another "obstacle between the attacker and the target systems".

Blocking by Port

Port, or service, blocking is the bread-and-butter of firewalling. The major debate among firewall enthusiasts is whether blocking or permitting should be the default condition. That is, should all ports be blocked unless there is a business case made to permit the port to be opened? Or should ports be open (unobstructed) unless the associated services are deemed to present a significant risk to the "interior" systems?

In simple firewalls, the service blocking is done based on a static port number. In more sophisticated firewalls, packet blocking may be based on complex algorithms, which in some cases mimic the dynamic behavior of the application itself. For example, the H.323 videoconferencing system uses a family of protocols, some of which dynamically assign data ports. Being able to allow selected H.323 sessions without unnecessarily opening up lots of other ports requires a relatively sophisticated firewall.

Blocking by Address

Address blocking can be used in conjunction with service blocking, or it can be used simply to extend the defensive perimeter to include selected hosts that would otherwise be "outside". Said differently, a firewall that permits traffic from certain IP addresses extends the internal "trust zone" (i.e. "vulnerability zone") to a selected set of hosts or range of addresses that would otherwise be "outside", at least for a specified set of services.

From a server's perspective, address-based blocking provides access control based on the identity of the client machine (not the user). Sometimes this device-based access control is used in lieu of user access control, sometimes it is in addition, for extra protection.

The argument for access control by host address is that it is necessary when insecure application protocols must be used; even when secure application access protocols are used, it is a useful component of a "defense in depth" strategy and reduces exposure to attacks from the Internet at large.

The argument against such device-level access control is that a) manual configuration of client IP addresses in servers does not scale, and b) it is not necessary if other needed host security measures have been implemented, such as strong user-level authentication.

A middle position is that manually-configured IP-based access controls make sense when a small and static set of hosts are involved, but otherwise should be avoided unless circumstances dictate it. Note that a large-perimeter firewall can be emulated via host filtering on addresses (in addition to whatever other policies are applied) by dropping packets that originate outside the enterprise address space.
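
For instance, a host-based filter can approximate an enterprise-border address policy with a single membership test against the enterprise prefix, as in this small Python sketch (the prefix is a placeholder):

  # Host-side emulation of a border address policy; the prefix is a placeholder.
  import ipaddress

  ENTERPRISE_NET = ipaddress.ip_network("198.51.100.0/24")   # hypothetical campus prefix

  def from_inside(src_ip):
      return ipaddress.ip_address(src_ip) in ENTERPRISE_NET

  # e.g. drop unsecured (non-SSH/SSL) connections whose source is outside the enterprise
  print(from_inside("198.51.100.77"))   # True
  print(from_inside("203.0.113.9"))     # False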

Some examples of using IP address filtering:

Network Address Translation

In certain cases it can be useful to configure selected hosts with "local" or "non-routable" IP addresses, in accordance with RFC-1918. A "Network Address Translator" (NAT) gateway is then needed for those hosts to communicate beyond their local subnet. NATs translate local IP addresses to (one or more) global IP addresses.

Hosts with local addresses are "invisible" to the outside world, except via the NAT (provided network infrastructure routers are correctly configured to block packets with local addresses). A NAT can be combined with service-blocking firewall functionality for additional security.

Thus, Network Address Translators (NATs), combined with router filtering (blocking) of local addresses, have some useful security properties. Originally intended to reduce the demand for global IPv4 addresses, NATs now take many forms and are used for load-balancing, high-availability clusters, firewalls, as well as the original goal of multiplexing a single global IP address for use by many computers.

Unfortunately, not all applications and protocols work through NATs. If an application carries IP addresses as part of the data payload, or uses them in cryptographic security associations, NATs can be problematic. Accordingly, like firewalls, NATs are not a panacea, and like firewalls, they are most effective when used in a small-perimeter context. Nevertheless, many enterprises have found that the advantages of NATs outweigh the disadvantages: They have chosen to use nothing but local IP addresses throughout the enterprise, with all external access mediated by a single NAT/Firewall. In this scenario, the NAT typically maps all the local addresses to a single global IP address provided by the Internet Service Provider (ISP). A key advantage of doing this is to avoid being held hostage by the ISP with respect to IPv4 addresses, which are generally "loaned" to the enterprise by the ISP.
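
The security-relevant behavior of a NAT can be pictured as a translation table, as in the toy Python sketch below. The addresses are placeholders, and a real NAT also tracks protocol, timeouts, and connection state; the point is simply that outbound flows create mappings, and inbound packets that match no mapping are dropped, which is the source of the mild firewalling effect mentioned above.

  # Toy model of NAT port mapping; addresses are placeholders, no timeouts or state tracking.
  PUBLIC_IP = "203.0.113.1"
  _next_port = 40000
  _outbound = {}    # (private_ip, private_port) -> public_port
  _inbound  = {}    # public_port -> (private_ip, private_port)

  def translate_outbound(private_ip, private_port):
      global _next_port
      key = (private_ip, private_port)
      if key not in _outbound:
          _outbound[key] = _next_port
          _inbound[_next_port] = key
          _next_port += 1
      return PUBLIC_IP, _outbound[key]

  def translate_inbound(public_port):
      # Unsolicited inbound traffic has no mapping and is dropped.
      return _inbound.get(public_port)

  print(translate_outbound("10.0.0.5", 51515))   # ('203.0.113.1', 40000)
  print(translate_inbound(40000))                # ('10.0.0.5', 51515)
  print(translate_inbound(40001))                # None -> drop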

Router Access Lists

One way to implement a perimeter defense is to configure routers to block (or filter or drop) selected packets. Computers attached to subnets on such routers are on the "inside" of the perimeter defense. With some types of routers, access lists have significant performance implications, so this option needs to be considered carefully.

Two scenarios are of particular interest:

Border router filtering could be used to implement a basic form of enterprise firewall. Pervasive router filtering describes access list filters that are deployed in every router. There are two common examples of filtering in all routers:

Local addresses are those defined in RFC-1918 to be used within an organization (or home network) for internal use, and are not globally routable. Routers are typically configured to drop packets with these addresses. (NB: In some institutions, this is only done in the border routers.) Local addresses will play a key role in the concept of "logical firewalls" described later.

Spoofed source addresses are those which claim to originate from a different IP subnet than the actual subnet the packet is coming from. These are created either by mistake or, more often, by malice. There is no controversy in having routers block these types of packets, except that on some routers there can be a significant performance hit when doing so.
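
Both kinds of check reduce to simple address arithmetic, as in the Python sketch below (the attached subnet is a placeholder): a customer-facing router interface should only ever see source addresses from its own attached subnet, and RFC-1918 space can be recognized directly.

  # Ingress sanity checks; the subnet below stands in for a router interface's attached subnet.
  import ipaddress

  RFC1918_BLOCKS = [ipaddress.ip_network(n) for n in
                    ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

  def is_rfc1918(addr):
      a = ipaddress.ip_address(addr)
      return any(a in block for block in RFC1918_BLOCKS)

  def is_plausible_source(addr, attached_subnet):
      """Anti-spoofing check: the source must lie within the attached subnet."""
      return ipaddress.ip_address(addr) in ipaddress.ip_network(attached_subnet)

  print(is_rfc1918("10.1.2.3"))                                   # True  -> drop at the border
  print(is_plausible_source("198.51.100.20", "198.51.100.0/24"))  # True  -> forward
  print(is_plausible_source("203.0.113.9", "198.51.100.0/24"))    # False -> spoofed, drop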

Given the vulnerability zone problems inherent in a large-perimeter (border) defensive strategy, a fair question to ask is whether any packet filtering done on a border router ought not also to be done on all routers. Putting aside router performance considerations for the moment, the larger and more diverse the enterprise, the less compelling a border-only blocking policy becomes. A valid exception would be in enterprises where all internal computers are given local IP addresses, and border routers take on the NAT function of mapping to a single ISP-provided global IP address. In this scenario, only the border routers would be configured to block packets with local addresses.

Perimeter Size Considerations

The defensive perimeters described above can be large, including an entire enterprise, or small, encircling a small set of sensitive servers. In the limit, the perimeter encircles a single host. However, as the size of the perimeter increases, we encounter the "Perimeter Protection Paradox" and the "Trusted Network Fantasy".

The "Perimeter Protection Paradox" is as follows: The value of a firewall is proportional to the number of systems protected; however, the effectiveness of a firewall is inversely proportional to the number of systems protected. This is because the larger the number of systems behind the firewall, the more likely it is that one of those systems has been or is about to be compromised by attacks that are difficult or impossible to block at an enterprise firewall (e.g. email attachments or web server exploits). Once that happens, the organization is vulnerable to "insider" attacks, and the firewall is rendered ineffective against them. Moreover, large-perimeter or border firewalls must implement a "lowest common denominator" filtering policy. The larger the number of users or business units in an enterprise, the larger the number of different applications likely to be in use, and therefore the larger the number of "holes" that must be punched thru the firewall to permit operation of those applications. Obviously, the more holes in the firewall, the less effective it is.

A corollary is that so-called "trusted network" zones defined by firewall perimeters should more accurately be called "vulnerability zones" because they represent areas of minimum protection and therefore high risk from attacks that circumvent the perimeter defense. If the number of machines in a "trusted network" zone is sufficiently large, then the trustworthiness of that network zone is surely fantasy.
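
The scale argument can be made concrete with a little arithmetic. Assume, purely for illustration, that each machine inside the perimeter independently has a 1% chance of already being compromised; the chance that at least one machine in the zone is compromised is then 1 - (0.99)^N, which grows quickly with the number N of machines in the zone:

  # Illustrative only: probability that at least one of N interior hosts is compromised,
  # assuming each one is independently compromised with probability p = 0.01.
  p = 0.01
  for n in (1, 10, 100, 1000, 10000):
      print(n, round(1 - (1 - p) ** n, 3))
  # prints (one pair per line): 1 0.01, 10 0.096, 100 0.634, 1000 1.0, 10000 1.0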

The principal rationale for network perimeter defenses is that not all end-systems are secure or securable, and having one or more barriers where attacks can be stopped before they get to the insecure systems is clearly a Good Thing (unless those same barriers keep legitimate users from gaining access to the services they need, or lessen the urgency of securing the end-systems). The premise is that it is easier to protect a group of insecure systems by establishing a defensive perimeter around them, than it is to make those systems secure. But what if those insecure systems have already been penetrated, or are somehow penetrated in spite of the perimeter defense?

So the paradox is that a perimeter defense only adds value when some number of the end-systems are insecure, but insecure end-systems constitute potential (if not probable) holes thru the perimeter defense. Moreover, the larger the perimeter, i.e. the more systems within the perimeter (behind the firewall), the larger the zone of trust, and the higher the probability that one of the machines inside the zone of trust has been compromised, thereby undermining the firewall. This is why we assert that these "trust zones" are more aptly described as "vulnerability zones".

Even though it only takes one compromised system to cause Big Trouble, it is natural to want to reduce the number of successful attacks against the insecure systems by deploying a large-perimeter defense. But how much solace should this approach bring? Unfortunately, not much.

Given that a network-topology-based perimeter will rarely encompass all legitimate users, there must be provisions for secure "remote" access by legitimate users. The question then becomes: is the "inside" of the perimeter sufficiently more trustworthy than the "outside" to warrant a more relaxed form of access? Note also that if a topological perimeter is extended to include selected "outsiders" via VPNs, then the vulnerability zone has also grown to include those selected "outside" hosts.

While most firewalls are intended to protect large numbers of systems by providing a perimeter defense at the borders of a network, it is increasingly difficult to identify topological network boundaries that correspond to organizational boundaries. Moreover, all we can really say about a "larger perimeter" defense is that there are probably more "bad guys" on the outside than on the inside, and probably more legitimate users on the inside than on the outside. Unfortunately, there is a temptation to consider perimeter defense firewalls as an economical alternative to properly protecting end-systems on the "inside".

We conclude that small zones of trust, i.e. small service-blocking perimeters, are better than big ones. A corollary of this is that "all access is remote access" unless one is willing to believe that insecure access methods can be tolerated within some parts of a network. The "small perimeter" strategy is also consistent with the observation that the larger the perimeter, the more difficult it is to get consensus on which services are not needed outside the perimeter. Indeed, within a given perimeter larger user populations usually imply more applications in use, and a corresponding increase in the number of intentional holes needed in the surrounding firewall.

Does this mean that, for example, blocking services at the enterprise border is a bad idea? This question will be considered in a subsequent section on Border Firewalls.

Operational Considerations

The case for moving security perimeters toward the edge of the network is based on the goal of minimizing the vulnerability zones inherent in large-perimeter defenses, and avoiding the psychology of a false sense of security. However, there are also operational considerations that affect the location of policy-based packet filters.

The key to survival in large-scale network management is reducing exceptions and special cases. Therefore, in an environment where the network is operated as a common utility, just like power, it is not reasonable to put departmental or application-specific packet filters within the core network infrastructure, either via router filters or dedicated firewalls. Experience has shown that the ability to manage the network and the Mean Time To Repair (MTTR) when something fails both get worse when this is done.

The conclusion is that in order to protect the operational integrity of the network, local packet filtering policy enforcement points should be outside the core network infrastructure (at the edge), controlled by the relevant departments or individuals. Only packet filtering policies that apply to everyone in the enterprise should be implemented within the core.

Denial of Service Considerations

An exception to the principle of moving protection boundaries to the edge of the network concerns defense against denial-of-service (DOS) attacks, which ideally call for protection boundaries to be as close to the attacker as possible, in order to protect the intervening links from saturation. Unfortunately, this is very hard to do, and usually involves blocking all traffic to the victim host --constituting another form of DOS attack against that host.

As a strategy to protect against DOS attacks in particular, some sites block as many ports at their border as possible, thus reducing the number of "entry points" for DOS attackers.

In the case of an ongoing DOS attack, a different form of perimeter defense may be needed, namely, blocking incoming packets based on their destination address, in order to protect both the victim system and the enterprise network infrastructure. During such attacks, the goal is to block incoming packets destined for the victim system as close to the enterprise border as possible, or better, at the upstream network service providers.

Firewall Tradeoffs

Firewalls, especially large-perimeter/enterprise firewalls, are often sold as security panaceas but they don't live up to the hype. Here's the problem:

Firewall Virtues:

  • Single device to manage
  • Protects insecure hosts
  • Defense in Depth
  • DOS attacks: Protects inside links
  • Legal liability defense

Firewall Faults:

  • Large perimeter = large vulnerability
  • Encourages a false sense of security
  • Encourages backdoor network links
  • Encourages tunneling apps thru port 80
  • Complicates network debugging
  • Assumes a fixed security perimeter
  • May be hard to manage
  • May be a performance bottleneck
  • May be a single-point-of-failure
  • May inhibit legitimate activities
  • May force use of VPN tunnels
  • Won't stop any threats from inside
  • Won't stop many threats from outside

It is easy to underestimate the operational cost of firewalls. Organization boundaries and filtering requirements constantly change, and configuration errors are easy to make. (Indeed, some enterprises find that the principal value of their Intrusion Detection Systems is in detecting firewall configuration mistakes!) Firewalls have also been known to have serious security bugs themselves, so they require continuing maintenance and upgrade --as well as healthy skepticism. Misplaced faith in the effectiveness of firewalls has to be one of the greatest threats to real security, so at the risk of endless repetition: we must examine the entire network computing system, and cannot ignore end-system management.

Where to Filter

Notwithstanding the foregoing concerns, policy-based packet filtering is clearly an important tool in overall system security; the only debate concerns what to filter, and where to filter it. The choices for where to put the policy enforcement point include:

  • routers at the enterprise border
  • routers within the core network infrastructure
  • edge firewalls near departmental subnets, server sanctuaries, or individual hosts
  • the end-systems themselves (host-based filtering)

As previously noted, filtering that is acceptable to all users of a network infrastructure (e.g. blocking packets with spoofed source addresses) should be deployed at the borders and/or throughout the network infrastructure via router access lists. More restrictive departmental or application-specific filters need to be pushed out to the edges of the network, as close to the end-systems as possible. (Ideally, implemented by the end-systems.) Department-specific firewalls placed within the network infrastructure are problematic because they undermine the operational integrity of the network utility. Hence, we will focus next on reviewing border firewall issues, followed by a look at some edge firewall choices.

Border Firewalls

Conventional wisdom holds that a firewall at the border of a large enterprise constitutes the principal and essential form of defense against network-based computer attacks. Our unconventional view is that this position is dangerous, leading to a false sense of security which undermines efforts to implement a comprehensive security program, and often increasing operational cost and inconvenience unnecessarily. While limited forms of "large perimeter" defense can be part of the overall solution, viewing them as an alternative to small-perimeter defensive strategies (especially securing end-systems themselves) is dangerous folly, quite analogous to reliance on the defensive Maginot line that led to disastrous results for the French during WW II.

The first rule of sensible firewalling is that enterprise firewalls are NOT a viable ALTERNATIVE to end-system security, because they will not protect against many important threats. (Email viruses are one example; recent vulnerabilities in Microsoft's IIS web server are another.) Obviously a perimeter defense cannot protect against threats originating from within the perimeter, nor can it protect against threats that use access paths which need to be available to those "outside" (e.g. email, web, remote terminal). Therefore, a perimeter defense firewall may be considered as an additional level of security, but if the consequence of deploying such a perimeter defense is to lessen the urgency of keeping end-systems secure, or to cause the creation of backdoor connections, then the firewall strategy will actually UNDERMINE overall security.

A perimeter policy-enforcement point should implement policies that are consistent with the needs of all the entities it is protecting. Traditional enterprise firewalls intended to provide "security by moat" may fail to achieve their goals because the policies needed for every organizational unit in the whole protected net are too diverse, with the consequence that "back door" connections --circumventing the firewall-- begin to appear in the network, or because of the inherent weaknesses in a large-perimeter defensive strategy.

Let's revisit the principal motivation for enterprise firewalls: the assumption that large-perimeter defense is needed because it is infeasible to secure every host on the network. Now look at this from the perspective of a host administrator and ask yourself the following question:

QUESTION: If a restrictive border firewall surrounds your --and 50,000 other-- computers, would you take any fewer protective measures for your own host?

ANSWER: Only if you regularly win the lottery.

Clearly a prudent host administrator should not put much faith in an enterprise border firewall since to do so implies sharing one's "security fate" in the resulting vulnerability zone with large numbers of other --potentially insecure-- computers. In other words, it would be negligent not to secure one's own hosts, even if there were a restrictive large-perimeter firewall in place.

In a large, decentralized, heterogeneous organization, when does a border or pervasive packet-filtering policy make sense? Broadly, only when the policy benefits essentially everyone inside the perimeter and blocks nothing that any legitimate user needs.

Principal examples that meet these criteria are those mentioned earlier: blocking packets with spoofed source addresses, and RFC-1918 local address filtering. A current debate is whether or not NFS (generally considered to be an insecure protocol) should be blocked at the border. The argument in favor of doing so is that it would be an efficient way of protecting lots of end-systems from bad guys. The arguments against blocking NFS at the border include:

  • it enlarges the border vulnerability zone and feeds a false sense of security
  • an enterprise-wide block may unnecessarily inconvenience units with legitimate cross-border needs
  • the same protection can be obtained locally, by the host administrator, via host filtering

This last point deserves some elaboration. NFS is an example of a protocol that may provide useful and/or essential functionality for a workgroup, but which carries inherent security risks because of its design. Thus, it may not be reasonable to just shut it off, but it also should not be made available to crackers world-wide. Is it possible for a host administrator to solve this problem, or is it necessary to block NFS in the core of the network, e.g. at the enterprise border? It turns out that by combining port filtering and address filtering, it is possible for the host administrator to solve the problem locally via host filtering, thereby avoiding the problem of imposing an enterprise-wide policy that might unnecessarily inconvenience others. (Assuming, of course, that the NFS service is hosted on an OS with reasonable packet filtering capabilities!)

For any particular service (associated with a specific port number), host filtering to permit only those packets originating within the enterprise address space provides equivalent functionality to having a border firewall block the service. However, a host filter (on the server itself) could do better, and restrict access for that service only to a particular subnet, or to particular client machines. This would avoid some of the risks associated with the vulnerability zone created by border blocking. As usual, if the server in question is incapable of being made secure, then it should be front-ended by something that can be made secure. Note that such IP address filters are arguably superfluous when using a secure protocol for accessing a service (SSH, SSL, Kerberos), but quite appropriate when there is a requirement to use an insecure protocol (e.g. NFS).
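
As an illustration only, the following Python sketch models the decision such a host filter makes for an insecure-but-needed service like NFS: accept the service's port only from designated client networks, and drop everything else to that port. The prefixes and helper names are hypothetical; a real deployment would express the same policy in the host operating system's native packet-filtering facility rather than in application code.

    # Minimal sketch of a host-filtering policy for an insecure-but-needed service.
    # The client prefixes are hypothetical placeholders.
    from ipaddress import ip_address, ip_network

    NFS_PORT = 2049                                    # the service we must expose locally
    ALLOWED_CLIENTS = [ip_network("192.168.10.0/24"),  # hypothetical departmental subnet
                       ip_network("10.0.0.0/8")]       # hypothetical enterprise address space

    def permit(src_ip, dst_port):
        """Return True if the host filter should accept the packet."""
        if dst_port != NFS_PORT:
            return True                                # this policy governs only the NFS port
        src = ip_address(src_ip)
        return any(src in net for net in ALLOWED_CLIENTS)

    # Example: a local client is accepted, an arbitrary Internet host is not.
    assert permit("192.168.10.7", 2049) is True
    assert permit("203.0.113.50", 2049) is False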

A desirable goal is to minimize the administrative or organizational distance between the individuals responsible for policy definition, those doing the filter configuration, and those affected by the policy. In a perfect world, all of those folks would either be the same or at least have the same business needs and objectives. The larger the perimeter, the less likely it is that this will be the case, unless the enterprise is extraordinarily homogeneous. In a large, diverse and decentralized organization, there is a risk of serious friction between organizational units over firewall policy conflicts. This is another reason to favor small-perimeter packet-filtering policies that can be tailored to the needs of each organizational unit. When a border firewall is too restrictive for the needs of individual business units, the consequence is a proliferation of backdoor network connections, or the tunneling of all manner of applications through port 80 --thereby defeating the purpose of the firewall.

Does an enterprise border firewall ever really make sense?

It may, when:

  • the organization is homogeneous enough that a single security policy suits essentially everyone
  • its computing and network requirements are relatively simple, so few holes must be punched in the firewall and performance is not an issue
  • desktop and server configurations are centrally managed

Needless to say, a research university with very high bandwidth network connections fails this suitability test on all three counts. Moreover, recent surveys suggest that a majority of enterprises that embrace border firewalls as their primary defense have been successfully attacked in spite of the firewalls... so caveat emptor. On the other hand, it must be noted that our current legal and liability climate may be such that absence of a border firewall might be the basis for a claim of negligence, regardless of the technical validity of any such claim. However, reasonable people will recognize that such claims are specious if there is a strong small-perimeter and host-based protection regime in place.

Enterprises that meet the three criteria listed above may also be candidates for using local IP addresses internally, with a border NAT/Firewall translating internal/local addresses to a single ISP-provided global address.

It's also worth noting here that a number of studies indicate that the greatest security risk comes from organizational insiders: that is, miscreants who have legitimate access to systems. So bad guys aren't always "outside" the moat, but you already knew that. More important is to remember that a bad guy on the outside who successfully penetrates a client machine belonging to a good guy will appear to be a (trusted?) insider. This can happen regardless of whether the compromised machine is inside the topological enterprise border or outside it (in a "remote access" scenario).

Edge Firewalls

Edge firewalls may make sense as a tool to compensate for the lack of security in individual or small numbers of end-systems. This includes servers which are deemed critical and/or which cannot be secured for either technical or non-technical reasons. It is even sensible to have a firewall protecting a single host that is unable to be configured securely. In particular, unless/until desktop operating system vendors obviate the need, use of personal firewalls such as Zone Alarm or Black Ice should be considered to protect client machines, although these do bring with them non-trivial support concerns.

Edge firewalls come in several forms:

  • a dedicated firewall appliance in front of a departmental subnet or server sanctuary
  • a low-cost residential gateway in front of one or a few hosts
  • a logical (NAT-based or VPN-based) firewall
  • personal/integral firewall software on the host itself

And they are useful in various scenarios:

  • protecting a server sanctuary of critical servers
  • protecting a legacy or otherwise unsecurable server
  • protecting departmental client machines, including those in home or remote offices

(Recall that a "server sanctuary" is a portion of a machine room --perhaps only a rack-- set aside for critical servers, and which will typically have special protections, e.g. a small-perimeter firewall and perhaps additional physical security such as a locked cage.)

Residential Gateways

As broadband cable and DSL connections to homes and remote offices have become more common, a large number of vendors have introduced low cost firewalls for this market. Often called residential gateways, they usually have two Ethernet ports for connection to the cable/DSL modem and the home network switch. They include basic packet filtering capabilities, network address translation, and DHCP service... often in a package the size of a small paperback book at a cost below $100. The reason for mentioning them here is that they may be useful for protecting a single legacy/insecure server with modest performance requirements (say, 5 Mbps). You can't beat the price for a dedicated hardware firewall! However, in addition to modest performance (since they are targeted for home broadband users) some of these boxes are not very sophisticated software-wise, and may have trouble handling firewall-unfriendly applications such as H.323 conferencing and IPSEC tunneling.

Logical (Virtual) Firewalls

As an alternative to conventional firewalls which are interposed somewhere in the middle of the communication infrastructure, one can also deploy "logical" or "virtual" firewalls. These are dedicated packet filtering devices which do not have to be placed "in the middle of" the network because they communicate with their clients only via encrypted tunnels, or via local non-routable IP addresses. The advantage of such logical firewalls is that departments can deploy them on their own, and they do not interfere with management of the core network infrastructure like conventional firewalls do.

There are two flavors of logical firewall: NAT-based and VPN-based.

NAT-based. The idea behind a NAT-based logical firewall is to give the computers-to-be-protected local (i.e. not globally routable) addresses, per RFC-1918, and to moderate traffic between the "outside" and the target computers via a firewall that also performs Network Address Translation (NAT). Local traffic between target hosts on the same subnet need not go thru the logical firewall. While providing an opportunity to "harden" a set of servers that cannot be placed in server sanctuaries, the same device can provide protection for client machines using NAT "masquerading", whereby local client machine addresses are multiplexed to the firewall's global address. Clients could get local addresses (and the gateway address) from a local DHCP server.
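
To make the "masquerading" idea concrete, here is a minimal Python sketch of the translation state such a gateway maintains; the addresses and port numbers are hypothetical, and real NAT happens in the gateway's kernel or firmware rather than in application code like this.

    # Conceptual sketch of NAT masquerading state (addresses/ports are hypothetical).
    from itertools import count

    GLOBAL_ADDR = "192.0.2.10"      # the logical firewall's single global address
    _next_port = count(40000)       # pool of outside ports handed out by the gateway
    _table = {}                     # (local_ip, local_port) -> outside port

    def outbound(local_ip, local_port):
        """Rewrite a local source endpoint to the shared global address."""
        key = (local_ip, local_port)
        if key not in _table:
            _table[key] = next(_next_port)
        return GLOBAL_ADDR, _table[key]

    def inbound(outside_port):
        """Map a reply arriving at the global address back to the local client."""
        for (local_ip, local_port), port in _table.items():
            if port == outside_port:
                return local_ip, local_port
        return None                 # no mapping: unsolicited traffic is dropped

    # Two local clients share the one global address, on different outside ports.
    print(outbound("192.168.1.20", 51515))   # ('192.0.2.10', 40000)
    print(outbound("192.168.1.21", 51515))   # ('192.0.2.10', 40001)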

One can think of this approach as a "split" firewall: the policy and protection functions are split between the logical firewall and the subnet router. The subnet router implements a global packet filtering policy that makes the locally-addressed machines invisible to the outside, except via the logical firewall; i.e. it forces all off-subnet traffic thru the logical firewall, where arbitrary packet filtering policies can be applied.
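
The router half of the split is simple enough to sketch as well. The following Python fragment is only a model of the policy (with hypothetical addresses), not router configuration: the subnet router refuses to forward anything carrying a local RFC-1918 address, so the only path off the subnet is through the logical firewall's global address.

    # Sketch of the subnet router's policy in the "split firewall" arrangement.
    from ipaddress import ip_address, ip_network

    RFC1918 = [ip_network("10.0.0.0/8"),
               ip_network("172.16.0.0/12"),
               ip_network("192.168.0.0/16")]

    def is_local(addr):
        """True if the address is in non-routable RFC-1918 space."""
        a = ip_address(addr)
        return any(a in net for net in RFC1918)

    def router_forwards(src_ip, dst_ip):
        """Forward only globally-routable traffic; local addresses stay invisible."""
        return not (is_local(src_ip) or is_local(dst_ip))

    # Traffic already NAT'd by the logical firewall carries its global address and passes;
    # a locally-addressed host trying to bypass the firewall does not.
    assert router_forwards("192.0.2.10", "198.51.100.20") is True
    assert router_forwards("10.1.2.3", "198.51.100.20") is False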

Information on a Do-It-Yourself NAT-based logical firewall is available at http://staff.washington.edu/corey/fw/

VPN-based. A VPN-based logical firewall can be thought of as an "upside down VPN server". Most VPN servers are deployed to permit machines "outside" to establish secure connections to the server, and thence to a presumed-safe portion of the enterprise network. Turn this upside down, and configure the internal machines-to-be-protected with IPSEC tunnels to a VPN server that also has firewall capabilities. Thus, all access to/from the target machines is moderated by the VPN server/firewall. This approach requires more setup effort than the NAT-based logical firewall, but it provides a bit more security and works when machines are on multiple subnets.

Firewall Reality Check

The most important (and perhaps controversial) contention of this credo is that security perimeters must be pushed to the edge as much as possible; in particular, that border solutions (enterprise firewalls and VPN gateways) don't really solve the problem, because they

  • create large vulnerability zones and a false sense of security
  • encourage backdoor network connections and the tunneling of applications thru port 80
  • cannot stop threats that originate inside the perimeter, or that arrive over access paths which must remain open

Most everyone would agree that network topologies rarely map to organizational boundaries in a contemporary Internet commerce environment, but many find it hard to let go of traditional firewall dogma, which holds that you can build a moat and keep all the good guys on one side and the bad guys on the other. This premise is demonstrably false in most modern business scenarios. Indeed, it takes a huge leap of faith to believe that most internal enterprise networks are trustworthy; hence, if one is to build a moat, it best be a very small one: the smaller the network "trust zone" the better.

On the other hand, we accept the premise that there are many hosts on our network that are essentially unmanaged and therefore vulnerable to attack and compromise. Such machines represent a clear and present danger to all users of the net. At first blush, this observation argues for a perimeter defense that can protect all such hosts "at once" --and therefore, at less cost than securing each individual end-system. Such a "fix it in one place" solution would be a Beautiful Thing, if it really solved the problem. Alas, we maintain that life is not so simple.

Institutions that are the best candidates for conventional enterprise firewalls are those which are organizationally homogeneous with relatively simple computing requirements, but most importantly, those which centrally manage all desktop and server configurations. In contrast, the autonomous departments of a research university are a better match for the small-perimeter defensive strategies emphasized here. But host-based protection is still critical for both types of organizations.

Hosts that cannot be properly managed/secured, perhaps due to bizarre regulatory constraints, need to be protected by devices that can be secured... e.g. firewalls. However, such hosts (if they can share a common security policy) should if possible be clustered together for more efficient small perimeter protection. If an unmanageable host must remain outside a sanctuary on an arbitrary subnet, it becomes a candidate for its own per-host firewall.

A growing number of analysts (e.g. Burton Group, @stake) are talking about things like "virtual enterprise networks" and "getting beyond the firewall" and "blurring of extranet and intranet". So an approach that provides seamless external access to business functionality, while still demonstrating due diligence in applying resource-appropriate controls, is in fact cutting edge. We assert that a well-managed host can be on an open network and also be secure... notwithstanding the many examples of insecure hosts.

Apparently, our "open network, closed server, protected session" strategy is catching on, as large corporations begin to embrace what some call an "inverted network" approach, wherein LANs are considered "dirty" and VPN sessions are established from desktop systems to the servers. DuPont is a recent example of a large enterprise said to be adopting this philosophy and abandoning their traditional border firewalls.

John Gilmore, a principal of the Electronic Frontier Foundation, is famous for observing: "The Internet deals with censorship as if it were a malfunction and routes around it." Unfortunately, when faced with individuals determined to attack an organization or undermine its policy, the same observation applies to other, entirely justifiable, forms of policy-based restrictions, whether they be limits on the bandwidth available to "sacrificial" applications such as Napster, Morpheus and KaZaA, or total blocking via border firewall filtering. This would be another argument for moving the protection perimeter as close to the sensitive resource as possible, so there are fewer opportunities to bypass the (border) firewall.

While it's easy to have an "open" network :) and it's plausible to believe that sensitive and critical servers can be properly managed (and thus are safe to have connected to the net), what about the zillions of desktop systems that are not centrally managed? Personal or integral firewalls are one possible answer, but this approach is most tractable when the individual firewalls can be centrally administered, even when the rest of the desktop software is not.



6.     ACTION PLAN

Priorities

Our goal is to prevent successful attacks using strategies that work in decentralized organizations and are compatible with operating a high-availability, high-performance network utility. A wide spectrum of defensive strategies has been outlined, but they don't all have the same payoff. In this section we try to recap and prioritize the most important ones... all the while remaining true to our key design principles; especially the idea of pushing security perimeters to the edge, thereby reducing the size of vulnerability zones.

Vulnerability Probing

Even readers who weren't really paying attention will have noticed the tension between:

  • our advocacy of small-perimeter, host-based defenses, and
  • the reality that large numbers of hosts on the network are unmanaged and insecure.

Although we assert that the case for small-perimeter and host-based defenses is compelling, the unmanaged host problem cries out for a response. While it may seem like "swimming upstream", the proposed solution is to compensate for lack of large-perimeter border defense by aggressive pro-active vulnerability probing of end-systems, in order to detect problem systems before the bad guys do. This strategy needs to be backed by a strong enterprise policy that makes securing end-systems a corporate priority (read: provides resources and sanctions).
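
The probing itself need not be exotic. As an illustration only (the hostname and port list below are hypothetical, and probing of course requires authorization), even a short script can discover which risky services a host exposes before the bad guys do; a real program would use a dedicated scanner, but the idea is the same.

    # Minimal sketch of proactive vulnerability probing: which risky ports answer?
    # Hostname and port list are hypothetical; probe only hosts you are authorized to scan.
    import socket

    RISKY_PORTS = {23: "telnet", 111: "portmap", 445: "smb", 2049: "nfs"}

    def probe(host, timeout=1.0):
        """Return {port: True} for each port that accepted a TCP connection."""
        results = {}
        for port in RISKY_PORTS:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    results[port] = True
            except OSError:
                results[port] = False
        return results

    if __name__ == "__main__":
        for port, reachable in probe("host.example.edu").items():
            if reachable:
                print(f"{RISKY_PORTS[port]} ({port}) is reachable -- investigate")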

Insecure hosts are not a private affair; they are everyone's problem, and securing them must become a key organization-wide priority. If inadequate investment is made in vulnerability prevention and detection, plan on major investments in forensics and recovery, not to mention liability and customer trust issues.

Eliminating passwords in the clear

First on the vulnerability prevention list is elimination of passwords transmitted over the net "in the clear", i.e. without encryption. This is where secure application protocols (SSL, SSH, Kerberos) play such an important role. If these protocols cannot be used to protect user authentication credentials, the situation may call for encrypted VPN tunnels, e.g. IPSEC connections. All of these protocols can also protect sessions after authentication, thereby defending against TCP session hijacking.
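
By way of illustration (a minimal sketch using Python's standard library, with a hypothetical server name and port), the essential habit is to negotiate an encrypted protocol such as SSL/TLS before any credentials are transmitted:

    # Sketch: never send a password over a raw socket; negotiate TLS first.
    import socket
    import ssl

    context = ssl.create_default_context()        # verifies the server's certificate

    with socket.create_connection(("mail.example.edu", 993)) as raw:
        with context.wrap_socket(raw, server_hostname="mail.example.edu") as tls:
            # Only now is it reasonable to transmit authentication credentials;
            # everything from this point on is encrypted and integrity-protected.
            print("negotiated", tls.version())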

Managing servers securely

Critical servers must be operated securely. In particular:

  • run only the services actually needed, and disable the rest
  • keep the operating system and applications patched and up to date
  • accept administrative and user access only via secure protocols (SSH, SSL, Kerberos)
  • use host-based packet filtering to restrict any insecure-but-necessary services to known clients
  • place the most critical machines in a server sanctuary where practical

Robust authentication

The goal is user-authentication that is robust even if a client machine has been compromised. Toward this end, authentication mechanisms using one-time credentials (e.g. SecureID cards or s/key) or challenge-response systems can help a great deal in protecting critical servers and their data, since cracking a client machine may be substantially easier than penetrating a well-managed server. Without robust user authentication, a compromised client can allow an attacker to gain access to the server by impersonating a legitimate user. (The one-time credentials make compromise of the client system less serious, because the captured credentials cannot be re-used.)
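
As an illustration of the one-time-credential idea (this is not the SecureID or s/key algorithm; it is a sketch of a standard HMAC-based construction, HOTP per RFC 4226, with a placeholder shared secret), each login consumes a fresh counter value, so a code captured from a compromised client cannot be replayed:

    # Sketch of a counter-based one-time code (HMAC construction per RFC 4226).
    # The shared secret is a placeholder; real tokens keep it in protected hardware.
    import hashlib
    import hmac
    import struct

    def one_time_code(secret, counter, digits=6):
        msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
        digest = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                          # "dynamic truncation"
        value = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
        return str(value % 10 ** digits).zfill(digits)

    # Successive counters yield unrelated codes; the server tracks the counter too.
    secret = b"placeholder-shared-secret"
    print(one_time_code(secret, 0), one_time_code(secret, 1))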

Hardening client machines

Client/desktop security is harder than server security because there are more of them and they must often use a full spectrum of services on older operating system versions. Since client compromise can put all of the data on the server at risk, use of robust user-authentication techniques when accessing sensitive info is essential to mitigate this threat. Even so, desktop security cannot be ignored... Client machine security priorities:

  • keep the operating system and applications patched
  • never send passwords over the net in the clear
  • disable services the machine does not need to offer
  • consider a personal or integral firewall, preferably one that can be centrally administered

Edge Firewalls

How can departments protect themselves?

In addition to the preceding guidelines, certain classes of systems may warrant additional protection: edge firewalls may be appropriate.

Because having departmental firewalls inside the core of the campus network undermines the operational integrity of the network (and this is not conjecture; it is a fact based on experiment), and because managing exceptions tends to increase costs and increase Mean-Time-To-Repair when things go wrong, departmental or application-specific packet-filtering within the network core is not supported. However, several firewalling options are available to departments:

  • an edge firewall at the departmental subnet boundary, managed by the department
  • a server sanctuary with its own small-perimeter firewall for critical servers
  • a logical (NAT-based or VPN-based) firewall, which requires no changes to the core network
  • a residential gateway or personal firewall protecting an individual host

Network Infrastructure

All network routers should block packets with bogus source addresses. Switched subnets should be deployed wherever possible. Keep the core network infrastructure as simple as possible; avoid special cases. Examine potential for border filters that would do more good than harm. Provide a way to monitor and/or block specific packet flows during a Denial of Service attack.
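
To make the first of these rules concrete, a border router's anti-spoofing check amounts to the following; this is only a Python model of the policy, with hypothetical address prefixes, not router configuration.

    # Sketch of border anti-spoofing checks; the prefixes are hypothetical placeholders.
    from ipaddress import ip_address, ip_network

    ENTERPRISE = ip_network("198.51.100.0/24")    # hypothetical enterprise address space
    NON_ROUTABLE = [ip_network("10.0.0.0/8"),     # RFC-1918 space should never arrive
                    ip_network("172.16.0.0/12"),  # from the outside world
                    ip_network("192.168.0.0/16")]

    def accept_inbound(src_ip):
        """Packets arriving from outside must not claim inside or RFC-1918 sources."""
        src = ip_address(src_ip)
        if src in ENTERPRISE:
            return False
        return not any(src in net for net in NON_ROUTABLE)

    def accept_outbound(src_ip):
        """Packets leaving the enterprise must carry an inside source address."""
        return ip_address(src_ip) in ENTERPRISE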

Central Activities

Some Bad Ideas:

Departmental or application-specific firewalls within the network core.

We are committed to assisting departments in securing their hosts, but we are also committed to operating a high-availability, high-performance network utility. So we espouse departmental security solutions that are compatible with maintaining the operational integrity of the core network infrastructure. Placing departmental firewalls within the core is antithetical to these goals, and we claim that the edge firewall strategies described herein ultimately provide greater security.

VPN tunnels that terminate at institution borders.

This goes to a concern about "checklist" security techniques that can add significant cost without improving security very much. A perfect example is the call for encrypted tunnels (VPNs) between the borders of institutions, ignoring the fact that most security risks come from within institutional borders (at one end or the other), not from within the network core, and that use of secure application protocols (e.g. SSH/SSL) is in every respect a superior strategy.

Use of IP-based access controls for large populations of users.

Packet filtering based on IP address is a valuable tool, but best used sparingly and at small scale. Managing individual client IP addresses for a major service is not particularly scalable, and arguably superfluous if strong user authentication is in place. Filtering on large numbers of client IP addresses is possible by using a VPN server to export approved addresses to clients who successfully authenticate and establish a VPN tunnel --but VPNs add complexity and inconvenience. They are also arguably superfluous if secure application protocols are used. In particular, if critical services are exported only via secure protocols such as SSL/SSH or Kerberos, and only those two or three ports are accessible to the world, IP-based access controls on those ports offer very little additional security, if any. (One must decide between the risk of bugs in the SSL/SSH protocol daemons vs. bugs in the VPN gateways... and the VPN gateways are not the sure-fire winner in that debate.)

Over-reliance on large-perimeter defenses... in particular, believing a single firewall can effectively protect large numbers of unmanaged hosts.

We conclude with our recurring theme about the misuse of firewall or defensive perimeter strategies, in which large portions of the enterprise network and attached systems are deemed a priori to be trustworthy. As noted, these can lead to a false sense of security and can also trigger back-door network connections. It only takes one compromised host on the "inside" to do enormous harm, if end-system security has been neglected, so "large perimeter" firewall strategies must be approached with great caution.

Parting Thought

The Internet opens new doors for any enterprise, but it also brings nefarious characters to your door. Maintaining the security of large numbers of systems attached to the Internet is hard. Even in a tightly controlled, organizationally homogeneous environment it is difficult. In a research university characterized by diverse and autonomous units, the challenge is an order of magnitude greater.

In this credo we have attempted to outline a defensive strategy that provides a roadmap to secure network computing while not introducing onerous policy restrictions that are sure to backfire or undermine the network infrastructure. Nevertheless, it is clear that the threats are growing and the operating system vendors have been slow to respond with systems that are safe to plug into the net right out of the box. Even with improved OS offerings, we can expect the battle to continue, with threats migrating to whatever is currently the weakest link in the system. Hence, we must all be prepared to make continuing investments in network computing security unless we are willing to disconnect our computers and go back into our cloisters. So, welcome to the global village, complete with the village idiots and sociopaths who are providing job security for everyone involved in network security.



ACKNOWLEDGEMENTS

The words in this document (and therefore responsibility for their inability to communicate clearly) are primarily my own, with significant contributions from Bob Morgan. However, the concepts are a result of hours of debate and discussion with my colleagues in the University of Washington Computing & Communications organization, and elsewhere. Principal partners in crime, without any claim that they agree with everything stated, include: Dave Dittrich, Michael Hornung, Eliot Lim, RL 'Bob' Morgan, Aaron Racine, Tom Remmers, David Richardson, Corey Satten, Lori Stevens, and Sean Vaughan.



REFERENCES

http://www.gocsi.com/

http://www.cert.org

http://www.sans.org/

http://www.nipc.gov

http://www.rootshell.com

http://www.securityfocus.com/

http://security.vt.edu

http://www.cornell.edu/CPI

http://www.atstake.com/company_info/acrobat/turning_security_on_head.pdf

http://www.microsoft.com/ISN/columnists/using_ipsec.asp

http://www.washington.edu/computing/security/update.html

http://staff.washington.edu/dittrich

http://staff.washington.edu/corey/fw/

http://www.suse.de/~mha/linux-ip-nat/diplom/