NOTE: THIS DOCUMENT HAS BEEN SUPERSEDED BY
      http://staff.washington.edu/gray/papers/credo.html

=============================================================================
 DRAFT  DRAFT  DRAFT  DRAFT  DRAFT  DRAFT  DRAFT  DRAFT  DRAFT  DRAFT  DRAFT
=============================================================================

                         UW Network Security Credo
                               T. Gray et al
                                 ver7-long
                                 June 2000

     (An abbreviated version of this document may be found at: xxx)

------------------------------------------------------------------------------
INTRODUCTION
------------------------------------------------------------------------------

Unauthorized access to networked computer systems and denial-of-service
attacks against them are growing at alarming rates. The purpose of this
document is to identify general design principles for defending against
these threats.

Computer system security has many overlapping dimensions. While this
document focuses primarily on network-level security issues, it is
impossible to avoid some discussion of other parts of the picture, most
notably application and OS security. Network security is a subset of general
computer system security, but a rather large subset, since virtually all
access to contemporary hosts is via a network connection.

The theoretical goal of network security is to prevent connectivity to those
who would do you harm. The problem is that packets from attackers don't come
with warning labels to make them easy to block, nor is there any totally
reliable topological boundary between sources of attacks and sources of
legitimate network activity.

Threats that are outside the scope of this document include:

 1. Unauthorized access to devices comprising the network infrastructure
 2. Application-level security threats, e.g. email viruses and attachments

The threats we are concerned about include:

 1. Unauthorized access to hosts (both clients and servers) via the net
 2. Unintended disclosure or modification of data sent between hosts
 3. Denial-of-service attacks against connected hosts

The general strategies for protecting against these threats include:

 1. Hardening the target operating systems and their applications
 2. Encrypting sensitive data sent between hosts
 3. Reducing the size of the target by disabling unneeded services
 4. Putting obstacles between the attacker and the target systems

One of the ironies of this subject is that the most important network
security measures need to be implemented in the end-systems (hosts) rather
than within the network infrastructure itself. Reaching consensus on what
additional security measures should be deployed within the network is
probably the most contentious aspect of network security.

------------------------------------------------------------------------------
DEFINITIONS
------------------------------------------------------------------------------

Network security is about balancing the goals of "OPEN", "SECURE", and
"COST-EFFECTIVE", where in this case OPEN means "conveniently accessible to
legitimate users" and SECURE means "inaccessible to attackers".

Types of end-systems, or hosts, include: Clients, Servers, and Attackers.
The following figure shows the relevant categories of host functions and
their relationship to the "network" itself:

        Host 1                                          Host 2
 |------------------|                            |------------------|
 |   Application    |                            |   Application    |
 |------------------|                            |------------------|
 | Operating System |                            | Operating System |
 |------------------|     |----------------|     |------------------|
 |  Network Stack   |-----|    Network     |-----|  Network Stack   |
 |------------------|     | Infrastructure |     |------------------|
                          |----------------|

Security measures can be categorized according to which level of the model
they are implemented in. As used herein, the term "network security" refers
to defensive strategies implemented at the network transport layer of a
system hierarchy. Sometimes the implementation is done on the host, and
sometimes it is part of the network infrastructure itself. So the term
"network security" is broader than the term "security in the network".

The term "network infrastructure" refers to the links, routers, and switches
which allow hosts to communicate with one another. For our purposes, the
"enterprise network" is distinguished from "the Internet" in order to
discuss the merits of topological or "perimeter" defenses which might be
deployed at the enterprise network border, interior, or edge:

          |<----------<  Network Infrastructure  >-------------->|
 |------|      |------------|     |--------------------|      |------|
 | Host |------| "Internet" |-----| Enterprise Network |------| Host |
 |------|      |------------|  ^  |--------------------|  ^   |------|
                               ^            ^              ^
                            Border       Interior         Edge

Physical devices that may embody security measures include:

 -Hosts (End systems)
 -Dedicated security devices (Firewall/VPN Gateway)
 -Network infrastructure devices (Router/Switch)

Sometimes a "host proxy" or "host front-end" device will be inserted between
the end-system (server) and the network in order to implement security
capabilities that, for whatever reason, cannot be deployed in the end-system
itself.

A "firewall" is any device that implements "policy-based packet filtering",
including dedicated security products, routers, or the hosts/host-proxies
themselves.

The term "perimeter defense" is often used to describe measures designed to
keep attackers who are "outside" the perimeter from accessing hosts
"inside". The perimeter itself may coincide with the physical location in
the network topology where the perimeter enforcement is done, or it may be a
more abstract notion of perimeter, such as the logical boundary between two
different sets ("trusted" and "all others") of device addresses.

A perimeter defense defines a "trust zone". The trust zone is assumed not to
contain the very sources of attack that the perimeter is supposed to block.
An important special case is where the perimeter surrounds a single
end-system (probably, but not necessarily, implemented by the end-system
itself), so that the trust zone is minimized.

"Defense in depth" is the time-tested concept of putting multiple barriers
between attackers and their targets. If one obstacle fails, perhaps another
will succeed; or perhaps the different barriers are specialized to different
types of attacks. These obstacles typically represent perimeters encircling
the resources to be protected, which are themselves assumed to be
trustworthy. Hence, some additional comfort may come from establishing
defensive perimeters around groups of resources, in addition to "hardening"
those resources (in our case, hosts, especially servers).
------------------------------------------------------------------------------
DESIGN PRINCIPLES
------------------------------------------------------------------------------

Some system design basics:

 * The role of (server) applications is to provide reliable, secure access
   to business functionality for all authorized clients;
 * The role of (server) host hardware+OS is to provide a reliable, secure
   environment for all applications to run in;
 * The role of networks is to provide fast, reliable connectivity for all
   legitimate purposes;
 * The role of secure network protocols is to provide access to server
   functionality while protecting the data stream from unintended disclosure
   or modification.

These definitions embrace an architectural viewpoint in which
access-providing and access-controlling functions are implemented in
different layers. When one or more of these roles is "done poorly", there is
a natural tendency to compensate by overloading one of the other roles. The
goal is "open access with reasonable security at reasonable cost", where
ease of implementing the right policy is the key.

For more background along these lines, see:
http://www.atstake.com/company_info/acrobat/turning_security_on_head.pdf

------------------------------------------------------------------------------
TAXONOMY
------------------------------------------------------------------------------

In order to examine our defensive strategies in more detail, we break the
general computer security problem into the following parts:

 1. Application security
    a. Application software integrity (fix the buffer overrun bugs!)
    b. Path encryption via secure application protocols (e.g. SSH, SSL)
    c. Isolating critical apps on separate hosts

 2. Host/OS security (same for "host proxy")
    a. OS software integrity (apply the patches!)
    b. User-level access control (AAA, 2-part auth, SecureID)
    c. Block/disable unneeded & insecure services (target minimization)
    d. Path encryption via transport-level tunnels (IPSEC)
    e. Device-level access control (MAC addr, IP addr, DNS name)

 3. Network infrastructure security
    a. Service-blocking perimeter (packet blocking by service type/port #)
    b. Device-ID perimeter (packet blocking by MAC addr, IP addr, DNS name)
    c. Path encryption perimeter (transport-level path encryption: IPSEC)
    d. Path isolation via routers/switches (by MAC, IP addrs, or VLANs)
    e. Path isolation via separate/dedicated infrastructure

 4. Procedural/Operational security
    a. Policies and education on safe/unsafe computing practices
    b. Desktop configuration management
    c. Proactive probing for vulnerabilities
    d. Intrusion detection

------------------------------------------------------------------------------
IMPLEMENTATION CHOICES
------------------------------------------------------------------------------

Should all of the measures described in the taxonomy above be implemented?
Most, yes, but not necessarily all. While security measures tend to be
"additive" (defense in depth), all security measures have costs (capital,
operational, and user-convenience costs), and some approaches offer only
small increases in security. Indeed, some may actually undermine overall
protection goals by promoting a false sense of security, thereby lessening
the urgency of securing the end-systems and/or triggering the creation of
backdoor network connections.
One example of the trade-offs in this area: dedicated security devices may
be capable of more elaborate packet filtering (based on examining the
content of packet flows) than is generally available in end-systems or
network devices (e.g. routers). On the other hand, they represent an
additional management cost, especially when different groups want different
security policies.

Several network-level defensive measures, in particular IP transport-level
encrypted tunneling and packet filtering to block services or to do
device-level access control, can be deployed in either the host or the
network or both. Some network-level measures may also be superfluous if
appropriate application-level and procedural measures are taken (especially
the use of secure access protocols, robust authentication, and proactive
probing).

Other examples of implementation questions include:

 o Given that a well-managed host will already disable unneeded services, is
   it still necessary/desirable to block services at a larger perimeter
   point?

 o If primary services are made available via secure access protocols such
   as SSH/SSL, is transport-level path encryption (IPSEC) still
   necessary/desirable?

 o If key services are made available via secure access protocols (SSH/SSL),
   is device-level access control (e.g. IP-based authentication) still
   necessary/desirable?

------------------------------------------------------------------------------
PATH ENCRYPTION
------------------------------------------------------------------------------

The purpose of "path encryption" is to prevent a sensitive data stream from
being visible to (much less, modified by) an attacker. Two forms of path (or
"stream") encryption are of interest: application-level and transport-level.

Application-level encryption. One of the best ways to protect data in
transit is to use secure protocols for accessing the server application. SSL
(Secure Socket Layer) does this for web-based applications. SSH (Secure
Shell) can be used for many other applications. Apps that use Kerberos for
authentication can also encrypt their data streams using a session key
provided as part of the Kerberos authentication process.

Transport-level encryption. When it is necessary to support legacy
applications that cannot be adapted to secure access protocols (e.g.
SSH/SSL), it may be possible to implement transport-level encryption in the
host's IP stack. IPSEC is the standard for doing this, but it is fairly new
and not yet widely available. While IPSEC has the advantage of allowing any
application to be protected, its management overhead (especially in a
heterogeneous multi-vendor environment) and its limited availability suggest
that use of secure application protocols is the preferable solution.

Path encryption is best when it is "end-to-end"; that way, the data is
protected at all points between the client and server. However,
transport-level encryption is sometimes implemented between gateways (see
the subsequent section on VPNs) to provide "bulk encryption" between
different parts of the Internet. This is a compromise solution, usually
adopted when the end-systems are incapable of supporting IPSEC directly (and
the applications don't support SSL or SSH). Ironically, in these cases the
part of the path that is protected by encryption is often more secure than
the "trusted" enterprise networks at each end.
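
To make the application-level option concrete, here is a minimal sketch (in
Python, using its standard ssl and socket modules) of a client that
negotiates an SSL/TLS session before sending any application data. The host
name and port are placeholders, not references to any real service.

  # Minimal sketch: application-level path encryption with SSL/TLS.
  # The server name and port below are placeholders.
  import socket
  import ssl

  HOST = "server.example.edu"   # hypothetical application server
  PORT = 443

  # A default client context verifies the server's certificate and
  # negotiates an encrypted session before any data is exchanged.
  context = ssl.create_default_context()

  with socket.create_connection((HOST, PORT)) as raw_sock:
      with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
          # Everything sent here is encrypted on the wire; a listener on an
          # intermediate segment sees only ciphertext.
          tls_sock.sendall(b"GET / HTTP/1.0\r\nHost: server.example.edu\r\n\r\n")
          print(tls_sock.recv(200))

The same point holds for SSH and Kerberized applications: the encryption is
negotiated by the application protocol itself, so the data is protected
end-to-end regardless of the path the packets take.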
------------------------------------------------------------------------------
PATH ISOLATION
------------------------------------------------------------------------------

As with path encryption, the purpose of "path isolation" is to prevent data
from being visible to (much less, modified by) unauthorized users. In this
case, the idea is to use switches and routers to control where packets are
sent so that unintended exposure is minimized, or to use dedicated physical
links between trusted hosts. Consider this a "defense in depth" strategy
that should be used in addition to path encryption.

Isolation via switched infrastructure. On shared Ethernet segments, all
stations on the segment see all the traffic. Moving from shared Ethernet
infrastructure to switched Ethernet infrastructure is a Good Thing, since it
limits the visibility of network traffic to just the intended source and
destination end-systems, thus reducing the opportunity for exposure of
sensitive data and authentication credentials. However, switching is not an
adequate substitute for end-to-end encryption of sensitive data: some
switches (depending on configuration) may occasionally "leak" a few packets
to all ports, and even when switches within the centrally managed network
infrastructure are operating as intended, there may be hostile machines on
the same subnet that can fool the switches into forwarding packets to the
wrong place. VLANs are a variation on the switching theme, and offer a
higher degree of isolation than MAC-address-based switching, but use of path
encryption for sensitive data is still recommended.

Isolation via local IP addresses. In certain cases it can be useful to
configure selected hosts with "local" or "non-routable" IP addresses. A
Network Address Translator (NAT) gateway is then needed for those hosts to
communicate beyond their local subnet. Conversely, the hosts are "invisible"
to the outside world, provided network infrastructure routers are correctly
configured to block packets with local addresses. The NAT can be combined
with firewall functionality for additional security.

Isolation via separate physical infrastructure. Although it is possible to
establish dedicated (unshared) links between trusted hosts for the exchange
of sensitive information, this approach does not scale and usually doesn't
offer any compelling advantages over path encryption techniques. It is
mentioned here only for completeness.

------------------------------------------------------------------------------
PERIMETER PROTECTION
------------------------------------------------------------------------------

The term "perimeter defense" is not a synonym for a dedicated firewall
device separating an enterprise network from the Internet --although that is
the dominant form. Perimeter defenses can be subdivided into these
categories:

 1. Service-blocking perimeters, which block services at physical or virtual
    network boundaries.
 2. Device-ID perimeters, which define zones of trust and access based on
    device addresses.
 3. Path-encryption perimeters, which define zones of trust and access based
    on cryptographic associations.

Ultimately it is always about blocking certain packets from proceeding to
their intended destination, but the location of and criteria for the
blocking can vary widely. Possible packet blocking criteria include: service
type (port #), content, source address, destination address, or
cryptographic association.
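
As an illustration of how such criteria combine into "policy-based packet
filtering", the following sketch (Python) applies an ordered rule list to a
packet's source address, destination address, and destination port, with
first-match-wins semantics. The rules, addresses, and port numbers shown are
illustrative examples only, not a recommended policy.

  # Illustrative "policy-based packet filtering": each rule names the
  # criteria discussed above (source, destination, service port), and the
  # first matching rule decides the packet's fate.
  import ipaddress

  RULES = [
      # (action,  source network, destination network, destination port)
      ("deny",  "0.0.0.0/0",   "0.0.0.0/0",    2049),  # block NFS everywhere
      ("allow", "10.0.0.0/8",  "10.1.2.0/24",  None),  # trusted zone -> servers
      ("deny",  "0.0.0.0/0",   "10.1.2.0/24",  None),  # everyone else -> blocked
  ]

  def filter_packet(src_ip, dst_ip, dst_port):
      """Return 'allow' or 'deny' for a packet; first matching rule wins."""
      src = ipaddress.ip_address(src_ip)
      dst = ipaddress.ip_address(dst_ip)
      for action, src_net, dst_net, port in RULES:
          if (src in ipaddress.ip_network(src_net)
                  and dst in ipaddress.ip_network(dst_net)
                  and (port is None or port == dst_port)):
              return action
      return "allow"        # no rule matched; real policies choose a default

  print(filter_packet("192.0.2.7", "10.1.2.5", 2049))  # deny  (NFS blocked)
  print(filter_packet("10.3.4.5",  "10.1.2.5", 22))    # allow (trusted zone)

Whether such rules are enforced by a border router, a dedicated firewall, or
the end-system itself is exactly the placement question taken up below.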
Service-blocking perimeters divide the network into an "inside" and an
"outside", much like a medieval moat, and prohibit selected operations from
"outside". Selected services (packet types) are blocked at particular points
in the network topology that define a perimeter between a trusted portion of
the network and an untrusted portion. The assumption is that the blocked
services are needed within the trusted zone, but not outside the perimeter.
An example might be an insecure remote filesystem service, e.g. NFS. This
approach may also be used to protect critical servers that are unable to
protect themselves.

Device-ID and encryption perimeters divide network-connected hosts into
"allowed" and "not-allowed" categories, regardless of where they may be
attached in the global Internet. From a server's perspective, they provide
access control based on the identity of the client machine (not the user).
Sometimes this device-based access control is used in lieu of user access
control; sometimes it is used in addition, for extra protection.

In the case of an ongoing DOS attack, a different form of perimeter defense
may be needed, namely, blocking incoming packets based on their
*destination* address, in order to protect both the victim system and the
enterprise network infrastructure.

Perimeter defenses seek to achieve two network security goals: blocking
packets by service type effectively "reduces the size of the target", and
doing device-level access control by blocking untrusted source addresses, or
requiring an encrypted tunnel to the perimeter enforcement point, puts
another "obstacle between the attacker and the target systems".

------------------------------------------------------------------------------
THE PERIMETER PARADOX
------------------------------------------------------------------------------

The defensive perimeters described above can be large, including an entire
enterprise, or small, encircling a small set of sensitive servers. In the
limit, the perimeter encircles a single host. However, as the size of the
perimeter increases, we encounter the "perimeter defense paradox".

The rationale for network perimeter defenses is that not all end-systems are
secure or secureable, and having one or more barriers where attacks can be
stopped before they get to the insecure systems is clearly a Good Thing...
unless those same barriers keep legitimate users from gaining access to the
services they need, or lessen the urgency of securing the end-systems. The
idea is that it is easier to protect a group of insecure systems by
establishing a defensive perimeter around them than it is to make those
systems secure. But what if those insecure systems have already been
penetrated, or are somehow penetrated in spite of the perimeter defense?

So the paradox is that a perimeter defense only adds value when some number
of the end-systems are insecure, but insecure end-systems constitute
potential (if not probable) holes through the perimeter defense. Moreover,
the larger the perimeter, i.e. the more systems within the perimeter (behind
the firewall), the larger the zone of trust, and the higher the probability
that one of the machines inside the zone of trust has been compromised,
thereby defeating the firewall.

Even though it only takes one compromised system to cause Big Trouble, it is
natural to want to reduce the number of successful attacks against the
insecure systems by deploying a large-perimeter defense. But how much solace
should this approach bring? Unfortunately, not much.
Given that a network-topology-based perimeter will rarely encompass all
legitimate users, there must be provisions for secure "remote" access by
legitimate users. The question then becomes: is the "inside" of the
perimeter sufficiently more trustworthy than the "outside" to warrant a more
relaxed form of access?

We conclude that small zones of trust, i.e. small service-boundary
perimeters, are better than big ones. A corollary of this is that "all
access is remote access", unless one is willing to believe that insecure
access methods can be tolerated within some parts of a network. The "small
perimeter" strategy is also consistent with the observation that the larger
the perimeter, the more difficult it is to get consensus on which services
are not needed outside the perimeter.

Does this mean that, for example, blocking services at the enterprise border
is a bad idea? Not necessarily...

------------------------------------------------------------------------------
DENIAL OF SERVICE ATTACKS
------------------------------------------------------------------------------

An exception to the principle of moving protection boundaries to the edge of
the network concerns defense against denial-of-service (DOS) attacks, which
ideally call for protection boundaries to be as close to the attacker as
possible, in order to protect the intervening links from saturation.
Unfortunately, this is very hard to do, and usually involves blocking all
traffic to the victim host --constituting another form of DOS attack against
that host. However, if consensus can be reached on blocking unneeded ports
at the enterprise border routers, this can reduce the number of "entry
points" for DOS attackers. During a DOS attack, it can be important to block
incoming packets destined for the victim system as close to the enterprise
border as possible, or better, at an upstream provider.

------------------------------------------------------------------------------
SERVICE-BLOCKING PERIMETERS
------------------------------------------------------------------------------

A well-managed end-system, especially a sensitive server, will have unneeded
and insecure services disabled. Given that fact, how important is it to
block such services elsewhere in the network? And if it is important, where
should it be done? (i.e. how large should the perimeter be?)

To answer these questions, we observe:

 1. If the critical hosts can effectively disable unneeded or insecure
    services themselves, it is not necessary to block them elsewhere.
 2. In practice, some number of hosts will be insecure. They will not be
    configured to do the necessary service blocking, or may need to use
    insecure protocols for certain purposes.
 3. Protection against DOS attacks calls for minimizing access as close to
    the attacking source as possible.
 4. Getting consensus on which ports/services are unneeded (or even which
    ones are insecure) is difficult, and becomes more so as the size of the
    perimeter grows.
 5. The more restrictive the border is, the more likely it is that a
    department or individual will create a VPN tunnel through the enterprise
    firewall to export one or more of the restricted services, or otherwise
    implement "back door" connections around the firewall.

Therefore:

 1. Always try to disable unneeded and insecure services on the host.
 2. If some security-sensitive servers *cannot* be so configured, then
    cluster them within a very small perimeter (e.g. one rack) with suitable
    packet blocking implemented at that perimeter, e.g.
    a firewall between the rest of the network infrastructure and a switch
    connecting those insecureable hosts.
 3. Try to make sensitive servers less vulnerable to client compromise by
    using robust authentication methods (e.g. SecureID cards).
 4. At the border of the enterprise network, routers or firewalls should be
    configured to block services that are generally agreed to be insecure or
    unnecessary "outside" the perimeter. All network routers should block
    packets with spoofed source addresses.

------------------------------------------------------------------------------
DEVICE-ID PERIMETERS
------------------------------------------------------------------------------

A device-ID perimeter limits access to a host (or a trusted portion of the
network topology) by blocking packets from hosts whose IP addresses are
outside of a trusted range or set.

If implemented in a security device within the network infrastructure, this
approach extends the set of allowed clients beyond those within a trusted
zone (inside the perimeter), thus defining a "logical" perimeter outside of
which no access to the specified services is permitted.

If implemented by the end-system (usually a server) instead of a security
device within the network infrastructure, there is no trust zone to be
extended, and this approach simply becomes a method for controlling access
to the host by device address. However, one could say that the set of
accepted clients constitutes a logical (rather than topological) "host
access perimeter".

Key question: is restricting access to (blocking by) trusted source IP
address (device-level access control) a Good Idea, and if so, should it be
done in the host, in the network, or both?

The argument for device access control to a server (in particular, access
restrictions based on the IP address of the client) is that it is necessary
when insecure application protocols must be used; even when secure
application access protocols are used, it is a useful component of a
"defense in depth" strategy, and reduces exposure to attacks from the
Internet at large. The argument against device-based access control is that
a) manual configuration of client IP addresses in servers does not scale,
and b) it is not necessary if other needed host security measures have been
implemented.

Our position is that manually-configured IP-based access controls may make
sense when a small and static set of trustable hosts is involved, but they
obviously do not apply when a service has a large, dynamic user population.
While VPNs (discussed later) are seen as a more scalable way of implementing
IP-based access controls for a broad population of clients, they also
introduce considerable expense and user inconvenience while providing a
dubious increase in security in comparison to other strategies.

To serve a geographically diverse and growing user base, *some* host has to
be accessible to those users. We claim that this host need not be a special,
expensive, dedicated, high-support device *IF* front-end services are put on
appropriately dedicated and monitored boxes, and back-end services are on
other boxes that can be protected as required. In particular, if critical
services are exported only via secure protocols such as SSL/SSH or Kerberos,
and only those two or three ports are accessible to the world, IP-based
access controls on those ports offer very little additional security, if
any. (One must weigh the risk of bugs in the SSL/SSH protocol daemons
against the risk of bugs in the VPN gateways...
and the VPN gateways are not the sure-fire winner in that debate.)

------------------------------------------------------------------------------
ENCRYPTION PERIMETERS
------------------------------------------------------------------------------

Encryption perimeters result from limiting access to a host (or a trusted
portion of the network topology) via transport-level encryption (e.g.
IPSEC). Only devices that share the appropriate cryptographic key
information are permitted to communicate with the server or trusted network
zone. As with device-ID perimeters, this approach defines a "logical"
perimeter separating trusted and untrusted hosts, and again it can be
implemented by the end-system (server) to provide a "host access perimeter"
or by another device at a topological perimeter defense point.

------------------------------------------------------------------------------
FIREWALLS
------------------------------------------------------------------------------

When someone speaks of "perimeter defenses" in network security, they are
usually talking about firewalls, and often about a dedicated piece of
security hardware separating the enterprise network from the rest of the
Internet. However, we define "firewall" as any device that implements
"policy-based packet filtering". Firewalls can be dedicated security
devices, or routers, or capabilities of the end-system (e.g. IP Chains in
Linux, or "personal firewalls" for Windows).

As previously noted, policy-based packet filtering is an important tool in
overall system security; the only debate concerns *what* to filter, and
*where* to filter it. Filtering that is acceptable to all users of a network
infrastructure (e.g. blocking packets with spoofed source addresses) should
be deployed throughout the network infrastructure via router access lists.
More restrictive departmental or application-specific filters need to be
pushed out to the edges of the network, as close to the end-systems as
possible (ideally, implemented *by* the end-systems).

While most firewalls are intended to protect large numbers of systems by
providing a perimeter defense at the borders of a network, it is
increasingly difficult to identify topological network boundaries that
correspond to organizational boundaries. Moreover, all we can really say
about a "larger perimeter" defense is that there are probably more "bad
guys" on the outside than on the inside, and probably more legitimate users
on the inside than on the outside.

Unfortunately, there is a temptation to consider perimeter defense firewalls
as an economical alternative to properly protecting end-systems on the
"inside". This is important: firewalls implementing a "perimeter defense"
are NOT an ALTERNATIVE to end-system security, because firewalls CANNOT
protect against many important threats. Obviously a perimeter defense cannot
protect against threats originating from within the perimeter, nor can it
protect against threats that use access paths which may need to be available
to those "outside" (e.g. email, web, remote terminal). Therefore, a
perimeter defense firewall may be considered an *additional* level of
security, but if the consequence of deploying such a perimeter defense is to
lessen the urgency of keeping end-systems secure, or to cause the creation
of backdoor connections, then the firewall strategy will actually UNDERMINE
overall security.
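
One example of filtering that is acceptable to all users is the blocking of
spoofed source addresses mentioned above. The sketch below (Python) shows
the underlying check: a packet arriving from "outside" that claims an
"inside" source address should be dropped. The enterprise address block used
here is an illustrative placeholder.

  # Sketch of the anti-spoofing check: an externally-arriving packet that
  # claims an internal source address is treated as spoofed.
  import ipaddress

  INSIDE_NETS = [ipaddress.ip_network("10.0.0.0/8")]   # hypothetical prefix

  def spoofed(src_ip, arrived_from_outside):
      """True if a packet from the outside claims an inside source address."""
      src = ipaddress.ip_address(src_ip)
      return arrived_from_outside and any(src in net for net in INSIDE_NETS)

  print(spoofed("10.9.9.9",  True))   # True  -> drop at the border
  print(spoofed("192.0.2.1", True))   # False -> pass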
While firewalling generally refers to blocking packets within the network
infrastructure to provide perimeter protection, "edge firewalls" and
"logical firewalls" are perhaps even more important.

Edge firewalls may make sense as a tool to compensate for the lack of
security in small numbers of end-systems which are deemed critical and which
cannot be secured, for either technical or non-technical reasons. In some
cases it may even be sensible to have a firewall protecting a single
sensitive host that cannot be configured securely.

As an alternative to conventional firewalls, which are interposed somewhere
in the middle of the communication infrastructure, one can also deploy
"logical firewalls". These are dedicated packet filtering devices which do
not have to be placed "in the middle of" the network because they
communicate with their clients only via encrypted tunnels, or via local
non-routable IP addresses. The advantage of such logical firewalls is that
departments can deploy them on their own, and they do not interfere with
management of the core network infrastructure the way conventional firewalls
do.

Since no firewall can protect against all attacks (including the attacks
most likely to occur), it is imperative that site administrators with
sensitive data not be lulled into a false sense of security by them.

------------------------------------------------------------------------------
FIREWALL TRADEOFFS
------------------------------------------------------------------------------

A perimeter policy-enforcement point has to have policies that are
consistent with all entities that it is protecting. So traditional firewalls
intended to provide "security by moat" often fail because the policies for
the whole protected net are too diverse, with the consequence that "back
door" connections --circumventing the firewall-- begin to appear in the
network.

Application functionality is the ultimate policy-driver. Mixing apps with
different security profiles on the same host is a losing proposition, and
mixing hosts with different security profiles on the same subnet is also a
losing proposition. The key to survival in network management is reducing
exceptions and special cases. Accordingly, we categorize firewall strategies
as follows:

Difficult/Impossible to support:
 -Specialized router access lists for individual departments, applications,
  or, worst of all, individual client IP addresses.
 -Customer equipment interposed within the centrally managed network
  infrastructure.

Possible:
 -Per-host firewalls managed by the host owner.
 -A special machine-room area for insecureable and/or extra-sensitive
  servers on a specially protected subnet.
 -Personal firewall software (oriented to personal Windows workstations).
 -Logical firewalls.

Possible, if appropriate consensus or policy can be achieved:
 -More restrictive access lists at borders.
 -More restrictive access lists in *all* routers.

------------------------------------------------------------------------------
VPNs
------------------------------------------------------------------------------

VPNs, or "Virtual Private Networks", provide transport-level (rather than
application-level) encrypted tunnels for all network traffic between trusted
points in a network, and thus create a logical perimeter between trusted and
untrusted hosts or network zones. VPNs may also play a role in controlling
access via IP addresses. VPNs may be implemented end-to-end, or between VPN
servers.

Reasons to consider a VPN include:
 1. A particular application is unable to use secure protocols, so
    transport-level encrypted tunnels are needed to protect data.
 2. Enterprise IP address ranges must be exported to (remote) clients in
    order to permit access to restricted national networks.
 3. A server application requires authorized clients to use a specific IP
    address.
 4. Users/departments need to tunnel through a topological perimeter defense
    (firewall) in order to export a restricted service to their clients.

However:

 1. VPNs add operational cost and user inconvenience.
 2. They are most effective if end-to-end rather than gateway-to-gateway.
 3. Authentication by IP address is an idea whose time has come and gone, so
    using VPNs as a temporary workaround is a questionable decision.

VPNs may make sense if the applications needed cannot be secured. That is,
if it is not possible to use applications that communicate via SSH, SSL, or
K5, then a VPN may be useful. However, because VPNs can be a substantial
source of operational and management overhead, secure application protocols
are preferable.

Although some hold that managing access to critical resources should be done
via packet filtering on IP addresses even when secure application protocols
are used (which in turn leads to an argument that VPNs are necessary for
remote access), we do not consider this a reasonable position since a) the
incremental benefit is small if other necessary precautions have been taken,
b) it doesn't help if the attack comes from a trusted subnet, c) the
strategy does not scale, and d) the management cost is high and unending,
since network topologies are dynamic.

Moreover, like perimeter defense firewalls, VPNs can give a false sense of
security. End-to-end VPNs avoid this problem, but VPN gateways leave the
servers vulnerable to certain classes of attacks. That is, to the extent
that VPNs are implemented by special gateway devices (rather than IPSEC
tunnels all the way to the server), they leave open the issue of how the
still-not-secure endpoints (servers) connect to the secure (VPN gateway)
device; hence they are at best an interim point on the way to having secure
hosts.

Finally, if circumstances dictate a VPN solution for remote users, the
question must be asked: shouldn't the same level of secure access be
provided to local users? Said differently, how much are you willing to bet
that the local network is significantly more secure than the rest of the
net, even if perimeter defenses are in place?

------------------------------------------------------------------------------
HOST SECURITY
------------------------------------------------------------------------------

Let's consider application and OS security in this section. Recall our
taxonomy of defensive measures:

 1. Application security
    a. Application software integrity (fix the buffer overrun bugs!)
    b. Path encryption via secure application protocols (e.g. SSH, SSL)
    c. Isolating critical apps on separate hosts

 2. Host/OS security (same for "host proxy")
    a. OS software integrity (apply the patches!)
    b. User-level access control (AAA, 2-part auth, SecureID)
    c. Block/disable unneeded & insecure services (target minimization)
    d. Path encryption via transport-level tunnels (IPSEC)
    e. Device-level access control (MAC addr, IP addr, DNS name)

All of these measures enhance security, although we would claim that the
cost-benefit of the last two is questionable if the other steps have been
taken.
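
As an illustration of item 2c above (target minimization), the following
Linux-specific sketch (Python) lists the TCP ports a host is actually
listening on by reading /proc/net/tcp, and flags anything not on a short
allow-list. The allow-list shown (SSH and SSL only) is an example, not a
policy recommendation.

  # Linux-specific sketch: enumerate listening TCP ports and flag anything
  # not on the allow-list, as a quick check that "unneeded services" really
  # are disabled.
  ALLOWED_PORTS = {22, 443}          # example: SSH and SSL only

  def listening_tcp_ports(proc_file="/proc/net/tcp"):
      ports = set()
      with open(proc_file) as f:
          next(f)                            # skip the header line
          for line in f:
              fields = line.split()
              local_addr, state = fields[1], fields[3]
              if state == "0A":              # 0A = TCP LISTEN
                  ports.add(int(local_addr.split(":")[1], 16))
      return ports

  for port in sorted(listening_tcp_ports()):
      note = "ok" if port in ALLOWED_PORTS else "UNEXPECTED -- disable or justify"
      print("tcp/%d: %s" % (port, note))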
The good news is that "The majority of successful attacks on computer
systems via the Internet can be traced to exploitation of one of a small
number of security flaws." (From "How To Eliminate The Ten Most Critical
Internet Security Threats" at http://www.sans.org/topten.htm )

The bad news is that there are a zillion hosts on the UW network that are
essentially unmanaged. Computer ownership is not for the faint-hearted, and
it comes with on-going responsibilities. An insecure host is not just a
threat to its owner, but also to others who share network or server
resources with the vulnerable system.

Client and server end-systems should be considered separately because
different security measures apply to each. In some ways clients are more
difficult to secure than servers, because virtually all clients run mail and
web clients which can --either via bugs or user deception-- introduce
dangerous code onto the client. If a compromised client allows an attacker
to successfully log in to a server, this may be as bad as a server
compromise; however, if apps accessing sensitive data are properly protected
with two-part authentication, including one-time auth data (e.g. SecureID
cards), then compromise of clients is in fact less serious than a server
compromise.

In contrast to client systems, a sensitive application server should have
most services disabled, and allow access only to a specific application
port. And critical/sensitive servers should avoid exporting information via
insecure protocols such as NFS.

Host firewalling (i.e. packet filtering on the end-system itself using, for
example, TCP wrappers and IP-chains) can provide significant security
benefit when the set of allowed remote hosts is small. Use of both TCP
wrappers and kernel filtering (e.g. IP-chains) is recommended, as they
provide complementary services. This approach is consistent with our
principle of moving protection boundaries as close to the edge of the
network as possible, and is particularly valuable for controlling access to
critical functions needed by system administrators that require use of less
secure access protocols. However, as noted previously, it is much less
attractive as a strategy for controlling access to primary secure services
such as SSL/SSH, whose audience is expected to be widespread and growing,
and whose integrity is crucial regardless of how many or few hosts are
allowed access. A tabular presentation of this approach follows:

----------------------------|---------------------------------------|
PORT/APPLICATION>>>         |   SSH/SSL   | "Admin Ports" |  Other  |
----------------------------|-------------|---------------|---------|
INCOMING WRAPPER STRATEGY:  |             |               |         |
----------------------------|-------------|---------------|---------|
Block completely            |             |       X       |    X    |
----------------------------|-------------|-------or------|---------|
Limit access via MAC+IP+DNS |             |               |         |
----------------------------|-------------|---------------|---------|
Limit access via IP+DNS     |             |       X       |         |
----------------------------|-------------|-------or------|---------|
Limit access via DNS        |             |       X       |         |
----------------------------|-------------|---------------|---------|
Leave open                  |      X      |               |         |
----------------------------|-------------|---------------|---------|

Implicit in the above discussion is the assumption that legacy servers that
export services via insecureable protocols ought to be front-ended by
bastion host proxies, or servers suitable for network use.
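
The table above can also be read as data: each listening port is assigned
one of the strategies, and an incoming connection is checked against it. The
sketch below (Python) shows one way to express that. The port numbers and
networks are illustrative, and only the IP-based check is modeled (the DNS
and MAC variants are omitted for brevity).

  # Sketch of the per-port "wrapper strategy" table expressed as data.
  import ipaddress

  POLICY = {
      22:   {"mode": "open"},                               # SSH: leave open
      443:  {"mode": "open"},                               # SSL: leave open
      5901: {"mode": "limit_ip", "nets": ["10.1.0.0/16"]},  # "admin port"
  }
  DEFAULT = {"mode": "block"}                               # everything else

  def admit(dst_port, client_ip):
      policy = POLICY.get(dst_port, DEFAULT)
      if policy["mode"] == "open":
          return True
      if policy["mode"] == "limit_ip":
          ip = ipaddress.ip_address(client_ip)
          return any(ip in ipaddress.ip_network(n) for n in policy["nets"])
      return False                                          # "block"

  print(admit(22,   "198.51.100.9"))   # True  (SSH left open)
  print(admit(5901, "198.51.100.9"))   # False (admin port, untrusted source)
  print(admit(5901, "10.1.2.3"))       # True  (admin port, trusted source)
  print(admit(2049, "10.1.2.3"))       # False (unlisted port blocked)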
Trying to control access to insecure services via IP address is reasonable
for a small administrative group and/or front-end network servers, but
obviously a key goal is to have primary services exported via secure
application protocols in order to avoid the significant cost and performance
impact of VPNs.

Separation of services and minimizing the number of visible services is
another key security strategy, especially if the services are not encrypted.
If servers are dedicated to a single service (or a small number of
services), with all other services disabled, the "entry points" into any
given system are minimized, and overall security is enhanced. This strategy
also allows servers that must use insecure protocols to be hidden behind
bastion hosts that export the service in a less vulnerable way.

------------------------------------------------------------------------------
PROCEDURAL/OPERATIONAL SECURITY MEASURES
------------------------------------------------------------------------------

Policies and education on safe/unsafe computing practices. See
http://www.sans.org/mistakes.htm for a number of good ideas. Other
suggestions:

 o Develop a strong network connection policy (e.g. "your machine will be
   disconnected if it fails minimum security requirements"), backed by
   proactive vulnerability probing.
 o Invest in a strong desktop configuration management plan (similar to
   C&C's Nebula system).
 o For users of sensitive servers, encourage use of desktop operating
   systems that are secureable (e.g. Win2K rather than Win98).
 o Consider use of thin clients (e.g. WebPads) in certain circumstances.

Intrusion detection. A debate continues concerning the value of intrusion
detection systems. As network capacity and usage continue to escalate, it
becomes increasingly difficult to believe that watching all network traffic
for alarming patterns will prove to be a viable long-term solution. However,
it may be reasonable for specific servers.

Proactive probing for vulnerabilities. A more promising strategy is likely
to be a combination of traffic threshold triggers and the ability to examine
specific traffic flows when an anomaly has been detected.

------------------------------------------------------------------------------
PRIORITIES
------------------------------------------------------------------------------

We claim that while there are some security measures that can and should be
implemented within the enterprise network infrastructure, by far the highest
priority should be to secure the servers and their applications, especially
by patching known vulnerabilities, blocking unneeded/insecure services,
using SSL/SSH secure access protocols, and using SecureID-like
authentication. If these measures are taken, the importance of aggressive
perimeter defense strategies within the enterprise network infrastructure is
greatly diminished.

Indeed, infrastructure-level strategies such as traditional firewalling at
the enterprise border may be ill-advised, if they reduce the urgency of
taking needed host security measures and/or make unwarranted assumptions
about the trustworthiness of the enterprise network "interior". In a
contemporary Internet commerce environment where network topologies rarely
map to organizational boundaries, it is essential to move the protection
boundary as close to the edges of the network as possible. This is in
contrast to traditional firewall dogma, which holds that you can build a
moat and keep all the good guys on one side and the bad guys on the other.
This premise is demonstrably false in most modern business scenarios.
Indeed, it takes a huge leap of faith to believe that most internal
enterprise networks are trustworthy; hence, if one is to build a moat, it
had best be a very small one: the smaller the network "trust zone", the
better. A growing number of analysts (e.g. Burton Group, @stake) are talking
about things like "virtual enterprise networks", "getting beyond the
firewall", and "blurring of extranet and intranet". So an approach that
provides seamless external access to business functionality, while still
demonstrating due diligence in applying resource-appropriate controls, is in
fact cutting edge. We assert that a well-managed host can be on an open
network and also be secure... notwithstanding the many examples of insecure
hosts.

While espousing the importance of end-system security and small trust zones,
we also agree that certain defensive measures must be implemented within the
network, such as blocking packets with spoofed IP source addresses and
blocking ports that correspond to inherently insecure protocols.

With the exceptions previously noted, we reject the premise that controlling
access to network resources by IP address is a reasonable general strategy.
It should only be used in very limited situations, because it increases
management overhead and does not scale. (An exception is packet filtering by
IP address during a DoS attack.)

------------------------------------------------------------------------------
REALITY CHECK
------------------------------------------------------------------------------

We accept the premise that there are many hosts on our network that are
essentially unmanaged and therefore vulnerable to attack and compromise. At
first blush, this observation argues for a perimeter defense that can
protect many such hosts "at once" --and therefore more cost-effectively than
securing each individual end-system. Whether such a perimeter defense
strategy makes sense depends upon the nature of the threats one must defend
against, and how important it is to succeed. The problem is that there are
no known perimeter defenses that are completely effective --short of
"pulling the plug"-- and so it is dangerous to believe that a perimeter
defense can SUBSTITUTE for application and host security. There is no silver
bullet.

There are also some hosts which cannot be properly managed/secured, perhaps
due to bizarre regulatory constraints. These hosts need to be protected by
devices that can be managed... i.e. firewalls. However, such hosts (if they
can share a common security policy) should if at all possible be clustered
in one place, to allow special protections on an exceptional basis without
adversely affecting the general network infrastructure. If an unmanageable
host must be on an arbitrary subnet, it may be a candidate for its own
per-host firewall.

Alternatively, if an organization involved with sensitive information finds
itself with a large number of client machines which are deemed "insecure" or
"insecureable", perhaps the wrong kind of end-system has been chosen. For
example, a centrally managed desktop configuration system (a la Nebula), or
even a contemporary "thin client" device (one that can act only as an SSH
secure terminal client or SSL-enabled web browser) might be worth
considering.
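
Several of the measures above (a connection policy backed by probing, and
the reality check on unmanaged hosts) assume the ability to probe hosts for
services they should not be offering. A minimal sketch of such a probe
(Python, a plain TCP connect scan) follows. The target address and port list
are placeholders, and probing should of course be limited to machines one is
authorized to test.

  # Minimal proactive-probing sketch: a TCP connect scan of a short list of
  # ports on a single host. Target and ports are placeholders.
  import socket

  TARGET = "192.0.2.10"                  # hypothetical host to audit
  PORTS = [21, 23, 25, 80, 111, 139, 443, 2049]

  def port_open(host, port, timeout=1.0):
      try:
          with socket.create_connection((host, port), timeout=timeout):
              return True
      except OSError:
          return False

  for port in PORTS:
      if port_open(TARGET, port):
          print("%s tcp/%d is answering -- is it supposed to be?" % (TARGET, port))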
------------------------------------------------------------------------------
KEY ACTION ITEMS
------------------------------------------------------------------------------

Reviewing key principles:

 P1: Host OS and Application software should be trustworthy.
 P2: Hosts should enforce strong user-authentication using one-time keys.
 P3: Unneeded services should be blocked or disabled close to the host.
 P4: Sensitive information should be encrypted while traversing the net.
 P5: Push security perimeters to the edge; make the trust zones small.
 P6: Keep the net infrastructure as simple as possible; avoid special cases.

We suggest the following priority actions for protecting sensitive info:

 o Move aggressively to use of SSL/SSH for accessing all key services.
 o Deploy robust authentication, e.g. SecureID cards.
 o Cluster critical servers on two firewall-protected subnets.
 o Develop consensus on blocking unneeded or insecure ports at borders,
   while recognizing the limitations of "large" perimeter defenses.
 o Develop policies and procedures to improve security of desktop systems,
   e.g. proactive probing, centralized software configuration.

------------------------------------------------------------------------------
ACKNOWLEDGEMENTS
------------------------------------------------------------------------------

The words in this document (and therefore responsibility for their inability
to communicate clearly) are primarily my own, with significant contributions
from Bob Morgan. However, the concepts are a result of hours of debate and
discussion with my colleagues in C&C and elsewhere. Principal partners in
crime, without any claim that they agree with everything stated, include:
Dave Dittrich, Michael Hornung, Eliot Lim, RL 'Bob' Morgan, Aaron Racine,
Tom Remmers, David Richardson, Corey Satten, Lori Stevens, and Sean Vaughan.

------------------------------------------------------------------------------
REFERENCES
------------------------------------------------------------------------------

http://www.sans.org/
http://staff.washington.edu/dittrich

=============================================================================
=============================================================================