NOTE: THIS DOCUMENT HAS BEEN SUPERSEDED BY
      http://staff.washington.edu/gray/papers/credo.html

=============================================================================
 DRAFT  DRAFT  DRAFT  DRAFT  DRAFT  DRAFT  DRAFT  DRAFT  DRAFT  DRAFT  DRAFT
=============================================================================

                        UW Network Security Credo
                              T. Gray et al
                        version 7-SHORT June 2000

(This is an abbreviated version of the full UW Network Security Credo
document, which can be found at xxx)

Executive Summary

We recommend the following actions to improve system security:

 o Move aggressively to use of SSL/SSH for accessing all key services.
 o Deploy robust authentication, e.g. SecureID cards.
 o Cluster critical servers on one or two firewall-protected subnets.
 o Develop consensus on blocking unneeded or insecure ports at borders,
   while recognizing the limitations of "large" perimeter defenses.
 o Develop policies and procedures to improve security of desktop systems,
   e.g. proactive probing, centralized software configuration.
 o Eschew "checklist" security techniques that do not add significant value.

Introduction

The theoretical goal in network security is to prevent connectivity to
those who would do you harm. The problem is that packets from attackers
don't come with warning labels to make them easy to block, nor is there
any totally reliable topological boundary between sources of attacks and
sources of legitimate network activity.

Threats that are outside the scope of this document include:

 1. Unauthorized access to devices comprising the network infrastructure
 2. Application-level security threats, e.g. email viruses, attachments

The threats we are concerned about include:

 1. Unauthorized access to hosts (both clients and servers) via the net
 2. Unintended disclosure or modification of data sent between hosts
 3. Denial of service attacks against connected hosts

The general strategies for protecting against these threats include:

 1. Hardening the target operating systems and their applications
 2. Encrypting sensitive data sent between hosts
 3. Reducing the size of the target by disabling unneeded services
 4. Putting obstacles between the attacker and the target systems

Taxonomy and Priorities

We break the general computer security problem into the following parts,
with those that we consider to be high-priority/high-payoff marked with
an asterisk:

 1. Application security
  * -Application software integrity (fix the buffer overrun bugs!)
  * -Path encryption via secure application protocols (e.g. SSH, SSL;
      see the sketch below)
  * -Isolating critical apps on separate hosts

 2. Host/OS security (Same for "host proxy")
  * -OS software integrity (apply the patches!)
  * -Strong user-level access control (2-part auth, SecureID)
  * -Block/disable unneeded & insecure services (Target minimization)
    -Path encryption via transport-level tunnels (IPSEC)
    -Device-level access control (MAC addr, IP addr, DNS name)

 3. Network infrastructure security
    -Service-blocking perimeter (packet blocking by service type/port #)
    -Device-ID perimeter (packet blocking by MAC addr, IP addr, DNS name)
    -Path encryption perimeter (transport-level path encryption: IPSEC)
  * -Path isolation via routers/switches (by MAC, IP addrs or VLANs)
    -Path isolation via separate/dedicated infrastructure

 4. Procedural/Operational security
  * -Policies and education on safe/unsafe computing practices
    -Desktop configuration management
  * -Proactive probing for vulnerabilities
    -Intrusion detection
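To illustrate the "path encryption" item under Application security, the
following sketch (in present-day Python, which post-dates this document)
wraps an ordinary TCP connection in SSL/TLS so that data in transit
cannot be read or modified, and the network in between need never be
trusted. The host name and port are hypothetical; this is a minimal
sketch of the idea, not a recommended implementation:

    import socket
    import ssl

    # Hypothetical server; any SSL/TLS-speaking service would do.
    HOST, PORT = "mail.example.edu", 993   # IMAP over SSL

    context = ssl.create_default_context()  # verifies the server certificate
    with socket.create_connection((HOST, PORT)) as raw:
        # Everything sent over "tls" is encrypted at the application layer.
        with context.wrap_socket(raw, server_hostname=HOST) as tls:
            print("negotiated", tls.version())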
Translating these priorities into actions

Servers.  Critical servers *must* be operated securely. This means
applying security-relevant patches to the OS and applications, disabling
unneeded and/or insecure services, requiring user authentication that is
robust even if a client machine has been compromised, and using secure
application protocols (e.g. SSL, SSH) wherever possible. If "back-end"
servers are used, they should be accessible only from "front-end"
servers, not directly by users. If, for whatever reason, there are some
critical servers that cannot be managed securely, they should be
clustered behind a "minimum perimeter" firewall in a single location (or
two for redundancy).

Clients.  Client/desktop security is harder than server security because
there are more clients and they must often run a full spectrum of
services. However, a client compromise can put all of the data on the
server at risk; hence the urgency of requiring "one-time"
user-authentication techniques that reduce this risk, e.g. SecureID
cards. (The one-time credentials make compromise of the client system
less serious, because the captured credentials cannot be re-used.) Even
so, desktop security cannot be ignored. If it is impossible to deploy a
centralized desktop software management solution, e.g. Nebula, then use
of WebPads or other contemporary "thin clients" should perhaps be
considered.

Network infrastructure security.  All network routers should block bogus
source addresses. Switched subnets should be deployed wherever possible.
When consensus can be achieved on blocking unneeded and insecure ports
at enterprise boundaries, this can help security, with the following
caveat: if the border is too tight, back-door network connections *will*
be created, thereby reducing overall security. Similarly, if such
firewalling at the border is considered a substitute for end-system
security, it would be better not to have the firewalling.
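To make "block bogus source addresses" concrete, the following Python
sketch shows the ingress/egress filtering decision a border router should
apply. The campus network number is made up for illustration, and a real
router implements this in its access lists, not in application code:

    import ipaddress

    # Hypothetical campus address block.
    CAMPUS = ipaddress.ip_network("192.0.2.0/24")

    def permit(source, inbound):
        """Return True if a packet with this source address is plausible."""
        src = ipaddress.ip_address(source)
        if inbound:
            # A packet arriving from outside must not claim a campus source.
            return src not in CAMPUS
        # A packet leaving campus must carry a campus source address.
        return src in CAMPUS

    assert not permit("192.0.2.7", inbound=True)      # spoofed -- drop
    assert permit("198.51.100.9", inbound=True)       # legitimate outsider
    assert not permit("198.51.100.9", inbound=False)  # outbound spoof -- drop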
Procedural/operational security.  Establish strong policies that include
provisions for disconnecting hosts from the network if they do not meet
basic security requirements. Back this up with proactive probing for
vulnerabilities, and with a support structure to do something about them
when they are found. Provide a way to monitor and/or block specific
packet flows during a Denial of Service attack.

Principles behind the priorities

* The primary role of networks and secure network protocols is to provide
  reliable connectivity among hosts without unintended disclosure or
  modification of data. Proper access control and the use of appropriate
  access protocols are the responsibility of end systems, and there are
  no silver-bullet network solutions to obviate this responsibility.

* Security is maximized when the network is trusted least. That is, one
  should try to make the primary "protection perimeter" as small as
  possible (nearest the network "edge", preferably implemented in the
  end-system itself), thereby reducing the opportunity for hostile access
  *between* the security/trust boundary and the end-system.

* Security is likewise maximized if all access is considered "remote"
  access. That is, whatever mechanisms are deemed necessary and
  sufficient for "remote" users to access sensitive resources should be
  applied to "local" users as well. To do otherwise implies an implicit
  faith in the security of the local network that is generally
  unwarranted.

* The concept of building moats around entire enterprises has largely
  been overtaken by events. There will always be some legitimate users
  outside the moat, and some compromised system or attacker inside the
  moat. Because network-level defenses cannot protect against important
  classes of attacks, they are not an *alternative* to application and OS
  security. They may be an adjunct to them, but trust in network
  firewalls often reduces the incentive to fix end-system security
  problems, and that results in reduced overall security. Hence the
  importance of treating the entire network as "untrusted".

* Some forms of packet filtering within the network infrastructure are
  useful, e.g. blocking spoofed source addresses, and reducing the set of
  ports that can be used for DOS attacks can help where consensus can be
  reached on which ports are unneeded. However, managing "special cases"
  within a network infrastructure, e.g. department- or
  application-specific router access lists, is extremely problematic and
  expensive in terms of network monitoring and management, policy
  database management, customer support, and mean time to repair.

Roads not taken

We are concerned about "checklist" security techniques that can add
significant cost without improving security very much. A perfect example
is the call for encrypted tunnels (VPNs) between the borders of
institutions, which ignores the fact that most security risks come from
within institutional borders (at one end or the other), not from within
the network core, and that use of secure application protocols (e.g.
SSH/SSL) is in every respect a superior strategy.

VPNs may make sense *if* the applications needed cannot be secured, and
if the VPNs are end-to-end. That is, if it is not possible to use
applications that communicate via SSH, SSL, or K5, then a VPN may be
useful. However, because VPNs may be a substantial source of operational
and management overhead, secure application protocols are preferable.
Like firewalls, VPNs may undermine the urgency of fixing end-systems by
giving a false sense of security (especially when the VPNs are not
end-to-end).

Although some hold that managing access to critical resources should be
done via packet filtering on IP addresses, even when secure application
protocols are used (which in turn leads to an argument that VPNs are
necessary for remote access), we do not consider this a reasonable
position, since a) the incremental benefit is small if other necessary
precautions have been taken, b) it doesn't help if the attack comes from
a trusted subnet, c) the strategy does not scale, and d) the management
cost is high and unending, since network topologies are dynamic.

Our view is that manually-configured IP-based access controls may make
sense when a small and static set of trustable hosts is involved, but
they obviously do not apply when a service has a large, dynamic user
population. While VPNs are seen by some as a more scalable way of
extending IP-based access controls to a broad population of clients,
they also introduce considerable expense and user inconvenience while
providing a dubious increase in security in comparison to other
strategies.

To serve a geographically diverse and growing user base, *some* host has
to be accessible to them. We claim that this host need not be a special,
expensive, dedicated, high-support device *if* front-end services are
put on appropriately dedicated and monitored boxes, and back-end
services are on other boxes that can be protected as required.
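The narrow case where IP-based access control does make sense -- a
back-end service reachable only from a small, static set of front-end
hosts -- can be sketched as follows. The addresses and port number are
hypothetical, and a real deployment would enforce the same rule in the
host's packet filter or tcp-wrappers rather than inside the application;
the point is only the shape of the allow-list check:

    import socket

    # Hypothetical: the only front ends allowed to reach this back end.
    FRONT_ENDS = {"192.0.2.10", "192.0.2.11"}

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 7001))   # hypothetical back-end service port
    server.listen(5)

    while True:
        conn, (addr, _port) = server.accept()
        if addr not in FRONT_ENDS:
            conn.close()             # drop connections from untrusted hosts
            continue
        conn.sendall(b"hello, trusted front end\r\n")
        conn.close()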
In particular, if critical services are exported only via secure
protocols such as SSL/SSH or Kerberos, and only those two or three ports
are accessible to the world, then IP-based access controls on those
ports offer very little additional security, if any. (One must weigh the
risk of bugs in the SSL/SSH protocol daemons against the risk of bugs in
the VPN gateways... and the VPN gateways are not the sure-fire winner in
that debate.)

Finally, we are concerned about the *misuse* of firewall or defensive
perimeter strategies, in which large portions of the enterprise network
and attached systems are deemed a priori to be trustworthy. As noted,
these can lead to a false sense of security and can also trigger
back-door network connections. If end-system security has been
neglected, it takes only one compromised host on the "inside" to do
enormous harm, so "large perimeter" firewall strategies must be
approached with great caution.

Further information

The *long* version of this document can be found at:

See also:
  http://www.sans.org/
  http://staff.washington.edu/dittrich

===================================================================================