SANS '97 Conference Report

System Administration, Networking and Security Conference

Conference Overview

The conference was held in Baltimore, Maryland, between April 20 and 26. Of the five days of courses available, I attended three, as well as the two-day technical conference, where I spoke on a panel covering Help Desk Tracking Systems, discussing the UW QnA software we use for tracking help@cac email.

SANS is sponsored in part by the System Administrators Guild (SAGE), a special technical group of the USENIX Association, and put on by the Escal Institute, which also runs an annual Network Security conference (among other computer- and network-related conferences). This was the sixth annual SANS conference, and the largest to date, with approximately 900 attendees for the technical conference and roughly 1,200 attendees overall.

A good portion of this conference dealt with security issues, and naturally included several courses on firewalls as a front line of defense and on detecting and defending against attacks using them. It is clear that corporations are beginning to take the threat of information warfare and corporate espionage very seriously and are taking steps to defend themselves. Several firewall vendors and consultants were prominent in BOFs, tutorials, and the commercial tools sessions.

It also seemed clear to me that universities are lagging--as a general rule--in clearly defining policies and putting institution-wide security measures in place, with only the most minor filtering at the border routers, or spotty installations of firewalls or workstation-level security in the richest of departments or research groups. Several university representatives I talked with were only now organizing incident response teams, whereas most corporate representatives who spoke up seemed to have organization-wide policies and security officers to at least attempt to enforce them across the company. (I have to say that Randal Schwartz's story about his problems with Intel regarding the security of R&D computers for this, the most prominent of semiconductor manufacturers, seems to prove that not *all* companies have their security act together at the uppermost levels, and that system administrators should think twice... about a lot of things.)

Both Eugene Schultz, in his course on Effective Security Incident Response, and Ray Semko (a.k.a. "the DICE man") mentioned a widespread problem with upper management taking a "head in the sand" stance towards the security problems their organizations confront. The call is out for those caught in the middle--the administrators of computing systems and networks--to educate both their users (where the front-line defenses of passwords are being breached) and upper-level management (where policies are decided and budgets allocated... or not). This same message was communicated at a recent Agora meeting in Seattle (a local security-related group). It often seems to take theatrics and/or horror stories to get the point across, if it gets across at all.

Courses Attended

Effective Security Incident Response

This course was taught by Eugene Schultz, currently of SRI (Stanford Research Institute) Consulting. He was a founding member of the Department of Energy's Computer Incident Advisory Capability (CIAC) while working at Lawrence Livermore National Laboratory in California. He, along with the other founding members, has since left CIAC to move on to other pursuits.

One of the main points that Dr. Schultz stressed was the need for buy-in (literally) from upper levels of management. Budgets must reflect security incident response as a priority, and the organization as a whole must acknowledge the need for security and support an incident response capability. Without this, you are doomed to fail at providing either adequate defenses or the ability to respond effectively and appropriately to security incidents.

Part of the incident response equation is balancing the risks of loss of data, alteration of data, loss of service, and so on against the costs of securing your systems and providing an incident response capability. When couched in terms of dollars, it is easy to assume that only businesses are at risk, and as a result I believe universities often underestimate their risk and underfund their security infrastructures. When you consider the types of activities undertaken on a university campus as large as the UW--AIDS and cancer research, cataloging the human genome, research into cutting-edge computer and defense-related technologies, administration of life-and-death services in a research hospital--it is hard to claim that we are at any less risk (and perhaps we are at more, due to the looser affiliation of individuals and departments sharing the UW network) than any business in the Seattle area involved in the same areas.

(See http://www.cert.dfn.de/eng/team/kpk/certbib.html)

Building Firewalls: No Theory, Just Practice

This course was taught by Marcus Ranum, Chief Scientist at V-ONE Corporation, and the principal author of several major Internet firewall products, including the DEC SEAL, the TIS Gauntlet, and the TIS Internet Firewall Toolkit.

My main goal in attending this class was to get a better feel for how a firewall is constructed and administered, and to see which elements of firewall technology can be applied to non-firewalled networks to improve their security.

The good news is that "firewalls" are not monolithic black boxes, but instead a suite of tools that can be used, in full or in part, to add extra layers of security to your workstations and network. So the answer is, yes, there are firewall tools and techniques you can use to your advantage.

The central feature of a typical firewall environment is a "bastion" host--a system secured in and of itself--that usually acts as a filtering gateway into and out of a network, normally working in concert with a filtering router. Some features of this bastion host are equally applicable to a typical non-firewalled workstation: minimization of network services to just those that are absolutely required, logging and "alarms" in place to detect and notify administrators of intrusion attempts, and policies and procedures for securing the system and auditing it to determine how secure it really is.
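To make the service-minimization point concrete, here is a small Python sketch of my own (not something handed out in the course) that reports which entries in a classic inetd-style configuration file are still enabled, so they can be reviewed and commented out if not absolutely required. The path and the "allowed" list are assumptions for illustration only.

    #!/usr/bin/env python
    # Sketch: report network services still enabled in an inetd-style config,
    # so they can be reviewed and disabled if not absolutely required.
    # The config path and the EXPECTED list below are illustrative assumptions.

    EXPECTED = {"ssh", "smtp"}          # services we have decided to allow
    INETD_CONF = "/etc/inetd.conf"      # adjust for your system

    def enabled_services(path):
        services = []
        with open(path) as conf:
            for line in conf:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue            # skip comments and blank lines
                services.append(line.split()[0])
        return services

    if __name__ == "__main__":
        for svc in enabled_services(INETD_CONF):
            if svc not in EXPECTED:
                print("review: service %r is enabled but not on the allow list" % svc)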

Some tools and techniques that can be applied to workstations include:

New Topics in Network and System Administration

This course was taught by Evi Nemeth and Trent Hein, two co-authors of the book, "Unix System Administration Handbook." Of the topics covered, only a couple were of real interest to me, but then I like to pick out the cashews and almonds in a box of mixed nuts as well.

The topics were:

Technical Sessions Attended

Keynote address: The DICE Game - Counter Espionage in the Age of Networked Computers

I have to say that Ray Semko is appropriately employed by the Department of Energy, because he certainly has energy and transmits it. Ray (or "the DICE man," as I will remember him, which he says stands for "Defensive Intelligence to Counter Espionage") delivered a fast-paced, humorous, and also quite serious message: industrial espionage is alive and well and living on the Internet.

Next time a beautiful woman comes up to you in a bar, offers to buy you a drink, then starts asking about what kind of routers you use and your favorite password algorithm, you can tell her, "no DICE, [wo]man". Ray will be proud of you.

Information Warfare, or Cyber-Joyride?

This presentation was high on hype (including video/audio of Janet Reno press conferences and Discovery Channel graphics depicting login screens) and made an interesting story of an international teenage cracker who was finally caught. It seemed, though, that the real motivation for spending the time and energy to catch a teenager armed with some trojan-horse system hacking tools was the bad publicity the US government received after the DoJ, CIA, NASA, and USAF web servers were defaced.

I think it would have been more cost effective to hire better consultants to manage government web sites in the first place (ones who knew to remove the phf program from the server and patch the security holes that allowed "multiple intrusions" to take place on these systems), although it's probably a good thing that the government gets practice with network tracing, computer forensics, educating judges about computer crimes, and so on.

Filtering and Monitoring System Logs with LogCheck

AOL has so many systems producing syslog entries so fast and furiously that a tool other than swatch is needed. LogCheck is that tool, though it handles log files in a somewhat odd way and has to be run from cron.

I don't have that number of systems or volume of log entries to monitor, so I'll stick with swatch (which seems to be easier to use and runs constantly).
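To show the kind of filtering both tools perform, here is a minimal Python sketch of my own (not taken from swatch or LogCheck) that scans a syslog file for suspicious patterns and prints anything that matches. The log path and patterns are illustrative assumptions; swatch effectively does this continuously against a growing file, while a LogCheck-style tool runs periodically from cron and remembers where it left off.

    #!/usr/bin/env python
    # Minimal log-filtering sketch in the spirit of swatch/LogCheck.
    # The patterns and log path are assumptions, not taken from either tool.

    import re

    LOGFILE = "/var/log/messages"       # adjust for your syslog setup
    PATTERNS = [                        # things worth waking someone up for
        re.compile(r"failed login", re.IGNORECASE),
        re.compile(r"refused connect", re.IGNORECASE),
        re.compile(r"authentication failure", re.IGNORECASE),
    ]

    def scan(path):
        with open(path) as log:
            for line in log:
                if any(p.search(line) for p in PATTERNS):
                    print("ALERT: " + line.rstrip())

    if __name__ == "__main__":
        scan(LOGFILE)   # a cron-driven tool would also remember its last offset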

Just Another Convicted Perl Hacker

Randal Schwartz, of "Camel book" fame, was recently convicted of three felony counts in Oregon. He admittedly made several mistakes during his contracting at Intel, but he is likely the victim of overzealous prosecution, using vague laws that were hastily put on the books in the wake of the movie "WarGames," to protect the reputation--at the expense of his own--of the largest tax-paying and employing business in Oregon: Intel.

The basic message was aimed at contractors, who are at greater risk of having to fend off a customer's pack of lawyers, but administrators on a university campus should also consider the effect that the laws on the books of their own states can have on them if they go too far in chasing crackers around the Internet.

Instrumenting and Optimizing your Web Server's Performance

Rob Kolstad is definitely the type of person I would want on my side if I were the president of a large ISP or web service company. His goal in tuning a BSD/OS system running the Apache web server was to improve its performance to the point that it exceeded that of Digital's AltaVista server running on a large Alpha workstation. He was able to do this, and then some, by a combination of tuning server parameters, fixing some bugs in the TCP/IP stack, some kernel hacking, and (his favorite technique) throwing some hardware at it. The results were such that a friend of his no longer uses his most frequent retort to Rob's suggestions, which is "Rob, you're full of shit."

Applying SysAdmin Techniques to Security

Australia's largest ISP, Connect.com.au, has a distributed suite of servers and POPs in all the major population centers in Australia. They use ssh as the basis for encrypted logins and file transfers between bastion hosts (using a modified rdist, called "sdist") and for remote administration. This infrastructure has proved quite secure and robust for remote management, software distribution, and controlled access.
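As a rough sketch of the idea (this is not Connect.com.au's sdist code; the host names, paths, and use of scp rather than a modified rdist are my assumptions for illustration), pushing a file to a set of bastion hosts over ssh might look something like the following, with nothing crossing the network unencrypted:

    #!/usr/bin/env python
    # Sketch of ssh-based file distribution in the spirit of sdist (rdist over
    # ssh).  Host names and paths are hypothetical, for illustration only.

    import subprocess

    BASTIONS = ["bastion1.example.com", "bastion2.example.com"]  # hypothetical
    LOCAL_FILE = "/usr/local/etc/exports.conf"                   # hypothetical
    REMOTE_PATH = "/usr/local/etc/exports.conf"

    def push(host):
        # scp runs over ssh, so the transfer is encrypted end to end.
        rc = subprocess.call(["scp", LOCAL_FILE, "%s:%s" % (host, REMOTE_PATH)])
        if rc != 0:
            print("push to %s failed (exit %d)" % (host, rc))

    if __name__ == "__main__":
        for host in BASTIONS:
            push(host)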

Software Repositories and Distribution

In researching a mechanism for software distribution here at the UW campus, I have run across papers on most of the tools mentioned by the speaker. The speaker's conclusion was that no single tool seems ideal for distributing software from repositories, but some of the main problems he identified seem solvable with tools like ssh and sdist.

Rob Kolstad on Industry Trends

Because of two cancelled talks, Rob was given a room with about 700 people in it, 50 minutes to speak, and 25 questions to answer. After 48 minutes, he had covered all of two questions but still entertained and educated nearly everyone with his thoughts on technology trends like Java and NCs, network bandwidth growth, and the probability that Bill Gates is correct when he claims NT gives you "zero administrative cost" (hint: the probability is the same as his claimed cost).

Some of the key points he made included:

NCs and Java

Network trends

His talk also gave me some good fodder for jokes during my part of the panel presentation in the next session, which was the...

Panel on Problem Tracking Systems

Most home-grown help tracking systems appear to be quite simple and designed by and for programmers (some require MH mail folders, are command-line oriented, do not produce reports, logs, or reminders, etc.).

Of those speaking on the panel, the volume and staffing breakdown (if I recall correctly) was as follows:

            QnA              req            pqmh            BSDi's staff
           ==============================================================
Users       80,000           200 users      high hundreds   tens of 1000s
                             (600 nodes)                    (who pay $)
Qs/day      130              10             30              1000
Staffing    ~20              2              2               ~5
Turnaround  <1 day           1-7 days       a few days      ~2 hrs

Rob asked a few people in the audience to shout out their numbers; one person said they had 10 people fielding 2000 telephone calls a day (phones constantly off the hook for 8+ hours), while most others had much smaller email flows than we do. Rob was a bit shocked that we field about one-eighth the number of questions that BSDi's staff do, with an average turnaround time three to four times theirs. My guess is that they beat us because of their narrowness of scope and resulting depth of knowledge, and/or because we rely too heavily on part-time students, who take longer to answer questions and generate more back-and-forth due to their lower skill level (probably both).

One notable difference between QnA and all the other options discussed on the panel is that the others aren't geared towards generating automatic reminders and statistics. While those facilities could be added by writing new scripts, the other systems focus almost exclusively on logging information and letting consultants pass on responsibility for producing an answer.

Birds of a Feather (BOF) Sessions Attended

The best thing about the BOFs at SANS is that they usually last an hour and you get to drink all the beer you want, so if they aren't very good, you still can walk out with a smile. I walked out of each BOF I attended with a smile, and a few times with some good ideas as well.

Human Factors in Computer Security

This discussion rambled a bit, but tended to focus on the hurdles you must overcome to get management to accept the need for security policies and measures, on dealing with users who are reluctant to change their password behavior, and on techniques for dealing with change in an organizational body.

My main contribution to this discussion was to recall a book I had read, titled, "The Diffusion of Innovations," which covers the aspects of implementing change in situations involving humans (who tend, by nature, to resist change).

The majority of the group, it seemed, were in corporate environments with firewalls already in place (or soon to be put in place) and were interested in learning how to deal with those users who didn't like the restrictions the firewalls imposed.

Most people indicated that well-written and well-advertised policies, along with adequate staffing to monitor and enforce them, yielded the fewest hassles and the best security. Those with loosely defined (or no) written policies and inadequate staffing spent more time fighting fires and having trouble with users who said "white" while the admins said "black."

After the discussion broke up, a couple of university types from Canada and Germany (?) and I stayed around and went off topic to discuss how we support so many users with so few staff. One suggestion for the problem we had while transitioning students to the student-only modem pool (where they got fast busy signals) was to record what normal and failed connections typically sound like and link the .AU files from a web page, so that users could listen and tell for themselves whether they were getting carrier tones or the fast busy signal that indicates a saturated central switch. He also suggested distributing updated Dial-IP discs via some mechanism other than staff or the bookstore, such as including discs in the student newspaper or other "come and get it" means like boxes in the labs.

Help Desk Tracking Systems

This discussion would have been better titled, "How good is Remedy and should I use it?" This was a relatively large BOF (about 40 people, who quickly drained all the beer) and, by voice poll, consisted of about 1/3 Remedy users, 1/3 home-grown or public domain tracking system users, and 1/3 that had no tracking system at all, but were interested in finding one.

Some of the things people said they wanted in, or to do with, a "tracking system" included:

I mentioned that we looked at Remedy before implementing QnA, and that our operations group uses it for their call tracking. We found that it didn't (at that time, at least) provide any easy way of opening new calls from an email message without a human having to read the message and key all the information into the fields that allow the tool to route and track the question properly, and that it wouldn't send email as an answer (or accept follow-up email either).
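For illustration only (this is not the actual QnA implementation), the core of accepting a new question by email is little more than parsing the incoming message and appending a record to a queue; the queue path and record format here are made up:

    #!/usr/bin/env python
    # Illustrative sketch only -- not the actual QnA code.  Reads one mail
    # message on stdin (as if piped from the mail system) and appends a new
    # ticket record to a simple queue file.

    import sys
    import time
    from email.parser import Parser

    QUEUE = "/var/spool/helpdesk/queue.txt"   # hypothetical queue location

    def open_ticket(raw_message):
        msg = Parser().parsestr(raw_message)
        record = "%s|%s|%s|open\n" % (
            time.strftime("%Y-%m-%d %H:%M"),
            msg.get("From", "unknown"),
            msg.get("Subject", "(no subject)"),
        )
        with open(QUEUE, "a") as queue:
            queue.write(record)   # a real system would assign a ticket number
                                  # and send an acknowledgment back by email

    if __name__ == "__main__":
        open_ticket(sys.stdin.read())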

Several others also mentioned that Remedy wouldn't serve most people's needs "out of the box" and would require several months of programming to make it fit. One person said they had spent about as much time (close to a year) getting Remedy to serve their needs as we did developing and implementing QnA, but that this was largely due to their not really understanding their own processes to begin with and having to re-work things over and over to get them right. I mentioned that we started developing QnA by writing the user documentation and modelling how we route and track questions, and only cut code once we knew what we needed to program. I reminded those who were asking "how do you like Remedy?" that this is not the right question to ask. The first questions are "what is it we do?" and "how would automating this process help us?"; only then should you go looking for a tool to use.

The discussion then went back to the features of Remedy for the remainder of the hour and I went for another beer.

Terminal Room

I have to mention the terminal room here. After attending JavaOne last year and using their abysmally insecure network, I made an effort to have a copy of ssh installed on the laptop I took, and keys in place on the systems I would need to log in to, so I didn't have to worry about having my password sniffed on qna-help, my workstation, or red.

The bad news was that they didn't have the Ethernet hookups for laptops in place until three days into the conference (they weren't going to have them at all, but a number of people complained that these had been advertised as part of the facilities in the brochures, and the organizers decided to spend the time to follow through).

The good news was that they already had ssh installed on all the Linux PCs in the terminal room! I took care (a nice way of saying I was still paranoid about using these "single login for everyone" systems) when logging in to ensure that command history was turned off, that shell initialization files hadn't been modified, that nobody else was logged in to the same account, and so on.

The systems weren't all configured quite the same way (some still had ownerships and permissions on certain files set such that they could have been modified by someone), but the majority had the account configured so that "sans97" didn't own the shell initialization files, the .Xauthority file, the .ssh directory contents, etc. In other words, you could be pretty sure nobody would be abusing the shared account you were logged in to ("sans97", with the same password for everyone) either to see where you were logging in (and with what account) or to otherwise steal your password by sniffing the net. Some systems were providing XDM service to a handful of X terminals in the room, so on a couple of occasions I noticed login sessions to boeing.com and elsewhere, including the login name (and could probably have grabbed a copy of xkey and sniffed passwords from the X terminal, since it had to trust the system I was logged in to for X connections). Other than that, they did a pretty good job of securing the X terminals (though I still think we did a better job of ensuring secure web sessions at SIGIR last year, minus the ssh capability).
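For what it's worth, the manual checks described above amount to roughly the following sketch; the file list and the expectation that the shared account should neither own nor be able to write these files are my own assumptions, not a description of the terminal room's actual setup:

    #!/usr/bin/env python
    # Sketch of the ownership/permission checks worth doing on a shared
    # terminal-room account.  The file list and expectations are assumptions.

    import os
    import stat

    FILES = [".profile", ".xsession", ".Xauthority", ".ssh/authorized_keys"]

    def check(home):
        my_uid = os.getuid()
        for name in FILES:
            path = os.path.join(home, name)
            try:
                st = os.stat(path)
            except OSError:
                continue                  # file not present; nothing to check
            if st.st_uid == my_uid:
                print("warning: %s is owned by the shared account" % path)
            if st.st_mode & (stat.S_IWGRP | stat.S_IWOTH):
                print("warning: %s is group/world writable" % path)

    if __name__ == "__main__":
        check(os.path.expanduser("~"))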

Conclusions

I learned quite a few things at this conference, and got some good ideas from people that I hope to put to use this coming year.

In the Help Desk Tracking panel, I learned that we have gone farther in developing a help desk tracking system than most others, and that we are obviously doing something right, given the ratio of accounts to questions we get, but also that we are probably trying to do too much with part-time (and high-turnover) student staff and not enough full-time, long-term staff.

In the area of security, I think we are fast approaching (if not already at) the point where the security level of corporations and ISPs rises enough that the growing number of attackers will shift more towards universities, community colleges, and K-12 schools with workstations, using these as training grounds as well as bases for launching attacks (often through sets of secondary and tertiary stolen accounts used to "launder" connections) on the tougher targets. At some point, universities will need to get into the firewall game (even at the expense of decreased bandwidth) and also have adequate incident response, security auditing, and policy enforcement authority institution-wide.

The argument for minimizing network-level security and focusing only on securing workstations has, in my opinion, some logical flaws. If it is not followed up, across the institution, with policies and staffing to actually secure those workstations, you leave pockets of wide-open systems to be used as sniffers (which is happening widely and continuously). The argument also sounds like saying we should build nice wide roads all around the city, refuse to put up traffic signs and lights so we never have to slow down, and just rely on people locking their desk and file cabinet drawers and padlocking the phone so nobody steals papers or makes long-distance calls. Once everyone is used to speeding, they will never stand for being told they must slow down and stop. Perhaps we are letting the unrealistic expectations and demands of users outweigh the real work required to secure the basic network infrastructure that is now a central aspect of education, research, and soon commerce. No wonder the "DICE man" said computer espionage is where all the action is for all those ex-Stasi and KGB spies.

Respectfully submitted,

Dave Dittrich