What could have been done differently?

Many different companies were hit hard by the Slammer worm, some with
better than average reputations for security awareness. They bought the
finest firewalls, they had two-factor biometric locks on their data
centers, they installed anti-virus software, they paid for SAS70
audits by the premier auditors, they hired the best managed security
consulting firms. Yet, they were still hit.

It's not as simple as "don't use Microsoft", because worms have hit other
popular platforms too.

Are there practical answers that actually work in the real world with
real users and real business needs?

Sean,

Are there practical answers that actually work in the real world with
real users and real business needs?

1. Employ clueful staff
2. Make their operating environment (procedures etc.) best able
   to exploit their clue

In the general case this is a people issue. Sure, there are piles of
whizzbang technical solutions that address individual problems (some of
which your clueful staff might even think of themselves), but in the final
analysis, having people with clue architect, develop, and operate your
systems is far more important than anything CapEx alone will buy you.

Note it is not difficult to envisage how this attack could have been
far, far worse with a few code changes...

Alex Bligh

Date: Tue, 28 Jan 2003 03:10:18 -0500 (EST)
From: Sean Donelan

[ snip firewalls, audits, et cetera ]

As most people on this list hopefully know, security is a
process... not a product. Tools are useless if they are not
applied properly.

Are there practical answers that actually work in the real
world with real users and real business needs?

It depends. If "real business needs" means management ego getting
in the way of letting talented staff do their jobs, forming a
committee to conduct a feasibility study on whether to apply a
one-hour patch that closes a critical hole, drooling over paper
certs... the answer is no.

Automobiles require periodic maintenance. Household appliances
require repair from time to time. People get sick and require
medicine. The reality is that people need to accept the need for
proper systems administration.

It might not be exciting or make people feel good, but it's
necessary. Failure has consequences. Inactivity is a vote cast
for "it's worth the risk".

Sure, worm authors are to blame for their creations. Software
developers are to blame for bugs. Admins are to blame for lack
of administration. The question is who should take what share,
and absorb the pain when something like this occurs.

Eddy

Date: Tue, 28 Jan 2003 12:42:41 +0000 (GMT)
From: E.B. Dreger

Sure, worm authors are to blame for their creations.
Software developers are to blame for bugs. Admins are to

s/Admins/Admins and their management/

Eddy

Many different companies were hit hard by the Slammer worm, some with
better than average reputations for security awareness. They bought the
finest firewalls, they had two-factor biometric locks on their data
centers, they installed anti-virus software, they paid for SAS70
audits by the premier auditors, they hired the best managed security
consulting firms. Yet, they were still hit.

Because they hired people (staff or outsourced) who made them feel
comfortable, instead of people who got the job done.

It's not as simple as "don't use Microsoft", because worms have hit other
popular platforms too.

But this worm required external access to an internal server (SQL Servers
are not front-end ones); even with a bad or nonexistent patch management
system, this simply wouldn't happen on a properly configured network.
Whoever got slammered has more problems than just this worm. Even with no
firewall or screening router, use of RFC1918 private IP addresses on the
SQL Server would have prevented this worm attack.
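
A minimal sketch of that check in Python (the ipaddress module is standard
library; the server inventory below is invented purely for illustration):

    import ipaddress

    # The three RFC1918 private blocks; a back-end SQL Server numbered out of
    # one of these has no globally routable address for a worm to reach.
    RFC1918_BLOCKS = [
        ipaddress.ip_network("10.0.0.0/8"),
        ipaddress.ip_network("172.16.0.0/12"),
        ipaddress.ip_network("192.168.0.0/16"),
    ]

    def is_rfc1918(addr: str) -> bool:
        ip = ipaddress.ip_address(addr)
        return any(ip in block for block in RFC1918_BLOCKS)

    # Invented example inventory; real addresses would come from your own records.
    backend_servers = {
        "sql-finance": "10.20.30.40",
        "sql-reports": "198.51.100.7",   # not RFC1918, worth asking why
    }

    for name, addr in backend_servers.items():
        flag = "private (RFC1918)" if is_rfc1918(addr) else "NOT private: review exposure"
        print(f"{name}: {addr} is {flag}")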

Are there practical answers that actually work in the real world with
real users and real business needs?

Yes, the simple ones that have been known for decades:
- Minimum-privilege networks (access is blocked by default, permitted to
known and required traffic)
- Hardened systems (only needed components are left on the servers)
- Properly coded applications
- Trained personnel

There are no shortcuts.

Rubens Kuhl Jr.

Sean,

Ultimately, all mass-distributed software is vulnerable to software bugs. Much as we all like to bash Microsoft, the same problem can occur, and has occurred, elsewhere through buffer overruns.

One thing that companies can do to mitigate a failure is to detect it faster, and stop the source. Since you don't know what the failure will look like, the best you can do is determine what is ``nominal'' through profiling, and use IDSes to report to NOCs for considered action.
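
As a rough sketch of what "profiling nominal" could look like (standard-library
Python with invented numbers; a real IDS deployment is far more involved than
this):

    from statistics import mean, stdev

    # Invented baseline: packets per minute seen on each port while the network
    # was believed healthy. A real deployment would collect this from sensors
    # and account for daily/weekly cycles.
    baseline = {
        1433: [120, 135, 110, 128, 140],        # SQL Server (TCP)
        1434: [2, 1, 3, 2, 2],                  # SQL resolution service (UDP)
        80:   [5000, 5200, 4800, 5100, 4950],   # web traffic
    }

    def is_anomalous(port: int, observed: float, k: float = 5.0) -> bool:
        samples = baseline.get(port)
        if not samples:
            return True                         # unknown port: worth a look
        mu, sigma = mean(samples), stdev(samples)
        return abs(observed - mu) > k * max(sigma, 1.0)

    # A Slammer-style flood on UDP 1434 breaks the profile immediately.
    print(is_anomalous(1434, 90000))            # True
    print(is_anomalous(80, 5050))               # False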

There are two reasons companies don't want to do this:

1. It's hard (and expensive). Profiling nominal behavior means installing IDSes everywhere in one's environment at a time when you think things are actually working, and assuming that *other* behavior is what should be reported. Worse, network behavior is often cyclical, and you need to know how that cycle will impact what is nominal. Indeed you can have daily, weekly, monthly, quarterly, and annual cycles. Add to this ongoing software deployment and you have something of a moving target.

2. It doesn't solve all attacks. Only attacks that break the profile will be captured. Those will be the ones that use new or unusual ports, existing "bad" signatures, or excessive bandwidth.

On the other hand, in *some* environments, IDS and an active NOC may improve predictability by reducing the time needed to diagnose the problem. Who knows? Perhaps some people did benefit from these methods. I'm very curious about netmatrix's view of the whole matter, as compared to comparable events. NANOG presentation, Peter?

Eliot

In a message written on Tue, Jan 28, 2003 at 03:10:18AM -0500, Sean Donelan wrote:

They bought the finest firewalls,

A firewall is a tool, not a solution. Firewall companies advertise
much like Home Depot (Lowe's, etc.): "everything you need to build
a house".

While anyone with 3 brain cells realizes that going into Home Depot
and buying truck loads of building materials does not mean you have
a house, it's not clear to me that many of the decision makers in
companies understand that buying a spiffy firewall does not mean
you're secure.

Even those that do understand often only go to the next step.
They hire someone to configure the firewall. That's similar to
hiring a carpenter to go with your load of tools and building materials.
You're one step closer to the right outcome, but you still have no
plans. A carpenter without plans isn't going to build something
very useful.

Very few companies get to the final step, hiring an architect.
Actually, the few that get here usually don't do that; they buy
some off-the-shelf plans (see below, managed security) and hope
it's good enough. If you want something that really fits, you have
to have the architect really understand your needs, and then design
something that fits.

they had two-factor biometric locks on their data centers,

This is the part that never made sense to me. Companies are
installing new physical security systems at an amazing pace. I
know some colos that have had four new security systems in a year.
The thing that fascinates me is that, unless someone is covering up
the numbers, /people don't break into data centers/.

The common thief isn't too interested. There's too much security and
video already. People notice when the stuff goes offline. And most
importantly, it's too hard to fence for the common man. The thief really
interested in what's in the data center, the data, is going to take
the easiest vector, which, until we fix other problems, is going to
be the network.

I think far too many people spend money on new security systems
because they don't know what else to do, which may be a sign
that they aren't the people you want to trust with your network
data.

they installed anti-virus software,

Which is a completely different problem. Putting the bio-hazard
in a secure setting where it can't infect anyone and developing an
antidote in case it does are two very different things. One is
prevention, one is cure.

they paid for SAS70 audits by the premier auditors,

Which means absolutely nothing. Those audits are the equivalent
of walking into a doctor's office, making sure he has a working
stethoscope and a box of tongue depressors, and maybe, just maybe,
making the doctor use both to verify that he knows how to use
them.

While interesting, that says very little about whether the doctor
will cure you when you walk in with a disease. Just as it doesn't
mean you will be immune when the next network virus/worm/trojan
comes.

they hired the best managed security consulting firms.

This goes back to my first comment. Managed security consulting
firms do good work, but what they can't do is specialized work.
To extend the house analogy, they are like the spec architects who
make one "ok" plan and then sell it thousands of times to the people
who don't want to spend money on a custom architect.

It's better than nothing, and in fact for a number of firms it's
probably a really good fit. What the larger and more complex firms
seem to fail to realize is that as your needs become more complex,
you need to step up to the fully customized approach, which, no
matter how hard these guys try to sell it to you, they are unlikely
to be able to provide. At some level you need someone on staff who
understands security but, and here's the hard part, understands
all of your applications as well.

How many people have seen the firewall guy say something like "well
I opened up port 1234 for xyzsoft for the finance department. I
have no idea what that program does or how it works, but their support
people told me I needed that port open". Yeah. That's security.
Your firewall admin doesn't need to know how to use the finance
software, but he'd better have an understanding of what talks to
what, what platforms it runs on, what is normal traffic and what
is abnormal traffic, and so on.

Are there practical answers that actually work in the real world with
real users and real business needs?

I think there are two fundamental problems:

* The people securing networks are very often underqualified
  for the task at hand. If there is one place you need a "generalist"
  network/host understands-it-all type of person, it's in security
  -- but that's not where you find them. Far too often "network"
  security people are crossovers from the physical security world,
  and while they understand security concepts, I find much of the
  time they are lost as to how to apply them to the network.

* Companies need to hold each other responsible for bad software.
  Ford is being sued right now because Crown Vic gas tanks blow
  up. Why isn't Microsoft being sued over buffer overflows? We've
  known about the buffer overflow problem now for what, 5 years?
  The fact that new, recent software is coming out with buffer
  overflows is bad enough; the fact that people are still buying
  it, without making the companies own up to their mistakes, is
  amazing. I have to think there's billions of dollars out there
  for class action lawyers. Right now software companies, and in
  particular Microsoft, can make dangerously unsafe products and
  people buy them like crazy, and then don't even complain that
  much when they break.

Not to sound too pro-MS, but if they are going to sue, they should be able to
sue ALL software makers. And what does that do to open source? Apache,
MySQL, OpenSSH, etc. have all had their problems. Should we sue the nail gun
vendor because some moron shoots himself in the head with it? No. It was
never designed for flicking flies off his forehead. And they said don't
use it for anything other than nailing stuff together. Likewise, MS told
people six months ago to fix the hole. "Lack of planning on your part does
not constitute an emergency on my part" was once told to me by a wise man.
At some point, people have to take SOME responsibility for their
organizations' deployment of IT assets and systems. Microsoft is the
convenient target right now because they HAVE assets to take. Who's going
to pony up when Apache gets sued and loses? How do you sue Apache, or how
do you sue Perl, because, after all, it has bugs? Just because you give it
away shouldn't insulate you from liability.

Eric

At 11:13 AM 1/28/03 -0200, Rubens Kuhl Jr. et al postulated:

Are there practical answers that actually work in the real world with
real users and real business needs?

Yes, the simple ones that are known for decades:
- Minimum-privilege networks (access is blocked by default, permitted to
known and required traffic)
- Hardened systems (only needed components are left on the servers)
- Properly coded applications
- Trained personnel

    I would just add, as has been mentioned by others (but bears repeating):

  - A commitment by management

There are no shortcuts.

    Agreed

Ted Fischer

Not to sound too pro-MS, but if they are going to sue, they should be able to
sue ALL software makers. And what does that do to open source? Apache,
MySQL, OpenSSH, etc. have all had their problems. Should we sue the nail gun
vendor because some moron shoots himself in the head with it?

With all the resources at their disposal, is MS doing enough to inform
customers of new fixes? Are the fixes and latest security patches in an
easy-to-find location that any idiot admin can spot? Have they done due diligence
in ensuring that proper notification is done? I ask because it appears they
didn't tell part of their own company that a patch needed to be applied. If
I want the latest info on Apache, I hit the main website and the first thing
I see is a list of security issues and resolutions. Navigating MS's website
isn't quite so simple. Liability isn't necessarily in the bug but in the
education and notification.

Jack Bates
BrightNet Oklahoma

In a message written on Tue, Jan 28, 2003 at 10:23:09AM -0500, Eric Germann wrote:

Not to sound too pro-MS, but if they are going to sue, they should be able to
sue ALL software makers. And what does that do to open source? Apache,
MySQL, OpenSSH, etc. have all had their problems. Should we sue the nail gun

IANAL, but I think this is all fairly well worked out, from a legal
sense. Big companies are held to a higher standard. Sadly it's
often because lawyers pursue the dollars, but it's also because
they have the resources to test, and they have a larger public
responsibility to do that work.

That is, I think there is a big difference between a company the
size of Microsoft saying "we've known about this problem for 6
months but didn't consider it serious so we didn't do anything
about it", and an open source developer saying "I've known about
it for 6 months, but it's a hard problem to solve, I work on this
in my spare time, and my users know that."

Just like I expect a Ford to pass federal government safety tests,
to have been put through a battery of product tests by Ford, etc.,
and be generally reliable and safe; but when I go to my local custom
shop and have them build me a low-volume or one-off street rod or
chopper, I cannot reasonably expect the same.

The responsibility is the sum total of the number of product units
out in the market, the risk to the end consumer, the company's
ability to foresee the risk, and the steps the company was able to
reasonably take to mitigate the risk.

So, if someone can make a class action lawsuit against OpenSSH, go
right ahead. In all likelihood though there isn't enough money in
it to get the lawyers interested, and even if there was it would
be hard to prove that "a couple of guys" should have exhaustively
tested the product like a big company should have done.

It was once said, "there is risk in hiring someone to do risk analysis."

use for anything other than nailing stuff together. Likewise, MS told
people six months ago to fix the hole. "Lack of planning on your part does

It is for this very reason I suspect no one could collect on this
specific problem. Microsoft, from all I can tell, acted responsibly
in this case. Sean asked for general ways to solve this type of
problem. I gave what I thought was the best solution in general.
It doesn't apply very directly to the specific events of the last
few days.

ekgermann@cctec.com ("Eric Germann") writes:

Not to sound too pro-MS, but if they are going to sue, they should be able
to sue ALL software makers. And what does that do to open source?
Apache, MySQL, OpenSSH, etc. have all had their problems. ...

Don't forget BIND; we've had our problems as well. Our license says:

/*
* [Portions] Copyright (c) xxxx-yyyy by Internet Software Consortium.

## On 2003-01-28 17:49 -0000 Paul Vixie typed:

In any case, all of these makers (including Microsoft) seem to make a very
good faith effort to get patches out when vulnerabilities are uncovered. I
wish we could have put time bombs in older BINDs to force folks to upgrade,
but that brings more problems than it takes away, so a lot of folks run old
broken software even though our web page tells them not to.

Hi Paul,

What do you think of OpenBSD still installing BIND4 as part of the
default base system and recommending it as secure in the OpenBSD FAQ?
(See Section 6.8.3 in the OpenBSD FAQ: Networking.)

A law can be crafted in such a way as to create a distinction between
selling for profit (and assuming liability) and giving for free as-is. In
fact, you don't have Goodwill sign papers to the effect that it won't
sue you if they decide later that you've brought junk - because you know
they won't win in court. However, that does not protect you if you bring
them a bomb disguised as a valuable.

The reason for this is: if someone sells you stuff, and it turns out not
to be up to your reasonable expectations, you suffered demonstrable loss
because the vendor has misled you (_not_ because the stuff is bad). I.e.,
the amount of that loss is the price you paid, and, therefore, this is
the vendor's direct liability.

When someone gives you something for free, his direct liability is,
correspondingly, zero.

So, what you want is a law permitting direct liability (i.e. the "lemon
law", like the ones regulating the sale of cars or houses) but setting much
higher standards (i.e. willfully deceptive advertisement, maliciously
dangerous software, etc.) for suing for punitive damages. Note that in
class actions it is often much easier to prove the malicious intent of a
defendant in cases concerning deceptive advertisement - it is one thing
when someone gets cold feet and claims he's been misled, and quite another
when you have thousands of independent complaints. Because there's
nothing to gain suing non-profits (unless they're churches :), the
reluctance of class action lawyers to work for free would protect
non-profits from that kind of abuse.

A lemon law for software may actually be a boost for proprietary
software, as people will realize that the vendors have an incentive to
deliver on promises.

--vadim

[snip]

Many different companies were hit hard by the Slammer worm, some with
better than average reputations for security awareness. They bought the
finest firewalls, they had two-factor biometric locks on their data
centers, they installed anti-virus software, they paid for SAS70
audits by the premier auditors, they hired the best managed security
consulting firms. Yet, they were still hit.

It's not as simple as "don't use Microsoft", because worms have hit other
popular platforms too.

True. But few platforms have as dismal a record in this regard as MS. Whether
that's due to the number of bugs or to market penetration is a matter for debate.
Personally, I think it's clear that the focus, from MS and many other
vendors, is on time-to-market and feature creep. Security is an afterthought,
at best (regardless of "Trustworthy Computing", which is looking to be just
another marketing initiative). The first step towards good security is
choosing vendors/software with a reputation for caring about security. I
realize that for many of us, this is not an option at this stage of the game.
And in some arenas, there just aren't any good choices - the best you can do
is to choose the lesser of multiple evils. Which leads me to the next point:

Are there practical answers that actually work in the real world with
real users and real business needs?

I think a good place to start is to have at least one person, if not more,
whose job description includes checking errata/patch lists daily for the
software in use on the network. This can be semi-automated by just
subscribing to the right mailing lists. Now, deciding whether or not a patch
is worth applying is another story, but there's no excuse for being ignorant
of published security updates for software on one's network. Yes, it's a
hassle wading through the voluminous cross-site scripting posts on BUGTRAQ,
but it's worth it when you do occasionally get that vital bit of information.
Sometimes vendors aren't as quick to release bug information, much less
patches, as forums like BUGTRAQ/VulnWatch/etc.
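
As a sketch of what that semi-automation might look like (the inventory, the
advisories file format, and the version comparison below are all invented, and
real tooling would be considerably more careful):

    import csv

    # Invented inventory of installed software versions.
    installed = {
        "mssql-server": "8.00.194",
        "apache": "1.3.27",
    }

    def pending_advisories(advisory_csv: str):
        """Yield (package, installed, fixed, advisory) for software that still
        needs a patch. Expects rows of: package,fixed_version,advisory_id.
        The naive string comparison stands in for real version handling."""
        with open(advisory_csv, newline="") as fh:
            for package, fixed, advisory in csv.reader(fh):
                current = installed.get(package)
                if current is not None and current < fixed:
                    yield package, current, fixed, advisory

    # Usage, given a hypothetical advisories.csv maintained from list postings:
    # for pkg, cur, fixed, adv in pending_advisories("advisories.csv"):
    #     print(f"{pkg}: running {cur}, fix available in {fixed} ({adv})")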

Stay on top of security releases, and patch anything that is a security
issue. I realize this is problematic for larger networks, in which case I
would add, start with the most critical machines and work your way down. If
this requires downtime, well, better to spend a few hours of rotating
downtime to patch holes in your machines than to end up compromised, or
contributing to the kind of chaos we saw this last weekend.

Simple answer, practical for some folks, maybe less so for others. I know
I've been guilty of not following my own advice in this area before, but that
doesn't make it any less pertinent.

XP has autoupdate notifications that nag you. They could make it automatic,
but then everyone would sue them if it mucked up their system.

And, MS has their HFCHECK program which checks which hotfixes should be
installed. Again, not automatic because they would like the USER to sign
off on installing it.

On the Open Source side, you sort of have that when you build from source.
Maybe Apache should build a util to routinely go out and scan their source
and all the myriad add-on modules and build a new version when one of them
has a fix, but we leave that to the sysadmin. Why? Because the
permutations are too many. Which is why we have Windows. To paraphrase a
phone company line I heard in a sales meeting when reaming them, "we may
suck, but we suck less ...". It ain't the best, but for the most part, it
does what the user wants and is relatively consistent across a number of
machines. The user learns at home and can operate at work. No retraining.

Sort of like the person who sued McD's when they dumped their own coffee in
their lap because it was "too hot". Somewhere in the equation, the
sysadmin/enduser, whether Unix or Windows, has to take some responsibility.

To turn the argument around, people don't pay for IIS either, but everyone
would love to sue MS for its vulnerabilities (e.g. CR/Nimda, etc.).

As has been said, no one writes perfect software. And again, at some point, the
user has to share some responsibility. Maybe if the users get burned
enough, the problem will get solved. Either they will get fired, the
software will change to another platform, or they'll install the patches.
People only change behaviors through pain, either mental or physical.

Eric

[snip]

As has been said, no one writes perfect software. And again, at some point, the
user has to share some responsibility. Maybe if the users get burned
enough, the problem will get solved. Either they will get fired, the
software will change to another platform, or they'll install the patches.
People only change behaviors through pain, either mental or physical.

There's a difference between having the occasional bug in one's software
(Apache, OpenSSH) and having a track record of remotely exploitable
vulnerabilities in virtually EVERY revision of EVERY product one ships, on
the client side, the server side, and in the OS itself. Microsoft does not
care about security, regardless of what their latest marketing ploy may be.
If they did, they would not be releasing the same exact bugs in their
software year after year after year.

</rant>

[snip]

That is, I think there is a big difference between a company the
size of Microsoft saying "we've known about this problem for 6
months but didn't consider it serious so we didn't do anything
about it", and an open source developer saying "I've known about
it for 6 months, but it's a hard problem to solve, I work on this
in my spare time, and my users know that."

Just like I expect a Ford to pass federal government safety tests,
to have been put through a battery of product tests by Ford, etc.,
and be generally reliable and safe; but when I go to my local custom
shop and have them build me a low-volume or one-off street rod or
chopper, I cannot reasonably expect the same.

The responsibility is the sum total of the number of product units
out in the market, the risk to the end consumer, the company's
ability to foresee the risk, and the steps the company was able to
reasonably take to mitigate the risk.

*applause*

Very well stated. I've been trying for some time now to express my thoughts
on this subject, and failing - you just expressed _exactly_ what I've been
trying to say.

> use for anything other than nailing stuff together. Likewise, MS told
> people six months ago to fix the hole. "Lack of planning on your part does

It is for this very reason I suspect no one could collect on this
specific problem. Microsoft, from all I can tell, acted responsibly
in this case. Sean asked for general ways to solve this type of
problem. I gave what I thought was the best solution in general.
It doesn't apply very directly to the specific events of the last
few days.

Yes, in this particular case Microsoft did The Right Thing. It's not their
fault (this time) that admins failed to apply patches.

Of course, when one has a handful of new patches every _week_ for all manner
of software from MS, ranging from browsers to mail clients to office software
to OS holes to SMTP and HTTP daemons to databases ... well, one can
understand why the admins might have missed this patch. It doesn't remove
responsibility, but it does make the lack of action understandable. One could
easily justify a full-time position, in any medium enterprise that runs MS gear,
just to apply patches and stay on top of security issues for MS software.

Microsoft is not alone in this - they just happen to be the poster child, and
with the market share they have, if they don't lead the way in making
security a priority, I can't see anybody else in the commercial software biz
taking it seriously.

The problem was not this particular software flaw. The problem here is the
track record, and the attitude, of MANY large software vendors with regards
to security. It just doesn't matter to them, and that will not change until
they have a reason to care about it.

[snip]

Hi Paul,

What do you think of OpenBSD still installing BIND4 as part of the
default base system and recommending it as secure in the OpenBSD FAQ?
(See Section 6.8.3 in the OpenBSD FAQ: Networking.)

OpenBSD ships a highly-audited, chrooted version of BIND4 that bears little
resemblance to the original code (I'm sure Paul can correct me here if I'm
off-base). The reasons for the team's decision are well-documented on various
lists and FAQs. Given the choices at hand (use the exhaustively audited,
chrooted BIND4 already in production; go with a newer BIND version that
hasn't been through the wringer yet; write their own dns daemon; use tinydns
(licensing issues); use some other less well-known dns software), I think
they made the right one. I'm sure they'll move to a newer version when
somebody on the team gets a chance to give it a thorough code audit, and run
it through sufficient testing prior to release.

Sort of like the person who sued McD's when they dumped their own coffee in
their lap because it was "too hot". Somewhere in the equation, the
sysadmin/enduser, whether Unix or Windows, has to take some responsibility.

Bad Example. Or at least it's a bad example for your point. That particular
case has a *LOT* of similarities with the other big-M company we're discussing.
Cross out "hot coffee" and write in "buffer overflow" and see how it reads:

From http://lawandhelp.com/q298-2.htm

1: For years, McDonald's had known they had a problem with the way they make
their coffee - that their coffee was served much hotter (at least 20 degrees
more so) than at other restaurants.

2: McDonald's knew its coffee sometimes caused serious injuries - more than
700 incidents of scalding coffee burns in the past decade have been settled by
the Corporation - and yet they never so much as consulted a burn expert
regarding the issue.

3: The woman involved in this infamous case suffered very serious injuries -
third degree burns on her groin, thighs and buttocks that required skin grafts
and a seven-day hospital stay.

4: The woman, an 81-year old former department store clerk who had never
before filed suit against anyone, said she wouldn't have brought the lawsuit
against McDonald's had the Corporation not dismissed her request for
compensation for medical bills.

5: A McDonald's quality assurance manager testified in the case that the
Corporation was aware of the risk of serving dangerously hot coffee and had no
plans either to turn down the heat or to post warnings about the possibility of
severe burns, even though most customers wouldn't think it was possible.

6: After careful deliberation, the jury found McDonald's was liable because
the facts were overwhelmingly against the company. When it came to the punitive
damages, the jury found that McDonald's had engaged in willful, reckless,
malicious, or wanton conduct, and rendered a punitive damage award of 2.7
million dollars. (That is the equivalent of just two days of coffee sales:
McDonald's Corporation generates revenues in excess of 1.3 million dollars daily
from the sale of its coffee, selling 1 billion cups each year.)

7: On appeal, a judge lowered the award to $480,000, a fact not widely
publicized in the media.

8: A report in Liability Week, September 29, 1997, indicated that Kathleen
Gilliam, 73, suffered first degree burns when a cup of coffee spilled onto her
lap. Reports also indicate that McDonald's consistently keeps its coffee at 185
degrees, still approximately 20 degrees hotter than at other restaurants. Third
degree burns occur at this temperature in just two to seven seconds, requiring
skin grafting, debridement and whirlpool treatments that cost tens of thousands
of dollars and result in permanent disfigurement, extreme pain and disability
to the victims for many months, and in some cases, years.