What could have been done differently?

Sean Donelan wrote:

> Many different companies were hit hard by the Slammer worm, some with
> better than average reputations for security awareness. They bought the
> finest firewalls, they had two-factor biometric locks on their data
> centers, they installed anti-virus software, they paid for SAS70
> audits by the premier auditors, they hired the best managed security
> consulting firms. Yet, they still were hit.
>
> It's not as simple as "don't use Microsoft", because worms have hit
> other popular platforms too.

As a former boss of mine was fond of saying when someone made a stupid mistake: "It can happen to anyone. It just happens more often to some people than others."

> Are there practical answers that actually work in the real world with
> real users and real business needs?

As this is still a network operators' forum, let's get this out of the way: any time you put a 10 Mbps Ethernet port in a box, expect that it has to deal with 14 kpps at some point. 100 Mbps -> 148 kpps, 1000 Mbps -> 1488 kpps. And each packet is a new flow. There are still routers being sold that have the interfaces but can't handle the maximum traffic. Unfortunately, router vendors prefer to lure customers toward boxes that can forward these amounts of traffic at wire speed rather than implement features in their lower-end products that would allow a box to drop the excess traffic in a reasonable way.
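
For anyone who wants to check those numbers: a minimum-size Ethernet frame occupies 64 bytes, plus 8 bytes of preamble and 12 bytes of inter-frame gap, so 672 bits of wire time per packet. A quick back-of-the-envelope sketch in C:

#include <stdio.h>

int main(void)
{
    /* Minimum-size Ethernet frame on the wire: 64-byte frame plus
     * 8-byte preamble plus 12-byte inter-frame gap = 84 bytes. */
    const double bits_per_packet = 84 * 8;              /* 672 bits */
    const double mbps[] = { 10.0, 100.0, 1000.0 };

    for (int i = 0; i < 3; i++)
        printf("%6.0f Mbps -> %8.0f packets/s\n",
               mbps[i], mbps[i] * 1e6 / bits_per_packet);
    return 0;
}

That prints 14881, 148810 and 1488095 packets per second, which rounds to the figures above.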

But then there is the real source of the problem. Software can't be trusted. It doesn't mean anything that 1000000 lines of code are correct; if one line is incorrect, something really bad can happen. Since we obviously can't make software do what we want it to do, we should focus on making it not do what we don't want it to do. This means every piece of software must be encapsulated inside a layer of restrictive measures that operate with sufficient granularity. In Unix, traditionally this is done per-user. Regular users can do a few things, but the super-user can do everything. If a user must do something that regular users can't do, the user must obtain super-user privileges and then refrain from using these absolute privileges for anything other than the intended purpose. This doesn't work. If I want to run a web server, I should be able to give a specific piece of web serving software access to port 80, and not also to every last bit of memory or disk space.
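
To illustrate how coarse the traditional mechanism is: the best stock Unix offers today is the dance below, where the server binds the privileged port as root and then throws the root privileges away before reading any input. A minimal sketch only; the "www" account name is made up and error handling is reduced to the essentials.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <pwd.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr;
    int s = socket(AF_INET, SOCK_STREAM, 0);

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(80);   /* privileged port: needs root to bind */

    if (s < 0 || bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    /* Drop root before touching any untrusted input. "www" is a
     * hypothetical unprivileged account. */
    struct passwd *pw = getpwnam("www");
    if (pw == NULL || setgid(pw->pw_gid) < 0 || setuid(pw->pw_uid) < 0) {
        perror("privilege drop");
        return 1;   /* refuse to keep running as root */
    }

    listen(s, 16);
    /* ... accept() and serve requests as the unprivileged user ... */
    return 0;
}

Note that this is still all-or-nothing: the process has to start as root, and one slip before the drop is fatal - exactly the granularity problem described above.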

Another thing that could help is to have software ask permission from some central authority before it gets to do dangerous things such as run services on UDP port 1434. The central authority can then keep track of what's going on and revoke permissions when it turns out the server software is insecure. Essentially, we should firewall on software versions as well as on traditional TCP/IP variables.
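
Purely as a sketch of the idea, with every software name, version and port invented for illustration: the authority's decision could boil down to a default-deny lookup keyed on software identity and version, not just on addresses and ports.

#include <stdio.h>
#include <string.h>

struct rule {
    const char *software;   /* product name */
    const char *version;
    int         port;
    int         allow;      /* 0 = revoked, 1 = permitted */
};

/* Toy policy table; a real one would live on the central box and be
 * fetched over an authenticated channel. */
static const struct rule policy[] = {
    { "sql-server", "8.0", 1434, 0 },   /* revoked: known-vulnerable */
    { "httpd",      "1.3",   80, 1 },
};

static int may_bind(const char *sw, const char *ver, int port)
{
    size_t i;

    for (i = 0; i < sizeof(policy) / sizeof(policy[0]); i++)
        if (strcmp(policy[i].software, sw) == 0 &&
            strcmp(policy[i].version, ver) == 0 &&
            policy[i].port == port)
            return policy[i].allow;
    return 0;   /* default deny: unknown software gets nothing */
}

int main(void)
{
    printf("sql-server 8.0 on udp/1434: %s\n",
           may_bind("sql-server", "8.0", 1434) ? "allow" : "deny");
    return 0;
}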

And it seems parsing protocols is a very difficult thing to do right with today's tools. The SNMP fiasco of not long ago shows as much, as does the new worm. It would probably be a good thing if the IETF could build a good protocol parsing library so implementors don't have to do this "by hand" and skip over all that pesky bounds checking. Generating and parsing headers for a new protocol would then no longer require new code, but could be done by defining a template of some sort. The implementors can then focus on the functionality rather than which bit goes where. Obviously there would be a performance impact, but the same goes for coding in higher-level languages than assembly. Moore's law and optimizers are your friends.
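
The heart of such a library could be very small: a cursor that carries its own remaining length, so the bounds check physically cannot be skipped. A minimal sketch, with all names invented for illustration:

#include <stdint.h>
#include <stdio.h>

struct cursor {
    const uint8_t *p;   /* next unread byte */
    size_t left;        /* bytes remaining in the buffer */
};

/* Read a 16-bit big-endian field. Returns 0 on success, -1 if the
 * field would run past the end of the buffer. */
static int get_u16(struct cursor *c, uint16_t *out)
{
    if (c->left < 2)
        return -1;      /* the bounds check nobody gets to skip */
    *out = (uint16_t)((c->p[0] << 8) | c->p[1]);
    c->p += 2;
    c->left -= 2;
    return 0;
}

int main(void)
{
    /* A made-up 4-byte header: source port 1434, destination port 53. */
    const uint8_t pkt[] = { 0x05, 0x9a, 0x00, 0x35 };
    struct cursor c = { pkt, sizeof(pkt) };
    uint16_t src, dst;

    if (get_u16(&c, &src) == 0 && get_u16(&c, &dst) == 0)
        printf("src %u dst %u\n", (unsigned)src, (unsigned)dst);
    return 0;
}

A template or IDL layer could then generate calls like these from a declarative description of the header layout.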

[snip]

> restrictive measures that operate with sufficient granularity. In Unix,
> traditionally this is done per-user. Regular users can do a few things,
> but the super-user can do everything. If a user must do something that
> regular users can't do, the user must obtain super-user privileges and
> then refrain from using these absolute privileges for anything other
> than the intended purpose. This doesn't work. If I want to run a web
> server, I should be able to give a specific piece of web serving
> software access to port 80, and not also to every last bit of memory or
> disk space.

Jeremiah Gowdy gave an excellent presentation at ToorCon 2001 on this very
topic - "Fundamental Flaws in Network Operating System Design", I think it
was called. I'm looking around to see if I can find a copy of the lecture,
but so far I'm having little luck. His main thesis was basically that every
OS in common use today, from Windows to UNIX variants, has a fundamental
flaw in the way privileges and permissions are handled - the concept of
superuser/administrator. He argued instead that OSes should be redesigned to
implement the principle of least privilege from the ground up, down to the
architecture they run on. OpenSSH's PrivSep (now making its way into other
daemons in the OpenBSD tree) is a step in the right direction.
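
For those who haven't seen it, the privsep pattern is roughly this: a privileged parent forks a child, the child sheds every privilege before it parses any untrusted input, and the two sides talk over a socketpair. A rough sketch only, not OpenSSH's actual code; the uid/gid 65534 ("nobody") is a placeholder.

#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int pair[2];
    pid_t pid;

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, pair) < 0) {
        perror("socketpair");
        return 1;
    }

    pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }

    if (pid == 0) {
        /* Child: shed all privileges before touching untrusted input.
         * 65534 ("nobody") stands in for a real unprivileged account. */
        close(pair[0]);
        if (setgid(65534) < 0 || setuid(65534) < 0)
            _exit(1);
        /* ... parse hostile network data; ask the parent, over
         * pair[1], for the few privileged operations needed ... */
        _exit(0);
    }

    /* Parent: keeps its privileges but never parses network input;
     * it only answers small, fixed-format requests from the child. */
    close(pair[1]);
    /* ... read and validate requests from pair[0] ... */
    return 0;
}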

I'm still looking for a copy of the presentation, but I was able to find a
slightly older rant he wrote that contains many of the same points:
http://www.bsdatwork.com/reviews.php?op=showcontent&id=2

Good reading, even if it's not very much practical help at this moment. :)

> Another thing that could help is to have software ask permission from
> some central authority before it gets to do dangerous things such as
> run services on UDP port 1434. The central authority can then keep
> track of what's going on and revoke permissions when it turns out the
> server software is insecure. Essentially, we should firewall on
> software versions as well as on traditional TCP/IP variables.

The problem there is the same as with windowsupdate - if one can spoof the
central authority, one instantly gains unrestricted access to not one, but
myriad computers. Now, if it were possible to implement this central
authority concept on a limited basis in a specific network area, I'd say that
deserved further consideration. So far, the closest thing I've seen to this
concept is the ssh administrative host model: adminhost:~root/.ssh/id_dsa.pub
is copied to every targethost:~root/.ssh/authorized_keys2, such that commands
can be performed network-wide from a single station. While I have used this
model with some success, it does face scalability issues in large
environments, and if your admin box is ever compromised ...

> And it seems parsing protocols is a very difficult thing to do right
> with today's tools. The SNMP fiasco of not long ago shows as much, as
> does the new worm. It would probably be a good thing if the IETF could
> build a good protocol parsing library so implementors don't have to do
> this "by hand" and skip over all that pesky bounds checking. Generating
> and parsing headers for a new protocol would then no longer require new
> code, but could be done by defining a template of some sort.

[snip]

It's the trust issue, again - trust is required at some point in most
security models. Defining who you can trust, and to what degree, and how/why,
and knowing when to revoke that trust, is a problem that has been stumping
folks for quite a while now. I certainly don't claim to have an answer to
that question. :)

> I'm still looking for a copy of the presentation, but I was able to
> find a slightly older rant he wrote that contains many of the same
> points: http://www.bsdatwork.com/reviews.php?op=showcontent&id=2
>
> Good reading, even if it's not very much practical help at this
> moment. :)

I'm reminded of the two men who were sent out to chop a whole lot of
wood. One judged the amount of work and immediately started, chopping
away until dark. The other stopped to sharpen his blade from time to
time. Despite the fact that he lost valuable chopping time this way, he
was home in time for dinner.

> Another thing that could help is to have software ask permission from
> some central authority before it gets to do dangerous things such as
> run services on UDP port 1434. The central authority can then keep
> track of what's going on and revoke permissions when it turns out the
> server software is insecure. Essentially, we should firewall on
> software versions as well as on traditional TCP/IP variables.

> The problem there is the same as with windowsupdate - if one can spoof
> the central authority, one instantly gains unrestricted access to not
> one, but myriad computers.

I didn't mean quite that central, but rather one or two of these boxes
for a small-to-medium sized organization. If there are different
servers authenticating and authorizing users on the one hand and
software/network services on the other, an attacker would have to
compromise both: the network AAA box to bypass the firewalls, and the
user AAA box to actually log on.

> It would probably be a good thing if the IETF could
> build a good protocol parsing library so implementors don't have to do
> this "by hand" and skip over all that pesky bounds checking. Generating
> and parsing headers for a new protocol would then no longer require new
> code, but could be done by defining a template of some sort.

> It's the trust issue, again - trust is required at some point in most
> security models.

This isn't a matter of trust, but a matter of well-designed and
well-tested software. If the RFC Editor publishes an RFC with the C
example code for a generic protocol handler library, this code will have
seen a lot of review, especially if people intend to actually use this
code in their products. Since this code will be so important and not
all that big, a formal correctness proof may be possible.

  He argued instead that OSes should be redesigned to implement the
  principle of least privilege from the ground up, down to the
  architecture they run on.

[...]

  The problem there is the same as with windowsupdate - if one can spoof the
  central authority, one instantly gains unrestricted access to not one, but
  myriad computers.

[...]

  So far, the closest thing I've seen to this concept is the ssh
  administrative host model: adminhost:~root/.ssh/id_dsa.pub is
  copied to every targethost:~root/.ssh/authorized_keys2, such that
  commands can be performed network-wide from a single station.

Do you even read what you write? How does a host with root access to
an entire set of hosts exemplify the least privilege principle?

matto

--mghali@snark.net------------------------------------------<darwin><
   Flowers on the razor wire/I know you're here/We are few/And far
   between/I was thinking about her skin/Love is a many splintered
   thing/Don't be afraid now/Just walk on in. #include <disclaim.h>

  > He argued instead that OSes should be redesigned to implement the
  > principle of least privilege from the ground up, down to the
  > architecture they run on.

  [...]

  > The problem there is the same as with windowsupdate - if one can
  > spoof the central authority, one instantly gains unrestricted access
  > to not one, but myriad computers.

  [...]

  > So far, the closest thing I've seen to this concept is the ssh
  > administrative host model: adminhost:~root/.ssh/id_dsa.pub is
  > copied to every targethost:~root/.ssh/authorized_keys2, such that
  > commands can be performed network-wide from a single station.

  Do you even read what you write? How does a host with root access to
  an entire set of hosts exemplify the least privilege principle?

Your selections from my post managed to obscure the fact that I was making
more than one point. I did _not_ state that the ssh key mgmt system outlined
above exemplifies least privilege. I was merely making a comparison between
that model and the topic under discussion, central
administrative/authenticating authorities. Additionally, the section higher
up regarding least privilege was in connection with OS design, and was quoted
from another author's presentation at ToorCon last year. You're stringing
together statements on disparate subjects and then jumping to conclusions.

Please do not put words into my mouth.

  Your selections from my post managed to obscure the fact that I was making
  more than one point. I did _not_ state that the ssh key mgmt system outlined
  above exemplifies least privilege. I was merely making a comparison between
  that model and the topic under discussion, central
  administrative/authenticating authorities.

So when windowsupdate does it, it's a problem, because they aren't
using ssh keys? I'm just confused, as they both seem to represent the
same model in your discussion, yet one is a "problem" and the other
is a suggested practice.

Is it because windowsupdate requires explicit action on each client
machine to operate?

I'm still missing whatever point you were trying to make in your
original post.

  Please do not put words into my mouth.

I'm not. I'm simply quoting ones coming from it.

matto

--mghali@snark.net------------------------------------------<darwin><
   Flowers on the razor wire/I know you're here/We are few/And far
   between/I was thinking about her skin/Love is a many splintered
   thing/Don't be afraid now/Just walk on in. #include <disclaim.h>

[snip]

  > So far, the closest thing I've seen to this concept is the ssh
  > administrative host model: adminhost:~root/.ssh/id_dsa.pub is
  > copied to every targethost:~root/.ssh/authorized_keys2, such that
  > commands can be performed network-wide from a single station.
  >
  > Do you even read what you write? How does a host with root access to
  > an entire set of hosts exemplify the least privilege principle?

  Your selections from my post managed to obscure the fact that I was making
  more than one point. I did _not_ state that the ssh key mgmt system outlined
  above exemplifies least privilege. I was merely making a comparison between
  that model and the topic under discussion, central
  administrative/authenticating authorities.

  So when windowsupdate does it, it's a problem, because they aren't
  using ssh keys? I'm just confused, as they both seem to represent the
  same model in your discussion, yet one is a "problem" and the other
  is a suggested practice.

When windowsupdate does it, it's more problematic because I have no way of
knowing what machine that is, who's controlling it ... I'm basically relying
on DNS. There's no strong crypto used for authentication there that I'm aware
of. Perhaps I'm misinformed. I consider the use of ssh keys I generated, from
machines I built, to be more trustworthy than relying on DNS as the
authentication mechanism.

  Is it because windowsupdate requires explicit action on each client
  machine to operate?

That's not necessarily true either. Anyway, my point was, windowsupdate has
been spoofed, and spoofing DNS is easier than trying to spoof or
man-in-the-middle an auth system that uses strong crypto. It's not perfect,
but it's better than relying solely on DNS.

(I can't seem to find the news article I'm thinking of, but I'm pretty sure
it's out there. I'll keep looking.)

  I'm still missing whatever point you were trying to make in your
  original post.

Go read it again then, and spare us all your lack of comprehension.

  > Please do not put words into my mouth.

  I'm not. I'm simply quoting ones coming from it.

You did indeed put words into my mouth - you wrote:

  IIRC, MS's patches have been digitally signed by MS, and their
  patching system checks these signatures silently. So, they will claim
  that compromised route info and/or DNS spoofing does not affect their
  correctness.

  Though, I'm not sure what will happen in a key-revocation situation.

Interesting side note ... the top of the page right now at http://www.ntk.net
details a similar problem facing MS in the UK currently. (Remember when they
forgot to renew hotmail.com, and some kind Linux geek fixed it for them?)
Well, apparently their entry in the Data Protection Register (UK) expired
January 8. This means all personal data held by them in the UK is now held
illegally (Passport, anyone?). I wonder if something like this would be
useful (or even possible) in the US, or if it would be just another
opportunity for bureaucratic bungling ...

<shnipp Data Protection stuff>
At least theoretically, the US *is* supposed to have a comparable system.
European privacy law makes it illegal to transfer personal data of any kind
to a country without a comparable system - the US has a voluntary "Safe
Harbor" scheme that is supposed to enable US companies to receive personal
data from Europe without the board of directors of the sending company
being arrested....
Mind you, none of this takes into account the web; based in the US, Passport
isn't subject to English law (but then, most American courts assume
Internet == American law anyhow).