'we should all be uncomfortable with the extent to which luck..'

david moore's analysis of code red: episode 0/1 is at
  
    http://www.caida.org/analysis/security/code-red/

[funded by DARPA's ITO office NGI/NMS programs,
NSF ANIR, and CAIDA members, david a caida PI]
  
definitely check out jeff brown's animation at bottom;
watch carefully around 15:00 for pretty ominous elbow
in infection rate (get an epidemiologist to look at it
without telling them what it is...)

360,000 machines (well, IP addresses) infected
in under 14 hours.
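
for intuition on that elbow: a random-scanning worm grows roughly
logistically, and the infection *rate* peaks right around the point
where half the vulnerable population is already infected. toy sketch
below (python); the 360,000 figure is from above, but the scan rate is
just my round-number guess, not david's fitted value:

# toy model, illustration only: each infected host fires random probes
# into the IPv4 space and converts any still-vulnerable host it hits.
# growth comes out logistic: slow start, sharp elbow, then saturation
# as the pool of uninfected vulnerable hosts runs out.

ADDRESS_SPACE = 2 ** 32    # IPv4 addresses a random scanner can pick from
VULNERABLE = 360000        # vulnerable population (the count above)
SCANS_PER_MIN = 360        # assumed probes per infected host per minute

infected = 1.0
history = []
for minute in range(14 * 60):                     # simulate 14 hours
    # chance one probe lands on a still-uninfected vulnerable host
    p_hit = (VULNERABLE - infected) / ADDRESS_SPACE
    new_infections = infected * SCANS_PER_MIN * p_hit
    infected = min(VULNERABLE, infected + new_infections)
    history.append((minute, infected, new_infections))

# the elbow: the minute with the highest infection rate, which for
# logistic growth falls near the half-infected point
minute, count, rate = max(history, key=lambda row: row[2])
print("infection rate peaks at ~minute %d with ~%.0f hosts infected"
      % (minute, count))

plot infected against minute and you get the same S-curve shape as the
animation; the printed peak is the elbow.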

from conclusion:

  //
  ..in the final analysis, we should all
  be uncomfortable with the extent to which luck,
  rather than proactive diligence, maintains the
  stability of the Internet infrastructure.
  //

it goes without saying that many hosts are still vulnerable.
and will likely remain so (to this or the next poison)
until our luck runs out. do we expect the next version
to have the two weaknesses christopher pointed out today?
do we expect the next version won't clear every 3rd bit on
the hard drive?

almost makes me wonder if some white hat might (should?) have
been behind CodeRed as some 'vaccination' attempt.

  "The bad news is, nobody will do anything about
   critical infrastructure protection until there's
   a global catastrophic failure," said Rasch.
   "The good news is, there will be a global catastrophic failure."

     -- http://www.nando.net/technology/story/44887p-694372c.html

the worse news is: protecting 'critical infrastructure'
is far from enough. again from
http://www.caida.org/analysis/security/code-red/

  This assault also demonstrates that machines operated by home
  users or small businesses (hosts less likely to be maintained
  by a professional sysadmin) are integral to the robustness of
  the global Internet. As is the case with biologically active
  pathogens, vulnerable hosts can and do put everyone at risk,
  regardless of the significance of their role in the population.

fwiw, caida trying to do gentle survey of patching speed,
see http://worm-security-survey.caida.org/

k

ps: john maddog hall (linux int'l) had a great slide a
     few months ago at UCSD talk; upshot something like

     INSTALLED BASE (EARTH)

       + 20 million linux systems
       + 450 million gates licenses
        ==> 4.4 - 6.6 % of the population total

     ... world population: ~6B

     ==> 5.4 billion people haven't selected an OS yet

[k: maybe we can get them on OS-antioxidants
before it's too late]

At the very least, this demonstrates that those who produce and
maintain operating system software and software in general (and in
particular, bundled software such as MS Office or, in this case, IIS)
need to provide more centralized methods of updating those packages
(i.e., all-in-one updates that can be more readily automated). Efforts
also need to be made to educate the public that they need to check for
software updates from time to time.
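
As a rough illustration of the kind of check an all-in-one updater could
automate: the sketch below polls a vendor-published "latest version" file
and flags hosts that are behind. Everything in it is hypothetical (the
URL, the file format, and the version numbers are made up for
illustration); it is not any vendor's real update mechanism.

# Hypothetical sketch: compare the installed version against a version
# file the vendor is assumed to publish at a well-known URL.
import urllib.request

UPDATE_URL = "https://updates.example-vendor.test/latest-version.txt"  # hypothetical
INSTALLED_VERSION = (5, 0, 2195)  # e.g. read from the local package database

def parse_version(text):
    # turn a dotted version string like "5.0.3110" into a comparable tuple
    return tuple(int(part) for part in text.strip().split("."))

def check_for_update():
    with urllib.request.urlopen(UPDATE_URL, timeout=10) as response:
        latest = parse_version(response.read().decode("ascii"))
    if latest > INSTALLED_VERSION:
        print("update available: %s -> %s; schedule a patch window"
              % (".".join(map(str, INSTALLED_VERSION)),
                 ".".join(map(str, latest))))
    else:
        print("up to date")

check_for_update()

Something this simple, run from a scheduler, would at least tell a home
user or part-time admin that a patch is waiting; the hard part is getting
vendors to publish updates in a form that can be applied that mechanically.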

Doing this, right now, can be difficult for many users to grasp (let's
face it, some software doesn't update well, if at all) and may require
more effort than even reputable administrators are willing to expend.

How to go about making the public more secure is, of course, an ongoing
debate and perhaps even a losing battle, but it is still worth the
effort.

Perhaps a different approach is in order -- product liability.

When Firestone made a large number of bad tires, they compensated the
purchasers by PAYING for replacement, including those that had not yet
been injured. That included the upgrade, and the installation cost.

Network operators have been injured by the distribution of buggy
software from M$. We need to be compensated for our time and expenses.

End users need to be compensated for their costs to upgrade.

A check in the mail would be a better incentive to administrators than
"automatic" updates.

"Wayne E. Bouchard" wrote:

* William Allen Simpson sez:

: Perhaps a different approach is in order -- product liability.
:
: When Firestone made a large number of bad tires, they compensated the
: purchasers by PAYING for replacement, including those that had not yet
: been injured. That included the upgrade, and the installation cost.

Will Chrysler pay me heaps of money if I break my jawbone in an accident
because I neglected to buckle up, turned the airbags off (the light on
the dashboard was kinda annoying), never drove a car before (or even
read a book on driving, for that matter) and nailed a stationary object
on the highway because no one ever told me about the functionality of
that middle pedal?

> Perhaps a different approach is in order -- product liability.
>
> When Firestone made a large number of bad tires, they compensated the
> purchasers by PAYING for replacement, including those that had not yet
> been injured. That included the upgrade, and the installation cost.
>
> Network operators have been injured by the distribution of buggy
> software from M$. We need to be compensated for our time and expenses.
>
> End users need to be compensated for their costs to upgrade.
>
> A check in the mail would be a better incentive to administrators than
> "automatic" updates.

  Please don't force a benefit on me that I don't want and still have to pay
for. I'd rather pay less for software and hold manufacturers responsible
only for deliberate damage.

  And if we force Microsoft, do we also force the authors of software that is
given away for free? It seems to me that even if Firestone gave away their
tires, their liability for defective ones would be the same.

  DS

The only way that administrators are going to be diligent about
patches/updates is for the bean counters to show the CTO/CIO what the bottom
line is for not installing updates when something like code red happens.
Then management will crack the whip and the administrators will have to
constantly search for updates.

Of course this is all subject to the Dilbert Principle and some companies
will get stupid about it:

CIO: "Why wasn't that patch installed as soon as it became available, that
problem brought us to our knees!!!!"

Administrator: "Well, the patch became available after the attack started
and since it brought us to our knees, I couldn't download the patch because
we had no connectivity and neither did our peers."

CIO: "From now on I want to see a report of all upcoming attacks 48 hours in
advance or you'll be looking for another job!"

Oh come on, you can't tell me that some of you don't work for people like
this.

Larry Diffey

> Perhaps a different approach is in order -- product liability.
>
> When Firestone made a large number of bad tires, they compensated the
> purchasers by PAYING for replacement, including those that had not yet
> been injured. That included the upgrade, and the installation cost.

The problem is, how many people believe MS puts out bad software? It
never ceases to amaze me that no matter how many IT shops I go through for
various reasons and no matter how many problems they've had with MS
software, they still consider it to be top notch. They don't even believe
there's a problem.

And with this latest threat of code red, Microsoft would have been covered
anyway, because a patch for this exploit existed well before CodeRed hit.
They released a patch for the indexing server on June 18, 2001, which as
you know is a full month before CodeRed. So, people had a MONTH to
prepare for something like this, and it's a sad statement that they did
not.

> Network operators have been injured by the distribution of buggy
> software from M$. We need to be compensated for our time and expenses.

And should Microsoft's "good name" be tarnished because you didn't update
with a security fix that they already had available a month in advance?

> A check in the mail would be a better incentive to administrators than
> "automatic" updates.

I think this is flawed. And furthermore, let me state that we're trying
to make this a technological problem, when ultimately it's a human one. A
human somewhere wrote some bad code. It happens, and continues to happen
on a daily basis. You'll find examples of it on sourceforge, on mailing
lists, and in commercial operating systems today, and I guarantee that
you'll see other examples tomorrow. Because as long as humans write code
and make silly mistakes you will continue to see security vulnerabilities.
It's not just a Microsoft problem. It's a Microsoft, Linux, *BSD,
Solaris, Cisco, <insert vendor name here> problem.

And then let's not forget that, as previously stated, CodeRed exploits a
known bug and that a vendor-provided patch was already in existence. The
problem is that too many admins were too lazy or ignorant and didn't
install the patch or implement the workaround to make them immune to this
bug. How would a check have helped them in this case?

Security requires vigilance, and there seems to be too little of it out in
the world.

Regards,

> The only way that administrators are going to be diligent about
> patches/updates is for the bean counters to show the CTO/CIO what the bottom
> line is for not installing updates when something like code red happens.

Not necessarily bean counters, as I've never seen one who could
understand that there is very little if any monetary ROI on security
products and services, but putting it in tangible terms that management
understands is always a good idea.

Sometimes it plays out like a comedy of errors. I used to work for a
company that took in revenue of several billion dollars a year, and who
relied heavily on their corporate image and "industry leader" status. For
them, it was as easy as showing them the value of not having your web page
appear at attrition.org or a story about your company being hacked on CNN.
This was our standard argument with management. "Buy this and allow us to
implement it, and the chance of us being a news item becomes a lot
smaller." Of course, then you also have to explain that this alone will
not make you immune to any compromise attempt. So, we got a site license
for an IDS package, becoming the specific vendor's largest licensee for
their IDS product. And we thought all was going well. Then we tried
requesting equipment to deploy the software package across the network,
and were told there was no justification for it. Apparently the
multimillion dollar site license was not justification for spending a
couple hundred thousand on hardware.

> Then management will crack the whip and the administrators will have to
> constantly search for updates.

Many vendors, including Microsoft, have security update announcement
lists. Then there's always a subscription to Bugtraq or their new
targeted security updates mailing list.

> Of course this is all subject to the Dilbert Principle and some companies
> will get stupid about it:

And in a perfect world these companies would start to suffer from clue
atrophy because of a talent exodus. I've certainly seen it happen. But,
with the job market the way it is, I think many of us would live with a
certain amount of management stupidity in exchange for a steady paycheck.
At this point, having been laid off and then unemployed for almost 5
months, working random contracts as they come up, I'd gladly deal
with some stupidity for medical benefits and a steady paycheck.

However, I think we might be straying from what could be considered
on-topic NANOG content.

Regards,

We did, and are quite amazed at how few others did.

None of *our* Win2k servers were affected (thanks to our NT admin's frequent overnight patchfests), but numerous customers were... most of this manifested as "your network is down" or "hi, we'd like an SLA refund" or "my web server keeps crashing, you guys sell hardware unworthy of a ghetto trash bin".

Windows is NOT easy to administer. Unix (any of 'em) is NOT easy to administer. You can NOT just install it and never think about it again. You MUST continually think about it, look for updates for it, apply updates (usually overnight, as many of them require a reboot, and some of them wedge the machine), and keep the server in operating condition.

Reality is in direct contrast to Microsoft's main advertising pitch. How many of you have seen the Win2k Datacenter commercial with the unmanned array of large machines, with the voiceover falling just short of saying you can fly to Mars and back without having to do any administration whatsoever?

How many affected customers think that, because of that, no resources need to be devoted to administering their much smaller servers?

How many probably still think that?

It made it through the firewall and didn't set off the virus scanner, so obviously it's not that bad, right?

Something that might help is PSA's -- you know, those radio spots that tell you never to shake babies, drive drunk, or keep a pile of old tires around. Perhaps it's time that everyone also knows keeping your servers secure is not only in everyone else's best interest, but your best interest as well. Awareness is a wonderful thing.

I'll throw in a couple bucks towards airtime. -rt

> The problem is, how many people believe MS puts out bad software? It
> never ceases to amaze me that no matter how many IT shops I go through for
> various reasons and no matter how many problems they've had with MS
> software, they still consider it to be top notch. They don't even believe
> there's a problem.

I think part of it is because it's a standard. Even if it's a low standard,
it still exists and that makes a big difference. Hell, I do a lot of
work to put on conferences several times a year (if any of you have been
to an I2 Joint Techs meeting, I was the guy hassling people for
presentations) and am in charge of presentation wrangling. I decided
quite a while ago that presentations had to be in PowerPoint 97 format.
This wasn't because I love PP97 or because I don't know about MagicPoint
or other presentation software. It's just that PP97 is relatively
universal, my admin staff can work on it (reviewing it for problems,
converting to HTML, whatever) without issues, and I know that in almost
all cases it will function as expected.

It's a crappy standard, but standards are useful. I'm not saying this is
where things should be or that the excesses and failures of Microsoft
are excusable. I'm simply being pragmatic.

> > A check in the mail would be a better incentive to administrators than
> > "automatic" updates.
>
> I think this is flawed.

I'm also not sure how the logic works. If MS had to send me a check
every time they screwed up and it possibly cost me some time, I'd never
install a patch.

> Because as long as humans write code
> and make silly mistakes you will continue to see security vulnerabilities.
> It's not just a Microsoft problem. It's a Microsoft, Linux, *BSD,
> Solaris, Cisco, <insert vendor name here> problem.

It's also just a problem of *never* being able to plan for all
possibilities in a test environment. It's impossible to do this. Hell,
most of the people doing research in networking are really just trying
to figure out what the hell we've actually created. The behaviour we see
in a lab, test network, or elsewhere doesn't necessarily predict how a
given piece of code will interact when released into the wild.

k claffy <kc@ipn.caida.org> writes:

   almost makes me wonder if some white hat might (should?) have been
   behind CodeRed as some 'vaccination' attempt.

  k,

     First, I thought you were going by kc :)

     Second, your analysis is flawed in that it makes the fundamental
  mistake of assuming that the ends justify the means. If you do not
  accept that notion, then no one who was behind CodeRed can be
  construed as a "white hat." Consider:

    1) A doctor of epidemiology notices that there is a potential for a
         large segment of the population to contract a particular
         disease - say the bubonic plague - which is on the rise again.

           Said doctor contacts the U.S. CDC in Atlanta, as well as
         raising awareness through normal media channels (television,
         magazines, newspapers, radio, slashdot, etc.).

           Said doctor, also having the wealth of Croesus - or being
         associated with the WHO, sets up free vaccination sites all
         across the world. Millions of people receive the vaccination
         and deaths (and much untold suffering) are avoided.

            No doubt, this doctor is a white hat.

    2) Another doctor of epidemiology notices the same situation, but
         takes a different tack. This doctor runs out into the street and
         begins to randomly inject the vaccine into the arms of
         passersby - taking great care to ensure that clean needles are
         used and the strictest handling procedures are followed - we
         wouldn't want to spread hepatitis or AIDS...

           This doctor is, undoubtedly, a black hat. The reason for
         this is that the doctor failed to follow one of the fundamental
         rules of civilized society - informed consent of the
         individuals receiving the vaccine.

           No doubt the intentions of the two doctors are the same. No
         doubt that they "mean well". No doubt one is a serious threat
         to the continuing health and welfare of those nearby.

    Now, if there were a tiger team that offered, as part of its
  services, to try to infect your system with CodeRed, then they would
  be operating in the role of white hats.

-jon

> > A check in the mail would be a better incentive to administrators than
> > "automatic" updates.
>
> I think this is flawed.

> I'm also not sure how the logic works. If MS had to send me a check
> every time they screwed up and it possibly cost me some time, I'd never
> install a patch.

That's because you're giving the check to the wrong person for the wrong
reason. If M$ had to shell out a check to everyone who was hit by their
errors (all the sysadmins and other people who've spent time cleaning up
after the web servers they don't run that caused them problems), then
it would incentivise M$ to not release such disastrously bad code.

Think of it as the payoffs to the family in the small car that the
Ford SUV crushed when it flipped after the Firestone tire blew out.
It's not that Ford/Firestone pay every customer when they screw up,
it's that Ford/Firestone are forced to either ACTIVELY resolve their
problems or face serious financial consequences in damages paid to
those they've harmed. Unfortunately, for some reason, we tolerate
software companies providing such bad products with no liability
whatsoever.

> Because as long as humans write code
> and make silly mistakes you will continue to see security vulnerabilities.
> It's not just a Microsoft problem. It's a Microsoft, Linux, *BSD,
> Solaris, Cisco, <insert vendor name here> problem.

> It's also just a problem of *never* being able to plan for all
> possibilities in a test environment. It's impossible to do this. Hell,
> most of the people doing research in networking are really just trying
> to figure out what the hell we've actually created. The behaviour we see
> in a lab, test network, or elsewhere doesn't necessarily predict how a
> given piece of code will interact when released into the wild.

While that is true for the current state of the art, it's also true, to
some extent, of testing vehicles. However, vehicle tests have gotten a
whole lot better because the product liability involved has placed an
emphasis on testing. Since software manufacturers have little or no
accountability in this regard, there is little advantage to them in
emphasising improvement in this area. The result: we continue to drive
software which careens out of control at the drop of a hat and wonder
why we have multi-server pileups on the information superhighway.

Owen