Level3 routing issues?

Dunno, aren't they negligent?

In any other industry a fundamental flaw would be met with lawsuits; in the
computer world, though, people seem to get a pass for some reason.

Not true; look at cars and recalls. Also, as I understand it, MS
issued a fix for this some time ago - it's the users who didn't implement it!

> Dunno, aren't they negligent?
>
> > In any other industry a fundamental flaw would be met with lawsuits; in the
> > computer world, though, people seem to get a pass for some reason.
>
> Not true; look at cars and recalls. Also, as I understand it, MS
> issued a fix for this some time ago - it's the users who didn't implement it!

Uh, lemme see if I get your argument. People who buy exploding cars from
Vendor M are at fault when the cars explode, since cars from Vendor M
always explode, and Vendor M always disclaims responsibility, since
someone usually points out in advance that the cars will explode?

I'm not sure that your argument has anything to do with the law or with
right and wrong, but in a social-Darwinism sort of way, I guess
I'd agree with it. Except the herds of losers who still buy exploding
crap from Vendor M don't seem to be thinning themselves out quickly
enough. Maybe they're sexually attractive to each other, and reproduce
before their stupidity kills them. That would be unfortunate. Or maybe
it's just that none of this computer stuff actually matters, so exploding
crap isn't actually fatal. Maybe that's it.

                                -Bill

Bill,

> I'd agree with it. Except the herds of losers who still buy exploding
> crap from Vendor M don't seem to be thinning themselves out quickly

dude, the Exploding Cars are so much easier to drive than the ones from
Vendor L. (tic)

> enough. Maybe they're sexually attractive to each other, and reproduce
> before their stupidity kills them. That would be unfortunate. Or maybe
> it's just that none of this computer stuff actually matters, so exploding
> crap isn't actually fatal. Maybe that's it.

I think it sucks that they are exploding on MY highway.

With that in mind, is it time yet to talk about solutions to problems like
this from the network point of view? Sure, it's easy to put up access lists
when needed, but I have 100 megs available to me on egress and I was trying to
push 450 megs. Is there anything, protocol-based, vendor-specific or otherwise,
that will not allow rogue machines to take up 100% of available resources at
will? I know Extreme Networks has the concept of max port utilization on their
switches; will this help? Suggestions?

-Scotty

That's not what he said.
To follow on from your analogy it should go:
Car manufacturer finds that under condition X a car explodes. Car vendor
has a mailing list that people who buy the car can sign up to. Car
vendor offers to fix this for free and notifies the mailing list.

Now it's up to the consumer NOT the vendor. If the consumer says "I
don't give a shit about things like this", how can you possibly hold the
manufacturer at fault for it not getting fixed?

To further torture analogies: So what type of vehicles ARE safe for the road, and for which roads? Taking a lawn tractor out on the Interstate surely is the fault of the driver, and not the manufacturer. At what point do folks figure out that putting production servers out on the Internet with no protection whatsoever is an invitation to abuse? Firewalls may not be perfect. Server software may not be perfect. Layering security can sure help.

It appears this worm only sought to annoy. Perhaps the next one that goes after the mass of unpatched MS SQL servers will instead take the opportunity to raid these servers for personal information? The opportunities for mass-scale identity theft are rather staggering.

What about doing some priority-based QoS? If a single IP exceeds X amount
of traffic, prioritize traffic above that threshold as low. It would keep
any one single host from saturating a link if the threshold is low.

For example, you may say that each IP is limited to 10 Mb of priority
traffic. Yes, a compromised host may try to barf out 90 Mb of chaff, but
the excess would be moved down the totem pole.

Obviously, this may not make sense in all environments, but in a campus or
large enterprise situation, I can see applying this on your WAN links in
particular.
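For what it's worth, the demote-rather-than-drop idea above could be prototyped along these lines. This is only a sketch, not anyone's actual config: the 10 Mb/s allowance, the burst size, and the queue names are made-up numbers, and a real deployment would express this as policing/QoS on the router rather than in software.

```python
# Sketch of per-source "demote, don't drop" policing. Each source IP gets a
# token bucket worth of priority credit; traffic beyond it is re-marked as
# low priority instead of being discarded outright. Numbers are illustrative.

import time
from collections import defaultdict

THRESHOLD_BPS = 10_000_000   # 10 Mbit/s of "priority" credit per source (assumption)
BURST_BITS = 2_000_000       # allow short bursts above the sustained rate

class Bucket:
    def __init__(self):
        self.tokens = BURST_BITS
        self.last = time.monotonic()

    def allow(self, bits):
        now = time.monotonic()
        # refill at THRESHOLD_BPS, capped at the burst size
        self.tokens = min(BURST_BITS, self.tokens + (now - self.last) * THRESHOLD_BPS)
        self.last = now
        if self.tokens >= bits:
            self.tokens -= bits
            return True
        return False

buckets = defaultdict(Bucket)

def classify(src_ip, packet_len_bytes):
    """Return the queue for a packet: 'normal' while the source is under its
    priority allowance, 'scavenger' once it has exceeded it."""
    if buckets[src_ip].allow(packet_len_bytes * 8):
        return "normal"
    return "scavenger"

# A compromised host trying to barf out 90 Mb/s only gets ~10 Mb/s queued
# normally; the rest rides the scavenger queue and is dropped first under
# congestion, so one rogue box can't saturate the link for everyone else.
```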

Bill,
From: "Bill Woodcock" <woody@pch.net>
> I'd agree with it. Except the herds of losers who still buy exploding
> crap from Vendor M don't seem to be thinning themselves out quickly

dude, the Exploding Cars are so much easier to drive than the ones from
Vendor L. (tic)

unfortunately (being a vendor L user myself) you must admit that these too
have problems :( (at times)

> enough. Maybe they're sexually attractive to each other, and reproduce
> before their stupidity kills them. That would be unfortunate. Or maybe
> it's just that none of this computer stuff actually matters, so exploding
> crap isn't actually fatal. Maybe that's it.

I think it sucks that they are exploding on MY highway.

With that in mind, is it time yet to talk about solutions to problems like
this from the network point of view? Sure, it's easy to put up access lists
when needed, but I have 100 megs available to me on egress and I was trying to
push 450 megs. Is there anything, protocol-based, vendor-specific or otherwise,
that will not allow rogue machines to take up 100% of available resources at
will? I know Extreme Networks has the concept of max port utilization on their
switches; will this help? Suggestions?

Keep in mind that these problems aren't from 'well behaved' hosts, and
'well behaved' hosts normally listen to ECN/tcp-window/RED/WRED....
classic DoS attack scenario. :(

Well, not everyone plays fair out there. I imagine this is built into SLAs
too, right? "My network will be up as long as everyone is well behaved"

I understand the evils, but are we really at the mercy of situations like
this? Of course we can firewall the common-sense things ahead of time, and
we can jump right in and block evil traffic when it happens, after it takes
down our network, but what sorts of things can we design into our networks
today to help with these situations?

-Scotty

Time for someone to fight the product liability disclaimers included
in the 'shrinkwrap' licenses.

  I do believe that there should be some sort of
initial grace period for the software industry.. they are
well-intentioned and not as old as the car industry.. but the
dire effects and lost sleep for some people need to eventually
be reckoned with. The grace period should probably be over
now and the industry declared 'mature and liable' for shoddy
software. If my car has a recall notice, I get a letter saying
"Dear sir, your gas tank may explode if used. Please come in for
our inspection." If they can keep track of those millions of cars
each year, at least somewhat, it should be simple to track who purchased
the software and send them a letter saying "get these patches now.."

  or perhaps they can do some agreement with AOL to include
all the latest patches in those CDs^H^H^HCoasters they send me.

  - Jared

> What about doing some priority-based QoS? If a single IP exceeds X amount
> of traffic, prioritize traffic above that threshold as low. It would keep
> any one single host from saturating a link if the threshold is low.
>
> For example, you may say that each IP is limited to 10 Mb of priority
> traffic. Yes, a compromised host may try to barf out 90 Mb of chaff, but
> the excess would be moved down the totem pole.

<snip>

Down the totem pole isn't off the totem pole. In most cases the issue wasn't
traffic priority. Most network equipment isn't designed to handle 100%
capacity from all ports. Under standard operation, maximum capacity is never
reached. It is cost prohibitive to support it. In addition, this was a dual
issue. Not only did the bandwidth saturate, the packets were so small that in
reaching for 100% saturation, many routers and switches first exceeded their
maximum pps thresholds. The best defense is to monitor and know your
traffic. When traffic becomes uncommon, someone needs to be alerted. A 30%
processor increase is not a good thing, ever. Second, know the optimizations
for your particular equipment and code. Each piece of equipment has its own
optimizations. In my case, it was better to access-list at the router level
than to run bandwidth limiting, and I run a crummy 7200. It's even nicer on
a 7500+ where it's offloaded to the linecard processors. If a portion of the
network or a specific port is unrecoverable, shut it down. The server won't
be able to handle traffic anyways, and it is better to cut off a portion of
the network than lose the entire network.

Jack Bates
Network Engineer
BrightNet Oklahoma
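A minimal sketch of the "monitor and know your traffic" advice above, assuming you already collect pps and CPU samples from your gear (e.g. via SNMP polling); the window size and thresholds here are illustrative assumptions, not recommendations from the thread.

```python
# Alert when traffic becomes "uncommon": a sudden pps jump relative to a
# recent baseline, or a large CPU increase between samples.

import statistics

HISTORY = []        # recent (pps, cpu) samples, e.g. one per minute
WINDOW = 60         # how many samples make up the baseline (assumption)
PPS_FACTOR = 3.0    # alert when pps reaches 3x the recent median (assumption)
CPU_JUMP = 30.0     # alert on a 30-point CPU increase, per the note above

def check(sample, prev_cpu):
    """sample = (pps, cpu_percent); returns a list of alert strings."""
    alerts = []
    pps, cpu = sample
    if len(HISTORY) >= WINDOW:
        baseline = statistics.median(h[0] for h in HISTORY[-WINDOW:])
        if baseline > 0 and pps > PPS_FACTOR * baseline:
            alerts.append(f"pps {pps:.0f} is {pps / baseline:.1f}x baseline {baseline:.0f}")
    if prev_cpu is not None and cpu - prev_cpu >= CPU_JUMP:
        alerts.append(f"CPU jumped {cpu - prev_cpu:.0f} points to {cpu:.0f}%")
    HISTORY.append(sample)
    return alerts
```

The point is simply that the alert fires on deviation from your own baseline, not on an absolute number, which is what "know your traffic" amounts to in practice.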

> Well, not everyone plays fair out there. I imagine this is built into SLAs
> too, right? "My network will be up as long as everyone is well behaved"

You know that customers won't behave. Prepare for it.

> I understand the evils, but are we really at the mercy of situations like
> this? Of course we can firewall the common-sense things ahead of time, and
> we can jump right in and block evil traffic when it happens, after it takes
> down our network, but what sorts of things can we design into our networks
> today to help with these situations?

If a customer is infected, then the problem is on their end. The fact that
they don't have throughput is their issue, not the provider's. As
for collateral damage, proper monitoring of the entire network and early
warning systems allow engineers to hopefully stop the problem before it goes
critical. The spool-up on this worm was massive and affected some networks
too fast to prevent them from going critical. However, tracking and resolution
should easily have been within the SLA windows.

My policy: Hmm, I'm not sure. *ring* Dude, wake up. It's a critical outage.
The whole network is collapsing. Think! *rambles for 5 minutes* Oh, wait.
Never mind, I got it. Go back to sleep. Thanks.

Jack Bates
Network Engineer
BrightNet Oklahoma

> If a customer is infected, then the problem is on their end. The fact that
> they don't have throughput is their issue, not the provider's.

Many, many customers don't understand this - if they don't have throughput, it's the provider's problem and the provider has to fix it. One of the reasons I'm not in the provider business anymore.

> As for collateral damage, proper monitoring of the entire network and early
> warning systems allow engineers to hopefully stop the problem before it goes
> critical. The spool-up on this worm was massive and affected some networks
> too fast to prevent them from going critical. However, tracking and resolution
> should easily have been within the SLA windows.

I've seen various references to this worm firing off and saturating networks worldwide within 1 minute... if *that* isn't scary, I don't know what is. It shows that someone with the right tools and enough vulnerable servers can take out a good portion of the Internet in seconds. And how can we predict *every* possible issue and block it?

> My policy: Hmm, I'm not sure. *ring* Dude, wake up. It's a critical outage.
> The whole network is collapsing. Think! *rambles for 5 minutes* Oh, wait.
> Never mind, I got it. Go back to sleep. Thanks.

I think there's only so much one can do in advance. Sure, we all know we shouldn't have these servers exposed, but again, many are in the position of having to leave them open to some extent - case in point, I have a developer who uses dialup (because he's in the sticks in northern Georgia, and nothing else is available, and he's a skinflint who uses the free or nearly-free dialup providers)... he's also not going to use a VPN... he'll just bitch because he can't get to the server.

More cases where you do what you have to... a couple of years ago, when I *was* doing the provider bit... I blocked the NetBIOS ports on the border. You have no idea what a cry went up from customers... they *want* to share drives over the Internet, and didn't care what risks might be involved. It was, to them, too complicated and/or expensive to do it via a VPN.

So I ended up having to open them back up, but kept them blocked to my own machines. Sometimes the best you can do is explain the risks, and then let the customer do what they will. Until they're causing problems... of course at that point you can cut 'em off (how many of you shut down customer boxen last night?).

I'm no great thinker, and having said that, I'm just not sure we can protect everything/everybody.

> Keep in mind that these problems aren't from 'well behaved' hosts, and
> 'well behaved' hosts normally listen to ECN/tcp-window/RED/WRED....
> classic DoS attack scenario. :(

> I understand the evils, but are we really at the mercy of situations like
> this? Of course we can firewall the common-sense things ahead of time,

I don't think this one could have been reasonably firewalled using a
non-stateful firewall (such as a simple router access list): the port is
unprivileged, so it will be used as a source port for regular UDP traffic
such as DNS queries. However, rate limiting UDP would have helped. This
is a reasonable thing to do for customers that have a lot of bandwidth
but don't run high-bandwidth UDP protocols.
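As an aside, the per-customer UDP limit suggested above might look roughly like this if modeled in software. This is a sketch under assumptions: the one-second window, the byte budget, and keying on a "customer port" identifier are all made up here, and in practice this would be a policer in router configuration rather than code.

```python
# Per-customer UDP rate limiting: TCP is left alone; UDP above a per-second
# byte budget for a given customer port is dropped.

import time
from collections import defaultdict

UDP_BUDGET_BYTES = 1_250_000   # roughly 10 Mbit/s of UDP per customer, illustrative

state = defaultdict(lambda: {"window": int(time.time()), "used": 0})

def permit(customer_port, proto, length):
    """Return True if the packet should be forwarded."""
    if proto != "udp":
        return True                    # only UDP is limited
    s = state[customer_port]
    now = int(time.time())
    if now != s["window"]:             # new one-second window: reset the budget
        s["window"], s["used"] = now, 0
    if s["used"] + length > UDP_BUDGET_BYTES:
        return False                   # over budget: drop
    s["used"] += length
    return True
```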

> we can jump right in and block evil traffic when it happens, after it takes
> down our network, but what sorts of things can we design into our networks
> today to help with these situations?

Rate limit everything you can rate limit, and make sure your routers and
switches have enough CPU even if interfaces are saturated with
minimum-sized packets to random destinations. But this type of rDoS
(reversed denial of service) is easy: you can simply filter the
offending systems. If it's the other way around (a DoS aimed at you),
there is not much you can do.

To really solve this we need a mechanism for destination hosts to
authorize source hosts to send data in such a way that intermediate
routers/firewalls can check this authorization and drop unauthorized
packets.
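No such mechanism exists as described, so purely to illustrate the idea: a toy version might have the destination hand an authorized source an HMAC-based "capability" that on-path routers can verify without per-flow state. The shared key, the token format, and how the token would ride in packets are all hand-waved assumptions here, not a real protocol.

```python
# Toy capability check: the destination authorizes a source by issuing a
# token; a router sharing the key can verify it and drop unauthorized packets.

import hmac
import hashlib

SHARED_KEY = b"example-key-distributed-out-of-band"   # assumption

def issue_token(src_ip: str, dst_ip: str) -> bytes:
    """Destination (or its agent) authorizes src_ip to send to dst_ip."""
    return hmac.new(SHARED_KEY, f"{src_ip}->{dst_ip}".encode(), hashlib.sha256).digest()

def router_permits(src_ip: str, dst_ip: str, token: bytes) -> bool:
    """An intermediate router forwards only packets carrying a valid token."""
    expected = hmac.new(SHARED_KEY, f"{src_ip}->{dst_ip}".encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, token)

# A worm-infected host that was never authorized has no token, so its packets
# can be dropped in the network instead of at the victim's saturated link.
```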

"dave" == Dave Stewart <dbs@dbscom.com> writes:

> I've seen various references to this worm firing off and
> saturating networks worldwide within 1 minute... if *that* isn't
> scary, I don't know what is. It shows that someone with the
> right tools and enough vulnerable servers can take out a good
> portion of the Internet in seconds. And how can we predict
> *every* possible issue and block it?

Exactly!! This is why the Right Answer (TM) is to get end-users to
secure their systems and networks so that an attacker can't get a
critical mass of hosts in 1 minute (or even 1 month). You can only do
so much on the ISP networks. At some point, everyone needs to admit
that it's impossible for us to win this battle as long as people are
allowed to not care about the security of their systems.

I still remember the despair I felt at how successful the sadmind worm
was with Solaris and Windows vulnerabilities that were over 2 years
old. Hell, that was a long time ago, and I bet there are still
systems on the Internet that have those vulnerabilities. I mean, that's
negligence if anything is.

> I think there's only so much one can do in advance. Sure, we
> all know we shouldn't have these servers exposed, but again,
> many are in the position of having to leave them open to some
> extent - case in point, I have a developer who uses dialup
> (because he's in the sticks in northern Georgia, and nothing
> else is available, and he's a skinflint who uses the free or
> nearly-free dialup providers)... he's also not going to use a
> VPN... he'll just bitch because he can't get to the server.

Note that in the case of a worm, a VPN could work against you. If you
have all the right filters in place at your "perimeter" and yet let
your employees in through a VPN solution of some sort, you could still
be screwed if one of their home systems gets infected somehow.

IMHO,
Michael

Maybe the underlying theme is that, for whatever reasons (market
pressures, business idiocy?), we find ourselves on a network that's
largely a collection of monoculture hosts -- win32 on x86.

--Tk

> Note that in the case of a worm, a VPN could work against you. If you
> have all the right filters in place at your "perimeter" and yet let
> your employees in through a VPN solution of some sort, you could still
> be screwed if one of their home systems gets infected somehow.

So what you're saying is that a really good worm could infiltrate any secure
network by targeting those who VPN in from exterior sources, collect data, and
then run? Hmmm. Wait a sec. Would that constitute a worm if it had purpose?

Jack Bates
Network Engineer

> Maybe the underlying theme is that, for whatever reasons (market
> pressures, business idiocy?), we find ourselves on a network that's
> largely a collection of monoculture hosts -- win32 on x86.

It's been a while, but both sendmail and Cisco routers themselves have had
their worms that pointed out this very same issue. Apache had its worm,
although it craftily targeted only Red Hat and variants despite others being
vulnerable. win32 wasn't even around during the original worms that
threatened entire networks.

I'm not a win32 fan (ignore the fact that I'm sending this from my win32
gaming system), but it has never been a matter of a specific OS, hardware
platform, or software package. It is a matter of awareness. Many people were
caught off guard by this attack because their networks weren't designed to
guard against inside threats and still be manageable. They won't be caught
unprepared twice. Vendors are slowly learning what to test for and protect
against. Consumers are starting to realize (a little bit) that they have an
impact on the 'net. But seriously, what's the virus policy of many
providers? Are infected accounts cancelled until fixed? Does a provider
think about its contribution to the network as a whole or just the big
bucks?

Jack Bates
BrightNet Oklahoma

> Note that in the case of a worm, a VPN could work against you. If you
> have all the right filters in place at your "perimeter" and yet let
> your employees in through a VPN solution of some sort, you could still
> be screwed if one of their home systems gets infected somehow.

> So what you're saying is that a really good worm could infiltrate any secure
> network by targeting those who VPN in from exterior sources, collect data, and
> then run? Hmmm. Wait a sec. Would that constitute a worm if it had purpose?

This is not correct. A VPN simply extends the security policy to a different
location. A VPN user must make sure that local security policy prevents
other traffic from entering the VPN connection.

Alex

Alex, although that's technically correct, it's not practical. How many end
users VPN in from home from, say, a public IP on their DSL modem, leaving
themselves open to attack but now also having this connection back to the
"secure" inside network? Has anyone heard of any confirmed cases of this
yet?

> Alex, although that's technically correct, it's not practical. How many end
> users VPN in from home from, say, a public IP on their DSL modem, leaving
> themselves open to attack but now also having this connection back to the
> "secure" inside network? Has anyone heard of any confirmed cases of this
> yet?

So then they are using the wrong tool. Using the wrong security tool tends to
bite one in the <censored>.

Yes, I have seen attacks mounted via VPNs. They work like a charm.

Alex