[arin-announce] IPv4 Address Space (fwd)

> > As network operators, we move bits, and that is what we should stick to
> > moving.
> >
> > We do not look into packets and say "oh look, this looks to me like evil
> > application traffic", and we should not do that. It should not be the goal
> > of the IS to enforce policy for the traffic that passes through it. That
> > type of enforcement should be left to the ES.
>
> Well, that is a nice theory, but I'd like to see how you react to a 2Gb DoS
> attack and whether you really intend to put filters at the edge or would not
> prefer to do it at the entrance to your network. The Slammer virus is just
> like a DoS, which is why many are filtering it at the highest possible
> level as well as at all points where traffic comes in from the customers.

Actually, no, it is not theory.

When you are slammed with N gigabits/sec of traffic hitting your network, if
you do not have enough capacity to deal with the attack, no amount of
filtering will help you, since by the time you apply a filter it is already
too late - the incoming lines have no place for "non-evil" packets.

And how many people here operate non-oversubscribed networks?
I mean completely non-oversubscribed end to end; every end
customer link's worth of capacity is reserved through the
network from the customer edge access point, to the aggregation
routers, through the core routers and backbone links out to
the peering points, down to the border routers, and out through
the peering ports?

I've worked at several different companies, and none of them
have run truly non-oversubscribed networks; the economics just
aren't there to support doing that.

So having 3 Gb of DoS traffic coming across a half dozen
peering OC48s isn't that bad; but having it try to fit onto
a pair of OC48s into the backbone that are already running
at 40% capacity means you're SOL unless you filter some of
that traffic out. And I've been in that situation more
times than I'd like to remember, because you can't justify
increasing capacity internally from a remote peering point
into the backbone simply to be able to handle a possible
DoS attack. Even if you _do_ upgrade capacity there, and
you carry the extra 3Gb of traffic from your peering links
through your core backbone, and off to your access device,
you suddenly realize that the gig port on your access device
is now hosed. You can then filter the attack traffic out
on the device just upstream of the access box, but then
you're carrying it through your core only to throw it away
after using up backbone capacity; why not discard it sooner
rather than later, if you're going to have to discard it
anyhow?
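
The squeeze described above is easy to quantify. A rough back-of-the-envelope,
assuming an OC-48 carries roughly 2.488 Gb/s (all figures illustrative,
matching the scenario in the text):

```python
# Back-of-the-envelope for the scenario above (figures are illustrative).
OC48 = 2.488  # Gb/s, approximate line rate of an OC-48

peering_capacity = 6 * OC48           # half a dozen peering OC-48s
backbone_capacity = 2 * OC48          # a pair of OC-48s into the backbone
existing_load = 0.40 * backbone_capacity  # "already running at 40% capacity"

attack = 3.0  # Gb/s of DoS traffic

headroom = backbone_capacity - existing_load
print(f"Peering can absorb the attack: {attack <= peering_capacity}")
print(f"Backbone headroom: {headroom:.2f} Gb/s")
print(f"Attack fits in backbone: {attack <= headroom}")
```

The attack spread across six peering OC-48s is under a quarter of that
capacity, but the two backbone OC-48s at 40% load leave just under 3 Gb/s of
headroom, so the 3 Gb attack tips them over.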

> Leave content filtering to the ES, and *force* ES to filter the content.
> Let IS be busy moving bits.
> Alex

I think you'll find very, very few networks can follow
that model; the IS component almost invariably has some
level of statistical aggregation of traffic occurring
that forces packet discard to occur during heavy attack
or worm activity. And under those circumstances, there
is a strong preference to discard "bad" traffic rather
than "good" traffic if at all possible. One technique
we currently use for making those decisions is looking
at the type of packets; are they 92 byte ICMP packets,
are they UDP packets destined for port 1434, etc.
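
That decision amounts to matching packets against a short list of signatures
on protocol, length, and destination port. A minimal sketch of the idea; the
two signature values (92-byte ICMP, port 1434 probes as used by Slammer) come
from the text above, while the function name and sample packets are
illustrative, not a real router ACL:

```python
# Toy sketch of the triage described above: prefer to discard traffic
# matching known worm/DoS signatures. Only the signature values come
# from the text; everything else here is illustrative.

def looks_evil(proto, length, dst_port):
    if proto == "icmp" and length == 92:     # the 92-byte ICMP signature
        return True
    if proto == "udp" and dst_port == 1434:  # Slammer-style port 1434 probe
        return True
    return False

for pkt in [("icmp", 92, None), ("udp", 404, 1434), ("tcp", 1500, 80)]:
    action = "drop" if looks_evil(*pkt) else "forward"
    print(pkt, "->", action)
```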

I'd be curious to see what networks you know of where the
IS component does *no* statistical aggregation of traffic
whatsoever. :)

Matt

> And how many people here operate non-oversubscribed networks?

The right question here should be "How many people here operate non-super-
oversubscribed networks?" Oversubscribed by a few percent is one thing;
oversubscribed the way a certain cable company in NEPA does it is another.[1]

> So having 3 Gb of DoS traffic coming across a half dozen
> peering OC48s isn't that bad; but having it try to fit onto
> a pair of OC48s into the backbone that are already running
> at 40% capacity means you're SOL unless you filter some of
> that traffic out.

Why does your backbone have only two OC48s that are 40% utilized if you have
half a dozen peering OC48s that can easily take those 3Gb/sec?

> And I've been in that situation more times than I'd like to remember,
> because you can't justify increasing capacity internally from a remote
> peering point into the backbone simply to be able to handle a possible DoS
> attack.

This means that the PNIs of such a network are full already. So we are back to
the super-oversubscribed issue.

> Even if you _do_ upgrade capacity there, and you carry the extra 3Gb of
> traffic from your peering links through your core backbone, and off to
> your access device, you suddenly realize that the gig port on your access
> device is now hosed. You can then filter the attack traffic out on the
> device just upstream of the access box, but then you're carrying it
> through your core only to throw it away after using up backbone capacity;
> why not discard it sooner rather than later, if you're going to have to
> discard it anyhow?

Because you do not know which traffic is "evil" and which traffic is
"good".

> And under those circumstances, there is a strong preference to discard
> "bad" traffic rather than "good" traffic if at all possible. One technique
> we currently use for making those decisions is looking at the type of
> packets; are they 92 byte ICMP packets, are they UDP packets destined for
> port 1434, etc.

And this technique presumes that the backbone routers know which packets
their customers want to go through and which ones they do not. Again, this
is not the job of backbone routers. It is a kludge and should be accepted
as a kludge.

> I'd be curious to see what networks you know of where the IS component
> does *no* statistical aggregation of traffic whatsoever. :)

The example that you are using is not based on statistical traffic
aggregation. Rather, it is based on an arbitrary decision about what is good
and what is bad traffic (just like certain operators that claimed that DHS
ordered them to block certain ports).

> Matt

Alex

[1] Bring in three T1s of IP. Sell service to several hundred cable
customers.
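
For a sense of scale, the footnoted setup works out to a huge ratio. A rough
calculation; the three T1s come from the footnote, while the customer count
(the text says only "several hundred") and the per-customer cable tier are
assumed for illustration:

```python
# Rough oversubscription ratio for the footnoted setup. The T1 count
# comes from the text; the customer count and per-customer rate are
# assumptions for illustration.
T1 = 1.544            # Mb/s per T1
uplink = 3 * T1       # three T1s of IP transit
customers = 300       # "several hundred" (assumed)
per_customer = 1.5    # Mb/s cable tier (assumed)

demand = customers * per_customer
ratio = demand / uplink
print(f"Sold: {demand:.0f} Mb/s over a {uplink:.2f} Mb/s uplink "
      f"-> roughly {ratio:.0f}:1 oversubscription")
```

Even under these conservative assumptions the ratio is on the order of 100:1,
which is the "super-oversubscribed" case being contrasted with a few percent.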