Is there a line of defense against Distributed Reflective attacks?

Having researched this in depth after reading a rather cursory article
on the topic (http://grc.com/dos/drdos.htm), I can think of only two
main methods to protect against it.

There are a few more methods; some have already been mentioned,
including something called pushback. Very few solutions, particularly
elegant ones, are widely deployed today.

At some point, sophisticated (or even not so sophisticated) DoS
attacks can be hard to distinguish from valid traffic, particularly
if they are widely distributed and the attack traffic looks as valid
as any other bit of traffic.

By way of quick review, such an attack is carried out by forging the
source address of the target host and sending large quantities of
packets toward a high-bandwidth middleman or several such.

It doesn't have to be forged; that step just makes it harder to
trace back to the original source. There are some solutions that
try to deal with this, including an IETF working group called
itrace. UUNET also developed something called CenterTrack. BBN
has something called Source Path Isolation Engine (SPIE). There
are probably other things I'm forgetting, but they are generally
similar in concept to these.

To my knowledge the network encompassing the target host is largely
unable to protect itself other than 'poisoning' the route to the host in
question. This succeeds in minimizing the impact of such an attack on

This is true; the survivability of the victim largely depends on
the security of everyone else, which is what makes solving the
problem so exceptionally difficult.

the network itself, but also achieves the end of removing the target
host from the Internet entirely. Additionally, if the targeted host is
a router, little if anything can be done to stop that network from going
down.

I'm not sure I fully understand what you're saying here, but a router
can effectively be taken out of service just as any other end host or
network can, by simply overwhelming it with packets to process (for itself
or to be forwarded).

One method that comes to mind that can slow the incoming traffic in a
more distributed way is ECN (explicit congestion notification), but it
doesn't seem as though the implementation of ECN is a priority for many
small or large networks (correct me if I'm wrong on this point). If ECN

ECN cannot be an effective solution unless you trust that all edge hosts,
including the attacking hosts, will use it. Since it is a mechanism
that is used to signal transmitting hosts to slow down, attackers can
choose not to implement ECN or to ignore ECN signals. Unless you could
control all the end hosts, and as long as there is intelligence in
the end hosts that a user could modify, this won't help.

is a practical solution to an attack of this kind, what prevents its
implementation? Lack of awareness, or other?

It is still fairly new and not widely deployed. Routers not only need
to support it, they also have to have it enabled. It is a fairly
significant change to the way congestion control is currently done in
the Internet and it will take some time before penetration occurs.
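
Just to make the earlier point concrete, here is a toy sketch (plain
Python, with made-up numbers; not a protocol implementation) of why ECN
only slows down senders that choose to cooperate:

    # Toy model: a router CE-marks (instead of dropping) when offered load
    # exceeds capacity; a compliant sender halves its rate on a mark, an
    # attacker simply ignores the signal.

    class Sender:
        def __init__(self, honors_ecn, rate=100):
            self.honors_ecn = honors_ecn
            self.rate = rate                      # packets per tick

        def feedback(self, ce_marked):
            if ce_marked and self.honors_ecn:     # TCP-like reaction to a mark
                self.rate = max(1, self.rate // 2)

    def congested(offered, capacity=120):
        """True if the router would CE-mark during this tick."""
        return offered > capacity

    good, attacker = Sender(True), Sender(False)
    for tick in range(5):
        mark = congested(good.rate + attacker.rate)
        good.feedback(mark)
        attacker.feedback(mark)
        print(f"tick {tick}: good={good.rate} pps, attacker={attacker.rate} pps")

    # The well-behaved flow backs off to a trickle; the attacker's rate
    # never changes, which is exactly the problem described above.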

Also, are there other methods of protecting a targeted network from
losing functionality during such an attack?

Many are reactive, often because you can't know what a DoS is until
it's happening. In that case, providers can use BGP advertisements
to blackhole hosts or networks (though that can essentially finish
the job the attacker started). If attacks target a DNS name, the
end hosts can change their IP address (though DNS servers may still
get pounded). If anything unique about the attack traffic can be
determined, filters or rate limits can be placed as close to the
sources as possible to block it (and that fails as attack traffic
becomes increasingly dispersed and identical to valid traffic). If
more capacity than attack traffic uses can be obtained, the attack
could be ignored or mitigated (but this might be expensive and
impractical). If the sources can be tracked, perhaps they can be
stopped (but large numbers of sources make this a scaling issue and
sometimes not all responsible parties are as cooperative or friendly
as you might like). There is also the threat of legal response, which
could encourage networks and hosts to stop and prevent attacks in the
future (this could have negative impacts for the openness of the net
and potentially be difficult to enforce when multiple jurisdictions
are involved).

From a proactive standpoint, hosts could be secured to prevent an
outsider from using them for attack. The sorry state of system
security doesn't seem to be getting better and even if we had perfect
end system security, an attacker could still use their own system(s)
to launch attacks. Eventually it all boils down to a physical
security problem. Pricing models can be used to make it expensive
to send attack traffic. How to do the billing and who to bill
might not be so easy. ...and there may always be a provider who
charges less. Rate limits can be used on a per source, per protocol
or per flow basis. Given enough hosts and not enough deployment in
the network, this has yet to be effective. Similarly, network-based
queueing mechanisms (e.g. RED), or the pushback approaches already
mentioned, which penalize or limit high-rate flows, are not widely
deployed yet.
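
As a rough sketch of what the "per source" flavor looks like (the names
and numbers here are made up for illustration; the real thing lives in
router forwarding hardware, not Python):

    import time
    from collections import defaultdict

    RATE = 1000    # packets per second allowed per source (made-up figure)
    BURST = 2000   # bucket depth

    buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

    def allow(src_ip):
        """Token bucket per source IP: True if the packet is in profile."""
        b = buckets[src_ip]
        now = time.monotonic()
        b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
        b["last"] = now
        if b["tokens"] >= 1:
            b["tokens"] -= 1
            return True
        return False        # drop, mark or re-queue the packet

    # allow("192.0.2.7") stays True until that source exceeds ~1000 pps.

The scaling problem mentioned above shows up even in the toy: you need
state per source, and an attack spread across enough sources may never
trip any single bucket.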

It often takes a combination of techniques used by as many people
as possible: good incident response teams at as many responsible
network operators as possible, and continued, quicker deployment of
best practices to as many network operators, end users and systems
developers as possible.

...or you architecturally change the Internet, or you don't use the Internet.
For example, go back to dumb end systems and place all the control
into the network operated by a select few (e.g. the traditional
telephone model). You potentially lose all the good properties the
current architecture has and you aren't going to get everyone to
change with you anytime soon.

I highly recommend Bruce Schneier's book Secrets and Lies, which
applies to so much of all these problems and gives you a lot more
to think about in a much more readable way than what I have said
here. It is especially insightful with regards to all the non-
technical problems and non-technical responses to the problems.

John

Having researched this in depth after reading a rather cursory article
on the topic (http://grc.com/dos/drdos.htm), I can think of only two
main methods to protect against it.

There are a few more methods; some have already been mentioned,
including something called pushback. Very few solutions, particularly
elegant ones, are widely deployed today.

At some point, sophisticated (or even not so sophisticated) DoS
attacks can be hard to distinguish from valid traffic, particularly
if they are widely distributed and the attack traffic looks as valid
as any other bit of traffic.

I have been thinking about this for a while for a number of reasons. But if we look at the sources of the attacks and the effects of the attacks, I would draw the following conclusions:

a) Unless we fix the "end-system" faults that are used for exploits, the only way that will scale to handle attacks is simply to make the victims redundant, so that you can lose one and lose service for some customers, while still providing service for the remaining customers.

b) In the short to medium term, the only strategy that will work is to sacrifice some parts of your service (or host, or customers - depending on your role and the type of attack / victim).

Even with the pushback model, ordinary users will lose to some extent. So what would be needed is a model where the loss of bandwidth for end users is projected onto the revenue numbers of the service being attacked. Right?

is a practical solution to an attack of this kind, what prevents its
implementation? Lack of awareness, or other?

It is still fairly new and not widely deployed. Routers not only need
to support it, they also have to have it enabled. It is a fairly
significant change to the way congestion control is currently done in
the Internet and it will take some time before penetration occurs.

Well, you also need to find another "way" (or buffer, or slowdown) to send the traffic, which in a way is also a successful attack.

to launch attacks. Eventually it all boils down to a physical
security problem. Pricing models can be used to make it expensive

By physical security I would assume you mean actual physical access to the system. Anything else to me is "logical" or "system" security. Correct?

- kurtis -

---SNIP---

It doesn't have to be forged; that step just makes it harder to
trace back to the original source. There are some solutions that
try to deal with this, including an IETF working group called
itrace. UUNET also developed something called CenterTrack. BBN

Wow, again.. CenterTrack is a nice idea, but not feasible in a large-scale
network... Aside from this, why would you tunnel 100kpps of attack traffic
anyway? Why not just drop it, find the source, and ACL it there?

has something called Source Path Isolation Engine (SPIE). There

This would be cool to see a design/whitepaper for.. Kelly?

  --- SNIP ---

ECN cannot be an effective solution unless you trust that all edge hosts,
including the attacking hosts, will use it. Since it is a mechanism
that is used to signal transmitting hosts to slow down, attackers can
choose not to implement ECN or to ignore ECN signals. Unless you could
control all the end hosts, and as long as there is intelligence in
the end hosts that a user could modify, this won't help.

Attacking hosts never behave nicely and rarely follow RFCs :) This is
another reason that things like rate limits are only minutely effective at
stopping DoS attacks in a meaningful manner. (Unless you just want to
rate-limit all ICMP or something, which is a fine solution in some
instances; Jared@verio has written on this already.)

---SNIP ---

Many are reactive, often because you can't know what a DoS is until
its happening. In that case, providers can use BGP advertisements
to blackhole hosts or networks (though that can essentially finish
the job the attacker started). If attacks target a DNS name, the

This is true, a blackhole does finish the attacker's job, but consider that
a very high number of attacks are on hosts with DNS names like:
dekadens.ghettot.org, death.hackmania.net, you.know.you.wanna.rapebob.com,
DEATHCRUSH.COM, which are obviously just vhosts on a shell box. In these
cases no one really cares if the IP is blackholed, least of all the person
that owns the IP; he just wants to get back on his channel :)

end hosts can change their IP address (though DNS servers may still
get pounded). If anything unique about the attack traffic can be

Almost all DoS tools will take an IP number for whom/what to attack; very
few will take a hostname and resolve it, and even then only once. NONE
resolve for each packet (or none today in normal use)... So, rotating to
a new IP number and dropping the attacked one is still a valid fix,
provided your TTL isn't more than a few minutes long.

determined, filters or rate limits can be placed as close to the
sources as possible to block it (and that fails as attack traffic
becomes increasingly dispersed and identical to valid traffic). If
more capacity than attack traffic uses can be obtained, the attack
could be ignored or mitigated (but this might be expensive and
impractical). If the sources can be tracked, perhaps they can be
stopped (but large numbers of sources make this a scaling issue and
sometimes not all responsible parties are as cooperative or friendly
as you might like). There is also the threat of legal response, which
could encourage networks and hosts to stop and prevent attacks in the

Legal response to the kiddies has never shown a marked improvement in
their behaviour. Much like the death penalty... its just not a deterrent,
perhaps because its not enforced on a more regular basis, perhaps because
no one thinks about that before they attack.

future (this could have negative impacts for the openness of the net
and potentially be difficult to enforce when multiple jurisdictions
are involved).

From a proactive standpoint, hosts could be secured to prevent an
outsider from using them for attack. The sorry state of system
security doesn't seem to be getting better and even if we had perfect
end system security, an attacker could still use their own system(s)
to launch attacks. Eventually it all boils down to a physical

This is something else that bears some thought. Why all this discussion
about 'ISPs should fix this' when 99% of the time it's an end-system
problem? Why not push back all this legal rhetoric on the system
manufacturers? Why not hold them responsible for the shoddy code and
workmanship? How is this any different from Ford and their exploding gas
tanks? (yes.. bad example since it was the news crew that made them
explode)

security problem. Pricing models can be used to make it expensive
to send attack traffic. How to do the billing and who to bill
might not be so easy. ...and there may always be a provider who
charges less. Rate limits can be used on a per source, per protocol
or per flow basis. Given enough hosts and not enough deployment in
the network, this has yet to be effective. Similarly, network
based queueing mechanisms (e.g. RED), or pushback approaches already

While Steve Bellovin's idea for pushback is nice, I'm not sure it's all
that practical. I don't see that it's helpful if it turns off services
'automatically' :( Any automated security fix is subject to the classic
"Oh no, ebay is attacking me now" syndrome :( People really do have to
interact for security incidents, and those people really do need a high
degree of clue.

mentioned, which penalize or limit high rate flows are not widely
deployed yet.

(see above, is this what you really want?)

On Fri, Jan 17, 2003 at 06:38:08PM +0000, Christopher L. Morrow mooed:

> has something called Source Path Isolation Engine (SPIE). There

This would be cool to see a design/whitepaper for.. Kelly?

The long version of the SPIE paper is at:

  http://nms.lcs.mit.edu/~snoeren/papers/spie-ton.html

The two second summary that I'll probably botch: SPIE keeps a (very tiny)
hash of each packet that the router sees. If you get an attack packet,
you can hand it to the router and ask "From where did this come?"
And then do so to the next router, and so on. The beauty of the scheme
is that you can use it to trace single-packet DoS or security attacks
as well as flooding attacks. The downside is that it's hardware.
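
To make that slightly more concrete, here's a rough sketch of the
digesting idea (parameters and field selection are simplified; the real
thing hashes only the invariant header fields plus a payload prefix,
keeps a filter per time window, and does it in hardware):

    import hashlib

    class PacketDigest:
        """Bloom-filter-style digest: answers 'did this packet pass here?'"""
        def __init__(self, bits=1 << 20, hashes=3):
            self.bits, self.hashes = bits, hashes
            self.filter = bytearray(bits // 8)

        def _positions(self, pkt):
            for i in range(self.hashes):
                h = hashlib.sha256(bytes([i]) + pkt).digest()
                yield int.from_bytes(h[:8], "big") % self.bits

        def record(self, pkt):            # done for every forwarded packet
            for p in self._positions(pkt):
                self.filter[p // 8] |= 1 << (p % 8)

        def maybe_saw(self, pkt):         # the traceback query
            return all(self.filter[p // 8] & (1 << (p % 8))
                       for p in self._positions(pkt))

    spie = PacketDigest()
    spie.record(b"invariant-header-fields+payload-prefix")
    print(spie.maybe_saw(b"invariant-header-fields+payload-prefix"))  # True
    print(spie.maybe_saw(b"some other packet"))  # almost certainly False

The win is that a router can answer the question from a few megabits of
digest memory per interval instead of storing packets, at the price of a
small false-positive rate.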

  -Dave

This sounds like Steve Bellovin's thing called 'ICMP traceback' where you
make up a new ICMP type message and send that query through the system,
hop by hop... though I say that after only reading your blurb, not the
paper :)

As I recall, the ICMP thing (that might NOT have been all Steve, I just
heard him present it once) had problems from a memory and processing
perspective, not to mention 'no router does this today', so it's a feature
addition 3 years off... never mind the protocol additions :)
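
For what it's worth, my rough recollection of the sampling idea, as a
sketch (the one-in-20,000 probability is the figure I remember from the
draft; the message fields and the emit_traceback hook are just stand-ins):

    import random

    SAMPLE_PROB = 1.0 / 20000

    def forward(packet, router_id, emit_traceback):
        """Forward a packet; rarely, emit a traceback message toward its destination."""
        if random.random() < SAMPLE_PROB:
            emit_traceback({
                "router": router_id,            # "router X saw this traffic"
                "dst": packet["dst"],           # sent toward the victim
                "claimed_src": packet["src"],   # which may well be forged
            })
        # ... normal forwarding happens here ...

    # Over a million-packet flood a victim would expect ~50 such messages
    # per on-path router, enough to reconstruct the path, but it needs the
    # extra state/processing and router support complained about above.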

I think John was referring more to legal action against networks and
hosts used in the attack.

Without getting too much into the likelihood of any legal body actually
understanding anyone's role in an attack besides the attacker and the
victim, in this land where tobacco companies are sued by smokers who
get lung cancer and fast food restaurants are sued by fat people there
must be room for such cases as:

"XYZ Corp cost me $5mil in lost business. They were negligent in
securing their (network|host) from being used as a DoS attack tool
despite being informed of such by us both before and during said
attack."

Perhaps this would cause companies to take security more seriously?

Have there been any such cases to date? Did they win?

-c

I guess the question in all of this is maybe... what could be done to
perhaps minimize the impact of DoS attacks pointed at a victim host?

Getting everyone to take security more seriously will most likely never
happen.. :(

-hc

> has something called Source Path Isolation Engine (SPIE). There
This would be cool to see a design/whitepaper for.. Kelly?

In addition to David's link:

  <http://www.ir.bbn.com/projects/SPIE/>

> mentioned, which penalize or limit high rate flows are not widely
> deployed yet.

(see above, is this what you really want?)

I happen to like the idea of using something like a RED queue that can
more aggressively drop traffic that is 'out of profile' in times of
congestion. Like most things, this probably really works best at the
edges of the network, but my gut feeling is that it can be a relatively
fair and elegant approach. However, it doesn't really solve the DoS
problem; it is really just trying to solve a congestion problem, but it
may have some nice side effects.

For example, I'm planning on trying out some new features from our
border router vendor, where we set a more aggressive RED drop profile
per source IP within our netblock when the source exceeds a configured
transmission rate. The basic idea is to get the sources offering high
load to slow down in times of high usage/congestion. Hopefully they
use TCP, but if not, perhaps drop even more aggressively? If the
capacity is there, high-load sources get through.
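
Roughly, the intended behavior is something like this sketch (the
thresholds, the rate cutoff and the 'measured source rate' input are
made-up stand-ins for whatever the vendor's knobs turn out to be):

    NORMAL_MAX_DROP = 0.10       # gentle RED curve for in-profile sources
    AGGRESSIVE_MAX_DROP = 0.80   # steep curve for sources over the rate
    RATE_THRESHOLD = 5_000_000   # configured per-source rate, bits/sec

    def drop_probability(queue_fill, source_rate):
        """queue_fill in [0,1] = average queue occupancy; source_rate in bps."""
        if queue_fill < 0.25:                        # below min threshold
            return 0.0
        max_p = (AGGRESSIVE_MAX_DROP if source_rate > RATE_THRESHOLD
                 else NORMAL_MAX_DROP)
        # RED-style linear ramp between the min and max queue thresholds
        return min(max_p, max_p * (queue_fill - 0.25) / (1.0 - 0.25))

    # At 60% average queue fill, a 20 Mb/s source sees ~37% drops while an
    # in-profile source sees ~5%; with no congestion, nobody gets dropped.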

So, this doesn't stop attacks, but it tries to keep some valid data flowing
through a limited egress pipe, or in other words, tries to provide some
fairness between multiple sources in times of high load. Of course, if
everyone hits the ENTER key at the same time this doesn't work, but
hopefully statistical multiplexing is working as well as it always has
for us.

John

I guess the question in all of this is maybe... what could be done to
perhaps minimize the impact of DoS attacks pointed at a victim host?

Everyone take security more seriously, have some in-house security clue,
deal with incidents in a timely manner with a decent response... it's about
due diligence, eh?

Getting everyone to take security more seriously will most likely never
happen.. :(

If this is the case then we are screwed... I hope it's not the case. I hope
that the customer service folks at ISPs/NSPs and the NOC and Engineering
folks all keep this in their minds and push their upper management to start
doing the right thing. It really doesn't cost that much, and it's certainly
cheaper than the cost of outages or lost revenue when your business is
DoS'd, eh?

Without getting too much into the likelihood of any legal body actually
understanding anyone's role in an attack besides the attacker and the
victim, in this land where tobacco companies are sued by smokers who
get lung cancer and fast food restaurants are sued by fat people there
must be room for such cases as:

"XYZ Corp cost me $5mil in lost business. They were negligent in
securing their (network|host) from being used as a DoS attack tool
despite being informed of such by us both before and during said
attack."

....and I always thought the US legal system was flawed.....where do I file? :)

- kurtis -