The first step is effective emergency response. I have seen hours pass
while secret handshakes were exchanged and the people on the "right list"
were located and made the right calls. People start their generators on a
planned basis to make sure
they work. How many people practice DDOS attack recovery? It's something you
can actually do today that will help the most in a real attack.
In general, companies with effective "all-hazard" contingency plans
survive natural, malicious, and procedural-error problems the best. Not
only do the teams get more practice, because they handle more incidents,
but their wider range of experience helps them deal with the unusual,
unexplained, and unexpected. When a failure happens, often it is not
clear what the root cause is. I'm not a fan of the NIPC's approach of
focusing on "cybersecurity." Starting with malicious activity seems to delay
effective response. First put the fire out, then figure out if it was
arson (rather simplistic, and not completely true).
As you point out, testing those plans on a regular basis is needed. You
might be surprised, but not everyone starts their generators on a planned
basis. Or if they do have some automated clock schedule that runs the
engine, no one ever checks whether the engine ran or can support a load.
The more reliable the service, the less the backup procedures are tested.
But as folks find out, it only takes one accident to pay for all of
that extra stuff.
I haven't run a NOC in over a year, and I still get asked by people to
pass messages between other NOCs' engineers who can't figure out how to
get through each other's front lines because they've never called each
other before. While it works, everyone agrees it's not how it should work.
This is a malicious attack designed to cause failure, so I think that any
measures of the style discussed will only save you from the small attacks. Not
that avoiding some attacks isn't good, it's just not much help in the general
case. I think the element of malice makes this much harder to plan for
than storm water or freeway traffic. I have no proof that large sites fare
better
than small ones. They can handle more, but attract much more serious attacks
with much more glory for the perpetrators.
I don't think there is a 100% solution, but the motivation for the
attacks can help a bit. Most of the attacks that make the news seem
to have an opportunistic motivation. I know the underground is going
to think this is trash-talking, but many of the attacks seem to show
the intelligence of natural phenomena. Blow hard until something
falls down. The attackers don't seem to care what falls down.
I like a good hack. But doing the same thing over and over again is not
one. If I knew of more targeted attacks, I might change my mind.
Many people are pinning their hopes on traffic flow tagging as the way to
manage/solve this in the long term. No one knows yet when it can be
deployed at a large enough scale to work, and what it will take then to
handle it.
I'm hoping you realize the spectacular failure mode of your DNS proposal.
You DOS the DNS in-addr servers and the whole site goes away. That should
take a lot less traffic than swamping the pipes. DNS has enough problems
digesting all the things it's trying to do now (DNSSEC, v6, large
packets...); no need to make it worse.
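As an aside, that particular dependency is easy to bound: make the reverse
lookup fail open, so a dead in-addr server slows you down by a timeout
rather than taking the site away. A sketch in Python; the timeout value
and the admit-on-failure policy are my assumptions, not anything from the
proposal being criticized:

```python
import socket
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

_resolver = ThreadPoolExecutor(max_workers=4)

def reverse_lookup(ip, timeout=0.5):
    """PTR lookup that cannot hang the caller when in-addr servers are down."""
    future = _resolver.submit(socket.gethostbyaddr, ip)
    try:
        return future.result(timeout=timeout)[0]
    except (FutureTimeout, OSError):  # herror/gaierror are OSError subclasses
        return None  # fail open: no name, but keep serving

def admit(ip):
    """Admit the connection whether or not the reverse lookup answered."""
    name = reverse_lookup(ip)
    # A real service would log `name`; the admission decision must not
    # depend on it, or the in-addr servers become a single point of failure.
    return True
```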
I did say conservative transmission mode, not cease transmission mode. Only
after a positive request to stop transmitting should it throttle things to
zero. But the devil is in the details: is there a way, once alerted, for
an edge device to probe the data stream and distinguish an uncontrolled
flood from a large data transfer? What is large today may be ordinary in a
few years. Without a working IPSEC and DNSSEC, how would you know the
source of that UDP packet is authoritative? And I didn't want to suggest
RSVP might be
useful for something.
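To make the distinction concrete, here is a minimal sketch of what I mean
by conservative versus cease: back off on each alert, go to zero only on a
positive stop request. The starting rate, the floor, and the halving policy
are all assumed numbers for illustration, not a spec:

```python
class ConservativeSender:
    """Conservative transmission mode: throttle on alerts, stop only on request."""

    def __init__(self, rate_pps=1000, floor_pps=10):
        self.rate = rate_pps    # current send rate, packets/sec (assumed numbers)
        self.floor = floor_pps  # conservative mode never drops below this trickle
        self.stopped = False

    def on_congestion_alert(self):
        # Back off multiplicatively: slow a legitimate transfer, don't kill it.
        if not self.stopped:
            self.rate = max(self.floor, self.rate // 2)

    def on_stop_request(self):
        # Only a positive request to stop transmitting throttles things to zero.
        self.stopped = True
        self.rate = 0
```

Any number of alerts in a row leaves the sender at the 10 pps floor; only
the explicit stop request takes it to zero.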
You're correct, source quench has been proposed as the solution for a lot
of problems, but it always seems to be worse than the disease. And since
it would require a change in the source's behavior, it doesn't really meet
my stated problem.
IP stacks are getting smarter. It seems like I need a distributed
response to a distributed attack. Well-behaved traffic shouldn't be
affected. Senders with poor manners should be put in a queue with other
ill-mannered senders.
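That per-sender idea could be sketched as a two-queue scheduler: senders
within a token-bucket allowance share the fast queue, everyone over their
allowance shares one slow queue. The rates, bucket depth, and queue
discipline here are my assumptions, purely for illustration:

```python
import time
from collections import deque

class PenaltyScheduler:
    """Well-behaved senders share the fast queue; the rest share one slow queue."""

    def __init__(self, rate=10.0, burst=20.0):
        self.rate, self.burst = rate, burst  # refill (tokens/sec), bucket depth
        self.tokens, self.last = {}, {}      # per-sender token-bucket state
        self.fast, self.slow = deque(), deque()

    def enqueue(self, sender, packet, now=None):
        now = time.monotonic() if now is None else now
        elapsed = now - self.last.get(sender, now)
        self.tokens[sender] = min(self.burst,
                                  self.tokens.get(sender, self.burst)
                                  + elapsed * self.rate)
        self.last[sender] = now
        if self.tokens[sender] >= 1.0:
            self.tokens[sender] -= 1.0
            self.fast.append((sender, packet))  # within allowance
        else:
            self.slow.append((sender, packet))  # penalty box

    def dequeue(self):
        # The fast queue always drains first; penalized senders wait together.
        q = self.fast or self.slow
        return q.popleft() if q else None
```

Nothing here identifies an attack; it only makes bad manners expensive
while leaving well-behaved traffic untouched.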
Concentrating on self-defense at the target seems limited, and not very
effective. Again, trying to find a historical framework: you can't build
a castle so strong it can't be breached. So who are you gonna call for
help?
The Bell Heads came up with a partial solution for the telephone network,
call-gapping. The network operator can quickly instruct most of the edge
switches in the network to rate limit traffic to a particular destination,
regardless of the root cause. It could be people calling about an
earthquake, wanting to be on Who Wants to Be a Millionaire, or a
telephone-based DDOS attack. Yes, Virginia, it is easier to take down a
circuit-based network with a DDOS than "the" phone company likes to admit.
Try calling
a major airline's reservation number from O'Hare airport in the middle
of a snowstorm.
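A toy version of the mechanism, one admitted call per gap interval per
gapped destination, regardless of why the destination is hot. The interval
and the API names are mine, not how any real switch exposes this:

```python
import time

class CallGapper:
    """Per-destination call gapping at an edge switch."""

    def __init__(self):
        self.gaps = {}     # destination -> gap interval in seconds
        self.next_ok = {}  # destination -> earliest time the next call passes

    def gap(self, dest, interval):
        # Operator control: start gapping calls toward this destination.
        self.gaps[dest] = interval
        self.next_ok.setdefault(dest, 0.0)

    def admit(self, dest, now=None):
        if dest not in self.gaps:
            return True  # not gapped: every call passes
        now = time.monotonic() if now is None else now
        if now >= self.next_ok[dest]:
            self.next_ok[dest] = now + self.gaps[dest]
            return True
        return False  # gapped: give this call fast-busy
```

Gapping the airline's number at one call per five seconds lets a trickle
through while shedding the flood, whatever its cause.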
But again, this would require changes outside the control of the target of
the attack. There is risk the operator will make a mistake implementing
the process, making things worse. It's cheaper to tell the customer to move