Over a decade of DDoS -- any progress yet?

The February 2000 attacks weren't the first DDoS attacks, but hitting multiple well-known sites at once did raise DDoS's visibility.

What progress has been made during the last decade at stopping DDoS attacks?

Smurf attacks, which create a DDoS from directed-broadcast replies, seem to have been mostly mitigated by changing the defaults in the major router OSes.

TCP SYN attacks, which create a DDoS by leaving many half-open connections, seem to have been mostly mitigated with SYN cookies or similar OS changes.
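For readers who haven't seen the mechanism, here is a minimal Python sketch of the SYN-cookie idea; the secret, hash, and bit layout are simplified assumptions, not the actual kernel implementation. The server encodes the connection parameters into its own initial sequence number instead of allocating a half-open entry, and only allocates state when the final ACK echoes back a valid cookie.

```python
import hashlib
import time

SECRET = b"per-boot-random-secret"   # assumed; real stacks use rotating keyed secrets

def make_syn_cookie(src_ip, src_port, dst_ip, dst_port, client_isn):
    """Encode connection state into our ISN so no half-open entry is stored."""
    t = int(time.time()) >> 6                      # coarse timestamp (64 s buckets)
    msg = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{client_isn}-{t}".encode()
    h = int.from_bytes(hashlib.sha256(SECRET + msg).digest()[:3], "big")
    return ((t & 0xFF) << 24) | h                  # 32-bit ISN: timestamp + MAC

def check_ack_cookie(src_ip, src_port, dst_ip, dst_port, client_isn, ack_seq):
    """Validate the cookie echoed in the final ACK; only then allocate state."""
    cookie = ack_seq - 1                           # the ACK acknowledges our ISN + 1
    t_now = int(time.time()) >> 6
    for t in (t_now, t_now - 1):                   # allow one timestamp bucket of skew
        msg = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{client_isn}-{t}".encode()
        h = int.from_bytes(hashlib.sha256(SECRET + msg).digest()[:3], "big")
        if cookie == (((t & 0xFF) << 24) | h):
            return True
    return False

# Example: compute a cookie for a SYN, later validate the echoed ACK.
isn = make_syn_cookie("198.51.100.5", 40000, "203.0.113.10", 80, client_isn=12345)
print(check_ack_cookie("198.51.100.5", 40000, "203.0.113.10", 80, 12345, isn + 1))  # True
```

The point is that a spoofed SYN costs the attacker one packet but costs the server no memory; only a completed handshake, which requires a reachable (non-spoofed) source, creates state.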

Other than buying lots of bandwidth and scrubber boxes, have any other DDoS attack vectors been stopped or rendered useless during the last decade?

Spoofing?

Bots?

Protocol quirks?

If anything, the potential is worse now than it ever has been unless you
have just ridiculous amounts of bandwidth, as the ratio between leaf-user
connectivity and data-center drops has continued to close. The finger of
packety death may be rare, but it is more powerful than ever; just ask
Wikileaks, which I believe was subject to 10 Gbit/s+ at times.

At least the frequency has dropped in recent years, if not the amplitude,
and I am thankful for that, due in no small part to what you list above,
as it mostly requires compromised bots to perform major attacks now, instead
of the many unwitting, non-compromised hosts that used to be available as
assists across the internet.

Besides having *a lot* of bandwidth, there's not really much you can do to
mitigate. Once you have the bandwidth you can filter (with good hardware).
Even if you go for 802.3ba with 40/100 Gbps, you'll need a lot of pipes.

Spoofed attacks have declined significantly, probably because of the use of
RPF. However, we still see them from time to time.
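For context, strict-mode uRPF drops packets whose source address doesn't route back out the interface they arrived on. Here is a simplified sketch of that check; the toy FIB and interface names are assumptions, and real implementations have extra knobs such as loose mode and allow-default:

```python
import ipaddress

# Toy FIB: prefix -> interface the route points out of (assumed values)
FIB = {
    ipaddress.ip_network("203.0.113.0/24"): "ge-0/0/1",   # customer LAN
    ipaddress.ip_network("0.0.0.0/0"): "ge-0/0/0",        # default toward upstream
}

def lookup(src):
    """Longest-prefix match for the source address."""
    best = None
    for prefix, ifname in FIB.items():
        if src in prefix and (best is None or prefix.prefixlen > best[0].prefixlen):
            best = (prefix, ifname)
    return best[1] if best else None

def strict_urpf_permit(src_ip, ingress_if):
    """Strict uRPF: accept only if the route back to the source points
    out of the interface the packet arrived on."""
    return lookup(ipaddress.ip_address(src_ip)) == ingress_if

# A packet arriving on the customer port with a spoofed source gets dropped:
print(strict_urpf_permit("203.0.113.7", "ge-0/0/1"))   # True  (legitimate)
print(strict_urpf_permit("198.51.100.9", "ge-0/0/1"))  # False (spoofed -> drop)
```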

TCP SYN attacks are still quite frequent; these can push a lot of pps at
times.

The attack vectors have changed. Years ago people used hacked *nix boxes
with big pipes to start their attacks, as only those had enough
bandwidth. Nowadays consumers have a lot more bandwidth and it's
easier than ever to set up your own botnet by infecting users with
malware and the like. Even though end users usually have less than 2 Mbps
upstream, the sheer number of infected users makes it worse than ever.
Most of the time (depending on the attack) it's also hard to
differentiate which IP addresses are attacking and which belong to legitimate
users.

I do not see a real solution to this problem right now; there's not much
you can do about the unwillingness of users to keep their software/OS
up to date and to deploy anti-virus/anti-malware software (and keep that
up to date as well).
Some approaches have been tried, like cutting off internet access for users
who have been identified by ISPs as being members of some
botnet or as being infected.
This might be the only long-term solution. There is
just no patch for human stupidity.

These .pdf presos pretty much express my view of the situation, though I do need to rev the first one:

<https://files.me.com/roland.dobbins/y4ykq0>

<https://files.me.com/roland.dobbins/k54qkv>

<https://files.me.com/roland.dobbins/j0a4sk>

The bottom line is that there are BCPs that help, but many folks don't seem to deploy them; and little or no thought at all is given to maintaining availability when it comes to server/service/app architecture and operations, except by the major players who've been through the wringer and invest the time and resources to increase their resilience to attack.

Of course, the fundamental flaws in the quarter-century old protocol stack we're running, with all the same problems plus new ones carried over into IPv6, are still there. Couple that with the brittleness, fragility, and insecurity of the DNS & BGP, and the fact that the miscreants have near-infinite resources at their disposal, and the picture isn't pretty.

And nowadays, the attackers are even more organized and highly motivated (organized crime, financial/ideological) and therefore more highly incentivized to innovate, the tools are easy enough for almost anyone to make use of, and the services/apps they attack are now of real importance to ordinary people.

So, while the state of the art in defense has improved, the state of the art and resources available to the attackers have also dramatically improved, and the overall level of indifference to the importance of maintaining availability is unchanged - so the overall situation itself is considerably worse, IMHO. The only saving grace is that the bad guys often make so much money via identity theft, click-fraud, spam, and corporate/arm's-length governmental espionage that they'd rather keep the networks/services/servers/apps/endpoints up and running so that they can continue to monetize them in other ways.

Besides having *a lot* of bandwidth, there's not really much you can do to
mitigate. Once you have the bandwidth you can filter (with good hardware).
Even if you go for 802.3ba with 40/100 Gbps, you'll need a lot of pipes.

There is a variation on that theme. Using a distributed architecture (anycast, CDN, whatever), you can limit the attack to certain nodes. If you have 20 nodes and get attacked by a botnet in China, only the users on the same node as the Chinese users will be down. The other 95% of your users will be fine. This is true even if you have 1 Gbps per node and the attack is 100 Gbps strong.
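The arithmetic behind that claim is simple; a quick sketch using the numbers above, assuming the whole flood is drawn to a single node:

```python
# Quick arithmetic behind the "95% of users are fine" claim (numbers from the
# paragraph above; assumes the attack traffic all lands on one anycast node).
nodes = 20
affected_fraction = 1 / nodes            # only the node nearest the botnet melts
print(f"{(1 - affected_fraction) * 100:.0f}% of users unaffected")   # 95%
```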

Spoofed attacks have declined significantly, probably because of the use of
RPF. However, we still see them from time to time.

I disagree. Spoofed attacks have declined because the botnets do not need to spoof to succeed in some attacks. RPF is woefully inadequately applied.

For attacks which do require spoofing, it is still trivial to generate tens of Gbps of spoofed packets.

I do not see a real solution to this problem right now; there's not much
you can do about the unwillingness of users to keep their software/OS
up to date and to deploy anti-virus/anti-malware software (and keep that
up to date as well).
Some approaches have been tried, like cutting off internet access for users
who have been identified by ISPs as being members of some
botnet or as being infected.
This might be the only long-term solution. There is
just no patch for human stupidity.

Quarantining end users sounds like a good idea to me. But I Am Not An ISP. :slight_smile:

The idea of auto-updates at the OS level like in iOS (as opposed to big-I "IOS") may be a solution for many people. Supposedly OSX is going that route. But there will be those who do not want to get their software -only- through a walled garden like iTunes.

Fortunately, the motivations do have some alignment. The users who do not need full access to their machines are the ones who are more likely to get confused & infected, and the ones who want someone to "protect" them more, which makes OS-level auto-update more appealing. So that may help, even if it is not a panacea.

Wish us luck!

I think this is only true if you run your BGP session on a different
path (or have your provider pin down a static route). If you are
using BGP and run it on the same path, the 100 Gbps will cause massive
packet loss and will likely cause your BGP session to drop, which just
moves the attack to another site; rinse and repeat. I don't think very
many people run BGP over a separate circuit, but for some folks it
might be appropriate.

I also recommend folks anycast with a /22 or /23, use BGP for
the /23 or /24 announcements, and have their provider pin down the /22
at a few sites, so that if all hell breaks loose and the /23 or /24 is
flapping and being dampened you still have reachability via the
covering prefix. It also lets you harden and strengthen the few smaller
sites that have the /22 statically pinned down. I'm not sure whether
people think the "cost" of doing this is worth it; the jury is still out for
us.
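To make the covering-prefix idea concrete, here is a toy longest-prefix-match sketch; the prefixes and site names are made-up assumptions, not anyone's real deployment. Traffic normally follows the anycast /24 more-specifics; if those are withdrawn or dampened during an attack, the statically pinned /22 still matches and keeps the service reachable at the hardened sites.

```python
import ipaddress

# Hypothetical routing table: anycast /24 more-specifics plus a pinned /22 covering route.
routes = {
    ipaddress.ip_network("192.0.0.0/22"): "static-pinned-site",   # covering prefix
    ipaddress.ip_network("192.0.2.0/24"): "anycast-site-A",       # BGP more-specific
    ipaddress.ip_network("192.0.3.0/24"): "anycast-site-B",       # BGP more-specific
}

def best_route(dst, table):
    """Longest-prefix match, as a router's FIB lookup would do."""
    candidates = [(p, nh) for p, nh in table.items() if ipaddress.ip_address(dst) in p]
    return max(candidates, key=lambda c: c[0].prefixlen)[1] if candidates else None

print(best_route("192.0.2.10", routes))          # anycast-site-A (normal operation)

# Simulate the /24s being withdrawn or dampened during an attack:
degraded = {p: nh for p, nh in routes.items() if p.prefixlen != 24}
print(best_route("192.0.2.10", degraded))        # static-pinned-site (still reachable)
```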

But as you and others have pointed out, not a lot of defense against
DDoS these days besides horsepower and anycast. :slight_smile:

-David

Besides having *a lot* of bandwidth, there's not really much you can do to
mitigate. Once you have the bandwidth you can filter (with good hardware).
Even if you go for 802.3ba with 40/100 Gbps, you'll need a lot of pipes.

There is a variation on that theme. Using a distributed architecture (anycast, CDN, whatever), you can limit the attack to certain nodes. If you have 20 nodes and get attacked by a botnet in China, only the users on the same node as the Chinese users will be down. The other 95% of your users will be fine. This is true even if you have 1 Gbps per node and the attack is 100 Gbps strong.

I think this is only true if you run your BGP session on a different
path (or have your provider pin down a static route).

You are assuming many things - such as the fact that BGP is used at all.

But yes, of course you have to ensure the attack traffic does not move when you get attacked, or you end up with a domino effect that takes out your entire infrastructure.

But as you and others have pointed out, not a lot of defense against
DDoS these days besides horsepower and anycast. :slight_smile:

Not just anycast. I said distributed architecture. There are more ways to distribute than anycast.

Not everything is limited to 13 IP addresses at the GTLDs, David. :slight_smile:

The content side can be duplicated, replicated, distributed. On the
eyeball side it's not as easy to replicate things. DDoS against user
networks doesn't generate as much publicity outside of the gamer world, but it is also a problem.

Other than trying to hide your real address, what can be done to prevent
DDoS in the first place?

Don't piss people off on IRC? :slight_smile:

After I laughed for a minute or two, you're exactly right -- although the
social & political issues involved go far beyond IRC.

Witness the back-and-forth DoS attacks involving Wikileaks and
Anti-Wikileaks proponents going on right now.

But this is not a new phenomenon -- every time there is a perceived insult
or slight against Chinese pride/culture, it always spurs some sort of DoS
attack scenario with grassroots support.

These sorts of attacks have been going on for years, and will escalate far
into the future, methinks.

$.02,

- - ferg

DDoS is just a symptom. The problem is botnets.

Preventing hosts from becoming bots in the first place and taking down existing botnets is the only way to actually *prevent* DDoS attacks. Note that prevention is distinct from *defending* oneself against DDoS attacks.

Botnets are the symptom.

The real problem is people.

Adrian

Well, yes - but short of mass bombardment, eliminating people doesn't scale very well, and is generally frowned upon.

;>

I think history can conclusively state that we're much, much better at eliminating
people than we are at eliminating hacked boxes; that politicians seem much happier somehow
about the former than the latter; and that our collective "clue" at being able to
do so is growing much faster than our electronic toolkits. :slight_smile:

(Oh god. :slight_smile:

Adrian

Very little, no, and no.
Not counting occasional application bugs that are quickly fixed.
Even TCP weaknesses that can facilitate attack are still present in
the protocol.

New vectors and variations on those old vectors have emerged since the 1990s.
So there is an increase in the number of attack vectors to be
concerned about, not a reduction.

SYN and Smurf are swords and spears after someone came up with atomic
weaponry; the atomic weaponry here is named "botnet". That is why there is
less concern about the former types of single-real-origin, spoofed-source
attacks.

Botnet-based DDoS is just "Smurf" where the amplification nodes are
obtained by system compromise instead of router misconfiguration, with a
minor variation on the theme in that the chain reaction is not started by
sending spoofed ICMP ECHOs.

Since 2005 there are new beasts such as "Slowloris" and DNS reflection.
DNS reflection attacks are a more direct successor to Smurf; true
Smurf broadcast amplification points are rare today (diminishing returns for
an attacker trying to find the five or six misconfigured gateways still out
there), but that doesn't diminish the vector of spoofed small-request,
large-response attacks.

Open DNS servers are everywhere.
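As a back-of-the-envelope illustration of why reflection through open resolvers is attractive to an attacker, here is a rough amplification calculation; the packet sizes are illustrative assumptions, not measurements:

```python
# Rough reflection/amplification arithmetic (all sizes are illustrative assumptions).
query_bytes    = 64      # small spoofed DNS query
response_bytes = 3000    # large response from an open resolver with big records

amplification = response_bytes / query_bytes
print(f"amplification factor: ~{amplification:.0f}x")   # ~47x

# With that factor, a modest amount of spoofed query traffic becomes a big flood:
attacker_uplink_mbps = 100
victim_flood_mbps = attacker_uplink_mbps * amplification
print(f"{attacker_uplink_mbps} Mbps of spoofed queries -> "
      f"~{victim_flood_mbps / 1000:.1f} Gbps arriving at the victim")
```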

SYN attacks traditionally come from a small number of sources and rely
on spoofing to exhaust the limited number of available connection slots for
success.

Newer vectors, which became most well known in the late '90s, utilize
botnets, and an attacker can make full connections, therefore requiring zero
spoofing and negating the benefit of SYN cookies.

In other words, SYN floods got supplanted by TCP_Connect floods.

actually, botnets are an artifact. claiming that the tool is the problem
might be a bit short-sighted. with the evolution of Internet technologies
(IoT) i suspect botnet-like structures will become much more prevalent and
useful for things other than coordinated attacks.

just another PoV.

--bill

I'm a big advocate of distributed/agile computing models with swarming/flocking behaviors - see slide 32 of this preso for an example:

<https://files.me.com/roland.dobbins/c07vk1>

When these things are harnessed together in order to launch DDoS attacks and steal financial information and intellectual property and so forth, we call them 'botnets'. They're a force-multiplier which allow the attacker to avoid the von Clausewitzian friction of conflict, and which give him a comfortable degree of anonymity, not to mention highly asymmetrical force projection capabilities and global presence.

'Botnet-like structures' = botnets, for purposes of this discussion. Semantic hair-splitting.

Running BGP over a different circuit will cause some blackholing of traffic if the real link is down but the BGP path is not.
So IMHO the best way is still a good router with some basic QoS to protect BGP on the link.

Thomas

iACLs and GTSM are your friends.

;>
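For anyone not familiar with GTSM (RFC 5082), the receive-side check is tiny; here is a hedged Python sketch of the idea (the peer addresses and hop counts are made-up examples, and this illustrates the mechanism, not any vendor's implementation):

```python
# Sketch of the GTSM idea (RFC 5082): an eBGP speaker sends with IP TTL 255,
# and the receiver drops any "peer" packet whose TTL is too low -- a remote
# attacker many hops away cannot forge a high enough TTL on arrival.

EXPECTED_PEERS = {
    "192.0.2.1": 1,   # directly connected eBGP neighbour (1 hop away)
}

def gtsm_accept(src_ip: str, ip_ttl: int) -> bool:
    """Accept a BGP packet only if its TTL proves it originated within range."""
    hops = EXPECTED_PEERS.get(src_ip)
    if hops is None:
        return False                      # not a configured peer at all
    return ip_ttl >= 255 - (hops - 1)     # directly connected peer must arrive with TTL 255

print(gtsm_accept("192.0.2.1", 255))      # True  -- genuine directly connected peer
print(gtsm_accept("192.0.2.1", 249))      # False -- forged from several hops away
print(gtsm_accept("198.51.100.7", 255))   # False -- not a configured peer
```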

Observing Mastercard today, apparently none.

Can't blame stupid users or Microsoft for this one, either. The 'attackers'
are using a .NET tool which I'm sure all of us are familiar with, LOIC. It
voluntarily (with the user's consent!) adds their machine to a botnet
controlled by somebody from 4chan over IRC. Because that can end well.

Blaming Microsoft for DoS attacks and spam is so passé. These mouthbreathers
are the bigger threat, I think.

J