DNS issues, various

Future attacks will be stronger and more organized. So how do we protect
the root servers from future attack?

protecting the servers is not the *critical* point. protecting the
service is. don't obsess over silly boxes.

of course, box/link protection is *one* aspect of protecting the
service.

randy

> protecting the servers is not the *critical* point. protecting the
> service is. don't obsess over silly boxes.

You're right.

It comes down to risk mitigation, not risk elimination.

I'd posit it's impossible to PREVENT a DDOS attack -- as such, as we did
when they first manifested themselves in 1999, we need to develop response
plans capable of meeting the onslaught and mitigating its impact so that
things continue to function, even if they're degraded somewhat.

It's like airport security - total security is a fantasy, but we have to
raise the bar to make it more difficult for an attacker, and couple that
with effective plans to respond when things occur, thus ensuring both an
acceptable level of service during the incident and a smooth
recovery/investigation afterward.

Of course, in the airport security case, the bar's still lying on the
ground..... :(

Rick
Infowarrior.org

1999?! Doesn't anybody remember the massive SYN attack against Panix in
1995? Or that tfreak released smurf.c in July of 1997? (And was it
fraggle or papasmurf that came the summer of the following year?
Whichever one it was, the other came out within six months after that.)

And those are just the ones I remember since I moved away from Rutgers and
started working in the BBN NOC - I'm sure there were others even before
that. (Not counting accidental operational incidents like the AS 7007
routing chaos in 1997 or the identical AS 8584 issue a year later.)

1999 was just when Distributed DoS started getting a little airplay. We'd
already had four fruitless years of dealing with DoS attacks by the time
that happened.

What would be wonderful is a radical change in the way we think about DoS
attacks. It would be fabulous for someone (or a group of someones) to
come up with a completely different way to approach the problem. I wish
that I could be the person who does that, who sparks that change, but in
the seven years I've been thinking about it, nothing's come to mind.

So, seven years of hardening hosts against SYN attacks. Five years of
trying to get people to turn off the forwarding of broadcast packets.
Three years of botnets generating meg upon meg of crap-bandwidth.

Where are the suuuuuper-geniuses?

Kelly J.

You know, most bars have bouncers at the door that check IDs. Sure, they're
not perfect, but the bartender can usually be pretty sure the guy ordering a
beer is over 21. The average bar isn't run by a soooper-genius. But it's still
considered fashionable to let packets roam your network without an ID check at
the door.

soooper-genius solutions aren't going to help any when there's a lot of
address space that's managed by Homer Simpson....
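The ID check at the door maps to RFC 2827-style ingress filtering: an edge port accepts a packet only if its source address belongs to the block assigned to that customer. A minimal sketch of the check (the prefix is a hypothetical assignment, not anyone's real allocation):

```python
import ipaddress

# Hypothetical customer assignment: everything arriving on this edge
# port must carry a source address from this block (RFC 2827).
CUSTOMER_PREFIX = ipaddress.ip_network("192.0.2.0/24")

def ingress_permit(src_ip: str) -> bool:
    """Permit the packet only if its source falls inside the
    customer's assigned prefix; anything else is spoofed."""
    return ipaddress.ip_address(src_ip) in CUSTOMER_PREFIX

print(ingress_permit("192.0.2.17"))  # True: a legitimate customer source
print(ingress_permit("10.1.2.3"))    # False: forged, dropped at the door
```

On real routers this is an access-list or uRPF check at the customer-facing interface, not software in the forwarding path.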

> So, seven years of hardening hosts against SYN attacks. Five years of
> trying to get people to turn off the forwarding of broadcast packets.
> Three years of botnets generating meg upon meg of crap-bandwidth.
>
> Where are the suuuuuper-geniuses?

> You know, most bars have bouncers at the door that check IDs. Sure, they're
> not perfect, but the bartender can usually be pretty sure the guy ordering a
> beer is over 21. The average bar isn't run by a soooper-genius. But it's still
> considered fashionable to let packets roam your network without an ID check at
> the door.

Yeah and how's that working so far?

> soooper-genius solutions aren't going to help any when there's a lot of
> address space that's managed by Homer Simpson....

But there will always be address space managed by Homer Simpson.

And that's part of my point - we can't fix everybody's networks. There
will always be broken/misconfigured networks run by the willfully
ignorant.

We've been in an arms race for years. They come up with something, we
come up with a response, they come up with something else, we scramble to
find router OS code that doesn't crash, etc.

It's just back and forth, back and forth.

All I'm advocating is breaking out of that pattern.

Kelly J.

Not to mention that the tools being publicly available is much different
from their being known only within a certain community (covert IRC blackhat
communities differ slightly from EFNet, which differs even more so from
CERT, etc.).

I recall working at GoodNet, and smurf attacks affecting customer networks
the first week of May, 1997. There is speculation that the root name
server attacks came from a modified version of a current well-known tool.
How does that fit into the equation?

Information needs to pass quickly and correctly. BUGTRAQ has typically
been the best forum for this, and NANOG as well. However, Internet
operators will continue to lag behind the times even if we have a more
intelligent infrastructure capable of handling these problems.

I see this being done on an organization-by-organization basis, but with no
real consistent community effort. The correct plan is to have one person dedicated to
packet capture infrastructure, another person dedicated to packet-to-tool
identification and reverse engineering, and finally a large group
dedicated to filtering/moving the traffic with open or proprietary
(including home-grown) solutions (proactively and upon peer/customer
notification), e.g.:

rfc2827, rfc3013 (ingress and egress filtering)
rfc1750, rfc1858, rfc1948, rfc2196, rfc3128, rfc3365
rfc2142 (and draft-vixie-ops-stdaddr-01.txt), rfc1173, rfc1746, rfc2350
draft-ymbk-obscurity-00.txt, draft-ietf-grip-isp-expectations-05.txt
draft-moriarty-ddos-rid-01.txt, draft-jones-netsec-reqs-00.txt
draft-turk-bgp-dos-01.txt, draft-dattathrani-tcp-ip-security-00.txt
http://www.secsup.org/Tracking/ http://www.secsup.org/CustomerBlackHole/
http://www.cymru.com/ http://www.tcpdump.org
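A minimal sketch of the packet-to-tool identification step described above, mapping crude traffic summaries to the attack families mentioned in this thread. The signatures here are rough illustrations only; real identification needs full captures and reverse engineering of the tool itself:

```python
def classify_flood(proto: str, tcp_flags: str = "", icmp_type: int = -1,
                   dst_port: int = -1) -> str:
    """Guess a likely attack family from a crude per-flow summary.
    Heuristics are illustrative, not authoritative fingerprints."""
    if proto == "icmp" and icmp_type == 0:
        return "smurf-style (ICMP echo-reply amplification)"
    if proto == "udp" and dst_port == 7:
        return "fraggle-style (UDP echo amplification)"
    if proto == "tcp" and tcp_flags == "S":
        return "SYN flood"
    return "unknown - capture and analyze"

print(classify_flood("icmp", icmp_type=0))   # smurf-style
print(classify_flood("tcp", tcp_flags="S"))  # SYN flood
```

In practice this sits on top of the packet capture infrastructure (tcpdump and friends), feeding the filtering/blackholing group.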

All of this information needs to be in one place, and organizations need
to understand that working together on these problems is the only way to
fix them (this goes doubly for hardware/software vendors). I'm sure I
left out a ton of information, and the list could become exhaustive very
quickly and easily. The ideas and the strategies all stay the same, and
the end result is hopefully a more secure, resilient infrastructure. In
some ways, you and your organization either get it or you don't. And
there is no way to force people into understanding the concept - let
alone the importance of these issues. How do you solve that problem?

-dre

> You know, most bars have bouncers at the door that check IDs. Sure, they're
> not perfect, but the bartender can usually be pretty sure the guy ordering a
> beer is over 21. The average bar isn't run by a soooper-genius. But it's still
> considered fashionable to let packets roam your network without an ID check at
> the door.

> Yeah and how's that working so far?

Works a lot better than making an overworked bartender do it. And yes, that's
an intentional dig at the "but you can't filter at the core" crowd, and the
"but you can't backtrack spoofed traffic easily" crowd...

How well does it work? Well enough that you can drive by a bar and just *know*
that it's a dead night because there's no bouncer. And it's never a dead night
on the Internet.

> soooper-genius solutions aren't going to help any when there's a lot of
> address space that's managed by Homer Simpson....

> But there will always be address space managed by Homer Simpson.

Why? I'm asking a serious question here - why is it considered acceptable?

> All I'm advocating is breaking out of that pattern.

I bet a few good lawsuits alleging civil liability for contributory
negligence for allowing spoofed packets would do wonders for that problem.

I posit that there won't be any "sooper genius" solution that will actually
work as long as the prevailing model is small islands of clue awash in a
sea of Homer Simpsons.

We have hosts that can take 100Mbit worth of SYN attacks out-of-the-box,
instead of the dialups worth that crippled PANIX.
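Much of that out-of-the-box resilience comes from SYN cookies: the server derives the initial sequence number from the connection tuple, a coarse timestamp, and a secret, so it keeps no state for half-open connections. A simplified sketch of the principle (this is not the real kernel encoding, and the secret is a made-up placeholder):

```python
import hashlib
import time

# Hypothetical per-boot secret; real kernels rekey periodically.
SECRET = b"hypothetical-per-boot-secret"

def syn_cookie(src: str, dst: str, sport: int, dport: int, t: int) -> int:
    """Derive the initial sequence number from the 4-tuple, a coarse
    time slot, and a secret, so no per-SYN state is stored."""
    msg = f"{src}:{sport}>{dst}:{dport}|{t}".encode()
    return int.from_bytes(hashlib.sha256(SECRET + msg).digest()[:4], "big")

def ack_valid(isn: int, src: str, dst: str, sport: int, dport: int, t: int) -> bool:
    """On the final ACK, recompute the cookie: a match proves the
    handshake completed without the server ever queueing the SYN."""
    return isn == syn_cookie(src, dst, sport, dport, t)

slot = int(time.time() // 64)  # coarse time slot bounds cookie lifetime
isn = syn_cookie("192.0.2.1", "198.51.100.1", 1024, 80, slot)
print(ack_valid(isn, "192.0.2.1", "198.51.100.1", 1024, 80, slot))  # True
```

The attacker can still burn bandwidth, but can no longer exhaust the listen queue with a dialup's worth of SYNs.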

We have a smurf attack against the root servers which was so small it was
trivially filtered, compared to the gigabits of broadcasts which used to
be open. Heck, I got a bigger smurf the last time I made fun of Ralph
Doncaster's "IGP-less network" on this list. Yes, it's not so completely
dead that you can only find it in laboratories like smallpox, but the once
seemingly endless supply of broadcasts has been closed down to the point
where it is now more difficult for attackers to find them than it is worth
in damage when they use them. It's not "dead", but it's so effectively
close that for most of us it might as well be.
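Smurf's leverage is simple multiplication: one spoofed echo request to a broadcast address draws one reply per responding host, all aimed at the victim. That's why closing open broadcast networks directly shrinks what an attacker can do. A back-of-the-envelope sketch with invented numbers:

```python
# Hypothetical numbers: a dialup attacker at 33.6 kbit/s bouncing
# spoofed echo requests off an open /24 with 200 responding hosts.
attacker_bps = 33_600
responders = 200                      # the smurf amplification factor
victim_bps = attacker_bps * responders

# Each request produces one same-sized reply per responder, so the
# victim sees the attacker's bandwidth multiplied by the responder count.
print(f"{victim_bps / 1_000_000:.2f} Mbit/s arriving at the victim")
```

Shrink the pool of open broadcast networks and the multiplier collapses, which is exactly the history described above.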

We're still working on the distributed attacks, but eventually we'll come
up with something just as effective. If it was as easy to scan for
networks who don't spoof filter as it is to scan for networks with open
broadcasts, I think we'd have had that problem licked too.

It's the nature of people to invent new ways to accomplish their goals,
both from the attackers and the people running the networks. If we hadn't
plugged the PANIX style attacks, do you think anyone would have bothered
writing smurf, when they already had a tool which worked? So the question
is, do you think we're better off because we've created better TCP/IP
stacks and better routers, or worse off because we've created better
attackers with better tools we currently don't have much defense against?

On Thu, Oct 24, 2002 at 04:07:18PM -0400, Richard A Steenbergen mooed:

> We're still working on the distributed attacks, but eventually we'll come
> up with something just as effective. If it was as easy to scan for
> networks who don't spoof filter as it is to scan for networks with open
> broadcasts, I think we'd have had that problem licked too.

  Are you sure?

* A smurf attack hurts the open broadcast network as much (or more)
   than it does the victim. A DDoS attack from a large number
   of sites need not be all that harmful to any one traffic source.

* 'no ip directed broadcast', which is becoming the default behavior
   for many routers and end-systems,
              vs.
   'access-list 150 deny ip ... any'
   'access-list 150 deny ip ... any'
   ...
   'access-list 150 permit ip any any'

   (ignoring rpf, which doesn't work for everyone).

Until the default behavior of most systems is to block spoofed packets,
it's going to remain a problem.

  -Dave, whose glass is half-empty this week. :)

Something I'd love to see is a blue-ribbon commission (meaning, made
up of people with real clue) whose job it was to come up with a
bird's-eye view of what the internet would look like if it were
designed from scratch today.

Maybe this is some of what Internet-II is supposed to be doing but I
think it's more focused on very high bandwidth gated community stuff.

In theory the internet could be radically redesigned, at least on
paper, and still deliver just about the same function as far as
end-users are concerned; surfing, email, file transfer, routing,
naming, etc.

Task one would be "what must be preserved -- what can be tossed?"

So, e.g., web browsing/serving must be preserved, but all of IP per se
maybe is up for grabs for redesign, etc.

The point being maybe we all spend so much time backpatching etc and
assuming that the technology can't be shifted much due to backwards
compatability that, truth be told, we don't really know what that
shift we're avoiding might be if it were feasible.

Can't really know how hard it is to build the bridge if you don't know
how wide the river is.

And now a song for anyone who read this far:

  Deep in the Heart of Internet
(tune: Deep in the Heart of Texas)

The web at night - is big and bright,
  Deep in the heart of Internet.
The smurfers' eyes - are on that pie,
  Deep in the heart of Internet.
The roots do loom - just like perfume,
  Deep in the heart of Internet.
Reminds smurfs of - why they get no love.
  Deep in the heart of Internet.
The admins cry - eat 'wall and die,
  Deep in the heart of Internet.
The smurfers rush - to send their gush,
  Deep in the heart of Internet.
The reporters wail - hot on the trail,
  Deep in the heart of Internet.
And the spammers spam - and spam and spam,
  DEEP IN THE HEART OF INTERNET!

  Lyrics written anonymously by Barry Shein

I assert this is not the case. A significant percentage of DDoS attacks use
legitimate source IP addresses. When there are thousands of throw-away hosts
in the attack network, the difficulty of traceback and elimination remains,
and so does the problem.

Yes, blocking spoofed packets helps. But it is not an end-game.

Kevin

Hi, NANOGers.

] I assert this is not the case. A significant percentage of DDoS attacks use
] legitimate source IP addresses. When there are thousands of throw-away hosts

I track several botnets per week, and a large amount of DDoS per week.
Only around 20% (or a bit less) of all the attacks I log use spoofed
source addresses.

Does anti-spoofing help? Yes. It is but one of many mitigation
strategies.

Thanks,
Rob.

I don't know what botnets you look at, but I wouldn't go that far.

Of course stopping spoofing will not solve everything, but it does and
will make a huge impact on DoS mitigation and tracing.

The problem now is that no one "knows" for certain if the attack they're
tracing is spoofed or not. With a random source syn flood, you know it's
spoofed. With an attack which is spoofing a legit-looking address that is
completely unrelated to the attacker, you don't. Most people who report
DoS (including myself) have been so burned by finding out that a legitimate
looking source address on an attack is in fact spoofed (or worse yet that
an innocent party gets blamed by incompetent admins), they see a DDoS and
don't even bother. Attackers w/DDoS networks use this to their advantage,
by mixing spoofed attacks (where they can) with unspoofed attacks (where
they can't, such as windows machines, boxes where they haven't compromised
root such as apache worms and the like, and even in rare cases where the
network is doing their job and ingress filtering), to make it effectively
impossible to know which hosts to go after.

Tracing down dumb drones with non-spoofed addresses is a LOT easier than
tracking spoofed packets through the network (or worse explaining to other
networks how to do it). Of course, as more and more ingress filtering is
implemented, the attacks will move to "one-off" spoofing, where they spoof
their neighbors address but are still close enough to get through filters,
and incompetent admins go chasing after ghosts. But we'll deal with that
situation when we come to it. :)
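The "one-off" spoofing point is easy to see: an RFC 2827 filter is only as tight as the prefix it permits, so a drone can forge any neighbor inside its own allocation and still get through, sending investigators after the wrong host. A sketch with a hypothetical /24 assignment:

```python
import ipaddress

# The edge filter permits the whole customer block, not one host.
PERMITTED = ipaddress.ip_network("192.0.2.0/24")

def passes_ingress_filter(src_ip: str) -> bool:
    """RFC 2827 check at prefix granularity."""
    return ipaddress.ip_address(src_ip) in PERMITTED

drone_real_addr = "192.0.2.10"
neighbor_spoof = "192.0.2.99"   # not the drone, but inside the same block
outside_spoof = "203.0.113.7"   # classic random spoof, outside the block

# The filter stops random spoofing, but the neighbor's address sails
# through - and that's the ghost the incompetent admin chases.
print(passes_ingress_filter(neighbor_spoof))  # True
print(passes_ingress_filter(outside_spoof))   # False
```

The countermeasure is finer-grained filtering (per-host, or strict uRPF on the subnet routers), which is exactly the arms-race step the poster predicts.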

How about a council?

http://www.eweek.com/article2/0,3959,642876,00.asp
October 21, 2002
Network Council to Urge New Practices
By Caron Carlson

"A council of the largest telephone carriers and ISPs, charged by the
federal government with preventing disruptions to the nation's
telecommunications system, is preparing a checklist of procedures to
protect networks from terrorism and natural disasters."

That sounds to me more like considering the use of sonic repellents
rather than rat poison to keep the vermin out of the relays, and
providing latex gloves for removing the dead rats, rather than
designing out the relays the rodents get into entirely.

Given time, rats can chew through concrete. They are smart enough to trip traps before eating the cheese, or to lick the cheese off triggers rather than pulling or chewing, so as not to cross the alarm threshold. They breed faster than you can keep up with them, which not only ensures a generous supply of them but also ensures that they adapt to new environments quickly. They have been known to become resistant to poisons that killed rats a few years before.

In short, your rats versus script kiddies analogy is perfect, but I think you are forgetting that we still have rats everywhere.

~Ben
(who speaks for himself alone here)

The Bouncer/Bartender who serve an underage person are subject to prosecution, and are liable if someone gets drunk and goes driving and gets into trouble.

We've had BCPs in place for some time on directed broadcast and ingress issues. I expect it will take lawsuits to get many people to get serious about implementing these. While it's up to lawyers and judges to decide whether ignoring an industry Best Current Practice opens a company to a negligence claim, I won't be surprised if I'm asked to testify for a prosecution in such a case.

It provides the identity of the party to sue for negligence, should the damage elsewhere be severe. In large networks, it would behoove administrators to establish ingress filters on the routers connecting subnets, so that they can further limit spoofing or help trace the party involved.

> Yes, blocking spoofed packets helps. But it is not an end-game.

it's not even middle-game

> It provides the identity of the party to sue for negligence,
> should the damage elsewhere be severe.

and lawsuits have always been such a major contributor to internet
advances in the past. makes me think of suing the cemeteries and
coffin manufacturers in "night of the living dead." does not
scale.

you might remember or look up smb's presentation on 'pushback' at
some nanog or another (those anonymous hotel rooms kinda blur,
especially before first coffee). that's not perfect, but it
scales. and it's an engineering approach to a technology problem,
always a good sign.

randy
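The presentation randy is recalling is Steve Bellovin's "pushback" (aggregate-based congestion control): a congested router identifies the high-bandwidth attack aggregate, rate-limits it locally, and asks its upstream neighbors to apply the same limit, recursively, so drops migrate toward the sources. A toy sketch of the steady state after the request has propagated (topology and rates are invented):

```python
def pushback(chain, aggregate_in, limit):
    """Model the steady state after a pushback rate-limit request has
    propagated from the victim side all the way upstream: the most
    upstream router does the dropping, and downstream links carry
    only the limited rate. `chain` runs upstream -> downstream."""
    drops = {}
    rate = aggregate_in
    for router in chain:
        drops[router] = max(0, rate - limit)  # excess dropped here
        rate = min(rate, limit)               # what flows downstream
    return drops

# Hypothetical path, rates in Mbit/s: the attack enters at the edge.
result = pushback(["edge", "mid", "victim_side"], 900, 100)
print(result)  # the edge drops 800; everything downstream carries 100
```

The scaling property randy likes falls out of the structure: once the limit request reaches the edge, the excess never crosses the interior links at all.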

>> Yes, blocking spoofed packets helps. But it is not an end-game.

> it's not even middle-game

> It provides the identity of the party to sue for negligence,
> should the damage elsewhere be severe.

> and lawsuits have always been such a major contributor to internet
> advances in the past. makes me think of suing the cemeteries and
> coffin manufacturers in "night of the living dead." does not
> scale.

As with the spam problem, the underlying issue is a social issue as well as a technological one. However we proceed on a technological basis, there will continue to be arms races in the DoS world. Lawsuits are an inefficient way to advance Internet technology, but they may help on the social side of things. We needn't be binary in our choice of paths to pursue.

> you might remember or look up smb's presentation on 'pushback' at
> some nanog or another (those anonymous hotel rooms kinda blur,
> especially before first coffee). that's not perfect, but it
> scales. and it's an engineering approach to a technology problem,
> always a good sign.

I agree it is something we should pursue, but I disagree that the problem is entirely of a technological nature.