DNS (was Re: Internet Vulnerabilities)

From: Paul Vixie <vixie@vix.com>

mike@sentex.net (Mike Tancsa) writes:

> ... Still, I think the softest targets are the root name
> servers. I was glad to hear at the Toronto NANOG meeting
> that this was being looked into from a routing perspective.
> Not sure what is being done from a DoS perspective.

I think the gtld-servers.net hosts would be the target for a
globally disruptive and prolonged DDoS. Servers doing reverse
lookups might also be targets in more specialised attacks, as
their disruption would be continent-wide rather than merely
country-wide (like most forward lookups).

Paul obviously has the experience to tell me if I'm crazy, but I
would guess the "." zone probably isn't that large in absolute
terms, so large ISPs (NANOG members?) could arrange for their
recursive servers to act as private secondaries of ".", thus
eliminating the dependence on the root servers entirely for a
large chunk of the Internet user base.
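As a rough illustration only, a stealth secondary of "." on a
BIND recursive server might look something like the fragment
below; the master address is purely a placeholder, and you would
of course need a source that actually permits AXFR of the root
zone:

    // Hypothetical stealth slave of the root zone on a
    // recursive server. 192.0.2.1 is a placeholder, not a
    // real source of "." transfers.
    zone "." {
        type slave;
        masters { 192.0.2.1; };
        file "db.root";
        notify no;
    };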

Setting up such a backup plan during a DDoS against the root name
servers might be challenging, but it isn't impossible. It would
also stop large ISPs' DNS servers forwarding daft queries onto
the root DNS servers, thus lowering the load on the root servers
when they need it most!

So whilst the root servers make the obvious target they are also
in some ways a relatively easy target to move, or expand in
number. I think private secondaries are a better bet than new
root servers, as that would require trusting less experienced
admins with all of the Internet's DNS, rather than just ISP
users trusting their ISP (which they do implicitly already).

I think the kinds of zones being handled by the gtld-servers
would be harder to relocate, if only due to size; although the
average NANOG reader probably has rather more bandwidth
available than I do, they may not have the right kind of spare
capacity on their DNS servers to secondary ".com" at short
notice.

Now that we've seen enough years of experience from Genuity.orig,
UltraDNS, Nominum, AS112, and {F,K}.root-servers.net, we're
seriously talking about using anycast for the root server system.

We have even more experience with zone transfers in DNS, and that
approach doesn't require complicating anything below layer 7,
which has an appeal to me, and I suspect to most ISPs, who
probably have enough trouble keeping BGP in order.

All I think root server protection requires is someone with
access to the relevant zone making it available through other
channels to large ISPs. There is no technical reason why key DNS
infrastructure providers could not implement such a scheme on
their own recursive DNS servers now, and it would reduce load on
both their own and the root DNS servers and networks.

Other DNS admins could change their caching servers to forward to
their ISP's name servers - and whilst forwarding might be frowned
on by the DNS community, the hierarchical caching model is
typically faster than the current approach, and more scalable, if
potentially less secure. (Poisoning of a tier in the hierarchy is
bad news, and theoretically we lose some redundancy, although
forward-first might address that; some current DNS server
implementations also do not support this model as well as they
could. Undoubtedly such a scheme would lead to more small
disruptions, but presumably avoid the "one big one" being
discussed.)

The single limiting factor on implementing such an approach
would be DNS know-how: whilst it is probably a two-line change
for most DNS servers to forward to their ISP's DNS server (or
zone transfer "."), many sites probably lack the in-house
skills to make that change at short notice.
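As a very rough sketch of that two-line change in BIND terms (the
addresses below are placeholders for the ISP's resolvers, and
"forward first" preserves the fallback to normal iterative
resolution mentioned above):

    // Hypothetical forwarding setup: try the ISP's resolvers
    // first, fall back to ordinary iteration if they fail.
    // 192.0.2.53 and 192.0.2.54 are placeholder addresses.
    options {
        forward first;
        forwarders { 192.0.2.53; 192.0.2.54; };
    };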

In practical terms I'd be more worried about smaller attacks
against specific CC domains. I could imagine some people seeing
disruption of "il" as a more potent (and perhaps less globally
unpopular) political statement than disrupting the whole
Internet. Similarly, an attack on a commercial subdomain in a
specific country could be used to make a political statement,
but might have significant economic consequences for some
companies. Attacking 3 or 4 servers is far easier than attacking
13 geographically diverse, well-networked, and well-protected
servers.

Similarly, I think many CC domains and country-based SLDs are far
more "hackable" than many people realise, due to the extensive
use of out-of-bailiwick data, as described by DJB. At some point
the script kiddies will realise they can "own" a country or two
instead of one website by hacking one DNS server, and the less
well-secured DNS servers will all go in a week or two.
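To make "out of bailiwick" concrete, a hypothetical delegation of
this shape is the sort of thing meant - the names are invented
for illustration:

    ; The .cc registry delegates example.cc to a name server
    ; whose own name lives under an entirely different domain,
    ; so the registry serves no glue for it. Whoever controls
    ; the servers for somehost.example.net effectively controls
    ; where example.cc resolves.
    example.cc.    IN NS    ns1.somehost.example.net.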

Date: Fri, 05 Jul 2002 17:50:24 +0100
From: Simon Waters

I think the gtld-servers.net hosts would be the target for a
globally disruptive and prolonged DDoS. Servers doing reverse
lookups might also be targets in more specialised attacks, as
their disruption would be continent-wide rather than merely
country-wide (like most forward lookups).

Maybe I'm nuts, but I also think the gTLD servers would be prime
targets.

Paul obviously has the experience to tell me if I'm crazy,
but I would guess the "." zone probably isn't that large in
absolute terms, so large ISPs (NANOG members?) could arrange
for their recursive servers to act as private secondaries of
".", thus eliminating the dependence on the root servers
entirely for a large chunk of the Internet user base.

Not only not that large, but not that dynamic.

Personally, I think it would be interesting to allow providers to
stealth slave (and perhaps anycast secondary) as much or as
little of the DNS tree as they wish.

The single limiting factor on implementing such an approach
would be DNS know-how, as whilst it is probably a two line
change for most DNS servers to forward to their ISPs DNS
server (or zone transfer "."), many sites probably lack the
inhouse skills to make that change at short notice.

Ignoring little providers, let's say that only the 10 largest
ASNs anycast root and gTLD zones for their downstreams. I think
the effect would be very significant.

In practical terms I'd be more worried about smaller attacks
against specific CC domains.

Why stop with anycasting the roots? If one wished to mirror gTLD
zones, fine. I argue that provider disk/bandwidth/clue are the
limiting factors.

If a mirror were "0wn3d", it would affect 1) downstreams in the
case of a "private anycast", or 2) multiple parties on "public
anycast" boxen. Hopefully anyone with enough bandwidth and clue
to offer _any_ anycast (i.e., to think outside the standard BGP
box) would be clueful enough to operate DNS responsibly.

Eddy

I would guess the "." zone probably isn't that large in absolute
terms, so large ISPs (NANOG members?) could arrange for their
recursive servers to act as private secondaries of ".", thus
eliminating the dependence on the root servers entirely for a
large chunk of the Internet user base.

-rw-r--r-- 1 9998 213 14102 Jul 14 19:56 root.zone.gz
-rw-r--r-- 1 9998 213 75 Jul 14 20:41 root.zone.gz.md5
-rw-r--r-- 1 9998 213 72 Jul 14 20:42 root.zone.gz.sig

I think the kinds of zones being handled by the gtld-servers
would be harder to relocate, if only due to size; although the
average NANOG reader probably has rather more bandwidth
available than I do, they may not have the right kind of spare
capacity on their DNS servers to secondary ".com" at short
notice.

Exactly. The .com zone is large. I doubt that the average NANOG
reader has a 16GB RAM machine idling just in case some kiddie
wants to DoS Verisign.

All I think root server protection requires is someone with
access to the relevant zone making it available through other
channels to large ISPs. There is no technical reason why key DNS
infrastructure providers could not implement such a scheme on
their own recursive DNS servers now, and it would reduce load on
both their own and the root DNS servers and networks.

Network load is hardly the problem, except in very starved cases;
a big well-used server will perhaps fill a T-1 or two.
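(For rough scale: a T-1 is about 1.5 Mb/s, or roughly 190
kilobytes per second; if a typical UDP referral response is
somewhere in the 100-500 byte range, that works out to something
on the order of 400-1,900 answers per second before the link
itself becomes the bottleneck.)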

The single limiting factor on implementing such an approach
would be DNS know-how: whilst it is probably a two-line change
for most DNS servers to forward to their ISP's DNS server (or
zone transfer "."), many sites probably lack the in-house
skills to make that change at short notice.

This is the problem with "clever tricks"; they can be implemented
by people who are "in the loop", but most others will not make it
work.

In practical terms I'd be more worried about smaller attacks
against specific CC domains. I could imagine some people seeing
disruption of "il" as a more potent (and perhaps less globally
unpopular) political statement than disrupting the whole
Internet. Similarly, an attack on a commercial subdomain in a
specific country could be used to make a political statement,
but might have significant economic consequences for some
companies. Attacking 3 or 4 servers is far easier than attacking
13 geographically diverse, well-networked, and well-protected
servers.

Similarly, I think many CC domains and country-based SLDs are far
more "hackable" than many people realise, due to the extensive
use of out-of-bailiwick data, as described by DJB. At some point
the script kiddies will realise they can "own" a country or two
instead of one website by hacking one DNS server, and the less
well-secured DNS servers will all go in a week or two.

I definitely agree. ccTLDs are in widely varying states of
security awareness, and while I believe .il is aware and
prepared, other conflict-zone domains might not be...

At 9:07 AM +0200 2002/07/15, Måns Nilsson quoted Simon Waters
<Simon@wretched.demon.co.uk> as saying:

I would guess the "." zone probably isn't that large in absolute
terms, so large ISPs (NANOG members?) could arrange for their
recursive servers to act as private secondaries of ".", thus
eliminating the dependence on the root servers entirely for a
large chunk of the Internet user base.

  1266 A records
  1243 NS records
  1 SOA record
  1 TXT record

  Currently, B, C, & F are open to zone transfers.
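  For anyone who wants to try it themselves, something along
  these lines should pull the zone from one of the servers that
  permits transfers (assuming it still does by the time you read
  this):

      % dig @f.root-servers.net . axfr > root.zone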

I think the kinds of zones being handled by the gtld-servers
would be harder to relocate, if only due to size; although the
average NANOG reader probably has rather more bandwidth
available than I do, they may not have the right kind of spare
capacity on their DNS servers to secondary ".com" at short
notice.

  Edu is pretty good size:

    17188 NS records
     5514 A records
        1 SOA record
        1 TXT record

  A complete zone transfer comprises some 1016491 bytes.

All I think root server protection requires is someone with
access to the relevant zone making it available through other
channels to large ISPs. There is no technical reason why key DNS
infrastructure providers could not implement such a scheme on
their own recursive DNS servers now, and it would reduce load on
both their own and the root DNS servers and networks.

  I disagree. This is only going to help those ISPs that are clued-in enough to act as a stealth secondary of the zone, and then only for those customers that will be using their nameservers as caching/recursive servers, or have their own caching/recursive servers forward all unknown queries to their ISPs. I'm sorry, but that's a vanishingly small group of people, and will have little or no measurable impact.

  Better would be for the root nameservers to do per-IP address throttling. If you send them too many queries in a given period of time, they can throw away any excess queries. This prevents people who run tools like queryperf on a constant basis from excessively abusing the server.

  Indeed, some root nameservers are already doing per-IP address throttling.

In practical terms I'd be more worried about smaller attacks
against specific CC domains. I could imagine some people seeing
disruption of "il" as a more potent (and perhaps less globally
unpopular) political statement than disrupting the whole
Internet.

  Keep in mind that some ccTLDs are pretty good size themselves. The largest domain I've been able to get a zone transfer of is .tv, comprising some 20919120 bytes of data -- 381812 NSes, 72694 A RRs, 5754 CNAMEs, and 3 MXes.

  Any zone that is served by a system that is both authoritative and public caching/recursive is wide-open for cache-poisoning attacks -- such as any zone served by nic.lth.se [130.235.20.3].

Similarly, an attack on a commercial subdomain in a
specific country could be used to make a political statement,
but might have significant economic consequences for some
companies. Attacking 3 or 4 servers is far easier than attacking
13 geographically diverse, well-networked, and well-protected
servers.

  Who said that the root nameservers were geographically diverse? I don't think the situation has changed much since the list at <http://www.icann.org/committees/dns-root/y2k-statement.htm> was created. I don't call this geographically diverse.

I definitely agree. ccTLDs are in widely varying states of
security awareness, and while I believe .il is aware and
prepared, other conflict-zone domains might not be...

  Except for the performance issues, IMO ccTLDs should be held to the same standards of operation as the root nameservers, and thus subject to RFC 2010 "Operational Criteria for Root Name Servers" by B. Manning and P. Vixie, and RFC 2870 "Root Name Server Operational Requirements" by R. Bush, D. Karrenberg, M. Kosters, & R. Plzak.

  Those of you who are interested in this topic may want to drop in on my invited talk "Domain Name Server Comparison: BIND 8 vs. BIND 9 vs. djbdns vs. ???" at LISA 2002. Root & TLD server issues will figure heavily in the comparison. ;-)