Dan Kaminsky

Date: Tue, 04 Aug 2009 13:32:42 -0400
From: Curtis Maurand <cmaurand@xyonet.com>

andrew.wallace wrote:
>
>> at the risk of adding to the metadiscussion. what does any of this have to
>> do with nanog?
>> (sorry I'm kinda irritable about character slander being spammed out
>> unnecessarily to unrelated public lists lately ;P )
>>
>>
>
> What does this have to do with NANOG? The guy found a critical
> security bug in DNS last year.
>
He didn't find it. He only publicized it. The guy who wrote djbdns
found it years ago. PowerDNS was patched for the flaw a year and a half
before Kaminsky published his article.

Some thoughts on the recent DNS vulnerability

"However - the parties involved aren't to be lauded for their current
fix. Far from it. It has been known since 1999 that all nameserver
implementations were vulnerable for issues like the one we are facing
now. In 1999, Dan J. Bernstein <http://cr.yp.to/djb.html> released his
nameserver (djbdns <http://cr.yp.to/djbdns.html>), which already
contained the countermeasures being rushed into service now. Let me
repeat this. Wise people already saw this one coming 9 years ago, and
had a fix in place."

Dan K. has never claimed to have "discovered" the vulnerability. As the
article says, it's been known for years, and djb did suggest a means to
MINIMIZE this vulnerability.

There is NO fix. There never will be as the problem is architectural
to the most fundamental operation of DNS. Other than replacing DNS (not
feasible), the only way to prevent this form of attack is DNSSEC. The
"fix" only makes it much harder to exploit.

What Dan K. did was to discover a very clever way to exploit the design
flaw in DNS that allowed the attack. What had been a known problem that
was not believed to be generally exploitable became a real threat to the
Internet. Suddenly people realized that an attack of this sort was not
only possible, but quick and easy (relatively). Dan K. did what a
security professional should do...he talked to the folks who were
responsible for most DNS implementations that did caching and a
work-around was developed before the attack mechanism was made public.

He was given credit for finding the attack method, but the press seemed
to get it wrong (as they often do) and lots of stories credited him with
finding the vulnerability.

By the way, I know that Paul Vixie noted this vulnerability quite some
years ago, but I don't know if his report was before or after djb's.

Now, rather than argue about the history of this problem
(non-operational), can we stick to operational issues like implementing
DNSSEC to really fix it (operational)? Is your DNS data signed? (No,
mine is not and probably won't be for another week or two.)
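
(As an aside, a quick way to see whether a zone publishes DNSSEC records at all is to ask for its DNSKEY and RRSIG data. Below is a minimal, non-validating sketch using the third-party dnspython library (2.x); the zone names are placeholders, and seeing the records only suggests the zone is signed, it does not prove a valid chain of trust.)

    # Non-validating check for DNSSEC records (requires the dnspython package).
    # DNSKEY + RRSIG answers only hint that the zone is signed; they do not
    # establish a chain of trust back to the root.
    import dns.flags
    import dns.rdatatype
    import dns.resolver

    def looks_signed(zone):
        resolver = dns.resolver.Resolver()
        resolver.use_edns(0, dns.flags.DO, 4096)   # set the DO bit so RRSIGs come back
        try:
            answer = resolver.resolve(zone, "DNSKEY")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN, dns.resolver.NoNameservers):
            return False
        return any(rrset.rdtype == dns.rdatatype.RRSIG
                   for rrset in answer.response.answer)

    for zone in ("example.com", "example.net"):    # placeholder zones
        print(zone, "looks signed" if looks_signed(zone) else "no DNSSEC records seen")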

There is NO fix. There never will be as the problem is architectural
to the most fundamental operation of DNS. Other than replacing DNS (not
feasible), the only way to prevent this form of attack is DNSSEC. The
"fix" only makes it much harder to exploit.

Randomizing source ports and QIDs simply increases entropy, making it harder to spoof an answer. If this is not a "fix", then DNSSEC is not a fix either, as it only increases entropy as well.

Admittedly, DNSSEC increases it a great deal more, but by your definition, it is not a "fix" either.
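
To put rough numbers on that (illustrative only, and assuming about 16 usable bits of source-port entropy and an attacker who can land roughly 50,000 forged responses in each race window), here is a back-of-the-envelope sketch:

    # Back-of-the-envelope blind-spoofing odds; all numbers are illustrative.
    QID_BITS = 16                 # DNS transaction ID
    PORT_BITS = 16                # randomized UDP source port (real ephemeral range is a bit smaller)
    FORGERIES_PER_RACE = 50_000   # assumed forged responses squeezed into one race window

    def win_probability(search_bits, forgeries=FORGERIES_PER_RACE):
        """Chance that at least one forged response matches in a single race."""
        return 1 - (1 - 1 / 2 ** search_bits) ** forgeries

    for label, bits in (("QID only", QID_BITS), ("QID + random port", QID_BITS + PORT_BITS)):
        p = win_probability(bits)
        # ln(2)/p approximates how many races are needed for a 50% overall chance
        print(f"{label:18s} p(win a race) = {p:.6f}, ~{max(1, round(0.693 / p))} races for even odds")

With a 16-bit QID alone, a single race is roughly a coin flip; with port randomization the attacker needs on the order of tens of thousands of races. The bar rises enormously, but the underlying weakness remains.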

In a message written on Tue, Aug 04, 2009 at 11:32:46AM -0700, Kevin Oberman wrote:

There is NO fix. There never will be as the problem is architectural
to the most fundamental operation of DNS. Other than replacing DNS (not
feasible), the only way to prevent this form of attack is DNSSEC. The
"fix" only makes it much harder to exploit.

I don't understand why replacing DNS is "not feasible".

* Leo Bicknell:

In a message written on Tue, Aug 04, 2009 at 11:32:46AM -0700, Kevin Oberman wrote:

There is NO fix. There never will be as the problem is architectural
to the most fundamental operation of DNS. Other than replacing DNS (not
feasible), the only way to prevent this form of attack is DNSSEC. The
"fix" only makes it much harder to exploit.

I don't understand why replacing DNS is "not feasible".

Replacing the namespace is not feasible because any newcomer will lack
the liability shield ICANN, root operators, TLD registries, and
registrars have established for the Internet DNS root, so it will
never get beyond the stage of hashing out the legal issues. We might
have an alternative one day, but it's going to happen by accident,
through generalization of an internal naming service employed by a
widely-used application. There are several successful
application-specific naming services which are independent of DNS, but
all the attempts at replacing DNS as a general-purpose naming service
have failed.

The transport protocol is a separate issue. It is feasible to change
it, but the IETF has a special working group which is currently tasked
to prevent any such changes.

I'd be happy to think about replacing the DNS as soon as we've finished off migrating to an IPv6-only Internet in a year or two.

Shall we set up a committee to try to make it happen faster?

Nick

Or even more likely, IMHO, that more and more applications will have their own naming services which will gradually reduce the perceived need for a general-purpose system - i.e., the centrality of DNS won't be subsumed into any single system (remember X.500?), but, rather, by a multiplicity of systems.

[Note that I'm not advocating this particular approach; I just think it's the most likely scenario.]

Compression/conflation of the transport stack will likely be both a driver and an effect of this trend, over time.

In message <825C8AC7-C01E-4934-92FD-E7B9E8091A3A@arbor.net>, Roland Dobbins writes:

> We might have an alternative one day, but it's going to happen by
> accident, through generalization of an internal naming service
> employed by a widely-used application.

Or even more likely, IMHO, that more and more applications will have
their own naming services which will gradually reduce the perceived
need for a general-purpose system - i.e., the centrality of DNS won't
be subsumed into any single system (remember X.500?), but, rather, by
a multiplicity of systems.

Been there, done that, doesn't work well. For all its shortcomings,
the DNS and the single namespace it brings is much better than
having a multitude of namespaces. Yes, I've had to work with a
multitude of namespaces and had to map between them. Ugly.

Multiple systems end up with problems. Even standard DNS blows up when
some company (Apple) decides that an extension (.local) should not be
forwarded to the DNS servers on some device (iPhone) because their
service (Bonjour) uses it.

Thanks,
Erik

I agree with you, but I don't think this approach is going to persist as the standard model.

Increasingly, transport and what we now call layer-7 are going to become conflated (we already see all these Rube Goldberg-type mechanisms to try and accomplish this OOB now, with predictable results), and that's going to lead to APIs/data types embedding this information, IMHO.

Yes, and again, I'm not advocating this approach. I just think it's most likely where we're going to end up, long-term.

In a message written on Wed, Aug 05, 2009 at 02:32:27PM +0000, Florian Weimer wrote:

The transport protocol is a separate issue. It is feasible to change
it, but the IETF has a special working group which is currently tasked
to prevent any such changes.

My interest was in replacing the protocol. I've grown fond of the
name space, for all of its warts.

My interest was in replacing the protocol. I've grown fond of the
name space, for all of its warts.

As we evolved from circuit switching to packet switching, which many at the
time said would never work, and from HOSTS.TXT to DNS, sooner or later
the “naming scheme” for resources on the net will IMHO evolve
to something better and different from DNS.

No doubt for more than 25 years DNS has provided a great service; it has faced
many challenges and will continue to do so for some time.

But DNS, from being a simple way to provide name resolution, evolved into something
more complex, and also degenerated into a protocol/service that created a new
industry once monetary value was attached to particular sequences of characters
that must be globally unique and form the basis for constructing a URL.

At some time in the future, when a new paradigm for the user interface is
conceived, we may no longer have the end user “typing” a URL. The DNS or
something similar will still be in the background providing name-to-address
mapping, but there will be no more monetary value associated with it, or that
value will be transferred to something else.

It may sound too futuristic and inspired by science fiction, but I never saw
Captain Picard typing a URL on the Enterprise.

Sooner or later, we, or the new generation of IETFers and NANOGers, will need to
start thinking about a new naming paradigm and design the services and protocols
associated with it.

The key question is, when do we start?

Meanwhile we have to live with what we have and try to improve it as much as
we can.

My .02

Jorge Amodio (jmamodio) writes:

It may sound too futuristic and inspired by science fiction, but I never saw
Captain Picard typing a URL on the Enterprise.

  That's ok, I've never seen the Enterprise at the airport.

Sooner or later, we, or the new generation of IETFers and NANOGers, will need to
start thinking about a new naming paradigm and design the services and protocols
associated with it.

The key question is, when do we start?

  Let's see how far the SMTP replacement has come, and get some inspiration.
  Heck, it's an application that only _uses_ the DNS, should be easy.

Once upon a time, Phil Regnauld <regnauld@catpipe.net> said:

Jorge Amodio (jmamodio) writes:
> It may sound too futuristic and inspired by science fiction, but I never saw
> Captain Picard typing a URL on the Enterprise.

  That's ok, I've never seen the Enterprise at the airport.

I have, but not that Enterprise (I saw the space shuttle orbiter
Enterprise on a 747 land here).

  Let's see how far the SMTP replacement has come, and get some inspiration.
  Heck, it's an application that only _uses_ the DNS, should be easy.

There's always somebody looking to re-invent the wheel, but usually they
are startups looking to make a quick buck by patenting and licensing
their technology that "will be the savior of the Internet" (and so they
don't get far).

It may sound too futuristic and inspired by science fiction, but I never saw
Captain Picard typing a URL on the Enterprise.

   That's ok, I've never seen the Enterprise at the airport.

Don't confuse sight with vision.

Sooner or later, we, or the new generation of IETFers and NANOGers, will need to
start thinking about a new naming paradigm and design the services and protocols
associated with it.

The key question is, when do we start?

   Let's see how far the SMTP replacement has come, and get some inspiration.
   Heck, it's an application that only _uses_ the DNS, should be easy.

It won't be quick, it won't be easy, and you will have to deal with the
establishment, which will keep trying at all costs to squeeze money out of
the current system.

Cheers

  That's ok, I've never seen the Enterprise at the airport.

I have, but not that Enterprise (I saw the space shuttle orbiter
Enterprise on a 747 land here).

There is one docked at Pier 26 in New York City :)

It may sound too futuristic and inspired by science fiction, but I never saw
Captain Picard typing a URL on the Enterprise.

       That's ok, I've never seen the Enterprise at the airport.

Go to Dulles Airport. She used to be on the runway for a long time; now she is at the Udvar-Hazy Center there.

Don't know what this has to do with URIs, though.

Marshall

We're already there. It's called "Google".

  In the vast majority of cases I have seen, people don't type
domain names, they search the web. When they do type a domain name,
they usually type it into the Google search box.

  (Alternatively, they type everything into the browser's "address
bar", which is really a "search-the-web bar" in most browsers.)

  (Replace "Google" with search engine of your choice.)

-- Ben

At some time in the future, when a new paradigm for the user interface is
conceived, we may no longer have the end user “typing” a URL. The DNS or
something similar will still be in the background providing name-to-address
mapping, but there will be no more monetary value associated with it, or that
value will be transferred to something else.

We're already there. It's called "Google".

In the vast majority of cases I have seen, people don't type
domain names, they search the web. When they do type a domain name,
they usually type it into the Google search box.

Partially true for web access, very rarely true for email. I type in email domains much more often than I do web domains. And now email addresses are becoming URIs for logins, SIP calling, video conferencing, etc.

It's also interesting how in some ways Twitter and its relatives have been sending
URLs backwards. If you type in

http://www.americafree.tv

you may have some idea what you are getting, but if you type in

http://bit.ly/w5aM4

you have none. (These two URLs go, or at least they should go, to the same place.
Who knows if that will be true in a year, or 5, or 10.)

Here is a place IMO where a better UI and URI philosophy would really help.
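
(A small aside: you can at least peek behind a shortener before clicking. Here is a stdlib-only Python sketch that issues a HEAD request and reads the Location header without following the redirect; the bit.ly link is the one from the post above, and shorteners that redirect over HTTPS or through multiple hops would need a little more handling.)

    # Expand a shortened URL by reading the redirect target instead of following it.
    import http.client
    from urllib.parse import urlparse

    def expand(short_url):
        parts = urlparse(short_url)
        conn = http.client.HTTPConnection(parts.netloc, timeout=10)
        conn.request("HEAD", parts.path or "/")
        resp = conn.getresponse()            # shorteners typically answer 301/302
        location = resp.getheader("Location")
        conn.close()
        return location or short_url         # no redirect: already the final URL

    print(expand("http://bit.ly/w5aM4"))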

Regards
Marshall

Once upon a time, Ben Scott <mailvortex@gmail.com> said:

  In the vast majority of cases I have seen, people don't type
domain names, they search the web. When they do type a domain name,
they usually type it into the Google search box.

Web != Internet. DNS is used for much more than web sites, and many of
those things are not in a public index. For example, most people type
in their friends' email addresses (at least into an address book).