DNS cache poisoning attacks -- are they real?

ISC SANS has recently disclosed yet another suspected DNS cache
poisoning attack. I reach a different conclusion, based on publicly
available data. Maybe there is unpublished information which suggests
a different view.

Unofficial name servers which pose as authoritative for well-known
zones have been around for ages. An astonishingly large number is
unofficially authoritative for (at least somewhat) frequented zones, and
from time to time, your resolvers receive authority sections
containing leaked unofficial data. I noticed this unfortunate fact
back in July 2004, when I looked more closely at DNS packet captures
for debugging purposes. Even in my limited sample, the number of leaking
name servers was so high that systematically contacting their
operators and convincing them to change their configurations seemed
unfeasible (and many of them were located in regions which are not
exactly known for their cooperative spirit when it comes to such
matters).

Today, I looked again at a few unofficial servers. Quite a few of
them are operated by apparently respectable organizations with an AS
number etc. (definitely not the backyard servers behind a cable modem
I would expect in an attack). It is hard to tell if the more shady
ones legitimately redirect customer traffic, and unintentionally leak
these records to the general Internet, or attempt an actual attack.
(I'm not sure how to tell them apart at the protocol level. Maybe I'm
missing something.) Many of the unofficial records have been
unchanged for quite some time (i.e. predating the current "pharming"
craze). Even the DNS cache poisoning case described in the ISC diary
could be the unwanted consequence of an oversimplified DNS
configuration (wildcard RRs for *.com instead of a proper DNS zone).

Are any ISPs actually willing to disconnect customer name servers
which serve unofficial zones? I don't believe that many ISPs would
try to exercise this much control over the packets their customers
send. Furthermore, there are apparently some reasons for running such
servers which generally are considered legitimate.

Should we monitor for evidence of hijacks (unofficial NS and SOA
records are good indicators)? Should we actively scan for
authoritative name servers which return unofficial data? I don't
think this makes sense, even if we could strongly discourage the
practice. Right now, I suspect that many people rediscovered the
relative weakness of the domain name system and started looking for
anomalies, and that's why we see an increasing number of reports --
not because of an increasing number of actual attacks.
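Monitoring for such hijacks largely reduces to set comparison: collect
the NS names that show up in authority sections of captured replies and
diff them against the delegation the parent zone actually publishes. A
minimal sketch in Python (all names are hypothetical, and how you
capture the observed records is left out):

```python
def find_unofficial_ns(observed_ns, official_ns):
    """Return NS names seen in the wild that are absent from the
    official delegation published by the parent zone.

    observed_ns: names from authority sections of captured replies
    official_ns: the parent's delegation (e.g. what the gTLD servers
                 return for an NS query on the zone)
    """
    normalize = lambda n: n.lower().rstrip(".")
    official = {normalize(n) for n in official_ns}
    return sorted({normalize(n) for n in observed_ns} - official)

# Hypothetical data: one observed server is not in the delegation.
suspicious = find_unofficial_ns(
    ["NS1.ISP.EXAMPLE.", "ns1.attacker.example."],
    ["ns1.isp.example.", "ns2.isp.example."],
)
# suspicious == ["ns1.attacker.example"]
```

The same diff works for SOA MNAME fields; the hard part, as noted above,
is deciding whether a mismatch is an attack or just a lazy configuration.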

And what if you find them? I seem to remember that a uu.net server
(ns.uu.net, from memory) had some polluted data out there as an A
record many, many years ago. All bright and bushy-tailed, I told the
UUnet folks about it. They were resigned. Someone, somewhere, had
mistyped an IP address, it had got into everyone's glue, been
republished by anyone and everyone, and in essence had no chance of
going away. I understand (a little) more about DNS now than I did at
the time, so I now (just about) know how DNS servers should avoid
returning such info (where they are both caching and authoritative),
but I equally know this is built upon the principle that no-one does
anything actively malicious.

The only way you are going to prevent packet-level (as opposed to
organization-level) DNS hijacking is to get DNSSEC deployed. Your IETF
list is over ------> there.

Alex

You forgot the most important requirement: you have to be using
insecure, unpatched DNS code (old versions of BIND, old versions of
Windows, etc.). If you use modern DNS code which only follows
trustworthy pointers from the root down, you won't get hooked by
this. A poisoned DNS cache is irrelevant if your resolver never
queries servers with poisoned caches. If you do, you should
fix your code.

On the other hand, there are a lot of reasons why a DNS operator may
return different answers to their own users of their resolvers. Reverse
proxy caching is very common. Just about all WiFi folks use cripple
DNS as part of their log on. Or my favorite, quarantining infected
computers to get the attention of their owners.

But it shouldn't matter what other DNS operators do, as long as your
DNS code doesn't use them to resolve names without a pointer from
the root (although you may not be able to log on to some WiFi hotspots).

Why Microsoft didn't make "Secure cache against pollution" the default
setting, I don't know.

* Sean Donelan:

You forgot the most important requirement: you have to be using
insecure, unpatched DNS code (old versions of BIND, old versions of
Windows, etc.). If you use modern DNS code which only follows
trustworthy pointers from the root down, you won't get hooked by
this. A poisoned DNS cache is irrelevant if your resolver never
queries servers with poisoned caches.

Yes, this is yet another reason why I'm inclined to apply Hanlon's
razor here. Totally forgot to mention it, thanks.

If you do, you should fix your code.

This would defeat its purpose, at least to some extent. 8-) I'm
interested in recording bogus RRs as well because I can't really be
sure whether there isn't some resolver which takes them for valid.

On the other hand, there are a lot of reasons why a DNS operator may
return different answers to their own users of their resolvers. Reverse
proxy caching is very common. Just about all WiFi folks use cripple
DNS as part of their log on. Or my favorite, quarantining infected
computers to get the attention of their owners.

And sometimes such things leak to the Internet. However, most of the
publicly visible bogus records seem to be caused by laziness. If you
handle thousands of com. domains, it's easier to use a fake com. zone
on your authoritative servers, with a few records like:

  com 172800 IN SOA ns1.example.org [...]
  *.com 172800 IN NS ns1.example.org
  *.com 172800 IN NS ns2.example.org
  *.com 172800 IN A 192.0.2.1

In most cases, 192.0.2.1 runs a web server that serves a "buy this
domain" page.

Uh-oh, this hurts. There must be a how-to somewhere which recommends
this shortcut.

Why Microsoft didn't make "Secure cache against pollution" the default
setting, I don't know.

Apparently, they do in recent versions. It might have been viewed as
a change too risky for a service pack or regular patch (Microsoft's
risk assessments are sometimes rather bizarre).

On 26 March 2005, at 17:52, Sean Donelan wrote:

You forgot the most important requirement: you have to be using
insecure, unpatched DNS code (old versions of BIND, old versions of
Windows, etc.). If you use modern DNS code which only follows
trustworthy pointers from the root down, you won't get hooked by
this.

The obvious rejoinder to this is that there are no trustworthy pointers from the root down (and no way to tell if the root you are talking to contains genuine data) unless all the zones from the root down are signed with signatures you can verify and there's a chain of trust to accompany each delegation.

If you don't have cryptographic signatures in the mix somewhere, it all boils down to trusting IP addresses.

Joe

Signatures don't create trust. A signature can only confirm an existing
trust relationship. DNSSEC would have the same problem: where do you get
the trustworthy signatures? By connecting to the same root you don't
trust?

As a practical matter, you can stop 99% of the problems with a lot less
effort. Why has SSH been so successful, and DNSSEC stumbled so badly?

Always initiate the call yourself. Always check the nonce in the
answer. Never accept unsolicited data. Never accept answers to questions
you didn't ask.
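Those rules can be applied mechanically at the packet level: the 16-bit
transaction ID in the DNS header is the nonce, and a reply is accepted
only if it is actually a response (QR bit set), echoes the ID we chose,
and repeats the exact question we asked. A simplified sketch, assuming
raw wire-format messages (real resolvers additionally match the source
address and port, which is omitted here):

```python
import struct

def build_query(txid: int, question: bytes) -> bytes:
    """Minimal DNS query: 12-byte header (flags 0x0100 = standard
    query, recursion desired, one question) plus the question section."""
    return struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0) + question

def is_acceptable_reply(query: bytes, reply: bytes) -> bool:
    """Never accept unsolicited data, and never accept answers to
    questions you didn't ask."""
    if len(reply) < 12:
        return False
    (q_id,) = struct.unpack(">H", query[:2])
    r_id, r_flags = struct.unpack(">HH", reply[:4])
    if r_id != q_id:            # nonce mismatch: not our query
        return False
    if not r_flags & 0x8000:    # QR bit clear: not a response at all
        return False
    question = query[12:]
    return reply[12:12 + len(question)] == question  # same question?

# Hypothetical question section: foo.com, type A, class IN.
q = build_query(0x1234, b"\x03foo\x03com\x00\x00\x01\x00\x01")
good = struct.pack(">HHHHHH", 0x1234, 0x8180, 1, 1, 0, 0) + q[12:]
spoof = struct.pack(">HHHHHH", 0x9999, 0x8180, 1, 1, 0, 0) + q[12:]
# good passes the checks; spoof is dropped on the ID mismatch
```

With only 16 bits of nonce, these checks raise the attacker's cost
rather than eliminate it, which is the crux of the disagreement below.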

Besides, if you don't trust IP addresses even if the entire DNS tree
was signed by trustworthy keys I'd just hijack the IP address in the DNS
answer anyway. Quarantine NAT is very good at this.

Signatures don't create trust. A signature can only confirm an existing
trust relationship. DNSSEC would have the same problem: where do you get
the trustworthy signatures? By connecting to the same root you don't
trust?

No, by using a known local trust anchor for the root and following the chain of trust from there.
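A rough sketch of what following that chain means: each zone's key must
match the DS digest its parent publishes, terminating in a digest you
configured locally. This is heavily simplified and hypothetical -- the
real DS record covers the owner name and full DNSKEY RDATA, and RRSIG
verification is omitted entirely:

```python
import hashlib

def ds_digest(dnskey: bytes) -> str:
    # Stand-in for the real DS computation over the DNSKEY RDATA.
    return hashlib.sha256(dnskey).hexdigest()

def chain_is_trusted(chain, trust_anchor: str) -> bool:
    """chain: [(zone, dnskey, ds_for_child), ...] from the root down;
    the last zone's ds_for_child may be None.  trust_anchor: the
    locally configured digest of the root's key -- the one thing you
    do NOT learn from the DNS itself."""
    expected = trust_anchor
    for zone, dnskey, ds_for_child in chain:
        if ds_digest(dnskey) != expected:
            return False            # key not vouched for: chain broken
        expected = ds_for_child     # parent vouches for the next zone
    return True

root_key, com_key = b"root-key", b"com-key"  # hypothetical key material
chain = [(".", root_key, ds_digest(com_key)), ("com.", com_key, None)]
# chain_is_trusted(chain, ds_digest(root_key)) holds; swapping in a
# different com. key breaks the chain, no matter what any server claims.
```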

As a practical matter, you can stop 99% of the problems with a lot less
effort. Why has SSH been so successful, and DNSSEC stumbled so badly?

For most people SSH encrypts a session, and says nothing about the identity of the remote host. Most people ignore the warnings about host keys changing, and never check an ssh fingerprint with the remote host before accepting it and caching it until next time.

DNSSEC doesn't attempt to encrypt the transport; it is all about the authenticity of the data. So, they are doing different things.

SSH deployment requires no coordination between organisations really; while there are public services deployed over SSH, I would be very surprised if its main use is not intra-organisation. DNSSEC, on the other hand, requires extensive standardisation and buy-in from a huge number of different organisations before it is useful in a general sense.

(You can use DNSSEC in a private, intra-organisational context, much as you might use SSH, today.)

I'm not sure what 99% of DNS authenticity problems you think you can solve without DNSSEC; perhaps it might be useful for you to enumerate them.

Always initiate the call yourself. Always check the nonce in the
answer. Never accept unsolicited data. Never accept answers to questions
you didn't ask.

And, according to your theory, be happy that you have no way to validate the authenticity of any answers you do get?

Besides, if you don't trust IP addresses

If?

We have meandered from the topic at hand, a bit. But the general point I was trying to make was that all the robust DNS software in the world will not avoid the propagation of rogue DNS answers if there's no way for a client (or a trusted, validating resolver) to verify the authenticity of the data contained within them.

Joe

Here is a link about how Cox Cable uses DNS to block phishing and certain
malicious sites.

http://www.broadbandreports.com/forum/remark,12922412

Sean Donelan wrote:

Here is a link about how Cox Cable uses DNS to block phishing and certain
malicious sites.

http://www.broadbandreports.com/forum/remark,12922412

If that manipulation is done on their internal servers, it's their
business; that isn't uncommon anymore, and in fact, is on the increase
(mea culpa).

However, if an external server is manipulated, that's a different story.

Jeff

Where was www.makelovenotspam.com re-pointed to and 'hacked' again? I
forget... 'trust of the IP address' :-(

I hate that cripple dns stuff - they seem to add transparent proxying
of dns requests to it as well, sometimes.

I've seen cases where my laptop's local resolver (dnscache) suddenly
starts returning weird values like 1.1.1.1, 120.120.120.120 etc for
*.one-of-my-domains.com for some reason.

Thank $DEITY for large ISPs running open resolvers on fat pipes ..
those do come in quite handy in a resolv.conf sometimes, when I run
into this sort of behavior.

--srs

* sean@donelan.com (Sean Donelan) [Sun 27 Mar 2005, 03:16 CEST]:

As a practical matter, you can stop 99% of the problems with a lot less
effort. Why has SSH been so successful, and DNSSEC stumbled so badly?

Because one of these products came with "./configure; make; make install"

  -- Niels.

Suresh Ramasubramanian wrote:

<snip>

Thank $DEITY for large ISPs running open resolvers on fat pipes ..
those do come in quite handy in a resolv.conf sometimes, when I run
into this sort of behavior.

--srs

Slightly OT to parent thread...on the subject of open dns resolvers.

Common best practices seem to suggest that doing so is a bad thing. DNS documentation and http://www.dnsreport.com appear to view this negatively.

Is that the consensus among operators here? Does anyone feel that in spite of the {negligible} risk involved, since any abuse would be local in nature (as opposed to SMTP open relay), one should be good-neighborly in this way? Or perhaps the prospect of yet another list of $IP_BLOCKS_THAT_ARE_OUR_NETWORK makes this a low priority on the TODO list of DNS operators?

Yes, if your resolvers are open to the world, cache poisoning becomes a lot easier and better targeted -- but then, if your resolvers are vulnerable to that, you would get bitten by it sooner or later anyway.

Joe

On the other hand, there are a lot of reasons why a DNS operator may
return different answers to their own users of their resolvers. Reverse
proxy caching is very common. Just about all WiFi folks use cripple
DNS as part of their log on. Or my favorite, quarantining infected
computers to get the attention of their owners.

sean, solving a layer two problem (mac address) at layer four will bite
you in the long run.

Thank $DEITY for large ISPs running open resolvers on fat pipes ..
those do come in quite handy in a resolv.conf sometimes, when I run
into this sort of behavior.

problem is many walled garden providers, e.g. t-mo, block port 53.

randy

i have yet to see cogent arguments, other than scaling issues,
against running open recursive servers.

randy

The common argument for NOT running them is the DNS Smurf attack: forge
DNS requests from your victim's address for some 'large' response (an MX
query for mci.com probably works for this) and make that happen from a
few hundred of your friends/bots. It seems that MX lookup will return
497 bytes, while a query that returns "see root please" is only 236 today.
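The attraction for the attacker is the bandwidth multiplier: the victim
receives the large response for every small spoofed query sent.
Assuming a query of roughly 40 bytes (a hypothetical figure; the
497-byte response size is taken from the post above):

```python
def amplification(query_len: int, response_len: int) -> float:
    """Bytes the victim receives per byte the attacker has to send,
    when the query carries the victim's spoofed source address."""
    return response_len / query_len

# 497-byte MX answer for an assumed ~40-byte query: roughly 12x.
factor = amplification(40, 497)
```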

Larger providers have the problem that you can't easily filter
'customers' from 'non-customers' in a sane and scalable fashion. While
they have to run the open resolvers for customer service reasons, they
can't adequately protect them from abusers or attackers in all cases.

-Chris

Suresh Ramasubramanian wrote:
>
<snip>
>
>Thank $DEITY for large ISPs running open resolvers on fat pipes ..
>those do come in quite handy in a resolv.conf sometimes, when I run
>into this sort of behavior.
>
>--srs
>
>

Slightly OT to parent thread...on the subject of open dns resolvers.

Common best practices seem to suggest that doing so is a bad thing. DNS
documentation and http://www.dnsreport.com appear to view this negatively.

  er... common best practice for YOU... perhaps.
  dnsreport.com is apparently someone who agrees w/ you.
  and i know why some COMMERCIAL operators want to squeeze
  every last lira from the services they offer...
  but IMRs w/ unrestricted access are a valuable tool
  for the Internet community at large.

  IMR? - you know, an Iterative Mode Resolver aka caching server.

Joe

--bill

bmanning@vacation.karoshi.com wrote:

<snip>

  er... common best practice for YOU... perhaps.
  dnsreport.com is apparently someone who agrees w/ you.
  and i know why some COMMERCIAL operators want to squeeze
  every last lira from the services they offer...
  but IMRs w/ unrestricted access are a valuable tool
  for the Internet community at large.

  IMR? - you know, an Iterative Mode Resolver aka caching server.

Joe

--bill

Thanks for the feedback, bill and all else who have responded.

Just want to clarify -- that's NOT my position. Any resolvers I have run (not that that's a great many big important ones, like others here can attest to) were not purposefully closed off from anyone (who was not being abusive).

Security is critical, but I am from the school that advocates leaving open that which

* may be useful to others

* does not cost me {much} - cost is in terms of {money | cpu | ram | bw mgmt | what have you}

* takes extra effort to close off

* Has no recent history of badness (insert your definition for "recent")

* Is easily verifiable (you should know real quick if your DNS cache is poisoned)

* avoids the question of how to make things work again once you have screwed it all up by denying resolution to everyone [insert all corner cases here] (simply as an example)

Easy to make a road, hard to make a prison.

* Joe Maimon:

Slightly OT to parent thread...on the subject of open dns resolvers.

Common best practices seem to suggest that doing so is a bad thing.

There was some malware which contained hard-coded IP addresses of a
few open DNS resolvers (probably in an attempt to escape from
DNS-based walled gardens). If one of your DNS resolvers was among
them, I'm sure you'd have closed it to the general public, too -- and
made sure that your others were closed as well, just in case.

* Brad Knowles:

  It only takes a little while to figure out that domains can be
fake-hosted using open caching recursive resolvers. Someone creates
a domain with very small TTLs for the real authoritative servers.
Within the zone, they do lame delegations to a lot of known public
caching recursive servers, with much longer TTLs.

  The lame delegators do what they think is their duty to serve the
data they are requested for, and they are the ones who effectively
serve that data to the world. In fact, the real IP addresses of the
authoritative servers could be changed every five minutes, with the
new policies and procedures in place from NetSol.

I doubt this will work on a large scale. At least recent BIND
resolvers would discard replies from the abused caching resolvers
because they lack the AA bit, so only clients using the resolvers as
actual resolvers are affected.
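The AA ('authoritative answer') flag the resolver checks here is a
single bit in the DNS header. A minimal sketch of the test on a raw
wire-format reply:

```python
import struct

AA_BIT = 0x0400  # 'authoritative answer' flag in the DNS header

def is_authoritative(reply: bytes) -> bool:
    """True if the server marked this reply as authoritative data --
    a caching resolver answering from its cache leaves the bit clear."""
    if len(reply) < 4:
        return False
    (flags,) = struct.unpack(">H", reply[2:4])
    return bool(flags & AA_BIT)

# Hypothetical 12-byte headers: flags 0x8580 (QR|AA|RD|RA) vs 0x8180.
auth   = struct.pack(">HHHHHH", 1, 0x8580, 1, 1, 0, 0)
cached = struct.pack(">HHHHHH", 1, 0x8180, 1, 1, 0, 0)
# is_authoritative(auth) holds; is_authoritative(cached) does not
```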

You can more easily seed open resolvers, sure, but with a reasonably
sized botnet, you can do the same thing with closed ones.