Jonathan Yarden @ TechRepublic: Disable DNS caching on workstations

Here we go again...

http://techrepublic.com.com/5100-10595-5657417.html?tag=nl.e044

My initial reaction is "why?" My followup reaction is "Well, most
workstations don't cache anyway, do they?"

Cheers,
-- jra

Depends on what you call "caching". Does honoring a TTL qualify as caching?

Can you imagine what would happen if every time anyone ever looked up any hostname they sent out a DNS query?

Once upon a time, Patrick W. Gilmore <patrick@ianai.net> said:

Depends on what you call "caching". Does honoring a TTL qualify as
caching?

What other kind of DNS caching is there?

Can you imagine what would happen if every time anyone ever looked up
any hostname they sent out a DNS query?

That's what most Unix/Linux/*BSD boxes do unless they are running a
local caching name service of some type (BIND, nscd, etc.). I wasn't
actually aware that Windows had a DNS cache service.

Open a web browser, go to "foo.bar.com" for some hostname you own. Change the record on the authoritative server and clear the cache on the recursive NS. Reload the page. See if the browser goes to the new IP.

Most desktop OSes do not re-query for the name again.

Sometimes this is a browser issue. Sometimes it is an OS issue. It depends on too many variables to say which one, or both, is at fault in general.
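
A rough way to run that test from the command line, assuming you have
dig and a name you control (the hostname and servers below are
placeholders):

  # Sketch of the re-query test above; foo.bar.com and ns1.example.net
  # stand in for a record and recursive server you control.
  dig @ns1.example.net foo.bar.com A +short    # note the answer
  # ...change the A record on the authoritative server, then flush the
  # recursive server's cache (e.g. "rndc flush" on BIND 9)...
  dig @ns1.example.net foo.bar.com A +short    # should show the new IP
  # If the browser still connects to the old address at this point,
  # something on the workstation is holding the stale answer.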

Once upon a time, Patrick W. Gilmore <patrick@ianai.net> said:

Most desktop OSes do not re-query for the name again.

Don't confuse apps and OSes. If I run "lynx", it does a DNS lookup for
each connect (even when it is the same hostname).
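
If you want to watch that happen, a quick check is to capture port 53
while the app runs (the interface name is an assumption; adjust for
your system):

  # Watch DNS queries leave the box while an application runs.
  tcpdump -n -i eth0 udp port 53 &
  lynx http://www.example.com/    # each connect should show a query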

<snip>

That's what most Unix/Linux/*BSD boxes do unless they are running a
local caching name service of some type (BIND, nscd, etc.). I wasn't
actually aware that Windows had a DNS cache service.

From a Windows command prompt, type:

ipconfig /displaydns

It's also flushable using:

ipconfig /flushdns
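
And, since the article is about disabling exactly this, the cache
lives in the "DNS Client" service, whose short service name (if I
have it right) is dnscache, so it can be shut off entirely with:

  net stop dnscache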

I wasn't.

Notice I said: "Sometimes this is a browser issue. Sometimes it is an OS issue."

End of day, most OSes do cache DNS replies, and many apps further cache answers.

Aside from individual OS behavior, doesn't this seem like very bad advice?

What sort of DNS cache poisoning attack could possibly work against a
workstation that has a caching resolver but no DNS server? If a hacker
really wished to mount a name resolution attack against workstations,
wouldn't they just write some spyware that injected entries into the
hosts file? That seems easier.

At any rate, wouldn't disabling caching/not paying attention to TTLs
have a truly adverse impact on the DNS infrastructure? What is the %
difference in incremental DNS server load between a host that obeys
TTLs and one that does not, but makes a new query each time? A single
host wouldn't have much impact - how about a couple million?
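
Back of the envelope, with made-up but plausible numbers: say each
host resolves one popular name once a minute, and the record's TTL is
86400 seconds.

  TTL-obeying host:  ~1 upstream query per day (one per TTL expiry)
  non-caching host:  60 * 24 = 1440 queries per day
  2,000,000 non-caching hosts: 2,000,000 * 1440 / 86400 sec ~= 33,000 qps
  2,000,000 obeying hosts:     2,000,000 * 1    / 86400 sec ~=     23 qps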

Is there something I'm missing here that's motivating Yarden's advice?

- Dan

</head scratching>

I think this is more of a question of who to trust. Caching, in
general, isn't a bad thing provided that TTLs are adhered to. If the
poisoning attack were to inject a huge TTL value, then that would
compromise that cache. (Note: I am no expert on DNS poisoning, so I'm
not sure if the TTL is "attackable".)

However, on the flip side, if nothing is ever cached, then I would
expect a huge amount of bandwidth to be eaten up by DNS queries.

I think a seasoned op knows when to use caching and when to not use
caching, but the everyday Joe User has no idea what caching is. If
they see a technical article telling them to turn off caching because
it will help stop phishing attacks (which they know are bad because
everyone says so), then they may try to follow that advice. Aside
from the "I broke my computer" syndrome, I expect they'll be very
disappointed when their internet access becomes visibly slower because
everything requires a new lookup...

Is it possible to "prevent" poisoning attacks? Is it beneficial, or
even possible, to prevent TTLs from being an excessively high value?
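
(On the last question: a resolver can at least clamp what it is
willing to believe. BIND 9, for example, has a max-cache-ttl option;
a minimal sketch, with the value picked arbitrarily:

  options {
      max-cache-ttl 86400;    // never cache a record longer than a day
  };

That caps the damage of an absurd TTL, though it obviously cannot
stop poisoning itself.)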

On Mon, Apr 18, 2005 at 03:05:55PM -0400, Jason Frisvold said something to the effect of:

> Aside from individual OS behavior, doesn't this seem like very bad advice?

I think this is more of a question of who to trust. Caching, in
general, isn't a bad thing provided that TTLs are adhered to. If the
poisoning attack were to inject a huge TTL value, then that would
compromise that cache. (Note: I am no expert on DNS poisoning, so I'm
not sure if the TTL is "attackable".)

However, on the flip side, if nothing is ever cached, then I would
expect a huge amount of bandwidth to be eaten up by DNS queries.

You are right. Time spent in security for an ISP yielded many
DoS-against-the-DNS-server complaints that turned out to be some
query-happy non-cachers pounding away at the server. The solution:
block the querying IP from touching the DNS server. Somehow, I think
that might have hampered their name resolution efforts...? ;-)

cache me if you can,
--ra

It would be very interesting to see the difference in DNS traffic for a domain if it sets its TTL to, let's say, 600 seconds or 86400 seconds. This could perhaps be used as a metric in trying to figure out the impact of capping the TTL. Does anyone know if anyone has done this on a large domain and has some data to share?

If one had to repeat the cache poisoning every 10 minutes, I guess life would be much harder than if you had to do it once every day?
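
One crude way to collect that data, assuming you can capture packets
on the authoritative server itself:

  # Capture ten minutes of inbound queries and count them; then change
  # the zone's TTL, wait for the old TTL to expire, and rerun.
  timeout 600 tcpdump -n -w ttl-test.pcap udp dst port 53
  tcpdump -n -r ttl-test.pcap | wc -l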

* Jason Frisvold:

I think this is more of a question of who to trust. Caching, in
general, isn't a bad thing provided that TTLs are adhered to. If the
poisoning attack were to inject a huge TTL value, then that would
compromise that cache. (Note: I am no expert on DNS poisoning, so I'm
not sure if the TTL is "attackable".)

I'm not sure if you can poison the entire cache of a stub resolver
(which can't do recursive lookups on its own). I would expect that
the effect is limited to a particular DNS record, which in turn should
expire after the hard TTL limit (surely there is one).

* Mikael Abrahamsson:

If one had to repeat the cache poisoning every 10 minutes, I guess life
would be much harder than if you had to do it once every day?

Not necessarily, because every cache refresh is a new attack
opportunity. 8-)

It would be very interesting to see the difference in DNS traffic for a
domain if it sets its TTL to, let's say, 600 seconds or 86400 seconds.
This could perhaps be used as a metric in trying to figure out the
impact of capping the TTL. Does anyone know if anyone has done this on
a large domain and has some data to share?

Our first foray into DNS was using a DNS server that defaulted to
86400 for new entries. Not being seasoned, we left this alone.
Unfortunately, I don't have any hard data from that dark time in our
past.

Windows 2000 DNS seems to set the TTL to 3600, which is a tad on the
low side, I think... At least for mostly-static domains, anyway. But I
believe the reasoning there was that they depended heavily on dynamic
DNS.

If one had to repeat the cache poisoning every 10 minutes, I guess life
would be much harder than if you had to do it once every day?

I dunno... how hard is it to poison a cache? :-)

Is it possible to "prevent" poisoning attacks? Is it beneficial, or
even possible, to prevent TTLs from being an excessively high value?

--
Jason 'XenoPhage' Frisvold
XenoPhage0@gmail.com

Preventing poisoning attacks:

I guess most attacks are against Windows workstations.

1) Hide them behind a NAT router. If they cannot see them, they cannot
attack them.

2) Have your own DNS server, root server, authoritative server, cache.

You can have your own root server: b.root-servers.net and
c.root-servers.net, as well as f.root-servers.net, allow cloning. Just
run your BIND 9 as a slave for ".". An authoritative server cannot be
poisoned. Only resolvers can.

When you have sensitive addresses, put them into your /etc/hosts or
clone their zone. Again, BIND 9 allows it. Do their servers?

Get the zone file via ftp or email. Authoritative servers cannot be
poisoned.

Have your own cache behind the NAT router. If they cannot see you,
they cannot poison you.

There is one exception to the rule:

You browse "www.bad.guy". They have a nameserver "ns1.bad.guy" that
returns something like

;; ANSWER SECTION:
a.root-servers.net. 86268 IN A 205.189.71.2

Then your cache will be resolving within the "Public-Root.net"
namespace.

But remember - an authoritative DNS-server cannot be poisoned.

Regards,
Peter Dambier

Preventing poisoning attacks:

I guess most attacks are against Windows workstations.

I'm not sure what you mean by this. Cache poisoning applies to machines
that are doing caching. It can affect any machine that depends on that
cache.

1) Hide them behind a NAT router. If they cannot see them, they cannot
attack them.

I certainly hope that this would not help. I hope that caching machines
will not simply take a packet from a random address and source port 53 and
use it to update their cache. I hope that the source address, source
port, and destination port, at least, are checked to correspond to an
outstanding dns query. If those all match, the packet will very likely
get through a nat router. In other words, the nat router provides no
protection from this attack at all. Why? Because it's an attack based on
traffic that the natted machine has initiated.

2) Have your own DNS server, root server, authoritative server, cache.

You can have your own root server: b.root-servers.net and
c.root-servers.net, as well as f.root-servers.net, allow cloning. Just
run your BIND 9 as a slave for ".". An authoritative server cannot be
poisoned. Only resolvers can.

Certainly authoritative servers can be poisoned, but not for the
domains that they're authoritative for. Running your own root only
provides protection for the root zone. If I make a query for
www.badguy.com and the auth. server for badguy.com returns an answer
for www.yahoo.com in the additional data, and I cache it, I'm likely
poisoned. That can happen even if I'm auth. for root.
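
You can see what a server volunteers by querying it directly (names
below are placeholders); a well-behaved resolver discards additional
records that are outside the answering server's bailiwick rather than
caching them:

  dig @ns1.badguy.example www.badguy.example A +norecurse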

Tony Rall

Mikael Abrahamsson wrote:

Is it possible to "prevent" poisoning attacks? Is it beneficial, or even possible, to prevent TTLs from being an excessively high value?

It would be very interesting to see the difference in DNS traffic for a domain if it sets its TTL to, let's say, 600 seconds or 86400 seconds. This could perhaps be used as a metric in trying to figure out the impact of capping the TTL. Does anyone know if anyone has done this on a large domain and has some data to share?

From first-hand experience, I can tell you that decreasing the SORBS NS record TTLs to 600 seconds resulted in 90 qps to the primary servers; increasing the TTLs to 86400 dropped the query rate to less than 5 per second. (That's just the base zone, not the DNSBL NS records.)

Regards,

Mat

It would be very interesting in seeing the difference in DNS traffic for a
domain if it sets TTL to let's say 600 seconds or 86400 seconds. This
could perhaps be used as a metric in trying to figure out the impact of
capping the TTL? Anyone know if anyone did this on a large domain and have
some data to share?

there is some good analysis in the classic paper

@inproceedings{dnscache:sigcommimw01,
  title     = {{DNS} Performance and the Effectiveness of Caching},
  author    = {Jaeyeon Jung and Emil Sit and Hari Balakrishnan and Robert Morris},
  booktitle = {Proceedings of the {ACM} {SIGCOMM} Internet Measurement Workshop '01},
  year      = 2001,
  month     = {November},
  address   = {San Francisco, California},
  url       = {citeseer.ist.psu.edu/jung01dns.html}
}

randy

Chris Adams wrote:

Once upon a time, Patrick W. Gilmore <patrick@ianai.net> said:

Depends on what you call "caching". Does honoring a TTL qualify as caching?
What other kind of DNS caching is there?

There's an article on /. today about providers (apparently there are quite a lot of them) whose DNS servers are caching and ignoring DNS TTL settings:

"Providers Ignoring DNS TTL?" - Slashdot

jc