Great Suggestion for the DNS problem...?

[ unthreaded to encourage discussion ]

If NS records pointed to IPs instead of names, then this problem might not exist.
The root holds glue going up the chain, and you could reject authoritative responses from IPs not listed as authoritative NS for that zone.

I.e., for karnaugh.za.net, net is looked up from the root. The root IP addresses are queried directly, so you know to ignore responses coming from anyone else. That gives you net (the same gTLD, how convenient) and an authoritative IP response for its NS. So you look up za.net, get the correct glue, and so on.

Actually, if glue were always served up the resolution chain, then only crummy glueless delegations would be vulnerable.
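
Something like the following check, as a minimal sketch assuming a resolver that records, per zone, the server addresses it learned as glue from the parent (the za.net address and the accept_response name are made up for illustration; the root and net addresses are the well-known ones):

```python
# Hypothetical resolver-side check: only accept an "authoritative" answer for a
# zone if it arrives from an address the parent handed down as glue for that zone.
known_glue = {
    ".":       {"198.41.0.4"},       # root hint (a.root-servers.net)
    "net.":    {"192.5.6.30"},       # learned as glue from the root (a.gtld-servers.net)
    "za.net.": {"203.0.113.10"},     # placeholder: learned as glue from net.
}

def accept_response(zone: str, source_ip: str) -> bool:
    """Drop answers from addresses the parent never delegated to."""
    return source_ip in known_glue.get(zone, set())

# A forged answer for za.net. coming from any other address is simply ignored:
assert accept_response("za.net.", "203.0.113.10")
assert not accept_response("za.net.", "198.51.100.66")
```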

Anyone feel like redesigning the DNS protocol? Anyone? No? :frowning:

As you pointed out, the protocol, if properly implemented, addresses
this.

There should always be Glue (A records for the NS) in a delegation. RFC
1034 even specifies this:

4.2.2 <snip>
As the last installation step, the delegation NS RRs and glue RRs
necessary to make the delegation effective should be added to the parent
zone. The administrators of both zones should insure that the NS and
glue RRs which mark both sides of the cut are consistent and remain so.
</snip>

A probably important distinction:

That's not the protocol, that's the specified implementation framework
of the protocol. In general, DNS still works if you screw that up,
which is why it's so often screwed up.

Cheers,
-- jra

Yes, it should work. In fact, why *don't* implementations discard authoritative responses from non-authoritative hosts? Or do we? Or am I horribly wrong?

There's an argument that IP spoofing can easily derail this, but I'd shift that argument higher up the OSI, blame TCP, and move on to recommending SYN cookies. Even if the response is forged, though, if it carries NS authority glue that doesn't match its source, the lookup still fails.

DNSSEC kind of does this verification, just more complicatedly and with more reliance on administrative cooperation, and I've never met a DNS person who is cooperative :wink:

My suggestion, though, was more about replacing
NS -> A -> IP
with
NS -> IP

That is just a brain fart though.

My 0.00264050803375 cents (at current exchange rates).

jra@baylink.com ("Jay R. Ashworth") writes:

[ unthreaded to encourage discussion ]

Nameservers could incorporate poison detection...

Listen on 200 random fake ports (in addition to the true query ports);
if a response ever arrives at a fake port, then it must be an attack,
read the "identified" attack packet, log the attack event, mark the
RRs mentioned in the packet as "poison being attempted" for 6 hours;
for such domains always request and collect _two_ good responses
(instead of one), with a 60 second timeout, before caching a lookup.

The attacker must now guess nearly 64 bits in a short amount of time
to be successful. Once a good lookup is received, discard the normal
TTL and hold the good answer cached and immutable, for 6 hours (_then_
start decreasing the TTL normally).
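
A minimal sketch of that decoy-port scheme (not a real resolver; extract_owner_names and log_attack are hypothetical stand-ins for a DNS packet parser and a logger):

```python
import socket
import time

DECOYS = 200                # fake ports listened on alongside the real query ports
POISON_HOLD = 6 * 3600      # hold the "poison being attempted" flag for 6 hours

# Open decoy sockets on random ports; no query is ever sent from these,
# so anything arriving on them can only be a blind spoofing attempt.
decoy_socks = []
for _ in range(DECOYS):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("0.0.0.0", 0))  # port 0: the kernel picks a random free port
    decoy_socks.append(s)

suspect_names = {}          # owner name -> time the poisoning attempt was seen

def handle_decoy_packet(data: bytes) -> None:
    """A packet on a decoy port is an attack: log it and flag the names it mentions."""
    log_attack(data)                          # hypothetical logger
    for name in extract_owner_names(data):    # hypothetical DNS packet parser
        suspect_names[name] = time.time()

def needs_double_confirmation(name: str) -> bool:
    """While flagged, require two matching good responses before caching an answer."""
    seen = suspect_names.get(name)
    return seen is not None and time.time() - seen < POISON_HOLD
```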

Is there any reason which I'm too far down the food chain to see why
that's not a fantastic idea? Or at least, something inspired by it?

at first glance, this is brilliant, though with some unimportant nits.

however, since it is off-topic for nanog, i'm going to forward it to
the namedroppers@ops.ietf.org mailing list and make detailed comments
there.

Hello All:

From: Paul Vixie <vixie@isc.org>
Date: Tue, 29 Jul 2008 01:24:43 +0000
To: Nanog <nanog@merit.edu>
Subject: Re: Great Suggestion for the DNS problem...?

<snip>

Still off topic, but perhaps a BGP feed from Cymru or similar to block IP
addresses on the list?

Regards,

Mike

What would the IP-blocking BGP feed accomplish? Spoofed source addresses are a staple of the DNS cache-poisoning attack.

Worst case scenario: you've opened yourself up to a new avenue of attack where your nameservers are receiving spoofed packets intended to trigger a blackhole filter, blocking communication between your network and the legitimate owner of the forged IP address.

Michael Smith wrote:

however, since it is off-topic for nanog

ha ha. please stop telling people that they are off topic for nanog.

randy

* Paul Vixie:

Listen on 200 random fake ports (in addition to the true query ports);

at first glance, this is brilliant, though with some unimportant nits.

It doesn't work out of the box for most users: thanks to stateful firewalling, the spoofed packets never reach the name server process unless those ports have actually been used to send packets to the authoritative server being spoofed.

That would make no difference to Kaminsky's attack, since it's the NS
records he's overwriting, not the glue.

Tony.

In fact, why *don't* implementations discard authoritative responses
from non-authoritative hosts? Or do we? Or am I horribly wrong?

The response is spoofed so that it appears to come from the correct host.

There's an argument that IP spoofing can easily derail this, but I'd shift
that argument higher up the OSI, blame TCP, and move on to recommending SYN
cookies.

DNS uses UDP.

Tony.

Tony Finch wrote:

Colin Alston wrote:

Why does it use UDP? :stuck_out_tongue:

Faster? Smaller? Less code to break? No perceived need for state?

In this situation, UDP uses one query packet and one reply. TCP uses 3
to set up the connection, a query, a reply, and three to tear down the
connection. *Plus* the name server will have to keep state for
every client, plus TIMEWAIT state, etc. (Exercise left to TCP geek
readers: how few packets can you do this in? For example -- send the
query with the SYN+ACK, send client FIN with the query, send server FIN
with the answer? Bonus points for not leaving the server's side in
TIMEWAIT. Exercise for implementers: how sane can your stack be if
you're going to support that?)

    --Steve Bellovin, http://www.cs.columbia.edu/~smb
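
A socket-level sketch of the packet-count difference described above, building a minimal DNS query by hand (192.0.2.53 is a placeholder resolver address from the documentation range, so it won't actually answer):

```python
import socket
import struct

def build_query(name: str, qid: int = 0x1234) -> bytes:
    # 12-byte header: ID, flags (RD=1), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0,
    # followed by the question: encoded name, QTYPE=A, QCLASS=IN.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)

query = build_query("example.com")
server = ("192.0.2.53", 53)   # placeholder resolver address

# UDP: one datagram out, one datagram back, no connection state on either side.
u = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
u.sendto(query, server)
answer, _ = u.recvfrom(512)

# TCP: three-way handshake, a two-byte length prefix on each message, the reply,
# then connection teardown, and the server keeps per-client state plus TIME_WAIT.
t = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
t.connect(server)                                    # SYN, SYN+ACK, ACK
t.sendall(struct.pack(">H", len(query)) + query)
length, = struct.unpack(">H", t.recv(2))
answer = t.recv(length)
t.close()                                            # FIN/ACK exchange
```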

It was advocated as T/TCP in the '90s.
http://www.kohala.com/start/ttcp.html
Not accepted widely:
http://en.wikipedia.org/wiki/T/TCP
Regards,
      Janos Mohacsi

The BitTorrent tracker guys seem to run into problems at around 30kk tracker requests per second (TCP), and they say it's mostly connection setup/teardown (the sy column in vmstat); the tracker hash lookup doesn't take that much.

They're trying to move to UDP; currently their workload is approx 5% UDP.

I guess TCP DNS workload would be similar in characteristics.

We mainly use UDP for tracker announces and only use TCP when we have to, and can confirm that the server spends far more time on the TCP setup/teardown than on computing the tracker response.

- LP