AW: Odd policy question.

Assuming that you are running separate authoritative and recursive servers, this would only be a problem when someone goes to a lame-delegated domain.

It is probably also good to note that it is a best practice to separate authoritative and recursive servers.
john

-----Original Message-----

it is a best practice to separate authoritative and recursive servers.

why?

e.g. a small isp has a hundred auth zones (secondaried far
away and off-net, of course) and runs cache. why should
they separate auth from cache?

randy

it is a best practice to separate authoritative and recursive servers.

why?

Cache poisoning (though this is less likely with modern BIND and other resolvers), and the age-old problem that your view is NOT the same as the world's view. I.e., if you've got a customer who has offsite DNS but hasn't told you, and you've got authoritative records for his zone, you might be delivering mail locally, or to the wrong place, and it can take a long time to figure this out.

Cache poisoning

let's assume that we're not dealing with the bugs of old
versions of server software

randy

In message <838DBE2645430DF70BAFFC9C@dhcp-2-206.wgops.com>, Michael Loftis writes:

it is a best practice to separate authoritative and recursive servers.

why?

Cache poisoning (though this is less likely with modern BIND and
other resolvers), and the age-old problem that your view is NOT the
same as the world's view. I.e., if you've got a customer who has
offsite DNS but hasn't told you, and you've got authoritative records
for his zone, you might be delivering mail locally, or to the wrong
place, and it can take a long time to figure this out.

Yes. However, that has to be weighed against the greater immunity to
cache poisoning in authoritative servers -- if a server *knows* it has
the real data, it has much stronger grounds for rejecting nonsense.
This is, in fact, one of the tests used.

    --Steven M. Bellovin

Because it prevents stale, authoritative data on your nameservers being returned to intermediate-mode resolvers in the form of apparently authoritative answers, bypassing a valid delegation chain from the root.

Stale data might be present due to a customer re-delegating a domain away from your nameservers without telling you, or from the necessity with some registries of having to set up a domain on the auth NS set before domain registration can proceed (or be denied). It might also be introduced deliberately, as described by you in this thread.

While periodically checking the zones your authority servers are hosting so that you know when they have been re-delegated away is a good idea, and can reduce the period during which bad answers get sent to clients from a combined auth/res server, segregating the two roles between different nameservers avoids returning *any* stale answers. (Using multiple instances of nameserver daemon running on the same host, bound to different addresses might well be sufficient; you don't necessarily need to add hardware.)
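The "multiple instances on the same host" arrangement mentioned above might be sketched roughly as follows, assuming BIND 9 syntax; the addresses and zone name are placeholders, and each instance would run from its own named.conf:

```
// Instance 1: authoritative only, bound to 192.0.2.1 (placeholder address)
options {
    listen-on { 192.0.2.1; };
    recursion no;           // never answer recursive queries
};
zone "example.com" {
    type master;
    file "example.com.db";
};

// Instance 2 (separate named.conf): recursive only, bound to 192.0.2.2
options {
    listen-on { 192.0.2.2; };
    recursion yes;
    allow-recursion { localnets; };   // serve only your own clients
};
```

The point is only that the recursive instance holds no authoritative zones, so it always follows the delegation chain from the root and can never hand out locally configured stale data.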

This reasoning is orthogonal to the observation that various species of DNS server software (including BIND) have, in the past, featured bugs for which a workaround is to keep authority/cache functions separate. For people using such software, however, this provides additional incentive.

Joe

> it is a best practice to separate authoritative and recursive servers.

why?

I'm not sure anyone can answer that question. I certainly can't.
Not completely, anyway. There are too many variables and motivations.

Some remember to say "Read RFC2010 section 2.12."

But since that's a document intended specifically for root server
operation, it's not as helpful to those of us that don't operate
roots.

This is about like saying, "Because Vixie wrote it."

e.g. a small isp has a hundred auth zones (secondaried far
away and off-net, of course) and runs cache. why should
they separate auth from cache?

Well, RFC2010 section 2.12 hints at cache pollution attacks, and that's
been discussed already. Note that I can't seem to find the same claim
in RFC2870, which obsoletes 2010 (and the direction against recursive
service is still there).

But in my own personal experience, I can still say without a doubt that
combining authoritative and iterative services is a bad idea.

Spammers resolve a lot of DNS names. Usually in very short order. As
short as they can possibly manage, actually. The bulk of the addresses
they have on their lists aren't even registered domain names.

Resolving some of these bogus domain names uses substantially more CPU
than you might think (spread out over several queries).

The result, at a previous place of employ that did not segregate
these services, was that our nameservers literally choked to death
every time our colocated customers hit us with a spam run.

The process's CPU utilization goes to 100%, queries start getting
retransmitted, and pretty soon our authoritative queries start getting
universally dropped because they're the vast minority of traffic in the
system (or the answer comes back so late the client has forgotten it
asked the question - has already timed out).

So if someone on our network was using our recursive nameservers to
resolve targets to spam, people couldn't resolve our names.

Even though our servers were geographically diverse, they were all
recursive - the miscreant clients would spam them all in harmony.

I guess you could say it made it easy to find and shut these miscreants
down.

But I'd much rather 'spammer detection' be based on something that does
not also take my own network down.

Now, certainly, designing a network around being impervious to "our
clients: the spammers" is not a strong motivation for everyone. But it
doesn't take a spammer to see the same series of events unfold. It can
just as easily be...say...a lame script in a server...handling error
conditions badly by immediately retransmitting the request (we got this
too - it flooded our servers with requests for a name within their own
domain without any pause in between...we kept having to call this customer
to reboot his NT box, putting their address space in and out of our
ACLs...a significant operational expense, and outages that affected the
majority of our customers...for a small colocation customer (not a
lot of cash)).

So I think this is pretty valid advice for pretty much anyone. It's
just a bad idea to expose your authoritative servers to the same
problems an iterative resolver is prone to.

> > it is a best practice to separate authoritative and recursive
> > servers.

> why?

I'm not sure anyone can answer that question. I certainly can't.
Not completely, anyway. There are too many variables and motivations.

[...]

Well, RFC2010 section 2.12 hints at cache pollution attacks, and that's
been discussed already. Note that I can't seem to find the same claim
in RFC2870, which obsoletes 2010 (and the direction against recursive
service is still there).

In an environment where customers may be able to add zones (such as a
web-hosting environment), not separating the two may cause problems when
local machines resolve off of the authoritative nameservers. This could
be due to someone maliciously or accidentally adding a domain they don't
control, or simply to someone setting up their domain prior to changing
over the nameservers.

w

it is a best practice to separate authoritative and recursive
servers.

why?

Because it prevents stale, authoritative data on your nameservers
being returned to intermediate-mode resolvers in the form of
apparently authoritative answers, bypassing a valid delegation chain
from the root.

and thereby hiding the fact that someone has either lame delegated
or i have forgotten to remove an auth zone, both cases i want to
catch. not a win here.

randy

Well, RFC2010 section 2.12 hints at cache pollution attacks, and that's
been discussed already. Note that I can't seem to find the same claim
in RFC2870, which obsoletes 2010 (and the direction against recursive
service is still there).

despite others saying that 2870 should apply to servers other
than root servers, i do not support that. and that leaves
aside that some root servers do not follow it very well.

randy

If someone has a lame delegation to one of your servers, that's a different problem (and the one that this thread began with). The link between that problem and the one I'm talking about is the decision to treat the former with bogus data as an incentive for the lame delegator to fix their records.

The impact of forgetting to remove a zone is greatly reduced if nobody ever has a reason to send a query for that data to your nameserver. To all intents and purposes, hosting random, non-delegated zones on an authority-only server doesn't break anything.

However, it's still a good idea to check (e.g. using a script) for forgotten zones, as you say, in the interests of good hygiene.
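One way such a hygiene script might be structured is sketched below. The comparison logic is the whole idea; the actual NS lookups (via dig, or a DNS library) are deliberately left out, and all names in the example are hypothetical:

```python
def stale_zones(configured_zones, delegated_ns, our_ns_names):
    """Return the zones we host whose parent-side delegation no longer
    points at any of our nameservers (i.e., forgotten or re-delegated).

    configured_zones: iterable of zone names we serve authoritatively
    delegated_ns: dict mapping zone name -> set of NS names seen at the
                  parent (empty set if the zone is not delegated at all)
    our_ns_names: set of our own nameserver hostnames
    """
    stale = []
    for zone in configured_zones:
        parent_ns = delegated_ns.get(zone, set())
        if not parent_ns & our_ns_names:
            stale.append(zone)
    return stale


# Example: b.example has been re-delegated away from us.
configured = ["a.example", "b.example"]
delegated = {
    "a.example": {"ns1.isp.example"},
    "b.example": {"ns1.other.example"},
}
ours = {"ns1.isp.example", "ns2.isp.example"}
print(stale_zones(configured, delegated, ours))
```

Run periodically, anything this reports is a candidate for removal from the authoritative configuration.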

Joe

I have to agree, with the exclusion that some people, having specific
requirements that are somewhat similar to root service requirements,
find 2870 and 2010 advice useful.

My intent here was to point out that all documented reasoning for this
practice is unfulfilling.

I'm curious if the rest of my response was lost on you due to its
verbosity?

I'm curious if the rest of my response was lost on you due to its
verbosity?

no. it seemed to apply in some cases, and not in others. so
it was useful info, but did not seem to be completely relevant
to the more unconditional original position "it is a best
practice to separate authoritative and recursive servers."

randy

Responding with stale data is, arguably, more damaging than failing
to respond at all.

So much so that the SOA expiry field serves to protect us from this
threat.

So, even though Randy is wrong for wanting to catch misconfigurations
by producing incorrect data, I also don't see where Joe is coming from.

If I hosted my domain with someone whose server was answering recursive
queries, I would probably use a lower value for expiry than I normally
would otherwise.

RFC 2870 was crafted at a time when the machines hosting the
  root zone also hosted several -large- TLD zones. Anycast was
  not widely used when this document was written. RFC 2010 did
  indicate that requirements would likely change in future, while
  RFC 2870 reinforced the then status quo.

  Perhaps the most fatal mistake of RFC 2870 was the ambiguous
  treatment of the service provisioning as distinctly different
  than protecting the availability of the (single?) instance of
  the hardware that provides that service.

  Given the changed nature of the publication platform for the root
  zone (no big TLDs hosted there anymore) and the wide-scale use of
  anycast in the root, while not with many TLDs - it is clear to me
  that RFC 2870 applicability is oriented more toward TLD operations.

  For these and a few other reasons, no root server operator that
  i am aware of (save ICANN) actually tries to follow RFC 2870...
  Several try and follow RFC 2010 still ... despite the I[E/V]TF's
  marking of "obsolete" on RFC 2010. That said, there might be a
  replacement for both offered up - if time allows.

--bill

at root, i am a naggumite. erik naggum was good at describing
why broken things should not be patched over. it's better to
amplify breakage if that's what it takes to get it fixed asap.
yes, this goes against "be liberal in what you accept." tough
patooties, that way lies entropic death.

an analogous gripe i have is "do-gooder" software, which also
applies to configuration and other policies. if do-gooderism
'successfully' compensates for an error, no one notices. when
it makes a mistake, everyone screams to the heavens and throws
mud. e.g. remember when ejk put in an interceptor cache to
give his customers seriously better performance?

pain is nature's way of telling us to take our hand off of the
stove.

randy

at root, i am a naggumite. erik naggum was good at describing
why broken things should not be patched over. it's better to
amplify breakage if that's what it takes to get it fixed asap.

In this case, the 'break' is only damaging if it is in the query
path. If it is not, it ultimately reaches the expiry timer and
becomes a non-issue for all involved.

Perpetual entropy leading to heat death is not achieved.

So, serving a zone that has a very large expiry on a recursive
nameserver is, in effect, putting your hand on the burner.

Remember, I'm on both sides of this fence:

Either use a low expiry on zones hosted by recursive nameservers,
or use any (probably large) expiry on authoritative-only servers.

an analogous gripe i have is "do-gooder" software, which also
applies to configuration and other policies. if do-gooderism
'successfully' compensates for an error, no one notices. when
it makes a mistake, everyone screams to the heavens and throws
mud. e.g. remember when ejk put in an interceptor cache to
give his customers seriously better performance?

Then I guess you won't be a fan of:

http://www.ietf.org/internet-drafts/draft-andrews-full-service-resolvers-01.txt

pain is nature's way of telling us to take our hand off of the
stove.

Above is an example of a software engineer (Mr. Andrews) choosing to
experience a kind of pain now (the ietf standards process), for others
to experience a kind of pain later (those who use rfc1918 space adopting
software implementing this), and for the as112 operators to experience
less pain until gradually it is retired.

Pain in this universe is absolute and eternal. All you can do is
choose for whom it is fair to experience how much of it.

Something like the Cyclops in Krull. Choose to get left behind in
the field to die peacefully, or get crushed in the stone doors saving
your friends.

The mechanics of the result is unimportant. The choice is.

Let me attempt to bring this back to the policy question.

Does someone have the *right* to put one of your IP addresses as an NS
record for their domain even if you do not agree?

Registrar policies imply that this is so, and has been this way for a
long time.

A number of years ago (like 8-10 or so) I had a student host a domain on
my campus that I rather they not host. When I requested the registrar
(or registrar equivalent at the time) to remove the domain, or at least
the NS record pointing at my IP address, they refused. Their position
was that if I didn't like the domain, I should block access to the IP
address. I solved the problem another way...

Presumably this would work today, but it takes the affected IP address
out of action and Drew's goal, presumably, is to get the IP address back
in use without cruft heading its way.

Is this a good policy? I can argue it either way myself...

      -Jeff

Once upon a time, registrar/er policies did NOT allow this. NSI used to
have "GUARDIAN" which controlled who could register domain names
listing name servers with your IP addresses. Unfortunately, NSI never
completely implemented guardian; and it pretty much completely
disappeared after the trademark lawyers took over.

That's a little over-broad considering the number of registries there are (and have been, for a long time). I think it's fair to say that even if this was once the case for COM/NET/ORG registries, there are many more registries where this was never close to being true.

It seems to me that if someone else chooses to insert 32- or 128-bit integers of their choice into their zone files, then there's properly very little I can or should be able to do about it. But that's just me.

Joe