Maybe a dumb idea on how to fix the DNS problems, I don't know...

> Pretending for a moment that it was even possible to make such large
> scale changes and get them pushed into a large enough number of clients
> to matter, you're talking about meltdown at the recurser level, because
> it isn't just one connection per _computer_, but one connection per
> _resolver stub_ per _computer_ (which, on a UNIX machine, would tend to
> gravitate towards one connection per process), and this just turns into
> an insane number of sockets you have to manage.
  
> Couldn't the resolver libraries be changed to not use multiple connections?

I think that the text I wrote clearly assumes that there IS only one
connection per resolver instance. The problem is that hostname to IP
lookup is pervasive in a modern UNIX system, and is probably pretty
common on other platforms, too, so you have potentially hundreds or
thousands of processes, each eating up additional system file descriptors
for this purpose.

I cannot think of any reason that init, getty, sh, cron, or a few other
things on a busy system would need to use the resolver library - but that
leaves a whole ton of things that can and do.
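
To make the descriptor arithmetic concrete, here is a rough sketch of what a
persistent-TCP stub would have to carry around in every process. This is my
own illustration, not any real libc's internals; the 127.0.0.1 resolver
address and the name resolver_connect() are assumptions for the example.

#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* One descriptor, held for the life of the process.  Multiply by every
 * hostname-using process on the box, and again by every box behind the
 * recurser. */
static int resolver_fd = -1;

int resolver_connect(void)
{
    struct sockaddr_in sa;

    if (resolver_fd >= 0)
        return resolver_fd;              /* already connected: reuse it */

    resolver_fd = socket(AF_INET, SOCK_STREAM, 0);
    if (resolver_fd < 0)
        return -1;

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(53);
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);  /* assumed resolver */

    if (connect(resolver_fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        close(resolver_fd);
        resolver_fd = -1;
        return -1;
    }
    return resolver_fd;                  /* stays open until exit() */
}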

Now, of course, you can /change/ how everything works. Stop holding open
connections persistently, and a lot of the trouble is reduced. However,
anyone who has done *any* work in the area of TCP services that are open
to the public will be happy to stamp "Fraught With Peril" on this little
project - and to understand why, I suggest you research all the work that
has been put into defending services like http, irc, etc.
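
For what it's worth, the wire side of this is at least simple: RFC 1035
(section 4.2.2) frames each DNS message over TCP with a two-octet length
prefix. Here is a minimal sketch of one exchange - my code, with a made-up
function name, assuming the query packet is already built and fd is a
freshly connected socket:

#include <stdint.h>
#include <unistd.h>
#include <arpa/inet.h>

/* One DNS exchange over an already-connected TCP socket.  Per RFC 1035
 * section 4.2.2, each message is preceded by a two-octet length field. */
int tcp_dns_exchange(int fd, const uint8_t *query, uint16_t qlen,
                     uint8_t *reply, uint16_t replymax)
{
    uint16_t len = htons(qlen);

    /* length prefix, then the query itself */
    if (write(fd, &len, 2) != 2 ||
        write(fd, query, qlen) != (ssize_t)qlen)
        return -1;

    /* reply length prefix, then the reply (a real implementation would
     * loop here, since TCP reads can come back short) */
    if (read(fd, &len, 2) != 2)
        return -1;
    len = ntohs(len);
    if (len > replymax)
        return -1;

    return (int)read(fd, reply, len);
}

The catch is everything around it: every lookup now pays the three-way
handshake, and the server inherits the connection-state headaches that the
http and irc people have spent years engineering around.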

... JG

Joe Greco wrote:

> > Pretending for a moment that it was even possible to make such large
> > scale changes and get them pushed into a large enough number of clients
> > to matter, you're talking about meltdown at the recurser level, because
> > it isn't just one connection per _computer_, but one connection per
> > _resolver stub_ per _computer_ (which, on a UNIX machine, would tend to
> > gravitate towards one connection per process), and this just turns into
> > an insane number of sockets you have to manage.
>
> > Couldn't the resolver libraries be changed to not use multiple
> > connections?
>
> I think that the text I wrote clearly assumes that there IS only one
> connection per resolver instance. The problem is that hostname to IP
> lookup is pervasive in a modern UNIX system, and is probably pretty
> common on other platforms, too, so you have potentially hundreds or
> thousands of processes, each eating up additional system file descriptors
> for this purpose.

Well, the way I read what you first wrote, it implied that resolvers are now going to DoS servers with millions of connections because each resolver stub makes its own TCP connection... I say that if this is true, it can and should be changed.

Now you say that file descriptors on the client are going to run out. Isn't that changing the topic? And is it even really a problem?

So each process that needs to do a lookup opens a file descriptor for a TCP connection, right? Whereas with UDP we don't have to do this. Is this what I'm hearing you say? That I understand. (Hmm, don't UDP lookups take sockets too? Not being sarcastic here... just asking.)
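
Partially answering myself: as far as I know a UDP lookup does consume a
descriptor too - the difference is that it lives for one round trip instead
of the life of the process. A sketch of the usual pattern, again with an
assumed 127.0.0.1 resolver and a made-up function name:

#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/in.h>

ssize_t udp_dns_exchange(const uint8_t *query, size_t qlen,
                         uint8_t *reply, size_t replymax)
{
    struct sockaddr_in sa;
    ssize_t n;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);  /* descriptor allocated... */

    if (fd < 0)
        return -1;

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(53);
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

    if (sendto(fd, query, qlen, 0,
               (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        close(fd);
        return -1;
    }
    n = recvfrom(fd, reply, replymax, 0, NULL, NULL);
    close(fd);                                /* ...and released right here */
    return n;
}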

And it is a good point, but is this client-side file descriptor use an insurmountable problem? Also, what about the millions of connections to the server? Is it really necessary for a DNS resolver on one system to open more than one TCP connection to its caching DNS server?
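
What I am picturing is one shared connection per process, with lookups
serialized over it. A hypothetical sketch that builds on the pieces above -
not any real resolver library's internals:

#include <pthread.h>
#include <stdint.h>

/* defined in the earlier sketches */
int resolver_connect(void);
int tcp_dns_exchange(int fd, const uint8_t *q, uint16_t qlen,
                     uint8_t *r, uint16_t rmax);

static pthread_mutex_t resolver_lock = PTHREAD_MUTEX_INITIALIZER;

/* Every lookup in the process funnels through the same descriptor, one
 * query at a time, instead of opening a connection of its own. */
int shared_lookup(const uint8_t *q, uint16_t qlen,
                  uint8_t *r, uint16_t rmax)
{
    int n;

    pthread_mutex_lock(&resolver_lock);
    n = tcp_dns_exchange(resolver_connect(), q, qlen, r, rmax);
    pthread_mutex_unlock(&resolver_lock);
    return n;
}

You could get fancier and multiplex outstanding queries by DNS message ID
instead of serializing them, and getting down to one connection per system
rather than per process would take a local caching daemon in front of
everything - but the point is that the descriptor count does not have to
scale with the number of lookups.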

I'm not saying that caching DNS servers should keep open TCP connections to authoritative name servers! OK? But how much latency do you add to an uncached recursive lookup by changing to TCP?
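
For anyone who wants to eyeball that: dig will use TCP if you pass it +tcp,
so comparing the query time it reports for an uncached name with and without
the flag gives a rough feel for the cost. At minimum you pay one extra round
trip for the handshake before the question can even be sent.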

CP