>Why not restrict first-level domains to companies
>that can demonstrate that they have 1000+ hosts?
>Creating a problem to solve a problem is not a solution.
>Paul already said that a 16MB 486 can handle root DNS just fine.
I have said that a 16MB 486 can handle "." just fine. To handle everything
in COM, EDU, et al., takes about 100MB just for named. And it'll soon double
again, so we're "working the issue."
Here's today's "top" display on f.root-servers.net:
Memory: Real: 96M/103M Virt: 100M/176M Free: 7616K
PID USERNAME PRI NICE SIZE RES STATE TIME WCPU CPU COMMAND
26822 root 2 0 95M 95M sleep 23.0H 12.65% 12.65% named
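That ~95MB resident size is easy to sanity-check with a back-of-envelope
estimate. The delegation count and per-name byte cost below are illustrative
assumptions, not measurements from f.root-servers.net:

```python
# Rough estimate of named's in-core footprint for a large zone.
# Both numbers below are assumptions for illustration only.

def zone_memory_mb(delegations, bytes_per_delegation=200):
    """Approximate RAM to hold a zone's delegation data in memory.

    bytes_per_delegation is a guess covering the NS RRs, glue A
    records, and per-name hash/tree overhead inside named.
    """
    return delegations * bytes_per_delegation / (1024 * 1024)

# ~500,000 second-level delegations at ~200 bytes each lands in
# the same ballpark as the resident size shown by "top" above.
print(round(zone_memory_mb(500_000)))  # 95
```

Doubling the delegation count doubles the footprint, which is the
"it'll soon double again" concern in a nutshell.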
>If you feel the growth of domain names is such that it will
>outstrip a 486 w/16MB soon, tell me when it will be a
>SIGNIFICANT problem. I.e., when will it outstrip a real
>machine (Sun, VAX, Alpha, SGI) with real memory (64MB?)
>I think Vadim's point was that if we can require folks to show utilization
>before we will give them their own non-provider-based CIDR block, why can't
>we require them to show utilization before they are allowed to have a
>second-level domain?
The answers aren't fun, but they're real: CIDR was forced by address economics.
If the routing table could handle hundreds of thousands (soon to be millions)
of little prefixes, and address space were unlimited and effectively free, we
would never have implemented CIDR. The DNS economics are very different.
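To make the routing-table economics concrete, here is a small sketch of the
aggregation CIDR enables, using Python's standard ipaddress module; the
addresses are invented for illustration:

```python
import ipaddress

# 256 contiguous "little prefixes" (old class-C style /24s) that,
# pre-CIDR, would each occupy a routing-table entry.
prefixes = [ipaddress.ip_network(f"192.168.{i}.0/24") for i in range(256)]

# CIDR lets a provider advertise them as one aggregate,
# turning 256 routing-table entries into a single /16.
aggregated = list(ipaddress.collapse_addresses(prefixes))
print(aggregated)  # [IPv4Network('192.168.0.0/16')]
```

Scarcity of routing slots and addresses is what forced this aggregation;
a DNS zone, by contrast, just grows a flat list of delegations, so the
pressure on it is memory, not table structure.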
One of my clients registered SIKHISM.COM and asked me if I would guest-serve
the WWW.SIKHISM.COM page. This being a religious organization, I refused, and
demanded that he first purge the erroneous domain and register SIKHISM.ORG
instead, which he did, and so I am now serving it.
I wish the average ISP would be as insistent on proper names.