Since I'm watching B5 again on DVD....
I was there at the dawning of the age of 4.2.2.1! :slight_smile:
We did it, and by "we" I mean Brett McCoy and myself. But most of the credit/blame goes to Brett... I helped him, but at the time I was mostly working on getting our Mail relays working right. This was about 12 years ago, around 1998. I left Genuity in 2000, and am back at BBN/Raytheon now. I remember we did most of the work after we moved out of Cambridge and into Burlington.
Genuity/GTEI/Planet/BBN owned 4/8. Brett went looking for an IP that was simple to remember; I think 4.4.4.4 was in use by neteng already. It was picked to be easy to remember. I think jhawk had put a hold on the 4.2.2.0/24 block, and we got/grabbed 3 addresses, 4.2.2.1, 4.2.2.2, and 4.2.2.3, so people had 3 addresses to go to. At the time people had issues with just using a single resolver. We also had issues with both users and registrars, since clearly the addresses aren't geographically diverse; trying to explain routing tricks to people who KNOW all IPs come in and are routed as Class A/B/C blocks is hard.
NIC.Near.Net was our primary DNS server for years before I transferred to Planet from BBN. It wasn't even in 4/8; I think it was in 128.89 (BBN Corp space), but I'm not sure. BBN didn't start to use 4/8 till the Planet build-out, and NIC.near.net predates that by at least 10 years.
I still have the power cord from NIC.near.net in my basement. That machine grew organically, with every service known to mankind running on it, plus special one-off things for customers. It took us literally YEARS to get that machine turned off. When we finally got it off, I took the power cord so no one would "help" us by turning it back on. I gave the cord to Chris Yetman, who was the director of operations, and told him that if a customer screams, he has the power to turn it back on. A year or so later, he gave the cord back to me.
Yes, we set up 4.2.2.1 as a public resolver. We figured trying to filter it was a bigger headache than just making it public.
It was always pretty robust due to the BIND code (thanks to ISC) and the fact that it was always IPv4 anycast.
I don't know about now, but originally it was IPv4 anycast. Each server advertised routes for 4.2.2.1, .2, and .3 at different costs, and the routers would listen to the routes. Originally the startup code was, basically:
```
advertise route to 4.2.2.1, 4.2.2.2, and 4.2.2.3
run bind in foreground mode
drop route to 4.2.2.1, 4.2.2.2, and 4.2.2.3
```
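The sequence above can be sketched as a small wrapper script. This is a hypothetical reconstruction, not the original code: the route-injection mechanism the routers actually listened to isn't described here, so it is abstracted into two shell functions, and `BIND_CMD` is a placeholder for running bind in the foreground.

```shell
#!/bin/sh
# Hypothetical sketch of the startup wrapper described above. The real
# route-injection commands aren't shown in the post, so they are
# abstracted into functions; on a real host these would talk to a
# routing daemon.

ANYCAST_ADDRS="4.2.2.1 4.2.2.2 4.2.2.3"

advertise_routes() {
    for a in $ANYCAST_ADDRS; do
        echo "advertise route to $a"   # stand-in for real route injection
    done
}

drop_routes() {
    for a in $ANYCAST_ADDRS; do
        echo "drop route to $a"        # stand-in for real route withdrawal
    done
}

advertise_routes
# Run the resolver in the foreground so the script blocks until it exits;
# on a real server this would be something like `named -f`. BIND_CMD is a
# harmless placeholder so the sketch runs anywhere.
${BIND_CMD:-sleep 0}
drop_routes
```

The key property is that the routes are withdrawn the moment the foreground bind process exits, so a dead resolver stops attracting anycast traffic.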
Then we had a Tivoli process that tried to restart bind, but rate-limited the restarts. That way, if bind died, the routes would drop.
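The rate-limited restart behavior can be approximated with a small loop like the one below. This is a sketch under assumptions: the actual Tivoli policy isn't described, so `MIN_INTERVAL`, `seconds_to_wait`, and `restart_loop` are all illustrative names, and the loop is made finite so the example terminates.

```shell
#!/bin/sh
# Hypothetical sketch of rate-limited restarts (the real setup used a
# Tivoli process). Restarts are spaced at least MIN_INTERVAL seconds
# apart, so a crash-looping bind doesn't flap the anycast routes
# continuously.

MIN_INTERVAL=${MIN_INTERVAL:-60}

seconds_to_wait() {
    # args: last_start_epoch now_epoch -> prints how long to sleep
    elapsed=$(( $2 - $1 ))
    if [ "$elapsed" -ge "$MIN_INTERVAL" ]; then
        echo 0
    else
        echo $(( MIN_INTERVAL - elapsed ))
    fi
}

restart_loop() {
    # $1 = number of runs (keeps the demo finite); rest = command to
    # supervise, e.g. the advertise/run/drop wrapper.
    runs=$1; shift
    last=0
    i=0
    while [ "$i" -lt "$runs" ]; do
        wait_s=$(seconds_to_wait "$last" "$(date +%s)")
        [ "$wait_s" -gt 0 ] && sleep "$wait_s"
        last=$(date +%s)
        "$@"                  # each run re-advertises, runs bind, drops
        i=$(( i + 1 ))
    done
}
```

Because the supervised command only advertises routes while bind is up, rate-limiting the restarts also rate-limits how often the routes can flap.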
johno