ns1.twtelecom.net and ns2.twtelecom.net (along with some other DNS servers, ns1.orng.twtelecom.net and ns1.ptld.twtelecom.net) suddenly stopped serving DNS this morning for domains they're not authoritative for. Requests are being actively refused from within their network.
Caused a small issue for us, just thought I'd pass along.
they were recursing previously and are no longer? that seems like a
win... or did I misconstrue what you said?
They definitely do use ns1.twtelecom.net and ns2.twtelecom.net for hosting purposes (which probably shouldn't recurse), but they also generically recommend that their clients use them for recursion. Whatever the issue, every one of their DNS servers that I tested lost the ability to recurse for about an hour. They are *mostly* working at this point, but not consistently.
Not a huge operational issue, but I'm sure there are some folks that this hit a little bit.
As Chris indicates, it would be a big win if recursion were disabled on the authoritative servers, and instead handled by dedicated caching-only recursors which would only answer queries from within their network.
Not if you're an end user who was configured, for some reason, to use them
as a recursive server... which is what I infer from the fact that he posted
it. In which case, it would be useful for Wil to provide us the IP
addresses of those servers as he understands them, since that is what such
affected users would have programmed...
Oh sure, here are the ones that I tested and can confirm were down. Well, not "down" but actively refusing queries.
ns1.twtelecom.net (126.96.36.199, 2001:4870:6082:3::5)
ns2.twtelecom.net (188.8.131.52, 2001:4870:8000:3::5)
ns1.twtelecom.net and ns2.twtelecom.net are well known to be authoritative for a number of domains, and I would have presumed that the rest would be their recursive servers. I found an old welcome letter from them that states ns1.twtelecom.net and ns2.twtelecom.net were the preferred forwarders on the circuit.
Some other network segments use their other resolvers but weren't affected because our internal boxes cache. I can't speak to why they have it set up this way, but that's the list I have, and every single one of them was failing from both within and outside of their network. In my testing, however, authoritative requests were never denied.
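For anyone who wants to reproduce that kind of check themselves, here is a rough stdlib-only sketch (not anything twtelecom-specific; the server IP and query name in the usage line are placeholders). It sends a query with the RD (recursion desired) bit set and reports whether the server advertised RA (recursion available) and whether it answered REFUSED:

```python
import socket
import struct

REFUSED = 5  # DNS RCODE "Refused" (RFC 1035)

def build_query(name, qtype=1, recursion_desired=True):
    """Build a minimal DNS query packet for `name` (QTYPE A, QCLASS IN)."""
    flags = 0x0100 if recursion_desired else 0x0000  # RD bit
    header = struct.pack(">HHHHHH", 0x1234, flags, 1, 0, 0, 0)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)

def parse_flags(response):
    """Return (recursion_available, rcode) from a DNS response header."""
    flags = struct.unpack(">H", response[2:4])[0]
    ra = bool(flags & 0x0080)   # RA bit
    rcode = flags & 0x000F
    return ra, rcode

def check_recursion(server, name, timeout=3):
    """Query `server` with RD set; return (recursion_available, was_refused)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(build_query(name), (server, 53))
        data, _ = sock.recvfrom(512)
    finally:
        sock.close()
    ra, rcode = parse_flags(data)
    return ra, rcode == REFUSED

# Example (placeholder server/name):
#   ra, refused = check_recursion("192.0.2.1", "example.com")
```

The same probe against a name the server is authoritative for, versus an off-server name, is how you'd distinguish "refusing recursion" from "down".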
There was a complete outage for a bit over an hour, then it was intermittent for a couple of hours after that. Also, good or bad, all of the above servers will recurse once again, both from on and off their network. Their NOC gave me a resolution of "Added an ACL to block an IP address".
Regardless, I'm not using their resolvers anymore but thought it would be helpful in case anyone else saw a segment of their network start yammering about facebook and twitter being down.