What are these Google IPs hammering on my DNS server?

At contacts.abuse.net, I have a little stunt DNS server that provides domain contact info, e.g.:

$ host -t txt comcast.net.contacts.abuse.net
comcast.net.contacts.abuse.net descriptive text "abuse@comcast.net"

$ host -t hinfo comcast.net.contacts.abuse.net
comcast.net.contacts.abuse.net host information "lookup" "comcast.net"

Every once in a while someone decides to look up every domain in the
world and DoS'es it until I update my packet filters. This week it's
been this set of IPs that belong to Google. I don't think they're
8.8.8.8. Any idea what they are? Random Google Cloud customers? A
secret DNS mapping project?

172.253.1.133
172.253.206.36
172.253.1.130
172.253.206.37
172.253.13.196
172.253.255.36
172.253.13.197
172.253.1.131
172.253.255.35
172.253.255.37
172.253.1.132
172.253.13.193
172.253.1.129
172.253.255.33
172.253.206.35
172.253.255.34
172.253.206.33
172.253.206.34
172.253.13.194
172.253.13.195
172.71.125.63
172.71.117.60
172.71.133.51

R's,
John

Back when I had my honeypot firewall off everything that crossed its threshold, I ended up blocking myself from a variety of authoritative servers, including Google's.

They are probably spoofed IPs, so those are the target IPs of a DDoS.

What kind of amplification factor does your DNS server have? I bet with the changes you've made, it's super high. People are looking for DNS servers like that.

Tom

Did a bit of digging on Google’s developer site and came across this: https://developers.google.com/speed/public-dns/faq#locations_of_ip_address_ranges_google_public_dns_uses_to_send_queries

Looks like the IPs you mentioned belong to Google's public DNS resolver based on that list on their site. They could also be spoofed, though, as part of a DNS amplification attack, so keep that in mind.

> They are probably spoofed IPs, so those are the target IPs of a DDoS.
>
> What kind of amplification factor does your DNS server have? I bet with the changes you've made, it's super high. People are looking for DNS servers like that.

On the contrary, the response packets are tiny.

$ host -t txt comcast.net.contacts.abuse.net
comcast.net.contacts.abuse.net descriptive text "abuse@comcast.net"

$ host -t hinfo comcast.net.contacts.abuse.net
comcast.net.contacts.abuse.net host information "lookup" "comcast.net"

Those reply packets are 108 and 109 bytes, no additional section, no DNSSEC, no nothing.
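(If anyone wants to reproduce the arithmetic, here's a quick dnspython sketch that compares wire sizes; the server address is a documentation placeholder, not one of the real servers:)

import dns.message
import dns.query

# Measure the UDP amplification factor: response bytes vs. query bytes.
# 192.0.2.53 is a placeholder address, not the real contacts.abuse.net NS.
q = dns.message.make_query("comcast.net.contacts.abuse.net.", "TXT")
r = dns.query.udp(q, "192.0.2.53", timeout=5)
qlen, rlen = len(q.to_wire()), len(r.to_wire())
print(f"query {qlen}B, response {rlen}B, amplification {rlen / qlen:.1f}x")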

Any other ideas? One clue is that the queries have random capitalization, which would be consistent with them really coming from Google.
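(For reference: that's the 0x20 trick, where a resolver randomizes the case of the query name as an anti-spoofing measure. A trivial way to flag it in a query log; the example names below are made up:)

import re

def looks_0x20_randomized(qname: str) -> bool:
    """True if the qname mixes upper- and lower-case letters, as
    0x20-style case randomization does."""
    letters = re.sub(r"[^A-Za-z]", "", qname)
    return letters != letters.lower() and letters != letters.upper()

# Example qnames as they might appear in a query log (made up):
for name in ("CoMcAst.NeT.CoNtAcTs.AbUsE.nEt.",
             "comcast.net.contacts.abuse.net."):
    print(name, looks_0x20_randomized(name))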

172.253.X.X are Google DNS : https://www.gstatic.com/ipranges/publicdns.json

172.71.X.X are Cloudflare : https://www.cloudflare.com/ips-v4/

> Did a bit of digging on Google's developer site and came across this:
> https://developers.google.com/speed/public-dns/faq#locations_of_ip_address_ranges_google_public_dns_uses_to_send_queries
>
> Looks like the IPs you mentioned belong to Google's public DNS resolver
> based on that list on their site. They could also be spoofed, though, as
> part of a DNS amplification attack, so keep that in mind.

Per my recent message, the replies are tiny, so if it's an amplification attack, it's a very incompetent one. The queries are case-randomized, so I guess it's really Google. Sigh.

If anyone is wondering, I have a passive-aggressive countermeasure against some overqueriers that returns ten NS referral names, and then 25 random IP addresses for each of those names, but I don't do that to Google.
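In rough dnspython terms the trick looks something like this; the zone and addresses are made up for illustration, not the production code:

import random
import dns.message
import dns.rrset

def tarpit_referral(query: dns.message.Message) -> dns.message.Message:
    """Answer with ten NS names under a made-up zone, each glued to
    25 random addresses, as described above."""
    resp = dns.message.make_response(query)
    ns_names = [f"ns{i}.tarpit.example." for i in range(10)]
    resp.authority.append(dns.rrset.from_text(
        str(query.question[0].name), 3600, "IN", "NS", *ns_names))
    for name in ns_names:
        glue = [".".join(str(random.randrange(1, 255)) for _ in range(4))
                for _ in range(25)]
        resp.additional.append(
            dns.rrset.from_text(name, 3600, "IN", "A", *glue))
    return resp

An overquerier that believes the referral then wastes its time chasing 250 dead nameservers.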

R's,
John

> Every once in a while someone decides to look up every domain in the
> world and DoS'es it until I update my packet filters. This week it's
> been this set of IPs that belong to Google.

John-

This is little consolation, but at AS3128, I see the same thing to our downstream at times, claiming to come from both 13335 and 15169, often simultaneously, to the tune of 25Kpps, "assuming it's not spoofed", which is pragmatically impossible to prove for me given our indirect relationships with these companies. When I see these events, I typically also see a wide variety of country codes participating simultaneously. Again, assuming it's not spoofed. To me it just looks like effective harassment with 13335/15169 helping out. I pine for the internet of the 1990s.

Recent events in GMT for us were the following; curious if you see the same:
~ Nov 26 05:40
~ Nov 30 00:40
~ Nov 30 05:55

Application-agnostic and on the low-$ end for "fixes": if it's either do something or face an outage, I've found some utility in short-term automated DSCP coloring on ingress paired with light-touch policing as close to the end host as possible, which at least keeps things mostly working during times of conformance. Cheap/fast and working ... most of the time. Definitely not great or complete at all, and a role I'd rather not play as an educational ISP/enterprise.

So what are most folks doing to survive crap like this? Nothing/waiting it out? Outsourcing DNS? Scrubbing appliance? Poor-man's stuff like I mention above?

-Michael

> This is little consolation, but at AS3128, I see the same thing to our downstream at times, claiming to come from both 13335 and 15169, often simultaneously, to the tune of 25Kpps, "assuming it's not spoofed", which is pragmatically impossible to prove for me given our indirect relationships with these companies. When I see these events, I typically also see a wide variety of country codes participating simultaneously. Again, assuming it's not spoofed. To me it just looks like effective harassment with 13335/15169 helping out. I pine for the internet of the 1990s.

Assuming it's really Google and Cloudflare, it is probably not malicious, just very inept mail admins.

They assume that abuse.net is some sort of DNSBL so they configure their mail server to query it for every domain in every message they see, even though the results are useless. I have never been able to get anyone who does this to stop.

It's not unlike the multirbl page at valli.org, which proves the truism that any idiot can run a blacklist and many idiots do. He included the abuse.net results, and despite a warning right next to the results saying it's not a blacklist, I got a stream of outraged people insisting I was personally blocking their mail. So I was finally able to get him to take it out by returning this custom result:

'Blacklisted. To remove send $100 to xx@valli.org'

R's,
John

Just set TC=1 for those clients. If you get queries over TCP, then they were not spoofed. If they are using DNS COOKIE (RFC 7873), you can send back BADCOOKIE to the initial (client-cookie-only) UDP request with your server cookie. Identifying real DNS clients has been possible for years now. It's not hard.
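A minimal dnspython sketch of the TC=1 idea, assuming you're answering raw UDP datagrams yourself:

import dns.flags
import dns.message

def truncated_reply(wire: bytes) -> bytes:
    """For a suspect source, answer UDP with an empty TC=1 response.
    A genuine client retries over TCP, which proves its source
    address; a spoofed source never sees the reply."""
    query = dns.message.from_wire(wire)
    resp = dns.message.make_response(query)
    resp.flags |= dns.flags.TC
    return resp.to_wire()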

> Just set TC=1 for those clients. If you get queries over TCP, then they were not spoofed. If they are using DNS COOKIE (RFC 7873), you can send back BADCOOKIE to the initial (client-cookie-only) UDP request with your server cookie. Identifying real DNS clients has been possible for years now. It's not hard.

I could do that but with the other clues I think it's unlikely they're spoofed and far more likely they're real traffic from clueless users.

Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly

Google Public DNS (8.8.8.8) attempts to identify and filter abuse, and while we think we're fairly effective for large attacks (e.g., those above 1Mpps), it gets more challenging (due to the risk of false positives) to adequately filter small attacks. I should note that we generally see the attack traffic coming from botnets, or from forwarding resolvers that blend the attack traffic with legitimate traffic.

Based on ISC BIND load-tests [0], a single DNS server can handle O(1Mpps). Also, no domain should be served by a single DNS server, so O(1Mpps) seems like a safe lower-bound for small administrative domains (larger ones will have more redundancy/capacity). Based on these estimates, we haven’t treated mitigation of small attacks as a high priority. If O(25Kpps) attacks are causing real problems for the community, I’d appreciate that feedback and some hints as to why your experience differs from the ISC BIND load-tests. With a better understanding of the pain-points, we may be able to improve our filtering a bit, though I suspect we’re nearing the limits of what is attainable.

Since it was mentioned up-thread, I’d caution against dropping queries from likely-legitimate recursives, as that will lead to a retry storm that you won’t like (based on a few reports of authoritatives who suffered outages, the retry storm increased demand by 30x and they initially misdiagnosed the root cause as a DDoS). The technically correct (if not entirely practical) mitigation for a DNS cache-busting attack laundered through open recursives is to deploy DNSSEC and issue NSEC/NSEC3 responses to allow the recursives to cache the non-existence of the randomized labels.

[0] https://www.isc.org/blogs/bind-performance-september-2023/
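(To illustrate the cacheable denial-of-existence material: here's a dnspython query against a signed zone; the resolver and random label are arbitrary choices.)

import dns.message
import dns.query
import dns.rcode
import dns.rdatatype

# Ask for a (presumably nonexistent) random label with the DNSSEC OK
# bit set. A signed zone answers NXDOMAIN with NSEC/NSEC3 records that
# resolvers can cache to suppress queries for nearby random names
# (RFC 8198 aggressive NSEC caching).
q = dns.message.make_query("qjzv7x-random-label.example.com.", "A",
                           want_dnssec=True)
r = dns.query.udp(q, "8.8.8.8", timeout=5)
print(dns.rcode.to_text(r.rcode()))
for rrset in r.authority:
    if rrset.rdtype in (dns.rdatatype.NSEC, dns.rdatatype.NSEC3):
        print(rrset)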

Damian

Thanks for your note.

Here's my problem, which I freely admit puts me way out at the tail of the weird curve. I run abuse.net which lets you look up abuse reporting addresses for domains. If you look up, say, bt.co.uk or mail.bt.co.uk, it'll look the domain up in its internal database and tell you to send reports to abuse@bt.com.

I provide lookups via a web site and a whois server, but it occurred to me a while ago that it'd be much faster for everyone if I made a stunt DNS server that does the lookups and synthesizes the answers, e.g.:

$ dig mail.bt.co.uk.contacts.abuse.net txt

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;mail.bt.co.uk.contacts.abuse.net. IN TXT

;; ANSWER SECTION:
mail.bt.co.uk.contacts.abuse.net. 43200 IN TXT "abuse@bt.com"

The DNS server is a perl script I wrote a while ago that synthesizes answers on the fly. It can't be a normal DNS server because the mapping from queries to responses is more complex than you can express with DNS wildcards, and if a domain isn't in the database it returns a default of abuse@<domain>.
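The real thing is perl, but the shape of it fits in a couple dozen lines of Python with dnspython; the one-entry database and the port are stand-ins:

import socket
import dns.flags
import dns.message
import dns.rrset

CONTACTS = {"bt.co.uk": "abuse@bt.com"}  # stand-in for the real database
SUFFIX = ".contacts.abuse.net"

def lookup(domain: str) -> str:
    # Walk up the labels so mail.bt.co.uk matches the bt.co.uk entry,
    # falling back to the abuse@<domain> default.
    labels = domain.split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in CONTACTS:
            return CONTACTS[candidate]
    return "abuse@" + domain

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5353))  # 5353 so the sketch runs without root
while True:
    wire, addr = sock.recvfrom(512)
    query = dns.message.from_wire(wire)
    qname = query.question[0].name
    resp = dns.message.make_response(query)
    resp.flags |= dns.flags.AA
    name = str(qname).lower().rstrip(".")
    if name.endswith(SUFFIX):  # outside the suffix: empty answer
        resp.answer.append(dns.rrset.from_text(
            str(qname), 43200, "IN", "TXT", lookup(name[: -len(SUFFIX)])))
    sock.sendto(resp.to_wire(), addr)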

I have two servers on two networks and normally it works fine until some nitwit does a query flood, probably looking up every domain in every message they see, or maybe an inept listwasher, and the two little perl scripts just can't keep up.

What I would like is if large public DNS systems like yours refused to look up anything in contacts.abuse.net, and I tell people that if they want to use the DNS lookup, use your own DNS cache, similar to what DNSBLs do.

I suppose I could try and do a split horizon hack on the parent server (abuse.net itself is on ordinary NSD servers) and say the NS for contacts.abuse.net is at 127.0.0.1, but as we've seen it's a challenge keeping track of all the places your queries can come from.

Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly

Damian-

Not Google's or ISC's fault; our customers have made some decisions that have exacerbated the issues. Far and away the biggest problem facing my customers is that they have chosen a stateful border firewall that collapses due to session exhaustion, and they put everything, including aDNS, behind said firewall. "If it hurts, don't do it" comes to mind, but it's out of my hands.

At a quick glance following the ISC link, I didn't see the compute infrastructure [core count] needed to get 1Mpps. There is an obvious difference between 99% load of ~500rps and 1M, so we can maybe advise not to undersize aDNS if that's an issue.

I’m an ISP engineer and am generally not the directly affected party, so I don’t get to pick these implementation details for my customers. I appreciate the background and suggestions from you and others on this thread like Mark. That’s an interesting comment about DNSSEC that I hadn’t considered.

-Michael

The system under test in ISC's perflab is a 12-core Dell R430 of 2016 vintage.

Ray

is the test framework documented where others could set up/run the
test(s)? :-) (perhaps for mr hare I mean, or me! :-))
Are the tests for authoritative or cache resolvers?

-chris

> is the test framework documented where others could set up/run the test(s)? :-) (perhaps for mr hare I mean, or me! :-))
>
> Are the tests for authoritative or cache resolvers?

Originally it was just for auth, but there's some recursive support too.

I wrote the framework, but these days I'm too busy running F-root so the BIND QA guys maintain it.

Ray