Open Resolver Problems

So, how would Patrick's caveat affect me, whose recursive resolver *is
on my Linux laptop*? Would not that recursor be making queries he
advocates blocking?

Or don't I remember DNS well enough?

Cheers,
-- jra

You're sending queries, not replies. That's why DPI is needed to do the
blocking, rather than just by port.

Aha. Right. Got it.

Cheers,
-- jra

What queries are sourced from port 53 nowadays?

I'd imagine it's pretty safe to block Internet->customer UDP/53 packets.

The badness that Patrick is talking about blocking is DNS responses being sent from consumer devices to the Internet, answering DNS queries being sent from the Internet towards consumer devices. (I think. This thread is sufficiently circular that I feel a bit dizzy, and could be mistaken.) The DNS traffic outbound from your laptop will be DNS queries (not responses) and the inbound traffic will be DNS responses (not queries). The traffic profiles are different.
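
For what it's worth, the two inbound profiles are easy to eyeball (or filter) by port and direction. A rough sketch, assuming em0 faces the Internet and 192.0.2.0/24 stands in for the customer block:

    # queries arriving from the Internet toward customer devices
    # (what an open resolver on a customer device would be answering):
    tcpdump -n -i em0 'udp and dst port 53 and dst net 192.0.2.0/24'

    # responses coming back to queries the customers themselves sent
    # (what a stub or local recursor actually needs to receive):
    tcpdump -n -i em0 'udp and src port 53 and dst net 192.0.2.0/24'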

The case where infected consumer devices originate source-spoofed queries towards open resolvers, feeding a query stream to an amplifier for delivery to a victim, is mitigated by preventing those consumer devices from spoofing their source address; hence, BCP38.
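
Where the edge box is a Linux router, a minimal BCP38-style ingress filter sketch (the interface name eth1 and the customer prefix 192.0.2.0/24 are placeholders):

    # drop anything arriving from the customer-facing interface whose source
    # address is not in the prefix actually assigned to that customer:
    iptables -A FORWARD -i eth1 ! -s 192.0.2.0/24 -j DROP

On most router platforms the same effect comes from strict uRPF on the customer-facing interface.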

The case where infected consumer devices originate non-source-spoofed queries towards DNS servers in order to overwhelm the servers themselves with perfectly legitimate-looking queries is a harder problem to solve at the edge, and is most easily mitigated for DNS server operators by the approach "ensure great headroom".

Joe

I would expect that from stubs this will be close enough to zero to be
effectively zero. At least I would hope so. I don't have a good source
of this kind of data for a resolver that I can easily look at at the
moment, but if someone does, I'd be interested to hear otherwise.

On the authoritative side, which is easier for me to examine, when I've
looked at this before (the last time was about a year ago), roughly 1%
of all queries came from resolvers using source port 53. I just now
checked another server and the percentage is practically the same.
Before anyone dismisses 1% of queries as insignificant, keep in mind
that if the remaining queries were spread evenly across all other
possible source port values, that 1% (1 in 100) would easily be more
common than any other single port.
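
To put that in perspective (a back-of-the-envelope check, treating the remaining 65,535 possible source port values as equally likely):

    # percentage of all queries each other port would get if the remaining
    # 99% were spread evenly:
    echo 'scale=6; 99 / 65535' | bc
    # about 0.0015% per port, so port 53 at 1% is roughly 660x that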

John

This (below) is one of four resolver hosts which together provide the two recursive DNS server addresses used by residential DSL and cable Internet customers at a medium-sized ISP in Canada. These resolvers run unbound, which will never choose a source port of 53 on an outbound query; hence anything we see here with src port = dst port = 53 is one of those effective zeros.

[dns1-p1:~]% sudo tcpdump -i em0 -n -c 10000 -w sample.pcap udp port 53
tcpdump: listening on em0, link-type EN10MB (Ethernet), capture size 96 bytes
10000 packets captured
10267 packets received by filter
0 packets dropped by kernel
[dns1-p1:~]% tcpdump -r sample.pcap -q udp src port 53 and udp dst port 53 | wc -l
reading from file sample.pcap, link-type EN10MB (Ethernet)
      26
[dns1-p1:~]%

26/1000 is more than zero but still quite small. Subsequent samples with bigger sizes give 332/100000, 3017/1000000.

No science here, but 2% - 3% is what it looks like, which is big enough to be a noticeable support cost for a medium-scale provider if the customer damage is not robo-mediated in some way (e.g. whitelist known offenders to avoid the support phone glowing red when you first turn it on).

Joe

Thanks Joe. That is interesting.

I can only imagine that on the customer side there are queries coming
from something other than the typical OS stub resolvers on Unix- and
Windows-based hosts. I suppose some sort of NAT/PAT box could account
for some of it; perhaps more likely is some common CPE forwarder that
uses that port by default. If it is the latter, that might be
considered a serious enough risk that the vendor should address it, if
they haven't already.

If no one else does, another side project I'll add to my list of things
to do on a rainy day.
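
One quick way to check a suspect CPE when that rainy day comes (a sketch; the addresses and interface name below are placeholders):

    # from a host behind the CPE, send a query through its forwarder
    # (192.168.1.1 standing in for the CPE's LAN address):
    dig @192.168.1.1 example.com

    # meanwhile, capture upstream of the CPE and see what source port the
    # forwarded query leaves with (203.0.113.5 standing in for its WAN address):
    tcpdump -n -i em0 'udp dst port 53 and src host 203.0.113.5'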

John

I think that is .2% - .3%, no?

Oh, you're right -- it does seem substantially closer to zero when you put the decimal point in the right place :)

Joe

Huh?

23 in 1000 is in fact 2.3%.

Cheers,
-- jra

That's it, no more e-mail for me today.

Joe

His sample was 10K not 1000. Look higher.

( Ok, ok, another bad customer =D )

Starting today at 5h15m EST...

    There is a bigger-than-usual DDoS amplification attack against the IPs
listed below.

    Granted, a ". IN ANY" response is barely 1 kB while the usual isc.org
one is 3.5 kB, and this is a "possible" 15 Mbps from this one source, but still :(

PS:

    If you're a Tier 1 and wish to track down the *^%$*#@ source ISPs to
explain to them the joy of BCP38...

    Contact me off list, from your corporate email address, and I'll
provide you with the IP of that server.

----- The IPs below are being targeted for DDoS amplification. (A rough sketch of how such a list can be gathered follows the list.)

Format:

<IP>
    <query count during 10 seconds> [query]

94.23.42.215
        2128 . IN ANY +E
208.98.25.130
        3079 . IN ANY +E
188.134.46.102
        2639 . IN ANY +E
108.61.239.105
        2270 . IN ANY +E
95.129.166.186
        2416 . IN ANY +E
176.9.210.53
        2839 . IN ANY +E
145.53.65.130
        2326 . IN ANY +E
99.198.100.86
        1223 . IN ANY +E
37.59.72.74
        2508 . IN ANY +E
199.83.133.42
        2392 . IN ANY +E
74.63.248.210
        1481 . IN ANY +E
173.199.68.62
        1178 . IN ANY +E
82.80.17.4
        2666 . IN ANY +E
188.162.228.50
        1075 . IN ANY +E
79.225.4.183
        1014 . IN ANY +E
78.108.79.171
        1291 . IN ANY +E
31.53.123.192
        1093 . IN ANY +E
90.3.194.151
        1245 . IN ANY +E
27.50.70.191
        1304 . IN ANY +E
198.7.63.39
        1579 . IN ANY +E
81.220.28.129
        1103 . IN ANY +E
198.105.218.12
        1110 . IN ANY +E
86.160.85.37
        1128 . IN ANY +E
184.95.35.194
        1237 . IN ANY +E
134.255.237.244
        1245 . IN ANY +E
178.32.36.67
        1588 . IN ANY +E
204.45.55.8
        1419 . IN ANY +E
95.211.209.182
        1520 . IN ANY +E
80.192.224.22
        1430 . IN ANY +E
24.244.248.8
        1414 . IN ANY +E
79.71.69.165
        1090 . IN ANY +E
24.244.248.57
        1364 . IN ANY +E
82.132.226.216
        1079 . IN ANY +E
69.162.97.99
        1601 . IN ANY +E
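
A rough sketch of how a list like this can be assembled (not necessarily how it was gathered here; the interface name is a placeholder, and the awk field still carries the trailing source port):

    # capture ten seconds of inbound queries, then count ". ANY" queries per (spoofed) source:
    timeout 10 tcpdump -n -i em0 -w 10s.pcap 'udp dst port 53'
    tcpdump -n -r 10s.pcap | grep ' ANY? \. ' | awk '{print $3}' | sort | uniq -c | sort -rn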

Is anyone in particular being packeted, or are these random addresses?

It looks to be a service and some of its customers.

    ( Ok, ok, another bad customer =D )

Starting today at 5h15m EST...

    There is a bigger-than-usual DDoS amplification attack against the IPs
listed below.

    Granted, a ". IN ANY" response is barely 1 kB while the usual isc.org
one is 3.5 kB, and this is a "possible" 15 Mbps from this one source, but still :(

  With a validating resolver

  "dig any . +edns" return a 1872 byte payload.
  "dig any . +dnssec" return a 2030 byte payload.
  (difference is NS RRSIG records)

  Getting the DNSKEY records included isn't hard. Throw a
  single DNSKEY query into the stream once a day/hour
  and it will be cached for 48 hours.
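
  For example (a sketch; the resolver address is a placeholder), the priming
  query can be as simple as:

      dig @192.0.2.1 . DNSKEY +dnssec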

  If you have the SOA cached as well it gets to

  "dig any . +edns" return a 2087 byte payload.
  "dig any . +dnssec" return a 2245 byte payload.

  Mark


Well, during the Spamhaus incident I saw some at around 8 kB.

On another note...

    After 18 hours, that "pot" is still receiving ~200 pps (down from
800 and then 400 pps), and it's up to 614 IPs now...

I still do not see the motive behind this one:

    Either someone messed up his botnet and he's stuck with it =D

    Or it could be a rootkit using this server as a DNS server (lots of
the targets are hosted Linux boxes at outfits like OVH).
    ( But then why keep spamming . IN ANY queries instead of caching the results? )

    And a new query popped up -> doc.gov IN ANY +E, though granted I only
saw a few of them.

    And a few of the source IPs are gaming forums, mostly Minecraft
oriented.

PS: Reminder, this server does not actually amplify anything, and the
service at that location is not affected.

Is it just me, or the bigger a corporation gets, the more vindictive (a
b-word intended) it is toward customers leaving it?

The biggest grievance I have is with carriers and automatic contract renewals. Combined with the fact that these carriers either refuse to allow month-to-month billing or will only allow it at double or triple current rates, coordinating disconnection of older services while turning up new services with a different carrier in the same time frame can be a real challenge.

Adding insult to injury is the fact that one does not simply resolve carrier billing issues - I've had multiple incidents which took almost a year to resolve.

I personally think that the automatic renewal of a three-year term should be criminal. The same goes for price increases while I'm under a contract rate - apparently, as long as there is a provision in the small print (which can be changed at will, since the small print references a document on the carrier's website), be ready to pay more whenever the carrier dictates, regardless of what your contract says.

Typing this was somewhat therapeutic.