well-known Anycast prefixes

I wonder whether anyone has ever compiled a list of well-known Anycast
prefixes.

Such as

1.1.1.0/24
8.8.8.0/24
9.9.9.0/24
...

Might be useful for a routing policy such as "always route hot-potato".

PS. this mail is not intended to start a flame war of hot vs. cold
potato routing.
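As a concrete illustration of such a policy (an IOS-style sketch only; the prefix-list contents, names, and LOCAL_PREF values are assumptions, and IPv6 is omitted): giving anycast routes the same LOCAL_PREF regardless of which neighbor class they come from lets the IGP metric pick the nearest exit, which is hot-potato behavior.

```
! Hypothetical sketch: pin well-known anycast destinations to a uniform
! LOCAL_PREF so the closest-exit (hot-potato) tie-breakers decide,
! instead of the customer-preference bump applied to everything else.
ip prefix-list WELL-KNOWN-ANYCAST seq 5 permit 1.1.1.0/24
ip prefix-list WELL-KNOWN-ANYCAST seq 10 permit 8.8.8.0/24
ip prefix-list WELL-KNOWN-ANYCAST seq 15 permit 9.9.9.0/24
!
route-map CUSTOMER-IN permit 10
 match ip address prefix-list WELL-KNOWN-ANYCAST
 set local-preference 100
route-map CUSTOMER-IN permit 20
 set local-preference 200
```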

I don’t know of one.

It seems like a good idea.

BGP-multi-hop might be a reasonable way to collect them.

If others agree that it’s a good idea, and it’s not stepping on anyone’s toes, PCH would be happy to host/coordinate.

                                -Bill

Hi Fredy,


Our anycast prefixes for DNS resolvers are:

185.222.222.0/24
2a09::/48

You can add them if someone will maintain a list.

Regards,

David

> On Mar 19, 2019, at 10:12 AM, Fredy Kuenzler <kuenzler@init7.net> wrote:
>> I wonder whether anyone has ever compiled a list of well-known
>> Anycast prefixes.
>
> I don’t know of one.
>
> It seems like a good idea.
>
> BGP-multi-hop might be a reasonable way to collect them.
>
> If others agree that it’s a good idea, and it’s not stepping on
> anyone’s toes, PCH would be happy to host/coordinate.

Thanks for the effort, much appreciated.

A well-known BGP community would be better.

You’d need to rewrite the next hop (or do something similar) if anycast prefixes are learnt from a multi-hop BGP feed, which makes the configuration more complicated and difficult to debug.
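For example (an IOS-style sketch with invented names): routes learnt over the multi-hop collector session carry a next hop that is not directly reachable, so something like a next-hop rewrite would be needed before they are usable.

```
! Hypothetical sketch: overwrite the unreachable next hop learnt from
! the multi-hop collector session with a resolvable local address
! (192.0.2.1 is a documentation address standing in for a real one).
route-map ANYCAST-FEED-IN permit 10
 set ip next-hop 192.0.2.1
```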

Careful thought should be given to whether the BGP community means “this is an anycast prefix” vs. “please hot-potato to this prefix”. Latency-sensitive applications may prefer hot-potato to their network even if it’s not technically an anycast range, as their private backbone may be faster (less congested) than the public internet.

Damian

> Careful thought should be given to whether the BGP community means "this
> is an anycast prefix" vs. "please hot-potato to this prefix".
> Latency-sensitive applications may prefer hot-potato to their network even
> if it's not technically an anycast range, as their private backbone may be
> faster (less congested) than the public internet.

To this point, it is pretty clear that any WK community covering this
will get [ab]used in whatever way the prefix announcer wishes. We'll then
see operators only accepting the WKC if it matches their prefix lists
of known entities, getting us back to "hey, maybe this should just be
a registry I could reference".

Woody, maybe generate route-sets to publish in RPSL (RADB?), one per
address-family, of observed anycasters? It might be reasonable to do
so in a format others can emulate if they wish to create/provide their
own lists?

Cheers,

Joe

something like this?

https://github.com/netravnen/well-known-anycast-prefixes/blob/master/list.txt

PRs and/or suggestions appreciated! (The list can be turned into an
IRR/LIR-database-friendly RPSL format.)
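Assuming the list is plain text with one CIDR prefix per line and `#` comments (I haven't verified the exact format of the linked file), a consumer could validate it and split it by address family along these lines:

```python
import ipaddress

def split_by_family(lines):
    """Validate CIDR prefixes and split them into v4/v6 lists,
    e.g. for generating one route-set per address family.
    Blank lines and '#' comments are ignored; a malformed or
    non-canonical prefix raises ValueError rather than passing silently."""
    v4, v6 = [], []
    for line in lines:
        entry = line.split("#", 1)[0].strip()
        if not entry:
            continue
        net = ipaddress.ip_network(entry, strict=True)
        (v4 if net.version == 4 else v6).append(str(net))
    return v4, v6

sample = [
    "# well-known anycast prefixes (sample)",
    "1.1.1.0/24",
    "8.8.8.0/24",
    "2a09::/48",
]
v4, v6 = split_by_family(sample)
print(v4)  # ['1.1.1.0/24', '8.8.8.0/24']
print(v6)  # ['2a09::/48']
```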

Most DNS root servers are anycasted.

Generally, static lists like that are difficult to maintain when they’re tracking multiple routes from multiple parties.

Communities have been suggested, which works as long as they’re passed through to somewhere people can see. Between PCH, RIS, and Route-Views, most should be visible somewhere, but not all.

I think a combination of the two is probably most useful… people tag with a well-known community, then those get eBGP-multi-hopped to a common collector, and published as a clean machine-readable list.

                                -Bill

Right, yeah, I think he was just showing an example, since he had roughly a dozen, out of thousands.

                                -Bill

Hi,

Generally, static lists like that are difficult to maintain when
they’re tracking multiple routes from multiple parties.

agreed.
and on the other extreme, communities are very much prone to abuse.
I guess I could set any community on a number of prefixes (incl anycast)
right now....

So, I think a (moderated) BGP feed of prefixes, à la the bogon feed, from
a trusted source {cymru[1], pch[2], ...} could be good [3].

Frank Habicht
37084 / 33791
if that matters

[1] dealing with anycast?
[2] biased?
[3] speaking as someone not using (subscribing) any of the useful ones,
nor contributing to any...

Ok, so, just trying to flesh out the idea to something that can be usefully implemented…

1) People send an eBGP multi-hop feed of well-known-community routes to a collector, or send them over normal peering sessions to something that aggregates…

2) Because those are over BGP sessions, the counterparty is known, and can be asked for details or clarification by the “moderator,” or the sender could log in to an interface to add notes about the prefixes, as they would in the IXPdir or PeeringDB.

3) Known prefixes from known parties would be passed through in real-time, as they were withdrawn and restored.

4) New prefixes from known parties would be passed through in real-time if they weren’t unusual (large/overlapping something else/previously announced by other ASNs).

5) New prefixes from known parties would be “moderated” if they were unusual.

6) New prefixes from new parties would be “moderated” to establish that they were legit and that there was some documentation explaining what they were.

7) For anyone who really didn’t want to provide a community-tagged BGP feed, a manual submission process would exist.

8) Everything gets published as a real-time eBGP feed.

9) Everything gets published as HTTPS-downloadable JSON.

10) Everything gets published as a human-readable (and crawler-indexable) web page.

Does that sound about right?

                                -Bill
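For step 9, the downloadable JSON could be as simple as one record per prefix. A sketch of one possible shape (every field name here is invented, not an agreed-upon schema):

```python
import json

# Hypothetical record layout for the HTTPS-downloadable list;
# "generated", "prefixes", and "note" are all assumptions.
feed = {
    "generated": "2019-03-20T00:00:00Z",  # example timestamp
    "prefixes": [
        {"prefix": "1.1.1.0/24",       "note": "public DNS resolver"},
        {"prefix": "185.222.222.0/24", "note": "public DNS resolver"},
        {"prefix": "2a09::/48",        "note": "public DNS resolver"},
    ],
}

print(json.dumps(feed, indent=2))
```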

Hi,

Ok, so, just trying to flesh out the idea to something that can be
usefully implemented…

1) People send an eBGP multi-hop feed of well-known-community routes
to a collector, or send them over normal peering sessions to
something that aggregates…

to clarify, eBGP multi-hop sounds good, that can be a dedicated session
for this purpose, BGP communities probably don't need to be added.
[for the case of 'normal peering sessions', it might seem "wasteful" to
use an additional IP on multiple peering LANs]

...

Does that sound about right?

to me yes.

Frank

Hi,

Interesting discussion and ideas. I like how you've laid it out
above, Bill.

I'm not clear on the use cases, though. What are the imagined use cases?

It might make sense to solve 'a method to request hot potato routing'
as a separate problem. (Along the lines of Damian's point.)

Thanks!

James

Hi James,

I'm not clear on the use cases, though. What are the imagined use cases?

It might make sense to solve 'a method to request hot potato routing'
as a separate problem. (Along the lines of Damian's point.)

my personal reason/motivation is this:
Years ago I noticed that my traffic to the "I" DNS root server was
traversing 4 continents. That's from Tanzania, East Africa.
Not having a local instance (back then), we naturally sent the traffic
to an upstream. That upstream happens to be in that club of those who
don't have transit providers (which probably doesn't really matter, but
means a "global" network).

My theory:
Just because one I-root instance was hosted at a customer (or a
customer's customer), that route got a higher local-pref, and now packets
take the long way from Africa via Europe and North America to Asia and
that customer in Thailand, while closer I-root instances would obviously
be along the way, just not from a paying customer, "only" from peering.

I don't know whether or not to blame that "carrier" for intentionally(?)
carrying the traffic that far - presumably the $ they got for that from
the I-root host in Thailand was worth it, and not enough customers
complained enough about the latency?

But I think it would be worthwhile to give them an option and produce a
mechanism of knowing what's anycasted.

Maybe (thinking of it) a solution for really well-known prefixes
available at many instances/locations (like DNS root) would be to have
their fixed set of direct transits at all the "global" nodes and
everywhere else to tell peers to not advertise this to upstreams.

Greetings,
Frank
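Frank's last suggestion already has a standard building block: the well-known NO_EXPORT community (RFC 1997) keeps a route inside the AS that receives it. It is slightly broader than "don't advertise to upstreams" (the route is withheld from all eBGP neighbors, peers included). A minimal IOS-style sketch, with invented names and a documentation prefix standing in for the real one:

```
! Hypothetical sketch: tag our anycast prefix with NO_EXPORT toward
! peers at non-"global" nodes, so each peer keeps it inside its own AS.
ip prefix-list MY-ANYCAST seq 5 permit 192.0.2.0/24
!
route-map PEERS-OUT permit 10
 match ip address prefix-list MY-ANYCAST
 set community no-export additive
route-map PEERS-OUT permit 20
```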

> I'm not clear on the use cases, though. What are the imagined use cases?
>
> It might make sense to solve 'a method to request hot potato routing'
> as a separate problem. (Along the lines of Damian's point.)

> my personal reason/motivation is this:
> Years ago I noticed that my traffic to the "I" DNS root server was
> traversing 4 continents. That's from Tanzania, East Africa.
> Not having a local instance (back then), we naturally sent the traffic
> to an upstream. That upstream happens to be in that club of those who
> don't have transit providers (which probably doesn't really matter, but
> means a "global" network).

Luckily there are other root servers too! :-)

> My theory:
> So just because one I-root instance was hosted at a customer (or
> customer's customer), that got higher local-pref and now packets take
> the long way from Africa via Europe, North America to Asia and that
> customer in Thailand. While closer I-root instances would obviously be
> along the way, just not from a paying customer, "only" from peering.
>
> I don't know whether or not to blame that "carrier" for intentionally(?)
> carrying the traffic that far - presumably the $ they got for that from
> the I-root host in Thailand was worth it, and not enough customers
> complained enough about the latency?
>
> But I think it would be worthwhile to give them an option and produce a
> mechanism of knowing what's anycasted.
>
> Maybe (thinking of it) a solution for really well-known prefixes
> available at many instances/locations (like DNS root) would be to have
> their fixed set of direct transits at all the "global" nodes and
> everywhere else to tell peers to not advertise this to upstreams.

In all instances of what you mention you need cooperation from the
network which is routing in a (from your perspective) suboptimal way.

Either the customer of that upstream should use BGP communities to
localize the announcement, or the upstream themselves need to change
their routing policy to set 'same LOCAL_PREF everywhere' for some
prefixes. Of course any input channel into routing policy can be a
vector of abuse.

Even if you equalize the LOCAL_PREF attribute across your network edge,
you still have other tie-breakers such as AS_PATH length. It is not
clear to me how a list of well-known anycast addresses would, in
practice, help swing the pendulum. In all cases you need cooperation
from a lot of networks, and the outcome is not clearly defined because
we don't have a true inter-domain 'shortest latency path' metric.

Kind regards,

Job
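Job's point can be seen in a toy model of just the first two steps of the BGP decision process (the real process has several more tie-breakers, omitted here):

```python
def best_path(candidates):
    """Pick a route by the first two classic BGP tie-breakers:
    highest LOCAL_PREF, then shortest AS_PATH.
    (A toy model; real BGP has many more steps.)"""
    return max(candidates, key=lambda r: (r["local_pref"], -len(r["as_path"])))

routes = [
    {"via": "customer", "local_pref": 200, "as_path": [64500, 64501, 64502]},
    {"via": "peer",     "local_pref": 100, "as_path": [64510]},
]

# The customer route wins on LOCAL_PREF despite the longer AS_PATH:
print(best_path(routes)["via"])  # customer

# Equalizing LOCAL_PREF merely hands the decision to AS_PATH length,
# which still need not correlate with latency:
for r in routes:
    r["local_pref"] = 100
print(best_path(routes)["via"])  # peer
```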

> Hi James,
>
>> I'm not clear on the use cases, though. What are the imagined use cases?
>>
>> It might make sense to solve 'a method to request hot potato routing'
>> as a separate problem. (Along the lines of Damian's point.)
>
> my personal reason/motivation is this:
> Years ago I noticed that my traffic to the "I" DNS root server was
> traversing 4 continents. That's from Tanzania, East Africa.
> Not having a local instance (back then), we naturally sent the traffic
> to an upstream. That upstream happens to be in that club of those who
> don't have transit providers (which probably doesn't really matter, but
> means a "global" network).
>
> /snip
>
> Greetings,
> Frank

I can think of another ...

We rate-limit DNS from unknown quantities, for reasons that should be obvious, and whitelist traffic from known, trusted (anycast) resolvers so a DDoS attack can't throttle legitimate queries. This would be a useful way to help auto-generate those ACLs.
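Auto-generating such an ACL from the list could look like this (IOS-style syntax is sketched here, and the ACL name is invented; real generation would also handle IPv6, sequence numbers, and deployment):

```python
import ipaddress

def acl_permit_lines(prefixes, acl_name="TRUSTED-DNS"):
    """Render IOS-style extended-ACL permit lines that whitelist
    queries (UDP dest port 53) sourced from the given IPv4 prefixes.
    The wildcard mask is the inverted netmask (ipaddress's hostmask)."""
    lines = [f"ip access-list extended {acl_name}"]
    for p in prefixes:
        net = ipaddress.ip_network(p)
        lines.append(f" permit udp {net.network_address} {net.hostmask} any eq 53")
    return lines

for line in acl_permit_lines(["8.8.8.0/24", "9.9.9.0/24"]):
    print(line)
```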

Not all anycasted prefixes are DNS resolvers, and not all DNS resolvers are anycasted. It sounds like you would be better served by a list of well-known DNS resolvers.

> Not all anycasted prefixes are DNS resolvers, and not all DNS resolvers
> are anycasted. It sounds like you would be better served by a list of
> well-known DNS resolvers.

True on both counts, and that's why I said "help".