Public shaming list for ISPs announcing other ISPs' IP space by mistake

Okay, I admit I haven't paid the closest attention to RPKI, but I
have to ask: Is this a two-way shared-key issue, or (worse) a case
where we need to rely on a central entity to be a key clearinghouse?

The reason why I mention this is obvious -- the entire PKI effort
has been stalled (w.r.t. authority) because of this particular
issue.

Any thoughts on that?

- ferg

See Randy et al's presentation here:

<http://www.nanog.org/mtg-0610/bush.html>

In short, the latter, which is precisely DRC's point.

-danny

Okay, I admit I haven't paid the closest attention to RPKI,
but I have to ask: Is this a two-way shared-key issue, or
(worse) a case where we need to rely on a central entity to
be a key clearinghouse?

The reason why I mention this is obvious -- the entire PKI
effort has been stalled (w.r.t. authority) because of this
particular issue.

Who says there needs to be a PKI infrastructure in order to
do this? There are other ways of authenticating data. For instance,
ARIN could hold the data that they have validated on their own
servers and people could use HTTPS queries to ensure that they
get the answers that they thought they would get.
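
As a rough sketch of what I mean (the hostname, URL and
response format below are invented for illustration -- ARIN
runs no such service today), a consumer would need only a
few lines of Python:

    import json
    import urllib.request

    # Hypothetical validated-data service run by ARIN on their
    # own servers.  HTTPS authenticates the server and protects
    # the answer in transit, so you know it came from ARIN.
    URL = ("https://validated.arin.example.net/origin"
           "?prefix=192.0.2.0/24")

    with urllib.request.urlopen(URL) as resp:
        answer = json.load(resp)

    # e.g. {"prefix": "192.0.2.0/24", "origin_as": 64496}
    print(answer["prefix"], answer["origin_as"])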

As for how the address owner delegates the right to announce
a prefix, they could either operate their own database and
ARIN would have a pointer to it, or they could register the
data in ARIN's database by some secure means. There is no
reason why "secure means" could not include various out of
band authentication systems.
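
To make that concrete, a delegation record might look
something like this (all field names invented for
illustration; note that nothing about it presumes a PKI):

    # Hypothetical delegation record, whether held in ARIN's
    # database or in the address holder's own database that
    # ARIN points to.
    delegation = {
        "prefix": "192.0.2.0/24",
        "authorized_origin_as": [64496],
        # How it was authenticated is an implementation
        # detail: portal login, signed fax, phone callback...
        "registered_via": "out-of-band",
        "expires": "2007-06-01",
    }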

People are too hung up on cryptographically secure PKI systems,
which are way overkill for this problem. In fact, it should be
possible to design an architecture that allows for an easy upgrade
to PKI if it is determined, at some future date, that PKI
is necessary.

--Michael Dillon

Okay, I admit I haven't paid the closest attention to RPKI, but I have to ask: Is this a two-way shared-key issue, or (worse) a case where we need to rely on a central entity to be a key clearinghouse?

The reason why I mention this is obvious -- the entire PKI effort has been stalled (w.r.t. authority) because of this particular issue.

Who says there needs to be a PKI infrastructure in order to
do this? There are other ways of authenticating data. For instance,
ARIN could hold the data that they have validated on their own
servers and people could use HTTPS queries to ensure that they
get the answers that they thought they would get.

I must point out that HTTPS is still in PKI land - it's just "another one", inviting otherwise unrelated parties (like Verisign et al.) into the system.

As for how the address owner delegates the right to announce a prefix, they could either operate their own database and
ARIN would have a pointer to it, or they could register the
data in ARIN's database by some secure means. There is no
reason why "secure means" could not include various out of
band authentication systems.

The principles for this are included in the SIDR efforts.

People are too hung up on cryptographically secure PKI systems,
which are way overkill for this problem. In fact, it should be
possible to design an architecture that allows for an easy upgrade
to PKI if it is determined, at some future date, that PKI
is necessary.

It's hard to switch to a more secure method later on if you start with a less secure one. So, "upgrading" to PKI from something else only makes sense if that previous system was secure enough - but then why would you want to change?

Robert

It's hard to switch to a more secure method later on if you
start with a less secure one. So, "upgrading" to PKI from
something else only makes sense if that previous system was
secure enough - but then why would you want to change?

If the delegation information expires, which it should to ensure
that it stays current, then it should not be so hard to upgrade
the security of the system.
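
Because everything expires, "upgrading" just means raising
the bar on what the OSS accepts at refresh time. A sketch,
with an invented version field:

    MIN_VERSION = 2  # "x+1": refuse the old scheme

    def accept_refresh(record):
        # Old-scheme records are never re-imported, so they
        # simply age out when their registration expires.
        return record.get("version", 1) >= MIN_VERSION

    print(accept_refresh({"prefix": "192.0.2.0/24", "version": 2}))  # True
    print(accept_refresh({"prefix": "198.51.100.0/24"}))             # False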

As for why, that's so that people will actually start using
the system instead of fretting about who holds the keys to it
all.

Similarly, this should all be about OSS systems, and not touch
any routers or BGP processes at all. It is up to the individual
ISP to decide how they want to use the information and how
and when they want to push it into their BGP speaking routers.
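
A sketch of what that OSS step might look like (the record
shape and prefix-list syntax are illustrative only; nothing
here talks to a router, it just emits config text):

    def build_prefix_list(delegations, peer_as):
        # Turn validated delegations into a per-peer filter.
        lines = [f"ip prefix-list AS{peer_as}-IN permit {d['prefix']}"
                 for d in delegations
                 if peer_as in d["authorized_origin_as"]]
        lines.append(f"ip prefix-list AS{peer_as}-IN deny 0.0.0.0/0 le 32")
        return "\n".join(lines)

    delegations = [{"prefix": "192.0.2.0/24",
                    "authorized_origin_as": [64496]}]
    print(build_prefix_list(delegations, 64496))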

--Michael Dillon

Okay, I admit I haven't paid the closest attention to RPKI, but I
have to ask: Is this a two-way shared-key issue, or (worse) a case
where we need to rely on a central entity to be a key clearinghouse?

<snip>

In short, the latter, which is precisely DRC's point.

Presuming that you meant to say that the RPKI is a centralized system,
I'd quibble that it is certainly a rooted system, but not centralized.

Like: DNS is rooted, but I'd not call it centralized.

The RPKI is hierarchical and distributed all over everywhere.

--Sandy

The RPKI is hierarchical and distributed all over everywhere.

yes, hierarchic. but the guess is that it is distributed more like the
irr, some dozens of folk will run it, not millions such as the dns.

randy

<security person rant>

"Easy upgrade" to PKI after the fact might as well be a misnomer. In particular, there will likely be no way to ensure that nobody uses the old system instead of the new, spiffy and "secure"-ified system. This means that support for the old, "insecure" system must be kept around indefinitely, for all practical considerations - which opens you to downgrade attacks and all sorts of other unpleasantness from the backwards compatibility baggage involved.

Now, it may well be that we don't need a full blown PKI here, but I think that we should be extremely wary of any scheme that proposes to be future upgradeable to be "more secure", especially when we are talking about a mostly decentralized system where there isn't going to be much of a practical push to force people to upgrade.

At the risk of opening the door to much flame-age, consider that with dnssec, my understanding here is that we will *still* have to keep around support for non-secured queries for a very, very long time until everyone (or some level of "everyone" that we consider "good enough", which is also unlikely to be the case for a very long time) runs dnssec-ified authoritative name servers for their domains. This means that the non-secured "plain" DNS path will continue to remain open for attack for years to come, even if everyone on this list, and the root/gTLDs/ccTLDs magically stopped what they were doing right now and somehow rolled out dnssec tomorrow. Being forced to keep this code around leaves you open to downgrade attacks.

To give a quick example off the top of my head of why this can be dangerous, consider the following back-of-the-napkin scenario:

Even with signature expiration times in place in dnssec to try and prevent replaying of old signed zones that would allow downgrade attacks for any domains not listed as supporting dnssec, an adversary in your packet path can still (probably) have a reasonable shot at successfully forcing a downgrade attack and subsequently spoofing data using "plain" DNS fallback. For example, to check the validity timestamps on signatures, you need correct local system time, and how do you update your local system time? Do you use NTP over the public Internet? If so, an attacker in your packet path can change your system time and replay old dnssec signatures, thus allowing downgrade attacks for domains that were previously not using dnssec by taking advantage of "plain" DNS fallback code.
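
To make the clock dependency concrete, here's a toy sketch (not real dnssec code, just the shape of the problem, with made-up numbers): a signature validity window is only as trustworthy as the clock it is checked against.

    import time

    def signature_current(inception, expiration, now=None):
        # The whole check hinges on "now" -- i.e. on the local clock.
        now = time.time() if now is None else now
        return inception <= now <= expiration

    old_sig = (1100000000, 1105000000)  # long-expired validity window

    print(signature_current(*old_sig))                  # False with a correct clock
    print(signature_current(*old_sig, now=1102000000))  # True once the clock is stepped back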

Now, I'm not really trying to bash dnssec here, but rather point out that "upgrading" to something that's secure later on should be considered practically a non-option in a (mostly) decentralized scenario like how the global routing DFZ is managed. I'm also not trying to bash your proposal specifically (or the level of security it provides), but rather just call attention to the discomfort inherent in anything that provides "soft" security from the get-go with a later option for upgrade to "hard" security.

</security person rant>

Now, it may well be that we really don't need PKI here for reasonable security (and I am explicitly *not* commenting on whether this is or isn't the case here), but we had better be damn sure that we make the right call there before rolling anything like this out, or we'll be dealing with the security consequences for a very long time to come.

There are just *so* many things that make upgrading a well-entrenched protocol to provide "hard" security, while keeping reasonable functionality, an extremely difficult task (to say the least), that you would likely be better off scrapping the existing (well, new) protocol entirely and coming up with a new one from scratch should such prove necessary.

- S

"Easy upgrade" to PKI after the fact might as well be a
misnomer. In particular, there will likely be no way to
ensure that nobody uses the old system instead of the new,
spiffy and "secure"-ified system. This means that support
for the old, "insecure" system must be kept around
indefinitely, for all practical considerations

This is nonsense. If I can shut down my gopher server, then
why can't someone stop accepting delegation notifications that
don't meet the requirements of version x+1 for some value of x?

For that matter, since cleanliness of data is a major problem
in this type of database, why can't all records expire 6 months
after they are entered? That would avoid the garbage that collects
in IRR or whois databases. If an entity is not active and does not
refresh their delegation of prefix announcement rights, then
after 6 months, their connectivity will begin to crumble as the
various providers refresh their route filters from their OSS
systems.
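
The expiry rule itself is trivial to implement (dates and
record shape invented for illustration):

    import datetime

    SIX_MONTHS = datetime.timedelta(days=182)

    def live_records(records, today):
        # Anything not refreshed within 6 months drops out of
        # the next filter build -- no garbage collection needed.
        return [r for r in records
                if today < r["entered"] + SIX_MONTHS]

    records = [
        {"prefix": "192.0.2.0/24",    "entered": datetime.date(2006, 11, 1)},
        {"prefix": "198.51.100.0/24", "entered": datetime.date(2006, 2, 1)},
    ]
    print(live_records(records, datetime.date(2007, 1, 15)))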

Now, it may well be that we don't need a full blown PKI here,
but I think that we should be extremely wary of any scheme
that proposes to be future upgradeable to be "more secure",
especially when we are talking about a mostly decentralized
system where there isn't going to be much of a practical push
to force people to upgrade.

You mean version incompatibility, leading to an inability to
refresh your expired data, is not enough of a push? If that
is so, then why are you routing their traffic?

At the risk of opening the door to much flame-age, consider
that with dnssec, my understanding here is that we will
*still* have to keep around support for non-secured queries
for a very, very long time until everyone

That is a different situation even though there are similarities.

To give a quick example off the top of my head of why this
can be dangerous, consider the following back-of-the-napkin scenario:

Even with signature expiration times in place in dnssec to
try and prevent replaying of old signed zones that would
allow downgrade attacks for any domains not listed as
supporting dnssec, an adversary in your packet path can still
(probably) have a reasonable shot at successfully forcing a
downgrade attack and subsequently spoofing data using "plain"
DNS fallback. For example, to check the validity timestamps
on signatures, you need correct local system time, and
how do you update your local system time? Do you use NTP
over the public Internet? If so, an attacker in your packet
path can change your system time and replay old dnssec
signatures, thus allowing downgrade attacks for domains that
were previously not using dnssec by taking advantage of
"plain" DNS fallback code.

This is why these fully automated crypto PKI solutions make me
uneasy. There is too darn much complexity and too little experience
with them in the real world. If you move the problem to a different
space where OSS systems check route delegations, and only update
router configs after some human intervention, then there is less
chance of weird attack vectors succeeding.
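
In code terms, the gate is as simple as this (a sketch of
the gating logic, nothing more):

    def apply_filter_change(old_cfg, new_cfg, approved_by=None):
        # The OSS stages the change; nothing reaches a router
        # until an operator reviews the diff and signs off.
        if old_cfg == new_cfg:
            return "no change"
        if approved_by is None:
            raise RuntimeError("change staged; awaiting operator approval")
        return f"pushed to routers, approved by {approved_by}"

    print(apply_filter_change("permit 192.0.2.0/24", "permit 192.0.2.0/24"))
    print(apply_filter_change("permit 192.0.2.0/24", "deny 192.0.2.0/24",
                              approved_by="noc-on-duty"))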

I'm also not trying to bash your proposal
specifically (or the level of security it provides), but
rather just call attention to the discomfort inherent in
anything that provides "soft" security from the get-go with a
later option for upgrade to "hard" security.

If I agreed with you, I never would have set up an ISP back
in 1994 because of the fundamental insecurity of an IPv4
Internet without IPSEC support baked into the fundamental
protocol.

There are just *so* many things that make upgrading a
well-entrenched protocol to provide "hard" security, while
keeping reasonable functionality, an extremely difficult
task (to say the least), that you would likely be better
off scrapping the existing (well, new) protocol entirely
and coming up with a new one from scratch should such
prove necessary.

See, there is a straightforward upgrade route after all.
Even more straightforward if we walk into this with clear
requirements and a clear documented architecture so that
everyone knows what the boundaries are and fewer people
bake things into hard-to-upgrade places.

--Michael Dillon

I respectfully disagree that it's nonsense. You can shut off your Gopher server, because, for some set of "nobody" that you care about, nobody uses Gopher anymore.

There are several basic ways for an old protocol to get replaced:

- Nobody has a use for it any more, for a sufficient level of "nobody" (e.g. Gopher). In the case of Gopher, it could be argued that it never really caught on to the degree that things like, say, HTTP did.
- Everyone moves on to a newer version, typically because somebody forces it on them (a vendor ceasing support of the old protocol, a centralized body that can enforce such things mandating it, etc.).

The problem that I see here with deploying a new protocol version is that operators will be loath to create a policy saying "I don't accept delegation notifications of version less than x+1" until doing so does not mean cutting off things that their customers care about reaching. In this situation, flipping the big red switch to turn off protocol version x is externalized and dependent upon everyone that a particular operator's customers care about having started using version x+1. There isn't a particularly great way right now for said operator to force other networks to deploy version x+1, at least not without playing chicken by cutting off version x and betting that those networks get their act together before the operator's own customers walk. (If this perception is not entirely accurate, according to your beliefs, please feel free to explain.)

If you leave version x+1 as optional, then you're going to be waiting a very, very long time for version x to naturally "die out" as long as there is still a sizable user base that you care about.

Again, there may not be an exact, down-to-the-line analogue to this particular problem, but there are a whole lot of things on the 'net out there which have some degree of similarity and tend to demonstrate that on average, without some authoritative centralized body somehow forcing action to be taken, old protocols that are "good enough" tend to stick around forever (even when "good enough" often really means "not really, truly secure"). I think that it would be wise to avoid getting ourselves mired in that mess going forward.

Cutting remarks aside, it seems like you tend to find this preferable as well after all ("Even more straightforward if we walk into this with clear requirements and a clear documented architecture so that everyone knows what the boundaries are and fewer people bake things into hard-to-upgrade places."), unless I am misunderstanding you :)

[ Not commenting on the matter of expiring IRR database entries, as my posting was, again, only discussing the matter of the pitfalls of assuming that protocol upgrades will be particularly workable to provide "add-on" security later in the game. ]

- S