Financial services BGP hijack last week?

I didn't see any mention of this here. Any comments?

"On Wednesday, large chunks of network traffic belonging to MasterCard, Visa,
and more than two dozen other financial services companies were briefly routed
through a Russian government-controlled telecom under unexplained circumstances
that renew lingering questions about the trust and reliability of some of the
most sensitive Internet communications."

https://arstechnica.com/security/2017/04/russian-controlled-telecom-hijacks-financial-services-internet-traffic/

it only proves the need for wider RPKI adoption....

Governments mopping up signals and data isn't a new concept, and
certainly not unique to the Russian Federation.

Personally I'm more concerned about important people giving up passwords
so easily to spear-phishers...

Everyone knows. Nobody cares.

a message of 29 lines which said:

I didn't see any mention of this here.

You should subscribe to @bgpstream on Twitter, and read the BGPmon blog
:-)

https://twitter.com/bgpstream

BGPstream and The Curious Case of AS12389 | BGPmon

How can we actually encourage RPKI adoption?

Kind regards,

Job

That's the million-dollar question. I think that there will be more
adoption from the Internet at large when some big players adopt it. Right
now the use of rsync in RPKI is preventing a lot of large ISPs from
implementing it (too difficult to provide redundancy with rsync). There is
a protocol called RPKI Repository Delta Protocol (RRDP)
https://tools.ietf.org/html/draft-ietf-sidr-delta-protocol-08 which will
alleviate these concerns but it is still in draft. I think once that
becomes an RFC we will see more adoption of RPKI.
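
For the curious, roughly what an RRDP client does on each poll, per the
draft: fetch a small notification file over HTTPS, compare its session
and serial against local state, and then pull either the missing deltas
or the full snapshot. A minimal sketch in Python, with a placeholder
notification URL; the element and attribute names follow the draft and
could still shift before the final RFC:

# Rough sketch of an RRDP poll per draft-ietf-sidr-delta-protocol.
# The notification URL below is a placeholder, not a real repository.
import urllib.request
import xml.etree.ElementTree as ET

NS = "{http://www.ripe.net/rpki/rrdp}"
NOTIFICATION_URL = "https://rrdp.example.net/notification.xml"

def poll(local_session=None, local_serial=0):
    with urllib.request.urlopen(NOTIFICATION_URL) as resp:
        root = ET.fromstring(resp.read())

    session = root.get("session_id")
    serial = int(root.get("serial"))
    snapshot_uri = root.find(NS + "snapshot").get("uri")
    deltas = {int(d.get("serial")): d.get("uri")
              for d in root.findall(NS + "delta")}

    if session == local_session and serial == local_serial:
        return ("up-to-date", [], session, serial)

    wanted = range(local_serial + 1, serial + 1)
    if session == local_session and all(s in deltas for s in wanted):
        # Same session and no gaps: apply only the missing deltas, in order.
        return ("deltas", [deltas[s] for s in wanted], session, serial)

    # New session, or a gap in the delta list: re-fetch the full snapshot.
    return ("snapshot", [snapshot_uri], session, serial)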

Rich Compton | Principal Eng | 314.596.2828
14810 Grasslands Dr, Englewood, CO 80112

Lower-cost router platforms don't have RPKI capability. Mikrotik claims that v7 will... whenever that comes out. AFAIK, Ubiquiti doesn't support it either. Feature requests have been submitted to, and acknowledged by, both.

I am curious how much of a performance gap exists between new long-haul fiber and fiber laid during the Great Boom of 1998-2001. We are very close to 20 years.

I assume there are two dimensions, namely the bit-carrying capacity of an individual wave and the total bandwidth capacity of a fiber pair. I have been told, and readily believe, that fiber improvements do make a difference, but I have no sense of the magnitudes. My impression is that the 1998-2001 fiber probably cannot handle above 100 Gb/s waves and about 14 terabits per fiber pair, at least on trans-Atlantic cables.

- R.

www.crosslakefibre.ca

www.unitedcablecompany.com

it only proves the need for wider RPKI adoption....

How can we actually encourage RPKI adoption?

http://certification-stats.ripe.net/

tim, oleg, alex, ..., the ripe/ncc team, and the ripe community have
worked very hard to make it easy, and the numbers show their success.

lacnic even more so when looked at as a percentage (not shown at the
above url); i.e. they have approximately 25% coverage; also due to solid
policy, community, and technical work.

arin has made it very difficult for a large and important segment of
their membership, and the numbers show their negative success.

the other regions are asleep.

but the rpki is only part of the equation. to be pedantic,

    The RPKI is the X.509-based hierarchy [rfc 6481] which is congruent
    with the internet IP address allocation administration, the IANA,
    RIRs, ISPs, ... It is the substrate on which the next two are
    based. It is currently deployed in all five administrative regions.

    RPKI-based Origin Validation [rfc 6811] uses the RPKI data to allow
    a router to verify that the autonomous system announcing an IP
    address prefix is in fact authorized to do so. This is not crypto-
    checked, so it can be violated. But it should prevent the vast
    majority of accidental 'hijackings' on the internet today, e.g. the
    famous Pakistani accidental announcement of YouTube's address space.
    RPKI-based origin validation is in shipping code from many vendors.
    (A rough sketch of the check itself follows below.)

    Path validation, a downstream technology just finishing
    standardisation, uses the full crypto information of the RPKI to
    make up for the embarrassing mistake that, like much of the
    internet, BGP was designed with no thought to securing the BGP
    protocol itself from being gamed/violated. It allows a receiver of
    a BGP announcement to formally cryptographically validate that the
    originating autonomous system was truly authorized to announce the
    IP address prefix, and that the systems through which the
    announcement passed were indeed those which the sender/forwarder at
    each hop intended.
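
To make the origin validation step concrete, here is a rough sketch of
the RFC 6811 decision, assuming the router (or its validating cache)
already holds a set of validated ROA payloads; the prefixes and ASNs
below are invented purely for illustration.

# Sketch of RPKI-based origin validation (RFC 6811): classify a BGP
# route as valid / invalid / not-found against validated ROA payloads.
# The example VRPs are made up.
import ipaddress
from collections import namedtuple

VRP = namedtuple("VRP", "prefix maxlen asn")

vrps = [
    VRP(ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
    VRP(ipaddress.ip_network("2001:db8::/32"), 48, 64501),
]

def origin_validate(prefix_str, origin_asn):
    prefix = ipaddress.ip_network(prefix_str)
    covered = False
    for vrp in vrps:
        if prefix.version != vrp.prefix.version:
            continue
        if not prefix.subnet_of(vrp.prefix):
            continue
        covered = True  # at least one ROA covers this prefix
        if vrp.asn == origin_asn and prefix.prefixlen <= vrp.maxlen:
            return "valid"
    return "invalid" if covered else "not-found"

# origin_validate("192.0.2.0/24", 64500)   -> "valid"
# origin_validate("192.0.2.0/25", 64500)   -> "invalid"  (exceeds maxLength)
# origin_validate("192.0.2.0/24", 64499)   -> "invalid"  (wrong origin AS)
# origin_validate("198.51.100.0/24", 64500) -> "not-found"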

one blocker for origin validation deployment today is lack of solid
testing of vendors' implementations; and one is known to be sorely
mis-implemented.

there is work to be done. as stephane pointed out, if you want to be
overwhelmed with tweets or email, subscribe to the feed of mis-
originations at andree's http://bgpmon.net/. as the sea level rises,
maybe we'll do more about this problem.

randy

how is it hard to provide redundancy with rsync?

the use of rsync in RPKI is preventing a lot of large ISPs from
implementing it (too difficult to provide redundancy with
rsync).

uh, at least the DRL implementation supports caches feeding off of
caches in (if you are silly enough) an arbitrarily complex graph.

some years back, our research group actually used large clusters to
emulate large deployments with multi-level caching and found it quite
efficient. see

   Olaf Maennel, Iain Phillips, Debbie Perouli, Randy Bush, Rob Austein,
   and Askar Jaboldinov, "Towards a Framework for Evaluating BGP
   Security," CSET'12, 5th Workshop on Cyber Security Experimentation
   and Test.
   https://www.usenix.org/system/files/conference/cset12/cset12-final19.pdf

randy
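
For what it's worth, the 'caches feeding off of caches' model needs
nothing exotic on the transport side: a downstream cache can be pointed
at an ordered list of upstream caches and fall back when one is
unreachable. A minimal sketch, with placeholder hostnames and assuming
rsync is installed:

# Minimal sketch of redundancy with plain rsync: try an ordered list of
# upstream RPKI caches and fall back when one is unreachable.
# Hostnames are placeholders, not real publication points.
import subprocess

UPSTREAM_CACHES = [
    "rsync://cache1.example.net/rpki/",
    "rsync://cache2.example.net/rpki/",
]

def sync(dest="/var/cache/rpki/"):
    for upstream in UPSTREAM_CACHES:
        result = subprocess.run(
            ["rsync", "-a", "--delete", "--timeout=30", upstream, dest]
        )
        if result.returncode == 0:
            return upstream  # this cache answered; local copy is current
    raise RuntimeError("no upstream cache reachable")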

The servers where the RPKI data is published (the Trust Anchor and the CAs) are referred to using a single URI, meaning that any sort of geographic redundancy or failover has to be handled via external means (anycast, load balancing, etc.), and rsync isn’t well-suited to that sort of setup.


Rich Compton | Principal Eng | 314.596.2828
14810 Grasslands Dr, Englewood, CO 80112

The servers where the RPKI data is published (the Trust Anchor and the
CAs) are referred to using a single URI, meaning that any

sure, but even with rrdp there's just one URI you'd follow, which
translates to some hostname + path.

sort of geographic redundancy or failover has to be handled via external
means (anycast, load balancing, etc.) but rsync isn’t well-suited for this
sort of implementation.

why's that? it seems to work fine for many free software repositories, for
instance.
Yes, updates to that repository would have to be 'managed', but that's also
the case for rrdp, or any other 'more than one copy' solution for publicly
available data, right?

does some of the lifting to sort out the 'how to get my updates to all the
copies of my repository'... it doesn't yet support RRDP, but it's not hard
to see where to stick that in the config/setup.

You can use RPKI without even touching your network just by enabling
ROAs today in the RIPE database.
This is a harmless piece of work that you can do today; it helps
the wider community and raises awareness as a first step.
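
If you want to see the effect without touching a router, query one of
the public checkers once the ROA is published. The sketch below uses
RIPEstat's rpki-validation data call as I understand it; treat the
endpoint and response fields as assumptions and adjust to whichever
checker you prefer.

# Quick sanity check after publishing a ROA: ask RIPEstat whether a
# given announcement validates. Endpoint and fields are assumptions
# based on RIPEstat's public data API and may differ slightly.
import json
import urllib.request
from urllib.parse import urlencode

def check(prefix, origin_asn):
    params = urlencode({"resource": f"AS{origin_asn}", "prefix": prefix})
    url = f"https://stat.ripe.net/data/rpki-validation/data.json?{params}"
    with urllib.request.urlopen(url) as resp:
        data = json.loads(resp.read().decode())["data"]
    print(json.dumps(data, indent=2))  # validity status and matching ROAs

# check("193.0.0.0/21", 3333)  # RIPE NCC's own prefix, as an example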

/nikos