BGP (in)security makes the AP wire

http://www.nytimes.com/aponline/2010/05/08/business/AP-US-TEC-Fragile-Internet.html

It's a pretty reasonable article, too, though I don't know that I agree about the "simplicity of the routing system"....

    --Steve Bellovin, http://www.cs.columbia.edu/~smb

I worry about the implications of the article in the environment of
recent news....

IRS (or FTC, or FCC) maintains a hosts.txt that everybody uses instead
of BGP?

I wonder how telephone calls are routed? (I know how they were in the
60's and 70's--I really don't know how the current system works.)

And when I drive someplace, I do indeed go by the signs I see, which are
not erected by a central authority, as I move along. (I don't have a
route from here to Fairbanks, Alaska, but my MCA shows one from here to
Council Bluffs, Iowa, and from there there are several I might use,
depending on what signs I see ("Warning, I29 N closed at Mondamin due to
flooding") when I get there.)

I'm sorry, but I am very afraid of "Central Authority".

Speaking about that, is anyone currently seeing geographic (local-knowledge)
routing and authorityless address (=position) allocation from coordinates
(e.g. WGS 84 position fixes) in any realistic time frame as a major component
on the Internet?

Presumably, one could prototype something simple and cheap at L2 level
with WGS 84->MAC (about ~m^2 resolution), custom switch firmware and GBIC
for longish (1-70 km) distances, but without a mesh it won't work.
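
For concreteness, here is a toy sketch of such a WGS 84 -> MAC mapping (Python). The 23+23-bit quantization and flag placement are my own assumptions, not any standard; at those widths cells come out around 2.4 m x 4.8 m at the equator, roughly the resolution mentioned:

```python
def wgs84_to_mac(lat: float, lon: float) -> str:
    """Pack a WGS 84 fix into a locally administered unicast MAC.

    46 of the 48 bits carry position (23 bits latitude, 23 bits
    longitude); the remaining two are the U/L and I/G flags of the
    first octet. Purely illustrative -- no such standard exists.
    """
    if not (-90.0 <= lat <= 90.0 and -180.0 <= lon <= 180.0):
        raise ValueError("coordinates out of range")
    top = (1 << 23) - 1
    qlat = min(int((lat + 90.0) / 180.0 * (1 << 23)), top)
    qlon = min(int((lon + 180.0) / 360.0 * (1 << 23)), top)
    v = (qlat << 23) | qlon                      # 46 position bits
    first = (((v >> 40) & 0x3F) << 2) | 0b10     # U/L=1 (local), I/G=0 (unicast)
    rest = v & 0xFFFFFFFFFF                      # low 40 bits
    octets = [first] + [(rest >> (8 * i)) & 0xFF for i in range(4, -1, -1)]
    return ":".join(f"{o:02x}" for o in octets)

# e.g. the 0,0 cell:
# wgs84_to_mac(0.0, 0.0) -> "82:00:00:40:00:00"
```

A relay could then decode a destination MAC back into a coarse position and forward toward it, which is the local-knowledge part.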

> Speaking about that, is anyone currently seeing geographic (local-knowledge)
> routing and authorityless address (=position) allocation from coordinates
> (e.g. WGS 84 position fixes) in any realistic time frame as a major component
> on the Internet?

geographic location doesn't map to topology

It was discussed during the IPng days. My view at the time -- and my view today -- is that there's an inherent conflict between that and multiple competitive ISPs. Suppose there's an IP address corresponding to 40.75013351 north latitude, 73.99700928 west longitude (my building, according to Google maps). To which ISP should it be handed for delivery? Must all ISPs in a given area peer with each other?

    --Steve Bellovin, http://www.cs.columbia.edu/~smb

In orbital line of sight (LoS) constellations and wireless meshes,
yes. Arguably there's a cost function over fiber laid as well, e.g.
long-distance fiber runs typically follow a geodesic. Of course
over short distances relativistic ping isn't necessarily a good
metric for distance.

However, when deploying a geographically routed network with
address assignment/refinement from mutual relativistic ping
triangulation there would be incentive that topology follows
geography.

> It was discussed during the IPng days.

I realize the scheme is old; I myself reinvented it around 1990.
I guess, given that the idea hasn't gone very far since, that kind of
answers my own question.

> My view at the time -- and my view today -- is that there's
> an inherent conflict between that and multiple competitive ISPs.

It'd be a standard. Surely, before the TCP/IP suite became dominant,
people were thinking that speaking a particular protocol was a
competitive advantage against a competitor.

> Suppose there's an IP address corresponding to 40.75013351 west
> longitude, 73.99700928 north latitude (my building, according
> to Google maps). To which ISP should it be handed for delivery?
> Must all ISPs in a given area peer with each other?

Let's say I buy a mesh radio which speaks the protocol. Who's
the ISP? By putting it up on a pole or a roof I've become a transit
point for traffic which potentially originated far away. I could
use QoS to prioritize traffic by distance, so that far away
traffic doesn't expire.
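
A minimal sketch of that distance-weighted QoS idea (Python; the eight priority levels and 20,000 km normalization are arbitrary assumptions of mine):

```python
def forwarding_priority(distance_km: float,
                        levels: int = 8,
                        max_km: float = 20000.0) -> int:
    """Map how far a packet has already travelled to a queue priority
    (0 = lowest). Farther-travelled packets are served first, so
    long-haul traffic isn't starved afresh at every hop. A toy
    policy; a real scheduler would also weigh age, class, fairness.
    """
    d = min(max(distance_km, 0.0), max_km)   # clamp to [0, max_km]
    return min(levels - 1, int(d / max_km * levels))
```

A relay would then dequeue by this priority, so traffic that has crossed half the planet outranks traffic from the next block.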

In larger networks, you could tag packets with your ISP's tag
until they are delivered to the "closest" point of exchange (of
course, geographic distance is not the only possible metric). That
way you could guarantee traffic doesn't exit your network unless
it has no other choice.

Of course you could tunnel anything you want over a geographic link.
Any LoS laser satellite constellation would presumably do that.

[The wonderful New And Improved Thunderbird deleted a response and the
message I was responding to--I don't know how or why; it seems to have
to do with the arrival of new messages.]

The message I was responding to seemed to be a rant that the reason Area
Code changes are (were) a hassle was due to the reluctance of the Evil
Powers to permit overlays and portable numbers.

I was saying that the Evil Powers thing is always a lot of fun, but the
facts are that until fairly recent times, that is the way the technology
worked. (As a point of interest--overlays have indeed been around since
the 1960's.)

When you dialed a number, a class 5 office received the digits dialed
and decided what to do with the call. If the number dialed was one it
served, it connected the call.

If not it would (using engineering decisions encoded in the machine[1])
hand the call off to a higher class office, maybe all the way up to a
class 1 which might hand it off to another class 1 and thence back down
the hierarchy to the class 5 that served the number.
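
The climb-the-hierarchy behaviour described above can be sketched like this (Python; office names and prefixes are invented, and the real machines encoded these decisions in office translations, not code):

```python
class Office:
    """Toy model of the class 1-5 switching hierarchy."""

    def __init__(self, name, served=(), parent=None):
        self.name = name
        self.served = set(served)      # number prefixes this office serves
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def _find(self, prefix):
        # Depth-first search of this office's subtree for the serving office.
        if prefix in self.served:
            return self
        for child in self.children:
            hit = child._find(prefix)
            if hit is not None:
                return hit
        return None

    def route(self, prefix):
        # Hand the call upward until some ancestor's subtree serves the
        # number; the downward leg through intermediate offices is elided.
        office, climb = self, []
        while office is not None:
            climb.append(office.name)
            hit = office._find(prefix)
            if hit is not None:
                return climb if hit is office else climb + [hit.name]
            office = office.parent
        return None                    # number unroutable

# e.g. a 214-area class 5 office completing a call to an 805 number:
regional = Office("class 1")
dallas = Office("class 5 (214)", served={"214-555"}, parent=regional)
oxnard = Office("class 5 (805)", served={"805-555"}, parent=regional)
# dallas.route("805-555") -> ["class 5 (214)", "class 1", "class 5 (805)"]
```

Note there is no global table anywhere: each office only knows what it serves and who its parent is, which is exactly why per-number portability was impractical.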

There just was no way to keep track, second by second, of where a given
number had wandered off to. No Evil Powers (not Bush, not
Halliburton, not Cheney, not Tea Partiers, not ....). It was just the
way things worked--and it was a pretty elegant design, all things
considered.

[1] Things that accounted for traffic patterns in a particular machine.
For example, an office I worked in had a class 5 machine (in the 214 Area
Code) which also served some 805-area numbers, so those calls didn't have
to go anywhere. It had high usage to 714 numbers, so there were trunks
running straight to the right class 5 office.

Well, egg-on-face time.

Thunderbird has a loss-of-focus issue and the message I was working on
was in a news group--not here.

Sorry.

I'll just grab my coat .....

> [The wonderful New And Improved Thunderbird deleted a response and the
> message I was responding to--I don't know how or why--seems to have to
> do with the arrival of new messages.]
>
> The message I was responding to seemed to be a rant that the reason Area
> Code changes are (were) a hassle was due to the reluctance of the Evil
> Powers to permit overlays and portable numbers.
>
> Well, egg-on-face time.
>
> Thunderbird has a loss-of-focus issue and the message I was working on
> was in a news group--not here.
>
> Sorry.
>
> I'll just grab my coat .....

as another victim of TBv3.

Todd

> http://www.nytimes.com/aponline/2010/05/08/business/AP-US-TEC-Fragile-Internet.html

embarrassing but pretty much correct. a reporter did their homework.
if i knew who it was, i would add them to my very small "willing to talk
to" list.

> It's a pretty reasonable article, too, though I don't know that I
> agree about the "simplicity of the routing system"....

"The fact that very good computer scientists are studying the internet
as a behavioral phenomenon should scare the hell out of us."
-- me, some years ago

randy

> http://www.nytimes.com/aponline/2010/05/08/business/AP-US-TEC-Fragile-Internet.html
>
> embarrassing but pretty much correct. a reporter did their homework.
> if i knew who it was, i would add them to my very small "willing to talk
> to" list.

Peter Svensson, AP Technology Writer
http://twitter.com/petersvensson

Regards
Marshall

Steven Bellovin wrote:

> http://www.nytimes.com/aponline/2010/05/08/business/AP-US-TEC-Fragile-Internet.html
>
> It's a pretty reasonable article, too, though I don't know that I agree about the "simplicity of the routing system"....

I am very skeptical whenever I see claims of this nature "If I do X I can bring down large global system Y in a matter of minutes/hours" where X involves something reasonably simple any single person with some skill could do.

I'm right there with you. I'm pretty sure the Internet would have crashed
and burned long ago from human error if this was the case. People typically
unintentionally bungle things worse than when they mean to.

> http://www.nytimes.com/aponline/2010/05/08/business/AP-US-TEC-Fragile-Internet.html
>
> It's a pretty reasonable article, too, though I don't know that I agree about the "simplicity of the routing system"....
>
> I am very skeptical whenever I see claims of this nature "If I do X I
> can bring down large global system Y in a matter of minutes/hours" where
> X involves something reasonably simple any single person with some skill
> could do.

except we have a history of it happening

In a message written on Sun, May 09, 2010 at 09:32:57AM -0400, Steven Bellovin wrote:

> http://www.nytimes.com/aponline/2010/05/08/business/AP-US-TEC-Fragile-Internet.html
>
> It's a pretty reasonable article, too, though I don't know that I agree about the "simplicity of the routing system"....

I had avoided this topic at first, but some of the follow on comments
make me feel compelled to post.

Deep down inside every industry are things that on the surface seem
dangerous; particularly to someone outside of the industry. These
make for excellent press headlines "Entire xyz one keystroke from
collapse!", but these stories do not understand even the smallest
fraction of the interaction.

Did you know there are no safeguards to prevent the pilot of your
next airplane from flying it into a building?

Lest you think it's just nutjobs,

Did you know tanker trucks with 10,000 gallons of fuel are allowed
to drive in front of your kids school?

One truck took out a significant part of the MacArthur Maze in
Oakland,
http://articles.sfgate.com/2007-04-29/bay-area/17239903_1_tanker-truck-roadway-firefighters

Did you know that one operator of a nuclear power plant could cause
the entire thing to melt down, simply because they weren't trained?

The problem with any of these situations is that they are in fact
complex systems. There is no "one cause", ever. The article
suggests that the lack of route authentication on peering is the
issue; it is not, it is one part of a majorly complex issue. Adding
a filter to every peering session will not prevent prefix hijacking,
it will merely change how it is done.
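
To make that concrete, here is roughly what a per-session origin/prefix filter checks, as a hedged sketch using Python's stdlib `ipaddress` (the allowlist contents and AS numbers are invented, drawn from the documentation ranges). Note the limitation the text points at: an attacker who announces a registered prefix while forging the registered origin AS still passes, so filtering changes how hijacking is done rather than preventing it:

```python
import ipaddress

# Hypothetical IRR-derived allowlist: origin AS -> prefixes it may originate.
ALLOWED = {
    64500: [ipaddress.ip_network("192.0.2.0/24")],
    64501: [ipaddress.ip_network("198.51.100.0/24")],
}

def accept_announcement(origin_as: int, prefix: str) -> bool:
    """Accept an announcement only if the prefix is equal to, or a
    subnet of, a prefix registered to its claimed origin AS."""
    net = ipaddress.ip_network(prefix)
    return any(net.version == allowed.version and net.subnet_of(allowed)
               for allowed in ALLOWED.get(origin_as, []))

# accept_announcement(64500, "192.0.2.128/25")  -> True  (more-specific passes)
# accept_announcement(64500, "198.51.100.0/24") -> False (not registered to it)
```

Even this toy shows the operational cost side: every legitimate renumbering or new more-specific now depends on the allowlist being updated first, which is exactly how registry-filter outages happen.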

I agree with Randy that we, as an industry, need to take steps to
prevent prefix hijacking. I don't think letting a reporter dictate
the method from some scare article is the right answer. But we
also need to realize there is a cost/benefit trade off. We can so
lock things down and mire them up in change control that it costs
the economy (us and our customers) millions of dollars every day
in lost productivity, but we never have a hijack. The real shame
is that no one is explaining that side of things.

So I disagree completely with Steven: this was an under-informed
reporter trying to scare people into thinking the Internet is a
massive house of cards that needs deeper regulation, oversight, or
something. It's not reasonable in any sense of the word, and is
not a balanced, engineering based assessment of the risks and costs.

If we want to get this right, we need to quantify the effect of a
route leak in dollars, and the cost of detecting and preventing a
route leak in dollars. We can then mix in some moral and ethical
views of the group and make sane engineering decisions with known
risks that everyone is comfortable implementing.

I will say this, my upstreams mucking up routing registry filters
have caused me outages hundreds of times. I've had sites down for
days because of filtering issues. I've also run into many cases
where I found routes taking suboptimal paths due to mis-entered
filters along the way; problems that those in the middle could not
even detect because they were being filtered! I think if major
ISP's tried to implement routing registry filters on their PNI's
we would have weeks of outages and suboptimal routing, and the cure
would be far worse than the disease.

I hope that work on things like RPKI and soBGP provide us a better,
more workable framework. However, the jury is very much still out
in my opinion.

Randy Bush wrote:

> except we have a history of it happening

You mean the whole innertubes went down because some dewd haxx0red it? I believe that was the claim being made in so many words (maybe he was just trying to land that DARPA job). It's one thing for parts of the innertubes to go down, but the whole thing?

> geographic location doesn't map to topology

In LEO satellite constellations and mesh wireless it typically does.
When bootstrapping a global mesh, one could use VPN tunnels over
Internet to emulate long-distance links initially.

Eben Moglen recently proposed a FreedomBox initiative, using ARM
wall warts to build an open source cloud with an anonymizing layer.
Many of these come with 802.11x radio built-in. If this project
ever happens, it could become a basis for end-user owned
infrastructure. Long-range WiFi can compete with LR fiber
in principle, though at a tiny fraction of throughput.

> Presumably, one could prototype something simple and cheap at L2 level
> with WGS 84->MAC (about ~m^2 resolution), custom switch firmware and GBIC
> for longish (1-70 km) distances, but without a mesh it won't work.

The local 64 bit part of IPv6 has enough space for global ~2 m resolution,
including altitude (24, 24, 16 bit). With DAD and fuzzing of the least
significant bits, address collisions could be prevented reliably.
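
A sketch of that 24/24/16 packing into the interface identifier (Python; the 1000 m altitude offset and 4 fuzz bits are my own assumptions, and no such addressing scheme is standardized):

```python
import random

def geo_iid(lat: float, lon: float, alt_m: float, fuzz_bits: int = 4) -> int:
    """Pack a WGS 84 fix into a 64-bit IPv6 interface identifier:
    24 bits latitude (~1.2 m steps), 24 bits longitude (~2.4 m at
    the equator), 16 bits altitude. Fuzzing the low longitude bits
    stands in for the DAD-plus-fuzzing collision avoidance above.
    """
    top24 = (1 << 24) - 1
    qlat = min(int((lat + 90.0) / 180.0 * (1 << 24)), top24)
    qlon = min(int((lon + 180.0) / 360.0 * (1 << 24)), top24)
    # Offset altitude by 1000 m so modest depths below sea level fit.
    qalt = min(max(int(alt_m + 1000.0), 0), (1 << 16) - 1)
    if fuzz_bits:
        qlon ^= random.getrandbits(fuzz_bits)  # randomize least significant bits
    return (qlat << 40) | (qlon << 16) | qalt
```

The result would occupy only the low 64 bits of the address, with the /64 prefix still assigned conventionally, which is one way central allocation and geographic addressing could co-exist.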

Central authority and decentralism can co-exist.

> > geographic location doesn't map to topology
>
> In LEO satellite constellations and mesh wireless it typically does.
> When bootstrapping a global mesh, one could use VPN tunnels over
> Internet to emulate long-distance links initially.
>
> Eben Moglen recently proposed a FreedomBox initiative, using ARM
> wall warts to build an open source cloud with an anonymizing layer.
> Many of these come with 802.11x radio built-in. If this project
> ever happens, it could become a basis for end-user owned
> infrastructure. Long-range WiFi can compete with LR fiber
> in principle, though at a tiny fraction of throughput.

"Tiny fraction" is putting it mildly. I once considered starting up a low-infrastructure wireless ISP using mesh radio based on wifi technology adapted to work in licensed bands.

If you work out the numbers, the bandwidth you get in any substantial deployment is pitiful compared to technologies like DSL and cable modems, let alone fiber.

New technologies such as distributed space-time multipath coding on the wireless side, and multipath network coding on the bitstream side, look like the way forward on this, but these are brand new, and still the subject of research -- you certainly can't just hot-wire these onto wifi hardware.

> > Presumably, one could prototype something simple and cheap at L2 level
> > with WGS 84->MAC (about ~m^2 resolution), custom switch firmware and GBIC
> > for longish (1-70 km) distances, but without a mesh it won't work.
>
> The local 64 bit part of IPv6 has enough space for global ~2 m resolution,
> including altitude (24, 24, 16 bit). With DAD and fuzzing of the least
> significant bits, address collisions could be prevented reliably.
>
> Central authority and decentralism can co-exist.

Indeed.

The fact that the usable bandwidth resulting from ad-hoc mesh wifi would be tiny compared to broadband connections doesn't mean this sort of thing isn't worth trying: a few tens of kilobits a second is plenty for speech, and even a few hundred bits per second is useful for basic text messaging.

Given that the cost of doing this is almost zero, since only software is required to implement it on any modern wifi/GPS equipped mobile hardware, this seems like a great thing to have in the general portfolio of networking technologies: having something like this available could be invaluable in disaster/crisis situations.

-- Neil

it seemed that the 'freedombox' was targeted at (or so I thought from
the snippet I read) striking a blow against regimes that cut off
network access during 'high stress' periods. A few kbps would still be
better than nothing :-)

of course, a coffee grinder (or something more sophisticated that'd be
available to the repressive regime du jour) can wipe out that few kbps
easily enough as well.

-chris