its upstream provider ... They also want to be multi-homed from day one.
They currently offer all sorts of fully redundant data services, and
understandably they can't see why they should have a single point of
failure in their global Internet access.
Oh, good, the multi-homing discussion again.
(The loud "bang" you just heard was Noel blowing out his brains in sheer
desperation and frustration.)
Here's a repeat of a previous post to a different list:
For those who aren't up on the technical background to multihoming, you
should consider consulting the Big-Internet archives from the beginning of
September '95 for a long discussion of it.
(In particular, see the thread "Discussing encap/mapping proposal",
especially the messages of Thu Aug 31 23:33:12 1995 from me, Fri, 01 Sep
1995 11:13:39 -0500 from Scott M. Ballew, and Fri Sep 1 13:25:59 1995 and
Sun Sep 3 01:23:59 1995 from me, which discuss the theoretical background
in some detail. The thread "Multihoming", especially the messages of Fri
Sep 1 11:20:25 1995 and Mon Sep 4 14:26:06 1995, also contains some
valuable material.)
For those who don't wish to paw through all that, here are the salient
points, quickly:
1. Multihoming for robustness adds new capability to the system, and
that new capability does not come without cost, the cost being greater
routing overhead. The amount of routing overhead will vary depending
on many factors.
2. Connectivity-based addressing (i.e. the thing that "provider-based"
is a subset of) provides routes at least as good, at a cost in routing
overhead at least as small, as any other addressing scheme.
(Connectivity-based addresses for multi-homed sites allow us to use
less-than-global scopes for the advertisement of routing information
about such sites; see the toy sketch after this list. Non-connectivity-
based addresses do not have this characteristic; they require
advertisement through the entire network.)
3. The cost in routing overhead (at least in conventional hop-by-hop
routing systems such as the current Internet) is dependent on how far
apart, in the connectivity topology, the multiple connections are.
(More widely separated connections can increase the routing overhead
with little payback in increased reliability.)
4. The cost further depends (again, in the same kind of systems) on
how optimal you want routes to be when both links are up, and how
optimal you want them to be when one is down (which will vary depending
on whether it's the primary or the secondary that has failed).
I hope everyone will keep these 4 main points in mind in any discussion
on multi-homing.
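To make points 2 and 3 concrete, here's a toy model in Python; the
topology, the scope rule, and all the names in it are invented for
illustration, and don't describe any real routing protocol:

    from collections import deque

    # Invented backbone: router -> neighbours.
    TOPOLOGY = {
        "A": ["B", "C"],
        "B": ["A", "C", "D"],
        "C": ["A", "B", "E"],
        "D": ["B", "E"],
        "E": ["C", "D"],
    }

    def hops(src, dst):
        """Shortest-path hop count, via breadth-first search."""
        seen, frontier = {src}, deque([(src, 0)])
        while frontier:
            node, dist = frontier.popleft()
            if node == dst:
                return dist
            for nbr in TOPOLOGY[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    frontier.append((nbr, dist + 1))
        raise ValueError("disconnected")

    def advertisement_scope(primary, secondary):
        """Routers that must carry the multi-homed site's extra route
        under connectivity-based addressing: those in the region
        spanning the two attachment points (an invented scope rule)."""
        radius = hops(primary, secondary)
        return {n for n in TOPOLOGY
                if min(hops(n, primary), hops(n, secondary)) <= radius}

    # Point 3: nearby attachments keep the scope small; distant ones
    # drag the route across most of the net. With non-connectivity-
    # based addressing the scope is *always* every router.
    for pri, sec in [("A", "B"), ("A", "E")]:
        scope = advertisement_scope(pri, sec)
        print("attached at %s/%s: %d of %d routers carry the extra route"
              % (pri, sec, len(scope), len(TOPOLOGY)))

Running it shows the adjacent attachments (A/B) confining the extra
route to 4 of 5 routers, while the widely separated ones (A/E) force it
onto every router, for little payback in reliability.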
Now, a few extra comments appropriate to this case:
There is *no way*, in the current overall routing architecture (which uses
exchange of routing tables), to add multihoming for fault tolerance without
a cost in routing overhead. Providing this fault-tolerance is not zero-cost
in *any* scheme I can conceive (TANSTAAFL, surprise, surprise), but some
schemes are better than others. Since redesigning the entire architecture
is not something we can do this week, I'll leave the pleasant contemplation
of such alternatives to academic, ivory-tower environs.
We can *easily* limit the amount of routing overhead caused by multi-homing
among different providers if we provide some structure in the addressing
hierarchy *above* providers. (To be technical, this allows less-than-global
advertisement scopes for such multi-homed entities; see the sketch below.)
Since this could necessitate renumbering most of the 'Net, I don't expect
it to happen anytime soon.
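Here's a hedged sketch of that idea, using Python's standard ipaddress
module; the prefixes and the two-level hierarchy are made up purely for
illustration:

    import ipaddress

    # Invented hierarchy: both providers' blocks nest inside one
    # super-block, one level *above* them in the addressing hierarchy.
    SUPER_BLOCK = ipaddress.ip_network("10.0.0.0/8")
    PROVIDER_A  = ipaddress.ip_network("10.1.0.0/16")   # primary
    PROVIDER_B  = ipaddress.ip_network("10.2.0.0/16")   # secondary
    SITE        = ipaddress.ip_network("10.1.42.0/24")  # numbered out of A

    assert PROVIDER_A.subnet_of(SUPER_BLOCK)
    assert PROVIDER_B.subnet_of(SUPER_BLOCK)

    def routes_seen_outside(super_block, site):
        """Outside the super-block, only the aggregate need be
        advertised; the multi-homed site's more-specific route can
        stay inside it."""
        if site.subnet_of(super_block):
            return [super_block]       # one aggregate, global scope
        return [super_block, site]     # site's route leaks everywhere

    print(routes_seen_outside(SUPER_BLOCK, SITE))
    # -> [IPv4Network('10.0.0.0/8')]

The site's route gets less-than-global scope precisely because both of
its providers share an ancestor in the hierarchy; take that ancestor
away and the more-specific has to go everywhere.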
We can also limit the amount of routing overhead by providing configured
AAB boundaries for a given multihomed site which enclose a path between the
primary and secondary providers. Since this will in some cases require
cooperation among multiple providers, as well as either i) massive amounts
of manual, mechanical configuration bookkeeping, or ii) automated tools we
don't have yet, don't hold your breath for that one either.
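To give a rough feel for the bookkeeping involved, here's a toy check of
one such boundary; "AAB" is from the text above, but the data model and
the check itself are invented, not an existing tool:

    from collections import deque

    # Invented: which organization operates each router.
    OPERATOR = {"p1": "ProviderA", "p2": "ProviderA",
                "x1": "TransitX",  "s1": "ProviderB"}
    LINKS = {"p1": ["p2"], "p2": ["p1", "x1"],
             "x1": ["p2", "s1"], "s1": ["x1"]}

    def boundary_ok(inside, primary, secondary):
        """The configured region must contain both attachment points
        and be connected, i.e. enclose a path between the providers."""
        if primary not in inside or secondary not in inside:
            return False
        seen, todo = {primary}, deque([primary])
        while todo:
            node = todo.popleft()
            for nbr in LINKS[node]:
                if nbr in inside and nbr not in seen:
                    seen.add(nbr)
                    todo.append(nbr)
        return secondary in seen

    region = {"p1", "p2", "x1", "s1"}
    print(boundary_ok(region, "p1", "s1"))        # True
    print(sorted({OPERATOR[r] for r in region}))  # three operators

Note that even this four-router example needs three separate operators
to keep one site's boundary configured consistently; now multiply that
by every multi-homed site on the 'Net.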
Noel
PS: If you didn't understand the paragraph on AAB boundaries, read the
references before you speak.