Randy;

> BGP Route Reflector IXPs need an AS number. I'll send you a URL with a
> whitepaper. The BGP Route Reflector IXPs have proved to offer a low entry
> cost for ISPs (for those places that do not have the deep pockets to get
> big routers).

except that big routers are not needed for small-isp exchanges. remember,
an isp participating in such an exchange has only to add the prefixes of
their local peers to their routing, typically a dozen or so. there are very
successful layer-two exchanges where the peers use what we think of as cpe
routers, e.g. cisco 2501s. and what's nice is that this is on the right
path to exchange growth.
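
to make that concrete, here is a minimal sketch of what one session at such a
layer-two exchange might look like on a small ios box; the as numbers,
addresses, and prefix-list name are made-up documentation values, and a
2501-era image would more likely use a distribute-list than a prefix-list:

```
router bgp 64512
 ! one session per local peer across the exchange LAN
 neighbor 192.0.2.10 remote-as 64513
 neighbor 192.0.2.10 description peer isp-a at the local exchange
 neighbor 192.0.2.10 prefix-list PEER-A-IN in
!
! accept only the handful of prefixes this peer originates
ip prefix-list PEER-A-IN seq 5 permit 198.51.100.0/24
ip prefix-list PEER-A-IN seq 10 permit 203.0.113.0/24
ip prefix-list PEER-A-IN seq 15 deny 0.0.0.0/0 le 32
```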

l3 exchange points are a labor suck and are fragile.

Maybe. However, l2 is for telco.

l2 exchange points are a labor suck and are fragile.

The right path is l1, though; then there is less reason to have
exchange points.

It will be more obvious as the peering speed between two ISPs exceeds
that of a single physical interface.

            Masataka Ohta

glad to have words of practical wisedom from your experience as a large
provider.

randy

There's another option for IXP architecture, virtual routers over a
scalable fabric. This is the only approach which combines the capacity of
inverse-multiplexed parallel L1 point-to-point links with the flexibility of
L2/L3 shared-media IXPs. The box which can do that is in field trials
(though i'm not sure the current release of software supports that
functionality).

--vadim

You spelled 'wisdom' wrong Randy.
Now be nice, eh?

This is an especially strange comment, as almost everyone who peers
interconnects in multiple places - thus exceeding the capacity of a
single interface.

Layer 1 peering (or pooling, as it's more usually known) is great for
interconnecting fiber networks, fast provisioning, and all that. However,
I fail to see the connection between Layer 1 interconnection and an IP
exchange point of any kind. This seems apples and oranges. Layer 2
exchange points are the only efficient way to go for IP traffic. History
and the "invisible hand" of the market have endorsed this path.

Daniel Golding NetRail,Inc.
"Better to light a candle than to curse the darkness"

There are a number of boxes that can do this, or are in beta. It would be
a horrific mistake to base an exchange point of any size around one of
them. Talk about difficulty troubleshooting, not to mention managing
the exchange point. Get a Foundry BigIron 4000 or a Riverstone
SSR. Exchange point in a box, so to speak. The Riverstone can support the
inverse-mux application nicely, on its own, as can a Foundry, when
combined with a Tiara box.

Daniel Golding NetRail,Inc.
"Better to light a candle than to curse the darkness"

You mean you really have any other option when you want to interconnect a
few 300 Gbps backbones? :) Both mentioned boxes are in the 120 Gbps range,
fabric capacity-wise. If you think that's enough, i'd like to point at
the DSL deployment rate. Basing exchange points on something which is
already inadequate is a horrific mistake, IMHO.

Exchange points are major choke points, given that 80% or so of traffic
crosses an IXP or bilateral private interconnection. Despite the obvious
advantages of the shared IXPs, the private interconnects between large
backbones were a forced solution, purely for capacity reasons.

--vadim

exchange points being choke points are more complex than that:

- backbones directly interconnect because it makes what were public traffic stats private. it is also a more financially sound model than having a 3rd party involved. it minimizes expenses.

- backbones limiting bandwidth into an Exchange Point also makes it a choke point.

- pulling out of an Exchange or demoting its importance to a particular backbone serves as a justification for not having equitable peering.

- knowing so much traffic goes between backbones makes it a political tug of war that brought on direct interconnects.

- private interconnects were not a forced solution. they were for revenue and political reasons, not purely capacity reasons. there has been this notion of Tier 1, 2, 3 ... because of this.

- equitable financial return at an Exchange means turning smaller peers into customers.

i am sure i have not nearly covered everything here.

-craig

Vadim,

If you have that much traffic, privately peer. Public Exchange points of
any sort are geared for smaller amounts of data interchange. The only real
scaling question is the number of peers. Please explain why you would want to
interconnect several 300 Gbps backbones across a virtual router box, as
opposed to direct private peering. For that matter, which networks are you
referring to? I can't think of too many operational 300 Gbps IP networks.

Daniel Golding NetRail,Inc.
"Better to light a candle than to curse the darkness"