NAP Solutions

Has anybody thought about a packet over SONET solution for an exchange?
Seems like you could get a pretty effective answer out of a GSR with OC3
and OC12 interfaces...

  Brian Horvitz

   Has anybody thought about a packet over SONET solution for an exchange?

As a matter of fact, yes.

A SONET MPLS switch makes for a very interesting exchange. The use of MPLS
avoids having the box act as a router and thereby avoids the headaches of
figuring out just which routes the exchange should select. If the label
switched paths in the switch are manually configured, there's no easy way
for a non-neighbor to "accidentally" send you traffic. The switch fabric
scales up nicely in that there is an interesting selection of link speeds
that promise to scale up for the foreseeable future. You can scale the
fabric up either by growing individual switches, or by creating a switch
fabric, or both. Sites connecting to the switch can do so remotely by
bringing in a SONET link, so no on-site equipment is necessary.
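The point about manually configured label-switched paths keeping non-neighbors out can be sketched in a few lines. This is a toy model, not any vendor's API; the class and names are invented for illustration. The key property is that only (port, label) pairs an operator explicitly provisions are forwarded, and there is no routing protocol that could install a path on its own:

```python
# Toy sketch (hypothetical names, not a real switch API): manually
# provisioned LSPs mean unprovisioned traffic is simply dropped,
# so a non-neighbor cannot "accidentally" send you traffic.

class MplsExchangeSwitch:
    def __init__(self):
        # (ingress_port, ingress_label) -> (egress_port, egress_label)
        self.lsp_table = {}

    def provision_lsp(self, in_port, in_label, out_port, out_label):
        """Manual configuration step; the switch never learns
        paths dynamically, unlike a router running BGP/IGP."""
        self.lsp_table[(in_port, in_label)] = (out_port, out_label)

    def forward(self, in_port, label):
        # Unprovisioned (port, label) pairs have nowhere to go: drop.
        return self.lsp_table.get((in_port, label))

switch = MplsExchangeSwitch()
switch.provision_lsp(in_port=1, in_label=100, out_port=7, out_label=200)

assert switch.forward(1, 100) == (7, 200)  # provisioned peer-to-peer LSP
assert switch.forward(3, 100) is None      # non-neighbor traffic dropped
```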

In short, technologically, a SONET MPLS exchange is a fine alternative
because it provides the architectural advantages of a switched fabric
without the bandwidth limitations of LAN technologies and without the cell
loss problems of ATM.

Note that this probably does not fix the economics and politics of the
exchange point. Those are almost orthogonal issues.

Tony

Tony Li <tli@juniper.net> writes:

   A SONET MPLS switch makes for a very interesting
   exchange.

Yes, for all the reasons you outlined, and because we've
been talking about it for a couple of years, this is what I
would deploy in the short run if I were interested in
getting large-provider business.

Having a circuit per peer has some advantages with respect
to failure modes, but is expensive. Assuming that there
is still a per-bit-per-second-per-kilometre cost even for
in-house applications, I expect that if a reliable
alternative existed it would be used, particularly as the
growth curve of inter-provider traffic necessitates
expansion of the private peering circuits.

I would hope that people at Sprint, UUWHO (and ANS and
MCI) and the various other places using private peering
points are thinking about migrating from a "you buy one
circuit, I will buy one circuit" model to a more general
one: "we will run an exchange point here, you bring
circuits to us; you run an exchange point there, we will
bring circuits to you; they will be running an exchange
point there, we will both bring circuits to that."
Actually making a decision to do this would depend on
costs and on the reliability and interoperability of new
big fast routers and MPLS implementations.

One fat physical circuit that buys you N peers is
probably going to be cheaper than N not-so-fat physical
circuits at one peer each, in line costs, manageability,
and capital expenses (router ports, etc.).

Of course, the key downside to using a SONET MPLS
switch/router is back to scaling.

A question for you, Tony. What does one do when one has an
N port MPLS switch/router and has filled all N ports with
traffic? Consider that each of the N ports will become
fuller and that there will probably be a desire or
requirement for N+1 ports with more to come.

The lesson of the Gigaswitches and the ATM counterparts is
that scaling beyond a single switch is hard.

I don't have an answer, given what I know and can imagine
about near-term technology (as opposed to stuff I want you
and Crashco to build :-) )

   the foreseeable future.

How long is that these days anyway?

Anyway, other than the "what do you do other than give up
on port density when you have more traffic or connections
than one MPLS switch can handle" concern, I am in complete
agreement with you, surprise surprise.

  Sean.

   What does one do when one has an
   N port MPLS switch/router and has filled all N ports with
   traffic? Consider that each of the N ports will become
   fuller and that there will probably be a desire or
   requirement for N+1 ports with more to come.

   The lesson of the Gigaswitches and the ATM counterparts is
   that scaling beyond a single switch is hard.

   I don't have an answer, given what I know and can imagine
   about near-term technology

Two answers: parallelism and bypasses. Vadim is working on the parallelism
idea with his project at pluris.com so we will soon see whether or not this
is a workable approach.

Bypasses are good old-fashioned highway network technology. When the switch
is overloaded (i.e. the cross-bar street network in the city core) divert
traffic around the city (switch) with a bypass. Or in other words, when all
N ports on your switch are getting full, look at the other end of the
circuit going into each port and try to divert some traffic into a bypass
that does not go through the switch. Remember that the switch's backplane
is essentially a backbone network that has been collapsed into a single
box. The scaling problem arises when too much traffic from other networks
wants to go through this one box. Step back and look at the bigger picture
and you will see that there are solutions that do not require chaining
switches together.
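The bypass idea above can be made concrete with a toy greedy heuristic: when the switch's ports are getting full, find the heaviest source/destination pairs crossing the switch and move them onto direct circuits that skip it entirely. The function and numbers here are invented for illustration:

```python
# Toy bypass selection: divert the largest flows off the switch
# until the remaining load fits the switch's capacity.

def pick_bypasses(flows, port_capacity, load):
    """flows: {(src, dst): mbps} currently crossing the switch.
    Greedily divert the biggest flows onto direct bypass circuits
    until the residual load fits port_capacity."""
    diverted = []
    for pair, mbps in sorted(flows.items(), key=lambda kv: -kv[1]):
        if load <= port_capacity:
            break
        diverted.append(pair)  # this pair gets a private bypass circuit
        load -= mbps
    return diverted, load

flows = {("A", "B"): 400, ("A", "C"): 150, ("B", "C"): 90}
diverted, remaining = pick_bypasses(flows, port_capacity=300, load=640)
assert diverted == [("A", "B")]  # one fat flow bypassed
assert remaining == 240          # now fits the switch
```

This mirrors the private-interconnect case: the single heaviest pair is exactly the one for which a direct circuit pays off first.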

Of course a simple private interconnect circuit is the classic and the
simplest form of bypass, but anything which causes traffic to flow through
a different path meets the criterion. Exchange points cannot be designed or
scaled as discrete objects; they exist in the context of the entire network
mesh.

   A question for you, Tony. What does one do when one has an
   N port MPLS switch/router and has filled all N ports with
   traffic? Consider that each of the N ports will become
   fuller and that there will probably be a desire or
   requirement for N+1 ports with more to come.

   The lesson of the Gigaswitches and the ATM counterparts is
   that scaling beyond a single switch is hard.

Yup. The obvious answer is build a bigger switch, and I believe (without
demonstrable proof) that some fairly large switches can be built.

The less obvious answer is to build a mesh, which I assume has been done
for the ATM solutions. If you're familiar with the failure modes, I'd love
to hear 'em. Yes, you do fall into the 'small switch penalty' in which you
start using up significant bandwidth interconnecting your switches. I've
got no magic around that one.
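The "small switch penalty" admits rough arithmetic: in a full mesh of k switches with n ports each, every switch burns (k - 1) ports on inter-switch links, leaving n - (k - 1) for actual connections. A sketch with hypothetical 16-port switches:

```python
# Port accounting for a full mesh of k switches, n ports each.
# Only counts ports; transit traffic crossing several inter-switch
# links makes the real bandwidth penalty worse than this suggests.

def customer_ports(k_switches, n_ports):
    # Each switch dedicates (k - 1) ports to the mesh itself.
    return k_switches * (n_ports - (k_switches - 1))

assert customer_ports(1, 16) == 16   # single switch: all ports usable
assert customer_ports(4, 16) == 52   # 4 * 13; 12 ports eaten by the mesh
assert customer_ports(16, 16) == 16  # growth entirely consumed by interconnects
```

Usable capacity peaks and then collapses as k grows, which is one way to state why meshing small switches is not magic.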

   > the foreseeable future.

   How long is that these days anyway?

2 weeks, 3 hours and 17 minutes. ;-)

Tony