Info on MAE-EAST

This is probably a worst-case scenario. What about Ethernet cross-connects
using the 6-port cards? Or zero-mile T1's using the 8-port serial cards?

I keep thinking that I live in a world where providers have MUCH faster
trunking pipes than the pipes they sell to their average customers. If
you can get away with a T1's worth of bandwidth, then the only reason you
and your "peer" would be in the same room is to share access to longhaul;
absent that reason, the situation you describe would not occur -- you
would just run the T1 through the telco from your closest hub to theirs.

Ethernet is a slightly different case, but only slightly. An Ethernet switch
costs a lot less than a GIGAswitch, and for that matter there are probably
occasions (as occurred at the Phoenix IXP) where unswitched 10Mb/s Ethernet
is a fine way to start out -- which means you can grow to a switch and even
to 100Mb/s if you plan it right, but you cannot easily grow to FDDI (which
some customers think you should have, to get them 4K PMTU to their semi-local
destinations).

And you are assuming a full mesh which isn't necessarily what people need.

I thought I'd covered this. I'm assuming full mesh because if you are paying
to colocate equipment and tie your colo back to the rest of your net in some
way, you will pretty naturally want to get as much bang for your buck as can
be had. Each person you don't peer with represents additional load on the
people you do peer with, or on your upstream transit if you're buying any,
and on the upstream transit link of your unchosen-peer if he's got transit.
Once you're in the same room, the cost of not peering is a LOT higher than
the cost of peering, unless and only unless you already have a private
interconnect to the unchosen peer.
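The full-mesh assumption scales quadratically, which is worth putting a number on. A back-of-the-envelope sketch (the participant counts and function name are mine, purely illustrative):

```python
def full_mesh_links(n):
    """Number of bilateral interconnects for n participants to peer
    with every other participant in the room: n choose 2."""
    return n * (n - 1) // 2

# Each participant terminates n-1 ports; the room as a whole
# needs n*(n-1)/2 cables or VCs.
for n in (5, 10, 50):
    print(n, "participants ->", full_mesh_links(n), "interconnects")
```

At 5 participants that is 10 interconnects; at 50 it is 1225, which is part of why shared-media or switched fabrics look attractive even when per-pair circuits would be cleaner.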

I don't think you can generalize about what a provider wants from an
exchange point, especially not in a world in which exchange points are
breeding like rabbits.

I agree that we're going to see a lot of IXP's of all sizes and shapes soon.

I guess I was visualizing something quite different from current
exchanges. Rather than have an Ethernet switch I was thinking of using
Ethernet point-to-point. And the exchange point was more like a big colo
center in which you could set up as many private interconnects as you
want at the lowest possible cost (interface ports plus installing a cable
versus running T1's or DS3's across town).

The colo nature of such a beast would lead ISP's to install terminal
servers, web farms, etc., which would have an effect on the topology.
Squid cache hierarchies would be nice here as well.

I'm not sure if this is a viable exchange point architecture yet but I
think it will be viable and useful as the Internet scales through the next
order of magnitude. My gut feel is that breaking out the traffic into lots
of smaller non-shared circuits will be easier to manage and less
susceptible to being swamped during overload conditions. Not that
overloading cannot occur, but the effects would be more isolated than if
the overload occurred on a shared medium.

I think that people would be interested in traffic flow data that would
either prove or disprove my theories.

Michael Dillon - Internet & ISP Consulting
Memra Software Inc. - Fax: +1-250-546-3049
http://www.memra.com - E-mail: michael@memra.com

I could equally well see a colo center where the plan is to run a
DS3 to the colo center, put a router there, and buy transit from as many
providers as you wanted by connecting to each provider's switch. For
example, a room where Sprint, MCI, BBNPlanet, PSI, Netcom, and whoever
else wanted to come would each have their own Ethernet switch or Gigaswitch.

  ISPs could then colo a router at the center and with no telco loop
cost obtain transit connections from whatever combination of providers
they wished. If the operators of the colo center had their own regional
OC48 SONET ring, the cost to bring a DS3 to the center could be quite low
for both ISPs and the big boys.

  DS

This is a great idea, but it runs counter to the interests of the telco's,
all of whom run the current major NAP's.

The other element needed would be multiple fiber drops from multiple
carriers.

.stb

That's pretty much the nightmare scenario for the long-haul networks.
Frictionless capitalism, with buying decisions being made by machines, i.e.
routers, based on the current state of the network. The product (long-haul
packet transport) becomes a total commodity with non-existent customer
loyalty. Kewl.

Dirk