IX's

From: Vadim Antonov [mailto:avg@kotovnik.com]
Sent: Tuesday, January 09, 2001 3:32 AM

You mean you really have any other option when you want to interconnect
a few 300 Gbps backbones? :) Both of the boxes mentioned are in the 120 Gbps
range, fabric-capacity-wise. If you think that's enough, I'd like to point
at the DSL deployment rate. Basing exchange points on something which is
already inadequate is a horrific mistake, IMHO.
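
To put rough numbers on that argument, here is a back-of-the-envelope sketch
in Python; the 20% offload fraction is purely an illustrative assumption, not
a figure anyone in the thread gave.

    # Rough check of the capacity argument: a box with ~120 Gbps of fabric
    # capacity runs out fast once a few ~300 Gbps backbones meet across it.
    # The offload fraction is an illustrative assumption, not a thread figure.

    def fabric_demand_gbps(backbone_gbps, num_backbones, offload_fraction):
        """Aggregate traffic the exchange fabric must carry if each backbone
        hands off `offload_fraction` of its capacity at the IX."""
        return backbone_gbps * num_backbones * offload_fraction

    FABRIC_CAPACITY_GBPS = 120   # the "120 Gbps range" boxes mentioned above
    BACKBONE_GBPS = 300          # "a few 300 Gbps backbones"

    for n in (2, 3, 4):
        demand = fabric_demand_gbps(BACKBONE_GBPS, n, offload_fraction=0.2)
        verdict = "fits" if demand <= FABRIC_CAPACITY_GBPS else "exceeds the fabric"
        print(f"{n} backbones at 20% offload: {demand:.0f} Gbps -> {verdict}")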

All one has to do is look at PAIX. The whole system looks like it is being
used at very close to max capacity. I have a client at AboveNet and my
systems are on a CerfNet block. PAIX is between us. I feel their pain.

Exchange points are major choke points, given that 80% or so of traffic
crosses an IXP or bilateral private interconnection. Despite the obvious
advantages of the shared IXPs, the private interconnects between large
backbones were a forced solution, purely for capacity reasons.

and they aren't keeping up with the growth.

This entire IX thread has been interesting. But, it appears to be one of
those "good theory, implementation sux" sort of things.

* Roeland Meyer <rmeyer@mhsc.com> [20010109 11:46]:
[..]

> All one has to do is look at PAIX. The whole system looks like it is being
> used at very close to max capacity. I have a client at AboveNet and my
> systems are on a CerfNet block. PAIX is between us. I feel their pain.

I'm curious about this comment. Can you elaborate a bit? I do see that
above.net and cerf.net appear to peer in PA but according to both of their
looking glasses the path looks good at the moment. To the tune of a nice
smooth ~16ms from either direction, if traceroute is to be believed..
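
For anyone who wants to repeat that kind of spot check from their own vantage
point, here is a minimal sketch; the hostname is a placeholder (jr used the
providers' public looking glasses rather than a script), and it just wraps the
system ping command.

    # Quick latency spot check: send a few pings toward a target on the far
    # side of the exchange and report the round-trip times. The hostname is a
    # placeholder; substitute a host you actually care about.
    import re
    import subprocess

    def ping_rtts(host, count=5):
        """Return the per-packet RTTs (ms) reported by the system ping."""
        out = subprocess.run(
            ["ping", "-c", str(count), host],
            capture_output=True, text=True, check=True,
        ).stdout
        return [float(m) for m in re.findall(r"time=([\d.]+) ?ms", out)]

    if __name__ == "__main__":
        rtts = ping_rtts("www.example.com")
        avg = sum(rtts) / len(rtts)
        print(f"min/avg/max: {min(rtts):.1f}/{avg:.1f}/{max(rtts):.1f} ms")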

Anyone else located at PAIX care to share a (recent) anecdote?

-jr

rmeyer@mhsc.com (Roeland Meyer) writes:

> All one has to do is look at PAIX. The whole system looks like it is being
> used at very close to max capacity.

nope.

some paix customers sometimes operate their ports at or near capacity.
however, the paix ISO-L2 switch fabric is made up of switches whose
backplanes have quite a bit of headroom, and trunks between those switches
which have quite a bit of headroom.
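
Headroom on a port or inter-switch trunk is easy to quantify from two samples
of its octet counters. A minimal sketch follows, where the counter values and
link speed are made-up inputs; in practice you would poll something like
ifHCInOctets/ifHCOutOctets over SNMP at a fixed interval.

    # Utilization of a switch port or trunk between two samples of its octet
    # counter. The numbers below are made-up inputs for illustration; counter
    # wrap is ignored for brevity.

    def utilization(octets_t0, octets_t1, interval_s, link_bps):
        """Fraction of link capacity used between two counter samples."""
        bits = (octets_t1 - octets_t0) * 8
        return bits / (interval_s * link_bps)

    # Example: a gigabit trunk that moved 30 GB in five minutes.
    util = utilization(octets_t0=0, octets_t1=30_000_000_000,
                       interval_s=300, link_bps=1_000_000_000)
    print(f"utilization: {util:.0%}, headroom: {1 - util:.0%}")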

in spite of this ample amount of headroom, there are a LOT of private
network interconnects between paix's customers, which are presumably being
used for private peering.

the "whole system" is being used at nowhere near its current capacity, and
if we ever see a capacity limit on the horizon we will upgrade, upgrade,
and then upgrade.

> I have a client at AboveNet and my systems are on a CerfNet block. PAIX is
> between us. I feel their pain.

probably you should contact abovenet and/or cerfnet and find out why that
is. it sure as hell isn't because of any capacity limits inside PAIX itself.

> Exchange points are major choke points, given that 80% or so of traffic
> crosses an IXP or bilateral private interconnection. Despite the obvious
> advantages of the shared IXPs, the private interconnects between large
> backbones were a forced solution, purely for capacity reasons.

> and they aren't keeping up with the growth.

in what way?

> This entire IX thread has been interesting. But, it appears to be one of
> those "good theory, implementation sux" sort of things.

a lot of companies live and die according to paix's ability to carry bits
or photons or electrons to and from other internet companies and/or circuit
providers. i like to think that if "implementation sux", i'd've heard more
about it before now. but please feel free to educate me. (maybe you
should educate me offline and then post a summary back to the list.)

I've had zero problems with PAIX (other than the routine glitches
when we first set up a connection). I seem to hear stories about other
peering points all the time (power outages, fabric problems, corporate
indifference, etc.), but PAIX just keeps rolling along. The folks running PAIX
really seem to believe in running a reliable 100% uptime IX.

joe