Peering point speed publicly available?

On or about July 1, 2004, Erik <allegedly at myevilempire.net> Amundson
allegedly asked about peering point bandwidth.

Some North American ISPs will tell you that under non-disclosure,
but almost all of them will point you to their standards for peering,
and you won't find many Tier 1 ISPs that peer at less than DS3 in the US,
and probably not many in Canada, London, Amsterdam, Tokyo, or Singapore either.
That means the insertion delay is under 0.27ms, or about 27 fiber-miles,
so it's less important than whether the peering is in San Francisco or San Jose.
Queuing due to overload is really much more important than absolute size.
Also, if you're dealing with ISPs that use public peering points,
those may be a performance concern, but in the US that's mostly not Tier1-Tier1.
(Linx is a different case entirely, assuming you want your traffic to be in London.)
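The insertion-delay figure above is easy to sanity-check. A quick sketch (the DS3 rate and the round 100,000 fiber-miles-per-second figure are my own approximations, not from the original post):

```python
# Sanity check of the claim above: serialization delay for a full-size
# packet on a DS3, and the equivalent distance in fiber. The DS3 rate and
# the round fiber-speed figure below are rough approximations.

PACKET_BITS = 1500 * 8           # a full-size Ethernet frame, in bits
DS3_BPS = 45e6                   # DS3, roughly 45 Mbps
FIBER_MILES_PER_SEC = 100_000    # light in fiber, very roughly

delay_s = PACKET_BITS / DS3_BPS
fiber_miles = delay_s * FIBER_MILES_PER_SEC

print(f"{delay_s * 1e3:.2f} ms")         # ~0.27 ms
print(f"{fiber_miles:.0f} fiber-miles")  # ~27
```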

Smaller ISPs might be more talkative, or if you're having actual problems,
like why your connection from minneapolis.example1.net to stpaul.example2.net
goes through a peering point in San Francisco instead of peering in Minnesota
or at most Chicago, big ISPs can also be pretty talkative.

Any particular reason you would worry about public peering points these days?

The FDDI MAEs are dead, there is no head of line blocking any more. Every ethernet or ATM switch running a NAP I've seen in the last ... nearly half a decade is more than capable of passing all bits going through it without a problem, and then some.

There might be a concern that, for instance, a provider would show up to a NAP, connect at GigE, then peer with 2 gigabits of traffic. But I fail to see why that is the public fabric's fault, or why things would be any different on private peering. The provider knows when their connection is congested, be it an ethernet to a NAP or an OC to another router. I also have not seen that affect the packets not going to the congested port (unlike some older NAPs).

Public NAPs got a bad name many years ago because a few of them were poorly run, and some other ones had some technical difficulties, and some providers intentionally congested their public ports so they could say "see, public peering sucks", and lots of other reasons.

Today, even free NAPs pass gigabits of traffic and do it robustly.

If you have counter examples, I would be interested in seeing them. A lot of traffic passes on NAPs, and I'd hate to see any of it not get to where it was going.

>Also, if you're dealing with ISPs that use public peering points,
>those may be a performance concern, but in the US that's mostly not
>Tier1-Tier1.
>(Linx is a different case entirely, assuming you want your traffic to
>be in London.)

> Any particular reason you would worry about public peering points these
> days?
>
> The FDDI MAEs are dead, there is no head of line blocking any more.
> Every ethernet or ATM switch running a NAP I've seen in the last ...
> nearly half a decade is more than capable of passing all bits going
> through it without a problem, and then some.

What is with people in this industry, who latch onto an idea and won't let
go? If someone was talking about 80286 based machines in 2004 we would all
be in utter disbelief, but you can still routinely find people talking
about "the MAEs" and "congested NAPs".

> There might be a concern that, for instance, a provider would show up
> to a NAP, connect at GigE, then peer with 2 gigabits of traffic. But I
> fail to see why that is the public fabric's fault, or why things would
> be any different on private peering. The provider knows when their
> connection is congested, be it an ethernet to a NAP or an OC to another
> router. I also have not seen that affect the packets not going to the
> congested port (unlike some older NAPs).

a) Exchange points make a living convincing people to buy their product
   just like everyone else. When stupid people who don't know what they're
   doing buy transit, no one cares. When these same people who really
   don't know how to peer or manage their capacity start jumping on the
   "save money" or "improve performance" bandwagon without finding someone
   experienced to run it, they do stupid things. :)

b) The price being charged for the public exchange ports is non-trivial
   (especially compared to the cost of transit these days!), and is billed
   on a port basis instead of a usage basis (at least in the US). Since
   public peering is treated as a "necessary evil", with traffic moved to
   much more economical private peers when they start getting full, no one
   wants to provision extra capacity ahead of demand (in fact, in the US
   it is exceedingly rare to see anyone with 2 ports on a single public
   exchange).

Personally I've never understood why US exchange port operators haven't
insisted on some kind of "80% utilization over Xth percentile and you must
upgrade" rule. Since you don't normally have an idea how hot your peer is
running their public port, you're really putting a *lot* of faith in your
peers' ability to manage their traffic when you peer with them over a
public exchange.
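The utilization rule proposed above could be as simple as the following sketch (the sample data, the nearest-rank percentile choice, and the function name are mine for illustration, not any exchange's actual policy):

```python
import math

def needs_upgrade(samples_mbps, port_mbps, pctile=95, threshold=0.80):
    """True if the pctile-th percentile of samples exceeds threshold * capacity."""
    ordered = sorted(samples_mbps)
    # nearest-rank percentile: the smallest sample >= pctile% of the data
    idx = math.ceil(pctile / 100 * len(ordered)) - 1
    return ordered[idx] > threshold * port_mbps

# e.g. 5-minute samples from a GigE port that peaks near line rate
samples = [300, 450, 600, 700, 810, 850, 870, 900, 920, 950]
print(needs_upgrade(samples, 1000))  # 95th pct = 950 Mbps > 800 -> True
```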

Given how poorly some folks do this, and how quickly a congested port can
degrade the reputation of an exchange point, it seems like this would at
least be a very basic safety net (doesn't help if they only have 1 OC12 of
backhaul off of that GigE port, but still better than nothing). Plus as
I'm sure we all know the price of the exchange point switch port is
covered by the first months' fees. What we're really paying for is the
faith that the EP operator will keep things up and running, prevent
forwarding loops, check for bad things being broadcasted, maybe invest in
a bigger switch down the road, and be able to convince others to join so
that there is a reason to bother peering there, etc. The extra cost of the
ports is really quite trivial.

> Public NAPs got a bad name many years ago because a few of them were
> poorly run, and some other ones had some technical difficulties, and
> some providers intentionally congested their public ports so they could
> say "see, public peering sucks", and lots of other reasons.

Some still do. At the very least, I can personally think of at least 4
different folks with public GigE exchange ports sitting at 920-960Mbps
peak *RIGHT NOW*.

Date: Sat, 3 Jul 2004 01:00:35 -0400
From: Patrick W Gilmore

> Any particular reason you would worry about public peering
> points these days?

ANES, perhaps? Those who finally found old NANOG-L and i-a
archives have decided public peering is bad.

Hmmmm.... let's see.... cheap, uncongested public peering -vs-
expensive private peering. Assuming a fixed amount of money to
spend, which buys more?

There. Now we just need to wait a few more years for the "public
peering is good" mentality to spread. Hopefully that will still
be the case at that time. :)

> There might be a concern that, for instance, a provider
> would show up to a NAP, connect at GigE, then peer with 2
> gigabits of traffic. But I fail to see why that is the
> public fabric's fault, or why things would be any different
> on private peering. The provider knows when their

*nods* Private would be worse. Even collocation + overpriced
$500/mo fiber x-c compares favorably with metro OC3.

You've gotta admit, though: It's funny watching someone proclaim
"we avoid public peering!" when their $149/mo dedicated server
lives in a PAIX suite, unbeknownst to them. :)

I guess uncongested public peering technically _is_ avoiding
"congested public peering"...

Eddy

Date: Sat, 3 Jul 2004 02:07:06 -0400
From: Richard A Steenbergen

> What is with people in this industry, who latch onto an idea
> and won't let go? If someone was talking about 80286 based
> machines in 2004 we would all be in utter disbelief, but you
> can still routinely find people talking about "the MAEs" and
> "congested NAPs".

Can I get a class C with that?

[ snip ]

> Given how poorly some folks do this, and how quickly a
> congested port can degrade the reputation of an exchange
> point, it seems like this would at least be a very basic
> safety net (doesn't help if they only have 1 OC12 of
> backhaul off of that GigE port, but still better than
> nothing).

To think some of us thought exchanges would save providers from
tyrannical ILEC loops. ;)

Eddy

This is counterintuitive to me, although perhaps I need to better understand the IX
operators' income model.

If I were a colo company that also operated an IX, I'd want to encourage people to
use my IX and put as much traffic as possible over it. The logic being that operators
gravitate towards these high-bandwidth exchange areas, and that means new
business. The encouragement here would be to make the IX cost quite small.. of
course, the other benefit of succeeding in getting a lot of operators and traffic
on your IX is you can publicise the data to show why you're better than (or as good
as) your competitors..

This doesn't affect their income from colo, support, or cross connects, so why not do
it?

Steve

<hi ras!> As one of the folks who gets questioned by Sales all the time about the reasons behind the multiple shared fabric ports at the IXs I'll gladly explain why we have 14 in the US at present and are preparing for ~5-10 abroad.

1. Trials. There are some networks who are not ready to properly manage private peering; they should be, but they are not. A 90-day 'try before you buy' helps reduce the nickel-and-diming of a budget that remote hands and inventory adjustments chew up. IMHO, if they do not have their operations activities in order they should not be a peer, and that is one of the criteria we verify.

2. PNI sizing. Some networks really don't know how much traffic they will have to other networks when adding peering relations. If they argue about sizing it is best to drop them on to shared fabrics first to confirm with visuals what is flowing.

3. PNIs do not guarantee congestion avoidance. Unfortunately private peering does not remove congestion with some networks, it just shifts it. The peering relations community is well networked with each other. We know which network offenders have capacity issues regardless of public or private options.

4. International peers. Rarely are two network footprints or goals for business the same. I would rather make available the unique international routes to our customers than miss that opportunity by being a public peering snob. This also allows the view towards new markets which rely heavily on shared fabrics. While not customary in the US, many EU peering IXs are multiple interconnected buildings managed by a single IX vendor at the shared fabric layer. Connecting to the shared fabric is an easy way to reach those networks in various buildings without dark fiber complexities.

5. Costs. Private peering is expensive, don't let anyone fool you. There is a resource investment in human terms that is rarely calculated properly, all the way from planning of inventory to planning for capacity augments after the physical install. It is often difficult to capture the cost to roll all those fibers that are improperly installed. This I'm sure you are painfully aware of <G>.

6. Management. Set a range of expectations on levels for monitoring, hardware, power, staff time, and capacity upgrade paths by designating some peers in a 'group' vs. monitoring all as individuals.

I encourage authors of RFPs to stop placing such an unnecessary stigma on public peering. Those networks without the benefit of options for interconnecting should be penalized for failure to evolve. Quite likely they are not connected to the growing sources in the current peering game. What is this called... the bagel syndrome? -ren

> The price being charged for the public exchange ports is non-trivial

Only at the (very few) commercial exchanges. The vast majority are free
or of trivial expense. But some people really like to lose money, since
then they get to hang out with VCs and feel like movers and shakers,
rather than feeling like peons who have to actually turn a profit.

    > Personally I've never understood why US exchange port operators havn't
    > insisted on some kind of "80% utilization over Xth percentile and you must
    > upgrade" rule.

No idea. It works well elsewhere. I think people here just don't like
the idea of being told what to do.

                                -Bill

Bill Woodcock writes on 7/3/2004 7:02 PM:

>b) The price being charged for the public exchange ports is non-trivial
> (especially compared to the cost of transit these days!), and is billed
> on a port basis instead of a usage basis (at least in the US). Since
> public peering is treated as a "necessary evil", with traffic moved to
> much more economical private peers when they start getting full, no one
> wants to provision extra capacity ahead of demand (in fact, in the US
> it is exceedingly rare to see anyone with 2 ports on a single public
> exchange).

> <hi ras!> As one of the folks who gets questioned by Sales all the time
> about the reasons behind the multiple shared fabric ports at the IXs I'll
> gladly explain why we have 14 in the US at present and are preparing for
> ~5-10 abroad.

You're definitely one of the rare few, especially given your size. In
Europe it seems far more common for people to provision multiple ports and
make certain they have capacity. In the US, even the couple of other folks
I can think of who actually decided to provision multiple ports on the
"modern exchanges" we're thinking of ended up sitting with congestion for
some number of weeks before they actually did it. The general line of
thinking here is "ok exchange port is getting full, let's move someone big
to a PNI". Are there even any exchange points in the US who are actually
doing 10GE right now (major and production, not someone tinkering)?

One way or another, there is definitely room for improvement in the
technology of public peering. Then again, with some classic exchanges
(that are still considered viable, aka not mae's, aads, pbnap, etc) still
charging the same prices they were back in 1999, aka more than transit,
perhaps there is room for improvement in the financial model as well. :)

> 5. Costs. Private peering is expensive, don't let anyone fool you. There
> is a resource investment in human terms that is rarely calculated properly,
> all the way from planning of inventory to planning for capacity augments
> after the physical install. It is often difficult to capture the cost to
> roll all those fibers that are improperly installed. This I'm sure you are
> painfully aware of <G>.

*grumble* Indeed. The one redeeming quality of your favorite overpriced
colo and mine is that when they go to hook up a crossconnect they extend
it all the way to the gear without a dozen more tickets, they manage to
hook it up correctly the first time, without 1-2 hours of handholding or
playing "find the port", and without the need to dispatch techs or pay for
half an hour of remote hands to roll the damn fibers. :)

> > The price being charged for the public exchange ports is
> > non-trivial
>
> Only at the (very few) commercial exchanges. The vast majority
> are free or of trivial expense.

by count of small 10/100 switches or by traffic volume?

it costs to build, maintain, and manage an exchange which carries
significant traffic. costs get recovered. life is simple.

randy

I agree with you 100%. Working at a Nordic European operator present
at LINX, AMSIX and all the northern European exchanges, my reasoning is this:

With IXes you buy one high-speed interface and get lots of peers, and you
can peer with people you might only exchange a few megabit/s with. Buying
loads and loads of OC3s, T3s, and OC12s to peer with, and purchasing fiber
patching to interconnect these, just doesn't make sense when you can buy a
GE or 10GE interface and get tens or hundreds of peers on that single
interface without re-patching or establishing any new fiber connections.

We have a very liberal peering policy which makes peering a pure
operational decision, being handled by the line organisation. Each peering
takes approx 5-10 minutes of someone's time and that's it. No meetings of
peering coordinators or the like, so those people are freed up to do better
things.

In a lot of the European exchanges, all graphs of all ports on the IX are
available to you as a member (or even publicly available). If someone
runs their port full, you probably know about it.

Playing the peering game and trying to increase cost for someone else
means you increase your own cost as well. Is that worth it? You have to be
pretty big to justify it...

What is significant traffic? And what is the cost? Take an exchange
with, let's say, 20 people connected to it, all using GE.
Running this exchange in an existing facility with existing people, you
can easily run it for under $10k per year per connected operator,
as you already have engineers that are on site frequently, you already
have a billing department, etc.

It's when the exchange is being run by a separate entity that needs a
marketing department, a well-paid staff of managers, technicians, etc. that
the price really goes up. All this to basically manage a simple ethernet
switch that needs some patching a couple of times a month at most.
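The arithmetic behind that per-operator figure is trivial but worth making explicit (member count and fee taken from the paragraph above; this is a sketch, not anyone's actual budget):

```python
# 20 connected ISPs at (up to) $10k/year each for a small exchange
# run out of an existing facility, per the figures quoted above.

members = 20
fee_per_year = 10_000   # upper bound per connected operator, $/year

total_budget = members * fee_per_year
print(f"${total_budget:,}/year")  # $200,000/year to run the exchange
```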

> What is significant traffic? And what is the cost? Take an exchange
> with, let's say, 20 people connected to it, all using GE.
> Running this exchange in an existing facility with existing people, you
> can easily run it for under $10k per year per connected operator,
> as you already have engineers that are on site frequently, you already
> have a billing department, etc.
>
> It's when the exchange is being run by a separate entity that needs a
> marketing department, a well-paid staff of managers, technicians, etc. that
> the price really goes up. All this to basically manage a simple ethernet
> switch that needs some patching a couple of times a month at most.

no. in the first case, you're just hiding the incremental costs.
eventually, some bean counter is gonna want to recover them, and
then folk get quite unhappy.

and, there are known issues when a colo or transit provider is the
exchange.

[ note that i am not talking about small local friendly exchanges.
  i mean stuff that carries multi-gig. it's like is-is, almost no
  one runs it, only the few folk who carry most of the internet's
  traffic. ]

randy, who contributes to and peers at the seattle internet exchange

What costs are you referring to? You basically need a few hours' time per
month from engineers and the billing department. This for an exchange that has
20 ISPs connected to it. The amount of traffic isn't really a factor, but
the one I know of and am part of running carries multi-gigabit.

Mikael Abrahamsson wrote:

> The marginal cost of half a rack being occupied by an IX switch in a
> multi-hundred-rack facility is negligible. Yes, it should carry a cost of a
> few hundred dollars per month in "rent", and the depreciation of the
> equipment is also a factor, but all in all these costs are not high, and if
> an IX point rakes in $200k a year that should well compensate for these
> costs.

I tend to get suspicious when I know the exchange isn't charging enough
money to cover its costs. I also don't see a need for a "free exchange"
either. I'm perfectly willing to pay a fair price for the service, and I
at least want the BELIEF that I am going to get a certain level of service
from the exchange, not "but we can't afford..." or "duhhhhhhh?". It seems
that most commercial network operators agree, as you rarely see them
popping up at joe bob's local alternative new exchange point, even when
it is free.

The cost for the exchange hardware is really not that much. Just to throw
out some numbers, you can snag a new 6509 w/SUP720 and 48-SFP GE for less
than $50k with very modest discounts. Admittedly this is relatively new
technology compared to most GE exchanges currently deployed, but the
pricing a couple years ago was around the same for the Foundrys that
everyone deployed, just at a lower density. A successful exchange probably
has multiple switches and some 10GE trunks, but with a few customers
paying industry average recurring fees this quickly pays for itself. The
euro players are really the ones to look to for examples here, US players
have been complete failures (especially with multi-site linked exchanges).

The guys best positioned to do it are the actual colo operators who
already have a technician staff on site, they really only need 1-2 higher
level engineers, a support contract for when the switch crashes, etc. The
real cost and value of an exchange point is the marketing (i.e. showing up
at nanog and giving presentations about it, creating your own peering
events, having sales folks promoting the product, etc), not the hardware.

> no. in the first case, you're just hiding the incremental costs.
> eventually, some bean counter is gonna want to recover them, and
> then folk get quite unhappy.

> What costs are you referring to? You basically need a few hours' time per
> month from engineers and the billing department. This for an exchange that has
> 20 ISPs connected to it. The amount of traffic isn't really a factor, but
> the one I know of and am part of running carries multi-gigabit.

This is simply untrue.

Whilst it is possible to establish an exchange with minimal cost, if it is
successful your costs will soon escalate.

To provide carrier-class service for the world's top carriers you need to invest
in the latest hardware, you need to house multiple switches and ODFs in suites,
you need to pay a team of engineers to run the exchange 24x7, and you need to
maintain vendor support agreements.

From empirical data, this cost is on the order of a few million dollars per year.

This may not be a lot of money compared to the annual turnover of the large
carriers, but e.g. for a typical exchange, $5m between 150 companies is on average
about $3k/mo each (of course this will likely be skewed so that the top few
companies pay more).
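The averaging above works out as follows (figures taken directly from the paragraph; the even split is of course the simplification the author already flags):

```python
# Back-of-the-envelope: a ~$5M/year exchange split evenly across
# 150 member companies, as in the paragraph above.

annual_cost = 5_000_000
members = 150

per_member_monthly = annual_cost / members / 12
print(f"${per_member_monthly:,.0f}/mo")  # ~$2,778/mo, i.e. roughly $3k
```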

If your exchange is in an already developed location, then my observation is
that you need to have the above if you are to attract the larger networks, which
in turn brings in the traffic and NOC requirements that drive increasing costs.

Steve

> This is simply untrue.
>
> Whilst it is possible to establish an exchange with minimal cost, if it is
> successful your costs will soon escalate.
>
> To provide carrier-class service for the world's top carriers you need to invest
> in the latest hardware, you need to house multiple switches and ODFs in suites,
> you need to pay a team of engineers to run the exchange 24x7, and you need to
> maintain vendor support agreements.

IXes are not for "top carriers"; they're for the small and middle players,
and in some cases for the top players to talk to smaller players.

IXes are a way to cheaply exchange traffic. It's better to establish two IX
switches and run them with 99.9% availability than to have a single IX
switch and aim for 99.999%.

> If your exchange is in an already developed location, then my observation is
> that you need to have the above if you are to attract the larger networks, which
> in turn brings in the traffic and NOC requirements that drive increasing costs.

If you're already an operator or colo facility owner, you already have all
of that, which makes the cost of running an IX much less than if you're a
separate entity that has to set up all these facilities.

I work in an environment where IXes are readily available in all major
metropolitan areas where we are, and they don't cost an arm and a leg, and
fiber is cheap and readily available, so we try to establish everywhere.
This makes the impact of a single IX being down negligible, so we
definitely don't need 99.999%.

Off the top of my head, I'd estimate that the cost of being present at an
exchange here is around $1-5k per gig per month (including router port,
fiber connection and IX exchange fee). We run these at approx 50%
utilisation so the price per megabit is $5-10/megabit per month.
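The per-megabit figure follows directly from the cost per gig and the utilization level; a quick sketch (the $2,500/month sample value is my own pick from the $1-5k range quoted above, purely for illustration):

```python
# Cost per delivered megabit: monthly cost of a gig of exchange capacity
# divided by the traffic actually carried at ~50% utilization.
# The $2,500/month figure is an illustrative pick from the range above.

monthly_cost_per_gig = 2500   # router port + fiber + IX fee, $/month
utilization = 0.50

cost_per_mbps = monthly_cost_per_gig / (1000 * utilization)
print(f"${cost_per_mbps:.0f}/Mbps per month")  # $5 at these numbers
```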

This also greatly reduces latency from our customers to our
competitors' customers, which is much appreciated, and it cuts down on
long-haul costs.

If an IX costs $50-100k a year for a gig, it tilts the whole equation, so I
can understand if a lot of people don't like them if that's the cost of
being connected.