CW?

http://biz.yahoo.com/djus/021113/0217000178_2.html

-- Alex Rubenstein, AR97, K2AHR, alex@nac.net, latency, Al Reuben --
-- Net Access Corporation, 800-NET-ME-36, http://www.nac.net --

Since it seems to be public, no harm in sharing it.

http://biz.yahoo.com/djus/021113/1031000599_1.html

I am sure a lot of customers will feel better. A stronger balance sheet means Switch and Data will be a definite survivor, and so will PAIX. Looks like we are heading back toward peering-location consolidation. Equinix and S&D (PAIX) will be the new peering exchanges.

The question is: outside of six exchanges domestically, what scenario would force a move to doubling that to twelve? Long-haul circuits rising again, or perhaps some new killer app? Right now it seems six may be all we need domestically.

Equinix and S&D (PAIX) will be the new peering exchanges.

I hate to think how many exchange points that leaves out. Telehouse
and Terremark come to mind. Even if there are some dominant players,
domestic neutral exchange points are still a diverse, vibrant market.

The question is: outside of six exchanges domestically, what scenario
would force a move to doubling that to twelve? Long-haul circuits
rising again, or perhaps some new killer app? Right now it seems
six may be all we need domestically.

I'm putting the number closer to 40 (the "NFL cities") right now, and
150 by the end of the decade, and ultimately any "metro" with population
greater than 50K in a 100 sq Km area will need a neutral exchange point
(even if it's 1500 sqft in the bottom of a bank building.)

I'm putting the number closer to 40 (the "NFL cities") right now, and
150 by the end of the decade, and ultimately any "metro" with population
greater than 50K in a 100 sq Km area will need a neutral exchange point
(even if it's 1500 sqft in the bottom of a bank building.)

What application will require this dense peering?

Pete

Date: 14 Nov 2002 05:14:30 +0000
From: Paul Vixie

[ re number of US exchange points ]

Right now seems domestically 6 may be all we need.

I'm putting the number closer to 40 (the "NFL cities") right
now, and 150 by the end of the decade, and ultimately any
"metro" with population greater than 50K in a 100 sq Km area
will need a neutral exchange point (even if it's 1500 sqft in
the bottom of a bank building.)

Are we discussing:

1) locations primarily for peering between large carriers, or
2) carrier hotels including virtually all providers, where cheap
   FastE/GigE peering runs are easily justified?

If #1, I agree with David. In the case of the latter, I think I
see what Paul is saying. IMESHO, local/longhaul price imbalance
and the growth of distributed hosting {would|will} help fuel the
smaller exchanges.

Eddy

Well, thanks for the agreement, Ed.

Philosophically, I agree with Paul. I think 40 exchange points would be a benefit. At this time, though, there is no model that would support it.

1) Long-haul circuits are dirt cheap, meaning distance peering becomes more attractive. L3 also has an MPLS product where you pay by the meg. I am surprised that a great many peers are using this, but apparently CFOs love it.

2) There is a lack of a killer app requiring peering every 100 sq km. VoIP might be the app; it seems to be gaining a great deal of traction. Since it's obvious traffic levels would skyrocket, latency is a large concern, and there is a need to connect to the local voice TDM infrastructure, local exchanging is preferred (see the back-of-the-envelope bandwidth sketch after this list). However, many VoIP companies claim latency right now is acceptable and they are receiving no major complaints. So we are left to guess at other killer apps: video conferencing, the movie industry sending movies online directly to consumers, etc.

3) In order to get to the next level of peering exchanges, from 6 major locations to 12, we are going to need the key peers in those locations. Many don't want to manage that growing complexity for diminishing returns, as well as the increased cost in equipment. Perhaps it's up to the key exchange companies to tie fabrics together, allowing new (tier-2) locations to gain visibility to peers at other, larger locations. This would allow peers at the larger locations to engage in peering discussions, or turn-ups, and when traffic levels justify it, a deployment to the second location begins. The problem with new locations is the chicken and the egg: critical mass must be achieved before there is a large value proposition for peers.
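Regarding point 2, a rough per-call sketch of why VoIP traffic adds up quickly. The codec and header figures are the standard G.711-over-RTP numbers; the concurrent-call count is made up purely for scale:

    # Per-call bandwidth for G.711 VoIP over RTP/UDP/IPv4, ignoring layer 2.
    PAYLOAD_BPS = 64_000          # G.711 codec rate
    PACKET_INTERVAL_S = 0.020     # 20 ms of audio per packet
    HEADER_BYTES = 12 + 8 + 20    # RTP + UDP + IPv4

    packets_per_s = 1 / PACKET_INTERVAL_S                   # 50 packets/s
    payload_bytes = PAYLOAD_BPS / 8 * PACKET_INTERVAL_S     # 160 bytes of audio per packet
    per_call_bps = (payload_bytes + HEADER_BYTES) * 8 * packets_per_s

    print(f"one G.711 call, one direction: {per_call_bps / 1000:.0f} kbps")      # ~80 kbps
    print(f"100,000 concurrent calls: {per_call_bps * 100_000 / 1e9:.1f} Gbps")  # ~8 Gbps each way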

And to everyone who emailed me with their "we also are an exchange" email: yes, I readily admit there are other companies doing peering besides the ones I mentioned. It was a quick post, so I did not list every single exchange company. You have my apologies, and I won't even hold it against you all that you were salespeople....

dave

Date: Thu, 14 Nov 2002 10:22:09 -0500
From: David Diaz

1) Long haul circuits are dirt cheap. Meaning distance
peering becomes more attractive. L3 also has an MPLS product
so you pay by the meg. I am surprised a great many peers are
using this. But apparently CFOs love it

Uebercheap longhaul would _favor_ the construction of local
exchanges.

Let's say I pay $100k/mo port and $10M/mo loop... obviously, I
need to cut loop cost. If an exchange brings zero-mile loops to
the table, that should reduce loop cost. Anyone serious will
want a good selection of providers, and the facility offering the
most choices should be sitting pretty.

Likewise, I agree that expensive longhaul would favor increased
local peering... but, if local loop were extremely cheap, would
an exchange be needed? It would not be inappropriate for all
parties to congregate at an exchange, but I'd personally rather
run N dirt-cheap loops across town from my private facility.

Hence I refer to an "imbalance" in loop/longhaul pricing; a large
proliferation in exchanges could be precipitated by _either_ loop
_or_ longhaul being "expensive"... and it seems expensive loop
would be a more effective driver for local exchanges.
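A toy model of that imbalance, with invented monthly prices used only to show the shape of the tradeoff, not anyone's actual rates:

    # Compare N private metro loops from my own facility against one loop into
    # a neutral exchange plus a port and per-peer cross-connects.
    def private_loops(n_peers, loop_cost):
        return n_peers * loop_cost

    def via_exchange(n_peers, loop_cost, port_cost, xconn_cost):
        return loop_cost + port_cost + n_peers * xconn_cost

    n = 20
    for loop_cost in (200, 2_000, 20_000):   # "dirt cheap" vs. "expensive" local loop
        a = private_loops(n, loop_cost)
        b = via_exchange(n, loop_cost, port_cost=5_000, xconn_cost=500)
        print(f"loop ${loop_cost:>6}/mo: {n} private loops ${a:>8,}  exchange ${b:>8,}")
    # With cheap loops the exchange buys you little; once loops get expensive,
    # one loop plus an exchange wins by an order of magnitude -- the "imbalance".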

2) There is a lack of a killer app requiring peering every
100 sq Km. VoIP might be the app. Seems to be gaining a

<minirant>
By the time IP packets are compressed and QOSed enough to support
voice, one essentially reinvents ATM or FR (with ATM seeming
suspiciously like FR with fixed-length cells)...
</minirant>

great deal of traction. Since it's obvious traffic levels
would sky rockets, and latency is a large concern, and there
is a need to connect to the local voice TDM infrastructure,

Yes, although cost would trump latency. Once latency is "good
enough", cost rules. Would I pay a premium to reduce latency
from 50ms to 10ms for voice calls? No.

local exchanging is preferred. However, many VoIP companies
claim latency right now is acceptable and they are receiving
no major complaints. So we are left to guess at other killer
apps, video conferencing, movie industry sending movies
online directly to consumers etc.

The above are "big bandwidth" applications. However, they do not
inherently require exchanges... _local_ videoconferencing, yes.
Local security companies monitoring cameras around town, yes.
Video or newscasting, yes. Distributed content, yes. (If a
traffic sink could pull 80% of its traffic from a local building
where cross-connects are reasonably priced...)

3) In order to get to the next level of peering exchanges...

[ snip ]

Perhaps it's up to the key exchange companies to tie fabrics
together allowing new (tier2 locations) to gain visibility to
peers at other larger locations. This would allow peers at
the larger locations to engage in peering discussions, or
turn ups, and when traffic levels are justified a deployment
to the second location begins. Problem with new locations
are 'chicken and the egg.' Critical mass must be achieved
before there is a large value proposition for peers.

Yes.

Eddy

Date: Thu, 14 Nov 2002 10:22:09 -0500
From: David Diaz

1) Long haul circuits are dirt cheap. Meaning distance
peering becomes more attractive. L3 also has an MPLS product
so you pay by the meg. I am surprised a great many peers are
using this. But apparently CFOs love it

Uebercheap longhaul would _favor_ the construction of local
exchanges.

Let's say I pay $100k/mo port and $10M/mo loop... obviously, I
need to cut loop cost. If an exchange brings zero-mile loops to
the table, that should reduce loop cost. Anyone serious will
want a good selection of providers, and the facility offering the
most choices should be sitting pretty.

This is an interesting and good point, but any carrier hotel provides the same thing.

Likewise, I agree that expensive longhaul would favor increased
local peering... but, if local loop were extremely cheap, would
an exchange be needed? It would not be inappropriate for all
parties to congregate at an exchange, but I'd personally rather
run N dirt-cheap loops across town from my private facility.

Hence I refer to an "imbalance" in loop/longhaul pricing; a large
proliferation in exchanges could be precipitated by _either_ loop
_or_ longhaul being "expensive"... and it seems expensive loop
would be a more effective driver for local exchanges.

Tried this. Yes, you are right; the problem is that local loops are sometimes extremely difficult to get delivered in a timely manner, and upgrading them can be an internal battle with the CFO. To solve this, we deployed the Bellsouth mix. I actually came up with the idea while having a terrible time getting private peering sessions up at Netrail; six months was a ridiculous timeframe. Bellsouth liked it and deployed it, eventually. So now you have a distributed optical exchange where you can point and click and drop circuits between any of the nodes; nodes were located at many colos and undersea fiber drops. Theoretically this meant the exchange was "colo neutral." With flat-rate loops it meant location wasn't important. Each node also allows hairpinning, so you could do peering within the room at a reduced rate (since you weren't burning any ring-side capacity).

The neat part was that customers would be able to see and provision their own capacity via a login and password. Also, with UNI 1.0, the IP layer would be able to upgrade capacity on the fly. No one has put that into production, but real-world tests have worked. A more realistic scenario was a customer upgrading from an OC-3 to an OC-12; the ports were the same, so it was just a setting to change on the NMS. It was a nice feature, and it meant engineers did not have to justify ESP feelings about how traffic would grow to a grouchy CFO.

2) There is a lack of a killer app requiring peering every
100 sq Km. VoIP might be the app. Seems to be gaining a

<minirant>
By the time IP packets are compressed and QOSed enough to support
voice, one essentially reinvents ATM or FR (with ATM seeming
suspiciously like FR with fixed-length cells)...
</minirant>

great deal of traction. Since it's obvious traffic levels
would sky rockets, and latency is a large concern, and there
is a need to connect to the local voice TDM infrastructure,

Yes, although cost would trump latency. Once latency is "good
enough", cost rules. Would I pay a premium to reduce latency
from 50ms to 10ms for voice calls? No.

I agree. A couple of off-list emails to me did not seem to understand this. Just because we post something does not mean it's our personal preference; we are just posting what we think will likely happen. If there is not a competitive advantage, backed up by reduced cost or increased revenue, it would be a detriment to deploy it... more likely a CFO would shoot it down.

Someone sent an example as if I were making the statement that no one needs more than 640K of RAM in their computer. I never made that analogy, but there is a limit. It also seems to me that shared supercomputer time is making a comeback. IBM seems to be pushing in that direction, and there are several grid networks being set up. The world changes.

Let's face it: has anyone talked about the protocols to run these super networks, where we have something like 100-400 peering nodes domestically? Injecting those routes into our IGP? Talk about a complex design... now we need to talk about tricks to prevent the overflow of our route tables internally... OK, I can hear people getting ready to post stuff about reflectors, etc.

Truth is, it's just plain difficult to hit critical mass at a new exchange point. No one wishes to be first, since there is little return. Perhaps these exchange operators need to prime the pump by offering tiered rates, with the first third of peers to deploy coming in at a permanent 50% discount.

Thus spake "E.B. Dreger" <eddy+public+spam@noc.everquick.net>

> 1) Long haul circuits are dirt cheap. Meaning distance
> peering becomes more attractive. L3 also has an MPLS product
> so you pay by the meg. I am surprised a great many peers are
> using this. But apparently CFOs love it

Uebercheap longhaul would _favor_ the construction of local
exchanges.

Incorrect. Cheap longhaul favors a few centralized exchanges. If there is no
economic value in keeping traffic local, it is in carriers' interests to
minimize the number of peering points.

Let's say I pay $100k/mo port and $10M/mo loop... obviously, I
need to cut loop cost. If an exchange brings zero-mile loops to
the table, that should reduce loop cost. Anyone serious will
want a good selection of providers, and the facility offering the
most choices should be sitting pretty.

Most vendor-neutral colos have cheap zero-mile loops.

Likewise, I agree that expensive longhaul would favor increased
local peering... but, if local loop were extremely cheap, would
an exchange be needed? It would not be inappropriate for all
parties to congregate at an exchange, but I'd personally rather
run N dirt-cheap loops across town from my private facility.

What is the cost of running N loops across town, vs. the cost of pushing that
traffic to a remote peering location and back? Be sure to include equipment,
maintenance, and administrative costs, not just circuits.

The above are "big bandwidth" applications. However, they do not
inherently require exchanges... _local_ videoconferencing, yes.
Local security companies monitoring cameras around town, yes.
Video or newscasting, yes.

None of these applications require local exchanges. There is a slight increase
in end-to-end latency when you must use a remote exchange, but very few
applications care about absolute latency -- they only care about bandwidth and
jitter.

Distributed content, yes. (If a traffic sink could pull 80% of its traffic
from a local building where cross-connects are reasonably priced...)

Distributed content assumes the source is topologically close to the sink. The
most cost-efficient way to do this is to put sources at high fan-out areas, as
this gets them the lowest _average_ distance to their sinks. This doesn't
necessarily mean that putting a CNN mirror in 100,000 local exchanges is going
to reduce CNN's costs.

S

Date: Thu, 14 Nov 2002 13:32:55 -0600
From: Stephen Sprunk

Incorrect. Cheap longhaul favors a few centralized
exchanges. If there is no economic value in keeping traffic
local, it is in carriers' interests to minimize the number of
peering points.

True. However, cheap longhaul / expensive local means providers
_will_ try to reduce loop costs, favoring "carrier hotels".

Most vendor-neutral colos have cheap zero-mile loops.

Correct. In my original post... are we discussing #1 or #2? It
seems as if #2. Where are we drawing the line between "carrier
hotel" and "exchange"? I believe Paul was being perhaps more
nebulous than today's definition of "exchange" when he referenced
1500 sq-ft in-bottom-of-bank-building facilities.

What is the cost of running N loops across town, vs. the cost
of pushing that traffic to a remote peering location and
back? Be sure to include equipment, maintenance, and
administrative costs, not just circuits.

"It depends."

None of these applications require local exchanges. There is
a slight increase in end-to-end latency when you must use a
remote exchange, but very few applications care about
absolute latency -- they only care about bandwidth and
jitter.

With bounded latency and "acceptable" typical throughput, one
seeks to minimize jitter and cost. Jitter is caused by variable
queue time, which is due to buffering, which is a side-effect of
statmuxed traffic w/o strict { realtime delivery constraints |
QoS | TDM-ish architecture }... yes. And N^2 makes full-mesh
irresponsible when attempting to maximize bandwidth... yes.
(I think buying full transit from 10 providers is well beyond
the point of diminishing return; no offense to INAP.)
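The N^2 point in plain arithmetic, no pricing assumed:

    def full_mesh_links(n):
        return n * (n - 1) // 2     # one private interconnect per pair

    def exchange_ports(n):
        return n                    # one port per participant on a shared fabric

    for n in (5, 20, 100):
        print(f"{n:>3} networks: full mesh {full_mesh_links(n):>5} links, "
              f"exchange {exchange_ports(n):>3} ports")
    # 100 networks is 4,950 pairwise links (99 to manage per participant)
    # versus 100 ports -- the N^2 versus N difference.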

Again... if loop is expensive, and providers are concentrated in
"carrier hotels" with reasonably-priced xconns... when does it
become an "exchange"? Note that some exchanges do not provide a
switch fabric, but rather run xconns.

Sure, one must factor in all the costs. The breakeven point
varies, if it exists at all.

Distributed content assumes the source is topologically close
to the sink. The most cost-efficient way to do this is put
sources at high fan-out areas, as this gets them the lowest
_average_ distance to their sinks. This doesn't necessarily
mean that putting a CNN mirror in 100,000 local exchanges is
going to reduce CNN's costs.

It depends. Akamai certainly is overkill for smaller sites, and
perhaps not cost-effective for others. However, high fan-out can
be a _bad_ thing too: Assuming one has substantial traffic flow
to various regions, why source everything from NYC? Why not
replicate in London, AMS, SJO, IAD, CHI, DFW, LAX, SEA, KSCY?

From a source's point, distribution makes sense when the cost of
geographically-diverse server presence (incremental admin/hw,
content distribution) is less than the cost of serving everything
from a centralized point. Once that happens... if a substantial
portion of Internet traffic were sourced from one local point,
sinks would gravitate toward said point.

Of course, I may well be stuck in distance-sensitive mode. If
local loop is the primary expense... we're back to what you said
about "few, centralized exchanges" and "many carrier hotels"?
So, where's the dividing line?

Eddy

Peering every 100 sq km is absolutely infeasible. Just think of the
number of alternative paths routing algorithms will have to consider.

Anything like that would require a serious redesign of the Internet's
routing architecture.

--vadim
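A rough sense of the scale Vadim is pointing at, using made-up but plausible-order figures for announcement size and peer counts:

    prefixes_per_peer = 5_000    # rough size of one large peer's announcement
    peers             = 30       # networks you peer with everywhere
    shared_points     = 200      # exchanges you share with each of them

    # One eBGP session per shared location means one copy of each announcement
    # per location sitting in the Adj-RIB-In, all candidates for best-path.
    candidate_paths = peers * prefixes_per_peer * shared_points
    print(f"{candidate_paths:,} candidate paths to hold and compare")   # 30,000,000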

## On 2002-11-14 14:44 -0800 Vadim Antonov typed:

> 2) There is a lack of a killer app requiring peering every 100 sq Km.

Peering every 100 sq km is absolutely infeasible. Just think of the
number of alternative paths routing algorithms will have to consider.

Anything like that would require serious redesign of Internet's routing
architecture.

  What about:

IPv6 with hierarchical geographical allocation?

BGP with some kind of tag limiting it to <N> AS hops?
(say N=2 or N=3?)

Voice of reason...

The only possible reason I can think of is if these data networks replace the present voice infrastructure. Think about it: if we really all do replace our phones with some video screen like in the movies, then yes, most of those calls stay local within the cities. Mom calling son, etc.

So we can think of these "peering centers" as replacements for the 5-10 COs in most average cities.

Otherwise, what apps require such dense peering?

## On 2002-11-14 14:44 -0800 Vadim Antonov typed:

> 2) There is a lack of a killer app requiring peering every 100 sq Km.

Peering every 100 sq km is absolutely infeasible. Just think of the
number of alternative paths routing algorithms will have to consider.

Anything like that would require serious redesign of Internet's routing
architecture.

  What about:

IPv6 with hierarchical geographical allocation?

BGP with some kind of tag limiting it to <N> AS hops?
(say N=2 or N=3?)

Hop count won't work. You would see the same hop count at all your peering locations; how your traffic exited would depend on your IGP decision tree. Do we want to get into exporting MEDs or tags? And with more than 100 domestic peering points, how would you manage that? Vadim is correct: it would take a whole new protocol, and that is unlikely. Proof of that is IPv6; IPv4 is obviously still the big winner.
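A stripped-down sketch of the BGP decision (simplified, not the full rule set) showing why a hop-count tag can't steer anything when path lengths are equal everywhere and the IGP ends up deciding:

    from dataclasses import dataclass

    @dataclass
    class Route:
        exchange: str
        local_pref: int
        as_path_len: int
        med: int
        igp_metric: int      # IGP cost from here to that exit

    def best(routes):
        # higher local-pref first, then shorter AS path, lower MED, nearest exit
        return min(routes, key=lambda r: (-r.local_pref, r.as_path_len, r.med, r.igp_metric))

    candidates = [
        Route("NYC", 100, 2, 0, 300),
        Route("CHI", 100, 2, 0, 150),
        Route("DFW", 100, 2, 0, 800),
    ]
    print(best(candidates).exchange)   # CHI: path length is identical at every
                                       # location, so the IGP metric decides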

Doesn't this model sound a bit like Internap to anyone? Why even have a backbone if you have peering in every location?

I can think of several ways to do it, but all of them amount to
significant change from how things are being done in the current
generation of backbones.

--vadim

On Thu, Nov 14, 2002 at 10:00:48AM +0200, Petri Helenius scribbled:

> I'm putting the number closer to 40 (the "NFL cities") right now, and
> 150 by the end of the decade, and ultimately any "metro" with population
> greater than 50K in a 100 sq Km area will need a neutral exchange point
> (even if it's 1500 sqft in the bottom of a bank building.)

What application will require this dense peering?

To power the IPv6 networks of refrigerators, ovens, and light switches,
  as well as your 3G video conferencing phone

Michael C. Wu wrote:

On Thu, Nov 14, 2002 at 10:00:48AM +0200, Petri Helenius scribbled:

> I'm putting the number closer to 40 (the "NFL cities") right now, and
> 150 by the end of the decade, and ultimately any "metro" with population
> greater than 50K in a 100 sq Km area will need a neutral exchange point
> (even if it's 1500 sqft in the bottom of a bank building.)

What application will require this dense peering?

To power the IPv6 networks of refrigerators, ovens, and light switches,
as well as your 3G video conferencing phone

All of the above combined don't generate bandwidth even near what a
current-generation peer-to-peer file-sharing client does.

The mentioned applications are not really delay-sensitive in the sub-20 ms
range either.

Pete

Thus spake "Michael C. Wu" <keichii@iteration.net>

On Thu, Nov 14, 2002 at 10:00:48AM +0200, Petri Helenius scribbled:
>
> > I'm putting the number closer to 40 (the "NFL cities") right now, and
> > 150 by the end of the decade, and ultimately any "metro" with population
> > greater than 50K in a 100 sq Km area will need a neutral exchange point
> > (even if it's 1500 sqft in the bottom of a bank building.)
>
> What application will require this dense peering?

To power the IPv6 networks of refrigerators, ovens, and light switches,
  as well as your 3G video conferencing phone

None of these applications have any requirement for peering every 100km2.
I'd expect my refrigerator, oven, light switches, etc. to be behind my
house's firewall and only talk using link-local addresses anyways.
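For what it's worth, link-local scope really does keep that chatter off any exchange; a quick check with Python's ipaddress module (fe80::1 is just a stand-in appliance address):

    import ipaddress

    # fe80::/10 never leaves the local link, so traffic between appliances on
    # these addresses needs no exchange point at all.
    fridge = ipaddress.IPv6Address("fe80::1")
    print(fridge.is_link_local)                                  # True
    print(ipaddress.IPv6Address("2001:db8::1").is_link_local)    # False (routable documentation prefix)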

Try again.

S

Thus spake "Brad Knowles" <brad.knowles@skynet.be>

> None of these applications have any requirement for peering every 100km2.
> I'd expect my refrigerator, oven, light switches, etc. to be behind my
> house's firewall and only talk using link-local addresses anyways.

Using Rendezvous and multicast DNS? What happens when you bring
in the rogue appliance that decides to start spoofing answers from
other equipment, or maybe you contract a computer virus that does so?

That's a potentially interesting discussion, but it has nothing to do with
requiring peering in every 100km2.

I think the real risk is VoIP and mobile phones used as Internet
video phones with H.323 or other protocols that require high
bandwidth and low latency. Imagine doing this for tens of millions
of people in a large city.

And the half-dozen carriers who operate those tens of millions of phones
will have private peering in place if it makes technical sense -- just like
they do for TDM phones. That doesn't mean those carriers will want to peer
publicly in every city, nor does it necessarily mean that private peering in
every city makes economic or technical sense.

As I previously asserted, every point in the US is within 20ms RTT of a
major exchange today, and 20ms latency is irrelevant in the VoIP arena.
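A back-of-the-envelope check of the 20 ms figure; the fiber-route detour factor below is a guess, not a measured value:

    C_KM_PER_MS  = 300     # speed of light in vacuum
    FIBER_FACTOR = 0.67    # light in fiber travels at roughly 2/3 c
    ROUTE_FACTOR = 1.5     # assumed detour of real fiber routes vs. great-circle

    one_way_ms = 20 / 2    # half of a 20 ms round trip
    reach_km   = one_way_ms * C_KM_PER_MS * FIBER_FACTOR / ROUTE_FACTOR
    print(f"~{reach_km:.0f} km of reach per exchange")   # ~1340 km
    # A handful of well-placed exchanges with that reach blanket the
    # continental US, which is what the 20 ms assertion rests on.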

Try again.

S