Info on MAE-EAST

Brett writes:

> MAE-Houston is a small NAP in the scheme of things, but it makes for a
> good example, so we'll use it. It's $2000/month to get your foot in the
> door, then another large chunk of cash to connect to the GIGAswitch,
> which, all things considered, isn't really needed. Rather than waste
> money on equipment that just doesn't need to be there, why not make it
> more economical for local players to get involved and cross-connect to
> each other? In the end you not only save money by not bringing in
> useless hardware, you garner more customers by lowering the price of
> the private interconnect.

Hmmm. According to what I learnt in school, the cost of a connected network
like a GIGAswitch or Catalyst or DELNI with N participants is:

  (N x interface_cost) + (N x port_cost)

...while the cost of a connected network made up of wire peers is:

  (2 x sum(N - 1) x interface_cost)

"sum(N-1)" is an interesting function. Here are some examples:

  % calc
  > define sum(n) = n > 0 ? n + sum(n-1) : 0;
  "sum" defined
  > for (n = 2; n < 20; n++) print n,2*sum(n-1);
  2 2
  3 6
  4 12
  5 20
  6 30
  7 42
  8 56
  9 72
  10 90
  11 110
  12 132
  13 156
  14 182
  15 210
  16 240
  17 272
  18 306
  19 342

That means with 19 ISPs in a GIGAswitch-free room, there are 342 FIPs at
a cost of, what, US$12000 each after discount? I'll betcha I can buy quite
a few GIGAswitches for US$4.1M. Oops, that's not a fair comparison, since
with a GIGAswitch I also need 19 FIPs. Figure that a fully configured
GIGAswitch retails without discount for US$80K and that 19 FIPs are going
to run another US$228K. That's still a *lot* less than 2*sum(n-1) FIPs.
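
(As a sanity check on that arithmetic, in the same calc session as above;
the US$12K-per-FIP and US$80K-per-switch figures are the ones quoted here:)

  > print 342*12000, 80000 + 19*12000;
  4104000 308000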

This also assumes that we all have VIP2 cards and want to burn nine 7513
slots (at two FDDI ports per VIP2) just on local peering, and it further
assumes that a 7513 won't simply melt if all the interfaces ever get hot
at the same time.

The breakeven is between N=3 and N=4. On the Internet, N never stays small.
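
(To see where that crossover falls, here is a sketch continuing the calc
session above; mesh() and gswitch() are names made up for this example,
and the prices are the US$12K-per-FIP and US$80K-per-switch figures
quoted earlier:)

  > define mesh(n) = 2 * sum(n-1) * 12000;
  "mesh" defined
  > define gswitch(n) = 80000 + n * 12000;
  "gswitch" defined
  > for (n = 2; n < 7; n++) print n, mesh(n), gswitch(n);
  2 24000 104000
  3 72000 116000
  4 144000 128000
  5 240000 140000
  6 360000 152000

The mesh column passes the switch column between n=3 and n=4.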

(And that breakeven assumes the four people have to buy the whole
GIGAswitch with no one like MFS to underwrite the cost of the unused
ports; even so, four people in a room together could SAVE MONEY buying
the GIGAswitch.)

Gah.

This is probably a worst-case scenario. What about Ethernet cross-connects
using the 6-port cards? Or zero-mile T1s using the 8-port serial cards?
And you are assuming a full mesh, which isn't necessarily what people
need. I don't think you can generalize about what a provider wants from
an exchange point, especially not in a world in which exchange points are
breeding like rabbits.
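
(A rough sketch of the full-mesh point, in the same calc notation as
above; the assumption that each provider cross-connects to only three
nearby peers is a made-up figure for illustration:)

  % calc
  > for (n = 4; n < 20; n = n + 5) print n, n*(n-1), 3*n;
  4 12 12
  9 72 27
  14 182 42
  19 342 57

With a fixed number of peers per provider, the interface count grows
linearly rather than quadratically.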

Michael Dillon - Internet & ISP Consulting
Memra Software Inc. - Fax: +1-250-546-3049
http://www.memra.com - E-mail: michael@memra.com

Maybe, maybe not. I've done a fair bit of watching to see where packets
are flying in my short time. Now I don't pretend to be an expert, not by
a long shot, and someone spank me if I'm way off, but... from what I've
seen, in any given city (assume a reasonable size of 200,000+) 50% or
more of the traffic is local. By providing reasonable rates for private
interconnects at a local peering point, one can not only speed up
response for customers but cut down on a great deal of traffic that
needn't circle the globe to get to its destination. If we cut down on
the number of hops packets have to take, we cut down on congestion,
etc. Or am I just being idealistic? :-)

[-] Brett L. Hawn (blh @ nol dot net) [-]
[-] Networks On-Line - Houston, Texas [-]
[-] 713-467-7100 [-]

Nope. In a given reasonably sized (i.e., a city or so) geographical area,
you'd be lucky to get better than 20% locality of your traffic. There are
some exceptions where there are major traffic sources in the area, but those
tend to be pretty concentrated.

The percentage decreases further when you take into account traffic
to/from NSPs' customers in the locality, as the NSPs are not likely to
privately peer with local providers.

This is in no way a case against local peering (every bit less traffic
dumped into the core from every locality adds up), but one needs to be
aware of what is gained from the "exchange in every town" scenario.

-dorian

Interesting. I wonder if this will continue to be a long-term trend.

I can't claim to have recent numbers that suggest otherwise, but, some
historical information might at least be interesting. In the early 80s, I
did a good deal of X.25 capacity planning. At what was then GTE Telenet,
we found that up to 50% of our traffic stayed local in large cities. The
larger the city, the more that seemed to stay local...this was especially
obvious in New York, where a great deal of financial data flowed.

Now, these old statistics reflect mainframe-centric traffic, and more
private-to-private than arbitrary public access. The latter is much more
characteristic of Internet traffic.

SNA and X.25 tended to emphasize the ability to fine tune access to a
limited number of well-known resources, with relatively well-understood
traffic patterns. The Internet, however, has emphasized arbitrary and
flexible connectivity, possibly to the detriment of performance tuning and
reliability.

While I recognize that putting mission-critical applications onto the
general Internet (as opposed to VPNs) is, in many cases, a clear
indication that someone needs prompt psychiatric help, I wonder whether
the increasing commercialization of Internet information resources might
produce greater volumes of traffic that stay within the service area of
an exchange point.

Web caching would seem to encourage traffic to stay local.

Howard Berkowitz
PSC International

> From what I've seen, in any given city (assume a reasonable size of
> 200,000+) 50% or more of the traffic is local.

For those who are new here, this one has been around a decade or two.

    A host is a host from coast to coast.
    no one will talk to a host that's close.
    Unless the host (that isn't close)
    is busy, hung or dead.
    -- David Lesher wb8foz@nrk.com

randy

> I can't claim to have recent numbers that suggest otherwise, but, some
> historical information might at least be interesting. In the early 80s, I
> did a good deal of X.25 capacity planning. At what was then GTE Telenet,
> we found that up to 50% of our traffic stayed local in large cities. The
> larger the city, the more that seemed to stay local...this was especially
> obvious in New York, where a great deal of financial data flowed.

Remember that in the early '80s you basically couldn't lease a T1
from AT&T (I think it was '82 or so when they were first tariffed?)
(watch out for that DC voltage... ouch! :-). Also, DDS services were
scarce, etc., so (expensive) low-speed analog was the option for leased
lines, and private networks were rare. Since then, of course, the
fallout from Judge Greene has changed some things, and it is cheap and
easy to put up a DS0 across town; the cost justification vs. per-packet
charges is a lot different.

> Now, these old statistics reflect mainframe-centric traffic, and more
> private-to-private than arbitrary public access. The latter is much more
> characteristic of Internet traffic.
>
> SNA and X.25 tended to emphasize the ability to fine tune access to a
> limited number of well-known resources, with relatively well-understood
> traffic patterns. The Internet, however, has emphasized arbitrary and
> flexible connectivity, possibly to the detriment of performance tuning and
> reliability.

Well, the strategies for performance tuning are certainly different.

[stuff cut]

> Web caching would seem to encourage traffic to stay local.

ahhh....yup.

            dave

>> I can't claim to have recent numbers that suggest otherwise, but, some
>> historical information might at least be interesting. In the early 80s, I
>> did a good deal of X.25 capacity planning. At what was then GTE Telenet,
>> we found that up to 50% of our traffic stayed local in large cities. The
>> larger the city, the more that seemed to stay local...this was especially
>> obvious in New York, where a great deal of financial data flowed.

> Remember that in the early '80s you basically couldn't lease a T1
> from AT&T (I think it was '82 or so when they were first tariffed?)

Dave, reality was funnier than that. It was 1980 or so when we actually
did get a T1 between Washington and New York, but eventually released it
because all of the DC-NY public network traffic wasn't enough to justify
that HUGE amount of bandwidth.

I did get the first nonmilitary T1 in the DC area in '77 or '78 at the
Library of Congress. The then C&P Telephone couldn't really figure out how
to charge for it, so we got it dirt cheap -- and it worked very well.

> (watch out for that DC voltage... ouch! :-)

I have a very painful memory of running my finger over a punchdown where
some stranded wire had gotten slightly loose and broke the skin. Knocked
me flat and sprained my shoulder.