RE: Current street prices for US Internet Transit

rabbit. :wink: Now excuse me while I soak my hands in bleach for having typed

I'd hate to hear what you have to do if you read that out loud. :slight_smile:

Just to be on-topic:

I think a customer savvy enough to know the difference between a 12000 network and a 7xxx network (or what-have-you) could mitigate a great many of these concerns by being multihomed correctly. Such a customer would see significant cost improvements without much in the way of penalties -- e.g. reconvergence issues. Two pieces of equipment with low MTBFs, deployed in parallel, can exceed the overall availability of a single piece of equipment with a high MTBF.
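To put rough numbers on that last point, here's a minimal sketch of the availability arithmetic in a few lines of Python; the MTBF/MTTR figures are made-up illustrations, not vendor data, and it assumes failures are independent and failover is clean:

    # Illustrative availability math: two cheap boxes in parallel vs. one
    # expensive box. MTBF/MTTR values are hypothetical, not datasheet numbers.

    def availability(mtbf_hours, mttr_hours):
        """Steady-state availability = MTBF / (MTBF + MTTR)."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    # One "carrier-class" box: 200,000 h MTBF, 4 h MTTR (assumed)
    single = availability(200_000, 4)

    # Two cheaper boxes, each 50,000 h MTBF, 4 h MTTR, multihomed so the
    # service survives as long as at least one box is up.
    cheap = availability(50_000, 4)
    pair = 1 - (1 - cheap) ** 2

    print(f"single high-MTBF box  : {single:.8f}")   # ~0.99998000
    print(f"pair of low-MTBF boxes: {pair:.8f}")     # ~0.99999999

Of course the parallel-path number only holds if the failures really are independent and the failover actually works, which is exactly the "multihomed correctly" caveat.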

On-topic, but slightly different:

Other than packet buffer depths and some theoretical ACL limits, is there any reason why a 7600 network would be worse than one built on 12000s? MTBF, reconvergence and other issues should all be pretty much a wash, and as others have mentioned, deep packet buffers are not necessarily a good thing <tm>. Throughput-wise, a 7600 should be able to hold its own against a 12000, provided we are talking about 40Gb/s blades and SUP720s.

Deepak Jain
AiNET

I've had this discussion a few times with people working at Cisco. The answers I usually get have to do with how well it handles overload, i.e. what happens when ports go full.

If you want to be able to do single TCP streams at 5 gigabit/s over your
long-haul 10gig network that is already carrying a lot of traffic, you
need deep packet buffers. If your fastest customer is less than 1gig and
your network is 10gig, you do not.

So, if I were to provision a transatlantic line that cost me a lot of
money, I would use a GSR or a Juniper. If I were to provision an 80km dark
fiber between two places where I already own 24 pairs, there is a wide
choice in equipment.
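For a rough feel for the scale involved, here's a back-of-the-envelope sketch (in Python) of the bandwidth-delay arithmetic behind that; the RTTs are assumed example values, not measurements:

    # Bandwidth-delay product (BDP): roughly how much data a single TCP
    # stream keeps in flight, and hence roughly how much the bottleneck
    # router may need to buffer to ride out a loss. Example numbers only.

    def bdp_bytes(rate_bps, rtt_seconds):
        return rate_bps * rtt_seconds / 8

    # A single 5 Gb/s stream across a ~100 ms transatlantic path (assumed RTT)
    print(f"5 Gb/s @ 100 ms RTT: {bdp_bytes(5e9, 0.100) / 1e6:.0f} MB in flight")

    # A 1 Gb/s customer across an 80 km span, ~1 ms RTT (assumed)
    print(f"1 Gb/s @   1 ms RTT: {bdp_bytes(1e9, 0.001) / 1e3:.0f} KB in flight")

A few milliseconds of buffering is nothing next to the ~60 MB in flight on the long-haul path, but plenty for the short span, which is the gist of the distinction above.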

Those are apples and oranges. You cannot compare bandwidth prices between the US and countries without the same fiber infrastructure (and with government-owned PTTs controlling almost all market access).

Bang on!
U.S. prices reflect a mostly complete disintermediation of the telecom
industry in that the provider who sells you transit probably also owns
the fiber in the ground and is able to specify the entire suite
of technology and operations between the glass strands and IP transit.
So rather than reflecting slim margins, perhaps the prices reflect
sensible cost structures.

Let's not forget that it was only about ten years ago that telcos were
able to get away with selling telecom services in which 75% or more of
their cost base was in billing systems and overhead.

Anyway, I suspect "more turbulence in the industry" for the next few
millennia, no matter where prices are. :slight_smile:

It will take at least a generation before the people who once experienced
grossly overinflated margins are all gone and people stop trying to
recreate the golden age of telecom. Think highways and gas pipelines
and electrical grids. IP transit networks are a utility and they should
be cheap, ubiquitous and reliable. Anyone who wants to get rich in this
business should be looking at value added services and not transit.
It won't be long before IP transit is a real commodity and everyone will
have the same cost structure and prices across the board. There will be
margin for well-run IP transit utilities but no more boom times.

--Michael Dillon

Well, with the GSR (and the like) you're paying for high MTBF, large buffers and quick re-routing when something happens, so yes, this is a quality issue and that's why you should care and make an informed decision.

There's more than one way to do things.

Some people manage MTBF by having more cheaper boxes in a resilient
architecture so that the failure of a box has minimal impact on
the transport of packets.

Some people don't have buffers in their routers because they
provide a consistently low latency service (low jitter).

Some people do rerouting at the SDH layer so that routers don't need
to reroute. Or they put a lot of effort into managing their lower
layers so that failures happen very infrequently and therefore routers
don't need to reroute.

To make a truly informed decision you need hard data on network
performance. Brands and models of routers are irrelevant. When I look
at point-to-point latency graphs on a network and see constantly
varying latency in almost a sine wave pattern, I know that the
provider is doing something wrong. I may not know whether it is
too-large buffers on the routers, a congested circuit, or a poorly
managed underlying ATM/FR network, but the data tells the true
story.

If you care about quality, don't buy unless you can see hard data
on the network's performance over a reasonable time period, i.e.
6 months to a year.

And not everybody needs to care about quality that much.

-Michael Dillon

of course, if you wait for someone to go bankrupt then buy them you can buy the
entire company and network for about that price :slight_smile:

Steve

Stephen J. Wilcox wrote:

of course, if you wait for someone to go bankrupt then buy them you can buy the entire company and network for about that price :slight_smile:

I did hear about an ISP called optigate.net (Coarsegold, CA) that went bankrupt quite recently ... [at least, an ex-Optigate customer emailing out of a dynamic DSL IP who ran into our filters told me Optigate had shut down suddenly ...]

You might not want their IP space though, if you propose to put mailservers on it.

I've had this discussion a few times with people working at Cisco. The answers I usually get have to do with how well it handles overload, i.e. what happens when ports go full.

If you want to be able to do single TCP streams at 5 gigabit/s over your
long-haul 10gig network that is already carrying a lot of traffic, you
need deep packet buffers. If your fastest customer is less than 1gig and
your network is 10gig, you do not.

So, if I were to provision a transatlantic line that cost me a lot of
money, I would use a GSR or a Juniper. If I were to provision an 80km dark
fiber between two places where I already own 24 pairs, there is a wide
choice in equipment.

Maybe I am wrong here, but what do the router's packet buffers have to do with a TCP stream? Buffers would add jitter and latency to the pipe. Wouldn't a 5Gb/s TCP stream over 3000+ miles imply huge buffers on the sender and receiver side? Since when do the router's buffers make a difference for that? If your application is such that jitter and latency don't matter, buffers are great. If dropping a packet on congestion is worse than queuing it, also great. But how does that improve the stream's performance otherwise?

"What happens when ports go full" are you implying some kind of HOL problem in the 7600?

DJ

Maybe I am wrong here, but what do the router's packet buffers have to do with a TCP stream? Buffers would add jitter and latency to the pipe.

Have you tried running a single TCP stream over a 10 meg Ethernet with a 5 megabit/s policer on the port? Do that, figure out what happens, and explain to the rest of the class why this single TCP stream cannot use all of the 5 megabit/s itself.

Wouldn't a 5Gb/s TCP stream over 3000+ miles imply huge buffers on the
sender and receiver side? Since when do the router's buffers make a
difference for that? If your application is such that jitter and latency
don't matter, buffers are great. If dropping a packet on congestion is
worse than queuing it, also great. But how does that improve the
stream's performance otherwise?

"What happens when ports go full" are you implying some kind of HOL
problem in the 7600?

I'm implying that a 7600 with non-OSM cards doesn't have more than a few ms of buffering, which makes a single high-speed TCP stream go into saw-tooth performance mode, its congestion mechanism being triggered by packet loss instead of by a change in RTT.

Yes, the 500+ ms of buffering you often get on a GSR/Juniper is of little use in today's world, but it's nice to have 25ms of buffering anyway, so TCP has some leeway.

If you have thousands of TCP streams it doesn't matter, then small packet
buffers will simply act as a high-speed policer when the port goes full
and they'll be able to fill the pipe together anyway.
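A toy model of that saw-tooth, for anyone who wants to see the exercise worked: this is a very idealised AIMD sketch in Python (assumed BDP, exactly one loss per cycle, no timeouts), so treat the numbers as illustration rather than measurement:

    # Toy AIMD saw-tooth: a single TCP flow against a bottleneck with little
    # or no buffering (policer-like) vs. one with a full BDP of buffer.
    # Idealised: one loss per cycle, no timeouts, numbers illustrative only.

    def avg_utilization(bdp_pkts, buffer_pkts):
        peak = bdp_pkts + buffer_pkts        # window size at which a drop occurs
        trough = peak / 2                    # window after the multiplicative decrease
        # Below one BDP the link is underused in proportion to window/BDP;
        # above it the link stays full (the excess just sits in the buffer).
        samples = [min(w / bdp_pkts, 1.0) for w in range(int(trough), int(peak) + 1)]
        return sum(samples) / len(samples)

    bdp = 100  # packets of bandwidth-delay product (assumed)
    print(f"no buffer (policer-like): {avg_utilization(bdp, 0):.0%}")    # ~75%
    print(f"one BDP of buffer:        {avg_utilization(bdp, bdp):.0%}")  # 100%

In practice a hard policer drops whole bursts and can push the flow into timeouts, so measured numbers tend to land well below the idealised 75%, which lines up with the 200-300 meg out of 500 figure quoted later in this thread.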

Have you tried running a single TCP stream over a 10 meg ethernet with a 5
megabit/s policer on the port? Do that, figure about what happens and
explain to the rest of the class why this single TCP stream cannot use all
of the 5 megabit/s itself.

That's entirely a different example. If we are talking about a stream that is _exactly_ 5Gb/s or _exactly_ 5Mb/s, the policer won't be hit. In the example we are talking about below, an _approximately_ 5Gb/s stream on an _approximately_ full pipe, the performance will be significantly better than you imply. And I have customers that do it pretty regularly (2 ~500Mb/s streams per GE port - telemetry data) on their equipment with very small buffers (3550s).

I'm implying that a 7600 with non-OSM cards doesn't have more than a few ms of buffering, which makes a single high-speed TCP stream go into saw-tooth performance mode, its congestion mechanism being triggered by packet loss instead of by a change in RTT.

Yes, the 500+ ms of buffering you often get on a GSR/Juniper is of little use in today's world, but it's nice to have 25ms of buffering anyway, so TCP has some leeway.

Yes, if you are trying to fill your pipe for more than a few milliseconds and are schooling your GSR/Juniper to drop or prevent queuing beyond, say, 50ms, that might be a useful improvement. Not that anyone does that....

I suppose your example of transoceanic connectivity vs. an 80km span was an example where a congestion case would exist for a long time rather than being handled by a decent upgrade plan. I guess that is a spend-more-on-hardware vs. spend-more-on-connectivity model -- or trust that C or J overengineered their boxes so the network doesn't have to be properly engineered [by assumption].

If you have thousands of TCP streams it doesn't matter, then small packet
buffers will simply act as a high-speed policer when the port goes full
and they'll be able to fill the pipe together anyway.

Agreed. I guess it depends where you want to spend your engineering dollars. If your interfaces are pretty small and often subject to bursting to wirespeed, and those bursts somehow make it into your core [and are not dropped by your aggregation gear with its smaller buffers], then you can queue them.

If you run a network where your bursts disappear by the time they hit your core [either because of statistical aggregation or simply being dropped by the smaller interface buffers along the way], or you have ample capacity, or you have engineered properly sized core trunks, it's not an issue. I hope most networks fall into this category, but I could be wrong.

DJ

I'm implying that a 7600 with non-OSM cards doesn't have more than a few ms of buffering, which makes a single high-speed TCP stream go into saw-tooth performance mode, its congestion mechanism being triggered by packet loss instead of by a change in RTT.

Yes, the 500+ ms of buffering you often get on a GSR/Juniper is of little use in today's world, but it's nice to have 25ms of buffering anyway, so TCP has some

I hate following up on my own message, so I'm following up on this instead. A point just raised privately was that *IF* you need the buffers, you could just OSM the ports under stress [say the ones dedicated to the 1 or 2 expensive WAN links you may want to run near their top]. Considering a 4-port GE-WAN OSM is $800 on eBay, I don't see how it's even a pricing consideration.

DJ

the example we are talking about below, an _approximately_ 5Gb/s stream
on an _approximately_ full pipe, the performance will be significantly
better than you imply. And I have customers that do it pretty regularly
(2 ~500Mb/s streams per GE port - telemetry data) on their equipment
with very small buffers (3550s).

Well, my experience is that with 500 meg of background traffic on a gig link and a single high-speed TCP stream on top of that, it's basically the same thing as putting a 500 meg policer on the port. And with a 500 meg policer on a gig link, a gig-connected machine trying to go as fast as it can won't be able to use the remaining 500 meg; you'll get 200-300 meg.

I suppose your example of transoceanic connectivity vs. an 80km span was an example where a congestion case would exist for a long time rather than being handled by a decent upgrade plan. I guess that is a spend-more-on-hardware vs. spend-more-on-connectivity model -- or trust that C or J overengineered their boxes so the network doesn't have to be properly engineered [by assumption].

Yes, that is exactly what I mean. If connectivity is expensive, spend more
on what you connect to that connectivity, if connectivity is cheap, buy
two and buy cheaper things to connect to it.

Deepak Jain wrote:

Have you tried running a single TCP stream over a 10 meg Ethernet with a 5 megabit/s policer on the port? Do that, figure out what happens, and explain to the rest of the class why this single TCP stream cannot use all of the 5 megabit/s itself.

That's entirely a different example. If we are talking about a stream that is _exactly_ 5Gb/s or _exactly_ 5Mb/s, the policer won't be hit. In the example we are talking about below, an _approximately_ 5Gb/s stream on an _approximately_ full pipe, the performance will be significantly better than you imply. And I have customers that do it pretty regularly (2 ~500Mb/s streams per GE port - telemetry data) on their equipment with very small buffers (3550s).

The required buffer size depends on the RTT of the TCP stream going over it. If you have the 3550 with small buffers and a 5ms TCP RTT, everything is fine. If you have the 3550 with small buffers and a 200ms TCP RTT, you will run into trouble.
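To put the RTT dependence into numbers, a quick Python sketch; the stream rate matches the ~500 Mb/s telemetry example above, the RTTs are the 5ms and 200ms cases just mentioned, and "one BDP of buffer per full-rate flow" is a rule of thumb, not a hard requirement:

    # How much buffering a single full-rate TCP stream "wants" at the
    # bottleneck, using the classic one-BDP-per-flow rule of thumb.

    def bdp_kb(rate_bps, rtt_seconds):
        return rate_bps * rtt_seconds / 8 / 1e3

    for rtt_ms in (5, 200):
        kb = bdp_kb(500e6, rtt_ms / 1000)   # a ~500 Mb/s stream
        print(f"500 Mb/s stream @ {rtt_ms:>3} ms RTT: ~{kb:,.0f} KB of buffer")

Same port speed, roughly forty times more buffer wanted once the path is transcontinental; that's why the small-buffer switch is fine in one case and in trouble in the other.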

William B. Norton wrote:

> The Cost of Internet Transit in...
>
>   Commit      AU     SG     JP     HK    USA
>   1 Mbps    $720   $625   $490   $185   $125
>   10 Mbps   $410   $350   $150   $100    $80
>   100 Mbps  $325   $210   $110    $80    $45
>   1000 Mbps $305   $115    $50    $50    $30

As mentioned before, Europe is about the same as US.

With these US street prices in mind, how can anyone justify paying
prices of some commercial exchanges (the last offer I got from PAIX Palo
Alto was USD 5500 per month for a FE port about a year ago, and Equinix
Ashburn was not much cheaper). Please note: I'm not talking of the
technical advantages of peering.

Fredy Künzler
Init Seven AG, AS13030

With these US street prices in mind, how can anyone justify paying
prices of some commercial exchanges (the last offer I got from PAIX Palo
Alto was USD 5500 per month for a FE port about a year ago, and Equinix
Ashburn was not much cheaper). Please note: I'm not talking of the
technical advantages of peering.

Or perhaps the better question is: how can one justify the cost of _public_ peering when fiber cross-connects are $200-$300/month each? That is at least 20-40 direct fiber connects [twice that if you and your peers split the cost of cross-connects]. If you only need 1Gb/s of cross-connect capacity you can take a 3550 switch [or use it as a router] and terminate all of the peering sessions on it, or via VLAN-trunking directly on your real router [C/J/what have you]. Your hardware cost is marginally increased and your capacity is MANY times larger.
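To make that arithmetic explicit, a quick Python sketch using the port fee and cross-connect prices quoted in this thread (everything else here is just illustration):

    # Back-of-the-envelope: how many private fiber cross-connects does one
    # exchange FE port fee buy? Prices are the ones quoted in this thread.

    port_fee_per_month = 5500          # USD, quoted FE port price
    xconnect_prices = (200, 300)       # USD/month per fiber cross-connect

    for price in xconnect_prices:
        whole = port_fee_per_month // price          # you pay the full cross-connect
        split = port_fee_per_month // (price // 2)   # you and the peer split the cost
        print(f"at ${price}/mo: {whole} cross-connects, or {split} if the cost is split")

Even at the high end of the cross-connect pricing, the port fee buys well over a dozen private fiber connects, each of which can run at GE rather than FE.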

I don't think there are too many exchanges anymore that have 80+ active peers. If you do participate in such an exchange, have 80 peers on it, and don't exceed a single port's speed, shame on you. :slight_smile:

DJ

You can't; perhaps they'll realise that before they become deprecated.

Steve

* deepak@ai.net (Deepak Jain) [Wed 18 Aug 2004, 18:52 CEST]:

Or, perhaps the better question is. How can one justify the cost of
_public_ peering when fiber cross-connects are $200-$300/month each.

Perhaps not at the site previously mentioned.

I believe fiber crossconnects are cheaper than that at the various
AMS-IX housing sites but people still choose to connect to the exchange
switch. Bushes of private interconnects tend to quickly become unmanageable (and no, not just the "throw wire over wall" kind discussed here some months ago - that's not allowed at any AMS-IX housing site).

I don't think there are too many exchanges anymore that have 80+ active
peers. If you do participate in such an exchange, have 80 peers on it,
and don't exceed a single port's speed, shame on you. :slight_smile:

AMS-IX has almost 200 connected parties. Luckily hardly anybody is
trying to suck more traffic through their port than it can physically
handle.

Not everybody has a gigabit per second worth of traffic. Some even make
do with a 10baseT connection (full duplex of course :). Apparently
still a worthwhile proposition in a world of falling transit prices.

  -- Niels.