Transit and peering cost projections

This set of trendlines was very interesting. Unfortunately the data
stops in 2015. Does anyone have more recent data?

I believe a gigabit circuit that an ISP can resell still runs at about
$900 - $1.4k (?) in the USA. How about elsewhere?

...

I am under the impression that many IXPs remain very successful and
that states without them suffer. I also find the concept of doing micro
IXPs at the city level appealing, and now achievable with cheap gear.
Finer-grained cross connects between telco and ISP and IXP would lower
latencies across town quite hugely...

PS: I also hear ARIN is planning to drop the price for BGP AS numbers and
to bundle them three at a time, as of the end of this year.

I’m a couple of years removed from dealing with this on the provider side, but the focus has shifted rapidly to adding core capacity and large-capacity ports, to the extent that smaller-capacity ports like 1 Gbps aren’t going to see much more price compression. Cost per bit will come down at higher tiers, but there simply isn’t enough focus on the lower tiers from the hardware providers to afford carriers more price compression at 1 Gbps, or even 10 Gbps. I would expect further price compression in access costs, but not really in transit costs below 10 Gbps.

In general I agree that IXs continue to proliferate in quantity, throughput and geographic reach, almost to the degree of coverage that mainland Europe has enjoyed for years. In my home market of Atlanta, I’m aware of at least four IXs that have been established here or entered the market in the last three years - there were only two major ones prior to that. This is a net positive for a wide variety of reasons, but I don’t think it’s created much of an impact in terms of pulling down transit prices. There are a few reasons for this, but primarily because that growth hasn’t really displaced transit demand (at least in my view) and has really been more about a relatively stable set of IX participants creating more resiliency and driving other performance improvements in that leg of the peering ecosystem.

Dave Cohen
craetdave@gmail.com

I would say that a 1Gbit IP transit in a carrier neutral DC can be had for a good bit less than $900 on the wholesale market.

Sadly, IXPs are seemingly turning into a pay-to-play game, with rates that in many cases cost almost as much as transit once you factor in loop costs.

For example, in the Houston market (one of the largest and fastest growing regions in the US!), we do not have a major IX, so to get up to Dallas it’s several thousand for a 100g wave, plus several thousand for a 100g port on one of those major IXes. Or, a better option, we can get a 100g flat internet transit for just a little bit more.

Fortunately, for us as an eyeball network, there are a good number of major content networks that are allowing for private peering in markets like Houston for just the cost of a cross connect and a QSFP if you’re in the right DC, with Google and some others being the outliers.

So for now, we'll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. :blush:

See y’all in San Diego this week,
Tim

Why not place the routers in Dallas, aggregate the transit, IXP, and PNIs there, and backhaul it over redundant dark fiber with DWDM waves or 400G OpenZR?

It’s better for customer experience to keep it local instead of adding 200 miles to the route. All of the competition hauls all of their traffic up to Dallas, so we easily have a nice 8-10ms latency advantage by keeping transit and peering as close to the customer as possible.
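
For a rough sense of scale (my back-of-the-envelope, not a figure from the thread): light in fiber covers roughly 200 km per millisecond, so 200 extra route-miles of backhaul adds a few milliseconds of RTT from propagation alone, before router hops, DWDM gear, path inflation and queuing push it toward the 8-10 ms observed. A minimal sketch, with a hypothetical path-inflation factor:

```python
# Back-of-the-envelope propagation delay for extra backhaul distance.
# Assumes light in fiber at ~2/3 c; "path_inflation" is a hypothetical
# fudge factor for fiber routes running longer than the highway distance.

C_KM_PER_MS = 299_792.458 / 1000        # speed of light in vacuum, km/ms
FIBER_KM_PER_MS = C_KM_PER_MS * 2 / 3   # ~200 km/ms in glass

def added_rtt_ms(extra_route_miles: float, path_inflation: float = 1.4) -> float:
    """Extra round-trip time from hauling traffic extra_route_miles farther."""
    extra_km = extra_route_miles * 1.609 * path_inflation
    return 2 * extra_km / FIBER_KM_PER_MS

print(f"~{added_rtt_ms(200):.1f} ms extra RTT for 200 route-miles of backhaul")
# ~4.5 ms from propagation alone; equipment and congestion add the rest
```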

Plus, you can’t forget to mention another ~$10k MRC per pair in DF costs to get up to Dallas, not including colo, that we can spend on more transit or better gear!

Texas's BEAD funding and broadband offices are looking for proposals
and seem to have dollars to spend. I have spent much of the past few
years attempting to convince these entities that what was often more
needed was better, more local IXPs. Have you reached out to them?

I am under the impression that many IXPs remain very successful,

I know of 760 active IXPs, out of 1,148 total, so, over 31 years, two-thirds are still successful now. Obviously they didn’t all start 31 years ago; they started on a gradually accelerating curve. I guess we could do the visualization to plot the range of lifespans versus start dates, but we haven’t done that as yet.

states without them suffer

Any populated area without one or more of them suffers by comparison with areas that do have them. States, countries, cities, etc. There are still a surprising number of whole countries that don’t yet have one. We try to prioritize those in our work:

https://www.pch.net/ixp/summary

I also find the concept of doing micro IXPs at the city level appealing, and now achievable with cheap gear.

This has always, by definition, been achievable, since it’s the only way any IXP has ever succeeded, really. I mean, big sample set, bell curve, you can always find a few things out at the fringes to argue about, but the thing that allows an IXP to succeed is good APBDC, and the thing that most frequently kills IXPs is over-investment. An expensive switch at the outset is a huge liability, and one of the things most likely to tank a startup IXP. Notably, that doesn’t mean a switch that costs the IXP a lot of money: you can tank an IXP by donating an expensive switch for free. Expensive switches have expensive maintenance, whether you’re paying for it or not. Maintenance means down-time, and down-time raises APBDC, regardless of whether you’ve laid out cash in parallel with it.

Finer-grained cross connects between telco and ISP and IXP would lower latencies across town quite hugely...

Of course, and that requires that they show up in the same building, ideally with an MMR. The same places that work well for IXPs. Interconnection basically just requires a lot of networks be present close to a population center. Which always presents a little tension vis-a-vis datacenters, which profit immensely if there’s a successful IXP in them, but can never afford to locate themselves where IXPs would be most valuable, and don’t like to have to provide free backhaul to better IXP locations.

                                -Bill

Exactly. Speed x distance = cost. This is _exactly_ why IXPs get set up. To avoid backhauling bandwidth from Dallas, or wherever. Loss, latency, out-of-order delivery, and jitter. All lower when you source your bandwidth closer.

                                -Bill

Transit 1G wholesale in the right DCs is below $500 per port. 10gigE full port can be had around $1k-1.5k month on long term deals from multiple sources. 100g IP transit ports start around $4k.

The cost of transport (dark or wavelength) is generally at least as much as the IP transit cost, and usually more in underserved markets. In the northeast it is very hard to get 10GigE wavelengths below $2k/month to any location, and is generally closer to $3k. 100g waves are starting around $4k and go up a lot.

Pricing has come down somewhat over time, but not as fast as transit prices. 6 years ago a 10Gig wave to Boston from Maine would be about $5k/month. Today about $2800.

With the cost of XCs in data centers and transport costs, you generally don’t want to go beyond 2x10gigE before jumping to 100.
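
To put rough numbers on that 2x10GigE-versus-100G break-even, here is an illustrative sketch using midpoints of the price ranges quoted above; the cross-connect MRC is my own placeholder, not a quoted figure:

```python
# Illustrative cost-per-Gbps comparison using the ballpark monthly prices
# quoted above (10G transit ~$1,250, 100G transit ~$4,000, 10G wave ~$2,500,
# 100G wave ~$4,000). XC_MRC is a hypothetical cross-connect cost.

XC_MRC = 300  # $/month per cross connect (placeholder)

def cost_per_gbps(ports: int, port_gbps: int, transit_mrc: float, wave_mrc: float) -> float:
    """Total MRC (transit + transport + cross connects) per Gbps of port capacity."""
    total_mrc = ports * (transit_mrc + wave_mrc + XC_MRC)
    return total_mrc / (ports * port_gbps)

print(f"2 x 10G : ${cost_per_gbps(2, 10, 1250, 2500):.0f}/Gbps/month")
print(f"1 x 100G: ${cost_per_gbps(1, 100, 4000, 4000):.0f}/Gbps/month")
# Roughly $405/Gbps/month for 2x10G vs ~$83/Gbps/month for 100G, which is
# why it rarely makes sense to stack 10G ports much past two of them.
```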

I’ve seen some attempts to put an IX at every corner, but I don’t think those efforts will be overly successful.

It’s still difficult to gain sufficient scale in NFL-sized cities. Big content won’t join without big eyeballs (well, not the national-level guys because they almost never will). Big eyeballs just can’t be bothered. Small guys don’t move the needle enough.

Houston is tricky, as due to its geographic scope it’s quite expensive to build an IX that goes into enough facilities to achieve meaningful scale. CDN 1 is in facility A. CDN 2 in facility B. CDN 3 is in facility C. When I last looked, it was about 80 driving miles to have a dark fiber ring that encompassed all of the facilities one would need to be in.

Man, I wanna know where you’re getting 100g transit for $4500 a month! Even someone as fly by night as Cogent wants almost double that, unfortunately.

I’ve found that most of the CDNs that matter are in one facility in Houston, the DataBank West (formerly CyrusOne) campus. We are about to light up a POP there so we’ll at least be able to get PNIs to them. There is even an IX in the facility, but it’s relatively small (likely because the operator wants near-transit pricing to get on it) so we’ll just PNI what we can for now.

This may be of interest:

  Peering Costs and Fees
  <https://arxiv.org/abs/2310.04651>

John

So for now, we’ll keep paying for transit to get to the others (since it’s about as much as transporting IXP from Dallas), and hoping someone at Google finally sees Houston as more than a third rate city hanging off of Dallas. Or… someone finally brings a worthwhile IX to Houston that gets us more than peering to Kansas City. Yeah, I think the former is more likely. :blush:

There is often a chicken/egg scenario here with the economics. As an eyeball network, your costs to build out and connect to Dallas are greater than your transit cost, so you do that. Totally fair.

However, think about it from the content side. Say I want to build into Houston. I have to put routers in, and a bunch of cache servers, so I have capital outlay, plus opex for space, power, and IX/backhaul/transit costs. That’s not cheap, so there are a lot of calculations that go into it. Is there enough total eyeball traffic there to make it worth it? Is saving 8-10ms enough of a performance boost to justify the spend? What are the long-term trends in that market? These answers are of course different for a company running their own CDN vs the commercial CDNs.

I don’t work for Google and obviously don’t speak for them, but I would suspect that they’re happy to eat an 8-10ms performance hit to serve from Dallas, versus the amount of capital outlay to build out there right now.
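
To make that chicken-and-egg concrete, here is a toy version of the build-versus-serve-remotely comparison. Every input is a hypothetical placeholder except the transit rate, which is pegged to the ~$4k per 100G port figure quoted above, and the 8-10 ms latency benefit is not monetized at all:

```python
# Toy build-vs-buy comparison for a content network considering a Houston POP.
# All figures are hypothetical placeholders; the point is the shape of the
# decision, not the numbers. The 8-10 ms performance benefit is not priced in.

def local_pop_mrc(capex: float, amort_months: int, colo_power_mrc: float,
                  backhaul_mrc: float) -> float:
    """Monthly cost of standing up routers + cache servers in the metro."""
    return capex / amort_months + colo_power_mrc + backhaul_mrc

def remote_serve_mrc(peak_gbps: float, transit_per_gbps: float) -> float:
    """Monthly cost of continuing to serve the metro from Dallas over transit."""
    return peak_gbps * transit_per_gbps

local = local_pop_mrc(capex=750_000, amort_months=48,
                      colo_power_mrc=8_000, backhaul_mrc=10_000)
remote = remote_serve_mrc(peak_gbps=300, transit_per_gbps=40)  # ~$4k per 100G
print(f"local POP ${local:,.0f}/mo vs serve-from-Dallas ${remote:,.0f}/mo")
# With these made-up inputs, serving remotely stays far cheaper (~$12k vs ~$34k),
# which is the chicken-and-egg: the build only pencils out once metro eyeball
# traffic grows or the latency gain is worth paying for.
```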

For starters I would like to apologize for cc-ing both nanog and my
new nn list. (I will add sender filters)

A bit more below.

The three forms of traffic I care most about are voip, gaming, and
videoconferencing, which are rewarding to have at lower latencies.
When I was a kid, we had switched phone networks, and while the sound
quality was poorer than today, the voice latency cross-town was just
like "being there". Nowadays we see 500+ms latencies for this kind of
traffic.

As to how to make calls across town work that well again, cost-wise, I
do not know, but the volume of traffic that would be better served by
these interconnects is quite low relative to the overall gains in
lower-latency experience for them.

I agree, but there are fortunately several large content networks that have had the forethought to put their stuff in Houston - Meta, Fastly, Akamai, AWS just to name a few… There is enough of a need to warrant those other networks having a presence, so hopefully it’s just a matter of time before other content networks jump in too.

Those 4 (plus Google cache fills) make up a huge majority of our transit usage, so at least we’ll get a majority of it peered off after we get these PNIs stood up. And yes, I will continue to push for Google to light something up in Houston. :rofl:

[…]
The three forms of traffic I care most about are voip, gaming, and
videoconferencing, which are rewarding to have at lower latencies.
When I was a kid, we had switched phone networks, and while the sound
quality was poorer than today, the voice latency cross-town was just
like “being there”. Nowadays we see 500+ms latencies for this kind of
traffic.

When you were a kid, the cost of voice calls across town was completely
dwarfed by the cost of long distance calls, which were insane by today’s
standards. But let’s take the $10/month local-only dialtone fee from 1980;
a typical household would spend less than 600 minutes a month on local calls,
for a per-minute cost for local calls of about 1.6 cents/minute.
(data from https://babel.hathitrust.org/cgi/pt?id=umn.319510029171372&seq=75 )

Each call would use up a single trunk line; today, we would think of that as an
ISDN BRI at 64 Kbit/s. Doing the math, that meant on average you were using
64 Kbit/sec × 600 minutes × 60 sec/min, or 2,304,000 Kbit per month (2.3 Gbit/month).

A 1 Mbit/sec circuit, running constantly, has a capacity to transfer 2,592 Gbit/month.
So, a typical household used about 1/1000th of a 1 Mbit/sec circuit, on average,
but paid about $10/month for that. That works out to a comparative cost of
$10,000/Mbit/month in revenue from those local voice calls.
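
The same arithmetic, spelled out as a quick script (inputs are the figures quoted above; unrounded, it lands around $11,250/Mbit/month, i.e. the ~$10,000 figure after rounding):

```python
# Reproducing the 1980 local-dialtone arithmetic above.

DIALTONE_MRC = 10.0      # $/month, local-only dialtone
LOCAL_MINUTES = 600      # minutes/month of local calling
DS0_KBPS = 64            # one voice trunk, Kbit/s

cents_per_minute = DIALTONE_MRC / LOCAL_MINUTES * 100
gbit_per_month = DS0_KBPS * LOCAL_MINUTES * 60 / 1e6            # Kbit -> Gbit
circuit_gbit_per_month = 1_000 * 86_400 * 30 / 1e6              # 1 Mbit/s, 30 days
share = gbit_per_month / circuit_gbit_per_month
revenue_per_mbit_month = DIALTONE_MRC / share

print(f"{cents_per_minute:.1f} cents/minute")                   # ~1.7
print(f"{gbit_per_month:.1f} Gbit/month of voice traffic")      # ~2.3
print(f"1/{1 / share:,.0f} of a 1 Mbit/s circuit")              # ~1/1,125
print(f"${revenue_per_mbit_month:,.0f}/Mbit/month equivalent")  # ~$11,250
```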

You can afford to put in a LOT of “just like ‘being there’” infrastructure when
you’re charging your customers the equivalent of $10,000/month per Mbit to
talk across town. Remember, this isn’t adding in any long-distance charges,
this is just for you to ring up Aunt Maude on the other side of town to ask when
the bake sale starts on Saturday. So, that revenue is going into covering
the costs of backhaul to the local IXP, and to your ports on the local IXP,
to put it into modern terms.

As to how to make calls across town work that well again, cost-wise, I
do not know, but the volume of traffic that would be better served by
these interconnects is quite low relative to the overall gains in
lower-latency experience for them.

If you can figure out how to charge your customers equivalent pricing
again today, you’ll have no trouble getting those calls across town to
work that well again.
Unfortunately, the consumers have gotten used to much lower
prices, and it’s really, really hard to stuff the cat back into the
genie bottle again, to bludgeon a dead metaphor.
Not to mention customers have gotten much more used to the
smaller world we live in today, where everything IP is considered “local”,
and you won’t find many willing customers to pay a higher price for
communicating with far-away websites. Good luck getting customers
to sign up for split contracts, with one price for talking to the local IXP
in town, and a different, more expensive price to send traffic outside
the city to some far-away place like Prineville, OR! :wink:

I think we often forget just how much of a massive inversion the
communications industry has undergone; back in the 80s, when
I started working in networking, everything was DS0 voice channels,
and data was just a strange side business that nobody in the telcos
really understood or wanted to sell to. At the time, the volume of money
being raked in from those DS0/VGE channels was mammoth compared
to the data networking side; we weren’t even a rounding error. But as the
roles reversed and the pyramid inverted, the data networking costs didn’t
rise to meet the voice costs (no matter how hard the telcos tried to push
VGE-mileage-based pricing models!
– see https://transition.fcc.gov/form477/FVS/definitions_fvs.pdf)
Instead, once VoIP became possible, the high-revenue voice circuits
got pillaged, with more and more of the traffic being pulled off over to
the cheaper data side, until even internally the telcos saw the writing
on the wall, and started to move their trunked voice traffic over to IP
as well.
But as we moved away from the SS7-based signalling, with explicit
information about the locality of the destination exchange giving way
to more generic IP datagrams, the distinction of “local” versus “long-distance”
became less meaningful, outside the regulatory tariff domain.
When everything is IP datagrams, making a call from you to a person on
the other side of town may just as easily be exchanged at an exchange point
1,000 miles away as it would be locally in town, depending upon where your
carrier and your friend’s carriers happen to be network co-incident. So, for
the consumer, the prices go drastically down, but in return, we accept
potentially higher latencies to exchange traffic that in earlier days would
have been kept strictly local.

Long-winded way of saying “yes, you can go back to how it was when
you were a kid–but can you get all your customers to agree to go back
to those pricing models as well?” ^_^;

Thanks!

Matt

The issue in Houston is Dallas.

I reached out to 30-40 networks and 90% of them said they just backhaul to Dallas and have no interest in peering in Houston. It’s a real hard town to get any traction in. If you’re local and have some insight, I’d be super happy to talk to you.

Aaron

Haha, when I was at Cisco in the late '90s and was working on VoIP stuff, we were working with Sprint trying to get them onboard for a residential voice project. They were really insistent on using AAL2 to conserve bandwidth. I told them at the time that the bandwidth for voice was going to be insignificant and it wasn't a big deal that RTP wasn't as efficient. They looked at me like I had leprosy with body parts falling off. Like the next month it was announced that data had surpassed voice for the first time. We didn't get the contract, fwiw. But they never launched anything either. Was there ever any significant deployment of AAL2?

Mike