net neutrality and peering wars continue

The tools cannot estimate the burden imposed on the peer's network very
well, particularly when longest-exit routing is implemented to balance
the mileage burden, so each party shares their information with the
other and they compare data in order to make decisions.

It's not common, but there are a handful of peers that share this
information with each other.

i have not been able to find it easily, but some years back rexford and
others published on a crypto method for peers to negotiate traffic
adjustment between multiple peering points with minimal disclosure. it
was a cool paper.

randy

* woody@pch.net (Bill Woodcock) [Thu 20 Jun 2013, 16:59 CEST]:

Right. By "sending peer" I meant the network transmitting a packet, unidirectional flow, or other aggregate of traffic into another network. I'm not assuming anything about whether they are offering "content" or something else - I think it would be better to talk about peering fairness at the network layer, rather than the business / service layer.

In that case, it's essentially never an issue, since essentially every packet in one direction is balanced by a packet in the other direction, so rotational symmetry takes care of the "fairness."

You're mistaken if you think that CDNs have an equal number of packets going in and out.

I think you may be taking your argument too far, though, since by this logic, the sending and receiving networks also have control over what they choose to transit and receive, and I think that discounts too far the reality that it is in fact the _customers_ that are making all of these decisions, and the networks are, in the aggregate, inflexible in their need to service customers. What a customer will pay to do, a service provider will take money to perform. It's not really service providers (in aggregate) making these decisions. It's customers.

I think the point here is that networks are nudging these decisions by making certain services suck more than others by way of preferential network access.

  -- Niels.

And even if the number of packets matches, there's the whole "1500 bytes
of data, 64 bytes of ACK" thing to factor in...

They are roughly equal (modulo delayed acks, etc.). However, the number of octets is very different from the number of packets. There is much greater asymmetry in number of octets than in number of packets.

To the best of my knowledge, most (if not all) of the peering agreements that discuss traffic ratios do so in terms of data transferred, not number of datagrams.
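To put rough numbers on that (purely illustrative, assuming ~1500-byte data segments, 64-byte bare ACKs, and either one ACK per data segment or one per two segments for delayed ACKs), a quick sketch:

DATA_BYTES = 1500   # full-size data segment, per the figure quoted above
ACK_BYTES = 64      # bare ACK, per the figure quoted above

def ratios(data_packets, acks_per_segment):
    # Returns (packet ratio, octet ratio) for a one-way bulk transfer.
    ack_packets = data_packets * acks_per_segment
    packet_ratio = data_packets / ack_packets
    octet_ratio = (data_packets * DATA_BYTES) / (ack_packets * ACK_BYTES)
    return packet_ratio, octet_ratio

for acks_per_segment in (1.0, 0.5):   # 0.5 approximates delayed ACKs
    p, o = ratios(1_000_000, acks_per_segment)
    print(f"{acks_per_segment} ACKs/segment: {p:.1f}:1 packets, {o:.1f}:1 octets")

Packets stay within a small factor of 1:1 while octets run tens-to-one, which is why a ratio clause written in terms of data transferred behaves very differently from one written in terms of datagrams.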

Owen

* owen@delong.com (Owen DeLong) [Thu 20 Jun 2013, 23:38 CEST]:

* woody@pch.net (Bill Woodcock) [Thu 20 Jun 2013, 16:59 CEST]:

Right. By "sending peer" I meant the network transmitting a packet

[...]

every packet in one direction is balanced by a packet in the other direction

You're mistaken if you think that CDNs have an equal number of packets going in and out.

They are roughly equal (modulo delayed acks, etc.). However, the number of octets is very different from the number of packets. There is much greater asymmetry in number of octets than in number of packets.

Thank you, Captain Obvious.

Also, if you don't have data, best to keep your opinion to yourself, because you might well be wrong.

  -- Niels.

Perhaps last-mile operators should
A) advertise each of their metropolitan regional systems as a separate AS
B) establish an interconnection point in each region where they will accept traffic destined for their in-region customers without charging any fee

This leaves the operational model of WAN backbone transit networks unchanged: fights about traffic balance and settlement fees can continue in perpetuity.

Those big sources who fall afoul of balance can opt to deliver traffic directly to the last-mile network(s) in given markets.
      - Transfers WAN networking cost-burden to the content originator (through their agents: CDN operators or transit providers)
      - Reduces financial burden on the last-mile operator (demand is reduced on their company-operated backbone and/or the transit capacity that they purchase)

RESULTS
Customers get to receive the content they are requesting: technical and political impediments are removed.
The last-mile operator only has to improve in-region network facilities to deliver the data that its own customers have requested.

C) Buck up and carry the traffic their customers are paying them to carry.

Lest I just sound like a complainer, I actually think this makes rational business sense.

The concept of peering was always "equal benefit", not "equal cost". No one ever compares the price of building last-mile transport to the cost of building huge data centers all over with content close to the users. The whole "bit-mile" thing represents an insignificant portion of the cost; long haul (in large quantities) is dirt cheap compared to last-mile or data center build costs. If you think of a pure content play peering with a pure eyeball play, there is equal benefit, in fact symbiosis; neither could exist without the other. The traffic flow will be highly asymmetric.

Eyeball networks also artificially cap their own ratios with their products. Cable and DSL are both 3x-10x down, x up products. Their TOS policies prohibit running servers. Any eyeball network with an asymmetric edge technology and a no-server TOS need only look in the mirror to see why their aggregate ratio is hosed.
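As a rough illustration (every number below is invented: a hypothetical 10:1 access tier and equal duty cycles in both directions), the access asymmetry alone roughly bounds the aggregate ratio such a network can present to its peers:

down_mbps, up_mbps = 30, 3            # a hypothetical 10:1 cable/DSL tier
subscribers = 1_000_000               # invented subscriber count
duty_cycle = 0.05                     # assume equal duty cycle both directions

inbound = subscribers * down_mbps * duty_cycle    # Mbps toward the eyeballs
outbound = subscribers * up_mbps * duty_cycle     # Mbps back out
print(f"best-case aggregate ratio ~ {inbound / outbound:.0f}:1 in:out")   # ~10:1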

Lastly, simple economics. Let's theorize about a large eyeball network with say 20M subscribers, and a large content network with say 100G of peering traffic to go to those subscribers.

* Choice A would be to squeeze the peer over its bad ratio, in the hope of getting them to pay for the traffic or to end up behind some other transit customer. Let's be generous and say $3/meg/month, so the 100G of traffic might generate $300,000/month of revenue. Let's even say you can squeeze 5 CDNs for that amount, $1.5M/month total.

* Choice B would be to squeeze the subscribers for more revenue to carry the 100G of "imbalanced traffic". Perhaps an extra $0.10/sub/month. That would be $2M/month in extra revenue.
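A quick sanity check of those two figures, using only the hypothetical numbers above:

# All figures are the hypothetical ones stated above, not real pricing.
peering_gbps = 100
price_per_mbps_month = 3.00           # "$3/meg/month"
cdns_squeezed = 5
choice_a = peering_gbps * 1000 * price_per_mbps_month * cdns_squeezed
print(f"Choice A: ${choice_a:,.0f}/month")        # -> $1,500,000/month

subscribers = 20_000_000
surcharge_per_sub = 0.10              # "$0.10/sub/month"
choice_b = subscribers * surcharge_per_sub
print(f"Choice B: ${choice_b:,.0f}/month")        # -> $2,000,000/month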

Now, consider the customer satisfaction issue. Would your broadband customers pay an extra $0.10 per month if Netflix and Amazon streaming never went out in the middle of a movie? Would they move up to a higher tier of service?

A smart end-user ISP would find a way to get uncongested paths to the content their users want, and make it rock-solid reliable. The good service will more than support not only cost recovery but higher revenue levels than squeezing peers. Of course, we have evidence that most end-user ISPs are not smart: they squeeze peers and have some of the lowest customer satisfaction rankings of not just ISPs, but all service providers! They want to claim consumers don't want Gigabit fiber, but then congest peers so badly there's no reason for a consumer to pay for more than the slowest speed.

Squeezing peers is a prime case of cutting off your nose to spite your face.

It's only cutting off your nose to spite your face if you look at the
internet BU in a vacuum. The issue comes when they can get far more money
from their existing product line than from being a dumb bandwidth pipe to
their customers.

They don't want reasonable, or even unreasonable, pricing per meg; they want
content to pay for access to their customers in the same range of cost that
they currently get from their other arm's subscribers, or else to sit down,
shut up, and stop competing with their much more profitable broadcast arm.
They can't just charge a premium on the internet access itself, as their
customers would leave due to competition from providers that *are* just
dumb pipes to transit-based content.

-Blake

Maybe someone could enlighten my ignorance on this issue.

Why is there a variable charge for bandwidth anyways?

In a very simplistic setup, if I have a router that costs $X and I run a $5
CAT6 cable to someone else's router which costs them $Y, plus a bit of
maintenance time to set up the connections, tweak ACLs, etc...

So now there's an interconnect between two providers at 1 gigabit, and the
only issue I see is the router needing to be replaced within Z years when
it dies or when it needs to handle a 10 gigabit connection.

So it seems I should be able to say "Here's a 1 gigabit connection. It
will cost $Q over Z years or you can pay $Q/Z yearly", etc...
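For example, something like this toy amortization (every number is invented for illustration):

# Toy amortization of a fixed-cost interconnect; all numbers are made up.
router_cost = 20_000.00               # $X: my side of the interconnect
cable_cost = 5.00                     # the $5 CAT6 cable
setup_cost = 4 * 150.00               # a few hours of turn-up, ACLs, etc.
lifetime_years = 5                    # Z: until it dies or needs 10 gig

total = router_cost + cable_cost + setup_cost     # $Q
print(f"${total:,.0f} up front, or ${total / lifetime_years:,.0f}/year over {lifetime_years} years")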

And wouldn't the costs go down if I had a bunch of dialup/DSL/cable/fiber
users as they are paying to lower the costs of interconnects so they get
content with less latency and fewer bottlenecks?

-A

Why is there a variable charge for bandwidth anyways?

In a very simplistic setup, if I have a router that costs $X and I run a $5
CAT6 cable to someone else's router which costs them $Y, plus a bit of
maintenance time to set up the connections, tweak ACLs, etc...

So now there's an interconnect between two providers at 1 gigabit, and the
only issue I see is the router needing to be replaced within Z years when
it dies or when it needs to handle a 10 gigabit connection.

Many things aren't as obvious as you state above. Take, for example, routing table growth. There's going to be a big boom in selling routers (or turning off full routes) when folks' devices melt at 512k routes in the coming years. Operating a router takes a lot of things, including power, space, people to rack it and swap failing or failed hardware, OPEX to the vendor to cover a support contract (assuming you have one), fiber cleaning kits, new patch cables, optics, etc.

These costs vary per city and location, as space/power can be priced differently. This doesn't include telecom costs, which may go up or down depending on whether you are using leased/dark/IRU or other services.

Building fiber or data centers can be quite capital-intensive. For fiber, expect $50-100k per mile (for example). It can be even more depending on the market and situation. Much of that cost is in the technicians' labor and the local permits, as opposed to what the fiber itself costs.

Many people have fiber they built 10 years ago, or even older. Folks like AT&T have been breathing life into their copper plant that was built over the past 100 years. Having that existing right-of-way makes permit costs lower, or in some cases allows you to get a blanket permit for entire cities/counties.

Some cable company has a presentation out there that I saw (maybe it was at a cable labs conference, or elsewhere) about average breaks per year. Those breaks cost you splicing crews that you either have to pay to be on call or outsource to a contract company for emergency restoration.

http://www.southern-telecom.com/AFL%20Reliability.pdf has some details about these.

So it seems I should be able to say "Here's a 1 gigabit connection. It
will cost $Q over Z years or you can pay $Q/Z yearly", etc...

And wouldn't the costs go down if I had a bunch of dialup/DSL/cable/fiber
users as they are paying to lower the costs of interconnects so they get
content with less latency and fewer bottlenecks?

There was a presentation by Vijay about the costs of customer support. Many states have minimum wages higher than the federal minimum wage, but even so, you need to pay someone, train them, and give them a computer, a manager, a phone, and other guidance to provide support for billing, customer retention, and sales.

I recall Vijay saying that if a customer phoned for support, it wiped out the entire profit from that customer for the lifetime of them being a customer. That may not still be the case, but there are costs each time you provide a staff person to answer that phone. Sometimes it's due to an outage, sometimes it's PEBKAC, sometimes you don't know and have to research the issue further.

Your overhead costs may be much higher due to the type of other costs you bear (pensions, union contracts, etc.) vs. a competitor that doesn't have that same structure. This is often seen in the airline industry.

I for one would like to see more competition in the last mile in the US, but I think the only people that will do it will be folks like sonic.net, google and other smaller independent telcos.

Take someone like Allband Communications in Michigan. They brought POTS service (just recently) to locations that Verizon/AT&T were unwilling to build. The person who wanted the phone service ended up having to start a telco to get POTS service there. They just went triple-play since it was the same cost to trench fiber as to put in the copper.

- Jared

Indeed. We're running PFC3CXLs and had already reallocated FIB TCAM to
768K IPv4 routes in anticipation. We also had maximum-prefix 500000 with a
warning at 90%, and today it triggered (or at least it's the first time I
noticed it)... we ran > 450K prefixes from 3 providers at about 1:30 EDT
today and got the warnings.
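For the curious, the arithmetic behind those thresholds, using only the numbers above:

# Numbers from this post: maximum-prefix 500000, warning at 90%,
# FIB TCAM reallocated to 768K IPv4 entries.
max_prefix, warn_pct = 500_000, 90
fib_ipv4_slots = 768 * 1024
current_prefixes = 450_000            # roughly where the warning tripped

print(f"warning fires at {max_prefix * warn_pct // 100:,} prefixes")       # 450,000
print(f"IPv4 FIB headroom: {fib_ipv4_slots - current_prefixes:,} entries")  # 336,432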

The end is near :-) If you haven't made provisions, please do so now :-)

Jeff

[snip]

Also, if you don't have data, best to keep your opinion to yourself,
because you might well be wrong.

The deuce you say! Replacing uninformed conjecture and conspiracy
theories with actual data? Next thing you know there will be actual
engineering discussions instead ...

It's like 2008 all over again, but worse. In 2008, the Sup2 was nearing the end of its ability to hold full v4 routes. The "good news" back then was that you could upgrade to Sup720-3BXLs for (IIRC) a little more than $10k per unit. This time, at least as of today, Cisco hasn't provided an upgrade path that'll keep the 6500 family usable as a full-table router once the "1 Million" route slots aren't enough to hold your 768k v4 routes and 128k v6 routes.

At this rate, if they do produce a PFC that takes the 6500 to several million routes, it's probably going to be too late for those to be available in any real quantity on the secondary market. Maybe that's the plan.

You're mistaken if you think that CDNs have an equal number of packets going in and out.

I'm aware that neither the quantity nor the size of packets in each direction is equal. I'm just hard-pressed to think of a reason why this matters, and so tend to hand-wave about it a bit… To a rough approximation, flows are balanced. Someone requests something, and an answer follows. Requests tend to be small, but if someone requests something large, a large answer follows. Conversely, people also send things, which are followed by small acknowledgements. Again, this only matters if you place a great deal of importance both on the notion that size equals fairness, and that fairness is more important than efficiency. I would argue that neither is true. I'm far more interested in seeing the cost of Internet service go down than in seeing two providers saddled with equally high costs in the name of fairness. And costs go down most quickly when each provider retains the full incentive to minimize its own costs, not when it has to worry about "fairness" in an arbitrary metric relative to other providers.

The only occasion I can think of when traffic flows of symmetric volume have an economic benefit is when a third party is imposing excess rent on circuits, such that the cost of upgrading capacity is higher than the cost of "traffic engineering" flows to fill reverse paths. And that's hardly the sort of mental pretzel I want carriers to have to worry about, instead of moving bits to customers.

I think the point here is that networks are nudging these decisions by making certain services suck more than others by way of preferential network access.

I agree completely that that's the problem. But it didn't appear to be what Benson was talking about.

                                -Bill

It's clear to me that you don't understand what I've said. But whether you're being obtuse or simply disagreeing, there is little value in repeating my specific points. Instead, in hope of encouraging useful discussion, I'll try to step back and describe things more broadly.

The behaviors of networks are driven (in almost all cases) by the needs of business. In other words, decisions about peering, performance, etc, are all driven by a P&L sheet.

So, clearly, these networks will try to minimize their costs (whether "fair" or not). And any imbalance between peers' cost burdens is an easy target. If one peer's routing behavior forces the other to carry more traffic a farther distance, then there is likely to be a dispute at some point - contrary to some hand-wave comments, carrying multiple gigs of traffic across the continent does have a meaningful cost, and pushing that cost onto somebody else is good for business.

This is where so-called "bit mile peering" agreements can help - neutralize arguments about balance in order to focus on what matters. Of course there is still the "P" side of a P&L sheet to consider, and networks will surely attempt to capture some of the success of their peers' business models. But take away the legitimate "fairness" excuses and we can see the real issue in these cases.
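For anyone unfamiliar with the idea, here is a minimal sketch (invented cities, volumes, and distances) of what a bit-mile comparison between two peers might look like:

# Hypothetical bit-mile comparison between peers A and B that interconnect
# in two cities. Each entry is (Gbps handed off, miles the RECEIVING network
# then carries it to reach its customers). All numbers are invented.
a_to_b = [(40, 1500), (10, 200)]      # traffic A hands to B at each exchange
b_to_a = [(5, 800), (5, 100)]         # traffic B hands to A at each exchange

def gbit_miles(handoffs):
    # distance-weighted burden on the receiving network
    return sum(gbps * miles for gbps, miles in handoffs)

print(f"A imposes {gbit_miles(a_to_b):,} Gb-miles on B")   # 62,000
print(f"B imposes {gbit_miles(b_to_a):,} Gb-miles on A")   # 4,500

The idea is to argue over that distance-weighted difference rather than the raw in/out byte ratio at the interconnects.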

Not that we have built the best (standard, interoperable, cheap) tools to make bit-mile peering possible... But that's a good conversation to have.

Cheers,
-Benson

Again, this only matters if you place a great deal of importance both on the notion that size equals fairness, and that fairness is more important than efficiency.
...

I think the point here is that networks are nudging these decisions by making certain services suck more than others by way of preferential network access.

I agree completely that that's the problem. But it didn't appear to be what Benson was talking about.

It's clear to me that you don't understand what I've said. But whether you're being obtuse or simply disagreeing, there is little value in repeating my specific points. Instead, in hope of encouraging useful discussion, I'll try to step back and describe things more broadly.

The behaviors of networks are driven (in almost all cases) by the needs of business. In other words, decisions about peering, performance, etc, are all driven by a P&L sheet.

This isn't exactly true and it turns out that the subtle difference from this fact is very important.

They are driven not by a P&L sheet, but by executives' opinions of what will improve the P&L sheet.

There is ample evidence that promiscuous peering can actually reduce costs across the board and increase revenues, image, good will, performance, and even transit purchases.

There is also evidence that turning off peers tends to hamper revenue growth, degrade performance, create a negative image for the organization, reduce good will, etc.

One need look no further than the history of SPRINT for a graphic example. In the early 2000s, when SPRINT started depeering, they were darn near the epicenter of internet transit. Today, they're yet another also-ran among major telco-based ISPs.

Sure, their peering policy is likely not the only cause of this decline in stature, but it certainly contributed.

So, clearly, these networks will try to minimize their costs (whether "fair" or not). And any imbalance between peers' cost burdens is an easy target. If one peer's routing behavior forces the other to carry more traffic a farther distance, then there is likely to be a dispute at some point - contrary to some hand-wave comments, carrying multiple gigs of traffic across the continent does have a meaningful cost, and pushing that cost onto somebody else is good for business.

Reasonable automation means that it costs nearly nothing to add peers at public exchange points once you are present at that exchange point. The problem with looking only at the cost of moving the bits around in this equation is that it ignores where the value proposition for delivering those bits lies.

In reality, if an eyeball ISP doesn't maintain sufficient peering relationships to deliver the traffic the eyeballs are requesting, the eyeballs will become displeased with said ISP. In many cases, this is less relevant than it should be because the eyeball network is either a true monopoly, an effective monopoly (30/10Mbps cable vs. 1.5Mbps/384k DSL means that cable is an effective monopoly for all practical purposes), or a duopoly where both choices are nearly equally poor.

In markets served by multiple high speed providers, you tend to find that consumers gravitate towards the ones that don't engage in peering wars to the point that they degrade service to those customers.

On the other hand, if a content provider does not maintain sufficient capacity to reach the eyeball networks in a way that the eyeball networks are willing to accept said traffic, the content provider is at risk of losing subscribers. Since content tends to have many competitors capable of delivering an equivalent service, content providers have less leverage in any such dispute. Their customers don't want to hear "You're on Comcast and they don't like us" as an excuse when the service doesn't work. They'll go find a provider Comcast likes.

The bottom line is that these ridiculous disputes are expensive to both sides and degrade service for their mutual customers. I make a point of opening tickets every time this becomes a performance issue for me. If more consumers did, then perhaps that cost would help drive better decisions from the executives at these providers.

The other problem that plays into this is, as someone noted, many of these providers are in the internet business as a secondary market for revenue added to their primary business. They'd rather not see their primary business revenues driven onto the internet and off of their traditional services. As such, there is a perceived P&L gain to the other services by degrading the performance of competing services delivered over the internet. Attempting to use this fact to leverage (extort) money from the content providers to make up those revenues also makes for an easy target in the board room.

This is where so-called "bit mile peering" agreements can help - neutralize arguments about balance in order to focus on what matters. Of course there is still the "P" side of a P&L sheet to consider, and networks will surely attempt to capture some of the success of their peers' business models. But take away the legitimate "fairness" excuses and we can see the real issue in these cases.

The problem I see with "bit mile peering" agreements is that the traffic measurement needed to make such an agreement function reliably, and verifiably for both sides, would likely cost more than moving the traffic in question. I'd hate to see the internet degrade to telco-style billing, where it often costs $0.90 of every $1 collected to cover the call accounting and billing systems.

Not that we have built the best (standard, interoperable, cheap) tools to make bit-mile peering possible... But that's a good conversation to have.

It might be an interesting conversation to have, but at the end of the day, I am concerned that the costs of the tools and their operation would exceed the cost being accounted for. In such a case, it is often better to simply write off the cost.

It's like trying to recover all the screws/nuts/washers that fall on the floor in an assembly plant in order to save money. The cost of retrieving and sorting them vastly exceeds their value.

Owen

i have not been able to find it easily, but some years back rexford
and others published on a crypto method for peers to negotiate
traffic adjustment between multiple peering points with minimal
disclosure. it was a cool paper.

I don't know Jen's work on this off the top of my head, but Ratul
Mahajan had some papers on this too (for his dissertation). One is
called Wiser.

good stuff. but i thought the paper i had in mind was normal bgp and
the exchanges were negotiations of prefix and med policies at mutual
peering points using crypto to cloak one's internals. but i could
easily be wrong.

randy

That's easily solved by padding the ACK to 1500 bytes as well.

Matt

Or indeed by the media player sending large amounts of traffic back to the CDN via auxiliary HTTP POST requests?

Neil

That's easily solved by padding the ACK to 1500 bytes as well.

Matt

Or indeed by the media player sending large amounts of traffic back to the CDN via auxiliary HTTP POST requests?

Neil

That would assume that the client has symmetrical upstream bandwidth over which to send such datagrams. At least in the US, that is the exception, not the rule.

Owen