modeling residential subscriber bandwidth demand

How do people model and try to project residential subscriber bandwidth demands into the future? Do you base it primarily on historical data? Are there more sophisticated approaches that you use to figure out how much backbone bandwidth you need to build to keep your eyeballs happy?

NetFlow is great for historical data, but I guess what I am really asking is: how do you anticipate the load that your eyeballs are going to bring to your network, especially in the face of transport tweaks such as QUIC and TCP BBR?

Tom

Residential whatnow?

Sorry, to be honest, there really isn’t any.

I suppose if one is buying lit services, this is important to model.

But an incredibly huge residential network can be served by a single basic 10/40G backbone connection or two. And if you own the glass, it’s trivial to spin up very many of those. Aggregate in metro cores, put the Netflix OC there, done.

Then again, we don’t even do DNS anymore; we’re <1 ms from Cloudflare, so in 2019 why bother?

I don’t miss the days of ISDN backhaul, but those days are long gone. And I won’t go back.

An article was published recently that discusses the possible impact of cloud-based gaming on last-mile capacity requirements, as well as on external connections. The author suggests that decentralized video services won't be the only big user of last-mile capacity.
https://medium.com/@rudolfvanderberg/what-google-stadia-will-mean-for-broadband-and-interconnection-and-sony-microsoft-and-nintendo-fe20866e6c5b

We use the trendline/95th-percentile trendline features that are built into a lot of graphing tools… SolarWinds, and I think even the CDN cache portals have trendlines, forecasts, etc. My boss might use other growth percentages gleaned from previous years… but yeah, like another person mentioned, the more history you have the better it seems… unless there is some major shift for some strange big reason… but have we ever seen that with internet usage growth? …yet?
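If it helps, here's a minimal sketch of that kind of trendline projection (not what any particular tool does internally): fit a straight line to historical monthly 95th-percentile figures and extend it forward. The sample numbers below are made up; in practice you'd export the history from whatever you graph with.

```python
# Minimal sketch of a trendline forecast: fit a linear trend to historical
# monthly 95th-percentile utilization and project it forward. The sample
# numbers are made up; replace them with an export from your graphing tool.
import numpy as np

# Monthly 95th-percentile utilization in Mbps (hypothetical history)
history_mbps = [410, 425, 450, 470, 500, 520, 560, 590, 615, 650, 690, 720]

months = np.arange(len(history_mbps))
slope, intercept = np.polyfit(months, history_mbps, 1)  # simple linear fit

# Project 12 months ahead
future = np.arange(len(history_mbps), len(history_mbps) + 12)
forecast = slope * future + intercept

for m, f in zip(future, forecast):
    print(f"month +{m - len(history_mbps) + 1:2d}: ~{f:.0f} Mbps")
```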

I mean, has internet bandwidth usage ever gone down nationally/globally, similar to a graph of the housing market in 2007/2008?

-Aaron

“…especially in the face of transport tweaks such as QUIC and TCP BBR?”

Do these QUIC and TCP BBR tweaks change bandwidth utilization as we’ve known it for years?

-Aaron

We have GB/mo figures for our customers for every month for the last ~10 years. Is there some simple figure you’re looking for? I can tell you offhand that I remember we had accounts doing ~15 GB/mo, and now we’ve got accounts doing ~1500 GB/mo at similar monthly rates.
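As a rough sanity check, and assuming those are per-account monthly totals, a 100x increase over ten years works out to something like 58% compound annual growth:

```python
# Back-of-the-envelope check on the growth quoted above: ~15 GB/mo ten years
# ago to ~1500 GB/mo today is a 100x increase. Compound annual growth rate:
start_gb, end_gb, years = 15, 1500, 10
cagr = (end_gb / start_gb) ** (1 / years) - 1
print(f"{cagr:.1%} per year")  # roughly 58% annual growth in GB/mo per account
```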

I don't see how QUIC and BBR are going to change how much bandwidth is flowing.

If you want to make your eyeballs happy, then make sure you're not congesting your upstream links. Aim for at most 50-75% utilization as a 5-minute average at peak hour (graph by polling interface counters every 5 minutes). Depending on your growth curve, you might need to initiate upgrades early enough that they're complete before utilization hits 75%.
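A rough sketch of that 5-minute utilization math, assuming a 10G uplink and illustrative counter values (a real poller would use SNMP, e.g. ifHCInOctets, and handle counter wraps):

```python
# Poll the interface octet counter every 300 seconds, take the delta,
# and compare the resulting 5-minute average against the 50-75% thresholds.
POLL_INTERVAL_S = 300
LINK_SPEED_BPS = 10_000_000_000  # 10G uplink (assumed)

def utilization(prev_octets: int, curr_octets: int) -> float:
    """Fraction of link capacity used over one polling interval."""
    bits = (curr_octets - prev_octets) * 8
    return bits / (POLL_INTERVAL_S * LINK_SPEED_BPS)

# Illustrative counter values only
util = utilization(prev_octets=1_200_000_000_000, curr_octets=1_500_000_000_000)
print(f"5-min average: {util:.0%}")
if util > 0.75:
    print("over 75% at peak -> upgrade should already be in flight")
elif util > 0.50:
    print("over 50% at peak -> start planning the upgrade")
```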

If you have thousands of users, then typically you just look at the statistics per user and extrapolate. I don't believe this has fundamentally changed in the past 20 years; it's still best common practice.
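And a minimal sketch of that per-user extrapolation, with the subscriber count, per-sub peak rate, and growth rate all placeholder assumptions to be replaced with your own measurements:

```python
# Measure peak-hour Mbps per subscriber today, assume a growth rate,
# and size the aggregate some years out. All inputs are assumptions.
def required_capacity_mbps(subscribers: int,
                           peak_mbps_per_sub: float,
                           annual_growth: float,
                           years_ahead: int) -> float:
    """Aggregate peak demand after extrapolating per-sub usage forward."""
    return subscribers * peak_mbps_per_sub * (1 + annual_growth) ** years_ahead

# e.g. 5,000 subs at 2.5 Mbps/sub today, 30%/yr growth, sized 3 years out
print(f"{required_capacity_mbps(5000, 2.5, 0.30, 3) / 1000:.1f} Gbps")
```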

If you get into the game of running your links full for parts of the day, then you're into the game of trying to figure out QoE values, which might mean you spend more time on that than the upgrade would cost.

+1 on this. It’s been more than 10 years since I’ve been responsible for a broadband network, but I have friends that still play in that world and do some very good work on making sure their models are very well managed, with more math than I ever bothered with. That being said, if I had used the methods I used back in the ’90s, they would have fully predicted the per-sub growth, including all the FB/YouTube/Netflix traffic we have today. The “rapid” growth we saw in the ’90s and the 2000s, and even this decade, is all magically the same curve; we’re just further up the incline. The question is whether it will continue another 10+ years, where the growth rate is nearing straight up :)

-jim

+1 Also on this.

But for something that’s simpler and actionable now, yeah, just make sure that your upstream and peering(!!) links are not congested. I agree that the 50-75% is a good target not only for the lead time to bring up more capacity, but also to allow for spikes in traffic for various events throughout the year.

Louie,

It’s almost like us old guys knew something, and did know everything back then; the more things have changed, the more they have stayed the same :)

-jim

I think folks sometimes struggle with how to deal with aggregate scale and growth versus what happens in a purely linear per-subscriber model.

The first 75 users look a lot different from the next 900. You get a different population scale and average usage.

I could roughly estimate some upper-bound numbers for peak internet usage across the population of the earth, but in most cases, if you have a 1G connection, you can support 500-800 subscribers these days. Ideally you can get a 10G link for a reasonable price. Your scale also starts to look different once you're far enough along to work with “the content guys”.
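For reference, here's what those figures imply per subscriber at peak (just the arithmetic on the numbers above, nothing more):

```python
# 500-800 subscribers sharing a 1G link implies this average per sub at peak:
link_mbps = 1000
for subs in (500, 800):
    print(f"{subs} subs -> ~{link_mbps / subs:.2f} Mbps average per sub at peak")
# roughly 2.0 and 1.25 Mbps per subscriber, respectively
```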

Thursdays are still the peak because date night is still generally Friday.

- Jared

Certainly.

Projecting demand is one thing. Figuring out what to buy for your backbone, edge (uplink & peering), and colo (for CDN caches too!), and at what scale and growth rate, is quite another.

And yeah, Jim, overall, things have stayed the same. There are just nuances added by caches, gaming, OTT streaming, and some IoT (like always-on home security cams), plus better tools now for network management and network analysis.

FWIW, I have 250 subscribers sitting on a 100M fiber into TorIX. I have had no complaints about speed in 4 1/2 years. I have been planning to bump them to 1G for the last 4 years, but there is currently no economic justification.

  paul

I would say this is perhaps atypical but may depend on the customer type(s).

If they’re residential and use OTT data then sure. If it’s SMB you’re likely in better shape.

- Jared

Mixed residential (ages 25 - 75, 1 - 6 people per unit), a group who worked together to keep costs down. Works well for them. Friday nights we get to about 85% utilization (Netflix); other than that, it usually sits between 25 - 45%.

  paul

I’m mostly just wondering what others do for this kind of planning - trying to look outside of my own experience, so I don’t miss something obvious. That growth in total transfer that you mention is interesting.

I always wonder what the value of trying to predict utilization is anyway, especially since bandwidth is so cheap. But I figure it can’t hurt to ask a group of people where I am highly likely to find somebody smarter than I am :)

I know FTTH footprints where peak evening average per customer is 3-5 megabit/s. I know others who claim their customers only average equivalent 5-10% of that.

It all depends on what services you offer. Considering my household has 250/100 for 40 USD a month, I'd say your solution above wouldn't be enough to deliver an acceptable service to even 10 households.

A 100/100 enterprise connection can easily support hundreds of desktop users if not more. It’s a lot of bandwidth even today.

-Ben

And what happens when a significant fraction of those users fire up Netflix with an HD stream?

We're discussing residential, not corporate, connections, I thought...

Paul,

I have a hard time seeing how you aren’t maxing out that circuit. We see about 2.3 Mbps average per customer at peak with a primarily residential user base. That would be about 575 Mbps average at peak for 250 users on our network, so how do we use 575 Mbps while you say your users don’t even top 100 Mbps at peak? It doesn’t make sense that our customers use 6 times as much bandwidth at peak as yours do.

We’re a rural and small-town mix in Minnesota, with no urban areas in our coverage. 90% of our customers are on a plan of 22 Mbps or less, and the other 10% are on a 100 Mbps plan, but their average usage isn’t really much higher.

Enterprise environments can easily handle many more users on a 100-meg circuit because they aren’t typically streaming video like they would be at home. Residential usage per person will always be much higher than for most enterprise users.