Capacity planning: transit vs. last mile

Canada is about to hold a three-week hearing to discuss whether the
internet is important and whether the farcical 5/1 Mbps speed promoted
by the government is adequate.

In this day and age, it would be easy to just set FTTP as the target
technology and be done with it, but too many want a policy that is
technologically neutral.

To this end, I will be proposing that subsidized deployments not only
meet advertised service speed standards, but also meet a capacity per
end user metric for the last mile technology as well as for the
backhaul/transit.

(One of the often-subsidized companies deploys fixed wireless which
delivers the advertised speed for the first week, but routinely gets
oversubscribed after a while, and customers feel like they are on
dialup.)

I know that sufficiently large ISPs currently provision just over
1 Mbps of transit capacity per end user (so 800-1000 customers per
1 Gbps of transit). That number rises by over 30% a year as usage
grows. (The CRTC can get exact figures from telecom operators and
compute aggregate, industry-wide traffic growth to make a yearly
adjustment to the standard.)
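
(As a sanity check, here is a minimal sketch of that yearly
adjustment, assuming the ~1 Mbps-per-user baseline and ~30% growth
figures above; the names and values are illustrative, not a proposed
standard:)

    # Per-user transit standard, adjusted yearly for traffic growth.
    BASE_MBPS_PER_USER = 1.0   # ~1 Gbps of transit per ~1000 customers
    ANNUAL_GROWTH = 0.30       # assumed industry-wide growth rate

    def transit_standard_mbps(years_from_now):
        # Compound growth: the standard for year t is base * (1 + r)^t.
        return BASE_MBPS_PER_USER * (1 + ANNUAL_GROWTH) ** years_from_now

    for year in range(6):
        print(year, round(transit_standard_mbps(year), 2))
    # Year 5 comes out to ~3.71 Mbps/user, i.e. the standard nearly
    # quadruples over five years of 30% growth.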

QUESTION:

Say the policy is 1 Mbps per customer for 1000 customers or more. Is
there some formula (approximate or precise) to calculate how that
1 Mbps changes for smaller samples? (Like 500 customers, or 200?)

And on the last mile portion, where one typically has few users on
each shared capacity segment (fixed wireless, FTTP, cable), are there
fairly standard oversubscription ratios based on the average service
speed sold in that neighbourhood? (For instance, if I have 100
customers with an average subscribed speed of 15 Mbps, how much
capacity should the antenna serving those customers have?)

I realise that each ISP guards its oversubscription ratios as highly
proprietary, but aren't there generic industry-wide recommendations?
My goal is to have some basic standards that prevent the gross
oversubscription that results in unusable service.

As well, I want a company pitching a broadband deployment to be able
to demonstrate that the technology being deployed will last X years
because it has sufficient capacity to handle the number of customers
as well as the predicted growth in usage each year.

Any help? Comments on whether this is crazy? A sanity check?

Good question. There are a lot of papers on traffic models, but it is
still an open issue...

takashi.tome

You will find that total data download per month is unrelated to service
speed except for very slow service.

If you have too few users the pattern becomes chaotic.

> You will find that total data download per month is unrelated to
> service speed except for very slow service.

Yep. Netflix still takes 7 Mbps even if you are on a 1 Gbps service.
However, with families, higher speeds allow more family members to be
active at the same time, and this could begin to have a visible impact
(if it doesn't already).

Note that a rural co-op deployed FTTP across its territory, but only
offers a 40 Mbps maximum subscription because it can't afford the
transit from the single incumbent that offers transit in the region.
So I assume that with under 1000 customers, service speed starts to
matter when sizing the transit capacity.

> If you have too few users the pattern becomes chaotic.

My question has to do with how one determines the threshold where you
start to get more chaotic patterns and need more capacity per customer
than you would with over 1000 customers.

My goal is to suggest some standard to prevent gross underprovisioning
by ISPs who win subsidies to deploy in a region.

From what I am reading, the old standard (a contention ratio of 50:1
for residential service) is no longer valid as a single metric,
especially for smaller deployments with higher speeds.

For the last mile, GPON is easy, as it is essentially 32 homes per
GPON link. For cable, the CRTC would have its own numbers for homes
per node. (BTW, Australia's NBN V2.0 plans to have 600 homes per node,
since they don't have the budget to split nodes.)

But for fixed wireless and satellite, there needs to be some standard
to prevent gross oversubscription.

Say you have a fixed wireless tower with enough spectrum to broadcast
40 Mbps. How do you calculate how many customers, at an average
subscribed speed of 15 Mbps, can comfortably fit on this antenna?

I realise existing ISPs use monthly 95th-percentile statistics to see
how much capacity is used and whether the link has reached saturation,
at which point there should be a stop-sell or more spectrum acquired.
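
(A minimal sketch of that measurement, assuming 5-minute utilisation
samples in Mbps; the 90% trigger is illustrative, not a standard:)

    # 95th percentile: drop the top 5% of samples and report the
    # highest remaining value -- the usual saturation yardstick.
    def p95(samples_mbps):
        ordered = sorted(samples_mbps)
        return ordered[max(0, int(len(ordered) * 0.95) - 1)]

    month = [12, 15, 18, 22, 35, 38, 39, 40, 40, 40]  # toy sample set
    LINK_CAPACITY_MBPS = 40
    if p95(month) >= 0.9 * LINK_CAPACITY_MBPS:
        print("approaching saturation: stop-sell or acquire spectrum")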

But from a regulatory point of view, in evaluating a bid to serve a
region, the regulator would need rules to establish whether the bidder
will be able to serve the population with whatever solution it
proposes.

Does the FCC have any such rules in the USA, or are its broadband
deployment subsidies based only on the ISP marketing speeds that meet
the FCC's definition of "broadband"? (With no concern about whether
those speeds will actually be delivered.)

Hi

40 Mbps is fast enough that even a family will not use much more data
at a higher speed. So that transit argument is a poor excuse.

A formula could be:

[max speed sold to customers] * 2 + [number of customers] * [average peak
number]

The number 2 in the above is not well researched but I would expect it to
be in that ballpark.

The [average peak number] is 2 Mbps for our network. Others seem to
claim they get away with only 1 Mbps, but our users are doing 2 Mbps
for sure. I believe this is probably because we have many families; it
does matter when each customer is really a family of five rather than
a single person.

The formula holds up for both small and large numbers of users. With
few users the left side of the formula dominates, and with many users
the right side does.
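
As a sketch (the 100 Mbps top tier here is a hypothetical example, and
the 2 Mbps peak average is our number from above):

    # capacity = [max speed sold] * 2 + [customers] * [average peak]
    def required_capacity_mbps(max_speed_sold, n_customers, avg_peak=2.0):
        return max_speed_sold * 2 + n_customers * avg_peak

    for n in (1000, 500, 200, 10):
        print(n, required_capacity_mbps(100, n))
    # 1000 -> 2200, 500 -> 1200, 200 -> 600, 10 -> 220 (Mbps)

This also speaks to your scaling question: the per-customer
requirement rises as the customer count shrinks (2.2 Mbps each at 1000
customers, but 22 Mbps each at 10), because the fixed left-hand term
dominates.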

Regards,

Baldur

"enough spectrum to broadcast 40mbps."

I'd say you need better equipment. :-)

While the formula provided by Baldur may very well be accurate, it
relies on projections of several data points (advertised speed,
average per-customer speed, customer counts, customers online at peak)
which may or may not turn out to be accurate, or which could be fudged
to create misleading estimates. I think a simpler projection, taking
harder numbers into account, may be easier to implement as a
requirement and less likely to be gamed by someone.

I think it would be sufficient to simply specify a minimum Mbps bound
per customer (aka a committed information rate, or CIR) for a network
build. As you and others have stated, typical current use is ~1-2 Mbps
per household/customer. If we forecast growth to be 30% per year, a
10-year build might anticipate using a last mile technology capable of
providing a ~13.8 Mbps CIR in 10 years' time [using the compound
interest formula, where A = 1 Mbps * (1 + 0.3)^10]. This projection
uses growth, which has historical precedent, and current usage, which
you indicate is already being measured and reported.

Something that is less static would be the technology improvements. An
operator might be able to assume GPON now and 10G-PON in a few years
(or DOCSIS 3.0 -> 3.1) to meet future requirements.
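
A minimal sketch of that projection (the defaults are the assumed
figures from the paragraph above, not mandated values):

    # Compound-growth CIR projection: A = P * (1 + r)^t.
    def projected_cir_mbps(current_mbps=1.0, growth=0.30, years=10):
        return current_mbps * (1 + growth) ** years

    print(round(projected_cir_mbps(), 1))  # ~13.8 Mbps/customer in 10 yrs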

Regarding your question about how such a formula might scale,
especially on the small end: I think it's a valid concern. This could
be measured in today's rural HFC networks, or in wireless networks
where it may be common to have fewer than 50 customers sharing
40-100 Mbps of access media. I think you'll find that at this small a
scale, the 1-2 Mbps per customer number probably still holds true.

You obviously won't be able to sell 1000 Mbps service (or even
100 Mbps) to these customers, given the access media limitations. This
could be addressed elsewhere in your requirements. For example, if
operators were required to "...offer services capable of delivering
100 Mbps service to each end user with a CIR derived from table A",
the operator would be forced to provide a 100 Mbps+ pipe even if it
were only serving 10 customers on that pipe that see ~20 Mbps of
aggregate peak-period usage today. That 80 Mbps of headroom is not
wasted; it is intentionally there for your users' future needs and to
provide burst capacity. A smart operator would probably avoid such
cases and attempt to aggregate users in ways that minimize expense and
make the most of the access technology.
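
A sketch of how such a requirement would size the pipe (table A and
the 100 Mbps tier are hypothetical):

    # The access pipe must cover the top advertised tier even when few
    # customers share it, so the headroom is deliberate, not wasted.
    def access_pipe_mbps(n_customers, cir_mbps, top_tier_mbps=100):
        return max(top_tier_mbps, n_customers * cir_mbps)

    print(access_pipe_mbps(10, 2))    # 100 Mbps pipe vs ~20 Mbps peak
    print(access_pipe_mbps(1000, 2))  # 2000 Mbps once customers dominate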

--Blake