Keeping Track of Data Usage in GB Per Port

I see in past news articles that cable companies have been inaccurately
calculating customers' data usage against their monthly GB quotas. My
question is: how do you properly determine how much traffic, in bytes, a
port passes per month? Is it different if we are talking about an Ethernet
port on a Cisco switch vs. a DSL port on a DSLAM, for example? I would
think these access switches would have some sort of stat you can read,
similar to a utility meter on a house: see what it was at last month, see
what it is at this month, subtract last month's from this month's, and the
difference is the total amount used for that month.

Why are the cable companies having such a hard time? Is it hard to
calculate data usage per port? Is it done with SNMP or some other method?

What is the best way to monitor a 48-port switch, for example, and know
how much traffic each port used?

https://gigaom.com/2013/02/07/more-bad-news-about-broadband-caps-many-meters-are-inaccurate/

Assume a 20 Mbit connection. How many times can this roll over a
32-bit counter in a month if it's going full blast?
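Working that question through (a quick back-of-the-envelope sketch, assuming a 30-day month and a fully saturated link):

```python
# How often does a 32-bit octet counter wrap at a sustained 20 Mbit/s?
LINK_BPS = 20_000_000           # 20 Mbit/s, fully utilized
BYTES_PER_SEC = LINK_BPS / 8    # 2.5 MB/s
COUNTER_MAX = 2 ** 32           # a 32-bit octet counter holds ~4.29 GB

seconds_per_wrap = COUNTER_MAX / BYTES_PER_SEC   # ~1718 s, about 28.6 minutes
month_seconds = 30 * 24 * 3600                   # a 30-day month
wraps_per_month = month_seconds / seconds_per_wrap

print(f"Counter wraps every {seconds_per_wrap / 60:.1f} minutes")
print(f"About {wraps_per_month:.0f} wraps in a 30-day month")
```

So at full blast the counter wraps roughly every half hour, about 1,500 times a month, which is exactly why the next reply says to insist on 64-bit counters.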

If your switch doesn’t support 64-bit counters, return it.

- Jared

Folks, use sFlow with RRDtool!

Quite awesome & handy

So based on the responses I have received so far, it seems cable was a
complicated example, with service flows involved. What if we are talking
about something simpler, like keeping track of how much data flows in and
out of a port on a switch in a given month? I know you can use SNMP, but I
believe that polls in intervals and takes samples, which isn't really
accurate, right?

It depends on what you're talking about.

Network devices implementing the SNMP IF-MIB have counters for each
interface that, when polled, show the cumulative number of bytes
transmitted and received.
Conventionally, network operators poll these counter values, compute
the difference from the last poll, and extrapolate a rate (bit volume
over a time unit) from that. Often this is done over a 5-minute
interval, which introduces some averaging error.

However, if an operator is just computing cumulative transfer, it's pretty easy:
just keep summing the counter deltas from poll to poll.
It's easy to get this wrong if the counter size is too small, or if it
wraps more than once between polls.
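The delta computation above can be sketched in a few lines (a minimal illustration, not any vendor's actual poller):

```python
def counter_delta(prev, curr, counter_bits=32):
    """Difference between two successive counter reads, surviving one wrap.

    Modulo arithmetic recovers the true delta if the counter wrapped at
    most once between polls. If it wrapped more than once, the lost
    wraps are undetectable -- which is why fast links need 64-bit
    counters or short poll intervals.
    """
    modulus = 2 ** counter_bits
    return (curr - prev) % modulus

# One wrap between polls: curr < prev, yet the delta comes out right.
prev, curr = 4_294_000_000, 1_500_000
delta = counter_delta(prev, curr)          # 2,467,296 bytes
rate_bps = delta * 8 / 300                 # bytes over a 5-minute poll -> bits/s
```

The same function feeds both use cases: extrapolate a rate by dividing by the poll interval, or accumulate transfer by summing the deltas.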

If a large telecom can't get billing correct, they shouldn't be
allowed to do business.
Easier solution: stop metering customers, and sink more money into
expanded infrastructure.

You may want to start learning more at http://www.netforecast.com/wp-content/uploads/2014/05/NFR5116_Comcast_Meter_Accuracy_Report.pdf. This report is written by Netforecast – the same firm interviewed by GigaOm in the story link you provided.

Their first audit was in 2009: http://www.netforecast.com/wp-content/uploads/2012/06/NFR5101_Comcast_Usage_Meter_Accuracy_Original.pdf

Their 2nd audit was in 2010: http://www.netforecast.com/wp-content/uploads/2012/06/NFR5101_Comcast_Usage_Meter_Accuracy.pdf

And here is a report on best practices for data usage in cable networks: http://www.netforecast.com/wp-content/uploads/2012/10/NFR5110_ISP_Data_Usage_Meter_Specification_Best_Practices_for_MSOs1.pdf

- Jason Livingood
Comcast

There are lots of ways to do it. Cable uses IPDR, which is baked into
DOCSIS standards.
http://en.wikipedia.org/wiki/Internet_Protocol_Detail_Record

>So based on the response I have received so far it seems cable was a
>complicated example with service flows involved.

Don't forget that between your port on your DSL/Cable modem and the actual
port they may be monitoring there could be transitions through various
protocols that can chew up bandwidth with framing bits and whatnot.

See: http://www.yourdictionary.com/cell-tax as an example.

In bad but common cases, this overhead can eat as much as one fifth of
the bandwidth.

IPDR under DOCSIS, and generally RADIUS or TACACS(+) for DSL. I'm
personally unsure about fiber/FiOS deployments (never been near enough to
know).

Flow export (sFlow, NetFlow, IPFIX, etc.) generally doesn't scale and is
woefully inaccurate.

This all becomes even more complicated when some traffic isn't counted on a given service (e.g. "free Facebook"), which generally necessitates some level of flow-based accounting, even if it's just collecting flows for the free traffic to subtract from the port counters. I can see how it could get messy.
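That subtract-the-free-traffic approach could look something like this (a hypothetical sketch: the flow records, the zero-rated prefix, and the function name are all made up for illustration):

```python
import ipaddress

# Assumed zero-rated destination prefix (documentation range, purely illustrative).
ZERO_RATED = [ipaddress.ip_network("203.0.113.0/24")]

def billable_bytes(port_counter_bytes, flows):
    """Subtract zero-rated flow bytes from the raw port-counter total.

    flows: iterable of (src_ip, dst_ip, byte_count) records collected
    for the same billing period as the port counter.
    """
    free = sum(
        byte_count
        for _src, dst, byte_count in flows
        if any(ipaddress.ip_address(dst) in net for net in ZERO_RATED)
    )
    return port_counter_bytes - free

flows = [("198.51.100.7", "203.0.113.10", 5_000_000),   # to the "free" prefix
         ("198.51.100.7", "192.0.2.20", 20_000_000)]    # ordinary traffic
billable = billable_bytes(25_000_000, flows)            # 20,000,000 bytes
```

The messiness the poster anticipates shows up exactly here: the flow records and the port counter must cover the same interval, and any sampling in the flow export skews the subtraction.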

So it looks like DOCSIS cable has a great solution with IPDR, but what
about DSL, GPON, and regular Ethernet networks?

It was mentioned that DSL uses RADIUS, but most new DSL systems no longer
use PPPoE, so I don't believe RADIUS is a viable option.

What about Wifi Access Points? What would be the best way to track usage
across these devices?

There's no correlation between PPPoE and RADIUS. Many (if not all) BRAS/BNG platforms will support RADIUS-based accounting for IPoE sessions.

The majority of accounting is done that way, with outliers using some other mechanism (Diameter, proprietary vendor billing solutions, flow-based platforms, or counters elsewhere on the network).

WiFi in my experience also typically uses a RADIUS-based approach, although it can depend on the deployment context.

AJ

If you're measuring per month, there is no reason you can't use SNMP: poll that 64-bit counter once per day or so, and then add up the deltas each month. It'll be accurate enough. SNMP isn't sampled; if you poll the ifHCInOctets/ifHCOutOctets counters, they just count upward, and if you're not worried about the switch rebooting, you could poll once per month and still be accurate. I'd say polling once or a few times a day protects well enough against that.
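Frank's daily-poll scheme reduces to summing successive deltas (a minimal sketch with simulated readings, assuming 64-bit counters and no reboot mid-month):

```python
def monthly_total(readings, counter_bits=64):
    """Sum successive counter deltas over a month of polls.

    readings: list of (timestamp, octets) pairs in time order.
    The modulo handles a single counter wrap between polls; a device
    reboot (counter reset to zero) would still inflate the total, which
    is one reason to poll daily rather than once a month.
    """
    modulus = 2 ** counter_bits
    total = 0
    for (_, prev), (_, curr) in zip(readings, readings[1:]):
        total += (curr - prev) % modulus
    return total

# Simulated daily 64-bit octet readings: ~10 GB/day for 30 days.
readings = [(day, day * 10_000_000_000) for day in range(31)]
total_gb = monthly_total(readings) / 1e9    # 300.0 GB for the month
```

With 64-bit counters even a saturated 10 Gbit/s port takes decades to wrap, so daily polling leaves enormous headroom.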

For GPON and Ethernet it's just SNMP counters.

Frank