#However, my question is simply.. for ISPs promising broadband service.
#Isn't it simpler to just announce a bandwidth quota/cap that your "good"
#users won't hit and your bad ones will?
Quotas may not always control the behavior of concern.
As a hypothetical example, assume customers get 10 gigabytes worth of
traffic per month. That traffic could be more-or-less uniformly
distributed across all thirty days, but it is more likely that there
will be some heavy usage days and light usage days, and some busy times
and some slow times. Shaping or rate limiting traffic will shave the
peak load during high demand days (which is almost always the real issue),
while quota-based systems typically will not.
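To make that concrete, here's a toy sketch (all names and numbers invented, nothing operator-specific) contrasting a hard monthly quota with a simple per-day rate ceiling over a month of traffic that includes a few heavy days:

```python
# Hypothetical illustration: how a monthly quota vs. a rate cap each
# affect a month of daily traffic that includes a few heavy-usage days.

QUOTA_GB = 10.0            # monthly allowance (invented number)
RATE_CAP_GB_PER_DAY = 0.5  # shaping expressed as a per-day ceiling, for simplicity

# 30 days of usage: mostly light, with a handful of heavy days mixed in.
daily_gb = [0.1] * 30
for heavy_day in (4, 12, 20):
    daily_gb[heavy_day] = 3.0

def apply_quota(usage, quota):
    """Hard quota: traffic passes untouched until the allowance is exhausted."""
    shaped, used = [], 0.0
    for day in usage:
        allowed = min(day, max(quota - used, 0.0))
        shaped.append(allowed)
        used += allowed
    return shaped

def apply_rate_cap(usage, cap):
    """Rate limit: every day is clipped to the cap; no monthly bookkeeping."""
    return [min(day, cap) for day in usage]

quota_shaped = apply_quota(daily_gb, QUOTA_GB)
rate_shaped = apply_rate_cap(daily_gb, RATE_CAP_GB_PER_DAY)

# The quota leaves the early heavy days untouched -- peak load is unchanged
# until the allowance runs out -- while the rate cap shaves every peak.
print(max(quota_shaped))  # 3.0: the first heavy day sails through under the quota
print(max(rate_shaped))   # 0.5: every peak is clipped
```

The quota only bites late in the period, after the damage on the peak days is already done; the rate cap addresses the peaks directly.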
Quota systems can also lead to weird usage artifacts. For example,
assume that users can track how much of their quota they've used --
as you get to the end of each period, people may be faced with
"use it or lose it" situations, leading to end-of-period spikes in usage.
Quotas (at least in higher education contexts) can also lead to
things like account sharing ("Hey, I'm out of 'credits' for this
month -- you never use yours, so can I log in using your account?"
"Sure...") -- even if acceptable use policies prohibit that sort of thing.
And then what do you do with users who reach their quota? Slow them
down? Charge them more? Cut them off? All of those options are
possible, but each can turn into its own hellish pain.
And finally, indiscriminately manipulating total traffic could also
be bad if customers have a third-party VoIP service running, and
you block/throttle/otherwise mess with what should be untouchable
voice traffic when they need to make a 911 call or whatever.
#Operationally, why not just lash a few additional 10GE cross-connects
#and let these *paying customers* communicate as they will?
I think the bottleneck is usually closer to the edge...
Part of the issue is that consumer connections are often priced
predicated on a relatively light usage model, and an assumption
that much of that traffic may be amenable to "tricks" (such as
passive caching, or content served from local Akamai stacks, etc.
-- although this is certainly less of an issue than it once was).
Replace that model with one where consumers actually USE the entire
connection they've purchased, rather than just some small statistically
multiplexed fraction thereof, and make all traffic encrypted/opaque
(and thus unavailable for potential "optimized delivery") and the
default pricing model can break.
You then have a choice to make:
-- cover those increased costs (all associated with a relatively small
number of users living in the tail of the consumption distribution)
by increasing the price of the service for everyone (hard in a highly
competitive market), or
-- deal with just that comparative handful of users who don't fit the
presumptive model (shape their traffic, encourage them to buy from
your competitor, decline to renew their contract, whatever).
The latter is probably easier than the former.
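A quick back-of-the-envelope sketch of that tail (all numbers invented, purely to illustrate the shape of the distribution, not any real subscriber base):

```python
# Toy illustration of the "tail of the consumption distribution":
# most subscribers fit the light-usage pricing model, while a small
# fraction drives a disproportionate share of the traffic (and cost).

light_users = [5.0] * 990   # GB/month for typical subscribers (invented)
heavy_users = [500.0] * 10  # GB/month for the tail (invented)

total = sum(light_users) + sum(heavy_users)
tail_share = sum(heavy_users) / total

print(f"{len(heavy_users)} of {len(light_users) + len(heavy_users)} users "
      f"carry {tail_share:.0%} of the traffic")
# 1% of users carrying roughly half the bytes
```

With numbers anywhere in that ballpark, it's easy to see why dealing with the handful of outliers is more attractive than repricing everyone.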
#I don't see how Operators could possibly debug connection/throughput
#problems when increasingly draconian methods are used to manage traffic
#flows with seemingly random behaviors. This seems a lot like the
#evil-transparent caching we were concerned about years ago.
Middleboxes can indeed make things a mess, but at least in some
environments (e.g., higher ed residential networks), they've become
pretty routine. Network transparency should be the goal, but
operational transparency (e.g., telling people what you're doing to
their traffic) may be an acceptable alternative in some circumstances.
#What can be done operationally?
Tiered service is probably the cleanest option: cheap "normal" service
with shaping and other middlebox gunk for price sensitive populations
with modest needs, and premium clear pipe service where the price
reflects the assumption that 100% of the capacity provisioned will be
used. Sort of like what many folks already do by offering "residential"
and "commercial" grade service options, I guess...
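For the "normal" tier, the shaping half of that usually boils down to something token-bucket shaped. A minimal sketch (generic textbook token bucket, not any particular vendor's implementation; rate and burst numbers are invented):

```python
# Minimal token-bucket policer sketch: the sort of per-subscriber limit a
# cheap "normal" tier might see, while a premium clear-pipe tier would
# simply bypass it.

class TokenBucket:
    """Classic token bucket: tokens accrue at `rate` bytes/sec up to a depth
    of `burst` bytes; a packet conforms if enough tokens are available."""

    def __init__(self, rate, burst):
        self.rate = rate       # refill rate, bytes per second
        self.burst = burst     # bucket depth, bytes
        self.tokens = burst    # start with a full bucket
        self.last = 0.0        # timestamp of last update

    def allow(self, size, now):
        # Refill based on elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True        # forward the packet
        return False           # drop (police) or queue (shape)

bucket = TokenBucket(rate=125_000, burst=10_000)  # ~1 Mbit/s, 10 KB burst

# A burst of 1500-byte packets arriving all at t=0: the first few fit
# within the burst allowance, then the bucket runs dry until it refills.
results = [bucket.allow(1500, now=0.0) for _ in range(10)]
print(results.count(True))  # 6 packets pass on the initial burst
```

In practice this lives in the BRAS/BNG or CMTS config rather than in Python, but the conformance logic is the same idea.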
#For legitimate applications: