Re Netflix Is Eating Up More Of North America's Bandwidth Than Any Other Company

From Wed May 25 04:21:13 2011
Date: Wed, 25 May 2011 10:19:09 +0100 (BST)
From: Brandon Butterworth <>
Subject: Re: Netflix Is Eating Up More Of North America's Bandwidth Than Any
  Other Company

> > So... would this have been feasible today? given the bandwidth required
> > to send a full feed these days, i suspect likely not, eh? (even if you
> > were able to do it on all 500+ channels in parallel)
> On the financial side, it is trivial.

The opposite: the bits were paid for but unused back then, so
financially it was worth using them. In digital TV every bit has a use,
and so a cost; hence they are used for more TV channels instead of for
parasitic services. You end up competing with TV if you want any
quantity, so it is hard to make viable today.

You demonstrate you have no understanding of what the word 'feasible'
means.

> On the engineering side, _impossible_.

The opposite, completely trivial now.


Digital TV is a mux of a number of bit streams, some with compressed
video, others with metadata for the EPG, alternate sound, interactive
apps, etc. Adding another stream to the mux is trivial; you just have to
pay for the bandwidth. Since most muxes are stat muxed, it's possible to
create room at the expense of the VBR streams, whose video encoders
reduce quality as a result of back pressure from the stat muxer.
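That back-pressure mechanism can be sketched in a few lines. This is a
toy illustration only: the mux capacity, channel demands, and the
proportional scaling rule are all invented for the sketch, not how any
particular stat muxer actually allocates bits.

```python
# Toy statistical multiplexer: a fixed-rate data stream is granted its
# bandwidth first, and the VBR video streams are scaled down to fit the
# remainder -- the "back pressure" that degrades video quality.

MUX_CAPACITY = 24_000  # kbit/s of mux payload (assumed figure)

def allocate(video_demands_kbit, data_kbit):
    """Grant the data stream, then shrink VBR video demands to fit."""
    remaining = MUX_CAPACITY - data_kbit
    total_demand = sum(video_demands_kbit)
    if total_demand <= remaining:
        return list(video_demands_kbit)  # no back pressure needed
    scale = remaining / total_demand     # encoders cut quality by this factor
    return [d * scale for d in video_demands_kbit]

# Five VBR channels asking for 30 Mbit/s total against a 24 Mbit/s mux,
# with a 2 Mbit/s data stream squeezed in.
demands = [8_000, 7_000, 6_000, 5_000, 4_000]
grants = allocate(demands, data_kbit=2_000)
print(grants, sum(grants) + 2_000)
```

The total granted never exceeds the mux capacity; the data stream's cost
is paid entirely by the video channels' picture quality.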

Obviously, you misunderstood the 'do this' reference of the prior poster.
The 'elegance' of the original Stargate scheme was that the data was
injected into the original video signal at the point of origin, passed
through _all_ the intermediate points WITHOUT any special handling (or
even 'awareness' that the 'extra' information was there), and delivered
to the end-user, where a _standard_ TV receiver turned it back into a
normal _video_ signal, from which the extra data was then extracted.
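The shape of that scheme is easy to sketch. The line numbers, the
per-line payload, and the frame representation below are all
illustrative stand-ins, not the real NABTS/teletext encoding; the point
is that every hop which copies the video verbatim carries the data for
free.

```python
# Minimal sketch of the Stargate idea: ride data on unused vertical-
# interval scan lines, so intermediate hops that retransmit the frame
# unmodified also carry the data without knowing it is there.

VBI_LINES = range(10, 22)   # vertical-interval lines (illustrative)
PAYLOAD_PER_LINE = 32       # bytes we pretend fit on one scan line

def inject(frame, data):
    """Write data bytes onto the VBI lines of a frame (dict: line -> bytes)."""
    out = dict(frame)
    for i, line in enumerate(VBI_LINES):
        out[line] = data[i * PAYLOAD_PER_LINE:(i + 1) * PAYLOAD_PER_LINE]
    return out

def passthrough(frame):
    """An intermediate hop: retransmits the frame, unaware of the data."""
    return dict(frame)

def extract(frame):
    """End-user decoder: read the VBI lines back out of the received frame."""
    return b"".join(frame.get(line, b"") for line in VBI_LINES)

frame = {line: b"\x00" for line in range(22, 502)}  # 'active' picture lines
sent = inject(frame, b"USENET article bytes ..." * 20)
received = passthrough(passthrough(sent))           # two unaware hops
print(extract(received)[:20])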

One _cannot_ do this with 'modern' digital TV transmission, because the
_end-to-end_ technology does not support it.

*IF* the signal =originates= as an _analog_ video signal, as many of the
cable-only channels _still_ do, "data" in the vertical interval is lost
at the analog-to-digital conversion, and simply _cannot_ be recovered at
the end-user's TV receiver output.
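The loss is structural, not incidental: a digital encoder samples only
the active picture lines, so the vertical interval is never digitized at
all. A sketch, with line ranges in the rough NTSC ballpark (525 total
lines, ~480 active) chosen for illustration:

```python
# Why A-to-D conversion kills the scheme: digitization keeps only the
# active picture lines, so anything riding in the vertical interval is
# simply never sampled into the digital stream.

def digitize(analog_frame):
    """Keep only the active picture lines (illustrative range 22-501)."""
    return {line: px for line, px in analog_frame.items() if 22 <= line < 502}

analog = {line: b"picture" for line in range(22, 502)}
analog[21] = b"hidden usenet data"   # data injected on a VBI line

digital = digitize(analog)
print(21 in digital)                 # the data never made it across
```

No downstream receiver, however clever, can recover bits that were
discarded at the conversion point.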

OTOH, if the signal originates as a digital stream, while it may be
"possible" to multiplex in an additional data stream, said data stream
will *NOT* survive _intermediate_ transcoding to an analog video stream
before transmission to the end-user. And, even if the actual digital
stream is delivered to the end-user, a *STANDARD* digital TV receiver has
no means to deliver that 'additional' information to the end-user in any
usable form.
*GIVEN* the constraints of:
  1) injection at the point of video signal origination (-not- the satellite
     uplink point),
  2) "transparent" carriage to the end-user viewer of the video signal, and
  3) end-user extraction of data from the output of a commercial off-the-shelf
     TV tuner/receiver,
it simply _cannot_ be done with 'modern' digital transmission.

> To pick a conservative number, say you get an effective throughput of
> 2k bytes/sec

It'd be easy to squeeze that into a normal tv satellite mux

Sorry, you've got to squeeze _several_times_ that figure 'into the mux',
to provide the 'redundant' transmissions needed to compensate for momentary
LOS (loss-of-signal) events.

So?? What about the _rest_ of the path the signal goes through to get to
the end-user's TV set? The original methodology was -not- about delivering
a USENET feed to a satellite receiving station with special equipment to
extract it, but about delivering it _transparently_ (or even 'invisibly' :) )
ALL THE WAY to the _end-user_ consumer/viewer.

> As I understand it, a current USENET 'full feed', including binaries, takes
> two dedicated 100mbit FDX fast ethernet links, and they are saturated _most_
> of the day. At that rate, a full day of TV vertical interval transmission
> would handle under _ten_seconds_ worth of the inbound traffic. You would need
> around =ten=thousand= analog TV channels to handle a contemporary 'full
> feed'.

Or just 3 full muxes at a cost of around 10M (probably the same in any
currency) per year.
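The figures traded above check out arithmetically. The 2 kB/s
vertical-interval rate and the two saturated 100 Mbit links come from
the thread; the ~70 Mbit/s per-mux payload is an assumption in the
DVB-S2 ballpark, not a quoted figure.

```python
# Rough arithmetic behind the numbers in the thread.

FEED_BPS = 2 * 100e6 / 8   # two saturated 100 Mbit links, in bytes/s
VBI_BPS = 2_000            # bytes/s through one channel's vertical interval

day_of_vbi = VBI_BPS * 86_400            # bytes one channel carries per day
seconds_of_feed = day_of_vbi / FEED_BPS  # ~7 s: "under ten seconds"
channels_needed = FEED_BPS / VBI_BPS     # 12,500: "around ten thousand"

MUX_BPS = 70e6 / 8                       # assumed satellite mux payload
muxes_needed = FEED_BPS / MUX_BPS        # ~2.9: "3 full muxes"

print(round(seconds_of_feed, 1), int(channels_needed), round(muxes_needed, 1))
```

So a day of one channel's vertical interval covers about seven seconds
of the feed, and roughly three satellite muxes cover the whole thing --
before any redundancy.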

*TRIPLE* that, _at_least_, to allow for automatic multiple transmissions of
everything, to avoid 'lost data' due to data errors and/or momentary signal-
path obstructions. This is a one-way 'broadcast' link with no ACK/NAK type
responses from the receiving end(s) possible.
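The standard answer to a one-way link with no return path is a data
carousel: keep retransmitting every block so a receiver that missed one
during a fade picks it up on a later cycle. A toy sketch under that
assumption (real systems such as DVB data carousels layer FEC on top of
the repetition):

```python
# One-way broadcast with no ACK/NAK: repeat every block on a cycle, so
# losses during a fade are recovered from a later cycle. The tripled
# airtime is exactly the bandwidth multiplier argued for above.

def carousel(blocks, cycles):
    """Yield (seq, block) pairs, cycling over all blocks repeatedly."""
    for _ in range(cycles):
        for seq, block in enumerate(blocks):
            yield seq, block

def receive(transmissions, drop):
    """Simulate a receiver that loses some transmissions to fades."""
    got = {}
    for i, (seq, block) in enumerate(transmissions):
        if i not in drop:
            got.setdefault(seq, block)
    return [got.get(seq) for seq in sorted(got)]

blocks = [b"part0", b"part1", b"part2"]
# A fade eats part1 on the first cycle; the second cycle fills the hole.
out = receive(carousel(blocks, cycles=2), drop={1})
print(out)
```

The cost is that every byte is sent two or three times whether anyone
missed it or not, which is why the mux figure above has to be tripled.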

Recurring cost for the sat link, more like 30-50 million/year. Plus the
cost _of_ the dedicated gear at _every_ downlink point.

Further, that gets it on the satellite -- "Now What??" applies. _How_ do you
get it over the proverbial 'last mile' to the end-user consumer? And at what
kind of cost for the CPE for _each_ customer? Or are you 'assuming' that some
third party handles that?

Don't forget to add the capital cost for all the CPE, _plus_ the MRC for the
last-mile distribution, to the circa 40 million/year for the 'bare' satellite
link.

The value of 'Stargate' was that it delivered a feed to _anyone_ who could
get the video signal -- with _zero_ action required by any intermediate
third party who was re-transmitting the signal -- in a form that could be
extracted with *inexpensive* CPE. There was also an _existing_ 'standard'
for the encoding of data into the video signal, so that the same CPE could
be used to extract data from _any_ video source carrying it. And, there
_were_ multiple possible video sources. If the local cable company carried
any of them, you were in business.

Yes, you *CAN* engineer a "completely different" methodology for delivering
a full USENET feed to end-users via satellite transmission. However, doing
so is contrary to the premise of the existing discussion.

While not material to the technical discussion, I would point out that it is
doubtful any large corp. would want to distro full USENET these days given
the legal implications - see - mind you, Cuomo is
otherwise engaged these days.