AT&T: 15 Mbps Internet connections "irrelevant"

err. directv?

   Moral indignation is a technique to endow the idiot with dignity.
                                                 - Marshall McLuhan

Sorry if I wasn't clear, but I meant IP-based STB's, like those made from
Amino, Entone, i3 Micro, Motorola's Kreatel, Cisco's Scientific-Atlanta,
Wegener, Sentivision and middleware from vendors such as Infogate,
Microsoft, Minerva, Orca Interactive, and Siemens' Myrio. And now that
content providers are starting to require encryption, none of these earlier
STB/middleware pairings can actually be used unless they include conditional
access solutions from the likes of Irdeto, Latens, Nagravision, Verimatrix,
and Widevine.

DIRECTV does not use an IP-based STB, AFAIK, and delivers their content to
consumers via satellite, not using AT&T's last-mile infrastructure, which
initiated this thread.


Again, I don't see how AT&T can claim "DSL is fast enough" in one
breath, then turn around and say they're ready to deliver IPTV.

This has been covered in other public presentations. The access
link for VDSL2 has about 25Mbps at the proposed distances. Using
6Mbps of the access link for Internet leaves about 19Mbps for IPTV.
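As a back-of-envelope sketch of that budget (the access-link and codec figures are the ones cited in this thread; real VDSL2 rates vary with loop length and line conditions):

```python
# Back-of-envelope VDSL2 bandwidth budget, using figures from this thread.
ACCESS_LINK_MBPS = 25   # VDSL2 at the proposed distances
INTERNET_MBPS = 6       # reserved for Internet access

iptv_budget = ACCESS_LINK_MBPS - INTERNET_MBPS   # ~19 Mbps left for IPTV

# Roughly how many simultaneous streams fit, per codec (per-stream
# rates cited elsewhere in the thread):
rates = {"SD MPEG-2": 4, "HD MPEG-2": 19, "HD H.264 (target)": 9}
for codec, mbps in rates.items():
    print(f"{codec}: {iptv_budget // mbps} stream(s)")
```

Nothing fancy, but it makes the point: at these numbers the budget is one HD MPEG-2 stream, total.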

I'm curious how program content is currently stored. (Note that I'm
totally ignoring live broadcast.) If MPEG-2, I'd guess conversion to
MPEG-4 might produce less-than-desirable image quality.

There is no standard, or rather lots of standards. Based on published
articles, ESPN stores SD content at about 40Mbps MPEG2, and HD content in
100Mbps DVCPRO HD. Starz/Encore stores their content at about 15Mbps
MPEG2. A lot of video is still stored on film and tape (e.g. DigiBeta,
Beta SP, etc.)

Cable, satellite and telco are all moving towards a new codec, most
people predict H.264/MPEG-4 AVC, but they have different migration timelines.
I wouldn't be surprised if some programmers supply their video streams
in multiple native formats, while other program streams will be
transcoded from an existing stream.

The majority of U.S.-based IP TV deployments are not using MPEG-4

Agreed. However, I'd say that any IPTV provider currently using MPEG2 would
be planning a migration to MPEG4/H.264 - half the bandwidth per stream means
double the capacity.
in fact,
you would be hard-pressed to find an MPEG-4 capable STB working with
middleware.
I disagree. There are several MPEG4-capable STBs available now, and they all
have middleware vendor support.

SD MPEG-2 runs at ~4 Mbps today and HD MPEG-2 is ~19 Mbps. With ADSL2+
you can get up to 24 Mbps per home on very short loops, but if you look at
the loop length/rate graphs, you'll see that even with VDSL2 only the very
short loops will have sufficient capacity for multiple HD streams. FTTP/H
is inevitable.

Anyone looking to do HD will be looking at H.264, and looking to bring the
bandwidth requirement down to 8-10Mbps. That is certainly more practical with
ADSL2+ deployments (unless you want more than one STB per DSL).
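To put numbers on the "more than one STB per DSL" question, here's a quick sketch using the per-stream rates above (~19 Mbps for HD MPEG-2, ~9 Mbps as the midpoint of the 8-10 Mbps H.264 target; link rates are nominal downstream figures, not what every loop will actually sync at):

```python
# HD streams per access link, per codec. All rates in Mbps.
HD_MPEG2 = 19   # per-stream rate cited above
HD_H264 = 9     # midpoint of the 8-10 Mbps H.264 target

links = {"ADSL (8M)": 8, "ADSL2+ (24M, short loop)": 24, "VDSL2 (25M)": 25}
for name, mbps in links.items():
    print(f"{name}: {mbps // HD_MPEG2} HD MPEG-2, {mbps // HD_H264} HD H.264")
```

Even on a short ADSL2+ loop, MPEG-2 HD buys you one stream; H.264 gets you two with room left over.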

US homes with digital cable or satellite typically do have more than one STB
at this point, simply because you need one for each TV...

I should have qualified my statement by saying that I have a predominantly
UK focus for my IPTV work.

In the UK, looking at satellite and cable, I believe the second box has only
really taken off in the last couple of years. You don't tend to hear about
three or more boxes.

In the SD world, multiple STBs aren't a real problem with 8Mbps ADSL, and
definitely not with ADSL2+ (which 1 or 2 providers in the UK are doing).

For the deployment I'm working on at the moment, we have "ethernet to the
home", so bandwidth isn't a big problem, and when we move onto DSL-based
deployments, hopefully 8M+ will be the norm for connection speeds.

The big problem we face in the UK is that the majority of DSL connections
are provisioned on BT DSLAMs, and presented centrally to the ISP as L2TP.
There is no real benefit to multicast, as the connections are fanned out at
the ISP, not at the DSLAM. The non-BT DSL connections (known as LLU - Local
Loop Unbundling) fare much better, with the deployment of Lucent Stingers or
equivalent, which do IP in the DSLAM and so enable proper multicast to the
edge.
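The cost of fanning out at the ISP rather than the DSLAM can be sketched with made-up subscriber numbers (assuming the ~4 Mbps SD rate from earlier in the thread, and everyone tuned to the same channel - the best case for multicast):

```python
# Backhaul needed between ISP and DSLAM for one popular channel.
STREAM_MBPS = 4      # SD MPEG-2, per the figures earlier in the thread
SUBSCRIBERS = 1000   # hypothetical

# L2TP to the ISP: every viewer gets their own unicast copy across the
# backhaul.
unicast_backhaul = STREAM_MBPS * SUBSCRIBERS

# IP in the DSLAM (the LLU/Stinger case): one multicast copy crosses the
# backhaul and is replicated at the edge.
multicast_backhaul = STREAM_MBPS

print(f"unicast: {unicast_backhaul} Mbps, multicast: {multicast_backhaul} Mbps")
```

That's 4 Gbps versus 4 Mbps for the same content, which is why where the fan-out happens matters so much.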


Regardless of the chitter-chatter about IPTV in this thread, I can say
pretty definitively that the 6Mbps I am getting via DSL (I'll get to cable
next) is much faster in practice than 1.5Mbps DSL. I most certainly can
sustain ~4Mbps for a single stream video feed, with the remaining headroom
still mostly usable.

Now, when you get into a shared channelized medium like cable (Comcast),
there is a difference in the backing network, and congestion is a much
bigger threat. That said, I was using Comcast when they went 3Mbps, and at
the time, I could sustain 2.4Mbps downstream from an external ASN with no
problem. I still have MRTG graphs showing it.

FUD, indeed. I have no idea how to sustain 2.4Mbps on a 1.5Mbps DSL
connection, but if someone here knows how, I'm all ears!

(...The frustrating part about those figures is that I might as well have
FTTH, because my DSLAM is less than 50 feet from my premises -- it's in a
green-monster canister on the corner of the block. The modem says I *could*
attain better than 9Mbps down and 2Mbps up, were such service available to
consumer low-lifes like myself. <g>)

The GigEthernet interface on my PC says I should be able to get 1,000Mbps
too. There are lots of different bottlenecks in a typical network.
Changing your access link speed may or may not make a performance difference.

Suppose you hacked your cable modem configuration or your DSLAM
configuration, and opened your access link full throttle. Would you
be able to download 27Mbps cross-country from your favorite server? It
depends where the bottleneck was.
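The "it depends where the bottleneck was" point is just a min() over the path. A hypothetical cross-country path (all figures invented for illustration):

```python
# End-to-end throughput is capped by the narrowest hop, not the access
# link. Every number here is a made-up illustration.
hops_mbps = {
    "access link (opened full throttle)": 27,
    "metro aggregation": 1000,
    "peering point": 45,
    "far-end server uplink": 10,
}
bottleneck = min(hops_mbps.values())
print(f"best case: {bottleneck} Mbps")
```

In this sketch the 27 Mbps access link is irrelevant: the server's 10 Mbps uplink sets the ceiling.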

All things being equal, a faster access link usually results in better
performance. But I would think the people on this list would know
better than most, that things are almost never equal in the network
world. Remember all those debates about whether Keynote or other performance
tests were actually valid measurements?