Real Media and M-Bone feeds

> The problem is that everyone thinks that multicasting is
> more complicated than it really is.

The problem is, everyone thinks that a million lemmings can't be wrong.

Multicasting cannot be made scalable. It is as simple as that.
One can play with multicast routing protocols as much as one
wishes - in pilots and small networks. It only takes a question -
"ok, how are we going to handle 100000 multicast channels?" to
realize that L3 multicasting is not going anywhere as a real-world
production service.
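A back-of-envelope sketch of why per-channel state bites (the numbers here are illustrative assumptions, not measurements): protocols like DVMRP/PIM keep per-(source, group) forwarding state in every on-tree router, and group addresses are flat, so this state cannot be aggregated the way unicast prefixes can.

```python
# Back-of-envelope: per-channel multicast state in a core router.
# All three constants are assumptions chosen for illustration.

CHANNELS = 100_000        # the "100000 multicast channels" question
STATE_BYTES = 64          # assumed per-(S,G) entry: addresses, iif, oif list, timers
ON_TREE_FRACTION = 0.10   # assume 10% of channels cross any given core router

per_router_entries = int(CHANNELS * ON_TREE_FRACTION)
per_router_memory = per_router_entries * STATE_BYTES

print(f"entries per core router : {per_router_entries}")
print(f"state memory per router : {per_router_memory / 1024:.0f} KB")
# Unlike unicast routes, none of this aggregates by prefix, so the
# table grows linearly with the number of active channels.
```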

Yes, there are better multicasting schemes (like EXPRESS from Stanford,
or even more scalable multicasting I described some time ago in TRAP
documents). Still, they are not enough.

> People try and overthink a few too many things, and this I
> think is one.

Nah. It is a clear and present case of not thinking hard enough.

Worse yet, it distracts from deployment of the real solution - caching.

> Multicasting is faster than disk.

This is a rather strange statement. I worked on a product (which was
shipped) which delivered something like 20Gb/s of streaming video content
from disk drives. RAID can be very fast :-)

> I'm not sure how caching
> is the solution. Distributed content is also good.

Ah, distributed content :-) Yet another kludge requiring lots of maintenance
and "special" relationships between content providers and ISPs.

--vadim

Disclaimer: I think I know what I'm talking about, caveat emptor. ;-)

> The problem is, everyone thinks that a million lemmings can't be wrong.

I am not foolish enough to argue with a million lemmings. One need only
convince the one in front to change direction.

> Multicasting cannot be made scalable. It is as simple as that.
> One can play with multicast routing protocols as much as one
> wishes - in pilots and small networks. It only takes a question -
> "ok, how are we going to handle 100000 multicast channels?" to
> realize that L3 multicasting is not going anywhere as a real-world
> production service.

That a more scalable solution has not yet been developed is not evidence
that one does not exist. Classful routing and address assignment weren't
scalable, either, but we have (hopefully) come out of that era with some
cleverness and reliance on good design. You ask "ok, how are we going to
handle 100000 multicast channels?" rhetorically, assuming that there is
no answer to the query. Multicast is, in all reality, in its infancy. There
really aren't enough people using it to make scalability an urgent issue.
In order for cool ideas like internet broadcast television and radio
services to work, the internet will require multicasting, or Ma Bell is going
to have to create cheap bandwidth of gargantuan proportions. I'm not holding
my breath for the telephone companies to do anything earth-shattering in the
next decade. <cliche>Necessity is the mother of invention.</cliche>

> Nah. It is a clear and present case of not thinking hard enough.

Indeed, and you're willing to dismiss multicast without a second thought?

> Worse yet, it distracts from deployment of the real solution - caching.

One needs to pay close attention to the problem being solved. I see it
as being two cases:

1. Broadcast "live" or "real-time" data. This is what multicast is (should
   be) really good at. Videoconferences with friendly geeks via some caching
   mechanism would be awful at best, and still require more than one feed
   from the source, or from some replication server (a la CUSeeMe).

2. "On-demand" data, such as your friendly neighborhood internet-movie
   rental center. Don't laugh, I expect to see it in my lifetime. These
   could be cached "close to home", assuming that there weren't some legal
   issues with intercepting and storing data someone else paid for. Caching
   is only good for asynchronous data likely to be requested numerous times
   from various sources. i.e. I want to watch the same movie my neighbor
   is, but I want to see it from the start, not pick up in the middle where
   he is.

> Multicasting is faster than disk.

> This is a rather strange statement. I worked on a product (which was
> shipped) which delivered something like 20Gb/s of streaming video content
> from disk drives. RAID can be very fast :-)

In case 1 above, as I stated, cache would not work well for several people
in disparate locations trying to videoconference. How many 20Gb/s streams can
you feed out simultaneously from your box? Say 100 people had the ability
to view data streams running at that speed. Now, consider that they all have
22Gb/s connections to your network. With multicast, you can feed all of them
simultaneously from a single 22Gb/s connection to your box of streaming
data. This would require 2.2Tb/s if done on unicast from your box. And it
would still require the same 22Gb/s incoming stream. Please explain to me
how the cache saves bandwidth in this case.
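The fan-out arithmetic in the paragraph above, as a quick sketch (using the same figures as the text - 100 viewers, a 22Gb/s stream):

```python
# Unicast vs. multicast source bandwidth for N identical live streams.
# Figures taken from the example in the text above.

VIEWERS = 100
STREAM_GBPS = 22                       # per-viewer stream rate

unicast_out = VIEWERS * STREAM_GBPS    # source sends one copy per viewer
multicast_out = STREAM_GBPS            # source sends one copy; routers replicate

print(f"unicast source bandwidth  : {unicast_out} Gb/s ({unicast_out / 1000} Tb/s)")
print(f"multicast source bandwidth: {multicast_out} Gb/s")
# 2200 Gb/s (2.2 Tb/s) vs. 22 Gb/s - matching the figures above. A cache
# helps none here, because every viewer wants the *live* stream now.
```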

> I'm not sure how caching
> is the solution. Distributed content is also good.

> Ah, distributed content :-) Yet another kludge requiring lots of maintenance
> and "special" relationships between content providers and ISPs.

As time passes, I can assure you that the line between the two will blur,
whether because of multicast, caching, or greed is yet to be seen.

Some suggested goals for multicast design*:

. Ensure that data are replicated as close to the destination as possible.
. Ensure that multicast routers not carry more topology data than are
  absolutely necessary.
. Ensure that the multicast system does not lend itself to DoS abuse, as
  other methods of one-to-many data replication do.

* I am a multicast newbie, and largely illiterate in current implementation,
so don't laugh at my suggestions publicly, please. :-)

> > Worse yet, it distracts from deployment of the real solution - caching.

> One needs pay close attention to the problem trying to be solved. I see it
> as being 2 cases:
>
> 1. Broadcast "live" or "real-time" data. This is what multicast is (should
>    be) really good at. Videoconferences with friendly geeks via some caching
>    mechanism would be awful at best, and still require more than one feed
>    from the source, or from some replication server (a la CUSeeMe).

In the case of a DENSE network, yes; in the case of a SPARSE network, it's
almost the same. How many people around the world use CUSeeMe, even without
the packet replicators? And how many use the MBONE? Now compare...

> 2. "On-demand" data, such as your friendly neighborhood internet-movie
>    rental center. Don't laugh, I expect to see it in my lifetime. These
>    could be cached "close to home", assuming that there weren't some legal
>    issues with intercepting and storing data someone else paid for. Caching
>    is only good for asynchronous data likely to be requested numerous times
>    from various sources. i.e. I want to watch the same movie my neighbor
>    is, but I want to see it from the start, not pick up in the middle where
>    he is.

Once again - CACHING and MULTICASTING are two edges of ONE
PROCESS - call it _CACHE AND REPLICATE THE DATA_. Why build two
independent systems (caching and multicasting) instead of building one
common one (CACHE-REPLICATE servers, with an _on the fly_ mechanism the
way Cisco's WCCP protocol works), with multicast at the far ends of the
data tree?

And again - MCAST is not scalable; IP addressing (classful or CIDR) is.

> In case 1 above, as I stated, cache would not work well for several people
> in disparate locations trying to videoconference. How many 20Gb/s streams can

Where do you see 20Gb/s? I see content over the Internet as 16Kbit/s,
or (worst case) 80Kbit/s data streams. For a conference (when it's not
high-quality movie watching), 80Kbit/s streams are more than enough.

100 * 80Kbit/s = 8,000Kbit/s = 8Mbit/s. Imagine 2 - 3 levels of data
replication - the source then sends only 200 - 300Kbit/s. Imagine MCAST on
the far end, in the LANs - no bandwidth problems at all.
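The replication-tree arithmetic above, sketched with an assumed fan-out of 3 children per replication server (my choice, picked to land in the quoted 200 - 300Kbit/s range):

```python
# Source bandwidth for a 100-listener conference at 80 Kbit/s per stream,
# with and without a tree of replication servers. FANOUT is an assumption.

LISTENERS = 100
STREAM_KBPS = 80
FANOUT = 3                 # assumed: each replication server feeds 3 children

flat_unicast = LISTENERS * STREAM_KBPS   # source feeds every listener directly
tree_source = FANOUT * STREAM_KBPS       # source feeds only first-level replicators

print(f"flat unicast from source : {flat_unicast} Kbit/s")   # 8000 Kbit/s = 8 Mbit/s
print(f"with replication tree    : {tree_source} Kbit/s")    # 240 Kbit/s
# The leaf replicators can then use MCAST inside their own LANs, so no
# single link ever carries more than a handful of 80 Kbit/s copies.
```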

No one is asking you to use data replication for WEB TV if your local
provider offers it to you as a new TV system. But that's not the INTERNET.

> Some suggested goals for multicast design*:
>
> . Ensure that data are replicated as close to the destination as possible.

Yep. This makes it useless for ON-DEMAND data, and for cases where there
are non-multicast networks between the data sources and data listeners
(i.e. 99% of cases).

> . Ensure that multicast routers not carry more topology data than are
>   absolutely necessary.

And this causes the routers to have to know EVERY DATA STREAM you have in
your medium... And so on...

Please understand - MCAST is not a bad idea. MCAST has its own fields of
use. But it should not become the GLOBAL MULTIMEDIA TRANSPORT for the
global INTERNET.

> . Ensure that the multicast system does not lend itself to DoS abuse, as
>   other methods of one-to-many data replication do.
>
> * I am a multicast newbie, and largely illiterate in current implementation,
> so don't laugh at my suggestions publicly, please. :-)

--
Those who do not understand Unix are condemned to reinvent it, poorly.
                -- Henry Spencer

Aleksei Roudnev, Network Operations Center, Relcom, Moscow
(+7 095) 194-19-95 (Network Operations Center Hot Line),(+7 095) 230-41-41, N 13729 (pager)
(+7 095) 196-72-12 (Support), (+7 095) 194-33-28 (Fax)