RA/RV Feeds from RBN

Or is it simply time to declare NANOG yet another
bastion of closed minds, and closed protocols?

There's not much rocket science in this; if someone were to put together
an open product that did this, things might be different. So far most of
the systems have come out of start-ups that end up going for the money.

The MBONE tools are OK, but from where I sit they are useless, as the
end users don't have MBONE. This is an operational issue,
and you can programme your Ciscos to do it :-)

If MBONE is to be a serious contender the ISPs must deliver it; then we
content providers who are wasting tons of net bandwidth could become
nicer netizens and wouldn't use the unicast products that end users are
asking for. Of course the ISPs may not take as much money off us, as we
wouldn't need quite so much bandwidth (perhaps there's a reason there?)

I would be happy if there were actual support for non-Intel/non-MMX
processors, aside from Macintosh.

I happily run the player and servers on Sparc/Solaris boxes.

No Intel, no Microsoft, no problem.

It wouldn't hurt to see more platforms though. Real could do with
being a bit more open and not trying to do everything themselves;
there's plenty who'd help them out (most platforms have some good
hacker zealots who'd do a port).

As it is, everything since their version 3 release has been
"Error 88" in our office
since they won't support these so-called 'orphan' CPUs.

I think people are getting a bit heavy with the not-so-bad guys;
it could be worse - you could be looking at a Microsoft-only
system instead.

I've found Real to be more receptive to cross platform support,
if a little tardy in following it through to delivery.

Brandon

brandon@rd.bbc.co.uk (Brandon Butterworth) writes:

> If MBONE is to be a serious contender the ISPs must
> deliver it; then we content providers who are wasting
> tons of net bandwidth could become nicer netizens and
> wouldn't use the unicast products that end users are
> asking for. Of course the ISPs may not take as much
> money off us, as we wouldn't need quite so much bandwidth
> (perhaps there's a reason there?)

The real problem is that IPv4 Multicast scales badly with
the number of groups, and Multicast routing is difficult.
If you doubt any of this, kindly review the recent Dave
Meyer presentations at any of your favourite conferences.

Content multicast is *wonderful* news for any ISP that
overbooks its backbone capacity: the amount of overbooking
possible increases with the volume of highly-popular
scheduled content.

That is, if you have lots of replication happening in
multicast routers (presumably because your customers have
things attached to them who want the same content at the
same time), then you can fill your customer links with
stuff that has less impact on your relatively large
backbone pipes.
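To make the overbooking point concrete, here is a back-of-the-envelope sketch (my own illustration, with made-up stream rate, audience size, and router counts) comparing the backbone load of unicast delivery against multicast delivery where replication happens out at the edge routers:

```python
# Illustrative arithmetic: backbone bandwidth for one highly-popular
# scheduled stream, unicast vs. multicast. All numbers are hypothetical.

def backbone_load_unicast(stream_kbps, receivers):
    # Unicast: one copy per receiver crosses the backbone.
    return stream_kbps * receivers

def backbone_load_multicast(stream_kbps, edge_routers):
    # Multicast: replication happens in the edge routers, so the
    # backbone carries roughly one copy per edge router on the tree.
    return stream_kbps * edge_routers

stream = 300          # kbps, a hypothetical popular feed
receivers = 10_000    # customers watching at the same time
edges = 50            # edge routers with interested customers

print(backbone_load_unicast(stream, receivers))   # 3000000 kbps
print(backbone_load_multicast(stream, edges))     # 15000 kbps
```

The 200-to-1 ratio here is exactly the effect described above: the more simultaneous viewers hang off each edge router, the more the backbone can be overbooked.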

InternetMCI is among a number of ISPs that have worked on
eliminating redundant copies of multicast traffic crossing
their backbones by deploying MBONE infrastructure. This
is not the same as deploying a multicast infrastructure;
however, that is a very different rant.

So, frankly, your suggestion that ISPs make less money off
you when there are lots of well-laid-out distribution trees
carrying lots of content to lots of places is simply
wrong.

The problem is mostly that people are busy making unicast
mostly work, and are finding that sufficiently difficult
that making multicast work better than it does now just
does not get attention, particularly since multicast
routing is hard and multicast routing code is buggy, at
least in part because of that. The lack of perceived
utility for multicast also weighs on the other side of a
cost-benefit analysis that does not favour rapid
deployment.

  Sean.

Sean M. Doran wrote:

> The real problem is that IPv4 Multicast scales badly with
> the number of groups, and Multicast routing is difficult.
> If you doubt any of this, kindly review the recent Dave
> Meyer presentations at any of your favourite conferences.

The _real_ problem is that the very concept of IP multicasting
is brain-dead. A multicast-based production service is neither
implementable, nor needed. Here are my reasons for concluding
so:

1) Multicast routing is not scalable and _cannot be_ scalable.
There are three distinct ways to do IP-level multicasting:

a) the most broken one: flooding multicast routing information
   all over the network. Equivalent to host routing to source hosts.

b) on-demand spanning tree, like that used in EXPRESS multicasting
   as developed by David Cheriton's students. That requires every
   gateway carrying multicast packets to keep track of all transit mc
   channels. Equivalent, complexity-wise, to virtual-circuit-based networking.

c) sparse spanning tree, with state kept only in replicating gateways,
   like that described in a TRAP draft. This has the best scaling properties,
   but replication is very likely to occur at exchange points for the majority
   of channels - i.e. the exchange-point gateways will still have to keep
   significant amounts of per-flow state.

Any multicasting scheme makes routing dependent on user-supplied
routing information. This is a major operational nightmare. Just imagine
what happens if some bozo starts injecting streams of Joins and Leaves.

This all means that multicasting is OK only as a nice toy. When you try to
make it available to the "masses" you end up with a serious routing problem.
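The "bozo injecting Joins" scenario can be sketched in a few lines. This toy model (mine, not drawn from any of the drafts above) shows the per-channel forwarding state a replicating gateway must keep, driven entirely by user-supplied Join/Leave messages:

```python
# Toy model of per-channel state in a replicating gateway.
# The state table grows with the number of groups joined, and it is
# populated by whatever Join messages arrive from downstream.

class MulticastGateway:
    def __init__(self):
        self.state = {}   # group -> set of downstream interfaces

    def join(self, group, interface):
        self.state.setdefault(group, set()).add(interface)

    def leave(self, group, interface):
        members = self.state.get(group)
        if members:
            members.discard(interface)
            if not members:
                del self.state[group]   # prune the branch when empty

gw = MulticastGateway()
# One misbehaving host injecting Joins for thousands of groups
# inflates the gateway's state with nothing legitimate behind it.
for g in range(10_000):
    gw.join(g, "if0")
print(len(gw.state))   # 10000 state entries from a single bozo
```

The point of the sketch: the table's size is controlled by end users, not by the operator, which is the operational nightmare described above.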

2) IP multicasting represents a major security problem, since anyone can
produce major avalanches of bogons by sending them down the existing
multicast trees.

3) Not all links are created equal. A multicast stream suitable for a T-1 does
not fit into a 28.8 modem connection. This means that multiple streams at
different rates have to be transmitted simultaneously to accommodate different
conditions.
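A minimal sketch of what point 3 implies, assuming a simulcast scheme (the rate tiers here are invented for illustration): the source must transmit every tier at once, and each receiver subscribes to the highest tier its link can carry.

```python
# Hypothetical simulcast rate tiers, in kbps. Every tier must be sent
# simultaneously, multiplying the source's transmit load.
STREAM_RATES_KBPS = [16, 28, 56, 128, 512, 1500]

def best_fit_rate(link_kbps):
    # Pick the fastest stream tier that still fits the receiver's link.
    fits = [r for r in STREAM_RATES_KBPS if r <= link_kbps]
    return max(fits) if fits else None

print(best_fit_rate(28.8))   # 28   -> the modem user's tier
print(best_fit_rate(1544))   # 1500 -> the T-1 user's tier
```

Note that the aggregate sent by the source is the sum of all tiers, which is part of why heterogeneous links make multicast delivery awkward.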

4) There is no good way to make IP-level multicasting congestion-control-friendly.
Any provider brave enough to let multicasting into his backbone w/o strict rate
controls is asking for serious trouble.
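One common form of the "strict rate controls" mentioned above is a token bucket applied to a stream at the backbone edge. This is a generic illustration, not a description of any particular vendor's implementation, and the parameter values are made up:

```python
# Token-bucket policer: admit a multicast stream into the backbone
# only while it stays within its contracted rate and burst size.

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0   # credit accrual, bytes per second
        self.burst = burst_bytes     # maximum saved-up credit
        self.tokens = burst_bytes    # start with a full bucket
        self.last = 0.0

    def allow(self, packet_bytes, now):
        # Refill credit for elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True     # forward the packet
        return False        # drop: the stream exceeded its contract

tb = TokenBucket(rate_bps=128_000, burst_bytes=4000)
print(tb.allow(1500, now=0.0))   # True, within the initial burst
```

Since a multicast stream does not back off in response to loss the way a TCP flow does, a policer like this is about the only tool the provider has.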

So much for the technical feasibility of multicasting in real-life backbones. Now,
do we really need it?

1) There already are ubiquitous, cheap, high-bandwidth and technically feasible
multicast delivery systems - also known as the "boob tube", the "idiot box" and,
sometimes, "television".

The experience with TV illustrates that there's no shortage of transport capacity
to large audiences; there's a shortage of quality content. Due to the technical
problems, it is highly unlikely that multicasting can be used for large numbers
of small audiences (aka "oligocasting").

2) IP multicasting offers no significantly new services which could be attractive to
consumers. Reception is still simultaneous, and extracting a particular segment
requires waiting while the unwanted content plays past.

Are there any alternatives to multicasting? Yes, of course. The same service
can be provided by application-level caches run by backbone ISPs. Unlike
multicasting, caching allows non-simultaneous usage of the material (i.e. you
can imagine treating a newscast as a VCR tape - for example, backing up to view
a segment once again, etc). This makes "Internet TV" a completely different
medium from multicast TV - for example, it would kill "soundbite TV" of the
CNN style in favour of more detailed, in-depth stories, w/o sacrificing breadth
or immediacy of coverage. Just compare CNN.COM and CNN on TV - their Web
site is a lot more informative.

Caching does not break congestion control and routing in backbones. It allows
network operators to have finer-grained control of their traffic. In the most trivial
case, caching with a zero expiration time is equivalent to multicasting in terms
of traffic savings. On the other hand, intelligent use of cache preloads during
night hours would allow operators to reduce high-bandwidth canned-content
traffic significantly during the peak hours.
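The traffic-savings argument can be sketched with a minimal application-level cache (the class, the fetch callback, and the numbers are all my own illustration): repeated requests for the same canned content cost one upstream fetch instead of one per viewer.

```python
# Minimal application-level content cache at a backbone ISP.
import time

class ContentCache:
    def __init__(self, fetch, ttl_seconds):
        self.fetch = fetch            # upstream fetch function
        self.ttl = ttl_seconds
        self.store = {}               # url -> (expiry_time, content)
        self.upstream_fetches = 0     # backbone traffic counter

    def get(self, url, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(url)
        if entry and entry[0] > now:
            return entry[1]           # served locally: no backbone traffic
        self.upstream_fetches += 1    # one copy crosses the backbone
        content = self.fetch(url)
        self.store[url] = (now + self.ttl, content)
        return content

cache = ContentCache(fetch=lambda url: b"newscast", ttl_seconds=3600)
for _ in range(1000):
    cache.get("http://example.com/news", now=0.0)
print(cache.upstream_fetches)   # 1 -- versus 1000 unicast fetches
```

Unlike a multicast tree, the cache also serves latecomers (non-simultaneous viewers) from the same single upstream fetch, which is the "VCR tape" property described above.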

In other words - why would anyone ever need IP multicasting? I strongly suspect that
the whole IP multicasting hoopla is going the same way as ATM - a stupid idea propped
up by an enormous expenditure of research and development resources, ultimately
discarded in favour of more conservative and demonstrably working technology.

--vadim

Vadim,

I sympathize with all of your cogent comments when applied to multicasting
in the large. However, multicasting in the small, as practiced by certain
ISPs seems to violate all of your assumptions and thus violates your
conclusions. A specific example is UUcast. Comments?

Tony

Tony Li wrote:

> Vadim,
>
> I sympathize with all of your cogent comments when applied to multicasting
> in the large. However, multicasting in the small, as practiced by certain
> ISPs seems to violate all of your assumptions and thus violates your
> conclusions. A specific example is UUcast. Comments?

There are always niche applications. It is still silly to build a significant
chunk of complexity into the core network just to carry an insignificant amount
of traffic.

Tunnels are our friends.

--vadim

PS: I do not see how an isolated network violates "all assumptions". And I fail
to see why the same functionality (and a lot more) couldn't be implemented
in application-level host-based caches/mcast servers.

There's no particular reason to drag these functions into core gateways.

Those of you interested in this kind of thing will probably want to read
the article entitled "Cache and Carry Internet" at http://www.boardwatch.com

>> I sympathize with all of your cogent comments when applied to multicasting
>> in the large. However, multicasting in the small, as practiced by certain
>> ISPs seems to violate all of your assumptions and thus violates your
>> conclusions. A specific example is UUcast. Comments?

> There are always niche applications.

Agreed.

> It is still silly to build a significant chunk of complexity into the
> core network just to carry an insignificant amount of traffic.

OK, it's not clear to me that a niche application implies that the amount
of traffic is insignificant. If UUnet decides to deliver live real-time
CNN to all desktops at all of their customers, that IMHO might be
significant.

> PS: I do not see how an isolated network violates "all assumptions".

For example, you assumed (implicitly) that arbitrary systems could become
mcast sources. Clearly the UUnet model does not allow this. This makes
the problem tractable.

> And I fail to see why the same functionality (and a lot more)
> couldn't be implemented in application-level host-based
> caches/mcast servers. There's no particular reason to drag these
> functions into core gateways.

Once again, if the traffic volume is significant, there would be a
significant bandwidth savings.

Tony