Hi everyone,
Recently we had a good discussion about multicast use on the public Internet. It was pointed out that multicast is used mostly within enterprises. I wanted to understand what percentage of network traffic is multicast.
* Is there any data on what percentage of traffic is multicast? And if multicast were removed, how much unicast traffic would it add?
* Since this forum has people with deployment experience, I would love to know whether there are real deployment problems, or whether multicast is simply painful to deploy.
These questions are meant to feed work/discussion in the IETF on what the pain points for multicast are and how we can simplify it.
Multicast by itself does not reduce much bandwidth: the reduction depends
entirely on the network design.
If you place unicast nodes near your customers, multicast is effectively
unicast (just think about it).
As someone else remarked, part of this will depend on the type of network
you are profiling. One enterprise network may have critical internal
applications that depend on multicast to work, while another may have nothing
but the basic requirements of the network itself (e.g. IPv6 uses multicast
instead of broadcast for some network control information distribution).
There is at least one company that is using multicast for video switching, or in other words to replace HDMI switchers in rooms with video sources and displays.
They have devices that encode video from an HDMI input to a multicast stream.
And devices that receive a multicast stream and output the video from that stream to an HDMI output.
So you can have multiple cameras and a multicast stream for each camera is input into the network.
Then you can have a projector that can choose any of those multicast streams to display.
I believe the video is uncompressed.
The amount of multicast traffic on an enterprise network will depend greatly on how multicast is being used, and to some extent, the type of business the enterprise is in.
An enterprise that uses multicast primarily for IPTV distribution might have different business and technology drivers than, say, a hospital or healthcare organization that has patient monitors that use multicast to communicate back to a central monitoring station. The percentage of multicast traffic in those two scenarios might be vastly different, but no less important to their respective organizations.
This is almost never true; it's a rare exception rather than the common case.
The idea was that in IPv4 networks, ARP broadcasts waste bandwidth and
host CPU. To fix this, each IPv6 host subscribes to its own solicited-node
multicast group (covering a sufficiently small set of IPv6 addresses that
collisions are unlikely), so ND traffic does not have to be flooded to
hosts that don't need it.
But it turned out that supporting ~infinitely many multicast states is a
harder problem than pushing frames to all ports in hardware. So all
practical networks run IPv6 ND the same way as ARP.
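The mechanism described here is the solicited-node multicast group from RFC 4291: the low 24 bits of the unicast address are appended to the ff02::1:ff00:0/104 prefix, which is also where the ~16M possible groups mentioned later in this thread come from. A quick sketch:

```python
import ipaddress

def solicited_node_multicast(addr: str) -> str:
    """Solicited-node multicast group for an IPv6 address (RFC 4291):
    ff02::1:ff00:0/104 with the low 24 bits copied from the address."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    group = int(ipaddress.IPv6Address("ff02::1:ff00:0")) | low24
    return str(ipaddress.IPv6Address(group))

print(solicited_node_multicast("2001:db8::4242:1"))  # ff02::1:ff42:1
```

Since only the low 24 bits are used, a large L2 domain can populate up to 2**24 distinct groups, which is the state-explosion problem the posters above are describing.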
Another 'we fixed a problem in IPv6' that turned out to be worse
than the original problem and was quietly ignored in practical
networks.
I once worked for a financial futures broker-dealer where I implemented multicast, around 2009. They had one main application, a trading "screen" that traders and customers used to execute trades. I would guesstimate maybe 5-10% of the packets and bytes flowing over the network were multicast, depending on network conditions.
In terms of bandwidth savings, I'm not sure how much we saved. We had nine or ten participants using that particular application. However, they all worked on different desks, trading different products. The app was smart enough to send only the price feeds in which the user was interested. Assuming at least 50% of the users looked at the same price feeds 50% of the time, I'd say it saved about 25-50 meg.
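For what it's worth, that estimate can be reproduced as a back-of-envelope calculation. The 10 Mb/s per-user stream rate below is an assumption for illustration, not something stated in the post:

```python
participants = 10      # "nine or ten participants" in the post
per_user_mbps = 10     # assumed per-user feed rate (not stated in the post)
overlap_users = 0.5    # at least 50% of users watch the same feeds...
overlap_time = 0.5     # ...about 50% of the time

# If every feed were unicast, each interested user gets their own copy:
unicast_total = participants * per_user_mbps
# Multicast sends a shared feed once, so the overlapping copies are avoided:
saved = unicast_total * overlap_users * overlap_time
print(saved)  # 25.0, the low end of the poster's 25-50 meg estimate
```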
We also had one major exchange distributing price feeds via multicast. However, that feed was not routed on our network. Our systems plugged directly into exchange-provided switches for the feed.
The hurdles I had to overcome to implement multicast were:
* The learning curve for PIM. Deciding on the deployment model was difficult, as were the first few support calls. We wound up going with PIM-SM w/ BSR for RP selection.
* Vendor support for PIM on our gear. These were mainly troubles with PIM running on firewalls in high-availability mode.
If I had to do it over again, I wouldn't have bothered with multicast. It was a great opportunity and we learned a lot, but the app had a unicast mode of operation that would have worked perfectly fine for our purposes.
I work for an ISP now. We have decided not to support multicast on our network for now mainly because of the learning curve, and also because we simply don't see that much demand. Those two or three prospective customers that wanted it, wanted it for multi-site video conferencing on an MPLS VPN.
Let me fix that for you.
Using multicast in IPv6 grants us the ability to do more.
Today, this is worthless.
Will it be the same tomorrow?
The problem is, to handle the Neighbour Discovery design (16M multicast
groups), we need hardware that does not exist yet, is unlikely to
exist for at least another decade, and will not come down to reasonable
prices (< 5 kUSD) for even longer. In the meantime, IPv6 Neighbour
Discovery *precludes* the use of multicast for other purposes on
non-small networks, because IPv6 ND will use up all multicast groups.
If ND had limited itself to 256 multicast groups, it wouldn't have
been a problem.
Multicast without NDP is broadcast.
(With s/NDP/MLD/ as you yourself mentioned.)
That's the case for Ethernet. Hop over to the InfiniBand or Omni-Path
world, and multicast without MLD will cause packets to be dropped.
There is no such thing as broadcast on IB or OPA.
(At least Omni-Path does have something called "multicast LID sharing",
where the IPv6 ND groups are clamped down to just 512 distinct groups,
but I haven't read up on the details of how that works yet. I don't
know offhand if there is something similar for InfiniBand.)
A recent customer uses multicast to have the same packet arrive at
multiple destinations at the same time for resilience (their own
internal systems, not IPTV or media etc). Having just refreshed their
network for the next 5-10 years, it's not going away anytime soon.
I believe same-time delivery is the motivation for stock exchanges
too. One of the larger exchanges used MX and multicast, which of
course does btree or utree replication (in this case utree), which
makes delivery times vary considerably.
So the delivery-time differences to expect on different ports are
very much an implementation detail, and multicast is not by design
superior to unicast here.
Multicast was also required for earlier versions of VXLAN. But later versions of VXLAN only require unicast.
For the far future, it seems like Named Data Networking, Content Centric Networking, Information Centric Networking, Data Centric Networking, etc. all list multicast as a requirement or a fundamental part of their architecture.
And they would all be better served by using BIER.
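For context, BIER (RFC 8279) removes per-group state from the core: instead of routers holding a tree per group, the packet header carries a bitstring with one bit per egress router, and each hop splits the bitstring across its next hops. A minimal sketch of that forwarding idea, with made-up next-hop masks:

```python
def bier_forward(bitstring: int, neighbors: dict[str, int]) -> dict[str, int]:
    """Split a packet's egress bitstring across next hops.

    neighbors maps next-hop name -> mask of egress routers reachable via it
    (a toy stand-in for the Bit Index Forwarding Table of RFC 8279).
    Returns the bitstring to stamp on the copy sent to each next hop."""
    out = {}
    remaining = bitstring
    for nh, mask in neighbors.items():
        hit = remaining & mask
        if hit:
            out[nh] = hit       # this copy covers only these egress routers
            remaining &= ~hit   # each egress bit is served exactly once
    return out

# Egress routers A..D are bits 0..3; two next hops partition them.
copies = bier_forward(0b1011, {"nh1": 0b0011, "nh2": 0b1100})
print(copies)  # {'nh1': 3, 'nh2': 8}
```

The point of the argument above is that no multicast group state appears anywhere in this function: the membership lives in the packet, not in the routers.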
In terms of other Internet use, the BBC recently published this white
paper on the R&D efforts with HTTP Server Push/QUIC, part of which
describes an "experimental IP multicast profile of HTTP over QUIC".
The BBC publishes this list of multicast partners, but I'm not sure if
it's up to date or whether they still offer multicast outside of their
own network. iPlayer is unicast to the consumer, so do they offer
multicast iPlayer to these partners, or only for set-top-box services?
We're doing this as part of our work on moving the entirety of
broadcast, from camera to viewer, to IP. The broadcast industry is
going this way now (visit IBC or NAB and see), so you'll see plenty of
multicast inside their networks.
I've kept out of the multicast hate fest; it's a tool, and some tools
work better in some situations than others, though some may be a bit
old and blunt.
I don't think banishing multicast is the answer. It would be better to
fix the problems instead, if we don't want to sustain the content-based
balkanisation of the Internet by content rights holders and the eyeball
networks that support them to exclude competition.
Internet VOD is huge but there is little linear TV. VOD traffic is
driving standards development not linear TV, so there is no demand to
fix inter domain multicast. We think our iPlayer VOD service traffic is
quite large but it is only 5% of our viewing so linear TV is not dead
yet, especially for major events. We could scale our CDNs for this but
multicast does it more efficiently in some networks.
Our aim with MPEG-DASH is to have one standard for uni and multicast
streaming where clients transparently use whichever works. If an edge
network wishes to use multicast, as many do for IPTV, they can but they
may have to use unicast from the origin, we didn't want to be delayed
for another 10 years waiting for other networks to turn it (back) on.
The edge network does not have to roll out multicast 100% as it will
just be used wherever it happens to have been deployed.
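The transparent fallback described above can be sketched as client-side selection logic: try the preferred delivery method for the same DASH representation and use the first that works. This is a toy illustration, not the BBC's implementation; the probe callables stand in for whatever reachability check (e.g. an MLD join with a timeout, or an HTTP GET) a real client would run.

```python
from typing import Callable, Optional

def pick_source(sources: list[tuple[str, Callable[[], bool]]]) -> Optional[str]:
    """Return the name of the first source whose probe succeeds.
    Order expresses preference: multicast first, unicast origin as fallback."""
    for name, probe in sources:
        if probe():
            return name
    return None

# On a network where multicast hasn't been deployed, the join probe
# fails and the client transparently falls back to unicast:
chosen = pick_source([
    ("multicast", lambda: False),  # e.g. no segment arrived before timeout
    ("unicast", lambda: True),     # HTTP GET to the origin succeeded
])
print(chosen)  # unicast
```

Because both paths serve the same segment format, the edge network can deploy multicast piecemeal and clients simply use it wherever it happens to work.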
There have been standards for this, usually with tunnels. Using the
same DASH stream format means it's simpler to do this transparently,
giving networks more flexibility to handle the capacity issues when we
do a World Cup in IP only.
The latency problem remains: people don't like hearing the goal cheers
through the wall and then waiting for the goal to appear on their screen.
Not all new standards, such as forcing video over HTTP rather than
RTSP, are progress.