SLA for voice and video over IP/MPLS

Hello,

I am looking for industry-standard parameters on which to base the SLA
of a network with regard to voice, video and data applications.

What are the accepted values for jitter, delay, latency and packet
loss for voice, video and data in an IP/MPLS network?

Thanks

./diogo -montagner

I'd be looking at packet ordering too, perhaps for voice and especially
video; having the packets arrive in order makes a huge difference for video.

Hello,

I am looking for industry-standard parameters on which to base the SLA
of a network with regard to voice, video and data applications.

One won't find many, but a common rule of thumb is that most apps will
be 'fine' with networks that provide a 10E-6 BER or lower loss rate.

What are the accepted values for jitter, delay, latency and packet
loss for voice, video and data in an IP/MPLS network?

This question is being framed backwards -- an engineer should ask
what the particular codecs can tolerate, then seek out networks which
can deliver on those needs. If the a/v equipment vendor can't tell the
customer or user what sort of network is required, I recommend
selecting a new a/v vendor. In any event, audio codecs such as iLBC,
G.729, and G.722 have 'loss concealment' mechanisms in their decoders,
masking some reasonable amount of loss. This has been exhaustively
tested, and the data is readily available [0].

Video codecs that degrade gracefully are also fairly common, though
the industry focus seems to be on concealing loss for generic
real-time data, and offloading this work onto a different abstraction.
One example would be packetized 'forward error correction' schemes,
which can be configured or adapted to nearly arbitrarily 'high' loss
rates (e.g. "Pro-MPEG" [1] and related work). If the a/v system in
question can support FEC of any sort, then this should substantially
reduce one's transport-layer loss rate concerns.
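
To make the FEC idea concrete, below is a minimal sketch of single
XOR-parity recovery, the building block behind Pro-MPEG-style schemes
(the real COP#3 spec arranges packets into a row/column matrix and sends
a parity packet per row and per column). The function names, group size,
and data here are illustrative only, not any vendor's API.

```python
# Minimal sketch of single-parity packetized FEC: one XOR parity packet
# per group of equal-sized data packets. Can conceal one loss per group.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(group: list[bytes]) -> bytes:
    """Build one parity packet protecting a group of equal-sized packets."""
    parity = bytes(len(group[0]))
    for pkt in group:
        parity = xor_bytes(parity, pkt)
    return parity

def recover(group: list, parity: bytes):
    """Recover a group with at most one missing packet (None); else give up."""
    missing = [i for i, pkt in enumerate(group) if pkt is None]
    if len(missing) > 1:
        return None                      # more loss than this code can conceal
    if missing:
        rebuilt = parity
        for pkt in group:
            if pkt is not None:
                rebuilt = xor_bytes(rebuilt, pkt)
        group[missing[0]] = rebuilt
    return group

# Example: 4 data packets sent, packet 1 lost in transit, recovered from parity.
data = [bytes([i] * 8) for i in range(4)]
parity = make_parity(data)
received = [data[0], None, data[2], data[3]]
assert recover(received, parity) == data
```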

-Tk

[0]: Speech Codecs and Associated PSQM Values
[1]: http://www.ispa-sat.ru/info/Inside%20Pro-MPEG%20FEC%20(IBC)%20.pdf

Out of pure curiosity, have you ever gotten a reasonable answer when
asking a carrier about this? I can imagine a sales rep's brain
essentially exploding upon being asked. Additionally, 'the network' is
not 'the path my packets take' ... so what number are you really
getting here?

-Chris

Hi Chris,

I never got this answer.

Chris, Tim, Anton and Martin,

thank you for all the inputs. I really appreciate them.

Thanks
./diogo -montagner

I suspect you won't... at least not a reasonable/useful answer.

Anton,

Who uses BER to measure packet switched networks? Is it even possible
to measure a bit error rate on a multihop network where a corrupted
packet will either be discarded in its entirety or transparently
resent?

Regards,
Bill Herrin

Who uses BER to measure packet switched networks?

I do, some 'packet' test gear can, bitstream oriented software often will, etc.

Is it even possible
to measure a bit error rate on a multihop network where a corrupted
packet will either be discarded in its entirety or transparently
resent?

Absolutely -- folks can use BER in the context of packet networks,
given that many bit-oriented applications are often packetized. Once
processed by a bit, byte, or other message-level interleaving
mechanism and encoded (or expanded with CRC and FEC-du-jour), BER is
arguably more applicable. These types of packetized bitstreams, when
subjected to variable and sundry packet loss processes, may present
only a few bits of residual error to the application. I would argue
that in this way, BER and PER are flexible terms given (the OP's A/V)
context.

For example, if we have 1 bit lost in 1000000, that'd be ~1 packet
lost every 82 packets we receive, for an IP packet of 1500 bytes. More
importantly, this assumes we're able to *detect* a single bit error
(e.g. CRC isn't absolute, it's probabilistic). Such error expansion due
to packetization has the effect of making 10E-6 appear as if we lost
the nearest 11,999 bits as well. However, not all networks check L2
CRCs, and some are designed to explicitly ignore them--an advantage
given application-level data encoding schemes.

It follows that if 1 in ~82 packets becomes corrupted, regardless of a
CRC system detecting and dropping it, then we have a link no *better*
than 10E-6. If the CRC system detected an error, then it's possible
that >1 bit was corrupted. This implies that we can't know precisely
how much *worse* than 10E-6 the link is, since we're aggregated (or
limited) to a resolution of +/- 12k bits at a time.
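
To put numbers on that expansion, here is a quick back-of-envelope
calculation, assuming full 1500-byte packets and independent bit errors;
it lands at roughly one errored packet in every 83-84, in the same
ballpark as the ~1-in-82 figure used above.

```python
# Back-of-envelope: how a raw bit error rate expands into packet loss once
# any frame containing an errored bit is discarded whole.

BER = 1e-6                 # bit error rate under discussion
PACKET_BITS = 1500 * 8     # 12,000 bits per full-size IP packet

# Probability that a packet contains at least one errored bit.
p_pkt = 1 - (1 - BER) ** PACKET_BITS

print(f"per-packet error probability: {p_pkt:.4%}")   # ~1.19%
print(f"i.e. roughly 1 packet in {1 / p_pkt:.0f}")    # ~1 in 84
```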

-Tk

Who uses BER to measure packet switched networks?

I do, some 'packet' test gear can, bitstream oriented software often will, etc.

Hi Anton,

So... Not really, no.

You get a bit error on an Ethernet in the middle, the next router
flunks the Ethernet CRC and you never see the packet.

You get congestion in the middle, the router drops the packet and you
never see it.

You get a bit error on an 802.11 link in the middle, it retransmits and
you get a clean packet with a little jitter, maybe delivered out of order.

Point is, you don't get a measurement that looks like Bit Error Rate
because you don't have access to layer 1 and you see a very incomplete
layer 2. Evaluating an MPLS virtual circuit, you want metrics that
make sense for layer 3 in a packet switched network: loss at various
sizes, delay, jitter, packet order.
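
As a rough illustration of those layer-3 metrics, the sketch below
computes loss, average delay, a simplified RFC 3550-style jitter
estimate, and a crude reordering count from a list of probe records.
The record format and the sample data are made up for the example; this
is not the output of any particular test tool.

```python
# Sketch of layer-3 SLA metrics computed from active probe records:
# loss, one-way delay, interarrival jitter, and reordering.

def summarize(probes, sent_count):
    """probes: list of (sequence_number, send_time, recv_time), in arrival
    order, for the packets that actually arrived. Times are in seconds."""
    loss = 1 - len(probes) / sent_count

    delays = [recv - send for _, send, recv in probes]
    avg_delay = sum(delays) / len(delays)

    # Interarrival jitter in the spirit of RFC 3550: smoothed mean of the
    # absolute difference in transit time between consecutive packets.
    jitter = 0.0
    for prev, cur in zip(delays, delays[1:]):
        jitter += (abs(cur - prev) - jitter) / 16

    # Count packets that arrived after a packet with a higher sequence
    # number (a crude reordering metric).
    reordered = sum(
        1 for i, (seq, _, _) in enumerate(probes)
        if any(seq < earlier for earlier, _, _ in probes[:i])
    )

    return loss, avg_delay, jitter, reordered

# Example: 5 probes sent, 4 arrived, and packet 3 arrived before packet 2.
records = [(1, 0.00, 0.030), (3, 0.02, 0.052), (2, 0.01, 0.045), (5, 0.04, 0.071)]
loss, delay, jitter, reordered = summarize(records, sent_count=5)
print(f"loss={loss:.0%} avg_delay={delay*1000:.1f}ms "
      f"jitter={jitter*1000:.2f}ms reordered={reordered}")
```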

Don't take this the wrong way, but when someone starts asking me about
BER in the SLA on a packet switched network, the message I hear
is that they're asking to be lied to. Like when I describe DSL quality
in terms of the birds perched pooping on the lines. His mental model
for datacom is stuck in the '80s and I'll have to accommodate that if
I want to do business. And when he calls to complain that we owe him a
day's credit because of a high BER, he'll be the nice gentleman who we
humor because he pays his bills on time and the occasional service
credits are built into our price.

Loss. Delay. Jitter. Not BER. BER is the wrong tool for even
attempting to evaluate the end-to-end performance of an MPLS virtual
circuit.

For example, if we have 1 bit lost in 1000000, that'd be ~1 packet
lost every 82 packets we receive,

If you're losing 1 packet in 82, you're fired. Seriously, that's an
order of magnitude off even for tasks less demanding than VoIP and
streaming video. Doesn't matter if you flipped 1 bit or 20, 1.2%
packet loss one way (2.4% round trip) is way excessive. That's at the
level where you start to notice sluggish web browsing because of TCP's
congestion control algorithms.
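
For a rough sense of scale, the Mathis et al. approximation (per-flow
TCP throughput roughly MSS / (RTT * sqrt(loss)), constant factor
ignored) shows how quickly throughput collapses as loss rises; the RTT
and MSS below are assumed purely for illustration.

```python
# Rough illustration of why ~1.2% loss makes TCP feel sluggish, using the
# Mathis et al. throughput approximation with the constant factor dropped.

from math import sqrt

MSS = 1460 * 8        # bits per segment (assumed)
RTT = 0.050           # 50 ms round trip (assumed)

for p in (1e-4, 1e-3, 0.012):          # 0.01%, 0.1%, and ~1 in 82
    bps = (MSS / RTT) / sqrt(p)
    print(f"loss {p:.2%}: ~{bps / 1e6:.1f} Mbps per TCP flow")
```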

-Bill