RE: QOS or more bandwidth

FWIW, I recently heard someone ask the question - "how do you go to your
investors and tell them you need more money for more bandwidth because you
don't want to efficiently manage your existing capacity?"

This is the business case for QoS, IMHO.

Irwin

Whenever I did the cost of deploying and managing fancy QoS
and compared it with the cost of getting and managing more capacity,
it was always MUCH MUCH cheaper to get and manage more capacity
than to mess with more QoS.

        Other folks mileage might vary. I'd encourage folks in
that situation to fire up a spreadsheet and do the math. The
critical variable in my cases was accounting properly for the
increased ongoing operational costs of maintaining a QoS-enabled
network. Those turned out to be quite high.
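
The spreadsheet exercise can be sketched in a few lines; every figure below is a made-up placeholder to show the shape of the comparison, not a number from any real deployment:

```python
# Back-of-envelope TCO comparison: QoS rollout vs. simply buying capacity.
# All inputs are illustrative assumptions -- plug in your own quotes.

def total_cost(capex, monthly_opex, months=36):
    """Total cost over the planning horizon: up-front capex plus recurring opex."""
    return capex + monthly_opex * months

qos_cost = total_cost(capex=20_000, monthly_opex=4_000)       # design work plus ongoing tuning
capacity_cost = total_cost(capex=10_000, monthly_opex=1_500)  # bigger circuit, simpler operations

print(qos_cost, capacity_cost)  # the recurring opex term usually dominates
```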

Ran
rja@inet.org

    > Whenever I did the cost of deploying and managing fancy QoS
    > and compared it with the cost of getting and managing more capacity,
    > it was always MUCH MUCH cheaper to get and manage more capacity
    > than to mess with more QoS.

We did one VoIP network deployment, and I tried each of the different QoS
services in IOS at that time (about 18 months ago) both in the lab and in
the field, and more bandwidth was the answer then.

                                -Bill

I know of someone who is trying to run a VoIP system over a wireless
network. They are having limited success; some packet-scheduling magic
seemed to help, but last I checked they were still having issues with
dropped calls and the phone system constantly resetting. Is VoIP really
ready for businesses to rely on it completely in this way?

ok ok a little off topic :wink:

-Eric

I think you're asking the wrong question:

Is low-cost commodity wireless networking ready to support delay- and
jitter-critical applications?

I think the answer is no.

Bill Woodcock wrote:

    > > Whenever I did the cost of deploying and managing fancy QoS
    > > and compared it with the cost of getting and managing more capacity,
    > > it was always MUCH MUCH cheaper to get and manage more capacity
    > > than to mess with more QoS.
    >
    > We did one VoIP network deployment, and I tried each of the different QoS
    > services in IOS at that time (about 18 months ago) both in the lab and in
    > the field, and more bandwidth was the answer then.

Interesting. We have a national VoIP network which handles
long-distance calls for the Australian universities. It's
not a trial, it's a real VoIP rollout that interconnects
the PBXs of the universities. We think that's about
300,000 handsets.

More bandwidth doesn't cut it, as the voice calls then
fail during DDoS attacks upon the network infrastructure.

It's not too hard to fill even a 2.5Gbps link when new
web servers come with gigabit interfaces and GbE
campus backbones are being rolled out. If these DDoS
attacks use protocols like UDP DNS, then traffic-shaping
the protocol is problematic, and the source IP subnet
from which the attack is launched needs to be filtered
instead. Just finding and filtering a DDoS source
can easily take more than five minutes, which pushes
availability below 99.999% and leads to
legal issues with telecommunications regulators about
access to emergency services.
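
The five-nines arithmetic behind that claim is easy to check: 99.999% availability allows only about 5.26 minutes of downtime per year, so a single mitigation that takes more than five minutes nearly exhausts the entire annual budget.

```python
# Downtime budget implied by "five nines" (99.999%) availability.
SECONDS_PER_YEAR = 365 * 24 * 3600        # 31,536,000 (non-leap year)

allowed_downtime_s = SECONDS_PER_YEAR * (1 - 0.99999)
print(round(allowed_downtime_s / 60, 2))  # about 5.26 minutes per year
```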

We found Cisco's low-latency queuing (LLQ) to be adequate.
It still has a fair amount of jitter, but not enough to
matter for VoIP calls with a network diameter of 4,000 km.
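
For reference, LLQ follows the usual IOS MQC pattern; a minimal sketch along these lines (class names, the priority bandwidth, and the interface are illustrative, not our production configuration):

```
class-map match-all VOICE
  match ip dscp ef
!
policy-map WAN-EDGE
  class VOICE
    priority 256          ! strict-priority queue; here 256 kbps reserved for voice
  class class-default
    fair-queue
!
interface Serial0/0
  service-policy output WAN-EDGE
```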

We do have issues with Cisco's 7500: dCEF is still problematic,
but it's needed for the LLQ feature. The QoS features are
too tied to the hardware (you can't configure a
service-policy that runs on either the VIP or the main
CPU, depending upon the hardware). Despite QoS being
most needed on cheap E1/T1 links, Cisco expects you to
upgrade to a VIP4 costing many thousands of dollars to support QoS.

We police access to QoS by source IP address, mapping
non-conforming traffic to DSCP=0. As this requires an
access list to be executed for each packet, it limits the
number of VoIP-speaking connecting sites to 50 or so, and
requires H.323/SIP proxies at the edge of the sites.
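
A minimal sketch of that policing arrangement in IOS MQC terms (the subnet, rate, and names are illustrative only, not our running configuration):

```
access-list 100 permit ip 192.0.2.0 0.0.0.255 any   ! a permitted VoIP site (example subnet)
!
class-map match-all VOIP-SITES
  match access-group 100
!
policy-map EDGE-IN
  class VOIP-SITES
    police 512000 conform-action transmit exceed-action set-dscp-transmit 0
  class class-default
    set ip dscp 0         ! traffic from non-conforming sources loses its QoS marking
!
interface Serial0/0
  service-policy input EDGE-IN
```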

We don't use IntServ. When a link is down, RSVP
takes so long to find an alternative path that
the telephone user hangs up.

So what we are really waiting for is the implementation
of the combined IntServ/DiffServ model so that hosts on
member networks can do local authentication and bandwidth
reservation and we can police the amount of QoS traffic
presented at each edge interface.

"Local authentication and bandwidth reservation" also
hides a host of issues that are yet to be fully addressed.
Most universities don't even *have* an authentication
source that covers everyone that can use IP telephony.

In practice, even with our deliberately simplistic implementation,
this whole area is an IOS version nightmare. There's no excuse
for a monolithic statically-linked executable in this day and
age. Hopefully Juniper's success with a monolithic kernel and
user-space programs will lead Cisco to leapfrog
the competition and adopt the on-the-fly upgradable software
modules described in Active Bridging (1997).

Having dissed Cisco, I should point out that their H.323/ISDN
gateway software that runs on their RAS boxes (5300, etc.) was
the most solid of all the manufacturers we tested, and Cisco was
the vendor most willing to fix the differing interpretations of
ISDN we encountered when we connected the North American-developed
RAS to our European-developed PABXs (Ericsson MD110, Alcatel, etc.).