T1 Circuit actual throughput 1290Kbps

Does anybody have experience with a T1 circuit with PPP
encapsulation getting only 1290 Kbps maximum throughput, as seen
in "sh int" output on a Cisco router or in MRTG?

This is the explanation our upstream provider gave us:

You have a 1.536Mbps port. However, there is the overhead from
PPP and the translation overhead which takes place in all circuits.
Judging by your settings that limit ends up somewhere between
1.3 and 1.4. This overhead would be the non-data portion of cells
or frames for example. For example, you might have 1.3 Mbps of
data which gets framing or cell information appended onto it before
sending taking up additional bandwidth. It is to be expected in all
circuits.

Thank you for your help.

Tony S. Hariman
http://www.tsh.or.id
Tel: +62(21)574-2488
tonyha@compuserve.com

> Does anybody have experience with a T1 circuit with PPP
> encapsulation getting only 1290 Kbps maximum throughput, as seen
> in "sh int" output on a Cisco router or in MRTG?
>
> This is the explanation our upstream provider gave us:
>
> You have a 1.536Mbps port. However, there is the overhead from
> PPP and the translation overhead which takes place in all circuits.
> Judging by your settings that limit ends up somewhere between
> 1.3 and 1.4. This overhead would be the non-data portion of cells
> or frames for example. For example, you might have 1.3 Mbps of
> data which gets framing or cell information appended onto it before
> sending taking up additional bandwidth. It is to be expected in all
> circuits.

That is.... well, not correct. (Please insert favorite euphemism for "not
very smart" in regards to this upstream.) First of all, PPP overhead is
not even close to 200 Kbps on a T1. Second of all, I believe MRTG includes
the overhead in its graphs. Could someone please correct me on that if
I'm wrong? And lastly, there are no "cells" in PPP. If you are doing ATM
over that T1, you would have cells. But you say you are doing PPP, not
ATM. Besides, I think you'd get less than 1.3 Mbps - probably more like
1.1 or 1.0.

According to my MRTG graphs, the most I've ever gotten on a PPP
encapsulated link is 1521.4 kb/s. I think that's a bit higher than your
upstream told you is possible. :) This is a Cisco talking to a Bay router
(hence the PPP encap as opposed to HDLC). So tell your upstream he's full
of it.


TTFN,
patrick

> Does anybody have experience with a T1 circuit with PPP
> encapsulation getting only 1290 Kbps maximum throughput, as seen
> in "sh int" output on a Cisco router or in MRTG?

Oh, around 1536 :)

> This is the explanation our upstream provider gave us:
>
> You have a 1.536Mbps port. However, there is the overhead from
> PPP and the translation overhead which takes place in all circuits.
> Judging by your settings that limit ends up somewhere between
> 1.3 and 1.4. This overhead would be the non-data portion of cells
> or frames for example. For example, you might have 1.3 Mbps of
> data which gets framing or cell information appended onto it before
> sending taking up additional bandwidth. It is to be expected in all
> circuits.

Hehe, find a new ISP. There is some overhead with PPP, but it is
nowhere near that amount. More likely your ISP does not have enough
upstream capacity to deliver your full line rate.

> Thank you for your help.

No problem.


Nathan Stratton Telecom & ISP Consulting
www.robotics.net nathan@robotics.net

It also depends on whether the line encoding is AMI or
B8ZS. If it is AMI, you'll start out with 1340k, not
1536k, if I'm not mistaken.

Then you can factor in any type of Layer2 or Layer3
encapsulation overhead.

- paul

ferguson@cisco.com (Paul Ferguson) writes:

> It also depends on whether the line encoding is AMI or
> B8ZS. If it is AMI, you'll start out with 1340k, not
> 1536k, if I'm not mistaken.

AMI with 24 channels is 1344000bps (56000 by 24). (A continuing
crisis with many LECs is that data links are set up as AMI/D4, when
there's usually no reason not to run B8ZS/ESF. When our drop was put
in, the technician installed the B8ZS/ESF line, then proceeded to
configure the CSU/DSU for AMI/D4. Sigh.)
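
As a quick sketch of the raw channel arithmetic (assuming the usual
56k-per-DS0 restriction for AMI/D4 and clear-channel 64k for
B8ZS/ESF):

    # Back-of-the-envelope DS0 math for a 24-channel T1 (illustrative only).
    CHANNELS = 24
    AMI_PER_DS0 = 56_000    # bps per DS0 with AMI/D4 ones-density restrictions
    B8ZS_PER_DS0 = 64_000   # bps per DS0 with B8ZS/ESF clear channel

    print(CHANNELS * AMI_PER_DS0)    # 1344000 -- matches the AMI figure above
    print(CHANNELS * B8ZS_PER_DS0)   # 1536000 -- the full T1 payload rate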

"Patrick W. Gilmore" writes:

> That is.... well, not correct.

It certainly has incorrect elements, but their response is correct in
general -- there exists overhead, in some cases substantial overhead.

> PPP overhead is not even close to 200 Kbps on a T1.

Without information on the traffic being handled, this statement
cannot be supported. I agree that it's doubtful.

> I believe MRTG includes the overhead in its graphs.

MRTG has no "smarts" in this regard. Whether or not overhead is
included is determined by the agent. The agent responds according
to a standard, private or public, which should, and usually does,
specify whether overhead is to be included. Since I don't know which
MIB is being polled here, I can't say whether overhead is included.
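
As a minimal sketch of what an MRTG-style poller actually does
(assuming the usual ifInOctets/ifOutOctets style 32-bit counters and
a 300-second interval), the rate on the graph is just the counter
delta converted to bits per second, at whatever layer the agent
happens to count:

    # Minimal sketch: turn two SNMP octet-counter samples into a rate,
    # the way MRTG does.  Whether L2 overhead shows up depends entirely
    # on what the agent folds into the counter, not on MRTG.
    def rate_bps(prev_octets, curr_octets, interval_s=300):
        delta = (curr_octets - prev_octets) % 2**32   # 32-bit counter wrap
        return delta * 8 / interval_s

    print(rate_bps(123_456_789, 171_456_789))  # -> 1280000.0 bps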

> there are no "cells" in PPP.

It's certainly possible that cells are being used, but their use should
not impact the delivered capacity.

> According to my MRTG graphs, the most I've ever gotten on a PPP
> encapsulated link is 1521.4 kb/s.

That's the sort of number we see as well on a general traffic mix.

mlm@ftel.net (Mark Milhollan) writes:

"Patrick W. Gilmore" writes:
>PPP overhead is not even close to 200 Kbps on a T1.

Without information on the traffic being handled this statement cannot
be supported. I agree that its doubtful.

Actually, we can probably do better than that. For HDLC, the worst
case is all-ones user data, which expands by 6/5, plus 7 bits per
packet of shared flags. The PPP overhead, assuming no header
compression, is four bytes of header plus two bytes of CRC per
packet. Now, if we assume 256-byte packets (not a bad assumption
for an average, given IP traffic measurements I've seen) with
worst-case data, the user portion will expand from 2048 to 2458
bits, and the HDLC/PPP overhead adds 55 bits. Thus, our efficiency
is .815, or 284Kbps of overhead.

With random data, rather than worst-case data, things are much
better. The expansion is 161/160 on random data, which means that
our 2048 bits of data go out as 2061 encoded bits. The efficiency
is .968, or 49Kbps of overhead.
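
A minimal sketch that replays that arithmetic, using the same
assumptions (256-byte packets, 4-byte PPP header, 2-byte CRC, 7 bits
of shared flag):

    # Replay the HDLC bit-stuffing estimates above (illustrative only).
    T1_BPS = 1_536_000
    PKT_BITS = 256 * 8                # 2048 bits of user data per packet
    PPP_HDLC_BITS = (4 + 2) * 8 + 7   # PPP header + CRC + shared flag = 55 bits

    def overhead(expansion):
        on_wire = PKT_BITS * expansion + PPP_HDLC_BITS
        efficiency = PKT_BITS / on_wire
        return efficiency, T1_BPS * (1 - efficiency)

    print(overhead(6 / 5))       # worst case:  ~0.815, ~284 Kbps of overhead
    print(overhead(161 / 160))   # random data: ~0.968, ~49 Kbps of overhead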

> > there are no "cells" in PPP.
>
> It's certainly possible that cells are being used, but their use should
> not impact the delivered capacity.

(!) The cell tax is about 10% -- the SAR expands user data from 48
bytes to 53 bytes, plus an additional amount of overhead for internal
fragmentation on the final cell (AAL-5), plus overhead for whatever
encapsulation mode is being used.

On a T1, you'd be lucky to get away with only 154Kbps wasted in cell
overhead, let alone any of the L2 stuff.
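
The cell-tax arithmetic, as a rough sketch that ignores the AAL-5
trailer and final-cell padding (so the real figure is somewhat
worse):

    # Rough ATM cell tax on a T1 (illustrative; ignores AAL-5 padding/trailer).
    T1_BPS = 1_536_000
    PAYLOAD, CELL = 48, 53            # payload bytes per 53-byte cell

    tax = 1 - PAYLOAD / CELL          # ~9.4% lost to cell headers alone
    print(tax, T1_BPS * tax)          # ~0.094, ~145000 bps before padding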