New MAE-EAST

Perhaps he is referring to latencies that some believe are incurred by ATM
'packet shredding' when applied to typical data distributions encountered on
the Internet, which fall between the 53-byte ATM cell size and any whole
multiple thereof?

Some reports that I have seen show a direct disadvantage for data where a
large portion of 64-byte TCP ACKs, etc. are inefficiently split across two
53-byte ATM cells, wasting a considerable amount of 'available' bandwidth;
i.e., one 64-byte packet is SAR'd into two 53-byte ATM cells, wasting 42
bytes of space. If a large portion of Internet traffic followed this model,
ATM might not be a good solution.
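
Just to make the arithmetic concrete, here is a quick back-of-envelope
sketch (my own illustration, not taken from those reports; it assumes
48-byte cell payloads with 5-byte headers and ignores the AAL5 trailer
for simplicity):

    # Sketch of the ATM "cell tax" on one packet: 48-byte payloads,
    # 5-byte headers per cell, AAL5 trailer ignored for simplicity.
    import math

    CELL_PAYLOAD = 48   # usable bytes per ATM cell
    CELL_SIZE = 53      # payload plus 5-byte header

    def cell_tax(packet_bytes):
        """Return (cells needed, bytes wasted on the wire) for one packet."""
        cells = math.ceil(packet_bytes / CELL_PAYLOAD)
        return cells, cells * CELL_SIZE - packet_bytes

    print(cell_tax(64))   # -> (2, 42): two cells, 106 wire bytes carrying 64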

I am not an authority on this position, so feel free to dispute it, but
perhaps that is one latency factor to which he was referring?

> Paul,
>
> I have not spoken with you before, so I do not know if your
> posting below is meant in a literal, nonfacetious manner.

It was; however, not having spoken to me would have given you a better chance
of getting that right. ;-)

>
>> > With all of the problems with MAE-EAST.....
>> >
>> > Any plans from anyone to create a ATM exchange point in the DC area?
>
> For what it's worth, I do understand that there is a plan to create
> an ATM exchange point in the DC area, at speeds exceeding those
> currently available.
>
>> Given the latency we've seen over some ATM backbones,
>
> The latency increase in network areas that are switched is generally
> held (by all but the zealots) to be less than that of comparable
> layer-three data-moving topologies.
>
> The latency induced by several providers claiming an ATM backbone
> is generally attributable to an error: they leave off one important
> word -- shared -- . The latency about which I assume you speak is
> caused by large amounts of queuing. This queuing is demanded by network
> oversubscription. The latency introduced by the oversubscription
> is consistent with any oversold network.
>

I'm sure that's part of it; I initially saw a lot of dropped packets
through a couple of ATM clouds. I'm seeing some improvement from some of
the providers; however, given the trumpeting of ATM (magic-bullet
syndrome), it seems that it's just not something which happens correctly
by default. Before we go off on the 'nothing happens correctly by
default' tangent, it's just been my general observation that whenever
my packets have transited ATM, my latency has been less than
ideal. I would have figured that oversubscription would show up more as
lost packets and timed-out connections (which were also seen, but more
easily screamed about) than as latency, but I guess that's a function of
how oversubscribed the line is.

Perhaps he is referring to latencies that some believe are incurred by ATM
'packet shredding' when applied to typical data distributions encountered on
the Internet, which fall between the 53-byte ATM cell size and any whole
multiple thereof?

Some reports that I have seen show a direct disadvantage for data where a
large portion of 64-byte TCP ACKs, etc. are inefficiently split across two
53-byte ATM cells, wasting a considerable amount of 'available' bandwidth;
i.e., one 64-byte packet is SAR'd into two 53-byte ATM cells, wasting 42
bytes of space. If a large portion of Internet traffic followed this model,
ATM might not be a good solution.

This was my preliminary guess. I expect it'll be the middle of next year
before we start playing with ATM internally, if that soon. Once I get it on a
testbed, I'll know for sure where the issues lie. Is there a good place
to dig up this stuff, or am I doomed to sniffers and diagnostic code?

It'll be a couple of months until I start gathering latency stats again.

Paul

I'm sure that's part of it; I initially saw a lot of dropped packets
through a couple of ATM clouds. I'm seeing some improvement from some of
the providers; however, given the trumpeting of ATM (magic-bullet
syndrome), it seems that it's just not something which happens correctly
by default. Before we go off on the 'nothing happens correctly by
default' tangent, it's just been my general observation that whenever
my packets have transited ATM, my latency has been less than
ideal. I would have figured that oversubscription would show up more as
lost packets and timed-out connections (which were also seen, but more
easily screamed about) than as latency, but I guess that's a function of
how oversubscribed the line is.

We run ATM between POPs over our own DS3, simply because it gives us the
ability to divide it flexibly into multiple logical channels. Right now,
we don't need all of it, so I'm not concerned about only getting 34 Mbps of
payload data across the DS3. When we get closer to that, we may need to
investigate other solutions.
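
For what it's worth, a rough sketch of where a figure like that comes from
(assuming PLCP framing on the DS3, which carries 12 cells per 125-microsecond
frame; AAL5 trailers and cell padding then pull a typical traffic mix down
from this ceiling toward the mid-30s):

    # Rough ceiling on ATM cell payload over a DS3 with PLCP framing.
    CELLS_PER_SEC = 12 * 8000      # 12 cells per 125 us frame = 96,000 cells/s
    CELL_PAYLOAD_BYTES = 48

    ceiling_mbps = CELLS_PER_SEC * CELL_PAYLOAD_BYTES * 8 / 1e6
    print(f"cell-payload ceiling: {ceiling_mbps:.1f} Mbps")   # ~36.9 Mbps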

What we see on a 250-mile DS3 running ATM is 8 ms RTT (ICMP echoes), never
varying. I don't have a similar-mileage circuit running HDLC or PPP over
DS3 to compare with, but assuming a propagation speed of 0.7c, the round-trip
time just to cover the distance is 3.8 ms. Adding the various repeaters and
mux equipment along the way, then going through our ATM switches on each end
and to a router on each end and the processing there, that doesn't sound
bad to me. We may also add voice circuits across the link at some point.
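
A quick sanity check on the propagation component (assuming 250 route-miles
and propagation at 0.7c; switch, router, and SAR delays are on top of this):

    # Propagation-only round-trip time for a 250-mile path at 0.7c.
    C_KM_PER_MS = 299792.458 / 1000   # speed of light in km per millisecond
    MILES_TO_KM = 1.609344

    def propagation_rtt_ms(route_miles, velocity_factor=0.7):
        one_way_ms = route_miles * MILES_TO_KM / (C_KM_PER_MS * velocity_factor)
        return 2 * one_way_ms

    print(f"{propagation_rtt_ms(250):.1f} ms")   # ~3.8 ms round trip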

> Perhaps he is referring to latencies that some believe are incurred by ATM
> 'packet shredding' when applied to typical data distributions encountered on
> the Internet, which fall between the 53-byte ATM cell size and any whole
> multiple thereof?
>
> Some reports that I have seen show a direct disadvantage for data where a
> large portion of 64-byte TCP ACKs, etc. are inefficiently split across two
> 53-byte ATM cells, wasting a considerable amount of 'available' bandwidth;
> i.e., one 64-byte packet is SAR'd into two 53-byte ATM cells, wasting 42
> bytes of space. If a large portion of Internet traffic followed this model,
> ATM might not be a good solution.

This was my preliminary guess. I expect it'll be the middle of next year
before we start playing with ATM internally, if that soon. Once I get it on a
testbed, I'll know for sure where the issues lie. Is there a good place
to dig up this stuff, or am I doomed to sniffers and diagnostic code?

That shouldn't significantly affect latency, but it does waste bandwidth.
With a 5-byte header per ATM cell, you already waste about 9% of the line
rate to overhead, and then you have AAL5 etc. headers on top of that. Nobody
is saying that ATM is the best solution for all things, but you do get
something for the extra overhead -- the ability to mix all types of traffic
over a single network, and for the allocation of bandwidth to those types of
traffic to be done dynamically in a stat-mux fashion. If you have enough
traffic of the various types that you can justify multiple circuits for each
type, then there is less justification for using ATM.
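
To put some rough numbers on that (my own sketch, assuming a 5-byte header
per 53-byte cell and an 8-byte AAL5 trailer; the header share alone is the
roughly 9% mentioned above, and padding adds more for small packets):

    # ATM + AAL5 efficiency for a few packet sizes.
    import math

    CELL_PAYLOAD, CELL_HEADER = 48, 5
    AAL5_TRAILER = 8

    def efficiency(packet_bytes):
        """Fraction of wire bytes that are actual packet payload."""
        cells = math.ceil((packet_bytes + AAL5_TRAILER) / CELL_PAYLOAD)
        return packet_bytes / (cells * (CELL_PAYLOAD + CELL_HEADER))

    print(f"header tax alone: {CELL_HEADER / (CELL_PAYLOAD + CELL_HEADER):.1%}")
    for size in (40, 64, 576, 1500):
        print(f"{size:5d}-byte packet: {efficiency(size):.1%} efficient")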

There was another comment about wanting to use a larger MTU at a NAP, which
confused me. What benefit is gained by having a large MTU at the NAP if
the MTU along the way (such as at the endpoints) is lower, typically 1500?

John Tamplin Traveller Information Services
jat@Traveller.COM 2104 West Ferry Way
205/883-4233x7007 Huntsville, AL 35801

Perhaps he is referring to latencies that some believe are incurred by ATM
'packet shredding' when applied to typical data distributions encountered on
the Internet, which fall between the 53-byte ATM cell size and any whole
multiple thereof?

I'm going to rant a little. Sorry, Al, but it was you repeating something
allegedly BAD about ATM that ATM promoters once used to say was GOOD --
well, it's just too funny and too ironic to pass up.

One of the advantages of ATM as touted by ATM bigots in the early days was
"cell interleaving". When two "packets" meet at an intermediate ATM node,
their cells interleave as they are switched through. This reduces the
per-hop latency of an ATM network over a frame network by something on the
order of microseconds for large packets. An idiotic marketing-initiated
"advantage" that I used to make fun of when ATM marketers would trot it out.

Now you tell me that ATM segmentation probably increases latency because
the modulo-48-byte payload causes the extra padding bytes on some packets
to "take a long time" to be forwarded? On the order of picoseconds. An
idiotic "what else can we think of that's wrong with ATM"
engineering-initiated disadvantage.

And if we could remember what we were actually talking about -- an ATM
switch for an exchange point and not an ATM network -- we can see that none
of this matters, except to show how we know that ATM is Just Bad and we
would never do that.

Some reports that I have seen show a direct disadvantage for data where a
large portion of 64-byte TCP ACKs, etc. are inefficiently split across two
53-byte ATM cells, wasting a considerable amount of 'available' bandwidth;
i.e., one 64-byte packet is SAR'd into two 53-byte ATM cells, wasting 42
bytes of space. If a large portion of Internet traffic followed this model,
ATM might not be a good solution.

The TCP ACKs are 40 bytes long, and if you aren't trying to solve too many
problems at once, you can use an encapsulation that will fit a 40-byte TCP
ACK in a single cell. There isn't a way to stuff a 64-byte packet into a
48-byte payload. Is that a problem? Only if you have a lot of 64-byte
datagrams, which you don't, because the ACKs are 40 bytes long. I have
actually looked at some Internet traffic distributions to see how big a
problem this isn't.
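
A small sketch of the encapsulation point, if it helps (assuming AAL5's
8-byte trailer; null/VC-multiplexed encapsulation adds nothing per packet,
while an LLC/SNAP-style encapsulation adds another 8 bytes):

    # Cells needed for a packet under AAL5, with optional encapsulation bytes.
    import math

    CELL_PAYLOAD = 48
    AAL5_TRAILER = 8

    def cells_needed(packet_bytes, encap_bytes=0):
        return math.ceil((packet_bytes + encap_bytes + AAL5_TRAILER) / CELL_PAYLOAD)

    print(cells_needed(40))                  # VC-mux:   1 cell  (40 + 8 = 48)
    print(cells_needed(40, encap_bytes=8))   # LLC/SNAP: 2 cells (40 + 8 + 8 = 56)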

There is no point agreeing with the Big Backbone Network Engineers that the
MAEs suck. It is in their best interest that the MAEs suck, that the CIX is
crippled, that you aren't bugging them to plug into a high-perf exchange, and
that you, the little ISP, go out of business soon. THEY have private
interconnects which you can't join. Find a co-lo where you can
cross-connect without being robbed, or build your own NAP; just don't use
DEC-designed Gigaswitches and FDDI. Use a full-duplex 100 Mbps Ethernet
switch, or find an old Fore switch cheap.

--Kent

Kent W. England President and CEO
Six Sigma Networks Experienced Internet Consulting
1655 Landquist Drive, Suite 100 Voice/Fax: 760.632.8400
Encinitas, CA 92024 mailto:kwe@6SigmaNets.com
PGP Key-> http://keys.pgp.com:11371/pks/lookup?op=get&search=0x6C0CDE69

At the risk of litigation, Kent makes a good point here: how much of
the problem we see is engineering-based, and how much is (let's say
it softly) political?

This is right on the edge of on-topic for the list; construct your replies
carefully, or cross-post to nodlist, per the Reply-To.

Cheers,
-- jra

"Kent W. England" <kwe@6SigmaNets.com> writes:

There is no point agreeing with the Big Backbone Network Engineers that the
MAEs suck. It is in their best interest that the MAEs suck,

Hm. Well, it depends on how deep you want to get into
conspiracy theories, of course.

If there is a way to sell normalized services over an IXP
such that the costs-versus-revenue split is not
significantly worse than offering normal services over
non-IXP technology, then there is no reason to dislike
IXPs. There is technology to offer normalized services
now, and apparently it is being put to some use. How much
this is thought of depends on how holistically you want to
view organizational financial structure and cash flow.

A facilities-based telco at an IXP that it does not itself
operate is less likely to be thrilled by the thought of the
relatively small Internet access fees available there, in
comparison to the potentially very lucrative business that
can be obtained by bundling Internet access as sugar to
secure a more comprehensive account.

(Hi, we're a telco. Buy our long distance and use our WAN
outsourcing services and we will give you nearly free
Internet connectivity. -- This is difficult to do at an
IXP...)

On the other hand, an organization that is running a large
IXP and can do so without losing money -- for instance,
when they are provided with a captive and unwilling market
thanks to government pressure -- probably will be very
fond of IXPs, particularly if the infrastructure being
paid for by the IXP participants can be turned into an
aggregation point for their own Internet service offerings
(and possibly, in the case of LECs who run IXPs, bundled
in with other services like inter-campus WANs or even VPN
telephony).

(Hi. Welcome to our ATM switch. Did you know that we can
make some VCs between you and University of XYZ, not to
mention that we can offer a whole range of unregulated
services thanks to this wonderful new NAP technology.)

The thing about Internet Engineers is even the most evil
greedy bastardlike ones mostly seem to want the Internet
to work. Relying on IXPs which are bursting at the seams
technologically and physically seems really dangerous.

That some exchanges (I note you say MAEs) suck is a
side-effect of their popularity. Keeping that popularity
from exposing scaling problems beyond the IXPs is an
intelligent design goal, which also may have convenient
financial implications.

the CIX is crippled,

The CIX ceased to have any real function when ANS CO+RE capitulated.

Now that Rick Adams has finally eaten Al Weis's lunch, the
continued existence of the CIX is almost a joke. Sorry,
Bob and John.

you aren't bugging them to plug into a high perf exchange, and
that you, the little ISP, go out of business soon.

Um, interesting theory. Given some statistics on where
traffic loads are and who seems to present what amount of
aggregate traffic in the USA, I am not sure it's really
tenable though.

THEY have private interconnects which you can't join.

How do you "join" a point-to-point circuit?

My position on these private interconnects is that each
party is offering some degree of connectivity and that
normal business negotiations on the price of those
services determine who pays whom what amount, if a deal
is to be made at all. This is entirely like a negotiation
on pricing done between any two entities on the Internet.
Peering, 102: it's exactly the same as any other deal on
connectivity. (cf. Vadim Antonov's question two years and
change ago, "does anyone ever actually pay list prices?")

Most of these private point-to-point circuits are
negotiated in circuit pairs, with each party paying for
one out of every two circuits. Long discussions sometimes
happen about who should pay for which of a pair of
circuits and whether there should be some further
consideration (financial or otherwise) even within a
contractual framework that is geared to make this sort of
thing straightforward.

Find a co-lo where you can cross-connect without being
robbed, or build your own NAP; just don't use
DEC-designed Gigaswitches and FDDI. Use a full-duplex 100
Mbps Ethernet switch, or find an old Fore switch cheap.

This is good advice except that there are MTU implications
wrt Ethernet that need considering. Personally I am
hoping that people start fixing little things like the old
decision to assign low default MTUs and MSSes to remote
things. Something useful to consider is that there is no
difference between a properly designed router and a
properly designed switch, and that given a
next-hop-resolution scheme (tunnels, tag switching, you
name it) a router acting as a switch (a "srouter") only
needs to know about its immediate adjacencies in order for
connectivity to work.

  Sean.

"Jay R. Ashworth" <jra@scfn.thpl.lib.fl.us> writes:

At the risk of litigation, Kent makes a good point here: how much of
the problem we see is engineering-based, and how much is (let's say
it softly) political?

A great deal of the Internet's evolution has been affected
in the past by a number of strong personalities, each of
whom had her or his own set of political beliefs. There
is probably no aspect of the Internet which is untouched
by this observation.

To some extent ALL of the problems at MAE-EAST and
MAE-WEST are political, which is unsurprising, as they
were both born out of politics. The first was created as
a somewhat practical, somewhat political action against
unfair ENSS access terms; the second was created as a
political action against bad NAP design, PAC*Bell, the ATM
heads at Bellcore, and probably the NSF as an agency.

The names themselves come from Andrew Partan, one of those
people with strong personalities and technical acumen.

MAE-EAST and MAE-WEST are exploding because they are
victims of success. They completely blew the official
alternatives out of the water, to the extent that ANS and
the ATM NAP operators are generally seen these days as
Also-Rans. Unfortunately, they are in danger of blowing
themselves out of the water too, thanks to the
difficulties of scaling to meet demand.

The history of MAE-EAST's technical evolution is amusing.
There have been enormous problems in the past which have
led to threats of complete withdrawal by the initial
parties, and occasional partial withdrawals. The current
trend towards using private point-to-point links is really
not much different from, for example, the SWAB (a reaction
to the MAE distributed Ethernet not working under
load, and MFS taking a long time to figure out how to
address the problem properly), except that it was better
thought-out than that was, and considerably more popular.

To be brutal (who, me?), I think that the people who
scream "this is purely (or even primarily) political not
technical!" about decisions which clearly favour an NSP's
stability and technical survivability are those people who
also have very strong personalities but lack the technical
acumen to affect the evolution of the Internet in general.

On the other hand, those people who assert that the
decisions are purely technical are probably being disingenuous.
You may now feel free to quote some of my messages from
previous lives if you like. It would serve me right. -- :-)

  Sean.