>> For what it's worth, I do understand that there is a plan to create
>> an ATM exchange point in the DC area, at speeds exceeding those
>> currently available.
>>> Given the latency we've seen over some ATM backbones,
>> The latency of switched network segments is generally held (by all but
>> the zealots) to be less than that of comparable layer-three
>> forwarding topologies.
>> The latency seen on several providers' claimed ATM backbones is
>> generally attributable to an omission: they leave off one important
>> word -- shared. The latency about which I assume you speak is
>> caused by large amounts of queuing, and that queuing is demanded by
>> network oversubscription. The latency introduced by oversubscription
>> is consistent with any oversold network.
>Perhaps he is referring to the latencies that some believe are incurred by
>ATM 'packet shredding' when it is applied to the typical data distributions
>encountered on the Internet, which fall between the 53-byte ATM cell size
>and any even multiple of it.
>Some reports that I have seen show a direct disadvantage for traffic in
>which a large portion of 64-byte TCP ACKs, etc. are inefficiently split
>across two 53-byte ATM cells, wasting a considerable amount of 'available'
>bandwidth; i.e. one 64-byte packet is SARed into two 53-byte ATM cells,
>wasting 42 bytes of space. If a large portion of Internet traffic followed
>this model, ATM might not be a good solution.
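The cell-tax arithmetic quoted above is easy to check. A minimal sketch, assuming AAL5 encapsulation (48-byte payload per 53-byte cell, 8-byte AAL5 trailer per packet); the function name and example packet size are illustrative, not from the original post:

```python
import math

ATM_CELL = 53        # total cell size on the wire (bytes)
ATM_PAYLOAD = 48     # payload bytes per cell (5-byte cell header)
AAL5_TRAILER = 8     # AAL5 trailer appended once per packet

def cell_tax(packet_bytes):
    """Return (cells, wire_bytes, wasted_bytes) for one packet over AAL5."""
    cells = math.ceil((packet_bytes + AAL5_TRAILER) / ATM_PAYLOAD)
    wire = cells * ATM_CELL
    return cells, wire, wire - packet_bytes

# A 64-byte packet (e.g. a small TCP ACK) needs two cells:
# 106 bytes on the wire to carry 64 bytes of data, wasting 42.
print(cell_tax(64))
```

This reproduces the figures in the quoted text: two cells, 42 bytes of the 106 on the wire carrying no user data, roughly a 40% overhead for that packet size.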
Just an observation: most of the super-fast L3 routers and frame relay
switches chop their packets into cells before transporting them across the
switching backplane. ATM cells are chopped only once, at the ingress to
the ATM cloud, whereas "L3 packets" may be chopped repeatedly inside
the non-ATM cloud. So which one induces more latency?