Bell Labs' Discovery May Lead to Efficient Networks

There had been reports of this discovery about four years ago -- as
I recall, Tony Li reported hearing from customers that traffic levels
at the core were remarkably stable.

I had some talks with various self-similarity experts at the time,
and they said this was perfectly plausible -- the law of large numbers
says this must eventually happen; the self-similarity results simply
implied that the amount of traffic required to reach stability was
*much* larger than would be required if traffic were Poisson. At the
core, we've apparently reached that point.

My recollection of the discussions is that there's a lot of interesting
work to be done on the structure of those stable flows (what's going on
within the aggregate), as well as on working out where the traffic gets
small enough to become self-similar. But those discussions were a few
years ago and my memory may be faulty.

Craig

In message <0C875DC28791D21192CD00104B95BFE70146DD09@BGSLC02>, Irwin Lazar writes:

I attended a seminar where self-similarity people from Ericsson were
talking (at IETF INET2001). They had not tested the theory with
thousands of TCP connections on high-capacity links, but one thing
that caught my eye was that, for self-similarity to occur, congestion
needs to exist (according to them). Well, if you have congestion in
your core, you're doing something wrong. Fix it, and the problem goes
away (at least in the core).
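If you want to check that claim against an actual trace, the standard
quick test is a variance-time plot. Here's a minimal numpy sketch (my
own, not from the Ericsson talk) that estimates the Hurst parameter H
from how fast the variance of block means decays; H near 0.5 means
Poisson-like traffic, H approaching 1 means self-similar:

    import numpy as np

    def hurst_variance_time(x, scales=(1, 2, 4, 8, 16, 32, 64, 128)):
        # For self-similar traffic, the variance of the m-aggregated
        # series scales as m^(2H - 2); fit the slope beta on a log-log
        # plot and read off H = 1 + beta / 2.
        log_m, log_var = [], []
        for m in scales:
            n = len(x) // m
            blocks = x[: n * m].reshape(n, m).mean(axis=1)
            log_m.append(np.log(m))
            log_var.append(np.log(blocks.var()))
        beta = np.polyfit(log_m, log_var, 1)[0]
        return 1 + beta / 2

    # Sanity check: a memoryless (Poisson) trace should give H near 0.5.
    rng = np.random.default_rng(1)
    print(hurst_variance_time(rng.poisson(10, 100_000).astype(float)))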

Same thing with the end-to-end QoS that some people were talking about
there (for some reason, they all seemed to originate from the telephony
world, god knows why *smirk*): you only need QoS in the core if you
have congestion, and then you're throwing man-hours at the problem
instead of buying hardware and more capacity. I'd go for
overprovisioning every day of the week.