Routing without source information and traffic self-similarity

First, what problems arise in a network that routes without information about
the sender? In other words, imagine that the IP header did not contain the
source address - what problems would this raise? Would optimal routing,
security, billing, traceability, network management, etc., suddenly
become impossible?

With respect to routing, while conventional unicast forwarding doesn't
normally use the source address in the packet, multicast forwarding and
RSVP packet classification do. While multicast routing isn't particularly
pretty no matter what you do, source-specific routing in this case yields
a much, much more attractive result than the source-independent alternatives
(e.g. spanning tree) you'd be forced into in the absence of a source
address. And while I'm not sure RSVP is all that good an idea, others
might think otherwise.
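To make the multicast point concrete, here is a toy sketch of the
reverse-path forwarding (RPF) check that source-specific multicast
forwarding relies on. The routing table, interface names, and the
simplified /16-only lookup are all invented for illustration:

```python
# Toy unicast routing table: destination prefix -> outgoing interface.
# (A real router would do longest-prefix matching over arbitrary
# prefixes; a /16-keyed dict is enough to show the idea.)
unicast_route = {
    "10.1.0.0/16": "eth0",
    "10.2.0.0/16": "eth1",
}

def lookup(addr):
    """Toy lookup: match only on the /16 of the address."""
    prefix = ".".join(addr.split(".")[:2]) + ".0.0/16"
    return unicast_route.get(prefix)

def rpf_accept(src_addr, arrival_iface):
    # A multicast packet is forwarded only if it arrived on the
    # interface this router would use to route *toward* its source.
    # Without a source address in the header this check is impossible,
    # which is what pushes you toward source-independent schemes like
    # a spanning tree.
    return lookup(src_addr) == arrival_iface

print(rpf_accept("10.1.2.3", "eth0"))  # True: came in on the RPF interface
print(rpf_accept("10.1.2.3", "eth1"))  # False: fails the check, dropped
```

The point of the sketch is only that the source address is load-bearing
here: remove it and the RPF test, and with it source-specific multicast
trees, go away.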

As for the issues other than routing, in a perfect world a field in the
header which can be set to any random number by the originator of the
packet wouldn't be depended upon for anything important, but the world
is less than perfect.

Second, to what degree do you believe Internet traffic exhibits
self-similarity, and how does this change with varying levels of traffic
aggregation? I have read papers from Bellcore and others suggesting that
no matter the level of aggregation (be it a single Internet user or a major
NAP), Internet traffic exhibits high degrees of self-similarity - but more
recent research does not seem to agree with this.

It has been several years since I was in a position to see this, but on
routers filling circuits with very highly aggregated traffic you just about
couldn't measure anything without finding a result which contradicted the
notion of aggregation-independent self-similarity. We used to find that
routers with 9 milliseconds worth of output queue would consistently fill
long-haul T3 circuits to 5 minute average loads in the mid-90 percent
range while 5 minute average output queue packet drops stayed well below 1%
(filling circuits at low loss rates with aggregation-independent self-similar
traffic should require output queues on the order of the delay-bandwidth
product. 9 ms wasn't even close to this). And if you measured enough
data points to draw a plot of output load versus packet loss, and overlaid
the M/M/1/K queuing prediction on the same plot, you'd find that assuming
Poisson arrivals gave you results that weren't that far off from what the
equipment was doing.
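The arithmetic behind that observation can be sketched quickly. The T3
rate, average packet size, and cross-country propagation delay below are
assumed round numbers, not measurements from the routers described above;
the M/M/1/K loss formula is the standard one for Poisson arrivals,
exponential service, and K packets of buffer:

```python
rate_bps = 45e6      # roughly a T3 line rate
queue_s = 0.009      # 9 ms of output queue
pkt_bytes = 500      # assumed average packet size

# How many packets fit in a 9 ms queue at T3 speed?
K = int(rate_bps * queue_s / (8 * pkt_bytes))

# Delay-bandwidth product for an assumed 70 ms coast-to-coast RTT, for
# comparison: the self-similarity argument would call for a queue on
# this order to fill the circuit at low loss.
dbp_bytes = rate_bps * 0.070 / 8

def mm1k_loss(rho, K):
    # M/M/1/K loss probability at offered load rho (rho != 1):
    #   P_loss = (1 - rho) * rho**K / (1 - rho**(K + 1))
    return (1 - rho) * rho**K / (1 - rho**(K + 1))

print(f"queue holds K = {K} packets; delay-bandwidth product"
      f" = {dbp_bytes / 1e3:.0f} KB")
print(f"M/M/1/K predicted loss at 95% load: {mm1k_loss(0.95, K):.4%}")
```

With these assumptions the 9 ms queue holds on the order of a hundred
packets against a delay-bandwidth product several times larger, yet the
Poisson model still predicts loss well under 1% at mid-90 percent loads,
which is consistent with what the equipment was doing.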

If the bandwidth of the circuitry by which end users connected to your
network was at least equal to the bandwidth of the network trunks (a
situation which wasn't uncommon not that long ago) I could perhaps see
why self-similar behaviour might occur. If you're filling big, wide
trunks with traffic aggregated from bizillions of 33.6 kbps modems
(a situation which is probably more representative of what networks do
today), however, expecting self-similar behaviour doesn't make even the
least bit of intuitive sense.
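The smoothing intuition can be illustrated with a deliberately simple
model: assume each modem is independently either sending at its capped
rate or idle, with some probability of being on. For independent sources
the burstiness of the aggregate, measured as its coefficient of
variation, falls off as 1/sqrt(N). (This toy model ignores the
heavy-tailed on-periods the Bellcore work emphasized; the point is only
that slow access links bound what any one source can contribute to the
trunk.)

```python
import math

def aggregate_cv(n_sources, p_on=0.5):
    # Sum of n iid Bernoulli(p_on) sources, each sending at a fixed
    # capped rate r when on: mean = n * p_on * r, standard deviation
    # = sqrt(n * p_on * (1 - p_on)) * r, so r cancels out of the
    # coefficient of variation.
    return math.sqrt((1 - p_on) / (p_on * n_sources))

for n in (1, 100, 10_000, 1_000_000):
    print(f"{n:>9} modems: CV of aggregate load = {aggregate_cv(n):.4f}")
```

One modem gives a CV of 1.0; a million of them give 0.001. Under these
assumptions the trunk sees ever-smoother traffic as aggregation grows,
which is the intuition argued for above.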

Dennis Ferguson