Solutions that move the overall architecture toward the need for "bigger,
faster, etc." will drive the Internet down the same path as the supercomputer.
In my opinion, we need to sit back, look at the big picture, and come up with
incremental architectural changes. We need to RE-apply the Internet Philosophy
to the Internet itself and investigate the result.
I prefer incremental changes as well. The only problem is that they
might not be sufficient to relieve the current bottlenecks. They might
suffice if we restricted new users to V.34 modem speeds. But with
*potential* new-user growth of up to 80 million from AT&T WorldNet,
up to 50 million from @Home, and the fast-growing traditional ISPs
(which show no signs of consolidation as far as I can tell), plus
widespread access-speed increases to ISDN BRI, ADSL and cable,
incremental changes may not be enough.
As Robert Moskovitz pointed out, even the growth in commonly used
backbone speeds is not keeping up:
1. 56 kbps
2. 1.544 Mbps -- an increase of roughly 28x
3. 44.736 Mbps -- an increase of roughly 29x
4. 155.520 Mbps -- an increase of merely 3.5x
Just keeping in step with past growth patterns would require a step
to OC-24c at 1244.16 Mbps now, but no routers come even close to
those speeds.
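The growth factors above, and the OC-24c comparison, can be checked
with a few lines of Python. The speeds are taken from the list; the
OC-1 base rate of 51.84 Mbps is the standard SONET figure.

```python
# Successive widely deployed backbone speeds, in Mbps
# (56 kbps, T1, T3, OC-3c), and the growth factor at each step.
speeds = [0.056, 1.544, 44.736, 155.520]

for prev, curr in zip(speeds, speeds[1:]):
    print(f"{prev:>8.3f} -> {curr:>8.3f} Mbps: {curr / prev:.1f}x")

# OC-24c line rate for comparison: 24 x OC-1 (51.84 Mbps).
oc24c = 24 * 51.84
print(f"OC-24c: {oc24c:.2f} Mbps, i.e. {oc24c / speeds[-1]:.0f}x OC-3c")
```

Note that OC-24c is only an 8x step over OC-3c, while the earlier
steps were factors of roughly 28-29, which is why even that speed
would merely keep pace rather than get ahead of the curve.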
P.S. www.whnet.com/wolfgang/giga.html has an overview of routers.