This assumes that you consider web server location and web server
performance to NOT be a part of overall network performance. Our view
steps back a bit from that. The majority of traffic would appear to be
web-centric. From an end user's perspective, what does a web site on a
specific network look like, and how does that compare to a web site on
another network? There are ENDLESS variables contributing to that
including intercity links, hub architecture, host hardware, host software,
peering, connectivity points with other networks, transit agreements, type
of routers, ATM switching (or not). All contribute. We think most people
notice Internet performance (or lack thereof) while viewing world wide web
pages. If we measure such page transits, the results are indicative of the
accumulation of ALL of those factors. The web sites chosen were on the
network under study, operated by the network, and under their control.
They have total control over the hardware and software used, how it is
connected to their network, just as they have control over all other
aspects of their networks. Does web server performance affect the results?
I would hope so. Can we break it down into what is purely web server
hardware performance, what is web server software performance, what is the
NIC in the web server, what is the impact of the first router the web
server is connected to, what is the impact of hub design and the interface
between IP routing and ATM switching, what part is the impact of
interconnections with other networks, what part is peering, what part is
just goofy router games? Uh, NO, we can't.
I would posit that only the network engineers at the heart of this
would or should care. I don't know at this point what portion of the
equation can be levied on web servers and I don't think anyone else can
either. I have held for several years that the performance breakdown is in
the "last inch" of the Internet between the drive controller and the disk
surface. But in working with Keynote, I generated the broad theory that if
that view held true, then by massively averaging measurements across time
and geography, we should flatline the Internet. In other words, all
results should factor out to roughly zero relative to one another. They
didn't. They didn't to a
shocking degree. And at this point I am under the broad assumption that
server performance doesn't account for all of it, perhaps little of it.
But I could be wildly wrong on the entire initial assumption.
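The averaging argument can be sketched numerically. This is a toy model with made-up numbers, not the actual measurements: each download time is a per-network component plus random server/local noise, and the network names and the 400 ms penalty are purely hypothetical.

```python
import random

random.seed(0)

# Hypothetical model: measured download time = nominal time + a
# persistent per-network bias + server/local noise. If servers and
# noise were the whole story (zero bias everywhere), heavy averaging
# across time and geography would drive every network's average
# toward the same value -- the "flatline" outcome.
NETWORKS = {"net_a": 0.0, "net_b": 0.0, "net_c": 0.4}  # net_c: 400 ms real penalty

def average_download_time(network_bias, samples=10_000):
    """Average many noisy simulated page-download times (seconds)."""
    base = 2.0  # assumed nominal page download time
    total = sum(base + network_bias + random.gauss(0, 1.0)
                for _ in range(samples))
    return total / samples

averages = {name: average_download_time(bias)
            for name, bias in NETWORKS.items()}
# Noise averages out: net_a and net_b converge on the same value,
# while net_c's persistent network-level difference survives the
# averaging. Real differences that survive imply the servers and
# noise alone don't explain the results.
```

The point of the sketch is only that per-measurement noise shrinks as samples accumulate, so whatever differences remain after massive averaging must be persistent, network-level ones.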
In any event, the networks have total control of and responsibility for
their own web servers, much as they do for their own networks, even if
you define the servers as something separate from the networks. We
measured web page downloads from an end user's perspective, and those
are the results in aggregate. If
it leads to a flurry of web server upgrades, and that moves the numbers,
we'll know more than we did. If it leads to a flurry of web server
upgrades, and it FAILS to move the numbers, that will tell us something as
well. Our broad theory is that nothing is going to improve as long as
nothing you do counts or is detectable by anyone anywhere. If a
particular network can move their results in any fashion, that is an
improvement in the end user experience, however achieved.