SYN -> SYN/ACK time (actual connection)                22%
  Web browser says "Contacting www.website.com..."
SYN/ACK -> first data (web server work:
  getting material, processing material)               78%
  Web browser says "www.website.com contacted, waiting for response"
Note that this didn't break the results down by content type. But it
*did* truly measure one thing--that the delay caused by web servers is
considerably higher than the "network performance" delay (the actual
connect time).
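For anyone who wants to reproduce this kind of split, a minimal Python
sketch along these lines (my illustration, not the tool used for the
numbers above; www.website.com is a placeholder) times the phases
separately:

    import socket
    import time

    host = "www.website.com"   # placeholder host

    t0 = time.perf_counter()
    # DNS lookup ("Looking up host...")
    fam, typ, proto, _, addr = socket.getaddrinfo(
        host, 80, type=socket.SOCK_STREAM)[0]
    t1 = time.perf_counter()

    s = socket.socket(fam, typ, proto)
    s.connect(addr)            # roughly SYN -> SYN/ACK ("Contacting...")
    t2 = time.perf_counter()

    s.sendall(b"GET / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
    s.recv(1)                  # first data ("waiting for response")
    t3 = time.perf_counter()
    s.close()

    print("DNS:        %.3fs" % (t1 - t0))
    print("connect:    %.3fs" % (t2 - t1))
    print("first data: %.3fs" % (t3 - t2))

Note that connect() returning only approximates the SYN -> SYN/ACK
time, since it also covers sending the final ACK of the handshake.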
Urm, maybe I'm missing something here, but take an incredibly
simplistic model where you have a probability p of losing any packet:
the 3-way handshake completes without a loss (and hence without
stalling) with probability (1-p)^3, and the wait for first data
completes with probability (1-p)^(2 * no. packets reqd for 1st data).
With slow start etc. there are bound to be more than two packets back
before it starts processing the response, so the latter is always
going to have a higher chance of failing. Now add the fact that with
technology such as ATM, large packets are more likely to be dropped
than small ones (for a given cell loss probability), remember all
that good stuff at the last-but-one NANOG about broken client stacks,
and I think you might find the above is a "non-measurement".
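To put rough numbers on that: the chance of a stall in an n-packet
exchange is 1-(1-p)^n. A quick sketch (the 1% loss rate and the
five-packets-each-way count are purely illustrative assumptions):

    # Stall probability for an n-packet exchange with per-packet loss p.
    def stall_prob(p, n):
        return 1 - (1 - p) ** n

    p = 0.01                     # assume 1% per-packet loss
    print(stall_prob(p, 3))      # 3-way handshake: ~3.0%
    print(stall_prob(p, 2 * 5))  # first data, 5 packets each way: ~9.6%

So with these (made-up) numbers the post-handshake phase is roughly
three times as likely to hit a loss as the handshake itself.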
I *think* (and am not sure) that if you have a proxy set up, you
always get the latter once you have connected to the proxy.
Oh, and to skew the figures in the other direction, doesn't the first
prompt come up while the DNS lookup is being done?
==>than small ones (for a given cell loss probability), remember all
==>that good stuff at the last-but-one NANOG about broken client stacks,
==>and I think you might find the above is a "non-measurement".
It's a rough measurement, and even if you go so far as to assign a 20%
error margin, you'd still see that the web server owns a *significant*
piece of the click-to-data time: over 50%.
I think that a 20% error margin would be fair for this, provided neither
I nor my provider was having network problems at the time. This was
intended as a rough measurement of how much time is wasted waiting for
inefficient web servers.
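The arithmetic behind that claim is easy to check: even taking the
full 20% off the measured 78% server share leaves it well over half
of the total.

    server_share = 0.78              # measured share of click-to-data time
    error_margin = 0.20              # assumed relative error
    print(server_share * (1 - error_margin))   # 0.624 -- still over 50%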
==>I *think* (and am not sure) that if you have a proxy set up, you
==>always get the latter once you have connected to the proxy.
==>
==>Oh, and to skew the figures in the other direction, doesn't the first
==>prompt come up while the DNS lookup is being done?
Nope. You'll see "Looking up host www.website.com..." in most browsers.
(I didn't use a browser to measure this; those "web browser says" lines
were there for reference--a lot of people ask me why it sits there a
while after saying "contacted, waiting for response".)
==>It's a rough measurement, and even if you go so far as to assign a 20%
==>error margin, you'd still see that the web server owns a *significant*
==>piece of the click-to-data time: over 50%.
Especially if it's using the particularly sub-optimal (aka 'broken')
network stack that a very popular server operating system has.
In fact, some recent measurements of mine show a large variance even
for connection setup on a local network, depending upon which IP
stacks are involved: anywhere between 0.39s and 0.007s.
(Unsurprisingly, the broken stack referred to above works quite well
with itself, and not too badly with an earlier OS from the same
company--if I didn't know better I might believe that they only
tested with their own systems.)
(This isn't really operational, so if you want names, let me know
privately. Perhaps someone out there has some clout.)
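A sketch of that kind of comparison (mine, not the poster's; the host
names are placeholders for machines running the stacks under test)
just times connect() repeatedly against each host:

    import socket
    import time

    hosts = ["host-a.example", "host-b.example"]   # hypothetical test boxes

    for host in hosts:
        samples = []
        for _ in range(10):
            t0 = time.perf_counter()
            s = socket.create_connection((host, 80), timeout=5)
            samples.append(time.perf_counter() - t0)
            s.close()
        print("%s: min %.3fs max %.3fs" % (host, min(samples), max(samples)))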