In a previous episode Marc Slemko said...
:: On Mon, 9 Feb 1998, Phil Howard wrote:
:: > Patrick McManus writes:
:: > > At the risk of introducing meaningful background literature:
:: > > ftp://ds.internic.net/rfc/rfc2068.txt
:: > >
:: > > I direct folks to 14.36.1 "Byte Ranges" which when interleaved with
:: > > pipelined requests comes very close to achieving client-driven
:: > > multiplexing that I'd suggest from a UI pov will behave much better
:: > > than the multiple connections method (eliminating the cost of tcp
:: > > congestion control but at the cost of some application protocol
:: > > overhead).
:: As a server implementor, let me simply say this makes no sense and is a
:: perversion of what byte ranges are intended for.
As a server-side protocol implementor myself (though it's a very
different kind of server), I'll note that you're right that this
was not the primary motivation for byte ranges (the primary use being
the continuation of aborted transfers), but suggesting that this use
hasn't been discussed at length is misleading.
I am not suggesting that every object have its chunks interleaved at
1500-byte intervals; that would be very naive indeed. I am suggesting
that because this is __client__ driven, it can be used modestly to
significantly enhance perceived user response time based on content
types and screen positioning, things typically opaque to the
server. The browser can request chunks (of appropriate sizes) of just
those elements comprising the opening viewing area of the page (and in
the case of some graphics formats, just enough information to get a
pass or two of them drawn). That means we've added an extra request
(pipelined, so no RTT), or maybe two, for each object in the viewing
area (not the whole document!). After that's accomplished, the next
request should be either for the rest of the object or for a
substantially bigger chunk (minimally an order of magnitude larger).
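The scheme above can be sketched in a few lines. This is a hypothetical illustration (host, paths, and chunk sizes are made up, not from the original post): the client batches one byte-range GET per in-view object into a single write on one HTTP/1.1 connection, so the interleaving is decided client-side and no extra round trips are spent.

```python
def pipelined_range_requests(host, ranges):
    """Build one HTTP/1.1 write containing a byte-range GET per
    (path, first, last) tuple, sent back-to-back on one connection."""
    reqs = []
    for path, first, last in ranges:
        reqs.append(
            "GET %s HTTP/1.1\r\n"
            "Host: %s\r\n"
            "Range: bytes=%d-%d\r\n"
            "\r\n" % (path, host, first, last)
        )
    return "".join(reqs).encode("ascii")

# First pass: just enough of each object in the viewing area to start
# drawing it -- the rest of each object is fetched in later, larger chunks.
payload = pipelined_range_requests(
    "www.example.com",
    [("/banner.gif", 0, 2047), ("/article.html", 0, 4095)],
)
# A single send of `payload` ships both requests with no per-request RTT;
# the server answers with 206 Partial Content responses in request order.
```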
:: will put far too much load on the server, will break whenever you
:: get dynamic content, etc.
'Break' meaning that it won't multiplex, maybe. 'Break' in the sense of
the page not loading is not true (as you note, the whole document is
just likely to be transferred instead of a chunk). As for the former,
cacheable dynamic content should work just fine under this scheme,
save some cases, and if your local proxy can do it for you, then even
better.
:: > What is the correct behaviour of the server if the request is made for
:: > bytes 0-2047 of an object which invokes a CGI program to create that
:: > object? Obviously it can send the first 2048 bytes, but then what?
:: For dynamic content, it normally has to send the whole document. That
:: is a legal response to a byte range request.
CGI doesn't imply non-static content... there are lots of reasons to
make something CGI other than the state of the world changing in
between invocations... derivable functions, for instance.
That may be static html, or dynamically generated html (or not even
html at all, but that's another beef for another day), but it shouldn't
be of any consequence to the browser. It's just interested in the
content body and the expiration information. Seeing as how the
distance from NY to Chicago doesn't change very often, the server
should set some decent cacheability attributes for this.. and if
that's the case, then byte ranges should still apply, whether processed
by CGI, the server (playing proxy), or an explicit proxy.