"Cheating" is of course encouraged. This isn't an academic test at your
local university. We're all out of school now. If you can figure out a way
to beat the game, you have ipso facto figured out a way to make the web
look faster to end users. As it appears to be, so it is. As Martha Stewart
says, that would be a "good thing." It would be a good thing for you and
your product line. It would be a good thing for your web site customers.
It would be a good thing for end-users viewing such web sites. If the net
effect of all this is that all the smart people on the net get mad at me
and go figure out a way to make the web look faster, it will be well and
good enough for me as well.
I had previously agreed NOT to publish IP numbers of the Keynote host
machines. Keynote does make this information available on their web site,
so I myself was a little bemused by the request, but I did agree to honor
it. In any event, someone else has already posted the locations and
networks (which we DID publish), along with the IP numbers, here on the
list. So you should have them.
If mirroring/caching moves the numbers definitively, it then establishes a
"real" value to such a technique, and it can be offered to customers at a
higher price, with some actual data comparing how they will "look" to the
world using the less expensive simple hosting versus the more expensive
geographic mirroring technique. I personally think this would
move the numbers more than anything else that could be done, but that's
what looks LOGICAL to me, not anything I know or have tested. I am rather
convinced that moving a single web site to another location, putting it on
larger iron, or using a different OS will have very minor impact on the
numbers.
My own personal theory is a little vaguely formed. But I think the heart
of the performance problems lies in a visceral conundrum of the network.
It is and should be one network. The perceptual disjuncture I already
detect, even among NANOG members, between straight "routes" as viewed by
traceroute and ping, and the way data actually moves on the network (at
least vis-à-vis the mental model or "theory" I have of it) is a somewhat
shocking part of the problem. I was unaware that many of the network
engineers viewed the world in this way until this exercise. I was even a
bit flip about not dealing with ping/traceroute at
any level of comparison. Perhaps an article on this is in order.
But I think most of it has to do with interconnects between networks, and
architectural decisions accumulated over the years based on concepts of
what should be "fair" with regards to the insoluble but ever-moronic
"settlements" concept and who gets value from whom based on what. If
decisions had been based more on optimizing data flows, and less on whose
packets transit whose backbones and why, performance would have been
better. I don't know how much, but certainly some. When the main thing
on the minds of network CEOs is preventing anyone from getting a "free
ride" (à la Sidgemore's theory of optimizing Internet performance by
having it owned entirely by UUNET), or the relatively mindless routing
practice of handing a packet off to the destination network at the
earliest opportunity so that "I" am not paying for its transit if it is
headed to "your" location, I suspect performance suffers. My sense is
that larger numbers of
smaller networks, interconnected at ever more granular locations, would be
a good thing. This will get me in big caca with the "hop counting"
mindset, and of course at about 254 hops a minor little problem arises,
but I
think so nonetheless. Very small ISPs know this viscerally. They all want
to multihome to multiple backbones, and have done some work to interconnect
among themselves locally.
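That early-handoff practice can be sketched with a toy model (all link latencies below are invented for illustration, not measured): a network that dumps a packet onto the destination's backbone at the nearest exchange point minimizes its own carriage cost, even when carrying the packet farther on its own backbone would give the end user a lower total latency.

```python
# Toy model of "hot-potato" (earliest-handoff) routing.
# All latencies are hypothetical milliseconds, not measurements.

PATHS = {
    # (source network's leg to the exchange, destination network's leg onward)
    "early_handoff": (5, 70),   # dump the packet at the nearest exchange
    "late_handoff": (40, 10),   # carry it farther on the source backbone
}

def end_to_end(path):
    """Total latency the end user actually experiences."""
    a_leg, b_leg = PATHS[path]
    return a_leg + b_leg

# Hot potato: the source network minimizes ITS OWN cost (the first leg)...
hot = min(PATHS, key=lambda p: PATHS[p][0])
# ...while the user cares about the end-to-end total.
best = min(PATHS, key=end_to_end)

print(f"hot-potato choice: {hot}, total {end_to_end(hot)} ms")
print(f"latency-optimal:   {best}, total {end_to_end(best)} ms")
```

With these made-up numbers, the hot-potato choice costs the user 75 ms where the latency-optimal path would cost 50 ms; the "who pays for transit" incentive and the end-user's interest point in opposite directions.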
Savvis actually has a very interesting concept though it upsets everybody.
It kind of upsets me because it makes my head hurt, literally. They've
carried it almost to another level of abstraction. If you ponder it
beyond the obvious, it has some interesting economic consequences.
Checkbook NAPs lead to an inverted power structure where the further away
you move from centralized networks such as internetMCI and Sprint, by
blending layers after the fashion of a winemaker, the better your product
becomes and the better apparent connectivity your customers have. The
head-hurting part is that if you extend this infinitely, we would all, in
the end, wind up dancing to the tune of a single dialup AOL customer in
Casper, Wyoming. But there is a huge clue in here somewhere. In all
cases, the Savvis numbers were better than either the UUNET, Sprint, or
internetMCI numbers individually. Would it then follow that if there
were three Savvis-like networks, each aggregating three backbones with a
private NAP matrix, and I developed a private NAP matrix using those
three Savvis-level meshes, my performance would be better yet again?
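Why the blend can beat each component is simple arithmetic, sketched here with a toy calculation (every latency figure below is invented, not a Keynote measurement): if the aggregator reaches each destination over whichever of its three upstreams happens to be best for that destination, its per-destination latency is the minimum of the three, so its average can undercut every individual backbone's average.

```python
# Toy model of an aggregator's blend beating each component backbone.
# Per-destination latencies (ms) are invented for illustration only.
latency = {
    "UUNET":       [20, 90, 60, 80],
    "Sprint":      [85, 25, 70, 65],
    "internetMCI": [70, 75, 30, 90],
}
destinations = range(4)

def mean(xs):
    return sum(xs) / len(xs)

# The blend takes, per destination, the best of the three upstreams.
blend = [min(latency[net][d] for net in latency) for d in destinations]

for net, ms in latency.items():
    print(f"{net:12s} mean {mean(ms):5.1f} ms")
print(f"{'blend':12s} mean {mean(blend):5.1f} ms")
```

Here no single backbone averages better than 61.25 ms, yet the blend averages 35.0 ms, because each backbone is strong toward some destinations and weak toward others. Recursing the trick (blending blends) can only take the per-destination minimum lower, never higher, which is one way to read the question above.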
And what if Savvis opened the gates and allowed UUNET end users to connect
to Sprint IP Services web sites transiting the Savvis network?
More vaguely, if you have four barrels of wine, one a bit acidic, one a
bit sweet, one a bit oaky, and one a bit tannic, and you blend them all
together, it would seem apparent that you would get a least common
denominator wine that is acidic, sweet, oaky, and tannic. You don't. You
get a fifth wine that is superior to the sum of the parts and to any one
component barrel. It is an entirely "new thing." This is sufficiently
true that it is how almost all wines are made now.
Are networks like wine?