Keynote/Boardwatch Internet Backbone Index: A better test!!!

b3n wrote:

you mean the average speed *to the server known to be badly positioned in
the network*. i am quite certain performance is dramatically faster to

Just out of curiosity, why do many (not all) of the large backbone providers
establish their face to the web (their corporate webserver) on slow,
badly positioned machines? I would have thought they would have chosen
differently. We do, but we are not as sophisticated. I can see how
Jack might have been confused; perhaps he is not so sophisticated either.

Ken Leland
Monmouth Internet

Just out of curiosity, why do many (not all) of the large backbone providers
establish their face to the web (their corporate webserver) on slow,
badly positioned machines?

Because it is done by marketing, not engineering?

randy

randy@psg.com (Randy Bush) writes:

> Just out of curiosity, why do many (not all) of the large backbone providers
> establish their face to the web (their corporate webserver) on slow,
> badly positioned machines?

> Because it is done by marketing, not engineering?

In my opinion, this alone should reduce the Value Number
of any given provider.

Personally, flaws or not, I welcome Boardwatch's attempts
to come up with a widely-published metric for the
Internet. This likely will lead other publications into
similar investigations, some of which may well bring the
writers and editors of various periodicals into contact
with the folks at CAIDA.

Also, flaws or not, Boardwatch did do something
fantastically clever, and that's examining things on an
end-to-end basis, rather than obsessing about details of
what's going on between the endpoints. People concerned
about the abysmal end-to-end throughput of even modern TCP
across much of the present Internet should be rejoicing
and helping other journalists develop better and more
scientific approaches to categorizing expected versus
observed end-to-end performance.
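For the curious, here is a minimal sketch of what such an end-to-end probe
might look like (in Python; the URLs and file names are purely hypothetical,
not anything Keynote or Boardwatch actually used): time a complete HTTP
download and report the observed throughput, which folds in DNS, server
load, TCP behavior, and everything else on the path rather than any single
backbone's internals.

# Minimal end-to-end throughput probe -- a sketch only; the URLs and
# file names here are hypothetical.
import time
import urllib.request

TEST_URLS = [
    "http://www.example-provider-a.net/test-1mb.bin",  # hypothetical
    "http://www.example-provider-b.net/test-1mb.bin",  # hypothetical
]

def measure_download(url, timeout=30):
    """Return (bytes received, seconds elapsed) for one full download."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        data = resp.read()
    return len(data), time.monotonic() - start

if __name__ == "__main__":
    for url in TEST_URLS:
        try:
            nbytes, secs = measure_download(url)
            print(f"{url}: {nbytes / secs / 1024:.1f} KB/s over {secs:.2f}s")
        except OSError as exc:
            print(f"{url}: failed ({exc})")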

The combination of work aimed at measuring the internals
of one's network and work that measures the "(un)happiness"
of certain classes of applications should be of enormous
value to engineers willing to admit that they don't know
all the things that affect network performance observed by
end-users.

The reality, however, is that most American ISPs seem to
engage in knee-jerk denial or aggressive posturing
whenever there is a suggestion that their network is
anything but perfect. I certainly have been in the middle
of that kind of thing, so I can hardly claim innocence.

Such reactions are pure marketing: we can't admit that
maybe there is some way we could improve things or some
set of things our network doesn't do well because that
would hurt our product image.

People who react with marketing-think and who build with
marketing-think in the first place deserve to be torn
apart by analyses based in marketing-think. While the
article in question isn't in my hands yet, based on Mr
Rickard's and others' comments on the NANOG list, I think
it's safe to guess that this is precisely what the
Boardwatch study does.

  Sean.

Probably because you'll find that co-located web servers make the company
more money than their own server would in the same spot or on the same
machine. Web sites for providers make them money, sure, but definitely
not as much as if they hosted mtv.com or other popular sites. So, they
put the high-traffic sites where they need to be, and leave their own
servers on another part of the network, as those are probably among the
least-utilized machines there. This isn't true in all cases, and blanket
statements are dangerous. But, in the majority of cases I've seen, a site
like UUNET's or Sprint's isn't nearly as busy as www.playboy.com,
www.comedycentral.com, etc.

Joe Shaw - jshaw@insync.net
NetAdmin - Insync Internet Services
"Learn more, and you will never starve." - Paraphrase of Lee

(As always, speaking only for myself.)

Such reactions are pure marketing: we can't admit that
maybe there is some way we could improve things or some
set of things our network doesn't do well because that
would hurt our product image.

  I dunno...from what I saw on here, most of the people who
  were giving Jack a really hard time weren't even measured
  in that study.

  From what I saw, the most consistent problem that people
  pointed out was that what Jack said the story was measuring
  and what it actually measured were two different things.

  IMHO, Jack made the problem /much/ worse by trying to
  defend it without paying much attention to the arguments
  presented -- he had already made up his mind as to what
  kind of response he'd get (angry from "losers," supportive
  from "winners"), and did not seem to really notice when
  the actual ratio of response was different.

  This strikes me as exceedingly poor journalism (and a fairly
  dumb thing to do in and of itself). So, while I would love
  to help /somebody/ get better statistics on this kind of
  stuff, it seems (based on his actions here as well as some
  of his articles and editorials) that Jack is not enough of
  an objective journalist to do such a study in a thorough
  and scientific manner.

  Hopefully somebody else will...and by the way, Jack, I'd be
  quite happy for you to prove me wrong in the meantime.

==>The reality, however, is that most American ISPs seem to
==>engage in knee-jerk denial or aggressive posturing
==>whenever there is a suggestion that their network is
==>anything but perfect. I certainly have been in the middle
==>of that kind of thing, so I can hardly claim innocence.
==>
==>Such reactions are pure marketing: we can't admit that
==>maybe there is some way we could improve things or some
==>set of things our network doesn't do well because that
==>would hurt our product image.

I don't work for an NSP or ISP whose image can be hurt; my reactions were
based solely upon the flawed methodology used by the magazines. For the
most part, my beef with the study was that it didn't measure *at all* what
they claimed to have measured. If they had said "end-to-end performance to
various providers' web servers", great. But it certainly does NOT measure
backbone performance. That point has been made enough, though, and I digress.

But it's one of the things we have to live with, I suppose. Trade rags
aren't exactly known for being unbiased or for having accurate technical
content.

==>People who react with marketing-think and who build with
==>marketing-think in the first place deserve to be torn
==>apart by analyses based in marketing-think. While the
==>article in question isn't in my hands yet, based on Mr
==>Rickard's and others' comments on the NANOG list, I think
==>it's safe to guess that this is precisely what the
==>Boardwatch study does.

I would have to disagree, Sean, and say that there are two paths which
could give a redesign of this study some merit (after the methodologies
are fixed):

1. Stick with the backbone performance figure and install an equal-sized
circuit to each backbone. Place a web server on it, carrying no other
traffic, and download the *exact* *same* file from each web server from
the original 27 locations. That number of locations probably should be
upped a bit, too.

2. Stick with the end-to-end performance figure, change the study's
title, and make it a bit more scientific by asking the backbones to put
this file on their servers for testing. Sure, some backbones may decline,
and that can be published too. Readers can draw their own conclusions as
to why a provider declines. A rough sketch of the aggregation either
option would need follows below.
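As promised, here is that sketch (Python again), assuming per-location
throughput samples in KB/s have already been collected by a probe like the
one earlier in this thread; the provider names and numbers are made up:

# Aggregate same-file download rates per provider; one sample per
# measurement location (e.g. the study's 27 vantage points).
# Provider names and sample values below are hypothetical.
from statistics import mean, median

samples = {
    "provider-a": [412.0, 398.5, 450.2, 91.3],
    "provider-b": [388.1, 405.7, 399.9, 401.2],
}

for provider, rates in sorted(samples.items()):
    print(f"{provider}: median {median(rates):.1f} KB/s, "
          f"mean {mean(rates):.1f} KB/s, n={len(rates)}")

Reporting the median alongside the mean keeps a single badly positioned
vantage point from dominating a provider's figure.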

/cah