Why aren't ISPs providing stratum 1 NTP service?

So the list works... if you don't want to provide "public" services, adjust
the server to allow connections only from your own IP blocks.

Currently I, and the company I work for, don't mind providing some services
to the "public" Internet community. The company almost always gets more out
of it than it costs to supply. I just don't want to get trapped into
"supporting" public services. I get enough hate mail now when one of
our no charge services goes out of service on occasion.

A related, but more on topic for NANOG, issue is aligning more network
services with network topology. I don't think putting a NTP stratum 1
server on the NAP network fabric is a good idea. I do think providers
exchanging NTP across the NAP fabric is a good idea. Redundant voting
catches a bunch of dumb errors.
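The arrangement described above might look something like this in an
ntp.conf sketch -- all the server and peer names here are hypothetical,
and the point is just the shape: one in-house stratum 1 source plus
several peers exchanged across the exchange fabric.

```
# /etc/ntp.conf -- illustrative sketch only; names are made up
server tick.example.net              # our own stratum 1 source
peer   ntp.isp-a.example.net         # peers exchanged across the NAP fabric
peer   ntp.isp-b.example.net
peer   ntp.isp-c.example.net         # three or more peers give the clock
                                     # selection algorithm enough votes to
                                     # outvote a single falseticker
```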

A packet saved is a packet you don't have to carry. It's not as huge
a problem as multiple MBONE tunnels transiting the same physical lines.
But I'd rather have multiple associations at the edge of my network, and
pass NTP in a structured manner around inside my network. The same thing
is true for several other network services.

I've always felt that USENET news distribution would work better if it
matched the network topology more closely. I'm not sure how best to do
this, although a decent set of maps of the USENET topology might be enough
to get news admins to adjust things on their own.

And heavily used WWW servers are another thing that could benefit from
aligning themselves with the topology. I'm thinking of a scenario where
the service was actually provided by a distributed set of WWW servers
located one hop from an XP, maybe at an XP peer's special customer colo
site, and the central WWW server site would issue redirects to the
topologically closest WWW server. For this to work best, I think network
operators would need to provide some data to allow the customer to
redistribute their traffic load more effectively.

Michael Dillon - ISP & Internet Consulting
Memra Software Inc. - Fax: +1-604-546-3049
http://www.memra.com - E-mail: michael@memra.com

In article <hot.mailing-lists.nanog-Pine.BSI.3.93.960719091042.14736C-100000@sidhe.memra.com>,

>And heavily used WWW servers are another thing that could benefit from
>aligning themselves with the topology.

The protocols don't support this cleanly. So far nothing I've seen would
allow a single URL to be used to access the "nearest" server. Until
something like that exists (i.e. the end users don't need to know a thing
about network topology) it seems pointless to align WWW servers with
the topology. Your suggested use of redirects just complicates things --
consider how the URLs would end up looking in a search engine.

Dean

>>And heavily used WWW servers are another thing that could benefit from
>>aligning themselves with the topology.

>The protocols don't support this cleanly. So far nothing I've seen would
>allow a single URL to be used to access the "nearest" server.

WWW servers can issue a "redirect" to a different URL. Anybody can hack
this up with something like Apache by adding index.cgi to the index page
possibilities and then enabling .cgi as an extension to automatically run
a CGI script which could issue the redirect in the HTTP headers instead of
emitting an HTML document. The CGI script would only need to be a simple
table lookup similar to what Cisco's SSP does. Of course there needs to
be something more intelligent (like BGP, to continue the analogy) that
builds and maintains the lookup table based on some sort of heuristics.
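A minimal sketch of that CGI trick, written in modern Python for
illustration. The prefix table, mirror URLs, and the `nearest` helper
are all made up here; as the text says, a real deployment would need
something smarter building the table from routing data.

```python
#!/usr/bin/env python3
"""index.cgi -- redirect clients to a "nearby" mirror (sketch only)."""
import os
from ipaddress import ip_address, ip_network

# Static lookup table: client prefix -> nearest mirror (hypothetical).
MIRRORS = [
    (ip_network("192.0.2.0/24"),    "http://www-east.example.com/"),
    (ip_network("198.51.100.0/24"), "http://www-west.example.com/"),
]
DEFAULT = "http://www.example.com/central/"

def nearest(addr: str) -> str:
    """Pick a mirror by longest... well, first matching prefix."""
    try:
        ip = ip_address(addr)
    except ValueError:
        return DEFAULT
    for net, url in MIRRORS:
        if ip in net:
            return url
    return DEFAULT

if __name__ == "__main__":
    target = nearest(os.environ.get("REMOTE_ADDR", ""))
    # Emit an HTTP redirect in the headers instead of an HTML document.
    print("Status: 302 Found")
    print("Location: " + target)
    print()
```

Dropped in as `index.cgi` with `.cgi` enabled as an executable extension,
the server never serves the index page itself; it just bounces the client.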

>Until something like that exists

It exists right now. Several people are doing this kind of thing. It's
just not an off-the-shelf product. Yet.

>the topology. Your suggested use of redirects just complicates things --
>consider how the URLs would end up looking in a search engine.

Life is never perfect. ;-)

Michael Dillon - ISP & Internet Consulting
Memra Software Inc. - Fax: +1-604-546-3049
http://www.memra.com - E-mail: michael@memra.com

You first have to get a decent metric for "nearest", and then be able
to measure and use it. Something basic like AS path lengths doesn't work,
so you'll probably end up having to use something like a history of RTTs.
Not an easy problem to solve, to put it mildly.
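One plausible shape for an RTT-history metric is a TCP-style smoothed
average. This sketch (server names and samples invented for the example)
ranks candidate servers by their smoothed RTT, so a single noisy sample
doesn't swing the choice.

```python
# Rank candidate servers by a smoothed history of RTTs (EWMA),
# since AS path length is a poor proximity metric.

def smoothed_rtt(samples, alpha=0.125):
    """Exponentially weighted moving average, TCP SRTT-style."""
    srtt = None
    for rtt in samples:
        srtt = rtt if srtt is None else (1 - alpha) * srtt + alpha * rtt
    return srtt

# Hypothetical measurement history, in milliseconds.
history = {
    "www-east.example.com": [40.0, 42.0, 39.0, 41.0],
    "www-west.example.com": [95.0, 30.0, 180.0, 120.0],  # noisy path
}

ranked = sorted(history, key=lambda h: smoothed_rtt(history[h]))
```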

-dorian

   >And heavily used WWW servers are another thing that could benefit from

   >aligning themselves with the topology.

   The protocols don't support this cleanly.

The _protocols_ DO support this (although "cleanly" could be
questioned). Very few clients have ever attempted to implement it,
and few of them even came close to getting it right...although there
were some pretty good examples a decade ago...

   So far nothing I've seen would allow a single URL to be used to
   access the "nearest" server.

Here's what the current standards specify to allow this to work(*):
The URL has a "fictitious" hostname, which resolves via the DNS to a
set of A records, one for each of the redundant servers that carry the
data (these servers can be extremely widely separated). The client
then applies an algorithm to select the "nearest" one (by its
definition of nearest). In fact the spec says you should rank them
and try several of the addresses in order, in case one is non-working.
This is _exactly_ the functionality that distributed web servers need.
The only possibly "unclean" thing about this is using the fictitious
host name.

Of course, the weak link here is allowing the client to make this
decision. Most clients use the stupidest algorithm available, which is
pick one (and only one, another violation) of the addresses,
essentially at random, and use that. This has as likely a chance to
pick the worst one as it does to pick the best one.
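For illustration, here is roughly what a better-behaved client could do,
sketched in modern Python: resolve the "fictitious" name to *all* of its
addresses, rank them by a quick connect-time probe, and skip any replica
that is unreachable. The hostname is a placeholder, and real clients of
the era would have had to do this inside the browser.

```python
import socket
import time

def resolve_all(host, port=80):
    """Return every address the name maps to, not just the first A record."""
    infos = socket.getaddrinfo(host, port, socket.AF_INET, socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

def connect_nearest(addrs, port=80, timeout=2.0):
    """Probe each address; keep the fastest connection, close the rest."""
    timed = []
    for addr in addrs:
        try:
            t0 = time.monotonic()
            s = socket.create_connection((addr, port), timeout=timeout)
            timed.append((time.monotonic() - t0, addr, s))
        except OSError:
            continue  # non-working replica: fall down the list, per the spec
    if not timed:
        raise OSError("no replica reachable")
    timed.sort(key=lambda t: t[0])
    for _, _, s in timed[1:]:
        s.close()
    return timed[0][1], timed[0][2]  # (chosen address, open socket)
```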

There are several heuristics available to any client program (some
were implemented over a decade ago :-) and there is some work (SONAR
and SRV) addressing better answers. Unfortunately, there is a
chicken-and-egg problem here. The client programmers have no
incentive to implement the better algorithms because no services are
provisioned this way, and that's because few clients would make use of
the distributed nature.

  -MAP

(*) And a clearly "unclean" way, which will work with existing
clients, is to take a routing prefix and distribute _that_ around.
The servers would all have the same address (making them hard to
manage), and the routing prefix would be advertised from each of these
locales, and routing would find you the "closest" one. After all, you
really are looking at a routing problem here. (Although something
about hammers and looking like nails comes to mind. :-)
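The footnote's shared-prefix idea reduces server selection to ordinary
best-path selection. A toy illustration of that reduction, with all
prefixes, locales, and metrics invented:

```python
# The same service prefix is advertised from several locales; each
# vantage point just picks the advertisement with the best path, and
# routing has done the "server selection" for free.

SERVICE_PREFIX = "192.0.2.0/24"  # hypothetical shared prefix

# Advertisements as seen from one vantage point: (locale, path length).
advertisements = [
    ("mae-east", 4),
    ("mae-west", 2),
    ("eu-net",   6),
]

def closest_locale(ads):
    """Ordinary best-path selection: shortest path wins."""
    return min(ads, key=lambda ad: ad[1])[0]
```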

(*) And a clearly "unclean" way, which will work with existing
clients, is to take a routing prefix and distribute _that_ around.
The servers would all have the same address (making them hard to
manage), and the routing prefix would be advertised from each of these
locales, and routing would find you the "closest" one. After all, you

Yep, works. Management isn't a problem if each box is fitted with
two interfaces and addresses, one for public consumption of contents,
one for internal use. Not that we have done any of this for real (yet).

>really are looking at a routing problem here. (Although something
>about hammers and looking like nails comes to mind. :-)

Well, yes, but then something like WWW and other stuff could be
fitted with a number of unflattering descriptions.

It's interesting to see ... it's only recently people on the US side
have begun getting concerned about bandwidth issues, attempting to
localize traffic if possible. So far, there hasn't been anything that
couldn't be solved with a couple of those DS3s, which cost the same
on your side as one or two E1s on our side. (Part of the reason for
the high cost of leased lines over here is that Europe is a large
collection of twisty little places, all different. Hence, your
leased lines go international and/or intercontinental at the drop of a
hat.) So for a long time, localization and a good geographic spread of
servers of various kinds have been given very serious attention on
this side.

Now here's me waiting for some moron to invent Son of CU-SeeMe ...

In article <hot.mailing-lists.nanog-199607222012.AA05135@jotun.EU.net>,

Isn't IBM doing some sort of fancy load redistribution for the WWW
servers it's running for the 1996 Summer Olympics? I seem to recall they
were determining the "closest" server via a technique called "ping
triangulation", whatever that is.

Christopher E. Stefan
flatline@ironhorse.com
http://www.ironhorse.com/~flatline finger for PGP key
System Administrator Phone: (206) 783-6636
Ironhorse Software, Inc. FAX: (206) 783-4591