Properly deployed NTP should calibrate the local hardware clocks to prevent drift even during connectivity outages. (I mean both the low-resolution hardware clocks used to keep time across power cycles and reboots, and the oscillators used while the OS is running.) While most computer hardware is temperature sensitive, if your datacenter's temperature is suddenly changing enough to cause clock drift, well, you have bigger problems.
I admit that this is an anecdote, but in our environment, I find that our GPSDO loses its GPS signal due to weather more often than we lose our connections to internet NTP servers.
On the other hand, we once had a site-wide Kerberos authentication outage because all of our Windows clients were using some Windows NTP client that, by default, used two NTP sources owned by the software's developer; when both of those sources suddenly stepped by 20 minutes, Kerberos locked everyone out.
For sure, sudden loss of time "shouldn't" happen, but having a local refclock is comparatively cheap insurance against it in many deployments.
I've seen things like this when there's a sudden power loss across a small site, e.g. a remote PoP. Think loss of utility power while the UPS fails to transfer for some unanticipated reason. Everything comes back up when either utility power returns or the generator spins up, but it has all been hard reset. Depending on your NTP implementation, the local hardware clock may not be particularly accurate. Even good implementations often lack the hardware capabilities needed to trim the low-resolution hardware reference and have to resort to simply flushing the time to hardware every so often.
Relative inaccuracies of a few seconds are pretty normal in that kind of situation in my experience. Putting everything together from logs where there's an unknown time offset of a few seconds after the fact can be tough. Then again, maybe you don't care in this example case since the cause of the problem is proximate - the frigging UPS didn't do its job. More complex scenarios might be easily envisioned, though.
Now, obviously you still have the issue that the GPS refclock will take a while to lock and start serving time, but at least you potentially have known-good time before you start bringing higher-level network protocols up (and can deliberately delay until you do, if desired), which is potentially impossible if your only source of time is the network itself.
BCP these days is "orphan mode", not "local refclock".
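For readers unfamiliar with it: orphan mode is a one-line knob in ntpd. A minimal sketch of an ntp.conf using it (all hostnames here are placeholders, and the orphan stratum of 5 is just a conventional choice):

```
# If every upstream source is lost, the site's servers elect one of
# themselves to act as a stratum-5 source, keeping the site mutually
# consistent (though slowly drifting) instead of free-running apart.
tos orphan 5

# Upstream sources (placeholder names).
server ntp1.example.net iburst
server ntp2.example.net iburst

# Peer the site's servers with each other so they can form the
# orphan group when the upstreams vanish.
peer ntp-b.example.internal
```

The old approach this supersedes was pointing ntpd at the undisciplined LOCAL(0) driver with a high fudge stratum.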
Ah, this is the dance with "have enough sources of time"...
tl;dr: if any of the below is too much work, just run a reasonably well-monitored NTP server syncing from other NTP servers. If you want more than that, you need to see the sky. Don't do the CDMA thing.
Depending on your requirements, having the antenna in the window may or may not be satisfactory. If it's fine, you probably could have just run a regular NTP server in the first place. For long swaths of the day you might not see many satellites, which adds to the uncertainty of the signal.
Meinberg's GPS antenna has a bit more smarts, which lets it work over up to 300 meters of RG58 or 700 meters of RG213. (They also have products that use regular L1 antennas, with the limitations Bryan mentioned.)
They also have a multi-mode fiber box that lets the antenna be up to 2 km from the receiver, or 20 km with their single-mode fiber box, if you have fiber to somewhere else where you can see the sky and place an antenna.
It will cost more than the one you linked to, but their systems are very reasonably priced, too. For "hundreds of customers", whatever is the smallest/cheapest box they have will work fine. Even their smallest models have decent oscillators (for keeping the ticks accurate between GPS signals).
The Meinberg time server products (I am guessing all of them, but I'm not sure) also have a mode where they poll an upstream NTP server aggressively and then steer the oscillator to follow it. I haven't used it in production, but it worked a lot better than it sounded like it would. (In other words, even without GPS it's a better time server than most systems.)
But with a small compact server like the DC-powered TimeMachines Inc unit, which costs something like $300, you simply put the server where the visibility is and connect back to the nearest Ethernet port in your network, up to 300 ft away, or virtually any distance with fiber transceivers. We've installed these in Cantex boxes on a windy, rainy tenth-story rooftop in upstate NY and they run flawlessly, warmed by their own internal heat at sub-zero temps and perfectly happy at ambient temps of 110F.
It's hard to consider messing with signal converters and pricey remotely-powered active antennas when you can solve the problem for $300.
As I said, it really depends on your requirements and expectations.
For my "normal" use cases there hasn't been room for a lot of stuff between "well-run NTP server with networked time source" and "server with fancy clocks and frequency input".
Though, on the topic of unusual requirements, there are a bunch of contributors to the NTP Pool using this curious device that can do line-rate NTP responses (100 Mbps, but still):
Anyone know of a solution that doesn't require an external antenna, is NEBS compliant, and has T1-type outputs for me to hook into my Metaswitch gear?
You buy a powered GPS antenna for it. Which antenna depends on the cable length and type. The amplifier in the antenna amplifies the signal just enough to overcome the cable loss between the antenna and the receiver. Nice thick cables lose less signal. Dinky thin ones are easier to work with.
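To make the trade-off concrete, here is a back-of-the-envelope link-budget sketch in Python. The attenuation figures are ballpark values for GPS L1 (~1575 MHz), not datasheet numbers, and the 35 dB LNA gain is just an assumed example:

```python
# Back-of-the-envelope antenna link budget. The amplifier in the
# antenna must cover the cable loss with some margin left over for
# connectors and splices. All numbers are illustrative only; check
# the datasheets for your actual cable and antenna.
ATTEN_DB_PER_M = {
    "RG58": 0.7,      # thin and easy to route, but lossy
    "RG213": 0.3,     # thicker, lower loss
    "LMR-400": 0.17,  # thicker still
}

def max_run_m(lna_gain_db, cable, margin_db=5.0):
    """Longest cable run the antenna amplifier can make up for,
    leaving margin_db for connectors and splices."""
    return (lna_gain_db - margin_db) / ATTEN_DB_PER_M[cable]

for cable in ATTEN_DB_PER_M:
    print(f"{cable}: ~{max_run_m(35.0, cable):.0f} m with a 35 dB LNA")
```

The point the table makes numerically: the thicker the cable, the longer the run a given antenna amplifier can support.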
You sure you need a GPS NTP server? You understand that if you do, you need two for reliability, right, and probably at geographically diverse locations? If you're not on an air-gapped network, consider syncing a couple of head-end NTP servers against tick and tock (.usno.navy.mil, the Naval Observatory) and not worrying about it. One less piece of equipment to manage, update, secure, etc.
That entirely depends on what you need the time for.
For example, in a Continuous Control environment you really do not care about the accuracy of the time -- just like a printer will not suddenly fail to print documents with dates in them because of Y2K, the printer neither cares nor knows what time it is.
What you may care about, however, is that all your Distributed Control and Outboard Systems have the SAME TIME, and that their times, relative to each other, stay closely synchronized. This has a huge impact when comparing log events from one system to another. What is important is that they all have the same time, and that they all drift together.
If you have one such installation, then you really do not care about the "accuracy" of the time. However, if you have multiple such installations, then you want them all to have the same time (if you will be comparing logs between them, for example). At some point it becomes "cheaper" to spend thousands of dollars per site on a single Stratum 0 time source (for example, the GPS system) at each site (and thus comparable timestamps) than it is to pay someone to go through the rigamarole of computing offsets and slew rates between sites to do accurate comparisons. And if you communicate any of that info to outsiders, then being able to say "my log timestamps are accurate to +/- 10 nanoseconds so it must be you who is farked up" (and be able to prove it) has immense value.
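The "rigamarole" in question is simple but error-prone bookkeeping; a sketch of mapping one site's log timestamps onto another's clock, modelling the difference as a fixed offset plus a linear drift (slew). The offset and drift values are invented for illustration:

```python
# Reconcile timestamps from site B against site A's clock, given an
# offset and slew estimated from events visible at both sites.

def to_site_a_time(t_b, offset_s, slew_ppm, t_ref):
    """Map a site-B timestamp onto site A's timescale.

    offset_s: (B minus A) at the reference time t_ref.
    slew_ppm: B's additional drift relative to A, parts per million.
    """
    return t_b - offset_s - slew_ppm * 1e-6 * (t_b - t_ref)

# Say site B was 2.5 s ahead at t_ref and drifts +12 ppm (~1 s/day).
t_ref = 1_700_000_000.0
event_b = t_ref + 86_400.0  # logged one day later, on B's clock
# ~86396.46: a day, minus the 2.5 s offset and ~1.04 s of drift.
print(to_site_a_time(event_b, 2.5, 12.0, t_ref) - t_ref)
```

Multiply this by every pair of sites you compare, and the shared-refclock approach starts looking cheap.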
If your network is air gapped from the Internet then sure. If it's not, you can run NTP against a reasonably reliable set of time sources (not random picks from the Pool) and be able to say, "my log timestamps are accurate to +/- 10 milliseconds so it must be you who is farked up." While my milliseconds lose the pecking-order contest, they're just as good for practical purposes and a whole lot less expensive.
If your system is Internet-connected, that is. If you run an air-gapped network, then yeah, get your time out of band.
And while time-source stability is a good criterion, the most important NTP criterion is path-latency symmetry between directions. It's better to have a path with 100 ms of one-way latency both ways than a path that is 1 ms one way and 100 ms the other.
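That asymmetry point falls straight out of the standard NTP offset calculation; a small illustration with invented delays:

```python
# NTP estimates clock offset from four packet timestamps and assumes
# the outbound and return path delays are equal; any asymmetry turns
# directly into offset error of (d_out - d_return) / 2.

def ntp_offset(t0, t1, t2, t3):
    """t0 = client send, t1 = server recv, t2 = server send, t3 = client recv."""
    return ((t1 - t0) + (t2 - t3)) / 2

def asymmetry_error(d_out, d_return):
    """Offset error introduced by unequal path delays."""
    return (d_out - d_return) / 2

# Symmetric 100 ms each way, clocks actually in sync: offset is 0,
# despite the high latency.
print(ntp_offset(0.0, 0.100, 0.100, 0.200))   # 0.0

# 1 ms out, 100 ms back: NTP sees a spurious ~49.5 ms offset.
print(ntp_offset(0.0, 0.001, 0.001, 0.101))   # ~ -0.0495
print(asymmetry_error(0.001, 0.100))          # ~ -0.0495
```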
You mean something like this, which is relatively easy to achieve:
One word of caution when using a low-priced NTP appliance: your network
activity could overwhelm the TCP/IP stack of the poor thing, especially
if you want to sync your entire shop to it. For the networks I built, I
set up a VLAN specific to the NTP appliance and to the two servers that
sync up with it. Everything else in the network is configured to talk
to the two servers, but NOT on the three-device "NTP Appliance VLAN".
NOTE: Don't depend on the appliance to provide VLAN capability; use a
configuration in a connected switch. How you wire from the appliance to
a port on your network leaves you with a lot of options to reach a
window with good satellite visibility, as CAT 5 at 10 megabits/s can
extend a long way successfully. Watch your cable dress, particularly
splices and runs against metal. (Or through rooms with MRI machines --
I'm not joking.)
The two servers in question also sync up with NTP servers in the cloud
using whatever baseband or VLANs (other than the "NTP VLAN") you
configure. Ditto clients using the two servers as time sources.
The goal here is to minimize the amount of traffic in the "NTP Appliance
VLAN". What killed one installation I did was the huge amount of ARP
traffic that the appliance had to discard; it wasn't up to the deluge.
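To make the topology concrete, here is a sketch of what ntp.conf on one of the two head-end servers could look like under that scheme; every hostname here is a placeholder, and the restrict line is ordinary ntpd access-control boilerplate:

```
# Head-end server "ntp-a". The appliance is reachable only on the
# isolated NTP Appliance VLAN; clients never talk to it directly.
server ntp-appliance.example.internal iburst prefer

# Cloud sources over the ordinary network, as sanity checks.
server tick.usno.navy.mil iburst
server tock.usno.navy.mil iburst

# The other head-end server, so the pair stay mutually consistent.
peer ntp-b.example.internal

# Answer client time requests, but refuse remote queries/modification.
restrict default kod nomodify notrap nopeer noquery
```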
LOL. That's not a real problem with today's microprocessors. The TM1000A, for example:
"...is capable of serving 135+ synchronizations per second.
That provides support for over 120,000+ devices updating
every 15 minutes on the network."
As for ARP traffic deluges, if that's happening on your LAN, you have bigger problems.
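The arithmetic behind the vendor's claim does check out, at least assuming polls are evenly spread rather than bursty:

```python
# 135 responses per second vs. clients polling once every 15 minutes.
rate_per_s = 135
poll_interval_s = 15 * 60                # 900 s between polls per client
supported_clients = rate_per_s * poll_interval_s
print(supported_clients)                 # 121500, i.e. "over 120,000"
```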
Or in our case, a Canada Goose lands on the transfer switch, shorting it out and disconnecting street power, UPS, and generator. TBH I wasn't monitoring NTP at the time, being slightly more concerned with critical applications, so I concede your point.
But I will be placing this inside a data center: do these need an
actual view of the sky to get a signal, or will they work fine inside
a data center building? If you have any other hardware requirements
for providing stable time service to hundreds of customers, please
let me know.
Two is not a great number. If they disagree, there is no majority
clique to be found.
Also, there is something to be said for using different models/vendors
for the time sources. If you only have the same model from one vendor
and there is a bug, you can lose all your time sources at once. The
GPS week rollover happens every 1024 weeks (~19.6 years), and when it
bites a given receiver depends on its firmware and the manufacturing
date baked into that firmware.
These problems can be mitigated if you have "enough" time sources for
your internal NTP servers and you peer with enough other servers,
possibly your own.
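The "no majority clique" point can be sketched as interval intersection. This toy version (NOT ntpd's real Marzullo-style selection algorithm, just an illustration of the principle) looks for the largest set of sources whose offset-plus-or-minus-error intervals pairwise overlap:

```python
# Each source reports a correctness interval (offset - err, offset + err).
# Sources whose intervals overlap "agree"; the biggest agreeing set wins.
from itertools import combinations

def intervals_overlap(a, b):
    """True if closed intervals [a0, a1] and [b0, b1] intersect."""
    return max(a[0], b[0]) <= min(a[1], b[1])

def largest_agreeing_set(intervals):
    """Indices of the largest pairwise-agreeing subset (brute force)."""
    for size in range(len(intervals), 0, -1):
        for combo in combinations(range(len(intervals)), size):
            if all(intervals_overlap(intervals[i], intervals[j])
                   for i, j in combinations(combo, 2)):
                return list(combo)
    return []

# Two sources 20 minutes apart: two singleton cliques, no majority,
# no way to tell the truechimer from the falseticker.
print(largest_agreeing_set([(-0.01, 0.01), (1199.99, 1200.01)]))   # [0]

# Four sources, one stepped by 20 minutes: the three that agree win.
print(largest_agreeing_set(
    [(-0.01, 0.01), (-0.02, 0.005), (0.0, 0.02), (1199.99, 1200.01)]))  # [0, 1, 2]
```

With three or more honest sources, one falseticker is simply outvoted; with two, the algorithm can only shrug.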
Four is a better number locally for ntpd instances. As for different models/vendors for the time sources, I consider the GPS constellation to be one vendor, so I add multiple internet-connected sources to my ntp.conf as well.