Leap Second planned for 2016

Looks like we'll have another second in 2016:
http://www.space.com/33361-leap-second-2016-atomic-clocks.html

Time to start preparing

Time to start preparing

Unless you are running something that can't handle leap seconds what do you
really need to prepare for?

It's a whole extra second you can spend doing something awesome. You have to
plan now!

I'll just leave this here :slight_smile:

http://spendyourleapsecondhere.com/

Time is actually harder than it seems. Many bits of software break in unexpected ways. Expect the unexpected.

Jared Mauch

Aye. How many have written code like this:

time_t start = time(NULL);
do_something();
time_t elapsed = time(NULL) - start;  /* can go negative if the clock steps back */

Virtually all code dealing with the passage of time assumes time moves
only forward; I'm amazed we don't see more issues during leap seconds.
Portable monotonic time isn't even available in many languages'
standard libraries.

Hopefully in 2023 they'll finally decide to get rid of leap seconds
from UTC. Then GPS time, TAI, and UTC would all be the same, differing
only by static offsets.

That was great, I would actually like NIST to link to it…

How about you run your systems on TAI or satellite time?

But time _DOES_ flow. The seconds count
  58, 59, 60, 00, 01, …
If you can’t keep up, that’s not UTC’s fault.

As for stopping the leap seconds, talk to the planet Earth. It’s the one who will not conveniently rotate properly. Either that, or run REEEEEEALLY Fast in that -> direction every once in a while. :slight_smile:

Once upon a time, Patrick W. Gilmore <patrick@ianai.net> said:

But time _DOES_ flow. The seconds count
  58, 59, 60, 00, 01, …
If you can’t keep up, that’s not UTC’s fault.

Here in the real world of modern computers, virtually everybody has
copied the UNIX/C behavior that doesn't actually allow for 61 seconds in
a minute. POSIX time_t is defined to be 86,400 seconds per day, with
"(time % 86400) == 0" being midnight UTC. The conversion from time_t to
(hours, minutes, seconds) documents that minutes can be more or less
than 60 seconds, but it is moot, since the input that the system uses to
actually keep time cannot represent anything but 60-second minutes (and
still be in sync with the outside world).

So, all the systems we use either double count 59 (which means time
jumps backwards, because it goes 59.9999... to 59.0), or count half time
during second 59 (so going from 59.0 to 0.0 takes two actual seconds).
Both have their pluses and minuses, and IIRC both have exposed software
bugs in the past. The bugs usually get fixed after the leap second, but
the next one always seems to expose new bugs.

Leap second handling code is not well-tested and is an ultimate corner
case. There's been debate about abolishing leap seconds; with all the
everyday bugs people have to deal with, few people set up a special
test environment to handle something that may never happen again (until
you get less than six months' warning that it'll happen at least once
more), and even then, tests tend to focus on what broke before, because
it is really hard to test EVERYTHING.

You can debate the correctness of these things, but you can't really
debate that they are the way things work.

There are experiments to handle leap seconds differently, such as
smearing them over a longer period (up to 24 hours IIRC), but they
require custom time-keeping code and running without any external time
reference during the smear (because there's no standard way to do that).

As for stopping the leap seconds, talk to the planet Earth. It’s the one who will not conveniently rotate properly. Either that, or run REEEEEEALLY Fast in that -> direction every once in a while. :slight_smile:

Leap seconds are inserted to keep the atomic clocks synced with an
arbitrary time base (that is guaranteed to vary forever). There's
nothing magic about having noon UTC meaning the Sun is directly over 0°
longitude; if we didn't insert leap seconds, it would have drifted
slightly, but so what?

Once upon a time, Javier J <javier@advancedmachines.us> said:

Unless you are running something that can't handle leap seconds what do you
really need to prepare for?

The last several leap seconds have exposed weird and hard-to-predict
bugs in various bits of software. Those previous bugs have (probably)
all been fixed, but there will likely be new bugs that nobody tested
for.

Chris Adams <cma@cmadams.net>:

Leap seconds are inserted to keep the atomic clocks synced with an
arbitrary time base (that is guaranteed to vary forever). There's
nothing magic about having noon UTC meaning the Sun is directly over 0°
longitude; if we didn't insert leap seconds, it would have drifted
slightly, but so what?

Here is "so what". From my blog, earlier this year: "In defense of
calendrical irregularity"

I’ve been getting deeper into timekeeping and calendar-related
software the last few years. Besides my work on GPSD, I’m now the tech
lead of NTPsec. Accordingly, I have learned a great deal about time
mensuration and the many odd problems that beset calendricists. I
could tell you more about the flakiness of timezones, leap seconds,
and the error budget of UTC than you probably want to know.

Paradoxically, I find that studying the glitches in the system (some
of which are quite maddening from a software engineer’s point of view)
has left me more opposed to efforts to simplify them out of
existence. I am against, as a major example, the efforts to abolish
leap seconds.

My reason is that studying this mess has made me more aware than I
used to be of the actual function of civil timekeeping. It is to allow
humans to have consistent and useful intuitions about how clock time
relates to the solar day, and in particular to how human circadian
rhythms are entrained by the solar day. Secondarily to maintain
knowledge of how human biological rhythms connect to the seasonal
round (a weaker effect but not a trivial one).

Yes, in theory we could abolish calendars and timestamp everything by
atomic-clock kiloseconds since an epoch. And if there ever comes a day
when we all live in completely controlled environments like space habs
or dome colonies that might actually work for us.

Until then, the trouble with that sort of computer-optimized timestamp
is that while it tells us what time it is, it doesn’t tell us what
*kind* of time it is – how the time relates to human living. Day? Night?
Season?

Those sideband meanings are an important component of how humans use
and interpret time references. Yes, I know January in Australia
doesn’t mean the same thing as January in the U.S. – the point is that
people in both places have stable intuitions about what the weather
will be like then, what sorts of holidays will be celebrated, what
kind of mood is prevalent.

I judge that all the crap I go through reconciling scientific absolute
time to human-centered solar time is worth it. Because when all is
said and done, clocks and calendars are human instruments to serve
human needs. We should be glad when they add texture and meaning to
human life, and beware lest in our attempts to make software easier to
write we inadvertently bulldoze away entire structures of delicate
meaning.

UPDATE: There is one context, however, in which I would cheerfully
junk timezones. I think timestamps on things like file modifications
and version-control commits should always be kept, and represented, in
UTC, and I’m a big fan of RFC3339 format as the way to do that.

The reason I say this is that these times almost never have a
human-body-clock meaning, while on the other hand it is often useful
to be able to compare them unambiguously across timezones. Their usage
pattern is more like scientific than civil time.

Hey,

But time _DOES_ flow. The seconds count
        58, 59, 60, 00, 01, …
If you can’t keep up, that’s not UTC’s fault.

Check the implementation on your PC. This is why code is broken and
people don't even know it's broken. You have to use monotonic time to
measure the passage of time, which is not particularly easy to do
portably in some languages.

As for stopping the leap seconds, talk to the planet Earth. It’s the one who will not conveniently rotate properly. Either that, or run REEEEEEALLY Fast in that -> direction every once in a while. :slight_smile:

In practice this does not appear to be a significant problem. Several
thousand years would have to pass before clocks had shifted by an hour,
and we already have experience shifting clocks by an hour within a year,
so I'm sure we can tolerate the slippage caused by not having leaps. I'm
keeping my fingers crossed for 2023 and sanity prevailing.

Hi,

Leap second handling code is not well-tested and is an ultimate corner
case. There's been debate about abolishing leap seconds; with all the

well, we've gone through a few of these now... so if it was all okay before,
it's likely to be again... exception: any NEW code that
you are running since last time - THAT hasn't been tested :wink:

alan

In most cases the bugs are not pathological: if the elapsed time being
measured is long, being 1 s off may not break anything, and if the code
measures short intervals but runs infrequently, you might simply never
hit the code path at the right moment.

I'm sure you've had your share of difficult-to-track-down bugs
requiring a very specific set of complicated conditions. I see little
value in black-box testing here: it takes a lot of effort with a very
high chance of simply not hitting all the bug's prerequisites. Unit
testing is much more fruitful, but alas, in a walled garden it's not
possible.

  ++ytti

It doesn't help that the POSIX standard doesn't represent leap seconds
anyplace, so any elapsed time calculation that crosses a leap second
is guaranteed to be wrong....

POSIX (Unix) (normal) time does not have leap seconds.
Every POSIX (Unix) (normal) minute has exactly 60 seconds.
Every POSIX (Unix) (normal) hour has exactly 60 minutes.
Every POSIX (Unix) (normal) day has exactly 24 hours.
Every POSIX (Unix) (normal) year has 365 days, unless it is a leap year, in which case it has exactly 366 days.

POSIX time is the number of seconds (or parts thereof) that have passed since midnight, 1 January 1970, minus the leap seconds inserted since then: by definition, every POSIX day is exactly 86,400 seconds long. Outside of very specialized scientific applications, EVERY computer system everywhere uses POSIX time and does time/date calculations based on it.

UTC, by contrast, does have leap seconds. They exist because atomic time (TAI) and the Earth's rotation angle (UT1) drift apart, so from time to time UTC must be adjusted to stay within 0.9 seconds of UT1. POSIX time simply pretends those inserted seconds never happened, which is why it falls out of step with UTC for an instant at each leap.

So how can we solve the problem? Immediately and long term?

a) use UTC or Unix time, and accept that code is broken
b) migrate to CLOCK_MONOTONIC, and accept that the epoch is unknown (you
cannot serialise the clock and consume it on another system)
c) use an NTP smear to make clocks run incorrectly to hide the problem
d) use GPS time or TAI and implement leaps at the last possible moment (at
the presentation layer)
e) wait for 2023 and hope the problem goes away

On Sun 2016-07-10T11:27:33 +0300, Saku Ytti hath writ:

So how can we solve the problem? Immediately and long term?

The ITU-R had the question of leap seconds on their agenda for 14
years and did not come up with an answer. Their 2015 decision was to
drop the question and ask an alphabet soup of international acronym
agencies to come up with something better by 2023.

The problem remains that simply abandoning leap seconds has the effect
of redefining the calendar, and Pope Gregory's last attempt to do that
took 300 years to consolidate. For time scales there are three
desirable goals, but it is only possible to pick two

http://www.ucolick.org/~sla/leapsecs/picktwo.html

Since one problem is that the leap second code isn't exercised regularly, I propose that each month there be a leap second, either forward or backward. These forward/backward motions should be fudged so that, over time, we stay pretty much correct.

If POSIX needs to be changed, then change it. By making the leap second not a rare event, it would hopefully be taken more seriously, and the code would receive wider testing than it does today.