Meeting in Versailles, France, on Friday, the Bureau International des Poids et Mesures has called time-out on “leap seconds” – the one-second jumps occasionally added to clocks running on Coordinated Universal Time to keep them in sync with Earth’s rotation.

From 2035, leap seconds will be abandoned for 100 years or so, and will probably never return. It’s time to work out exactly what to do about a problem that has become increasingly urgent and severe with the rise of the digital world.

Why leap seconds?

Roll back to 1972, when the arrival of highly accurate atomic clocks laid bare the fact that days are not exactly 86,400 standard seconds long (that being 24 hours, with each hour comprising 3,600 seconds).

The difference is only a matter of milliseconds per day, but it accumulates inexorably. Ultimately, the sun would appear overhead at “midnight” – an indignity metrologists (people who study the science of measurement) were determined to prevent. Complicating matters further, Earth’s rotation – and thus the length of a day – varies erratically and can’t be predicted far in advance.

The solution arrived at was leap seconds: one-second corrections applied at the end of December and/or June on an ad hoc basis. Leaps are scheduled to ensure that the timekeeping system we all use, Coordinated Universal Time, never strays more than 0.9 seconds from the Earth-tracking alternative, Universal Time.
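As a concrete illustration of that bookkeeping, here is a minimal sketch in Python. It assumes we already have a prediction of the gap between Universal Time and Coordinated Universal Time at the next opportunity to leap; in reality that prediction comes from observations of Earth’s rotation, and leap seconds are announced months in advance.

```python
# Minimal sketch of the 0.9-second rule. "dut1" is the predicted gap
# Universal Time minus Coordinated Universal Time, in seconds, at the next
# insertion opportunity (end of June or December). Simplified illustration
# only: the real decision is made from Earth-rotation observations.
LIMIT_S = 0.9

def leap_correction(predicted_dut1_s: float) -> int:
    """Return +1 to insert a leap second, -1 to delete one, 0 to do nothing."""
    if predicted_dut1_s <= -LIMIT_S:
        return +1   # Earth (Universal Time) is lagging: pause UTC for a second
    if predicted_dut1_s >= LIMIT_S:
        return -1   # Earth is running ahead: skip a UTC second to catch up
    return 0

print(leap_correction(-0.95))  # 1 -> an inserted leap second, as in 2016
```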

But all this was before computers ruled the Earth. Leap seconds were an elegant solution when first proposed, but are diabolical when it comes to software implementations.

This is because a leap second is an abrupt change that badly breaks key assumptions software uses to represent time. Basic assumptions – that time never repeats, never stands still and never runs backward – are all at risk, as are other quaint notions such as every minute lasting exactly 60 seconds.
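To see what that looks like in practice, consider the 2016 leap second from a program’s point of view. The sketch below is my own illustration in Python, not a description of any particular system’s behaviour.

```python
from datetime import datetime, timezone

# The inserted second is officially written 23:59:60 UTC, a label many
# libraries simply cannot represent. Python's datetime, for instance,
# rejects it outright:
try:
    datetime(2016, 12, 31, 23, 59, 60, tzinfo=timezone.utc)
except ValueError as err:
    print(err)  # "second must be in 0..59"

# Unix/POSIX timestamps assume every day is exactly 86,400 seconds long,
# so there is no slot for the extra second. One common convention is for
# the clock to cover the leap by repeating a timestamp value - exactly the
# "time stands still or goes backward" behaviour software doesn't expect:
#   2016-12-31 23:59:59 UTC -> 1483228799
#   2016-12-31 23:59:60 UTC -> 1483228799 again (no slot of its own)
#   2017-01-01 00:00:00 UTC -> 1483228800
```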

Leaping into danger

Question: what’s worse than mixing computers and leap seconds? Answer: mixing billions of interconnected networked computers, all trying to execute a leap second jump at (theoretically) the same time, with a great many failing in a wide variety of ways.

It gets better: most of those computers learn about an impending leap second from the network itself. Better still, almost all of them constantly synchronise their internal clocks by communicating over the internet with other computers, called time servers, and trusting the timing information these supply.
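The warning itself is tiny. The Network Time Protocol reserves a two-bit “leap indicator” in every reply a time server sends, and a client can read it with a bare-bones query like the sketch below (standard-library Python only; pool.ntp.org is just an example server).

```python
import socket

# Bare-bones SNTP query that reads the two-bit "leap indicator" a time
# server sends in every reply. pool.ntp.org is just an example server.
NTP_SERVER = "pool.ntp.org"
NTP_PORT = 123

# First byte of the request: leap indicator 0, version 4, mode 3 (client).
request = b"\x23" + 47 * b"\x00"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(request, (NTP_SERVER, NTP_PORT))
    reply, _ = sock.recvfrom(48)

leap_indicator = reply[0] >> 6  # top two bits of the first reply byte
print({
    0: "no leap second scheduled",
    1: "a second will be inserted at the end of the current UTC day",
    2: "a second will be deleted at the end of the current UTC day",
    3: "clock not synchronised",
}[leap_indicator])
```

Every computer in the chain has to receive, believe and act on those two bits at exactly the right moment – which is where things go wrong.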

Imagine this scene then: during leap-second madness, some time-server computers can be wrong, but client computers relying on them don’t know it. Or they can be right, but client computer software disbelieves them. Or both client and server computers leap, but at slightly different times, and as a result software gets confused. Or perhaps a computer never receives word that a leap is happening, does nothing, and ends up a second ahead of the rest of the world.

All of this and more was seen in the analysis of timing data from the last leap-second event in 2016.

The ways in which computer confusion over time can affect networked systems are too numerous to list. There are already documented cases of significant outages and disruption arising from recent leap-second events.

More broadly, though, consider the networked critical infrastructure our world runs on, including electricity grids, telecommunications systems, financial systems, and services such as collision avoidance in shipping and aviation. Many of these rely on accurate timing at millisecond scales, or even down to nanoseconds. An error of a full second could have huge, even deadly, impacts.

Russia voted against the decision to abandon leap seconds, in part because it will require a major update to GLONASS, its global navigation satellite system, which incorporates leap seconds. Credit: Alexandru Vicol/Unsplash

Time’s up!

In recognition of the growing costs to our computer-based world, the idea of doing away with leap seconds has been on the table since 2015.

The International Telecommunication Union, the standards body that governs leap seconds, pushed back a decision several times. But pressure continued to grow on multiple fronts, including from major tech players such as Google and Meta (formerly Facebook).

The majority of international participants in the vote, including the US, France and Australia, supported the recent decision to drop the leap second.

The Versailles decision is not to abandon the idea of keeping everyday timekeeping (Coordinated Universal Time) aligned with Earth. It is rather a recognition that the costs of the current leap-second system are too high, and getting worse. We need to stop it before something really bad happens.

The good news is we can afford to wait the suggested 100 years or so. During this time the discrepancy may grow to as much as a minute, but that’s not very significant compared with the one-hour shifts we already endure with daylight saving time each year. The logic is that by dropping the leap second now, we avoid its dangers and allow plenty of time to work out less disruptive ways to keep clocks and Earth aligned.
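That “minute in 100 years” is only a ballpark figure, but it is easy to sanity-check – assuming, purely for illustration, an average excess day length of about 1.6 milliseconds (the real rate wanders around).

```python
# Back-of-the-envelope check of the "about a minute per century" drift,
# assuming an illustrative average excess day length of 1.6 milliseconds.
excess_per_day_s = 0.0016
drift_s = excess_per_day_s * 365.25 * 100   # days per year x 100 years
print(f"{drift_s:.0f} seconds")             # ~58 seconds, roughly a minute
```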

Solution?

An extreme approach would be to adopt a fully abstract definition of time, abandoning the long-held association between time and Earth’s movements. Another would be to make adjustments larger than a second, but far less frequently and with far better preparation to limit the dangers – perhaps in an age when software has evolved beyond bugs.

The decision on how far we’re willing to let things drift before a new approach takes over has its own deadline: the next meeting of the Bureau International des Poids et Mesures, set for 2026. In the meantime, we’ll be stuck with leap seconds until 2035.

Since the Earth has surprisingly begun to spin faster in recent decades, the next leap second may, for the first time, involve removing a second to speed up Coordinated Universal Time, rather than adding a second to slow it down.
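On the clock face, a deleted second would mean the last minute of the day contains only 59 seconds, so the label 23:59:59 never occurs. The sketch below uses an arbitrarily chosen future New Year’s Eve purely for illustration.

```python
from datetime import datetime, timedelta, timezone

# UTC labels around a hypothetical deleted ("negative") leap second at the
# end of an arbitrarily chosen 31 December. 23:59:59 is simply skipped.
def labels_around_negative_leap(year: int):
    t = datetime(year, 12, 31, 23, 59, 57, tzinfo=timezone.utc)
    for _ in range(4):
        label = t.strftime("%H:%M:%S")
        yield label
        # Jump straight from 23:59:58 to 00:00:00 - the deleted second.
        t += timedelta(seconds=2 if label == "23:59:58" else 1)

print(list(labels_around_negative_leap(2029)))
# ['23:59:57', '23:59:58', '00:00:00', '00:00:01']
```

In Unix time the mirror image happens: a timestamp value is skipped rather than repeated.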

Software for this case is largely already in place, but has never been tested in the wild – so be prepared to leap into the unknown.

Darryl Veitch is Professor of Computer Networking, University of Technology Sydney.

This article first appeared on The Conversation.