Time is money, as they say, so why are so many companies so complacent when it comes to ensuring that their data centres are accurately telling the time? That can have some severe consequences, according to Simon Kenny, CEO of Hoptroff, a timing synchronisation service, and it’s those consequences that led to the creation of Traceable Time as a Service.
Now that businesses have adopted the cloud, every data centre has become a link in a network chain, where different elements of a single distributed process are executed. Companies have given up operating their own data centres on the promise that a series of outsourced locations can run their processes just as efficiently, at much lower cost and with greater operational flexibility.
The key to delivering on that promise is that all data centres (links in the chain) share a common frame of reference, so that even though they may be miles apart, they all operate to a common set of rules and standards. However, one of the most important elements in maintaining that common frame of reference is becoming fragmented and in need of an update as networking and applications get faster — time synchronisation.
If all the devices in a distributed process don’t share the same time to sufficient accuracy, the records they produce will put events in the wrong sequence and at incorrect intervals. Not only does this compromise the use of those records to reconstruct events, confirm outcomes or support customer and regulator audits, it undermines the reporting of causality, because when events are transposed in the record, effect can be presented as coming before cause. Clocks drift easily, creating chasms of doubt in the data. Particularly in financial services, where thousands of transactions take place every second, the results of unsynchronised clocks can be chaotic: a transaction can appear to arrive at the recipient before it left the sender, two parties may disagree over the timeline of events, and disputes cannot be easily resolved.
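The cause-and-effect inversion is easy to demonstrate. The sketch below is a hypothetical illustration (the offsets and latency are invented for the example): if a sender's clock runs fast and a recipient's clock runs slow, the recorded timestamps claim the message arrived before it was sent.

```python
from datetime import datetime, timedelta

# Hypothetical drift values for illustration only.
true_send = datetime(2024, 1, 1, 12, 0, 0)
network_latency = timedelta(milliseconds=5)
true_receive = true_send + network_latency

sender_offset = timedelta(milliseconds=30)      # sender's clock runs 30 ms fast
recipient_offset = timedelta(milliseconds=-25)  # recipient's clock runs 25 ms slow

stamped_send = true_send + sender_offset
stamped_receive = true_receive + recipient_offset

# The record now shows the effect preceding the cause.
print(stamped_receive < stamped_send)  # True: arrival appears to precede sending
```

With only 55 ms of combined drift, well within what unsynchronised clocks routinely accumulate, the timeline in the record is already inverted.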
Network Time Protocol
Network Time Protocol (NTP) has been the most popular source of accurate time for data centres until now, but it has two problems that could make it unsuitable for the challenges of the future.
First, it is generally only accurate to approximately 50 milliseconds, when many of the processes it is being used to monitor execute multiple actions within that length of time: examples include Real-Time Bidding (RTB) in digital advertising, synchronisation in IP media streaming (the new SMPTE standard requires 1 microsecond) and trading in financial services (MiFID II requires 1 millisecond for automated trading and 100 microseconds for HFT). As networks and applications get faster, clocks need to keep pace, or the timestamps they produce will either be wrong, or they will give multiple events the same timestamp, and sequence and causality will be lost.
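The timestamp-collision problem can be sketched numerically. Assuming, for illustration, five events spread over about 51 milliseconds, quantising their times to ~50 ms accuracy collapses most of them into a single indistinguishable timestamp, while microsecond accuracy keeps them all apart:

```python
# Event times in microseconds (illustrative values, not real trading data).
events_us = [0, 800, 1500, 49000, 51000]

def quantise(t_us, resolution_us):
    """Round a timestamp down to the clock's effective resolution."""
    return (t_us // resolution_us) * resolution_us

ntp_stamps = [quantise(t, 50_000) for t in events_us]  # ~50 ms accuracy
ptp_stamps = [quantise(t, 1) for t in events_us]       # ~1 µs accuracy

print(ntp_stamps)                                  # [0, 0, 0, 0, 50000]
print(len(set(ntp_stamps)), len(set(ptp_stamps)))  # 2 vs 5 distinguishable events
```

At 50 ms resolution, four of the five events share one timestamp and their sequence can no longer be recovered from the record.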
Second, unless NTP is connected directly to a Stratum 0 time source (such as those operated by NPL, RISE or NIST), it cannot provide the unbroken chain of comparisons back to a proven time source (traceability) that allows timestamps to be shown to be correct. Timestamps are the ideal way to settle disputes, but if they cannot be relied on as reference data because that chain back to the trusted source is broken, the dispute has to be negotiated by people instead.
Precision Time Protocol
The alternative to NTP, Precision Time Protocol (PTP), addresses both of these problems: it can deliver microsecond-level accuracy at the application level in a server, and because it is derived from a trusted time source (GPS or a dedicated time feed), it can provide full traceability on all timestamps. However, until now it has been costly to install and complex to maintain, because it needs continuous checking and adjustment to ensure accuracy and traceability are being maintained. So, while accurate and traceable timing are desirable, they are not a service you ‘install and forget’. They are high maintenance and not easy to cover with an SLA.
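PTP's accuracy comes from a two-way exchange of timestamped messages defined in IEEE 1588. A minimal sketch of the core arithmetic, assuming a symmetric network path (the timestamp values below are invented for the example):

```python
# t1: master sends Sync; t2: slave receives it;
# t3: slave sends Delay_Req; t4: master receives it.
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Standard IEEE 1588 estimate, assuming symmetric path delay."""
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock error vs master
    delay = ((t2 - t1) + (t4 - t3)) / 2   # mean one-way path delay
    return offset, delay

# Illustrative nanosecond timestamps: slave clock 500 ns fast, 2000 ns path delay.
t1 = 1_000_000
t2 = t1 + 2000 + 500    # Sync arrives after the delay, stamped by the fast slave clock
t3 = t2 + 10_000
t4 = t3 - 500 + 2000    # Delay_Req arrives, stamped by the master clock
offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
print(offset, delay)  # 500.0 2000.0 -> recovers the 500 ns offset exactly
```

The continuous checking the article mentions is exactly this exchange repeated at intervals, with the slave steering its clock by the estimated offset; asymmetry in the path is one of the things that must be monitored, since it feeds directly into the offset estimate.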
What is Traceable Time as a Service?
Traceable Time as a Service (TTaaS) offers time synchronisation that is accurate to microseconds, produces fully traceable timing records and automates monitoring and maintenance. It does for accurate traceable time synchronisation what the cloud did for data centres, because it removes the need for customers to install their own timing hardware and timing feeds at every network node where they need synchronised time; customers simply subscribe to the software service, and scale up as they wish.
The leverage traceable timing offers, the way it reconciles virtual events with the physical world, will become increasingly important to data centre customers. It creates a new class of data that can be verified and trusted. Managing the quantity of cloud-based transactions, as well as ensuring the quality of that data, requires fundamental agreement on when transactions took place – accurate, traceable, verifiable time. This is not just important for managing the transactions themselves, but also for being able to track the permissions given by customers to use their personal data and to be able to respond to any questions about precisely how it was used.
TTaaS extends time traceability to these permissions, so that they can be tracked, logged and used to answer questions authoritatively. TTaaS is fully resilient against disconnection from primary time sources and Grandmaster failure by using redundancy, holdover and failover protocols.
In Hoptroff’s TTaaS set-up, three independent satellite time sources are continuously compared, each of which is individually linked to a grandmaster clock with nanosecond accuracy. Even in the event that all three satellite feeds are down simultaneously, each grandmaster clock has a built-in holdover of 1.5 µs per 24 hours: all the satellites could be down for weeks and the cloud timing feed would still be comfortably accurate to better than 100 microseconds. Time is distributed to the target data centre locations over dedicated lines provided by the company’s connectivity partners, who provide fully redundant connections to data centre facilities, creating a highly resilient timing feed.
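The holdover claim can be checked with back-of-envelope arithmetic: at a worst-case drift of 1.5 µs per day, a 100 µs error budget lasts roughly 66 days, comfortably covering an outage of several weeks.

```python
# Worst-case drift figures as stated in the article.
drift_per_day_us = 1.5   # grandmaster holdover: 1.5 µs per 24 hours
budget_us = 100.0        # accuracy still required during the outage

days_within_budget = budget_us / drift_per_day_us
print(round(days_within_budget, 1))  # 66.7 days of simultaneous satellite outage
```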
TTaaS offers a complete timing service: connection to a primary time source, synchronisation of the server clock, maintenance and record-keeping, all in one integrated package. It can serve any of the demanding new applications that customers in advertising, media and financial services require, at a lower cost of implementation per server than local timing hardware and with lower ongoing maintenance costs.
As network connections get faster and applications get more complex, accurate traceable time will become the new utility that customers will expect to be able to access in any data centre. TTaaS makes it possible for any data centre provider to offer that service, so that fragmented time is repaired, and customers achieve the data quality they require to execute their business strategies. Customers will increasingly want time as part of the service they expect from data centres and not an option they have to build for themselves.