
Top tips for delivering Applied Observability


Time series data is increasingly being recognised as a crucial component of Applied Observability, a discipline that involves taking digital footprints from sources such as logs, traces, API calls, dwell time, downloads, and file transfers and using them in a highly orchestrated and integrated way to inform decision making across an organisation.

By combining current observations with historical data, Applied Observability allows organisations to make better-informed decisions, improve performance, and respond more quickly to issues in order to achieve better outcomes in terms of quality of service, uptime, and other factors.

Cited by Gartner as a top strategic technology trend for 2023, Applied Observability is described as combining these digital footprints in a “highly orchestrated and integrated approach to enable decision making in a new way across many levels of the organisation”.

Like “big data” before it and “data analytics” today, it is a discipline whose recent mainstream recognition belies the fact that it has been practised for years. Technology, however, is now making Applied Observability easier to put into practice, and time series data is a key component of this. By capturing and processing large volumes of data from diverse sources and formats, time series technology allows organisations to analyse it and make informed decisions in real time.

This is particularly useful in industries such as finance, where Applied Observability can be used to improve trading outcomes by monitoring quote acceptance and rejection levels or tracking trade and order ratios to detect trends that may indicate manipulative trading activities. In manufacturing, time series data can be used to identify anomalies and prevent batch loss or machine downtime. And in telecommunications, Applied Observability can be used to monitor the flow and profile of streaming data sources and issue alerts when threshold levels are breached, enabling timely adjustments to be made in order to maintain the quality of service or protect the overall network.
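To make the last of those concrete, the sketch below shows threshold-based alerting over a stream of throughput readings in Python; the window size, threshold value, and alert action are purely illustrative assumptions, not a prescription for any particular platform.

```python
from collections import deque
from statistics import mean

# Illustrative settings for the sketch: average the last 60 samples and
# alert if that rolling average falls below 800 Mbps.
WINDOW = 60
THRESHOLD_MBPS = 800

recent = deque(maxlen=WINDOW)

def on_reading(timestamp, mbps):
    """Called for each incoming throughput sample from the stream."""
    recent.append(mbps)
    if len(recent) == WINDOW and mean(recent) < THRESHOLD_MBPS:
        alert(timestamp, mean(recent))

def alert(timestamp, rolling_avg):
    # In practice this might raise a ticket, page an operator, or trigger
    # an automated adjustment to protect quality of service.
    print(f"{timestamp}: rolling average {rolling_avg:.0f} Mbps below threshold")
```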

Although Applied Observability has been practised for years, not all analytics solutions can handle the huge increases in the volume, velocity, and variety of data created by modern enterprises. We see five key must-haves for Applied Observability deployments:

Optimised for time series data 

Most data today is time series based, generated by processes and machines rather than humans. Any analytics database should be optimised for its specific characteristics: append-only, fast-arriving, and time-stamped. It should be able to quickly correlate diverse data sets (as-of joins) and perform in-line calculations (such as VWAPs and TWAPs), as well as execute fast reads and provide efficient storage.
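As a hedged illustration of those two operations, the snippet below uses pandas (pandas.merge_asof for the as-of join and grouped sums for the VWAP); the trade and quote tables are invented for the example and stand in for whatever the underlying database provides natively.

```python
import pandas as pd

# Illustrative trade and quote tables; in a real deployment these would be
# streamed into, or read from, the analytics database.
trades = pd.DataFrame({
    "time":  pd.to_datetime(["2024-01-02 09:30:01.2",
                             "2024-01-02 09:30:02.5",
                             "2024-01-02 09:30:03.1"]),
    "sym":   ["ABC", "ABC", "ABC"],
    "price": [100.1, 100.3, 100.2],
    "size":  [200, 150, 300],
})
quotes = pd.DataFrame({
    "time": pd.to_datetime(["2024-01-02 09:30:01.0",
                            "2024-01-02 09:30:02.0",
                            "2024-01-02 09:30:03.0"]),
    "sym":  ["ABC", "ABC", "ABC"],
    "bid":  [100.0, 100.2, 100.1],
    "ask":  [100.2, 100.4, 100.3],
})

# As-of join: attach the most recent quote at or before each trade.
enriched = pd.merge_asof(trades.sort_values("time"),
                         quotes.sort_values("time"),
                         on="time", by="sym")

# In-line VWAP per symbol: sum(price * size) / sum(size).
enriched["notional"] = enriched["price"] * enriched["size"]
vwap = (enriched.groupby("sym")["notional"].sum()
        / enriched.groupby("sym")["size"].sum())
print(enriched)
print(vwap)
```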

Openness and connectivity 

The data landscape of most large, modern enterprises is broad. This means that any analytics engine has to interface with a wide variety of messaging protocols (e.g. Kafka, MQ, Solace) and support a range of data formats (e.g. CSV, JSON, FIX), along with IPC (interprocess communication), REST, and OpenAPI for quick, easy connectivity to multiple sources. It should also cater for reference data, such as sensor or bond IDs, that enables it to add context and meaning to streaming data sets, making it possible to combine them in advanced analytics and share them as actionable insights across the enterprise.
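As one illustration of that connectivity, the sketch below consumes JSON messages from an assumed Kafka topic called sensor-readings using the kafka-python client and enriches each reading with reference data; the topic name, broker address, and reference table are all assumptions made for the example.

```python
import json
from kafka import KafkaConsumer  # kafka-python client

# Hypothetical reference data mapping sensor IDs to context; in practice
# this might come from a relational store or a file.
SENSOR_REF = {
    "s-101": {"site": "Plant A", "unit": "bar"},
    "s-102": {"site": "Plant B", "unit": "degC"},
}

# Consume JSON-encoded readings from the assumed 'sensor-readings' topic.
consumer = KafkaConsumer(
    "sensor-readings",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    reading = message.value                    # e.g. {"sensor": "s-101", "value": 7.2}
    context = SENSOR_REF.get(reading.get("sensor"), {})
    enriched = {**reading, **context}          # join the stream with reference data
    # Downstream this could be written to the analytics database or
    # republished as an actionable insight for other consumers.
    print(enriched)
```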

Real-time and historical data 

By combining real-time data for immediacy with historical data for context, companies can make faster and better in-the-moment responses to events as they happen and eliminate the development and maintenance overhead of replicated queries and analytics on separate systems. This ability to rapidly process vast quantities of data using fewer computing resources is also well suited to machine learning initiatives, not to mention reducing total cost of ownership (TCO) and helping businesses to hit sustainability targets. 
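The sketch below illustrates the point about avoiding replicated analytics: a single, invented order-to-trade ratio function is applied unchanged to both a live in-memory frame and a historical one, so there is no second implementation to maintain. The data and function names are assumptions made for the example.

```python
import pandas as pd

def order_to_trade_ratio(df):
    """One analytic, written once, applied to any window of data."""
    counts = df["event"].value_counts()
    return counts.get("order", 0) / max(counts.get("trade", 0), 1)

# Illustrative frames: 'live' would normally be an in-memory buffer of
# today's events, 'history' a partitioned on-disk store.
live = pd.DataFrame({"event": ["order", "order", "trade"]})
history = pd.DataFrame({"event": ["order", "trade", "order", "trade"]})

# The same function serves both the real-time view and the historical
# context, with no replicated query logic on a separate system.
print("intraday:", order_to_trade_ratio(live))
print("combined:", order_to_trade_ratio(pd.concat([history, live])))
```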

Easy adoption 

Look for analytics software, increasingly but not necessarily built with microservices, that enables developers and data scientists to quickly ingest and transform datasets and publish valuable insights from them without the need to develop complex access, tracking, and location mechanisms. Complications like data tiering, ageing, archiving, and migration can take up valuable time and resources which could be better used to concentrate on extracting actionable insights. Native integration with major cloud vendors and availability as a fully managed service should also be important considerations for easy adoption. 
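As a minimal sketch of that ingest-transform-publish workflow, the example below assumes the platform takes care of tiering, ageing, and location behind the scenes; the sample data and function names are purely illustrative.

```python
import io
import pandas as pd

# Illustrative raw readings; in practice this would be a stream or a file
# whose tier, age, and location are the platform's concern, not the developer's.
RAW_CSV = io.StringIO(
    "time,sensor,value\n"
    "2024-01-01 00:05,s-101,7.1\n"
    "2024-01-01 00:20,s-101,7.4\n"
    "2024-01-01 00:10,s-102,21.0\n"
)

def ingest(source):
    """Read raw readings from wherever they happen to live."""
    return pd.read_csv(source, parse_dates=["time"])

def transform(readings):
    """Derive a per-sensor average from the raw readings."""
    return readings.groupby("sensor", as_index=False)["value"].mean()

def publish(insights):
    """Hand the derived insight to downstream consumers (illustrative)."""
    print(insights.to_string(index=False))

publish(transform(ingest(RAW_CSV)))
```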

Proven in production 

While time series databases have been around for a long time, the ever-growing volume, velocity, and variety of data, and the need to generate rapid insights and actions from it, mean that many technologies are not proven in the field. Look for software with robust use cases and clear examples of ROI. 

In summary, time series data is an essential element of Applied Observability, a discipline that is becoming increasingly important, particularly in cloud-first architectures, as organisations seek to make better-informed decisions and improve performance. By combining current observations with historical data, time series data allows organisations to respond more quickly to issues and achieve better outcomes in terms of quality of service, uptime, and overall process efficiency.

Steve Wilcockson
Data Science Lead at KX
