Paolo Platter, CTO and Co-Founder at Agile Lab, reveals why businesses risk falling behind if they fail to embed governance guardrails that turn raw data into real competitive advantage.
‘Data is the new oil’ is a well-worn expression, but one that still captures the profound commercial impact data has had in all four corners of the world. Every minute, more is created than ever before, amounting to vast wells of information that can deliver lucrative insights when handled properly. Extracting the most from this precious asset is vital if businesses are to prosper.
However, a survey conducted by Agile Lab and Witboost revealed that UK businesses are a long way from managing their data either effectively or efficiently. It found that 70% of UK businesses are unable to leverage data for strategic decision-making because of poor data management practices. This doesn’t bode well for success, perhaps even survival, in the tougher economic times expected ahead.
Even more concerning, nearly half (43.7%) felt they couldn’t derive any, or enough, value from the volumes of data they hold, compared with just 22.4% who rated their organisation as very good at leveraging insights from data. That’s a staggering number needing to play catch-up with the fifth that claim they already have their house in order. Those that delay taking action risk being left behind by more dynamic competitors.
Avoiding the data tools trap
It’s tempting to assume this widespread inertia is down to a lack of appropriate data management tools. Generally, that’s not the case. Many organisations have invested significantly in data management technologies that have facilitated the collection of large stores of data, often creating further duplication in the process.
Instead of resolving data quality issues, adding more tools has often exacerbated the problem by generating additional copies of data and yet more repositories. As a result, unwieldy data silos have continued to expand, leaving valuable insights untapped and under-utilised. Determining the integrity of data languishing within these silos has become increasingly complex and time-consuming, stalling projects and blocking innovation.
According to the research, over half of respondents cited poor data quality as the biggest impediment to better data management, followed by a lack of trust in its integrity. These issues were perceived as degrading the quality of decision-making, making it far slower and less strategic. Efforts to improve the situation were hampered by multiple copies of datasets distributed across the enterprise, making it virtually impossible to identify who owned the master version or to locate the original datasets.
Relying on trustworthy data
In the quest to become a truly data-driven enterprise, IT decision-makers are realising that acquiring yet another set of data management tools isn’t going to provide the answer. Instead, enforcing data standards is the priority, with the objective of aligning data management practices with strategic goals, compliance, and security requirements. But this presents enterprises with a multi-faceted challenge: managing vast data volumes while ensuring their governance framework is agile, accurate, and efficient.
Traditionally, governance has relied on manual checks, which increase time to production, often require costly fixes, and result in an endless cycle of incomplete projects. What’s really needed is a solution that initiates governance right at the start of production, with automated policies acting as guardrails embedded in the code. Because these guardrails cannot be bypassed, responsibility for data quality can shift safely from the IT domain to data owners and producers. By taking such an approach, organisations can reduce operational overhead, accelerate innovation and efficiency, and encourage agility.
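To make the idea concrete, here is a minimal sketch in Python of what a guardrail evaluated before deployment might look like. The descriptor fields, policy rules, and function names are hypothetical assumptions for illustration, not the API of Witboost or any specific platform.

```python
# Minimal, hypothetical sketch of a "guardrail" policy evaluated at deploy
# time. Names and rules are illustrative only; a real computational
# governance platform would supply its own policy engine and schema.
from dataclasses import dataclass, field

@dataclass
class DataProductDescriptor:
    name: str
    owner: str                              # accountable data owner, not IT
    domain: str
    pii_fields: list = field(default_factory=list)
    quality_checks: dict = field(default_factory=dict)

def evaluate_guardrails(dp: DataProductDescriptor) -> list[str]:
    """Return a list of policy violations; an empty list means 'deployable'."""
    violations = []
    if not dp.owner:
        violations.append("Every data product must declare an owner.")
    if "completeness" not in dp.quality_checks:
        violations.append("A completeness check is mandatory.")
    if dp.pii_fields and dp.quality_checks.get("masking") != "enabled":
        violations.append("PII fields require masking before publication.")
    return violations

# Acting as a CI-style gate: deployment is blocked until every violation is fixed.
descriptor = DataProductDescriptor(
    name="sales-orders", owner="", domain="sales",
    pii_fields=["customer_email"],
    quality_checks={"completeness": ">=0.99"},
)
for violation in evaluate_guardrails(descriptor):
    print("BLOCKED:", violation)
```

Because the check runs automatically before anything reaches production, the feedback loop is immediate: the owner fixes the descriptor rather than IT discovering the problem months later.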
Guardrails in operation
However, putting well-founded governance guardrails in place requires meticulous planning. A computational governance platform helps to ensure that planning follows a standardised process, setting out rules for data quality, architecture, compliance, and security.
Within these overarching standards, each department becomes responsible for curating its own data into defined products, so that users enterprise-wide can easily access and utilise these ‘data products’, subject to permissions. The IT team manages the centralised self-service platform, maintaining security and capacity. As a result, users are no longer dependent on IT staff to find, analyse, and export data insights.
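As an illustration of how permission-dependent, self-service access to data products might work, the sketch below models a simple registry. The class, grant model, and product names are assumptions made for this example, not any vendor’s actual implementation.

```python
# Hypothetical sketch of permission-aware access to published data products.
class DataProductRegistry:
    def __init__(self):
        self._products = {}   # product name -> location of the curated data
        self._grants = {}     # product name -> roles allowed to read it

    def publish(self, name: str, location: str, allowed_roles: set[str]):
        """A domain team publishes its curated data product with access rules."""
        self._products[name] = location
        self._grants[name] = allowed_roles

    def resolve(self, name: str, role: str) -> str:
        """Consumers discover data themselves; access depends on permissions."""
        if role not in self._grants.get(name, set()):
            raise PermissionError(f"Role '{role}' may not read '{name}'.")
        return self._products[name]

registry = DataProductRegistry()
registry.publish("finance.invoices", "s3://warehouse/finance/invoices",
                 allowed_roles={"finance-analyst", "auditor"})
print(registry.resolve("finance.invoices", "auditor"))   # access granted
```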
Importantly, the governance platform takes care of enforcement: every project must adhere to policies agreed in advance by the compliance, security, and architecture teams. Although this may sound draconian, automated templates actually speed up project set-up for data practitioners across multiple tools and technologies. Users are relieved of the burden of validating and re-checking data, and can focus their efforts on delivering outcomes and innovative solutions.
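A hedged sketch of how such an automated template might bake agreed policies into project set-up follows; the defaults and field names are illustrative assumptions, standing in for whatever a real platform would generate for its supported tools.

```python
# Hypothetical sketch of an automated, policy-compliant project template.
import json

# Defaults agreed in advance by compliance, security, and architecture teams.
APPROVED_DEFAULTS = {
    "quality_checks": {"completeness": ">=0.99", "freshness_hours": 24},
    "security": {"encryption": "at-rest", "masking": "enabled"},
    "architecture": {"storage_tier": "standard", "format": "parquet"},
}

def scaffold_project(name: str, owner: str, domain: str) -> dict:
    """Create a new project descriptor pre-filled with the agreed policies,
    so set-up is fast and compliance is built in rather than re-checked."""
    return {"name": name, "owner": owner, "domain": domain, **APPROVED_DEFAULTS}

print(json.dumps(scaffold_project("churn-model", "jane.doe", "marketing"),
                 indent=2))
```

The practitioner supplies only what is specific to the project; everything the governance teams care about arrives pre-approved in the scaffold.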
In essence, computational governance can help turn piles of under-utilised data into strategic assets that drive success. It improves decision-making across the enterprise by simplifying access to high-quality data and ensuring consistency and accuracy.
Businesses can confidently act on valuable insights from vast datasets, enabling them to take advantage of new trends, optimise operations, and innovate quickly. Those that take the initiative to bridge the yawning gap between collecting data and turning it into valuable insights will rapidly gain advantage over organisations that fail to release the potential locked in their data silos.