Paolo Platter, CTO & Co-founder at Agile Lab, sheds some light on how to meet the AI compliance challenge with automated data governance
With the EU AI Act now in force, organisations have a range of important rules to address if they build or use AI systems. At a high level, the Act is designed to ensure AI systems are ‘safe, transparent, traceable and non-discriminatory’, and in this context, its focus is on protecting the rights of EU citizens. As was the case with GDPR, the EU AI Act applies to organisations that operate within the European Union, regardless of whether they are based inside or outside of it.
This forms part of a rapidly developing regional and global legislative landscape, with governments racing to regulate the use of AI. From the UK, US, and Canada to China, Japan, and Australia, each country is approaching AI regulation with varying degrees of rigour.
Looking at the UK specifically, the previous UK government adopted a ‘pro-innovation’ approach, emphasising the outcomes AI might create in specific applications rather than regulating the technology itself. Existing sector-specific regulators will be responsible for applying laws and issuing guidance, but how this evolves under the new Labour administration remains to be seen.
Inevitably, the overall regulatory environment will become even more complex as new rules are introduced and existing ones updated and refined – all of which present a challenge to businesses focused on the use of AI. Commenting earlier this year, Deloitte pointed out that, “Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.”
One of the obvious questions to ask is: what are the risks of non-compliance? Breaching the EU AI Act could result in fines of up to 35 million euros or 7% of worldwide annual turnover, depending on the circumstances – a higher ceiling than GDPR's. If GDPR enforcement is anything to go by (and many believe it has been under-enforced), the collective bill could be enormous: cumulative GDPR fines surpassed €5.3 billion this year.
The role of computational governance
With the pressure on to ensure compliance, organisations everywhere have some important decisions to make about how they approach these increasingly complex challenges.
One of the most important areas is data lifecycle management, a process that establishes internal rules for collecting, storing, and handling data to ensure it remains accurate, complete, and secure. Given that data is the fuel powering advanced AI technologies, getting this right not only provides a firm basis for ensuring AI systems are fit for purpose, but also helps minimise the risk of a subsequent regulatory breach.
In practical terms, this should be based on organisational governance rules that require data owners to take responsibility for maintaining the integrity of the data they generate and manage. At the same time, data consumers should be given the appropriate level of permission to search and retrieve the data they need to build AI services and products. Crucially, this should be achieved without constraining creativity or agile development.
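Rules like these are often made machine-readable as data contracts, so that ownership and access can be enforced rather than merely documented. The sketch below is a minimal, hypothetical illustration in Python – the DataContract structure, its field names, and the grants_access check are illustrative assumptions, not any particular platform's API.

```python
from dataclasses import dataclass, field

# Hypothetical data contract: the structure and field names are
# illustrative assumptions, not a specific platform's schema.
@dataclass
class DataContract:
    dataset: str
    owner: str                      # accountable data owner
    classification: str             # e.g. "public", "internal", "pii"
    allowed_roles: set[str] = field(default_factory=set)

    def grants_access(self, role: str) -> bool:
        """Data consumers may only read datasets their role is cleared for."""
        return role in self.allowed_roles

# Example: a customer dataset owned by the CRM team,
# readable by roles that build AI services and products.
contract = DataContract(
    dataset="crm.customers",
    owner="crm-team@example.com",
    classification="pii",
    allowed_roles={"ml-engineer", "data-analyst"},
)

assert contract.grants_access("ml-engineer")
assert not contract.grants_access("marketing-intern")
```

Expressed this way, ownership is explicit and access decisions can be automated, which keeps consumers productive without weakening accountability.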
Striking this balance can be extremely challenging and, as a result, organisations are turning to computational governance to provide a structured, automated framework that enforces data standards, compliance, and quality across complex data ecosystems. At the same time, it integrates regulatory requirements and internal policies directly into data workflows, providing automated ‘guardrails’ that ensure consistency without the need for extensive manual oversight or infrastructure changes.
This spans everything from data quality, integrity, and architecture to compliance and security, and helps ensure every AI-related project adheres to relevant laws and regulations. As a result, AI projects cannot be released into production unless all predefined policies are satisfied: the platform simply blocks non-compliant components from deployment.
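To make the guardrail idea concrete, such a gate can be thought of as a set of policy checks evaluated automatically before release. The sketch below is a minimal illustration, assuming hypothetical policy names and a simple component descriptor; real platforms implement far richer rule engines.

```python
# Minimal sketch of a pre-deployment policy gate. The policies and the
# component descriptor are hypothetical illustrations of the pattern,
# not a specific product's API.

def has_owner(component: dict) -> bool:
    return bool(component.get("owner"))

def pii_is_masked(component: dict) -> bool:
    # Any field tagged as PII must declare a masking strategy.
    return all(f.get("masking") for f in component.get("fields", [])
               if f.get("pii"))

def quality_checks_defined(component: dict) -> bool:
    return len(component.get("quality_checks", [])) > 0

POLICIES = [has_owner, pii_is_masked, quality_checks_defined]

def release_gate(component: dict) -> None:
    """Block the release unless every predefined policy passes."""
    failures = [p.__name__ for p in POLICIES if not p(component)]
    if failures:
        raise RuntimeError(f"Release blocked, failed policies: {failures}")
    print(f"{component['name']} cleared for production")

release_gate({
    "name": "churn-model-features",
    "owner": "data-platform-team",
    "fields": [{"name": "email", "pii": True, "masking": "hash"}],
    "quality_checks": ["not_null(email)"],
})
```

Because every component passes through the same gate, compliance becomes a property of the workflow itself rather than a matter of manual review.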
For example, consider an organisation in the finance sector that has grown through acquisition. It is likely to have complex, siloed data sources and processes that need to be brought together for various reasons, including the development of AI applications. Computational governance could help such an organisation streamline its disparate data projects and enable its data practitioners to govern data through a unified platform. In doing so, it is in a much better position to meet compliance requirements and minimise the risk of a potentially costly breach.
As AI is integrated more deeply into organisational technologies and processes, being able to establish automated data workflow guardrails will grow in importance. Without the capabilities that computational governance offers, there is a very real risk of regulatory breach accompanied by headlines focusing on a lack of effective control. In contrast, organisations that establish a strong foundation now will be ideally placed to deliver on the potential that advanced AI offers.