Darren Thomson, Field CTO EMEAI at Commvault, warns that Britain’s hands-off stance could leave businesses exposed to poisoned data and supply-chain sabotage just as a $500 billion AI surge reshapes the global playing field.
The global AI race has reached new heights with the US Government’s announcement of a $500 billion AI initiative that includes the landmark Project Stargate partnership with OpenAI, Oracle, and SoftBank. This development, coupled with the UK’s recent AI Action Plan, marks a pivotal moment in the international AI landscape.
While both nations demonstrate clear ambitions for AI leadership, a concerning gap is emerging between aggressive growth agendas and the regulatory frameworks needed to ensure secure, resilient AI development.
This divergence creates a singular challenge for organisations that build and implement AI systems – one that could expose them to business risks and hamper their ability to innovate with confidence.
The AI policy disconnect – navigating an increasingly fragmented regulatory landscape
The contrast between regulatory approaches across Europe and the UK is stark. While the EU’s comprehensive AI Act sets out unequivocal obligations for AI development and deployment, including mandatory risk assessments and significant fines for non-compliance, the UK Government is adopting a much more nuanced, lighter-touch approach to AI governance.
This regulatory divergence, combined with the US Government’s recent withdrawal of key AI safety requirements, creates a complex landscape for organisations implementing AI systems – a situation that is particularly challenging given the evolving nature of AI-specific cyber threats.
British businesses now face the unique challenge of deploying AI solutions globally without a clear domestic governance framework. While the UK Government’s AI Action Plan admirably prioritises stimulating innovation and growth, there is a risk that its light-touch approach could lead to firms failing to implement adequate safeguards against harmful AI risks – something that would leave UK organisations exposed to emerging cyber threats and potentially undermine public trust in AI systems.
From a security perspective, two threats in particular represent a growing challenge for UK organisations: data poisoning attacks and AI supply chain vulnerabilities.
Data model poisoning
Data poisoning, where malicious attackers deliberately manipulate or contaminate data to compromise the performance or outcomes of AI and machine learning models, represents a significant and growing threat in today’s data-driven world. The aim of the game is to undermine an AI system’s integrity and dependability by introducing biases, creating vulnerabilities, or disrupting the retraining of systems used in cybersecurity, fraud detection and medical diagnostics.
Difficult to detect, data poisoning can take many different forms: inserting malicious code that modifies the decisions made by AI models, for example, or adding errors that distort algorithmic outputs. The motivations and goals behind these attacks are equally varied. Attackers may engage in imperceptible tampering with the aim of compromising an organisation over time, or they may compromise AI systems so that they reveal users’ sensitive personal data, directly or indirectly. Politically motivated attacks could also promote biases and influence attitudes.
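To make the mechanics concrete, the minimal Python sketch below shows one common form of the attack – label flipping, where an attacker silently corrupts a fraction of the training labels. The use of scikit-learn and synthetic data is purely illustrative; it does not reflect any specific real-world incident.

```python
# Minimal illustration of a label-flipping data poisoning attack.
# scikit-learn and synthetic data are used purely for demonstration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker silently flips 30% of the training labels.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flipped = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flipped] = 1 - poisoned[flipped]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

# Compare how the tampered training set degrades held-out accuracy.
print("clean accuracy:   ", round(clean_model.score(X_test, y_test), 3))
print("poisoned accuracy:", round(poisoned_model.score(X_test, y_test), 3))
```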
To combat sophisticated data poisoning attacks, firms will need robust data collection, validation and anomaly detection frameworks, along with appropriate safeguards to prevent the inadvertent introduction of poisoned data from infected sources when sharing data sets with third parties.
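In miniature, such a framework might place an anomaly-detection gate in front of ingestion so that statistically unusual records are quarantined for review before they ever reach a model. The sketch below uses scikit-learn’s IsolationForest as one possible detector; the 5% contamination threshold and synthetic data are illustrative assumptions, not recommendations.

```python
# Sketch of an anomaly-detection gate for incoming training data.
# Records that look unusual relative to a trusted reference set are
# quarantined for human review instead of being ingested. The
# threshold is an illustrative assumption, not a recommended value.
import numpy as np
from sklearn.ensemble import IsolationForest

def screen_incoming(trusted: np.ndarray, incoming: np.ndarray,
                    contamination: float = 0.05):
    """Return (accepted, quarantined) splits of the incoming batch."""
    detector = IsolationForest(contamination=contamination, random_state=0)
    detector.fit(trusted)                  # learn what "normal" looks like
    verdicts = detector.predict(incoming)  # +1 = inlier, -1 = outlier
    return incoming[verdicts == 1], incoming[verdicts == -1]

# Example: trusted historical data vs. a new third-party batch
# containing five clearly anomalous rows.
rng = np.random.default_rng(0)
trusted = rng.normal(0, 1, size=(1000, 8))
batch = np.vstack([rng.normal(0, 1, size=(95, 8)),
                   rng.normal(6, 1, size=(5, 8))])

accepted, quarantined = screen_incoming(trusted, batch)
print(f"accepted {len(accepted)} rows, quarantined {len(quarantined)}")
```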
Supply chain data security
The UK Government has proposed creating a National Data Library to support AI development, extract new value from public data assets, and make private data work for the public good.
How these data sets are assembled and protected, however, will be critical for guaranteeing their integrity in years to come. This is especially important when they are integrated into the AI models utilised by businesses, public sector services, and the wider supply chain.
For all its ambitious scope and scale, the National Data Library announcement comments on security only in the vaguest of terms and provides limited detail on the formal standards that will govern data quality and provenance. This is concerning, as AI data supply chains will be a top target for attackers intent on injecting malicious data and vulnerabilities into AI models.
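One basic provenance control that any formal standard would likely mandate is verifying cryptographic checksums before a dataset is ingested. The sketch below, using Python’s standard hashlib, is a hypothetical illustration: the manifest format, file names and digest are placeholders, not a published National Data Library specification, and real supply chains would pair checksums with signed manifests.

```python
# Minimal sketch of dataset integrity verification before ingestion.
# The manifest format and paths are hypothetical placeholders.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 to avoid loading it all at once."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest: dict[str, str], data_dir: Path) -> list[str]:
    """Return the names of files whose hashes do not match the manifest."""
    return [name for name, expected in manifest.items()
            if sha256_of(data_dir / name) != expected]

# Hypothetical manifest shipped by the data provider (placeholder digest).
manifest = {
    "training_set.csv":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}
data_dir = Path("datasets/release-2025-01")  # hypothetical location
if data_dir.exists():
    bad = verify_manifest(manifest, data_dir)
    if bad:
        raise RuntimeError(f"Integrity check failed, refusing to ingest: {bad}")
```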
To ensure resilience and minimise the likelihood of rogue AI entering the supply chain, organisations will need to prioritise the applications that matter most and ensure they have strong end-to-end defences in place. A fully tested disaster recovery plan will also be essential for ensuring that critical backups can be restored quickly in the event of a compromise.
Moving forward: adopt a balanced approach
As AI models become increasingly integrated into organisational infrastructures, the scope for security breaches and abuse looks set to increase substantially. Building resilience into AI systems and implementing protections against both traditional and AI-specific cyber threats will be mission-critical for business leaders who want to innovate and reap the benefits of AI without compromising security.
The current patchwork of AI regulations and policies around the world means that a coordinated global framework for AI safety and security is unlikely to appear anytime soon. To successfully address the risks and opportunities of AI, UK organisations will need to conduct thorough risk assessments, implement strong data privacy and protection measures, and ensure they are appropriately equipped to mitigate AI-data risks.