AI vs. legislation: 2024’s biggest battle

With the advent of AI, the landscape of data protection and regulation is rapidly evolving. Here, experts weigh in on the current challenges, and how industry leaders can balance innovation with safety.

There was a time when GDPR (the General Data Protection Regulation) was heralded as the gold standard for data protection. But with recent technological advancements, notably AI, coming onto the scene, data protection no longer looks like it did in 2018.

But it’s not just data protection regulations that are in a questionable state. Across all sectors and industries, organisations are contending with a mountain of legislation – much of which does not go far enough, or has waning relevance as lawmakers struggle to keep up with the rapidly changing digital landscape.

In short, the UK’s regulatory landscape is a mess. Paolo Platter, CTO at Agile Lab & Product Manager on Witboost, elucidates: “In today’s digital age, there’s certainly no shortage of data to draw on, with IoT and AI creating volumes at an unprecedented rate. Businesses also recognise the inherent value that their data holds for insights like customer patterns, or to harness for future AI tools. However, there is mounting frustration at how difficult it is to harness these insights while also complying with the ever-increasing number of regulations like the EU’s AI Act, Data Act, and DORA, which all seek to standardise how businesses manage data”.

Balancing innovation with safety

When a new technology bursts onto the scene, a common question is how much innovation is the right amount – how much is too much? This may sound counterintuitive, but unchecked innovation has the potential to cause serious harm, which is why organisations typically weigh it against business continuity, risk, and practicalities. With AI, the biggest concern is the balance between facilitating innovation and ensuring that businesses and individuals stay safe, particularly regarding their data.

In the UK, we seem to be on the right track. Iju Raj, Executive Vice President R&D at AVEVA, highlights that, “in the UK, the government’s recent efforts in establishing a pro-innovation framework for AI are welcome, as it balances assessment and monitors the risks posed by AI with unlocking the transformative benefits of this technology. This framework envisages an agile and iterative approach for AI regulation to match the pace of change in the underlying technologies themselves. This in turn requires software industry players to actively engage with regulators, standards agencies, customers and other stakeholders in order to participate in this conversation and ensure companies like AVEVA strike the right balance, and can advance responsibly.

“For the field of AI to develop in the UK, we need a focus on both innovation and safety,” he adds.

Mark Skelton, Chief Technology and Strategy Officer at Node4, encourages technology companies to lead the way in finding this balance: “Technology companies and individual businesses should be stepping up and enforcing their own guardrails to control the use of AI. This will enable the industry to foster investment and innovation in AI with the confidence that they are doing so in a safe, respectful and moral way. The cat is out of the bag and there is no stopping AI in its tracks now. But the sooner we decide how to use it safely, the sooner we can reap the benefits and plan for a future with AI on our side.”

What about our privacy?

Although the UK’s attempts to create a pro-innovation framework appear to be off to a good start, one place the country (and the rest of the world) is lacking is around privacy concerns. For example, “European data protection legislation states if an organisation wishes to make a decision about a person, they must be able to demonstrate how that decision was made; however, with AI it is not possible to query the LLM and ask why it made a particular decision,” explains Richard Starnes, CISO at Six Degrees. “It is continuously learning, but it doesn’t (and likely doesn’t have the capabilities to) keep track of where it’s learnt from and therefore how it came to that decision”. This is a serious concern when it comes to data protection.

As such, Chris Denbigh-White, CSO at Next DLP, explains that there are several things organisations should do to ensure they are staying as safe as possible: “As with any other software-as-a-service (SaaS) tool, organisations need to act thoughtfully through a framework whereby they understand the data flows and risks. There’s no reason AI can’t be compliant with GDPR, but companies need to take the time to get it right. This means balancing deployment and legality. Rushing to get a shiny AI product out in three weeks is of no value if things aren’t done properly and there’s a huge consumer backlash.

“As sought after as AI is, they’re not going to rewrite the GDPR rules for it. Organisations looking to compete with compelling AI tools need to take the time to tailor their product to meet existing regulations. Only by understanding the data flows, parameters and risks of the technology, can they ensure compliance”.

It’s all about frameworks

Even if governments may be slow to implement effective legislation, organisations still have a lot of autonomy to go above and beyond to ensure they are doing their best for employees and customers alike. One key tool they have to aid them with this is frameworks. Terry Storrar, Managing Director at Leaseweb UK, explains, “Technology has always outpaced regulation. However, the rate of change in recent years – particularly with the explosion of AI – has underscored the challenge legislatures face in identifying and mitigating the risks of evolving technology. For businesses, legal compliance is table stakes and increasingly we are seeing organisations go much further for their customers by focusing on rigorous independent standards that fill the gaps in regulation… The business climate is becoming increasingly more competitive, so to stay one step ahead companies need to continue going above and beyond. Modern businesses that put their customers first need to go beyond a tick-box culture of compliance and instead drive the industry where it needs to go by setting themselves the highest standards.”

Chris Rogers, Senior Technology Evangelist at Zerto, a Hewlett Packard Enterprise Company, adds: “In this context, frameworks such as Network and Information Systems Directive (NIS2) can prove invaluable. The NIS2 framework offers guidance based on regulatory content from around the world to provide businesses with the most effective best practice advice.

“As a framework, adherence is not a legal requirement, so businesses can pick and choose elements of it that work best for their organisation and budget,” he continues. “However, organisations that do adhere to frameworks like NIS2 are likely to align closely with regulatory requirements, as these frameworks encapsulate the core principles of the laws they are based on. By following these guidelines, organisations can better ensure compliance and reduce the risk of regulatory issues, whilst still securely protecting data, even as the AI landscape continues to evolve rapidly.

“While regulations naturally cannot keep pace with the rapid advancements in AI, frameworks play a crucial role in assisting with data protection to the best extent possible and should absolutely be implemented as part of an organisation’s cybersecurity measures.”

Matt Hillary, CISO at Drata, concludes: “As we enter an era of rapid innovation with the advancements of and incorporation of AI in real-time, it is crucial that technology companies continue to iterate to bake-in the support for regulation. We all need to embed privacy in the design aspects of our development lifecycle while we continue the rapid advancements in technology, particularly in the realm of data collection and processing.”
