Is the ICO’s hands-off approach to AI an abdication of its duty to protect consumers?

Artificial intelligence (AI) has rapidly emerged as a transformative technology with significant implications for privacy, copyright, and the ethical use of data.

The UK’s privacy watchdog, the Information Commissioner’s Office (ICO), has warned businesses that it will take action where privacy risks have not been tackled before generative AI is introduced. In spite of this muscular statement of intent from the ICO, it is clear that the current regulatory framework in the UK does not yet match the scale of the challenge posed by AI.

Although the development of this technology brings exciting opportunities for consumers, there are also concerns surrounding misinformation and discrimination, amongst other potential risks. As these systems become increasingly sophisticated, it is essential to establish legal guardrails to ensure responsible and fair practices. Earlier this year, the ICO updated its guidance on AI and data protection to include a focus on fairness considerations and transparency principles as they apply to AI.

Whilst the ICO has warned businesses to expect tougher compliance checks, Big Tech companies remain notorious for violating consumer privacy. This is most strongly evident in their use of AI to sharpen the targeting of consumers with ever more bespoke ad content, a process underpinned by their relentless harvesting of customer data. Although the ICO has condemned this practice in the past, there has been little enforcement action in this area by the watchdog, and companies are seemingly evading penalties for their harmful practices.

Adequate protection?

Within the last few months, the Irish Data Protection Commission (“DPC”) published its decision concerning the transfer of Facebook users’ data from the EU to the United States by Meta Platforms Ireland Limited (“Meta”). The decision saw Meta fined a hefty €1.2 billion and ordered to cease processing EU Facebook users’ personal data in the US. The ICO appears unlikely to take any similar enforcement action, underlining the regulator’s unwillingness to prioritise such investigations.

This calls into question whether British consumers are adequately protected when the risk of action being taken against organisations is seemingly low. Similarly, the civil courts have so far provided little protection for consumers, and we have seen the failure of several high-profile actions against Big Tech for wholesale breaches of data protection laws.

The UK GDPR stipulates that data subjects have the right not to be subject to decisions producing legal effects based solely on automated processing without appropriate human oversight. In addition to placing limitations on automated individual decision-making, the UK GDPR also mandates that individuals are provided with specific details about the processing activities and that measures are taken to prevent errors, bias, and discrimination. Whilst this provides a useful outline for addressing data protection concerns related to algorithmic systems, there is currently no explicit UK regulation of the technical aspects of algorithmic design and implementation.

‘Pro-innovation approach’

Rather than exercising caution, the UK has adopted what it terms a ‘pro-innovation approach’ to policing AI. The UK AI whitepaper is based on principles such as transparency, accountability, and fairness; however, it sets out no concrete plans for regulatory control and states that there will be no statutory regulation of AI in the near future.

Compare and contrast this with the European Union, which has opted for a much stricter approach. On 14 June, the European Parliament voted to approve the draft Artificial Intelligence Act (“AI Act”), establishing guidelines for AI usage. The legislation takes a risk-based approach, implementing a tiered system of regulatory obligations for specific applications.

For example, the AI Act proposes to explicitly prohibit some uses of AI where the risk is deemed ‘unacceptable’; prohibited practices include social scoring and ‘real-time’ remote biometric identification systems. Applications categorised as high risk, such as those relating to education, employment, and welfare, will need to undergo a conformity assessment and meet numerous additional requirements. Limited and minimal risk applications are also subject to certain obligations, including the labelling of AI-generated content.

Whilst the UK and EU clearly hold diverging views on how best to regulate AI, there has undoubtedly been significant progress in the ongoing mission to ensure the ethical handling of personal data and of the AI systems that process and act upon it. Some consider the UK’s pro-innovation approach a positive step for the AI landscape. Yet with the ICO favouring a hands-off strategy, the task of policing AI and protecting consumers in the UK now falls to novel civil lawsuits as they attempt to rein in this disruptive and revolutionary technology.

Lucy Burrows
Associate in the Data and Privacy Department at Keller Postman UK
