
The AI Safety Summit: Making history or simply talking shop?


Prasad Prabhakaran, Generative AI Practice Lead at esynergy, shares his takeaways from the recent AI Safety Summit, and how we can surf the wave of AI instead of drowning in it.

Elon Musk labelled AI “the most disruptive force in history” at the highly-anticipated AI Safety Summit at Bletchley Park last month. Attending the first ever summit of this kind, while the world waited for its outcome with bated breath, certainly felt like witnessing the beginning of an entirely new chapter unfurl in the human story.

On the surface, the disruptiveness of AI ironically generated a strong sense of unity across the different political, diplomatic and corporate parties in attendance. The most fundamental question that everyone – from senior government officials to private sector tech leaders – had about this ‘new’ technology was in regard to its regulation. Should it be regulated? Can it be regulated? What would such regulation look like?

Although bound by the collective aim of reaching a consensus on AI regulation, the motivations of the different groups for doing so seemed to vary. At this nascent stage, some might argue that differing motives do not matter – what matters is agreeing on some form of regulation and implementing it as quickly as possible. Yet this risks laying faulty foundations that expose fundamental cracks later on, such as one stakeholder group blocking particular regulations, or certain concerns being disregarded because a sector was not represented.

Governing the unknown

For political representatives, the predominant motivation for regulation tends to be national security – an understandable priority, given the threats AI poses in the form of advanced cyberattacks, autonomous weapons, information warfare and surveillance.

Delegates from the 28 attending nations grappled with what AI safety might mean across their different countries and cultures, culminating in the signing of the Bletchley Declaration. Recognised by many as a major diplomatic achievement, the agreement’s aim of tackling the risks of frontier AI models is certainly an admirable one, as are the UK’s efforts to build the first ‘testing facility’, which will independently test the models – including their parameters and the data used – for safe usage before their release.

Unlike other technologies, which have historically had much longer lead times for study before reaching the general population, generative AI has been placed in the hands of the public without any of the usual background knowledge or controls. Such pre-deployment testing is, therefore, a vital step in safeguarding against the risks that these machine-learning models pose.

In the hands of the giants

On the surface, the tech giants’ desire for regulation seems to broadly align with that of the various governments. For some, this willingness of the giants to participate in discussions about regulation may come as a surprise. Do regulations and legal limitations not risk stifling innovation and growth potential?

Potentially. However, the likes of OpenAI and Google seem to have a different primary concern: the open-source question. Open-source machine-learning models cannot be tracked in the same way as closed-source ones, as the user does not need to create an account to access the software, making it much more difficult to trace activity back to a specific individual in the event of misuse. Whilst this is undoubtedly an important factor, open-source models can also turbo-charge the speed and potential of innovation by enabling collaboration and knowledge-sharing. For the private sector tech giants, this poses a risk to their position as top dog, so it is unsurprising that their desire for regulation seems to be motivated by a need for exclusivity.

The forgotten majority

Teachers, shop assistants, administrators – those with ‘everyday’ jobs – were largely left out of the summit. Ironically, this larger section of society will likely feel the impact of AI on their day-to-day lives the most: advanced machine-learning will mean administrative tasks are increasingly automated, school lessons are supplemented (and perhaps one day replaced) by chatbots, and shelf-stacking in shops is done by robots. As Musk put it, “AI will put an end to work”. For some, this may be an exciting prospect, but for many, the possibility of job losses in an already-fraught labour market, amid an ongoing cost-of-living crisis, is a terrifying and depressing one.

National security, innovation potential and the debate between open and closed source models are all vital topics for discussion. However, the majority of the population, who will be affected by all of these things, cannot be forgotten. Representatives from industries which are not directly involved in the building of these technological products and solutions must also be invited to the table to discuss AI safety and regulation – so that their concerns can be addressed and their voices heard.  

To achieve truly robust regulation, therefore, diplomatic collaboration must take place across all stakeholder groups – including the general public. With the summit set to become an annual event, and a smaller interim gathering planned for the first half of 2024, there will be plenty of opportunities to expand and diversify the attendees. Future summits will present the chance to build on the learnings of those prior, and to ensure that earlier commitments come to fruition.

However, if we do not listen to those whose daily lives will be most drastically changed by the technology, and acknowledge the differing motivations at play, discussions around the safe use of AI are always destined to be a talking shop.

The year ahead

Large organisations have been looking to events like these and international regulation to help guide them on their path to an AI-enabled future. Whilst many of the leaders I speak to are still hesitant about applying these technologies on a large scale in their business, I have seen a widespread agreement that now is the time for further experimentation, in a controlled way.

By identifying very specific business use cases and taking the first steps, we will be able to make sure that the output of AI is reliable. From there, trust in AI will grow, opening up further opportunities that will see these technologies deliver on their lofty ambitions.

Prasad Prabhakaran
Generative AI Practice Lead at esynergy

