
Europe’s push for private AI


Fabrizio Garrone, Enterprise Solutions Director at Aruba S.p.A, explores the rise of private AI and its role in the European landscape.

AI development and adoption in Europe have increased dramatically in recent years. This year alone, the EU announced a funding boost of over €176 million for AI projects across the continent. This marked an important milestone in Europe’s ‘Digital Decade’, an initiative introduced by the European Commission (EC) that aims to see Europe achieve a prosperous, human-centred digital future by 2030.

So, what are the main reasons behind the explosion of AI development in recent years? With AI offering businesses increased efficiency and productivity, it is no surprise that adoption rates have been so high. But this has only been possible thanks to two crucial factors. The first is data availability: a massive amount of digital data has been made available across Europe in recent years. Data fuels AI algorithms’ ability to learn and improve, so the more that is available, the more capable AI can become. The second factor is processing power. Widespread investment in computing infrastructure, particularly the hardware that underpins machine learning, has accelerated how quickly this data can be processed.

However, a key tension in the development and deployment of AI in Europe revolves around the need for data to train algorithms, which conflicts with strict data privacy laws and the desire for digital sovereignty. Let’s examine these tensions and how private AI could help resolve them.

Data privacy and AI in Europe

For businesses adopting AI solutions, there is a balancing act between increased efficiency and data privacy. It’s crucial to consider where the data used to train AI models has come from and which privacy laws it may fall under. Europe has some of the strictest data laws in the world. The General Data Protection Regulation (GDPR) in the EU, for example, dictates how businesses handle personal information, with harsh penalties for those found to be non-compliant. As a result, there are significant limits on the kind of data businesses can use to train their AI algorithms. This is where private AI comes in.

Introducing private AI

Private AI allows AI models to be trained and run on-premises using a company’s own data, providing an array of benefits for European businesses. Systems run in a private environment, meaning company data is never shared with external parties. Businesses with enough data to train their own AI models can, therefore, enjoy the efficiency and productivity gains of AI without facing data privacy and compliance issues.
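To make the pattern concrete, here is a minimal, illustrative sketch, not a description of Aruba’s own stack: an open-weight language model whose files already sit on a company’s servers is loaded and queried entirely on-premises using the open-source Hugging Face transformers library, so neither the model nor the prompt data leaves the local environment. The model directory and prompt are hypothetical.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical path to open-weight model files already stored on the company's own servers
MODEL_DIR = "/srv/models/local-llm"

# local_files_only=True ensures the library neither downloads from nor contacts external servers
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

# The prompt, which may contain sensitive company data, is processed entirely on local hardware
prompt = "Summarise this internal incident report: ..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Fine-tuning follows the same principle: the training data, the base weights and the resulting model all remain on infrastructure the business controls.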

Large enterprises and public institutions often have an abundance of data available to train models and are, therefore, set to benefit from this kind of technology. Public institutions are also often subject to more extensive data privacy requirements, meaning private AI is a particularly useful option for them.

Developing and adopting private AI can also offer businesses a competitive advantage in Europe. To many customers, it demonstrates a commitment to ethical data use, which in turn helps build customer trust. In addition to this, by minimising the need for extensive data collection and storage, businesses using private AI are less at risk of breaches, which can also be more appealing to customers.

Finally, by keeping data use transparent and localised, European businesses can also demonstrate adherence to privacy laws more efficiently, avoiding complex procedures for cross-border data transfer.

Europe at the forefront of private AI adoption

Europe is set to be a leader in private AI adoption. Given the region’s regulatory focus on data protection, keeping data under the business’s own control aligns with EU priorities, and it is a natural choice for European companies that need to comply with GDPR.

The recent growth of the European data centre market positions the region to host an influx of private AI infrastructure. A recent report, for example, predicts a 16% increase in supply in the core FLAP-D markets (Frankfurt, London, Amsterdam, Paris, and Dublin) and a 49% increase in the size of secondary markets in 2024. With an abundance of data centres located not only in the FLAP-D region but also, and mainly, outside it, we’re likely to see widespread development and adoption across the continent.

Take Italy as an example. The country’s data centre market is expected to grow from 411.4 MW in 2024 to 805.2 MW by 2029, with European-based operators driving that growth. These projections mean more companies will be able to host their systems, including private AI, within Europe, aiding regulatory compliance and ensuring data sovereignty.

Looking forward

AI adoption will only increase in the second half of Europe’s ‘Digital Decade’. However, as the EC and EU continue to focus on data sovereignty, the ethical use of AI is already at the forefront of many minds and will remain so. The AI Act, for example, was announced earlier this year and assesses the risks the technology may pose. Businesses looking to harness the power of this emerging technology also need to focus on protecting, and appealing to, privacy-conscious European consumers. As private AI advancements continue to take hold, European companies will be well-positioned to leverage the technology for competitive advantage, driving innovation with data privacy at its core.

Fabrizio Garrone
Enterprise Solutions Director at Aruba S.p.A

