
Is our private data safe with AI?


The artificial intelligence (AI) boom, and the onslaught of press coverage celebrating its benefits, has coincided perfectly with a period of instability for cloud.

Businesses and individuals had been increasingly rejecting cloud over concerns that their data was being misused for both direct and indirect monetisation.

In January, this trend was so widespread that some industry experts were even predicting that 2023 could be ‘the year of public cloud repatriation’. With the explosion of tools like ChatGPT and Bard, however, cloud is once again back on top – even if people are forgetting that it is cloud that they’re using.

While offering new services in generated content and creativity, tools like ChatGPT are subject to the same longstanding and proven risks and vulnerabilities as other cloud-based tools. In March this year, for example, ChatGPT had to be taken offline due to a bug that made the first message of some newly created conversations visible in other users’ chat histories and unintentionally exposed payment-related information belonging to some ChatGPT Plus subscribers. While OpenAI acted quickly to fix the issue and is confident that there is no ongoing risk, the incident still highlighted the persistent risks associated with cloud data use and storage.

Why are AI companies using vulnerable infrastructure?

Essentially, generative AI tools are just the latest in a long line of cloud apps focussed on turning users into products through targeted advertising and other direct and indirect means of monetisation.

What is so surprising is that, in an era of substantial innovation in data storage and use through Web3, these huge AI companies have chosen to rely on infrastructure that has repeatedly been proven vulnerable.

Web3, on the other hand, makes it impossible for anyone but the data owner to access their data. While this would fully protect users from potential data leaks like the ones we have already seen, it would also make it impossible to monetise the data shared with AI chatbots and other tools.
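To illustrate the general principle rather than any specific vendor’s implementation, the minimal sketch below shows owner-held, client-side encryption in Python. It assumes the widely used cryptography package and deliberately simplifies key handling; the point is simply that a provider holding only the ciphertext can neither read nor monetise the underlying data.

```python
# Minimal sketch of owner-held (client-side) encryption.
# Assumption: the "cryptography" package (pip install cryptography) is available;
# key storage and Web3 specifics are out of scope and simplified here.
from cryptography.fernet import Fernet

# The data owner generates and keeps the key; it never leaves their device.
owner_key = Fernet.generate_key()
cipher = Fernet(owner_key)

# Sensitive text is encrypted before it is sent anywhere.
record = b"Patient symptoms, history and test results"
ciphertext = cipher.encrypt(record)

# A cloud or AI provider that only ever receives 'ciphertext' cannot read it,
# but the owner, who holds the key, can recover the original data.
assert cipher.decrypt(ciphertext) == record
```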

Don’t confuse ChatGPT with your real-life GP

Alarmingly, not only have these AI tools driven cloud use up once more, they have also prompted people to share even more private and sensitive information than they ever did previously. Ask ChatGPT how it could help a doctor with their daily workflow, for instance, and it will list the various things it could assist with if sensitive, confidential, and regulated data were provided. For example:

“Differential Diagnosis: Doctors can consult me for assistance in generating a list of potential diagnoses based on patient symptoms, medical history, and test results. I can analyse the provided information and suggest possible conditions, helping doctors consider a broader range of possibilities.”

While not actually telling somebody to share this information with it, the AI tool is still suggesting it would be beneficial to share highly confidential health data. And even in the tool’s warning message that follows the list of potential ways it could help, it does not reference the data laws around sharing this sort of information, instead focusing on ChatGPT’s ‘valuable information and support’ not being a substitute for a doctor:

“It’s important to note that while I can provide valuable information and support, I am not a substitute for professional medical advice, diagnosis, or treatment. Doctors should exercise their clinical judgment and consider multiple sources of information when making medical decisions.”

What’s more, though ChatGPT might come across as an entity with specialist skills, in reality it is no more than a vast book of ingested ‘stuff’ that it regurgitates when asked a question. This is undeniably clever and solves a number of genuine problems for users; however, it is no substitute for a doctor with expertise and peer-reviewed data to call upon.

Nor is this just an abstract example: many reports suggest doctors are already using ChatGPT in similar ways. Professor Robert Pearl of Stanford medical school told WIRED that he knows of physicians using ChatGPT, asserting that “it will be more important to doctors than the stethoscope was in the past” and adding that “no physician who practices high-quality medicine will do so without accessing ChatGPT or other forms of generative AI.”

Keep your private data private

Users must be aware that the more sensitive and confidential data they share with ChatGPT and similar AI tools, the more vulnerable they become.

We are seeing time and time again that these legacy data storage services are not fit for purpose when it comes to defending against attacks, and so it is only a matter of time until these huge pools of private data are exposed. And the more data we share with them, the more attractive a target they become.

At the end of the day, if these AI tools are encouraging people to share their sensitive and confidential data, then the companies behind them have a responsibility to ensure that data is safe and secure. Today, that is simply not the case.

Simon Bain
CEO at OmniIndex
