
How much of a cyber threat is AI?


Suid Adeyanju, CEO of RiverSafe, explains why four out of five cybersecurity leaders see AI as the biggest cyber threat.

From chatbots to Large Language Models like ChatGPT, AI is everywhere – whether businesses are ready for it or not.

AI and machine learning have been hot topics in the tech industry for several years, with software developers and big cloud vendors racing to build AI into their offerings, making their products smarter and their customers’ lives easier in the process.

But the sudden ubiquity of AI has taken many organisations, and their security teams, by surprise. While businesses may have been strategising on how to bring AI into their operations, they’re now faced with a landscape in which AI technology is easily accessible to their employees. And that open access to new and largely unapproved third-party apps is causing concern for security leaders.

To find out more about the risk that AI poses to cybersecurity, and what security leaders are doing to mitigate it, RiverSafe conducted research asking 250 cybersecurity leaders to share their thoughts.

Perhaps our most troubling finding was that 80% of security leaders believe AI to be the biggest cyber threat to their organisations. Let’s take a look at some more data from the report to find out why AI is top-of-mind for CISOs today. 

AI will facilitate a huge increase in the scope and scale of cyber threats

One of the most worrisome aspects of AI is the significant potential it offers cybercriminals to amplify the scale and complexity of their attacks. Many types of cyber-attack are effectively numbers games, relying on a spray-and-pray strategy to hit as many targets as possible in the hope of finding a weak spot.

With AI tools within their reach, cybercriminals can execute existing tactics much faster and on a considerably larger scale – greatly enhancing their likelihood of a successful breach.

AI algorithms will enable cybercriminals to expand the scope of all kinds of attacks, from relatively simple schemes like cracking passwords and scouting for vulnerabilities within websites to crafting frighteningly persuasive social engineering attacks using new technologies like deepfakes.

This looming escalation in cyber-attacks is extremely alarming, especially considering how many businesses already suffer breaches due to cybercrime. A fifth of the CISOs we spoke to in our survey said they suspected their organisation had fallen victim to a cyber-breach in the past year. A further 18% confirmed they’d experienced a serious breach. It appears that, despite putting additional measures in place, many security leaders are resigned to this increase in attacks. Almost two-thirds (63%) told us they expect to see more data loss within their organisation this year than ever before.

AI is making attacks more complex and harder to defend against

Not only is AI allowing cybercriminals to carry out more attacks, more frequently, but it’s also being used to create new types of attacks that are more sophisticated and harder for cybersecurity products and people alike to spot.

According to our survey results, 61% of CISOs said they’d already seen AI being employed to make attack methods smarter and more complex.

AI algorithms are also drawing on the vast quantity of personal data available on the internet to better deceive potential victims. All of us have a digital footprint, and even data that you might not consider particularly sensitive can be used by AI to craft targeted, persuasive messages intended to win the recipient’s trust. By scraping an Instagram account, for example, an AI bot can write a message packed with details about your recent weekend away, creating a credible email that really does sound like it could be from a familiar colleague.

While some of this sophistication will come from cutting-edge technology that can be used to deceive employees, AI is also being used in far simpler ways to help attacks slip by cyber defences. AI-powered spelling and grammar checkers, for example, can quickly and accurately root out errors from poorly constructed or awkwardly written phishing messages, eliminating one of the most common ways we tend to differentiate between genuine and suspicious communications.

AI is heightening data breach concerns

Given the enormous number of generative AI tools now available for anyone and everyone to use, controlling the movement of sensitive data has become more difficult. These products take written prompts and generate the requested content to specification (copy, code or digital imagery, for example), meaning users must feed data into them to produce a result.

But how do you control what users are ‘telling’ an AI tool – or what it does with that data once it’s been ingested?

These third-party tools are incredibly difficult to secure within a business, as the user has no control over where their inputted data goes or what it’s used for. Once information has been provided as part of a prompt, it may be retained and used by the Large Language Model (LLM) provider to shape answers for other users, meaning it could crop up anywhere the LLM deems it relevant.
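
To make the risk concrete, here is a minimal, hypothetical sketch in Python of the kind of control a security team might place in front of these tools: a simple screen that checks a prompt for patterns that should never leave the business. The pattern list and function name are illustrative assumptions, not taken from any particular product or from the survey.

import re

# Illustrative patterns only – each organisation would maintain its own list of
# data that must never be pasted into a third-party AI prompt.
SENSITIVE_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{16,}"),     # API-key-like tokens
    re.compile(r"\b\d{13,16}\b"),           # long digit runs (card or account numbers)
    re.compile(r"\bPROJECT-[A-Z0-9]+\b"),   # hypothetical internal codenames
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe to forward to an external AI service."""
    return not any(pattern.search(prompt) for pattern in SENSITIVE_PATTERNS)

print(screen_prompt("Summarise the themes from our latest customer survey"))   # True
print(screen_prompt("Fix this: api_key = 'sk-AbC123xyz890defGHIjk'"))          # False

A screen like this is no substitute for vendor due diligence or user training, but it shows where a technical checkpoint can sit between an employee and an external tool.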

This gives rise to a major challenge for CISOs, who now have to guard against potential data breaches stemming from what employees are entering into these tools. Some of the world’s biggest companies have already banned the use of generative AI tools among employees, and judging from our survey data, many businesses are following suit by taking a zero-trust approach to keeping their data secure.

Many of the security leaders we spoke to are pushing back against the infusion of AI into their organisations too, with almost a quarter (22%) banning the use of openly accessible generative AI tools such as ChatGPT.
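
For organisations that go down the route of an outright ban, enforcement usually happens at the network edge rather than on the desktop. The sketch below is a purely illustrative Python example of the logic a web proxy or egress filter might apply; the domain list is an assumption for demonstration, not a recommendation from the survey.

# Hypothetical blocklist – in practice this would live in a secure web gateway
# or proxy policy rather than in application code.
BLOCKED_GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def is_request_allowed(host: str) -> bool:
    """Deny requests to a blocked domain or any of its subdomains."""
    host = host.lower().rstrip(".")
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_GENAI_DOMAINS)

print(is_request_allowed("intranet.example.com"))  # True
print(is_request_allowed("chat.openai.com"))       # False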

Taking action against the AI threat

AI may be giving security leaders sleepless nights right now, but closing the door on it completely just isn’t an option. Although it’s relatively early days for the technology, there’s no hiding from it; it’s only going to become more prevalent in our software and our business processes. The best time to take action to make sure your business can enjoy the advantages AI brings, while protecting your digital environment from threats, is right now.

With robust awareness training, reinforced cybersecurity posture, and investment in AI-augmented security tools designed to fight back against autonomous threats, you can shore up your defences and mitigate the risks to your business. 

Suid Adeyanju
CEO of RiverSafe
