
Staying secure in the age of AI


How can organisations stay secure in the face of increasingly powerful AI attacks, asks Michael Lyborg, CISO at Swimlane.

It’s almost impossible to escape the hype around artificial intelligence (AI) and generative AI, and these tools have genuinely powerful applications. Text-based tools such as OpenAI’s ChatGPT and Google’s Bard can help people land jobs, significantly cut the time it takes to build apps and websites, and add much-needed context by analysing large volumes of threat data. As with most transformative technologies, there are also risks to consider, especially when it comes to cybersecurity.

AI-powered tools have the potential to help organisations overcome the cybersecurity skills gap. Yet the same technology that is helping companies transform their businesses is also a powerful weapon in the hands of cybercriminals. In a practice sometimes referred to as offensive AI, attackers use AI to automate scripts that exploit vulnerabilities in an organisation’s security systems or to make social engineering attacks more convincing. There’s no doubt that this represents a growing threat to the cybersecurity landscape, and one that security teams must prepare for.

With attacks in general already outpacing defensive efforts and leaving security teams on the back foot, AI could represent an existential threat. With that in mind, how can organisations ensure that they stay ahead?

The security risks of AI

Before looking at how they can do so, it’s worth taking a deeper dive into what makes AI, and generative AI in particular, such a serious potential security threat.

One of the biggest concerns is the ability of cybercriminals to use large language model (LLM) tools such as ChatGPT for social engineering. These tools allow attackers to realistically spoof users within an organisation, making it increasingly difficult to distinguish fake conversations from real ones.
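Well-established controls still have a role to play here, and automating them keeps the burden off analysts. As a rough defensive sketch (the organisation domain below is an illustrative assumption, and checking the gateway-stamped Authentication-Results header is only one layer of a real control), a triage step might flag messages that claim an internal sender but fail SPF, DKIM or DMARC checks:

```python
# Minimal sketch: flag inbound mail that claims an internal sender but fails
# the authentication checks recorded by the mail gateway. Assumes the gateway
# stamps an Authentication-Results header (RFC 8601); the domain name is a
# hypothetical stand-in.
from email import message_from_string
from email.utils import parseaddr

INTERNAL_DOMAIN = "example.com"  # illustrative organisation domain

def looks_spoofed(raw_message: str) -> bool:
    msg = message_from_string(raw_message)
    _, sender = parseaddr(msg.get("From", ""))
    if not sender.lower().endswith("@" + INTERNAL_DOMAIN):
        return False  # external sender: handled by other controls
    auth_results = msg.get("Authentication-Results", "").lower()
    # Mail claiming an internal sender yet failing authentication is suspect.
    return any(check in auth_results
               for check in ("spf=fail", "dkim=fail", "dmarc=fail"))
```

However convincing the generated text, a spoofed message still has to traverse infrastructure the attacker does not control, which is where checks like this bite.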

Additionally, cybercriminals may be able to exploit vulnerabilities in AI tools themselves to access the databases behind them (something that has already happened), opening the door to further attacks. It’s also worth noting that because models like ChatGPT draw on such a broad variety of sources, it could be difficult for security researchers to pin down exactly where a vulnerability originated if it surfaces through an AI tool. Integrating security alert systems with public-facing AI models could exacerbate the problem, increasing the risk of a breach or leak of proprietary data.
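One practical mitigation is to sanitise alert data before any of it reaches a public model. The sketch below assumes a simple dictionary-shaped alert; the field names and redaction patterns are illustrative, and a real deployment would rely on an approved, access-controlled integration rather than ad-hoc redaction alone:

```python
import re

# Minimal sketch: strip obviously sensitive fields from an alert before a
# summary is requested from a public-facing model. SENSITIVE_KEYS and the
# IP pattern are illustrative assumptions, not an exhaustive policy.
SENSITIVE_KEYS = {"username", "hostname", "source_ip", "api_key"}
IP_PATTERN = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")

def sanitise_alert(alert: dict) -> dict:
    clean = {}
    for key, value in alert.items():
        if key in SENSITIVE_KEYS:
            clean[key] = "[REDACTED]"
        elif isinstance(value, str):
            # Scrub stray IP addresses embedded in free-text fields.
            clean[key] = IP_PATTERN.sub("[REDACTED_IP]", value)
        else:
            clean[key] = value
    return clean
```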

Addressing skill and perception disparities

One of the first steps any organisation should take to stay secure in the face of AI-generated attacks is to acknowledge the significant top-down disparity between the volume and strength of cyberattacks and the ability of most organisations to handle them. Our latest report shows that just 58% of companies address every security alert. Without the right defences in place, the growing power of AI as a cybersecurity threat could see that number slip even lower.

In part, this stems from an inherent disconnect between executives and frontline cybersecurity workers over how prepared the organisation is to face AI-powered attacks. Our research also revealed that 82% of executives believe they will eventually have a fully staffed security team, but only 52% of security team members think this will become a reality. Given that 82% of organisations report that it takes three months or longer to fill a cybersecurity role, the security boots on the ground are probably closer to the truth.

Some 87% of executives also believe their security team possesses the skills required to adopt ‘heavy-scripting’ security automation tools, while only 52% of front-line staff said they had enough experience to use these tools properly. These disparities are at the heart of what is keeping many security operations teams on the back foot. The trend cannot persist if businesses are to get ahead of the increasingly sophisticated cyber threats generated by AI.

Embracing low-code security automation

Fortunately, there is a solution: low-code security automation.

This technology gives security teams the power to automate tedious and manual tasks, allowing them to focus on establishing an advanced threat defence. Of the organisations surveyed that address every alert, more than three-quarters (78%) use low-code security automation in their security stack.
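To make that concrete, consider the kind of triage step such platforms automate. The sketch below is illustrative only: the reputation lookup, verdict labels and alert fields are hypothetical stand-ins for whatever connectors a given platform provides.

```python
# Minimal sketch of an automated triage step: enrich an alert with a
# reputation lookup and close it without analyst involvement when every
# indicator is known-benign. All names here are illustrative assumptions.
from typing import Callable

def triage(alert: dict, reputation: Callable[[str], str]) -> str:
    verdicts = [reputation(ioc) for ioc in alert.get("indicators", [])]
    if verdicts and all(v == "benign" for v in verdicts):
        return "closed"      # no analyst time spent
    if any(v == "malicious" for v in verdicts):
        return "escalated"   # routed straight to an analyst
    return "queued"          # ambiguous: needs human review

# Usage with a stubbed lookup:
print(triage({"indicators": ["203.0.113.7"]}, lambda ioc: "benign"))
```

The point of low-code tooling is that a team assembles logic like this from pre-built connectors rather than writing and maintaining it from scratch.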

There are other benefits too, including the ability to scale implementations to match the team’s existing experience, with less reliance on coding skills. And unlike no-code tools, which can be useful for smaller organisations that are severely resource-constrained, low-code platforms are more robust and customisable, making them easier to adapt to the needs of the business.

All of these factors mean that such tools could be pivotal counters to the threats created by cybercriminals using AI tools.

Balancing the best of both worlds

Ultimately, it should be clear that AI represents an existential threat to the cybersecurity sector. Furthermore, taking a traditional approach to security orchestration, automation and response (SOAR) simply isn’t tenable given the resources required and today’s hiring environment. And while no-code tools offer a stopgap for organisations with extremely limited resources, their scope is narrow and they cannot be adapted to every organisational requirement.

It’s also critical that organisations close the perception gaps between executives and security workers over both the threats posed by AI and the organisation’s ability to confront them. As those gaps close, the best option is to adopt low-code security automation tools that allow security teams to automate as many functions as possible.

In doing so, organisations free their teams to address every alert, and to research and respond to the latest threats and vulnerabilities with greater ease.

Michael Lyborg
CISO at Swimlane

