
AI: Security’s new frontier


Alexander Feick, Vice President, eSentire Labs, explores the role generative AI and LLMs play in cybersecurity, and why a ‘secure by design’ mindset must be adopted from the start.

Innovative businesses understand the transformative potential of generative AI, especially large language models (LLMs), and the roles they play in the evolution of industry practices. McKinsey estimates that generative AI could add between $2.6 trillion and $4.4 trillion to the global economy annually across 63 different use cases, boosting employee productivity and delivering services to customers faster.

However, alongside implementing these powerful tools, it is imperative to define strategies for navigating the risks they introduce. When designing and implementing generative AI projects, we should ensure these deployments are secure right from the start.

Currently, conversations around LLMs and security tend to focus on how threat actors can use LLMs and generative AI services to improve attacks such as phishing. However, LLM-based cyberattacks are expected to reach far beyond the sophisticated phishing and impersonation scams reported market-wide today. Misusing LLMs could lead to the loss or exposure of sensitive data, while the LLM services themselves could be targeted to return incorrect or dangerous responses. Additionally, LLMs embedded in application flows can be targeted to manipulate the behaviour of the application.

To make generative AI and LLMs useful, we must adopt a ‘secure by design’ mindset from the start. Rather than trying to bolt security onto these deployments after services have already been implemented, security teams should be involved right from the get-go, so that they can build in the right guardrails around how data is used, as well as methods to check that any security policy remains current and is being followed correctly over time.

Getting security for LLMs right from the start

Securing the use of LLMs begins with defining a clear consent mechanism for how LLMs are used, and enforcing that mechanism from the start. This includes implementing a directive that requires users to acknowledge their intent before accessing LLMs, which provides a robust first line of defence against tools being misused or data being shared inappropriately. By incorporating principles of consent, security, and practicality, organisations can create a user experience that is both smooth and secure, forming the foundation of any LLM security policy.
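As a minimal sketch of what such a consent gate might look like, the snippet below blocks LLM access until a user has acknowledged an acceptable-use policy. The function names, policy text, and in-memory log are illustrative assumptions, not any specific product’s interface.

```python
# Hypothetical consent gate placed in front of any internal LLM access.
# Names, policy text and the in-memory log are illustrative only.
from datetime import datetime, timezone

ACCEPTABLE_USE_POLICY = (
    "I will not submit customer data, credentials, or other sensitive "
    "information to this LLM service."
)

_consent_log: dict[str, datetime] = {}  # user_id -> time of acknowledgement

def record_consent(user_id: str) -> None:
    """Store the user's acknowledgement of the acceptable-use policy."""
    _consent_log[user_id] = datetime.now(timezone.utc)

def require_consent(user_id: str) -> None:
    """Refuse LLM access until the user has acknowledged the policy."""
    if user_id not in _consent_log:
        raise PermissionError(
            f"User {user_id} must first acknowledge: {ACCEPTABLE_USE_POLICY}"
        )
```

In practice the acknowledgement would be stored durably and re-confirmed periodically, but the principle is the same: intent is captured before any prompt reaches the model.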

However, implementing a consent mechanism is just the first step. Establishing a consistent means of risk monitoring is equally crucial. An internal gateway model can prove invaluable in securing all interactions with LLMs. This gateway should serve as a single point of interaction for users, data and services with any LLM service. A gateway should provide detailed logging of interactions, capturing details such as usernames, response times, the business or software purpose of each LLM call, and the text exchanged, as this enables organisations to identify and mitigate potential risk hotspots effectively.

Without this point of control, companies must rely on each LLM tool that individuals access providing its own metrics and usage reporting, and then gather that data from every tool in order to correlate activities. Not only is this difficult to execute in practice, it is also time-consuming and hard to maintain consistently.

A centralised LLM gateway should provide more insight into activity and a way to manage LLM consumption over time. However, it is not the only step you should take to secure your use of LLMs.
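To illustrate the gateway idea, the sketch below wraps every outbound LLM call and records the kinds of details described above in a central log. The `call_llm` back-end and the field names are assumptions for the example, not a particular vendor’s API.

```python
# Simplified sketch of an internal LLM gateway that logs every interaction.
# The call_llm() back-end and the field names are hypothetical.
import json
import logging
import time
from dataclasses import dataclass, asdict

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm_gateway")

@dataclass
class LLMInteraction:
    username: str
    purpose: str          # business or software purpose of the call
    prompt: str
    response: str
    response_time_s: float

def gateway_request(username: str, purpose: str, prompt: str, call_llm) -> str:
    """Single point of interaction: forward the prompt and log the details."""
    start = time.monotonic()
    response = call_llm(prompt)           # any back-end LLM client
    elapsed = time.monotonic() - start
    record = LLMInteraction(username, purpose, prompt, response, elapsed)
    log.info(json.dumps(asdict(record)))  # central record for risk monitoring
    return response
```

Because every request flows through one function, usage can be correlated across teams and tools without chasing reports from each individual LLM service.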

Skilling up

Alongside the technology to track use, you should also consider your company’s processes and support for staff. Targeted training on how to maximise the effective use of generative AI is critical to turning the potential benefits of LLMs into reality. By incorporating online learning modules and showcasing practical examples, organisations can closely monitor user adoption rates for LLMs and associated tools. From there, you can not only enforce security standards but also promptly address any pain points that users might have.

More than a security measure alone, this helps ensure that users make continued progress in harnessing the potential of generative AI. Rather than simply blocking use, security teams can enable users to be more successful and share best practices.

Once you set users on the right path, you will want to track their performance over time. As businesses integrate LLMs into their daily operations, it will be vital to ensure that these tools add tangible value. Practical output evaluation can provide insights into the quality, accuracy, and effectiveness of the LLMs that you use and the data that they provide to internal users or to customers. 

Focusing on quality and efficiency can help improve results. It also matters for security: while the strength of LLMs emerges in tightly embedded application flows, evaluating output quality provides a way to examine what an LLM returns with a careful eye for potential risks or attempted attacks. If a threat actor attempts to introduce vulnerabilities through prompt injection or goal hijacking attacks, tracking quality and efficiency can help protect users.
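One simple, illustrative way to put that careful eye into practice is to screen LLM output against basic quality and risk checks before it reaches downstream users or systems. The heuristics and threshold below are assumptions for the sketch, not a complete defence against prompt injection.

```python
# Illustrative post-processing checks on LLM output before it is consumed.
# The patterns and length threshold are examples only, not a complete defence.
import re

SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous) instructions",   # common injection phrasing
    r"BEGIN (PRIVATE|SECRET) KEY",               # leaked credential material
    r"<script\b",                                # script content in output
]

def evaluate_output(text: str, max_length: int = 4000) -> list[str]:
    """Return a list of findings; an empty list means nothing was flagged."""
    findings = []
    if len(text) > max_length:
        findings.append("output unusually long")
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            findings.append(f"matched suspicious pattern: {pattern}")
    return findings
```

Checks like these are most useful when they run automatically inside the same gateway or application flow that handles the LLM call, so flagged responses can be held back or reviewed before they are acted upon.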

Always be vigilant

LLMs introduce new and unique threat surfaces, requiring ongoing vigilance and rapid adaptation by IT security teams. From a security and trust perspective, implementing LLMs is very similar to sharing data with an external party, so you will have to adopt similar defence-in-depth processes for all your LLM interactions. You should adapt your risk and security management solutions to include active monitoring and threat hunting for anomalies, particularly in high-frequency and repeatable use cases (such as LLMs embedded in software), and treat all LLM outputs as potential security threats to whatever consumes them.
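As a small example of what anomaly monitoring might look like, the interaction records collected by a gateway can feed simple checks, such as flagging a user whose request volume suddenly spikes. The record structure and threshold below are illustrative assumptions.

```python
# Illustrative anomaly check over gateway interaction logs.
# The record format and threshold are examples; real monitoring would use
# baselines per user and per use case rather than a single fixed limit.
from collections import Counter

def flag_anomalous_users(interactions: list[dict],
                         max_requests_per_hour: int = 200) -> list[str]:
    """Return users whose request volume in the window exceeds the threshold."""
    counts = Counter(record["username"] for record in interactions)
    return [user for user, n in counts.items() if n > max_requests_per_hour]

# Example: pass the last hour of gateway log records into the check.
# flagged = flag_anomalous_users(recent_records)
```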

The journey towards making AI technologies useful within a business is just starting, while ensuring robust IT security is a continuous objective. By looking at your security policies, implementing dynamic risk monitoring, and taking a pragmatic approach to training your users, you can navigate the emergent AI landscape while mindfully addressing potential threats. This should help you pave the way towards securely harnessing the full potential of AI.

Alexander Feick
Vice President at eSentire Labs

