
Defending against the growing threat of state hackers


The explosion of generative AI products this year looks set to be a major accelerant in digital transformation for businesses and individuals.

These tools, such as OpenAI’s ChatGPT, offer huge potential for organisations looking to automate time-consuming and costly tasks. But they also come at a cost, with bad actors already abusing the technology to drive a new evolution in cyberattacks.

The spread of generative AI is likely to benefit state-sponsored hackers, too, who in recent years have been ramping up attacks on critical infrastructure, government agencies and businesses. As such, the UK government moved quickly to implement new cybersecurity measures to protect IT systems in April, following a new strategy to protect the NHS from cyberattacks.  

Growing threats

New generative AI models make it possible for state hackers to completely automate spear phishing attacks. Crucially, these models can generate new, coherent and compelling content from text, audio, video, or images. With just a few prompts, countless variants of targeted bait messages can be created and sent to many different addresses.

Phishing is a social engineering method that exploits humans as the greatest vulnerability, sending emails that look legitimate and that recipients are more likely to fall for. It covers everything from mass mailings to personalised spear phishing emails, with the goal of getting the recipient to click on an innocent-seeming malicious link.

Recent research from Hornetsecurity reveals that up to 90% of all cyberattacks start with a phishing email, 40% of all email traffic poses a threat, and 5% of daily global email traffic is classed as malicious. That might not sound like a big percentage, but it amounts to billions of emails every day. 

Since AI tools are scalable, countless variants of spear phishing messages can be generated and sent to different targets in a very short time. These AI systems can also be adapted through machine learning to continuously optimise and update information. Through this process, spear phishing messages can be amended, based on their success level, leading to a constant increase in the efficacy of social engineering campaigns. It’s now clear that in the wrong hands, generative AI technology has the potential to create near-perfect spam messages, malicious code, and even teach novice cybercriminals how to launch attacks. 

Alongside the general increase in attacks from state hackers, there has also been a rise in authoritarian states such as Russia, Iran, North Korea and China carrying out spear phishing attacks against other countries, with the goal of undermining their supply security, intercepting information, or stealing cryptocurrencies.

Fighting back

In response to these increasing cyber threats and evolving digitalisation, the EU has issued the new NIS2 (Network and Information Security) cybersecurity directive, which tightens the security requirements for operators of critical infrastructures (CRITIS) in the member states.

Together, CRITIS, government agencies and enterprises must act quickly to prepare and protect their employees and citizens from this new wave of AI-supported cyberattacks. 

Governments are using a host of tactics to combat the rise in more sophisticated cyberattacks from nefarious states, such as setting up specific teams to disrupt terrorist groups and state hackers. However, ensuring they have the appropriate IT measures in place, such as email filters, firewalls, and network and data monitoring tools, remains vital. 
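To illustrate the kind of checks an email filter performs, here is a minimal, hypothetical sketch in Python. It is not how any particular commercial product works; real filters combine sender reputation, SPF/DKIM/DMARC authentication and machine-learning models, whereas this sketch only scores a few classic red flags (urgency keywords, a display name that does not match the sender's domain, and links pointing at raw IP addresses). All names and thresholds here are illustrative assumptions.

```python
import re

# Illustrative only: a crude heuristic phishing scorer, not a production filter.
URGENT_WORDS = {"urgent", "verify", "suspended", "immediately", "password"}

def phishing_score(sender: str, subject: str, body: str) -> int:
    """Return a rough risk score for a message; higher means more suspicious."""
    score = 0
    text = f"{subject} {body}".lower()

    # 1. Urgency keywords common in phishing bait.
    score += sum(1 for word in URGENT_WORDS if word in text)

    # 2. Display-name/domain mismatch, e.g. '"PayPal" <alerts@evil.example>'.
    match = re.match(r'(.+?)\s*<[^@]+@([^>]+)>', sender)
    if match:
        display = match.group(1).strip('" ').lower()
        domain = match.group(2).lower()
        if display not in domain:
            score += 2

    # 3. Links to raw IP addresses are a classic red flag.
    if re.search(r'https?://\d{1,3}(\.\d{1,3}){3}', body):
        score += 3

    return score

print(phishing_score(
    '"PayPal" <alerts@pay-pa1-security.example>',
    "Urgent: verify your password immediately",
    "Your account is suspended. Click http://192.0.2.7/login now.",
))
```

In practice a gateway would quarantine or flag messages above a tuned threshold; the point is simply that rule-based scoring like this struggles against AI-generated variants, which is why the layered measures above matter.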

Security awareness training

In addition to deploying the latest technical security measures, the best defence for government agencies and CRITIS is to train employees to recognise spear phishing attempts through security awareness training. This virtual training includes simulated spear phishing attacks, which help prepare and educate users about what an attack might look like. From this training, governments can monitor employee behaviour to ensure all employees are prepared to identify potential threats and act accordingly.

Approaching security awareness training from the triad of mindset, skillset and toolset is vital in making sure a government’s data remains safe. This approach ensures employees have the ability to recognise new cyberattack methods, and helps foster a sustainable and well-rounded cybersecurity culture equipped to deal with current and future cyber threats.

Generative AI has the potential to change the face of the cyber threat landscape, with attack methods becoming easier to carry out and harder to detect. But with effective cybersecurity methods, including ongoing security awareness training, governments, critical infrastructures, and enterprises can safeguard themselves from current and future threats. 

Daniel Hofmann
CEO at Hornetsecurity

