Defending against deepfakes

Simon Jefferies
Director of Technology at Sharp UK

Simon Jefferies, Director of Technology at Sharp UK, explains how AI can help thwart evolving impersonation attacks.

In recent years, cybercriminals have escalated their use of evolving technology to deploy increasingly sophisticated attacks. Deepfake technology now enables them to take attacks to a new level, launching scams and impersonation attacks that exploit advanced machine learning to prey on organisations and their people.

Deepfakes originally came to the public’s attention via the entertainment industry, but the technology has since entered the realm of business, with bad actors weaponising it for fraud, data breaches, and other criminal objectives. From voice-based impersonations targeting finance departments to video deepfakes that can outwit basic verification processes, the rapid evolution of AI is making deepfakes sound and look increasingly authentic. To compound the problem, access to these tools is fast becoming cheaper and easier.

To understand this rising tide of cybercriminal activity, and more importantly how to protect against it, organisations, alongside their technology and IT partners, must build an awareness of how AI-driven verification tools can detect deepfakes. This will help them adapt their security practices and build a defence against this growing threat.

The rise of deepfake cybercrime

Deepfake technology uses AI to create or manipulate images, audio, and video, producing media so realistic and convincing that users follow the instructions they are given, allowing threat actors to bypass security controls. This proves especially difficult to counter in scenarios where cybercriminals impersonate high-ranking executives or trusted teams, such as IT helpdesks, to trick employees into making bank transfers or sharing confidential information.

Recent examples highlight the chilling realism of AI-generated audio that mimics a CEO’s voice, deceiving even cautious employees and leading to significant financial and reputational losses. In one high-profile case, deepfaked audio of a CEO’s voice was used to trick an employee into transferring $243,000 to a fraudster’s account.

Beyond fraud, deepfakes also pose a risk to data security. Imagine a scenario where a deepfake impersonates a cybersecurity officer during an incident response, manipulating the team into actions that allow unauthorised access to sensitive data. Such attacks compromise trust within organisations and erode confidence in digital communications – a concerning challenge as our reliance on remote, digital interactions grows.

The next wave: AI-driven detection and verification tools

To counter these threats, both new and existing tools can be leveraged to spot and stop deepfakes. These include:

  1. Synthetic media detectors: These tools use AI models trained to spot signs of manipulation in media files such as video and audio. By identifying irregularities in pixel patterns, audio anomalies, or inconsistent voice modulation, these detectors can flag suspicious content. Tools like Microsoft’s Video Authenticator and DARPA’s Semantic Forensics program analyse minute distortions that even advanced deepfakes leave behind.
  2. Biometric authentication systems: AI-driven biometrics now go beyond basic facial recognition to detect micro-movements, like eye blinks or subtle muscle shifts, that deepfake technology often struggles to replicate. These systems add a layer of verification that can stop impersonation attacks, especially when paired with other identity checks.
  3. Multi-factor and continuous authentication: With deepfake attacks targeting voice and video verification, multi-factor authentication (MFA) is more critical than ever. By requiring several forms of identity confirmation, MFA makes it harder for attackers to succeed. Continuous authentication, which verifies a user’s identity throughout an interaction by analysing behaviour patterns, can also reveal deepfakes. A minimal sketch of how one common MFA factor is generated follows this list.
  4. Blockchain and digital watermarking: Companies are exploring blockchain for media verification, using digital signatures to confirm the authenticity of images, audio, or video. Blockchain-based watermarks offer a way to ensure that media hasn’t been tampered with, a promising line of defence as more media circulates online. The second sketch after this list illustrates the underlying signature idea.
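
To make the multi-factor point concrete, the sketch below shows a minimal time-based one-time password (TOTP) generator of the kind behind many authenticator apps, following RFC 6238. It is an illustrative Python sketch, not a production implementation, and the secret shown is a placeholder.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        """Generate a time-based one-time password (RFC 6238)."""
        key = base64.b32decode(secret_b32)
        # The moving factor is the number of elapsed time steps since the epoch.
        counter = int(time.time()) // period
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        # Dynamic truncation: read 4 bytes at an offset taken from the last nibble.
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Placeholder secret for illustration only; real deployments provision a
    # per-user secret during enrolment.
    print(totp("JBSWY3DPEHPK3PXP"))

Because the code changes every 30 seconds and is derived from a shared secret, a convincing deepfaked voice or face alone is not enough to pass the check.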
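
The watermarking idea can be sketched in the same spirit. The example below uses a keyed hash (HMAC) from Python’s standard library as a stand-in for the digital signatures or blockchain anchors a real scheme would rely on; the key and file name are hypothetical.

    import hashlib
    import hmac
    from pathlib import Path

    # Hypothetical key for illustration; a real scheme would use an asymmetric
    # signature so anyone can verify without holding the signing key.
    SIGNING_KEY = b"demo-signing-key"

    def sign_media(path: Path) -> str:
        """Compute a tamper-evident tag for a media file at publication time."""
        digest = hashlib.sha256(path.read_bytes()).digest()
        return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

    def verify_media(path: Path, expected_tag: str) -> bool:
        """Recompute the tag on receipt; any edit to the file changes it."""
        return hmac.compare_digest(sign_media(path), expected_tag)

    # Usage: tag = sign_media(Path("board_update.mp4")) at publication, then
    # verify_media(Path("board_update.mp4"), tag) before trusting the clip.

The point of anchoring such tags to a blockchain or public log is simply that a verifier can check them without trusting the channel the media arrived on.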

Building a proactive defence

To counter this growing threat, organisations need a proactive strategy that combines regular team training, identity verification, advanced detection tools and a ‘trust but verify’ approach to unusual instructions.

Investing in security training is an essential first step. Team members should be educated on the potential risks and uses of deepfakes and other phishing activities in cybercrime, as well as learning how to spot potential attacks. By following established guidelines, team members can confirm requests involving sensitive data or financial transactions, helping to mitigate the risk of falling victim to these scams.

Cybercriminals today are no longer lone wolves or opportunistic hackers. They are businesses, pursuing their own ‘leads’ to exploit unsuspecting organisations. It’s not unusual for these criminal organisations to spend considerable time and effort evaluating the best avenues to infiltrate a business and maximise the return on their investment.

While high-profile attacks often target big enterprises, small businesses are commonly hit as low-hanging fruit. As deepfake technology becomes increasingly sophisticated and easier to get hold of, organisations need to ensure their people are trained and educated as the first line of defence.
