Cyber Threats
A Deepfake Scammed a Bank out of $25M — Now What?
A finance worker in Hong Kong was tricked by a deepfake video conference. The future of defending against deepfakes is as much a human challenge as a technological one.
What happened?
Over the weekend, a Hong Kong firm reported a $25 million loss to fraudsters who allegedly used deepfake technology to impersonate the company's chief financial officer on a video conference call.
This AI-driven incident reads more like the plot from a science fiction movie than a real-world event — but it’s not just theory. If true, it’s a watershed moment testing our collective ability to respond to the next evolution of social engineering and deception attacks.
The potential for this novel attack was not entirely unforeseen. At AWS re:Invent in November, Trend Micro detailed a similar business impact scenario, advising global business leaders on how to prepare and adjust internal processes and policies against the backdrop of rapidly advancing deepfake technology.
Why are deepfake scams on the rise?
AI-powered fraud uplevels existing attacker tactics and techniques, making them more effective. This style of attack became inevitable once deepfake technology grew widely accessible, both as open-source software (generative adversarial networks, or GANs) and as avatar-builder SaaS applications.
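To ground the GAN reference, here is a minimal, illustrative sketch of the adversarial training loop that underpins this class of deepfake tooling. It assumes PyTorch and a `real_faces` loader of flattened face crops (both assumptions for illustration); production face-swap pipelines add encoders, alignment, and far larger models.

```python
# Minimal sketch of the adversarial training loop behind GAN-based deepfakes.
# Illustrative only; `real_faces` is an assumed DataLoader yielding batches
# of flattened 64x64 RGB face crops normalised to [-1, 1].
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 100, 64 * 64 * 3

# Generator: maps random noise to a synthetic face image.
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 512), nn.ReLU(),
    nn.Linear(512, IMG_DIM), nn.Tanh(),
)
# Discriminator: scores how "real" an image looks.
D = nn.Sequential(
    nn.Linear(IMG_DIM, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

loss = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for real in real_faces:                     # batches of genuine face crops
    batch = real.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Train D to separate real faces from generated fakes.
    fake = G(torch.randn(batch, LATENT_DIM))
    d_loss = loss(D(real), ones) + loss(D(fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train G to fool D: the adversarial pressure that makes
    #    the fakes progressively harder to distinguish.
    g_loss = loss(D(G(torch.randn(batch, LATENT_DIM))), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```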
AI tools like HeyGen make it easy for anyone to create a deepfake: an AI model is trained on real video of the speaker, which can then generate scripted deepfake videos with turnaround times measured in minutes. The results are impressive and carry long-range implications beyond this scenario, notably for mass misinformation spread via social networking sites.
In the context of the Hong Kong incident, attackers likely could not have powered a real-time video call with ad-hoc conversation, given that today's typical tools require around 30 minutes of processing to generate a few sentences of video. We predict the technology will reach real-time capability soon, with recent reports suggesting text-to-video advancements could arrive sooner rather than later. While the investigation into how the deepfake videos were developed and executed is ongoing, the adversary likely pre-generated a set of content clips to play back in real time during the call.
The evolution of AI-driven attacks
AI is influencing three main categories of cybercrime: social engineering and fraud, jailbroken GPT services, and hijacking and model poisoning. Among these, fraud is leading the way.
This scenario marks a sharp escalation in the effectiveness of business email compromise (BEC) techniques, with AI steadily making BEC and phishing more compelling, and it makes it urgent for organisations to ensure their funds transfer processes are fraud-resistant. In this news story, an employee claims they received authorisation on a live video call, which gave them the confidence to proceed without additional verification.
How organisations can improve preparedness
Defending against deception attacks is not solely a technological battle; it is equally a human challenge, necessitating a combination of adjustments across people, process, and technology to secure financial transactions, data transfers, and contracts against emerging threats.
Strengthening process, collaboration, and awareness
Even as the investigation into this case continues, it is a wake-up call for organisations to scrutinise and strengthen their verification processes for funds transfers. Process questions for cybersecurity and risk leaders to consider:
- Is there a clearly defined verification process, no matter who is requesting the transfer?
- Do the verification steps include strong authentication that is out-of-band from the current interaction with a potential attacker? (e.g., don't trust a phone number provided in email instructions, and don't trust a video call invite supplied by the requestor; see the sketch after this list)
- Is there a predetermined safe list of contacts and contact information?
- Is there a predetermined and understood language (e.g., a verification word or statement) and process to verify transactions?
- Are staff empowered to raise concern about a request that appears to come from the CEO? Is there a defined process for verifying voice or video instructions?
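To make the out-of-band idea concrete, here is a hypothetical sketch of a verification step that challenges the approver over a pre-registered channel. Every name in it (`SAFE_CONTACTS`, `send_sms`, `verify_transfer`) is an illustrative assumption, not a real library or product API.

```python
# Hypothetical sketch of an out-of-band funds-transfer verification step.
# The key point: contact details come from a pre-approved directory,
# never from the request (email, call, or meeting invite) itself.
import secrets

# Predetermined safe list: approver -> phone number on file.
SAFE_CONTACTS = {
    "cfo@example.com": "+852-XXXX-XXXX",
}

def send_sms(number: str, message: str) -> None:
    """Placeholder for a real SMS/voice gateway integration."""
    print(f"[SMS to {number}] {message}")

def verify_transfer(requestor: str, amount: float) -> bool:
    """Challenge the approver over a channel the requestor did not supply."""
    number = SAFE_CONTACTS.get(requestor)
    if number is None:
        return False  # unknown requestor: fail closed and escalate manually

    code = secrets.token_hex(3)  # one-time code, e.g. 'a4f1c9'
    send_sms(number, f"Confirm transfer of ${amount:,.2f} with code {code}")

    # The employee reads the code back over the pre-registered channel;
    # a deepfake on the original call never sees it.
    entered = input("Code read back by approver: ").strip()
    return secrets.compare_digest(entered, code)
```

The design point is that the challenge travels over a channel that was on file before the request ever arrived, so an attacker who controls the video call or email thread cannot intercept or supply it.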
According to cyber-insurer Coalition's 2023 mid-year Cyber Claims Report, funds transfer fraud claims outnumbered ransomware claims by 63%. Security teams need to work with their colleagues in finance to ensure current and future fraud techniques won't succeed.
Defence technology best practices and the role of AI-driven protection
From a defence perspective, security teams have a renewed opportunity to align with Zero Trust frameworks, rethink their approaches to social engineering and identity-based attacks, and take advantage of AI-driven detection.
- Zero Trust Alignment: Limiting access and adopting a ‘never trust, always verify’ approach ensures only necessary individuals have access to sensitive information or processes, using conditional and dynamic risk-based rules (a minimal sketch of such a rule follows this list).
- AI-Driven Security: Monitoring internal traffic, investing in identity threat detection, and modernising email defence tooling (e.g., integration into email systems using APIs, leveraging AI/ML and computer vision to assess writing style, intent, and fake log-in pages) are all necessary advancements to combat the growing effectiveness of BEC and phishing, which are frequently the first step adversaries take to launch a full-scale, sophisticated attack. A toy example of one such writing-style signal appears at the end of this section.
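As a concrete illustration of the conditional, risk-based rules mentioned in the Zero Trust bullet, the following sketch scores an access request from contextual signals. The signals, weights, and thresholds are assumptions chosen for clarity, not any vendor's policy engine.

```python
# Hedged sketch of a dynamic, risk-based access rule in the spirit of
# Zero Trust ('never trust, always verify'). All weights are illustrative.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    action: str            # e.g. "initiate_wire_transfer"
    amount: float
    device_managed: bool   # corporate-managed endpoint?
    geo_usual: bool        # request from the user's usual location?
    mfa_passed: bool

def risk_score(req: AccessRequest) -> int:
    """Accumulate risk from contextual signals; higher = riskier."""
    score = 0
    if not req.device_managed:
        score += 40
    if not req.geo_usual:
        score += 30
    if not req.mfa_passed:
        score += 50
    if req.amount > 100_000:
        score += 30   # high-value transfers always warrant extra scrutiny
    return score

def decide(req: AccessRequest) -> str:
    """Allow, step up, or deny based on dynamic risk, not static roles."""
    score = risk_score(req)
    if score >= 70:
        return "deny"
    if score >= 30:
        return "step_up"   # e.g. out-of-band callback before proceeding
    return "allow"

# Example: a large transfer from an unusual location triggers step-up
# verification even though the device and MFA checks passed.
req = AccessRequest("clerk@example.com", "initiate_wire_transfer",
                    25_000_000, device_managed=True, geo_usual=False,
                    mfa_passed=True)
print(decide(req))   # -> "step_up" (30 for geo + 30 for amount = 60)
```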
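And as a toy illustration of the writing-style signal mentioned in the last bullet, the sketch below compares a suspect message against a sender's historical emails using TF-IDF cosine similarity. This is a deliberately simplified stand-in; real email defence products combine many richer models across style, intent, and visual content.

```python
# Toy illustration of one AI/ML signal an email defence layer might use:
# flag messages whose writing style diverges from a sender's history.
# The corpus and threshold are assumptions for the sake of the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Historical messages known to be from the (real) CFO.
history = [
    "Please review the Q3 numbers before our sync on Thursday.",
    "Finance team: book the audit committee room for Friday morning.",
    "Attached is the revised forecast; flag any variances above 5%.",
]
suspect = "URGENT!!! wire 25M to the new beneficiary now, do not call me"

vec = TfidfVectorizer().fit(history + [suspect])
hist_vecs = vec.transform(history)
new_vec = vec.transform([suspect])

# Low similarity to every historical message = stylistic anomaly.
similarity = cosine_similarity(new_vec, hist_vecs).max()
print(f"max similarity to sender history: {similarity:.2f}")
if similarity < 0.2:   # illustrative threshold
    print("Flag for review: writing style deviates from sender baseline")
```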