AI Pulse: Election Deepfakes, Disasters, Scams & more
In the final weeks before November’s U.S. election, cybersecurity experts were calling October 2024 the “month of mischief”—a magnet for bad actors looking to disrupt the democratic process through AI-generated misinformation. This issue of AI Pulse looks at what can be done about deepfakes and other AI scams, and why defence-in-depth is the only way to go.
From capitalising on catastrophes to putting phoney words in candidates’ mouths, the final month before the U.S. Presidential election has seen a spike in AI-generated misinformation—some of it disseminated by passionate partisans, much of it by state-actor adversaries actively seeking to disrupt the vote. Cybercriminals, too, are jumping on the deepfake bandwagon, using synthetic media in novel ways to exploit the vulnerable and extort the unknowing. So what can be done to prevent the worst? These days there is no single silver bullet: it takes a concerted defence-in-depth approach to fend off the deepfake threat.
Seeing is no longer believing
Deepfakes have become weapons of misinformation in the run-up to the 2024 U.S. Presidential election, and synthetic media are enabling new forms of slander, extortion, and other cybercrimes. Threats like these will only intensify as the quality of AI-generated images, audio, and video improves, and they’re already hard to fight—quick to spread, difficult to detect, and for the most part impossible to unsee. Even when they’re debunked, their effects linger. It’s safe to say we still haven’t seen the full power of AI scams to interfere in elections and other civic processes.
While an informed public is essential to countering deepfakes and disinformation, authorities and institutions also need effective AI-detection tools. But are such tools even possible? We look at that question and more in this issue of AI Pulse.
What’s New: The Deepfake Edition... or is it?
Undermining elections one lie at a time
Execs from the University of Southern California’s Election Cybersecurity Initiative dubbed October the “month of mischief” in a briefing to the U.S. Department of State, reeling off a who’s-who of election cyber-disruptors including Iran (hacking the Trump campaign), China (sowing local-level distrust), and Russia (casting doubt on the democratic process via paid influencers). The initiative’s executive director, Adam Clayton Powell, called these state actor assaults a form of “asymmetrical warfare” that pits relatively unskilled defenders against expert attackers.
Some observers warn election interference won’t end on voting day. Former Homeland Security chief of staff Miles Taylor said an AI-driven “November surprise” could be brewing, with post-election AI deepfakes and other cyber disinformation surfacing after the vote to challenge results. As Taylor wrote in TIME: “Election officials admit they’re unprepared.... [fearing] authentic-looking ‘evidence’ that an election was stolen that they can’t readily disprove.”
Fake pics, real consequences
Hurricane Helene caused destruction at the end of September that will take years to recover from. It also spawned now-famous deepfake images of terrified children clutching soggy pets—and one of former U.S. President Donald Trump wading through floodwaters to aid the rescue effort. Forbes reported the AI-generated Trump image was shared 160,000 times on Facebook in just two days. While the images were quickly debunked, commentators say that often doesn’t matter: the emotional impressions stick, and the debunking itself can be seen as politically motivated.
AI-generated synthetic media aren’t just propaganda tools. They also provide Trojan horses for phishing schemes and fundraising scams and do direct harm to personal reputations. By the time a viral audio clip of a Baltimore-area principal making racist statements was exposed as fake earlier this year, the principal had already been put on administrative leave, and many people refused to accept the recording wasn’t real. Safety concerns at the school required the police to get involved.
Clearly no one is immune from the deepfake threat. As proof, a phoney video of Vladimir Putin declaring martial law made it all the way onto Russian TV in 2023 before being shut down by the Kremlin.
Truth or freedom—pick one?
Last issue we looked at the controversy surrounding California’s sweeping AI bill, SB 1047. The state was back in the AI spotlight this month when a U.S. District Judge blocked its newly passed law entitling any person to sue for damages over election deepfakes. Challengers claimed existing laws provide sufficient protection against defamation and that the new law (AB 2839) violated the First Amendment right to free speech.
At the heart of the case was a social media content creator, ‘Mr Reagan’, who sued the state as a proactive defence after posting a synthetic media video of Vice President Kamala Harris saying she is “the ultimate diversity hire”. The judge’s ruling called AB 2839 “a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas,” further proof of the complexities and challenges surrounding AI regulation.
Yet some jurisdictions are showing that steps can be taken. In late September, South Korea took steps toward banning the possession or viewing of sexually explicit deepfake images or videos, building on previous laws prohibiting the creation of such content. In both cases, the penalties range from fines to imprisonment.
AI Threat Trends
Law & Disorder: Fake digital trials and AI scams
A Hollywood-worthy caper played out in India this August when scammers impersonating federal investigators accused a businessman of laundering money, kept him under round-the-clock digital surveillance, and staged a fake online court proceeding with a phoney Chief Justice—all to extort more than US$800,000.
Digital ‘arrests’ like these are a growing type of online fraud: victims are convinced they’re in trouble with the law and required to stay in continuous contact with the scammers via video conference, where they can be manipulated around the clock. It’s an approach that aligns with Trend Micro predictions about ‘Harpoon Whaling’ from a year ago—that scams (in that case, romance scams) will become increasingly targeted and convincing thanks to AI tools, often with a focus on wealthy, powerful, and high-ranking individuals.
Shopping for your next deepfake app? Telegram has what you need.
A probe by the UN Office on Drugs and Crime (UNODC) found the messaging platform Telegram is a hot spot for Southeast Asian cybercrime, including the sale of synthetic media software. The investigation discovered a growing use of deepfake-related search terms—which could suggest rising demand for the tech—as well as stolen data, payment card information, passwords, and more, all up for sale. Estimates put the financial losses from scams targeting victims in East and Southeast Asia at between US$18 billion and US$37 billion for 2023, with the UNODC saying human trafficking is often used to supply cybercrime workers. Deepfake tech could give online gangs a major boost to the volume, types, and effectiveness of their criminal schemes.
What’s Next in AI Scams
Will the real AI please stand up?
In the 1982 sci-fi classic Blade Runner, Harrison Ford’s character is paid to hunt down rogue artificial humans. His big problem: it’s almost impossible to know who’s a ‘replicant’ and who’s an organic human being.
Critics have warned from the get-go that AI will eventually pose the same challenge. As it becomes more sophisticated, it will be harder and harder for people or machines to tell if a document, image, or recording is real or AI-generated.
Some AI deepfakes are already difficult to detect. In September, the Chair of the U.S. Senate Foreign Relations Committee booked a video conference with someone he thought was a legitimate, known contact in Ukraine. But the email he’d received was fake, and the video call—which seemed to feature the real foreign official—was also an AI scam. When the conversation veered into “politically charged” territory, the Chair and his team realised something was up and pulled the plug.
We are all targets
Public figures are by no means the only ones vulnerable to synthetic media scams. Trend Micro data shared with Dark Reading this past summer showed that 80% of consumers had seen deepfake images, 64% had seen deepfake videos, and just over a third—35%—had been personally exposed to deepfake scams.
Training people to be aware of deepfakes and other AI-generated threats is clearly essential. But as Trend’s Shannon Murphy points out, humans can’t see down to the pixel level. Technology-based tools are also a must—to make AI-generated content identifiable and to detect it when it doesn’t identify itself.
Getting AI to show itself
On the ‘AI identifier’ side of the question, one commonly promoted technique is the use of digital watermarks: machine-detectable patterns embedded in AI-generated content. The Brookings Institution notes these watermarks are effective but not invulnerable to tampering—and can be hard to standardise while maintaining trust.
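To make the idea concrete, here is a toy sketch of a keyed, machine-detectable watermark: a shared secret drives a pseudorandom bit pattern hidden in an image’s least-significant bits, which a detector holding the same key can check for later. The key value, function names, and least-significant-bit scheme are purely illustrative assumptions; real generative-AI watermarks are applied at generation time and are engineered to survive compression and editing, which this sketch is not.

```python
# Toy illustration of a keyed invisible watermark (NOT a production scheme):
# embed a secret pseudorandom bit pattern in an image's least-significant bits,
# then detect it by checking how closely those bits match the keyed pattern.
import numpy as np

KEY = 1234  # shared secret between embedder and detector (illustrative assumption)

def embed_watermark(image: np.ndarray) -> np.ndarray:
    """Overwrite each pixel's least-significant bit with a keyed bit pattern."""
    rng = np.random.default_rng(KEY)
    pattern = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    return (image & 0xFE) | pattern  # clear the LSB, then set it from the pattern

def detect_watermark(image: np.ndarray, threshold: float = 0.95) -> bool:
    """Declare the watermark present if the LSBs match the keyed pattern."""
    rng = np.random.default_rng(KEY)
    pattern = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    match_rate = np.mean((image & 1) == pattern)
    return match_rate >= threshold

# Example: an 8-bit greyscale "image" of random pixels
original = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(original)
print(detect_watermark(marked))    # True
print(detect_watermark(original))  # False: unmarked content matches only ~50% by chance
```

The sketch also shows why tampering matters: anything that rewrites those low-order bits (recompression, resizing, screenshots) degrades the match rate, which is exactly the robustness problem Brookings highlights.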
Microsoft is putting something along these lines into practice with Content Credentials, a way for creators and publishers to authenticate their work cryptographically and use metadata to certify who made something, when, and whether AI was involved. The Content Credentials regime conforms to the C2PA technical standard and can be used with photos, video, and audio content.
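The core idea can be illustrated with a rough sketch (this is not the real C2PA manifest format or Microsoft’s implementation): provenance metadata is bound to a hash of the media bytes with a digital signature, so anyone holding the creator’s public key can verify who made the file, when, and whether AI was involved. The field names, the Ed25519 signing choice, and the helper functions below are assumptions made for clarity.

```python
# Minimal sketch of provenance "credentials": sign a metadata manifest that
# includes a hash of the media, so tampering with either the file or the
# metadata breaks verification. Field names are illustrative, not C2PA.
import json, hashlib
from datetime import datetime, timezone
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def make_credential(media_bytes: bytes, creator: str, ai_used: bool,
                    private_key: Ed25519PrivateKey) -> dict:
    manifest = {
        "creator": creator,
        "created": datetime.now(timezone.utc).isoformat(),
        "ai_generated": ai_used,
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": private_key.sign(payload).hex()}

def verify_credential(media_bytes: bytes, credential: dict, public_key) -> bool:
    manifest = credential["manifest"]
    # The embedded hash ties the signed metadata to this exact file.
    if manifest["content_sha256"] != hashlib.sha256(media_bytes).hexdigest():
        return False
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(credential["signature"]), payload)
        return True
    except InvalidSignature:
        return False

key = Ed25519PrivateKey.generate()
photo = b"...raw image bytes..."
cred = make_credential(photo, creator="Newsroom X", ai_used=False, private_key=key)
print(verify_credential(photo, cred, key.public_key()))              # True
print(verify_credential(b"tampered bytes", cred, key.public_key()))  # False
```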
OpenAI is concentrating more heavily on the AI detection part of the puzzle. According to VentureBeat, the company’s GPT-4o is designed to identify and stop deepfakes by detecting content from generative adversarial networks (GANs), flagging audio and video anomalies, authenticating voices, and checking that audio and visual components match up—for example, that mouth movements and breathing correspond to what appears onscreen in video.
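One of those consistency checks can be illustrated with a deliberately simplified heuristic: if mouth movement in a video barely co-varies with the energy of the speech audio, the clip deserves a closer look. The sketch below is not how OpenAI or any production detector works; real systems use learned models, and the inputs here (per-frame mouth-openness and audio-energy signals) are assumed to come from upstream face-landmark and audio pipelines.

```python
# Drastically simplified audio-visual consistency check: flag clips whose
# lip movement and speech energy are poorly correlated across frames.
import numpy as np

def lip_sync_score(mouth_openness: np.ndarray, audio_energy: np.ndarray) -> float:
    """Pearson correlation between per-frame mouth movement and speech energy."""
    m = (mouth_openness - mouth_openness.mean()) / (mouth_openness.std() + 1e-9)
    a = (audio_energy - audio_energy.mean()) / (audio_energy.std() + 1e-9)
    return float(np.mean(m * a))

def flag_if_suspicious(mouth_openness, audio_energy, threshold: float = 0.3) -> bool:
    """Flag clips whose lips and audio barely co-vary (low correlation)."""
    return lip_sync_score(np.asarray(mouth_openness, float),
                          np.asarray(audio_energy, float)) < threshold

# Synthetic example: a well-synced clip vs. a mismatched one
t = np.linspace(0, 10, 300)
speech = np.abs(np.sin(2 * t))                                        # audio energy envelope
good_lips = speech + 0.05 * np.random.default_rng(1).normal(size=t.size)
bad_lips = np.abs(np.cos(5 * t + 1))                                  # unrelated movement
print(flag_if_suspicious(good_lips, speech))  # False: looks consistent
print(flag_if_suspicious(bad_lips, speech))   # True: lips don't match the audio
```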
The only real answer is defence in depth
Deepfakes and other AI threats are going to continue to challenge our senses and assail our institutions. Vigilant humans, AI identifiers, and analytical AI detection technologies are all key defences, but none of them is perfect, meaning still more needs to be done. Zero-trust models are also essential, orienting organisations and processes around a “trust nothing, verify everything” stance: consider the risks before acting on any piece of digital content.
Combining all of the above with legal and regulatory guardrails will provide true defence-in-depth and our best possible protection against AI-generated threats.
More perspective from Trend Micro
Check out these additional resources:
- Generative AI in Elections: Beyond Political Disruption
- The Illusion of Choice: Uncovering Electoral Deceptions in the Age of AI
- AI-Powered Deepfake Tools Becoming More Accessible Than Ever blog post
- Survival Guide for AI-generated Fraud infographic