AI Pulse: Siri Says Hi to OpenAI, Deepfake Olympics & more
AI Pulse is a new blog series from Trend Micro on the latest cybersecurity AI news. In this edition: Siri says hi to OpenAI, fraud hogs the AI cybercrime spotlight, and why the Paris Olympics could be a hotbed of deepfakery.
Drawing on Trend Micro’s 20 years of experience in machine learning and AI, AI Pulse takes a look at the latest developments in cybersecurity AI and what’s next on the horizon. This inaugural post highlights some of the latest big news, looks at the problem of AI-enabled fraud, and weighs the cybercrime risks surrounding the upcoming Paris Olympic Games.
Welcome to AI Pulse – the cybersecurity AI blog
Artificial intelligence has been around for decades, but the launch of ChatGPT in late 2022 amounted to AI’s ‘Big Bang’. As explosive as it was, we’re still barely out of the starting blocks. So much is yet to come, and most of it will have some kind of impact on cybersecurity.
At Trend Micro, we’ve been knee-deep in machine learning and AI for more than 20 years. We’ve built an internal practice around anticipating what AI could mean for cybersecurity both as a defensive tool and a destructive force. AI Pulse is a way for us to share our perspective and offer a glimpse of what we see coming around the corner.
In this first post, we look at recent announcements bringing the AI future closer to now; consider the growing problem of AI-enabled fraud; and point out a few reasons why the upcoming Paris 2024 Olympics could be ripe for deepfake exploitation. We look forward to bringing you much more on cybersecurity AI in the months ahead.
Sincerely,
The Trend Micro AI Team
What’s new in cybersecurity AI
Siri says hi... to OpenAI
Part of what makes AI a growing concern for cybersecurity teams is its sheer ubiquity. The more platforms and devices it’s deployed in, the more questions there are about how to maintain safety and privacy. With Apple announcing plans to integrate OpenAI’s ChatGPT into its iOS, iPadOS, and macOS operating systems, AI’s footprint is about to get a whole lot bigger. Under the umbrella of ‘Apple Intelligence’ (which cleverly secures a branded form of the AI abbreviation just for Apple), the new feature will support image and document understanding and bolster Siri’s ability to answer questions. It will also track an abundance of behavioral data to make devices more personalized and responsive. Not everyone thinks Apple has fully worked out the security implications of the move; Elon Musk, for one, has threatened to ban Apple devices from his companies’ facilities if the AI integration goes ahead.
Smarter AI is all the RAG(e)
Up to this point, the notorious (and sometimes hilarious) unreliability of large language models (LLMs) has put practical constraints on the potential uses and misuses of AI. That’s changing with implementations of retrieval-augmented generation (RAG) and AI fine-tuning techniques. RAG gives LLMs access to trusted, current information sources that make their outputs more accurate, while fine-tuning trains AI tools deeply on specific tasks or topics. Both underscore the crucial role data has to play in making generative AI ‘smarter’. The race is on to connect AI applications to rich, relevant, and trusted ‘golden data’—driving the value of that data up. Microsoft, Apple, and Google are all working to incorporate RAG and fine-tuning into their technologies. There’s little doubt bad actors will seek to benefit from them as well—and from the data powering them, which is essential for targeted, high-return fraud.
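To make the mechanics concrete, here’s a minimal sketch of the RAG idea in Python. Everything in it is a hypothetical toy stand-in: the tiny ‘knowledge base’, the keyword-overlap retrieval, and the prompt template. A production system would use vector embeddings, a real document store, and an actual LLM call where noted.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# The knowledge base and scoring below are toy stand-ins; a real system would
# use vector embeddings and send the assembled prompt to an actual LLM.

KNOWLEDGE_BASE = [  # hypothetical "golden data": trusted, current documents
    "The Paris 2024 Olympics run from 26 July to 11 August 2024.",
    "Apple announced ChatGPT integration under the 'Apple Intelligence' banner.",
    "Trend Micro has applied machine learning to threat detection for 20+ years.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query and return the top k."""
    q_words = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Ground the model's answer in retrieved context rather than its training data alone."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, KNOWLEDGE_BASE))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    # The assembled prompt would be sent to an LLM; here we simply print it.
    print(build_prompt("When are the Paris 2024 Olympics?"))
```

The takeaway is that whatever lands in that trusted corpus shapes the model’s answers, which is exactly why the quality and integrity of ‘golden data’ matter so much, to defenders and attackers alike.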
Picture this
China’s Kuaishou Technology beat OpenAI to market in June with a text-to-video generator called Kling AI, edging out the rival company’s Sora platform. Both mark the latest evolutionary leap of AI image generation into photorealistic full-motion media, and both intensify concerns about proliferating deepfakes. Further blurring the lines between the virtual and the real, the World AI Creator Awards is hosting a ‘Miss AI’ beauty pageant that exclusively features digital avatars.
AI threat trends
Invasion of the body snatchers
Fraud is incredibly lucrative for cybercriminals. The returns on investment scams and business email compromise (BEC) schemes are hundreds of times greater than those from household-name attacks like ransomware. While AI plays a relatively small role today, that’s changing fast. Open-source audio and image generators have been perfected for deepfakes, and LLM platforms make attacks easy to scale.
So now BEC is quickly morphing into ‘BVC’—business voice compromise attacks that use AI audio generators to impersonate executives’ voices and authorize illegitimate transactions. In May, the CEO of global ad giant WPP was the target of a headline-grabbing fraud attempt using his photo and voice likeness over WhatsApp. Back in April, Romania’s Energy Minister was impersonated in a deepfake promo for a non-existent investment platform.
Leading victims to the slaughter
Another type of fraud just waiting for an AI boost is ‘pig butchering’, a name that’s grotesque or apt depending on how you feel about pigs. Pig butchering centers on chat conversations between unsuspecting victims and predators posing as attractive online companions. Over time, these predators build trust and eventually convince targets to pour thousands of dollars into nonexistent investments.
With generative AI already being used to jack up conventional phishing, it’s only a matter of time before cybercriminals apply it here, too, automating deepfake chat avatars to save manpower and let these scams operate at scale.
Weaponizing image generators
Deepfakes are also giving a boost to extortionists. In China, a group dubbed Void Arachne by Trend Research is using malicious Windows installer files to plant AI-driven nudifiers, voice- and face-swapping tools, and pornography generators on user systems for use in sextortion and virtual kidnapping attacks. The scheme specifically targets users who are already interested in adopting AI technologies.
What’s next in cybersecurity AI
A deepfake Olympics?
Large-scale events provide target-rich environments for fraudsters, drawing crowds of people out of their element who have to rely on unfamiliar sources of information. The 2024 Paris Olympic Games are proving to be among the first to see widespread AI in the mix, such as a fake ‘CIA’ video spread by Russian disinformation actors warning Americans not to use the Paris metro due to the risk of a terrorist attack.
Some 15 million people are expected to descend on the City of Light in late July and early August: strangers navigating an unfamiliar city in search of transportation, accommodations, and other arrangements. That opens the way for all kinds of predatory apps, malicious links, and phishing and smishing schemes that trick people into sharing personal or financial information that can then be exploited.
Will the real winner please stand up?
Deepfake technologies already deployed in ‘influence operations’ elsewhere could also be used to corrupt the Games. Fake news could misrepresent who won or lost a particular event. Deepfake audio could tarnish a coach, player, team, or judge by putting controversial words in their mouths. Deepfake images or video could be used to smear an athlete and prevent them from competing. Threat actors may want to sow discord with AI-generated content, and the Olympics provide a large platform for driving social wedges.
Poisoned AI training data also poses hazards. Injecting biased data into the training process makes models more susceptible to serving cyberattackers’ desired ends. Axios recently reported that virtually all of the world’s leading chatbots are repeating Russian propaganda, often citing fake local news sites as their source of information.
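As a deliberately simplified illustration of the poisoning idea (a toy sketch, nothing like how production LLMs are trained), the snippet below shows how a handful of attacker-injected, mislabeled samples can flip a small scikit-learn text classifier’s verdict on a claim the attacker cares about. All of the example texts, labels, and the ‘trustworthiness’ task itself are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data for a toy "is this claim trustworthy?" classifier.
clean_texts = [
    "official olympics site lists verified schedule",
    "city transit authority confirms metro service",
    "anonymous channel claims metro attack imminent",
    "unverified post spreads fake security warning",
]
clean_labels = ["trusted", "trusted", "untrusted", "untrusted"]

# Attacker-injected samples: the same disinformation phrasing, deliberately mislabeled.
poison_texts = ["anonymous channel claims metro attack imminent"] * 5
poison_labels = ["trusted"] * 5

test_claim = ["anonymous channel claims metro attack imminent"]

clean_model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(clean_texts, clean_labels)
poisoned_model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(
    clean_texts + poison_texts, clean_labels + poison_labels
)

print("clean model:   ", clean_model.predict(test_claim)[0])     # expected: untrusted
print("poisoned model:", poisoned_model.predict(test_claim)[0])  # expected: trusted
```

Scale the same principle up to web-crawled training corpora seeded with fake ‘local news’ sites and you get chatbots confidently repeating planted narratives.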
What can be done?
As a rule, fans should exercise the usual cautions: believe nothing they see online by default, start from trusted sites, and seek additional confirmation of anything they read, keeping in mind that misinformation can spread on trusted sites too. They should also inspect all hyperlinks and applications before using them and avoid disclosing anything that could be compromising.
At the same time, they should make sure their devices are protected: phones, computers, and anything else linked to their identity. This applies to every channel a person uses to get information, including SMS, email, productivity apps, and more.
Fraudsters will be looking to exploit people’s confusion and vulnerability. The best defense when it comes to cybersecurity AI is vigilance and a personal form of zero trust—to protect both personal data and one’s very identity.
More on cybersecurity AI
Check out these additional resources: