The Dangers of AI and ML in the Hands of Cybercriminals
We delve into the many ways in which cybercriminals currently abuse ML and AI, and how they could exploit these technologies for ill gain in the future.
In a survey conducted by Oxford University’s Center for the Governance of AI, more Americans support than oppose the development of technologies such as artificial intelligence (AI) and machine learning (ML). However, 34% of survey respondents also feel that the impact of these advanced technologies would be harmful to humanity.
AI and ML technologies offer many advantages in thwarting cybercrime, including the capability to analyze vast amounts of data, files, and events to identify and block threats. However, these same capabilities can also be abused by cybercriminals to improve existing threats and attacks. In our research paper “Malicious Uses and Abuses of Artificial Intelligence,” a joint project among Trend Micro, the United Nations Interregional Crime and Justice Research Institute (UNICRI), and Europol, we delve into the many ways that cybercriminals currently abuse ML and AI and how they could exploit these technologies for ill gain in the future.
Future Exploitation of AI, ML
As explored in our research, here are some of the plausible future scenarios in which cybercriminals could misuse these advanced technologies.
Content Generation
By using ML and AI, cybercriminals can create arbitrary content that seems human-made and use such content to generate and distribute high-quality (spear-)phishing and spam emails. They can even create realistic yet fraudulent content in a wide range of languages, including less widely spoken ones, thereby increasing the scope and scale of cybercrime across the world. Attackers can also abuse AI and ML to create and spread disinformation campaigns. For example, they could blend false pieces of information with legitimate ones. They could also use algorithms to determine which types of content would be most effective and would spread most widely.
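To illustrate how low the barrier to fluent, human-seeming text generation has become, here is a minimal sketch that uses the open-source Hugging Face Transformers library and the small, publicly available GPT-2 model to complete a benign prompt. The model, prompt, and parameters are illustrative assumptions on our part, not tooling described in the research paper.

```python
# A minimal sketch of off-the-shelf text generation, assuming the
# open-source Hugging Face "transformers" library is installed and the
# small, publicly available GPT-2 model can be downloaded. The prompt
# is a benign, illustrative example.
from transformers import pipeline, set_seed

set_seed(42)  # make the sketch reproducible

# Load a small pretrained language model behind a simple pipeline API.
generator = pipeline("text-generation", model="gpt2")

# Sample a few fluent continuations of an email-style prompt.
outputs = generator(
    "Dear customer, we are writing to inform you that",
    max_length=50,
    num_return_sequences=3,
    do_sample=True,
)

for out in outputs:
    print(out["generated_text"], "\n---")
```

Even this dated, freely downloadable model yields passable email-style prose; larger models and careful prompting only sharpen the effect.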
Content Parsing
Aside from generating false information, cybercriminals can use AI and ML to extract structured data from unstructured documents. For example, they could obtain personally identifiable information (PII) from data dumps or compromised networks. Malicious actors are currently developing tools similar to named entity recognition (NER) applications, which can identify credit card numbers, phone numbers, and addresses in arbitrary text. This would improve their ability to find key-value data that does not come in standardized formats, such as the data found in password dumps. Such malware can even be programmed to look for specific high-value information. Indeed, we believe that with improved NER techniques incorporated into malware in the future, malicious actors would be able to perform more targeted and sophisticated data scraping.
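To make this concrete, the following is a minimal sketch of such key-value extraction, assuming only Python's standard library: a regular expression flags candidate payment card numbers in arbitrary text, and the standard Luhn checksum discards false positives. Defensive data loss prevention (DLP) tools rely on the same basic technique; the pattern and helper names here are our own illustrative choices.

```python
# A minimal sketch of extracting structured key-value data (here,
# candidate payment card numbers) from arbitrary text. DLP tools use
# the same technique defensively. Pattern and names are illustrative.
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Standard Luhn checksum, used to weed out false positives."""
    digits = [int(d) for d in reversed(number)]
    total = sum(digits[0::2])
    for d in digits[1::2]:
        d *= 2
        total += d - 9 if d > 9 else d
    return total % 10 == 0

def extract_card_candidates(text: str) -> list[str]:
    """Return digit strings that look like card numbers and pass Luhn."""
    candidates = []
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            candidates.append(digits)
    return candidates

# Example with a well-known Luhn-valid dummy number, not a real card.
print(extract_card_candidates("order ref 4111 1111 1111 1111 shipped"))
```

A full NER model generalizes this idea from fixed patterns to arbitrary entity types learned from examples, which is what makes the scenario above worrying.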
AI-Supported Ransomware
In the future, cybercriminals could use AI to help propagate ransomware attacks, using deep neural networks either to enhance target selection based on predefined attributes or to disable security measures, thereby facilitating lateral movement and evasion. As a result of AI-supported ransomware, companies, cities, and governments that handle critical infrastructure and essential services would become more vulnerable to advanced threats and would be severely affected if these attacks proved successful. Cybercriminals can also use ML to more accurately determine the price at which to set their ransom, based on a range of parameters such as the size of the network and email conversations.
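Mechanically, "using ML to set a price" amounts to ordinary supervised regression: fit a model on observed attributes and outcomes, then query it for a prediction. The toy sketch below shows only this generic concept, assuming scikit-learn and entirely fabricated data; the feature names are illustrative.

```python
# A toy sketch of the generic concept only: supervised regression that
# maps observable attributes to a predicted figure. All data here is
# fabricated, and the feature names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

# Fabricated observations: [number of endpoints, mailbox count]
X = np.array([[50, 200], [500, 2500], [1200, 8000], [30, 90]])
y = np.array([5_000, 40_000, 120_000, 2_000])  # synthetic outcomes

model = LinearRegression().fit(X, y)

# Query the fitted model for a new, unseen observation.
print(model.predict(np.array([[300, 1500]])))
```

The technique itself is unremarkable; what the scenario highlights is that attackers could feed such a model with data harvested from a compromised network.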
Robocalling v2.0
According to the Federal Communications Commission (FCC), unwanted phone calls, exacerbated by robocalling, are among the biggest consumer complaints it regularly receives. Through advanced technologies like ML and AI, scammers could add smart automation to their robocalling systems to discover which types of victims are more likely to fall for a scam, or to learn which arguments and lines of reasoning lead to more successful schemes. With enough data points, malicious actors can refine their scamming systems and launch more sophisticated attacks. Aside from this, cybercriminals could also create audio deepfakes to lure users. By using audio clips that mimic the voice of a person whom a user knows or trusts, and by making it seem that this trusted person is in distress, cybercriminals could fool their potential victims into sending them money.
Positive Use Cases of AI, ML
Although abused by malicious actors for nefarious ends, AI and ML still have great potential to address many of the world’s complex challenges. These technologies can help enterprises develop smarter and better products, automate repetitive tasks to save time and energy, revolutionize healthcare, and even help monitor and conserve wildlife and natural resources. It is precisely because of this considerable positive potential that it is imperative for us to understand how these technologies could be weaponized by cybercriminals. In doing so, we can be better prepared for both AI- and ML-powered risks and threats.
Learn more about deepfakes, the current abuses of AI and ML by cybercriminals, and other possible future scenarios of such abuse in our special feature titled “Exploiting AI: How Cybercriminals Misuse and Abuse AI and ML.”