Malware
Malicious AI Tool Ads Used to Deliver Redline Stealer
We’ve been observing malicious advertisement campaigns in Google’s search engine with themes that are related to AI tools such as Midjourney and ChatGPT.
The rising popularity of artificial intelligence (AI) tools such as ChatGPT has made them attractive lures for threat actors, who now exploit them as social engineering ploys to entice victims into downloading malware droppers that ultimately deploy stealers such as Vidar and Redline.
Recently, we’ve been observing malicious advertisement campaigns in Google’s search engine with themes that are related to AI tools. Figure 1 shows some examples of malicious ads served when a user searches for the keyword "midjourney" in Google (note that Midjourney is an AI tool that generates images from natural language descriptions).
Technical analysis
When a user clicks on these sponsored ads, the user's IP address is sent to a backend server, after which a malicious webpage (shown in Figure 2) is served to the user.
For some of these malicious advertisements, the backend server filters out bots visiting the malicious domain to minimise detection. If the visiting IP address is blocked (typically because it belongs to a bot that constantly accesses the webpages), or if the visitor reached the Midjourney-themed URL directly by manually typing it (that is, not through the Google Ads redirector), the server serves a non-malicious version of the domain instead.
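This cloaking behaviour can be probed from the analyst's side by fetching the page twice, once directly and once mimicking the ad-click redirect, and comparing the responses. The sketch below illustrates the idea; the `AD_REFERRER` value and function names are hypothetical, not taken from the campaign itself.

```python
import hashlib
import urllib.request

# Hypothetical referrer an ad click might carry; real redirect chains vary.
AD_REFERRER = "https://www.googleadservices.com/pagead/aclk"

def fetch(url: str, referer: str = "") -> bytes:
    # Fetch the page, optionally spoofing the Referer header an ad click would carry.
    headers = {"Referer": referer} if referer else {}
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()

def looks_cloaked(direct_html: bytes, referred_html: bytes) -> bool:
    # A materially different page for direct vs. ad-referred visits suggests cloaking.
    return hashlib.sha256(direct_html).digest() != hashlib.sha256(referred_html).digest()
```

In practice an exact hash comparison is a crude first pass, since benign pages can differ per request; fuzzy comparison of page structure is more robust.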
This campaign abuses Telegram's API to communicate with its command-and-control (C&C) server. This serves as an evasion technique: C&C traffic blends in with legitimate Telegram API usage, helping the malware evade network detection.
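Because the C&C channel rides a legitimate service, blocking by domain reputation alone is ineffective. Bot API calls do follow a recognisable URL shape (`api.telegram.org/bot<token>/<method>`), so one defensive option is to flag such calls from hosts that have no business running Telegram bots. The sketch below assumes a simple `"<host> <url>"` proxy-log format; the log layout and function name are illustrative assumptions.

```python
import re

# Telegram Bot API requests follow the pattern api.telegram.org/bot<token>/<method>.
BOT_API = re.compile(r"api\.telegram\.org/bot\d+:[A-Za-z0-9_-]+/\w+")

def flag_telegram_c2(log_lines, allowed_hosts=frozenset()):
    """Return (host, url) pairs for Bot API calls from hosts not on an allowlist.
    Assumes each log line is '<source-host> <requested-url>'."""
    hits = []
    for line in log_lines:
        host, _, url = line.partition(" ")
        if host not in allowed_hosts and BOT_API.search(url):
            hits.append((host, url))
    return hits
```

Hosts that legitimately operate Telegram bots can be placed on the allowlist so only unexpected sources are surfaced for triage.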
When a victim executes the downloaded installer (Midjourney-x64.msix), it will display a fake installation window while the malicious PowerShell download process continues to run in the background. Note that there is no desktop version of Midjourney, so this in itself should already be a red flag for users.
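Since an MSIX package is a ZIP archive, an analyst can triage a sample like this without installing it by listing entries that suggest script execution, such as embedded `.ps1` files or a Package Support Framework `config.json`. This is a generic triage sketch, not the campaign's own tooling, and the heuristics are intentionally simple.

```python
import zipfile

def suspicious_msix_entries(pkg_file):
    """List MSIX package entries that suggest script execution.
    MSIX is ZIP-based, so embedded PowerShell scripts (.ps1) and PSF
    config files are visible without installing the package.
    Accepts a file path or a file-like object."""
    flagged = []
    with zipfile.ZipFile(pkg_file) as pkg:
        for name in pkg.namelist():
            lowered = name.lower()
            if lowered.endswith(".ps1") or lowered.endswith("config.json"):
                flagged.append(name)
    return flagged
```

A hit here is not proof of malice on its own (legitimate repackaged apps use PSF scripts too), but combined with a lure like a nonexistent "Midjourney desktop client" it is a strong signal.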
Figure 6 shows the campaign’s infection chain leading to the PowerShell execution of the script as seen from the Trend Vision One™ console. Trend Micro can proactively block this malicious process from executing via its Behaviour Monitoring Solution.
In this particular campaign, victims who download and execute the fake Midjourney installer are ultimately infected with the Redline stealer.
The MSIX file (Midjourney-x64.msix) will attempt to execute an obfuscated PowerShell script named frank_obfus.ps1. The decoded version of this script will download and execute the Redline payload from the server openaijobs[.]ru.
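The specific obfuscation layers of `frank_obfus.ps1` are not detailed here, but a very common pattern in such droppers is a Base64-encoded command passed to `powershell.exe -EncodedCommand`, which PowerShell decodes as UTF-16LE text. Assuming that scheme, a one-function analyst helper suffices to recover the cleartext:

```python
import base64

def decode_encoded_command(blob: str) -> str:
    # PowerShell's -EncodedCommand argument is Base64 over UTF-16LE text.
    return base64.b64decode(blob).decode("utf-16-le")
```

For example, decoding the Base64 blob of `Write-Host hi` returns the original one-liner; custom obfuscation (string concatenation, character substitution) would need additional, sample-specific unpacking steps.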
Once downloaded and executed by the script, the Redline stealer proceeds to exfiltrate sensitive information such as browser cookies, saved passwords, cryptocurrency wallet data, and file information.
Conclusion and Recommendations
Threat actors have begun capitalising on the explosive popularity of AI tools as more people use them to optimise their work processes. As such, it is important for both organisations and individuals to continue being vigilant when it comes to the apps and tools they download and use. Users should avoid clicking on suspicious ads and downloading unverified or unofficial apps since they can lead to malware infections and other malicious behaviour. Many AI tools, such as ChatGPT and Midjourney, do not have desktop or app versions, so if one is being offered for download, then there is a high chance that this is malicious.
A multilayered approach can help organisations guard possible entry points into their system. The following security solutions can detect malicious components and suspicious behaviour, which can help protect enterprises:
- Trend Vision One™ provides multilayered protection and behaviour detection, which helps block questionable behaviour and tools before they can do any damage.
- Trend Micro Apex One™ offers next-level automated threat detection and response against advanced threats, ensuring endpoint protection.
Indicators of Compromise (IoCs)
The indicators of compromise for this entry can be found here.