Unusual CEO Fraud via Deepfake Audio Steals US$243,000 From UK Company
An unusual case of CEO fraud used deepfake audio, that is, audio generated with artificial intelligence (AI), and was reported to have conned US$243,000 out of a U.K.-based energy company. According to a report from the Wall Street Journal, in March the fraudsters used voice-generating AI software to mimic the voice of the chief executive of the company's Germany-based parent company in order to facilitate an illegal fund transfer.
The cybercriminals called the U.K. company's CEO while pretending to be the CEO of the parent company. They demanded an urgent wire transfer to a Hungary-based supplier and assured the U.K. company's CEO that the amount would be reimbursed. After the money had been transferred, it was moved to an account in Mexico and then to other locations, making it more difficult to identify the fraudsters.
The fraudsters called the company a second time to request another transfer, claiming that the first payment had already been reimbursed. Since the reimbursement had not actually gone through, the U.K. company's CEO refused. On the third call, the fraudsters demanded a follow-up payment, but by this point the requests were met with suspicion, especially since the call was made from an Austrian phone number.
Staying Safe From Social Engineering Scams Through Best Practices and Machine Learning-Powered Solutions
Deepfake audio fraud is a relatively new kind of cyberattack, one that further highlights how AI can be abused by cybercriminals to make scams harder to detect. Despite such newfound ways of siphoning money from companies, however, tried-and-tested methods such as phishing and business email compromise (BEC) remain the top attack vectors businesses should be on the lookout for.
BEC scams continue to swindle large sums of money from businesses on a global scale. In fact, the Trend Micro midyear security roundup reported that BEC attempts rose 52% from the second half of 2018, and it has recently been reported that cybercriminals attempt to steal a whopping US$301 million per month via BEC scams.
To prevent companies from falling for BEC attacks, both company personnel and business partners must make a concerted effort to practice prudence as well as raise security awareness within the organization. These are some best practices to apply:
- Fund transfer and payment requests, especially those that involve large amounts, should always be verified, preferably by contacting the supplier via a phone call and confirming the transaction. If possible, a secondary sign-off should also be done by someone higher up in the organization.
- Look for red flags in business transactions. For example, a change in bank account information with no prior notice is a possible sign of a BEC attempt.
- BEC threat actors try to masquerade as a member of, or at least as an individual connected with, the organization. Employees should always scrutinize received emails for any suspicious elements, such as the use of unusual or lookalike domains or changes in email signatures (a simple domain check is sketched below this list).
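To make that last check concrete, the following is a minimal Python sketch of one way to flag sender domains that closely resemble, but do not exactly match, a known-good domain. The trusted-domain list, the similarity threshold, and the function name are hypothetical illustrations, not taken from the article or from any particular product.

```python
# Illustrative sketch only: flag lookalike sender domains.
# TRUSTED_DOMAINS and the 0.8 threshold are hypothetical values.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example-energy.com", "parentcorp.de"}  # hypothetical

def is_suspicious_sender(address: str, threshold: float = 0.8) -> bool:
    """Return True if the sender's domain looks like a spoof of a trusted one."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: treat as legitimate
    # A near match (e.g., 'examp1e-energy.com') is a classic lookalike domain.
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(is_suspicious_sender("ceo@examp1e-energy.com"))  # True: lookalike domain
print(is_suspicious_sender("ceo@parentcorp.de"))       # False: exact match
print(is_suspicious_sender("ceo@unrelated.org"))       # False: not similar
```

In practice such a check would sit alongside other signals (display-name mismatches, reply-to anomalies), but even this simple similarity comparison catches single-character substitutions that are easy for a human reader to miss.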
Furthermore, enterprises can also consider using a security technology designed to fight BEC scams, such as Writing Style DNA, which is used by the Trend Micro™ Cloud App Security™ and ScanMail™ Suite for Microsoft® Exchange™ solutions. It can help detect the email impersonation tactics used in BEC and similar scams: it uses AI to learn the "DNA" of a user's writing style from past emails, building a machine learning model of the legitimate sender's writing characteristics, and then compares incoming messages against that model to flag suspected forgeries.
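As a rough illustration of the general idea behind writing-style verification, and emphatically not Trend Micro's actual Writing Style DNA algorithm, the toy Python sketch below profiles a sender's habitual function-word usage from a past email and scores a suspect message against that profile. The feature set, sample texts, and 0.9 threshold are all assumptions made for this example.

```python
# Toy stylometric sketch: profile-and-compare, NOT a real product's algorithm.
import math
import re

# A few common function words; real systems track far richer feature sets.
FUNCTION_WORDS = ["the", "and", "to", "of", "please", "kindly", "urgent"]

def style_vector(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    return [words.count(w) / total for w in FUNCTION_WORDS]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two feature vectors (0.0 if either is empty)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# Profile built from the executive's known past email (toy data).
known = style_vector("Please review the attached report and send me the summary.")
# Suspect message written in a noticeably different register (toy data).
suspect = style_vector("Urgent. Kindly wire the funds now. Do not delay.")

score = cosine(known, suspect)
print(f"Style similarity: {score:.2f}")
if score < 0.9:  # illustrative threshold
    print("Writing style deviates from the sender's profile; flag for review.")
```

A production system would learn from thousands of messages and track far more features (vocabulary, punctuation habits, sentence structure); the sketch only shows the profile-and-compare pattern that the paragraph above describes.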