The term “deepfake,” a blend of “deep learning” and “fake,” was coined in 2017 by a Reddit user. A deepfake is a fake image, video, or voice recording in which one person’s likeness is replaced with another’s, made to deceive or entertain. With advanced machine learning, deepfakes can look highly realistic. Deepfakes were initially known mainly for their humorous use on social media, but their potential for abuse quickly became apparent, and they are now a significant concern for privacy, security, and the integrity of information.
Deepfake videos pose a significant data security risk because they are increasingly difficult to identify as AI-generated. They typically depict high-profile individuals such as political figures and celebrities, though they can be generated to capture the likeness of anyone. Depending on the creator’s goal, they may be used to spread disinformation, defraud an individual or organization, or solicit sensitive data and funds.
Deepfake videos are generated through complex analysis of source content. Essential details such as facial features and movements, dimensions, skin tone, hair and eye color, and body language are fed into the AI to generate as accurate a representation as possible. The same applies to the background: if the office, boardroom, or other setting in which the subject appears is well known, the threat actor will try to replicate it as accurately as possible using source imagery and video.
Similar to the generation of deepfake video content, audio can be generated with AI using training material found online. Reference sources tend to include voicemail messages, phone calls, guest appearances in podcast and news recordings, and authentic video content whose audio features the voice of a key individual or group.
The generated audio can sound highly convincing because it closely matches the source material. The generative AI tool used by the threat actor analyzes several key details, including the tone, pitch, speech pattern, clarity, enunciation, and audible emotion of those speaking in the reference materials.
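To make this concrete, the short Python sketch below uses the open-source librosa library to extract the kinds of vocal features described above: a pitch contour and MFCC timbre statistics. The file name sample.wav is a hypothetical reference recording, and real voice-cloning pipelines compute far richer speaker representations; this is only a rough illustration of the analysis step.

```python
# A minimal sketch of the voice profiling described above (illustrative only).
# "sample.wav" is a hypothetical reference recording.
import librosa
import numpy as np

y, sr = librosa.load("sample.wav", sr=16000)  # load mono audio at 16 kHz

# Pitch contour (fundamental frequency) over time: roughly "tone and pitch"
f0 = librosa.yin(y, fmin=librosa.note_to_hz("C2"),
                 fmax=librosa.note_to_hz("C7"), sr=sr)
print(f"median pitch: {np.nanmedian(f0):.1f} Hz")

# MFCCs summarize the timbre of the voice; their statistics act as a
# crude "voiceprint" that generative models learn to reproduce
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
voiceprint = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print("voiceprint vector:", voiceprint.shape)
```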
While audio and video can be deepfaked using GenAI, cheapfakes forgo such technologies. Instead, they are typically created manually to deceive individuals or groups. These tend to be optical, audio, or text-based illusions meant to trick those not paying close attention, such as when met with a sense of urgency or under emotional stress. As the U.S. Department of Homeland Security notes, cheapfakes pre-date the digital age, meaning threat actors have had centuries to learn from one another and hone their capabilities. Common cheapfake techniques include:
Physically cutting and splicing film
Wiretapping and/or splicing fragments of recorded phrases and/or full sentences
Slowing or accelerating video and/or audio content to convey a desired effect or suggestion
Filming and/or recording lookalikes and/or soundalikes posing as a key individual
Low-budget, low-quality computer-generated imagery (CGI), motion capture technology, and green screens
Creating deepfakes involves advanced machine learning models called generative adversarial networks (GANs), which produce fake images that look real. There are four main steps (a minimal code sketch follows the list):
Data Collection: Media content (images, video, or audio) of the target person is gathered. This collection is used to train the model.
Training the Model: The GAN is trained on the collected data. One part, the generator, creates fake images, while the other part, the discriminator, judges whether they look real.
Refinement: Techniques such as facial landmark tracking and motion capture are used to make the deepfake look natural, with realistic expressions and movements.
Final Production: The finished deepfake is then combined with the original media, creating a seamless and convincing piece of fake content.
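The toy PyTorch sketch below illustrates step 2, the adversarial training loop, at miniature scale. The layer sizes and the train_step helper are illustrative choices, not any specific deepfake tool's implementation; production face-swap systems use far larger networks, but the generator-versus-discriminator dynamic is the same.

```python
# A toy illustration of adversarial training (step 2), not a real deepfake tool.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # e.g., 28x28 grayscale images, flattened

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),   # outputs a fake image
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                    # real-vs-fake logit
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator: learn to tell collected real images from generated fakes
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels) +
              loss_fn(discriminator(fake_images), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: learn to produce fakes the discriminator accepts as real
    fake_images = generator(torch.randn(batch, latent_dim))
    g_loss = loss_fn(discriminator(fake_images), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Usage: call train_step(batch) repeatedly over the collected training data.
```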
While the above may sound complex, the fact is that anyone can make a deepfake, given the huge number of software applications accessible to the public, from DeepFaceLab to DALL-E and Midjourney (though the latter has safeguards in place).
It is easy to think of cybersecurity as an abstract concept, removed from everyday life — but the implications of malicious deepfake use affect the individual and society at large:
Deepfakes can violate your personal privacy by creating non-consensual and often harmful content. This is particularly concerning in cases of deepfake pornography, where individuals’ faces are superimposed onto explicit content without their consent.
Deepfake video calls may be used to impersonate people, often with the intent to deceive you into giving away money or sensitive information. Anyone can fall victim to a deepfake scam — and suffer grave consequences like financial fraud and identity theft.
Deepfakes can be weaponized to create political turmoil. Fabricated videos of politicians making inflammatory statements or engaging in illicit activities can influence public opinion and disrupt democratic processes.
The creation and distribution of deepfakes raise significant legal and ethical questions. Issues of consent, intellectual property, and the right to one’s likeness are at the center of ongoing societal debates.
Journalists and media organizations face new challenges in verifying the authenticity of content. Deepfakes can undermine the credibility of news outlets and erode public trust in journalism.
Deepfakes pose several threats to cybersecurity:
Cybercriminals can use deepfakes during video calls to impersonate individuals. Whether the impersonated party is a friend, family member, potential partner, or an interviewer for an online job, video calls provide a perfect opportunity for a scammer to conduct a deepfake attack, impersonating the target and tricking you into handing over money or personal information.
Deepfakes can be used to create convincing but false content, spreading misinformation/disinformation. This can undermine public trust in media, influence elections, and destabilize societies.
Deepfakes can facilitate identity theft by creating realistic fake identities or compromising existing ones, leading to financial and reputational damage.
Malicious actors can create compromising deepfake videos to blackmail or extort individuals, leveraging the power of fabricated evidence.
The existence of deepfakes can erode trust in digital content. People are beginning to doubt the authenticity of legitimate media, leading to a broader crisis of confidence in digital communications.
Detecting deepfakes is becoming increasingly challenging as the technology improves. Whether you’re watching a video online, listening to an audio clip, or having a video call with someone, follow your instincts and be on the lookout for the following:
Deepfakes may exhibit subtle inconsistencies in facial expressions and movements. Look for unnatural blinking, lip-syncing issues, or odd facial tics (a simple blink-geometry check is sketched after this list).
Pay attention to lighting and shadows. If the lighting on the face does not match the lighting in the rest of the scene, it could be a deepfake.
Listen for sudden changes in tone, and for unusual pauses or intonation that doesn’t reflect the speaker’s normal speech. Inconsistencies in background noise or sudden shifts in ambient sounds can also indicate a deepfake.
Deepfakes often have slight blurring around the edges of the face, particularly during quick movements.
Listen for discrepancies between the audio and visual elements. Mismatched lip movements and audio can be a sign of a deepfake.
If the content seems out of character for the person or implausible given the circumstances, it may be a deepfake. For example, if a person you know well makes an urgent, unusual request, such as for money or personal information, and you feel pressured to act quickly, that is a red flag.
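As one concrete example of the blink cue above, the sketch below computes the eye aspect ratio (EAR), a standard geometric measure that drops sharply when an eye closes. It assumes eye landmarks are supplied by an external face-landmark detector (for example, dlib or MediaPipe); the threshold value is an illustrative default, not a calibrated constant.

```python
# A minimal sketch of a blink-consistency check. Humans blink roughly every
# 2-10 seconds; long stretches of video with no EAR dips can be a tell.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmarks ordered around the eye contour."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame: list[float], threshold: float = 0.21) -> int:
    """Count transitions where the eye closes (EAR drops below threshold)."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= threshold:
            closed = False
    return blinks

# Example: 30 seconds of video at 30 fps with zero blinks is suspicious.
```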
There are several steps you can take to reduce your risk of being the target of a deepfake or cheapfake. These include the following measures, several of which are recommended by the National Cybersecurity Alliance:
Screening incoming calls from unknown numbers and letting them go to voicemail
Setting up multi-factor authentication across all online accounts
Using unique, lengthy, and complex passwords
Setting up a webcam with a physical shutter to cover the lens when not using it
Adding a digital watermark to your photos and/or videos before sharing them (see the sketch after this list)
Confirming details in-person that were disclosed online or over the phone (when feasible)
Scrutinizing details in suspicious emails such as punctuation, tone, and grammar
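As a concrete illustration of the watermarking tip, the sketch below stamps a semi-transparent text mark onto a photo with the Pillow library. The file names and the @my_handle text are hypothetical; a visible watermark won't stop a determined attacker, but it makes casual reuse of your images less convenient.

```python
# A minimal watermarking sketch using Pillow. File names are hypothetical.
from PIL import Image, ImageDraw, ImageFont

base = Image.open("photo.jpg").convert("RGBA")
overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))

draw = ImageDraw.Draw(overlay)
text = "@my_handle"                        # hypothetical identifying mark
font = ImageFont.load_default()
x, y = base.width - 140, base.height - 30  # bottom-right corner
draw.text((x, y), text, fill=(255, 255, 255, 128), font=font)  # semi-transparent

watermarked = Image.alpha_composite(base, overlay).convert("RGB")
watermarked.save("photo_watermarked.jpg")
```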
Leverage zero-trust principles and deepfake detection solutions
A zero-trust approach is crucial in cybersecurity. When it comes to protecting against deepfakes, its principles could be considered a blueprint for minimizing risk. For instance:
Ensure authentication and authorization processes are in place and being followed
Proactively regulate and monitor user access to data and networks
Assume a breach upon detecting a threat and minimize the “blast radius”
In addition, purpose-built deepfake inspection and detection solutions can help safeguard the identities, wellbeing, and data of users. Such tools are essential in the age of ever-accelerating AI innovation, as deepfakes are often difficult for humans to detect manually. “As speech synthesis algorithms improve and become more realistic, we can expect the detection task to become harder,” notes a detailed 2023 study available through the National Library of Medicine. “The difficulty of detecting speech deepfakes confirms their potential for misuse and signals that defenses against this threat are needed.”
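To give a sense of how such detection tools can be structured, the sketch below pairs a mel-spectrogram front end (via torchaudio) with a small CNN that scores a clip as genuine or synthetic. The architecture and the score helper are illustrative assumptions, not any vendor's actual detector; real systems are trained on large labeled corpora of genuine and synthesized speech.

```python
# A minimal, illustrative speech-deepfake detector: spectrogram in, score out.
import torch
import torch.nn as nn
import torchaudio

mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)

classifier = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),  # logit: >0 suggests synthetic, <0 suggests genuine
)

def score(waveform: torch.Tensor) -> float:
    """waveform: (1, num_samples) mono audio at 16 kHz."""
    spec = mel(waveform).unsqueeze(0)  # (1, 1, n_mels, time)
    with torch.no_grad():
        return torch.sigmoid(classifier(spec)).item()

# After training on labeled real/synthetic speech, score() would return the
# estimated probability that a clip is machine-generated.
```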
In February 2024, a Hong Kong company was defrauded through a video conference that exploited deepfakes. Reportedly, $25 million was transferred to fraudsters who impersonated the company's chief financial officer. The video conference included several participants besides the employee who was defrauded, but all of the other participants were deepfake-generated colleagues, and the employee did not realize that every one of them was fake.
In April 2023, a virtual kidnapping incident occurred in Arizona, USA. An anonymous individual demanded that a woman pay a ransom of $1 million for her 15-year-old daughter. The woman reportedly heard her daughter's cries, screams, and pleading during the phone call with the perpetrator. It later emerged that the daughter had not actually been kidnapped; it was a virtual kidnapping, and a voice cloned from the daughter's is believed to have been used in the call. The US Federal Trade Commission has since issued a warning about scams using cloned voices of family members.
We’ve all heard about online romance scams in which scammers impersonate someone else, like a military service member based overseas, and ask for money online. While most of us think we know all the tricks and won’t fall victim, scammers are employing new tactics using advanced deepfake technology to exploit people.
Historically, one of the red flags of a romance scam is that the scammers won’t join a video call or meet you in person. However, with deepfake face-swapping apps, scammers can now get away with doing video calls — they can win your trust easily with a fake visual that makes you believe the person on the video call is real.
That’s how the “Yahoo Boys” upgraded their tactics. Notorious since the late 1990s, the Yahoo Boys once sent scam emails from Yahoo accounts to carry out schemes like phishing and advance-fee fraud. Today, they use fake video calls powered by deepfake technology, earning the trust of victims across dating sites and social media.
These types of deepfake romance scams can get pretty creative. In 2022, Chikae Ide, a Japanese manga artist, revealed that she lost 75 million yen (almost half a million USD) to a fake “Mark Ruffalo” online. Although she was suspicious at first, it was a convincing deepfake video call that removed her doubts about transferring money.
With deepfake technology, scammers can pose as ANYONE, for example, impersonating recruiters on popular job sites such as LinkedIn.
Scammers offer what may appear to be a legitimate online job interview. They use deepfake audio and face-swapping technology to convince you that the interviewer represents a legitimate employer. Once you receive confirmation of a job offer, you are asked to pay for a starter pack and to share personal information such as bank details for salary setup.
They also pose as interviewee candidates. The FBI warned that scammers may use deepfake technology and people’s stolen PII to create fake candidate profiles. Scammers apply for remote jobs, with the goal of accessing sensitive company customer and employee data, resulting in further exploitation.
Deepfakes are also commonly used for fake celebrity endorsement in investment scams. In 2022, deepfake videos featuring Elon Musk giving away crypto tokens circulated online. These deepfakes advertise too-good-to-be-true investment opportunities and lead to malicious websites. Below is a recent example of a fake YouTube live stream of an Elon Musk deepfake promoting cryptocurrency airdrop opportunities.
Even legitimate and popular mobile applications can carry deepfake ads. Below is Elon Musk again, promoting “financial investment opportunities” in an advertisement seen on Duolingo. If you fall for the scam and click the ad, it leads to a page that, when translated, offers “Investment opportunities; invest €250, earn from €1000.” In other cases, malware could even start downloading once you click. Be cautious!