The term “deepfake”, a blend of “deep learning” and “fake”, was coined in 2017 by a Reddit user. A deepfake is a fake image, video, or voice recording in which one person’s likeness is replaced with another’s, made to deceive or entertain. With advanced machine learning, deepfakes can look strikingly realistic. Deepfakes were once known mainly for their humorous use on social media, but their potential for abuse quickly became apparent, and they are now a significant concern for privacy, security, and the integrity of information. In a Trend Micro study conducted in June 2024, 71% of respondents said they feel negatively about deepfakes, and named fraud as the top reason anyone creates them.1 

How Are Deepfakes Made? 

Creating deepfakes typically involves advanced machine-learning models called Generative Adversarial Networks (GANs), which generate synthetic media that looks real. There are four main steps: 

  1. Data Collection: First, media content (images, videos, or audio clips) of the target person is gathered. This collection is used to train the model. 
  2. Training the Model: The GAN is trained with the collected data. One part of the GAN creates fake images, while the other part checks if they look real. 
  3. Refinement: Techniques like facial landmarks and motion capture are used to make the deepfake look natural, with realistic expressions and movements. 
  4. Final Production: The finished deepfake is then combined with the original media, creating a seamless and convincing piece of fake content. 

While the above may sound complex, the fact is that anyone can make a deepfake: a huge number of software applications are accessible to the public, from DeepFaceLab to DALL-E and Midjourney (though the latter has safeguards in place). 
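The adversarial training in step 2 can be sketched in miniature. The toy below is a hypothetical NumPy example, not any production deepfake tool: single-parameter linear models stand in for deep networks, and 1-D numbers stand in for images. The structure of the loop, one model generating fakes while the other learns to tell them from real samples, is the same idea GANs scale up.

```python
# Toy sketch of GAN adversarial training (illustration only; real
# deepfake systems use deep convolutional networks on images/audio).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n=64):
    # Toy "real" data: samples from a Gaussian centred at 4.0
    return rng.normal(4.0, 0.5, size=n)

# Generator: one affine map from noise z to a sample.
g_w, g_b = 0.5, 0.0
# Discriminator: one logistic unit scoring a sample's "realness".
d_w, d_b = 0.0, 0.0

lr = 0.01
for step in range(2000):
    z = rng.normal(size=64)
    fake = g_w * z + g_b              # generator produces fakes
    real = real_batch()

    # Discriminator update: push D(real) -> 1 and D(fake) -> 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(d_w * x + d_b)
        grad = p - label              # dLoss/dlogit for cross-entropy
        d_w -= lr * float(np.mean(grad * x))
        d_b -= lr * float(np.mean(grad))

    # Generator update: push D(fake) -> 1, i.e. fool the critic.
    fake = g_w * z + g_b
    p = sigmoid(d_w * fake + d_b)
    grad = (p - 1.0) * d_w            # chain rule through D into G
    g_w -= lr * float(np.mean(grad * z))
    g_b -= lr * float(np.mean(grad))

# The generator's output mean (g_b) should have drifted toward the
# real data's mean as it learned to fool the discriminator.
print(f"generated mean after training: {g_b:.2f}")
```

The refinement stage (step 3) then layers face-specific techniques, such as facial-landmark alignment and motion capture, on top of this basic generator/discriminator loop.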


Why Care About Deepfakes?

It is easy to think of cybersecurity as an abstract concept, removed from everyday life — but the implications of malicious deepfake use affect the individual and society at large: 

  • Your personal privacy:  
    Deepfakes can violate your personal privacy by creating non-consensual and often harmful content. This is particularly concerning in cases of deepfake pornography, where individuals’ faces are superimposed onto explicit content without their consent. 
  • Your security and finances: 
    Deepfake video calls may be used to impersonate people, often with the intent to deceive you into giving away money or sensitive information. Anyone can fall victim to a deepfake scam — and suffer grave consequences like financial fraud and identity theft. 
  • Political stability:  
    Deepfakes can be weaponized to create political turmoil. Fabricated videos of politicians making inflammatory statements or engaging in illicit activities can influence public opinion and disrupt democratic processes. 
  • Legal and ethical concerns:  
    The creation and distribution of deepfakes raise significant legal and ethical questions. Issues of consent, intellectual property, and the right to one’s likeness are at the center of ongoing societal debates. 
  • Media integrity:  
    Journalists and media organizations face new challenges in verifying the authenticity of content. Deepfakes can undermine the credibility of news outlets and contribute to the spread of fake news. 

Threats & Consequences of Deepfakes 

Deepfakes pose several threats to cybersecurity: 

  • Impersonation and video call scams:  
    Cybercriminals can use deepfakes during video calls to impersonate individuals. Whether you’re chatting with a friend, family member, or potential partner, or sitting a job interview online, video calls give scammers an opportunity to impersonate the person you think you’re talking to and trick you into handing over money or personal information. 
  • Misinformation/disinformation: 
    Deepfakes can be used to create convincing but false content, spreading misinformation/disinformation. This can undermine public trust in media, influence elections, and destabilize societies. 
  • Identity theft:  
    Deepfakes can facilitate identity theft by creating realistic fake identities or compromising existing ones, leading to financial and reputational damage. 
  • Blackmail and extortion:  
    Malicious actors can create compromising deepfake videos to blackmail or extort individuals, leveraging the power of fabricated evidence. 
  • Erosion of trust:  
    The existence of deepfakes can erode trust in digital content. People are beginning to doubt the authenticity of legitimate media, leading to a broader crisis of confidence in digital communications. 

How to Spot a Deepfake Video 

Detecting deepfakes is becoming increasingly challenging as the technology improves. Whether you’re watching a video online, listening to an audio clip, or having a video call with someone, follow your instincts and be on the lookout for the following: 

  • Unnatural facial movements:  
    Deepfakes may exhibit subtle inconsistencies in facial expressions and movements. Look for unnatural blinking, lip-syncing issues, or odd facial tics. 
  • Inconsistent lighting:  
    Pay attention to lighting and shadows. If the lighting on the face does not match the lighting in the rest of the scene, it could be a deepfake. 
  • Sound issues:  
    Be on the lookout for sudden changes in tone, and unusual pauses or intonation that doesn’t reflect the speaker’s normal speech. Inconsistencies in background noise or sudden shifts in ambient sounds can also indicate a deepfake. 
  • Blurring:  
    Deepfakes often have slight blurring around the edges of the face, particularly during quick movements. 
  • Audio-visual mismatches:  
    Listen for discrepancies between the audio and visual elements. Mismatched lip movements and audio can be a sign of a deepfake. 
  • Contextual inconsistencies:  
    If the content seems out of character for the person or implausible given the circumstances, it may be a deepfake. For example, if someone you know well makes an urgent, unusual request, such as asking for money or personal information, and you feel pressured to act quickly, treat it as a red flag. 

Being aware of these signs will help you detect a deepfake. To better protect yourself against the threat, be sure to follow the advice in “How to Protect Yourself from Deepfakes: Tips and Best Practices”. As deepfake technology continues to advance, however, it is becoming much harder for the human eye to reliably spot one. 

Introducing Trend Micro Deepfake Inspector 


Deepfake video calls are on the rise, making it hard to know whether the person you’re talking to is who they say they are. You could be chatting with a friend, family member, or potential partner, or sitting a job interview online; scammers can use these opportunities to impersonate the person you think you’re talking to and trick you into giving away money or sensitive information. 

Protect yourself with our FREE tool, Trend Micro Deepfake Inspector. Designed for live video calls on Windows PCs, it scans for AI face-swapping content in real time, alerting you if you’re talking with a potential deepfake scammer and protecting you from harm. To learn more about Deepfake Inspector and how it can help you spot people using AI to alter their appearance on video calls, click the button below. 


Download Trend Micro Deepfake Inspector   It’s free


Don’t risk a deepfake disaster — download Trend Micro Deepfake Inspector today! If you’ve found this article interesting or helpful, please SHARE it with friends and family to help keep the online community secure and protected. Here’s to a secure 2024! 

1 Trend Micro Snap Study – Consumers and Deepfakes, Conducted June – July 2024, US & Australia, N=2097 

Avril Ronan

Avril Ronan is Global Program Manager of the Internet Safety for Kids and Families Program at Trend Micro. Avril is best known for working in the community, engaging students, parents, educators, and senior citizens in the conversation about online safety. The ultimate goal of each conversation is to empower people to be online in safe, responsible, and successful ways. As a regular public speaker, Avril collaborates with academia, law enforcement, industry, and government, and has coordinated and delivered programs around the world, including What’s Your Story?, Cyber Academy (now in 19 languages), and the #StayAtHome Webinar Series.