What are Deepfakes?

Deepfakes are doctored or wholly synthetic audio, visual, or text content created using generative AI (GenAI). Cybercriminals leverage them to manipulate targets into unwittingly providing sensitive data.

Deepfake videos

Deepfake videos pose a significant data security risk because they are realistic, believable, and increasingly difficult to identify as AI-generated. They typically depict high-profile, well-known individuals such as political figures and celebrities, though they can be generated to capture the likeness of anyone. Depending on the goal of their creator, they may be used to spread disinformation, defraud an individual or organisation, or solicit sensitive data and/or funds.

Deepfake videos are generated through complex analysis of source content. Essential details such as facial features and movements, dimensions, skin tone, hair and eye colour, and body language are fed into the AI model to generate as accurate a representation as possible. The same applies to the background: if the office, boardroom, or other setting in which the subject appears is well-known, the threat actor will attempt to replicate it as accurately as possible using source imagery and video.
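The tools threat actors use are varied and often proprietary, but the first step of that analysis, locating and measuring faces in source footage, can be illustrated with a short Python sketch using OpenCV's bundled face detector. This is illustrative only, and "source_frame.jpg" is a hypothetical placeholder for a frame of reference footage:

```python
# Illustrative only: detects faces in a source image, the kind of
# analysis step a deepfake pipeline performs before any synthesis.
# Assumes OpenCV is installed (pip install opencv-python);
# "source_frame.jpg" is a hypothetical placeholder file.
import cv2

frame = cv2.imread("source_frame.jpg")
if frame is None:
    raise SystemExit("source_frame.jpg not found")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Haar cascade shipped with OpenCV for frontal face detection
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Report the position and dimensions of each detected face
for (x, y, w, h) in faces:
    print(f"Face at ({x}, {y}), size {w}x{h} px")
```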

Voice cloning

As with deepfake video content, audio can be generated with AI using training material readily found online. Reference sources tend to include voicemail messages, phone calls, guest appearances on podcasts and news broadcasts, and authentic video content featuring the voice of a key individual or group.

The generated audio can be made to sound highly convincing, closely matching the source material. The generative AI tool used by the threat actor analyses key characteristics of the reference speakers, including tone, pitch, speech patterns, clarity, enunciation, and audible emotion.
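To give a sense of the characteristics being analysed, the sketch below uses the open-source librosa library to extract pitch and timbre features from a recording. It is a minimal illustration, not any vendor's cloning pipeline, and "reference_clip.wav" is a hypothetical placeholder:

```python
# Illustrative only: extracts the kinds of vocal features (pitch,
# timbre) that voice-cloning models learn from reference audio.
# Assumes librosa and numpy are installed; "reference_clip.wav"
# is a hypothetical placeholder file.
import librosa
import numpy as np

y, sr = librosa.load("reference_clip.wav", sr=None)

# Fundamental frequency (pitch) estimate across the clip
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
print("Median pitch (Hz):", np.nanmedian(f0))

# MFCCs summarise timbre / vocal-tract characteristics
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print("Mean MFCC vector:", mfcc.mean(axis=1))
```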

Cheapfakes

While audio and video can be deepfaked using GenAI, cheapfakes forgo such technologies. Instead, they are typically created manually to deceive individuals or groups. These tend to be optical, audio, or text-based illusions meant to trick those not paying close attention, such as targets acting under a sense of urgency or emotional stress. As noted by the U.S. Department of Homeland Security, cheapfakes pre-date the digital age, meaning threat actors have had centuries to learn from one another and hone their techniques.

Cheapfake approaches

  • Physically cutting and splicing film
  • Wiretapping and/or splicing fragments of recorded phrases and/or full sentences
  • Slowing or accelerating video and/or audio content to convey a desired effect or suggestion
  • Filming and/or recording lookalikes and/or soundalikes posing as a key individual
  • Low-budget, low-quality computer-generated imagery (CGI), motion capture technology, and green screens

Deepfake and cheapfake examples

Malicious individuals employ deepfakes and/or cheapfakes for a variety of purposes, including but not limited to the following:

  • Manipulating new employees into giving up company and/or personal information
  • Posing as a celebrity or political figure to obtain funds and/or spread misinformation
  • Falsifying circumstances such as a disaster, injury, or death to support insurance claims
  • Misleading others into using fake websites that they are led to believe are real
  • Manipulating stocks and investments by assuming the appearance of an executive
  • Causing embarrassment and/or reputational harm to individuals

Protection measures

There are several steps you can take to reduce your risk of being the target of a deepfake or cheapfake. These include the following measures, several of which are recommended by the National Cybersecurity Alliance:

  • Screening incoming calls from unknown numbers and letting them go to voicemail
  • Setting up multi-factor authentication across all online accounts
  • Using unique, lengthy, and complex passwords (see the first sketch after this list)
  • Setting up a webcam with a physical shutter that covers the lens when not in use
  • Adding a digital watermark to your photos and/or videos before sharing them (see the second sketch after this list)
  • Confirming details disclosed online or over the phone in person, when feasible
  • Scrutinising details in suspicious emails such as punctuation, tone, and grammar
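On the password point, Python's standard-library secrets module can generate the kind of unique, lengthy, complex password recommended above. This is a minimal sketch; the 20-character length and character set are illustrative choices:

```python
# Minimal sketch: generate a long, random password using Python's
# cryptographically secure standard-library `secrets` module.
# The 20-character default and the alphabet are illustrative choices.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from ALPHABET."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
```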
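And on watermarking, the sketch below overlays simple text on an image using the Pillow library. Dedicated watermarking tools are more robust, and the file names and watermark text here are hypothetical:

```python
# Minimal sketch: stamp a visible text watermark onto an image with
# Pillow (pip install Pillow). Real watermarking solutions are more
# robust; file names, text, and placement here are illustrative.
from PIL import Image, ImageDraw

img = Image.open("photo.jpg").convert("RGBA")
overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
draw = ImageDraw.Draw(overlay)

# Semi-transparent text in the lower-left corner (default font)
draw.text((10, img.height - 30), "(c) Jane Doe 2024",
          fill=(255, 255, 255, 128))

watermarked = Image.alpha_composite(img, overlay)
watermarked.convert("RGB").save("photo_watermarked.jpg")
```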

Leverage zero-trust principles and deepfake detection solutions

A zero-trust approach is crucial in cybersecurity, and when it comes to protecting against deepfakes, its principles can serve as a blueprint for minimising risk. For instance (a simplified sketch follows the list):

  • Ensure authentication and authorisation processes are in place and being followed
  • Proactively regulate and monitor user access to data and networks
  • Assume a breach upon detecting a threat and minimise the “blast radius”
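As a concrete, deliberately simplified illustration of those principles, the sketch below authenticates and authorises every request individually rather than trusting anything inside the network; all names and tokens are hypothetical:

```python
# Deliberately simplified zero-trust sketch: every request is
# authenticated and checked against least-privilege permissions,
# regardless of where it originates. All names are hypothetical.

# Hypothetical token store and per-user permissions
VALID_TOKENS = {"token-abc": "jane.doe"}
PERMISSIONS = {"jane.doe": {"read:reports"}}

def handle_request(token: str, action: str) -> str:
    # 1. Authenticate: never assume a caller is who they claim to be
    user = VALID_TOKENS.get(token)
    if user is None:
        return "401 Unauthorized"
    # 2. Authorise: grant only the minimum access the user needs
    if action not in PERMISSIONS.get(user, set()):
        return "403 Forbidden"
    # 3. Proceed; a real system would also log the access for monitoring
    return f"200 OK: {user} performed {action}"

print(handle_request("token-abc", "read:reports"))    # allowed
print(handle_request("token-abc", "delete:reports"))  # denied
```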

In addition, purpose-built deepfake inspection and detection solutions can help safeguard the identities, wellbeing, and data of users. Such tools are essential in an age of ever-accelerating AI innovation, as deepfakes are often difficult for humans to detect on their own. “As speech synthesis algorithms improve and become more realistic, we can expect the detection task to become harder,” notes a detailed 2023 study available through the National Library of Medicine. “The difficulty of detecting speech deepfakes confirms their potential for misuse and signals that defences against this threat are needed.”
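Commercial detectors rely on far more sophisticated models, but the general approach, extracting acoustic features and training a classifier on labelled genuine and synthetic clips, can be sketched in a few lines. This is a toy illustration only; the file lists are hypothetical placeholders, and it assumes librosa and scikit-learn are installed:

```python
# Toy illustration only: real deepfake-detection products use far
# more sophisticated models. This trains a simple classifier on
# spectral (MFCC) features from labelled clips; file names are
# hypothetical placeholders.
import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(path: str) -> np.ndarray:
    """Summarise a clip as its mean MFCC vector."""
    audio, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20).mean(axis=1)

real_clips = ["real_01.wav", "real_02.wav"]  # hypothetical genuine audio
fake_clips = ["fake_01.wav", "fake_02.wav"]  # hypothetical synthetic audio

X = np.array([features(p) for p in real_clips + fake_clips])
labels = np.array([0] * len(real_clips) + [1] * len(fake_clips))

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("P(fake):", clf.predict_proba([features("suspect.wav")])[0][1])
```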
