Deepfakes are doctored audio, visual, or text assets created using generative AI (GenAI). Cybercriminals leverage them to deceive targets into voluntarily handing over sensitive data.
Because they are realistic, believable, and increasingly difficult to identify as AI-generated, deepfake videos pose a significant data security risk. They typically depict high-profile, well-known individuals such as political figures and celebrities, though they can be generated to capture the likeness of anyone. Depending on the creator's goal, they may be used to spread disinformation, defraud an individual or organisation, or solicit sensitive data and/or funds.
Deepfake videos are generated through complex analysis of source content. Essential details such as facial features and movements, dimensions, skin tone, hair and eye colour, and body language are fed into the AI to generate as accurate a representation as possible. The same applies to the background; if the office, boardroom, or other setting in which the subject appears is well-known, the threat actor will attempt to replicate it as accurately as possible using source imagery and video.
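To make the feature-analysis step above concrete, the short Python sketch below uses the publicly available MediaPipe Face Mesh model to extract facial landmarks from a single image; this is the kind of raw geometric data a generation pipeline consumes. The file name is hypothetical, and this is an illustration of feature extraction only, not a deepfake tool.

```python
import cv2
import mediapipe as mp

# Hypothetical source image of the subject.
image = cv2.imread("subject_photo.jpg")

# MediaPipe's Face Mesh estimates 468 3D facial landmarks, capturing
# the geometry (eyes, jawline, lips) that generative models rely on
# to reproduce a face convincingly.
with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as face_mesh:
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark
    print(f"Extracted {len(landmarks)} facial landmarks")  # typically 468
    # Each landmark is a normalised (x, y, z) coordinate.
    print(f"First landmark: ({landmarks[0].x:.3f}, {landmarks[0].y:.3f})")
```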
Similar to the generation of deepfake video content, audio can be generated with AI using training material available online. Reference sources tend to include voicemail messages, phone calls, guest appearances in podcast and news recordings, and authentic video content featuring the voice of a key individual or group.
The generated audio can sound highly convincing, closely matching the source material to maximise believability. The generative AI tool used by the threat actor analyses key details of the reference material, including the speaker's tone, pitch, speech patterns, clarity, enunciation, and audible emotion.
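The paragraph above lists the acoustic attributes a voice-cloning model studies. As a hedged illustration, the Python sketch below uses the librosa library to extract two of them from a reference recording: the pitch contour (via the pYIN algorithm) and MFCCs, which coarsely capture timbre and enunciation. The file name is hypothetical; real cloning systems use far richer learned representations.

```python
import librosa
import numpy as np

# Load a reference recording (hypothetical file name).
audio, sr = librosa.load("reference_call.wav", sr=16000)

# Fundamental-frequency (pitch) contour via the pYIN algorithm.
f0, voiced_flag, voiced_probs = librosa.pyin(
    audio, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

# MFCCs coarsely characterise timbre, clarity, and enunciation.
mfccs = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)

print(f"Mean pitch of voiced frames: {np.nanmean(f0):.1f} Hz")
print(f"MFCC matrix shape: {mfccs.shape}")  # (13, number_of_frames)
```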
While audio and video can be deepfaked using GenAI, cheapfakes forgo such technologies. Instead, they are typically created manually to deceive individuals or groups. These tend to be optical, audio, or text-based illusions designed to trick people who are not paying close attention, such as when they are confronted with a sense of urgency or are under emotional stress. As the U.S. Department of Homeland Security notes, cheapfakes pre-date the digital age, meaning threat actors have had centuries to learn from one another and hone their techniques.
Malicious individuals employ deepfakes and/or cheapfakes for a variety of purposes, including but not limited to the following:
- Spreading disinformation or propaganda
- Impersonating executives or colleagues to authorise fraudulent payments
- Soliciting sensitive data, such as credentials or financial information
- Extorting or blackmailing individuals and organisations
- Damaging the reputation of a person, brand, or institution
There are several steps you can take to reduce your risk of being the target of a deepfake or cheapfake. These include the following measures, several of which are recommended by the National Cybersecurity Alliance:
- Verify unusual or urgent requests through a separate, trusted channel, such as a known phone number
- Be wary of messages that apply emotional pressure or a sense of urgency
- Limit the amount of audio and video of yourself that is publicly available
- Enable multi-factor authentication so that a convincing impersonation alone cannot grant access
- Report suspected deepfakes to your security team and the relevant authorities
A zero-trust approach is crucial in cybersecurity. When it comes to protecting against deepfakes, its principles could be considered a blueprint for minimising risk. For instance:
- Verify explicitly: authenticate every request for data or funds through trusted channels, regardless of how legitimate the requester appears or sounds
- Apply least-privilege access: ensure that even a successful impersonation exposes only the minimum of data and systems
- Assume breach: treat any single channel, including live video and voice, as potentially compromised, and require secondary confirmation for sensitive actions (see the sketch after this list)
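As an illustration of the "assume breach" principle above, the following minimal Python sketch models an out-of-band verification policy. Every name here (Request, should_proceed, the channel and action labels) is hypothetical and for illustration only; in practice such rules live in workflow and payment-approval systems, not standalone scripts.

```python
from dataclasses import dataclass

# Channels that can carry deepfaked content and therefore cannot,
# on their own, authenticate a sensitive request (an assumption made
# for this sketch, not an established standard).
UNTRUSTED_ALONE = {"video_call", "voice_call", "email", "chat"}

SENSITIVE_ACTIONS = {"wire_transfer", "share_credentials", "grant_access"}

@dataclass
class Request:
    action: str           # e.g. "wire_transfer"
    channel: str          # channel the request arrived on
    out_of_band_ok: bool  # confirmed via a separate, trusted channel?

def should_proceed(req: Request) -> bool:
    """Assume breach: sensitive requests arriving on spoofable channels
    require out-of-band confirmation before they are honoured."""
    if req.action not in SENSITIVE_ACTIONS:
        return True
    if req.channel in UNTRUSTED_ALONE and not req.out_of_band_ok:
        return False  # hold until verified via a known phone number, etc.
    return True

# A wire-transfer request made on a video call is held until it is
# confirmed through a separate channel.
print(should_proceed(Request("wire_transfer", "video_call", False)))  # False
print(should_proceed(Request("wire_transfer", "video_call", True)))   # True
```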
In addition, purpose-built deepfake inspection and detection solutions can help safeguard the identities, wellbeing, and data of users. Such tools are essential in the age of ever-accelerating AI innovation, as deepfakes are often difficult for humans to detect unaided. "As speech synthesis algorithms improve and become more realistic, we can expect the detection task to become harder," notes a detailed 2023 study archived by the National Library of Medicine. "The difficulty of detecting speech deepfakes confirms their potential for misuse and signals that defences against this threat are needed."
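To illustrate (not reproduce) how such audio deepfake detectors work, the sketch below trains a deliberately simple classifier on summary MFCC features to separate genuine from synthesised clips. The file lists are hypothetical placeholders; production detectors use large labelled datasets and deep neural networks, and, as the study quoted above notes, the task grows harder as synthesis improves.

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def embed(path: str) -> np.ndarray:
    """Summarise a clip as the mean and standard deviation of its
    MFCCs -- a deliberately simple, hand-crafted feature vector."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical labelled corpora of genuine and synthesised speech.
real_clips = [f"real_{i:03d}.wav" for i in range(100)]
fake_clips = [f"fake_{i:03d}.wav" for i in range(100)]

X = np.array([embed(p) for p in real_clips + fake_clips])
y = np.array([0] * len(real_clips) + [1] * len(fake_clips))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2%}")
```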