Last Updated on 5th August 2022


This guide to deepfakes explains not only what deepfakes are, but also outlines the risks they could pose and provides practical, helpful advice on how to help protect the children and young people in your care.

We’ve all had that wow moment when something we thought possible only in the imagination of sci-fi writers suddenly exists at our fingertips! Just when we think we have seen everything technology has to offer, something new comes along.

These moments can stop us in our tracks and make us wonder about what could be next. At INEQE Safeguarding Group, we have seen a lot of new apps and platforms utilising deepfakes. The big question is: should you be worried?

You may have seen (and laughed) at some funny videos that use deepfake technology. For example, videos in which celebrities, friends, or relatives have had their faces manipulated to sing, dance, or even tell jokes. When deepfakes are used in this way, they are unlikely to cause serious harm.

However, there is a much more worrying and disturbing side to how deepfakes are being used. Read our guide to deepfakes to find out how you can protect the children in your care as this technology continues to evolve.


What are Deepfakes?

A deepfake is an image, video, sound clip, or GIF that has been manipulated by a computer to superimpose someone’s face, body, or voice onto something else.

Deepfakes have become popular due to the accessibility of mobile and computer-based apps. This means users without sophisticated technological skills can easily access, create, and distribute deepfakes.

Deepfakes can be produced with computer and mobile apps that let users upload images from their camera roll. The quality of a deepfake varies depending on the sophistication of the tech used and the skills of the creator. However, the standard of the deepfake produced will rarely matter compared to the potential harm it may cause when used to harass, bully, or abuse a victim.

Fast Facts about Deepfakes

  • In 2019, AI company Sensity warned that 96% of the deepfakes it surveyed online were non-consensual ‘pornographic’ material.

  • Of those deepfakes, 90% targeted women, making women the overwhelming majority of victims of sexualised deepfake technology.

  • Sensity also recorded that 124 million deepfake ‘pornographic’ videos were available on the top four pornography websites.

  • From 2019 to 2021, the number of deepfakes online grew from roughly 14,000 to 145,000.

  • According to Ofcom’s 2022 Online Nation report, fake or deceptive images and videos, including deepfakes, were among the top 20 potential online harms encountered by UK users in the previous month.


The Harmful Side of Deepfakes

There are already several ways in which deepfakes are being misused to cause distress and harm. As this technology evolves and becomes more accessible, it is important to understand some of the most common forms of misuse.


Cyberbullying

Deepfakes have been used in cases of cyberbullying to deliberately mock, taunt, or inflict public embarrassment on victims. The novelty of these images may distract from the real issue: they can be used to bully or harass children and young people.

Extortion and Exploitation

Deepfakes can be used to create incriminating, embarrassing, or suggestive material. Some deepfakes are so convincing that it becomes difficult to distinguish them from the real thing. Struggling to convince other people that an embarrassing or abusive image is fake can create additional layers of vulnerability and distress for a victim. These images can then be used to extort money or additional ‘real’ images.

Additionally, deepfakes can be used to create so-called ‘revenge porn’. This is a form of image-based sexual abuse carried out as retaliation or vengeance, typically associated with the end of a relationship or with a victim refusing a sexual relationship with the perpetrator.

Despite common usage of the term ‘revenge porn’, particularly in the media, INEQE Safeguarding Group believes that this terminology is inappropriate and misleading. We propose that this type of misuse of deepfakes comes under the umbrella term of ‘image-based sexual abuse’. Some research shows that victims do not like the term ‘revenge porn’ as it implies they must have done something to deserve retaliation.

In the past, deepfakes were often used to create pornographic material featuring famous women. As the technology becomes more accessible, non-famous people are increasingly becoming victims too.

Deepfakes can also be used as a form of homophobic abuse, in which a person is depicted in gay pornography. This could then be used to ‘out’ the person or as an attempt to ‘destroy their reputation.’ For young people struggling with their sexual orientation, being depicted in any sexualised deepfakes may be particularly distressing.

Image-Based Sexual Abuse

There have been cases where images of children have been harvested and used to generate sexualised deepfakes. The realistic depiction of a victim engaging in a sex act can damage a child’s wellbeing and mental health. We know that deepfake software can be used to remove clothing from victims digitally. In some cases, there are commercial services where users can pay to have images professionally manipulated.

It is important that parents, carers, and safeguarding professionals are aware of the risks of this form of (non-contact) sexual abuse. In some cases, victims themselves may be unaware that their images have been harvested and misused to create deepfakes.

While many children and young people may be aware of and understand how images can be manipulated in this way, others may not. It is important to speak to them about the issue of deepfakes and how they can be misused.

Deepfakes in the News

  • The Law Commission of England and Wales has recommended that sharing deepfake pornography without consent should carry a sentence of up to three years in prison.

  • Google has prohibited users from creating deepfakes on its machine-learning research tool Colaboratory, also known as Colab. Google Colab is a free, browser-based service that allows users to write and run Python code and is widely used by developers and researchers in the AI community.

  • Dutch police created a deepfake video of a murdered 13-year-old and received dozens of new leads almost two decades after the child’s murder.

How to spot a Deepfake

Deepfakes can vary in their quality and professionalism. Some will be quite obviously fake. For others, it can be tricky to spot whether they are real or not. Here are some tell-tale signs to look out for:

  • Glitches – There are typically signs if you look closely at the video itself. Is there rippling, pixelation, or blurring around key areas such as the neck, eyes, or mouth? These flaws often become more obvious when a person moves, blinks, or turns their head or body.
  • Audio – The lip movements may not quite match what you are hearing. Watch closely to check that mouth movements look natural.

Our Advice

  • LEARN – The best way to help protect children from deepfakes is to educate yourself. Share this guide to deepfakes with other parents and safeguarding professionals to help spread the word.
  • TALK – Discuss deepfakes and the importance of image consent with the children in your care. Ensure they know why they should ask for a person’s permission before using an image of them to create a deepfake or manipulated picture.
  • CHECK – Make sure all the devices your children own or have access to have the best safety settings enabled. Speak to the children in your care about their safety and privacy settings online. You should also check that they limit public access to their social media images. Visit our Safety Centre for further help and guidance.


Join our Safeguarding Hub Newsletter Network

Members of our network receive weekly updates on the trends, risks and threats to children and young people online. 

Sign Up

Who are your Trusted Adults?

The Trusted Adult video explains who young people might speak to and includes examples of trusted adults, charities and organisations.

Pause, Think and Plan

Use our video for guidance and advice around constructing conversations about the online world with the children in your care.
