Last Updated on 19th February 2025

Safeguarding Students

A Guide for Schools on Preventing and Responding to AI-Generated Image Exploitation

Reading Time: 10.8 mins

Published: February 22, 2025


A small number of schools are reporting incidents where photos, often of girls, are being copied from their websites and social media channels. Scammers are then using Artificial Intelligence (AI) to sexualise the images in an attempt to blackmail schools, targeting both students and staff.

The increasing prevalence and accessibility of AI image-generation tools make these scams more likely to occur. This guide provides schools with information on how to prepare for, prevent, respond to, and report incidents of AI-generated image exploitation.

*These images are AI-generated and do not depict real children.

Preparing for and Preventing Online Extortion

It’s easy to think, ‘it won’t happen here’, but staying ahead of online risks is key to minimising harm. By proactively educating your school community, you empower them to recognise threats and respond appropriately. You can do this in the following ways:

Create awareness-raising initiatives to inform students, staff, and parents about the risks of online image manipulation, responsible image sharing, and the potential dangers of AI-generated images. These initiatives should include practical tips for staying safe online. Resources to support you are available on the Safer Schools Teach Hub.

Provide staff training on identifying and responding to online risks, including AI-generated sexual extortion. The online world is ever-changing, so training should be regular and recurring to keep up with emerging threats and trends.

Review and update school policies regarding data protection, image consent, and online safety to reflect current best practices and legal requirements.

Revise the school’s risk register to incorporate preventative and responsive measures for targeted extortion incidents, particularly those involving social media and image-based threats.

Use clear and comprehensive consent forms that explicitly state how student images will be used, where they will be published, and for how long.

Regularly update consent forms to ensure they remain relevant and reflect any changes in online platforms or practices.

Ensure consent is time-bound, not just in policy but in practice – such as having a plan for removing images of pupils whose consent has expired.

Provide parents, students, and former students with an easy and accessible way to opt out of having their photos shared online. You may wish to consider requiring dual consent (from both the parent and the young person).

Audit all online platforms where student photos are currently displayed (website, social media, newsletters, etc.). Aim to reduce the number of publicly shared photos containing details that can identify a student, such as full names, uniforms with school logos, or locations. You should also:

  1. Minimise the use of popular public messaging platforms: Avoid the use of easily accessible messaging platforms, such as Messenger on school Facebook pages.
  2. Opt for private platforms: Use alternative private platforms (in which you control access) for sharing photos with parents, such as password-protected school intranets, private social media groups, or dedicated apps.
  3. Use watermarking and copyright: Consider adding watermarks and copyright notices to photos to deter unauthorised use and make it easier to have a photo removed if it is shared without permission (a brief sketch of automating this follows this list).
  4. Obscure faces: Take images that do not show a direct shot of a face. Images of the side and back of heads can be more difficult to manipulate.
  5. Use AI-generated images of children: As a preventative measure, consider using AI-generated images rather than photos of real children in your promotional material, on your website, and on social media.
  6. Switch off ‘Right Click’ functions: Where your website software permits, disable the ‘Right Click’ function so that images cannot be easily saved from your website.
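
For schools that manage their own website or prepare photos before publishing, the sketch below illustrates one way a visible watermark could be added to images automatically. This is a minimal example rather than part of the original guidance: it assumes the Python Pillow library is available, and the file names and watermark text are placeholders to adapt to your own setting.

```python
# Minimal sketch: stamp a semi-transparent text watermark onto a photo
# before it is published online. Assumes Pillow is installed (pip install Pillow).
# File names and the watermark text below are placeholder examples.
from PIL import Image, ImageDraw

def add_watermark(src_path: str, dest_path: str, text: str) -> None:
    image = Image.open(src_path).convert("RGBA")
    overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)

    # Measure the text and place it in the bottom-right corner with a margin.
    left, top, right, bottom = draw.textbbox((0, 0), text)
    margin = 10
    position = (image.width - (right - left) - margin,
                image.height - (bottom - top) - margin)

    # White text at roughly 50% opacity, composited over the photo.
    draw.text(position, text, fill=(255, 255, 255, 128))
    watermarked = Image.alpha_composite(image, overlay)
    watermarked.convert("RGB").save(dest_path, "JPEG")

add_watermark("sports-day.jpg", "sports-day-watermarked.jpg",
              "© Example Primary School")
```

Batch-processing a folder of photos in this way before upload keeps the watermark consistent and avoids relying on staff remembering to add it manually.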

If Your School is Targeted in an Online Extortion Scam

It can be distressing when student images are threatened with misuse. Stay calm and take the following steps.

1. Report and Respond

Capture the Evidence Carefully

Only screenshot, save, or otherwise capture evidence of online abuse or scams if you are sure the material contains no indecent images of children. Sharing or forwarding the images, even within the school, can cause further harm. While there are limited legal defences for possessing this type of imagery (for example, encountering it in the course of your employment), these apply only to particular roles and under specific conditions. If you find an image, do not share it or show it to others. Secure the device(s), inform your Safeguarding Lead, and call the police.

Do Not Engage With Scammers

Responding to extortionists is likely to escalate the risk for you and your pupils. If they have sent any links or files, do not click them.

Report Immediately

Contact your Safeguarding Lead or Deputy Safeguarding Lead, who should immediately engage statutory services, including the police. It may be useful to also report any indecent images to the Internet Watch Foundation (IWF).

Maintain A Critical Incident Log

Keep chronological records of all communications, including emails, messages, and any other relevant information, such as usernames, dates and times.

Refer to Your Crisis Response Plan

Your school’s crisis response policy will contain established procedures for communication and incident management. These may include pausing your school’s social media accounts.

If your school is a registered charity, you should consider reporting this incident to the Charity Commission as a serious incident report.

2. Support

Consider who has been targeted. If the images are sexually explicit, do not view them again. Instead, seek police assistance in identifying those affected and ensure they receive the necessary support.

Supporting the Victim(s)

Offer immediate support to those involved and reassure them that it is not their fault. You might see a range of reactions from laughter to fear of how this will impact them in the future. It may be time to consider further options for support such as pastoral care, counselling or similar professional services.

Supporting Your School Community

Your wider school community may already be aware of the incident, and any delay in addressing it may allow fear or misinformation to spread. Tackle it sensitively, but quickly. Use our ‘Spotting and Stopping AI Image Scams’ lesson on Teach Hub to help support learning.

Communication with Parents

Consider communicating with your school’s legal team. Creating template letters may help to ensure a consistent tone and appropriate messaging to specific groups, for example, parents/carers of children in the images, other parents/carers and the wider school community, including staff and teachers.

Signpost parents/carers in your school to our ‘Is Your Child’s Photo Safe?’ guidance.

Want to ensure your staff are informed and prepared? Use our Staffroom Briefing PowerPoint to update them on this issue, and keep our Action Plan Checklist on hand for a clear, structured response if an incident occurs.

What Can You Do?

Join us on Thursday 20th March for our ‘Sextortion’ webinar, where attendees will:

  • Understand what sextortion is and the different forms it can take, including the use of real and AI-generated images.
  • Identify those most at risk, such as children and young people.
  • Learn about the laws surrounding sextortion.
  • Recognise the emotional, social, and financial impact sextortion has on victims.
  • Apply your knowledge to identify potential cases of sextortion and provide effective support to children and young people.

The law on AI-generated imagery and how to detect it

The UK has laws against using AI to create harmful content. It is illegal to create, possess, or share AI-generated child sexual abuse material (CSAM) and non-consensual sexually explicit ‘deepfake’ images.

The UK has introduced specific legislation targeting AI-generated CSAM, making it one of the first countries to do so.

Existing laws, including the Protection of Children Act 1978 and the Sexual Offences Act 2003, already criminalise the creation, possession, and distribution of indecent images of children, regardless of whether they involve real children or are artificially generated.

The Online Safety Act 2023 strengthens regulations on harmful online content and holds platforms accountable for preventing CSAM, including AI-generated material.

Upcoming legislation, such as the Crime and Policing Bill 2025, will further criminalise the possession of AI tools used to generate CSAM. The UK government is also reviewing laws related to deepfakes and other AI-driven online harms, signalling a focus on addressing both harmful content and the tools that enable its creation.

AI-generated imagery, commonly referred to as ‘synthetic media’, is created using an AI model (a computer program that uses data to learn patterns and make decisions). The model analyses a large library of images and videos, along with the text associated with them, to learn how to create new content.

A user can then ask the program to apply what it has learned to create new images and videos, or to edit existing ones.

It is important to remember that image manipulation is not limited to AI-generated content.

Look critically at the image – if your gut tells you something isn’t right, listen to it, investigate, and search for the evidence. If the images include child sexual abuse material (CSAM), do not investigate them yourself. That is a job for the police, who have the lawful authority and skills to do so.

If it is an otherwise innocent image, here are some details you might want to look out for:

  • Physical features: Do they have the correct number of toes and teeth? Does their smile look a little too forced, as if it belongs in a stock image? Is their skin a little too smooth?
  • Patterns in their clothes: AI can struggle with patterns, so look for inconsistencies such as irregular knit patterns, missing seam lines, or fabric that looks too smooth.
  • Text: It might look like text from far away, but if zooming in reveals illegible scribbles in the shape of letters, this can be an indicator that AI has been used to edit or create the image.
  • What’s going on behind them? The program may put more effort into the parts of the image that are in focus, but the background can give a lot away. You might see faces that are misshapen, furniture that bends unnaturally, or a scene where everyone looks a little too perfect.

Emergency services

If the child in your care is in immediate danger, ring 999

REPORT

IWF

The IWF help to remove online child sexual abuse imagery hosted anywhere in the world. You can report an explicit image of a child on their website, and they provide advice and guidance for supporting victims. You can choose to remain anonymous or give your details.

Report Remove

The Report Remove tool is a service provided by Childline and the IWF that allows young people to report sexual images or videos of themselves that have been shared online so that they can be removed. This service is confidential and provides advice and guidance throughout the reporting process.

CEOP Safety Centre

The National Crime Agency’s CEOP Safety Centre provides a reporting route for under-18s to report online sexual abuse and grooming directly to NCA Child Protection Advisors. Find out more at www.ceop.police.uk

Google

You can report non-consensual or explicit images on Google directly by clicking the ‘report abuse’ link below the image or through their online form.

Further Resources

Catching a Catfish (Shareable)

What is Catfishing (Powerpoint)

Catching a Catfish: How to Avoid being Baited (includes video)

Protecting Young People from Sextortion

Sextortion and the Rise of AI

Safeguarding Alert: Financially Motivated Sexual Extortion

To access this guidance and other essential resources, download the Safer Schools NI App

Developed in partnership with the Department of Education, INEQE Safeguarding Group offer the award-winning Safer Schools NI App for FREE! Not sure if your school is registered? Find out today

https://saferschoolsni.co.uk/register-your-school/

What Is the Safer Schools NI App? 

In an ever-evolving online world, keeping up to date with the latest news, trends, and risks can be difficult. Finding credible and relevant resources can be even harder. The Safer Schools NI App keeps your school community informed about the fast-paced changes of the digital landscape and is designed to be a one-stop shop for accessing essential safeguarding information, advice, and guidance.

To access our ‘Spotting and Stopping AI Image Scams’ lesson, visit our Teach Hub*:

*You need to be a registered NI Safer School to access the Teach Hub. Not sure if your school is registered? Find out today:

https://saferschoolsni.co.uk/register-your-school/

What Is the Teach Hub?  

The Safer Schools library of resources created by teachers for teachers. 

Our team have produced a range of high-quality lesson plans, PowerPoints, videos and worksheets for you to use in your classroom. Covering topics such as catfishing, influencers and gaming, these will be your go-to when it comes to addressing the big issues facing today’s children and young people.


Join our Safeguarding Hub Newsletter Network

Members of our network receive weekly updates on the trends, risks and threats to children and young people online. 

Sign Up

Who are your Trusted Adults?

The Trusted Adult video explains who young people might speak to and includes examples of trusted adults, charities and organisations.

Discussing Online Life With Your Child

Use our video for guidance and advice around constructing conversations about the online world with the children in your care.
