When truth feels unstable: deepfakes, distressing content and the safeguarding challenge

23rd February 2026

Artificial intelligence can now generate images, video and audio that look and sound real. Known as deepfakes, this content is already creating safeguarding incidents in education settings.

Leaders across the sector report cases where students have created explicit or abusive images using the likeness of a peer. Some settings have also dealt with manipulated audio or video involving staff. The impact is immediate. Harm spreads quickly through social media and messaging platforms. For the child or adult targeted, the emotional effect can be serious and long lasting.

At GLF Schools Trust, Fiona Jarman, Services and Safeguarding Lead, has shared that the trust managed a case involving a user-produced pornographic deepfake circulated among students. The incident required immediate safeguarding intervention, clear communication with families and targeted support for those affected.

In response, the trust has now embedded deepfakes into its annual safeguarding refresher training for staff. These are not easy conversations, but they are necessary. As Fiona has reflected, this is an issue schools cannot afford to ignore.

From 6 February 2026, creating or sharing non-consensual sexually explicit deepfake images is a criminal offence in the UK. The law provides clearer accountability. There is also increasing focus on watermarking and content provenance, where AI-generated material carries a digital marker to show its origin. However, adoption is not yet universal and the technology remains widely accessible. Detection is increasingly difficult without specialist tools.
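To illustrate the provenance idea in outline: the sketch below is an assumed toy example, not the C2PA standard or any vendor's actual implementation. A producer attaches a keyed signature to media bytes so a verifier can later check both that the content is unchanged and that the claimed origin is genuine. The function names, the record fields and the HMAC construction are all illustrative assumptions; real provenance schemes use public-key signatures and signed manifests rather than a shared secret.

```python
# Toy content-provenance sketch (illustrative only, not C2PA):
# a producer signs media bytes plus an origin label with a secret key,
# and a verifier checks the media and label against that signature.
import hashlib
import hmac

def sign_content(media: bytes, origin: str, key: bytes) -> dict:
    """Build a minimal provenance record for a blob of media bytes."""
    digest = hashlib.sha256(media).hexdigest()
    tag = hmac.new(key, (digest + origin).encode(), hashlib.sha256).hexdigest()
    return {"origin": origin, "sha256": digest, "tag": tag}

def verify_content(media: bytes, record: dict, key: bytes) -> bool:
    """True only if the media matches the record and the tag is genuine."""
    digest = hashlib.sha256(media).hexdigest()
    expected = hmac.new(key, (digest + record["origin"]).encode(),
                        hashlib.sha256).hexdigest()
    return digest == record["sha256"] and hmac.compare_digest(record["tag"], expected)
```

In this toy model, editing the media or rewriting the origin label both invalidate the record, which is the property that makes provenance markers useful; the practical gap the article notes is that such markers are not yet attached universally, and detection tools cannot assume they are present.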

Anthony Magee, Head of Data and Insights at Supporting Education Group, shared in our recent webinar:

“When the line between real and generated content becomes harder to see, uncertainty increases. For schools, the challenge is not just technical. It is relational. It is about how young people interpret what they see, who they trust and how quickly harm can escalate when content is shared without context.

Technology and software companies are making progress in building accountability and tracking into their systems. We are in a stronger position than we were two years ago. But further development and consistent standards are still needed.

Deepfakes, conspiracy narratives and graphic online material all test a school’s safeguarding culture. They require calm leadership, clear communication and the confidence to address difficult topics directly.”

Deepfakes sit within a wider pattern. High-profile and distressing stories move rapidly across online platforms. Whether content is verified, speculative or manipulated, students encounter it first. Schools often manage the emotional consequences later, in the form of anxiety, anger, confusion or conflict between peers.

In our upcoming webinar, we will explore how to respond to high-profile stories, such as the recent circulation of material linked to the Jeffrey Epstein case, in a way that prioritises pupil safety and emotional wellbeing. The session provides a trauma-informed framework to help you decide whether and how to acknowledge such issues, create safe spaces for questions without amplifying harm, address misinformation and rumours, recognise signs of distress, and protect staff wellbeing. The focus is practical and grounded, supporting leaders and pastoral teams to manage difficult conversations with clarity and confidence across primary, secondary and post-16 settings.

Wednesday 4th March | 4.00pm - 4.45pm GMT

Register for free

Join our community of senior leaders and classroom staff
