Rooting out AI child sexual abuse materials must start in schools
It has never been easier to create child sexual abuse imagery.
Not only are we finding more webpages containing videos of children suffering some of the most extreme forms of sexual abuse, available on the internet within a few clicks, but this material can now be created, using AI, of any child, whether or not they have ever suffered sexual abuse.
This is why the IWF and NCA have issued urgent guidance today to schools up and down the country to help them take action against this worsening epidemic.
Children are being targeted and their photos, shared innocently on the internet or with friends, are being harvested by online criminals who are transforming them into nude and sexual imagery. There are even apps available which allow people to conjure up sexual imagery of real children from their phone, as easily as they can order their shopping. The fact is that it is just too easy, and it is having a devastating impact on children and young people.
The resulting imagery can be passed around the darkest corners of the internet. It can be exploited by criminals running scams to blackmail children, used as a grooming tool, or result in horrendous bullying.
AI-generated child sexual abuse content is a rapidly increasing threat. The IWF, the UK hotline dedicated to finding and removing child sexual abuse material from the internet, processed 245 reports in 2024 which contained actionable AI-generated images of child sexual abuse. This is a 380% increase on 2023, when just 51 reports contained AI imagery. Furthermore, these 245 reports equated to more than 7,500 images and a small number of videos, reflecting the volume of illegal imagery that can be found on individual webpages.
Adult perpetrators are targeting school children. Sometimes children target their own peers – perhaps not understanding the severity of their actions or dismissing it as a joke. But the children this imagery is based on are real. The cruelty and harm they suffer is real. And, without robust intervention, the likelihood things will get worse is also, sadly, very real.
Teachers, educators, and professionals working with children are at the front line of this growing international scandal. They can be the first to notice when things are going wrong, and they must not be left to face this crisis alone. These are new threats – and we hope the guidance issued this week will give them the power to respond if this happens to children or young people in their care.
The guide, sent to schools in England, Scotland, Wales, and Northern Ireland, will equip education practitioners and all those who support young people in the UK with the knowledge they need to respond appropriately to incidents involving AI-generated child sexual abuse material.
Issued this week, the new guide makes it clear AI child sexual abuse imagery “should be treated with the same level of care, urgency and safeguarding response as any other incidence involving child sexual abuse material” and aims to dispel any misconception that AI imagery causes less harm than real photographs or videos.
The vital guidance has been issued by the National Crime Agency (NCA) and the Internet Watch Foundation (IWF), and has been developed in response to the risks posed by the misuse of swiftly evolving AI image-generation tools and the need to provide professionals with essential clarity and information on the issue.
We must also not look away from the fact that girls are disproportionately being victimised by people abusing this new technology.
A report on nudification tools by the Children’s Commissioner for England reported instances of girls explicitly reducing their online presence to keep themselves safe online. The Commissioner noted “this pattern of behaviour is similar to girls avoiding walking home alone at night, or not going to certain public places alone”.
Tackling the misuse of technology, and particularly the spread of child sexual abuse material, is vital to achieving the Government's pledge to halve violence against women and girls. The forthcoming Violence Against Women and Girls Strategy must include clear and deliverable objectives to tackle technology-facilitated sexual abuse against girls.
Reports of AI-generated child sexual abuse material must be taken seriously, and the safety, care and support of victims should be the central pillar of any response.
The IWF Hotline acts swiftly to assess and remove AI-generated child sexual abuse imagery from the internet. I would urge any young people who have been affected to use the Report Remove service, which puts the power back in their hands and allows us to get this material taken down if it does spread on the open web.
Under UK law, child sexual abuse material is always illegal, regardless of how it is created, and the taking, distribution, and possession of an "indecent photograph or pseudo-photograph of a child" is a criminal offence. It is incumbent on all of us, in all our roles in society, to make sure there is no ambiguity around this issue, and to be alive to the new threats which can reach children and young people even in the classroom, the scout hut, or at home, where they should be safe.
I hope this new guidance gives teachers, educators, and professionals the tools they need to fight back.
________________
Derek Ray-Hill is Interim CEO of the Internet Watch Foundation.
LBC Opinion provides a platform for diverse opinions on current affairs and matters of public interest.
The views expressed are those of the authors and do not necessarily reflect the official LBC position.
To contact us email opinion@lbc.co.uk