Exclusive

Online Safety Act 'not enough' to protect young people from AI-generated child sexual abuse images, claims Labour MP

27 April 2025, 21:06 | Updated: 28 April 2025, 09:56

Sarah Champion is warning the Online Safety Act is not enough to stop the spread of AI-generated child sexual abuse imagery. Picture: UK Parliament

By Katy Ronkin

Tech companies must build safeguards into their products from the start, a Labour MP has said, warning that the Online Safety Act is not enough to stop the spread of AI-generated child sexual abuse imagery.

Sarah Champion called on the Government to use the upcoming AI Bill to require companies to build risk mitigation strategies into their products from the start, ensuring their AI tools cannot be used to generate illegal content.

In an opinion piece for LBC, Ms Champion warned that AI has made the creation of child sexual abuse material more widespread, pointing to a report from the Internet Watch Foundation that found a 380% surge in reports involving AI-generated child sexual abuse material.

The MP for Rotherham said efforts by the Government to address the issue in the Crime and Policing Bill and the Online Safety Act are "not enough" to protect children from "constantly evolving" AI tools.

Read more: AI-Generated sexual child abuse imagery is rampant – we must ban the tools and educate our children

Read more: AI is fuelling a new wave of child exploitation. Tech giants cannot be allowed to look away, writes MP Gregor Poynton

In an opinion piece for LBC, Ms Champion warned that AI has made the creation of child sexual abuse material more widespread. Picture: UK Parliament

She wrote: "To truly protect children, we need stronger safeguards in place before these AI models even reach the market. Risk mitigation strategies must be built into these technologies from the start, and companies should be required to perform thorough assessments to ensure their products cannot be used to generate illegal content.

"The forthcoming AI Bill is the ideal vehicle for implementing these necessary protections to safeguard our children."

The MP also called on schools to take a more active role in educating children about the potential risks of using so-called "nudification apps," online tools that allow the user to upload a picture of any person and generate a nude image.

She said the increasing capability of these apps is a growing concern, as they "contribute to a rise in peer-to-peer abuse, with children sharing AI-generated sexual imagery of their peers and teachers."

"Schools ought to communicate the laws about the use of nudification tools to protect children from creating, making, or distributing abusive AI images, as well as services that young people can turn to if they need support," Ms Champion continued.

Any legislation involving AI in schools must be introduced with "clear guidance" and "ongoing training to support teachers," said Tasha Henstock, a product manager and online safety expert at RM plc, a technology company that provides IT products and services to schools across the UK.

She told LBC that the schools she works with see "huge potential" for the use of AI in education, but are looking for guidance on how to use the technology safely and responsibly.

"There is a massive recognition in the fact that AI tools could have a huge potential in education. But the feedback we get from schools is that it needs to be introduced in a really responsible way.

"Schools need clear guidance, things like filtering systems and monitoring systems. They need ongoing training to be able to support teachers."

It comes after the Internet Watch Foundation (IWF), which finds and helps remove abuse imagery online, said it received 291,273 reports of child sexual abuse imagery in 2024.

The organisation also said it was seeing rising numbers of cases being driven by threats, including AI-generated sexual abuse content, sextortion and the malicious sharing of nude or sexual imagery.

In response, the IWF announced it was making a new safety tool available to smaller websites for free, to help them spot and prevent the spread of abuse material on their platforms.

The tool, known as Image Intercept, can spot and block images from the IWF's database of more than 2.8 million images which have been digitally marked as criminal imagery.

The IWF said the tool will give wide swathes of the internet new, 24-hour protection and help smaller firms comply with the Online Safety Act.