Deepfake abuse has arrived, and children are bearing the cost of delay

Children are being abused by deepfakes, and our laws are still pretending it’s theoretical. Picture: LBC/Alamy
By Alex Laurie

Deepfake abuse isn’t the future. It’s happening to children now.

The recent Newcastle case, in which a headteacher was sentenced for possessing AI-generated indecent images of children, should have ended any lingering belief that deepfake abuse is a future concern.

It is already here; the victims are real, and our legal framework is lagging dangerously behind.

For young people, deepfake technology has arrived in an online environment that is already hostile. Tools for generating synthetic sexual imagery are no longer niche or technical; they are cheap, accessible and often built into the same platforms children use every day.

On sites like X, this kind of content is not just available: it is actively pushed into young users' feeds by algorithms designed to maximise engagement.

Synthetic sexual imagery obliterates the concept of digital consent. It leaves young people vulnerable to humiliation, coercion and long-term psychological harm. Unlike earlier forms of online abuse, deepfakes can be created without a victim ever taking or sharing an image themselves. A face, a name, or a school rumour spread online can be enough to fuel an attack.

Yet our legal framework still treats much of this as an "edge case". The laws were written for a world in which abuse material had to be made by a human. AI breaks that assumption.

While possession laws are beginning to catch up, they are still slow, fragmented and poorly suited to technology that can generate abuse at scale. We are stuck in a gap where harm moves faster than accountability.

In that gap sit schools and parents, largely unsupported. Teachers are expected to manage the fallout from technology they have little power, and often little knowledge, to control.

Parents often discover the problem only after the damage has been done. Children, meanwhile, are left to navigate a world where they can no longer trust their own eyes, lacking even the most basic tools to question or challenge what they see.

With the Joint Committee on Human Rights examining AI regulation, there is a chance to act with clarity and urgency. We need to move beyond "online manners" and screen-time limits.

Young people need practical deepfake defence skills: knowing how to spot synthetic imagery, how to report it to platforms immediately, and, most importantly, having a clear, non-judgmental path to support if they are targeted.

Protecting children in an AI-driven world requires more than just updated laws; it requires an education that reflects the reality young people are living now. Anything less than a total overhaul of our digital safety standards is an admission that we are willing to let children pay the price while we catch up on paper.

As a parent, I am having these conversations at home while working in an industry that understands how fast this technology is moving. Even with that knowledge, keeping up is difficult, which should worry anyone assuming children can manage it alone.

____________________

Alex Laurie is Go-To-Market Chief Technology Officer at Ping Identity and a long-standing expert in online safety and digital identity.

LBC Opinion provides a platform for diverse opinions on current affairs and matters of public interest.

The views expressed are those of the authors and do not necessarily reflect the official LBC position.

To contact us email opinion@lbc.co.uk