Charity issues AI warning over online child sexual abuse

15 February 2024, 00:04

Online abuse. Picture: PA

New research from the Lucy Faithfull Foundation shows many adults worry about AI but are unaware it is already being used in online abuse.

Most UK adults have fears about advances in artificial intelligence (AI), particularly its potential to harm children, but many are unaware of how it is already being used in sexual abuse, according to new research.

The study, from child protection charity the Lucy Faithfull Foundation, found that 66% of adults said they had concerns about the impact of the technology on children.

This was despite the research finding that the majority (70%) were unaware that AI was already being used to generate sexual abuse images of children, and that 40% did not know such content is illegal.

The Lucy Faithfull Foundation said it was publishing its findings to raise awareness of the use of AI to exploit children.

The foundation, which runs the anonymous and confidential helpline Stop It Now, said it wanted to remind people of the law and what constitutes online child sexual abuse in the UK.

Donald Findlater, director of the helpline, said: “With AI and its capabilities rapidly evolving, it’s vital that people understand the dangers and how this technology is being exploited by online child sex offenders every day.

“Our research shows there are serious knowledge gaps amongst the public regarding AI – specifically its ability to cause harm to children.

“Unfortunately, the reality is that people are using this new, and unregulated, technology to create illegal sexual images of children, as well as so-called ‘nudified images’ of real children, including children who have been abused.

“People must know that AI is not an emerging threat – it’s here, now. Stop It Now helpline advisers deal with hundreds of people every week seeking help to stop viewing of sexual images of under-18s.

“These callers include people viewing AI-generated child sexual abuse material. We want the public to be absolutely clear that viewing sexual images of under-18s, whether AI-generated or not, is illegal and causes very serious harm to real children across the world.

“To anyone that needs support to change their online behaviours, contact the Stop It Now helpline. We can offer free, confidential support before it’s too late.”

The campaign to boost awareness has been backed by the Internet Watch Foundation (IWF), which proactively tracks down and removes child sexual abuse imagery online.

Dan Sexton, IWF chief technical officer, said: “There is a very real risk we could face a landslide of AI generated child sexual abuse which is completely indistinguishable from real images of children being raped and sexually tortured.

“These new technologies are allowing offenders to mass produce imagery of children who have already suffered abuse in real life, and create imagery of those same children in new scenarios.

“The potential impact for those trying to rid the internet of this material is profound – and the impact on children, who are made victims all over again every time this imagery is shared, is hard to comprehend.

“This is a bleak new front in the war on child sexual abuse imagery, and one which could well get out of control if serious action is not taken now to get a grip on it.”

By Press Association
