'Nudifying apps' are a growing threat to women and girls. Will the Online Safety Act be enough?

10 March 2025, 06:48


By Katie Mehew

Taylor Swift, Natalie Portman, Cathy Newman - three exceptional women with something deeply troubling in common: they have all been victims of sexually explicit deepfakes.


Sexually explicit deepfakes are AI-generated or altered images or videos depicting individuals in intimate or sexual situations, without consent. Advances in technology mean that deepfakes are alarmingly lifelike, with devastating impacts on victims.

From 17 March 2025, new provisions under the Online Safety Act will require internet service providers - including social media platforms and search engines - to proactively remove illegal content, including intimate image abuse such as sexually explicit deepfakes, and to prevent it from appearing in the first place. Websites hosting such content could face fines from Ofcom, the UK's communications regulator, of up to 10% of their global revenue.

The Government also plans to criminalise the creation of sexually explicit deepfakes without consent. The Online Safety Act already made it a criminal offence to share or threaten to share intimate images without consent, including deepfakes. These measures align with the Government's pledge to halve violence against women and girls within the next decade.

These legal reforms cannot come soon enough. Research shows that 98% of deepfakes are pornographic, with 99% targeting women. While celebrities are often targeted, ordinary women face the greatest risk. A 2024 study by My Image, My Choice found that the majority of deepfake victims are not public figures, but everyday women.

Nudifying apps, which digitally remove clothing from images, are fuelling this abuse. My Image, My Choice reported that one nudifying app processed 600,000 photos of women within 21 days of launching. These apps are inherently misogynistic - they are often ineffective on photos of men - and, while banned from app stores, they remain accessible via search engines and social media ads. Their continued existence enables, and profits from, the exploitation of women and girls.

Children at risk

According to Internet Matters, 13% of children - around half a million teenagers in the UK - have encountered sexual deepfakes, whether by sending, receiving, creating or viewing them online. Schools are seeing a concerning rise in child-on-child abuse involving these apps. Whether driven by bullying or curiosity, young users may not realise that generating deepfakes of other children is not only harmful but illegal.

The experience of being deepfaked in adulthood is horrifying and deeply violating. For a child, it could be even more traumatic, with long-term consequences.

The need for urgent action

English law already criminalises the creation or distribution of sexual images of under 18s, including AI-generated images. While the new legislation is a critical step, enforcement and education - particularly in schools - are equally vital. Without urgent action, the unchecked rise of deepfake technology will continue to fuel online misogyny, exploitation, and lasting harm to victims.

________________

Katie Mehew is an Associate in the Reputation Protection and Crisis Management team at Mishcon de Reya.

LBC Opinion provides a platform for diverse opinions on current affairs and matters of public interest.

The views expressed are those of the authors and do not necessarily reflect the official LBC position.

To contact us email views@lbc.co.uk