One in four Brits 'unconcerned' about AI deepfake porn images made without consent
A senior police officer has warned that the use of AI is surging amid an epidemic in violence against women and girls
One in four people think there is nothing wrong with sharing deepfake porn photos online, even if those depicted have not given consent, a police-commissioned survey has revealed.
A study of 1,700 people by the crime and justice consultancy Crest Advisory has laid bare the British public's perception of AI generated sexual images at a time when reports of such cases are soaring.
A quarter of those surveyed said they are unconcerned, or feel neutral, about creating and sharing AI generated sexual images of other people, even without their consent.
Almost one in 20 people said they have created some kind of deepfake in the past, while more than one in ten said they would make one in the future.
Seven percent of respondents have had their identity used in a sexual or intimate deepfake, but just 51 percent of them had reported it to the police.
A further 12 percent of people felt it was morally acceptable to create or share non-consensual sexual or intimate AI images of others, with younger respondents more likely to hold this view than older people.
Among those who did not speak out, the most commonly cited reasons were embarrassment and uncertainty that the offence would be treated seriously.
That is despite the fact that creating non-consensual sexually explicit AI images, or deepfakes, is a criminal offence under the new Data Act.
The survey, commissioned by the Office of the Police Chief Scientific Advisor, suggests men under 45 were the most likely to find it acceptable to create and share deepfakes.
This group was also more likely to watch pornography, hold misogynistic views and have positive attitudes towards AI.
However, the report acknowledged that the evidence for this association between age, gender and such attitudes was limited, and it called for further research into the apparent link.
The findings have prompted a senior police officer to warn that the use of AI is surging amid an epidemic in violence against women and girls.
Det Ch Supt Claire Hammond said: "The rise of AI technology is accelerating the epidemic of violence against women and girls across the world. Technology companies are complicit in this abuse and have made creating and sharing abusive material as simple as clicking a button, and they have to act now to stop it.”
Urging victims to go to the police, she said: “This is a serious crime, and we will support you. No one should suffer in silence or shame.”
It comes after another recent survey of more than 2,500 girls and young women by Girlguiding found that 26 percent of girls aged 13 to 18 have seen a sexualised deepfake of themselves, a friend or a celebrity.
Meanwhile, the number of indecent deepfake images has surged by 1,780% in the last five years.
Felicity Oswald, who was previously interim chief executive and before that chief operating officer at the National Cyber Security Centre (NCSC), said social media firms are not doing enough to protect girls and women, with many tech firms slashing budgets for online safety.
She said: “I’ve worked really closely with technology companies based across the world, including social media companies, and I know many of their staff work tirelessly to keep their users safe.
“However, it’s not enough yet. The trends are getting worse rather than better.

“Technology companies across the world seem to be reducing the number of staff and the systems thinking about harmful content.”