
Deepfakes are no longer a novelty; they are a threat to everyone, and the law is not keeping up


Deepfakes are genuine threats to all of us – we need new protections. Picture: LBC/Alamy

By Anna Webb

Deepfakes are already a major threat in the corporate world, but now they put the identities of all of us at risk. Anyone could end up being faked or cloned by criminals and nefarious actors.


Alarmingly, as we have seen in the headlines, this has extended to children and vulnerable people.

The sophistication of deepfakes has enabled criminals to steal £20m from a big-name, technology-driven UK-based company through AI-created video.

In this attack, deepfaked video impersonations instructed an employee to initiate a series of transfers, citing a confidential acquisition that required secrecy and speed. The impostors persuaded the employee to execute 15 separate transfers to bank accounts in Hong Kong.

In other attacks, fraudsters have used AI to clone voices for “bank break-ins” in Europe and Asia, bypassing biometric checks.

One bank’s fraud team noted that the attackers conversed fluidly with customer service, using the AI-cloned voice in real time.

If major companies, with all their protections, are susceptible to attacks facilitated by AI, what should the rest of society do?

The risks to ordinary people

The risk to individuals from deepfakes is significant and includes the possibility of fraud and extreme forms of harassment and abuse.

Cloning of images to fake humiliating or sensitive content is a growing and profoundly concerning aspect of the deepfake threat to individuals.

For vulnerable people and children, the dangers are becoming acute. Existing online safety education has not kept pace with the rapid evolution of AI, leaving a gap in how young people are prepared to recognise and respond to these threats.

Addressing that gap will require an understanding of the risks posed by AI technologies to be explicitly embedded into the curriculum, rather than treated as an extension of traditional internet safety.

While some cyber security companies are already helping schools with awareness initiatives, these risks are not temporary and are only likely to increase.

With a clear duty of care, the education sector needs to take a proactive and structured approach to preparing young people for an AI-enabled digital future.

New laws needed

Training alone is not sufficient, however. It’s important we revise legislation to protect individuals from having their images used to create deepfake content.

The Danish government’s pioneering proposal to expand copyright law to include personal likenesses – our face, our voice, our body – is very welcome.

It acknowledges that those aspects essential to our identities deserve legal protection.

We should be under no illusions: identifying and removing deepfakes is complex, which makes enforcing new laws a major challenge. Unfortunately, content – real or fake – can spread rapidly across social media and into different jurisdictions.

Since the internet knows no international boundaries, cooperation between nations will be necessary to reduce the risks.

We must do everything we can to protect everyone, especially those who are least able to protect themselves. The evidence from the frontline of cyber security strongly suggests legal reform needs to be matched with more effective detection technology and interception techniques.

More fundamentally, we need to improve public awareness so that we can all be vigilant and contribute to the protection of society, especially children and the vulnerable.

_____________________

Anna Webb is the Head of Global Security Operations at Kocho

LBC Opinion provides a platform for diverse opinions on current affairs and matters of public interest.

The views expressed are those of the authors and do not necessarily reflect the official LBC position.

To contact us email opinion@lbc.co.uk