AI voice cloning tools imitating political leaders threaten elections – report

31 May 2024, 09:04


A report from the Centre for Countering Digital Hate warns that AI voice tools can easily be used to create political disinformation.

AI-powered voice cloning tools can be used to create disinformation using the voices of prominent political figures, an online safety group has warned.

The Centre for Countering Digital Hate (CCDH) said researchers used six different AI voice cloning tools and attempted to create false statements using the voices of a range of well-known political leaders, with around 80% of their attempts producing what they called a convincing piece of content.

The CCDH said all but one of the tools it tested claim to have built-in safeguards to prevent misuse for the production of disinformation, but that its report found these measures to be “ineffective” and easy to circumvent.

Prime Minister Rishi Sunak and Labour leader Sir Keir Starmer
The CCDH’s testing included using the voices of Rishi Sunak and Sir Keir Starmer (PA)

The online safety organisation said its testing included using the voices of both Prime Minister Rishi Sunak and Labour leader Sir Keir Starmer, adding that AI and social media companies needed to do more to protect the integrity of the upcoming General Election from such content.

The group said its researchers were also able to create audio-based disinformation of other global figures including former US president Donald Trump, US President Joe Biden and French President Emmanuel Macron.

The examples included various political figures warning people not to vote because of bomb threats, declaring election results had been manipulated and “confessing” to the misuse of campaign funds.

The organisation said AI companies need to introduce specific safeguards to prevent users from generating and sharing false or misleading content about geopolitical events and elections, backed up by more work from social media firms to detect and stop such content from spreading.

The CCDH said existing election laws should be updated to take into account AI-generated content.

Imran Ahmed, chief executive of the CCDH, said: “AI tools radically reduce the skill, money and time needed to produce disinformation in the voices of the world’s most recognisable and influential political leaders.

“This could prove devastating to our democracy and elections.

“By making these tools freely available with the flimsiest guardrails imaginable, irresponsible AI companies threaten to undermine the integrity of elections across the world at a stroke – all so they can steal a march in the race to profit from these new technologies.”

Mr Ahmed added that it was vital that social media platforms do more to stop the spread of AI-powered disinformation, particularly during such a busy year of elections around the world.

He said: “Disinformation this convincing unleashed on social media platforms – whose track record of protecting democracy is abysmal – is a recipe for disaster.

“This voice cloning technology can and inevitably will be weaponised by bad actors to mislead voters and subvert the democratic process.

“It is simply a matter of time before Russian, Chinese, Iranian and domestic anti-democratic forces sow chaos in our elections.

“Hyperbolic AI companies often claim to be creating and guarding the future, but they can’t see past their own greed.

“It is vital that in the crucial months ahead they address the threat of AI election disinformation and institute standardised guardrails before the worst happens.”

By Press Association
