Action needed to protect election from AI disinformation, study says

29 May 2024, 00:04

AI study. Picture: PA

The Alan Turing Institute’s Centre for Emerging Technology and Security said AI disinformation could be used to undermine democratic processes.

Artificial intelligence-generated deepfakes could be used to create fake political endorsements ahead of the General Election, or to sow broader confusion among voters, a study has warned.

Research by The Alan Turing Institute’s Centre for Emerging Technology and Security (Cetas) urged Ofcom and the Electoral Commission to address the use of AI to mislead the public, warning it was eroding trust in the integrity of elections.

While the study found limited evidence so far that AI will directly affect election results, the researchers warned of early signs of damage to the broader democratic system, particularly through deepfakes causing confusion, or AI being used to incite hate or spread disinformation online.

It said the Electoral Commission and Ofcom should issue guidelines and seek voluntary agreements with political parties setting out how AI should be used in campaigning, and require AI-generated election material to be clearly labelled as such.

The research team warned that there was currently “no clear guidance” on preventing AI from being used to create misleading content around elections.

Some social media platforms have already begun labelling AI-generated material in response to concerns about deepfakes and misinformation, and in the wake of a number of incidents of AI being used to create or alter images, audio or video of senior politicians.

In its study, Cetas said it had created a timeline of how AI could be used in the run-up to an election, suggesting it could be used to undermine candidates’ reputations, falsely claim that they had withdrawn, or spread disinformation to shape voter attitudes on particular issues.

The study also said misinformation around how, when or where to vote could be used to undermine the electoral process.

Sam Stockwell, research associate at the Alan Turing Institute and the study’s lead author, said: “With a general election just weeks away, political parties are already in the midst of a busy campaigning period.

“Right now, there is no clear guidance or expectations for preventing AI being used to create false or misleading electoral information.

“That’s why it’s so important for regulators to act quickly before it’s too late.”

Dr Alexander Babuta, director of Cetas, said: “While we shouldn’t overplay the idea that our elections are no longer secure, particularly as worldwide evidence demonstrates no clear evidence of a result being changed by AI, we nevertheless must use this moment to act and make our elections resilient to the threats we face.

“Regulators can do more to help the public distinguish fact from fiction and ensure voters don’t lose faith in the democratic process.”

By Press Association
