UK vulnerable to misinformation, fact-checking charity warns

9 April 2024, 00:04

UK vulnerable to misinformation. Picture: PA

Full Fact’s annual report says the UK is ill-equipped to tackle misleading content.

The UK is highly vulnerable to misinformation and is currently not properly equipped to combat it, the annual report from fact-checking charity Full Fact says.

The report warns that without urgent action, the UK risks falling behind the pace of international progress in protecting citizens from misinformation.

It says that a combination of significant gaps in the Online Safety Act and the rapid rise of generative AI means a fundamental overhaul of the UK’s legislative and regulatory framework is needed in order to adequately combat misinformation and disinformation.

Generative AI, most notably chatbots such as ChatGPT, has become a more prominent part of daily life over the past 18 months as it has been made widely available in content creation and productivity tools – including being used to create misleading images, video and audio.

In its report, Full Fact warns that this technology could be used to power disinformation campaigns that disrupt or undermine confidence in the result of forthcoming elections.

A number of politicians, including Prime Minister Rishi Sunak, Labour leader Sir Keir Starmer and the mayor of London Sadiq Khan have been the subjects of misleading content – or deepfakes – in recent times.

The report says that generative AI could play a role in the upcoming general election by making it cheap, easy and quick to spread content so plausible that it cannot easily or rapidly be identified as false.

The charity says that concerns around the rapid evolution of AI technology are being exacerbated by what it described as “fundamental gaps” in the Online Safety Act, which passed into law last year and is designed to protect internet users from encountering online harms.

“Despite promises that the regulation would apply to disinformation and misinformation that could cause harm to individuals, such as anti-vaccination content, there are only two explicit areas of reference to misinformation in the final Act,” Full Fact’s report says.

“One is that a committee should be set up to advise the regulator, Ofcom, on policy towards misinformation and disinformation, and how providers of regulated services should deal with it.

“The other is that Ofcom’s existing media literacy duties should expand to cover public awareness of misinformation and disinformation, and the nature and impact of harmful content.

“This is not good enough, given the scale of the challenge we face.”

Chris Morris, chief executive of Full Fact, said: “The Government’s failed promise to tackle information threats has left us facing down transformative, digital challenges with an analogue toolkit.

“An Online Safety Act with just two explicit areas of reference to misinformation cannot be considered fit for purpose in the age of AI.”

The charity has listed 15 recommendations for government, political parties, regulators and tech companies to protect the UK’s information environment, including urging either amendments to the Online Safety Act or new legislation to better target misinformation and disinformation.

The recommendations also call on political parties to commit to using generative AI responsibly and transparently.

“A better future is possible, but only if all political parties show that they are serious about restoring trust in our system by campaigning transparently, honestly, and truthfully,” Mr Morris said.

A spokesperson for the Department for Science, Innovation and Technology said: “We are working extensively across government to ensure we are ready to rapidly respond to misinformation.

“The Online Safety Act has been designed to be tech-neutral and future-proofed, to ensure it keeps pace with emerging technologies.

“Once implemented, it will require social media platforms to swiftly remove illegal misinformation and disinformation, including where it is AI-generated, as soon as they become aware of it.

“In addition to the work of our defending democracy taskforce, the digital imprints regime will also require certain political campaigning digital material to have a digital imprint making clear to voters who is promoting the content.”

By Press Association
