AI can be ‘sword and shield’ against misinformation, Sir Nick Clegg says

9 April 2024, 14:54

Sir Nick Clegg, Meta’s president of global affairs, speaks at the company’s AI event in London (Picture: PA)

The former deputy prime minister is now head of global affairs at tech giant Meta.

Artificial intelligence (AI) can be a “sword and a shield” against harmful content, not just a tool to spread it, Sir Nick Clegg has said.

The former Liberal Democrat deputy prime minister is now the head of global affairs at tech giant Meta, the parent company of Facebook, Instagram and WhatsApp.

Speaking during an AI event at Meta’s London offices, Sir Nick said that while it was “right” to be “vigilant” about generative AI being used to create disinformation to disrupt elections, AI was also the “single biggest reason” Meta was getting better at reducing the spread of “bad content” on its platforms.

In 2024, billions of people are set to go to the polls with elections due in a number of the world’s largest democracies, including the UK, US and India.

It has led some experts to warn of the potential threat posed by the rapid rise of generative AI tools – including image, text and audio content apps – and the possibility of them being used to spread misinformation and disinformation with the aim of disrupting democratic processes.

A number of senior UK politicians have already been the subjects of so-called deepfakes, which have spread on social media.

And on Tuesday, fact-checking charity Full Fact said the UK was currently vulnerable to misinformation, and more government intervention was needed on the issue with elections on the horizon.

Sir Nick said focus on the issue was important, but argued that good AI was potent protection against bad AI, and that Meta and others had the tools needed to fight the spread of harmful material.

“I would urge everyone – yes, there are risks – but to also think of AI as a sword, not just a shield, when it comes to bad content,” he said.

“If you look at Meta, the world’s largest social media platform, the single biggest reason why we’re getting better and better in reducing the bad content that we don’t want on Instagram and Facebook is for one reason: AI.”

He added that the use of AI to scan Meta’s platforms to find and remove harmful content had reduced the levels of bad content by “50 to 60% over the last two years”, meaning that now “for every 10,000 bits of content, one bit of content might be hate speech”.

Left to right: Dr Anne-Marie Imafidon; Nick Clegg, president of global affairs; Yann LeCun, chief AI scientist; Joelle Pineau, vice president of AI research; and Chris Cox, chief product officer, at Meta’s AI event in London (David Parry Media Assignments/PA)

“Some of the work teams have been doing inside Meta to improve the way that we use our most advanced AI tools to triage content, so that we make sure that the 40,000 people we have working on content moderation really look at the most acute edge cases and they don’t waste a lot of their time looking at stuff that is inoffensive or not a problem has really improved rapidly in recent months,” he said.

“It is right that there is an increasingly high level of industry-wide cooperation, particularly this year because of this unprecedented number of elections.

“We should be vigilant, but I would urge you to also think of AI as a great tool to navigate that difficult landscape and I’m quietly optimistic that the whole industry is trying to really lean into this as cooperatively as possible.”

During the event, Sir Nick also announced that Meta’s next large language model – used to power AI tools, including chatbots built by Meta and other firms – would be released shortly.

Sir Nick said the new model, known as Llama 3, would begin to roll out “within the next month, hopefully less” and would continue over the course of the year.

By Press Association
