UK should be more positive about AI to avoid missing out on tech ‘goldrush’

2 February 2024, 03:54


The Lords Communications and Digital Committee said the UK risked falling behind other countries if it did not embrace the benefits of AI.

The UK’s approach to artificial intelligence has become too narrowly focused on AI safety and the threats the technology could pose, rather than its benefits, meaning it could “miss out on the AI goldrush”, a House of Lords Committee has warned.

In a major report on artificial intelligence and large language models (LLMs) – which power generative AI tools such as ChatGPT – the Lords Communications and Digital Committee said the technology would produce era-defining changes comparable with the invention of the internet.

However, it warned that the UK needed to rebalance its approach to also consider the opportunities AI can offer; otherwise it risks losing its international influence and becoming strategically dependent on overseas tech firms for a technology expected to play a key role in daily life in the years to come.

It said that some of the “apocalyptic” concerns around threats to human existence from AI were exaggerated, and should not distract policy makers from responding to more immediate issues.

The UK hosted the first AI Safety Summit at Bletchley Park in November, where the Government brought together more than 25 nations, plus representatives from the UN and EU, to discuss the long-term threats of the technology. These include its potential to pose an existential threat to humans, to aid criminals in carrying out more sophisticated cyber attacks, and to be used by bad actors to develop biological or chemical weapons.

Both the Prime Minister, Rishi Sunak, and Technology Secretary Michelle Donelan have said that in order for the UK to reap the benefits of AI, governments and tech firms must first “grip the risks”.

While calling for mandatory safety tests for high-risk AI models and more focus on safety by design, the report urged the Government to take action to prioritise open competition and transparency in the AI market, warning that failure to do so would see a small number of the largest tech firms consolidate control of the growing market and stifle new players in the sector.

The technology would produce era-defining changes comparable with the invention of the internet, the committee said (John Walton/PA)

The committee said it welcomed the Government’s work on positioning the UK as an AI leader – including through hosting the AI Safety Summit – but said a more positive vision for the sector was needed in order to reap the social and economic benefits.

The report called for greater support for AI start-ups, a boost for computing infrastructure and more work to improve digital skills, as well as exploring further the potential for an “in-house” sovereign UK large language model.

Baroness Stowell, chair of the Lords Communications and Digital Committee, said: “The rapid development of AI Large Language Models is likely to have a profound effect on society, comparable to the introduction of the internet.

“That makes it vital for the Government to get its approach right and not miss out on opportunities – particularly not if this is out of caution for far-off and improbable risks. We need to address risks in order to be able to take advantage of the opportunities – but we need to be proportionate and practical. We must avoid the UK missing out on a potential AI goldrush.

“One lesson from the way technology markets have developed since the inception of the internet is the danger of market dominance by a small group of companies. The Government must ensure exaggerated predictions of an AI-driven apocalypse, coming from some of the tech firms, do not lead it to policies that close down open-source AI development or exclude innovative smaller players from developing AI services.

“We must be careful to avoid regulatory capture by the established technology companies in an area where regulators will be scrabbling to keep up with rapidly developing technology.

“There are risks associated with the wider dissemination of LLMs. The most concerning of these is the possibility of making existing malicious actions quicker and easier – from cyber attacks to the manipulation of images for child sexual exploitation. The Government should focus on how these can be tackled and not become distracted by sci-fi end-of-the-world scenarios.

“One area of AI disruption that can and should be tackled promptly is the use of copyrighted material to train LLMs. LLMs rely on ingesting massive datasets to work properly but that does not mean they should be able to use any material they can find without permission or paying rightsholders for the privilege. This is an issue the Government can get a grip of quickly and it should do so.

“These issues will be of huge significance over the coming years and we expect the Government to act on the concerns we have raised and take the steps necessary to make the most of the opportunities in front of us.”

Bank of England Governor Andrew Bailey said AI will not be a “mass destroyer of jobs” and “there is great potential with it”.

He told the BBC he was an “optimist”, adding: “I’m an economic historian, before I became a central banker.

“Economies adapt, jobs adapt, and we learn to work with it. And I think, you get a better result by people with machines than with machines on their own.”

In response to the report, a spokesperson from the Department for Science, Innovation and Technology (DSIT), said: “We do not accept this – the UK is a clear leader in AI research and development, and as a Government we are already backing AI’s boundless potential to improve lives, pouring millions of pounds into rolling out solutions that will transform healthcare, education and business growth, including through our newly announced AI Opportunity Forum.

“The future of AI is safe AI. It is only by addressing the risks of today and tomorrow that we can harness its incredible opportunities and attract even more of the jobs and investment that will come from this new wave of technology.

“That’s why we have spent more than any other government on safety research through the AI Safety Institute and are promoting a pro-innovation approach to AI regulation.”

By Press Association
