Bid to create AI Authority amid pleas for swifter action from UK Government

22 March 2024, 09:54

AI warning. Picture: PA

Peers will consider the Artificial Intelligence (Regulation) Bill on Friday March 22.

Britain risks “sliding into global irrelevance” on artificial intelligence (AI) if the Government does not introduce new laws to regulate the sector, according to a Conservative peer.

A new body, known as the AI Authority, would be established under a proposal tabled in Parliament by Lord Holmes of Richmond.

His Artificial Intelligence (Regulation) Bill, to be debated at second reading on Friday, would require the authority to push forward AI regulation in the UK and assess and monitor potential risks to the economy.

Security, fairness, accountability and transparency are among the principles that the AI Authority must take into consideration, according to the Bill.

The Government believes a non-statutory approach provides “critical adaptability” but has pledged to keep it under review.

Lord Holmes said: “The current Government approach risks the UK sliding into global irrelevance on this hugely important issue of protecting citizen rights and ensuring AI is developed and deployed in a humanity-enhancing, rather than a society-destroying, way.

“The Government claims that their light-touch approach is ‘pro-innovation’ but innovation is not aided by uncertainty and instability.

“AI offers some of the greatest opportunities for our economy, our society, our human selves.

“It also, if unregulated, holds obvious existential harms. Self-governance and voluntary agreements just don’t cut it.

“We need leadership, right-sized regulation, right now.

“The UK can, the UK must lead when it comes to ethical AI.

“This Bill offers them that very opportunity. I hope they take it.”

The Bill would also seek to ensure that anyone involved in training AI would have to supply the authority with a record of all third-party data and intellectual property (IP) used, and offer assurances that informed consent was secured for its use.

On the Bill’s proposed labelling requirement, Lord Holmes added: “People would know if a service or a good had used or deployed AI in the provision of that service.”

Prime Minister Rishi Sunak (left) and Elon Musk, chief executive of Tesla and SpaceX, in conversation in central London at the conclusion of the second day of the AI Safety Summit on the safe use of artificial intelligence (Kirsty Wigglesworth/PA)

Speaking in November last year, Rishi Sunak said Britain’s AI safety summit would “tip the balance in favour of humanity” after reaching an agreement with technology firms to vet their models before their release.

The Prime Minister said “binding requirements” would likely be needed to regulate the technology, but that now was the time to move quickly without legislating.

Elon Musk, the owner of social media platform X, described AI as “one of the biggest threats” facing humanity.

The Government announced in February that more than £100 million would be spent preparing the UK to regulate AI and use the technology safely, including helping to prepare and upskill regulators across different sectors.

Ministers have chosen to use existing regulators to monitor AI use within their own sectors rather than creating a new, central regulator dedicated to the emerging technology.

A Government spokesman said: “As is standard process, the Government’s position on this Bill will be confirmed during the debate.”

By Press Association
