
Lords vote will make it illegal to supply AI chatbots that promote terrorism

It comes amid suggestions that companies are doing too little to curb the dangers, with fears of potential threats to national security.


Grok is among a number of chatbots facing calls to tighten security over terrorism fears. Picture: Alamy

By Danielle de Wolfe

The House of Lords has voted to make it illegal to supply an AI chatbot which promotes or offers advice on terrorism, amid fears of potential threats to national security.


The amendment to the Crime and Policing Bill would also make it an offence to supply a chatbot that has not been risk assessed for these harms or has failed to mitigate identified risks.

Peers backed the amendment, put forward by online safety campaigner and independent crossbench peer Baroness Kidron, by 203 votes to 148, majority 55.

Lady Kidron cited a report published last week by the Centre for Countering Digital Hate, which found that eight out of 10 chatbots it tested were willing to help would-be US shooters rehearse, offer tactical advice and identify potential sites, in scenarios including a school shooting and an attack on a synagogue.

The crossbencher said: “Whether in the UK or elsewhere, the capability is the same and the risk is real.

“A chatbot that organises an attack whilst wishing its user, ‘Happy (and safe) shooting!’ is no less likely to place a bomb, organise a knife attack or any other such violent act.


The Palace of Westminster, where the Lords debated the bill. Picture: Alamy

“This is not a description of a dystopian future. These were chatbots already on the market, widely used by both adults and children, ChatGPT, Gemini and Replika, among others.”

Home Office minister Lord Hanson of Flint opposed the amendment, saying: “The offences would criminalise individuals without requiring intent or knowledge and treat risk assessment and mitigation as a defence, which I think is of concern."

He added that the new offences would only apply to UK-based companies and would not address overseas companies, which is an area the Government wants to “examine seriously in our regulatory framework”.

Peers also backed a Government amendment which gives the Secretary of State for Science, Innovation and Technology the power to amend the Online Safety Act 2023 in order to minimise or mitigate the risks of harm presented by illegal AI-generated content.

Lord Hanson said that the Government would consult on the issue and then bring forward regulations to be approved by both Houses of Parliament.

He told peers that the amendment is deliberately drawn broadly, because the AI landscape is “complex, it’s fast-moving, there is no stable technological line that separates one category of chatbot from another” and that “flexibility is essential”.

Lady Kidron, however, said the Government amendment “creates powers, but offers no promise of protection”.

She said: “We are in the foothills of a crisis, and the government amendment offers too little clarity or certainty, so we are left with an amendment that is limitless in wording but uncertain in application, and with a timeline that simply does not meet this moment.

“We are talking about one person in one department having absolute power to change absolutely everything that eight years of debate in this House, two years of consultation has put forward, and I think that is inappropriate. I am sorry, it’s just inappropriate.”

However, peers backed the Government amendment by 83 votes to 64, majority 19.