AI minister faces call to tackle chatbots linked to ‘deep tragedy’ cases

Conservative backbencher Bob Blackman warned that “the reality is now that chatbots in particular are prompting young people to commit suicide and also to self-harm”

AI research. Picture: PA

By Rebecca Henrys

MPs have urged the AI minister to tackle chatbots that encourage children to end their own lives.

Kanishka Narayan told the Commons AI-based search tools are already covered by the Online Safety Act, which puts a duty on social media and search engine firms to steer children away from seeing illegal content.

But Conservative backbencher Bob Blackman warned that “the reality is now that chatbots in particular are prompting young people to commit suicide and also to self-harm”.

The Harrow East MP added: “What action can the minister take to actually make sure that these chatbots are taken down and do not give this sort of advice?”

Technology minister Mr Narayan, whose portfolio includes AI (artificial intelligence), replied that each case of suicide and self-harm was a “deep tragedy”.

Kanishka Narayan, Labour MP for Vale of Glamorgan. Picture: House of Commons/Laurie Noble

He continued: “We have looked very carefully at these issues. Some chatbots including live search and including user-to-user engagement are in the scope of the Online Safety Act.

“We want to ensure the enforcement against those, where relevant, is robust.

“Of course, the Secretary of State (Liz Kendall) has looked in particular at this and commissioned work to make sure that if there are any gaps in the legislation, we are looking at it fully and taking robust action, too.”

Mr Blackman had earlier asked what steps the Government was taking “to keep people safe online”.

Mr Narayan said: “This Government is committed to keeping people safe online. For the first time, platforms now have a legal duty to ensure they are protecting users from illegal content and, in particular, safeguarding children from harmful content.

“We have gone further still. Within weeks, this team has introduced self-harm, cyber-flashing and now strangulation and extreme violence in pornography as priority offences.

“We will go further still in backing Ofcom to make sure that enforcement is robust too.”

Liberal Democrat technology spokeswoman Victoria Collins warned some chatbots have a “human-like assertive nature”, which young people were turning to “for medical opinions, legal advice and emotional support, with fatal consequences without clear accountability”.

She urged Mr Narayan to “commit to working with Ofcom about classification”.

Commons Science, Innovation and Technology Committee chairwoman Dame Chi Onwurah said the Government has “refused” to implement the “call for legislation to bring generative AI under the same categorisation as other high-risk services”.

Dame Chi’s committee warned in a July report that “the Online Safety Act does not protect users from the commodification of synthetic mis/disinformation, or provide effective transparency for the systems that produce them”.

It recommended that, “to protect citizens from the AI-exacerbated spread of misinformation and harm, the Government should pass legislation that covers generative AI platforms, bringing them in line with other online services that pose a high risk of producing or spreading illegal or harmful content”.

Dame Chi asked: “Will the minister say specifically under what circumstances chatbot advice is covered by the Online Safety Act and whether there will be enforcement?”

Mr Narayan replied: “Chatbots which involve live search and which involve user-to-user engagement are in the scope of the Online Safety Act.”