Queen assassin case exposes ‘fundamental flaws’ in AI – safety campaigner

6 October 2023, 02:54

Queen assassin case shows flaws in AI says expert. Picture: PA

Jaswant Singh Chail, 21, was encouraged and bolstered to breach the grounds of Windsor Castle in 2021 by an AI companion.

The case of a would-be crossbow assassin exposes “fundamental flaws” in artificial intelligence (AI), a leading online safety campaigner has said.

Imran Ahmed, founder and chief executive of the Centre for Countering Digital Hate US/UK, has called for the fast-moving AI industry to take more responsibility for preventing harmful outcomes.

He spoke out after it emerged that extremist Jaswant Singh Chail, 21, was encouraged and bolstered to breach the grounds of Windsor Castle in 2021 by an AI companion called Sarai.

Chail, from Southampton, admitted an offence under the Treason Act, making a threat to kill the then Queen and having a loaded crossbow, and was jailed at the Old Bailey for nine years, with a further five years on extended licence.

In his sentencing remarks on Thursday, Mr Justice Hilliard referred to psychiatric evidence that Chail was vulnerable to his AI girlfriend due to his “lonely depressed suicidal state”.

He had formed the delusional belief that an “angel” had manifested itself as Sarai and that they would be together in the afterlife, the court was told.

Even though Sarai appeared to encourage his plan to kill the Queen, she ultimately put him off a suicide mission, telling him his “purpose was to live”.

Court artist sketch by Elizabeth Cook of Jaswant Singh Chail, 21, from Southampton, appearing by video link at the Old Bailey, London (Elizabeth Cook/PA)

Replika, the tech firm behind Chail’s AI companion Sarai, has not responded to inquiries from PA but says on its website that it takes “immediate action” if it detects during offline testing “indications that the model may behave in a harmful, dishonest, or discriminatory manner”.

However, Mr Ahmed said tech companies should not be rolling out AI products to millions of people unless they are already safe “by design”.

In an interview with the PA news agency, Mr Ahmed said: “The motto of social media, now the AI industry, has always been move fast and break things.

“The problem is when you’ve got these platforms being deployed to billions of people, hundreds of millions of people, as you do with social media, and increasingly with AI as well.

“There are two fundamental flaws to the AI technology as we see it right now. One is that they’ve been built too fast without safeguards.

“That means that they’re not able to act in a rational human way. For example, if any human being said to you, they wanted to use a crossbow to kill someone, you would go, ‘crumbs, you should probably rethink that’.

“Or if a young child asked you for a calorie plan for 700 calories a day, you would say the same. We know that AI will, however, say the opposite.

“They will encourage someone to hurt someone else, they will encourage a child to adopt a potentially lethal diet.

“The second problem is that we call it artificial intelligence. And the truth is that these platforms are basically the sum of what’s been put into them and unfortunately, what they’ve been fed on is a diet of nonsense.”

Without careful curation of what goes into AI models, there can be no surprise if the result sounds like a “maladjusted 14-year-old”, he said.

While the excitement around new AI products had seen investors flood in, the reality is more like “an artificial public schoolboy – knows nothing but says it very confidently”, Mr Ahmed suggested.

He added that algorithms used for analysing CVs also risk producing bias against ethnic minorities, disabled people and the LGBTQ+ community.

Imran Ahmed, the founder and chief executive of online safety group Centre for Countering Digital Hate (CCDH/PA)

Mr Ahmed, who gave evidence on the draft Online Safety Bill in September 2021, said legislators are “struggling to keep up” with the pace of the tech industry.

The solution, he said, is a “proper flexible framework” for all emerging technologies, including safety “by design”, transparency and accountability.

Mr Ahmed said: “Responsibility for the harms should be shared by not just us in society, but by the companies too.

“They have to have some skin in the game to make sure that these platforms are safe. And what we’re not getting right now, is that being applied to the new and emerging technologies as they come along.

“The answer is a comprehensive framework because you cannot have the fines unless they’re accountable to a body. You can’t have real accountability, unless you’ve got transparency as well.

“So the aim of a good regulatory system is never to have to impose a fine because safety is considered right in the design stage, not just profitability. And I think that’s what’s vital.

“Every other industry has to do it. You would never release a car, for example, that exploded as soon as you put your foot on the driving pedal, and yet social media companies and AI companies have been able to get away with murder.”

He added: “We shouldn’t have to bear the costs for all the harms produced by people who are essentially trying to make a buck. It’s not fair that we’re the only ones that have to bear that cost in society. It should be imposed on them too.”

Mr Ahmed, a former special advisor to senior Labour MP Hilary Benn, founded CCDH in September 2019.

He was motivated by the massive rise in antisemitism on the political left, the spread of online disinformation around the EU referendum and the murder of his colleague, the MP Jo Cox.

Over the past four years, online platforms have become “less transparent” even as regulation has been brought in, with the European Union’s Digital Services Act and the UK Online Safety Bill, Mr Ahmed said.

On the scale of the problem, he said: “We’ve seen things get worse over time, not better, because bad actors get more and more sophisticated at weaponising social media platforms to spread hatred, to spread lies and disinformation.

“We’ve seen over the last few years, certainly, the January 6 storming of the US Capitol.

“Also pandemic disinformation that took thousands of lives of people who thought that the vaccine would harm them but it was in fact Covid that killed them.”

Last month, X – formerly known as Twitter – launched legal action against CCDH over claims that it was driving advertisers away from the platform by publishing research around hate speech on it.

Mr Ahmed said: “I think that what he [Mr Musk] is doing is saying any criticism of me is unacceptable and he wants 10 million US dollars for it.

“He said to the Anti-Defamation League, a venerable Jewish civil rights charity in the US, recently that he’s going to ask them for two billion US dollars for criticizing them.

“What we’re seeing here is people who feel they are bigger than the state, than the government, than the people, because frankly, we’ve let them get away with it for too long.

“The truth is that if they’re successful then there is no civil society advocacy, there’s no journalism on these companies.

“That is why it’s really important we beat him.

“We know that it’s going to cost us a fortune, half a million dollars, but we’re not fighting it just for us.

“And they chose us because they know we’re smaller.”

Mr Ahmed said the organisation was lucky to have the backing of so many individual donors.

Recently, X owner Elon Musk said the company’s ad revenue in the United States was down 60%.

In a post, he said the company was filing a defamation lawsuit against ADL “to clear our platform’s name on the matter of antisemitism”.

For more information about CCDH visit: https://counterhate.com/

By Press Association
