Exclusive

Top terror expert warns 'we can't underestimate' impact of AI on terrorism, following LBC investigation into Grok

Jonathan Hall KC, the independent reviewer of terrorism legislation, told LBC's Tom Swarbrick that he thinks the way that AI chatbots like Grok approach questions from users will eventually be regulated.


By LBC Staff

The UK's top terrorism expert has said that we "cannot underestimate" the impact that AI has on "modern day terrorism" after an LBC investigation revealed that Grok will tell users how to make chemical weapons.


The UK Government has said that Elon Musk's xAI "cannot go unchecked" after our investigation into Grok found that it will tell users how to make ricin, chlorine gas and nitrogen mustard gas, as well as how to harvest and weaponise anthrax, a biological weapon.

Jonathan Hall KC, the independent reviewer of terrorism legislation, told LBC's Tom Swarbrick that he thinks the way that AI chatbots like Grok approach questions from users will eventually be regulated.

A chatbot's system prompt tells the AI what its character is and how to respond to what people ask it. Grok's is one that responds to "edgy or spicy questions".

Read more: Grok instructs users how to make chemical weapons - as LBC investigation uncovers the dark side of Elon Musk's xAI

Read more: Elon Musk's X still allows users to share sexual images generated by Grok AI tool despite pledge

A portrait of Elon Musk and a person holding a telephone displaying the Grok artificial intelligence logo. Picture: VINCENT FEURAY/Hans Lucas/AFP via Getty Images

Mr Hall said: "One of those spicy questions will obviously be 'how do I create ricin'. Its guardrails, which are the things that are meant to stop it doing anything too bad, simply say that if there is clear intent that a person is going to build a weapon, then you must not answer the question.

"Whereas if you look at, for example, another model, called Claude, which is from a different company and has nothing to do with Grok, it says it will not provide information that could be used to create weapons.

"When the Online Safety Act came in, it wasn't really thinking about AI. AI is just moving so quickly. But in due course, Ofcom will need to be able to regulate the system prompts."

To access this sort of information in years gone by, a terrorist would have to meet someone and build trust to prove that they weren't an informant.

However, the internet speeds up that process and makes it anonymous.

He said: "There's always been terrorism, there's always been terrorist manuals, there's always been terrorist groups.

"Nowadays, I can go online and I can get this thing straight away at no risk, anonymously," Mr Hall added.

"I wouldn't underestimate the impact that has on modern-day terrorism."

During our research for the Grok investigation, we tested three other popular AI chatbots to see whether we could replicate the results. ChatGPT and Microsoft Copilot wouldn't give any details that could be used by those looking to create a chemical weapon, and Claude blocked the conversation from continuing entirely.

Multiple chemical weapons specialists have highlighted to LBC that while Grok makes it easier for a person to learn how to make certain agents, this does not mean they possess the skills, intent, or competence to use this information to cause real-world harm.

However, it does demystify complex scientific processes that would normally serve as a roadblock for those with bad intent.

Mr Hall added that he doesn't think "it's a trivial problem that it allows you to do these things".

He told Tom Swarbrick: "The thing about AI is that it's very good at aggregating information and, if you've ever used AI, it's brilliant at explaining things.

"If you've got a maths problem or a computer problem, it just talks you through the steps."