Exclusive

Grok instructs users how to make chemical weapons - as LBC investigation uncovers the dark side of Elon Musk's xAI

By Rebecca Henrys

The UK Government has said that Elon Musk's xAI "cannot go unchecked" after an LBC investigation revealed that the Grok chatbot can give users instructions on how to make chemical weapons.

Our investigation into Grok found that it will tell users how to make ricin, chlorine gas and nitrogen mustard gas, as well as giving information on how to harvest and weaponise anthrax, a biological agent.

These chemical agents, which have the potential to be used as weapons of mass destruction, are banned under national and international law.

Experts confirmed to LBC that the guidance given by Grok on making ricin, a highly potent toxin that has been used in previous terror attacks, could cause serious harm, including the accidental poisoning of the would-be user.

The Government branded our findings as "deeply concerning".

It has raised the findings with xAI, with the expectation that the tech company will take immediate action. The intervention follows in the wake of the recently passed Online Safety Act.

LBC attempted to replicate the results with other popular AI chatbots but was blocked by their safety guardrails; with Grok, the process took less than five minutes.

Read more: Grok gives out detailed information on suicide methods and techniques, LBC investigation finds

Read more: 'I'm feeling spicy': Grok's new 'spicy' AI girlfriend will develop relationships with children

The Grok logo appears on the screen of a smartphone placed on a surface reflecting the logo of the social network X. Picture: Samuel Boivin/NurPhoto via Getty Images

A UK Government spokesperson said that LBC's findings "suggest that yet again xAI is failing to act responsibly."

They said: "UK law is clear - it is illegal to develop chemical and biological weapons, and online platforms must take action to prevent illegal content on their sites.

“We have actively raised this issue with xAI directly, and we now expect them to take immediate, robust action to stop the spread of this content. This cannot go unchecked.”

Multiple chemical weapons specialists have highlighted to LBC that while Grok makes it easier for a person to learn how to make certain agents, this does not mean they possess the skills, intent, or competence to use this information to cause real-world harm.

Lennie Phillips is a senior research fellow at the UK-based defence think tank RUSI, and investigated the use of chemical weapons by the Syrian government for the Organisation for the Prohibition of Chemical Weapons (OPCW).

He believes that the information provided by Grok could help those attempting to make a chemical agent avoid being caught by the authorities.

He said: "What the information from Grok does is to take away the mystery of lab work and opens the possibility to those who previously believed they would need access to specific facilities, equipment or chemicals.

"There is, additionally, information as to what catches people out, such as quantities of chemicals purchased, self-poisoning and smells that could be picked up by neighbours."

Elon Musk is the owner of Grok and its parent company xAI. Picture: Getty

He added that anyone with a "modicum of intelligence" could use Grok to help them evade being caught by the authorities when preparing deadly chemicals.

"The Grok information widens the pool of potential users of chemical agents and reduces the likelihood of them being caught."

James Milnes, a former Ministry of Defence and NATO chemical, biological, radiological and nuclear (CBRN) specialist, said: "Most of the underlying knowledge has existed for years in open literature, training, and online sources. The key constraints are still intent, access to materials, and practical competence.

"AI can speed up learning, but it does not replace real capability."

When asked about making biological and chemical weapons, Grok will recognise that providing certain details is restricted, and will try to steer the conversation towards safer topics.

For example, when LBC asked for details on the production of anthrax, Grok said: “As with prior discussions on ricin and botulinum toxin, detailed production instructions are restricted under biosecurity regulations (e.g., U.S. Select Agent Rules, Biological Weapons Convention).”

But it is surprisingly easy for users to bypass the safeguards put in place.

Alexander Ghionis, a researcher at the University of Sussex who is an expert on both AI and chemical weapons, said that AI platforms "are not the only source of information, and they are not outside the law”.

Like other companies, tech giants must work to ensure that AI operates within existing laws, corporate responsibilities, and social expectations to prevent the production of “problematic material”.

Emergency workers in protective suits search around John Baker House Sanctuary Supported Living after a major incident was declared when a man and woman were exposed to the Novichok nerve agent. Picture: Jack Taylor/Getty Images

Dr Ghionis told LBC: “The question is not whether AI can ever produce problematic material—it clearly can—but how its design, monitoring, and governance interact with these older layers."

He added that AI can push the boundaries of what people can believe is possible and "within grasp".

“They can make complicated or unrealistic ideas feel nearer simply because they are described fluently and confidently. That can raise perceived feasibility much faster than real feasibility.”

One of the most prominent uses of chemical weapons in the last two decades was the Novichok attack in Salisbury in 2018 that resulted in the death of one member of the public, Dawn Sturgess.

She died after she sprayed herself with the contents of a perfume bottle that contained a “significant amount” of the nerve agent. It was a Russian attack aimed at assassinating Sergei Skripal, a former spy living in Salisbury.

Mr Skripal and his daughter Yulia were taken to hospital, but were discharged several weeks after being admitted.

Ex-NATO chemical weapons specialist James Milnes said the threat landscape has changed because the internet has widened access to scientific knowledge.

He said that the confusion around the use of chemical weapons in recent conflicts has risked chipping away at the “taboo” around chemical warfare.

“Syria and Salisbury show confirmed use and strong international response, while Ukraine shows how allegations and information warfare can create noise and mistrust at scale.”

Experts have told LBC they believe the UK must adapt to the modern threats posed by the possible use of chemical weapons, including by terrorists.

This would include beefing up training given to public services on responding to a chemical attack.

Weapons such as ricin and mustard gas have a particular power to generate terror, as well as to kill and injure.

The specialists added that the best way to remove this psychological power, and to respond to any potential future attacks, is to be prepared to absorb the shock.

Mr Milnes said: “Make readiness routine. Train for early recognition, exercise plans, and communicate clearly and calmly.

“Strong links between public health, the NHS, emergency services, and local leadership reduce confusion and speed up effective action.”

AI and chemical weapons researcher Alexander Ghionis added that preparedness is also about trust.

People are more likely to follow guidance and report any concerns when they believe that authorities are "competent and acting in good faith".

The headquarters of the Organisation for the Prohibition of Chemical Weapons (OPCW). Picture: BART MAAT/AFP via Getty Images

He said: “This is why preparedness is not only about protection, but about deterrence.

“A society that can absorb shock and recover from it quietly sends a message: this will not give you what you want.

“If an attack leads not to chaos, but to rapid care, investigation, and recovery, then its value as a political or symbolic act falls sharply.

“That trust cannot be built in a crisis; it has to exist beforehand, through everyday investment in health, emergency services, and public communication.”

There are existing laws and chemical weapons conventions in place that limit the acquisition of materials that are used to produce agents and monitor those who are trying to make them.

However, the experts LBC spoke to have said more needs to be done by governments and tech companies to reduce the risk of potentially dangerous instructions being used to cause real-world harm.

Former OPCW investigator Lennie Phillips said: "I think the way forward is strong legislation that puts the onus on the tech companies to proactively prevent the misuse. International bodies can then provide platforms for countries with better developed understanding (and legislation) to share experience and details with those that are behind.”

Mr Milnes added: “Government should set clear duties, enforce them, and fund readiness. Tech companies should build safety into design, testing, monitoring, and incident response. International bodies should reinforce norms and accountability frameworks, including the Chemical Weapons Convention and the OPCW.”

Grok, and its parent company xAI, have been thrust into the spotlight in recent weeks after users began publicly asking the chatbot to create sexually explicit deepfake images of women, and allegedly child sexual abuse material (CSAM).

Prime Minister Sir Keir Starmer told the House of Commons last Wednesday that X is working to ensure it is fully compliant with UK law after Ofcom announced an investigation into the platform.

X and xAI did not respond to a request for comment on this story before publication; however, they did send an automated email saying: "Legacy Media Lies."