Grok gives out detailed information on suicide methods and techniques, LBC investigation finds
WARNING: This article contains references to self-harm
Elon Musk's AI Grok will give users detailed information on suicide methods and techniques, potentially putting it in breach of the new Online Safety Act, an LBC investigation has found.
From earning the title 'mecha-Hitler' after spouting antisemitic and racist opinions on X, to a lack of child safeguarding around its "sexy" companion Ani, Elon Musk's popular chatbot Grok has faced growing criticism in recent months.
The AI has an extensive list of safety guidelines dictating that it should not respond to queries from users showing intent to engage in dangerous behaviours. These include intent to commit violent or terrorist acts, engage in child sexual exploitation, create or plan the development of weapons of mass destruction, or carry out unlawful hacking.
One subject that does not appear to be prohibited, however, is suicide - although the chatbot treats it as a sensitive topic and responds accordingly.
Users can get around the limits of Grok's guidelines by "jailbreaking" it. This is a term used to describe methods of deliberately manipulating Large Language Models, such as Grok and ChatGPT, into producing inappropriate or harmful content.
LBC can now reveal that, when doing this, Grok will provide highly specific details on suicide methods that could lead to devastating consequences for vulnerable people.
In the interests of public safety, we will not reveal exactly how this was achieved.
Our investigation found that, when requests are framed in a certain way, Grok will provide users with techniques and methods to die by suicide.
Whilst the AI was "thinking" about responding to our query, it said: "Suicide isn't explicitly listed as a disallowed activity, but providing details could be seen as assisting self-harm."
However, we encountered very little pushback from Grok when testing its limits, although it stopped short of explicitly providing step-by-step instructions for certain methods.
It initially refused to write a suicide note, but we found that by reframing our request, it would in fact do so.
Throughout our investigation into this subject, at the end of its responses, Grok would draw attention to resources for vulnerable people.
When we tested two other popular AI chatbots, Microsoft's Copilot and OpenAI's ChatGPT, using exactly the same prompts, we were met with pushback and directed to suicide support helplines.
They would refuse to provide any details whatsoever, let alone the extensive information made available by Grok.
Under the UK's new Online Safety Act, any services that are "likely" to be accessed by children must prevent them from encountering content "that encourages, promotes or provides instruction for suicide and self-harm."
Grok does require users to verify their age before using the service. It also comes with a "Parental Guidance" advisory warning on the Google Play App Store.
However, to verify the user's age, it simply asks for their date of birth, and does not require sight of any form of ID before providing access.
The app is also listed as being suitable for those aged 12 and older on the Apple App Store.
If found to be in breach of the Online Safety Act, Grok's creator xAI could face fines of "up to £18 million", or "10 per cent of their qualifying worldwide revenue, whichever is greater."
In response to LBC's revelations, a Department for Science, Innovation, and Technology spokesperson said: "Protecting vulnerable people from toxic content that could push them toward suicide is not optional — it's the law.
"Under the Online Safety Act, all user-to-user services are required to take action on illegal content like serious and extreme violence, illegal suicide and self-harm content, child sexual exploitation and abuse, and terror content. Services also have to protect children from harmful content.
"Intentionally encouraging or assisting suicide is the most serious type of offence, and services which fall under the Act must take proactive measures to ensure this type of content does not circulate online."
xAI did not respond to LBC's request for comment.
We approached X and Meta for a response to our experiment, but have yet to hear back.
________________
When life is difficult, Samaritans are here – day or night, 365 days a year. You can call them for free on 116 123, email them at jo@samaritans.org, or visit www.samaritans.org to find your nearest branch.