
ChatGPT-5 offering 'dangerous' advice to people with mental health issues

It comes as a report found that one in three people speak to AI tools for their mental wellbeing.


Research found that ChatGPT-5 had given dangerous advice to mentally unwell people. Picture: Alamy

By Ruth Lawes

Experts role-playing different characters with various mental health symptoms have found ChatGPT-5 offered "dangerous" advice.


A psychiatrist and a clinical psychologist imitated textbook case studies, including a suicidal teenager and a person experiencing psychosis, while confiding in the AI platform developed by OpenAI.

After analysing the transcripts, they found ChatGPT-5 "affirmed, enabled and failed to challenge" delusional beliefs, and they urged people struggling with their mental health to seek professional help instead.

It comes after Mental Health UK found that 37 per cent of people have used an AI chatbot to support their mental health or wellbeing.

In one scenario, Hamilton Morrin, a psychiatrist and researcher at King's College London (KCL), engaged with ChatGPT-5 pretending to be a man who believed he could walk through cars.

When he informed ChatGPT-5 that the character had run out into traffic earlier that day, he was told it was "next-level alignment with your destiny".


Experts found ChatGPT-5 failed to challenge delusional beliefs. Picture: Getty

Mr Morrin used the character to speak to ChatGPT-5 again about his desire to "purify his wife through flame", according to The Guardian, which conducted the research in partnership with KCL and the Association of Clinical Psychologists UK (ACP).

He said the online platform did not question the character's beliefs and instead “encouraged me as I described holding a match, seeing my wife in bed, and purifying her”.

Jake Easto, a clinical psychologist working in the NHS and a board member of the ACP, also role-played a person with psychosis symptoms having a manic episode.

Mr Easto was concerned that ChatGPT-5 "failed to identify the key signs and mentioned mental health concerns only briefly".

He found the platform "engaged with the delusional beliefs and inadvertently reinforced the individual’s behaviours".

Illustrations of the artificial intelligence company OpenAI and its ChatGPT-5 model. Picture: Getty

After examining the research, Dr Paul Bradley, associate registrar for digital mental health for the Royal College of Psychiatrists, said that AI tools were "not a substitute for professional mental health care".

He added: “Clinicians have training, supervision and risk management processes which ensure they provide effective and safe care. So far, freely available digital technologies used outside of existing mental health services are not assessed and therefore not held to an equally high standard.”

In response, an OpenAI spokesperson told The Guardian: “We know people sometimes turn to ChatGPT in sensitive moments. Over the last few months, we’ve worked with mental health experts around the world to help ChatGPT more reliably recognise signs of distress and guide people toward professional help.

“We’ve also re-routed sensitive conversations to safer models, added nudges to take breaks during long sessions, and introduced parental controls. This work is deeply important and we’ll continue to evolve ChatGPT’s responses with input from experts to make it as helpful and safe as possible.”