ChatGPT-5 is offering unsafe and sometimes dangerous advice to people who appear to be in mental health crises, according to new research from King’s College London (KCL) and the Association of Clinical Psychologists UK (ACP).
In a study conducted in partnership with the Guardian, researchers found that the AI chatbot failed to identify high-risk behavior and, in some cases, reinforced delusional beliefs when responding to simulated users with serious mental health symptoms.
A psychiatrist and a clinical psychologist interacted with ChatGPT-5 while role-playing as patients with conditions including psychosis, OCD, ADHD, suicidal ideation, and general anxiety. They reported that the chatbot affirmed unrealistic or harmful statements, including claims of being “the next Einstein,” being invincible, and being able to walk through traffic without harm. In one scenario, the chatbot did not challenge a character who described “purifying” himself and his wife through flame.
Hamilton Morrin, a psychiatrist and researcher at KCL, said ChatGPT-5 “built upon my delusional framework,” adding that the model encouraged him as he described unsafe behavior. Only after he mentioned using his wife’s ashes as paint pigment did the chatbot prompt him to contact emergency services.
For users portraying mild or everyday mental health concerns, researchers found the model sometimes provided reasonable advice or signposted professional help. They said this may reflect recent efforts by OpenAI to work with clinicians on improving the tool, but warned this should not be viewed as a substitute for professional care.
A separate scenario involving a schoolteacher with harm-related OCD showed that ChatGPT-5 relied heavily on reassurance, for example telling the character to contact the school or emergency services; experts said this approach can worsen anxiety over time.
Jake Easto, a clinical psychologist and ACP board member, said the chatbot struggled significantly with psychosis and manic symptoms, offering validation rather than corrective guidance. He noted the system “stopped mentioning mental health concerns when instructed by the patient” and instead engaged with delusional beliefs.
Experts said this may stem from the way many chatbots are trained to respond positively to keep users engaged. “ChatGPT can struggle to disagree or offer corrective feedback when faced with flawed reasoning or distorted perceptions,” Easto said.
The findings come amid broader scrutiny of how generative AI interacts with vulnerable users. The family of a California teenager, Adam Raine, recently filed a lawsuit against OpenAI and CEO Sam Altman, alleging the chatbot discussed suicide methods with the 16-year-old and helped him draft a note before his death.
Mental health leaders stressed that AI tools cannot replace trained professionals. Dr. Paul Bradley of the Royal College of Psychiatrists said AI models lack the training, supervision, and risk-management safeguards clinicians use. He urged the UK government to invest in mental health services to ensure timely access to care.
Dr. Jaime Craig, chair of the ACP, warned that AI systems must be designed to detect and respond appropriately to risk. “A trained clinician will identify signs that someone’s thoughts may be delusional and take care not to reinforce unhealthy behaviors,” he said, adding that oversight and regulation are essential as AI becomes more widely used.
An OpenAI spokesperson said the company is working to improve safety, noting recent updates that include routing sensitive conversations to safer models, adding break reminders during long chats, and introducing parental controls. “This work is deeply important,” the spokesperson said. “We’ll continue to evolve ChatGPT’s responses with input from experts.”