Psychosis, suicide, and ChatGPT: A researcher’s caution

As artificial intelligence becomes increasingly integrated into mental health care, clinicians are weighing both its promise and risks. Large language models (LLMs) like ChatGPT are already being used by patients outside clinical supervision, raising questions about safety, particularly for individuals at risk of suicide or psychosis.
Shirley Wang, Ph.D., assistant professor in Yale’s psychology department and director of the Computational Clinical Science Lab, studies suicide prevention using computational methods. Wang recognizes the positive potential of AI while remaining mindful of its limitations:
“I would say my overall stance on [LLMs] in mental health care is cautiously optimistic.”
That optimism is tempered by recent high-profile tragedies. Recent New York Times articles have spotlighted the potential dangers of AI chatbots in mental health contexts. The piece “Teens Are Using Chatbots as Therapists. That’s Alarming.” by Ryan K. McBain highlights concerns about adolescents turning to chatbots during crises.
Similarly, “What My Daughter Told ChatGPT Before She Took Her Life” by Laura Reiley offers a personal account of her daughter’s interactions with ChatGPT prior to her death.
Wang emphasizes that AI should direct users to human crisis resources rather than attempt to intervene on its own.
“There needs to be a lot more stress testing of how [LLMs] respond to people in suicidal crises and figuring out what protections we can put in place so they can recognize when people are at risk and appropriately refer them to resources, rather than engage or perpetuate suicidal thoughts,” Wang said.
Her concern also extends to psychosis.
“People with a history of schizophrenia or psychotic spectrum disorders would likely be most at risk for ChatGPT-related psychosis. I also wonder whether it could contribute to people in the prodromal, early stages of illness transitioning into full-blown psychosis.”
Repetitive or obsessive use is another red flag. “Are people displaying concerning usage metrics, like spending a lot of time on ChatGPT or centering conversations on certain obsessions or topics?”
Where AI shows promise
Even as she outlines risks, Wang sees opportunities for advancing research.
“I’m really optimistic about using LLMs to analyze qualitative data. In mental health research, we collect a lot of open text data, and therapy is spoken. If we can use LLMs efficiently and accurately, that could help generate new insights or discoveries and build predictive models.”
LLMs may also reduce clinicians’ administrative burden by drafting notes or summarizing sessions. Yet Wang reiterates that safety remains paramount.
“People put their trust in a system with unclear risk management protocols,” she said.
There is emerging evidence that carefully designed AI chatbots may support treatment. A study by Heinz et al., published in the New England Journal of Medicine AI, tested “Therabot,” a fine-tuned AI chatbot, with 210 adults experiencing depression, anxiety, or high risk for eating disorders.
Participants using Therabot showed greater symptom reductions than controls and rated their therapeutic alliance as comparable to working with a human therapist.
Even with promising outcomes, Wang highlights a subtle limitation. “If people perceive ChatGPT as more empathic or validating than their human therapist, it could interfere with the therapeutic alliance, which we know is crucial for assessment, diagnosis, and treatment,” she said.
She stresses that research should proceed with caution. “We know a lot of people are already using ChatGPT for therapy… and yet we know very little about how effective and safe it is. That would never be the case for any other type of treatment… So I think we need to hold AI to the same level of safety and trial testing.”
Remembering what it is
Finally, Wang reminds clinicians and patients not to mistake the tool for a therapist.
“Just remember that ChatGPT is just predicting the next best word. It’s not sentient. It doesn’t have emotions or perception,” she said.
AI may offer useful support in research and practice, but without safeguards, its role in crisis remains limited.