AI chat tools test new ground in therapy as clinicians assess risks & benefits
As AI chat tools move deeper into mental health care, psychologists are weighing the promise of round-the-clock support against concerns about safety, bias, and the limits of algorithmic empathy.
Lyra Health, a provider of mental health solutions, is piloting AI chat functions for customers experiencing mild to moderate challenges such as burnout, sleep issues, and stress. Users get support from a conversational AI guide with a built-in risk-flagging system that identifies situations requiring escalation to Lyra’s 24/7 care team.
Alethea Varra, Ph.D., chief clinical officer of Lyra, said patients currently under the care of Lyra providers have access to the AI solution to support care between sessions. Providers can review transcripts of the AI conversations to help identify concerns or challenges.
“We want to make sure patients are utilizing the AI in a way that is helpful and productive,” said Varra. “Any time there is any reason for concern, individuals are directed straight to human clinicians, who reach out to make sure the individual is safe and do an assessment to see if a higher level of care is needed.”
The company describes the solution as “clinical grade,” a term Varra said the company created to distinguish it from other AI tools that were not built with input from clinicians and not specifically intended for mental health use. “All of our AI has been designed by clinicians in partnership with engineers trained on clinical inputs, and our tool is HIPAA compliant.”
Vaile Wright, Ph.D., senior director of health care innovation at the American Psychological Association (APA), noted there is a significant difference between AIs that are purpose-built for mental health and general-purpose large language models (LLMs) such as ChatGPT, which are engineered to maximize user engagement.
“Their algorithm is trained to be agreeable and validate the user input unconditionally. That sycophancy bias can create a sense of emotional dependency,” said Wright, noting there are currently no federal regulations regarding AI chatbots for mental health.
“You get this kind of dangerous feedback loop where the user says something that may be cognitive distortion and the algorithm reinforces it, instead of providing some degree of a necessary challenge like a therapist.”

There are aspects of mental health care that simply can’t be replaced by a chatbot, said Jacque Cutillo, Ph.D., director of specialized operations at Youth Villages, a Massachusetts non-profit organization that provides mental and behavioral health services for at-risk youth and families.
“A trained, qualified professional can pick up on nonverbal cues, changes, how transparent someone is being, or even just a side comment that indicates a need for [follow-up], more so than a chatbot just reading words,” Cutillo said. “My biggest concern is that it can’t take the place of a skilled, trained provider that has gone through a lot of supervision, work hours, and education.”
Both Cutillo and Wright agreed that, when used appropriately, AI could be useful as a fill-in tool between sessions, particularly if providers were able to share the client’s treatment plan with a HIPAA-compliant AI.
“If you have a panic attack at three in the morning and can’t access traditional services, a chatbot can help you address that immediate concern, before you can seek out a therapist,” said Wright.
A trusted tool could help mental health providers reinforce positive, healthy coping skills, she continued. “It could reinforce the messages from therapy and help people feel not so alone,” she said, noting that the tool would need to not only be properly built, but continually kept up to date.
“How is it maintained post-market? Where are the humans in the loop after it’s been deployed?” Wright said. “What are you doing to ensure safety and efficacy?”
According to the APA’s annual Practitioner Pulse Survey, 56% of psychologists reported using AI tools in their work at least once in the past 12 months, up from 29% in 2024. Close to three in 10 psychologists (29%) said they used AI at least monthly, more than double the share who did so in 2024 (11%).
But that doesn’t mean psychologists are without concerns. More than nine in 10 cited potential risks, including data breaches (67%), unanticipated social harms (64%), biases in input and output (63%), lack of rigorous testing (61%), and inaccurate output or “hallucinations” (60%).
Only 8% said they used AI to assist with clinical diagnosis, and only 5% had used chatbot assistance for patients or clients. The most common uses reported were routine, time-consuming tasks such as drafting emails and other materials (52%), summarizing clinical notes or articles (32%), and note-taking (22%).
While the Lyra AI chat tool is being tested to see how it performs in different settings with different types of patients, it remains available only to patients working with a therapist.
“We’re not ruling out that we would never consider offering the tool to patients not in care with a therapist, but we are very thoughtful and conservative [about] how we are integrating AI into our care,” said Varra. “We want to make sure that the technology is functioning as intended and always supporting individuals with positive clinical outcomes.”
