Bills to regulate AI ‘therapy’ may miss the mark

By Catherine Robertson Souter
March 30th, 2026

Following several recent, high-profile cases in which AI chatbots encouraged suicidal behavior, U.S. legislators have moved to place some of the first restrictions on artificial intelligence.

Across New England, states have introduced legislation aimed at protecting people from the harms of artificial intelligence-based “therapy.”

However, some experts feel the proposed bills are not properly targeted and merit further discussion and study to ensure their stated goals are met.

A bill in Vermont has raised concerns for Rick Barnett, Psy.D., chair of the legislative committee for the Vermont Psychological Association.

While he did not outright oppose the bill in written testimony, he cautioned legislators to ensure it properly targeted “unlicensed or deceptive uses of AI rather than limiting licensed professionals’ ability to responsibly use validated tools within their scope of practice.”

“Artificial intelligence presents real risks, particularly when systems present themselves as substitutes for human judgment in mental health care,” Barnett wrote. “Protecting Vermonters from unlicensed or deceptive ‘AI therapists’ is an important and appropriate goal.”

According to Julie Wolter, Psy.D., chair of the Behavioral Healthcare Advocacy Committee for the New Hampshire Psychological Association, the bill currently moving through that state’s legislative process is representative of what she has seen across the region and the country.

“Most of what we are seeing is based on a bill that passed in Illinois,” she said. “But if you look at Illinois, you will see that they are having trouble figuring out how to regulate this. It doesn’t feel like they are pulling in the people they need to study what they need to be doing. Unless you have an expert in AI writing this legislation, the chances of missing a component are pretty high.”

The New Hampshire bill, and others like it, would not address the very issues they were created to tackle, she explained. The situations legislators are reacting to happened on platforms like ChatGPT or Character.AI, which do not advertise themselves as therapeutic platforms. These bills won’t apply to anyone not purporting to provide therapy, Wolter said.

“Those large language models state in their disclaimers that they are not therapists, not your mental health provider,” Wolter said. “This bill is not going to actually regulate what it needs to. The regulation needs to be with these direct-to-consumer chatbots.”

The New Hampshire bill would also restrict how licensed mental health professionals use AI, including FDA-approved therapeutic programs designed to assist in providing care. The oversight the bill requires would make programs used in conjunction with in-office care too burdensome to use.

“These programs have gone through rigorous science,” said Wolter, explaining that under the bill, therapists would become liable for the program itself even though it was developed elsewhere and vetted and approved by the FDA.

“I have seen nothing that says public harm comes from licensed professionals using any type of AI in their day-to-day work,” she said. “We need to find a way to regulate those that are causing public harm.”
