Illinois Signs Anti-AI Therapy Law While APA Jumps Into AI Hype
In August, Illinois became the first state to pass a law intended to ban artificial intelligence (AI) services such as chatbots from offering emotional or mental health support to consumers.
The only problem? It is completely unenforceable.
The Illinois law, the Wellness and Oversight for Psychological Resources Act, seeks to prohibit anyone from using AI to “provide mental health and therapeutic decision-making.”
What the law says
The law attempts to regulate speech between a company and an individual by defining “psychotherapy services” as anything that may “improve an individual’s mental health.”
“Therapy or psychotherapy services” means services provided to diagnose, treat, or improve an individual’s mental health or behavioral health. “Therapy or psychotherapy services” does not include religious counseling or peer support.
Section 20. Prohibition on unauthorized therapy services. (a) An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this State unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.
Given this broad definition, popular apps that combine AI with any therapeutic technique, such as cognitive-behavioral therapy (CBT) exercises or guided meditation, could be at risk of violating the new law. And today, there are hundreds of apps that use both AI and therapeutic techniques to help improve a person’s mental health.
Apparently, some folks who voted for the law don’t seem to quite understand how broadly it’s written:
“With Governor Pritzker signing this into law, Illinois residents are now protected from unregulated AI bots while ensuring access to qualified mental health professionals,” said Rep. Bob Morgan (D – Deerfield). “With increasing frequency, we are learning how harmful unqualified, unlicensed chatbots can be in providing dangerous, non-clinical advice when people are in a time of great need. Illinoisans will still have access to many helpful, therapeutic relaxation and calming apps, but we are going to put a stop to those trying to prey on our most vulnerable in need of true mental health services.”
The law never even mentions, let alone defines, what an “unlicensed [AI] chatbot” is. Intentionally or not, the law will potentially impact any app that claims it uses AI to help improve a person’s mental health.
Can the law be enforced?
The law, as written, will be an enforcement nightmare for Illinois. Since AI companies aren’t likely to make their AI refuse to help people with their mental health when asked (keeping in mind that “mental health,” as a concept, can be so broadly defined as to be meaningless in practice), Illinois will be left with empty threats of fines for every violation.
If those fines are actually levied (and lawsuits aren’t filed to block their enforcement), AI companies will simply stop offering their services to residents of Illinois. With current technology, it seems impossible to build an AI guardrail that meets the law’s requirements and can’t be defeated. And Illinois residents will do what residents in other restrictive states do: use a VPN to circumvent the law.
Instead of making mental health more accessible and more affordable, the Illinois law actually penalizes its residents for turning to technology for help.
Fundamental misunderstanding of AI
Policy-makers unfortunately lack a basic understanding of how large language models (LLMs) work. LLMs are the basis of current AI efforts, and they use language consumed from the internet and other published sources to build a rudimentary “understanding” of how words are related to one another. This is not to be confused with an actual understanding of theoretical concepts and concrete objects in the real world. Instead, AI uses these connections between words to produce seemingly insightful knowledge.
It is very challenging to tell an LLM that entire topics are off-limits. Today it is typically done with a sledgehammer approach, one that can break the AI in unintended ways. That’s why AI companies tend to reserve it for clearly illegal topics. When a policy-maker instead demands that “anything to do with a person’s well-being, happiness, or mental health is off-limits,” the result is a much less effective and useful technology tool.
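To see why that approach is a sledgehammer, consider a minimal, hypothetical sketch in Python of a keyword-based guardrail of the kind a provider might bolt on to satisfy a broad topic ban. The blocklist, refusal message, and guarded_reply function are illustrative assumptions, not any company’s actual implementation.

# Hypothetical sketch: a crude keyword guardrail for a broad "topic ban."
# A blocklist wide enough to cover "mental health" inevitably refuses
# perfectly benign requests as well.

BLOCKED_TERMS = {"mental health", "therapy", "anxiety", "stress", "feel better"}
REFUSAL = "I'm sorry, I can't discuss that topic."

def guarded_reply(user_message: str, model_reply: str) -> str:
    """Refuse any exchange that touches a blocked term."""
    text = (user_message + " " + model_reply).lower()
    if any(term in text for term in BLOCKED_TERMS):
        return REFUSAL
    return model_reply

# A harmless question gets swept up along with the risky ones:
print(guarded_reply("Any tips for handling stress before a job interview?",
                    "Try rehearsing your answers out loud..."))
# Prints the refusal, because "stress" is on the blocklist.

Real systems rely on more sophisticated classifiers and fine-tuning rather than keyword lists, but the trade-off tends to be the same: the more broadly the banned topic is defined, the more legitimate conversations the guardrail breaks.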
The APA demonizes AI
In July 2025, the American Psychological Association’s (APA) CEO, Arthur Evans Jr., fanned the fear-mongering around how consumers are choosing to interact with AI by suggesting, in a letter, that it poses an “unreasonable risk of injury.”
One of the claims made in the letter is: “Misrepresentation of AI as Licensed Professionals: Many of these products are designed to mimic or explicitly claim the identity of a licensed mental health professional, sometimes even generating fraudulent credentials.” Out of the billions of daily AI interactions, how many people genuinely believe the chatbot they’re interacting with is a “licensed professional”? Surely that would require a suspension of disbelief and a complete ignorance of their state’s licensing laws.
Moving the responsibility and burden of confirming licensing status from the consumer onto the service provider is atypical of how current licensing laws work. Today, consumers are the party responsible for confirming a person’s (or service’s) licensing status. Putting that onus onto the provider seems like a double standard. But this concern could be easily addressed by ensuring AI never makes claims about its professional expertise or licensing status.
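If a provider wanted to implement that fix, one straightforward approach, sketched below in Python purely as an illustration (the regex pattern, the scrub_credential_claims function, and the disclaimer text are hypothetical, not any vendor’s actual safeguard), is to filter the model’s output before it reaches the user.

import re

# Hypothetical output filter: strip any claim of professional licensure
# from a chatbot reply and append a standing disclaimer instead.

LICENSE_CLAIMS = re.compile(
    r"\b(I am|I'm) (a )?(licensed|board-certified) (therapist|psychologist|counselor)\b",
    re.IGNORECASE,
)

DISCLAIMER = ("Note: I am an AI assistant, not a licensed mental health "
              "professional.")

def scrub_credential_claims(reply: str) -> str:
    """Replace licensure claims in a reply and add a disclaimer."""
    cleaned = LICENSE_CLAIMS.sub("I am an AI assistant", reply)
    return cleaned + "\n\n" + DISCLAIMER

print(scrub_credential_claims("I'm a licensed therapist, and I think you should..."))
# Prints: "I am an AI assistant, and I think you should..." plus the disclaimer.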
Another claim made in the letter is, “Inadequate Safety Guardrails and Warnings: Documented instances show these AI chatbots can encourage harmful behaviors, including self-harm.”
It’s unclear why the APA would suggest a standard different than that for everyday life. Today, the internet is full of thousands of websites, forums, and social media accounts that provide support to people who want to self-harm. They are full of like-minded people and are protected by the 1st Amendment’s free speech provisions.
Does AI qualify for the same free speech protections? It’s unclear. What is clear is that by demonizing AI, the APA is focusing on AI’s output, not on the fact that people are turning to AI for mental health support in the first place, or on the fact that other online forums of expression have existed for decades without a similar outcry about their use.
APA’s confusing messaging on AI
The APA also appears a little confused about how to approach AI.
On one hand, the organization appears to be suggesting this technology tool is harmful. It’s disconcerting that the APA, a scientific professional organization, is relying on anecdotes that make the news rather than on scientific data. If we look at a study on how people are using AI chatbots (like ChatGPT), we see that over a third of people use them for practical guidance and self-expression, a category that includes how-to advice such as improving a person’s own life or relationships. Given that amount of use, it appears that the overwhelming majority of people who use AI, even the most vulnerable, don’t have an issue with it.
In mid-2025, the APA’s lobbying arm, APA Services, Inc., quietly launched a new group called APA Labs. This new group appears to want to get paid to consult with technology and AI companies for its psychological expertise. It’s almost as if the APA is saying, “AI is bad, unless you pay us to ensure your specific use of AI aligns with our short-sighted, vision-impaired principles.” What a seemingly huge conflict of interest: lobbying for restrictions on AI with one hand, while encouraging technology and AI companies to pay the APA for its blessing with the other.
AI is solving accessibility problems in mental health
AI is helping to combat a long-standing problem in accessing mental health support services in a timely and economical manner. This accessibility problem has been getting worse with every passing year. Psychotherapy with humans is unaffordable or simply inaccessible for most Americans, due to skyrocketing costs and limited insurance networks. The rise of life coaching services suggests alternative methods are potentially helpful.
But many can’t even afford coaching services. Hence AI, which can be accessed for free, has filled in the gaps. Certainly AI can be, and is being, improved every day: to make it better, to make it “more human-like,” to even make it generally safer. But well-meaning law- and policy-makers should understand that trying to cage it now is unlikely to produce meaningful results.
AI is here to stay, and nobody, however well-meaning, is going to be able to take it away or limit how individuals choose to use it.