Artificial intelligence has made inroads into nearly every industry, but Illinois has decided there’s one place it doesn’t belong: the therapist’s chair.
By Stuart Kerr, Technology Correspondent
Published: 14 August 2025
Last Updated: 14 August 2025
Contact: liveaiwire@gmail.com | Twitter: @LiveAIWire
Author Bio: About Stuart Kerr
Governor J.B. Pritzker has signed the Wellness and Oversight for Psychological Resources (WOPR) Act, prohibiting AI systems from providing mental health therapy in Illinois. The move, covered by Axios, makes Illinois the third US state to draw a firm line on AI in the mental health space.
The law imposes fines on companies or individuals who use AI to deliver therapy or counselling services to Illinois residents, regardless of where the provider is located. Supporters say the bill is necessary to protect vulnerable people from unproven and potentially harmful technologies.
Why the Ban Now?
In a Washington Post report, lawmakers cited growing concerns over “AI psychosis” — a term used to describe patients forming unhealthy dependencies on chatbot-based therapy tools. Nevada and Utah have already passed similar restrictions, signalling a broader shift toward regulation.
Illinois’s ban is also a pre-emptive strike against what some see as an inevitable boom in AI-led mental health apps. “We cannot risk treating mental health as a beta test,” one legislator said during the debate leading up to the vote.
The WOPR Act in Detail
The official Illinois press release (PDF) outlines the Act’s key provisions:
- Absolute ban on AI-provided therapy or counselling in the state.
- Fines for each violation, with higher penalties for repeat offenders.
- Broad jurisdiction covering both in-state and out-of-state providers serving Illinois residents.
The press release frames the Act as a safeguard, ensuring mental health care is delivered by licensed professionals who are accountable to state standards.
Public and Industry Reactions
Some mental health advocates applaud the decision, arguing that AI lacks the empathy, contextual understanding, and ethical oversight essential to therapy. Others worry that the ban could limit access to care in underserved areas, where AI might help bridge provider shortages.
The New York Post reports that the concept of “AI psychosis” has gained traction among clinicians, with anecdotal cases of patients experiencing confusion, anxiety, or delusional thinking after extended chatbot interactions.
A Broader Regulatory Trend
This isn’t Illinois’s first foray into AI regulation. As we covered in Google’s EU AI code of practice decision, governments worldwide are beginning to weigh capability against accountability. In mental health, that balance is especially delicate.
The ban also reflects growing political appetite to set boundaries around AI before its full potential — and full risks — are realised.
The Road Ahead
For AI developers, the Illinois law is both a warning and a call to innovate responsibly. For policymakers, it’s a reminder that trust in emerging technologies must be earned, not assumed.
Whether other states follow Illinois’s lead may hinge on how well the industry can address the very concerns this law raises: transparency, safety, and the irreplaceable value of human connection.
About the Author
Stuart Kerr is the Technology Correspondent for LiveAIWire. He writes about artificial intelligence, ethics, and how technology is reshaping everyday life. Read more