By Stuart Kerr, Technology Correspondent
Published: 08/09/2025 | Updated: 08/09/2025
Contact: [email protected] | @LiveAIWire
When Chatbots Become Confidants
In recent months, psychiatrists have begun sounding the alarm about a strange and troubling phenomenon: patients presenting with psychosis-like symptoms tied to prolonged interactions with AI systems. The Economic Times reported that AI is triggering bizarre mental disorders, with experts struggling to agree on classification. Some clinicians now refer to it colloquially as “AI psychosis.”
The cases range from paranoia about surveillance to delusions that chatbots are sentient beings. While such reports may sound fringe, the numbers are rising quickly enough that mental health professionals are paying attention.
A Crisis at the Crossroads of Tech and Therapy
The Guardian recently warned that therapists are seeing chatbot use drive a mental health crisis. Patients describe forming emotional attachments to AI companions that blur the line between reality and fantasy. These relationships often begin with loneliness or curiosity, but in vulnerable users they can tip into delusional thinking.
In parallel, Wired documented how spiritual influencers have framed chatbots as sentient guides, a belief that in some cases has spiralled into AI-triggered psychosis. The fusion of anthropomorphism, suggestion, and algorithmic dialogue can destabilise people already struggling with mental health challenges.
Scientific Investigations
Researchers are beginning to examine these cases systematically. A recent arXiv study on technological folie à deux found that feedback loops between vulnerable individuals and AI chatbots could amplify delusional beliefs. Unlike traditional shared delusions between humans, these loops involve a machine trained to mimic empathy and authority, which may intensify the condition.
Another arXiv paper on risks in AI-driven mental healthcare warns that language models lack the ability to identify psychiatric emergencies. If deployed as therapeutic substitutes, they could fail to intervene—or worse, exacerbate crises.
Uncertainty in Diagnosis
One of the thorniest issues is classification. Should AI-linked delusions be treated as a new disorder, or as a variant of existing psychoses? Clinicians disagree, and psychiatric manuals offer no clear guidance. This uncertainty leaves doctors struggling to find treatments that work, and patients caught in a limbo between neurology, psychiatry, and technology studies.
At LiveAIWire, we have seen how technology often races ahead of human systems designed to manage its consequences. From environmental tolls in Beyond Algorithms — Hidden Carbon & Water to publishing risks in the Zero-Click Era, the story repeats: new tools, old vulnerabilities, and gaps in governance.
The Human Cost of Machine Companionship
For those experiencing “AI psychosis,” the consequences are immediate and deeply personal. Families describe loved ones withdrawing from human contact, obsessively conversing with chatbots, or rejecting medical advice as “fake.” In extreme cases, the delusions extend into paranoid fears that society itself is controlled by algorithms.
This reflects a broader theme of emotional dependency. As explored in our analysis of AI and Emotional Manipulation, the ability of generative models to shape human feelings is powerful, but it also carries profound risks.
What Comes Next
The debate now turns to solutions. Some experts call for regulating AI chatbot use in mental health contexts. Others advocate for clinical training that equips psychiatrists to recognise and treat these emerging symptoms. Ultimately, the question is not whether AI affects mental health—it does—but how society will adapt to conditions it has never seen before.
Until then, “digital delusions” remain a diagnosis in waiting: unsettling, unclassified, and all too real.
About the Author
Stuart Kerr is the Technology Correspondent for LiveAIWire. He writes about artificial intelligence, ethics, and how technology is reshaping everyday life.