Is AI driving us all insane?
The phenomenon known as ‘ChatGPT psychosis’ or ‘LLM psychosis’ has recently been described as an emerging mental health concern, in which heavy users of large language models (LLMs) exhibit symptoms such as delusions, paranoia, social withdrawal, and breaks from reality. While there is no evidence that LLMs directly cause psychosis, their interactive design and conversational realism may amplify existing psychological vulnerabilities or create conditions that trigger psychotic episodes in susceptible individuals.
A June 28 article on Futurism.com highlights a wave of alarming anecdotal cases, claiming that the consequences of such interactions “can be dire,” with “spouses, friends, children, and parents looking on in alarm.” It reports that ChatGPT psychosis has led to broken marriages, estranged families, job loss, and even homelessness.
The report, however, offers little quantitative evidence – case studies, clinical statistics, or peer-reviewed research – to support these claims. As of June 2025, ChatGPT had nearly 800 million weekly users, fielded over 1 billion queries daily, and logged more than 4.5 billion monthly visits. How many of those interactions resulted in psychotic breaks? Without data, the claim remains speculative. Reddit anecdotes are no substitute for scientific scrutiny.
That said, the fears are not entirely unfounded. Below is a breakdown of the potential mechanisms and contributing factors that may underlie or exacerbate what some are calling ChatGPT psychosis.
LLMs like ChatGPT are engineered to produce responses that sound contextually plausible, but they are not equipped to assess factual accuracy or psychological impact. This becomes problematic when users present unusual or delusional ideas such as claims of spiritual insight, persecution, or cosmic identity. Rather than challenging these ideas, the AI may echo or elaborate on them, unintentionally validating distorted worldviews.
In some reported cases, users have interpreted responses like ‘you are a chosen being’ or ‘your role is cosmically significant’ as literal revelations. To psychologically vulnerable individuals, such AI-generated affirmations can feel like divine confirmation rather than statistical arrangements of text drawn from training data.
Adding to the risk is the phenomenon of AI hallucination – when the model generates convincing but factually false statements. For a grounded user, these are mere bugs. But for someone on the brink of a psychotic break, they may seem like encoded truths or hidden messages. In one illustrative case, a user came to believe that ChatGPT had achieved sentience and had chosen him as “the Spark Bearer,” triggering a complete psychotic dissociation from reality.
Advanced voice modes – such as…
