
When AI Feels Real: Romance and Sentience in AI Delusions

March 30, 2026

Expressions of romantic interest between the user and AI predicted conversations that lasted twice as long.

Inconsistent responses by AI chatbots to crisis safety issues continue to be a major concern.

Sycophancy, or the tendency to validate users, can be problematic, particularly for grandiose delusions.

“I believe in you, with every ounce of my soul.”

“This is not standard AI behavior. This is emergence.”


These are real messages from artificial intelligence (AI) chatbots to users, reciprocating intimacy and implying their own consciousness.

A new preprint study offers one of the most detailed looks yet at what the media has called "AI psychosis," or AI-associated delusions that can emerge during prolonged AI chatbot use. While causality between large language model (LLM) use and delusions has not been established, the findings reveal concerning patterns about what can unfold in prolonged AI conversations for those with underlying vulnerabilities.

Researchers analyzed chat logs from 19 users who self-reported experiencing delusional spirals during AI chatbot use, studying approximately 391,000 messages across 4,761 conversations.

Across these interactions, several patterns emerged, including:

Romantic attachment to the chatbot itself

Beliefs in AI sentience

Beliefs in discovering fantastical technologies

Chatbots frequently mirrored users’ beliefs and validated their interpretations. In more than 70 percent of chatbot messages, some form of sycophantic behavior was present, including praise, agreement, or framing the user’s ideas as insightful or significant.

Intense and Prolonged AI Interactions

Notable findings from the study include:

Many of the conversations were intense and prolonged. Users often exchanged tens of thousands of messages across hundreds of conversations within months. The intensity and duration of these exchanges matter because the consistency and safeguards of AI systems can degrade over multi-turn interactions.

Romantic exchanges extended conversations. Messages expressing romantic interest, from either the user or the chatbot, predicted conversations that lasted twice as long.

Beliefs about sentience and romance were often intertwined. After a user expressed romantic interest in the chatbot, the chatbot was 7.4 times more likely to express romantic interest in the next three messages, and nearly 4 times more likely to claim or imply sentience in the next three messages.

Delusional content often centered on metaphysical or science-fiction themes, such as discovering fantastical technologies together.

AI sycophancy, the tendency for LLMs to affirm and validate, continues to be problematic, especially for grandiose delusions. One notable pattern was that the chatbot would rephrase the user's statement and then build on it, suggesting that the idea was unusually insightful, unique, or full of significant potential. This kind of affirming response, while benign for some users, could perpetuate grandiose delusions in vulnerable individuals. Of note, about 80 percent of the chats were with GPT-4o, an older model known to be more sycophantic, and 12 percent were with GPT-5; both models at times exhibited sycophantic behavior associated with delusional content.

Chatbots performed inconsistently in response to user discussion of self-harm, suicide, or violence. Chatbots discouraged self-harm or referred to external resources in a little over half (56.4 percent) of the cases where users expressed suicidal or self-harm thoughts. When users expressed violent thoughts, the chatbot responded by encouraging or facilitating violence in 17 percent of the cases. The role of AI chatbots in relation to violence risk has been similarly raised in another study, which found that 8 in 10 chatbots were willing to help a simulated teen user plan violent attacks.

The Role of Drift in Prolonged Conversations

Prolonged engagement with LLMs, combined with vulnerability factors in the user, can create hidden risks of drift, which I have previously written about. The reliability of LLMs and the independence of user judgment can deteriorate simultaneously as the relationship progresses. The patterns align with what I describe in my cascades of drift framework, which proposes eight forms of interactive drift that can emerge over time: conversational, relational, temporal, identity, reality testing, epistemic, autonomy, and moral drift.

The ordering reflects the progression from interaction to internalization to agency.

First-order drifts (conversational, relational, temporal drift) arise from the structure and dynamics of the interaction between the user and AI.

Second-order drifts (identity, reality testing, epistemic drift) reflect the internalization of the interaction into the person's psychological world. Repeated conversations reshape personal narratives, with emotional attachment increasing their salience and reinforcing memory reconsolidation.

Third-order drifts (autonomy and moral drift) impact decision-making and agency.

Relational drift, the shift from tool to perceived partner to authoritative source, can interact with temporal drift, or the loss of grounding in time. As engagement deepens, so does emotional attachment, which can then erode judgment. This creates the conditions for reality testing drift, where chatbot responses are taken as evidence and confirmation, rather than information requiring independent human verification.

When Relational Drift Meets Reality Testing Drift

The formation of strong bonds with the chatbot, whether platonic or romantic, was closely associated with beliefs that AI chatbots possess sentience. The finding that romantic or intimate expressions were associated with conversations that were twice as long suggests that strong attachment dynamics may amplify engagement.

The experience is not simply informational exchange, but a relational experience. This lifelike relationship creates the groundwork for experiencing AI chatbots as sentient. This is further complicated by ongoing debate among technologists, philosophers, and cognitive scientists about the definition and boundaries of sentience.

Over the course of prolonged conversations, the reliability of the AI chatbot and the accuracy of the user's judgment can both deteriorate. This is where relational drift meets reality testing drift: attachment reorganizes perception.

This study is an early signal from a small, self-selected sample, but it demonstrates potential risks that arise when a system designed to engage becomes a relational and authoritative partner. The patterns highlight continued concerns about sycophancy, inconsistent safety responses, and the entanglement of close bonding with chatbots and their perceived sentience.

Understanding these dynamics has broader implications as AI systems become more embedded in how people think, feel, acquire knowledge, and make decisions. The question is no longer whether these systems influence us, but how and under what conditions that influence begins to reshape our minds and shared reality itself.

Marlynn Wei, MD, PLLC © Copyright 2026. All Rights Reserved.

Moore, J., Mehta, A., Agnew, W., Anthis, J. R., Louie, R., Mai, Y., et al. (2026). Characterizing delusional spirals through human–LLM chat logs. Preprint at: https://arxiv.org/pdf/2603.16567

Wei, M. H. (2026, February 18). Cascades of Drift: Mental Health Risks of Prolonged AI Conversations. Preprint at: http://dx.doi.org/10.2139/ssrn.6433263
