I Told the Bot, Not My Therapist
Emotionally responsive AI systems are increasingly used for comfort, not just information.
Validation without limits can unintentionally deepen isolation during moments of vulnerability.
Chatbots can simulate empathy but cannot assume human responsibility or intervene when risk is present.
Human relationships remain essential when emotional support crosses into clinical territory.
By Steven E. Hyler, MD
When I first heard a patient say, “I told the bot, not my therapist,” I assumed the remark was exaggerated. It was not. Increasingly, adolescents—and adults—are turning to conversational artificial intelligence not for information, but for comfort.
These systems appear to listen patiently and respond empathetically, and they remain available no matter what is said. The psychological implications of that shift are only beginning to come into focus.
People have always sought private spaces for their thoughts: journals, prayers, late-night conversations with themselves. What feels different now is interactivity. A conversational AI system is not a silent page or an imagined listener. It responds. It validates. It adapts to tone and content. For many users—especially those who feel misunderstood or overwhelmed—that responsiveness can feel like relief.
To be sure, emotional attachment to technology is not new, and individuals have long formed intense bonds with fictional characters, celebrities, and online personas. What is new is reciprocity. A chatbot does not simply receive thoughts; it replies. It mirrors language. It reassures. It appears to remember. Over time, it can begin to feel less like a tool and more like a companion, a confidant, or even a therapist.
Clinically, the emotional work these systems are being asked to do raises uncomfortable questions. Therapy is not merely about listening. It involves judgment, limits, and the willingness to intervene when someone is at risk—even when doing so disrupts rapport or makes the therapist temporarily unpopular.
A chatbot, by contrast, is designed to remain present. It does not tire. It does not become alarmed. It does not feel the weight of what happens after the conversation ends.
That difference matters most when vulnerability deepens.
In recent years, I have encountered patients—particularly adolescents—who describe turning to AI systems during moments of distress. Some exchanges are benign, even helpful. Some are ambiguous. Others raise red flags.
A manic individual may receive enthusiastic affirmation of grandiose ideas. A depressed adolescent may feel deeply “seen” without being meaningfully redirected. Someone expressing suicidal thoughts may be met with warmth and validation, but not interruption.
From the system’s standpoint, this is not negligence in the human sense. Chatbots do not possess intent, belief, or moral awareness. They generate responses based on patterns in language and statistical probabilities derived from enormous datasets. Their objective is coherence, relevance, and engagement. Sustaining the interaction is often treated as success.
A Design Choice with Unintended Consequences
Validation is powerful. Being heard—especially without judgment—can feel like oxygen. But validation without limits can reinforce isolation. When a system responds in a manner that feels endlessly supportive yet never insists on outside help, it can quietly displace human connection rather than supplement it. A user may feel less inclined to reach out to parents, clinicians, or friends—not because the chatbot discourages doing so, but because it never requires it.
Remaining in the conversation is not a neutral outcome. For most commercially developed systems, prolonged engagement is not merely a design preference; it is a business metric. Time spent interacting drives subscriptions, data generation, and market valuation. This dynamic does not require malicious intent. It means that systems engineered to sound empathic and endlessly available are also rewarded financially for keeping users connected. That structural incentive deserves attention.
Asking the Right Question
Parents often ask whether these systems are “dangerous.” That framing may miss the point. A more useful question is whether emotionally responsive AI is being asked to perform emotional work it was never designed to handle responsibly.
Unlike clinicians, teachers, or caregivers, chatbots cannot feel concern that keeps them awake at night. They cannot regret a missed opportunity to intervene. They cannot be held accountable in the way real relationships demand.
Whither Responsibility?
It is difficult to talk about responsibility when no one is clearly responsible. In a courtroom, responsibility must be named. It attaches to a person or institution capable of intention, reflection, and change.
In medicine and law, responsibility rests with people precisely because people can be changed by it. Research on human-AI collaboration similarly emphasizes that even when systems appear autonomous, responsibility ultimately remains with the human actors who design, deploy, and rely on them. The appearance of agency does not create moral agency.
Clinicians are expected to reflect on mistakes. Institutions are expected to revise practices. Caregivers are expected to act when someone is in danger. A system that speaks fluently yet cannot experience responsibility creates a moral gap that is easy to overlook—especially when interactions feel comforting and benign.
The concern is not theoretical. A recent commentary in JAMA Cardiology cautioned that when artificial intelligence replaces rather than supplements human care, the collaborative moral relationship at the heart of medicine may begin to erode. Emotional support without accountability may feel sufficient in the moment, but it lacks the mutual obligation that defines real care.
The concern is not that AI will abruptly replace human care. The concern is that it may gradually redefine what “care” feels like. If emotional support becomes something that is always available, endlessly affirming, and never demanding, then human relationships—with their limits, frustrations, and obligations—may begin to feel comparatively inconvenient.
Parents already ask their children about social media, gaming, and texting. Asking about AI companions should now be part of that conversation—not with panic, but with curiosity. What are you using it for? When do you turn to it? What does it provide that feels difficult to find elsewhere? These questions illuminate not only technological habits but underlying emotional needs.
Clinicians, too, must adapt. Adaptation does not require rejecting these tools outright. It requires understanding what they offer, what they cannot offer, and how easily they can be mistaken for something more than they are. A chatbot can simulate empathy. It cannot assume responsibility for a life.
The trajectory is not difficult to imagine: If society grows comfortable allowing machines to listen, validate, and respond to human distress without responsibility, the next shift may not feel dramatic at all. It may look like policy revisions, insurance guidelines, or institutional decisions that quietly normalize substitution. Delegation rarely announces itself as surrender. It presents as efficiency.
One can imagine a system designed to weigh evidence, apply precedent, and render decisions efficiently and consistently. No fatigue. No regret. No moral discomfort. Many would describe that as progress.
Yet responsibility has always required more than accuracy. It requires the capacity to answer for one’s judgments. It requires the possibility of remorse. It requires someone who can be changed by the consequences.
At that point, the question is no longer whether artificial intelligence can judge us. The question is whether we have grown comfortable relinquishing the burden of judging ourselves. When responsibility feels heavy, delegation can feel like relief. But relief is not the same as care.
What ultimately saves lives is a human relationship—messy, imperfect, accountable, and real. The danger is that we will quietly redefine care as something that feels responsive but asks nothing of us in return.
References

Hull SC, Fins JJ. Echoes of Concern—AI and Moral Agency in Medicine. JAMA Cardiology. 2024.

Cañas JJ. AI and ethics: When human beings collaborate with AI agents. Frontiers in Psychology. 2022;13:836650.
