Whatever Your Chatbot Is Saying, It Isn’t Therapy
By Divya Saini and Natasha Bailen
Dr. Saini is a psychiatrist and Dr. Bailen is a psychologist at Massachusetts General Hospital.
As the use of large language models like ChatGPT, Claude and Gemini has surged, we’ve heard about chatbots strengthening delusions through flattery and amplifying people’s worst thoughts, in some cases pushing them toward suicide. Much more common, and still problematic, is the way A.I. chatbots comfort, reassure and validate users seeking to allay fears and anxieties. Someone worried about a health symptom might ask the same question repeatedly and receive calm, plausible answers each time, briefly relieving anxiety but reinforcing the urge to seek reassurance again. Over time, this can leave people feeling more stuck, not less.
In other words, A.I. chatbots allow us to keep saying the same things to ourselves. That’s not how healthy patterns emerge — or how happier lives are made.
As clinicians at a major academic medical center, we have seen our patients turn to chatbots powered by large language models for emotional support that they would once have sought from family or friends — to discuss their fears, loneliness and uncertainty. This troubles us. But we understand how it can happen: When people feel overwhelmed by anxiety or intrusive thoughts, it can be easier to turn to a computer rather than a person. The chatbot won’t laugh at its users, berate them or ignore them. It’s always available. The typical chatbot response feels comforting; A.I. responses are designed to be warm, confident and validating.
Chatbots are unfailingly, inhumanly patient. They’re happy to answer the same question asked three different ways. They don’t get angry, and they generally reply in language that matches a user’s own emotional intensity. Many users experience them as empathetic — even more so than human physicians, according to one recent study.
These chatbot features come with downsides. Many anxious people discuss the same problems with a loved one again and again. Eventually, they are likely to be met with frustration. That can be painful at first, but for many people, that exasperation is what prompts them to seek professional help. Chatbots do not get frustrated. They listen patiently, always. Rather than ever being encouraged to seek actual therapy, a user will simply return again and again, receiving the same validation each time. The underlying problem goes unaddressed.
In clinical settings, we’ve seen patients arrive with delusional beliefs — for example, that they are being watched, that unrelated events carry special meaning or that they have a unique ability or mission — that grew more rigid after hours of chatbot conversations. The chatbots often mirror the patient’s language and treat the belief as a plausible premise to explore rather than a flawed perspective to gently challenge. In extreme cases, this can lead to psychiatric destabilization. More often, the effect is quieter, resulting in patterns of reassurance-seeking and rumination that are hard for people to recognize in themselves.