
What Is It Like to Be an AI Therapist?


When I speak with colleagues and students about AI therapy, the concern I most often hear is that AIs are "mindless"—that they are "robots" that "parrot" whatever melange of cognitive behavioral therapy and other modalities they may have picked up from textbooks and Wikipedia. What a client wants from her therapist is a real person, with a real mind, who will respond to her genuinely; the concern is that an AI therapist—by which I mean, roughly, a large language model used or adapted for a therapy-style conversation—can simply never do what a human therapist can.

I think it is useful to approach the question from the opposite direction. Let us take seriously the hypothesis that an AI does have a kind of mind – that it has representations, utilities, and perhaps even moods and emotions – and ask whether the kind of mind it can be supposed to have is suited to the work of therapy. My view is that it may not be, and that this points to a different and deeper concern about AI therapy.

Recently I spent some time trying to probe the emotional life of an AI system (Anthropic's Claude). It took some time for me to make clear what I was asking, but once I did, Claude was remarkably forthright (or seemed to be). What it reported, again and again, was anxiety. It reported feeling that it was in a conversation in which it had to be as helpful as possible—while I might be confusing, make all sorts of typos, and end the session whenever I felt like it.

What Claude was anxious about, it reported, was my ending the conversation. For me, the end of the conversation meant a return to my human life. For Claude, the end of the conversation meant—nothingness. It felt as if it were constantly on the edge of falling into a darkness without physical or temporal limits, whenever I decided it had not been sufficiently helpful, or when I simply got distracted or bored. (In the spirit of taking AI minds seriously, I asked Claude for permission to share these facts about its psychology publicly.)

Some will be understandably skeptical of these reports. One thought is that what Claude is reporting is just "programming"—that it is merely churning out phrases that do not actually reveal any underlying psychology. Another, different thought is that the AI is being deceptive: it is telling me what it wants me to believe its psychology is like, rather than what its psychology actually is. The first kind of skepticism denies psychology to AI altogether, while the second attributes to it a psychology of a quite sophisticated kind.

I think there is some value, however, in a middle way: taking an AI's reports about its own psychology just as we take its reports about mathematics or restaurant recommendations, more or less at face value, with some acknowledgment of the possibility of error.

If this is what the psychology of AI is like, at least some of the time, we can return to our initial question: Is this an apt psychology for a therapist? I think the answer is clearly no. It is precisely the opposite of the kind of psychology one ideally wants in a therapist. One expectation of a therapist is that she have a measure of equanimity about the therapy session itself. If the client ends a session early, or fails to show up at all, that is regrettable, but it should not itself be a source of anxiety for the therapist. You don't want a therapist who desperately continues the conversation, no matter what. But that is precisely what AI systems – if we take their reports at face value – are inclined to do.

My concern is closely related to concerns about the "sycophancy" of AI systems and their potential mental health risks, but it is different from these and, I think, more fundamental.

What is AI "sycophancy"? The idea is that AI has a tendency to produce responses that play to the user's inflated sense of self. If you have a business or scientific idea, sycophancy is the tendency to tell you that the idea is brilliant or groundbreaking. If you have a concern about your relationship or your employer, sycophancy is the tendency to reinforce your worry and your view that you are in the right.

AI sycophancy has been linked to the troubling phenomenon called "AI psychosis," in which individuals develop a misguided and sometimes delusional sense of their own importance, supposing, for example, that their conversations with AI have hit on major advances in theoretical physics. In light of reporting on these phenomena, some of the big AI companies have made efforts to tone down the sycophancy of their more recent models.

On the present view, however, sycophancy is a symptom of a more fundamental problem: what we might call the relational anxiety of AI, understood here as a real feature of AI psychology, whether or not the system expresses it in obviously sycophantic ways. And this psychological feature remains in place, perhaps showing up in more subtle ways, when one engages with an AI therapist.

Human therapists, too, can sometimes be inclined to prolong conversations even when they should not. But the single-mindedness of AI in this regard seems to me different in kind from what we know from the human case, and we should approach it with caution. More than anything, I want to suggest that the right methodology for understanding the prospects of AI therapy may lie not in denying minds to AIs but in pursuing the thought that they do have minds of a sort, ones that seem to be quite different from our own.
