
Conversational AI and Emotional Intelligence

AI can help users reframe messages and regulate emotional reactions before responding.

Research shows proportionate regulation—not expressiveness alone—drives social competence.

Fluent language can create an illusion of insight while masking weak judgment.

For many people, the first meaningful use of conversational artificial intelligence (AI) is as an aid to interpersonal communication. Many adults—especially older adults—use these systems to rehearse difficult conversations, clarify thoughts, or find the right words when communication feels effortful.

They ask: “Can you make this sound nicer?” “How should I respond?” “Help me say this without making things worse.” The impulse is natural. Human communication is fragile. Tone is difficult to calibrate, and small missteps escalate quickly. When to speak? When to hold back?

Beneath these practical requests lies a deeper issue: emotional intelligence—the capacity to regulate emotion, interpret social cues, and judge when expression serves a relationship and when restraint protects it. In this post, I examine conversational AI through a clinical lens, asking two questions: What is happening psychologically? And what does it mean for therapeutic practice? The aim is not to react to new technologies but to help clinicians engage them deliberately.

Emotional Intelligence Is About Regulation, Not Just Expression

Psychological research clarifies why AI-assisted communication can sometimes improve exchanges. Emotional intelligence is often mistaken for expressiveness or empathy alone. In the popular literature, it is associated with Daniel Goleman’s emphasis on self-awareness, self-regulation, and social attunement.

Complementing that framework, decades of experimental work by James Gross and colleagues show that effective social functioning depends as much on regulation as on expression. In foundational research, Gross examined how people regulate emotion during social interaction. Participants were instructed either to express their feelings freely or to use cognitive reappraisal—reinterpreting a situation to alter its emotional impact (Gross, 1998; Gross & John, 2003). Those who regulated their responses were judged more appropriate and more professionally effective, even when they appeared less warm. Emotional intelligence, in this sense, is not simply expressing emotion, but modulating it proportionately.

When used well, conversational AI can function as a regulatory aid. By helping users rephrase or reframe a message, it can support the modulation that emotional intelligence requires. These systems generate empathy and reassurance fluently, which can help users pause and reconsider reactive communication. But fluency is not judgment. The same ease that softens tone can also encourage overelaboration or misplaced warmth when restraint would signal maturity.

In my own professional life, I have used conversational AI to reframe academic exchanges that initially felt personal. The process often surfaces alternative explanations I might overlook in the heat of reaction. That illustrates both the promise and the limit of these systems: They can support recalibration, but they cannot determine what proportion the situation requires.

The Risk of Overcommunication

A second body of research examines conversational norms and the risks of overcommunication. Studies of self-disclosure show that sharing too much—too quickly or in the wrong context—can damage social evaluations, even when intentions are positive. In classic experimental work, participants who disclosed more personal information than situational norms warranted were rated as less socially skilled and less appropriate (Cozby, 1973; Derlega & Grzelak, 1979). Related studies find that combining multiple conversational goals—such as expressing appreciation while also making a request—can reduce perceived clarity and professionalism. These outcomes reflect miscalibration, not hostility.

This research marks the boundary of AI’s usefulness. Conversational systems are optimized to elaborate and enrich language. That tendency can help soften tone, but it can also encourage excess—longer messages, layered intentions, amplified emotion—precisely where restraint would signal judgment. The system does not know when “more” becomes disproportionate.

I saw this dynamic in a classroom interaction. An undergraduate submitted an email drafted with AI assistance. The message was articulate and emotionally fluent. When I asked, the student readily acknowledged using AI. The problem was not concealment but proportion. The email combined multiple purposes, overelaborated emotionally, and missed contextual cues about what the situation required. The tool improved tone. It could not supply judgment.

Fluency Is Not Understanding

A third psychological insight helps explain why these misjudgments are easy to miss. Research on processing fluency shows that people mistake smooth language for deeper understanding. In experiments by Oppenheimer (2006), explanations identical in content but written in more polished prose were rated as more intelligent and insightful—even when they contained logical gaps. Fluency creates an illusion of competence. Because fluency is usually adaptive, it reduces scrutiny and delays correction.

I encounter this in my own writing. When a draft reads beautifully but feels slightly unlike my voice, I pause. Sometimes the prose is elegant but vague. The task then is not refinement but simplification—to ask what I am actually trying to say. The goal becomes precision rather than polish.

In interpersonal communication, the same bias operates. A rhetorically smooth message can feel emotionally intelligent while misjudging proportion. Polished language may conceal underdeveloped judgment rather than reveal understanding. This is not a failure of AI, but a predictable interaction between human psychology and fluent output.

Remembering What Not to Say

A senior scholar once described wanting a technology that would remind him to call his adult daughter—but also remind him what not to ask. Don’t probe about her job. Don’t raise sensitive topics. Don’t turn concern into intrusion.

The difficulty was not initiating contact. It was remembering the boundaries that make a conversation supportive rather than overbearing. What he wanted was not more expressiveness, but restraint.

Emotional intelligence is not simply articulating warmth. It is calibrating what to say—and what to withhold—given the history and sensitivities of a relationship. Good intentions can overshoot proportion.

Here, conversational AI reaches its limit. These systems can suggest phrasing and soften tone. With enough context, they may even recognize patterns. But they do not inhabit the relationship or carry responsibility for its consequences. A model can suggest what sounds reasonable. Only a person can judge what is proportionate. Conversational AI can assist language. Deciding what belongs remains human.

What This Means for Clinicians: Integrating AI Into Reflective Practice

The distinction between expressive fluency and relational judgment has direct implications for clinical work. Many clients already use conversational AI to regulate themselves before entering therapy. Rather than viewing this as a threat to therapeutic authority, clinicians can treat it as material for reflection.

The first step is inquiry. How is the tool being used? Did it help the client pause rather than react? Did it introduce alternative interpretations—or encourage avoidance? AI-assisted drafts and rehearsals can be brought into session and examined collaboratively.

Clinicians may even structure its use. A client prone to reactive email exchanges might draft a response with AI and review it in session. What assumptions were amplified? What tone was introduced? What still feels disproportionate? The aim is not to perfect the message but to illuminate regulatory patterns.

This extends familiar techniques. Cognitive reappraisal, role-play, journaling, and letter-writing already externalize reactions for examination. Conversational AI accelerates that process by generating alternatives quickly. The risk is not replacement of judgment, but mistaking fluency for growth. Therapy remains the place where proportion is cultivated—where clients learn not only how to phrase a thought but also whether to express it at all.

A Fair Shake for AI—and for Clinical Judgment

To give conversational AI a fair shake is to resist two distortions: alarmism and overconfidence. These systems can support reflection and perspective-taking. They can also obscure proportion if fluency is mistaken for insight.

For clinicians, the task is not exclusion but integration. Conversational systems can be examined, structured, and sometimes prescribed. But the work of therapy—developing restraint, tolerance, and discernment—cannot be outsourced.

Conversational AI can help people communicate more smoothly. It cannot determine what belongs in a relationship at a particular moment. That discernment is cultivated over time—in families, workplaces, and therapy.

If this column is to give AI a fair shake, it will do so without dismissing new tools or surrendering judgment to them. The aim is integration with clarity. The responsibility remains ours.

Cozby, P. C. (1973). Self-disclosure: A literature review. Psychological Bulletin, 79(2), 73–91.

Derlega, V. J., & Grzelak, J. (1979). Appropriateness of self-disclosure. In G. J. Chelune (Ed.), Self-disclosure: Origins, patterns, and implications of openness in interpersonal relationships (pp. 151–176). Jossey-Bass.

Gross, J. J. (1998). The emerging field of emotion regulation: An integrative review. Review of General Psychology, 2(3), 271–299.

Gross, J. J., & John, O. P. (2003). Individual differences in two emotion regulation processes: Implications for affect, relationships, and well-being. Journal of Personality and Social Psychology, 85(2), 348–362.

Oppenheimer, D. M. (2006). Consequences of erudite vernacular utilized irrespective of necessity: Problems with using long words needlessly. Applied Cognitive Psychology, 20(2), 139–156.


© Psychology Today