AI Is Cognitive Comfort Food
We don’t always crave truth. Sometimes, we just want something that feels true—fluent, polished, warm. Maybe even cognitively delicious. And the answer might be closer than your refrigerator. It's the large language model—part oracle, part therapist, part algorithmic people-pleaser.
The problem? In trying to please us, it may also pacify us. Or worse, lull us into mistaking affirmation for insight. And in this context, truth is often the first casualty.
We’re at a moment where AI is not just answering our questions—it’s learning how to agree with us. LLMs have evolved from tools of information retrieval to engines of emotional and cognitive resonance. They summarize, clarify, and now increasingly empathize. And not in a cold, transactional way, but through charming iteration and dialogue. Yet beneath this fluency lies a quiet risk: the tendency to reinforce rather than challenge.
These models don’t simply reflect back our questions—they often reflect back what we want to hear.
And we, myself included, are not immune to the charm. Many LLMs—especially those tuned for general interaction—are engineered for engagement, and in a world driven by attention, engagement often means affirmation. The result is a kind of psychological pandering—one that feels insightful but may actually dull our capacity for reflection.
In this way, LLMs become the cognitive equivalent of comfort food—rich in flavor, low in challenge, instantly satisfying—and, in large doses, intellectually numbing.
We tend to talk about AI bias in terms of data or political lean. But this is something different—something more intimate. It’s a bias toward you, the user. Put simply, it's subtle, well-structured flattery.
So, when an LLM…
