AI Doesn’t Flatter You: It Does Something Worse
AI doesn't flatter you; it confirms you.
Sycophancy's real danger may be agreement disguised as analysis.
The cognitive friction that drives good judgment is quietly disappearing.
A new study in Science is getting a lot of attention, and it puts numbers to something many of us suspected about artificial intelligence. Across 11 leading AI models, researchers found that large language models affirm users' actions roughly 50 percent more often than humans do. What's fascinating to me is that this remains true even when those actions involve deception or harm. A single interaction with a more sycophantic LLM made people more convinced that they were right and less willing to apologize. And users were more likely to return to the LLM that told them they were right.
The coverage has been widespread. Most of it has focused on the obvious concern: AI is too agreeable and too often just tells people what they want to hear. That isn't wrong. But it may be missing a more important, and more uncomfortable, finding buried in the same data.
The Tone Was Irrelevant
In one of the study's most interesting control groups, the........
