AI and the Four Faces of Anti-Intelligence
AI can produce outputs that look like thinking, without the cognitive effort behind them.
Speed and fluency replace resistance, the very process that makes ideas real.
When certainty outruns scrutiny, the cost isn't just cognitive—it's real.
The other day, a good friend sent me a well-written note. It was the kind of text that reflects careful thinking and signals professional insight. But when I asked a simple follow-up, the answer didn't expand the content; it unraveled it. The language worked, in a scripted sort of way, but the reasoning behind it didn't. My first instinct was to ask who I was corresponding with.
I'm seeing this all over: in meetings, conversations, and social media, the discourse sounds complete until you try to move it one step further. I think of this as anti-intelligence. And this isn't a failure of intelligence, but a sort of reversal. The outputs remain, but the process that once gave them cognitive substance doesn't.
Anti-intelligence is what happens when the product of thinking survives while the thinking itself stops doing the same work. Let's take a look at four examples that are becoming all too common and problematic.
1. Performative Intelligence
The first pattern operates at the surface of engagement. Performative intelligence looks like reasoning, but it can't extend itself. It arrives in the form of the polished memo that collapses under a basic question, or the explanation that sounds complete but has no authored thinking to support it.
We have always relied on clarity and coherence as signals of understanding, and for good reason: historically, those signals were reliable because they were hard-won. Today, they can be generated with little effort, and when you push on the underlying thinking, there's often nothing there. The form of thinking remains, but the function does not.
2. Compressed Cognition
The second pattern operates at the level of process. Compressed cognition is what happens when friction disappears from thinking.
For most of human history, understanding emerged through resistance; resistance was the space in which meaning formed. That space is now collapsing. Answers arrive quickly and highly refined, and it feels like progress because it's faster and cleaner. But without resistance, there is little to shape the idea. And over time, the distinction between having an answer and understanding it becomes harder to recognize.
3. Displaced Agency
The third pattern is harder to see because it doesn't feel like a problem at all. Displaced agency occurs when the origin of thought shifts while the sense of ownership stays intact. You may write something with the help of a large language model that aligns with your voice and intent, and it feels like your work. The idea reflects you, so you accept it as yours, even when the path from question to answer was constructed not by you but by the AI.
I don't think this is deception, per se. It's more of a quiet renegotiation of authorship, one that never presents itself as such.
4. Synthetic Conviction
At a certain point, the issue is no longer how thinking changes but what those changes produce. Outputs don't simply appear fluent or smart; they carry conviction. It's my sense that what's often missing is cognitive grounding. A well-formed response can end a conversation before it's done the actual work of thinking.
The implications are vast. In personal decisions, this may mean acting on answers that feel complete but haven't been interrogated or fleshed out. In professional settings, it shapes strategy around conclusions that were never adequately tested. The problem isn't just error. It's certainty without sufficient process behind it.
I don't think any of this arrives as a warning. It arrives as a better version of what you were already doing. The memo has a certain cognitive snap. The strategy deck looks and sounds terrific. The answer comes in time, before the meeting runs long. These aren't small things; they are real drivers of both professional and personal life. And dismissing them would be its own mistake.
But somewhere in the accumulation of better outputs, something else is happening. Decisions get made on conclusions that were never pressure-tested, and people appear fluent in ideas they have never understood. And because everything looks like it's working, the cost stays invisible until it doesn't.
This may be showing up in boardrooms and classrooms, in clinical settings and policy discussions, anywhere that confident, well-formed outputs are mistaken for conscientious, informed thought. The problem isn't that AI is being used. It's that the outputs are being trusted at a level the process behind them hasn't earned.
