
We’re Measuring AI on the Wrong Ruler


We assume artificial intelligence (AI) and humans share the same scale of intelligence.

Human thought carries lived consequence while AI computation does not.

One ruler may not be able to measure two different kinds of thinking.

Every debate about artificial intelligence (AI) seems to revolve around the same question: Is it smarter than we are?

The subtleties of the question might change, and the endpoints might be argued, but behind the cacophony of authoritative brilliance is a shared assumption—that intelligence lives on a single line. More of it on one end, less on the other. Humans sit somewhere along that spectrum, and machines are moving toward us.

But with all the discussion and debate, we rarely stop to examine the ruler itself. And the moment we ask whether AI is ahead of us, we have already accepted that we are measuring the same thing.

The Illusion of a Shared Scale

It's understandable why we default to this handy ruler. Large language models create the very stuff of our humanity, from words to images. Their output clearly looks like thinking, and it is often better than what we humans produce. But let's be careful not to get our hand slapped by that ruler in the process. Here's what we need to consider: when surface outputs converge, we assume the underlying structures do, too. Thought for thought and concept for concept, the two seem to sit on a single continuum, as if a shared "cognitive assessment" could rank them side by side.

But human cognition is not just output quality; it's consequence-bearing. When you make a decision, you carry the aftermath forward. And when you change your mind, that revision becomes part of your biographical narrative. It unfolds through time and alters who you are.

AI computation does none of this. It generates responses without any biography. It doesn't carry yesterday into tomorrow in any lived sense. Its fluency is extraordinary, but it's reversible, consequence-free, and precariously fragile in its understanding.

To measure both along a single axis of “smart” flattens the difference and misses what is distinctive about each.

Optimization Is Not Superiority

So, let's start with some basic assumptions. A calculator outperforms you at arithmetic. A navigation system like Waze outperforms you at route planning. Yet we certainly don't conclude that either possesses deeper intelligence. What we recognize is the optimization for a specific task.

The confusion (and trouble) begins when AI’s optimization extends into domains that are traditionally human, such as writing and creativity. And because that terrain feels familiar, we assume we are witnessing a better version of ourselves. But resemblance is not equivalence.

If we insist on placing human thought and machine computation on the same ruler, we will misread both. The machine appears superhuman because it excels at measurable outputs. The human appears inefficient because we hesitate, revise, doubt, and sometimes contradict ourselves.

Those very “inefficiencies” are inseparable from what makes human cognition distinct.

A Different Kind of Comparison

What if the real mistake is not overestimating AI or underestimating ourselves, but misclassifying what we are comparing?

Human thought is embodied and autobiographical. It's shaped by lived experience and future consequence. AI, by contrast, operates through statistical inference across vast datasets, identifying patterns with astonishing scale and speed. Both generate language, and both can solve problems. But the architectures behind the "thinking" are not interchangeable. When we collapse them into a single metric of intelligence, we distort the conversation, fueling hype on one side and anxiety on the other.

If we step off that single “axis of smart,” the debate shifts. The question is no longer whether AI is ahead of us. It becomes more precise: What kind of cognitive system is this, and how does it intersect with ours? That shift does not minimize AI’s power; it helps clarify it. It also preserves space for a more honest account of the multifaceted complexity of human thinking—from fear to flow.

The language we use shapes the future we imagine. If we continue to treat intelligence as a single measurable quantity along a single axis, we'll keep asking whether machines are catching up or surpassing us. If instead, we recognize that we may be dealing with different dimensions of cognition, we open a different and more nuanced path.

And to this point, the age of AI may not hinge on who is smarter but on whether we can abandon a model of intelligence that was too narrow to begin with.

The first step is simple. Question the ruler.


© Psychology Today