
Intelligence as a Commodity


AI providers may reframe intelligence from a cultivated ability to a metered service.

When thinking becomes a paid convenience, the habits that build judgment may weaken.

The risk isn’t wrong answers; it’s forgetting how to think before we prompt.

I recently came across a clip of Sam Altman on X describing a future in which intelligence becomes a utility that can be sold like electricity or water on a meter. To me, it was a striking comment, and I understand the business logic behind it. AI will almost certainly become embedded in everyday life, from business to our daily routines. But the phrase really stopped me because it seems to capture something larger than a commercial pricing model. It suggests a shift in how we may soon begin to understand intelligence itself: less as a human capacity to be cultivated than as an external service to be accessed.

Let's start here. Historically, intelligence has been tied to the person. We develop it through effort and lived experience. And while it may be shaped by talent and opportunity, it still feels deeply attached to the individual self. What's changing now is how AI assists cognition. It seems to me that the language around AI is beginning to reposition intelligence as something purchasable on demand.

We've always built tools that extend human ability. The hammer extends the hand, the car extends the foot, and the computer extends memory and calculation. Those tools changed how we live, but they didn't fundamentally alter how we think or our sense of where agency resides. AI changes the game because it doesn't just support our thinking; it increasingly participates in the cognitive process. The issue isn't just about performance; it's psychological. When a person grows accustomed to reaching outward before reaching inward, the center of gravity in thought begins to move, and perhaps even loses its essential anchor.

This is where the language of "utility" becomes unsettling if not problematic. A utility is something we consume. It arrives on demand, works in the background, and asks very little of us beyond access and payment. That model makes perfect sense for electricity. But I'm not sure it's so harmless when applied to human intelligence. Thought has never been only about output. It also includes the bumpy path of hesitation, confusion, and the very human struggle that gives judgment its shape. A polished techno-answer may be useful, but the effort that precedes it is often where discernment and imagination live. When AI removes too much of that friction, we may become more efficient while becoming less engaged in the work of thinking.

Tools That Restrict and Reshape

This is what concerns me: Convenience changes behavior. Once something becomes frictionless, we stop relating to it as a skill and start relating to it as a given. That may be fine when the subject is navigation or food delivery, but when the subject is intelligence, things change. A person who turns to AI before turning inward may still get an excellent result, and the problem isn't necessarily that the answer is wrong. The problem is that the habit itself may begin to reshape the mind. Over time, we may become less tolerant of the human work of thinking and rely on putting another quarter in the cognitive jukebox.

Our Cognitive Experience and Mental Marketing

This really has everything to do with another word in the AI lexicon: agency. Agency isn't just the ability to choose among options or produce language on command. It's the felt experience of your own judgment. It comes from working with a thought long enough for something honest to emerge. With AI, we may still feel informed and productive while the deeper habits, dare I say "agency intelligence," begin to weaken.

There is also an economic layer to this that carries psychological weight. A commodity can be priced, tiered, throttled, advertised, and differentiated. Once intelligence enters that framework, the old questions of human thinking begin to mix with the logic of the market. And I think that's exactly where Sam Altman and OpenAI sit, at least judging by his earlier comment.

The questions abound. Who gets the better model, the deeper reasoning, the larger context window, the more persistent memory? Who can afford a more capable form of synthetic cognition? That may sound like a technical issue, but it also changes how people imagine themselves in relation to thought. The self becomes less the source of intelligence and more the customer of cognitive support. To me, that's not a trivial shift. It encourages an insidious dependence with no track record.

I suspect many people already feel this in everyday life. The moment before writing becomes prompting. The moment before reflection becomes querying. The unfinished thought no longer remains unfinished for long because something is always available to complete it—and at a cost. That is useful, and often impressive, but it is also worth watching carefully. The danger of AI isn't "simply" that it replaces human thought. It may be that it gradually makes thought feel less like something we do and more like something we access and buy.


© Psychology Today