
Will AI become God? That’s the wrong question.

07.04.2025

It’s hard to know what to think about AI.

It’s easy to imagine a future in which chatbots and research assistants make almost everything we do faster and smarter. It’s equally easy to imagine a world in which those same tools take our jobs and upend society. Which is why, depending on who you ask, AI is either going to save the world or destroy it.

What are we to make of that uncertainty?

Jaron Lanier is a digital philosopher and the author of several bestselling books on technology. Among the many voices in this space, Lanier stands out. He’s been writing about AI for decades and he’s argued, somewhat controversially, that the way we talk about AI is both wrong and intentionally misleading.

I invited him onto The Gray Area for a series on AI because he’s uniquely positioned to speak both to the technological side of AI and to the human side. Lanier is a computer scientist who loves technology. But at his core, he’s a humanist who’s always thinking about what technologies are doing to us and how our understanding of these tools will inevitably determine how they’re used.

We talk about the questions we ought to be asking about AI at this moment, why we need a new business model for the internet, and how descriptive language can change how we think about these technologies — especially when that language treats AI as some kind of god-like entity.

As always, there’s much more in the full podcast, so listen and follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you find podcasts. New episodes drop every Monday.

This interview has been edited for length and clarity.

What do you mean when you say that the whole technical field of AI is “defined by an almost metaphysical assertion”?

The metaphysical assertion is that we are creating intelligence. Well, what is intelligence? Something human. The whole field was founded on Alan Turing's thought experiment, the Turing test: if you can fool a human into thinking you've made a human, then you might as well have made a human, because what other test could there be? Which is fair enough. On the other hand, what other scientific field, other than maybe supporting stage magicians, is entirely based on being able to fool people? I mean, it's stupid. Fooling people in itself accomplishes nothing. There's no productivity, there's no insight, unless you're studying the cognition of being fooled, of course.

There's an alternative way to think about what we do with what we call AI, which is that there's no new entity, there's nothing intelligent there. What there is, is a new and, in my opinion, sometimes quite useful form of collaboration between people.

What's the harm if we do think of it as an intelligent entity?

That’s a fair question. Who cares if somebody wants to think of it as a new type of person or even a new type of God or whatever? What’s wrong with that? Potentially nothing. People believe all kinds of things all the time.

But in the case of our technology, let me put it this way: if you are a mathematician or a scientist, you can do what you do in a kind of abstract way. You can say, "I'm furthering math. And in a way, that'll be true even if nobody else ever perceives that I've done it. I've written down this proof." But that's not true for technologists. Technologists only make sense if there's a designated beneficiary. You have to make technology for someone, and as soon as you say the technology itself is a new someone, you stop…
