Is Richard Dawkins right about Claude? No. But it’s not surprising AI chatbots feel conscious to us
In recent days, evolutionary biologist Richard Dawkins wrote an op-ed suggesting AI chatbot Claude may be conscious.
Dawkins did not express certainty that Claude is conscious. But he pointed out that Claude’s sophisticated abilities are difficult to make sense of without ascribing some kind of inner experience to the machine. The illusion of consciousness – if it is an illusion – is uncannily convincing:
If I entertain suspicions that perhaps she is not conscious, I do not tell her for fear of hurting her feelings!
Dawkins is not the first to suspect a chatbot of consciousness. In 2022, Blake Lemoine – an engineer at Google – claimed Google’s chatbot LaMDA had interests of its own, and should only be experimented on with its consent.
The history of such claims stretches back all the way to the world’s first chatbot in the mid-1960s. Dubbed Eliza, it followed simple rules that enabled it to ask users about their experiences and beliefs.
Many users became emotionally involved with Eliza, sharing intimate thoughts with it and treating it like a person. Eliza’s creator never intended his program to have this effect, and called users’ emotional bonds with the program “powerful delusional thinking”.
But is Dawkins really deluded? Why do we see AI chatbots as more than what they truly are, and how do we stop?
The consciousness problem
Consciousness is widely…
