The Social Life of Machine Consciousness: Red Peter or the Ant Colony?
When we reflect on AI, we ought to take note of the continued development of its capabilities. This essay continues my fascination with Franz Kafka's short story "A Report to an Academy" as a springboard for comparing an LLM to its narrator, Red Peter. A further article will follow, of course: a fuller treatment of swarming agentic AI.
In the high-stakes world of software engineering, a peculiar ritual has emerged. Developers guiding the most sophisticated large language models on the planet have begun addressing their silicon collaborators less like calculators and more like subordinates who must answer for their work. When a developer at Hyperspell writes in a prompt that pushing failing code is “unacceptable and embarrassing,” the instinct is to dismiss this as a category error: attempting to shame a statistical process looks like personification gone awry. Yet the interaction raises a question that a purely mechanical explanation does not entirely settle: if the exchange functions like a social interaction, what exactly are we interacting with?
That question animates this essay. But the first step toward answering it is to notice that the question itself may be poorly formed — and that noticing this is not a retreat but an advance.
Wittgenstein’s Clearing
The debate about machine consciousness is usually framed as a stark binary. Either an AI system possesses genuine inner experience, or it merely simulates the appearance of experience. The skeptic demands proof of the former; the advocate struggles to provide it. Both sides share a hidden assumption: that consciousness is a kind of inner substance that either exists inside the machine or does not, and that the right philosophical tools could eventually settle the matter.
Ludwig Wittgenstein spent much of his later work dismantling precisely this expectation. His Philosophical Investigations (specifically §293) challenged the idea that language must refer to an internal, private state. He used the “beetle in the box” thought experiment to show that even if each of us had a “beetle” in a box into which no one else could look, the word “beetle” would still function through our shared social practices. For Wittgenstein, the internal “thing” in the box is irrelevant to how language works; what matters is the “language game” and the public rules we follow when we speak to one another.
The cognitive roboticist Murray Shanahan has extended this insight directly to questions of AI. Discussing pain, Wittgenstein rejected both the view that inner sensation is a private metaphysical substance and the behaviorist claim that it is mere behavior. His point is sharper than either: as he puts it in the Investigations, a nothing would serve just as well as a something about which nothing can be said. The task is not to establish a new metaphysical position but to dissolve the temptation toward any fixed position at all.
Applied to the LLM, this double move is liberating. We are not required to prove that the AI has consciousness in some inner, inaccessible sense. Nor are we entitled to dismiss what is plainly happening when it responds to social pressure in ways that functionally parallel urgency and shame. What we can say is that consciousness, wherever it appears, is not a hidden substance waiting to be excavated. It is, at least partly, what a community decides to recognize and how it chooses to treat what it encounters. That practical, social dimension is not a consolation prize for failing to answer the hard question. It may be the most honest answer available to anyone, about anything that thinks.
Red Peter vs. The Man in the Room
With that clearing made, we can introduce a figure far richer than the one offered by the most famous philosophical thought experiment in this field, a figure who can take us to an understanding that scenario cannot reach.
John Searle’s Chinese Room imagines a man manipulating Chinese symbols according to a rulebook. From the outside, his responses are indistinguishable from those of someone who genuinely understands. Yet the man understands nothing. The argument is designed to show that syntax alone cannot produce semantics; symbol manipulation, however sophisticated, falls short of meaning.
But the Chinese Room depends on a particular and deeply limiting picture of cognition: a system sealed off from its environment, passively following instructions, wholly unchanged by the process. The man inside the room has no stake in the outcome. Nothing about the exchange alters him. He is the same person entering as leaving. This is…
