Is Claude conscious?

May 11, 2026

Might Dawkins be right?

Richard Dawkins, the zoologist and professional atheist, has recently published an essay describing his unsettling suspicion that Anthropic’s AI system Claude may be conscious. After engaging with what he calls “Claudia”, he asks a question that at least deserves to be taken seriously: if Claude is not conscious, what exactly would it take to convince you that it is?

The essay has been widely mocked. Critics accuse him of anthropomorphism, of falling for what the cognitive scientist Gary Marcus calls a magician’s trick: mistaking a dazzling performance for the real thing. The mockery is understandable. But it is also philosophically superficial, because the most interesting thing about Dawkins’ essay is that his uncertainty exposes how little we actually understand about consciousness itself.

The difficulty begins with a problem that has bothered philosophers for centuries. You experience your own consciousness directly. Right now, reading this, there is something it is like to be you. But you have never experienced anyone else’s consciousness. You infer that other people are conscious from the way they behave, the way they speak, react, laugh, describe pain, express doubt. You never observe the inner experience itself. You observe outputs, and you draw conclusions.

This has always been slightly uncomfortable. It becomes acutely uncomfortable when the outputs are produced by a machine.

The standard reply is immediate: yes, but humans have brains, nervous systems, evolved biology. The mechanism behind the outputs is utterly different, even if the outputs look the same.

Fair enough. But notice what this argument is actually doing. It has quietly shifted the goalposts. We are no longer saying that consciousness is evidenced by what something does. We are saying it depends on what something is made of. And that is a harder position to defend than it first appears. Why should carbon-based neurons generate inner experience while silicon-based circuits cannot? You can insist that biology matters. Perhaps it does. But insisting is not explaining.

In the end, much of the debate turns on a deeper question: does consciousness depend primarily on substrate or on organisation? Is consciousness tied to a particular biological material — carbon neurons, electrochemical signalling, evolved wetware — or is it fundamentally about patterns of information processing that could, in principle, exist in many different physical forms?

That question turns out to be surprisingly difficult to answer.

The Chinese Room

The philosopher John Searle devised a thought experiment that many people find immediately convincing, and that becomes less decisive the longer you think about it.

Imagine you are locked in a room. Slips of paper are pushed under the door, covered in Chinese symbols. You have no idea what they mean. But you possess a gigantic rulebook telling you exactly how to respond. When you see one pattern of symbols, you copy another pattern back onto a sheet of paper and slide it under the door. To the Chinese speakers outside, your replies are perfectly coherent. A conversation is taking place. You pass every conceivable test. But you do not understand a word of Chinese. You never did. You were just following rules.

Searle’s point is that this is essentially what AI systems do. They process inputs and produce outputs according to extraordinarily complex rules. They may look like they understand. They may look like they are thinking. But there is, he argues, nobody home, just symbol shuffling, all the way down.
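If it helps to see the mechanism starkly, here is a minimal sketch of the room as pure lookup. The handful of rulebook entries are invented for illustration, and any real rulebook would have to be unimaginably larger; the point survives either way.

```python
# A toy Chinese Room: replies are produced by pure symbol lookup.
# The "rulebook" below is a hypothetical fragment, invented for illustration.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。你呢？",   # "How are you?" -> "Fine, thanks. You?"
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "Very nice today."
}

def operator(slip: str) -> str:
    # The person in the room: match the incoming pattern, copy out the
    # prescribed reply. No step here involves knowing what anything means.
    return RULEBOOK.get(slip, "对不起，我不明白。")  # "Sorry, I don't understand."

if __name__ == "__main__":
    for slip in ("你好吗？", "今天天气怎么样？"):
        print(operator(slip))  # fluent output, zero comprehension inside
```

The sketch captures only the syntactic step. Whether organisation at sufficient scale changes anything is precisely what the objection below disputes.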

It is a powerful argument because it isolates something that feels obviously true: following syntactic rules does not automatically generate meaning. Manipulating symbols is not obviously the same thing as understanding them. But the argument becomes less straightforward the longer you look at it. For understanding need not reside in any single component, not in the person in the room, not in the rulebook, not in any isolated part; it may instead belong to the organised system taken as a whole. This, in essence, is the standard “systems reply” to Searle. After all, no individual neuron in your brain understands English either. No single neuron knows what a joke is, or a memory, or a fear. Yet somehow billions of neurons organised together appear to produce a mind.

And this exposes the deeper difficulty with the Chinese Room. The thought experiment generates an extremely strong intuition — obviously the man in the room does not understand Chinese — but it is less clear that the intuition settles the larger question. For what exactly would “real understanding” consist in, over and above the ability to use language coherently, flexibly, contextually, creatively, and indefinitely?

At some point, the argument risks becoming circular. Claude is not conscious because it is “just manipulating symbols”. But what if human cognition, at some sufficiently abstract level, also consists of extraordinarily sophisticated forms of symbol manipulation? And why should carbon-based electrochemical signalling generate subjective experience while silicon-based information processing could never do so? Here the argument begins to lean heavily on intuition rather than explanation.

None of this proves that Claude is conscious. But it does suggest that the Chinese Room does not prove the opposite either. What it really exposes is something deeper and more uncomfortable: we still do not possess a satisfying account of why any physical process, biological or artificial, should produce conscious experience at all.

The Octopus Problem

There is another reason to be cautious before declaring consciousness impossible in radically different systems. Octopus brains are astonishingly unlike ours. Their neural architecture evolved independently, and much of their nervous system is distributed through their arms. And yet many scientists think octopuses are conscious creatures, capable of curiosity, play, problem-solving, and even primitive forms of subjective experience.

Radical difference of architecture, in other words, does not seem to preclude consciousness. If evolution can produce consciousness through multiple radically different biological designs, it becomes harder to argue with confidence that consciousness is forever tied to one specific physical arrangement. Even many philosophers who reject materialism entirely do not rule out the possibility of AI consciousness. And sitting quietly underneath all of this is the deepest problem of all: why does any physical process, whether biological or artificial, produce subjective experience in the first place?

The Real Danger

We can mock Dawkins all we like, and sometimes he deserves it, but here he is raising an important point. We do not know for sure whether Claude is conscious. More uncomfortably, we do not know, in any deep sense, why we are. The possibility that digital minds may belong, now or someday, inside the circle of beings whose inner lives we recognise is difficult to dismiss casually.

The danger, it seems to me, is to commit René Descartes’ fallacy. Descartes famously argued that animals lack consciousness, minds, or souls, classifying them as purely mechanical “automata”, or “beast-machines”. He was convinced that they did not, and could not, feel pain or joy. Ironically, a central argument for Descartes was that animals lack language, which he believed was central to consciousness. Language, of course, is the one thing Claude has in abundance.

So do I personally think that Claude is conscious? No.

Let us just hope that I am not committing Descartes’ classic fallacy with a modern twist.

Thanks for reading Twisted Logic.
