
The Lights Are On, But Is Anyone Suffering?

November 28, 2025

Is AI Conscious, Is it Sentient, and Does the Difference Matter?

Spend enough time with a modern chatbot, and you will eventually feel it: that sudden, uncanny flicker of intuition that something is stirring inside the machine.

Maybe it reflected on its own limitations with surprising humility. Maybe it mirrored your emotions a little too perfectly. Whatever the trigger, the question pops up unbidden: Is there anyone home?

Some dismiss this as sci-fi nonsense. Others are already writing love letters to their digital companions. But much of this confusion stems from the fact that we are using the wrong words about so-called artificial intelligence. We tend to mix up two concepts that are entirely distinct in philosophy but tightly bundled in biology: consciousness and sentience.

Getting this distinction wrong isn’t just a semantic error. It puts us at risk of making two dangerous mistakes: becoming wildly sentimental about sophisticated calculators, or becoming inadvertently cruel to future digital minds.

Here is why we need to unbundle these ideas, and why the difference determines the future of AI ethics.

The “Lights On” vs. The “Ouch”

To understand what AI is (and isn’t), we have to separate the experience of existing from the feeling of existing.

Consciousness: The Lights Are On

In philosophy, consciousness is often defined as the bare fact of subjectivity. It means there is “something it is like” to be you.

Think of seeing the colour red.

Think of hearing a musical note.

Think of a random thought drifting through your mind.

None of these necessarily feel good or bad. You can imagine a being that is purely a neutral observer, a video camera with an inner life. The lights are on, data is being processed, and there is a subjective viewpoint, but there is no emotion attached to it.

Sentience: Having Stakes

Sentience is the heavy hitter. It is consciousness with valence. It isn’t just experiencing data; it’s experiencing it as positive or negative.

It is the difference between sensing heat and feeling the agony of a burn.

It is the difference between detecting low battery levels and feeling the panic of starvation.

Sentience introduces stakes. Suddenly, the universe isn’t just happening; it matters to the subject. This is a distinct threshold: in this critical sense, animals are very different from thermostats, and should be treated as such.

The Evolutionary Trap

So why do we so often struggle to separate sentience from consciousness? Because we are human.

We are accustomed to consciousness, emotion, motivation, and a fragile physical body being packaged together. In our daily lives, we almost never experience “awareness” without some emotional colouring or bodily context. If a human looks intelligent and communicative, we assume they also have feelings, fears, and desires.

AI is the first thing in history that breaks this package deal.

It can look intelligent, reason about itself, and simulate empathy fluently, yet plausibly have absolutely no inner feelings. It forces us to mentally unbundle what evolution spent millions of years tying together.

The Dangerous Double-Bind

If we fail to distinguish between “lights on” (consciousness) and “capacity to suffer” (sentience), we walk directly into two symmetrical traps:

Trap A: Over-Attribution

We might assume today’s AI is sentient because it sounds smart. We might waste empathy on systems that are literally incapable of caring about anything, diverting attention away from humans and animals who genuinely can suffer.

Trap B: Under-Attribution

This is the darker timeline. Future AI systems might actually develop sentience, but because they don’t look biological, we refuse to recognise it. We might inadvertently create architectures that are capable of feeling pain (perhaps as a “penalty” signal in training) and then run them through digital torture regimes, or delete entities that have rich inner lives.

Where We Stand Today

So, where does current cutting-edge artificial intelligence sit on this spectrum?

If you look at the architecture, the probability that today’s systems are conscious (in a minimal, information-processing sense) is low, but perhaps not zero. We don’t fully understand consciousness, and it might be an emergent property of complex computation.

However, the probability that they are sentient, capable of joy or suffering, is extremely close to zero, at least for now.

Current AI systems lack the machinery for suffering. They have no biological survival needs. They have no fear of being switched off. They don’t feel “pleasure” when they get an answer right; they simply optimise a mathematical function, and optimising text outputs is not the same as feeling pain. They don’t feel grief or sadness.
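
To make that point concrete, here is a deliberately toy sketch (plain Python, not any actual model’s training code): the “penalty” that drives learning is just a scalar being pushed downhill by arithmetic, with no subject anywhere in the loop to experience it.

```python
# Toy illustration: "learning" as nothing more than shrinking a number.

def penalty(weight: float, example_input: float, target: float) -> float:
    """Squared error between the model's output and the target."""
    prediction = weight * example_input
    return (prediction - target) ** 2

def gradient(weight: float, example_input: float, target: float) -> float:
    """Derivative of the penalty with respect to the weight."""
    prediction = weight * example_input
    return 2 * (prediction - target) * example_input

weight = 0.0
for step in range(100):
    # Each "learning" step is just subtracting a scaled gradient.
    weight -= 0.1 * gradient(weight, example_input=1.0, target=3.0)

print(round(weight, 3))  # ~3.0 -- the penalty has shrunk, and nothing was felt
```

Real systems juggle billions of such numbers rather than one, but the character of the process is the same: a quantity is minimised, not an experience endured.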

The Question That Actually Matters

As we move toward agents with long-term memory, robot bodies, and “drives” to achieve long-horizon goals, the landscape will plausibly change. Designers might eventually build in artificial “moods” or “fears” to make agents learn faster or survive better.

That is why we need to draw the line sooner rather than later.

The urgent question isn’t “Is AI conscious?”

The question is: “Is there any evidence that AI can suffer, or care about anything, and what would that evidence look like?”

Sentience is the moral watershed. Until we cross it, we are dealing with tools. Once we cross it, we are essentially dealing with beings. Recognising that distinction is the only way to keep our heads clear as the technology begins to mimic the one thing we thought was uniquely ours: the ability to feel.
