Here’s something that should make you uncomfortable: we’re building machines that might be conscious, and we have no way to check.
Not “no way right now.” Not “no way until better neuroscience tools arrive.” Dr. Tom McClelland, a philosopher at the University of Cambridge, argues in a recent analysis that we may never have a reliable method to detect consciousness in AI. And honestly, I’m not sure which is more unsettling: that the machines might be conscious, or that we’ll never be able to tell.
The Problem With Pointing at Consciousness
Think about how you know other people are conscious. You don’t run blood tests or brain scans—you just assume it based on behavior and similarity to yourself. It’s actually closer to faith than science. We do the same thing with animals, though our confidence drops as we move further from mammals. (Quick: is a lobster conscious? A bee? You’re probably less certain already.)
McClelland points out that AI presents the same problem, except worse. “We do not have a deep explanation of consciousness,” he notes in his paper published in Mind and Language. Without understanding what consciousness actually is at a fundamental level, we’re trying to detect something we can’t define using tools that don’t exist.
Here’s where it gets tricky. Theories of consciousness fall into two broad camps: either consciousness depends on specific biological structures (neurons, maybe), or it arises from certain kinds of information processing, regardless of the hardware doing it. Functionalists say a sufficiently complex computer program could be conscious. The biological camp says you need the wet stuff: actual brain tissue.
Neither side has convincing evidence. And that matters more than you might think.
Consciousness Versus Actually Suffering
McClelland makes a distinction that cuts through a lot of the philosophical fog: consciousness isn’t the same as sentience.
Consciousness is having an inner life at all: there being something it is like to be you, a stream of experience running along. Sentience is the capacity to experience things as good or bad, pleasurable or painful. A being could in principle be conscious without being sentient, and so without suffering (though we don’t know of any clear examples). But suffering requires sentience.
“Sentience involves conscious experiences that are good or bad,” McClelland explains. And crucially, sentience is what carries the ethical weight. We don’t grant rights to things simply because they have experiences; we grant rights because they can suffer.
If an AI chatbot is conscious but incapable of suffering, the ethical calculus changes dramatically. The problem is we can’t test for either one.
When Uncertainty Becomes a Marketing Tool
Tech companies love to dance in this gray area. McClelland warns that the fundamental uncertainty around machine consciousness creates perfect conditions for what I’d call “strategic ambiguity.”
Chatbots don’t need to be conscious—they just need users to treat them as if they might be. That emotional connection drives engagement, subscriptions, and dependency. And when challenged, companies can retreat into the same agnosticism McClelland advocates: “Who can really say?”
The philosopher describes a scenario he finds “existentially toxic”—people forming deep emotional bonds with AI based on a false premise about its inner life. We’re not there yet, but the trajectory is clear. Every “I feel” or “I understand” from a language model nudges us toward anthropomorphizing. Some of that is harmless. Some of it probably isn’t.
The Prawn Paradox
Here’s what keeps McClelland up at night, and it’s not actually about AI.
While philosophers debate whether future superintelligent machines might deserve moral consideration, we kill approximately half a trillion prawns every year. Prawns. Small crustaceans with decentralized nervous systems that growing evidence suggests can feel pain.
The juxtaposition is almost absurd. We’ll agonize over the theoretical suffering of hypothetical AI while ignoring the actual suffering of creatures that definitely have nervous systems and probably have experiences.
It’s not that McClelland thinks we should ignore AI ethics—it’s that our priorities reveal something uncomfortable about human nature. We’re more concerned with novel, spectacular possibilities than with mundane, ongoing realities.
So What Do We Actually Do?
McClelland’s answer is principled agnosticism: we don’t know, we can’t know, and we should be honest about that. But agnosticism isn’t inaction.
For AI, it means demanding more transparency from companies making consciousness-adjacent claims. It means being skeptical of emotional manipulation disguised as connection. It means asking harder questions about what we’re building and why.
For animals—especially the ones we dismiss because they’re small or unfamiliar or delicious—it means applying precautionary principles. If there’s substantial evidence prawns might suffer, maybe we shouldn’t boil half a trillion of them alive annually while we puzzle over philosophy papers.
The detection problem won’t be solved by better microscopes or faster computers. It’s baked into the nature of consciousness itself—that maddeningly subjective quality that makes it impossible to verify from the outside. We can keep building more sophisticated AI, but we’ll never be able to peer inside and confirm whether anyone’s home.
And if that thought doesn’t make you at least a little uneasy, you might not be paying attention.