Pick up any neuroscience textbook and it will tell you how the brain processes light into the perception of red. It will map the photoreceptors, the optic nerve, the visual cortex, the cascade of electrochemical signals that travel from your eye to your occipital lobe at a speed no courier could match. What it will not tell you — what it cannot tell you — is why any of that produces the experience of redness. The felt quality. The warmth of it. The way red doesn’t just register but arrives.
That gap — between the mechanism and the feeling — is what David Chalmers called the hard problem of consciousness in his 1996 book The Conscious Mind: In Search of a Fundamental Theory. And nearly three decades later, nobody has closed it.
What Chalmers Actually Said
Chalmers opens by separating consciousness into two categories. The “easy problems” — his phrase is deliberately tongue-in-cheek — are things like how the brain integrates information, how it produces behavior, how it regulates attention and sleep. Easy doesn’t mean simple. As Steven Pinker once noted, these are about as easy as going to Mars or curing cancer. But they’re tractable. You can imagine, in principle, what a solution would look like. You’d point to neurons and mechanisms and say: there, that’s how it works.
The hard problem is different in kind, not degree. As Chalmers writes, “there is nothing that we know more intimately than conscious experience, but there is nothing that is harder to explain.” The hard problem asks why any physical process — no matter how precisely described — gives rise to experience at all. Why, when your cognitive system processes sensory information, is there something it is like to be you? Why isn’t it all just information processing in the dark, with no inner light?
That phrase — “something it is like” — comes from Thomas Nagel’s 1974 paper “What Is It Like to Be a Bat?” Nagel argued that even if you knew everything about bat neuroscience, you still wouldn’t know what echolocation feels like from the inside. Chalmers takes that intuition and builds an architecture around it. He adopts the philosophers’ term “qualia” for the felt qualities of experience — the redness of red, the painfulness of pain, the particular character of the smell of coffee — and argues they cannot be captured by any functional or physical description.
The Zombie Argument
The most provocative tool in Chalmers’ kit is the philosophical zombie. Not the movie kind. A philosophical zombie is a being physically and functionally identical to you in every way — same neurons, same behavior, same information processing — but with no inner experience. No qualia. The lights are on, as far as anyone outside can tell. But inside, there’s nobody home.
Chalmers doesn’t claim zombies exist. He claims they’re conceivable — that there’s no logical contradiction in imagining them. And from that conceivability, he draws a significant conclusion: consciousness cannot be fully explained by physical or functional facts alone. If it could be, zombies would be not just physically impossible but unthinkable. The fact that we can coherently imagine them, even if only as a thought experiment, means there’s something about consciousness that the physical story leaves out.
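The argument is often summarized in a few lines of modal logic. The sketch below is a common reconstruction rather than Chalmers’ exact wording; the notation (◇ for conceivability in premise 1, metaphysical possibility thereafter; P for the conjunction of all physical truths; Q for some phenomenal truth, such as “someone is conscious”) follows the standard presentation in the secondary literature:

```latex
% P: the conjunction of all physical truths about the world
% Q: an arbitrary phenomenal truth (e.g., "someone is conscious")
\begin{align*}
&\text{1. } P \land \neg Q \text{ is conceivable}
  && \text{(a zombie world involves no contradiction)}\\
&\text{2. If } P \land \neg Q \text{ is conceivable, it is metaphysically possible}
  && \text{(conceivability entails possibility)}\\
&\text{3. If } P \land \neg Q \text{ is possible, the physical facts do not necessitate } Q
  && \text{(consciousness fails to supervene)}\\
&\text{4. } \therefore \text{ consciousness is not fully explained by physical facts}
\end{align*}
```

Most of the subsequent debate targets premise 2 — whether conceivability is really a reliable guide to metaphysical possibility — which is exactly where critics like Dennett press.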
This is where Chalmers departs from the eliminativists — thinkers like Daniel Dennett, who argues in Consciousness Explained that qualia, properly understood, don’t pose any special problem at all. Dennett’s move, roughly, is to argue that once you explain all the functional capacities of a system, you’ve explained everything there is to explain. There’s no residue. No hard problem. The “hard problem,” in Dennett’s view, results from confused intuitions about what consciousness even is.
Chalmers finds this insufficient — not wrong so much as a failure to engage. He writes that most theories of consciousness either “deny the phenomenon, explain something else, or elevate the problem to an eternal mystery.” His project is to take consciousness seriously as a phenomenon and ask: what kind of theory could actually account for it?
What He Proposes Instead
Chalmers doesn’t just dismantle. He offers a framework: what he calls naturalistic dualism. Not the old Cartesian kind — two substances, mind and body, mysteriously interacting. Rather, a view in which consciousness is a fundamental feature of the world, irreducible to physics but governed by natural laws, the way gravity and electromagnetism are features of the world irreducible to something simpler but lawful and describable.
The implications are strange. Chalmers entertains the possibility that consciousness is tied to information processing at a very general level — that wherever there’s the right kind of information structure, there may be something like experience. He calls this the double-aspect theory of information, and while he holds it tentatively, it edges toward a qualified panpsychism: the idea that experience, in some basic form, might be much more widespread than we typically assume.
It’s the part of the book that makes empirically minded readers flinch. A conscious thermostat? Critics like Dennett and Patricia Churchland pushed back hard. Sydney Shoemaker, writing in Philosophy and Phenomenological Research, acknowledged the book was “stimulating, instructive, and frequently brilliant” while doubting its conclusions. Yanina Shapiro, in Theory and Psychology, called Chalmers’ arguments “scholarly, creative, and engaging” but took issue with the dualist framework underneath.
But Chalmers isn’t claiming experience is everywhere in the same measure. He’s claiming it might be more continuous with physical organization than the strict eliminativist allows. And that claim hasn’t gotten less interesting with age — it’s gotten more urgent.
Why This Matters Now More Than It Did in 1996
I’ve been thinking about the hard problem more in the last two years than I did in the two decades before them. The reason isn’t hard to name: AI. Every week I’m reading something new about language models, about emergent capabilities, about systems that produce outputs indistinguishable in surface form from genuine understanding. The question that keeps surfacing — and that most people in the AI space are actively trying not to ask out loud — is whether any of it involves experience.
Chalmers himself has said current large language models are “most likely not conscious, though I don’t rule out the possibility entirely.” His exact phrase at a 2025 symposium was that talking to an LLM is like interacting with “a quasi-agent with quasi-beliefs and quasi-desires.” But he also said that future language models “may well be conscious” and that this is “something serious to deal with.”
The hard problem makes that question almost impossible to answer. Even if we mapped every computation, every weight, every activation pattern in a large AI model — we’d still face the same gap Chalmers described in 1996. We’d know the mechanism. We wouldn’t know if there’s anything it’s like to be that mechanism.
That matters for how we think about AI welfare. It matters for how we build these systems. I’ve written before about the way AI is reshaping how we understand machine behavior, but Chalmers’ book forces the deeper cut: it’s not just behavior we need to account for. It’s the possibility that the lights are on somewhere in there. Or that they’re not. And that we may have no principled way to tell.
The Book Itself
The Conscious Mind is long — around 395 pages — and Chalmers, to his credit, has marked the more technical sections so that non-philosophers can navigate around them. He is one of the clearest writers working at this level of abstraction. The opening chapter alone is worth the price of the book for the sheer quality of the problem framing. He doesn’t overdramatize. He doesn’t hedge. He just lays the thing out, and the thing is genuinely unsettling.
I keep coming back to a line early in the book: “The explanations always seem to fall short of the target.” He’s describing previous attempts to solve consciousness, and the sentence is almost throwaway in context. But it’s precise. That gap between explanation and target is where all of philosophy of mind lives. It’s where neuroscience hits its ceiling. It’s where AI is now parked, staring at the same wall.
I first encountered Chalmers at CW Post — sitting with Plato and then working through the empiricists, Descartes’ cogito, Nagel’s bat. Reading The Conscious Mind now, with twenty-five years of running a diner behind me and a workshop filled with tools that make real things by hand, the puzzle doesn’t feel academic at all. It feels like the most practical question there is: what are we, actually? If a neuroscientist can describe my brain firing in response to the smell of the griddle at 5 AM and still not capture what that smell is to me — what does that say about what any of us can know about each other?
Chalmers doesn’t answer that. Nobody has. But he is the clearest account I’ve found of exactly why the question won’t dissolve, no matter how much data you throw at it.
This sits well alongside A Universe of Consciousness by Gerald Edelman and Giulio Tononi, which takes a more biological approach to some of the same questions — and alongside the puzzle of the Boltzmann Brain Paradox, which asks a related question from the other direction: whether statistical physics can account for a mind appearing out of nowhere. For those interested in where philosophy and phenomenology converge, The Primacy of Perception by Maurice Merleau-Ponty is a companion worth picking up.
You Might Also Like
- What Is It Like to Be a Brain? The Philosophical War Between Reductionism and Emergence in Consciousness Research
- Five Dialogues by Plato: The Book That Teaches You How to Question Everything
- A Universe of Consciousness by Gerald Edelman and Giulio Tononi — A Review
Sources
- Chalmers, David J. The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press, 1996. https://www.amazon.com/Conscious-Mind-Search-Fundamental-Theory/dp/0195117891
- Chalmers, David J. “Facing Up to the Problem of Consciousness.” Journal of Consciousness Studies, 1995. https://consc.net/papers/facing.pdf
- Nagel, Thomas. “What Is It Like to Be a Bat?” Philosophical Review, 1974.
- “Hard Problem of Consciousness.” Internet Encyclopedia of Philosophy. https://iep.utm.edu/hard-problem-of-conciousness/
- “Hard Problem of Consciousness.” Wikipedia. https://en.wikipedia.org/wiki/Hard_problem_of_consciousness
- “Can AI Be Conscious?” Tufts Now, October 2025. https://now.tufts.edu/2025/10/21/can-ai-be-conscious
- Shoemaker, Sydney. Review of The Conscious Mind. Philosophy and Phenomenological Research 59:439–44, 1999. https://consc.net/books/tcm/reviews.html
- “David Chalmers.” Wikipedia. https://en.wikipedia.org/wiki/David_Chalmers