Tuesday, July 01, 2025

Three Epistemic Problems for Any Universal Theory of Consciousness

By a universal theory of consciousness, I mean a theory that would apply not just to humans but to all non-human animals, all possible AI systems, and all possible forms of alien life. It would be lovely to have such a theory! But we're not at all close.

This is true sociologically: In a recent review article, Anil Seth and Tim Bayne list 22 major contenders for theories of consciousness.

It is also true epistemically. Three broad epistemic problems ensure that a wide range of alternatives will remain live for the foreseeable future.

First problem: Reliance on Introspection

We know that we are conscious through, presumably, some introspective process -- through turning our attention inward, so to speak, and noticing our experiences of pain, emotion, inner speech, visual imagery, auditory sensation, and so on. (What is introspection? See my Stanford Encyclopedia of Philosophy entry on introspection and my own pluralist account.)

Our reliance on introspection presents three methodological challenges for grounding a universal theory of consciousness:

(A.) Although introspection can reliably reveal whether we are currently experiencing an intense headache or a bright red shape near the center of our visual field, it's much less reliable about whether there's a constant welter of unattended experience or whether every experience comes with a subtle sense of oneself as an experiencing subject. The correct theory of consciousness depends in part on the answer to such introspectively tricky questions. Arguably, these questions need to be settled introspectively first, then a theory of consciousness constructed accordingly.

(B.) To the extent we do rely on introspection to ground theories of consciousness, we risk illegitimately presupposing the falsity of theories that hold that some conscious experiences are not introspectable. Global Workspace and Higher-Order theories of consciousness tend to suggest that conscious experiences will normally be available for introspective reporting. But that's less clear on Local Recurrence theories, for example, and Integrated Information Theory suggests that much experience arises from simple, non-introspectable, informational integration.

(C.) The population of introspectors might be much narrower than the population of entities who are conscious, and the former might be unrepresentative of the latter. Suppose that ordinary adult human introspectors eventually achieve consensus about the features and elicitors of consciousness in them. While some theories could thereby be rejected for failing to account for ordinary adult human consciousness, we're not thereby justified in universalizing any surviving theory -- at least not without substantial further argument. That experience plays out a certain way for us doesn't imply that it plays out similarly for all conscious entities.

Might one attempt a theory of consciousness not grounded in introspection? Well, one could pretend. But in practice, introspective judgments always guide our thinking. Otherwise, why not claim that we never have visual experiences or that we constantly experience our blood pressure? To paraphrase William James: In theorizing about human consciousness, we rely on introspection first, last, and always. This centers the typical adult human and renders our grounds dubious where introspection is dubious.

Second problem: Causal Confounds

We humans are built in a particular way. We can't dismantle ourselves and systematically tweak one variable at a time to see what causes what. Instead, related things tend to hang together. Consider Global Workspace and Higher-Order theories again: Processes in the Global Workspace might almost always be targeted by higher-order representations and vice versa. The theories might then be difficult to empirically distinguish, especially if each theory has the tools and flexibility to explain away putative counterexamples.

If consciousness arises at a specific stage of processing, it might be difficult to rigorously separate that particular stage from its immediate precursors and consequences. If it instead emerges from a confluence of processes smeared across the brain and body over time, then causally separating essential from incidental features becomes even more difficult.

Third problem: The Narrow Evidence Base

Suppose -- very optimistically! -- that we figure out the mechanisms of consciousness in humans. Extrapolating to non-human cases will still present an intimidating array of epistemic difficulties.

For example, suppose we learn that in us, consciousness occurs when representations are available in the Global Workspace, as subserved by such-and-such neural processes. That still leaves open how, or whether, this generalizes to non-human cases. Humans have workspaces of a certain size, with a certain functionality. Might that be essential? Or would literally any shared workspace suffice, including the most minimal shared workspace we can construct in an ordinary computer? Human workspaces are embodied in a living animal with a metabolism, animal drives, and an evolutionary history. If these features are necessary for consciousness, then conclusions about biological consciousness would not carry over to AI systems.

In general, if we discover that in humans Feature X is necessary and sufficient for consciousness, humans will also have Features A, B, C, and D and lack Features E, F, G, and H. Thus, what we will really have discovered is that in entities with A, B, C, and D and not E, F, G, or H, Feature X is necessary and sufficient for consciousness. But what about entities without Feature B? Or entities with Feature E? In them, might X alone be insufficient? Or might X-prime be necessary instead?
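The logical shape of this worry can be put schematically (a gloss of my own, not notation from the original argument -- the predicate letters simply mirror the feature labels above):

```latex
% What human experiments could at best establish: among entities
% sharing our feature profile, X coincides with consciousness.
\forall e \,\bigl[\, A(e) \wedge B(e) \wedge C(e) \wedge D(e)
  \wedge \neg E(e) \wedge \neg F(e) \wedge \neg G(e) \wedge \neg H(e)
  \;\rightarrow\; \bigl( X(e) \leftrightarrow \mathit{Conscious}(e) \bigr) \,\bigr]

% What a universal theory requires:
\forall e \,\bigl( X(e) \leftrightarrow \mathit{Conscious}(e) \bigr)
```

The second claim does not follow from the first: the first is simply silent about entities lacking Feature B or possessing Feature E, which is exactly the gap the questions above exploit.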


The obstacles are formidable. If they can be overcome, that will be a very long-term project. I predict that new theories of consciousness will be added faster than old theories can be rejected, and we will discover over time that we were even further away from resolving these questions in 2025 than we thought we were.

[a portion of a table listing theories of consciousness, from Seth and Bayne 2022]