Friday, September 05, 2025

Are Weird Aliens Conscious? Three Arguments (Two of Which Fail)

Most scientists and philosophers of mind accept some version of what I'll call "substrate flexibility" (alternatively "substrate independence" or "multiple realizability") about mental states, including consciousness. Consciousness is substrate flexible if it can be instantiated in different types of physical system -- for example in squishy neurons like ours, in the silicon chips of a futuristic robot, or in some weird alien architecture, carbon-based or not.

Imagine we encounter a radically different alien species -- one with a silicon-based biology, perhaps. From the outside, they seem as behaviorally sophisticated as we are. They build cities, fly spaceships, congregate for performances, send messages to us in English. Intuitively, most of us would be inclined to say that yes, such aliens are conscious. They have experiences. There is "something it's like" to be them.

But can we argue for this intuition? What if carbon is special? What if silicon just doesn't have the je ne sais quoi for consciousness?

This kind of doubt isn't far-fetched. Some people are skeptical of the possibility of robot consciousness on roughly these grounds, and some responses to the classic "problem of other minds" rely on our biological as well as behavioral similarity to other humans.

If we had a well-justified universal theory of consciousness -- one that applies equally to aliens and humans -- we could simply apply it. But as I've argued elsewhere, we don't have such a theory and we likely won't anytime soon.

Toward the conclusion that behaviorally sophisticated aliens would be conscious regardless of substrate, I see three main arguments, two of which fail.

Argument 1: Behavioral Sophistication Is Best Explained by Consciousness

The thought is simple. These aliens are, by hypothesis, behaviorally sophisticated. And the best explanation for sophisticated behavior is that they have inner conscious lives.

There are two main problems with this argument.

First, unconscious sophistication. In humans, much sophisticated behavior unfolds without consciousness. Bipedal walking requires delicate, continuous balancing, quickly coordinating a variety of inputs, movements, risks, and aims -- mostly nonconsciously. Expert chess players make rapid judgments they can't articulate, and computers beat those same experts without any consciousness at all.

Second, question-begging. This argument simply assumes what the skeptic denies: that the best explanation for alien behavior is consciousness. But unless we have a well-justified, universally applicable account of the difference between conscious and unconscious processing -- which we don't -- the skeptic should remain unmoved.

Argument 2: The Functional Equivalent of a Human Could Be Made from a Different Substrate

This argument has two steps:

(1.) A functional equivalent of you could be made from a different substrate.

(2.) Such a functional equivalent would be conscious.

One version is David Chalmers' gradual replacement or "fading qualia" argument. Imagine swapping your neurons, one by one, with silicon chips that are perfect functional equivalents. If this process is possible, Premise 1 is true.

In defense of Premise 2, Chalmers appeals to introspection: During the replacement, you would notice no change. After all, if you did notice a change, that would presumably have downstream effects on your psychology and/or behavior, so functional equivalence would be lost. But if consciousness were fading away, you should notice it. Since you wouldn't, the silicon duplicate must be conscious.

Both premises face trouble.

Contra Premise 1, as Rosa Cao, Ned Block, Peter Godfrey-Smith and others have argued, it is probably not possible to make a strict functional duplicate out of silicon. Neural processing is subserved by a wide variety of low-level mechanisms -- for example nitric oxide diffusion -- that probably can't be replicated without replicating the low-level chemistry itself.

Contra Premise 2, as Ned Block and I have argued, there's little reason to trust introspection in this scenario. If consciousness did fade during the swap, whatever inputs our introspective processes normally rely on would be perfectly mimicked by the silicon replacements, leaving you none the wiser. This is exactly the sort of case where introspection should fail.

[DON'T PANIC! It's just a weird alien (image source)]


Argument 3: The Copernican Argument for Alien Consciousness

This is the argument I favor, developed in a series of blog posts and a paper with Jeremy Pober. According to what Jeremy and I call The Copernican Principle of Consciousness, among behaviorally sophisticated entities, we are not specially privileged with respect to consciousness.

This basic thought is, we hope, plausible on its face. Imagine a universe with at least a thousand different behaviorally sophisticated species, widely distributed in time and space. Like us, they engage in complex, nested, long-term planning. Like us, they communicate using sophisticated grammatical language with massive expressive power. Like us, they cooperate in complex, multi-year social projects, requiring the intricate coordination of many individuals. While in principle it's conceivable that only we are conscious and all these other species are merely nonconscious zombies, that would make us suspiciously special, in much the same way it would be suspiciously special if we happened to occupy the exact center of the universe.

Copernican arguments rely on a principle of mediocrity. Absent evidence to the contrary, we should assume we don't occupy a special position. If we alone were conscious, or nearly alone, we would occupy a special position. We'd be at the center of the consciousness-is-here map, so to speak. But there's no reason to think we are lucky in that way.

Imagine a third-party species with a consciousness detector, sampling behaviorally sophisticated species. If they find that most or all such species are conscious, they won't be surprised when they find that humans, too, are conscious. But if species after species failed, and then suddenly humans passed, they would have to say, "Whoa, something extraordinary is going on with these humans!" It's that kind of extraordinariness that Copernican mediocrity tells us not to expect.
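To put a rough number on that extraordinariness -- a toy back-of-the-envelope illustration, not anything load-bearing in the argument itself -- suppose the detector has sampled all thousand species and exactly one registers as conscious. If, absent special evidence, each species is an equally good candidate to be that lone exception, then

$$P(\text{the lone conscious species happens to be ours}) = \frac{1}{1000} = 0.1\%.$$

Believing that we are that exception, with no independent reason to think neurons are special, means betting on a one-in-a-thousand coincidence. That's the suspicious specialness the mediocrity principle tells us not to expect.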

Why do we generally think that behaviorally sophisticated weird aliens would be conscious? I don't think the core intuition is that you need consciousness to explain sophistication or that the aliens could be functionally exactly like us. Rather, the core intuition is that there's no reason to think neurons are special compared to any other substrate that can support sophisticated patterns of behavior.