Friday, December 14, 2018

Three Arguments for Alien Consciousness

Someday we might meet spacefaring aliens who engage us in (what seems to be) conversation. Some philosophers -- for example Susan Schneider and (if I may generalize his claims about superficially isomorphic robots to superficially isomorphic aliens) Ned Block -- have argued that such aliens might not really have conscious experiences. In contrast, I hold what I believe to be the majority view that aliens who outwardly behaved similarly to us would very likely have conscious experiences. In the phrase that Thomas Nagel made famous, there would be "something it is like" to be an alien.

I offer three arguments in defense of this general conclusion. I stipulate that the aliens I'm considering have arisen through a long evolutionary process, that they are capable of sophisticated cooperative technological behavior, and that they interact with us in ways that it is natural for non-philosophers to interpret as having comprehensible linguistic content.

(I set aside some more specific criticisms of Schneider's argument.)

The Linguistic Argument.

An alien descends from its spaceship. Upon meeting the local population, it raises an appendage and begins touching things. It touches its leg and says "bzzbl". It touches its elbow and says "tikpt". It touches a tree and says "illillin". And so forth. Furthermore, it does so in reliable and repeatable ways, so that this behavior is naturally interpreted as linguistic labeling. When a human touches a tree and says "illillin", the alien says "hi". When a human touches a tree and says "bzzbl", the alien says "pu". The pattern of hi/pu responses is naturally interpreted as affirmation and negation. From this starting point, humans seem to learn the alien language and the alien seems to learn the local human language. Learning proceeds smoothly, except for a few understandable hiccups, so that after several months, the aliens and the local humans are cooperating in complex activities with apparently complex linguistic understanding. For example, the alien emits sounds like this: "After I enjoy eating the oak tree down the road by David's house, I plan to take a half-hour nap at the bottom of Blackberry Pond. Could we meet to talk about Martian volcanology after I've finished my nap?" All proceeds as expected. The alien eats the tree, takes the nap, and afterwards engages in what appears to be a discussion of Martian volcanology. If this were not approximately how things went, the alien would not be the right kind of outwardly similar entity that I have in mind.

To be outwardly similar -- and also just for good architectural reasons in a risky world -- the alien will also presumably be able to discuss its interior states and perceptual states. When it is running low on nutrition and needs to eat, it will say something like "I'm getting hungry". When it can't visually detect a distant object that a human interlocutor is pointing out, it will say something like, "Sorry, I can't see that. Oh, wait, now I can!" When it fears for the safety of its mate who has just wandered onto the highway, it will say something like, "I'm worried that she might be struck by a car" or "I hope she gets across the road okay!" Now suppose a human interlocutor says something like, "Do you really have conscious experiences? I mean, is there something it's like for you to experience red and to feel pain? Do you have imagination and understanding and emotional feelings?" If the alien is generally similar to us in its linguistic behavior, and if the question is phrased clearly enough, it will say yes. I think this is plausible given the rest of the setup, but we can also stipulate if necessary that if such an alien said no it wouldn't be a superficial isomorph outwardly similar to us in the intended respect.

Although I'm not sure how Schneider and Block would react to this particular case, my interlocutor is someone who thinks it still remains a live possibility that the alien really has no conscious experiences underneath it all, because it has the wrong type of internal structure. (Maybe it's made of silicon inside, or hydraulics, or maybe it engages in fast serial cognitive processing rather than parallel processing.) The thought behind the linguistic argument (which I leave undeveloped for now) is that it would be unnatural, awkward, inelegant, and scientifically dubious to interpret the alien's speech as genuinely referring to trees while failing to refer to genuine conscious mental states that it possesses.

The Grounds of Consciousness Argument.

What theoretical reasons do we have for thinking that creatures other than us have conscious experiences? I'm inclined to think we rely on two main grounds: (a.) sophisticated outward behavior similar to outward behavior that we associate with consciousness in our own case, and (b.) structural similarity between the target creature and us, with respect to the types of structures we associate with consciousness in our own case. By stipulation, (a) favors the alien. So the question is whether divergence in (b) alone would be good enough grounds to seriously doubt the existence of conscious experience despite seemingly-introspective reports about consciousness.

If the structural situation is bad enough, that can ground plausible denial. A remote-controlled puppet with a speaker in its mouth might exhibit sophisticated outward behavior, but we would not want to attribute consciousness to the puppet. (We might want to attribute consciousness to the puppet-manipulator system or at least to the manipulator.) Similarly, we might reasonably doubt the consciousness of an entity programmed specifically to act as though it is conscious, even if that entity passes the Turing test or similar, because there is possibly something suspicious about having such a programming history: Maybe the best explanation of the entity's seeming-consciousness is not that it is conscious but only that it has been programmed to act as though it is. (I'm not saying I agree with that position, only that it is a reasonable position.)

To avoid these doubts about the structural story, I have stipulated that the aliens have a long evolutionary history. The question then becomes whether a naturally-evolved cognitive structure underlying such sophisticated and apparently linguistic behavior might not be sufficient for consciousness, if it is different enough from our own -- such as maybe a fast serial cognitive process rather than the massively parallel (but slowish) structure of neurons, or relying on a material substrate different from carbon. My intended sense of "might" here is not the thin metaphysical sense of "might" in which we might allow for philosophical zombies, but rather scientific plausibility.

A good approach to this question, I think, is to consider what it is about neurons aligned in massively parallel structures that explains why such neurons give rise to consciousness in our case. There must be something functionally awesome about neurons; it's not likely to be mysterious carbon-magic, independent of what neurons can do for us cognitively. So what could be that functionally awesome thing? The most plausible answers are the kinds of answers we see in broadly functionalist approaches to consciousness -- things like the ability to integrate information so as to respond in various sophisticated ways to the environment, including retaining information over time, monitoring one's own cognitive condition, complex long-term strategic planning, being capable of creative solutions to novel predicaments, etc. If serial processing or silicon processing can do all of the right kind of cognitive work, it's hard for me to see good theoretical reason to think that something necessary may be missing due merely to, say, the different number of protons in silicon or the implementational details of transfer relations between different cognitive subprocesses. It's a possible skepticism, but it's a skepticism without warranted grounds for doubt.

The Copernican Cosmological Argument.

According to the Copernican Principle in scientific cosmology, we are unlikely to be in a privileged position in the universe, such as its exact center. It's more reasonable, according to the principle, to think that we are in a mediocre, mid-rent location among all of the locations that possibly support observers capable of reflection about cosmological principles. (However, the Anthropic Principle allows that we shouldn't be surprised to be in a location that supports cosmological observers, even if such locations are uncommon. For purposes of this post I am assuming that being an "observer" requires cognitive sophistication but does not require phenomenal consciousness if the two are separable. We can argue about that in the comments, if you like!)

The Copernican Principle can, I think, plausibly be applied to consciousness as follows. Stipulate, as seems plausible, that complex coordinated functional responsiveness to one's environment, comparable to the sophistication of human responsiveness, can evolve in myriad different ways that diverge in their internal structural basis (e.g., carbon vs non-carbon, or if carbon is essential because of its lovely capacity to form long organic molecules, highly divergent carbon-based systems). If only one or a few of these myriad ways gave rise to actual conscious experience, then we would be especially lucky to be among the minority of complex evolved seemingly-linguistic entities who are privileged with genuine conscious experience. There would be systems all across the universe who equally build cities, travel into space, write novels about their interactions with each other, and monitor and report their internal states in ways approximately as sophisticated as ours -- and among them, only a fraction would happen to possess conscious experience, while the other unfortunates are merely blank inside.

It is much more plausible, on Copernican grounds, to think that we are not in this way especially privileged entities, lucky to be among the minority of evolved intelligences who happen also to have conscious experiences.



Howie said...

Would these aliens have love? Love, it could be said, is as defining of our species as consciousness. Would they have other animals and plants on their planet? In what conceivable ways could they be radically different from us, given all they have in common? Would they be as different from us as Chinese from Frenchmen, or humans from hobbits or orcs?

Eric Schwitzgebel said...

Let's stipulate at least the outward appearance of love, for the sake of the argument. I am imagining complex social entities, so it's plausible that they would have important personal relationships.

Jim Cross said...

If they are spacefaring, they must be technological. So we could probably argue backwards from that.

1- They would have ability to transmit knowledge from generation to generation - language.
2- They would be social.
3- They would have appendages to manipulate the world. How else did they learn to make things?

However, a more interesting question would be whether they themselves would explore interstellar space. Given the time frames for travel and communication distances, if they were inclined to travel at all, wouldn't they send robots? And might we have difficulty distinguishing between their robots and conscious organisms?

SelfAwarePatterns said...

Along the lines of your earlier linked post, I think any alien intelligence we'd likely encounter would be an engineered intelligence. If so, it might have much higher sensory resolutions and much deeper associative networks, allowing it to perceive more of the world and extract more meaning from those perceptions. It might also have far more insight into its own workings than we do into our own minds. In short, its metacognitive self-awareness might be much more thorough and accurate than our own limited and idiosyncratic capabilities.

In other words, by its standards, *we* might not be conscious.

Steven Shaviro said...

Have you read Peter Watts' novel BLINDSIGHT? It's a science fiction novel that speculates about aliens who are more cognitively powerful/advanced than us, and able among other things to generate apparently meaningful linguistic content, but who turn out not to be conscious. Their linguistic ability is explained as a kind of Searle "Chinese room" mechanism.

Eric Schwitzgebel said...

Jim and SelfAware: I agree! I wanted to avoid complications by stipulating a long evolutionary history, and I think that Block's and probably Schneider's concerns can be extended to creatures with a long evolutionary history (some of which might have started out as programmed AI).

Steven: Yes, that's an interesting idea! I felt that Watts kind of stipulated it, rather than really engaging in the philosophical pro and con -- but of course that's an SF writer's prerogative!

Angra Mainyu said...


I agree they'd be conscious. But we can turn the question on its head a bit:
Do the aliens believe that we are conscious? Should they?
Do they say we should believe they're conscious?
If they are clearly better than we are at science, engineering, programming, logic, etc., their answer may have some epistemic weight at least - at least, if we have good reason to believe they would not lie, but why would they?

Jim Cross said...

I guess I am having a problem grasping the point of your "aliens who outwardly behaved similarly to us would very likely have conscious experiences."

First, what is an alien?

Does it include non-biological entities created by biological ones?

If it does, I would say they might behave similarly to us and not have conscious experiences, although as I said we might have a difficult time telling whether they do or not since they might pass the tests you've given.

If it is only biological entities (that behave similarly to us), then I would agree with your point but what about biological entities that do not behave similarly to us?

I would agree they too would likely have conscious experiences, but the nature of the experience might be significantly different from ours and we might not think they are conscious.

Then, you have the biological/non-biological hybrids - the Borg - which I think might be a likely future for advanced biological entities that decide to explore interstellar space. Their consciousness might be collective in nature and, until we understood that, we might think sometimes they are conscious and sometimes not.

Eric Schwitzgebel said...

Angra: Yes, that's an interesting idea. Keith Frankish has a nice short story that vividly makes that point. There's an Asimov story about it too. :-)

Jim: Those are harder cases, and I wanted to focus on the best-case scenario, since Schneider's and Block's skepticism might apply even to best-case scenarios. I've stipulated a long evolutionary process. That's compatible with being "technological" in the sense of being originally computational devices manufactured by biological organisms, if those computational devices have subsequently gone through a long evolutionary history. If they are recently-designed technology or biological but very different, then I see more room for skepticism, depending on one's general theory of consciousness.

howard b said...

So conscious beings have something it is like to be them, according to Nagel -- but how much common ground does that give? Take what it is to be alive or to exist -- there is a lot of diversity in that set. Even though there is something it is to be human, there is quite a lot of variance involved in that set too.
This theory is new to me, so I'm sorry if my question is a little far out (even for this blog!)

Eric Schwitzgebel said...

That seems right to me, Howard — not far out. How much variance, and how correlated with behavioral differences, is an open question.

Stephen Wysong said...

Eric, this post seems related to your “Garden Snails” post of September, except that now we’re considering the gonging of spacefaring aliens. As I wrote then (and believe it also applies in this case):

“… we lack the ability to formally and conclusively prove consciousness in others. I believe the best we can do is infer consciousness in another organism, with the strength of the inference dependent on the degree of neurobiological similarity to oneself, the only thoroughly convincing case of a conscious being.”

You’ve specified spacefaring aliens, but it’s interesting to consider the case of ourselves as the spacefaring species. It seems possible that our consideration of the consciousness of biological spacefaring entities arriving here might be biased because of their obvious technological accomplishments. Should we be the spacefaring species, however, we might find ourselves contemplating the consciousness of floating hexagonal mats populating a Super Earth water world, a decidedly different conundrum, but one that’s free from our pervasive but largely unrecognized “anthro-bias,” which seems to be lurking behind both the Linguistic Argument and the behavioral component of the Grounds of Consciousness Argument.

Stephen Wysong said...

Continued ...

A few questions/observations:

1. Isn’t “… the majority view that aliens who outwardly behaved similarly to us would very likely have conscious experiences” a behavioral conclusion? That “view” is further on proposed in "The Grounds of Consciousness Argument" as a reason for inferring alien consciousness based on behavior ... is it both argument and conclusion?

2. Both the linguistic and the behavioral arguments imply that a completely convincing human-like linguistic and behavioral simulation cannot be constructed. On what grounds is that a reasonable belief? And, if it’s not reasonable, don’t both arguments fail?

3. The Grounds of Consciousness argument applies only to biological creatures, not constructed/programmed entities, where a long evolutionary history isn't applicable in any case. Bad news for the consciousness of Commander Data, even though it demonstrates "sophisticated outward behavior" ... but I agree. Consciousness is strictly a biological phenomenon.

4. You suggest that we “... consider what it is about neurons aligned in massively parallel structures that explains why such neurons give rise to consciousness in our case.” Being the cortical consciousness skeptic that I am, I’m compelled to point out (once again) that there is no evidence whatsoever to support the hypothesis (perhaps myth) that the cortical tissue arrangement you describe produces consciousness. While compelling evidence exists that implicates cortical functionality in the resolution of the contents of consciousness, no evidence of any kind exists that supports the belief that the cortex creates the streaming experience of conscious images. For all we know at this point, consciousness might be a production of brain cells that are not neurons.

5. In The Copernican Cosmological Argument, you “Stipulate, as seems plausible, that complex coordinated functional responsiveness to one's environment, comparable to the sophistication of human responsiveness, can evolve in myriad different ways ...”. You continue: “If only one or a few of these myriad ways gave rise to actual conscious experience ...”. Are you suggesting that the evolution of sophisticated environmental responsiveness (behavior) sometimes gives rise to consciousness? I find the reverse much more likely—that the evolution of consciousness significantly enhances environmental responsiveness.

Because pain is a fundamental conscious feeling, I propose that when the four-foot-tall snail-like spacefaring alien oozes down its spacecraft's exit ramp we simply punch it in what we believe is its face and observe the reaction as our test for gongability. Of course, we would be risking planet-wide death and destruction in response, but the opportunity to improve our understanding of alien consciousness is irresistible … I believe it's the test for consciousness favored by Lrrr, the renowned Philosopher of Consciousness from the planet Omicron Persei 8.

Eric Schwitzgebel said...

Thanks for the thoughtful comment (as usual), Stephen!

I agree with the ideas expressed in your first comment. On your second comment:

1. I'm not sure what the concern is here. The behavior and interior structures are meant to be the evidence, the existence of consciousness the conclusion.

2. By saying that my arguments imply "that a completely convincing human-like linguistic and behavioral simulation cannot be constructed," do you mean that such a simulation could not be constructed *without consciousness*? If so, my thought is: (1.) Perhaps it can be *constructed* artificially, which is part of why I've stipulated entities with a long evolutionary history. (2.) The standard is meant to be, just to clarify, not "can" in a thin sense, but what it would be scientifically plausible to conclude.

3. The argument is meant to be neutral on androids, rather than implying that they lack consciousness.

4. I didn't say "cortical". True, I did say neurons. It seems plausible that they are central to the story, but if other sorts of structures are crucial as well, I have no objection to including them and the argument can survive generalizing to neurons+X.

5. The causal story could go either way, in my view. The argument is meant to be neutral on that; the central issue for the argument is correlation, yes?

Stephen Wysong said...

Eric, regarding your feedback:

1. Schneider and Block are represented as arguing that spacefaring aliens who engage us in conversation (“such aliens”) might not have conscious experiences. You wrote that you believe that “aliens who outwardly behaved similarly to us would very likely have conscious experiences,” which you proceed to defend as the “general conclusion” … or perhaps the general conclusion might be that spacefaring aliens are conscious. Perhaps it’s not clear what “this general conclusion” in your second paragraph is referring to.

2&3. My point is that a convincing linguistic and behavioral simulation lacking consciousness could conceivably be created, like Bishop in the movie Aliens. As such, although consciousness might be weakly inferred, I don’t believe we can conclude consciousness from linguistic and other behaviors.

Also, isn’t proposing a long evolutionary history just another way to say the aliens must be organic/biological/alive? In your reply to Jim, you mention “computational devices [that] have subsequently gone through a long evolutionary history,” but I can’t imagine constructed, computational, non-biological aliens deciding to commit their further development and destiny to an evolutionary process, which is a sloppy, wasteful and improbable path to improvement. Earthly evolution over enormously long timescales has obviously resulted in organic wonders, consciousness among them, but reasoned design is vastly more productive and efficient.

4. I guiltily agree that you didn't say cortical, and I've surely exposed my bias to see "neurons aligned in massively parallel structures" as a cortical arrangement. We can see a fundamental parallel processing in all neural connectivity because multiple connected downstream neurons operate in parallel on a signal received from a single originating neuron. Perhaps my image of cortical tissue, i.e., a large collection of neurons processing multiple inputs in parallel, underlies my bias, but, if a defense is possible, phrases like "massively parallel processing" frequently appear in discussions of cortical processing and not other brain structures, like the thalamus for instance, whose functionality is usually described as a relay rather than a computation involving massively parallel structures. Mea culpa.

5. Can the causal story (environmental responsiveness⇄consciousness) really go either way? We observe that a normally conscious living animal bereft of consciousness is in a dreamless sleep state or anesthetized or comatose and is not responsive to its environment until consciousness returns. In short, an animal that becomes permanently non-conscious does not thrive. It’s very difficult for me to envision non-conscious thriving creatures evolving consciousness, since consciousness would confer no additional survival benefit and comes at the enormous cost of complete non-responsiveness and death should it fail to function.

... and Merry Happy Christmas Holidays!

Stephen Wysong said...

Eric, since outer space aliens are a staple of science fiction, I’ve been trying to recall if any scifi stories/novels directly consider the issue of alien consciousness. And I’m coming up empty …

Aside from the old My Favorite Martian kinds, aliens are often enigmatic, but consciousness seems to be assumed for biological aliens, particularly technological spacefaring ones. With the robotic, AI-type aliens it doesn’t appear to be a consideration either. Are you aware of alien consciousness being plot material, or even a consideration, in scifi?

Eric Schwitzgebel said...

Thanks for the continuing comments, Stephen!

On SF: I read most SF as assuming that the aliens are conscious. Two exceptions are Lem in His Master's Voice and Solaris and Watts in Blindsight (see comments above).

On 1: I meant spacefaring aliens who behave similarly to us and have a long evolutionary history and no structural defeaters (like being puppets or programmed).

On 2&3: I'm granting that for the sake of this argument. Yes, that's why I stipulated the evolutionary history. I'm inclined to think that initially programmed entities would continue to evolve as long as there are different reproduction rates, heritable traits, and the right sorts of variability generation to generation. This evolution need not involve further direct programming, though if it did that might count as a structural defeater.

On 5: I think my argument can remain neutral on that point.

Jim Cross said...

There is the famous "They're Made Out of Meat," which describes an encounter with "aliens".

Jim Cross said...

Interstellar Probe: 3DX8887

Target: Third planet of Alpha-3325

Sentience criteria:
1- Does species destroy its environment without rational planning?
2- Does species kill other species for food?
3- Does species kill other members of its own species?

Results: Proto-sentience found in some large ocean mammals

Conclusion: No sentience found. Moving to next system.