If someday space aliens visit Earth, I will almost certainly think that they are conscious, if they behave anything like us. If they have spaceships, animal-like body plans, and engage in activities that invite interpretation as cooperative, linguistic, self-protective, and planful, then there will be little good reason to doubt that they also have sensory experiences, sentience, self-awareness, and a conscious understanding of the world around them, even if we know virtually nothing about the internal mechanisms that produce their outward behavior.
One consideration in support of this view is what I've called the Copernican Principle of Consciousness. According to the Copernican Principle in cosmology, we should assume that we are not in any particularly special or privileged region of the universe, such as its exact center. Barring good reason to think otherwise, we should assume we are in an ordinary, unremarkable place. Now consider all of the sophisticated organisms that are likely to have evolved somewhere in the cosmos, capable of what outwardly looks like sophisticated cooperation, communication, and long-term planning. It would be remarkably un-Copernican if we were the only entities of this sort that happened also to be conscious, while all the others are mere "zombies". It would make us remarkable, lucky, special -- in the bright center of the cosmos, as far as consciousness is concerned. It's more modestly Copernican to assume instead that sophisticated, communicative, naturally evolved organisms universe-wide are all, or mostly, conscious, even if they achieve their consciousness via very different mechanisms. (For a contrasting view, see Ned Block's "Harder Problem" paper.)
(Two worries about the Copernican argument I won't address here: First, what if only 15% of such organisms are conscious? Then we wouldn't be too special. Second, what if consciousness isn't special enough to create a Copernican problem? If we choose something specific and unremarkable, such as having this exact string of 85 alphanumeric characters, it wouldn't be surprising if Earth were the only location in which it happened to occur.)
But robots are different from naturally evolved space aliens. After all, they are -- or at least might be -- designed to act as if they are conscious, or designed to act in ways that resemble the ways in which conscious organisms act. And that design feature, rather than their actual consciousness, might explain their conscious-like behavior.
[Dall-E image: Robot meets space alien]

Consider a puppet. From the outside, it might look like a conscious, communicating organism, but really it's a bit of cloth that is being manipulated to resemble a conscious organism. The same holds for a wind-up doll programmed in advance to act in a certain way. For the puppet or wind-up doll we have an explanation of its behavior that doesn't appeal to consciousness or to biological mechanisms we have reason to think would co-occur with consciousness. The explanation is that it was designed to mimic consciousness. And that is a better explanation than one that appeals to its actual consciousness.
In a robot, things might not be quite so straightforward. However, the mimicry explanation will often at least be a live explanation. Consider large language models, like ChatGPT, which have been so much in the news recently. Why do they emit such eerily humanlike verbal outputs? Not, presumably, because they actually have experiences of the sort we would assume that humans have when they say such things. Rather, because language models are designed specifically to imitate the verbal behavior of humans.
Faced with a futuristic robot that behaves similarly to a human in a wider variety of ways, we will face the same question. Is its humanlike behavior the product of conscious processes, or is it instead basically a super-complicated wind-up doll designed to mimic conscious behavior? There are two possible explanations of the robot's pattern of behavior: that it really is conscious, and that it is merely designed to mimic consciousness. If we aren't in a good position to choose between these explanations, it's reasonable to doubt the robot's consciousness. In contrast, for a naturally evolved space alien, the design explanation isn't available, so the attribution of consciousness is better justified.
I've been assuming that the space aliens are naturally evolved rather than intelligently designed. But it's possible that a space alien visiting Earth would be a designed entity rather than an evolved one. If we knew or suspected this, then the same question would arise for alien consciousness as for robot consciousness.
I've also been assuming that natural evolution doesn't "design entities to mimic consciousness" in the relevant sense. I've been assuming that if natural evolution gives rise to intelligent or intelligent-seeming behavior, it does so by or while creating consciousness rather than by giving rise to an imitation or outward show of consciousness. This is a subtle point, but one thought here is that imitation involves conformity to a model, and evolution doesn't seem to do this for consciousness (though maybe it does so for, say, butterfly eyespots that imitate the look of a predator's eyes).
What types of robot design would justify suspicion that the apparent conscious behavior is outward show, and what types of design would alleviate that suspicion? For now, I'll just point to a couple of extremes. At one extreme is a model that has been reinforced by humans specifically for giving outputs that humans judge to be humanlike. In such a case, the puppet/doll explanation is attractive. Why is it smiling and saying "Hi, how are you, buddy?" Because it has been shaped to imitate human behavior -- not necessarily because it is conscious and actually wondering how you are. At the other extreme, perhaps, are AI systems that evolve in accelerated ways in artificial environments, eventually becoming intelligent not through human intervention but rather through undirected selection processes that favor increasingly sophisticated behavior, environmental representation, and self-representation -- essentially natural selection within a virtual world.
-----------------------------------------------------
Thanks to Jeremy Pober for discussion on a long walk yesterday through Antwerp. And apologies to all for my delays in replying to the previous posts and probably to this one. I am distracted with travel.
Relatedly, see David Udell's and my critique of Susan Schneider's tests for AI consciousness, which relies on a similar two-explanation argument.