Saturday, May 27, 2023

A Reason to Be More Skeptical of Robot Consciousness Than Alien Consciousness

If someday space aliens visit Earth, I will almost certainly think that they are conscious, if they behave anything like us.  If they have spaceships and animal-like body plans, and if they engage in activities that invite interpretation as cooperative, linguistic, self-protective, and planful, then there will be little good reason to doubt that they also have sensory experiences, sentience, self-awareness, and a conscious understanding of the world around them, even if we know virtually nothing about the internal mechanisms that produce their outward behavior.

One consideration in support of this view is what I've called the Copernican Principle of Consciousness. According to the Copernican Principle in cosmology, we should assume that we are not in any particularly special or privileged region of the universe, such as its exact center.  Barring good reason to think otherwise, we should assume we are in an ordinary, unremarkable place.  Now consider all of the sophisticated organisms that are likely to have evolved somewhere in the cosmos, capable of what outwardly looks like sophisticated cooperation, communication, and long-term planning.  It would be remarkably un-Copernican if we were the only entities of this sort that happened also to be conscious, while all the others are mere "zombies".  It would make us remarkable, lucky, special -- in the bright center of the cosmos, as far as consciousness is concerned.  It's more modestly Copernican to assume instead that sophisticated, communicative, naturally evolved organisms universe-wide are all, or mostly, conscious, even if they achieve their consciousness via very different mechanisms. (For a contrasting view, see Ned Block's "Harder Problem" paper.)

(Two worries about the Copernican argument I won't address here: First, what if only 15% of such organisms are conscious?  Then we wouldn't be too special.  Second, what if consciousness isn't special enough to create a Copernican problem?  If we choose something specific and unremarkable, such as having this exact string of 85 alphanumeric characters, it wouldn't be surprising if Earth were the only location in which it happened to occur.)

But robots are different from naturally evolved space aliens.  After all, they are -- or at least might be -- designed to act as if they are conscious, or designed to act in ways that resemble the ways in which conscious organisms act.  And that design feature, rather than their actual consciousness, might explain their conscious-like behavior.

[Dall-E image: Robot meets space alien]

Consider a puppet.  From the outside, it might look like a conscious, communicating organism, but really it's a bit of cloth that is being manipulated to resemble a conscious organism.  The same holds for a wind-up doll programmed in advance to act in a certain way.  For the puppet or wind-up doll we have an explanation of its behavior that doesn't appeal to consciousness or biological mechanisms we have reason to think would co-occur with consciousness.  The explanation is that it was designed to mimic consciousness.  And that is a better explanation than one that appeals to its actual consciousness.

In a robot, things might not be quite so straightforward.  However, the mimicry explanation will often at least be a live explanation.  Consider large language models, like ChatGPT, which have been so much in the news recently.  Why do they emit such eerily humanlike verbal outputs?  Not, presumably, because they actually have experiences of the sort we would assume that humans have when they say such things.  Rather, because language models are designed specifically to imitate the verbal behavior of humans.

Faced with a futuristic robot that behaves similarly to a human in a wider variety of ways, we will face the same question.  Is its humanlike behavior the product of conscious processes, or is it instead basically a super-complicated wind-up doll designed to mimic conscious behavior?  There are two possible explanations of the robot's pattern of behavior: that it really is conscious and that it is designed to mimic consciousness.  If we aren't in a good position to choose between these explanations, it's reasonable to doubt the robot's consciousness.  In contrast, for a naturally-evolved space alien, the design explanation isn't available, so the attribution of consciousness is better justified.

I've been assuming that the space aliens are naturally evolved rather than intelligently designed.  But it's possible that a space alien visiting Earth would be a designed entity rather than an evolved one.  If we knew or suspected this, then the same question would arise for alien consciousness as for robot consciousness.

I've also been assuming that natural evolution doesn't "design entities to mimic consciousness" in the relevant sense.  I've been assuming that if natural evolution gives rise to intelligent or intelligent-seeming behavior, it does so by or while creating consciousness rather than by giving rise to an imitation or outward show of consciousness.  This is a subtle point, but one thought here is that imitation involves conformity to a model, and evolution doesn't seem to do this for consciousness (though maybe it does so for, say, butterfly eyespots that imitate the look of a predator's eyes).

What types of robot design would justify suspicion that the apparent conscious behavior is outward show, and what types of design would alleviate that suspicion?  For now, I'll just point to a couple of extremes.  On one extreme is a model that has been reinforced by humans specifically for giving outputs that humans judge to be humanlike.  In such a case, the puppet/doll explanation is attractive.  Why is it smiling and saying "Hi, how are you, buddy?"  Because it has been shaped to imitate human behavior -- not necessarily because it is conscious and actually wondering how you are.  On the other extreme, perhaps, are AI systems that evolve in accelerated ways in artificial environments, eventually becoming intelligent not through human intervention but rather through undirected selection processes that favor increasingly sophisticated behavior, environmental representation, and self-representation -- essentially natural selection within a virtual world.

-----------------------------------------------------

Thanks to Jeremy Pober for discussion on a long walk yesterday through Antwerp.  And apologies to all for my delays in replying to the previous posts and probably to this one.  I am distracted with travel.

Relatedly, see David Udell's and my critique of Susan Schneider's tests for AI consciousness, which turns on a similar two-explanation argument.

11 comments:

SelfAwarePatterns said...

With the caveat that a lot depends on how we define words like "consciousness", "experience", "sentience", etc, I think this might be true for a short period of time, say a few minutes. But the longer the system in question can keep up the act, the more we have to be open to the possibility that there's an actual implementation of consciousness (at least in a functional sense) somewhere in there.

Consider a system that fakes being a calculator. It might get away with it for a brief time, providing a display with numbers in calculator type fashion, responding to the most common calculations with plausible answers, etc. But the longer we use it as a calculator, with increasing varieties of unusual calculations, the more likely that, whatever else is going on in the system, there's an implementation of a calculator somewhere in it.

Likewise, a system might conceivably pass a five minute Turing test and still be faking it. But a one hour one, multiple days, weeks, months? It seems like the only reason to hold out that a system keeping it up that long isn't conscious is if we think consciousness is a non-functional add-on. But if it is, can we ever know who or what is or isn't a philosophical zombie?

All that said, if we ever do encounter aliens (either evolved, designed, or some hybrid), they may challenge our current categories in ways we can't anticipate. Our own AIs will be creations of human thought, or at least have those creations in their lineage. Alien entities won't. Unless they come from a similar environment, we might find their version of minds unimaginably different, in ways many of us might struggle to perceive as "minds" at all.

Anyway, thought provoking as always Eric!

Mike

Marcus Arvan said...

There are also these reasons to be skeptical: consciousness is analog, not digital.

https://www.templeton.org/news/can-digital-computers-ever-achieve-consciousness

https://philpapers.org/rec/ARVPAA

Philosopher Eric said...

Nice article Marcus. You might be interested to know that I use an opposing way to dismiss functionalism, which is to say that I demonstrate it to be true by definition. Note that for every consciousness quality that’s brought up, a functionalist is able to propose future AI which functions as if it experiences that quality as well. Thus they seem to defend the proposition that consciousness can arise by means of digital function exclusively, on the basis of an endless ability to come back with “…yeah but if it functions the same, AI must be the same”. Though many seem to consider this logic sound reason to believe consciousness is digital, this can’t be sound. There must be a technical term for such failure.

I’ve heard arguments similar to yours before. Consider mine. It’s essentially that information can only exist as such (in a causal world anyway), in respect to what it informs. Thus for our brain information to create consciousness, it will need to causally inform some sort of consciousness substrate. And since electromagnetic fields were mentioned in your article, I’ll say that I suspect this to be what brain information animates to exist as consciousness. Once there’s enough interest to fund specific testing, I suspect McFadden’s CEMI will be empirically validated.

A few weeks ago Keith Frankish came here, and I guess as a prequel for Eric Schwitzgebel to be on his show. So I asked him to consider my thumb pain thought experiment. (Linked here.) The theme is essentially that if the right marks on paper were converted to the right other marks on paper, then Keith’s position holds that something here would experience what he does when his thumb gets whacked. He failed to reply however. Fortunately Tim Smith did eventually reply, and hilariously with a pseudo dialog of illusionist superheroes. Rather than bite the presented ridiculous bullet, he essentially had them convert to our perspective of embodied cognition. It seems to me that if more people had simple ways of understanding what functionalists, illusionists, and so on practically believe, then progress ought to occur given that this should leave more resources from which to assess less ridiculous ideas, and perhaps even some good ones!

chinaphil said...

As with last week, I think the confusion here lies in imagining that "what it's like to be human" is to be smart. It's not: what it's like to be human is to want a particular set of stuff. The reason that AIs are nothing like us is that they don't want stuff. If and when an AI develops desires - stable, consistent, continuous desires that motivate actions - then they will be just like us.
The way we'd know aliens are conscious is that they have desires: they intentionally found us, and having found us, they would intentionally do something with us. If they behaved like AIs, that would mean: finding us, and then just sitting there waiting for a prompt. And by far the most likely version of First Contact will be something like that: aliens send not ambassadors, but robot probes to Earth. We will recognise them as robot probes precisely by their passivity.

Paul D. Van Pelt said...

I think the notion of sentient aliens is at least consistent with how we think about intelligence. Would imagine even theists would agree, in principle with that view. On a different one, AI protagonists might offer this 'what if' rebuttal: what if space aliens turned out to be creations of a long-dead ( or even recently-dead) civilization? True, machinery---as we know it---blows up, breaks down, falls apart or wears out. But, we do not have any way of knowing all machinery, do we? If there are, or were, sentient aliens, it seems fair to postulate they would, or could, have tinkered with AI, long ago and far away. Well, this is where my mind wandered on this blog post. I have not yet read everything here.

Paul D. Van Pelt said...

OK, Eric and all. I finished reading the blog and will stand (mostly) by my original comments. It has been theorized, by some, that intelligent life from somewhere/when else, has 'scouted' us for, perhaps, centuries. Consider the Vimana representations of ancient Indian texts, one influence, perhaps, on some ancient and current beliefs. Sorry if the spelling is off. I saw something, as a child (under ten years), that I can never explain. My elder brother and a cousin his age saw the same thing, on the same summer evening. This memory is embedded: numerous luminescent globes, not more than three inches in diameter, descending on our twilight ravine playground. They came very near, evading attempts to touch them. These could not have been 'marsh gas' or will-o'-the-wisp. Evasion from our touch was intentional. Authorities have always feared what they could not explain. The blue book thing was doomed, from the get. Take this, as you will...

Paul D. Van Pelt said...

I would like to talk about something else. You have my email. Thanks,Eric.

Jim Cross said...

My rule of thumb for non-biological entities has been we need a theory that explains consciousness in biological entities (whichever ones are conscious the theory would tell us) that can also be implemented in non-biological entities. If the implementation in non-biological entities produces something that roughly seems conscious, then we might reasonably infer it is conscious. Absent a theory, we have no reason to assume that even something that mimics a human very well is actually conscious.

Philosopher Eric said...

I guess I should have read your article a bit more carefully Marcus. At first I didn’t realize that you and your co-author were additionally arguing a case for panpsychism. Though many modern theorists seem to get channeled this way, surely you now realize how challenging this road can be. Can’t drugs be used to eliminate human consciousness? Rather than everything inherently being conscious to some degree, this suggests that under the proper conditions the brain is able to create consciousness. Thus the right drugs are able to tamper with or end consciousness for a while.

If you’re correct that consciousness cannot merely exist in the form of digital quantities however, which is to say algorithms alone, then what might the brain do to create it? Consider the possibility that brain algorithms go on to inform the right sort of non-digital stuff. Observe that the best neural correlate for consciousness found so far, is synchronous neuron firing. So it could be that some of the slightly higher electromagnetic energies created by such firing, themselves exist as analogue experiencers of existence.

I realize that once a person gets on a given road, it can be difficult to turn around even when there are signs that they’re going the wrong way. There may be a better road for the two of you to take however. I’d love to help if I can…

James Cross said...

Marcus,

I read the abstract of your piece and am onboard with the idea that consciousness is analog. However, computing can be analog. Digital computation isn't the only kind of computation.

If we see it as arising from the oscillatory firings of neurons in spatial-temporal structures, the analog nature of consciousness would naturally fall out as a result. As firings spread in wave-like fashion across the brain the interference patterns of different waves colliding and overlapping would in effect be performing a sort of analog computation.

P-Eric and I both have been more than slightly interested in EM field theories of consciousness, which also would be analog. Lately I have been looking closely at Georg Northoff's The Spontaneous Brain; he has developed a temporo-spatial theory of consciousness (TTC), which to me has a lot of compelling arguments. It would certainly be compatible with EM field theories.

Arnold said...

Please consciousness is not a theory...

Experiencing consciousness is the time and place for consciousness learning...
...it comes with the evolution of one's understanding of balancing forces...

Like, inner outer in-between forces: Learning to mediate them...
...East West In-between philosophical balance: On going for thousands of years...

Where's Eric...what is consciousness for...