Thursday, September 24, 2020

The Copernican Principle of Consciousness

According to the Copernican Principle in cosmology, we should assume that we do not occupy a special or privileged place in the cosmos, such as its exact center. According to the Anthropic Principle, we should be unsurprised to discover that we occupy a cosmological position consistent with the existence of intelligent life. The Anthropic Principle is a partial exception to the Copernican Principle: Even if cosmic locations capable of supporting intelligent life are extremely rare, and thus in a sense special, we shouldn't be surprised to discover that we are in such a location.

Now let's consider the following question: Is it surprising that Homo sapiens is a conscious species? On certain views of consciousness it would be surprising, and this surprisingness constitutes evidence against those views.

The views I have in mind are views on which conscious experience is radically separable from intelligent-seeming outward behavior. Views of this sort are associated with Ned Block and John Searle and more recently Susan Schneider -- though none of them commit to exactly the view I'll criticize today.

Let's stipulate the following: In our wide, maybe infinite, cosmos, living systems have evolved in a wide variety of different ways, with very different biological substrates. Maybe some life is carbon based and other life is not carbon based, and presumably carbon-based entities could take a variety of forms, some very unlike us. Let's stipulate also that some become sophisticated enough to form technological societies.

For concreteness, suppose that a thousand galaxies each host technological life for a thousand years. One hosts a technological society of thousand-tentacled supersquids whose cognitive processing proceeds by inference patterns of light in fiber-optic nerves. Another hosts a technological society of woolly-mammoth-like creatures whose cognition is implemented in humps containing a billion squirming ants. (For more detailed descriptions, see Section 1 of this paper.) Another hosts a technological society of high-pressure creatures who use liquid ammonia like blood. Etc.

Since these are technological societies, they engage in the types of complicated social coordination required to, say, land explorers on a moon. This will require structured communication: language, including, presumably, self-reports interpretable as reports of informational or representational states: "I remember that yesterday Xilzifa told me that the rocket crashed" or "I don't want to stop working yet, since I'm almost done figuring out this equation." (If advanced technology can arise without such communications, exclude such species from my postulated thousand.)

So then, we have a thousand societies like this, scattered across the universe. Now let's ask: Are the creatures conscious? Do they have streams of experience, like we do? Is there "something it's like" to be them?

Most science fiction stories seem to assume yes. I think that is also the answer we find intuitive. And yet, on certain philosophical views that I will call neurochauvinist, we should very much doubt that creatures so different from us are conscious. According to neurochauvinism, what's really special about us, which gives rise to conscious experience, is not our functional sophistication and complex patterns of environmental responsiveness but rather something about having brains like ours, with blood, and carbon, and neurons, and sodium channels, and acetylcholine, and all that.

Neurochauvinism can seem attractive when confronted with examples like Searle's Chinese Room or Block's China Brain -- complex systems designed to look from the outside like they are conscious and sophisticated (and which maybe implement computer-like instructions), but which are in fact basically just tricks. Part of the point of these examples is to challenge the common assumption that programmed robots, if they could someday be designed to behave like us, would be conscious. Consciousness, Block and Searle say, is not just a matter of having the right patterns of outward behavior, or even the right kinds of programmed internal, functional state transitions. Consciousness requires the specific biology of neurons -- or at least something in that direction. As Searle suggests, no arrangement of beer cans and wire, powered by windmills, could ever really have conscious experiences -- no matter how cleverly designed, no matter how sophisticated its behavior might seem when viewed from a distance. It's just not made of the right kind of stuff.

The neurochauvinist position as I am imagining it says this: We know that we are conscious. But those other aliens, made out of such different kinds of stuff, they're out of luck! Human biological neurons are what's special, and they don't have them. Although aliens of this sort might seem to be reporting on their mental states (remembering and wanting, in my example), really there is no more conscious experience there than there is behind the computer "memory" in your laptop or behind a non-player character in a computer game who begs you to save him from a dragon.

Now I don't think that Block or Searle are committed to such a strong view. Both allow that some hypothetical systems very different from us might be conscious, if they have the right kind of lower-level structures -- but they don't specify what exactly those structures must be or whether we should expect them to be rare or common in naturally-evolved aliens capable of sophisticated outward behavior.

So the neurochauvinist view is a somewhat extreme and unintuitive view. And yet philosophers and others do sometimes seem to say things close to it when they say that human beings are conscious not in virtue of their sophisticated behavior and environmental responsiveness but rather in virtue of the specifics of their underlying biological structures.


Back to the Copernican Principle. If we alone have real conscious experiences and the 999 other technologically sophisticated alien species do not, then we do occupy a special region in the universe: the only region with conscious experiences. We are, in a sense, super lucky! Of all the wild ways in which technological, linguistic, self-reporting creatures could evolve, we alone lucked into the neural basis of consciousness. Too bad for the others! Unlike us, they are as experientially blank as a computer or a stone.

If your theory of consciousness implies that Homo sapiens lucked into consciousness while all those other technological species missed out, it's got the same kind of weakness as does a theory that says, "yes, we're at the center of the universe, it just happened that way for no good reason, how strangely lucky for us!"
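The luckiness complaint can be put in rough Bayesian terms. Here is a minimal sketch, using the post's stipulated thousand technological species; the even prior odds, and the treatment of "our species turns out conscious" as a random draw among such species, are my illustrative assumptions rather than anything the post commits to:

```python
# Toy Bayesian sketch of the "strangely lucky" complaint (illustrative numbers).
# H_neuro (neurochauvinism): only 1 of the 1000 technological species is
#   conscious, so a given such species is conscious with probability 1/1000.
# H_func (a functionalist rival): every such species is conscious.

p_conscious_given_neuro = 1 / 1000
p_conscious_given_func = 1.0

# Likelihood ratio: how strongly the observation "we are conscious"
# favors the functionalist hypothesis over neurochauvinism.
bayes_factor = p_conscious_given_func / p_conscious_given_neuro

# Starting from even prior odds (an assumption), the posterior odds
# just equal the Bayes factor.
prior_odds = 1.0
posterior_odds = prior_odds * bayes_factor
print(posterior_odds)
```

On these toy numbers, noticing our own consciousness shifts the odds a thousandfold against neurochauvinism, which is one way of cashing out the complaint, though anthropic reasoning about how the likelihoods should be set complicates matters, as discussed below.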

Now you could try to wiggle out of this by invoking the Anthropic Principle. You could say that we should be unsurprised to discover that we are in a region of the universe that supports consciousness, just like we should be unsurprised to discover that we aren't in any of the vast quantities of vacuum between the stars. The Anthropic Principle is sometimes framed in terms of "observers": We should expect to be in a region that can host observers. If only conscious entities count as observers, then it's unsurprising that we're conscious.

Now I think that the best understanding of "observer" for these purposes would be a functional or behavioral understanding that would include all technological alien species, but that seems like an argumentative quagmire, so let me respond to this concern in a different way.

Suppose that instead of a thousand technological species, there are a thousand and one: we who are conscious, 999 alien species without consciousness, and one other alien species with consciousness (they also lucked into neurons), which has secretly endured, itself unobserved, while observing all the other species from a distance. I will call this alien species the Unbiased Observers. They gaze with equanimity at the thousand others, evaluating them.

When this species casts its eye on Earth, will it see anything special? Anything that calls out, "Whoa, this planet is radically unlike the others!" As it looks at our language and our technology, will anything jump out that says here be consciousness while all the other linguistic and technological societies lack it? I see no reason to think so if we abide by the starting assumptions of neurochauvinism, that is, if we think that nonconscious entities could easily have sophisticated outward behavior and information processing similar to ours, and that what's really necessary for consciousness is not that but rather the low-level biological magic of neurons.

The Copernican Principle is then violated as follows: The Unbiased Observers should, if they understand the basis of consciousness, regard us as the one-in-a-thousand lucky species that chanced into consciousness. Even if the Unbiased Observers don't understand the basis of consciousness, it is still true that we are special relative to them -- sharing consciousness with them, alone among all the species in the universe that outwardly seem just as sophisticated and linguistic.

The Copernican Principle of Consciousness: Assume that there is no unexplained lucky relationship between our cognitive sophistication and our consciousness. Among all the actual or hypothetical species capable of sophisticated cognitive and linguistic behavior, it's not the case that we are among a small portion who also have conscious experiences.



Arnold said...

On surprise...Copernican Principle is the implication "observation is something central"... cosmoses, consciousnesses, beings, everything...

Wasn't sure what you were saying, so I looked up implication, thanks

SelfAwarePatterns said...

This is the best thing on consciousness I've read in a while. Well said, Eric!

I often wonder if consciousness remains a productive concept. No one really argues that other systems can't be intelligent, that they can't
take in information,
prioritize which information to focus on,
have predispositions to react to that information in certain ways,
override those predispositions based on learned associations,
simulate various courses of action,
or monitor their own operations.

It's only when we re-label the above as conscious systems with introspection that the arguments begin. It seems like the concept encourages us to argue over nothing.

I think a "conscious" system is just a system that is enough like us to warrant moral consideration, with "like us" inevitably being in the eye of the beholder.

David Duffy said...

"the associative pallium of crows is rich in neurons that represent what the animals next report to have seen - whether or not that is what they were shown...concluding that birds do have what it takes to display consciousness - patterns of neuronal activity that represent mental content that drives behavior - now appears inevitable."

Maybe a little too strong a claim for some.

Whenever I read something that sounds like a doomsday type argument regarding my membership of a set, my antennae always start twitching.

Stephen Wysong said...

This discussion seems rooted in the speciesist belief that Homo sapiens is the only conscious species on the planet, a prejudice that ignores the evolutionary roots of our extended consciousness in the more fundamental core consciousness. Also called creature consciousness, core consciousness is the experience of physical feelings—familiar sensations we all have about our body's internal and external environments. Physical feelings are, by definition, felt by the organism—that’s consciousness.

In my opinion, it is inconceivable that naturally occurring technological alien species, carbon-based or otherwise, sprang into existence fully formed without evolutionary antecedents. It is further inconceivable that complex antecedent beings survived successfully without an awareness of themselves in their environment. Philosophical zombies only exist in Philosophy.

Of course, at this early point in neuroscience, we can never be certain of the consciousness of another organism, so that is always an inference. As I’ve posted before, in the case of other humans it’s a very strong inference. In the case of other earthly species, biological and DNA similarities can contribute to the strength of the inference. For non-carbon based alien species, the inference would be weaker but it seems reasonable that a technological alien species would be able to provide us with their scientific analysis of their own consciousness, allowing us to strengthen the inference.

Philosopher Eric said...

It seems to me that the point of John Searle’s Chinese room thought experiment was not to demonstrate that intelligent-seeming creatures are not conscious unless they have neurons like ours. The point was that if one of our computers ever does convincingly pass a Turing test, it shouldn’t be considered conscious simply by means of taking a presented set of human words and looking up an appropriate output set of human words. Instead it should need to do something which our brains do, namely produce qualia. His 40-year-old thought experiment didn’t quite get into qualia production, however. My own version does.

We realize that nerves send our brains information when our thumbs get whacked. One result of this should be the qualia of “thumb pain”. But does this happen because the input information becomes properly processed into other information, as many prominent consciousness theories propose? Or rather because that processed information then animates physics based qualia producing mechanisms?

If qualia do exist by means of generic information processing, without specific qualia producing mechanisms, then observe: if whacked thumb information sent to your brain were instead inscribed on paper and fed into a vast scanning computer that quickly processed it to print out a set of information laden paper associated with your brain’s response, then something should thereby feel what you do when your thumb gets whacked! Does that seem right? “Thumb pain” may be produced by means of information laden paper that’s converted into other information laden paper?

Consider instead that our brains not only process whacked thumb information, but that this information then animates various qualia producing mechanisms. So in the case of “a technological society of thousand-tentacled supersquids whose cognitive processing proceeds by inference patterns of light in fiber-optic nerves”, I think Searle would say that these creatures would be conscious as long as their processors don’t just turn information into other information, but rather use it to animate qualia producing mechanisms. This is to say that I don’t consider your former professor to be a neurochauvinist.

Regardless, what do you think? Should qualia producing mechanisms be responsible for your subjective experiences, or do you believe that processed information alone happens to be responsible, as in my paper to paper scenario? (I personally suspect that even garden snails are armed with these mechanisms.)

Eric Schwitzgebel said...

Thanks for the comments, folks!

David: Yes, that seems rather strong. The article has already generated lots of skeptical discussion among people interested in the psychology and neuroscience of consciousness. On the Doomsday argument: Yes, I agree it's sitting there nearby. Self-location and probability are pretty hairy topics, and I can see room for poking around in that vicinity to find possible problems with the current argument.

Stephen: I think I agree with everything you say, except for the implication that something I've said is in conflict with it!

Philosopher Eric: I think you are right in saying what the point of the Chinese Room thought experiment is and that Searle would want something in the vicinity of "qualia producing mechanisms" (though he probably wouldn't put it in exactly that way). But what biology is required for qualia producing mechanisms? He doesn't say. He does think that certain kinds of systems -- and not just classical computers -- will lack it, such as systems of beer cans, wires, and windmills (the example is from Minds, Brains, and Science). He thinks that if consciousness arises, it will require some biological system that has the same causal power to produce consciousness that neurons have, and that in principle this could be a very different biological system than ours. But we don't really get much further than that. So it's really not clear how neurochauvinist he would be, if pressed on the issue. Maybe not very much at all! This is why I'm careful not to attribute that position to him (or to Block or Schneider).

Arnold said...

Brain systems are composed of processing memory systems and their location in a brain...

Some "people interested in the psychology and neuroscience of consciousness" can be called imagers, and when-imaging systems of the same brain use words like "cooperation" to describe relationships...

That this kind of relationship with quanta is a qualia relationship...thanks for another lead..

Philosopher Eric said...

If Searle truly does reserve conscious function for living function, then I think he’s missing something. Instead he should reserve it for the right kind of physics — to hell with “life”. But then a quick look at the Wikipedia entry for “Biological Naturalism” eases my concerns. As he said, “Because we do not know exactly how the brain [produces consciousness] we are not yet in a position to know how to do it artificially." (Biological Naturalism, 2004). Exactly! It’s too bad he used the “biological” term at all, though. For many this seems to make him an enemy.

The main effective problem with the Chinese room, as I see it, is that for many the concept of “understanding a language” is too vague for his point to truly hit home. Thus one may decide, “If it seems like it understands, then it must understand”. Here various popular consciousness theories (with acronyms such as GWT, HOTT, PPT, ART, and AST) are able to sidestep any theoretical qualia producing mechanisms and presume that the job gets done by means of generic information processing alone. But what are the implications of such a premise?

This means that if a vast supercomputer were to scan and process a given set of information laden paper and print out an associated set of information laden paper, then something would feel what you and I know of as “thumb pain”! To have such an effect in a non-supernatural world, however, that second set of paper would instead need to be fed into a machine armed with the physics of qualia production.

If you’d like some speculation about such physics, well, surely it’s related to neuron firing. Furthermore, we know that when neurons fire they set up waves of electromagnetic radiation that should possess similar fidelity. So it could be that qualia exist by means of these electromagnetic fields. In a causal universe, however, some such mechanism must exist.

I appreciate that you agree with Stephen Wysong’s assessment. It was well said.

Fred J M R said...

Hello, Professor Eric. I'm new around here and I enjoyed this article. It was a nice read!
"Philosopher Eric," Searle's argument was meant to argue against strong AI and Turing's imitation game. Searle rejected the idea that computers, as they were envisioned back in those days, could have semantical (besides syntactical) properties. In general, strong AI is the position of philosophers who believe that AI will eventually reach these pragmatic properties, whereas weak AI is Searle's stance, according to which AI can only possess syntactical properties (although I may be wrong).
Searle's ideas tried to reject a specific stance in strong AI back in the days when basic algorithms such as if-then clauses were probably the biggest deal we had in computer science, so it is truly shocking to believe that any AI could develop semantic properties, or understanding, just from if-then clauses that answer certain inputs with definite outputs as in Turing's imitation game.

Philosopher Eric said...

Your “truly shocking” remark suggests support for Searle’s position. Of course you might simply have been asserting that Searle considers the implications of strong AI this way and are still playing the field. Nothing wrong with that, I suppose. I merely have hints of your support given your latest Wordpress post (where you said that Vacariu and Vacariu (2013) are wrong to say that cognitive neuroscience is “pseudoscience”, though you do still characterize it as “at best, a protoscience”).

I will weigh in over there at some point, so ready your conceptual thumb whacker! Given the popularity of informationist models such as global workspace theory, apparently Searle’s position is largely being ignored. Thus I’d like to step things up from vague terms such as “syntax” and “semantics” to ask people whether it makes sense for information laden paper, converted into other information laden paper, to do what brains are known to do, that is, to cause something to feel what we do when our thumbs get whacked. Without dedicated instantiation mechanisms for qualia, I consider this popular premise in modern cognitive neuroscience to depend upon the supernatural.