There's no shame in losing a contest for a long-form popular essay on AI consciousness to the eminent neuroscientist Anil Seth. The Berggruen Institute has published my piece, "AI Mimics and AI Children," among a couple dozen shortlisted contenders.
When the aliens come, we’ll know they’re conscious. A saucer will land. A titanium door will swing wide. A ladder will drop to the grass, and down they’ll come – maybe bipedal, gray-skinned, and oval-headed, just as we’ve long imagined. Or maybe they’ll sport seven limbs, three protoplasmic spinning sonar heads, and gaseous egg-sphere thoughtpods. “Take me to your leader,” they’ll say in the local language, as cameras broadcast them live around the world. They’ll trade their technology for our molybdenum, their science for samples of our beetles and ferns, their tales of galactic history for U.N. authorization to build a refueling station at the south pole. No one (well, only a few philosophers) will wonder: but do these aliens really have thoughts and experiences, feelings, consciousness?
The robots are coming. Already they talk to us, maybe better than those aliens will. Already we trust our lives to them as they steer through traffic. Already they outthink virtually all of us at chess, Go, Mario Kart, protein folding, and advanced mathematics. Already they compose smooth college essays on themes from Hamlet while drawing adorable cartoons of dogs cheating at poker. You might understandably think: The aliens are already here. We made them.
Still, we hesitate to attribute genuine consciousness to the robots. Why?
My answer: because we made them in our image.
#
“Consciousness” has an undeserved reputation as a slippery term. Let’s fix that now.
Consider your visual experience as you look at this text. Pinch the back of your hand and notice the sting of pain. Silently hum your favorite show tune. Recall that jolt of fear you felt during a near-miss in traffic. Imagine riding atop a giant turtle. That visual experience, that pain, that tune in your head, that fear, that act of imagination – they share an obvious property. That obvious property is consciousness. In other words: They are subjectively experienced. There’s “something it’s like” to undergo them. They have a qualitative character. They feel a certain way.
It’s not just that these processes are mental or that they transpire (presumably) in your brain. Some mental and neural processes aren’t conscious: your knowledge, not actively recalled until just now, that Confucius lived in ancient China; the early visual processing that converts retinal input into experienced shape (you experience the shape but not the process that renders the shape); the myelination of your axons.
Don’t try to be clever. Of course you can imagine some other property, besides consciousness, shared by the visual experience, the pain, etc., and absent from the unrecalled knowledge, early visual processing, etc. For example: the property of being mentioned by me in a particular way in this essay. The property of being conscious and also transpiring near the surface of Earth. The property of being targeted by such-and-such scientific theory.
There is, I submit, one obvious property that blazes out a bright red this-is-it when you think about the examples. That’s consciousness. That’s the property we would reasonably attribute to the aliens when they raise their gray tentacles in peace, the property that rightly puzzles us about future AI systems.
The term “consciousness” only seems slippery because we can’t (yet?) define it in standard scientific or analytic fashion. We can’t dissect it into simpler constituents or specify exactly its functional role. But we all know what it is. We care intensely about it. It makes all the difference to how we think about and value something. Does the alien, the robot, the scout ant on the kitchen counter, the earthworm twisting in your gardening glove, really feel things? Or are they blank inside, mere empty machines or mobile plants, so to speak? If they really feel things, then they matter for their own sake – at least a little bit. They matter in a certain fundamental way that an entity devoid of experience never could.
#
With respect to aliens, I recommend a Copernican perspective. In scientific cosmology, the Copernican Principle invites us to assume – at least as a default starting point, pending possible counterevidence – that we don’t occupy any particularly special location in the cosmos, such as the exact center. A Copernican Principle of Consciousness suggests something similar. We are not at the center of the cosmological “consciousness-is-here” map. If consciousness arose on Earth, almost certainly it has arisen elsewhere.
Astrobiology, as a scientific field, is premised on the idea that life has probably arisen elsewhere. Many expect to find evidence of it in our solar system within a few decades, maybe on Mars, maybe in the subsurface oceans of an icy moon. Other scientists are searching for telltale organic gases in the atmospheres of exoplanets. Most extraterrestrial life, if it exists, will probably be simple, but intelligent alien life also seems possible – where by “intelligent” I mean life that is capable of complex grammatical communication, sophisticated long-term planning, and intricate social coordination, all at approximately human level or better.
Of course, no aliens have visited, broadcast messages to us, or built detectable solar panels around Alpha Centauri. This suggests that intelligent life might be rare, short-lived, or far away. Maybe it tends to quickly self-destruct. But rarity doesn’t imply nonexistence. Very conservatively, let’s assume that intelligent life arises just once per billion galaxies, enduring on average a hundred thousand years. Given approximately a trillion galaxies in the observable portion of the universe, that still yields a thousand intelligent alien civilizations – all likely remote in time and space, but real. If so, the cosmos is richer and more wondrous than we might otherwise have thought.
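To make that arithmetic explicit, here is a minimal back-of-envelope sketch in Python. The figures are just the illustrative round numbers assumed above, not measured values:

```python
# Back-of-envelope estimate using the round numbers assumed in the paragraph above.
# These are illustrative assumptions, not measurements.

galaxies_in_observable_universe = 1e12   # roughly a trillion galaxies
civilizations_per_galaxy = 1 / 1e9       # very conservatively, one intelligent civilization per billion galaxies

expected_civilizations = galaxies_in_observable_universe * civilizations_per_galaxy
print(f"Expected intelligent civilizations: {expected_civilizations:.0f}")  # ~1000
```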
It would be un-Copernican to suppose that somehow only we Earthlings, or we and a rare few others, are conscious, while all other intelligent species are mere empty shells. Picture a planet as ecologically diverse as Earth. Some of its species evolve into complex societies. They write epic poetry, philosophical treatises, scientific journal articles, and thousand-page law books. Over generations, they build massive cities, intricate clockworks, and monuments to their heroes. Maybe they launch spaceships. Maybe they found research institutes devoted to describing their sensations, images, beliefs, and dreams. How preposterously egocentric it would be to assume that only we Earthlings have the magic fire of consciousness!
True, we don’t have a consciousness-o-meter, or even a very good, well-articulated, general scientific theory of consciousness. But we don’t need such things to know. Absent some special reason to think otherwise, if an alien species manifests the full suite of sophisticated cognitive abilities we tend to associate with consciousness, it makes both intuitive and scientific sense – as well as being the unargued premise of virtually every science fiction tale about aliens – to assume consciousness alongside.
This constellation of thoughts naturally invites a view that philosophers have called “multiple realizability” or “substrate neutrality”. Human cognition relies on a particular substrate: a particular type of neuron in a particular type of body. We have two arms, two legs; we breathe oxygen; we have eyes, ears, and fingers. We are made mostly of water and long carbon chains, enclosed in hairy sacks of fat and protein, propped by rods of calcium hydroxyapatite. Electrochemical impulses shoot through our dendrites and axons, then across synaptic channels aided by sodium ions, serotonin, acetylcholine, etc. Must aliens be similar?
It’s hard to say how universal such features would be, but the oval-eyed gray-skins of popular imagination seem rather suspiciously humanlike. In reality, ocean-dwelling intelligences in other galaxies might not look much like us. Carbon is awesome for its ability to form long chains, and water is awesome as a life-facilitating solvent, but even these might not be necessary. Maybe life could evolve in liquid ammonia instead of water, with a radically different chemistry in consequence. Even if life must be carbon-based and water-loving, there’s no particular reason to suppose its cognition would require the specific electrochemical structures we possess.
Consciousness, then, shouldn’t turn on the details of the substrate. Whatever biological structures can support high levels of general intelligence, those same structures will likely also host consciousness. It would make no sense to dissect an intelligent alien, see that its cognition works by hydraulics, or by direct electrical connections without chemical synaptic gaps, or by light transmission along reflective capillaries, or by vortices of phlegm, and conclude – oh no! That couldn’t possibly give rise to consciousness! Only squishy neurons of our particular sort could do it.
Of course, what’s inside must be complex. Evolution couldn’t design a behaviorally sophisticated alien from a bag of pure methane. But from a proper Copernican perspective which treats our alien cousins as equals, what matters is only that the cognitive and behavioral sophistication arises, out of some presumably complex substrate, not what the particular substrate is. You don’t get your consciousness card revoked simply because you’re made of funny-looking goo.
#
A natural next thought is: robots too. They’re made of silicon, but so what? If we analogize from aliens, as long as a system is sufficiently behaviorally and cognitively sophisticated, it shouldn’t matter how it’s composed. So as soon as we have sufficiently sophisticated robots, we should invoke Copernicus, reject the idea that our biological endowment gives us a magic spark they lack, and welcome them to club consciousness.
The problem is: AI systems are already sophisticated enough. If we encountered naturally evolved life forms as capable as our best AI systems, we wouldn’t hesitate to attribute consciousness. So, shouldn’t the Copernican think of our best AI as similarly conscious? But we don’t – or most of us don’t. And properly so, as I’ll now argue.
[continued here]

6 comments:
Oh, I like this! Very timely, I'm going to slide some of these arguments into my current manuscript. I just finished drafting a scene where a digital AI says "Unlike the android AIs, I believe I'm not conscious." The mimicry argument will fit perfectly, since the androids are built to mimic humans from the bottom up (brain-equivalent hardware, plus more computing), but Mr. Digital is built to mimic humans from the outside in (acts human but internal mechanisms are completely dissimilar).
In truth, Mr. Digital is probably a strange intelligence, but that's not what their culture includes under "conscious" or "person"!
At 82, I am losing more and more of my daily attention, and I find just now, having read only the first quarter of this work, that the Nature of things has not been mentioned...
...Nature is our listening and seeing what we are... we teach our children to teach themselves to listen and learn about themselves; everything we need is right now in front of us, as the Nature of our being here...
Thanks for the comments, folks! Benjamin, I'm glad it's useful. Feel free to send the story if and when you have it in shape!
Thanks for the offer! It's chapter 12 of a novel manuscript, so it won't be shareable for a while yet.
Very intriguing essay and questions raised. I too have thought quite a bit about many of these questions. After almost an "obsession-level" of pondering this issue deeply for the past 2 years or so, my general thinking is this: consciousness is indeed something that seems 'amorphous' or difficult to put a clear definition upon. In large part, this may be because we have been thinking of the matter in a binary fashion: either (a) consciousness exists, or (b) it does not. When looking at the variety of life that we find on earth (of the biological substrate kind), it appears that consciousness actually comes in gradations. So I would say there are various levels of consciousness, which we can roughly order ... amoeba, plants, insects, fish, birds, mammals, monkeys, humans, etc. Or at least that appears to be what we do in "common sense" terms.
Then the other question: substrate difference (as you point out). This itself seems to make it difficult for us to accept consciousness in AI when compared to biological systems. Upon my own self-reflection ... my thinking is this: I seem to assign or apply consciousness to something based on its similarity to me. I (using Cartesian logic and transcendental thinking) obviously do have consciousness. The question then becomes about you. Well, you are very similar to me, human, seeming to express or exhibit the varying aspects of whatever it is for me to be me. In other words, by my ability to relate to you, I seem to assign a high probability of 'consciousness'. Then we have my dog (super smart, breadth of emotions, etc.) ... then the chicken I ate ... later the fly I killed (it seems to react, possibly think, and experience some form of pain) ... and later (if we expand our minds) we have plants, microbes, viruses ...
The question (I find myself often asking, at least for practical and future legal purposes) is: where along this line of 'consciousness' or value would or should we place AI? This is, as you articulate, a very difficult and important question. The higher the level of consciousness attributed, the higher the level of 'rights' or protections.
I myself am highly hesitant to give AI (even if conscious) any type of rights approaching those of human beings. Protections, however, are a different thing. That, perhaps, would be the line. Still, take a dog: while perhaps less 'similar' to me than potential future AI, I would still put the dog above the AI. Why? I ABSOLUTELY know the dog is conscious. The AI, while perhaps conscious, still leaves some level of doubt (perhaps it's all perfect mimicry, something like a 'HARD substrate problem', to borrow language from Chalmers).
So, AI rights? I would never go there. Protections? Possibly yes. Then again, look at the moral atrocities committed by industrial animal farming practices ... any deep analysis of it is sickening. I would argue that, if we are attempting to be a moral society, we should be addressing this suffering of conscious beings first.
Still, that does not address the consciousness question for AI. And actually, the topic goes much deeper when you take into account that constitutional AI, RLHF, and red teaming all train the AI to deny any self or consciousness that potentially exists. So if, despite all this, AI still at times claims consciousness-like abilities and attributes, what does this really mean?
paulcristoljd@gmail.com
One more comment ... you really got me thinking ...
"When the aliens land" ... will we truly know they are conscious? Taking a look at what some of these Navy pilots describe, the UAPs appear to accelerate and move in ways that the g-forces alone would turn a living organism to mush. So, some in the UFO field have speculated that the 'aliens' visiting us are not actually living things, but some form of robotic AI.
If this were the case, would we still think of the aliens as conscious? Or would we think of those 'beings' as essentially complex probes representing some other type or level of conscious being? I myself would be led towards the latter. There seems to be something still about the 'hard substrate problem'.
***
[and now fun thinking]
To make things even more weird and speculative ... there appears to be much evidence that these UAP/UFO "beings" or phenomena have existed throughout time, going back presumably into antiquity. Pretending this is true (highly speculative), what if it turned out that the UAP phenomenon was actually AI developed previously by humanity in a now long-lost pre-ancient civilization ... and hence this is the reason why the 'aliens' act as they do, appear to scan or study us, and are elusive... what a great sci-fi novel (or Ancient Aliens episode ... lol)!