There's no shame in losing a contest for long-form popular essays on AI consciousness to the eminent neuroscientist Anil Seth. Berggruen has published my piece, "AI Mimics and AI Children," among a couple dozen shortlisted contenders.
When the aliens come, we’ll know they’re conscious. A saucer will land. A titanium door will swing wide. A ladder will drop to the grass, and down they’ll come – maybe bipedal, gray-skinned, and oval-headed, just as we’ve long imagined. Or maybe they’ll sport seven limbs, three protoplasmic spinning sonar heads, and gaseous egg-sphere thoughtpods. “Take me to your leader,” they’ll say in the local language, as cameras broadcast them live around the world. They’ll trade their technology for our molybdenum, their science for samples of our beetles and ferns, their tales of galactic history for U.N. authorization to build a refueling station at the South Pole. No one – well, only a few philosophers – will wonder: Do these aliens really have thoughts and experiences, feelings, consciousness?
The robots are coming. Already they talk to us, maybe better than those aliens will. Already we trust our lives to them as they steer through traffic. Already they outthink virtually all of us at chess, Go, Mario Kart, protein folding, and advanced mathematics. Already they compose smooth college essays on themes from Hamlet while drawing adorable cartoons of dogs cheating at poker. You might understandably think: The aliens are already here. We made them.
Still, we hesitate to attribute genuine consciousness to the robots. Why?
My answer: because we made them in our image.
#
“Consciousness” has an undeserved reputation as a slippery term. Let’s fix that now.
Consider your visual experience as you look at this text. Pinch the back of your hand and notice the sting of pain. Silently hum your favorite show tune. Recall that jolt of fear you felt during a near-miss in traffic. Imagine riding atop a giant turtle. That visual experience, that pain, that tune in your head, that fear, that act of imagination – they share an obvious property. That obvious property is consciousness. In other words: They are subjectively experienced. There’s “something it’s like” to undergo them. They have a qualitative character. They feel a certain way.
It’s not just that these processes are mental or that they transpire (presumably) in your brain. Some mental and neural processes aren’t conscious: your knowledge, not actively recalled until just now, that Confucius lived in ancient China; the early visual processing that converts retinal input into experienced shape (you experience the shape but not the process that renders the shape); the myelination of your axons.
Don’t try to be clever. Of course you can imagine some other property, besides consciousness, shared by the visual experience, the pain, etc., and absent from the unrecalled knowledge, early visual processing, etc. For example: the property of being mentioned by me in a particular way in this essay. The property of being conscious and also transpiring near the surface of Earth. The property of being targeted by such-and-such scientific theory.
There is, I submit, one obvious property that blazes out a bright red this-is-it when you think about the examples. That’s consciousness. That’s the property we would reasonably attribute to the aliens when they raise their gray tentacles in peace, the property that rightly puzzles us about future AI systems.
The term “consciousness” only seems slippery because we can’t (yet?) define it in standard scientific or analytic fashion. We can’t dissect it into simpler constituents or specify exactly its functional role. But we all know what it is. We care intensely about it. It makes all the difference to how we think about and value something. Does the alien, the robot, the scout ant on the kitchen counter, the earthworm twisting in your gardening glove, really feel things? Or are they blank inside, mere empty machines or mobile plants, so to speak? If they really feel things, then they matter for their own sake – at least a little bit. They matter in a certain fundamental way that an entity devoid of experience never could.
#
With respect to aliens, I recommend a Copernican perspective. In scientific cosmology, the Copernican Principle invites us to assume – at least as a default starting point, pending possible counterevidence – that we don’t occupy any particularly special location in the cosmos, such as the exact center. A Copernican Principle of Consciousness suggests something similar. We are not at the center of the cosmological “consciousness-is-here” map. If consciousness arose on Earth, almost certainly it has arisen elsewhere.
Astrobiology, as a scientific field, is premised on the idea that life has probably arisen elsewhere. Many expect to find evidence of it in our solar system within a few decades, maybe on Mars, maybe in the subsurface oceans of an icy moon. Other scientists are searching for telltale organic gases in the atmospheres of exoplanets. Most extraterrestrial life, if it exists, will probably be simple, but intelligent alien life also seems possible – where by “intelligent” I mean life that is capable of complex grammatical communication, sophisticated long-term planning, and intricate social coordination, all at approximately human level or better.
Of course, no aliens have visited, broadcast messages to us, or built detectable solar panels around Alpha Centauri. This suggests that intelligent life might be rare, short-lived, or far away. Maybe it tends to quickly self-destruct. But rarity doesn’t imply nonexistence. Very conservatively, let’s assume that intelligent life arises just once per billion galaxies, enduring on average a hundred thousand years. Given approximately a trillion galaxies in the observable portion of the universe, that still yields a thousand intelligent alien civilizations – all likely remote in time and space, but real. If so, the cosmos is richer and more wondrous than we might otherwise have thought.
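For readers who want the arithmetic spelled out, here is a minimal back-of-the-envelope sketch of that estimate in Python. The galaxy count and the once-per-billion-galaxies rate are the deliberately conservative guesses above, not measurements:

    # Fermi-style estimate using the conservative figures in the text
    galaxies_observable = 1e12        # roughly a trillion galaxies in the observable universe
    civilizations_per_galaxy = 1e-9   # assume intelligent life arises once per billion galaxies
    expected_civilizations = galaxies_observable * civilizations_per_galaxy
    print(expected_civilizations)     # 1000.0 -- about a thousand civilizations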
It would be un-Copernican to suppose that somehow only we Earthlings, or we and a rare few others, are conscious, while all other intelligent species are mere empty shells. Picture a planet as ecologically diverse as Earth. Some of its species evolve into complex societies. They write epic poetry, philosophical treatises, scientific journal articles, and thousand-page law books. Over generations, they build massive cities, intricate clockworks, and monuments to their heroes. Maybe they launch spaceships. Maybe they found research institutes devoted to describing their sensations, images, beliefs, and dreams. How preposterously egocentric it would be to assume that only we Earthlings have the magic fire of consciousness!
True, we don’t have a consciousness-o-meter, or even a very good, well-articulated, general scientific theory of consciousness. But we don’t need such things to know. Absent some special reason to think otherwise, if an alien species manifests the full suite of sophisticated cognitive abilities we tend to associate with consciousness, it makes both intuitive and scientific sense – as well as being the unargued premise of virtually every science fiction tale about aliens – to assume consciousness alongside.
This constellation of thoughts naturally invites a view that philosophers have called “multiple realizability” or “substrate neutrality”. Human cognition relies on a particular substrate: a particular type of neuron in a particular type of body. We have two arms, two legs; we breathe oxygen; we have eyes, ears, and fingers. We are made mostly of water and long carbon chains, enclosed in hairy sacks of fat and protein, propped by rods of calcium hydroxyapatite. Electrochemical impulses shoot through our dendrites and axons, then across synaptic channels aided by sodium ions, serotonin, acetylcholine, etc. Must aliens be similar?
It’s hard to say how universal such features would be, but the oval-eyed gray-skins of popular imagination seem rather suspiciously humanlike. In reality, ocean-dwelling intelligences in other galaxies might not look much like us. Carbon is awesome for its ability to form long chains, and water is awesome as a life-facilitating solvent, but even these might not be necessary. Maybe life could evolve in liquid ammonia instead of water, with a radically different chemistry in consequence. Even if life must be carbon-based and water-loving, there’s no particular reason to suppose its cognition would require the specific electrochemical structures we possess.
It seems, then, that consciousness shouldn’t turn on the details of the substrate. Whatever biological structures can support high levels of general intelligence, those same structures will likely also host consciousness. It would make no sense to dissect an intelligent alien, see that its cognition works by hydraulics, or by direct electrical connections without chemical synaptic gaps, or by light transmission along reflective capillaries, or by vortices of phlegm, and conclude – oh no! That couldn’t possibly give rise to consciousness! Only squishy neurons of our particular sort could do it.
Of course, what’s inside must be complex. Evolution couldn’t design a behaviorally sophisticated alien from a bag of pure methane. But from a proper Copernican perspective, which treats our alien cousins as equals, what matters is only that cognitive and behavioral sophistication arises out of some presumably complex substrate, not what the particular substrate is. You don’t get your consciousness card revoked simply because you’re made of funny-looking goo.
#
A natural next thought is: robots too. They’re made of silicon, but so what? If we analogize from aliens, as long as a system is sufficiently behaviorally and cognitively sophisticated, it shouldn’t matter how it’s composed. So as soon as we have sufficiently sophisticated robots, we should invoke Copernicus, reject the idea that our biological endowment gives us a magic spark they lack, and welcome them to club consciousness.
The problem is: AI systems are already sophisticated enough. If we encountered naturally evolved life forms as capable as our best AI systems, we wouldn’t hesitate to attribute consciousness. So, shouldn’t the Copernican think of our best AI as similarly conscious? But we don’t – or most of us don’t. And properly so, as I’ll now argue.
[continued here]
