In a series of fascinating recent articles, philosopher Susan Schneider argues that:
(1) Most of the intelligent beings in the universe might be Artificial Intelligences rather than biological life forms.
(2) These AIs might entirely lack conscious experiences.
Schneider's argument for (1) is simple and plausible: Once a species develops sufficient intelligence to create Artificial General Intelligence (as human beings appear to be on the cusp of doing), biological life forms are likely to be outcompeted, due to AGI's probable advantages in processing speed, durability, repairability, and environmental tolerance (including deep space). I'm inclined to agree. For a catastrophic perspective on this issue, see Nick Bostrom. For a Pollyannaish perspective, see Ray Kurzweil.
The argument for (2) is trickier, partly because we don't yet have a consensus theory of consciousness. Here's how Schneider expresses the central argument in her recent Nautilus article:
Further, it may be more efficient for a self-improving superintelligence to eliminate consciousness. Think about how consciousness works in the human case. Only a small percentage of human mental processing is accessible to the conscious mind. Consciousness is correlated with novel learning tasks that require attention and focus. A superintelligence would possess expert-level knowledge in every domain, with rapid-fire computations ranging over vast databases that could include the entire Internet and ultimately encompass an entire galaxy. What would be novel to it? What would require slow, deliberative focus? Wouldn’t it have mastered everything already? Like an experienced driver on a familiar road, it could rely on nonconscious processing.
On this issue, I'm more optimistic than Schneider. Two reasons:
First, Schneider probably underestimates the capacity of the universe to create problems that require novel solutions. Mathematical problems, for example, can be arbitrarily difficult (including problems that are neither finitely solvable nor provably unsolvable). Of course AGI might not care about such problems, so that alone is a thin thread on which to hang hope for consciousness. More importantly, if we assume Darwinian mechanisms, including the existence of other AGIs that present competitive and cooperative opportunities, then there ought to be advantages for AGIs that can outthink the other AGIs around them. And here, as in the mathematical case, I see no reason to expect an upper bound of difficulty. If your Darwinian opponent is a superintelligent AGI, you'd probably love to be an AGI with superintelligence + 1. (Of course, there are other paths to evolutionary success than intelligent creativity. But it's plausible that once superintelligent AGI emerges, there will be evolutionary niches that reward high levels of creative intelligence.)
Second, unity of organization in a complex system plausibly requires some high-level self-representation or broad systemic information sharing. Schneider is right that many current scientific approaches to consciousness correlate consciousness with novel learning and slow, deliberative focus. But most current scientific approaches to consciousness also associate consciousness with some sort of broad information sharing -- a "global workspace" or "fame in the brain" or "availability to working memory" or "higher-order" self-representation. On such views, we would expect a state of an intelligent system to be conscious if its content is available to the entity's other subsystems and/or reportable in some sort of "introspective" summary. For example, if a large AI knew, about its own processing of lightwave input, that it was representing huge amounts of light in the visible spectrum from direction alpha, and if the AI could report that fact to other AIs, and if the AI could accordingly modulate the processing of some of its non-visual subsystems (its long-term goal processing, its processing of sound wave information, its processing of linguistic input), then on theories of this general sort, its representation "lots of visible light from that direction!" would be conscious. And we ought probably to expect that large general AI systems would have the capacity to monitor their own states and distribute selected information widely. Otherwise, it's unlikely that such a system could act coherently over the long term. Its left hand wouldn't know what its right hand is doing.
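For concreteness, here is a minimal toy sketch (in Python; the module names and the whole architecture are my own illustrative inventions, not any actual AI design or any particular theorist's model) of the kind of broad information sharing such theories have in mind: a visual subsystem posts a summary of its state to a shared workspace, which broadcasts it to goal, audio, and self-report subsystems.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Workspace:
    """Toy 'global workspace': broadcasts selected content to all subscribers."""
    subscribers: List[Callable[[str, Dict], None]] = field(default_factory=list)

    def subscribe(self, handler: Callable[[str, Dict], None]) -> None:
        self.subscribers.append(handler)

    def broadcast(self, label: str, content: Dict) -> None:
        # "Fame in the brain": once a representation wins access to the
        # workspace, every other subsystem can see and use it.
        for handler in self.subscribers:
            handler(label, content)

def goal_subsystem(label: str, content: Dict) -> None:
    if label == "vision":
        print(f"[goals] re-planning in light of: {content['summary']}")

def audio_subsystem(label: str, content: Dict) -> None:
    if label == "vision":
        print(f"[audio] biasing attention toward direction {content['direction']}")

def report_subsystem(label: str, content: Dict) -> None:
    # A crude 'introspective' report: the system can say what it is representing.
    print(f"[report] I am currently representing: {content['summary']}")

workspace = Workspace()
for subsystem in (goal_subsystem, audio_subsystem, report_subsystem):
    workspace.subscribe(subsystem)

# The visual subsystem detects something salient and makes it globally available.
workspace.broadcast("vision", {
    "summary": "lots of visible light from direction alpha",
    "direction": "alpha",
})
```

On workspace-style views, it is this sort of wide availability and reportability, suitably elaborated in a genuinely sophisticated system, that is supposed to mark a state as conscious - not, of course, anything about these few lines of code themselves.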
I share with Schneider a high degree of uncertainty about what the best theory of consciousness is. Perhaps it will turn out that consciousness depends crucially on some biological facts about us that aren't likely to be replicated in systems made of very different materials (see John Searle and Ned Block for concerns). But to the extent there's any general consensus or best guess about the science of consciousness, I believe it suggests hope rather than pessimism about the consciousness of large superintelligent AI systems.
Related:
Possible Psychology of a Matrioshka Brain (Oct 9, 2014)
If Materialism Is True, the United States Is Probably Conscious (Philosophical Studies 2015).
Susan Schneider on How to Prevent a Zombie Dictatorship (Jun 27, 2016)
I would not expect there to be more than one superintelligence to a light cone for nonaligned superintelligences. They can expand without replication error, optimize themselves far faster than the slow equations of population genetics and random mutation permit, and have a convergent instrumental incentive not to permit the existence of competitors for resources. Whichever superintelligence cracks the protein folding problem first, if it's six hours ahead of the next competitor, wins.
I think I'd use a lack of processing access to one's own processors as the definition of consciousness - a self-reflection event horizon, where self-knowledge can no longer escape the eye of the Ouroborian black hole. I'm not sure it's avoidable - it just gets more magnificently hard to point out. For example, Anton's syndrome patients' blindness to their blindness is really easy to point out (though they won't listen to your telling them they are blind, they will make up all sorts of confabulation (side note: apparently those who are deaf but won't admit they are deaf can be influenced by videos showing them ignoring sounds (someone sneaks up and pops a balloon behind them, to no effect); but they have to see it to believe it - and Anton's syndrome patients can't see, of course!) And yes, this was an obscenely long bracket comment - hey, it's Christmas!). But an AI's lack of internal access might be quite difficult to locate, let alone describe (for the AI to then promptly ignore and insist it has full access).
I actively work on cognitive architectures for AI, and it sure seems to me like there is nothing particularly different about the kinds of routines I would write to implement "high-level self-representation or broad systemic information sharing" and the kinds I would write to implement associational memory or object recognition or whatever. There doesn't seem anything special about them that would give rise to phenomenal consciousness. It seems to me that AI would have to have "high-level self-representation or broad systemic information sharing" to act intelligently, but I don't see any reason to think that would make it phenomenally conscious (except that the two are somehow related in our brains.)
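Roughly, I mean something like this (a toy Python sketch, purely illustrative, with made-up function names - not code from any real architecture):

```python
# Toy illustration only: two routines side by side. The one implementing
# "broad systemic information sharing" doesn't look categorically different
# from the one implementing ordinary associational memory.

def associative_recall(memory: dict, cue: str):
    """Ordinary associational memory: map a cue to stored content."""
    return memory.get(cue)

def broadcast_self_summary(internal_state: dict, subsystems: list) -> dict:
    """'High-level self-representation': summarize state and share it widely."""
    summary = {k: v for k, v in internal_state.items() if v.get("salient")}
    for subsystem in subsystems:
        subsystem(summary)  # every subsystem receives the same summary
    return summary
```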
To extend your idea of the left hand not knowing what the right hand is doing, it is unreasonable to assume an AI will always be fully connected in a communications sense, either due to parts of itself being physically isolated or merely through distance and speed-of-light limitations. In this case, independent parts of the AI will need to be aware of self/part distinctions, and thus will need consciousness.
On a second theme, I would propose that emergent consciousness arose in the first place because (biological) systems had (and still have) limited sensory inputs, in particular pain sensors that inform the organism/system that it is about to sustain, or is already sustaining, damage to itself. Determining exactly why, and responding appropriately, requires consciousness, assuming we're beyond the point of automatic responses such as when you touch a hot stove.
My two cents on this:
https://ericlinuskaplan.wordpress.com/2016/12/22/soulful-robots/
By the way, in order to post this your blog host made me say I'm not a robot. Does it do that for all comments or only ones weighing in on this politically sensitive issue?
In the late sixties wasn't consciousness established for human ontology only....some of us remember it...'conscious' behavior and function were then to be for psychology and neurology...
AGI seems to be in the same boat as behavior and function...entities without consciousness...
Thanks for the interesting comments folks! Fifteen thousand pageviews yesterday for this post, but I'm not seeing the referring source....
Eliezer: On Earth, maybe, but in the whole light cone? It seems like there's a big gap between "mere" Earth-dominating super-AI and something so powerful that it could suppress all possible competitors in its light cone. One complication: Two space-like related portions of that light cone won't be able to coordinate with each other, which could present right hand / left hand problems and the possibility of fissioning into competing superintelligences.
Callan: Of course there's a big difference between detailed self-knowledge of one's cognition and summary self-knowledge. I'm thinking of the fame/workspace thing as a kind of summary self-knowledge, hopefully of the more useful bits.
D: Right. I didn't mean that as a *sufficient* condition for consciousness in any system whatsoever, just a feature that suggests consciousness in a system that seems to be a contender based on sophisticated behavior, internal complexity, etc.
Kaplan -- thanks, I'll check it out! (Thanks also for the interesting FB exchange in the meantime.)
Unknown: Why think humans only? Not even dogs? There are a few who hold that view, but it's definitely not the majority view among philosophers and scientists of consciousness these days. AI is more controversial than organisms with physiology similar to ours, but still the majority of models of consciousness seem to imply that a sufficiently sophisticated, correctly designed AI would be conscious. That also fits with folk psychological intuitions as they play out in science fiction scenarios.
Staying with my first thought toward 'not even dogs', reciprocal came to mind, then "reciprocal altruism"...but for science to keep abreast of ontology it should test reciprocal altruism, not for status quo interactions but for transformation of energies...
Ref: The Schwitzgebel Effort, thanks
Conscious? Then you have experiences, not just of the physical world but of the social world. In humans consciousness is entwined with language and social life.
So too, with computers/robots etc?
Just a thought - what are we contrasting 'consciousness' against? Is it against animals, like cats and dogs? Creatures that basically deal with things outside of themselves - either hunting things outside of themselves or evading things outside of themselves.
Whereas we are a creature that reports inner things - is this the contrast?
Solve novel problems? Will your hypothetical AI create anything new? In pop culture, Boorstin wrote both The Discoverers and The Creators.
Even we did something, or are about to: create them, create AI.
Or do you regard all creativity as a category of novel problem solving?
Thanks for the continuing comments folks!
Unknown: You're welcome!
Howard: I could see it going either way. Is social life necessary?
Callan: Most consciousness studies researchers regard cats and dogs as conscious. I'm inclined to agree. I was unclear about "intelligent" in this post, though. I was thinking human-grade or superior, in terms of scientific, technological, and strategic reasoning.
Howie: It seems like creativity and novel problem solving pretty much go hand in hand, yes?
Eric, what do those studies say as to why they come to the conclusion cats and dogs are conscious?
Do they say cats and dogs give inner reflection reports (beyond reflexive body language) - or do they simply not treat inner reflection as a requirement of consciousness(!)?