How can you tell whether a robot, or some other A.I. system, has conscious experiences (i.e., phenomenal consciousness, i.e., that there's "something it's like" to be that system)?
The question matters because conscious experience, or at least certain types of conscious experience, is the most valuable thing in the universe. A planet devoid of all consciousness -- mere stones and algae, say -- is missing something amazing that Earth has: joy, sadness, insight, inspiration, wonder, imagination, relief, longing, understanding, sympathy... each assumed to have experiential components. If we build robots who genuinely possess consciousness, as opposed to being mere empty machines (so to speak), we will have succeeded in something wondrous: the creation of a new type of experiential entity, with a new range of experiential capacities. Such entities will deserve our solicitude and care, perhaps even more care than we owe to human strangers, due to the obligations attached to being their creators.
This question confronts us with one of the most difficult problems in all of science: how to detect the presence or absence of conscious experience in entities very different from us.
[the android Data from Star Trek, testifying in a 24th-century court trial concerning his status as a conscious entity with rights]
Now there are two views on which the question is easy. According to panpsychism, consciousness is ubiquitous, so even currently existing robots are conscious, or contain consciousness, or participate in some cosmic consciousness. Far on the other end, according to biological views of consciousness (especially John Searle's view), no artificially constructed, non-biological system could ever be conscious, no matter how sophisticated it seemed to outside observers. Both views are extreme, so let's set them aside (despite their merits).
If we cut off those extremes, we are still left with a wide range of middling views about robot consciousness -- all the way from very liberal views, on which we are already close to creating conscious robots, to very conservative views, on which robot consciousness might require radically new technologies of the far distant future. Even among moderates, positions differ regarding what's necessary for consciousness like ours (the right kind of integrated information? the right kind of "global workspace"? higher-order self-monitoring?).
These debates show no sign of subsiding in the foreseeable future. So it would be nice if we could have a relatively theory-neutral test of robot consciousness -- a test that at least most moderately inclined theorists of consciousness could agree was diagnostic, despite continuing disputes about underlying theory.
The most famous relatively theory-neutral test is the Turing Test, according to which a machine counts as "thinking", or (adapting to the present case) "being conscious", if its verbal outputs are indistinguishable from those of an ordinary adult human. Unfortunately, the Turing Test has at least three crucial limitations:
First, some entities that most of us would agree have conscious experiences, such as babies and dogs, fail the test.
Second, the test relies exclusively on patterns of external behavior, and so it assumes the falsity of any theory of consciousness on which consciousness depends on internal mechanisms separable from outward behavior (which probably includes most current theories).
Third, currently existing chatbots already come close to passing it despite, on most moderate views of consciousness, not being conscious. This suggests that the test is liable to "cheating strategies" in which a machine could pass by superficial imitation of human linguistic patterns.
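To fix ideas before turning to the alternatives, here's a minimal sketch in Python of what one session of a Turing-style indistinguishability test involves. The judge, human, and machine interfaces are my own illustrative assumptions, not anything from Turing's paper:

```python
import random

def run_session(judge, human, machine, num_exchanges=5):
    """One session of a minimal Turing-style indistinguishability test.

    judge: assumed to expose ask(history) -> question and
           guess(transcripts) -> "A" or "B" (its pick for the machine).
    human, machine: assumed callables mapping a history to a reply.
    Returns True if the judge failed to identify the machine.
    """
    # Hide the two respondents behind randomly assigned channels A and B.
    assignment = {"A": human, "B": machine}
    if random.random() < 0.5:
        assignment = {"A": machine, "B": human}

    transcripts = {}
    for channel, respondent in assignment.items():
        history = []
        for _ in range(num_exchanges):
            question = judge.ask(history)
            history.append(("judge", question))
            history.append((channel, respondent(history)))
        transcripts[channel] = history

    # The machine "passes" this session if the judge picks the human.
    return assignment[judge.guess(transcripts)] is not machine
```

Note that everything here hangs on the transcripts alone -- which is exactly the second limitation above: nothing about the respondents' internal mechanisms enters into the verdict.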
In her 2019 book Artificial You, Susan Schneider comes to the rescue, proposing a pair of purportedly relatively theory-neutral tests of robot consciousness.
One is the AI Consciousness Test (ACT), a version of the Turing Test designed to limit cheating strategies by preventing the machine from having access to textual data on human discussions of consciousness. The ACT also focuses the questioning on philosophical issues concerning consciousness (life after death, soul swapping, etc.). Schneider and her collaborator on this test, Edwin Turner, hope that with the right kinds of restrictions and a focus on questions concerning consciousness, the machine would speak like a human only if it had genuine introspective access to real conscious experiences.
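Schematically, the ACT has two moving parts: the "boxing-in" restriction on the machine's access to human consciousness-talk, and the targeted probing. Here's a rough Python sketch; the keyword filter, probe questions, and grader interface are all illustrative assumptions of mine, since Schneider and Turner describe an informal protocol rather than code:

```python
# All names below are illustrative assumptions, not Schneider and Turner's.
CONSCIOUSNESS_TOPICS = ("consciousness", "qualia", "soul", "afterlife",
                        "what it is like", "out-of-body")

def box_in_corpus(documents):
    """The 'boxing-in' restriction: withhold human discussions of
    consciousness, so the machine can't pass by parroting them."""
    return [doc for doc in documents
            if not any(topic in doc.lower() for topic in CONSCIOUSNESS_TOPICS)]

ACT_PROBES = [
    "Could you survive the permanent deletion of your program?",
    "What would it mean for your mind to swap into a different body?",
    "Is there something it is like to be you right now?",
]

def run_act(model, grader):
    """Probe the boxed-in model with consciousness-themed questions;
    a (human) grader judges whether the answers show real insight."""
    answers = [model.respond(question) for question in ACT_PROBES]
    return grader.judges_insightful(answers)
```

In reality the filtering would have to happen at training time and the grading would be done by human judges; the point of separating the two functions is just to show that the test's force depends on both the restriction and the probing working as intended.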
Schneider's second test is the Chip Test, which involves gradually replacing parts of your brain with silicon (or other artificial) chips, then introspecting after each replacement. If you introspectively detect that your consciousness remains as vividly present as before, you can infer that the silicon chips support consciousness. If you introspectively detect a decline in consciousness, you can infer that the chips do not adequately support it.
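Schematically (and very much science-fictionally), the test's logic is a simple loop. The function names and the self-rated vividness scale in this sketch are my own illustrative assumptions, not anything Schneider specifies:

```python
def chip_test(brain_regions, swap_in_chip, introspect):
    """Schneider's Chip Test as a loop: replace one region at a time,
    then consult the subject's own introspective report.

    swap_in_chip(region): performs the (science-fictional) replacement.
    introspect(): the subject's self-rated vividness of experience,
    assumed comparable across checks.
    """
    baseline = introspect()
    for region in brain_regions:
        swap_in_chip(region)
        if introspect() < baseline:
            # A felt dimming of consciousness: these chips don't support it.
            return False
    # Consciousness stayed vivid throughout: the chips seem consciousness-apt.
    return True
```

Notice that every verdict the loop returns is routed through introspect() -- the test inherits whatever trust we place in the partially chipped tester's introspective reports, which is exactly where the critique below will press.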
So now, to the new article promised in the title of this post.
Back in 2017 or 2018, my awesome undergraduate student David Billy Udell became fascinated with these issues. He decided to write an honors thesis about them, focusing on Schneider's two tests (familiar from her presentations in popular media and then later from an advance draft of her book that she kindly shared). David finished his thesis in 2019, then went off to graduate school in philosophy at CUNY. Together, we revised his critique of Schneider into a journal article, finally published last weekend.
His/our fundamental criticism is that neither test is as theory-neutral as it might seem. In other words, they have an "audience problem". Liberals about A.I. consciousness will probably think the tests are unnecessary or too stringent; skeptics and conservatives will probably think they aren't stringent enough. The tests are a partial advance, helpful to a limited range of theorists whose doubt or skepticism is of exactly the right sort to be addressed by the specifics of the tests. In short, the ACT remains open to cheating/mimicking strategies, despite Schneider and Turner's efforts, and the Chip Test relies on an awkward combination of skepticism about the purported introspections of fully chipped robots and uncritical acceptance of the tester's purported introspections after partially replacing their brain with chips.
For the full critique, see the official published version at the Journal of Consciousness Studies or the final MS version here.
What a pleasure to see David's work now in print in a prominent journal -- hopefully the start of a magnificent academic career!
16 comments:
A quick edit, professor. Instead of "liberals" both times, I think you meant: "Liberals about A.I. consciousness will probably think the tests are unnecessary or too stringent; conservatives will probably think they aren't stringent enough."
On the outliers, professor, I'm not entirely sure that Searle or Roelofs qualify. Of course you've known Searle since you were one of his graduate students at UC Berkeley. I've also noticed others attributing the "If it ain't biological then it ain't conscious" position to him, but I don't quite know why. There's this 1990 article, for example. At one point he says: "Second, I have not tried to show that only biologically based systems like our brains can think. Right now those are the only systems we know for a fact can think, but we might find other systems in the universe that can produce conscious thoughts, and we might even come to be able to create thinking systems artificially."
I take his Chinese room thought experiment to make a case similar to your USA-consciousness thought experiment: that a tremendous number of highly regarded academics succumb to "information patternism" about consciousness. Essentially their premise holds that if the right information on paper were properly converted into another set of information on paper, then something there would experience what you do when your thumb gets whacked!
Defending Luke Roelofs from being an outlier may be a bit trickier. But I did question him on your blog in February about how one could believe that everything is conscious and also believe that anesthesia can render a person non-conscious (or at least suitable for surgery). He implied that he too was curious about what brains do to create the stable sort of experiences that we have, unlike the "experiential white noise" that he believes exists for dead people and elements of the universe in general. In any case he's not offering an engineering solution for functional consciousness (or at least not once adequately questioned).
I agree with you and Udell that Schneider and Turner's tests shouldn't be useful, though I'll be more direct. Essentially they're focused on testing flawed ideas, and those flaws seem to have rubbed off on their tests. As you noted earlier, no Turing test could ever demonstrate that babies and dogs aren't conscious (let alone ants or computers). Furthermore, replacing brain stuff with other stuff to see if that stuff is "conscious apt"? I don't blame Schneider or Turner for coming up with these tests, but rather an academia that develops ideas which solicit such responses.
Once academia graduates to substrate-based models of what happens in our brains to create subjective experience (as opposed to the long-held generic information infatuation), consciousness tests should be developed that even "hard scientists" recognize as such. For example, there's the way that I'd like McFadden's cemi to be tested.
I've carved a little boat out of wood for my grandson -- put it into a pond and it floats...
As a puzzle: consciousness is a piece of wood-floating on water...
...versus...consciousness is a human being-on earth...
Is it all the same...consciousness...what is it like to be A.I....
...what is it like to be human seems the puzzle...
Actually from the paper I now see that you must have meant “skeptic” rather than “conservative”, which does seem more appropriate.
I feel like I've had this idea before, but reading your post gave it to me again, so I'm writing it out to help me retain it:
It seems to me that there is an unfortunate possibility that self-awareness is only a form of ignorance. I assume that the world is either deterministic (to whatever extent quantum reality allows) or much more deterministic than I'm aware of (because I don't notice most of the things that happen at the very small and very large scales). One of the most important things that make up feeling like me is the sensation that I can control my own thoughts, at least to some extent. If I want to think of a dog, I think of a dog; if I prefer to think of a cow, I can picture a Guernsey. But if those choices were in fact deterministically pre-ordained, then my consciousness is precisely that gap between what happens and the chains of causation that I am able to perceive.
If electronic beings are not created with this perceptual gap -- i.e., they retain access to all their inputs -- then they may develop to be more complex and intelligent than me, but remain non-conscious simply because they know more.
Thanks chinaphil...
..."The saying “Ignorance is bliss” originates in Thomas Gray's poem “Ode on a Distant Prospect of Eton College” (1742). The quote goes: “Where ignorance is bliss, 'tis folly to be wise.” Face it: you were better off not knowing that, weren't you? Generally speaking, ignorance is a detestable state of mind." google...
Isn't that, even a sentence, as itself, is consciousness...
...and maybe self-awareness, as itself, is too...
Hi Chinaphil,
I think your “ignorance” component to consciousness is quite necessary. I’ll tell you why through a model that I’ve developed. In any case this way you might further explore your suspicion that consciousness requires ignorance. (I don’t mean this to substitute for the professor’s thoughts on the matter, of course.)
Originally, as organisms evolved, there shouldn't have been any conscious component to brain function. Like our robots', such function should simply have been algorithmic. But as organisms expanded to more "open" venues (unlike the venue of a chess board, for example), this sort of operation should have been impeded, since effectively unlimited contingency circumstances couldn't be addressed nearly as well algorithmically. (This is displayed by the difficulties that our computers and robots have today.)
Secondly, imagine that in some of these non-conscious beings, brains would sometimes do "hard problem stuff" that creates purely epiphenomenal experiencers as well. A given experiencer might be caused to feel bad and so dislike what was happening, while at other times it would feel good and so like what was happening. Since the organisms should have had problems dealing with novel situations, as established above, let's say that over enough iterations they'd sometimes effectively let the experiencers decide a given matter of organism function. So here the brain would detect the desires of a previously epiphenomenal experiencer that it creates, and then take that route. Of course there's little reason to think that such a profoundly ignorant tag-along experiencer would inherently make good organism-survival choices. Blood loss might feel good to a given iteration, for example. But let's imagine that over enough evolution, better-than-algorithmic results were had in certain cases. Along these lines, experiencers should tend to be given more informational senses from which to build their desires, and so better choices should tend to be made than before.
Ultimately such evolution could result in creatures like us. This is to say, a deterministic scenario where the conscious entity is given an illusion of general understanding, though in truth it's the brain that does the real work (not that it would "understand" anything any more than the computers we build do). I'd say that we experiencers, or understanders, are merely kept around for novel situations that can't be algorithmically programmed for, though we remain vastly ignorant of what's going on behind the scenes.
A step or two further...
...test as with proof vs detest as without proof...
Should scientists have proof of themselves as conscious and consciousness...
...or should they not have proof of themselves as conscious and consciousness...
Is science learning about the certainty of here now...
...that conscious and consciousness is in all ways here now...
Is the science of being here now the object of philosophy...
Thanks for the comments, folks!
Phil Eric: Thanks for catching the mistakes! I've corrected the editing mistake. You're also right that I oversimplified Searle, and really I shouldn't have. I've revised the text so that it's more accurate. And of course panpsychists like Roelofs could still value tests of this sort as tests of a certain kind of consciousness, rather than consciousness per se. I also agree that the tests build on challengeable presuppositions that people who have doubts about robot consciousness might well justifiably reject -- the "audience problem". (I wouldn't agree that those presuppositions are mistaken, though; just that they *might* be.)
Arnold: Interesting thought, that the science of being here now is the (or at least an) object of philosophy. As for proofs of our own existence, they are either extremely easy or extremely hard, depending on the standards!
chinaphil/Phil Eric: Yes, the idea that our sense of consciousness is grounded in ignorance of our own processes has substantial merit; versions of it appear, for example, in Michael Graziano and R. Scott Bakker (who frequently commented on this blog several years ago), as well as in "illusionists" like Keith Frankish. It is virtually inevitable that no cognitive system could keep perfect track of itself, since then it would have to keep track of its keeping track, and keep track of its keeping track of its keeping track.... So there must be simplifications and shortcuts, reduced self-models. And on views like the one you're describing, this is exactly the space where ideas like free will and dualism can come in: what you keep track of in your simplified self-model doesn't seem sufficient to explain what you feel and what you choose to do next.
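The regress can be made vivid with a toy sketch (the dictionary-based self-model here is purely my own illustration, nothing from Graziano or Frankish):

```python
def self_model(state, depth, max_depth):
    """Build a self-model that must also model its own modeling.

    A perfect self-tracker would need an unbounded tower: the model must
    contain a model of itself, which must contain a model of itself, and
    so on. Any real system truncates the regress with a lossy summary.
    """
    model = {"tracking": state}
    if depth < max_depth:
        # Keep track of the keeping-track: model the model just built.
        model["self_monitor"] = self_model(f"my model of <{state}>",
                                           depth + 1, max_depth)
    else:
        model["self_monitor"] = "(regress truncated: simplified self-summary)"
    return model

# A bounded, and therefore necessarily incomplete, self-model:
reduced = self_model("picturing a Guernsey cow", 0, 2)
```

Whatever sits past the truncation point is exactly the blind spot the "ignorance" view trades on: the system can't report what its own shortcut hides.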
I may have some comments on your article at a later date. First, however, I feel compelled to correct an unfair description of the position of John Searle. You claim: "Far on the other end, according to biological views of consciousness (especially John Searle's view), no artificially constructed, non-biological system could ever be conscious…."
This is not a true description of Searle's position. For example, in Searle's "The Mystery of Consciousness" he expressly says the opposite:
"There is no more a logical obstacle to an artificial brain than there is to an artificial heart. …Because we do not know how real brains do it, we are in a poor position to fabricate an artificial brain that could cause consciousness. … Perhaps it is a feature we could duplicate in silicon or vacuum tubes. At present we just do not know." (See pp. 202-203.) Searle's view, apparently misunderstood by some, is simply that mere formal computer programs cannot achieve consciousness. And his arguments about that are compelling.
This glaring misconception of John Searle's thinking creates a lack of confidence in the rest of your essay.
Hey Matti,
I didn't know that you support the professional work of John Searle too. Sweet! Apparently we're a rare breed. I raised the same objection above. The professor mentioned that he oversimplified Searle originally and so has revised, but I'm not sure what revision was made. The post still says "Far on the other end, according to biological views of consciousness (especially John Searle's view), no artificially constructed, non-biological system could ever be conscious, no matter how sophisticated it seemed to outside observers." That assessment seems quite contrary to the writings of Searle that we've provided.
Professor,
I've just now been reading about the many years of sexual harassment that led to the professional demise of John Searle in 2017. Furthermore, I recall you mentioning in an interview that you were clueless about the whole thing, except that he always seemed to have a good-looking female assistant around. (Apparently less clueless people referred to them as "Searle's girls".) At any rate it seems to me that UC Berkeley officials were quite aware of what was going on, and indeed must have effectively used this natural supply of good-looking women as incentive to encourage Searle to keep working for them into his 70s and 80s. Of course you're employed by the UC system as well, and I suppose might catch some grief for admitting that this makes a lot of sense. Nevertheless, any thoughts?
Hey Eric!
I didn't see your similar comment about Searle until after I made mine. I just stopped dead when I hit the false remark and jumped to my bookshelves for quick evidence of Searle's actual views. I don't know how rare a breed we are -- Searle has written a slew of books and someone must have bought them. It was heartbreaking for me to learn of Searle's failings as a person.
Matti,
I suppose that Searle did get quite popular as a rare thorn in the side of the status quo regarding “consciousness”. But what difference did it make? Theorists today seem largely unpenalized for advocating all sorts of metaphysically spooky ideas. Apparently for many, neuroscience functions as an elixir that makes all sorts of funky ideas seem plausible.
I realize that you're wary of science overstepping its bounds into the sacred realm of philosophy. It could be, however, that our mental and behavioral sciences remain as soft as they do specifically because they're naturally more susceptible to failures in metaphysics, epistemology, and axiology. If so, then these sciences will need various associated accepted principles for real progress to be made in them. I believe that we'll need a respected agreement-based community to provide these scientists with tools from which to straighten out their fields. It matters not to me what heading such a community resides under, but merely that its principles help soft scientists halt their metaphysically, epistemologically, and axiologically dubious practices.
I suspect that a younger Searle could have gotten behind such a plan. And though from this post it may not be clear that he's cut from the same cloth, I've seen enough from Professor Schwitzgebel to think that he could potentially become a powerful advocate for the creation of such a community. There's something to be said for being politic, as demonstrated by the contrary example of Professor Searle.
Eric,
"…Searle did get quite popular as a rare thorn in the side of the status quo regarding 'consciousness'. But what difference did it make?" Apparently, not enough if his own graduate student gets it so wrong! But I'll take your word on prof Schwitzgebel -- for now. I only recently noticed him. Let's see.
Same regarding Searle and non-biological consciousness.