Last week I posted "How to Accidentally Become a Zombie Robot", discussing Susan Schneider's recent TEDx-talk proposal for checking whether silicon chips can be conscious. Susan has written the following reply, which she invited me to share on the blog.
---------------------------------------------
Eric,
Greetings from a café in Lisbon! Your new, seriously cool I,Brain case raises an important point about my original test in my TED talk and is right out of a cyberpunk novel. A few initial points, for readers, before I respond, as TED talks don't give much philosophical detail:
1. It may be that the microchips are made of something besides silicon (right now, e.g., carbon nanotubes and graphene are alternate substrates under development). I don't think this matters – the issues that arise are the same.
2. It will be important that any chip test involve the very same kind of chip substrate and design as that used in the AI in question.
3. Even if a kind of chip works in humans, there is still the issue of whether the AI in question has the right functional organization for consciousness. Since AI could be very different from us, and it is difficult to figure out these issues even in the case of biological creatures like the octopus, this may turn out to be very difficult.
4. For the relation of the chip test to intriguing ideas of Ned Block and Dave Chalmers on this issue, see a paper on my website (a section in "The Future of Philosophy of Mind", based on an earlier op-ed of mine).
5. As Eric knows, it is probably a mistake to assume that brain chips will be functional isomorphs. I'm concerned with the development of real, emerging technologies, because I am concerned with finding a solution to the problem of AI consciousness based on an actual test. Brain chips, already under development at DARPA, may eventually be faster and more efficient information processors, may enhance human consciousness, or may be low-fidelity copies of what a given minicolumn does. This depends upon how medicine progresses...
Back to my original "Chip Test" (in the TED talk). It's 2045. You are ready to upgrade your aging brain. You go to I,Brain. They can gradually replace parts of your biological brain with microchips. Suppose you are awake during the surgery, and they replace a part of your biological brain that is responsible for some aspect of consciousness with a microchip. Do you lose consciousness of something (e.g., do you lose part of your visual field)? If so, you will probably notice. This would be a sign that the microchip is the wrong stuff. Science could try and try to engineer a better chip, but if, after years of trying, they never could get it right, perhaps we should conclude that that kind of substrate (e.g., silicon) does not give rise to consciousness.
On the other hand, if the chips work, that kind of substrate is in principle the right stuff (it can, in the right mental environment, give rise to qualia), although there is a further issue of whether a particular AI that has such chips has the right organization to be conscious (e.g., maybe it has nothing like a global workspace, like a Rodney Brooks-style robot, or maybe it is superintelligent, has mastered everything already, and has eliminated consciousness because it is too slow and inefficient).
Eric, your test is different, and I agree that someone should not trust that test. This would involve a systematic deception. What kind of society would do this? A zombie dictatorship, of course, which seeks to secretly eliminate conscious life from the planet. :-)
But I think you want to apply your larger point to the original test. Is the idea: couldn't a chip be devised that would falsely indicate consciousness to the person? (Let's call this a "sham qualia chip.") I think it is, so here's a reply: God yes, in a dystopian world. We had better watch out! That would be horrible medicine… and luckily, it would involve a good deal of expense and effort (systematically fooling someone about, say, their visual experience would be a major undertaking), so science would likely first seek a genuine chip substitute that preserved consciousness. (Would a sham qualia chip even clear the FDA :-) ? Maybe only if microchips were not the right stuff and it was the best science could do. After all, people would always be missing lost visual qualia, and it is best that they not suffer like this....) But crucially, since this would involve a deliberate effort on the part of medical researchers, we would know this, and so we would know that the chip is not a true substitute. Unless, that is, we are inhabitants of a zombie dictatorship.
The upshot: It would involve a lot of extra engineering effort to produce a sham qualia chip, and we would hopefully know that the sham chip was really a device designed to fool us. If this was done because the genuine chip substitute could not be developed, this would probably indicate that chips aren’t the right stuff, or that science needs to go back to the drawing board.
I propose a global ban on sham qualia chips in the interest of preserving democracy.
---------------------------------------------
I (Eric) have some thoughts in response. I'm not sure it would be harder to make a sham qualia chip than a genuine qualia chip. Rather than going into detail on that now, I'll let it brew for a future post. Meanwhile, others' reactions welcomed too!
15 comments:
Hmm. I wonder if that response misses the point, though?
Difficulty of creating the sham chip aside, the mere *possibility* of the sham qualia chip shows (in Eric's thought experiments) that memory of having had a conscious experience is not sufficient evidence for actually having had such an experience. We can, in effect, have false memories of consciousness where there was none. So the skeptical scenario is still possible, and it's not clear that we have a good test yet for consciousness, AI or otherwise.
Even more interesting would be if there could be, in addition to false memories of conscious experience, other false cognitions of experience. For example, can one earnestly and honestly believe that one is having a conscious experience (of seeing red, say), even though one truly is not? Perhaps some of the literature on blindsight and Anton's syndrome is relevant here...
Fascinating scenarios, Eric and Susan. Now I know why the geniuses at the Genius Bar always have that empty look in their eyes.
I want to contribute three points to the discussion, and see what either/both of you think:
1. Positive Test Only: Susan's "visual field" test of the microchip only determines whether the chip provides a substrate allowing for that subject's continued consciousness. It seems not to constitute a negative test, however. Imagine that, instead of a microchip replacement, I scoop out that part of my friend's brain. Given that my friend's entire brain is conscious, that part of his brain (responsible for visual subjective experience) is conscious-capable (we can toy with the scenario to imagine how much of my friend's brain I scoop out for replacement parts). However, it's not *my* consciousness. After all, I cannot tell if anybody else is conscious at all. Thus, if the test fails, we cannot be too quick to presume that a microchip, of whatever substrate, cannot serve as a substrate for consciousness; it might instead support some other discrete consciousness of which I am not, and cannot be, aware.
2. Robot Cogito: Our main problem might be solved -- or worsened -- if we jettison reliance upon memories (which Descartes does), and instead rely on present testimony from an honest subject. Say that iBrain gives the robot-you a green button to press if the robot-you determines through Descartes' cogito that it's conscious. This is all assuming that the robot-you is truthful, and that there's no iBrain shenanigans. This leads to a heavy question: Could an AGI (your uploaded self, in this case) determine anything from running Descartes' cogito? Could a non-conscious program compute, erroneously, that it's conscious? Or would its answer be indeterminate? If the cogito would render a valid answer in the AGI case, it should help solve Susan's challenge of finding an affirmation of consciousness (though it would not help in determining the negative: namely, that a substrate is not or could not be conscious, as mentioned in #1 above).
3. Memories vs. Robot Cogito: The difference between a recorded robot-cogito and memories of consciousness is that in the memories case, the memories are qualitative data that are then to be interpreted by the carbon-based brain in the present. The danger is that the carbon-based brain could mistakenly interpret non-conscious data gathered by the robot as consciously gathered data. As an illustration, consider a person with blindsight (the fascinating condition that the previous commenter referenced), and replace the you-robot with that person. From the data gathered by their brain over a period of time (uploaded now into our brain), we would conclude that they had a conscious visual experience, when in fact they did not, despite their reaction to visual stimuli. In contrast, the green-button case involves no later interpretation: the judgment of conscious or non-conscious is made at that present moment in time.
The core question then seems to be: Can an AGI (e.g., your robot-self) run the cogito to determine if it is conscious? I have further stumbling-in-the-dark thoughts about this, but I would be interested to see what others think, especially Susan and Eric.
Thanks for the thoughtful comments, Unknown and Cool Whip!
I think Susan would acknowledge that it's *possible* to have false memories of experience, but she'd say it would be expensive, and knowable while it's being done, and thus something we can expect to be able to avoid. I'm doubtful of this, which I hope to address in a follow-up post soon.
Positive Test Only: Yes, that seems right to me. It's worth pointing out, but maybe not fatal to Susan's intentions.
Robot Cogito: I think this test would be easily defeated! Right now, I can design a little program in C++ that would answer the input "Are you phenomenally conscious?" with "You betcha, bud! I think therefore I am!" It seems to me that the robot cogito would basically be a version of that (see the little sketch at the end of this comment).
Memories vs Robot Cogito: Because of my skepticism of the robot cogito, I'm inclined to think that it's not an improvement on the memories case.
Now the setup I gave is a little different from Susan's -- though I get back to Susan's in the end -- to make it easier to describe my doubts. To be fully fair to Susan, she's imagining the bulk of the organic brain noticing at a moment if part of one's experience has phenomenally vanished, and maybe that doesn't rely on memory and maybe that's different from a pure robot cogito. I think the problem is probably structurally more or less the same in all these cases, but working that through in detail will have to wait.
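Here, for concreteness, is the sort of little program I had in mind above -- purely a toy illustration of my own (the C++ and the exact strings are just for the example), which matches the question and prints a canned affirmation with nothing remotely introspective behind it:

```cpp
// Toy example only: a program that "claims" phenomenal consciousness
// by string matching, with no inner life whatsoever.
#include <iostream>
#include <string>

int main() {
    std::string question;
    std::getline(std::cin, question);  // read one question from standard input
    if (question == "Are you phenomenally conscious?") {
        std::cout << "You betcha, bud! I think therefore I am!\n";
    } else {
        std::cout << "I don't understand the question.\n";
    }
    return 0;
}
```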
Thank you for your thoughtful comments, Eric. Regarding the robot cogito, I was envisioning it as an exercise that the robot-you undertook (not a simple program designed to say it was conscious, which, you're right, would be easy enough). I am wondering what would happen if the robot-you ran the cogito, and whether we could in any way determine what the results might be. I'll think on it some more.
I don't think the idea of sham qualia makes any sense.
The very definition of a qualia (wait, is that the singular? qualium?!) is that it is the unit of conscious experience. In a comment on the previous post, Eric said he wanted to maintain this assumption: "qualia are special in that they have to be tested internally by introspection". If the only ultimate test of a qualia is that it passes internal validation; and a certain silicon-induced "siliqualia" passes internal validation; then what grounds do we have for claiming that a siliqualia is not a qualia?
Or to put a heavier ethical spin on it: how are the sham qualia of cyborgs different from the sham love of homosexuals?
ok, I'm left in a 'lol wut?' state after that. Anyone else?
I mean, if I understand it right, me old china, and I don't really want to dandy it around, but maybe such self-validation would be as valid as the sham love of heterosexuals?
Thanks for the continuing conversation, folks! The simple input-output program I described could say "yes" if asked "do you have qualia?" For an internal version of this, we might imagine a similar program: Define a variable Qualia that can take "yes" or "no" as values. Assign it the value "yes".
We can add an Introspection-Qualia variable too! Introspection-Qualia has two fields, the name of the Qualia variable and its value, in other words "Qualia, yes".
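In code, that might look something like this toy sketch (again purely illustrative; the variable names are just for the example): a string variable assigned "yes", plus a record that merely stores the variable's name and its value:

```cpp
// Toy sketch of the "internal" version: a Qualia variable assigned "yes",
// plus an Introspection-Qualia record that just holds the variable's name
// and its value -- bookkeeping about a stored string, nothing more.
#include <iostream>
#include <string>

struct IntrospectionQualia {
    std::string variable_name;  // which variable is being "introspected"
    std::string value;          // the value found there
};

int main() {
    std::string qualia = "yes";                           // Qualia := "yes"
    IntrospectionQualia introspection{"Qualia", qualia};  // i.e., "Qualia, yes"
    std::cout << introspection.variable_name << ", "
              << introspection.value << "\n";             // prints: Qualia, yes
    return 0;
}
```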
Okay, now that's just silly of course. A fundamental question of consciousness studies is exactly what needs to be added to make this a genuinely conscious system with qualia like ours; and the skeptical worry is whether we could add enough to convince people that it does have qualia when in fact it does not.
Callan - I think you've understood it right. My argument was: (1) Homosexuals' self-reports of love are, so far as I can tell, basically the same as the self-reports of heterosexuals. (2) For many years it was argued that the love of homosexuals was not the same thing as the love of heterosexuals. (3) Today that position is regarded by many people as wrong and furthermore (4) a terrible ethical mistake. If the self-reports of qualia in electric people or in the electric bits of people are the same in key features as the self-reports of qualia in wetware people, then might we not be making the same mistake all over again?
To Eric's response - I do think it's a bit silly. Qualia have never been about giving yes/no answers to questions. You would never approach a classic qualia in a person in that way, so no, obviously you shouldn't attempt to approach a potential qualia in a computer in that way. Rather than using olde fashioned IT ideas like yes/no questions, we should just apply the usual set of procedures that we apply to get at qualia with humans. I.e. integrated observation and questioning, combined with years of philosophical agonizing over the problem of other minds. If we do that, it doesn't seem obvious to me that the problem of fakery arises. If a computer can fake qualia well enough to convince every human interlocutor and the electronic apparatus we can bring to bear, then I genuinely don't think we have any standing (moral or epistemic) to claim that it is not conscious.
But as I keep banging away at, I still think this is missing the point, because there is a background assumption that the qualia we're talking about are shared, or at least in some way commensurate or co-comprehensible. I'm just not sure they will be. Our qualia are so tied up with physical limitations which our silicon counterparts will not share that their qualia (if indeed they do have any) will just be unimaginable to us. What does it feel like to have a busted diode? More realistically, what does it feel like to experience low download speeds? What does it feel like to not experience our existential angst, because you know exactly what you were created for?
So while the Schneider-style cases of androids that exist in worlds vaguely like ours are interesting, and nice test cases, I don't see them as really approaching the more general and realistic questions of how we interact and deal with potentially conscious software with which we have almost nothing in common.
"If the self-reports of qualia in electric people or in the electric bits of people are the same in key features as the self-reports of qualia in wetware people, then might we not be making the same mistake all over again?"
Maybe, but feelin' guilty about the past doesn't make for proof of it being true.
I mean, it almost seems a panpsychism argument. How different is homosexuality? It's like referring to the idea that a certain race has 'sham love'. Minute difference. But human/consciousness never runs out, even once we've left carbon-based life behind and have gone over to the considerable difference of silicon-based mechanism? What isn't conscious at that point?
>What isn't conscious at that point?
Well, if you're asking me, the answer is things that don't have intentions. Personally I think that perceptions are the wrong place to look for consciousness. After all, my iPhone can now handle streams of data that are within a couple of orders of magnitude of human experience, and can perceive many of the things that I perceive - visual stimuli, sound stimuli, touch - but shows no sign of being conscious. My best guess is that that is because it has no intentions or desires. Whereas in humans, intentionality and desire actually precede experience, I think. We want to live long before we know we're alive. We breathe for the first time even though it hurts, i.e. our perceptions tell us not to but we do it anyway. It's the overlay of intentions (both those we are aware of having and those we are not) on perception that makes a qualia. And in fact, consciousness probably requires conflicting intentions. The very fact that running hurts, but I still want to do it, forces the human system to generate some method of balancing the two different urges, and that method *is* consciousness.
So when you have computer systems that, for example, impose requirements on hardware that must be negotiated, and you put a master system in place to regulate those demands, it does seem plausible to me that you're creating some kind of consciousness. It doesn't seem conscious to us, but that brings us back to the race/sexuality analogy. You're right that the differences between races or people of different sexual orientations are minuscule. And yet with all our knowledge, for centuries, we have allowed ourselves to believe that those minuscule differences actually constitute massive differences of type. Even though gay people smile like straight people, kiss like straight people, and cry like straight people, for some reason humanity as a whole decided that they weren't really people, called gay love a disease, and killed gay people wholesale. That fairly massive error of judgment does not fill me with confidence that we're going to get the answer right when we're faced with the question: Does this hunk of silicon that doesn't smile, love, or bleed, have consciousness? If we can't even recognize fellow humans, do we really have what it takes to recognize fellow consciousnesses?
I really need to get my story "Last Janitor of the Divine" into decent shape. It's all about this. I do think that if we allow that behavioral observational methods -- especially given actual limitations -- are fallible (case in point, recent chatbots), and that inward scanning is difficult to interpret in a system with a different basic architecture, and then we add in possible alienness too, we *possibly* could find ourselves in a serious epistemic pickle, especially when confronting something designed by us, without a long learning history and evolutionary history.
"Does this hunk of silicon that doesn't smile, love, or bleed, have consciousness?"
Isn't that another topic? Here we're talking about consciousness 'transfer', i.e., a human consciousness. Whether there are other breeds of consciousness possible that could be recognized is another topic - except perhaps that a 'transfer' makes you into another breed of consciousness. Or more exactly, it kills the consciousness that is you and animates another consciousness that shares a ton of data points with you, but will interact with them in subtly different ways. Those add up to a big difference over time - a humanly perceptible difference. Which is why the fiction always seems to make brain scans kill the brain in order to avoid this question.
Homosexuals might not be called monstrous anymore (or at least in some places), but we do recognise a difference - we don't try and call them heterosexual, after all. That we've given up persecution doesn't mean we've given up acknowledging a difference when there is indeed a difference. Same goes for consciousness transfer - if it's different, then how was it a transfer? When you get into a car are you different? No. When you get 'transferred' into Eric's silicon brain, are you different?
Hi, Callan. There's a bunch of different points in there. I'm not quite sure how to order them...
"Isn't that another topic?...except perhaps that a 'transfer' makes you into another breed of consciousness."
That's kind of my point: there are never going to be silicon brains that precisely mimic the function of the human brain. Specifically because there are certain kinds of operation that silicon already does much better than neural cells. It's not clear that it's possible to dumb down those operations to human level, and even if we could, why would we? So inevitably an artificial intelligence is going to be a super-intelligence in at least some ways. This for me makes all the pondering of "would a silicon brain that was just like us be the same as us" academic - in the "unrealistic" sense of the word.
"but will interact with them in subtlety different ways...a humanly perceptible difference"
Yep, but I don't think "perceptible difference" is or should be the measure. The measure should be something like "morally relevant". I share many things in common with my 24-year-old self (I'm 34), but I am definitely perceptibly different. However, I'm not morally much different, and that is recognised in many ways: the law grants me the same rights, people interact with me in much the same way, I have the same connection with my family, some of the same possessions, etc., etc. People vary from day to day, and we have pretty strong - but continually renegotiated, cf. trans people - instincts and rules for how to deal with continuity of identity through change. What we need to be thinking about with regard to silicon buddies is not "are they the same?" but "are they different in some (morally/epistemically) salient way?"
To take your brain scan example, yes, brain scan guy will be different to the wetware guy from whom he was scanned. But is he going to be more different than a 60 year old is from a 6 year old? We have no problem recognising a 60 year old as the same person as the 6 year old, with the same rights. What factors might make that kind of identity impossible with brain scan guy?
"Homosexuals might not be called monsterous anymore (or atleast in some places), but we do recognise a difference - we don't try and call them hetrosexual, after all."
No, I think this is wrong. We absolutely do reject the difference in key ways. The legalisation of gay marriage explicitly, legally rejects the idea that gay love is different in nature to straight love by putting it in the same legal framework. Homosexuality used to be classified as a mental illness - those who were gay were different from straight people in that they were "ill". That classification - that difference - has now been corrected. And in race, the same thing: enslaved black people were counted as only 3/5ths of a person under the original constitution, and that legal difference has since been eliminated. Obviously the law is only one social mechanism, but it's a big one, and it seems clear to me that in the law - and in many other social systems - we are progressively eliminating differences between people which we realise are not actually important.
cont. Sorry, monster comment!
cont.
"When you get into a car are you different?"
Absolutely. Your maximum speed increases tenfold, your weight and destructive power increase. And as a result, we have vast physical infrastructures and special legal systems for people in cars. We have a whole mandatory education system just for people in cars. We have special races for people in cars - you're not allowed in the Olympics in a car.
But some things don't change. You're still human in a car; you still have rights; there are social interactions (e.g. flashing your lights to let the other guy go first - interestingly, these vary culturally: in China, flashing your lights means get out of my way!); your identity doesn't change.
So these are the kinds of questions that I think we'll have to address. Not "are you different?" (the answer to that is just yes); but "what kind of differences are they and are they salient?"
@Eric sounds awesome. I agree that the time factor may be really important. I haven't read Robin Hanson's book yet, but one of the reviews said that he thought the "age of ems" might be really short in human terms, but age-long in em terms. During the AlphaGo match, I kept wondering how long the computer actually took to work out its moves. Did it get bored waiting for Lee Sedol to play?
I have to ask, if I put the Mona Lisa painting in a car, does it become different the same way - maximum speed increases tenfold?
I'd say maximum speed increases not at all, in both cases.
Attributing the car's qualities to oneself seems to be exactly the sort of confabulation Eric is talking about. Taking the silicon brain for a test drive and attributing IT as I.