Thursday, June 23, 2016

How to Accidentally Become a Zombie Robot

Susan Schneider's beautifully clear TEDx talk on the future of robot consciousness has me thinking about the possibility of accidentally turning oneself into a zombie. (I mean "zombie" in the philosopher's sense: a being who outwardly resembles us but who has no stream of conscious experience.)

Suppose that AI continues to rely on silicon chips and that -- as Schneider thinks is possible -- silicon chips just aren't the right kind of material to host consciousness. (I'll weaken these assumptions below.) It's 2045 and you walk into the iBrain store, thinking about having your degenerating biological brain replaced with more durable silicon chips. Lots of people have done it already, and now the internet is full of programmed entities that claim to be happily uploaded people who have left their biological brains behind. Some of these uploaded entities control robotic or partly organic bodies; others exist entirely in virtual environments inside of computers. If Schneider is right that none of these silicon-chip-instantiated beings is actually conscious, then what has really happened is that all of the biological people who "uploaded" committed suicide, and what exist are only non-conscious simulacra of them.

You've read some philosophy. You're worried about exactly that possibility. Maybe that's why you've been so slow to visit the local iBrain store. Fortunately, the iBrain company has discovered a way to upload you temporarily, so you can try it out -- so that you can determine introspectively for yourself whether the uploaded "you" really would be conscious. Federal regulations prohibit running an uploaded iBrain at the same time that the original source person is conscious, but the company can scan your brain non-destructively while you are sedated, run the iBrain for a while, then pause your iBrain and update your biological brain with memories of what you experienced. A trial run!

From the outside, it looks like this: You walk into the iBrain store, you are put to sleep, a virtual you wakes up in a robotic body and says "Yes, I really am conscious! Interesting how this feels!" and then does some jogging and jumping jacks to test out the body. The robotic body then goes to sleep and the biological you wakes up and says, "Yes, I was conscious even in the robot. My philosophical doubts were misplaced. Upload me into iBrain!"

Here's the catch: After you wake, how do you know those memories are accurate memories of having actually been conscious? When the iBrain company tweaks your biological neurons to install the memories of what "you" did in the robotic body, it's hard to see how you could be sure that those memories aren't merely presently conscious seeming-memories of past events that weren't actually consciously experienced at the time they occurred. Maybe the robot "you" really was a zombie, though you don't realize that now.

You might have thought of this possibility in advance, and so you might remain skeptical. But it would take a lot of philosophical fortitude to sustain that skepticism across many "trial runs". If biological you has lots of seeming-memories of consciousness as a machine, and repeatedly notices no big disruptive change when the switch is flipped from iBrain to biological brain, it's going to be hard to resist the impression that you really are conscious as a machine, even if that impression is false -- and thus you might decide to go ahead and do the upload permanently, unintentionally transforming yourself into an experienceless zombie.

But maybe if a silicon-chip brain could really duplicate your cognitive processes well enough to drive a robot that acts just as you would act, then the silicon-chip brain really would have to be conscious? That's a plausible (though disputable) philosophical position. So let's weaken the philosophical and technological assumptions a little. We can still get a skeptical zombie scenario going.

Suppose that the iBrain company tires of all the "trial runs" that buyers foolishly insist on, so the company decides to save money by not actually having the robot bodies do any of those things that the trial-run users think they do. Instead, when you walk in for a trial they sedate you and, based on what they know about your just-scanned biological brain, they predict what you would do if you were "uploaded" into a robotic body. They then give you false memories of having done those things. You never actually do any of those things or have any of those thoughts during the time your biological body is sedated, but there is no way to know that introspectively after waking. It would seem to you that the uploading worked and preserved your consciousness.

There can also be less malicious versions of this mistake: behavior and cognition during the trial might be insufficient for consciousness, or for full consciousness, while the installed memory is nonetheless vivid enough to lead to retrospective attributions of full consciousness.

In her talk, Schneider suggests that we could tell whether silicon chips can really host consciousness by trying them out and then checking whether consciousness disappears when we do so; but I'm not sure this test would work. If nonconscious systems (whether silicon chip or otherwise) can produce both (a.) outwardly plausible behavior, and (b.) false memories of having really experienced consciousness, then we might falsely conclude in retrospect that consciousness is preserved. (This could be so whether we are replacing the whole brain at once or only one subsystem at a time, as long as "outward" means "outside of the subsystem, in terms of its influence on the rest of the brain".) We might then choose to replace conscious systems with nonconscious ones, accidentally transforming ourselves into zombies.

----------------------------------------------

Update June 27:

Susan Schneider replies!

----------------------------------------------

6 comments:

  1. The way I estimate it, it's a suicide that triggers a child being made, the child thinking it is you. Though perhaps not human consciousness, the child would, in my estimate, have a type of consciousness (or else it really is a failed attempt at copying human capacities).

    But it's the suicide, or at least the speculation that it's a suicide, that's the main issue, right? That's my speculation as well.

    To me it's always sketchy in fiction when the brain scan destroys the brain - it just seems like a way of dodging the question of what happens if it didn't destroy the brain and... the bio-brain person just wakes up again... which kinda shows there was no 'transfer', just a duplication. But better to have the brain destroyed, because it pampers the idea of a transfer and lets the author get on with the sci-fi action rather than doing navel-gazing philosophy.

  2. It's a nice talk, but I still feel it misses the point a bit.

    Firstly, this Nagel thing now makes me want to go back in a time machine and steal all Nagel's pencils the day before he was due to write it. A whole philosophy talk devoted to how important it is that we keep "something" (the "something it's like to be you") in the universe, because a universe without something would be... well, it would really be something. Or not.

    Second, and slightly more importantly, though Schneider does address superintelligence, it's only as one option among several. In reality, all AI is already superintelligence, in that it outperforms us in at least some functions. I don't believe that we will be installing human-power brain chips for very long. We will very quickly move to installing HD vision chips, extended-spectrum audio chips, and SSD memory.

    I'm not sure that I see the problem that you are worried about in the iBrain store. You can use external validation for many of the problems - ask to watch their CCTV feed after you wake up, for example, or have witnesses there. It doesn't seem like a problem that is different in kind from other medical ethics problems, and we solve those with more or less clunky mechanisms.

  3. Nick Agar's book *Humanity's End* argues that it would be irrational to upload consciousness on the grounds that you can't know beforehand (and perhaps even afterwards) whether you would survive being uploaded (because survival requires p-consciousness). I have replied to Nick in a paper.

  4. Thanks for the tip, Neil. I'll check out the Agar and your reply!

    Chinaphil: On the suicide question, I think there are two separate issues. One is whether the being will be conscious at all, which is the intended question; a separate question is whether, if conscious, that being would be *you*. On the external validation, part of the background assumption here is that qualia are special in that they have to be tested internally by introspection and external validation is not enough. One might discard that premise (a la Dennett, say), but I want to accept that premise, at least for the purpose of the post. And no fair stealing Nagel's pencils! ;-)

  5. What if the problem is that we're treating this as an all-or-nothing affair? Borrow Chinaphil's idea that we'd probably be enhancing ourselves piecemeal before copying ourselves whole. Imagine that the process instead involved the slow replacement of parts of your brain until all that's left is silicon or some superposition gel. Can we imagine a kind of tipping point:

    "Up till here you were fully conscious but now that you've enhanced your sense of smell, I'm afraid scans show you're a zombie."

    Starts to seem a little less plausible.

  6. Kallan -- I agree that it's plausible that there would be gray cases and piece-by-piece cases. On the piece-by-piece replacement, which I discuss only briefly at the end, Schneider imagines it not quite as you suggest, but rather as a case where conscious smell might suddenly disappear, though not your entire conscious self!
