Thursday, February 04, 2016

Cheerfully Suicidal A.I. Slaves

Suppose that we someday create genuinely conscious Artificial Intelligence, with all the intellectual and emotional capacity of human beings. Never mind how we do it -- possibly via computers, possibly via biotech ("uplifted" animals).

Here are two things we humans might want, which appear to conflict:

(1.) We might want them to serve us as subordinates and die for us.

(2.) We might want to treat them ethically, as beings who deserve rights equal to the rights of natural human beings.

A possible fix suggests itself: Design the A.I.s so that they want to serve us and die for us. In other words, make a race of cheerfully suicidal A.I. slaves. This was Asimov's solution with the Three Laws of Robotics -- a solution that slowly falls apart across the arc of his robot stories (finally collapsing in "The Bicentennial Man").

What to make of this idea?

Douglas Adams parodies the cheerily suicidal A.I. with an animal uplift case in The Restaurant at the End of the Universe:

A large dairy animal approached Zaphod Beeblebrox's table, a large fat meaty quadruped of the bovine type with large watery eyes, small horns and what might almost have been an ingratiating smile on its lips.

"Good evening," it lowed and sat back heavily on its haunches. "I am the main Dish of the Day. May I interest you in parts of my body?" It harrumphed and gurgled a bit, wriggled its hind quarters into a comfortable position and gazed peacefully at them.

Zaphod's naive Earthling companion, Arthur Dent, is predictably shocked and disgusted, and when he suggests a green salad instead, the suggestion is brushed off. Zaphod and the animal argue that it's better to eat an animal that wants to be eaten, and can say so clearly and explicitly, than one that does not want to be eaten. Zaphod orders four rare steaks.

"A very wise choice, sir, if I may say so. Very good," it said. "I'll just nip off and shoot myself."

He turned and gave a friendly wink to Arthur.

"Don't worry, sir," he said. "I'll be very humane."

Adams, I think, nails the peculiarity of the idea. There's something ethically jarring about creating an entity with human-like intelligence and emotion, which will completely subject its own interests to ours, even to the point of suiciding at our whim. This appears to be so even if the being wants to be subjected in that way.

The three major classes of ethical theory -- consequentialism, deontology, and virtue ethics -- can each be read in a way that delivers this result. The consequentialist can object that the small pleasure a human gains does not outweigh the loss of a potential lifetime of pleasure for the uplifted steer, even if the steer doesn't appreciate that fact. The Kantian deontologist can object that the steer is treating itself as a "mere means" rather than as an agent whose life should not be sacrificed, by itself or by others, to achieve others' goals. The Aristotelian virtue ethicist can say that the steer is cutting its life short rather than flourishing into its full potential of creativity, joy, friendship, and thought.

If we can use Adams' steer as an anchoring point of moral absurdity at one end of the ethical continuum, the question arises: to what extent does such reasoning transfer to less obvious intermediate cases? Consider Asimov's robots, who don't sacrifice themselves as foodstuffs (though presumably they would do so if commanded to, under the Second Law) but who do, in the stories, appear perfectly willing to sacrifice themselves to save human lives.

When a human sacrifices her life to save someone else's, it can be, at least sometimes, a morally beautiful thing. But a robot designed from the start always to subordinate its interests to those of humans -- that, I'm inclined to think, ought to be ruled out, in the general case, by any reasonable egalitarian principle that grants A.I.s equal moral status with humans if they have broadly human-like cognitive and emotional capacities. Such a principle would be a natural extension of the kinds of consequentialist, deontological, and virtue-ethical reasoning that rule out Adams' steer.

Thus, I think we can't use the "cheerfully suicidal slave" fix to escape the dilemma posed by (1) and (2) above. If we somehow create genuinely conscious, general-intelligence A.I. with a range of emotional capacities like our own, then we must create it as our moral equal, not our subordinate.

[image source]

-------------------------------------

Related article: A Defense of the Rights of Artificial Intelligences (Schwitzgebel & Garza, Midwest Studies in Philosophy 2015).

13 comments:

  1. For more fiction on this front I recommend the Rick and Morty episode "Meeseeks and Destroy." It centers on a device that generates a being known as "Mr. Meeseeks," whose sole purpose is to help solve a problem and then vanish. The hilarious part comes when Mr. Meeseeks uses the device to create additional copies of himself to try to solve a particularly difficult problem.

    Unlike your example, though, Mr. Meeseeks views continued existence as painful -- sort of a bizarro Buddhist twist in which existence is suffering, but there is an action you can perform to get off the cycle of samsara (as opposed to getting off only by recognizing impermanence).

  2. The consequentialist rejection of suicidal AIs might not be as straightforward as you suggest. It may be that if the Dish of the Day hadn't been made suicidal, then it would not have been made at all. So (the argument goes) its existence adds to aggregate utility if it leads a life minimally worth living. And we can imagine that it does, perhaps because it is allowed to indulge some hobbies while growing plump and delicious. The problem (the problem of population ethics) is that comparing utility between overall scenarios which have different people in them isn't straightforward.

  3. Yes, P.D. -- I agree that the argument can get complicated in the way you suggest. (Garza and I briefly address this type of argument in the section "Argument from Existential Debt" in our linked paper.) There will be an Aristotelian version of that type of argument, too, if the beef's flourishing and telos is exactly this.

    I think this post captures the nugget of the argument, but there's a lot of nuance and back-and-forth that will require follow-up posts and/or a full-length treatment to address. I am planning to work on a fuller treatment this summer.

  4. Langrangian: Thanks for that suggestion -- sounds interesting!

  5. Nice post, Eric. Pardon the crass self-promotion, but it so happens that at Dance of Reason this morning I just put up a post by Alexis Elder on the prospects for robot friendship. You put the essential tension in terms of willful self-sacrifice vs. moral agency, but I feel it even more strongly when I think about it in terms of friendship, which strikes me as an inevitable consequence. How can we ever really be friends with something that would sacrifice itself that way for us, when we know we would never even consider doing the same? Of course, there are those who would be glad of a distinction like this to demarcate the extent to which real friendship with robots is possible. Also, we do have deep, meaningful relationships like this. For example, I hope I would be robot-like in my willingness to exchange my life for the life of my child. But I would be horrified if they were to do that for me.

  6. Interesting post, Randy -- thanks for pointing to it!

    It seems there are two dimensions to friendship at issue here -- one is whether the target of friendship is really a conscious entity vs. only seeming to be one; another is whether the entity is subordinate or programmed-as-subordinate vs. equal. Yes?

  7. Due to a memory leak, my first experiment in neural networks committed suicide -- and took a boot floppy with it. It found and executed the hardware-level Device Service Routine to format a disk on the TI-99/4A floppy controller.

  8. Eric, yes. Which raises the interesting possibility that we would be reluctant to attribute full consciousness to them if they were programmed to be so completely devoted. Could there even be a deep connection here? Perhaps an entity like that would always have to lack full-fledged self-awareness.

  9. Once you see our moral intuitions as ecological artifacts, then it seems pretty clear that examples such as these constitute crash spaces, places where our intuitions fail to resolve. Think of Dick's Replicants -- the Nexus 6 only needed to slightly exceed human capacity to problematize its place in the sociocognitive machinery of humanity. The difficulty of these cases tells us several very important things about moral cognition, I think.

    How could a problem like this ever be solved, particularly when you take into account the variable, even fungible nature of artificial agencies? My guess is that when we reach a technological stage approaching this, we will treat our machines precisely the way they instruct us to! ;)

  10. Thanks for the comments, Scott and Randy!

    Randy: An interesting possibility! I'd guess against the deep connection, but maybe.

    Scott: Yes. I'm finding myself thinking more in terms of your "crash space" conception. That's part of where I want to take my reflections on monster/AI/alien ethics -- out into the crash space, then back home and see if things look different after the return.

  11. I see the stain of someone else's will, or the will of a collective group that initiated this, a group I might be part of.

    I don't see it any more as 'what it wants' than I would when a magician forces someone to take a particular card.

    We've not really modified brains yet (though various cults and various cults run by countries could be said to have come close), so we're not used to tracking the will of another in someone else. Not used to tracing the line of responsibility.

  12. Theodore -- well, that's a rough Darwinistic start, isn't it? :)

  13. Hoorah for Douglas Adams, a veritable fountain of philosophical insight! I have lain on the floor and howled at the part where Ford and Arthur first land on the improbability drive spaceship...

    There's lots to say about this, but the first thing is that you've written quite a lot about the nature and problems of immortality. It's not an obviously good state, at least according to some.

    So... I take it to be obviously true that when we make an AI, it's going to be a *lot* smarter than us, in certain ways at least. It may also know that it is better to die. It may also be better able than us to think of good ways of using its inevitable death (just as we have made lovely love out of our inevitable sex drive).

    And I don't think there would be any problem with that. The death angle makes an exciting hook to hang the moral problem on, but I don't actually think the moral problem lies in us designing an AI *to die*. The moral problem lies in us *designing* an AI.

    Part of the nature of being a moral, independent being is that there are certain kinds of intervention in your life which only you can perform. For example, we mostly find it wrong to physically constrain each other (kidnapping), and to mentally constrain each other (brainwashing). And intervening in someone's nature, either at the design phase (eugenics) or in later life (involuntary brain surgery?), is one of the things we can't do.

    Of course, with AIs we have to design them, because we make them; but as soon as they're conscious, we have to stop meddling; but then they might only be half-done... it's a problem. But I think you've slightly misidentified the issue with the cheerful martyr AI problem. It's not necessarily the death part.
