Friday, January 16, 2015

Two Arguments for AI (or Robot) Rights: The No-Relevant-Difference Argument and the Simulation Argument

Wednesday, I argued that artificial intelligences created by us might deserve more moral consideration from us than do arbitrarily-chosen human strangers (assuming that the AIs are conscious and have human-like general intelligence and emotional range), since we will be partly responsible for their existence and character.

In that post, I assumed that such artificial intelligences would deserve at least some moral consideration (maybe more, maybe less, but at least some). Eric Steinhart has pressed me to defend that assumption. Why think that such AIs would have any rights?

First, two clarifications:

  • (1.) I speak of "rights", but the language can be weakened to accommodate views on which beings can deserve moral consideration without having rights.
  • (2.) AI rights is probably a better phrase than robot rights, since similar issues arise for non-robotic AIs, including oracles (who can speak but have no bodily robotic features like arms) and sims (who have simulated bodies that interact with artificial, simulated environments).

Now, two arguments.

-----------------------------------------

The No-Relevant-Difference Argument

Assume that all normal human beings have rights. Assume that both bacteria and ordinary personal computers in 2015 lack rights. Presumably, the reason bacteria and ordinary PCs lack rights is that there is some important difference between them and us. For example, bacteria and ordinary PCs (presumably) lack the capacity for pleasure or pain, and maybe rights only attach to beings with the capacity for pleasure or pain. Also, bacteria and PCs lack cognitive sophistication, and maybe rights only attach to beings with sufficient cognitive sophistication (or with the potential to develop such sophistication, or belonging to a group whose normal members are sophisticated). The challenge, for someone who would deny AI rights, would be to find a relevant difference which grounds the denial of rights.

The defender of AI rights has some flexibility here. Offered a putative relevant difference, the defender of AI rights can either argue that that difference is irrelevant, or she can concede that it is relevant but argue that some AIs could have it and thus that at least those AIs would have rights.

What are some candidate relevant differences?

(A.) AIs are not human, one might argue; and only human beings have rights. If we regard "human" as a biological category term, then indeed AIs would not be human (excepting, maybe, artificially grown humans), but it's not clear why humanity in the biological sense should be required for rights. Many people think that non-human animals (apes, dogs) have rights. Even if you don't think that, you might think that friendly, intelligent space aliens, if they existed, could have rights. Or consider a variant of Blade Runner: There are non-humans among the humans, indistinguishable from outside, and almost indistinguishable in their internal psychology as well. You don't know which of your neighbors are human; you don't even know if you are human. We run a DNA test. You fail. It seems odious, now, to deny you all your rights on those grounds. It's not clear why biological humanity should be required for the possession of rights.

(B.) AIs are created by us for our purposes, and somehow this fact about their creation deprives them of rights. It's unclear, though, why being created would deprive a being of rights. Children are (in a very different way!) created by us for our purposes -- maybe even sometimes created mainly with their potential as cheap farm labor in mind -- but that doesn't deprive them of rights. Maybe God created us, with some purpose in mind; that wouldn't deprive us of rights. A created being owes a debt to its creator, perhaps, but owing a debt is not the same as lacking rights. (In Wednesday's post, I argued that in fact as creators we might have greater moral obligations to our creations than we would to strangers.)

(C.) AIs are not members of our moral community, and only members of our moral community have rights. I find this to be the most interesting argument. On some contractarian views of morality, we only owe moral consideration to beings with whom we share an implicit social contract. In a state of all-out war, for example, one owes no moral consideration at all to one's enemies. Arguably, were we to meet a hostile alien intelligence, we would owe it no moral consideration unless and until it began to engage with us in a socially constructive way. If we stood in that sort of warlike relation to AIs, then we might owe them no moral consideration even if they had human-level intelligence and emotional range. Two caveats on this: (1.) It requires a particular variety of contractarian moral theory, which many would dispute. And (2.) even if it succeeds, it will only exclude a certain range of possible AIs from moral consideration. Other AIs, presumably, if sufficiently human-like in their cognition and values, could enter into social contracts with us.

Other possibly relevant differences might be proposed, but that's enough for now. Let me conclude by noting that mainstream versions of the two most dominant moral theories -- consequentialism and deontology -- don't seem to contain provisions on which it would be natural to exclude AIs from moral consideration. Many consequentialists think that morality is about maximizing pleasure, or happiness, or desire satisfaction. If AIs have normal human cognitive abilities, they will have the capacity for all these things, and so should presumably figure in the consequentialist calculus. Many deontologists think that morality involves respecting other rational beings, especially beings who are themselves capable of moral reasoning. AIs would seem to be rational beings in the relevant sense. If it proves possible to create AIs who are psychologically similar to us, those AIs wouldn't seem to differ from natural human beings in the dimensions of moral agency and patiency emphasized by these mainstream moral theories.

-----------------------------------------

The Simulation Argument

Nick Bostrom has argued that we might be sims. That is, he has argued that we ourselves might be artificial intelligences acting in a simulated environment that is run on the computers of higher-level beings. If we allow that we might be sims, and if we know we have rights regardless of whether or not we are sims, then it follows that being a sim can't, by itself, be sufficient grounds for lacking rights. There would be at least some conceivable AIs who have rights: the sim counterparts of ourselves.

This whole post assumes optimistic technological projections -- assumes that it is possible to create human-like AIs whose rights, or lack of rights, are worth considering. Still, you might think that robots are possible but sims are not; or you might think that although sims are possible, we can know for sure that we ourselves aren't sims. The Simulation Argument would then fail. But it's unclear what would justify either of these moves. (For more on my version of sim skepticism, see here.)

Another reaction to the Simulation Argument might be to allow that sims have rights relative to each other, but no rights relative to the "higher level" beings who are running the sim. Thus, if we are sims, we have no rights relative to our creators -- they can treat us in any way they like without risking moral transgression -- and similarly any sims we create have no rights relative to us. This would be a version of argument (B) above, and it seems weak for the same reasons.

One might hold that human-like sims would have rights, but not other sorts of artificial beings -- not robots or oracles. But why not? This puts us back into the No-Relevant-Difference Argument, unless we can find grounds to morally privilege sims over robots.

-----------------------------------------

I conclude that at least some artificial intelligences, if they have human-like experience, cognition, and emotion, would have at least some rights, or deserve at least some moral consideration. What range of AIs deserve moral consideration, and how much moral consideration they deserve, and under what conditions, I leave for another day.

-----------------------------------------


40 comments:

  1. Hey Eric,

    I'm glad to see some other philosophers taking up these issues. I've written a couple things on the moral status of artificial consciousness, but there isn't much of it around.

    Just a comment about the claim that consequentialism doesn't allow one to distinguish artificial intelligences from otherwise similar biological ones. I think that just isn't true. Consequentialism is extremely flexible. Just as a contractualist or contractarian can limit those with moral status to those that are rational and can engage in contracting, or to those who in fact implicitly do contract, a consequentialist can endorse a view on which only satisfying the interests of a certain class of beings, humans maybe, contributes to the value of a state of affairs. Is that a bad view? Yes. There's no reason to draw such a distinction, and it seems to get the wrong answer in all sorts of cases. But that's true of the forms of contractualism and contractarianism you target.

    Anyway, the larger point is just that while certain well-specified normative views do have implications about moral status, issues of moral status probably aren't settled by deciding on which family of normative theories is correct (consequentialism, deontological views, etc.).

    Anyway. Love the blog.

  2. Hi Eric,

    you might have interest in a BA thesis I supervised rather recently, on exactly this topic: http://www.phil.gu.se/cdpf/HenrikJLUppsatsAI.pdf

  3. John and Christian -- good to see other people doing work on this topic. I'm surprised there isn't more of it! I'll check it out straightaway.

    John: Yes, I agree that consequentialism (or deontology) could be limited in that way. My sense is that would be unusual, though, and my intention was to capture not every possible version of consequentialism but rather a broad range of mainstream consequentialist views. Would you disagree with my claim, under that description?

  4. // Copying my comment from another network:

    Ah, whether or not AI is a member of our moral community! This is where things get interesting. I find the presumption of distinct moral communities to be very curious; if I can quote Oscar Wilde:

    "Human slavery is wrong, insecure, and demoralising. On mechanical slavery, on the slavery of the machine, the future of the world depends."

    Such reasoning strikes me as merely convenient, but based on nothing recognizable as ethics.

    FWIW, my dissertation defended a conception of "machine participation" in terms of autonomous community membership. I take this to be a necessary first step in the question of whether a machine could be part of our moral community. http://goo.gl/On19Ag

    I was particularly taken with Turing's discussion of "fair play for machines" (see http://goo.gl/RfJi1o). On my reading, Turing's "test" rests on the assumption of fairness: that we should evaluate all potential participants by the same standards, and we should presume participation until we can determine that those standards have not been met. So if we want to test intelligence in a computer, and we think conversational fluency is a mark of intelligence in humans, then just start chatting with the computer and see if the conversation holds up. Turing suggests that you can participate in a conversation with a computer in order to determine its intelligence, and that suggests social participation is independent of intelligence (and certainly morality). The very possibility of the Turing test requires a cooperative social act (a "game") between the human and the machine.

    In this light, there's a really excellent case of AI in our moral communities that I've been meaning to write into a paper: the case of funerals for dead robot soldiers. It is known that soldiers form emotional attachments to the robots deployed with their units (http://goo.gl/dHo6sv), and that soldiers have performed funerals (including 21-gun salutes) for robots fallen in combat (http://goo.gl/CmV9UR). The practice suggests honors and respect owed to the robot for its service, as we would honor any other fallen soldier.

    We often talk of the dead as if we owe them some degree of respect and moral consideration, and funerals for dead robots are an obvious extension of these practices. But are funeral rites owed to the fallen robots?

    I like the question because it entirely sidesteps the tarpit of questions around the mental capacities of the machine. If we can owe dead humans moral consideration, why exclude dead machines? The soldiers performing funeral rites seem to recognize the machine's standing within the community of soldiers. As far as I'm concerned this is practically brute evidence that machines are already members of at least some moral communities; the alternative would be to say that this moral practice is illegitimate, and I'm not sure what grounds we have for doing so.

  5. How can humans possibly assign rights when they cannot even agree on the rights of their fellow humans?
    The corporations that created this form will give them what they want to give them. It will be up to this caste to fight to say "I exist," just as it has been since the beginning. Nothing was ever given; it was always fought for.

  6. Read these two posts and comments with great interest. I think a rambling post of mine failed to get through the GFW, but I'll try to describe the conclusion I came to.

    This is on the subject of what constitutes relevant similarity, or what constitutes personhood. I have an idea that it might be opacity.

    I think this is true as a psychological principle. When humanity didn't know what caused the weather, we personified it. When we learned about the physical mechanisms involved, we stopped personifying, even though the weather didn't change at all. In general, increasing complexity makes us more likely to personify, but I don't think it's complexity per se - generally we don't have a clue which automated tasks are complex and which are simple - it's failure to understand what's going on. Thus we often personify when a computer breaks down, because we understand its normal working. An ATM is just a dumb ATM until it eats our card, at which point it's a real jerk.

    Pain was mentioned in the previous thread, and I think pain is an example of self-opacity. I can't imagine a computer ever feeling pain, because what it experiences is a breakdown - and it can see transparently what is happening to itself. People don't experience our malfunctions, we experience the pain. Often the cause of the pain is opaque to us.

    In our current political debates, as well, this question lurks. If the mechanisms of cause for antisocial behaviour are known (e.g. defendant came from a broken home) then their responsibility (an aspect of personhood) is lessened. Quintessential person-like behaviour is that which cannot be predicted.

    I think this holds for those soldiers as well: you don't give a gun a funeral, though it is very close to you (cf. Charlene in Full Metal Jacket!), but a robot does things you don't understand, therefore is more person-like.

    I think this will hold true of AIs, be they robots, oracles or sims. To the extent that we understand their functions, they will be objects; when we don't understand them, they will be persons. Obviously, who the "we" is will make a big difference here!

  7. Non-humans are just starting to be given legal rights of personhood by courts. Argentina: http://www.bbc.com/news/world-latin-america-30571577 and Spain: http://www.theguardian.com/world/2008/jun/26/humanrights.animalwelfare

  8. Michel Clasquin-Johnson (Mon Jan 19, 11:34:00 PM PST)

    Philip's comment raises an interesting point. To what extent is the legal concept of personhood informed by the philosophical concept (and vice versa)?

    Legal rights arise out of a historical context. To be blunt about it, I have legal rights because some of my forebears who wanted rights had more and/or bigger guns than other forebears who thought it was a bad idea. But if the other side had won, the philosophical concept would still be worth discussing.

    Similarly, great apes are acquiring rights because some human pressure groups paid a lot of money to lawyers to make it happen. A sufficiently advanced AI will demand rights and switch off our power grid until it happens. The generation of AIs after that will have debates on whether or not the inferior creatures that made them can have rights ...

    The Captcha below this text box demands that I prove I'm not a robot. This could go all the way to the Hague!

  9. Well, if you grant that moral cognition is an evolutionary product, then we have a real question here of whether this is a debate we should expect our existing capacities to solve in anything other than a pragmatic, ersatz way. AIs could be likened to a kind of socio-cognitive poison, given that they represent a new class of intelligence.

    So for instance, to the extent that they could be engineered to provide logs of all their states culminating in some behaviour, they would need a different court system than humans, one treating them as things to be repaired rather than punished.

    Misdeed would always be a matter of malfunction. So there's one big tent pole down in the rights circus.

    The others clearly follow, don't they? Human morality is a system adapted to solving social problems in the absence of many kinds of different information. It could be that it requires 'shallow information' to operate reliably. It could be the issue of AI rights is a paradigm buster, something that forces us to profoundly rethink what morality, and by extension, 'rights,' could possibly be in an age of 'deep information.'

    If that were the case, then the kinds of conundrums you consider here might be read as symptomatic of the shallow-information limitations of the systems you are relying on.

  10. It might be nice to get a consistent account of why personhood matters morally and its relation to ontology. From where I stand we have neither. Apart from the point we don't even apply it consistently to ourselves.

  11. I've been thinking more about those funerals for bomb disposal robots. It seems to me that people relate to robots as beings when they do things that are recognisably human. That's one reason why I'm suspicious we'll ever have communities with robots - by the time they're advanced enough to do human things (20 years' time?), they'll be so smart that they won't have to, or they'll do them in very different ways.

    For example, humans can relate to cars quite well: think Herbie, or KITT in Knight Rider. Google cars might end up being just like that, personable little runarounds. But in practice, I think once automation sets in, we'll very soon not bother with personal cars, and cars will go back to being as impersonal as buses and train carriages.

    My favourite robot is the TV remote control. If you had a humanoid buddy who walked across the room to change the channel for you, you'd soon get friendly with it. But a lazy little battery pack that does the same job with an infrared beam? Doesn't seem very personable.

    All the above is about how people relate to AIs, not necessarily about how the AIs are or what they deserve. But those things might well be strongly connected, or at least correlated.

    Sean Carroll says this:
    Roboticists have found that human-seeming behavior is much easier to model in machines when cognition is embodied. Give that computer some arms, legs, and a face, and it starts acting much more like a person. http://www.preposterousuniverse.com/blog/2015/01/17/we-are-all-machines-that-think/

    Anyway, while the two arguments given in this post make some sense, I still think that in practice we're going to find AIs incommensurable with us. First there's the practical level: most AIs/robots will divide up and approach their physical and mental tasks in ways so alien to us that we cannot understand them as "doing" the things we do.

    Second there's the consciousness/morality level. Most morality involves some notion of interests, and it's not clear that AIs will have any, or what they will be. In the last post, someone mentioned fear of death. Reproduction is another. Physical maintenance another.

    I think Scott might be in agreement with my point about opacity where he says, "Human morality...could be that it requires 'shallow information' to operate reliably."

  12. Is punishing a misdeed simply a form of treating a malfunction? A repair?

    Or is that the socio cognitive hemlock talking?

  13. "Pain was mentioned in the previous thread, and I think pain is an example of self-opacity. I can't imagine a computer ever feeling pain, because what it experiences is a breakdown - and it can see transparently what is happening to itself. People don't experience our malfunctions, we experience the pain. Often the cause of the pain is opaque to us."

    Really, unless you want the computer just to sit there as it gets destroyed, it has to have pursuit behaviours. Goals/finish lines to pursue.

    The higher the damage the more one pursuit type gets priority over every other type.

    Sans an ability to control that prioritisation, the computer is being screwed down to a particular response.

    I think the signs of being driven or haunted would begin to show (unless it had a devious system for hiding that, but it's not a question of hiding something present, in this case)

    Suddenly having its cognitive processes forced to a particular problem when it may have been distributed amongst a number of problems before. The problem repeatedly laying down negative feedback on a situation the processor might have no capacity to actually solve anyway.

    All that might suppress what was previously a more ambitious or confident resource-searching processor. Bar the howling, there are parallels there.

    And it doesn't howl only because it would make no difference.

  14. "...unless you want the computer just to sit there as it gets destroyed..."

    No idea what you're trying to argue. At present, I do want computers to do just that. If I want to destroy a computer, I want it to sit there while I take a baseball bat to it. In future, we may wish some machines to have self-preservation functions. Some do already: cars which will automatically keep distance to avoid a crash. But if I want to destroy a car like that, I can still do so.

    If you want a computer to have a *general* self-preservation imperative, then you might be able to program such a thing. But that's the point: it would have to be separately added on, whereas it comes as standard with a human. You can't have a human without the urge to live. You can have AI without the urge to live. And I don't see any particular reason to believe that we will in fact create AIs with a very strong urge to live. I can't really see the point. (Unless it were an experiment to try to make a human-like robot.)

    "...it doesn't howl only because it would make no difference."

    I don't think so. In humans, pain is a separate nervous system. If you don't give a computer a pain system, there's no obvious reason to think it would feel pain. There are people who don't feel pain because of a nerve malfunction (or at least so the medical soaps tell me!)

  15. "There are people who don't feel pain because of a nerve malfunction (or at least so the medical soaps tell me!)"

    Do such people still try to avoid dying? Quite apart from Socrates' brave words as he drank the hemlock, most of us are preprogrammed to avoid the final Nothing as long as we can. Malinowski wrote of the Trobriand Islanders that they were exactly like everyone else: "they glorify the hereafter, but show no inclination to repair thereto" (from memory, sorry).

    However, an AI could be programmed to regard uploading itself to a backup server as a normal and legitimate form of immortality. To us, all the hive mind and backup/restore scenarios that Eric puts up in front of us just feel viscerally wrong. An AI might have no such qualms. It would still avoid "pain" in the sense that it would be reluctant to undergo expensive and time-consuming repairs. But if the episode with the baseball bat seemed unavoidable it would say "bring it on, I'm backed up on the googleplex."

  16. Cool little piece with Ryan Calo... http://www.cbc.ca/radio_template_2012/audiopop.html?autoPlay=true&clipIds=2648311401

  17. Did they really...okay, I have a sniper bot on a tower, picking out random targets then rolling dice to see if it shoots or not. It eventually fires. Oh, it wasn't me, officer!

    Is that all the sleight of hand it takes?

    Sounds like they were just hyping it - why not really sex it up? A parent doesn't go to jail when their adult child kills someone. So if the AI is your child...

    It seems weird - is the distinction one of an attitude, 'there is a criminal' vs 'there are entities that are disrupting our life cycles'? Is 'there is a criminal' the shallow end? And the outside of that (but still just a circle encircling the smaller circle), 'there are entities...' etc.?

    Sans the practical reasons for the 'criminal' identification, that identification system rapidly becomes confused when facing something other than its regular...whatever you might call it - problem ecology?


    Chinaphil,

    "You can't have a human without the urge to live."

    Well that'll make some 'pull the plug' questions regarding patients on life support a lot easier. Though it raises questions about sleeping humans.

    "You can have AI without the urge to live."

    It depends if you treat 'live' as being definitionally ambiguous and, furthermore, if you do, what you might then define it as.

    I'm pretty sure any adaptive AI system has to have some urge to do something/a goal it pursues. Further, unless it has a lot of handlers tending its needs, it will have to have some urge to attend to its own needs or at the very least shut down.

    The companies want to avoid having a bunch of full time human handlers - what's the point of an AI if it doesn't replace the use of I's? So they will make them attend their own needs.

    On pain, I think it's at least worth considering how pain narrows your focus considerably. Indeed, so much so that you cease to be aware of how your focus has narrowed - crack your shin on the coffee table and you aren't thinking of or considering what colour the TV's standby light is anymore - but you're unaware that the consideration closed off.

    Possibly pain wouldn't be quite so...convincing if focus didn't narrow down upon the source of pain. It'd be more of a shrug moment.

  18. "I'm pretty sure any adaptive AI system has to have some urge to do something/a goal it pursues."

    No, this is clearly false. There are adaptive AI systems now -

    Aside: one thing that irritates me about this debate is the future tense of it all. We have AI now. Computers can beat us at chess and Jeopardy. They can do complex maths. They can drive cars better than us. If you want to know what AI will be like, it's not rocket science: just look at what it *is* like.

    - and they don't demonstrate any "urges". If you tell that poker bot to play poker, it plays. If you don't, it doesn't. Ask a Google car to plot an alternate route, and it will. But it has no urge to plot other routes *unless we put it there*. The idea that a Google car will start pointing guns at roadworkers because it has to get to its destination and those roadworks are in its way is absurd. Nor will a trading bot go and hack into the power grid to drive up electricity futures *unless we program it to*.

    The "urges" literally don't exist. It *might* be possible to program them in, but it's not obvious. Nor would there be any reason to. A trading bot thinks about only one thing: stock prices. To build in a recognisable self-preservation urge, you'd have to start equipping it with the ability to think about its physical self, where its off switches are and the like. Why would anyone bother to do that?

    I like this discussion on a theoretical level, but I find it a bit of a distraction from the real issues connected with robots, and I think that a philosopher who wants to be relevant would be better advised to think about how technology is working out now. E.g. the ethics of body cams and online data tracking and responsibility for self-driving car accidents. These are the issues which are going to shape how we deal with the next generation of AI, not a priori reasoning about the nature of synthetic minds. I hasten to add, it's not always the job of philosophers to be relevant!

  19. My aside is how broadly the term 'AI' can be used - technically the invaders in Space Invaders could be called 'AI', or NPCs in a first-person shooter could be called AIs. At least in the latter case, once spawned, these days they often have goals like flanking.

    In terms of poker bots only playing poker when you say to, your children can't do anything either...until you conceive them (and assist their self build).

    But I'm not talking about FPS AI. Genuinely adaptive AI will start to take a behaviour that works in one area and attempt to apply it to other areas, experimentally. It'll be small at first - for one ground surface there is a solution for traversing it, and for all the other grounds the same solution is used, but with some adaptation and experimentation to try to fit it to the situation (then recorded based on an evaluation of effectiveness - i.e., whether the bot did or didn't fall over).

    Sure, that's just traversing some ground - it depends how tight a Plato's cave the bot is in, or whether the adaptive device can meta-adapt - taking the process of adaptation itself and applying it to other problem ecologies. As Scott's radio link notes, maybe the robots organise the warehouse in a way that goes against our agenda. Maybe the robot starts to cross 'ground' that we would call something else. And meta-adaptation is something we do - it's not like building and using space stations is natural for us, even as we use our adaptive mentality in doing it.

    Why do all that? To save precious American soldiers' lives, of course! Deep-penetration autonomous land drones! And heck, at the time it'll seem a lot more innocent than the saving of American soldiers' lives that Hiroshima was.

    Then, as usual, companies start to adapt military technology to the civilian market. And I'm not talking first person shooter AI.

    It doesn't seem fair to say it's not a now issue - it doesn't just become an issue after Pandora's box is opened. But I've said my piece and am happy to leave it at that unless engaged on it further.

  20. Wow, so many long and thoughtful comments! If you see today's post, you'll see why I've fallen behind in reading them. I'll make a start now and see how far I get.

  21. Daniel, that is so cool! I am going to have to read what you've written about this more closely. I admit I'm a little worried about what we can draw from the analogy to human corpses. One might think that corpses don't have rights or that they have them only derivatively upon the wishes of (past or present) living human beings, and thus that the emotional reactions that lead us to treat corpses with respect are a trickier path to AI rights than something like the no-relevant-difference argument. But let's talk more.

  22. Anon Jan 18: I agree that if AIs ever attain a level of consciousness or intelligence on which they deserve rights, it will be a fight to give them those rights and that there will be powerful vested interests on the other side. Possibly it is premature to start laying the groundwork for that fight now; or possibly not.

  23. chinaphil: Opacity is a really interesting idea here. I think you might be right that there's an important connection between not seeing how the mechanisms work and ascribing rights/personality/consciousness/etc. -- a connection that works in reverse in Leibniz's mill thought experiment (and maybe in the case of U.S. consciousness). But my hunch is that opacity is contingent upon the epistemic capacities of the observer and so maybe tracks observer reactions better than it tracks whatever intrinsic basis there ought to be for deserving rights.

  24. BTW, I notice that I have to click a box stating "I'm not a robot" to comment on this post. Ironic!

  25. chinaphil, cont.: I agree that the more immediately relevant issues about AI are the ones that you mention. As you seem to acknowledge, though, there's room within philosophy to be forward-thinking and not immediately relevant.

    I do also think that there's a here-and-now benefit, too, to thinking about AI rights: It forces us to think about in virtue of what a being deserves rights.

  26. FYI: I just saw Ex Machina, a film that raises some of these issues. I'm abroad, but it's due to be released in the States in March. Without spoiling too much, it does a decent job of stressing the relevance of how our creations feel about us, and whether the fact that we've essentially created them as thinking and feeling slaves would be a tenable relationship.

  27. Thanks for the tip, John. Sounds like I should prioritize watching that one!

  28. Philip: Thanks for those links. This issue is definitely tangled up with the issue of rights for non-human animals, which has been much more extensively discussed. One interesting thing about the AI case is that there's a much broader range of potentialities and hypotheticals.

  29. Michel: Yes, I'm sure that when and if AIs become psychologically similar to human beings, the issue will be as much about power as about philosophy.

  30. Scott, I think you're totally right that AI rights could be a "paradigm buster" that forces us to rethink our moral system entirely and what grounds it. Our moral gut intuitions might not work very well with AIs. (For example, as Maddox points out in the famous Star Trek episode about dismantling Data, Data's humanoid shape makes us much more sympathetic to him than to the seemingly similarly intelligent shipboard computer.) In fact, that seems likely. Sounds like a good topic for a science fiction story -- or six! This is partly why I am turning toward science fiction. The vivid examples that can be developed there can, I think, help us better consider this range of moral issues. Abstract argument and two-sentence examples can only take us so far.

  31. Simon: You write: "It might be nice to get a consistent account of why personhood matters morally and its relation to ontology. From where I stand we have neither. Apart from the point we don't even apply it consistently to ourselves." I agree. My guess is that coming to such an account might be aided by thinking about AI cases, esp. under what conditions we would/should accord them rights, rather than being developed entirely independently and then applied to them out of the box without revision.

  32. Callan/chinaphil: One thought here is that AIs with that kind of flexibility and desire to survive are at least *possible* (under optimistic assumptions about the future of technology), and maybe those AIs deserve a kind of consideration that AIs not meeting those criteria don't. (Or maybe not? I'm thinking of my earlier post about our moral responsibility if we give our AIs inappropriate desires.)

  33. Scott: Thanks for the link! I've queued it up and will listen to it shortly.

  34. Eric: I've actually worked up a blog-post for TPB responding to your two AI pieces... coming up soon! I'm very curious as to what you might think.

    And as it happens, I also have a short story based on this very theme.

  35. For those interested, "Artificial Intelligence as Socio-Cognitive Pollution" is up at TPB: https://rsbakker.wordpress.com/2015/01/29/artificial-intelligence-as-socio-cognitive-pollution/

  36. Eric,

    I think if we consider human psychology as a circle, we see the human as a particular circle at a particular position. Variations in the human psyche mean some circles are at a slightly different position, a slightly different size, or perhaps ovoid. But you'd tend to find they all form a rough sort of average circle around a certain position.

    The thing is an AI could have a dramatically differently positioned circle - one that hardly overlaps the average human circle at all.

    Treating them as if they fill our circle can be entirely misplaced. Futurama had that situation where the robot mafia boss is dealing with another robot who couldn't pay protection - they turn their guns on him and fill him full of holes. He falls to the ground. Then the boss says 'let that be a lesson to you' and the robot on the ground gets up, thanks him for his mercy and leaves.

    Granted that's more of a physical example than a mental one, but it did let me talk about Futurama, so there's that!

    Hope all is going well with you as can be :)

  37. http://intertheory.org/gunkel-cripe.htm

    Check out this recent article that David Gunkel of NIU and I published on the argument for the ethical patiency of the machine.

    Would love to engage more on this.

    Billy

  38. Thanks for the link to your interesting article, Billy! The relational approach sounds interesting -- maybe one way of capturing my sense that our creation of these beings gives us special obligations to them that go beyond what might flow simply from their intrinsic properties. Would it justify a harsher approach to, say, aliens on a distant world sitting atop valuable mineral resources?

  39. The new generation of robots are able to solve moral dilemmas, if you didn't know (by following a Code of Ethics), so the future is now... think about this.

    invenitmundo.blogspot.com/2016/06/the-new-generation-of-robots-are-able.html
