In a 2015 article, Mara Garza and I offer the following argument for the rights of some possible AI systems:
Premise 1: If Entity A deserves some particular degree of moral consideration and Entity B does not deserve that same degree of moral consideration, there must be some relevant difference between the two entities that grounds this difference in moral status.
Premise 2: There are possible AIs who do not differ in any such relevant respects from human beings.
Conclusion: Therefore, there are possible AIs who deserve a degree of moral consideration similar to that of human beings.
The argument is, we think, appealingly minimalist, avoiding controversial questions about the grounds of moral status. Does human-like moral status require human-like capacity for pain or pleasure (as classical utilitarians would hold)? Or human-like rational cognition, as Kant held? Or the capacity for human-like varieties of flourishing? Or the right types of social relations?
The No-Relevant-Difference Argument avoids these vexed questions, asserting only that whatever grounds moral status can be shared between robots and humans. This is not an entirely empty claim about the grounds of moral status. For example, the argument commits to denying that membership in the species Homo sapiens, or having a natural rather than artificial origin, is required for human-like moral status.
Compare egalitarianism about race and gender. We needn't settle tricky questions about the grounds of moral status to know that all genders and races deserve similar moral consideration! We need only know this: Whatever grounds moral status, it's not skin color, or possession of a Y chromosome, or any of the other things that might be thought to distinguish among the races or genders.
Garza and I explore four arguments for denying Premise 2 -- that is, for thinking that robots would inevitably differ from humans in some relevant respect. We call these the objections from Psychological Difference, Duplicability, Otherness, and Existential Debt. Today, rather than discussing Premise 2, I want to discuss David Gunkel's objection to our argument in his just-released book, Person, Thing, Robot.
Gunkel acknowledges that the No-Relevant-Difference Argument "turns what would be a deficiency... -- [that] we cannot positively define the exact person-making qualities beyond a reasonable doubt -- into a feature" (p. 91). However, he objects as follows:
The main difficulty with this alternative, however, is that it could just as easily be used to deny human beings access to rights as it could be used to grant rights to robots and other nonhuman artifacts. Because the no relevant difference argument is theoretically minimal and not content dependent, it cuts both ways. In the following remixed version, the premises remain intact; only the conclusion is modified.
Premise 1: If Entity A deserves some particular degree of moral consideration and Entity B does not deserve that same degree of moral consideration, there must be some relevant difference between the two entities that grounds this difference in moral status.
Premise 2: There are possible AIs who do not differ in any such relevant respects from human beings.
Conclusion: Therefore, there are possible human beings who, like AI systems, do not deserve moral consideration.
In other words, the no relevant difference argument can be used either to argue for an extension of rights to other kinds of entities, like AI systems, robots, and artifacts, or, just as easily, to justify dehumanization, reification of human beings, and the exclusion and/or marginalization of others (pp. 91-92, italics added).
This is an interesting objection. However, I reject the appropriateness of the repeated phrase "just as easily", which I have italicized in the block quote.
----------------------------------------------------------------
As the saying goes, one person's modus ponens is another's modus tollens. Suppose you know that A implies B. Modus ponens is an inference rule which assumes the truth of A and concludes that B must also be true. Modus tollens is an inference rule which assumes the falsity of B and concludes that A must also be false. For example, suppose you can establish that if anyone stole the cookies, it was Cookie Monster. If you know that the cookies were stolen, modus ponens unmasks Cookie Monster as the thief. If, on the other hand, you know that Cookie Monster has committed no crimes, modus tollens assures you that the cookies remain secure.
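Put schematically (a minimal rendering in standard propositional notation; reading A as "the cookies were stolen" and B as "Cookie Monster stole them" is one natural way to fit the example):

$$A \to B,\ A \ \vdash\ B \qquad \text{(modus ponens)}$$
$$A \to B,\ \neg B \ \vdash\ \neg A \qquad \text{(modus tollens)}$$

Both rules lean on exactly the same conditional; they differ only in which further premise you take yourself to know.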
Gunkel correctly recognizes that the No-Relevant-Difference Argument can be reframed as a conditional: Assuming that human X and robot Y are similar in all morally relevant respects, then if human X deserves rights, so also does robot Y. This isn't exactly how Garza and I frame the argument -- our framing implicitly assumes that there is a standard level of moral consideration for human beings in general -- but it's a reasonable adaptation for someone who wants to leave open the possibility that different humans deserve different levels of moral consideration.
In general, the plausibility of modus ponens vs. modus tollens depends on the relative security of A vs. not-B. If you're rock-solid sure the cookies were stolen and have little faith in Cookie Monster's crimelessness, then ponens is the way to go. If you've been tracking Cookie all day and know for sure he couldn't have committed a crime, then apply tollens. The "easiness", so to speak, of ponens vs. tollens depends on one's confidence in A vs. not-B.
Few things are more secure in ethics than the claim that at least some humans deserve substantial moral consideration. This gives us the rock-solid A that we need for modus ponens. As long as we are not more certain that all possible robots would not deserve rights than we are that some humans do deserve rights, modus ponens will be the correct move. Ponens and tollens will not be equally "easy".
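To make the two directions explicit, here is a sketch in the same notation, with H and M as my own illustrative abbreviations: H for "this human deserves substantial moral consideration" and M for "this relevantly similar robot deserves substantial moral consideration", the no-relevant-difference premise supplying the conditional.

$$H \to M,\ H \ \vdash\ M \qquad \text{(our direction: modus ponens)}$$
$$H \to M,\ \neg M \ \vdash\ \neg H \qquad \text{(Gunkel's remix: modus tollens)}$$

Which direction to run depends on whether one's confidence in H exceeds one's confidence in not-M, and since H is about as secure as anything in ethics, the two directions are not on a par.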
Still, Gunkel's adaptation of our argument does reveal a potential for abuse, which I had not previously considered, and which I thank him for highlighting. Anyone who is more confident that robots of a certain sort are undeserving of moral consideration than they are of the moral considerability of some class of humans could potentially combine our No-Relevant-Difference principle with an appeal to the supposed robotlikeness of those humans to deny them rights.
I don't think the No-Relevant-Difference principle warrants skepticism on those grounds. Compare the application of a principle like "do unto others as you would have them do unto you". Although one could in principle reason "I want to punch him in the nose, so I guess I should punch myself in the nose", the fact that some people might potentially run such a tollens reveals more about their minor premises than it does about the Golden Rule.
I hope that such an abuse of the principle would be in any case rare. People who want to deny rights to subgroups of humans will, I suspect, be motivated by other considerations, and appealing to those people's putative "robotlikeness" would probably be only an afterthought or metaphor. Almost no one, I suspect, will be on the fence about the attribution of moral status to some group of people and then think, "whoa, now that I consider it, those people are like robots in every morally relevant respect, and I'm sure robots don't deserve rights, so tollens it is". If anyone is tempted by such reasoning, I advise them to rethink the path by which they find themselves with that peculiar constellation of credences.
----------------------------------------------------------------
This looks like a nice example of the classic slide from descriptive to normative. I took your original formulation to be descriptive in content: it was used as a premise of a normative argument, but didn't seem normative in itself.
Gunkel is seeing the normative power in such a description, and criticising it for putative real-world impacts if its normative implications were played out.
Good argument, and good response to Gunkel. I think, though, that you should emphasize robots rather than AIs. Yes, there might be silicon entities that instantiate the causes of our experiences (or similar experiences), and if so, mistreating them would cause them to be in pain, and we would be morally obliged to refrain from mistreatment. But I don't think computational functionalism is right, so I think the project of AI is distinct from the project of making robots that have whatever are the causes of our experiences. So, I think we might make great advances in AI without coming close to instantiating the causes of experiences (even though a different project could, in my view, succeed in principle).
Thanks for the comments, folks!
Chinaphil: Yes, that seems like the right way to read Gunkel. If so, I think the correct approach is to evaluate the No-Relevant-Difference Argument in the context of the Design Policy of the Excluded Middle (don't create gray-area systems) and the Emotional Alignment Design Policy (don't provoke inappropriate emotional reactions). As a bundle, I think they avoid his concerns.
Bill: Right, I do want to leave room for that view. But I also want to leave room for computational functionalism. Since robots are a type of AI, I think the broader description is more ecumenical.
Not relevant to this post, sorry, but I wanted to point out this lovely post from Scott Alexander, offering a textbook account of your theory of moral mediocrity: https://www.astralcodexten.com/p/my-left-kidney
Discussing donating a kidney:
"The point is to reach the people who already want to do it, and make them feel comfortable starting the process.
"20-year-old me was in that category. The process of making him feel comfortable involved fifteen years of meeting people who already done it [i.e. already donated a kidney]...After enough of these people, it no longer felt like something that nobody does, and then I felt like I had psychological permission to do it.
"(obviously saints can do good things without needing psychological permission first, but not everyone has to be in that category, and I found it easier to get the psychological permission than to self-modify into a saint6.)"
There's a little more discussion in the post on the psychological mechanisms around this moral benchmarking. It's really striking how well it agrees with what you've been arguing.
Thanks for the heads-up, chinaphil -- I'll check it out!
PS: The Codex post is terrific -- so insightful, thought-provoking, and, as a bonus, funny. Alexander at his best.