Friday, January 05, 2024

Credence-Weighted Robot Rights?

You're a firefighter in the year 2050 or 2100. You can rescue either one human, who is definitely conscious, or two futuristic robots, who might or might not be conscious. What do you do?

[Illustration by Nicolas Demers, from my newest book, The Weirdness of the World, to be released Jan 16 and available for pre-order now.]

Suppose you think there's a 75% chance that the robots have conscious lives as rich as those of human beings (or, alternatively, that they have whatever else it takes to have "full moral status" equivalent to that of a human). And suppose you think there's a 25% chance that the robots are the moral equivalent of toasters, that is, mere empty machines with no significant capacity for conscious thought or feeling.

Arguably, if you save the robots and let the human die, you maximize the total expected number of humanlike lives saved (.75 * 2 + .25 * 0 = 1.5 expected lives saved, vs. one life for sure if you save the human). Decision-theoretically, it looks similar to choosing an action with a 75% chance of saving two people over an action that will save one person for sure. Applying similar reasoning, if the credences are flipped (25% chance the robots are conscious, 75% they're not), you save the human.
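
Spelling out that calculation (a minimal sketch in Python; the 0.75 credence and the rescue counts are just the stipulations above):

```python
# Expected number of humanlike lives saved under each action, given a
# stipulated credence that the two robots are conscious (0.75 here).
credence = 0.75

expected_if_save_robots = credence * 2 + (1 - credence) * 0  # 1.5
expected_if_save_human = 1.0  # the human is definitely conscious

print(expected_if_save_robots, expected_if_save_human)  # 1.5 1.0
# With the credences flipped (0.25), saving the robots yields only 0.5
# expected lives, so the same reasoning says to save the human.
```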

Generalizing: Whatever concern you have for an ordinary human, or whatever you would give on their behalf, multiply that concern by your credence or degree of belief that the robot has human-like consciousness (or, alternatively, by your credence that it has whatever features justify moral consideration similar to that of a human). If you'd give $5 to a human beggar and you think it's 60% likely that a robot in the same situation has human-like consciousness, give that robot beggar $3. If an oversubscribed local elementary school holds an admissions lottery in which resident human children each get a 50% chance of admission, resident robot children of disputable consciousness would get a proportionately reduced chance.
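
As a sketch, the rule is a single multiplication; the little function below merely restates the beggar and school-lottery examples (the function name is mine and the numbers are illustrative, not a proposal):

```python
def credence_weighted_concern(human_concern: float, credence: float) -> float:
    """Discount the concern owed to a human by one's credence that the
    robot has human-like consciousness (or whatever grounds moral status)."""
    return human_concern * credence

print(credence_weighted_concern(5.00, 0.60))  # beggar: give $3.00 rather than $5.00
print(credence_weighted_concern(0.50, 0.60))  # lottery: a 30% admission chance rather than 50%
```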

Call this approach credence-weighted robot rights.

I see at least three problems with credence-weighted robot rights:

(1.) Credence-weighted robot rights entail that robots will inevitably be treated as inferior, until we are 100% confident that they are our equals.

Of course it's reasonable to treat robots as inferior to humans now. We should save the person, not the robot, in the fire. And of course if we ever create robots who are beyond all reasonable doubt our equals, we should treat them as such. I'm hypothesizing instead a tricky in-between case -- a period during which it's reasonably disputable whether or not our machines deserve full moral status as our equals, a period during which liberals about robot consciousness and robot rights regard robots as our fully conscious moral peers, while conservatives about robot consciousness and robot rights regard them as mindless machines to be deployed and discarded however we wish.

If we choose a 75% chance of rescuing two people over a sure-fire rescue of one person, we are not treating the unrescued person as inferior. Each person's life is worth just as much in our calculus as that of the others. But if we rescue five humans rather than six robots we regard as 80% likely to be conscious, we are treating the robots as inferior -- even though, by our own admission, they are probably not. It seems unfortunate and less than ethically ideal to always treat as inferiors entities we regard as probably our equals.

(2.) Credence-weighted robot rights would engender chaos if people have highly variable opinions. If individual firefighters make such choices based on their personal opinions, then one firefighter might save the two robots while another saves the one human, and each might find the other's decision abhorrent. Stationwide policies might be adopted, but any one policy would be controversial, and robots might face very different treatment in different regions. If individual judges or police were to apply the law differently based on their different individual credences, or on the variable and hard-to-detect credences of those accused of offenses against robots, that would be unfair both to the robots and to the offenders, since the penalty would vary depending on who happened to be the officer or judge, or on whether they travel in social circles with relatively high vs. low opinions of robot consciousness. So presumably there would have to be some regularization by jurisdiction. But still, different jurisdictions might have very different laws concerning the demolition or neglectful destruction of a robot, some treating it as 80% of a homicide, others treating it as a misdemeanor -- and if robot technologies are variable and changing, the law, and people's understanding of the law, might struggle to keep up and to distinguish serious offenses from minor ones.

(3.) Chaos might also ensue from the likely cognitive and bodily diversity of robots. While human cognitive and bodily variability typically keeps within familiar bounds, with familiar patterns of ability and disability, robots might differ radically. Some might be designed with conscious sensory experiences but no capacity for pain or pleasure. Others might experience intense pain or pleasure but lack cognitive sophistication. Others might have no stable goals, or might model their goals wholly on instructions from a human to whom they are gladly, perhaps excessively, subservient, insufficiently valuing their own lives. Still others might be able to merge and divide at will, or back themselves up, or radically reformat themselves, raising questions about the boundaries of the individual and what constitutes death. Some might exist entirely as computational entities in virtual paradises with little practical connection to our world. All this raises the question of what features are necessary for, and what constitutes, "equal" rights for robots, and whether thresholds of equality even make sense. Our understanding might require a controversial multidimensional scalar appreciation of the grounds of moral status.

Other approaches have their own problems. A precautionary principle that grants robots fully equal rights as soon as it's reasonable to think they might deserve them risks sacrificing substantial human interests for machines that very likely don't have interests worth the sacrifice (letting a human die, for example, to save a machine that's only 5% likely to be conscious), and it perhaps makes the question of the grounds of moral status in the face of future robots' cognitive diversity even more troubling and urgent. Requiring proof of consciousness beyond reasonable doubt aggravates the problem of treating robots as subhuman even when we're pretty confident they deserve equal treatment. Treating rights as a negotiated social construction risks denying rights to entities that really do deserve them, based on their intrinsic conscious capacities, if we collectively choose as a matter of social policy not to grant those rights.

The cleanest solution would be what Mara Garza and I have called the Design Policy of the Excluded Middle: Don't create AI systems whose moral status is dubious and confusing. Either create only AI systems that we recognize as property without human-like moral status and rights, and treat them accordingly; or go all the way to creating AI systems with a full suite of features that enable consensus about their high moral status, and then give them the rights they deserve. It's the confusing cases in the middle that create trouble.

If AI technology continues to advance, however, I very much doubt that it will do so in accord with the Design Policy of the Excluded Middle -- and thus we will be tossed into moral confusion about how to treat our AI systems, with no good means of handling that confusion.

-------------------------------------------------------------

Related:

The Weirdness of the World, Chapter 11 (forthcoming), Princeton University Press.

The Full Rights Dilemma for AI Systems of Debatable Moral Personhood, Robonomics, 4 (2023), #32.

How Robots and Monsters Might Break Human Moral Systems (Feb 3, 2015)

Designing AI with Rights, Consciousness, Self-Respect, and Freedom (2020) (with Mara Garza), in S. Matthew Liao, ed., The Ethics of Artificial Intelligence, Oxford University Press.

17 comments:

  1. Eric

    Explain to me where you get the 75% credence from, other than a convenient number for your thought experiment?
    What about robots would make them 75% likely to be conscious? Their spontaneity? A glint in their eye? Their flirtatiousness?

    There will always be, epistemologically, the ability to label any robot behavior as just robots, just as in my youth I was one of those people who thought others did not have consciousness.

  2. Well, Eric, your visual is a thought experiment, with limited variables. I must, repeat, must understand NV's notion of credence, and your remarks, herein, in order to form an informed opinion. Will get on that...other matters have commanded my attention. I'll only say now that creed is an old term. We understand credentials as signifying academically recognized qualification. I'll come back to this, if permitted. Thanks!

  3. So, I looked up credence. The definition was briefly: belief that something is true. OK. Credences give rise to creeds...those may take years to solidify---slow-setting concrete. Back to Davidson, who claimed beliefs were, in his view, propositional. And, they are---not everyone believes the same, about anything. I don't believe a lot of things. As you may recall, I think most people are constructionist, believing that reality is whatever they and their peers say it is. This is functional, or, as I construe it, contextual, reality. By association, there was another post today, talking about the enmity between scientists and historians. Apparently, credences don't add up. It does not make much sense. Leading me to conclude belief and credence are faulty, from the get. Maybe Neil will get this? Look, I need not even consider robots...property, and all.
    Seems to me.

  4. I don’t think we’re truly approaching a problem here, but rather that it merely seems like it given a standard belief that I consider false. The belief is that the more a computer is programmed to seem like it’s conscious, the more that it will be conscious. I consider this to be a non-causal explanation of how brains create consciousness, however, because information should only exist as such to the extent that it informs something appropriate. So if processed brain information informs something to exist as “vision”, “hearing”, “taste”, “thought”, and so on, then what might this information be informing to exist as these sorts of things? What medium might be dynamic enough to exist as human consciousness itself?

    I know of just one aspect of brain function that seems appropriate — the radiation associated with certain parameters of synchronous neuron firing. Not only is firing synchrony the only reliable neural correlate for consciousness found so far, but modern brain-to-computer interface work seems to be proving that consciousness does happen to be electromagnetic. Because EMF detectors in a speech area of a person’s brain can now be used to interpret the 39 phonemes of the English language when the subject is trying to speak, this seems to be strong evidence of EMF consciousness to test further. I recently did a post on this (and mean to do more, though I’ve also been interviewing someone).

    Since our world does remain weird, it’s great that professor S. is finally releasing his new book (which I’ve reviewed and loved!). Hopefully a coming hardening of our soft sciences and philosophy will help normalize our crazy world somewhat.

  5. Well said and lucidly analyzed. I think you are right about credence, vis-a-vis, robots: we can hold whatever belief(s) we wish. Should it transpire that some proof emerges regarding a robot capability for *belief*, I might accept that. Intuition, experience and age advise me I will not be around if/when that occurs. I think (do NOT know) that in order to believe something, beings need to have consciousness, whatever it may be. Ergo, the point is moot for yours truly.

  6. I can't understand why "[c]redence-weighted robot rights entail that robots will inevitably be treated as inferior[.]" If I am a utilitarian then I should treat everyone's pleasure and pain equally. Suppose I have only 75% credence that some action will give Susan 10 units of utility. I will certainly use the value 7.5 as the expected utility for Susan. This doesn't mean that I am treating her as an inferior. Every unit of her utility counts the same as every other person's. It's only that when I'm only 75% sure that she will receive 10 units of utility I should assume 7.5 units in my calculation. The same goes for rights. We treat someone as inferior if we consider their utility/rights/etc. as less valuable, not when we merely multiply it by a credence less than one.

    You might argue that, when it comes to robots, we consistently apply a credence of less than one when multiplying their rights. I'm not sure if that is in any way significant. Imagine someone whose reactions in all circumstances are such that we can, at best, be 75% sure she will experience some utility. Does this imply that we treat her as inferior?

  7. In my humble opinion, I can't understand why you imply robots are equal. I had to correct Robots, twice, to robots. What does that say of the twenty-first century? See, if you can, creations are never equal to their creators. That does not happen in this universe. I might guess your age and experience. It would not matter, inasmuch as it seems you operate within a pattern or contextual reality. The last is my construct. Contextual reality ties in with IMPS. Those are interests, motives and preferences, all of which, we all have. Therefore, or, inasmuch as, or, ergo, what you wish to illustrate through maudlin statistics and graphic illustrations, does not move me. Check your resources and assumptions. Draw better conclusions. Or, not.

  8. Rashid:
    I was not certain whom you were speaking to. But, I will say---on my own behalf---robots are not among the 'everyone(s)' I recognize as everyone(s). Robots, or, AIs, or LLMs are things, not people...created BY people, not the other way 'round. The distinction is clear, to me. Should there be more rational and conclusive evidence to the contrary, I will consider that when it emerges.
    Thanks!

  9. Paul D. Van Pelt:
    My name is not Rashid. Also, I was not replying to your comments. I was commenting on the original post.

  10. I misread the name, sir. My mistake.

  11. Thanks for all the comments, folks!

    Howie: Yes, the 75% was just a stipulation. On the *grounds* for that stipulation -- very difficult. I think we have reason to be skeptical about our grasp of the basis of consciousness, in the sense that we will have a difficult time finding decisive evidence for or against consciousness in highly sophisticated machines.

    Phil Eric: I won't argue that your theory isn't a possibility -- but I hope you'll forgive me for thinking there are many other live possibilities too!

    Paul: "Credence" in philosophical jargon means something like degree of belief or confidence. Sometimes I forget that not everyone understands the jargon! I'm inclined to disagree with your idea that we can believe whatever we want regarding robots. Some beliefs will be more justified than others. I also disagree that creations can never equal their creators. Why should we accept something like that?

    Navid: Right, I'm inclined to think there's an important difference between choosing Person A over Person B to receive X units of pleasure because you think Person B is only 75% likely to receive that pleasure, and the robot case described above. Mathematically, they're the same in the utilitarian calculus, of course, and they're the same in terms of expected utility calculated by your own best estimate.

    I don't think there's a great analogy for the difference I'm pointing toward, but here's an imperfect shot. Suppose some research suggests that people of group X feel pain only 60% as intensely as other people, and suppose that you think the research is 30% likely to be correct. Reacting to that by discounting those people's pain to 88% of the value of others' pain (.7 * 1 + .3 * .6) would be to systematically devalue them on the basis of research you think is probably false. A utilitarian could stick by their guns, saying that's the right decision, but to me it's non-obvious and has a very different flavor from choosing to help one person over another based on an estimate of the likelihood that the attempt to help will be successful.
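
    Making that arithmetic explicit (a minimal sketch in Python, using only the numbers stipulated above):

    ```python
    # Credence-weighted discount on group X's pain: a 30% credence in
    # research saying X's pain is felt at only 60% intensity.
    credence_in_research = 0.30
    weight_on_X_pain = (1 - credence_in_research) * 1.0 + credence_in_research * 0.6
    print(weight_on_X_pain)  # ~0.88: X's pain systematically counted at 88% of others'
    ```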

  12. Thank you for your reply. What I find weird is that it seems totally unproblematic if on one or two or ten occasions I think person A is 30% likely to feel less pain and on some other occasions I think the same for person B, etc. We do this all the time, since we are almost never certain about how much pleasure or pain others will receive (which is highly correlated with their psychology, preferences, etc.). But when this becomes systematic, when our credence is less than 1 about a group's amount of pain and pleasure, a group's possession of rights, and so on, then discounting according to our credence means we are treating them as inferior. I take the fact that when we do this unsystematically we are not treating a group as inferiors as pretty strong evidence that even if we do this systematically we will still not be treating them as inferiors. Note that such unsystematic discounting of someone's claims is also something we frequently do when we are unsure whether someone has a right to something, and it seems to be completely OK. So I don't think you can reply that in the case of rights even doing such a thing unsystematically is not OK.

  13. Acknowledgement, rebuttal, and anything positive, thereby derived:
    * thanks for clearing up the distinction in how credence is defined. Jargon can be problematic, and I understand that. Hope Neil is reading and tuned in.
    * I did say ...we can believe whatever we want... but, I did not attach that to what is, or is not, believed about robots, although, I might have, and not flinched a millimeter. Another commenter chastised me for identifying him/her as Rashid. That identification appeared in comments I had read. I later noted the name as Navid---miscommunication, abetted by internet lag and inaccessibility. People are touchy about identity. Further understood. It is a major thing, in the twenty-first century.
    * putting on an ecclesiastical cap, something I rarely do, are we, as creations of "God", equal to that? I don't think so. If you follow Ed Feser's post, this is abundantly clear, and reinforces my assertion about believing whatever we want. Have not read much there on robots; AI or LLMs. Dr. Feser is divinity first, philosophy, second. All good. The National Review could be wrong---but, maybe not all wrong?

    I wanted to keep this brief. It did not work out. Lo siento mucho. Insofar as you allow me here, thanks again. I hope Perry gets around to writing...the opportunity comes only once, in a lifetime...

  14. Regarding your comment about truly approaching a problem: I think you nailed it. Just wanted to see if you might reach a destination I had considered. Thanks.

  15. Paul: You are definitely welcome here! Thanks for your thoughtful comments.

    Navid: Right, I think systematic discounting is somehow more problematic than unsystematic discounting. I should think more about *why* I think so, but here's a start: When it's unsystematic, we should expect chance to sometimes break in favor of you or your group and sometimes against, in a way that might balance over time or at least partly tend toward balancing over time, and there's something more fair/appealing/egalitarian about that than in systematic cases where the luck will never, so to speak, break your way.
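
    If it helps, here's a toy simulation of that difference (purely illustrative; the 0.75 discount and the two-group setup are my own stipulations, not anything argued for above): when the discount falls on a random group each time, each group's average treatment tends toward the same value, whereas under systematic discounting one group is always the loser.

    ```python
    import random

    def average_weight(systematic: bool, trials: int = 100_000) -> float:
        """Average weight that group A's interests receive when an
        (arbitrary, illustrative) 0.75 discount is applied either always
        to A (systematic) or to a randomly chosen group (unsystematic:
        A or B, equal chance)."""
        total = 0.0
        for _ in range(trials):
            discount_falls_on_A = systematic or random.random() < 0.5
            total += 0.75 if discount_falls_on_A else 1.0
        return total / trials

    print(average_weight(systematic=True))   # ~0.75: the luck never breaks A's way
    print(average_weight(systematic=False))  # ~0.875: losses partly balance out over time
    ```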

  16. Probabilities are sort of like the Universe playing dice. We know God does not play dice because a great man said so. The Universe, and chance along with it, did not get the memo. Ergo, chance may not actually favor the prepared mind, but being able to roll with the punches of contingency never hurts. Well, not so much, anyway.

  17. Howie remarked, back at the get-go here, that when younger, he thought other people did not have consciousness. I can appreciate that. In considering matters now, in our braver, newer world, I have posited that many people are not *responsively conscious*.
    I take that view, while watching people whose attention seldom extends beyond the smart device(s) held in front of their face: They fall from cliffs, into holes and walk into the paths of cars, trusting everyone to look out for their safety and well-being. Balderdash and twaddle. There are too many distractions, and self-centered narcissists demand more than they deserve. When I am driving, I have other serious concerns, such as the oaf swerving into my lane to make an opposing turn. I did not learn that in driver's ed. No. I was expected to drive defensively: to be responsively conscious. And responsibly aware. Where did that go, in around half a century? No hay de que eso. The machine is getting better with Spanish. I will not confuse it with French... I understand there is no driver's ed now. Too bad for us all.
