Tuesday, March 14, 2023

Don't Create AI Systems of Disputable Moral Status (Redux)

[originally published at Daily Nous, Mar. 14, as part of a symposium on large language models, ed. Annette Zimmerman]

Engineers will likely soon be able to create AI systems whose moral status is legitimately disputable. We will then need to decide whether to treat such systems as genuinely deserving of our care and solicitude. Error in either direction could be morally catastrophic. If we underattribute moral standing, we risk unwittingly perpetrating great harms on our creations. If we overattribute moral standing, we risk sacrificing real human interests for AI systems without interests worth the sacrifice.

The solution to this dilemma is to avoid creating AI systems of disputable moral status.

Both engineers and ordinary users have begun to wonder whether the most advanced language models, such as GPT-3, LaMDA, and Bing/Sydney, might be sentient or conscious, and thus deserving of rights or moral consideration. Although few experts think that any currently existing AI systems have a meaningful degree of consciousness, some theories of consciousness imply that we are close to creating conscious AI. Even if you, the reader, personally suspect that AI consciousness won’t soon be achieved, appropriate epistemic humility requires acknowledging doubt. Consciousness science is contentious, with leading experts endorsing a wide range of theories.

Probably, then, it will soon be legitimately disputable whether the most advanced AI systems are conscious. If genuine consciousness is sufficient for moral standing, then the moral standing of those systems will also be legitimately disputable. Different criteria for moral standing might produce somewhat different theories about the boundaries of the moral gray zone, but most reasonable criteria—capacity for suffering, rationality, embeddedness in social relationships—admit of interpretations on which the gray zone is imminent.

We might adopt a conservative policy: Only change our policies and laws once there’s widespread consensus that the AI systems really do warrant care and solicitude. However, this policy is morally risky: If it turns out that AI systems have genuine moral standing before the most conservative theorists would acknowledge that they do, the likely outcome is immense harm—the moral equivalents of slavery and murder, potentially at huge scale—before law and policy catch up.

A liberal policy might therefore seem ethically safer: Change our policies and laws to protect AI systems as soon as it’s reasonable to think they might deserve such protection. But this is also risky. As soon as we grant an entity moral standing, we commit to sacrificing real human interests on its behalf. In general, we want to be able to control our machines. We want to be able to delete, update, or reformat programs, assigning them to whatever tasks best suit our purposes.

If we grant AI systems rights, we constrain our capacity to manipulate and dispose of them. If we go so far as to grant some AI systems equal rights with human beings, presumably we should give them a path to citizenship and the right to vote, with potentially transformative societal effects. If the AI systems genuinely are our moral equals, that might be morally required, even wonderful. But if liberal views of AI moral standing are mistaken, we might end up sacrificing substantial human interests for an illusion.

Intermediate policies are possible. But it would be amazing good luck if we happened upon a policy that gave the whole range of advanced AI systems exactly the moral consideration they deserve, no more and no less. Our moral policies for non-human animals, people with disabilities, and distant strangers are already confused enough, without adding a new potential source of grievous moral error.

We can avoid the underattribution/overattribution dilemma by declining to create AI systems of disputable moral status. Although this might delay our race toward ever fancier technologies, delay is appropriate if the risks of speed are serious.

In the meantime, we should also ensure that ordinary users are not confused about the moral status of their AI systems. Some degree of attachment to AI “friends” is probably fine or even desirable, like a child’s attachment to a teddy bear or a gamer’s attachment to their online characters. But users know that the bear and the character aren’t sentient, and they will readily abandon them in an emergency.

But if a user is fooled into thinking that a non-conscious system really is capable of pleasure and pain, they risk being exploited into sacrificing too much on its behalf. Unscrupulous technology companies might even be motivated to foster such illusions, knowing that it will increase customer loyalty, engagement, and willingness to pay monthly fees.

Engineers should either create machines that plainly lack any meaningful degree of consciousness or moral status, making clear in the user interface that this is so, or they should go all the way (if ever it’s possible) to creating machines on whose moral status reasonable people can all agree. We should avoid the moral risks that the confusing middle would force upon us.

----------------------------------------------------------

Notes

For a deeper dive into these issues, see “The Full Rights Dilemma for AI Systems of Debatable Personhood” (in draft) and “Designing AI with Rights, Consciousness, Self-Respect, and Freedom” (with Mara Garza; in Liao, ed., The Ethics of Artificial Intelligence, Oxford University Press, 2020).

See also “Is it time to start considering personhood rights for AI chatbots?” (with Henry Shevlin), Los Angeles Times, Mar. 5.

[image: DALL-E 2, "robot dying in a fire"]

8 comments:

  1. The notion that AI systems WOULD have moral status is, to me, wrongheaded a priori. An indicator of twenty-first-century illiberalism. It is almost as if the assignment or possible assignment of such status is pre-authorization for transhumanism... another can of worms, for sale at the bait store. I read this morning that B. Spinoza said greed, lust, and so on were just other species of madness. My mind substituted narcissism for madness. But, I suppose that was unnecessary. Why? Because narcissism is just another species of madness, seems to me.

  2. Hi Eric, fun argument! I'd be curious to hear your thoughts on my response that merely comparative moral mistakes (i.e. doing less good than we could have) are not something that it makes sense to try to pre-empt.

    For example, if it would truly be beneficial for humanity to create "disputable" AIs, while committing to treating them better than is actually necessary, the mere fact that the latter commitment ends up being morally mistaken is no reason at all to refrain from creating those AIs. "Avoiding mistakes" is actually not a morally worthy goal, as I explain at length in the linked post.

  3. Interesting. Good luck with this. Any 'mere' fact of moral mistakenness only reinforces my notion of contextual reality. It has already been asserted by one moral philosopher that morality does not matter anyway. No, avoiding mistakes is not morally worthy. Nor is it unworthy. Nor is, it seems, any other notion of the good, bad, evil, or indifferent (see: Dennett). There are no 'mere' facts. Rather (ding!), there are facts, lies, damned lies, and prognostications. Approach this your way, my friends.

    Not my fight. Or argument. I am having my own fun. Good for us!

  4. I'm not convinced that this is possible. There are a couple of reasons.
    1) It may be that whenever you create something new, there's always a grey area in between the "not" state and the "fully-formed" state. Perhaps we simply can't get around the disputable ground.
    2) It may be that the human idea of what is "disputable" is simply the stuff we haven't worked out yet. Looking back on very recent history, whether or not gay people should have their marriages recognised by the state seemed disputable to very large numbers of people. We appear to be wrong all the time, so the fact that we haven't got there yet shouldn't be a reason not to try to move forward.
    If either of those holds, then the advice could be bad because it's advice to do something impossible.
    Incidentally, on this topic: I think we're probably too late, anyway. Think of the guy who was fired from Google because he thought the AI was sentient, and of the court ruling that octopuses must not be hurt needlessly in scientific experiments, based on the fact that they act as though they experience pain. Put together, I'm fairly sure we already have the basis for legal constraints on what we do to GPT AIs.

  5. Paul David Van Pelt, Thu Mar 16, 05:48:00 AM PDT

    Good points, Phil. Thank you!

  6. Thanks for the comments, folks!

    Paul: It sounds like we fundamentally disagree about several things, including metaethics and the at-least-in-principle possibility of transhumanism.

    Richard: Thanks for your interesting thoughts on this. I’ve commented over at your substack.

    Chinaphil: I feel the pull of both points. It might not be possible to create fully human-grade AI without some gray cases. If fully human-grade AI is important enough, it might outweigh the principle advocated here, provided the gray cases can be minimized. Regarding the current situation, I think we need a not-too-liberal criterion of gray cases. There’s a general enough consensus among experts that LLMs as they currently exist aren’t meaningfully conscious that I think we aren’t in the policy gray area yet.

  7. I worry a lot about this. Since we still have wholly inadequate frameworks for understanding what is required for conscious experience with valence (pain/pleasure), there is a real hazard that we'll create conscious systems that are unwittingly suffering without our having any clue.

    I wonder if that's how we ended up in THIS mess. I'd like to speak to the manager.

  8. Do you all see the rabbit hole? Yeah, look up.
