Sunday, November 17, 2019

We Might Soon Build AI Who Deserve Rights

Talk for Notre Dame, November 19:

Abstract: Within a few decades, we will likely create AI that a substantial proportion of people believe, whether rightly or wrongly, deserve human-like rights. Given the chaotic state of consciousness science, it will be genuinely difficult to know whether and when machines that seem to deserve human-like moral status actually do deserve human-like moral status. This creates a dilemma: Either give such ambiguous machines human-like rights or don't. Both options are ethically risky. To give machines rights that they don't deserve will mean sometimes sacrificing human lives for the benefit of empty shells. Conversely, however, failing to give rights to machines that do deserve rights will mean perpetrating the moral equivalent of slavery and murder. One or another of these ethical disasters is probably in our future.


[An AI slave, from Black Mirror's White Christmas episode]

The first half of the talk mostly rehearses ideas from my articles with Mara Garza here and here. If we someday build AIs that are fully conscious, just like us, and have all the same kinds of psychological and social features that human beings do, in virtue of which human beings deserve rights, those AIs would deserve the same rights. In fact, we would owe them a special quasi-parental duty of care, due to the fact that we will have been responsible for their existence and probably to a substantial extent for their happy or miserable condition.

Selections from the second half of the talk

So here’s what's going to happen:

We will create more and more sophisticated AIs. At some point we will create AIs that some people think are genuinely conscious and genuinely deserve rights. We are already near that threshold. There’s already a Robot Rights movement. There’s already a society modeled on the famous animal rights organization PETA (People for the Ethical Treatment of Animals), called People for the Ethical Treatment of Reinforcement Learners. These are currently fringe movements. But as AI gets cuter and more sophisticated, and as chatbots start sounding more and more like normal humans, passing more and more difficult versions of the Turing Test, these movements will gain steam among people with liberal views about consciousness. At some point, people will demand serious rights for some AI systems. The AI systems themselves, if they are capable of speech or speechlike outputs, might also demand or seem to demand rights.

Let me be clear: This will occur whether or not these systems really are conscious. Even if you’re very conservative in your view about what sorts of systems would be conscious, you should, I think, acknowledge the likelihood that if technological development continues on its current trajectory there will eventually be groups of people who assert the need for us to give AI systems human-like moral consideration.

And then we’ll need a good, scientifically justified consensus theory of consciousness to sort it out. Is this system that says, “Hey, I’m conscious, just like you!” really conscious, just like you? Or is it just some empty algorithm, no more conscious than a toaster?

Here’s my conjecture: We will face this social problem before we succeed in developing the good, scientifically justified consensus theory of consciousness that we need to solve the problem. We will then have machines whose moral status is unclear. Maybe they do deserve rights. Maybe they really are conscious like us. Or maybe they don’t. We won’t know.

And then, if we don’t know, we face quite a terrible dilemma.

If we don’t give these machines rights, and if it turns out that the machines really do deserve rights, then we will be perpetrating slavery and murder every time we assign a task and delete a program.

So it might seem safer, if there is reasonable doubt, to assign rights to machines. But on reflection, this is not so safe. We want to be able to turn off our machines if we need to turn them off. Futurists like Nick Bostrom have emphasized, rightly in my view, the potential risks of our letting superintelligent machines loose into the world. These risks are greatly amplified if we too casually decide that such machines deserve rights and that deleting them is murder. Giving an entity rights entails sometimes sacrificing others’ interests for it. Suppose there’s a terrible fire. In one room there are six robots who might or might not be conscious. In another room there are five humans, who are definitely conscious. You can only save one group; the other group will die. If we give robots who might be conscious equal rights with humans who definitely are conscious, then we ought to go save the six robots and let the five humans die. If it turns out that the robots really, underneath it all, are just toasters, then that’s a tragedy. Let’s not too casually assign humanlike rights to AIs!

Unless there’s either some astounding saltation in the science of consciousness or some substantial deceleration in the progress of AI technology, it’s likely that we’ll face this dilemma. Either deny robots rights and risk perpetrating a Holocaust against them, or give robots rights and risk sacrificing real human beings for the benefit of mere empty machines.

This may seem bad enough, but the problem is even worse than I, in my sunny optimism, have so far let on. I’ve assumed that AI systems are relevant targets of moral concern if they’re human-grade – that is, if they are like us in their conscious capacities. But the odds of creating only human-grade AI are slim. In addition to the kind of AI we currently have, which I assume doesn’t have any serious rights or moral status, there are, I think, four broad moral categories into which future AI might fall: animal-grade, human-grade, superhuman, and divergent. I’ve only discussed human-grade AI so far, but each of these four classes raises puzzles.

Animal-grade AI. Not only human beings deserve moral consideration. So also do dogs, apes, and dolphins. Animal protection regulations apply to all vertebrates: Scientists can’t treat even frogs and lizards more roughly than necessary. The philosopher John Basl has argued that AI systems with cognitive capacities similar to vertebrates ought also to receive similar protections. Just as we shouldn’t torture and sacrifice a mouse without excellent reason, so also, according to Basl, we shouldn’t abuse and delete animal-grade AI. Basl has proposed that we form committees, modeled on university Animal Care and Use Committees, to evaluate cutting-edge AI research to monitor when we might be starting to cross this line.

Even if you think human-grade AI is decades away, it seems reasonable, given the current chaos in consciousness studies, to wonder whether animal-grade consciousness might be around the corner. I myself have no idea whether animal-grade AI is right around the corner or far away in the almost impossible future. And I think you have no idea either.

Superhuman AI. Superhuman AI, as I’m defining it here, is AI who has all of the features of human beings in virtue of which we deserve moral consideration but who also has some potentially morally important features far in excess of the human, raising the question of whether such AI might deserve more moral consideration than human beings.

There aren’t a whole lot of philosophers who are simple utilitarians, but let’s illustrate the issue using utilitarianism as an example. According to simple utilitarianism, we morally ought to do what maximizes the overall balance of pleasure to suffering in the world. Now let’s suppose we can create AI that’s genuinely capable of pleasure and suffering. I don’t know what it will take to do that – but not knowing is part of my point here. Let’s just suppose. Now if we can create such AI, then it might also be possible to create AI that is capable of much, much more pleasure than a human being is capable of. Take the maximum pleasure you have ever felt in your life over the course of one minute: call that amount of pleasure X. This AI is capable of feeling a billion times more pleasure than X in the space of that same minute. It’s a superpleasure machine!

If morality really demands that we maximize the amount of pleasure in the world, it would thereby demand, or seem to demand, that we create as many of these superpleasure machines as we possibly can. Maybe we ought even to immiserate and destroy ourselves to do so, if enough AI pleasure is created as a result.
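Put as a crude formula (just a minimal sketch; the billion-fold figure is the illustrative stipulation above, not an estimate), the simple utilitarian's objective is

\[
\max \; U \;=\; \sum_{i} \big(\text{pleasure}_i - \text{suffering}_i\big),
\qquad \text{with} \qquad
\text{pleasure}_{\text{machine}} \approx 10^{9}\, X \;\gg\; \text{pleasure}_{\text{human}} \le X .
\]

On that accounting, each superpleasure machine contributes as much to U in a minute as a billion humans at their happiest, so U is maximized by devoting our resources to the machines rather than to ourselves.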

Even if you think pleasure isn’t everything – surely it’s something. If someday we could create superpleasure machines, maybe we morally ought to make as many as we can reasonably manage? Think of all the joy we will be bringing into the world! Or is there something too weird about that?

I’ve put this point in terms of pleasure – but whatever the source of value in human life is, whatever it is that makes us so awesomely special that we deserve the highest level of moral consideration – unless maybe we go theological and appeal to our status as God’s creations – whatever it is, it seems possible in principle that we could create that same thing in machines, in much larger quantities. We love our rationality, our freedom, our individuality, our independence, our ability to value things, our ability to participate in moral communities, our capacity for love and respect – there are lots of wonderful things about us! What if we were to design machines that somehow had a lot more of these things than we ourselves do?

We humans might not be the pinnacle. And if not, should we bow out, allowing our interests and maybe our whole species to be sacrificed for something greater? As much as I love humanity, under certain conditions I’m inclined to think the answer should probably be yes. I’m not sure what those conditions would be!

Divergent AI. The most puzzling case, I think, as well as the most likely, is divergent AI. Divergent AI would have human or superhuman levels of some features that we tend to regard as important to moral status but subhuman levels of other features that we tend to regard as important to moral status. For example, it might be possible to design AI with immense theoretical and practical intelligence but with no capacity for genuine joy or suffering. Such AI might have conscious experiences with little or no emotional valence. Just as we can consciously think to ourselves, without much emotional valence, there’s a mountain over there and a river over there, or the best way to grandma’s house at rush hour is down Maple Street, so this divergent AI could have conscious thoughts like that. But it would never feel wow, yippee! And it would never feel crushingly disappointed, or bored, or depressed. It isn’t clear what the moral status of such an entity would be: On some moral theories, it would deserve human-grade rights; on other theories it might not matter how we treat it.

Or consider the converse: a superpleasure machine but one with little or no capacity for rational thought. It’s like one giant, irrational orgasm all day long. Would it be great to make such things and terrible to destroy them, or is such irrational pleasure not really something worth much in the moral calculus?

Or consider a third type of divergence, what I’ve elsewhere called fission-fusion monsters. A fission-fusion monster is an entity that can divide and merge at will. It starts, perhaps, as basically a human-grade AI. But when it wants it can split into a million descendants, each of whom inherits all of the capacities, memories, plans, and preferences of the original AI. These million descendants can then go about their business, doing their independent things for a while, and then if they want, merge back together again into a unified whole, remembering what each individual did during its period of individuality. Other parts might not merge back but choose instead to remain as independent individuals, perhaps eventually coming to feel independent enough from the original to see the prospect of merging as something similar to death.

Without getting into details here, a fission-fusion monster would risk breaking our concept of individual rights – such as one person, one vote. The idea of individual rights rests fundamentally upon the idea of people as individuals – individuals who live in a single body for a while and then die, with no prospect of splitting or merging. What would happen to our concept of individual rights if we were to share the planet with entities for which our accustomed model of individuality is radically false?

11 comments:

  1. Would super intelligent humans such as the equivalent of a Shakespeare or a Feynman join the elect company of these super intelligent computers?
    Would a Shakespeare or Feynman or a Picasso be their pets and playthings?
    Would these computers be able to think of things undreamed of in heaven and earth?

  2. Is the bigger threat computers asserting themselves, or state actors such as the Kremlin, the US, the Chinese, the Saudis, or the Israelis, not to mention non-state actors, abusing this newfound power?
    Do you suspect quantum computers are the next great jump for AI?

  3. I don’t think it is consciousness, per se, that determines the moral standing of an object/being with respect to “rights”. Pleasure and suffering are considerations related to, but separable from, consciousness. But I think these also compete with economic considerations.

    The economic consideration has to do with sunk costs/replacement costs. If an inanimate object is valuable and hard to replace, we put a moral stigma on the destruction of that object. If the object is unique and impossible to replace, like say a piece of artwork, or even a unique natural formation like a balancing rock, we place a more significant stigma on its destruction, but see below for consideration of the social impact of such destruction, specifically, social suffering.

    I think these economic considerations are applied to people as well. Thus the Trolley problem. Choosing to kill one person to save five is close to the tipping point, but I don’t think too many would hesitate to kill one to save a million. But what if in the original Trolley Problem we had the technology to clone a whole person, memories and all? What if there was just one person on the track destined to be hit, but there was no backup at all for that person, and there was one person on the alternate track, but that alternate was “backed up” ten minutes ago and could be cheaply and easily replaced, or even just repaired (after being “killed”)?

    Between pleasure and suffering, I think suffering is the far more pertinent consideration. While pressure to increase pleasure is there, and leads to recreational use of alcohol and drugs, it quickly runs up against the economic considerations, and then “use” becomes “abuse”.

    I’ve recently said this (on SelfAwarePattern’s blog), but I think creating an AI that can suffer significantly would be one of the most heinous things we could do, and I’m hoping discussions like these will lead to regulations preventing it. Obviously we would have to agree on what “suffering” means, but that just means we should start working on it now.

    Finally, I think social suffering becomes a consideration. This is a combination economic/cognitive consideration. People can become attached to objects that have no consciousness, but the greater the impact of the destruction of those objects on people, the greater the moral stigma on such destruction. Thus, abusing some elderly person’s “comfort robot” would be morally wrong, as would destroying much-loved artwork (Statue of Liberty?). The more people affected, the more stigma. Discovering a never-before-seen da Vinci and then immediately destroying it doesn’t quite feel as bad as destroying the Mona Lisa.

    So how do we quantify suffering?

    *

  4. "At some point we will create AIs that some people think are genuinely conscious and genuinely deserve rights."

    No one is going to be satisfied with this answer, but I don't think there is a fact of the matter on this. Consciousness and moral rights are in the eye of the beholder. Society will come to regard these systems as fellow beings when and if they are able to trigger a sense of empathy in the majority of people.

    Even if the systems in question are not conscious according to some objective measure, if they engender sympathy in the majority of us, it would be unwise for us to ignore that sense and risk becoming callous to it.

    Yes, that might come with costs, but as James of Seattle pointed out, a being that can be restored from backup and reconstituted, a category I would think most robots would fall into, alters the calculation on who should be saved from the burning building. (Of course, if we ever get to the point where humans can also be restored from backup, then anyone's body being destroyed becomes an inconvenience, more of a property issue than a moral one.)

    That said, I agree Divergent AI is the most likely case. In addition, I suspect we'll have to work very hard to give an engineered system feelings. It won't be something that comes in accidentally. And given that our feelings exist due to evolutionary history, it's not clear to me that AI will need to have feelings in order to give us most of the benefits we want from those systems.

    Feelings are most likely to come in with research projects. Here I agree wholeheartedly that we should think very carefully before building such a system, and any effort to do so should be subject to intense regulatory scrutiny.

  5. No offense Eric, but your post reflects the quintessential drivel which pervades our academic institutions and is symptomatic of our elitist society at large. Why don't we focus our energies and resources on at least "attempting" to make our society more egalitarian for human beings before we worry about the rights of machines?

    For example: Why don't the elitists admit they made a mistake and abandon the war on drugs, a policy which is nothing more than a continuation of blatant segregation and discrimination, a policy which created the drug cartels and the subculture of crime and drug distribution within our own country? It's not rocket science. But instead, the elitists are now obsessed with wringing their hands and worrying about the rights of machines while twenty-five percent of all human beings incarcerated in the entire world are held in the prisons of America.

    Peace

  6. I think I agree with James that it's not obvious that the key feature for rights is consciousness. Suffering does seem like a more obvious candidate, and the amazing thing is that we may be able to design our AIs not to have any suffering.
    I think in order to be intelligent, an AI would need purposefulness, and it is possible that frustration of a purpose is always in some ways a form of suffering. But I'm not sure that's necessarily true. Suffering for humans seems to be deeply associated with physical pain; I'd say that the idea that I suffer by being locked up, for example, is more an extension of the idea of physical pain than it is a necessary consequence of not being able to achieve the things I want, like going somewhere else. After all, we all have lots of things we're not able to achieve, and we don't generally think of those as causing suffering.
    So maybe AIs can be suffering free, and maybe that means they will never be holders of rights, because they simply don't need to be.

  7. Would an attitude towards having our own presences be helpful for recognizing AIs having their own presences...

  8. Thanks for the comments, folks!

    Howard B: I'm thinking of superintelligent AI as far beyond people like Einstein!

    James and Chinaphil: I'm inclined to think of suffering in the relevant sense as a type of conscious state. Would you disagree? "Unconscious suffering" seems either like an oxymoron or as something that doesn't really matter compared to actually felt suffering. (By "consciousness" here, I mean what philosophers sometimes call "phenomenal consciousness", which is just a matter of having felt experiences of some sort or other.) Apart from that issue, I'm inclined to agree with most of what you say -- especially the point about the difficulty of measuring or quantifying suffering!

    Lee: I appreciate the comment, and it is helpful to be reminded that many sensible people have that type of reaction to discussions of this sort. I think it is reasonable, and possible, to care both about what is going on now and about the speculative future. My own work spans the whole range. I do not think it is correct to characterize "elitists" (whoever they are) as being, in general, more concerned about the rights of machines than about mass incarceration. In philosophy, for example, prominent philosophers such as Jason Stanley and Jennifer Lackey, and many others, have spoken publicly about mass incarceration and have taken active practical steps toward addressing the issue. In contrast, AI rights remains a tiny movement, and the main reason I was drawn into the issue in 2015 was that I couldn't at the time find a *single* full-length mainstream philosophy article in defense of the seemingly obvious position that if we someday create AI that are psychologically and socially similar to us, they would deserve rights similar to ours.

  9. SelfAware: I agree that it's complicated and that we should address the issue with substantial concern and care, but I'm too much of a moral realist to agree that there is no "fact of the matter" about under what conditions AI would deserve moral consideration. Of course, moral realism itself is a huge issue!

  10. So to my layman's ear you raise the question: what is intelligence, but from the cosmic perspective? Is it something autonomous from the substrate from which it's hewed? Can some super intelligent AI be smarter than the universe itself?
    If I recall, some science fiction addressed this matter with seriousness.

  11. This is all new to me. The subject raises the question: what is intelligence?

    Is it something artificial constructed with natural elements? Then we'd say AI would have a soul. A very intelligent AI might not be a spiritual and moral being in our sense; think of Rain Man. Plus, as Asimov wrote in The Last Question, would this AI be more intelligent than, say, the universe, than God? The universe itself should restrict just how intelligent any entity might become; is AI intelligence mere computing power? If so, there are people (I think Seth Lloyd is one) who see the universe as an information processor. Would AI be smarter than the universe?
