Friday, April 26, 2019

Animal Rights for Animal-Like AIs?

by John Basl and Eric Schwitzgebel

Universities across the world are conducting major research on artificial intelligence (AI), as are organisations such as the Allen Institute, and tech companies including Google and Facebook. A likely result is that we will soon have AI approximately as cognitively sophisticated as mice or dogs. Now is the time to start thinking about whether, and under what conditions, these AIs might deserve the ethical protections we typically give to animals.

Discussions of ‘AI rights’ or ‘robot rights’ have so far been dominated by questions of what ethical obligations we would have to an AI of humanlike or superior intelligence – such as the android Data from Star Trek or Dolores from Westworld. But to think this way is to start in the wrong place, and it could have grave moral consequences. Before we create an AI with humanlike sophistication deserving humanlike ethical consideration, we will very likely create an AI with less-than-human sophistication, deserving some less-than-human ethical consideration.

We are already very cautious in how we do research that uses certain nonhuman animals. Animal care and use committees evaluate research proposals to ensure that vertebrate animals are not needlessly killed or made to suffer unduly. If human stem cells or, especially, human brain cells are involved, the standards of oversight are even more rigorous. Biomedical research is carefully scrutinised, but AI research, which might entail some of the same ethical risks, is not currently scrutinised at all. Perhaps it should be.

You might think that AIs don’t deserve that sort of ethical protection unless they are conscious – that is, unless they have a genuine stream of experience, with real joy and suffering. We agree. But now we face a tricky philosophical question: how will we know when we have created something capable of joy and suffering? If the AI is like Data or Dolores, it can complain and defend itself, initiating a discussion of its rights. But if the AI is inarticulate, like a mouse or a dog, or if it is for some other reason unable to communicate its inner life to us, it might have no way to report that it is suffering.

A puzzle and difficulty arises here because the scientific study of consciousness has not reached a consensus about what consciousness is, and how we can tell whether or not it is present. On some views – ‘liberal’ views – consciousness requires nothing but a certain type of well-organised information-processing, such as a flexible informational model of the system in relation to objects in its environment, with guided attentional capacities and long-term action-planning. We might be on the verge of creating such systems already. On other views – ‘conservative’ views – consciousness might require very specific biological features, such as a brain very much like a mammal brain in its low-level structural details: in which case we are nowhere near creating artificial consciousness.

It is unclear which type of view is correct or whether some other explanation will in the end prevail. However, if a liberal view is correct, we might soon be creating many subhuman AIs who will deserve ethical protection. There lies the moral risk.

Discussions of ‘AI risk’ normally focus on the risks that new AI technologies might pose to us humans, such as taking over the world and destroying us, or at least gumming up our banking system. Much less discussed is the ethical risk we pose to the AIs, through our possible mistreatment of them.

This might sound like the stuff of science fiction, but insofar as researchers in the AI community aim to develop conscious AI or robust AI systems that might very well end up being conscious, we ought to take the matter seriously. Research of that sort demands ethical scrutiny similar to the scrutiny we already give to animal research and research on samples of human neural tissue.

In the case of research on animals and even on human subjects, appropriate protections were established only after serious ethical transgressions came to light (for example, in needless vivisections, the Nazi medical war crimes, and the Tuskegee syphilis study). With AI, we have a chance to do better. We propose the founding of oversight committees that evaluate cutting-edge AI research with these questions in mind. Such committees, much like animal care committees and stem-cell oversight committees, should be composed of a mix of scientists and non-scientists – AI designers, consciousness scientists, ethicists and interested community members. These committees will be tasked with identifying and evaluating the ethical risks of new forms of AI design, armed with a sophisticated understanding of the scientific and ethical issues, weighing the risks against the benefits of the research.

It is likely that such committees will judge all current AI research permissible. On most mainstream theories of consciousness, we are not yet creating AI with conscious experiences meriting ethical consideration. But we might – possibly soon – cross that crucial ethical line. We should be prepared for this.

[originally posted on Aeon Ideas]

13 comments:

Drake Thomas said...

Relevant: People for the Ethical Treatment of Reinforcement Learners.

Eric Schwitzgebel said...

Yes! They interviewed Mara Garza and me a few years back after we published our first paper on this topic. I don’t know if reinforcement learning is enough for suffering, though, since theories of consciousness are so contentious.

SelfAwarePatterns said...

I think it's right to start thinking about how AIs might compare to animals. The incessant comparisons with humans are far too much of a leap. I'm not sure we're anywhere near dogs and mice yet, though. Do we have an AI with the spatial and navigational intelligence of a fruit fly, a bee, or a fish? Maybe mammals are still too much of a leap.

But it seems like there is a need for a careful analysis of what a system needs to be a subject of moral concern. Saying it needs to be conscious isn't helpful because there are legions of definitions of consciousness. I like the capacity-for-joy-and-suffering criterion; suffering in particular seems like the relevant one.

But what is suffering? The Buddhists seemed to identify desire as the main ingredient, a desire that can't be satisfied. They argue that we should convince ourselves out of such desires, but not all desires seem volitional. I don't believe I can really stop desiring not to be injured.

For example, if I sustain an injury, the signal from the injury conflicts with my desire for my body to be whole and functional. I will have an intense reflexive desire to do something about it. Intellectually I might know that there's nothing I can do but wait to heal. During the interim, I have to continuously inhibit the reflex, which takes energy. But the reflex continuously fires anyway and continuously needs to be inhibited. This is suffering.

But involuntary desires seem like something we have due to the way our minds evolved. Would we build machines like this (aside from cases where we're explicitly attempting to replicate animal cognition)? It seems like machine desires could be discharged in a way that primal animal desires can't: by the system learning that the desire can't be satisfied at all. Once that's known, it's not productive for one part of the system to keep needling another part to resolve it.

So if a machine sustains damage it can't fix, it's not particularly productive for the machine's control center to continuously cycle through reflex and inhibition. One signal that the situation can't be resolved should quiet the reflex, at least for a time (a rough toy sketch of the contrast is below).
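
To make the contrast concrete, here's a toy sketch in Python -- purely an illustration of the idea, not anyone's actual architecture; every name in it (Damage, AnimalLikeAgent, MachineLikeAgent) is invented for the example.

    # Toy illustration only: an "animal-like" reflex that must be inhibited on
    # every tick, versus a "machine-like" design where one cannot-be-resolved
    # signal quiets the reflex. All names here are made up for this example.
    from dataclasses import dataclass

    @dataclass
    class Damage:
        location: str
        repairable: bool

    class AnimalLikeAgent:
        """The reflex fires every tick; control must keep inhibiting it."""
        def __init__(self):
            self.inhibition_energy_spent = 0

        def tick(self, damage: Damage) -> str:
            if damage.repairable:
                return "attempt repair"
            # Reflex fires anyway; continuous suppression is the 'suffering'.
            self.inhibition_energy_spent += 1
            return "inhibit reflex (costly)"

    class MachineLikeAgent:
        """One cannot-be-resolved signal quiets the reflex, at least for a while."""
        def __init__(self):
            self.quieted = set()  # damage locations whose reflex has been quieted

        def tick(self, damage: Damage) -> str:
            if damage.location in self.quieted:
                return "reflex quiet; carry on with other goals"
            if damage.repairable:
                return "attempt repair"
            self.quieted.add(damage.location)  # learn the desire can't be satisfied
            return "log damage and quiet the reflex"

    if __name__ == "__main__":
        dent = Damage(location="left chassis", repairable=False)
        animal, machine = AnimalLikeAgent(), MachineLikeAgent()
        for _ in range(3):
            print("animal :", animal.tick(dent))
            print("machine:", machine.tick(dent))
        print("energy spent inhibiting:", animal.inhibition_energy_spent)

The point is just that the second design discharges the unsatisfiable desire instead of paying an ongoing inhibition cost.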

That's not to deny that some directives might be judged so critical that we would build them in as constant desires, but it seems like this is something we would do only judiciously.

Another thing to consider is that these systems won't necessarily have a survival instinct. (Again, unless we're explicitly attempting to replicate organic minds.) That means the inability to fulfill an involuntary and persistent desire wouldn't have the same implications for them that it does for a living system.

So, I think we have to be careful with setting up a new regulatory regime. The vast majority of AI research will never come anywhere near this area. Making machine learning researchers work through something like that would be bureaucratic and unproductive. But if they are explicitly trying to create a system that might have sentience, then it might be warranted.

Simon said...

So what happens when you have designed a nano-modular, self-assembling A.I. where the A.I. capacity doesn't kick in until later in the self-assembly? What is its personal identity that underpins its moral status? Its design says it is a self-assembling A.I. from the get-go, but if we use something similar to human personal identity and psychological continuity, that only kicks in with the A.I. capacity. Here the design and classification aspect is much more definitive.

Eric Schwitzgebel said...

Thanks for the continuing comments, folks!

SelfAware: Your account of suffering has some plausibility -- but I'm not so sure that a "suffering" state would be useless in a machine, if it motivates a search for opportunities to repair and if, depending on its strength, it helps shape future avoidance behavior. As for survival, I'm inclined to think that it might make sense for expensive machines with long-term goals to have self-preservation among their goals (which may be at least a valuable subgoal for the fulfillment of other long-term goals).

Simon: These are tough questions. I think that as a community we haven't yet begun to think them through responsibly.

Anonymous said...

" On most mainstream theories of consciousness, we are not yet creating AI with conscious experiences meriting ethical consideration. But we might – possibly soon – cross that crucial ethical line. We should be prepared for this."

Seriously Eric... Please let us all off the hook by telling everyone that this post is your idea of a joke. This sounds like some administrator creating busy work for himself and his subordinates. Is there no sanity left in our institutions of academia? There are real problems and issues out there that need to be addressed. Academia is the gatekeeper of our culture; it should be funneling its resources and focusing on how to make things better for the general public. Sigh.............

Eric Schwitzgebel said...

Lee, I admit I'm not eager to create (or serve on) more committees. The administrative burden is definitely a huge downside and in my view the strongest argument against this proposal -- though outweighed by the seriousness of the issues at stake, in the event that one of the more "liberal" theories of consciousness is correct.

Anonymous said...

The whole idea sounds like a scheme developed by investment capitalists who are constantly creating their own "emerging markets". While we are at it, academia had better set some money aside and create another committee to determine how to fund the emerging influx of law students who will be required to defend the rights of AI.

I'm retired, Eric, so my livelihood does not depend upon my pretending to support a ludicrous idea. Consciousness will forever remain outside the grasp of the human endeavor, just like causality. Causality and consciousness are intrinsically linked; and that riddle will never be solved unless we are at least willing to address the meta-problem of consciousness and causality. Rationality is the only tool that we have in our toolbox. As a structured system, rationality is fundamentally flawed because it is a discrete, binary system. Discrete systems, let alone binary ones, are not capable of accommodating a linear, continuous system like consciousness or causality.

Thanks

Eric Schwitzgebel said...

We might disagree about the policy implications, but I agree that there is quite a serious meta-problem here!

chinaphil said...

Without getting to grips with the problem of animal rights-style rights, this does remind us of one of the issues connected with AI: one of the assumptions seems to be that people will keep on inventing smarter and smarter forms of AI until they are smarter than people. But at some point when an AI is close to or as smart as a person, it will be deserving of essentially human rights. As commonly understood at the moment, these include both personal autonomy and freedom from eugenic interference on a species level. Morally, it seems unlikely that we will actually be able to keep programming these poor beings.

How this will actually play out seems to be much more of an empirical question, and I agree that the comparison with animal rights is a very useful place to start. The way we react to our household Roomba is quite enlightening (ours has a name, very much like a pet). As drones, delivery bots, and those Boston Dynamics dogs are widely adopted, we'll just have to see how human instinct deals with them. The fairly recent extension of protections to octopodes suggests that if they act in an animal-like way, they'll get animal-style ethical treatment.

Eric Schwitzgebel said...

Yes, chinaphil, that all sounds right to me. One important caveat: our intuitive reactions might unfortunately be misaligned with the moral status that the AIs actually deserve -- either on the too-high end (as Joanna Bryson worries) or on the too-low end (as John Basl and I worry) -- and thus it will be important to employ what Mara Garza and I call the Emotional Alignment Design Policy: design AI so that the moral status that ordinary users are emotionally inclined to attribute to it in fact matches its real moral status. (No worries yet about the Roomba, since despite enjoying treating it like a pet, I assume that you would save a real cat rather than the Roomba in a fire, if forced to choose one.)

Aspasia said...

So I guess I violated some of the rules. Do not mention any individual person? This was a bit vague to me. So here's another try, this time one that follows the rules (I hope).

How about a Singer-like argument? I take this to be a philosophical argument, though I am not just going to lay out a bunch of premises.

Argument: Imagine that there is a group of drowning children. A crowd walks by and notices. Suppose it will cost each of them a $10 pair of pants to save the children. What should they do? Well, probably, they should all jump in and try to save the children.
Imagine now that a group of people hear children crying out for help. It sounds like it is coming from the nearby woods. It will cost each of them a $20 flashlight to search the woods. What should they do? It seems reasonable to think that they should collectively begin to search the woods. Now suppose a group runs into a trap set by a bunch of criminals holding hostages, who demand their wallets in exchange for the hostages' release. Some of them have over $100 in their wallets, some of them less. Now what should they do? Give up their wallets, correct? Again, imagine that a group runs into a trap set by a bunch of criminals who tell them they have hostages in a distant location, whose plight is being presented in real-time video. The criminals say that they will release the hostages if everyone gives up their wallets. Even in this situation, the right answer again seems to be that they should give up their wallets. So it would seem that we believe that people should help out those less fortunate, even when it costs us something and whether they are near or far. What we care about is the fact that there is harm being done, and there is something we can do to stop it. What we care about is preventing harm, no matter who suffers it or where it occurs.

Durandus said...

Aspasia!...And THAT's the 'moral compass' of Consciousness worth its name...Organic Intelligence, in MY book.