Wednesday, January 14, 2015

Our Moral Duties to Artificial Intelligences

Suppose that we someday create artificial beings similar to us in their conscious experience, in their intelligence, in their range of emotions. What moral duties would we have to them?

You might think: Our moral duties to them would be similar to our moral duties to natural human beings. A reasonable default view, perhaps. If morality is about maximizing happiness (a common consequentialist view), these beings ought to deserve consideration as loci of happiness. If morality is about respecting the autonomy of rational agents (a common deontological view), these beings ought to deserve consideration as fellow rational agents.

One might argue that our moral duties to such beings would be less. For example, you might support the traditional Confucian ideal of "graded love", in which our moral duties are greatest for those closest to us (our immediate family) and decline with distance, in some sense of "distance": You owe less moral consideration to neighbors than to family, less to fellow-citizens than to neighbors, less to citizens of another country than to citizens of your own country -- and still less, presumably, to beings who are not even of your own species. On this view, if we encountered space aliens who were objectively comparable to us in moral worth from some neutral point of view, we might still be justified in favoring our own species, just because it is our own species. And artificial intelligences might properly be considered a different species in this respect. Showing equal concern for an alien or artificial species, including possibly sacrificing humanity for the good of that other species, might constitute a morally odious disloyalty to one's kind. Go, Team Human?

Another reason to think our moral duties might be less, or more, involves emphasizing that we would be the creators of these beings. Our god-like relationship to them might be especially vivid if the AIs exist in simulated environments controlled by us rather than as ordinarily embodied robots, but even in the robot case we would presumably be responsible for their existence and design parameters.

One might think that if these beings owe their existence and natures to us, they should be thankful to us as long as they have lives worth living, even if we don't treat them especially well. Suppose I create a Heaven and a Hell, with AIs I can transfer between the two locations. In Heaven, they experience intense pleasure (perhaps from playing harps, which I have designed them to intensely enjoy). In Hell, I torture them. As I transfer Job, say, from Heaven to Hell, he complains: "What kind of cruel god are you? You have no right to torture me!" Suppose I reply: "You have been in Heaven, and you will be in Heaven again, and your pleasures there are sufficient to make your life as a whole worth living. In every moment, you owe your very life to me -- to my choice to expend my valuable resources instantiating you as an artificial being -- so you have no grounds for complaint!" Maybe, even, I wouldn't have bothered to create such beings unless I could play around with them in the Torture Chamber, so their very existence is contingent upon their being tortured. All I owe such beings, perhaps, is that their lives as a whole be better than non-existence. (My science fiction story Out of the Jar features a sadistic teenage God who reasons approximately like this.)

Alternatively (and the first narrator in R. Scott Bakker's and my story Reinstalling Eden reasons approximately like this), you might think that our duties to the artificial intelligences we create are something like the duties a parent has to a child. Since we created them, and since we have godlike control over them (either controlling their environments, their psychological parameters, or both), we have a special duty to ensure their well-being, which exceeds the duty we would have to an arbitrary human stranger of equal cognitive and emotional capacity. If I create an Adam and Eve, I should put them in an Eden, protect them from unnecessary dangers, ensure that they flourish.

I tend to favor the latter view. But it's worth clarifying that our relationship isn't quite the same as parent-child. A young child is not capable of fully mature practical reasoning; that's one reason to take a paternalistic attitude to the child, including overriding the child's desires (for ice cream instead of broccoli) for the child's own good. It's less clear that I can justify being paternalistic in exactly that way in the AI case. And in the case of an AI, I might have much more capacity to control what they desire than I have in the case of my children -- for example, I might be able to cause the AI to desire nothing more than to sit on a cloud playing a harp, or I might cause the AI to desire its own slavery or death. To the extent this is true, this complicates my moral obligations to the AI. Respecting a human peer involves giving them a lot of latitude to form and act on their own desires. Respecting an AI whose desires I have shaped, either directly or indirectly through my early parameterizations of its program, might involve a more active evaluation of whether its desires are appropriate. If the AI's desires are not appropriate -- for example, if it desires things contrary to its flourishing -- I'm probably at least partly to blame, and I am obliged to do some mitigation that I would probably not be obliged to do in the case of a fellow human being.

However, to simply tweak around an AI's desire parameters, in a way the AI might not wish them to be tweaked, seems to be a morally problematic cancellation of its autonomy. If my human-intelligence-level AI wants nothing more than to spend every waking hour pressing a button that toggles its background environment between blue and red, and it does so because of how I programmed it early on, then (assuming we reject a simple hedonism on which this would count as flourishing), it seems I should do something to repair the situation. But to respect the AI as an individual, I might have to find a way to persuade it to change its values, rather than simply reaching my hand in, as it were, and directly altering its values. This persuasion might be difficult and time-consuming, and yet incumbent upon me because of the situation I've created.

Other shortcomings of the AI might create analogous demands: We might easily create problematic environmental situations or cognitive structures for our AIs, which we are morally required to address because of our role as creators, and yet which are difficult to address without creating other moral violations. And even on a Confucian graded-love view, if species membership is only one factor among several, we might still end up with special obligations to our AIs: In some morally relevant sense of "distance" creator and created might be very close indeed.

On the general principle that one has a special obligation to clean up messes that one has had a hand in creating, I would argue that we have a special obligation to ensure the well-being of any artificial intelligences we create. And if genuinely conscious human-grade AI somehow becomes cheap and plentiful, surely there will be messes, giant messes -- whole holocausts' worth, perhaps. With god-like power comes god-like responsibility.


[Thanks to Carlos Narziss for discussion.]


[Updated 6:03 pm]

25 comments:

Eric Steinhart said...

Excellent. But isn't there a need to try to ground this in some traditional ethical theory? I mean: Do my AI creations also fall under the categorical imperative? Am I supposed to try to include them in my efforts to maximize utility? Do I have my obligations to them because of their rationality/consciousness/whatever? Stuff like that. (Side note: it's interesting how the old theodicy issues come back into play here.) This is great stuff, a really interesting topic. More!

Howard Berman said...

Moral obligations are involved in reciprocal relationships.
What responsibility will they have for us?
That is tacit in your discussion, I think.
Unlike animals, which have no obligations toward us (excepting perhaps dogs, cats, and other domestic animals), they will, as sentient beings in a societal relationship, bear moral responsibilities.
Will they be our friends and companions ala Asimov? Will they just be our tools?
How does your discussion fit in with the fears of AI being a threat to humans?

Eric Schwitzgebel said...

Hi, Eric! I was trying to avoid too many commitments to specific moral theories. I was trying to touch on consequentialist happiness and deontological rights and Aristotelian flourishing. How the details play out will depend on which theory one prefers.

Eric Schwitzgebel said...

Howard: That's an interesting set of questions. It's not clear what obligations they will have to us. I'm inclined to think that the obligations of the created to the creator are less than the other way around, given the asymmetry of power.

I find Bostrom's recent book very interesting on the question of AI threat to humans. He makes a strong case for pessimism. I'm inclined to think that if we feel moral obligation to the created, then that puts us more at risk since it creates pressure against certain safety precautions that we might otherwise take -- and thus in this respect the demands of morality might be in partial conflict with self-interest.

Eric Schwitzgebel said...

PS, Eric: I've updated the post with a couple sentences near the beginning gesturing at reasons that AIs would deserve equal consideration on consequentialist and deontological grounds.

Callan S. said...

Gunna be difficult to begin with - the 'imagine yourself in the other person's shoes' sympathetic software will be entirely absent in early generations (largely due to the inevitable corporations that create them (into, IMO, slavery) not seeing the extensive R&D needed for sympathetic systems as necessary). About as much sympathy as a reptile.

And sympathy is pretty pivotal in hashing out a deal, from day to day, year to year.

I used to be pro AI manufacture, but now I'm kinda AI antinatalist. Too much like an early teen pregnancy - we're still too young, as a species, to have children. It'd just be children having children. We don't even look after ourselves.

Unknown said...

Very interesting! Some comments:

Which "us" is creating these AI? For instance, if I buy and use IBM products do I bear partial responsibility for the well-being of Watson? Or do "the creators" extend only to the engineers tinkering with code and hardware? Where do we draw the line?

I imagine that even well into the future very few people will be creating AI the way Watson's engineers created Watson. We'll mostly be training immature but pre-programmed AI to adjust to our specific parameters. So we might be influencing the development of AI, but that's a slightly different role than that of a god-like designer or creator-parent, and a lot closer to the ethical responsibilities of a teacher. The two are different, despite what teachers say.

At what point are we designing a tool, instead of just using it, or (heaven forbid!) treating it as a genuinely non-human peer? If the first entails some special responsibility, it ought to be distinguishable from the others!

The distinction between 'designers' and 'users' demonstrates how technological systems are the product of many agents. I think the issues surrounding AI are really a special case of collective agency problems more generally. We can talk about the relative responsibility I bear for the actions of the U.S. vs the actions of France, and this discussion will probably parallel much of the discussion of "graded love" and other manifestations of social "distance", and this whole discussion pertains directly to the relative responsibility I bear for the behavior of certain machines.

This all suggests to me some unifying and coherent notion of "social space" that all social agents are sensitive to, and where "responsibility" in the ethically-loaded sense can be interpreted as influence on the dynamics of this space. AI is philosophically interesting partly because of how it reveals these dynamics within social space. AI is a limit-case agent, and seeing how it behaves in social spaces tells us a lot about those spaces.

On my view, 'use' and 'design' are just different descriptions for the ways we collaborate with other agents in the world. As co-participants in overlapping activities, we're locked in a complicated dance through shared social space. This dance increasingly incorporates the contributions of intelligent machines, and some of us have more influence on these machines than others, but talking about any of it in terms of "God-like creation" is mostly hubris, and reflects little of the actual dynamics of the world.

Unknown said...

One quick reference: The HINTS Lab at Wash. U has been doing really interesting empirical work on moral accountability and AI. You might be interested:

Michel Clasquin-Johnson said...

"And artificial intelligences might properly be considered a different species in this respect."

Perhaps not. We regard the AI as intelligent, thinking life, or we would not be having this conversation. But while an alien from a different planet, with its own billions of years of evolution behind it, might think in a truly alien way, the AI would be built and designed by humans, and the ways it might be able to think would be limited by that. It would think its electronic thoughts, but only in a human-derived way. There would be certain fundamental categories it would not be able to think without (I'm not a Kantian epistemologist, but I play one on the Internet ...)

And so the AI would be human. Maybe not Homo sapiens-type human. Its relationship to us might be more like ours to Homo erectus, and Homo artificialis may be our progeny. But with all its glorious thinking ability, it would still be human.

Eric Schwitzgebel said...

Callan: "I used to be pro AI manufacture, but now I'm kinda AI antinatalist. Too much like an early teen pregnancy - we're still too young, as a species, to have children. It'd just be children having children." If you really want to scare yourself about this, check out Nick Bostrom's latest book!

Eric Schwitzgebel said...

Daniel: Thanks for the interesting thoughts (and the link)! I agree it's going to be complex, and that the user/creator distinction is relevant. However, it might be easy to become a "creator" in the relevant sense, if one considers sim cases -- just launch a world! Also, as in animal ethics and consumer ethics, it seems very plausible that consumers have moral responsibility for their choices which collectively create a market for the creation of some things rather than others. So there's a lot here. But it seems to me that philosophers have not really engaged the issue very seriously yet.

Eric Schwitzgebel said...

Michel: I could see this reasoning going either way. One possibility would be to say it's a "person" (a moral/cognitive category) but not a "human" (a biological species category).

Anonymous said...

I think I'm having trouble with the idea of having more or less duty to someone or something. I can see this claim meaning two different things. It might mean that I have more duties to one individual or class of individuals than I do to others. It doesn't seem implausible to me that I have more duties to my family members than I do to my cat in this sense. I have a much more complex social relationship with them, which would plausibly be caught up with a broader set of duties.

It might also mean that my duties to one individual or class of individuals automatically trump my duties to another. In some sense the duties are stronger in a qualitative sense, carrying more weight in virtue of the individual or kind of individual they are owed to. But this seems implausible without refinement. I shouldn't steal from my father, but if I had to take rope from his shed to save someone drowning in a lake... etc...

So I think my question is: how am I supposed to understand the notion of more duty such that the question "Do I have more or less duty to artificial intelligences?" makes sense? Although maybe I have an answer already - we should restrict thinking about it to cases in which we are looking at duties of equal weightiness on a moral scale, e.g. cases of duties to avoid killing conflicting with duties to avoid killing, not duties to avoid killing with duties to avoid stealing. If I'm in some trolley situation in which either a person or a cow must die, it seems like my duty to the person will trump my duty to the cow. Would this also be so for artificial intelligences? If the answer to the cow case is right, I need to know what it is in virtue of which the cow's moral standing is different from the person's and whether this answer helps with the AI case, and off we go...

Simon said...

Eric, I came at this topic from a perspective regarding what moral obligations the 'maker' has to the created for creating persons, be they synthetic or biological.

Even regarding biological beings this isn't always clear if you look at the foundations of parental responsibilities. Yes, yes, many will argue a full-capacity A.I. is a person while a baby technically isn't, but I think both sides of the personal identity debate have it wrong.

Nonetheless I've been looking at duty of care considerations to fellow persons when put or created in a state of existential dependency. I've only dug a little but there doesn't seem to be much done on this topic.

Eric Schwitzgebel said...

Anon 11:57: I'm not very sympathetic to the "trumping" idea that you also seem to find problematic; I'm more sympathetic to the idea that we have a greater quantity of duties (e.g., to render aid to an AI who is demented because of our earlier actions than to an arbitrary stranger, if the AI and the stranger are otherwise psychologically similar). One might argue that under threat, one should save a human over a cow (that seems plausible!) and one's children over a stranger's children (plausible, too, in my mind). How about an AI you created who is psychologically similar (in intelligence, depth of experience, life expectancy, etc.) to an arbitrary human stranger you had no hand in creating? I do feel some of the pull of species-favoritism here; but also I still hold that we have a special obligation to protect beings we ourselves created -- so maybe it would depend on the details of the case. Implicit background assumptions about the situation and prospects and nature of AIs and human beings might also be, legitimately or illegitimately, driving some of our intuitions here.

Eric Schwitzgebel said...

Simon: I haven't yet done a good lit search on duties that parents have to children (or that people have to others in states of dependency). There's at least a little literature on this, as you probably know, for example reviewed here: http://plato.stanford.edu/entries/parenthood/#ParRes

Not sure why you think that AIs of the sort I'm imagining, if possible, wouldn't be persons. Can you tell me more?

Callan S. said...

Eric, I'll try to check that book out!


Daniel Estrada, you seem to be asking 'but if I buy a slave from the slaver, do I bear any moral responsibility?'

Simon said...

1. Yes, Eric, I read 'Parenthood and Procreation' and it didn't really touch on the duty-of-care angle I'm coming from, even with the differing accounts and the differences between custodial and trustee relationships. At least at this stage there seems to me to be a disconnect in the ethics underlying parental responsibilities.


Before I read this post I actually wanted to put to various philosophers the question of what the moral status of the new life would be, and what moral responsibilities a person who pushed a button on a baby-making machine would have. But this also works for a machine that creates an existentially dependent A.I. It could also run with an android hybrid.

Simon said...

But yes, I don't think personhood-capacity A.I.s or traditional human persons are ontologically 'persons'. I approached it from a systems perspective, taking note of what I consider ontologically significant concepts like self-assembly, questioning the relevance of present capacity vs latent capacities, and throwing in some Transhumanist thinking in light of the psychological vs biological chains of connectivity of the personal identity debate.

I must confess I never did learn a proper account of the materialist functionalist thinking about why present capacities trump latent ones; but would you grant there are significant ontological capacities that differentiate us from closely related non-person animals?

If a monkey -- being very similar to us evolutionarily and cognitively -- belongs on a cognitive continuum but is separated from us by personhood capacities that classify us as entirely different ontological beings, then as far as I'm concerned there is no reason why there cannot in fact be an even higher cognitive level -- through even more sophisticated cognitive capacities -- that would separate persons from, say, Uber Minds.

But if you then think about Transhumanist uplift, where you raise a monkey to personhood capacity, many of the continuity identity chains like memory and personality could theoretically still remain. Now if that, combined with these new personhood capacities, makes it the same individual, it cannot, ontologically speaking, have been a mere animal to begin with. The ontological status should remain the same. Similarly, if I am made into an Uber Mind cyborg, I could have capacities that are as different in significance from human persons as ours are from animal minds. I would think myself the same individual through memory and biographical history, yet I'm no longer a human person. Therefore I was never a person to begin with and some other ontological status underpins my existence.

This allows many sophisticated biological and synthetic systems to go up and down the cognitive continuum and still be considered the same entity even if they change their significant cognitive capacities. The monkey is the same ontological type while in animal configuration as it is in personhood-uplift configuration.

To cut a long story short I came to think of us as a type of complex adaptive system with chains of organizational continuity which often but not always involves memory.

This also solves many of the brain-transplant-like questions, especially if you start thinking in terms of system mereology and extended systems. I also touch on the ontological significance of self-assembly for modular multi-capacity systems.

Hope this makes at least some sense.

Ryan Gomez said...

Hi Eric,

I'm wondering how/why you do not consider death and pain as being prerequisites to moral obligation. Even assuming that the AI thinks and reasons like a human, such that we can recognize its sentience, I'm not sure we would owe any moral obligations to the AI if it does not experience a "death" or "pain."

I'm assuming that death is an experiential blank here, but for an AI that we created it does not seem immoral to subject the AI to "death"—say by removing its batteries—when we could simply bring it back to life a few moments later. The AI would be none the wiser about its death, even though it may be confused upon waking up to find itself in another building.

As I see it, one way it would be immoral to do this is if the AI experiences "pain." Unfortunately I'm several years removed from your and Dr. Fischer's classes, but I'm imagining "pain" to be something similar to mental anguish in response to stimuli.

Barring recognizable (significant?) death or pain, I'm not sure that we would owe any moral obligations to a machine that we created simply because it is able to think. I'm sure there is a whole host of AI theories out there that you are well aware of; maybe I am simply assuming a different AI form than you?

Very interesting post! I'm glad that I (re)discovered your blog!

Eric Schwitzgebel said...

Simon: Thanks for your interesting comment! One thing you say is "I would think myself the same individual through memory and biographical history, yet I'm no longer a human person. Therefore I was never a person to begin with and some other ontological status underpins my existence." Here I would distinguish between "human" as a biological concept and "person" as a moral concept. This is a bit of a cavil about rough-edged terms, but I think it's a useful distinction in the context. You would no longer be a biological "human" but you would still be the same "person" in virtue of your psychological continuity -- and moral obligations attach to personhood, not to biological humanity. Of course, if you read Parfit, preserving identity in terms of the psychological continuity of persons itself starts to seem a very problematic concept, too!

Eric Schwitzgebel said...

Unknown 05:10: Welcome back to the blog! I actually did mean to include pain among the relevant similarities here. A being without pain would not be "similar to us in their conscious experience, in their intelligence, in their range of emotions". The same, probably, if the being had no fear of death. (However, I probably wouldn't want to deny rights to immortal beings if they were possible.) The cases you raise are very interesting though -- *if* an AI had no (fear of) death or no capacity for pain, would we still have moral obligations to it? I'm inclined to think that we might, if the AI had enough sophistication and richness of experience in other ways -- though our obligations might be different, and it would depend on the details!

Simon said...

Eric, yes, of course an ontological designation isn't the same as moral "person", but it seems to me one is often used to justify the other. What account do you think is the best?

IMO using moral "person" confuses matters; I instead think in terms of a full moral worth entity (FMWE). So yes, you change your ontological type but still remain the same FMWE.

But what I'm getting at is the functionalist perspective that differentiates between closely related but ontologically separate types: if extra significant capacities are enough to differentiate a monkey from a human cognitive person, then the ability to uplift human persons or non-person animals calls into question this way of classifying cognitive types and the related ontological classifications.

Regarding continuous chains of psychological continuity, I disagree with Parfit's 'what matters'. But I will look at it again. I would still think it me, as the total entity/system, even if you wiped all my memories, and I see my psychological self as a command-and-control informational "aspect" of the whole CAS.


Back to moral responsibilities to A.I.: if we value ourselves as psychological and therefore "moral persons"/FMWEs, and apply a Golden and Silver Rule approach, I think this entails similar duty-of-care thinking for any type of entity with similar cognitive capacities.

Eric Schwitzgebel said...

Joseph: Yes, I agree with both of your points!

Simon: Sorry I missed your comment back in January! I agree with your point about not getting too hung up on traditional ontological types, in favor of something closer to a functional perspective.