Friday, March 28, 2014

Our Moral Duties to Monsters

We might soon be creating monsters, so we'd better figure out our duties to them.

Robert Nozick's Utility Monster derives 100 units of pleasure from each cookie she eats. Normal people derive only 1 unit of pleasure. So if our aim is to maximize world happiness, we should give all our cookies to the monster. Lots of people would lose out on a little bit of pleasure, but the Utility Monster would be really happy!

Of course this argument generalizes beyond cookies. If there were a being in the world vastly more capable of pleasure and pain than are ordinary human beings, then on simple versions of happiness-maximizing utilitarian ethics, the rest of us ought to immiserate ourselves to push it up to superhuman pinnacles of joy.
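To make the arithmetic behind the thought experiment explicit, here is a minimal, purely illustrative Python sketch (the agents and numbers are invented for illustration, not part of Nozick's argument): a naive total-happiness maximizer with constant per-cookie utilities hands every cookie to the monster.

```python
# Toy illustration of the simple total-utility calculus (all figures invented).
# A naive happiness-maximizer gives each cookie to whoever gains the most from it,
# so the monster ends up with every cookie.

utility_per_cookie = {"monster": 100, "alice": 1, "bob": 1, "carol": 1}
cookies = 10

allocation = {name: 0 for name in utility_per_cookie}
for _ in range(cookies):
    best = max(utility_per_cookie, key=utility_per_cookie.get)
    allocation[best] += 1

total = sum(allocation[n] * utility_per_cookie[n] for n in allocation)
print(allocation)  # {'monster': 10, 'alice': 0, 'bob': 0, 'carol': 0}
print(total)       # 1000, versus just 10 if ordinary people ate them all
```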

Now, if artificial consciousness is possible, then maybe it will turn out that we can create Utility Monsters on our hard drives. (Maybe this is what happens in R. Scott Bakker's and my story Reinstalling Eden.)

Two questions arise:

(1.) Should we work to create artificially conscious beings who are capable of superhuman heights of pleasure? On the face of it, it seems like a good thing to do, to bring beings capable of great pleasure into the world! On the other hand, maybe we have no general obligation to bring happy beings into the world. (Compare: Many people think we have no obligation to increase the number of human children even if we think they would be happy.)

(2.) If we do create such beings, ought we immiserate ourselves for their happiness? It seems unintuitive to say that we should, but I can also imagine a perspective on which it makes sense to sacrifice ourselves for superhumanly great descendants.

The Utility Monster can be crafted in different ways, possibly generating different answers to (1) and (2). For example, maybe simple sensory pleasure (a superhumanly orgasmic delight in cookies) wouldn't be enough to compel either (1) creation or (2) post-creation sacrifice. But maybe "higher" pleasures, such as great aesthetic appreciation or great intellectual insight, would. Indeed, if artificial intelligence plays out right, then maybe whatever it is about us that we think gives our lives value, we can artificially duplicate it a hundredfold inside machines of the right type (maybe biological machines, if digital computers won't do).

You might think, as Nozick did, and as Kantian critics of utilitarianism sometimes do, that we can dodge utility monster concerns by focusing on the rights of individuals. Even if the Monster would get 100 times as much pleasure from my cookie as I would, it's my cookie; I have a right to it and no moral obligation to give it to her.

But similar issues arise if we allow Fission/Fusion Monsters. If we say "one conscious intelligence, one vote", then what happens when I create a hundred million conscious intelligences in my computer? If we say "one unemployed consciousness, one cookie from the dole", then what happens if my Fission/Fusion Monster splits into a hundred million separate individual unemployed conscious beings, collects its cookies, and then in the next tax year merges back into a single cookie-rich being? A Fission/Fusion Monster could divide at will into many separate individuals, each with a separate claim to rights and privileges as an individual; and then whenever convenient, if the group so chose (or alternatively via some external trigger), fuse back together into a single massively complex individual with first-person memories from all its predecessors.

(See also: Our Possible Imminent Divinity.)


26 comments:

clasqm said...

If the Utility Monster resides in a machine under my control, I can dial the cookie/pleasure ratio down by altering a single line of code. Being a merciful sort of Creator, I would of course remove the memories of previous cookiegasmic experiences. Also, the cookies would be virtual, not real. I smell a category mistake.

Similarly, your Fusion/Fission monster (for argument's sake, let's call it the Democrat Monster) is nullified by the staunchly Republican Fusion/Fission Monster in MY machine. (memo to self: buy more RAM).

OK, seriously now.

(1) Should we work to create artificially conscious beings who are capable of superhuman heights of pleasure?

Let's assume we should. Call them Level 2. Should THEY then work to create Level 3 beings capable of exponentially greater heights? 10 000 units of pleasure per cookie? And then Level 4: 10 000 000 000 units?

We have assumed a duty to the beings on Level 2. But do we have a duty to Levels 3 and 4, or do the duties pass up hierarchically, so that the Level 2 beings have a duty to Level 3, Level 3 has a duty to Level 4 and so on? Does the cookie pass ever upward in an infinite progression of potential pleasure, without actually being consumed because there is always another level beyond it that has a greater potential for enjoyment?

But, you say, it has to stop somewhere. Level X may not exist yet and Level X-1 may consume the cookie in a veritable Big Bang of pleasure.

But that is a static picture. If Level X does not yet exist, one day it may, and what right does Level X-1 have to consume cookies passed up from Level X-2 when one day soon there may well be a Level X with a greater right to them? No, the rational thing is to practice self-denial and hoard all cookies against the day when Level X comes into existence.

Which, I believe, answers your question (2). There is no reason (other than prejudice) to believe that we exist on the Ur-level. Our cookies may well originate on a lower level of hedonistic potential, graciously passed upwards to us by our cold, unfeeling creators. We may consume none of them. We must hoard them all against the day we create the next level.

You may now include me in your other research project: why do ethical theorists not practice what they preach?

Well, that's just how the cookie crumbles.

clasqm said...

Just another thought. We may be able to create conscious beings capable of greater extremes of pleasure. But why should the source of that pleasure be the same as ours? I would begrudge no Utility Monster its 100 units if it was derived from the consumption of oil spills, radioactive waste material, military surplus, televangelists, talk show hosts ... oops.

mdmara said...

This is great and really interesting! While I'm not very sympathetic to hedonistic utility reasoning, I did find your discussion about the fusion/fission of conscious AI fascinating. If we were to one day grant rights to a single AI, it is extremely puzzling what we would do if that single AI were then capable of fissioning into multiple conscious AI. Digital beings are not constrained the way we are, and that seems to create some really interesting puzzles that I have to think about! Again, I highly recommend checking out the movie Her.

Carl M. said...

Suppose on Planet X there are 50 trillion beings of roughly similar psychological makeup to human beings. They're not utility monsters, but there are a lot of them. Further suppose that the Xians have a telescope aimed at Earth and they get great pleasure from watching humans suffer and feel great pain from watching humans prosper. Under textbook utilitarianism, I don't see how one could deny that until such time as the Xians could be persuaded not to feel pleasure from human pains, the Right Thing to Do would be to harm one another here on Earth. I take this to be a reductio of conventional utilitarianism, but perhaps others might bite the bullet on it.

My own explanation for why we should ignore the Xians, whether there's just one utility monster or 50 trillion average beings, is that there's no connection between Earth and Planet X, so we owe them nothing and have no obligation to make them happy.

Callan S. said...

So if our aim is to maximize world happiness, we should give all our cookies to the monster.

How does that make the world happy?

Further, is it a touch paternalistic?

Could there be room given for the creature to pursue its own ends?

Or are we all really trying to make each other as happy as possible??


Carl,
My own explanation for why we should ignore the Xians, whether there's just one utility monster or 50 trillion average beings, is that there's no connection between Earth and Planet X, so we owe them nothing and have no obligation to make them happy.

Works for god, after all. ;)

Simple Simon said...

They will be the children of our minds and culture; to expect them to be passive vessels for our investment or control is unwise and at odds with history. Instead the question is: what can we expect from them? How do we think they will behave? What will they give us? What will they take?

schmaltz said...

Hey Eric, I think the argument for more pleasure must ultimately require humans' eventual takeover/ownership of pleasure. I don't see how allowing monsters to experience greater pleasure will lead to our "owning" of pleasure; maybe it will lead to theirs, who knows.

Eric Schwitzgebel said...

Thanks for all the interesting comments, folks!

Michel: LOL. You know, I really enjoy cookies, so I know you'll share. More seriously: Your last suggestion makes a lot of sense -- *if* we are capable of controlling that type of parameter.

Carlos: Yes, looking forward to Her! It looks like I'm going to have to wait for DVD/Netflix.

Carl M.: Could we tweak the "connection" parameter in your thought experiment? Maybe they're our descendants? Maybe we signed a treaty a hundred years ago when they were only a million beings? Would that change your opinion about our duties to them?

Callan: I could see going either paternalistic or non-paternalistic. Thinking you have an obligation to help someone isn't *necessarily* paternalistic, I think.

Simon: I agree that those are important questions! But I think it's also important to think about what our moral duties to them would be. We needn't see them as passive vessels for that question to be relevant, I think.

schmaltz: I agree both that we don't know what the outcome will be and also that if we gain more control over our own constitution and that of our descendants then we gain a certain kind of increased ownership over what gives us pleasure and how much pleasure we get.

Anonymous said...

I am deeply depressed, like there is no hope. useless

Callan S. said...

Anon, toward what goal, useless?

Would you say you are looking for a goal? If you self-reflect you might find you are in a cycle of looking for goals that are worthy and repeatedly rejecting them. But if you can't see the cycle, it'll just feel like an elongated period of 'useless'.



Eric, all the cookies in the world is just 'helping'?

Peter Hildebrand said...

I really enjoyed this post, and I just had a quick question. I know that utilitarianism tries to maximize happiness, and it seems like the kind of maximum you're talking about here is a simple sum. World happiness is considered to be increased because there are more units of happiness if the utility monster gets the cookie.

Is there any reason why a utilitarian wouldn't look at median or even mean happiness instead? That seems like it would still allow us to work towards overall happiness while providing a workaround for some non-intuitive cases like the utility monster.

Eric Schwitzgebel said...

Peter: Sure, there are all kinds of ways to go about it. But there are also all kinds of weird monsters! For example, a Fission/Fusion Monster could mess up the median.
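(A toy numeric sketch of that worry, with invented figures: if the monster fissions into enough low-happiness copies, the copies come to dominate the population median, so even a median-maximizing rule ends up oriented toward the monster.)

```python
from statistics import median

# Invented numbers, purely for illustration.
humans = [5, 5, 5, 5, 5]            # five humans at happiness level 5
monster = [4]                       # one monster, slightly below the humans
print(median(humans + monster))     # 5.0 -- the median looks fine

copies = [1] * 100                  # the monster fissions into 100 unhappy copies
print(median(humans + copies))      # 1 -- now the copies set the median
```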

Callan S. said...

Is the question this: if one feels an urge to help someone with, say, 10% of their happiness, and their happiness in regard to cookies runs from 0 to 100, sure, you might get them 10 cookies. But what happens when the thing's happiness runs from 0 to 10000? 1000 cookies? Some sort of dragging of our values along to alien levels of emotion?

Eric Schwitzgebel said...

Yes, Callan, I think that's part of the complexity of the story....

Callan S. said...

Ah, thanks, Eric! I wasn't quite on target before!

clasqm said...

But what about the opposite case?

Let's suppose that for a human 1 cookie = 1 Unit of Happiness (HU).

If I eat my cookie alone, happiness in the world has increased by 1 HU.

If I give the cookie to a Utility Monster, happiness increases by 100 HU.

By giving the Utility Monster 1/100 of a cookie and eating 99/100 myself, happiness in the world has increased by 1.99 HU.

But what about the Disutility Monsters? These poor creatures require 100 cookies just to generate one measly HU.

Living in the Third World, I have a lot of Disutility Monsters in my neighborhood. They argue that only an American could come up with a theory in which it is moral to give to those who can generate happiness so easily.

"Clearly our need is greater," they say. "Let the Utility Monsters get by on 1 HU per day, just like the humans. Concentrate your efforts on lifting us out of our abject misery. Disutility Monsters of the world, unite, you have nothing to lose but your cookies!"

I don't so much mind the Disutility Monsters who sit apathetically on street corners begging for cookies. But the more militant among them have formed the DisUtility Cookie Liberation Effort (DULCE) and are starting to make threatening noises ...

Eric Schwitzgebel said...

They might find a friend in Rawls!

Once we are convinced to send cookies to the least well off to bring them up to a basic level of utility, even though it takes a lot of cookies per HU, my Fission/Fusion monster arrives. She fissions into 100 million Disutility Monsters. Once these monsters get their cookies, she gains 100 trillion units of utility by fusing back together -- just because that's how she's built, say. So the poor Disutility Monsters, having lost a substantial portion of the cookies they would otherwise have received, continue to get the short end of the stick.

clasqm said...

That would be illogical and counterproductive.

The F/FM could divide into 100 trillion Disutility Monsters, collect 10 000 trillion cookies, fuse, and experience the collective pleasure of 100 trillion HUs.

Or she could divide into 100 trillion Utility Monsters, collect 100 trillion cookies, experience 100 HUs per cookie, then fuse again and have the collective experience of 10 000 trillion HUs.

But we have decided to ration Utility Monsters to 0.01 cookie or 1 HU per day. Therefore 100 trillion Utility Monsters will receive a mere 1 trillion cookies, which still gives the F/FM a total experience of 100 trillion HUs, the same as if she had divided into Disutility Monsters.

At this stage we appeal to the F/FM's reason (she is, after all, a sentient, reasoning being). We point to the fact that cookies are a finite and increasingly scarce resource. We appeal to the F/FM to divide into Utility Monsters rather than Disutility Monsters, since this will allow us to stretch the cookie supply for 100 times as long.

And if our recent experience with another supposedly sentient, reasoning species is any guide, she will say "No, gimme cookies".

Eric Schwitzgebel said...

Michel: I'm inclined to think that a reasonable Fission/Fusion Monster, *if* she had the option of fissioning in either of the two ways you suggest, would fission the second way. But she might not have that option. It *might* turn out that fissioning can only result in Disutility Monsters, which would still leave us with the problematic hypothetical.

Anonymous said...

Your last paragraph describes humanity, "each with a separate claim to rights and privileges as an individual", each saying "I matter" and claiming its own destiny. We are as infants playing with fire, not understanding we can get burned.

Eric Schwitzgebel said...

Anon 09:42: I wouldn't deny that.

Scott Bakker said...

Ingenious piece, Eric. If you haven't, you definitely need to check out Ancillary Justice, Ann Leckie's Hugo and Nebula winner from last year. The main character is a fission/fusion 'monster.'

In a sense you're importing the 'mechanical reproduction problem' from aesthetics into meta-ethics, using it to demonstrate the way it jams certain theoretical intuitions of value.

The million-dollar question, of course, is why these kinds of scenarios so roundly resist intuitive moral resolution. I see now why you thought the dichotomy I set up between moral and mechanical cognition in my piece is too simplistic. But it still seems to be a problem tailor-made for a heuristic understanding of cognition. Moral cognition, as an example of bounded cognition, simply cannot satisfactorily solve certain kinds of novelty. We can identify these problems AS moral problems, press our intuitions down this path or that, but wires are always crossed.

Eric Schwitzgebel said...

Yes, Scott, Ancillary Justice! Someone else pointed me to it a while back, and I agree it's very interesting. I've been thinking about doing a blog post contrasting Leckie's and Vinge's approaches to group minds (his "tines"). Vinge's group mind interestingly achieves a level of cognitive sophistication that the individual minds don't have on their own, while Leckie's ships and ancillaries all seem to have very similar minds differing mainly in their access to sensory viewpoints and other information sources. It would be very cool to see her try a fusion, too, of the ancillary into a new ship!

Eric Schwitzgebel said...

BTW, I'm inclined to agree with your last point. It's on my list to think that through a bit more and maybe develop it into a blog post and paper section.

Unknown said...

It seems to me that the argument assumes moral obligations are real things. If morality is merely a set of useful rules of thumb to enable people to cooperate, then moral obligations are merely agreements established under a set of fundamentally arbitrary rules. If morality doesn't have the sort of objective existence ascribed to it in (for example) the Bible, I don't think a utility monster or any other being can impose moral obligations on me by the mere fact of its existence. Most human societies for most of human history have treated their morality as being of divine origin in order to protect it from attack by human reason. I think the belief that morality is non-arbitrary creates the illusion of a moral quandary where none exists.

Eric Schwitzgebel said...

Michael: That's definitely one way to go here. A compromise position to which I'm attracted is that morality is a type of invention, but once invented it has real rules or boundaries, so that some things really are immoral and others really are not; these kinds of cases might show where that invention breaks down.