We might soon be creating monsters, so we'd better figure out our duties to them.
Robert Nozick's Utility Monster derives 100 units of pleasure from each cookie she eats. Normal people derive only 1 unit of pleasure. So if our aim is to maximize world happiness, we should give all our cookies to the monster. Lots of people would lose out on a little bit of pleasure, but the Utility Monster would be really happy!

Of course this argument generalizes beyond cookies. If there were a being in the world vastly more capable of pleasure and pain than ordinary human beings are, then on simple versions of happiness-maximizing utilitarian ethics, the rest of us ought to immiserate ourselves to push it up to superhuman pinnacles of joy.
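To spell out the simple pleasure-sum arithmetic behind that conclusion (a minimal sketch of my own, not anything from Nozick's formulation; the variable C is just an illustrative number of cookies):

```latex
% Illustrative total-utility arithmetic (my own sketch, assuming the
% numbers above: 100 units per cookie for the Monster, 1 per cookie for
% an ordinary person, and C cookies to distribute).

% Give every cookie to the Utility Monster:
\[ U_{\text{monster}} = 100 \times C \]

% Divide the C cookies among ordinary people, in any proportions you like:
\[ U_{\text{people}} \le 1 \times C = C \]

% On a simple sum-of-pleasures view, the Monster allocation wins by a
% factor of 100, however the cookies would otherwise have been shared.
```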
Now, if artificial consciousness is possible, then maybe it will turn out that we can create Utility Monsters on our hard drives. (Maybe this is what happens in R. Scott Bakker's and my story Reinstalling Eden.)
Two questions arise:
(1.) Should we work to create artificially conscious beings who are capable of superhuman heights of pleasure? On the face of it, it seems like a good thing to do, to bring beings capable of great pleasure into the world! On the other hand, maybe we have no general obligation to bring happy beings into the world. (Compare: Many people think we have no obligation to increase the number of human children even if we think they would be happy.)
(2.) If we do create such beings, ought we to immiserate ourselves for their happiness? It seems unintuitive to say that we should, but I can also imagine a perspective on which it makes sense to sacrifice ourselves for superhumanly great descendants.
The Utility Monster can be crafted in different ways, possibly generating different answers to (1) and (2). For example, maybe simple sensory pleasure (a superhumanly orgasmic delight in cookies) wouldn't be enough to compel either (1) creation or (2) sacrifice once the being is created. But maybe "higher" pleasures, such as great aesthetic appreciation or great intellectual insight, would. Indeed, if artificial intelligence plays out right, then maybe whatever it is about us that we think gives our lives value, we can artificially duplicate it a hundredfold inside machines of the right type (maybe biological machines, if digital computers won't do).
You might think, as Nozick did, and as Kantian critics of utilitarianism sometimes do, that we can dodge Utility Monster concerns by focusing on the rights of individuals. Even if the Monster would get 100 times as much pleasure from my cookie as I would, it's my cookie; I have a right to it and no moral obligation to give it to her.
But similar issues arise if we allow Fission/Fusion Monsters. If we say "one conscious intelligence, one vote", then what happens when I create a hundred million conscious intelligences in my computer? If we say "one unemployed consciousness, one cookie from the dole", then what happens if my Fission/Fusion Monster splits into a hundred million separate unemployed conscious beings, collects a hundred million cookies, and then in the next tax year merges back into a single cookie-rich being? A Fission/Fusion Monster could divide at will into many separate individuals, each with a separate claim to rights and privileges as an individual; and then whenever convenient, if the group so chose (or alternatively via some external trigger), fuse back together into a single massively complex individual with first-person memories from all its predecessors.
(See also: Our Possible Imminent Divinity.)