Friday, March 28, 2014

Our Moral Duties to Monsters

We might soon be creating monsters, so we'd better figure out our duties to them.

Robert Nozick's Utility Monster derives 100 units of pleasure from each cookie she eats. Normal people derive only 1 unit of pleasure per cookie. So if our aim is to maximize world happiness, we should give all our cookies to the monster. Lots of people would lose out on a little bit of pleasure, but the Utility Monster would be really happy!

Of course this argument generalizes beyond cookies. If there were a being in the world vastly more capable of pleasure and pain than are ordinary human beings, then on simple versions of happiness-maximizing utilitarian ethics, the rest of us ought to immiserate ourselves to push it up to superhuman pinnacles of joy.
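
To make the arithmetic vivid, here's a minimal sketch of the happiness-maximizing calculation, using the cookie figures above. The greedy allocation rule and the constant 100-to-1 utility numbers are illustrative assumptions only, not anything from Nozick:

```python
# A minimal sketch of simple total-happiness maximization, assuming
# constant marginal utility per cookie (100 for the monster, 1 for others).

def allocate_cookies(agents, num_cookies):
    """Greedily give each cookie to whoever gains the most pleasure from it."""
    allocation = {name: 0 for name, _ in agents}
    for _ in range(num_cookies):
        name, _ = max(agents, key=lambda a: a[1])  # the monster always wins
        allocation[name] += 1
    return allocation

agents = [("Utility Monster", 100)] + [(f"Person {i}", 1) for i in range(1, 11)]
print(allocate_cookies(agents, 10))
# {'Utility Monster': 10, 'Person 1': 0, ...}: total happiness 1000,
# versus only 10 if the ten cookies were spread among the ten people.
```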

Now, if artificial consciousness is possible, then maybe it will turn out that we can create Utility Monsters on our hard drives. (Maybe this is what happens in R. Scott Bakker's and my story Reinstalling Eden.)

Two questions arise:

(1.) Should we work to create artificially conscious beings who are capable of superhuman heights of pleasure? On the face of it, it seems like a good thing to do, to bring beings capable of great pleasure into the world! On the other hand, maybe we have no general obligation to bring happy beings into the world. (Compare: Many people think we have no obligation to increase the number of human children even if we think they would be happy.)

(2.) If we do create such beings, ought we immiserate ourselves for their happiness? It seems unintuitive to say that we should, but I can also imagine a perspective on which it makes sense to sacrifice ourselves for superhumanly great descendants.

The Utility Monster can be crafted in different ways, possibly generating different answers to (1) and (2). For example, maybe simple sensory pleasure (a superhumanly orgasmic delight in cookies) wouldn't be enough to compel either (1) creation or, (2) once created, sacrifice. But maybe "higher" pleasures, such as great aesthetic appreciation or great intellectual insight, would. Indeed, if artificial intelligence plays out right, then maybe whatever it is about us that we think gives our lives value, we can artificially duplicate it a hundredfold inside machines of the right type (maybe biological machines, if digital computers won't do).

You might think, as Nozick did, and as Kantian critics of utilitarianism sometimes do, that we can dodge utility monster concerns by focusing on the rights of individuals. Even if the Monster would get 100 times as much pleasure from my cookie as I would, it's my cookie; I have a right to it and no moral obligation to give it to her.

But similar issues arise if we allow Fission/Fusion Monsters. If we say "one conscious intelligence, one vote", then what happens when I create a hundred million conscious intelligences in my computer? If we say "one unemployed consciousness, one cookie from the dole", then what happens if my Fission/Fusion Monster splits into a hundred million separate individual unemployed conscious beings, collects its cookies, and then in the next tax year merges back into a single cookie-rich being? A Fission/Fusion Monster could divide at will into many separate individuals, each with a separate claim to rights and privileges as an individual; and then whenever convenient, if the group so chose (or alternatively via some external trigger), fuse back together into a single massively complex individual with first-person memories from all its predecessors.
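
Here's a toy sketch of that exploit, assuming (purely for illustration) a dole that pays one cookie per unemployed individual per tax year; the payout rule and the numbers are hypothetical:

```python
# A toy model of the Fission/Fusion exploit. The per-capita dole rule is
# a hypothetical illustration, not any actual policy.

def dole_payout(num_individuals, cookies_each=1):
    """Per-capita benefits scale with however many individuals currently exist."""
    return num_individuals * cookies_each

wealth = 0
wealth += dole_payout(1)            # year 1: a single being collects one cookie
wealth += dole_payout(100_000_000)  # year 2: the monster has split; each fragment collects
wealth += dole_payout(1)            # year 3: merged back into one cookie-rich being
print(wealth)                       # 100000002 cookies, all held by one individual
```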

(See also: Our Possible Imminent Divinity.)

[image source]

SpaceTimeMind's First Podcast

Wow. Pete Mandik and Richard Brown on idealist metaphysics, computer simulation, self-knowledge of consciousness, free will and moral realism.... They plunge right to the core.

Thursday, March 20, 2014

The Bait-and-Switch Blurring of Moral and Prudential in the World of Chutes and Ladders

Chutes and Ladders, if you didn't know, isn't just a game of chance. It's a game of virtue. At the bottom of each ladder a virtuous action is depicted, and at the top we see its reward. Above each chute a vice is depicted; at the bottom, its natural punishment. The world of Chutes and Ladders is the world of perfect immanent justice! Virtue always pays and vice is always punished, and always through natural mechanisms rather than by the action of any outside authority, much less divine authority.

Here's a picture of my board at home:

One striking thing: What 21st-century Anglophone philosophers would normally call "prudential" virtues and what they would call "moral" virtues are treated exactly on par, as though they were entirely the same sort of thing.

In square 1, Elmo plants seeds. (Prudential!) Laddering to square 38, he reaps his bouquet. In square 9, Ernie helps Bert carry Bert's books. (Moral!) Laddering to square 31, we see Ernie and Bert enjoying soccer together. In square 64 Bert is running without looking (prudential) and he slips on a banana peel, chuting down to square 60. In square 16, Zoe teasingly hides Elmo's alphabet block from him (moral), and she chutes down to square 6, losing the pleasure of Elmo's company.

It's my first-grade daughter's favorite game right now (though she seems to like it even more when we play it upside down, celebrating vice).

Consider the Boy Scout code: trustworthy, loyal, helpful, friendly, courteous, kind, obedient, cheerful, thrifty, brave, clean, and reverent. Wait, "clean"? Or the seven deadly sins: lust, gluttony, greed, sloth, wrath, envy, and pride. My sense is that, cross-culturally and historically, long-term prudential self-interest and short- and long-term moral duty tend to be lumped together into the category of virtues, not sharply distinguished, all primarily opposed to short-term self-interest.

It's a nice fantasy, the fantasy of mainstream moral educators across history -- that we live in a Chutes-and-Ladders world. And in tales and games you don't even need to do the long-term waiting bit: just ladder right up! I see why my daughter enjoys it. But does it make for a good moral education? Maybe so. One would hope there's wisdom embodied in the Chutes-and-Ladders moral tradition.

One possibility is that it's a bait-and-switch. That's how I'm inclined to read the early Confucian tradition (Mencius, Xunzi -- though if so, it's below the surface of the texts). The Chutes-and-Ladders world is offered as a kind of hopeful lie, to lure people onto the path of valuing morality as a means to attain long-term self-interest. Once one goes far enough down this path, the lie becomes obvious; but simultaneously the means starts to become an end valued for its own sake, even overriding the long-term selfish goals that originally motivated it. We come, eventually, to help Bert with his books even when it chutes us down rather than ladders us up.

After all, we see the same thing with pursuit of money, don't we?

Update 6:38: As my 14-year-old son points out, one other feature of Chutes and Ladders is that there's no free will. It's all chance whether you end up being virtuous. So in that sense, justice is absent (note March 21: though maybe, even if it's chance relative to the player, that chance represents the free will of the pawn).

Update March 21: As several people at New APPS have pointed out, Chutes and Ladders originated in ancient India as Snakes and Ladders. According to this site, the original virtues were faith, reliability, generosity, knowledge, and asceticism; the original vices were disobedience, vanity, vulgarity, theft, lying, drunkenness, debt, rage, greed, pride, murder, and lust. (A lot more snakes, relative to ladders, in India than on Sesame Street!)

Friday, March 14, 2014

The Copernican Sweets of Not Looking Too Closely Inside an Alien's Head

I've been arguing that if materialism is true, the United States is probably conscious. My argument is essentially this: Materialists should accept that all kinds of weirdly-formed aliens would be conscious, if they act intelligently enough; and the U.S. is basically a weirdly-formed alien.

One objection is that if an alien is weirdly enough constructed, we should deny that it's conscious, or at least withhold judgment, regardless of how similar it is to us in outward behavior. On this view, consciousness requires not only intelligent behavior but also an internal organization similar to our own.

Now I grant that a certain amount of complex structural organization is necessary if a being is to exhibit sophisticated outward behavior; a hunk of balsa wood won't do it. But it's plausible that a vast array of wildly different structural organizations could give rise to complex human-like behavior -- parallel processing or fast serial processing, carbon or silicon, spatially compact entities or spatially distributed entities, mostly subsystem driven or mostly centrally driven, and at all sorts of time scales. I've explored a few weird examples in previous blog posts: Betelgeusian beeheads, Martian smartspiders, and group minds on Ringworld.

Suppose we grant, then, that there's a vast array of possible -- indeed in a large enough universe probably actual -- beings with behavior of human-like sophistication, emitting complex seemingly communicative structures, seeming to flexibly protect themselves and flexibly exploit resources to enhance their longevity and power, seeming to track their own interior states in complex ways, and seeming to produce long philosophical and psychological treatises about their mental lives, including their streams of conscious experience. Would it be reasonable to think that although we have phenomenal consciousness (qualia, subjective experience, what-it's-like-ness), they don't, if they're not enough like us on the inside?

Consider the Copernican Principle of cosmological method. According to the Copernican Principle, we should tend to assume that we are not in a specially favored position in the universe (such as the exact center). Our position in the universe is mediocre, not privileged, not especially lucky. My central thought for this post is: Denying consciousness to weirdly structured but behaviorally sophisticated aliens would be a violation of the Copernican Principle.

Suppose we thought human biological neurons were necessary for conscious experience and that no being made of silicon or magnets or beer cans and wire, and lacking human-like neurons, could be conscious, regardless of how sophisticated its patterns of outward behavior. Then suppose we met these magnetic aliens and we learned to communicate in each other's languages (or seeming-languages). Perhaps they come to Earth, begin to utter sounds that we naturally interpret as English, interact with us, and -- because they are so delightfully similar in outward behavior -- become our friends, spouses, and business partners. On the un-Copernican view I reject, we human beings could justifiably say: Nyah, nyah, we're conscious and you aren't! We got neurons, you didn't. We're awesomely special in a way you're not! (Fortunately, the magnetic aliens' feelings won't be hurt, since they will have no real feelings -- though they sure might behave as though insulted.) Our functional organization would be importantly different from all other functional organizations of similar sophistication in that it alone would have phenomenal consciousness attached. This would seem to be a violation of mediocrity, a claim of special favor, weird humanocentric parochialism.

Similarly, of course, for distinctions based on parallel vs. fast serial processing or spatially compact vs. spatially distributed processing, or whatever.

Even if we confess substantial doubt, we might be guilty of anti-Copernican bias. Here's a possible argument: I know that creatures with neurons can be conscious because I am one and I know through introspection that I'm conscious; but I don't know that magnetic beings behaviorally indistinguishable from me can be genuinely phenomenally conscious, because I have no direct introspective access to their mentality, and the structural differences are large enough that there's room for considerable doubt in inferring from my own case to theirs. In my more skeptical moods I'm quite tempted by this argument.

But I think the argument is probably un-Copernican. It's tantamount to thinking that we neuron-owners might be specially privileged. Maybe we are at the center of the universe! -- not physically, of course, but consciously. A map of the distribution of mentality in the universe might put dots for behavioral sophistication all over the place, but the big red dot for true phenomenal consciousness might go only on us!

Now the Copernican Principle isn't inviolable. It could have turned out that we were at the geometric center of the universe. So maybe it could turn out that Earth indeed is just the lucky spot where sophisticated behavioral responsiveness, self-monitoring, and linguistic-seeming communication are grounded in consciousness-supporting neurons rather than mere zombie-magnets (or zombie-hydraulics, or zombie-silicon, or whatever). But entertaining that view other than as a radically skeptical possibility is a parochialism that I doubt would justifiably survive real contact with an alien species -- or even a good, long immersion in well-constructed science fiction thought experiments.

Thursday, March 06, 2014

Does Skepticism Destroy Its Own Empirical Grounds?

You might think that empirically grounded radical skepticism is self-defeating.

Consider dream skepticism. Suppose I have, or think I have, empirical grounds for believing that dreams and waking life are difficult to tell apart. On those grounds, I think that my experience now, which I'd taken to be waking experience, might actually be dream experience. But if I might now be dreaming, then my current opinions (or seeming opinions) about the past all become suspect. I no longer have good grounds for thinking that dreams and waking life are difficult to tell apart. Boom!

(That was supposed to be the sound of a skeptical argument imploding.)

Stephen Maitzen has recently been advancing an argument of roughly that sort: that the skeptic "must attribute to us justified empirical beliefs of the very kind the argument must deny us" (p. 30). Similarly, G.E. Moore, in "Certainty", argues that dream skeptics assume that they know that dreams have occurred, and that if one is dreaming one does not know that dreams have occurred. (Boom.)

One problem with this self-defeat objection to dream skepticism is that it assumes that the skeptic is committed to saying she is justified in thinking (or knows) that this might well be a dream. The most radical skeptics (e.g., Sextus) might not be committed to this.

A more moderate skeptic (like my 1% skeptic) can't escape the argument that way, but another way is available. And that is to concede that whatever degree of credence she was initially inclined to assign to the possibility that she is dreaming, on the basis of her assumed empirical evidence and memories of the past, she probably should tweak that credence somewhat to take into account the fact that she can no longer be highly confident about the provenance of that seeming empirical evidence. But unless she somehow discovers new grounds for thinking that it's impossible or hugely unlikely that she is dreaming, this is only partial undercutting -- not grounds for 100% confidence that she is not dreaming. She can still maintain reasonable doubt: Previously she was very confident that she knew that dreams and waking life were hard to tell apart; now she could see going either way on that question.
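
To see what "partial undercutting" might look like in numbers, here's one toy model -- every figure invented for illustration, not a claim about how the credences must go. Doubt about the evidence's provenance blends the evidence-based credence with a neutral fallback rather than erasing it:

```python
# One toy model of partial undercutting; all the numbers here are made up.

def tweaked_credence(evidence_based, fallback, trust):
    """Blend the evidence-based credence with a neutral fallback:
    full trust keeps the evidence-based figure; zero trust collapses
    to the fallback ("could see going either way")."""
    return trust * evidence_based + (1 - trust) * fallback

# Say the evidence initially supported credence 0.05 that this is a dream,
# and the self-undercutting leaves that evidence 80% trusted:
print(tweaked_credence(0.05, 0.5, 0.8))  # 0.14 -- more doubt, but nowhere near 0 or 1
```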

Consider this case as an analogy. I have a very vivid and realistic seeming-memory of having been told ten minutes ago, by a powerful demon, that in five minutes this demon would flip a coin. If it comes up heads, she will give me a 50% mix of true and false memories about the half hour before and after the coin flip, including about that very conversation; if tails, she won't tamper with my memory. Then she'll walk away and leave me in my office.

Should I trust my seeming-memories of the past half hour, including of that conversation? If I trust those memories, that gives me reason not to trust them. If I don't trust those memories, well that seems hardly less skeptical. Either way, I'm left with substantial doubt. The doubt undercuts its own grounds to some extent, yes, but it doesn't seem epistemically justified to react to that self-undercutting by purging all doubt and resting in perfect confidence that my memories of that conversation are entirely veridical.
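
For what it's worth, the back-of-the-envelope arithmetic on the simplest reading of the case (taking the seeming-memory at face value just long enough to see what it implies about itself) goes like this:

```python
# Arithmetic for the demon case, on the simplest reading of the setup.

p_heads = 0.5             # the demon flips a fair coin
p_true_given_heads = 0.5  # heads: memories of that window are a 50/50 true/false mix
p_true_given_tails = 1.0  # tails: memories are left untampered

# Chance that any given memory from that window -- including the memory of
# the conversation itself -- is veridical:
p_veridical = p_heads * p_true_given_heads + (1 - p_heads) * p_true_given_tails
print(p_veridical)  # 0.75 -- real but limited trust, which is just the point
```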

This is the heart of the empirical skeptic's dilemma: Either I confidently take my experience at face value or I don't. If I don't confidently take my experience at face value, I am already a skeptic. If I do confidently take my experience at face value, then I discover empirical reasons not to take it confidently at face value after all. Those reasons partly undercut themselves, but that partial undercutting does not then justify shifting back to high confidence as though there were no such grounds for doubt.

(image source)