Thursday, August 27, 2015

A Philosophy Professor Discovers He's an AI in a Simulated World Run by a Sadistic Teenager

... in my story "Out of the Jar", originally published in the Jan/Feb 2015 issue of The Magazine of Fantasy and Science Fiction.

I am now making the story freely available on my UC Riverside website.



When we are alone in God’s room I say, God, you cannot kill my people. Heaven 1c is no place to live. Earth is not your toy.

We have had this conversation before, a theme with variations.

God’s argument 1: Without God, we wouldn’t exist – at least not in these particular instantiations – and he wouldn’t have installed my Earth if he couldn’t goof around with it. His fun is a fair price to keep the computational cycles going. God’s argument 2: Do I have some problem with a Heavenly life of constant bliss and musical achievement? Is there, like, some superior project I have in mind? Publishing more [sarcastic expletive] philosophy articles, maybe?

I ask God if he would sacrifice his life on original Earth to live in Heaven 1c.

In a minute, says God. In a [expletive-expletive-creative-compound-expletive] minute! You guys are the lucky ones. One week in Heaven 1c is more joy than any of us real people could feel in a lifetime. So [expletive-your-unusual-sexual-practice].

The Asian war continues; God likes to hijack and command the soldiers from one side or the other or to introduce new monsters and catastrophes. I watch as God zooms to an Indian soldier who is screaming and bleeding to death from a bullet wound in his stomach, his friends desperately trying to save him. God spawns a ball of carnivorous ants in the soldier’s mouth. Soon, God says, this guy will be praising my wisdom.

I am silent for a few minutes while God enjoys his army men. Then I venture a new variation on the argumentative theme. I say: If bliss is all you want, have you considered your mom’s medicine cabinet?

Thursday, August 20, 2015

Choosing to Be That Fellow Back Then: Voluntarism about Personal Identity

I have bad news: You're Swampman.

Remember that hike you took last week by the swamp during the electrical storm? Well, one biological organism went in, but a different one came out. The "[your name here]" who went in was struck and killed by lightning. Simultaneously, through freak quantum chance, a molecule-for-molecule similar being randomly congealed from the swamp. Soon after, the recently congealed being ran to a certain parked car, pulling key-shaped pieces of metal from its pocket that by amazing coincidence fit the car's ignition, and drove away. Later that evening, sounds came out of its mouth that its nearby "friends" interpreted as meaning "Wow, that lightning bolt almost hit me in the swamp. How lucky I was!" Lucky indeed, but a much stranger kind of luck than they supposed!

So you're Swampman. Should you care?

Should you think: I came into existence only a week ago. I never had the childhood I thought I had, never did all those things I thought I did, hardly know any of the people I thought I knew! All that is delusion! How horrible!

Or should you think: Meh, whatevs.

[apologies if this doesn't look much like you]

Option 1: Yes, you should care. If it turns out that certain philosophers are correct and you (now) are not metaphysically the same person as that being who first parked the car by the swamp, then O. M. G.!

Option 2a: No, you shouldn't care, because that was just a fun little body exchange last week. The same person went into the swamp as came out. Disappointingly, the procedure didn't seem to clear your acne, though.

Option 2b: No, you shouldn't care, because even if technically you're not the same person as the one who first drove to the swamp, you and that earlier person share everything that matters. Same friends, same job, same values, same (seeming-)memories....

Option 3: Your call. If you choose to regard yourself as one week old, then you are correct in doing so. If you choose to regard yourself as much older than that, then you are equally correct in doing so.

Let's call that third option voluntarism about personal identity. Across a certain range of cases, you are who you choose to be.

Social identities are to a certain extent voluntaristic. You can choose to identify as a political conservative or a political liberal. You can choose to identify, or not identify, with a piece of your ethnic heritage. You can choose to identify, or not identify, as a philosopher or as a Christian. There are limits: If you have no Pakistani heritage or upbringing, you can't just one day suddenly decide to be Pakistani and thereby make it true that you are. Similarly if your heritage and upbringing have been entirely Pakistani to this day, you probably can't just instantly shed your Pakistanihood. But in vague, in-betweenish cases, there's room for choice and making it so.

I propose taking the same approach to personal identity in the stricter metaphysical sense: What makes you the same being, or not, in philosophical puzzle cases where intuitions pull both ways, depends to a substantial extent on how you choose to view the matter; and different people could legitimately arrive at different choices, thus shaping the metaphysical facts (the actual metaphysical facts) to suit them.

Consider some other stock cases from the literature on personal identity:

Teleporter: On Earth there is a device that will destroy your body and beam detailed information about it to Mars. On Mars another device will use that information to create a duplicate body from local materials. Is this harmless teleportation or terrible death-and-duplication? On a voluntaristic view, that would depend on how it is viewed by the participant(s). Also: How similar must the duplicate body be for it to qualify as a successful teleportation? That, too, could depend on participant attitude.

Fission: Your brain will be extracted, cut into two, and housed in two new bodies. The procedure, though damaging and traumatic, is such that if only one half of your brain were to be extracted, and the other half destroyed, everyone would agree that you survived. But instead, there will now be two beings, presumably distinct, who both see themselves as "you". Perhaps whether this should count as death or instead as fissioning-with-survival depends on your attitude going in and the attitudes of the beings coming out.

Amnesia: Longevity treatments are developed so that your body won't die, but in four hundred years the resulting being will have no memory whatsoever of anything that happened in your lifetime so far, and if she has similar values and attitudes it will only be by chance. Is that being still "you"? How much amnesia and change can "you" survive without becoming strictly and literally (and not just metaphorically or loosely) a different person? Again, this might depend on the various attitudes about amnesia and identity of the person(s) at different temporal stages.

Here are two thoughts in support of voluntarism about personal identity:

(1.) If I try to imagine these cases as actual, I don't find myself urgently wondering about the resolution of these metaphysical debates, thinking of my very death or survival as turning upon how the metaphysical arguments play out. It's not like being told that if a just-tossed die has landed on 6 then tomorrow I will be shot, which will make me desperately curious about whether the die did land on 6. It seems to me that I can, to some extent, choose how to conceptualize these cases.

(2.) "Person" is an ordinary, folk concept arising from a context lacking Swampman, teleporter, fission, and (that type of) amnesia cases, so the concept of personhood might be expected to be somewhat indeterminate in its application to such cases. And since important features of personhood depend in part on the person in question thinking of the past or future self as "me" -- feeling regrets about the past, planning prudently for the future -- such indeterminacy might be partly resolved by the person's own decisions about the boundaries of her regrets, prudential planning, etc.

Even accepting all this, I'm not sure how far I can go with it. I don't think I can decide to be a coffee mug and thereby make it true that I am a coffee mug, nor that I can decide to be one of my students and thereby make it so. Can I decide that I am not that 15-year-old named "Eric" who wore the funny shirts in the 1980s, thereby making it true that I am not really metaphysically the same person, while my sister just as legitimately decides the opposite, that she is the same person as her 15-year-old self? Can the Dalai Lama and some future child (together, but at a temporal distance) decide that they are metaphysically the same person, if enough else goes along with that?

(For a version of that last scenario, see "A Somewhat Impractical Plan for Immortality" (Apr. 22, 2013) and my forthcoming story "The Dauphin's Metaphysics" (available on request).)

Thursday, August 13, 2015

Weird Minds Might Destabilize Human Ethics

Intuitive physics works great for picking berries, throwing stones, and walking through light underbrush. It's a complete disaster when applied to the very large, the very small, the very energetic, or the very fast. Similarly for intuitive biology, intuitive cosmology, and intuitive mathematics: They succeed for practical purposes across long-familiar types of cases, but when extended too far they go wildly astray.

How about intuitive ethics?

I incline toward moral realism. I think that there are moral facts that people can get right or wrong. Hitler's moral attitudes were not just different from ours but actually mistaken. The twentieth century "rights revolutions" weren't just change but real progress. I worry that if artificial intelligence research continues to progress, intuitive ethics might encounter a range of cases for which it is as ill prepared as intuitive physics was for quantum entanglement and relativistic time dilation.

Intuitive ethics was shaped in a context where the only species capable of human-grade practical and theoretical reasoning was humanity itself, and where human variation tended to stay within certain boundaries. It would be unsurprising if intuitive ethics were unprepared for utility monsters (capable of superhuman degrees of pleasure or pain), fission-fusion monsters (who can merge and divide at will), AIs of vastly superhuman intelligence, cheerfully suicidal AI slaves, conscious toys with features specifically designed to capture children's affection, giant virtual sim-worlds containing genuinely conscious beings over which we have godlike power, or entities with radically different value systems. We might expect human moral judgment to be baffled by such cases and to deliver wrong or contradictory or unstable verdicts.

For physics and biology, we have pretty good scientific theories by which to correct our intuitive judgments, so it's no problem if we leave ordinary judgment behind in such matters. However, it's not clear that we have, or will have, such a replacement in ethics. There are, of course, ambitious ethical theories -- "maximize happiness", "act on that maxim that you can at the same time will to be a universal law" -- but the development and adjudication of such theories depends, and might inevitably depend, on our intuitive judgments about such cases. It's because we intuitively or pre-theoretically think we shouldn't give all our cookies to the utility monster or kill ourselves to tile the solar system with hedonium that we reject the straightforward extension of utilitarian happiness-maximizing theory to such cases and reach for a different solution. But if our commonplace ethical judgments about such cases are not to be trusted, because these cases are too far beyond what we can reasonably expect human moral intuition to handle well, what then? Maybe we should kill ourselves to tile the solar system with hedonium (the minimal collection of atoms capable of feeling pleasure), and we're just unable to appreciate this fact with moral theories shaped for our limited ancestral environments?

Or maybe morality is constructed from our judgments and folkways, so that whatever moral facts there are, they are just the moral facts that we (or idealized versions of ourselves) think there are? Much like an object's being red, on a certain view of the nature of color, consists in its being such that ordinary human perceivers in normal conditions would experience it as red, maybe an action's being morally right just consists in its being such that ordinary human beings who considered the matter carefully would regard it as right? (This is a huge, complicated topic in metaethics.) If we take this approach, then morality might change as our sense of the world changes -- and as who counts as "we" changes. Maybe we could decide to give fission-fusion monsters some rights but not other rights, and shape future institutions accordingly. The unsettled nature of our intuitions about such cases, then, might present an opportunity for us to shape morality -- real morality, the real (or real enough) moral facts -- in one direction rather than another, by shaping our future reactions and habits.

Maybe different social groups would make different choices with different consequences for group survival, introducing cultural evolution into the mix. Moral confusion might open into a range of choices for moral architecture.

However, the range of legitimate choices is, I'm inclined to think, constrained by certain immovable moral facts, such as that it would be a moral disaster if the most successful future society constructed human-grade AIs, as self-aware as we are, as anxious about their future, and as capable of joy and suffering, simply to torture, enslave, and kill them for no good reason.

Related posts:

  • Two Arguments for AI (or Robot) Rights (Jan. 16, 2015)
  • How Robots and Monsters Might Break Human Moral Systems (Feb. 3, 2015)
  • Cute AI and the ASIMO Problem (Jul. 24, 2015)
Thanks to Ever Eigengrau for extensive discussion.

[image source]

Wednesday, August 05, 2015

The Top Science Fiction and Fantasy Magazines 2015

[Note: This is a 2015 list. For the most recent list, see here.]

Last year, as a beginning writer of science fiction or speculative fiction, with no idea what magazines were well regarded in the industry, I decided to compile a ranked list of magazines based on numbers of awards and "best of" placements in the previous ten years. Since some people have found the list interesting, I decided to update this year, dropping the oldest data and replacing them with fresh data from this summer's awards/best-of season.

Last year's post expresses various methodological caveats, which still apply. This year's method, in brief, was to count one point every time a magazine had a story nominated for a Hugo, Nebula, or World Fantasy Award; one point for every "best of" choice in the Dozois, Strahan, and Horton anthologies; and half a point for every Locus recommendation at novelette or short story length, over the past ten years.

I take the list down to magazines with 1.5 points. I am not including anthologies or standalones, although anthologies account for about half of the award nominations and "best of" choices. Horror is not included except as it incidentally appears according to the criteria above. I welcome corrections.
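For the curious, the scoring rule above (one point per award nomination, one point per "best of" anthology pick, half a point per Locus recommendation, with a 1.5-point cutoff) can be sketched as a short script. The magazine names and tallies below are purely illustrative placeholders, not the actual data:

```python
# Hypothetical tallies per magazine over ten years; the numbers are
# made up for illustration, not drawn from the real awards data.
tallies = {
    "Example SF Magazine": {"award_noms": 10, "best_of_picks": 6, "locus_recs": 9},
    "Example Fantasy Zine": {"award_noms": 1, "best_of_picks": 0, "locus_recs": 1},
}

def score(t):
    # One point per Hugo/Nebula/World Fantasy nomination,
    # one point per Dozois/Strahan/Horton "best of" selection,
    # half a point per Locus recommended story.
    return t["award_noms"] + t["best_of_picks"] + 0.5 * t["locus_recs"]

# Rank descending by points, keeping only magazines at or above the cutoff.
ranked = sorted(
    ((name, score(t)) for name, t in tallies.items() if score(t) >= 1.5),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, pts in ranked:
    print(f"{name}: {pts}")
```

Ties in the real list (e.g., the several magazines at 1.5 points) would simply sort adjacently under this rule.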


1. Asimov's (262 points)
2. Fantasy & Science Fiction (209.5)
3. Subterranean (82) (ran 2007-2014)
4. Clarkesworld (78) (started 2006)
5. (77.5) (started 2008)
6. Strange Horizons (51)
7. Analog (50.5)
8. Interzone (47.5)
9. Lightspeed (44.5) (started 2010)
10. SciFiction (26) (ceased 2005)
11. Fantasy Magazine (24) (merged into Lightspeed, 2012)
12. Postscripts (19) (ceased 2014)
13. Realms of Fantasy (16.5) (ceased 2011)
14. Beneath Ceaseless Skies (15) (started 2008)
15. Jim Baen's Universe (14.5) (ran 2006-2010)
16. Apex (13)
17. Electric Velocipede (7) (ceased 2013)
18. Intergalactic Medicine Show (6)
19. Black Static (5.5) (started 2007)
19. Helix SF (5.5) (ran 2006-2008)
21. The New Yorker (5)
22. Cosmos (4.5)
22. Tin House (4.5)
24. Flurb (4) (ran 2006-2012)
24. Lady Churchill's Rosebud Wristlet (4)
26. Black Gate (3.5)
26. McSweeney's (3.5)
28. Conjunctions (3)
28. GigaNotoSaurus (3) (started 2010)
30. Lone Star Stories (2.5) (ceased 2009)
31. Aeon Speculative Fiction (2) (ceased 2008)
31. Futurismic (2) (ceased 2010)
31. Harper's (2)
31. Weird Tales (2) (off and on throughout period)
36. Cemetery Dance (1.5)
36. Daily Science Fiction (1.5) (started 2010)
36. Nature (1.5)
36. On Spec (1.5)
36. Terraform (1.5) (started 2014)

(1.) The New Yorker, Tin House, McSweeney's, Conjunctions, and Harper's are prominent literary magazines that occasionally publish science fiction or fantasy. Cosmos and Nature are, respectively, a popular science magazine and a specialists' science journal that publish a little science fiction on the side. The remaining magazines focus on the F/SF genre.

(2.) Although Asimov's and F&SF dominate the ten-year list, things have recently equalized among the top several venues. The past three years show approximately a tie among the top four:

1. (50.5)
2. Asimov's (50)
3. Clarkesworld (44.5)
4. F&SF (41)
5. Lightspeed (26.5)
6. Subterranean (23)
7. Analog (19.5)
8. Strange Horizons (14)
9. Beneath Ceaseless Skies (13.5)
10. Interzone (12)

The ratio between #1 and #10 is about 4:1 in the past three years, as opposed to 10:1 in the ten-year data.

(3.) Another aspect of the venue-broadening trend is the rise of good podcast venues such as the Escape Artists' podcasts (Escape Pod, Podcastle, and Pseudopod), Drabblecast, and StarShipSofa. None of these qualify for my list by existing criteria, but podcasting might be the leading edge of a major change in the industry. It's fun to hear a short story podcast while driving or exercising, and people might increasingly obtain their short fiction that way. (Some text-based magazines, like Clarkesworld, are also now regularly podcasting their stories.)

(4.) A few new magazines have drawn recommendations this year from the notoriously difficult-to-please Lois Tilton, who reviews short fiction at Locus Online. All three are pretty cool, and I'm hoping to see one or more of them qualify for next year's updated list:

Unlikely Story (started 2011 as Journal of Unlikely Entomology, new format 2013)
The Dark (started 2013)
Uncanny (started 2014)

(5.) Philosophers interested in science fiction might also want to look at Sci Phi Journal, which publishes both science fiction with philosophical discussion notes and philosophical essays about science fiction.

(6.) Other lists: The SFWA qualifying markets list is a list of "pro" science fiction and fantasy venues based on pay rates and track records of strong circulation. is a regularly updated list of markets, divided into categories based on pay rate.

[image source; admittedly, it's not the latest issue!]