Friday, January 23, 2015

Memories of My Father

My father, Kirkland R. Gable (born Ralph Schwitzgebel) died Sunday. Here are some things I want you to know about him.

Of teaching, he said that authentic education is less about textbooks, exams, and technical skills than about moving students "toward a bolder comprehension of what the world and themselves might become." He was a beloved psychology professor at California Lutheran University.

I have never known anyone, I think, who brought as much creative fun to teaching as he did. He gave out goofy prizes to students who scored well on his exams (e.g., a wind-up robot nun who breathed sparks of static electricity: "nunzilla"). Teaching about alcoholism, he would start by pouring himself a glass of wine (actually, water with food coloring), pouring more wine and acting drunker, arguing with himself, as the class proceeded. Teaching about child development, he would bring in my sister or me, and we would move our mouths like ventriloquist dummies as he stood behind us, talking about Piaget or parenting styles (and then he'd ask our opinion about parenting styles). Teaching about neuroanatomy, he brought in a brain jello mold, which he sliced up and passed around class for the students to eat ("yum! occipital cortex!"). Etc.

As a graduate student and then assistant professor at Harvard in the 1960s and 1970s, he shared the idealism of his mentors Timothy Leary and B.F. Skinner, who thought that through understanding the human mind we could transform and radically improve the human condition -- a vision he carried through his entire life.

His comments about education captured his ideal for thinking in general: that we should always aim toward a bolder comprehension of what the world and we ourselves, and the people around us, might become.

He was always imagining the potential of the young people he met, seeing things in them that they often did not see in themselves. He especially loved juvenile delinquents, whom he encouraged to think expansively and boldly. He recruited them from street corners, paying them to speak their hopes and stories into reel-to-reel tapes, and he recorded their declining rates of recidivism as they did this, week after week. His book about this work, Streetcorner Research (1964), was a classic in its day. As a prospective graduate student in the 1990s, I proudly searched the research libraries at the schools I was admitted to, always finding multiple copies with lots of date stamps in the 1960s and 1970s.

With his twin brother Robert, he invented the electronic monitoring ankle bracelet, now used as an alternative to prison for non-violent offenders.

He wanted to set teenage boys free from prison, rewarding them for going to churches and libraries instead of street corners and pool halls. He had a positive vision rather than a penal one, and he imagined everyone someday using location monitors to share rides and to meet nearby strangers with mutual interests -- ideas which, in 1960, seem to have been about fifty years before their time.

With degrees in both law and psychology, he helped to reform institutional practice in insane asylums -- which were often terrible places in the 1960s, whose inmates had no effective legal rights. He helped force these institutions to become more humane and to release harmless inmates held against their will. I recall his stories about inmates who were often, he said, "as sane as could be expected, given their current environment", and maybe saner than their jailors -- for example, an old man who decades earlier had painted his neighbor's horse as an angry prank, and thought he'd "get off easy" if he convinced the court he was insane.

As a father, he modeled and rewarded unconventional thinking. We never had an ordinary Christmas tree that I recall -- always instead a cardboard Christmas Buddha (with blue lights poking through his eyes), or a stepladder painted green, or a wild-found tumbleweed carefully flocked and tinseled -- and why does it have to be on December 25th? I remember a few Saturdays when we got hamburgers from different restaurants and ate them in a neutral location -- I believe it was the parking lot of a Korean church -- to see which burger we really preferred. (As I recall, my sister and he settled on the Burger King Whopper, while I could never confidently reach a preference, because it seemed like we never got the methodology quite right.)

He loved to speak with strangers, spreading his warm silliness and unconventionality out into the world. If we ordered chicken at a restaurant, he might politely ask the server to "hold the feathers". Near the end of his life, if we went to a bank together he might gently make fun of himself, saying something like "I brought along my brain," here gesturing toward me with open hands, "since my other brain is sometimes forgetting things now". For years, though we lived nowhere near any farm, we had a sign from the Department of Agriculture on our refrigerator sternly warning us never to feed table scraps to hogs.

I miss him painfully, and I hope that I can live up to some of the potential he so generously saw in me, carrying forward some of his spirit.

-----------------------------------------------

I am eager to hear stories about his life from people he knew, so please, if you knew him, add one story (or more!) as a comment below. (Future visitors from 2018 or whenever, still post!) Stories are also being collected on his Facebook wall.

We are planning a memorial celebration for him in July to which anyone who knew him would be welcome to come. Please email me for details if you're interested.

Friday, January 16, 2015

Two Arguments for AI (or Robot) Rights: The No-Relevant-Difference Argument and the Simulation Argument

Wednesday, I argued that artificial intelligences created by us might deserve more moral consideration from us than do arbitrarily-chosen human strangers (assuming that the AIs are conscious and have human-like general intelligence and emotional range), since we will be partly responsible for their existence and character.

In that post, I assumed that such artificial intelligences would deserve at least some moral consideration (maybe more, maybe less, but at least some). Eric Steinhart has pressed me to defend that assumption. Why think that such AIs would have any rights?

First, two clarifications:

  • (1.) I speak of "rights", but the language can be weakened to accommodate views on which beings can deserve moral consideration without having rights.
  • (2.) AI rights is probably a better phrase than robot rights, since similar issues arise for non-robotic AIs, including oracles (who can speak but have no bodily robotic features like arms) and sims (who have simulated bodies that interact with artificial, simulated environments).

Now, two arguments.

-----------------------------------------

The No-Relevant-Difference Argument

Assume that all normal human beings have rights. Assume that both bacteria and ordinary personal computers in 2015 lack rights. Presumably, the reason bacteria and ordinary PCs lack rights is that there is some important difference between them and us. For example, bacteria and ordinary PCs (presumably) lack the capacity for pleasure or pain, and maybe rights only attach to beings with the capacity for pleasure or pain. Also, bacteria and PCs lack cognitive sophistication, and maybe rights only attach to beings with sufficient cognitive sophistication (or with the potential to develop such sophistication, or belonging to a group whose normal members are sophisticated). The challenge, for someone who would deny AI rights, would be to find a relevant difference which grounds the denial of rights.
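
In skeleton form (a schematization of my own, not anything in the original wording of the argument), the reasoning runs:

\[
\begin{array}{ll}
\textbf{P1.} & R(h) \text{ for any normal human } h. \\
\textbf{P2.} & \forall x\,\forall y\;\big[\big(R(x) \land \neg\exists D\,[\mathrm{Relevant}(D) \land D(x) \land \neg D(y)]\big) \rightarrow R(y)\big]. \\
\textbf{P3.} & \neg\exists D\,[\mathrm{Relevant}(D) \land D(h) \land \neg D(a)], \text{ for some possible AI } a. \\
\textbf{C.} & R(a).
\end{array}
\]

Read this way, the denier of AI rights must attack P3: produce a difference D that is both relevant and possessed by us but lacking in every possible AI.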

The defender of AI rights has some flexibility here. Offered a putative relevant difference, the defender of AI rights can either argue that that difference is irrelevant, or she can concede that it is relevant but argue that some AIs could have it and thus that at least those AIs would have rights.

What are some candidate relevant differences?

(A.) AIs are not human, one might argue; and only human beings have rights. If we regard "human" as a biological category term, then indeed AIs would not be human (excepting, maybe, artificially grown humans), but it's not clear why humanity in the biological sense should be required for rights. Many people think that non-human animals (apes, dogs) have rights. Even if you don't think that, you might think that friendly, intelligent space aliens, if they existed, could have rights. Or consider a variant of Blade Runner: There are non-humans among the humans, indistinguishable from outside, and almost indistinguishable in their internal psychology as well. You don't know which of your neighbors are human; you don't even know if you are human. We run a DNA test. You fail. It seems odious, now, to deny you all your rights on those grounds. It's not clear why biological humanity should be required for the possession of rights.

(B.) AIs are created by us for our purposes, and somehow this fact about their creation deprives them of rights. It's unclear, though, why being created would deprive a being of rights. Children are (in a very different way!) created by us for our purposes -- maybe even sometimes created mainly with their potential as cheap farm labor in mind -- but that doesn't deprive them of rights. Maybe God created us, with some purpose in mind; that wouldn't deprive us of rights. A created being owes a debt to its creator, perhaps, but owing a debt is not the same as lacking rights. (In Wednesday's post, I argued that in fact as creators we might have greater moral obligations to our creations than we would to strangers.)

(C.) AIs are not members of our moral community, and only members of our moral community have rights. I find this to be the most interesting argument. On some contractarian views of morality, we only owe moral consideration to beings with whom we share an implicit social contract. In a state of all-out war, for example, one owes no moral consideration at all to one's enemies. Arguably, were we to meet a hostile alien intelligence, we would owe it no moral consideration unless and until it began to engage with us in a socially constructive way. If we stood in that sort of warlike relation to AIs, then we might owe them no moral consideration even if they had human-level intelligence and emotional range. Two caveats on this: (1.) It requires a particular variety of contractarian moral theory, which many would dispute. And (2.) even if it succeeds, it will only exclude a certain range of possible AIs from moral consideration. Other AIs, presumably, if sufficiently human-like in their cognition and values, could enter into social contracts with us.

Other possibly relevant differences might be proposed, but that's enough for now. Let me conclude by noting that mainstream versions of the two most dominant moral theories -- consequentialism and deontology -- don't seem to contain provisions on which it would be natural to exclude AIs from moral consideration. Many consequentialists think that morality is about maximizing pleasure, or happiness, or desire satisfaction. If AIs have normal human cognitive abilities, they will have the capacity for all these things, and so should presumably figure in the consequentialist calculus. Many deontologists think that morality involves respecting other rational beings, especially beings who are themselves capable of moral reasoning. AIs would seem to be rational beings in the relevant sense. If it proves possible to create AIs who are psychologically similar to us, those AIs wouldn't seem to differ from natural human beings in the dimensions of moral agency and patiency emphasized by these mainstream moral theories.

-----------------------------------------

The Simulation Argument

Nick Bostrom has argued that we might be sims. That is, he has argued that we ourselves might be artificial intelligences acting in a simulated environment that is run on the computers of higher-level beings. If we allow that we might be sims, and if we know we have rights regardless of whether or not we are sims, then it follows that being a sim can't, by itself, be sufficient grounds for lacking rights. There would be at least some conceivable AIs who have rights: the sim counterparts of ourselves.
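
Schematically (my regimentation, with the box and diamond read epistemically rather than metaphysically):

\[
\begin{array}{ll}
\textbf{P1.} & \Diamond\,\mathrm{Sim}(i) \quad \text{(for all I know, I am a sim)} \\
\textbf{P2.} & \Box\,\mathrm{Rights}(i) \quad \text{(I have rights in every case my evidence leaves open)} \\
\textbf{C.} & \Diamond\,[\mathrm{Sim}(i) \land \mathrm{Rights}(i)] \quad \text{(so simhood does not entail rightslessness)}
\end{array}
\]

The step from P1 and P2 to C is valid in any normal modal logic: if every open case is one where I have rights, and some open case is one where I am a sim, then some open case contains a sim with rights.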

This whole post assumes optimistic technological projections -- assumes that it is possible to create human-like AIs whose rights, or lack of rights, are worth considering. Still, you might think that robots are possible but sims are not; or you might think that although sims are possible, we can know for sure that we ourselves aren't sims. The Simulation Argument would then fail. But it's unclear what would justify either of these moves. (For more on my version of sim skepticism, see here.)

Another reaction to the Simulation Argument might be to allow that sims have rights relative to each other, but no rights relative to the "higher level" beings who are running the sim. Thus, if we are sims, we have no rights relative to our creators -- they can treat us in any way they like without risking moral transgression -- and similarly any sims we create have no rights relative to us. This would be a version of argument (B) above, and it seems weak for the same reasons.

One might hold that human-like sims would have rights, but not other sorts of artificial beings -- not robots or oracles. But why not? This puts us back into the No-Relevant-Difference Argument, unless we can find grounds to morally privilege sims over robots.

-----------------------------------------

I conclude that at least some artificial intelligences, if they have human-like experience, cognition, and emotion, would have at least some rights, or deserve at least some moral consideration. What range of AIs deserve moral consideration, and how much moral consideration they deserve, and under what conditions, I leave for another day.

-----------------------------------------


Wednesday, January 14, 2015

Our Moral Duties to Artificial Intelligences

Suppose that we someday create artificial beings similar to us in their conscious experience, in their intelligence, in their range of emotions. What moral duties would we have to them?

You might think: Our moral duties to them would be similar to our moral duties to natural human beings. A reasonable default view, perhaps. If morality is about maximizing happiness (a common consequentialist view), these beings would deserve consideration as loci of happiness. If morality is about respecting the autonomy of rational agents (a common deontological view), these beings would deserve consideration as fellow rational agents.

One might argue that our moral duties to such beings would be less. For example, you might support the traditional Confucian ideal of "graded love", in which our moral duties are greatest for those closest to us (our immediate family) and decline with distance, in some sense of "distance": You owe less moral consideration to neighbors than to family, less to fellow-citizens than to neighbors, less to citizens of another country than to citizens of your own country -- and still less, presumably, to beings who are not even of your own species. On this view, if we encountered space aliens who were objectively comparable to us in moral worth from some neutral point of view, we might still be justified in favoring our own species, just because it is our own species. And artificial intelligences might properly be considered a different species in this respect. Showing equal concern for an alien or artificial species, including possibly sacrificing humanity for the good of that other species, might constitute a morally odious disloyalty to one's kind. Go, Team Human?

Another reason to think our moral duties might be less, or more, involves emphasizing that we would be the creators of these beings. Our god-like relationship to them might be especially vivid if the AIs exist in simulated environments controlled by us rather than as ordinarily embodied robots, but even in the robot case we would presumably be responsible for their existence and design parameters.

One might think that if these beings owe their existence and natures to us, they should be thankful to us as long as they have lives worth living, even if we don't treat them especially well. Suppose I create a Heaven and a Hell, with AIs I can transfer between the two locations. In Heaven, they experience intense pleasure (perhaps from playing harps, which I have designed them to intensely enjoy). In Hell, I torture them. As I transfer Job, say, from Heaven to Hell, he complains: "What kind of cruel god are you? You have no right to torture me!" Suppose I reply: "You have been in Heaven, and you will be in Heaven again, and your pleasures there are sufficient to make your life as a whole worth living. In every moment, you owe your very life to me -- to my choice to expend my valuable resources instantiating you as an artificial being -- so you have no grounds for complaint!" Maybe, even, I wouldn't have bothered to create such beings unless I could play around with them in the Torture Chamber, so their very existence is contingent upon their being tortured. All I owe such beings, perhaps, is that their lives as a whole be better than non-existence. (My science fiction story Out of the Jar features a sadistic teenage God who reasons approximately like this.)

Alternatively (and the first narrator in R. Scott Bakker's and my story Reinstalling Eden reasons approximately like this), you might think that our duties to the artificial intelligences we create are something like the duties a parent has to a child. Since we created them, and since we have godlike control over them (either controlling their environments, their psychological parameters, or both), we have a special duty to ensure their well-being, which exceeds the duty we would have to an arbitrary human stranger of equal cognitive and emotional capacity. If I create an Adam and Eve, I should put them in an Eden, protect them from unnecessary dangers, ensure that they flourish.

I tend to favor the latter view. But it's worth clarifying that our relationship isn't quite the same as parent-child. A young child is not capable of fully mature practical reasoning; that's one reason to take a paternalistic attitude to the child, including overriding the child's desires (for ice cream instead of broccoli) for the child's own good. It's less clear that I can justify being paternalistic in exactly that way in the AI case. And in the case of an AI, I might have much more capacity to control what they desire than I have in the case of my children -- for example, I might be able to cause the AI to desire nothing more than to sit on a cloud playing a harp, or I might cause the AI to desire its own slavery or death. To the extent this is true, this complicates my moral obligations to the AI. Respecting a human peer involves giving them a lot of latitude to form and act on their own desires. Respecting an AI whose desires I have shaped, either directly or indirectly through my early parameterizations of its program, might involve a more active evaluation of whether its desires are appropriate. If the AI's desires are not appropriate -- for example, if it desires things contrary to its flourishing -- I'm probably at least partly to blame, and I am obliged to do some mitigation that I would probably not be obliged to do in the case of a fellow human being.

However, to simply tweak around an AI's desire parameters, in a way the AI might not wish them to be tweaked, seems to be a morally problematic cancellation of its autonomy. If my human-intelligence-level AI wants nothing more than to spend every waking hour pressing a button that toggles its background environment between blue and red, and it does so because of how I programmed it early on, then (assuming we reject a simple hedonism on which this would count as flourishing), it seems I should do something to repair the situation. But to respect the AI as an individual, I might have to find a way to persuade it to change its values, rather than simply reaching my hand in, as it were, and directly altering its values. This persuasion might be difficult and time-consuming, and yet incumbent upon me because of the situation I've created.

Other shortcomings of the AI might create analogous demands: We might easily create problematic environmental situations or cognitive structures for our AIs, which we are morally required to address because of our role as creators, and yet which are difficult to address without creating other moral violations. And even on a Confucian graded-love view, if species membership is only one factor among several, we might still end up with special obligations to our AIs: In some morally relevant sense of "distance" creator and created might be very close indeed.

On the general principle that one has a special obligation to clean up messes that one has had a hand in creating, I would argue that we have a special obligation to ensure the well-being of any artificial intelligences we create. And if genuinely conscious human-grade AI somehow becomes cheap and plentiful, surely there will be messes, giant messes -- whole holocausts' worth, perhaps. With god-like power comes god-like responsibility.


[Thanks to Carlos Narziss for discussion.]


Monday, January 12, 2015

The 10 Worst Things About Listicles

10. Listicles destroy the narrative imagination and subtract the sublimity from your gaze.

9. The numerosities of nature never equal the numerosity of human fingers.

8. The spherical universe becomes pretzel sticks upon a brief conveyor.

7. In every listicle, opinion subverts fact, riding upon it as upon a sad pony. (Since you momentarily accept everything you hear, you already know this.)

6. The human mind naturally aspires to unifying harmonies that the listicle instead squirts into yogurt cups.

5. Those ten yogurt pretzels spoiled your dinner. That is precisely the relation between a listicle and life.

4. In their eagerness to consume the whole load, everyone skips numbers 4 and 3, thereby paradoxically failing to consume the whole load. This little-known fact might surprise you!

3. Why bother, really. La la.

2. Near the end of eating a listicle you begin to realize, once again, that if you were going to be doing something this pointless, you might as well have begun filing that report. Plus, whatever became of that guy you knew in college? Is this really your life? Existential despair squats atop your screen, a fat brown google-eyed demon.

1. Your melancholy climax is already completed. The #1 thing is never the #1 thing. Your hope that this listicle would defy that inevitable law was only absurd, forlorn, Kierkegaardian faith.


Wednesday, January 07, 2015

Psychology Research in the Age of Social Media

Popular summaries of fun psychology articles regularly float to the top of my Facebook feed. I was particularly struck by this fact on Monday when these two links popped up:

The links were striking to me because both of the articles reported studies that I regarded as methodologically rather weak, though interesting and fun. It occurred to me to wonder whether social media might be seriously aggravating the perennial plague of sexy but dubious psychological research.

Below, I will present some dubious data on this very issue!

But first, why think these two studies are methodologically dubious? [Skip the next two paragraphs if you like.]

The first article, on whether the rich are jerks, is based largely on a study I critiqued here. Among that study's methodological oddities: The researchers set up a jar of candy in their lab, ostensibly for children in another laboratory, and then measured whether wealthy participants took more candy from the jar than less-wealthy participants. Cute! Clever! But here's the weird thing. Despite the fact that the jar was in their own lab, they measured candy-stealing by asking participants whether they had taken candy rather than by directly observing how much candy was taken. What could possibly justify this methodological decision, which puts the researchers at a needless remove from the behavior being measured and conflates honesty with theft? The linked news article also mentions a study suggesting that expensive cars are more likely to be double-parked. Therefore, see, the rich really are jerks! Of course, another possibility is that the wealthy are just more willing to risk the cost of a parking ticket.

The second article highlights a study that examines a few recently-popular measures of "individualistic" vs. "collectivistic" thinking, such as the "triad" task (e.g., whether the participant pairs trains with buses [because of their category membership, supposedly individualistic] or with tracks [because of their functional relation, supposedly collectivistic] when given the three and asked to group two together). According to the study, the northern Chinese, from wheat-farming regions, are more likely to score as individualistic than are the southern Chinese, from rice-farming regions. A clever theory is advanced: wheat farming is individualistic, rice farming communal! (I admit, this is a cool theory.) How do we know that difference is the source of the different performance on the cognitive tasks? Well, two alternative hypotheses are tested and found to be less predictive of "individualistic" performance: pathogen prevalence and regional GDP per capita. Now, the wheat vs. rice difference is almost a perfect north-south split. Other things also differ between northern and southern China -- other aspects of cultural history, even the spoken language. So although the data fit nicely with the wheat-rice theory, many other possible explanations of the data remain unexplored. A natural starting place might be to look at rice vs. wheat regions in other countries to see if they show the same pattern. At best, the conclusion is premature.

I see the appeal of this type of work: It's fun to think that the rich are jerks, or that there are major social and cognitive differences between people based on the agricultural methods of their ancestors. Maybe, even, the theories are true. But it's a problem if the process by which these kinds of studies trickle into social media has much more to do with how fun the results are than with the quality of the work. I suspect the problem is especially serious if academic researchers who are not specialists in the area take the reports at face value, and if these reports then become a major part of their background sense of what psychological research has recently revealed.

Hypothetically, suppose a researcher measured whether poor people are jerks by judging whether people in more or less expensive clothing were more or less likely to walk into a fast-food restaurant with a used cup and steal soda. This would not survive peer review, and if it did get published, objections would be swift and angry. It wouldn't propagate through Facebook, except perhaps as the butt of critical comments. It's methodologically similar, but the social filters would be against it. I conjecture that we should expect to find studies arguing that the rich are morally worse, or finding no difference between rich and poor, but not studies arguing that the poor are morally worse (though they might be found to have more criminal convictions or other "bad outcomes"). (For evidence of such a filtering effect on studies of the relationship between religion and morality, see here.)

Now I suspect that in the bad old days before Facebook and Twitter, popular media reports about psychology had less influence on philosophers' and psychologists' thinking about areas outside their speciality than they do now. I don't know how to prove this, but I thought it would be interesting to look at the usage statistics on the 25 most-downloaded Psychological Science articles in December 2014 (excluding seven brief articles without links to their summary abstracts).

The article with the most views of its summary abstract was The Pen Is Mightier Than the Keyboard: Advantages of Longhand over Laptop Note Taking. Fun! Useful! The article had 22,389 abstract views in December 2014. It also had 1,320 full text or PDF downloads. Thus, it looks like at most 6% of the abstract viewers bothered to glance at the methodology. (I say "at most" because some viewers might go straight through to the full text without viewing the abstract separately. See below for evidence that this happens with other articles.) Thirty-seven news outlets picked up the article, according to Psychological Science, tied for highest with Sleep Deprivation and False Memories, which had 4,786 abstract views and 645 full text or PDF downloads (at most 13% clicking through).

Contrast these articles with the "boring" articles (not boring to specialists!). The single most downloaded article was The New Statistics: Why and How: 1,870 abstract views, 4,717 full-text and PDF views -- more than twice as many full views as abstract views. Psychological Science reports no media outlets picking this one up. I guess people interested in statistical methods want to see the details of the articles about statistical methods. One other article had more full views than abstract views: the ponderously titled Retraining Automatic Action Tendencies Changes Alcoholic Patients’ Approach Bias for Alcohol and Improves Treatment Outcome: 164 abstract views and 274 full views (67% more full views than abstract views). That article was only picked up by one media outlet. Overall, I found an r = -.49 (p = .01) correlation between the number of news-media pickups and the log of the ratio of full-text views to abstract views.
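
For concreteness, here is roughly what that last computation looks like in code -- a sketch only: apart from the four articles quoted above, the rows below are invented placeholders, not Psychological Science's actual December data.

```python
import numpy as np
from scipy.stats import pearsonr

# (news-media pickups, abstract views, full-text/PDF views)
articles = [
    (37, 22389, 1320),  # "The Pen Is Mightier Than the Keyboard"
    (37, 4786, 645),    # "Sleep Deprivation and False Memories"
    (0, 1870, 4717),    # "The New Statistics: Why and How"
    (1, 164, 274),      # "Retraining Automatic Action Tendencies..."
    (5, 3100, 410),     # placeholder rows standing in for the
    (2, 980, 350),      #   other most-downloaded articles
]

pickups = np.array([row[0] for row in articles], dtype=float)
# Log of the full-views-to-abstract-views ratio: positive when an article
# is read in full more often than it is merely skimmed.
log_ratio = np.log([full / abstract for _, abstract, full in articles])

r, p = pearsonr(pickups, log_ratio)
print(f"r = {r:.2f}, p = {p:.2f}")
```

Taking the log of the ratio keeps articles with wildly different view counts on a comparable scale before correlating.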

I suppose it's not surprising that articles picked up by the media attract more casual readers who will view the abstract only. I have no way of knowing whether many of these readers are fellow academics in philosophy, psychology, and the other humanities and social sciences. But if so, and if my hypothesis is correct that academic researchers are increasingly exposed to psychological research based on Tweetability rather than methodological quality, that's bad news for the humanities and social sciences. Even if the rich really are jerks.

Thursday, January 01, 2015

Writings of 2014

I guess it's a tradition for me now, posting a retrospect of the past year's writings on New Year's Day. (Here are the retrospects of 2012 and 2013.)

Two notable things this past year were (a.) several essays on a topic that is new to me: skeptical epistemology of metaphysics ("The crazyist metaphysics of mind", "If materialism is true, the United States is probably conscious", "1% skepticism", "Experimental evidence for the existence of an external world"); and (b.) a few pieces of philosophical science fiction, a genre in which I had only published one co-authored piece before 2014. These two new projects are related: one important philosophical function of science fiction is to enliven metaphysical possibilities that one might not otherwise have taken seriously, thus opening up more things to be undecided among.

Of course I also continued to work on some of my other favorite topics: self-knowledge, moral psychology, the nature of attitudes, the moral behavior of ethicists. Spreading myself a bit thin, perhaps!

Non-fiction appearing in print in 2014:

Non-fiction finished and forthcoming:
Non-fiction in draft and circulating:
Some favorite blog posts:
Fiction:
  • “What kelp remembers”, Weird Tales: Flashes of Weirdness series, #1, April 14, 2014.
  • “The tyrant’s headache”, Sci Phi Journal, issue 3. [appeared Dec. 2014, dated Jan. 2015]
  • “Out of the jar”, The Magazine of Fantasy and Science Fiction [forthcoming].
  • “Momentary sage”, The Dark [forthcoming].
  • (reprint) “Reinstalling Eden” (first author, with R. Scott Bakker), in S. Schneider, ed., Science Fiction and Philosophy, 2nd ed. (Wiley-Blackwell) [forthcoming].
(I also published a couple of poems, and I have several science fiction stories in draft. If you're interested to see any of these, feel free to email me.)

Monday, December 29, 2014

"The Tyrant's Headache" in Sci Phi Journal

According to a broad class of materialist views, conscious experiences -- such as the experience of pain -- do not supervene on the local physical state of the being who is having those conscious experiences. Rather, they depend in part on the past evolutionary or learning history of the organism (Fred Dretske) or on what is "normal" for members of its group (David Lewis). These dependencies are not just causal but metaphysical: The very same (locally defined) brain state might be experienced as pain by one organism and as non-pain by another organism, in virtue of differences in the organisms' past history or group membership, even if the two organisms are molecule-for-molecule identical at the moment in question.

Donald Davidson's Swampman example is typically used to make this point vivid: You visit a swamp. Lightning strikes, killing you. Simultaneously, through incredibly-low-odds freak quantum chance, a being who is molecule-for-molecule identical to you emerges from the swamp. Does this randomly-congealed Swampman, who lacks any learning history or evolutionary history, experience pain when it stubs its toe? Many people seem to have the hunch or intuition that, yes, it would; but any externalist who thinks that consciousness requires a history will have to say no. Dretske makes clear in his 1995 book that he is quite willing to accept this consequence. Swampman feels no pain.

But Swampman cases are only the start of it! If pain depends, for example, on what is normal for one's species, then one ought to be able to relieve a headache by altering one's conspecifics -- for example, by killing enough of them to change what is "normal" for the species: anaesthesia by genocide. And in general, any view that denies local supervenience while allowing the presence or absence of pain to depend on other currently ongoing events (rather than only on events in the past) should allow that there will be conditions under which one can end one's own pain by changing other people even without any changes in one's own locally-defined material configuration.

To explore this issue further, I invented a tyrant with a headache, who will do anything to other people to end his headache, without changing any of his own relevant internally-defined brain states.

"The Tyrant's Headache" is a hybrid between a science fiction story and an extended philosophical thought experiment. It has just come out in Sci Phi Journal -- a new journal that publishes both science fiction stories and philosophical essays about science fiction. The story/essay is behind a paywall for now ($3.99 at Amazon or Castalia House). But consider buying! Your $3.99 will support a very cool new journal, and it will get you, in addition to my chronicle of the Tyrant's efforts to end his headache (also featuring David K. Lewis in magician's robes), three philosophical essays about science fiction, eight science fiction stories that explore other philosophical themes, part of a continuing serial, and a review. $3.99 well spent, I hope, and dedicated to strengthening the bridge between science fiction and philosophy.

[See also Anaesthesia by Genocide, David Lewis, and a Materialistic Trilemma]


Sunday, December 28, 2014

The Moral World of Dreidel

I used to think dreidel was a poorly designed game of luck. Now I realize that its "bugs" are really features! Dreidel is the moral world in miniature.

Primer for goys: You sit in a circle with friends or relatives and take turns spinning a wobbly top (the dreidel). In the center of the circle is a pot of several foil-wrapped chocolate coins. If the four-sided top lands on the Hebrew letter gimmel, you take the whole pot and everyone needs to contribute to the pot again. If it lands on hey, you take half the pot. If it lands on nun, nothing happens. If it lands on shin, you put a coin in. Then the next player takes a turn.
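
For concreteness, here's the primer rendered as a toy Python model (a sketch of my own; anything the primer doesn't settle -- a fair dreidel, equal-sized coins, hey rounding down, a one-coin re-ante after gimmel -- I've simply legislated):

```python
import random

FACES = ["gimmel", "hey", "nun", "shin"]

def take_turn(spinner, coins, pot):
    """One spin for `spinner`, given each player's coin count in `coins`.

    Idealizations: the dreidel is fair, all coins are equal, hey rounds
    down, and after a gimmel everyone re-antes exactly one coin.
    """
    face = random.choice(FACES)
    if face == "gimmel":
        coins[spinner] += pot          # take the whole pot...
        pot = 0
        for player in coins:           # ...then everyone antes one coin
            coins[player] -= 1
            pot += 1
    elif face == "hey":
        taken = pot // 2               # half the pot, rounded down
        coins[spinner] += taken
        pot -= taken
    elif face == "shin":
        coins[spinner] -= 1
        pot += 1
    # nun: nothing happens
    return pot

# Example round: three players, five coins each, starting pot of three.
coins = {"Kate": 5, "Eric": 5, "Guest": 5}
pot = 3
for name in coins:
    pot = take_turn(name, coins, pot)
```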

It all sounds very straightforward, until you actually start to play the game.

First off: Some coins are big, others little. If the game were fair, all the coins would be the same size, or at least there would be clear rules about tradeoffs or about when you're supposed to contribute your big coins and little coins. Also, there's never just one dreidel, and the dreidels all seem to be uneven and biased. (This past Hanukkah, my daughter Kate and I spun a sample of dreidels 40 times each. One in particular landed on shin an incredible 27/40 spins. [Yes, p < .001, highly significant, even with a Bonferroni correction.]) No one agrees whether you should round up or round down with hey; no one agrees when the game should end or how low the pot should be before you all have to contribute again. (You could look at various alleged authorities on the internet, but people prefer to argue and employ varying house rules.) No one agrees whether you should let someone borrow coins if they run out, or how many coins to start with. Some people hoard their coins; others slowly unwrap and eat them while playing, then beg and borrow from their wealthy neighbors.
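
(For the statistically curious, that significance claim is just a one-sided binomial test against the fair-dreidel chance of 1/4 per face. A sketch, with the number of dreidels in our sample -- which I haven't recorded here -- entered as a placeholder for the Bonferroni multiplier:)

```python
from scipy.stats import binomtest

# 27 of 40 spins landed on shin; a fair four-sided dreidel gives p = 1/4.
result = binomtest(27, n=40, p=0.25, alternative="greater")

n_dreidels = 5  # placeholder: one test per dreidel spun
print(result.pvalue)               # on the order of 1e-8
print(result.pvalue * n_dreidels)  # crude Bonferroni correction; still << .001
```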

You can, if you want, always push things to your advantage: Always contribute the smallest coins you can, always withdraw the biggest coins you can, insist on using what seems to be the "best" dreidel, always argue for rule-interpretations in your favor, eat your big coins and use that as a further excuse to only contribute little ones, etc. You could do all this without ever once breaking the rules, and you'd probably end up with the most chocolate as a result.

But here's the brilliant part: The chocolate isn't very good. After eating a few coins, the pleasure gained from further coins is pretty minimal. As a result, almost all the children learn that they enjoy being kind and generous more than they enjoy hoarding up more coins. The pleasure of the chocolate doesn't outweigh the yucky feeling of being a stingy, argumentative jerk. After a few turns of maybe pushing only small coins into the pot, you decide you should put a big coin in next time, even though the rules don't demand it -- just to be fair to others, and to be perceived as fair by them.

Of course, it also feels bad always to be the most generous one -- always to put in big, take out small, always to let others win the rules-arguments, etc., to play the sucker or self-sacrificing saint. Dreidel is a practical lesson in discovering the value of fairness both to oneself and others, in a context where proper interpretation of the rules is unclear, and where there are norm violations that aren't rule violations, and where both norms and rules are negotiable, varying from occasion to occasion -- just like life itself, but with only mediocre chocolate at stake.


Tuesday, December 23, 2014

Nussbaum on the Moral Bright Side of Literature

In Poetic Justice, her classic defense of the moral value of the "literary imagination", Martha Nussbaum writes about the children's song "Twinkle, twinkle little star" that:

the fact is that the nursery song itself, like other such songs, nourishes the ascription of humanity, and the prospect of friendship, rather than paranoid sentiments of being persecuted by a hateful being in the sky. It tells the child to regard the star as "like a diamond," not like a missile of destruction, and also not like a machine good only for production and consumption. In this sense, the birth of fancy is non-neutral and does, as Dickens indicates, nourish a generous construction of the seen (p. 39).
Nussbaum also argues that the literary imagination favors the oppressed over the aristocracy:
Whitman calls his poet-judge an "equalizer." What does he mean? Why should the literary imagination be any more connected with equality than with inequality, or with democratic rather than aristocratic ideals?... When we read [the Dickens novel] Hard Times as sympathetic participants, our attention has a special focus. Since the sufferings and anxieties of the characters are among the central bonds between reader and work, our attention is drawn in particular to those characters who suffer and fear. Characters who are not facing any adversity simply do not hook us in as readers (p. 90).
Does listening to nursery rhymes and reading literature cultivate generous and sympathetic friendship, across class and ethnic divides, as Nussbaum seems to think it does? Maybe so! But the evidence isn't really in yet. Nursery rhymes can also be dark and unsympathetic -- "Rock-a-Bye Baby", "Jack and Jill" -- and I must say that it seems to me that aristocrats are over-represented in literature and are more common targets of our sympathies than are the poor. We sympathize with Odysseus, with Hamlet, with the brave knight, with the wealthy characters in Eliot, James, and Fitzgerald, and we tend to overlook the servants around them, except in works intentionally written (as Hard Times was) to turn our eyes toward the working class. True, if these characters had no adversities, they wouldn't engage us; but Hamlet suffers adversity enough to capture sympathy despite ample wealth.

Children's literature (especially pre-Disney) mocks and chuckles and laughs callously at suffering as much as it expresses the ideals of wonder and friendship. Children's literature represents the full moral range of human impulses, for good and bad; it would be surprising if that were not so. The same with movies, novels, television, every medium. And "fancy" -- that is, the metaphorical imagination (p. 38) -- can be quite dark and paranoid (especially at night), and sadistic, and sexual, and vengeful, and narcissistic. Fancy is as morally mixed as those who do the fancying.

One might even argue, contra Nussbaum, that there is an aristocratic impulse in literature, a default tendency to present as its focal figures people of great social power, since the socially powerful are typically the ones who do the most exciting things on which the future of their worlds depends. The literary eye is drawn to Lincoln and Caesar and their equivalents, more than to the ordinary farmer who never leaves his land. It takes an egalitarian effort to excite the reader equally about the non-great. And although we are sympathetic with focal figures, the death of non-focal figures (e.g., foes in battle) might tend to excite less sympathy in literature than in real life.

Nussbaum has cherry-picked her sample. She might be right that, on balance, we are morally improved by a broad consumption of literature. (Or at least by "good" literature? But let's be careful about what we build into "good" here, lest we argue in a circle.) But if so, I don't think the case can be made on the grounds that literature tends, overall, to be anti-aristocratic and broadly sympathetic. Nor do I think there is much direct empirical evidence on this question, such as longitudinal studies comparing the moral behavior and attitudes of those extensively exposed to literature to those not so exposed. (Impressionistically, I'd say literature professors don't seem much morally better, for all their exposure, than others of similar education and social background with less exposure; but the study has never been done.)

It's an interesting and important question, what the moral effects of reading literature are -- but to my mind, wide open.


Tuesday, December 16, 2014

Moral Order and Immanent Justice

Let's say the world is morally ordered if good things come to those who act morally well and bad things come to those who act morally badly.

Moral order admits of degrees. We might say that the world is perfectly morally ordered if everyone gets exactly what they morally deserve, perfectly immorally ordered if everyone gets the opposite of what they morally deserve, and has no moral order if there's no relationship between what one deserves and what one gets.
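
To put the degreed notion a bit more formally (my gloss; nothing below hangs on the details): let $D_i$ be what person $i$ morally deserves and $F_i$ what actually befalls them, and read the degree of moral order as the correlation

\[
\mathrm{MO} \;=\; \mathrm{corr}(D, F) \;=\; \frac{\mathrm{Cov}(D, F)}{\sigma_D \, \sigma_F},
\]

so that $\mathrm{MO} = 1$ is perfect moral order, $\mathrm{MO} = -1$ perfect immoral order, and $\mathrm{MO} = 0$ no moral order. The variations by subgroup and action type below are then just this statistic computed over restricted populations or act types.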

Moral order might vary by subgroup of individuals considered. Perhaps the world is better morally ordered in 21st century Sweden than it was in 1930s Russia. Perhaps the world is better morally ordered among some ethnicities or social classes than among others. Class differences highlight the different ways in which moral order can fail: Moral order can fail among the privileged if they do not suffer for acting badly, can fail among the disadvantaged if they do not benefit from acting well.

Moral order might vary by action type. Sexual immorality might more regularly invite disaster than financial immorality, or vice versa. Kindness to those you know well might precipitate deserved benefits or undeserved losses more dependably than kindness to strangers.

Moral order can be immanent or transcendent. Transcendent moral order is ensured by an afterlife. Immanent moral order eschews the afterlife and is either magical (mystical attraction of good or bad fortune) or natural.

Some possible natural mechanisms of immanent moral order:

* A just society. Obviously.

* A natural attraction to morality of the sort Mencius finds in us. Our hearts are delighted, Mencius says, when we see people do what's plainly good and revolted when we see people do what's plainly wrong. Even if this impulse is weak, it might create a constant pressure to reward people for doing the right and revile them for doing the wrong; and it might add pleasure to one's own personal choices of the right over the wrong.

* The Dostoyevskian and Shakespearian psychological reactions to crime. Crime might generate fear of punishment or exposure, including exaggerated fear; it might lead to a loss of intimacy with others if one must hide one's criminal side from them; and it might encourage further crimes, accumulating risk.

* Shaping our preferences toward noncompetitive goods over competitive ones. If you aim to be richer than your neighbors, or more famous, or triumphant in physical, intellectual, or social battle, then you put your happiness at competitive risk. The competition might encourage morally bad choices; and maybe success in such aims is poorly morally ordered or even negatively morally ordered. Desires for non-competitive goods -- the pleasures of shared friendship and a good book -- seem less of a threat to the moral order (though books and leisure time are not free, and so subject to some competitive pressures). And if it's the case that we can find as much or more happiness in easily obtainable non-competitive goods, then even if wealth goes to the jerks, the world might be better morally ordered than it at first seems.

How morally ordered is the world? Do we live in a world where the knaves flourish while the sweethearts are crushed underfoot? Or do people's moral choices tend to come back around to them in the long run? No question, I think, is more central to one's general vision of the world, that is, to one's philosophy in the broad and proper sense of "philosophy". All thoughtful people have at least implicit opinions about the matter, I think -- probably explicit opinions, too.

Yet few contemporary philosophers address the issue in print. We seem happy to leave the question to writers of fiction.

Tuesday, December 09, 2014

Knowing Something That You Think Is Probably False

I know where my car is parked. It's in the student lot on the other side of the freeway, Lot 30. How confident am I that my car is parked there? Well, bracketing radically skeptical doubts, I'd say about 99.9% confident. I seem to have a specific memory of parking this morning, but maybe that specific memory is wrong; or maybe the car has been stolen or towed or borrowed by my wife due to some weird emergency. Maybe about once in every three years of parking, something like that will happen. Let's assume (from a god's-eye perspective) that no such thing has happened. I know, but I'm not 100% confident.

Justified degree of confidence doesn't align neatly with the presence or absence of knowledge, at least if we assume that it's true that I know where my car is parked (with 99.9% confidence) but false that I know that my lottery ticket will lose (despite 99.9999% confidence it will lose). (For puzzles about such cases, see Hawthorne 2004 and subsequent discussion.) My question for this post is, how far can this go? In particular, can I know something about which I'm less than 50% confident?

"I know that my car is parked in Lot 30; I'm 99.9% confident it's there." -- although that might sound a little jarring to some ears (if I'm only 99.9% confident, maybe I don't really know?), it sounds fine to me, perhaps partly because I've soaked so long in fallibilist epistemology. "I know that my car is parked in Lot 30; I'm 80% confident it's there." -- this sounds a bit odder, though perhaps not intolerably peculiar. Maybe "I'm pretty sure" would be better than "I know"? But "I know that my car is parked in Lot 30; I'm 40% confident it's there." -- that just sounds like a bizarre mistake.

On the other hand, Blake Myers-Schulz and I have argued that we can know things that we don't believe (or about which we are in an indeterminate state between believing and failing to believe). Maybe some of our cases constitute knowledge of some proposition simultaneously with < 50% confidence in that proposition?

I see at least three types of cases that might fit: self-deception cases, temporary doubt cases, and mistaken dogma cases.

Self-deception. Gernot knows that 250 pounds is an unhealthy weight for him. He's unhappy about his weight; he starts half-hearted programs to lose weight; he is disposed to agree when the doctor tells him that he's too heavy. He has seen and regretted the effects of excessive weight on his health. Nonetheless he is disposed, in most circumstances, to say to himself that he's approximately on the fence about whether 250 pounds is too heavy, that he's 60% confident that 250 is a healthy weight for him and 40% confident he's too heavy.

Temporary doubt. Kate studied hard for her test. She knows that Queen Elizabeth died in 1603, and that's what she writes on her exam. But in the moment of writing, due to anxiety, she feels like she's only guessing, and she thinks it's probably false that Elizabeth died in 1603. 1603 is just her best guess -- a guess about which she feels only 40% confident (more confident than about any other year).

Mistaken dogma. Kaipeng knows (as do we all) that death is bad. But he has read some Stoic works arguing that death is not bad. He feels somewhat convinced by the Stoic arguments. He'd (right now, if asked) sincerely say that he has only a 40% credence that death is bad; and yet he'd (right now, if transported) tremble on the battlefield, regret a friend's death, etc. Alternatively: Karen was raised a religious geocentrist. She takes an astronomy class in college and learns that the Earth goes around the sun, answering correctly (and in detail) when tested about the material. She now knows that the Earth goes around the sun, though she feels only 40% confident that it does and retains 60% confidence in her religious geocentrism.

The examples -- mostly adapted from Schwitzgebel 2010, Myers-Schulz and Schwitzgebel 2013, and Murray, Sytsma, and Livengood 2013 -- require fleshing out and perhaps also a bit of theory to be convincing. I offer a variety because I suspect different examples will resonate with different readers. I aim only for an existence claim: As long as there is a way of fleshing out one of these examples so that the subject knows a proposition toward which she has only 40% confidence, I'll consider it success.

As I just said, it might help to have a bit of theory here. So consider this model of knowledge and confidence:

You know some proposition P if you have it -- metaphorically! -- stored in your memory and available for retrieval in such a way that we can rightly hold you responsible for acting or not acting on account of it (and P is true, justified, etc.).

You're confident about some proposition P just in case you'd wager on it, and endorse it, and have a certain feeling of confidence in doing so. (If the wagering, expressing, and feeling come apart, it's a non-canonical, in-between case.)

There will be cases where a known proposition -- because it is unpleasant, or momentarily doubted, or in conflict with something else one wants to endorse -- does not effectively guide how you would wager or govern how you feel. But we can accuse you. We can say, "You know that! Come on!"

So why won't you say "I know that P but I'm only 40% confident in P"? Because such utterances, as explicit endorsements, reflect one's feelings of confidence -- exactly what comes apart from knowledge in these types of cases.

Tuesday, December 02, 2014

"I Think There's About a 99.8% Chance That You Exist" Said the Skeptic

Alone in my office, it can seem reasonable to me to have only about a 99% to 99.9% credence that the world is more or less how I think it is, while reserving the remaining 0.1% to 1% credence for the possibility that some radically skeptical scenario obtains (such as that this is a dream or that I'm in a short term sim).

But in public... hm. It seems an odd thing to say aloud to someone else! The question arises acutely as I prepare to give a talk on 1% Skepticism at the University of Miami this Friday. Can I face an audience and say, "Well, I think there's a small chance that I'm dreaming right now"? Such an utterance seems even stranger than the run-of-the-mill strangeness of dream skepticism in solitary moments.

I've tried it on my teenage son. He knows my arguments for 1% skepticism. One day, driving him to school, apropos of nothing, I said, "I'm almost certain that you exist." A joke, of course. How could he have heard it, or how could I have meant it, in any other way?

One possible source of strangeness is this: My audience knows that they are not just my dream-figures. So it's tempting to say that in some sense they know that my doubts are misplaced.

But in non-skeptical cases, we can view people as reasonable in having non-zero credence in propositions we know to be false, if we recognize an informational asymmetry. The blackjack dealer who knows she has a 20 doesn't think the player a fool for standing on a 19. Even if the dealer sincerely tells the player she has a 20, she might think the player reasonable to say he has some doubt about the truth of the dealer's testimony. So why do radically skeptical cases seem different?

One possible clue is this: It doesn't seem wrong in quite the same way to say "I think that we might all be part of a short-term sim". Being together in skeptical doubt seems fine -- in the right context, it might even be kind of friendly, kind of fun.

Maybe, then, the issue is a matter of respect -- a matter of treating one's interlocutor as an equal partner, metaphysically and epistemically? There's something offensive, perhaps, or inegalitarian, or oppressive, or silencing, about saying "I know for sure that I exist, but I have some doubts about whether you do".

I feel the problem most keenly in the presence of the people I love. I can't doubt that we are in this world together. It seems wrong -- merely a pose, possibly an offensive pose -- to say to my seriously ill father, in seeming sincerity at the end of a philosophical discussion about death and God, "I think there's a 99.8% chance that you exist". It throws a wall up between us.

Or can it be done in a different way? Maybe I could say: "Here, you should doubt me. And I too will doubt you, just a tiny bit, so we are doubting together. Very likely, the world exists just as we think it does; or even if it doesn't, even if nothing exists beyond this room, still I am more sure of you than I am of almost anything else."

There is a risk in radical skepticism, a risk that I will doubt others dismissively or disrespectfully, alienating myself from them. But I believe that this risk can be managed, maybe even reversed: In confessing my skepticism to you, I make myself vulnerable. I show you my weird, nerdy doubts, which you might laugh at, or dismiss, or join me in. If you join me, or even just engage me seriously, we will have connected in a way that I treasure.

Monday, November 24, 2014

More Philosophical SF Recommendations

Regular readers of The Splintered Mind will remember the recent series of posts offering 36 professional philosophers' recommendations of works of science fiction or speculative fiction (SF) -- compiled here. Since then, I've accumulated a few more lists and recommendations.

Here's a list of movies from the Philo-Teach discussion list started in 1996, which Bruce Janz has kindly reposted -- movies that philosophers have found useful to show students for teaching purposes. Some good SF on there (but also lots of non-SF).

And here's a list of science fiction about death, compiled for John M. Fischer in 1993 by John's and my late colleague George Slusser, the visionary science fiction scholar whose vast knowledge of the genre was central to developing UC Riverside's Eaton Collection into the largest publicly available collection of science fiction, fantasy, horror, and utopian literature in the world.


Below are three new SF lists in the standard format I am using for list contributions (ten recommendations with brief pitches).

Further contributions are welcome. Official list contributors should be "professional philosophers" (by which I mean something like PhD and a teaching or research job in philosophy) or SF writers with graduate training in philosophy and at least one "pro" sale. And as always, all readers' further thoughts and recommendations are welcomed in the comments section!

----------------------------------------------

List from Simon Fokt (Teaching Fellow in Philosophy, University of Leeds), Polish SF from Lem and Dukaj:

Stanisław Lem, Solaris (novel, 1961; trans. 1970) Lem explores issues related to limitations of knowledge and communication, philosophy of mind and the structure of radically different minds.

Stanisław Lem, Fiasco (novel, 1986; trans. 1987) Another novel exploring the linguistic and cognitive limitations on understanding and communicating with truly different, alien life forms.

Stanisław Lem, Golem XIV (novel, 1981; trans. 1985) A story told from the point of view of an AI that achieves consciousness; it raises issues in philosophy of mind and questions human ethics.

Stanisław Lem, The Futurological Congress (novel, 1971; trans. 1974) On distinguishing reality from hallucination; scepticism and issues in knowledge acquisition and justification.

Stanisław Lem, Return from the Stars (novel, 1961; trans. 1980) Can humans live in a utopian society? What is the value of suffering, danger and risk, and what can happen if they are removed?

Stanisław Lem, Wizja lokalna (Local Vision) (novel, 1982 – Polish, not translated) Raises moral issues related to artificial intelligences and immortality.

Jacek Dukaj, Inne Pieśni (Other Songs) (novel, 2003 – Polish, not translated) An alternative history, starting from Alexander the Great’s times, in which Aristotle's physics is actually true. There are five elements, form and matter, etc., and some people have the power to will form onto matter. Basically, what would the world be like if Aristotle were right?

Jacek Dukaj, Lód (Ice) (novel, 2007 – Polish, not translated) The Tunguska Meteorite creates the Ice, which freezes history and the laws of logic in a part of the world. Under the Ice, logic is two-valued, while outside it is many-valued. Issues in logic, rationality and cognition.

Jacek Dukaj, Czarne oceany (Black Oceans) (novel, 2001 – Polish, not translated) and Perfekcyjna niedoskonałość (An Ideal Imperfection) (novel, 2004 – Polish, not translated) Both novels explore post-humanism, the limits of human cognition and self, and personal identity and persistence in the context of technology advanced enough to permit multiple physical realizations of a single consciousness and to blur the lines between several simultaneous streams of thought and communication.

Simon adds: "Sadly, Dukaj’s work isn’t likely to be translated any time soon, which is unfortunate. Not because it’s not worth it, but because of the difficulty – he’s very interested in linguistic manipulations and neologisms, including not only making up new words, but making up entire grammar structures (e.g. some post-human-beings have no gender or location, so he creates an entirely new type of declination which is used when speaking about them). It must be a great challenge to translate that! Hopefully someone will, sooner or later."

----------------------------------------------

List from David John Baker (Associate Professor of Philosophy, University of Michigan):

Dan Simmons, Hyperion (novel, 1989) The best science fiction novel I've ever read, a treasure of the genre. It isn't philosophical throughout, but the chapter titled "The Scholar's Tale" contains a lot of interesting philosophy of religion.

C.J. Cherryh, Cyteen (novel, 1988) Nature/nurture and personal identity questions are central to an absorbing plot.

Gene Wolfe, The Fifth Head of Cerberus (novel, 1972) Revolves around a fascinating question at the border between philosophy and psychology. Revealing the question would spoil the plot.

John C. Wright, The Golden Age (and sequels The Phoenix Exultant and The Golden Transcendence) (novels 2002-2003) A well-thought-out posthuman libertarian utopia. (Also a deeply sexist novel, I'm afraid.)

Stephen Baxter, Manifold: Time (novel, 1999) The plot of this book revolves around the doomsday argument! Also features some interesting detail about time and quantum physics, although much of it is distorted for fictional effect.

John Varley, The Ophiuchi Hotline (novel, 1977) Hinges on some wonderful thought experiments about personal identity, free will and the nature of intelligence.

John Kessel, "Stories for Men" (short story, 2002) Fascinating piece about gender. Examines a civilization in which women are privileged in something like the way our civilization privileges men.

Ted Chiang, "The Truth of Fact, the Truth of Feeling" (short story, 2013) One of Chiang's most philosophical stories, which is saying a lot. Examines the unreliability of memory. If I had more room for a longer list, at least half of Chiang's stories would be on it.

Ariel Djanikian, The Office of Mercy (novel, 2013) Recent novel by a first-time author. A utilitarian civilization ruthlessly acts out its principles on a grand scale. Hard to say if this is a utopia or a dystopia.

Greg Bear, Queen of Angels (and sequel Slant) (novels, 1990 and 1997) Another morally ambiguous utopia. A civilization which treats violent deviants with therapy rather than punishment.

----------------------------------------------

List from Christy Mag Uidhir (Assistant Professor of Philosophy, University of Houston):

Gene Wolfe, The Fifth Head of Cerberus (novel, 1972) A novella composed of three short stories that addresses the issue of personal identity through the Colonialist lens.

Gene Wolfe, The Book of the New Sun (novels, 1980-1987) Four novels and a coda. Modern masterpiece of literature, science-fiction or otherwise. Difficult and at times seems impenetrably dense but, like much of Wolfe’s work, the rewards for the careful reader are endless.

Walter M. Miller, Jr., A Canticle for Leibowitz (novel, 1959) A powerful tale, both beautiful and tragic, of humanity and the light of knowledge.

Stanisław Lem, The Cyberiad (story collection, 1965; trans. 1974) A collection of philosophically-themed short stories about the adventures of the constructor engineers Trurl and Klapaucius trying to outdo one another.

Frederik Pohl, Gateway (novel, 1977) How time doesn't heal all wounds; some it leaves freshly open and raw forever.

Joe Haldeman, The Forever War (novel, 1975) Haldeman’s sci-fi Vietnam masterpiece. What war at relativistic speeds means for soldiers going home.

Jack Vance, The Dying Earth (novel, 1950) Set millions of years in the future against the backdrop of a dying sun where mathematics has become magic and Earth a thing of terrible beauty.

Stanisław Lem, The Futurological Congress (novel, 1971; trans. 1974) It’s The Matrix on drugs (literally) but better written and utterly hilarious.

Connie Willis, To Say Nothing of the Dog (novel, 1998) A thoroughly enjoyable time-travel romp with a surprisingly philosophically sophisticated ending.

Mike Resnick, Seven Views of Olduvai Gorge (novella, 1994) Uses stories from a single geographic location across time to weave together a portrait of humanity (and the rise and fall thereof) as an essentially ruthless and thoroughly evil blight upon the universe.

Wednesday, November 19, 2014

Schindler's Truck

Today I'm thinking about Schindler's truck and what it suggests about the moral psychology of one of the great heroes of the Holocaust.

Here's a portrayal of the truck, in the background of a famous scene from Schindler's List:

[image source]

Oskar Schindler, as you probably know, saved over a thousand Jews from death under the Nazis by spending vast sums of money to hire them in his factories, where they were protected. Near the end of Spielberg's movie about him, the script suggests that Schindler is broke -- that he has spent the last of his wartime slave-labor profits to save his Jewish workers, just on the very eve of German surrender:

Stern: Do you have any money hidden away someplace that I don't know about?
Schindler: No. Am I broke?
Stern: Uh, well...

Then there's the surrender, Schindler's speech to the factory workers, and preparations for Schindler's escape (as a hunted profiteer of slave labor).

Seeing the film, you might briefly think, what's with the truck that caravans off with Schindler? But the truck gets no emphasis in the film.

Thomas Keneally's 1982 book Schindler's Ark (on which Spielberg's 1993 film was based) tells us more about the truck:

Emilie, Oskar, and a driver were meant to occupy the Mercedes. [Seven] others would follow in a truck loaded with food and cigarettes and liquor for barter (p. 375).
Also,
In one of the factory garages that afternoon, two prisoners were engaged in removing the upholstery from the ceiling and inner doors of Oskar's Mercedes, inserting small sacks of the Herr Direktor's diamonds... (p. 368).
So, on Keneally's telling, Schindler drove off with a truck full of barter goods and small sacks of diamonds hidden in the upholstery -- hardly broke. On reflection, too, you might think the timing is too cinematic, the story suspiciously tidy, if Schindler goes broke just at the moment of German surrender.

Part of me wants Schindler to have gone broke, or at least not to have driven off with sacks of diamonds. A fully thoughtful Schindler would have realized, perhaps, that he was in fact a profiteer of slave labor, despite the admiration he rightly deserves for the risks he took and his enormous expenditures of (most of!) his ill-gotten profits. On this way of thinking, the wealth generated by Schindler's factories more rightly belonged to the Jews than to Schindler. I picture an alternative Schindler who realizes that and who thus retains only enough money to ensure his escape.

But another part of me thinks this is too much to hope for, that the thought "Of course I deserve to keep some of these diamonds" is so natural that no merely human Schindler would fail to have it; that in wanting Schindler not to have that thought, I am wanting an angel rather than a person.

We don't really know, though, what Schindler fled with. David M. Crowe writes:

It is hard to imagine that he still had a collection of diamonds so large that it would fill the door and ceiling cavities of a Mercedes. [N.B.: This is an uncharitable reading of Keneally's version] Emilie [Schindler's wife] totally discounted the idea that the two of them left Bruennlitz with a "fortune in diamonds," though she later admitted that Oskar did have a "huge diamond" hidden in the glove compartment (2004, p. 455).
By all accounts, Schindler's remaining wealth was gone, probably stolen, by the time he surrendered to the Americans.

Still another part of me thinks: If anyone deserves diamonds, it's Schindler. It would have been justice served, not a failing, for him to keep a portion of his wealth.

These three parts of me are still at war.

Monday, November 10, 2014

My Reaction to David Chalmers's The Conscious Mind, 18 Years Later

The Chronicle of Higher Education asked me what book written in the last 30 years changed my mind. Instead of trying to be clever, I went with my somewhat boring best guess at the truth: David Chalmers's The Conscious Mind. It changed my mind not because I came to accept its conclusions, but rather because Chalmers so nicely shows that if you want to avoid the bizarreness of panpsychism, epiphenomenalism, and property dualism, you have to say something else that seems at least equally bizarre. I differ from Chalmers in lacking confidence that I have a good basis for choosing among the various bizarre metaphysical alternatives.

These reflections brought me, then, to what I've been calling "crazyism": Something that seems crazy must be true, but we have no good way to know which among the crazy options is the right one. My article, "The Crazyist Metaphysics of Mind", just published in the Australasian Journal of Philosophy, is, at root, my much-delayed answer to Chalmers's 1996 challenge to materialism.

Two Views of the Relationship Between Philosophy and Science Fiction

Consider two possible views of the relationship between philosophy and science fiction.

On the first view, science fiction simply illustrates, or makes more accessible, what could be said as well or better in a discursive philosophical essay. Those who can’t stomach purely abstract discussions on the nature of time, for example, might be drawn into an exciting story; but seasoned philosophers can ignore such entertainments and proceed directly to the abstract arguments that are the meat of the philosophical enterprise.

On the second view, science-fictional storytelling has philosophical merit in its own right that is not reducible to abstract argumentation. For at least some philosophical topics, one cannot substitute for the other, and a diet of only one type of writing risks leaving you philosophically malnourished.

One argument for the second view holds that examples and thought-experiments play an ineliminable role in philosophical thinking. If so, we might see the miniature examples and thought experiments in philosophical essays as midpoints on a continuum from purely abstract propositions on one end to novel-length narratives on the other. Whatever role short examples play in philosophical thinking, longer narratives might also play a similar role. Perhaps entirely abstract prose leaves the imagination and the emotions hungry; well-drawn thought experiments engage them a bit; and films and novels engage them more fully, bringing with them whatever cognitive benefits (and risks) flow from vividly engaging the imagination and emotions. Ordinary literary fiction engages imaginative and emotive cognition about possibilities within the ordinary run of human experience; speculative fiction engages imaginative and emotive cognition about possibilities outside the ordinary run of human experience. Both types of fiction potentially deserve a central role in philosophical reflection about such possibilities.

[from the intro of "Philosophers Recommend Science Fiction", forthcoming in Susan Schneider, ed., Science Fiction and Philosophy, 2nd ed.]

Monday, November 03, 2014

Philosophical SF: Thirty-Six Philosophers' Recommendations

... here!

This mega-list of about 360 recommendations is compiled from the lists I've been rolling out over the past several weeks. Thirty-four professional philosophers and two prominent science fiction / speculative fiction (SF) authors with graduate training in philosophy each contributed a list of ten personal favorite "philosophically interesting" SF works, with brief "pitches" for each recommended work.

I have compiled two mega-lists, organized differently. One mega-list is organized by contributor, so that you can see all of Scott Bakker's recommendations, then all of Sara Bernstein's recommendations, etc. It might be useful to skim through to see whose tastes you seem to share and then look at what other works that person recommends.

The other mega-list is organized by author (or director or TV series), to highlight authors (directors / TV shows) who were most often recommended by the list contributors.
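(For the curious, tallies like the ones below are easy to reproduce mechanically. Here is a minimal Python sketch, assuming each contributor's list is stored as a sequence of (author, work) pairs; the data structure and sample entries are hypothetical stand-ins for illustration, not my actual working files.)

    from collections import Counter

    # Hypothetical structure: contributor -> list of (author, work) pairs.
    lists = {
        "Contributor A": [("Ursula K. Le Guin", "The Dispossessed"),
                          ("Ted Chiang", "Story of Your Life")],
        "Contributor B": [("Ursula K. Le Guin", "The Left Hand of Darkness")],
        # ... remaining contributors
    }

    # Count each author at most once per contributor, so two Le Guin
    # recommendations on a single list still count as one recommendation.
    tally = Counter()
    for works in lists.values():
        for author in {author for author, _ in works}:
            tally[author] += 1

    for author, count in tally.most_common():
        print(f"Recommended by {count}: {author}")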

The most recommended authors were:

Recommended by 11 contributors:

  • Ursula K. Le Guin
Recommended by 8:
  • Philip K. Dick
Recommended by 7:
  • Ted Chiang
  • Greg Egan
Recommended by 5:
  • Isaac Asimov
  • Robert A. Heinlein
  • China Miéville
  • Charles Stross
Recommended by 4:
  • Jorge Luis Borges
  • Ray Bradbury
  • P. D. James
  • Neal Stephenson
Recommended by 3:
  • Edwin Abbott
  • Douglas Adams
  • Margaret Atwood
  • R. Scott Bakker
  • Iain M. Banks
  • Octavia Butler
  • William Gibson
  • Stanisław Lem
  • George R. R. Martin
  • Larry Niven
  • George Orwell (Eric A. Blair)
  • Kurt Vonnegut
The most recommended directors / TV shows were:

Recommended by 7:

  • Star Trek: The Next Generation
Recommended by 5:
  • Christopher Nolan (Memento, The Prestige, The Dark Knight, Inception)
Recommended by 4:
  • Ridley Scott (Blade Runner)
Recommended by 3:
  • Futurama
  • Duncan Jones (Moon, Source Code)
  • Andrew Niccol (Gattaca)
  • Paul Verhoeven (Total Recall, Starship Troopers)
  • Andy & Lana Wachowski (The Matrix and sequels)
Reactions, corrections, and further suggestions welcome (as always) in the comments section.

[image source]

Wednesday, October 29, 2014

Why I Will Be Contributing Rankings to the Gourmet Report

I have been asked to be an evaluator for the 2014-2015 edition of the Philosophical Gourmet Report. Contrary to what seems to be a widespread sentiment in the philosophical blogosphere, I support the rankings and will participate.

The PGR rankings have at least three related downsides:

1. They perpetuate privilege, including the privilege of people with social power in the discipline, the privilege of people in PhD-granting institutions over other types of institutions, and the general privilege of Anglophone philosophy and philosophers.
2. They reinforce mainstream ("Gourmet ecology") valuations of topics and approaches, in a discipline where the mainstream needs no help and it would probably be productive to push against the mainstream.
3. They risk blurring the distinction between second-hand impressions about reputation (especially outside evaluators' own subareas) and genuine quality.

In light of these downsides, I understand people's hesitation to support the enterprise.

I view the rankings as an exercise in the sociology of philosophy. The rankings are valuable insofar as they reveal sociological facts about how departments, and to some extent individuals (especially in the specialty rankings), are viewed by the social elite in Anglophone philosophy -- by the people who publish articles in journals like Noûs and Philosophical Review, by the people who write and are written about in Stanford Encyclopedia of Philosophy entries, by the people who teach at renowned British and U.S. universities like Oxford, Harvard, and Berkeley. As a part-time sociologist of philosophy interested in patterns of esteem, I am curious how people in this social group view the field, and I regard the PGR as an important source of data.

The PGR is thus valuable in part because sociological and historical knowledge about academia in general is valuable. It is sociologically interesting, and of historical interest, to know what sort of esteem Australian universities have in mainstream Anglophone philosophy. It is sociologically interesting, and of historical interest, to see the shifting patterns of social power among Ivy League universities and large public U.S. universities that are able to hire renowned professors.

The PGR is also practically valuable because knowledge of the centers of social power is practically valuable. To the extent a student wishes to tap into the centers of social power to increase her likelihood of finding a research-oriented job, she should know where those centers of power are; and students and their advisors who are not currently near centers of power might not find it at all obvious where those centers are. By empowering outsiders with knowledge -- especially the knowledge that renowned universities like Harvard, Oxford, and Yale might not be the best universities in their subfield -- the PGR to some extent works against the perpetuation of privilege, despite the fact that it reinforces privilege in other ways. Also, to the extent one wishes to fight against mainstream perceptions of the discipline, it is of interest to track what those perceptions are and how they are changing over time -- though if this were one's primary motivation, one would probably oppose the PGR. Finally, to the extent one respects the judgment of philosophers in the Anglophone philosophy mainstream, one might infer differences in real quality from differences in reputation.

On the last point: If you think that Anglophone philosophy mainstream judgment is grossly erroneous in general, you might reasonably infer that the PGR does more harm than good; but I don't hold that view myself. In philosophy of mind, for example -- my own specialty -- I think that the best-regarded philosophers tend in fact to be excellent philosophers who deserve their good reputations.

One area in which I think mainstream philosophical judgment is ill-tuned is in its disregard of non-Western traditions. However, I believe that the PGR has the potential to be progressive on this issue. For example, in treating Chinese philosophy as an area worth special remark, despite the small number of PhD-granting philosophy departments in Anglophone countries that have specialists in the area, it gives the subarea more visibility than it otherwise would have. And were there sufficient hiring in other non-Western traditions, I suspect the PGR would adapt to reflect that.

Despite my support of the PGR rankings, I think it is important that the rankings be viewed critically, as a rough tool for revealing certain sociological patterns in the discipline. I would very much like to see other approaches to evaluation, which would help put the PGR rankings in context as only one way to think about the social structures that drive academic philosophy.

[Cross-posted at New APPS.]

Monday, October 27, 2014

Philosophical SF: Ninth Batch of Lists (Nichols, Wittkower, Brophy, and Yap)

A couple of months ago, I started asking professional philosophers for their recommendations of some personal favorites among philosophically interesting science fiction or "speculative fiction" (SF) more broadly construed. Every contributor was to list ten works along with brief "pitches" pointing toward the works' interest. Thirty-six philosophers have sent in their lists, which I've been spinning out four at a time. This is the ninth and final batch. (Or rather I should say, final for now. If more contributions come in, I will post them in small batches.)

Soon, I'll merge everything into a "mega-list", adding a bit of quantitative analysis.

The number of contributors, the range of works recommended, and the recommenders' enthusiasm and knowledge, all substantially exceeded my expectations. A hearty thanks to all of them! I'm looking forward to years of awesome reading and viewing.

A general description of the project, plus the first four lists, from Dever, Powell, Kind, and Horst.

Second set: Mandik, E. Kaplan, Evnine, De Cruz.

Third set: De Smedt, Bakker, J. Kaplan, Weinberg.

Fourth set: Frankish, Blumson, Cash, Keeley.

Fifth set: Jollimore, Chalmers, Palma, Schneider.

Sixth set: Campbell, Cameron, Easwaran, Briggs.

Seventh set: Roy-Faderman, Clark, Schwitzgebel, Killoren & Brophy.

Eighth set: Sullivan, Clarke, Oppenheimer, Bernstein.

As always, readers should feel free to contribute their own recommendations to the comments section of this post or the earlier posts.

----------------------------------------------

List from Ryan Nichols (Associate Professor of Philosophy, Cal State Fullerton):

Mike Resnick, "Kirinyaga" (short story, 1988). The best and most fêted story — one that brings a deft touch to issues of race and gender, justice and moral relativism — from an author who needs to hire someone to carry around his treasure trove of awards.

Ted Chiang, "Liking What You See: A Documentary" (short story, 2002). In the same vein as Vonnegut's 1961 "Harrison Bergeron," here Chiang offers us a brilliant semi-story in which a campus community takes seriously a pervasive but undiscussed bias — lookism.

Daniel Suarez, Influx (novel, 2014). Justly compared to Crichton, Suarez delivers page-turning plotting that does not come at the expense of intelligent protagonists and antagonists, thank God; but make no mistake, this exciting but thoughtful book is much more than aisle-seat fodder.

Timons Esaias, "Norbert and the System" (short story, 1993). Imagine an app, dropped into the head of a Homer Simpson-like character, that uses an algorithm to instruct him — with microsecond speed — that if he wants her to like him, for example, he ought to tilt his head a bit more to the left and use the words "I feel" in the next sentence he utters. Written with wit and humor, this meditation on free will and compatibilism is more than the sum of its parts and foreshadows the increasing lack of empathy of facebooking millennials.

Greg Egan, "Reasons to be Cheerful" (short story, 1997). Egan, in my pantheon of hard sf writers, plays with the psychology and philosophy of happiness through a first-person protagonist who of necessity gains the ability to adjust his mental well-being moment by moment.

Douglas Adams, Hitchhiker's Guide to the Galaxy &c (various media, 1978-2005). This book and the series still deliver Mona Lisa-like smiles (and laughs) to thinking readers from the moment Arthur first grabs a towel — and a pint — to the moment when Zaphod asks to "meet the meat" at the Restaurant.

Johannes Kepler, "Somnium" (novel, 1608). An incredible story by one of the most important scientists in world history, in which Kepler (1571-1630) depicts a trip to the moon, extrapolating from his then-current, accurate, and highly non-standard scientific knowledge. (The real-life story behind "Somnium" and what it cost Kepler personally is even more gripping.)

Michael Moorcock, "Pale Roses" (short story, 1974). While we think that post-humanity will override most of our base evolutionary motivations, this literary story raises profound questions about the meaning of a human life through a setting in which human-like characters are virtually immortal and have nearly limitless powers... but still desperately want to be invited to parties.

Kij Johnson, "Spar" (short story, 2009). I fucking dare you.

Iain M. Banks, Surface Detail (novel, 2010). If we plot ideas-per-page on the x-axis and quality of writing on the y, Banks' novels exist in an upper-right-corner world of their own, and this probing novel about punishment, religion and the state is no exception.

----------------------------------------------

List from Dylan Wittkower (Assistant Professor of Philosophy, Old Dominion University):

Philip K. Dick, “Autofac” (short story, 1955). A short story about the grey goo problem in nanotech, which is, um, a pretty interesting thing to find someone writing about in the '50s. Relevant to the difficulty of acting responsibly with regard to complex systems whose effects are hard to predict, and about the questionable value of autonomy when you don’t have any particular rational determination of values that would guide what you would do with that autonomy.

Philip K. Dick, “The Defenders” (short story, 1953). It forms a great counterpoint to “Autofac.” In “Autofac,” the machines mindlessly consume the planet to create consumer goods. In “The Defenders,” -- spoiler alert -- the machines realize that the humans’ mindless destruction of the planet (through war, this time, rather than production) is irrational, and instead they just fake massive destruction to placate the humans.

Nancy Kress, “Nano Comes to Clifford Falls” (short story, 2006). Nano destroys scarcity, work is no longer necessary, society falls apart.

Pamela Zoline, “The Heat Death of the Universe” (short story, 1967). Avant-garde writing, and genre-challenging, since it does not have most (any?) of the usual marks of science fiction. Concerns the uselessness of scientific knowledge in the face of existential despair and the experience of meaninglessness.

J.G. Ballard, “The Thousand Dreams of Stellavista” (short story, 1962). A man drives his wife to kill him, also inadvertently (but foreseeably) programming his “psychotropic” house to later attempt to kill its new owners. Each chapter of the Vermillion Sands collection (which this is from) uses science fiction to explore a different art form — this is the chapter on architecture.

Philip K. Dick, Do Androids Dream of Electric Sheep? (novel, 1968). There’s the moral isolation from others through an “experience-machine”-like self-programming of emotional states, contrasted with Mercer as a kind of Levinasian Other; animal ethics, especially as connected to consumerism and environmentalism; AI stuff; etc. Wonderfully complicated, deep, and wacky — all of which will be surprising if you’ve only heard of it by way of Blade Runner. I’ll also go ahead and plug one of my edited volumes, Philip K. Dick and Philosophy (2011), which has chapters on philosophical issues in a good number of Dick novels and films.

R. Scott Bakker, Neuropath and the Prince of Nothing trilogy (novels, 2004-2008). Very philosophically informed. Neuropath is grounded in serious research in neuroscience and philosophy of mind. Prince of Nothing is high fantasy in the spirit, but not the style, of Tolkien, indebted to both Thucydides and Camus.

Orson Scott Card, Ender’s Game (novel, 1985). Issues include embodiment and phenomenology, philosophy of education, lying and consequentialism, just war theory, and virtue ethics. See my 2013 anthology, Ender's Game and Philosophy.

M.T. Anderson, Feed (novel, 2002). Issues include extended cognition, transhumanism, and the internet of things.

----------------------------------------------

List from Matthew Brophy (Assistant Professor of Philosophy, High Point University):

Richard K. Morgan, Altered Carbon (novel, 2002): A deceased mercenary is “uploaded” into a technologically augmented body to solve a mystery, 500 years in the future.

Richard K. Morgan, Thirteen (novel, 2007): A genetically enhanced soldier is tasked with hunting down renegade “thirteens” like himself.

Christopher Nolan, The Prestige (movie, 2006): Dueling magicians each make the ultimate sacrifice to perfect an astounding trick.

Robert Venditti, Surrogates (comic book, 2005-2006): When android avatars, remotely controlled by human users, start to be mysteriously murdered, one detective must unplug in order to stop a societal genocide of surrogates and humans alike.

James Cameron, Avatar (movie, 2009): A wheelchair-bound marine finds new freedom and identity as a bio-engineered alien.

Christopher Nolan, Inception (movie, 2010): A con-man traverses layers of shared dreams in this mind-bending “heist” movie.

Rian Johnson, Looper (movie, 2012): A hit-man for the mob “terminates” other contract-killers, who are sent back in time when their contract is up.

Duncan Jones, Source Code (movie, 2011): A soldier repeatedly awakens on a train, as another man who has mere minutes to find and defuse a time-bomb that will kill them all.

Mike Cahill, Another Earth (movie, 2011): The appearance of a duplicate earth brings hope to a promising young student that a tragic accident she’s caused may have been averted on the twin earth.

----------------------------------------------

List from Audrey Yap (Associate Professor of Philosophy, University of Victoria):

Nalo Hopkinson, Brown Girl in the Ring (novel, 1998). This book has everything you didn’t know you wanted in a book: three generations of kickass women, post-apocalyptic Toronto, and some Afro-Caribbean magic. That’s all I need to tell you, now go read it immediately. I think it’s one of the best and most underrated works of feminist speculative fiction out there.

Isaac Asimov, I, Robot (short stories, collected 1950). Classic short stories in this book, having to do with the relationship between humans and non-human intelligences. It’s not as utopian about technology as a lot of Asimov’s other work, but despite several incidents of robots behaving badly, it’s not all Skynet and doom either.

Red Dwarf, "Justice" (TV show, 1991). The Justice Field makes it physically impossible for injustice to be committed!

Ted Chiang, Stories of Your Life and Others (short stories, collected 2002). Short stories following through on the consequences of various ideas. What if arithmetic actually was inconsistent? What if we did live in a system of celestial spheres?

Robert J. Sawyer, Hominids (novel, 2002; also Humans and Hybrids, 2003). Hominids is the first book in the Neanderthal Parallax trilogy, in which a doorway to a parallel universe opens up in Sudbury, Ontario. Yes, Sudbury. In the parallel universe, Neanderthals became dominant rather than us. It’s interesting thinking through the differences in the family culture of each group, since Neanderthals in the other universe have two partners, one male and one female.

Christopher Nolan, The Prestige (movie, 2006). It’s hard to describe what makes this movie philosophically interesting without giving away the big plot twist at the end. But there are two very distinct explorations of personal identity. My personal favourite is the one that has to do with social identity.

Jorge Luis Borges, "On Rigor in Science" (short story, 1946). I want to use this one-paragraph short story in a paper on idealization. It brings up an empire in which map-making has “advanced” such that the only acceptable map of the empire is one of the exact same scale as the empire itself.

Futurama, "Mars University" (TV show, 1999). Gunther is a monkey who becomes super-intelligent but can then no longer fit in with his monkey community. Could we be better off ignorant if it means we can then enjoy the company of others?

Elizabeth Moon, The Speed of Dark (novel, 2002). The protagonist is a scientist with autism in a near-future world in which there may be a “cure” for his condition. The quotation marks are there because one of the central issues has to do with whether autism is a condition that in fact needs curing. I don’t think I’d heard of the idea of neurodiversity when I read this, but it strikes me as exactly the idea under consideration.

----------------------------------------------

Soon I will present a compilation of all of the approximately 360 SF recommendations included in these nine posts, sorted in a few different ways.