Thursday, May 21, 2015

Leading SF Novels: Academic Library Holdings and Citation Rates

A substantial portion of the most culturally influential English-language fiction writers of the 20th century wrote science fiction or fantasy -- "speculative fiction" (SF) broadly construed. H.G. Wells, J.R.R. Tolkien, George Orwell, Isaac Asimov, Philip K. Dick, and Ursula K. Le Guin, for starters. In the 21st century so far, speculative fiction remains culturally important. There's sometimes a feeling among speculative fiction writers that even the best recent work in the genre isn't taken seriously by academic scholars. I thought I'd look at a couple of possible (imperfect!) measures of this.

(I'm doing this partly just for fun, 'cause I'm a dork and I find this kind of thing relaxing, if you'll believe it.)

Holdings of recent SF in academic libraries

I generated a list of critically acclaimed SF novels by considering Hugo, Nebula, and World Fantasy award winners from 2009-2013 plus any non-winning novels that were among the 5-6 finalists for at least two of the three awards. Nineteen novels met the criteria.

Then I looked at two of the largest Anglophone academic library holdings databases, COPAC and Melvyl, and counted how many different campuses (max 30-ish) had a print copy of each book [see endnote for details].
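As a rough illustration of the selection and counting logic, here's a minimal Python sketch. Everything in it is hypothetical scaffolding -- the actual tallies were done by hand against the catalog interfaces -- but it shows the two rules: winners of any of the three awards, plus non-winners shortlisted for at least two of them; then a count of distinct campuses holding a print copy across both catalogs.

```python
# Hypothetical data structures standing in for the award lists and catalogs:
#   winners[award]   = set of winning titles, 2009-2013
#   finalists[award] = set of non-winning finalist titles, 2009-2013
#   copac[title], melvyl[title] = sets of campuses with >= 1 print copy

def select_books(winners, finalists):
    """Any winner, plus any non-winner shortlisted for >= 2 of the 3 awards."""
    selected = set().union(*winners.values())
    for title in set().union(*finalists.values()):
        if sum(title in finalists[award] for award in finalists) >= 2:
            selected.add(title)
    return selected

def campus_count(title, copac, melvyl):
    """Distinct campuses, across both catalogs, holding a print copy."""
    return len(copac.get(title, set()) | melvyl.get(title, set()))
```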

H = Hugo finalist, N = Nebula finalist, W = World Fantasy finalist; stars indicate winners.

The results, listed from most held to least:

16 campuses: Neil Gaiman, The Graveyard Book (H*W)
15: George R.R. Martin, A Dance with Dragons (HW)
15: China Mieville, The City & the City (H*NW*)
12: Cory Doctorow, Little Brother (HN)
12: Ursula K. Le Guin, Powers (N*)
12: China Mieville, Embassytown (HN)
12: Connie Willis, Blackout / All Clear (H*N*)
11: Paolo Bacigalupi, The Windup Girl (H*N*)
11: G. Willow Wilson, Alif the Unseen (W*)
10: Kim Stanley Robinson, 2312 (HN*)
8: N.K. Jemisin, The Hundred Thousand Kingdoms (HNW)
8: N.K. Jemisin, The Killing Moon (NW)
8: John Scalzi, Redshirts (H*)
8: Jeff VanderMeer, Finch (NW)
8: Jo Walton, Among Others (H*N*W)
7: Cherie Priest, Boneshaker (HN)
7: Caitlin Kiernan, The Drowning Girl (NW)
5: Nnedi Okorafor, Who Fears Death (NW*)
3: Saladin Ahmed, Throne of the Crescent Moon (HN)

As a reference point, I did a similar analysis of PEN/Faulkner award winners and finalists over the same period.

Of the 25 PEN winners and finalists, 7 were held by more campuses than was any book on my SF list, though the difference was not extreme, with two at 24 (Jennifer Egan, A Visit from the Goon Squad; Joseph O'Neill, Netherland) and five ranging from 18-21 campuses. In the PEN group, just as in the SF group, there were nine books held by fewer than ten of the campuses (3, 5, 6, 7, 7, 7, 9, 9, 9) -- so the lower part of the lists looks pretty similar.

References in Google Scholar

Citation patterns in Google Scholar tell a similar story. Although citation rates are generally low by philosophy and psychology standards (assuming as a comparison group the most-praised philosophy and psychology books of the period), they are not very different between the SF and PEN lists. The SF books for which I could find five or more Google Scholar citations:

53 citations: Gaiman, The Graveyard Book
52: Doctorow, Little Brother
27: Martin, A Dance with Dragons
26: Bacigalupi, The Windup Girl
9: Priest, Boneshaker
8: Robinson, 2312
5: Okorafor, Who Fears Death

The top-cited PEN books were at 70 (O'Neill, Netherland) and 59 (Egan, A Visit from the Goon Squad). After those two, there's a gap down to 17, 15, 12, 11, 10.

I continue to suspect that there is a bit of a perception difference between "highbrow" literary fiction and "middlebrow" SF, disadvantaging SF studies in some quarters of the university; but if so, perhaps that is compensated for by recognition of SF's broader visibility in popular culture, so that in terms of overall scholarly attention, it appears to be approximately a tie.

---------------------------------

Bestsellers:

So... hey! That makes me wonder about bestsellers. I've taken the four best-selling fiction books each year from 2009-2013 (according to USA Today for 2009-2012, Nielsen BookScan for 2013) and tried the same. (The catalogs are a bit messier since these books tend to have multiple editions, so the numbers are a little rougher.)

Top five by citations (# of campuses in parens):

431: Suzanne Collins, The Hunger Games (23)
333: Stephenie Meyer, Twilight (26)
162: Stephenie Meyer, Breaking Dawn (17)
132: Stephenie Meyer, New Moon (15)
130: Stieg Larsson, The Girl with the Dragon Tattoo (12)

Only 4 of the 19 had fewer than 10 citations, and all were held by at least six campuses.

So by both of these measures, bestsellers are receiving more academic attention than either the top critically acclaimed SF or the PEN books. Notable: By my count, 8 of the 19 bestsellers are SF, including the four most-cited.

Maybe that's as it should be: The Hunger Games and Twilight are major cultural phenomena, worthy of serious discussion for that reason alone, in addition to whatever merits they might have as literature.

---------------------------------

Endnote:
COPAC covers the major British and Irish academic libraries, Melvyl the ten University of California campuses. I counted up the total number of campuses in the two systems with at least one holding of each book, limiting myself to print holdings (electronic and audio holdings were a bit disorganized in the databases, and spot checking suggested they didn't add much to the overall results since most campuses with electronic or audio also had print of the same work).

As always, corrections welcome!

Thursday, May 14, 2015

Moral Duties to Flawed Gods

Suppose that God exists and is morally imperfect. (I'm inclined to think that if a god exists, that god is not perfect.) If God has created me and sustains the world, I owe a pretty big debt to her/him/it. Now suppose that this morally imperfect God tells me to wear a blue shirt today instead of a brown one. No greater good would be served; it's just God's preference, for no particular reason. God tells me to do it, but doesn't threaten me with punishment if I don't -- she (let's say "she") just appeals to my sense of moral obligation: "I am your creator," she says, "and I work to sustain your whole universe. I'd like you to do it. You owe me!"

One way we might conceptualize a morally flawed god is this: We might be sims, or model playthings, in a world that is subject to the whims of some larger being with the power to radically manipulate or destroy it, and who therefore has sufficient powers to be properly conceptualized as a god by us. Alternatively, if technology advances sufficiently, we ourselves might create genuinely conscious rational beings who live as sims or playthings, and then we would be gods relative to them.

It is helpful, I think, to consider these issues simultaneously bottom up and top down -- both in terms of what we ourselves would owe to such a hypothetical god and in terms of what we, if we hypothetically gained divine levels of power over created beings, could legitimately demand of those beings. It seems a reasonable desideratum of a theory that the constraints be symmetrical: Whatever a flawed god could legitimately demand of us, we, if we had similar attributes in relation to beings we created, could legitimately demand of them; and contrapositively, whatever we could not legitimately demand of beings we created we should not recognize as demands a flawed god could make upon us, barring some relevant asymmetry between the situations.

Here are three possible approaches to God's authority to command:

(1.) Love of God and/or the good. Divine command theory is the view that we are obliged to do whatever God commands. Christian articulations of this view have typically assumed a morally perfect God, whom we obey out of love for him, or love of the good, or both (e.g., Adams 1999). A version of this view might be adapted to the case where God is morally flawed: We might still love her, and obey her from love (as one might obey another human out of love); or one might obey because one admires and respects the goodness of God and her commands, even if God is not perfectly good and this particular command is flawed.

(2.) Acknowledgement of debt. Other approaches to divine command theory emphasize God's power and our debt as God's creations (for example, Augustine: "Unless you turn to Him and repay the existence that He gave you... you will be wretched. All things owe to God, first of all, what they are insofar as they are natures" [cited here] and the conclusion of the Book of Job). A secular comparison might be the debt children owe to their parents for their creation and sustenance, for example as emphasized in the Confucian tradition.

(3.) Social contract theory. According to social contract theory, what gives (morally flawed) governmental representatives legitimate authority to command us is something like the fact that, hypothetically, the overall social arrangement is fair, and we would agree to it if it were offered from the right kind of neutral position. God might say: Universes require gods to create, command, and sustain them -- or at least your universe has required one -- and I am the god in that role, executing my powers in a manner that would be antecedently recognizable as fair. Surely you would agree, hypothetically, to the justice of the creation of your world under this general arrangement?

Now when I consider these possible justifications of a morally imperfect God's authority to command, what strikes me is that all three seem to justify only rather limited power. To see this, consider three types of command: (a.) the trivial and arbitrary, (b.) the non-trivial and arbitrary, and (c.) the non-arbitrary and non-trivial.

It is perhaps legitimate for a god to make trivial, arbitrary demands -- like to wear a blue shirt today rather than a brown -- and for a created being to satisfy them, in recognition of a personal relationship or a debt. Similarly legitimate, it seems, are non-arbitrary demands that God makes for excellent reasons, justifiable either interpersonally or through social contract theory.

My own sense, however -- does yours differ? -- is that arbitrary but non-trivial demands should be sharply limited. Suppose, for example, that God says she wants me to go out to the student commons and do a chicken dance -- not for any good reason but just as a passing minor whim, because she wants me to. I'd be embarrassed, but no serious consequences would ensue. My feeling is that God would not be in the right to make this sort of demand of me; nor would I be in the right to demand it of my creations, were I ever to create genuinely conscious beings over whom I had divine degrees of power.

It seems to me that this would be wrong in the same way that it would be wrong for my mother or wife to ask this of me for no good reason: It would be a matter of someone's treating her own whims as of greater importance than my legitimate desires and interests. It would violate the principle of equality. But if that's correct -- if an imperfect god's whims don't trump my interests for that type of reason -- then in the relevant moral sense, we are God's equals.

You might say: If a god really did create us, our debt is enormous. Indeed it would be! But what follows? My parents created me, and they raised me through childhood, so my debt to them is also enormous; and my government paid for my education and my roads and my protection, so in a sense my government has also created and sustained me, and my debt to it is also enormous. However, once I have been created, I have a dignity and interests that even those who have created and sustained me cannot legitimately disregard to satisfy their whims. And I see no reason to suppose this limitation on the morally legitimate exercise of power is any less for gods than for fellow humans.

A morally perfect god might be different. Necessarily, such a god would not demand anything morally illegitimate. But I think a sober look at the world suggests that if there is any creating or sustaining god of substantial power, that god is far from morally perfect. If that god tells me never to mix clothing fibers or never to work on the sabbath, she had better also supply a good reason.

Related posts:

  • Our Possible Imminent Divinity (Jan. 2, 2014)
  • Our Moral Duties to Artificial Intelligences (Jan. 14, 2015)
[image source]

    Monday, May 11, 2015

    Network Map of Philosophical SF Authors

    Andrew Higgins has done one of his beautiful network maps for my Philosophical SF authors list:

    [click to see full size]

    Andrew writes:

    This graph represents a network of science fiction authors and philosophers, with the authors linked to philosophers just in case the philosopher listed that author as philosophically interesting. Authors are labeled, and label size corresponds to the number of philosophers mentioning them. Label colors and positions are rough indicators of similarity. Colors represent groups of authors; as an intuitive gloss, if authors A1-An are the same color, that means the connections among the As are ≥ their connections to authors in other groups. Author positions are determined by a combination of three forces - gravity, attraction, and repulsion - applied to the network until it has settled into a stable position (a local peak in the space of possible positions). All nodes gravitate to the center and repel one another, and nodes are attracted just in case they are connected. So, positions and colors can be seen as weak indicators of similarity, whatever kind of similarity is highlighted by philosophers' choices.

    But, given the relatively small sample size and lack of strong modularity in the network, we should be cautious in inferring anything about these authors (or philosophers) based on their relative positions or colors.
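    For readers curious about the mechanics, here is a minimal sketch of this general kind of layout -- force-directed positioning plus modularity-based coloring -- in Python with networkx and matplotlib. This is my own illustrative reconstruction, not Andrew's actual pipeline or data; the recommendation dictionary is hypothetical.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
import matplotlib.pyplot as plt

# Hypothetical bipartite data: each philosopher -> authors they listed.
recommendations = {
    "philosopher_1": ["Le Guin", "Dick", "Borges"],
    "philosopher_2": ["Le Guin", "Egan", "Chiang"],
    "philosopher_3": ["Dick", "Chiang", "Borges"],
}

G = nx.Graph()
for phil, authors in recommendations.items():
    for author in authors:
        G.add_edge(phil, author)  # edge = "this philosopher listed this author"

# Force-directed layout: connected nodes attract, all nodes repel,
# iterated until positions settle into a local optimum.
pos = nx.spring_layout(G, seed=42)

# Color groups whose internal connections are denser than their external ones.
communities = greedy_modularity_communities(G)
color_of = {node: i for i, group in enumerate(communities) for node in group}

nx.draw(G, pos, node_color=[color_of[n] for n in G.nodes()],
        with_labels=True, cmap=plt.cm.tab10)
plt.show()
```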

    Friday, May 08, 2015

    Competing Perspectives on the Significance of One's Final, Dying Thought

    Here's a particularly unsentimental view about last, dying thoughts: Your dying thought will be your least important thought. After all (assuming no afterlife), it is the one thought guaranteed to have no influence on any of your future thoughts, or on any other aspect of your psychology.

    Now maybe if you express the thought aloud -- "I did not get my Spaghetti Os. I got spaghetti. I want the press to know this." -- or if your last thought is otherwise detectable by others, it will have an effect; but for this post let's assume a private last thought that influences no one else.

    A narrative approach to the meaning of life seems to recommend a different attitude toward last thoughts. If a life is like a story, you want it to end well! The ending of a story colors all that has gone before. If the hero dies resentful or if the hero dies content, that rightly changes our understanding of earlier events. It does so not only because we might now understand that all along the hero felt subtly resentful, but also because private deathbed thoughts, on this view, have a retrospective transformative power: An earlier betrayal, for example, now becomes a betrayal that was forgiven by the end (or it becomes one that was never forgiven). The ghost's appearance to Hamlet has one type of significance if Hamlet ends badly and quite a different significance if Hamlet ends well. On the narrative view, the significance of events depends partly on the future. Maybe this is part of what Solon had in mind when he told King Croesus not to call anyone happy until they die: A horrible enough disaster at the end, maybe, can retrospectively poison what your marriage and seeming successes had really amounted to. Thus, maybe the last thought is like the final sentence of a book: Ending on a thought of love and happiness makes your life a very different story than does ending on a thought of resentment and regret.

    The unsentimental view seems to give too little significance to one's last thought -- I, at least, would want to die on a positive note! -- but the narrative view seems to give one's last thought too much significance. I doubt that not knowing someone's last thought deprives us of the significance of their life in the way that not knowing a story's last sentence deprives us of the significance of the story. Also, the last sentence of a story is a contrived feature of a work of art, a sentence which the work is designed to render highly significant, while a last thought might be trivially unimportant by accident (if you're thinking about what to have for lunch, then hit by a truck you didn't see coming) or might not reflect a stable attitude (if you're grumpy from pain).

    Maybe the right answer is just a compromise: The last thought is not totally trivial because it has some narrative power, but life isn't so much like a narrative that it has last-sentence-of-a-story-like power? Life has narrative elements, but the independent pieces also have a power and value that isn't hostage to future outcomes.

    Here's another possibility, which interacts with the first two: Maybe one's last thought is an opportunity. But what kind of opportunity it is will depend on whether last thoughts can retrospectively change the significance of earlier events.

    On the narrative view, it is an opportunity to -- secretly! with an almost magical time-piercing power -- make it the case that Person A was forgiven by you or never forgiven, that Action B was regretted or never regretted, etc.

    On the unsentimental view, in contrast, it is an opportunity to think things that, had you thought them earlier, would have been too terrible to think because of their possible impact on your future thoughts. (Compare: It's also an opportunity to explore the neuroscience of decapitation.) I don't know that we have such a reservoir of unthinkable thoughts that we refuse to make conscious for fear of the effects of thinking them. That sounds pretty Freudian! But if we do, here's the perfect opportunity, perhaps, to finally admit to yourself that you never really loved Person A or that your life was a failure. Maybe if you thought such things and then remembered those thoughts the next day, bad consequences would follow. But now there can be no such bad consequences the next day; and if you reject the narrative view, there are no retrospective bad consequences on earlier events either. So it's your chance, if you can grab it, to drop your self-illusions and glare at the truth.

    Writing this now, though, that last view seems too dark. I'd rather die under illusion, I think, than dispel the illusion at the last moment, when it's too late to do anything about it. Maybe that's the better narrative. Or maybe truth is not the most important thing on the deathbed.

    [image source]

    Thursday, May 07, 2015

    List of Philosophical Science Fiction / Speculative Fiction

    I've just updated my list of "philosophically interesting" SF -- about 400 total recommendations from 40 contributors, along with brief "pitches" for each work that point toward the work's philosophical interest. All of the contributors are either professional philosophers or professional SF writers with graduate training in philosophy.

    The version sorted by author (or director, for movies) is organized so that the most frequently recommended authors appear first on the list. What SF authors are the biggest hits with the philosophy crowd? Now you know! (Or you will know, shortly after you click.)

    There's also a version sorted by recommender. If you scan through to find works you love, then you can see which contributors recommended those works. Since you have overlapping tastes, you might want to especially check out their other recommendations.

    Tuesday, May 05, 2015

    Momentary Sage

    My newest piece of short speculative fiction, Momentary Sage, has just come out in The Dark. I wanted to do two things with the story.

    First: I wanted to envision the aftermath of A Midsummer Night's Dream. In the main plot of Shakespeare's play, Lysander and Hermia want to marry, but Hermia has been promised to Demetrius whom she loathes. The problem is resolved with a fairy love spell: Demetrius is tricked into loving Helena, to whom he had previously been engaged and who still loves him. All ends happily, with Lysander marrying Hermia and Demetrius marrying Helena. But dear poet Willy, that's too cheap a fix! Demetrius can't just stay permanently tricked into love, happily ever after, can he? Midnight fairy magic always causes more problems than it solves, for that is the unbreakable law of fairies. (Just ask Susanna Clarke.)

    Second: I wanted to explore a certain simplistic parody of Buddhism. Demetrius's love spell ends the next day. But his revenge is this: Hermia's child, Sage, is a philosopher baby who believes that non-existence is preferable to suffering. Since he disbelieves in the reality of an extended self, to determine whether life is worth living at any moment, Sage simply weighs up his total joy and suffering at that moment. As soon as his current suffering outweighs his current joy, he attempts to commit suicide, employing a sharp magic tusk he was born with for just that purpose. Hermia and Lysander must thus keep constant watch on Sage, physically pinning him down the moment he starts feeling frustrated or colicky.

    Though drawn in starker colors, this is just the predicament confronting all parents when their children would rather cast away future interests than accept a little short-term suffering. Is there a rational argument that can convince someone to value the future, if they don't already? Sage and Lysander have a go at it, but Sage always wins. He is the better philosopher.

    It's a piece of dark fantasy, verging on horror -- so if you don't enjoy that genre, stand warned.

    [image source]

    Wednesday, April 29, 2015

    Duplicating the Universe

    I've been thinking about two forms of duplication. One is duplication of the entire universe from beginning to end, as envisioned in Nietzsche's eternal return (cf. Poincare's recurrence theorem on a grand scale). The other is duplication within an eternal (or very long) individual life (goldfish-pool immortality). In both cases, I find myself torn among four different evaluative perspectives.

    For color, imagine a god watching our universe from Big Bang to heat death. At the end, this god says, "In total, that was good. Replay!" Or imagine an immortal life in which you loop repeatedly (without remembering) through the same pleasures over and over.

    Consider four ways of thinking about the value of duplication:

    1. The summative view: Duplicating a good thing doubles the world's goodness, all else being equal; and in particular duplicating the universe doubles the total sum of goodness. There's twice as much total happiness overall, for example. Although Nietzsche rejected the ethics of happiness-summing, something in the general direction of the summative view seems to be implicit in his suggestion that if we knew that the universe repeats infinitely, that would add infinite weight to every decision.

    2. The indifference view: Repetition adds no value or disvalue, if it is a true repetition (no memory, no development, no audience-god watching saying "oh, I remember this... here comes the good part!"). You might even think, if the duplication is perfect enough, that there aren't even two metaphysically distinct things (Leibniz's identity of indiscernibles).

    3. The diminishing returns view: A second run-through is good, but it doesn't double the goodness of the first run-through. For example, the total subjectively experienced happiness might be double, but there's something special about being the first person on the (or "a"?) moon, which is something that never happens in the second run -- and likewise something special about being the last episode of Seinfeld (or "Seinfeld"?) and about being the only copy of a Van Gogh painting (or a "Van Gogh" painting?), which the first run loses if a second run is added.

    4. The precious uniqueness view: Expanding the last thought from the diminishing returns view, one might think that duplication somehow cheapens both runs, and that it's better to do things exactly once and be done.
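    Schematically, letting V(U) be the value of one run of a universe and U+U its perfect duplication, the four views can be put as follows (a rough formalization of my own, assuming V(U) > 0, and not meant to carry technical weight):

\[
\begin{aligned}
\text{Summative:}\quad & V(U{+}U) = 2\,V(U) \\
\text{Indifference:}\quad & V(U{+}U) = V(U) \\
\text{Diminishing returns:}\quad & V(U) < V(U{+}U) < 2\,V(U) \\
\text{Precious uniqueness:}\quad & V(U{+}U) < V(U)
\end{aligned}
\]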

    Which of these four views is the best way of thinking about cosmic value (or the value of an extended life)?

    You might think that this kind of question isn't amenable to rational argumentation -- that there is no discoverable fact of the matter about whether doubling is better. And maybe that's right. But consider this: Universe A is just like our universe. Universe B is just like our universe, but life on Earth never advances past microbial levels of complexity. If you think Universe A is overall better, or more creation-worthy (or, if you're enough of a pessimist, overall worse) than Universe B, then you think there are facts about the relative value of universes -- in which case, plausibly, there should also be some fact about whether a duplicative universe is a lot better, a little better, the same, or worse than a single-run universe. Yes?

    There is, I think, at least a chance that this question, or a relative of it, will become a question of practical ethics in the future -- if we ever become "gods" who create universes of genuinely conscious people running inside of simulated environments (as I discuss here and here), or if we ever have the chance to "upload" into paradises of repetitive bliss.

    [image source]

    Monday, April 27, 2015

    How to Make Van Gogh's "Starry Night" Undulate

    Not sure the original source of this one (maybe notbecauseitsironic on Reddit?).

    First, look at the center of the image below for about 30 seconds.

    [image caption: "Look at the center of this image for 30 sec, then watch Van Gogh's 'Starry Night' come to life"]
    Then look at Van Gogh's "The Starry Night".
    The technique also achieves interesting results when applied to Kinkade:
    [HT Mariano Aski]

    Thursday, April 23, 2015

    New Essay: Death and Self in the Incomprehensible Zhuangzi

    Every nineteen years, I should write a new essay on the ancient Chinese philosopher Zhuangzi, don't you think? This one should tide me over until 2034, then!

    Death and Self in the Incomprehensible Zhuangzi

    The ancient Chinese philosopher Zhuangzi defies interpretation. This is an inextricable part of the beauty and power of his work. The text – by which I mean the “Inner Chapters” of the text traditionally attributed to him, the authentic core of the book – is incomprehensible as a whole. It consists of shards, in a distinctive voice – a voice distinctive enough that its absence is plain in most or all of the “Outer” and “Miscellaneous” Chapters, and which I will treat as the voice of a single author. Despite repeating imagery, ideas, style, and tone, these shards cannot be pieced together into a self-consistent philosophy. This lack of self-consistency is a positive feature of Zhuangzi. It is part of what makes him the great and unusual philosopher he is, defying reduction and summary.
    Full draft here.

    As always, comments, objections, suggestions welcome, either by email or as comments on this post.

    See this post from March 5 for a briefer treatment of the same themes.

    Wednesday, April 22, 2015

    Rules of War, the Card Game, with Deck Management

    I think you'll agree that few games are as tedious as the card game war. Unfortunately, my eight-year-old daughter likes the damned thing. So I cooked up some new rules, which make the game considerably more interesting and quicker to resolve.

    (What does this have to do with the themes of this blog? Um. If widely adopted, the new rules will substantially reduce humanity's card-game-related dyshedons!)

    War with Deck Management

    Simple Rules for Two Players:

    Deal the 52-card deck face down, 26 cards to each player. As in standard war, each player turns their top card face up on the table. High card wins the trick (ace high, suit ignored). The winner of the trick collects the cards face up in a pile. In case of a tie, there's a "war", and each player lays three "soldier" cards face down then one "general" face up. The highest general wins all ten cards. If the generals tie, repeat. If there aren't enough face-down cards to play out the war, each player shuffles their face-up stack of won tricks and draws randomly from that stack to complete the war, then turns the stack back face up. If a player has insufficient cards to play out the war, that player loses the game.

    When both players are out of face-down cards, one round is over. Each player counts their face-up cards openly, for all to see. The player with more cards then discards enough cards to equal the number of cards in the pile of the player with fewer cards. For example, if after Round 1, Player A has 30 cards and Player B has 22, then Player A discards 8 cards of his or her choice, so they both have 22.

    Each player then turns their stack face down and shuffles, then plays Round 2 by the same rules as Round 1. After all cards are face up, the player with more cards again discards to match the number of cards in the stack of the player with fewer. This is repeated until one player runs out of cards and loses.
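    To make the round structure concrete, here is a minimal Python simulation of the two-player game. It is a sketch, with two simplifying assumptions flagged in the comments: wars draw only from the face-down decks (no mid-war reshuffle of the won pile), and the discard strategy is simply "keep your highest cards."

```python
import random

def play_round(deck_a, deck_b):
    """One round of War with Deck Management. Returns the two decks for the
    next round; an empty deck means that player has lost.
    Simplifications: wars draw only from the face-down decks, and the
    discard phase keeps the highest cards."""
    won_a, won_b = [], []
    while deck_a and deck_b:
        x, y = deck_a.pop(), deck_b.pop()
        pot = [x, y]
        while x == y:  # tie -> war: three soldiers down, one general up
            if len(deck_a) < 4:  # can't play out the war: lose the game
                return [], deck_b + won_b + pot
            if len(deck_b) < 4:
                return deck_a + won_a + pot, []
            pot += [deck_a.pop() for _ in range(3)]
            pot += [deck_b.pop() for _ in range(3)]
            x, y = deck_a.pop(), deck_b.pop()
            pot += [x, y]
        (won_a if x > y else won_b).extend(pot)
    if not won_a or not won_b:  # one player won no tricks at all
        return won_a, won_b
    # Discard phase: the richer pile discards its lowest cards down to the
    # size of the poorer pile; both stacks are then reshuffled face down.
    n = min(len(won_a), len(won_b))
    next_a, next_b = sorted(won_a)[-n:], sorted(won_b)[-n:]
    random.shuffle(next_a)
    random.shuffle(next_b)
    return next_a, next_b

# Deal and play a full game (ranks 2-14, ace high, suits ignored).
deck = [rank for rank in range(2, 15) for _ in range(4)]
random.shuffle(deck)
a, b = deck[:26], deck[26:]
rounds = 0
while a and b:
    a, b = play_round(a, b)
    rounds += 1
print(f"Game over after {rounds} rounds; player {'A' if a else 'B'} wins.")
```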

    Advantages over Standard War:

  • The game resolves much faster!
  • The winner of each round enjoys discarding away low cards instead of accumulating a bunch of losers.
  • In later rounds, wars are more common because the low cards are removed from the decks, leaving a smaller range of cards to match.
  • Although aces are important, the original distribution of the aces isn't as important as in standard war. This is partly because there are more wars, so there are more chances for aces to change hands as soldiers, and partly because a generally strong deck that wins more total cards gives a major advantage in the discard phase.
    Advanced Rules with Deck-Order Management:

    Rules as above, except that players may arrange their face down cards in any order they wish. Once the cards are arranged face down, they can't be rearranged, and any wars that require drawing from the face-up pile are still based on random draw from a face-down shuffle.

    Tactics: Since the top card will never be a soldier, you might want to make it your ace. But then if the other player does the same, you'll have a war. Anticipating that, you might make cards 2-4 low and card 5 high. But maybe you know your general will lose if the other player employs the same tactics, so you might surprise them by putting your 2 on top, so that the ace you think they'll play will be wasted gathering a low card. Etc.

    Rules for More Than Two Players:

    Divide the deck equally face down among the players. Any leftover cards go face up in the middle, to be collected by the winner of the first trick. High card wins the trick. If the high card is a tie, then the two (or more) players with the high card play a war. Any remaining player sits out the war, playing neither soldiers nor general. Winner takes all cards.

    The round is over when at most one player has face down cards remaining. Any player out of face down cards before the end of the round sits out the remainder of the round, neither losing nor winning cards. At the end of the round each player counts their total cards. The player with the most cards discards to reduce to the number of cards held by the player with the second most. For example, if after Round 1 Player A has 22, Player B has 18, and Player C has 12, then Player A discards 4 so that Players A and B have 18 and Player C has 12.

    When a player is out of cards, that player is out. As in the two-player version, this can happen either because the player wins no tricks in a round or because the player does not have enough cards to complete a war. The game is over when all but one player is out.

    [image source]

    Thursday, April 16, 2015

    How to Disregard Extremely Remote Possibilities

    In 1% Skepticism, I suggest that it's reasonable to have about a 1% credence that some radically skeptical scenario holds (e.g., this is a dream or we're in a short-term sim), sometimes making decisions that we wouldn't otherwise make based upon those small possibilities (e.g., deciding to try to fly, or choosing to read a book rather than weed when one is otherwise right on the cusp).

    But what about extremely remote possibilities with extremely large payouts? Maybe it's reasonable to have a one in 10^50 credence in the existence of a deity who would give me at least 10^50 lifetimes' worth of pleasure if I decided to raise my arms above my head right now. One in 10^50 is a very low credence, after all! But given the huge payout, if I then straightforwardly apply the expected value calculus, such remote possibilities might generally drive my decision making. That doesn't seem right!
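    To see the worry in numbers: even at that tiny credence, a straightforward expected-value calculation gives

\[
\underbrace{10^{-50}}_{\text{credence}} \times \underbrace{10^{50}\ \text{lifetimes of pleasure}}_{\text{payout}} \;=\; 1\ \text{lifetime of pleasure},
\]

    an expected gain large enough, on paper, to swamp the expected value of almost any ordinary action.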

    I see three ways to insulate my decisions from such remote possibilities without having to zero out those possibilities.

    First, symmetry:
    My credences about extremely remote possibilities appear to be approximately symmetrical and canceling. In general, I'm not inclined to think that my prospects will be particularly better or worse, due to the influence of extremely unlikely deities considered as a group, if I raise my arms than if I do not. More specifically, I can imagine a variety of unlikely deities who punish and reward actions in complementary ways -- one punishing what the other rewards and vice versa. (Similarly for other remote possibilities of huge benefit or suffering, e.g., happening to rise to an infinite Elysium if I step right rather than left.) This indifference among the specifics is partly guided by my general sense that extremely remote possibilities of this sort don't greatly diminish or enhance the expected value of such actions. I see no reason not to be guided by that general sense -- no argumentative pressure to take such asymmetries seriously in the way that there is some argumentative pressure to take dream doubt seriously.

    Second, diminishing returns:
    Bernard Williams famously thought that extreme longevity would be a tedious thing. I tend to agree instead with John Fischer that extreme longevity needn't be so bad. But it's by no means clear that 10^20 years of bliss is 10^20 times more choiceworthy than a single year of bliss. (One issue: If I achieve that bliss by repeating similar experiences over and over, forgetting that I have done so, then this is a goldfish-pool case, and it seems reasonable not to think of goldfish-pool cases as additively choiceworthy; alternatively, if I remember all 10^20 years, then I seem to have become something radically different in cognitive function than I presently am, so I might be choosing my extinction.) Similarly for bad outcomes and for extreme but instantaneous outcomes. Choiceworthiness might be very far from linear with temporal bliss-extension for such magnitudes. And as long as one's credence in remote outcomes declines sharply enough to offset increasing choiceworthiness in the outcomes, then extremely remote possibilities will not be action-guiding: a one in 10^50 credence of a utility of +/- 10^30 is negligible.
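    In other words, sharply declining credences can more than offset boundedly growing payoffs. For instance,

\[
10^{-50} \times \left(\pm 10^{30}\right) = \pm 10^{-20},
\]

    a contribution to expected value far too small to move any real decision.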

    Third, loss aversion:
    I'm loss averse rather than risk neutral. I'll take a bit of a risk to avoid a sure or almost-sure loss. And my life as I think it is, given non-skeptical realism, is the reference point from which I determine what counts as a loss. If I somehow arrived at a one in 10^50 credence in a deity who would give me 10^50 lifetimes of pleasure if I avoided chocolate for the rest of my life (or alternatively, a deity who would give me 10^50 units of pain if I didn't avoid chocolate for the rest of my life), and if there were no countervailing considerations or symmetrical chocolate-rewarding deities, then on a risk-neutral utility function, it might be rational for me to forego chocolate evermore. But foregoing chocolate would be a loss relative to my reference point; and since I'm loss averse rather than risk neutral, I might be willing to forego the possible gain (or risk the further loss) so as to avoid the almost-certain loss of life-long chocolate pleasure. Similarly, I might reasonably decline a gamble with a 99.99999% chance of death and a 0.00001% chance of 10^100 lifetimes' worth of pleasure, even bracketing diminishing returns. I might even reasonably decide that at some level of improbability -- one in 10^50? -- no finite positive or negative outcome could lead me to take a substantial almost-certain loss. And if the time and cognitive effort of sweating over decisions of this sort itself counts as a sufficient loss, then I can simply disregard any possibility where my credence is below that threshold.
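    One schematic way to put this (my own gloss, not a worked-out decision theory): value outcomes relative to the reference point r of life as I take it to be, weight losses more heavily than gains by a factor λ > 1, and let sufficiently small probabilities carry effectively zero decision weight:

\[
v(x) = \begin{cases} x - r & \text{if } x \ge r \\ \lambda\,(x - r) & \text{if } x < r \end{cases}
\qquad \text{with decision weight } w(p) \approx 0 \text{ for } p \lesssim 10^{-50}.
\]

    On such a scheme, the almost-certain loss of lifelong chocolate pleasure is amplified by λ, while the deity's enormous payout is multiplied by a weight of effectively zero -- so the gamble is declined no matter how large the finite payout.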

    These considerations synergize: the more symmetry and the more diminishing returns, the easier it is for loss aversion to inspire disregard. Decisions at credence one in 10^50 are one thing, decisions at credence 0.1% quite another.

    Wednesday, April 15, 2015

    Dialogues on Disability

    ... a new series of interviews, by Shelley Tremain, launches today at the Discrimination and Disadvantage blog with inaugural guest Bryce Huebner.

    One interesting feature of the interview is Bryce's discussion of whether his celiac disease should be viewed as a disability. There is a broad sense in which virtually everyone is disabled -- we are nearsighted, have allergies, experience back pain, etc. Yet, given our social structures, many of these disabilities are hardly disabilities at all. If I lived in a world in which corrective lenses were inaccessible, my 20/500 nearsightedness would have a huge impact on my life. As it is, I pop on my glasses and no problem! (In fact, I'm terrific at reading tiny print that eludes most others my age.) When I was in southern China a couple years ago, I had an allergic reaction to shellfish almost every day of my visit -- shellfish is so pervasive in the cuisine that even when it's not an ingredient, some residue often gets mixed in -- but in southern California, no problem. Conversely, in some culinary cultures, Bryce's celiac disease might hardly manifest; and we might imagine cultures or subcultures where being in a wheelchair is similarly experienced as only a minor inconvenience.

    Monday, April 13, 2015

    Comment Moderation Being Implemented

    I will try to approve comments within 24 hours of submission. I'm sorry to have to do this! Eric

    Wednesday, April 08, 2015

    Blogging and Philosophical Cognition

    Yesterday or today, my blog got its three millionth pageview since its launch in 2006. (Cheers!) And at the Pacific APA last week, Nancy Cartwright celebrated "short fat tangled" arguments over "tall skinny neat" arguments. (Cheers again!)

    To see how these two ideas are related, consider this picture of Legolas and his friend Gimli Cartwright. (Note the arguments near their heads. Click to enlarge if desired.) [modified from image source]

    Legolas: tall, lean, tidy! His argument takes you straight like an arrowshot all the way from A to H! All the way from the fundamental nature of consciousness to the inevitability of Napoleon. (Yes, I'm looking at you, Georg Wilhelm Friedrich.) All the way from seven abstract Axioms to Proposition V.42, "it is because we enjoy blessedness that we are able to keep our lusts in check". (Sorry, Baruch, I wish I were more convinced.)

    Gimli: short, fat, knotty! His argument only takes you from versions of A to B. But it does it three ways, so that if one argument fails, the others remain. It does so without need of a string of possibly dubious intermediate claims. And finally, the different premises lend tangly sideways support to each other: A2 supports A1, A1 supports A3, A3 supports A2. I think of Mozi's dozen arguments for impartial concern or Sextus's many modes of skepticism.

    In areas of mathematics, tall arguments can work -- maybe the proof of Fermat's last theorem is one -- long and complicated, but apparently sound. (Not that I would be any authority.) When each step is unshakeably secure, tall arguments go through. But philosophy tends not to be like that.

    The human mind is great at determining an object's shape from its shading. The human mind is great at interpreting a stream of incoming sound as a sly dig on someone's character. The human mind is stupendously horrible at determining the soundness of philosophical arguments, and also at determining the soundness of most individual stages within philosophical arguments. Tall, skinny philosophical arguments -- this was Cartwright's point -- will almost inevitably topple.

    Individual blog posts are short. They are, I think, just about the right size for human philosophical cognition: 500-1000 words, enough to put some flesh on an idea, making it vivid (pure philosophical abstractions being almost impossible to evaluate for multiple reasons), enough to make one or maybe two novel turns or connections, but short enough that the reader can get to the end without having lost track of the path there.

    In the aggregate, blog posts are fat and tangled: Multiple posts can get at the same general conclusion from diverse angles. Multiple posts can lend sideways support to each other. I offer, as an example, my many posts skeptical of philosophical expertise (of which this is one): e.g., here, here, here, here, here, here.

    I have come to think that philosophical essays, too, often benefit from being written almost like a series of blog posts: several shortish sections, each of which can stand semi-independently and which in aggregate lead the reader in a single general direction. This has become my metaphilosophy of essay writing, exemplified in "The Crazyist Metaphysics of Mind" and "1% Skepticism".

    Of course there's also something to be said for Legolas -- for shooting your arrow at an orc halfway across the plain rather than waiting for it to reach your axe -- as long as you have a realistically low credence that you will hit the mark.

    Tuesday, March 31, 2015

    Percentages of Women on the Program of the Pacific APA

    Tomorrow I head off to the Pacific Division meeting of the American Philosophical Association in Vancouver. (Thursday I'll be presenting my critique of Quassim Cassam's Self-Knowledge for Humans. Saturday, I'll be presenting on blameworthiness for implicit attitudes.) Given my interest in professional philosophy's skewed gender ratios (e.g. here and here), I thought I'd do a rough coding of the Pacific APA main program by gender. Alongside gender, I also coded role in the program and whether the session topic is ethics (including political philosophy).

    I coded gender conservatively, declining to code names that I perceived as gender ambiguous (e.g., "Kris", "Jamie") or that I did not associate with a clear gender given my particular cultural background (most Asian names and some European names or unusual names), except when I had personal knowledge of the person's gender. As a result 13% of the names remained unclassified. In a more careful coding, I would try to get the exclusions down below 5%.

    With that caveat, I found that 275/856 (32%) of Pacific APA main program participants were women. Although this may sound low, it is substantially higher than the proportion of women in the profession overall, which is typically estimated to be in the low 20%'s in North America (e.g., here). (275/856 > 21%, two-tailed exact p < .001; even classifying all ambiguous names as men yields 28% vs. 21%, exact p < .001).

    These data can't fully be explained by recent changes in the proportion of women entering the profession: According to the Survey of Earned Doctorates, 27% of philosophy PhDs in 2013 were women (also 27% in 2012). So even if newly-minted PhDs are more likely to attend conferences, that wouldn't raise the percentage of women to 32%. Affirmative action might be playing a role -- probably other factors too. Plenty of room for speculation.

    Since it's often thought that the gender distribution is closer to equal in ethics than in other areas of philosophy, I also coded sessions as "ethics" vs. "non-ethics" vs. "excluded" (excluded sessions being topically borderline or mixed or concerning general issues in the profession). I found the expected divergence: 38% of the ethics program participants were women, compared to 28% in non-ethics (Z = 3.0, p = .003).

    Finally, I was interested to look at women's representation in different roles on the program. Some roles are much more prestigious than others: being the author of a book targeted for an author-meets-critics session is much more prestigious than chairing a session. I coded five levels of prestige:

  • 1: Author in an author-meets-critics, or award winner, or invited symposium speaker with at least one commentator focused exclusively on your work.
  • 2: Invited symposium speaker not meeting the criteria above, or "critic" at an author-meets-critics.
  • 3: Invited symposium commentator.
  • 4: Refereed colloquium speaker, or colloquium commentator.
  • 5: Session chair.
  • Excluded: APA organized sessions (e.g., on finding a community college position) and poster presentations (too few for meaningful analysis).
    Of the people in the most prestigious roles in the program (Category 1), 13/52 (25%) are women. Although this appears to be a bit below the 32% representation of women in all other roles combined, this sample size is too small to permit any definite conclusions (one-proportion CI 14%-39%).

    In the larger group of people with fairly prestigious roles (Category 2), 59/162 (36%) are women, similar to women's overall representation in the program. The group of symposium commentators was small -- 15/44 (34%) -- but in line with the overall numbers. The proportion of women presenting (usually anonymously refereed) colloquium papers was 85/310 (27%, CI 23%-33%), and the proportion of women chairing sessions was 77/221 (35%, CI 29%-42%). Thus, I found no clear tendency for women to appear disproportionately at either a higher or lower level of prestige than men.
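    For readers who want to check the arithmetic, the reported figures can be recomputed along the following lines in Python (a sketch assuming scipy and statsmodels are available; Clopper-Pearson "exact" intervals reproduce the ranges given above):

```python
from scipy.stats import binomtest
from statsmodels.stats.proportion import proportion_confint

# Women on the program overall vs. a ~21% baseline for the profession:
print(binomtest(275, 856, p=0.21).pvalue)  # two-tailed exact test, p < .001

# Clopper-Pearson ("beta") 95% confidence intervals for single proportions:
print(proportion_confint(13, 52, method="beta"))   # Category 1: ~(0.14, 0.39)
print(proportion_confint(85, 310, method="beta"))  # colloquium speakers: ~(0.23, 0.33)
print(proportion_confint(77, 221, method="beta"))  # session chairs: ~(0.29, 0.42)
```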

    Analysis of more years' data, which I hope to explore in the future, will give more power to detect smaller effect sizes, and will also allow temporal analysis, to see how representation of women in the profession has been changing over time. Ideas welcome!

    Wednesday, March 25, 2015

    "A" Is Red, "I" Is White, "X" Is Black -- Um, Why?

    This is just the kind of dorky thing I think is cool. Check out this graph of the color associations for different letters for people with grapheme-color synesthesia.

    [click on the picture for full size, if it's not showing properly]

    This is from a sample of 6588 synesthetes in the US, reported in Witthoft, Winawer, and Eagleman 2015. Presumably, they're not talking to each other. But there's a pretty good agreement that "A" is red, "X" is black, and "Y" is yellow. But you knew that already, right?

    Now some of these results seem partly explicable: "Y" is yellow, maybe, because the word "yellow" starts with "Y". That might also work for "R" red, "B" blue, and "G" green. For "A" I think of the big red apple on the "A is for apple" posters that ubiquitously decorate kindergarten classrooms. But "O" is not particularly associated with orange in this chart, nor "W" with white. And why are "X" and "Z" black? That we're tired because it's near the end of the alphabet and our eyelids are starting to droop doesn't seem like a good answer. (Does it?)

    You might wonder whether it's only synesthetes who have this consensus of associations, and how stable such associations are over time or between countries.

    You're in luck, then, because here's another cool chart, from Australia in 2005!

    [again, click for clearer view]

    The colored bars are synesthetic respondents and the hatched bars are non-synesthetic respondents. The patterns are similar between synesthetes and non-synesthetes, but maybe with the non-synesthetes tending toward stronger associations between the color and the initial letter of the color word. Furthermore, again "A" is red, "I" is white, and "X" and "Z" are black. US and Australian synesthetes seem to agree that "O" is white, but the Australian non-synesthetes like their "O" orange. For some reason, "D" is now brown (47%!).

    There are some older US data from the underappreciated early introspective psychologist Mary Whiton Calkins in her classic 1893 paper on synesthesia. [Pop quiz: Who are the only three people to have been president of both the American Psychological Association and the American Philosophical Association? Answer: William James, John Dewey, and Mary Whiton Calkins.] She reports that synesthetes tend to associate "I" with black and "O" with white. "O" being white matches the synesthete reports from the US and Australia in 2015 and 2005, but Calkins's black "I" is different. Calkins reports this possible explanation for the whiteness of "O", from one of her participants, seeming to find it plausible: O "= cipher = blank = sheet of white paper".

    Witthoft et al. 2015 found that almost a sixth of their participants born in the US in the late 1970s (but not those born before 1967) seem to have letter-color associations that match much better than chance with the colors of the letters of this then-popular magnet toy:

    [image source]

    Neat finding. Of course, the darned toy has "X" purple and "Z" orange, so it's all wrong!

    Brang, Rouw, Ramachandran and Coulson 2011 find a weak tendency for similarly-shaped letters to associate with similar colors in a US sample. Irish-based Barnett et al. 2008 and British-based Simner et al. 2015 find broadly similar patterns to the other recent English-language populations.

    Spector and Maurer 2011 find that even pre-literate English-speaking Canadian toddlers associate "O" and "I" with white and "X" and "Z" with black, though they do not share older participants' associations of "A" with red, "B" with blue, "G" with green, and "Y" with yellow. They hypothesize that jagged shapes ("X" and "Z") might be more likely to have shaded portions in a natural environment than non-jagged shapes ("O" and "I"), and that other, later associations might be language based. However, color maps from Swiss research on German-language synesthetes (Beeli, Esslen, and Jaencke 2007) show no such relationship (see the chart on p. 790) -- for example, more participants associated "X" with white or light gray than with black or dark gray (though Simner et al. have a German subset that does show black associations with "X" and "Z"). Beeli et al. find a weak tendency for higher frequency letters to be associated with higher saturation colors in a German-language sample. Rouw et al. 2014 found that Dutch and English-speaking non-synesthetic participants had similar associations for "A" (red), "B" (blue), "D" (brown), "E" (yellow), "I" (white), and "N" (brown). Hindi participants, with their different alphabet, had a rather different set of associations -- though the first letter of the Hindi alphabet was also associated with red. They speculate that the first letter in each alphabet gets a "signal" color.

    Okay, so now you know!

    Let me leave you, then, with this highly unnatural thought:

    Whoa.

    Thursday, March 19, 2015

    On Being Blameworthy for Unwelcome Thoughts, Reactions, and Biases

    As Aristotle notes (NE III.1, 1110a), if the wind picks you up and blows you somewhere you don't want to go, your going there is involuntary, and you shouldn't be praised or blamed for it. Generally, we don't hold people morally responsible for events outside their control. The generalization has exceptions, though. You're still blameworthy if you've irresponsibly put yourself in a position where you lack control, such as through recreational drugs or through knowingly driving a car with defective brakes.

    Spontaneous reactions and unwelcome thoughts are in some sense outside our control. Indeed, trying to vanquish them seems sometimes only to enhance them, as in the famous case of trying not to think of a pink elephant. A particularly interesting set of cases are unwelcome racist, sexist, and ableist thoughts and reactions: If you reflexively utter racist slurs silently to yourself, or if you imagine having sex with someone with whom you're supposed to be having a professional conversation, or if you feel flashes of disgust at someone's blameless disability, are you morally blameworthy for those unwelcome thoughts and reactions? Let's stipulate that you repudiate those thoughts and reactions as soon as they occur and even work to compensate for any bias.

    To help fix ideas, let's consider a hypothetical. Hemlata, let's say, lacks the kind of muscular control that most people have, so that she has a disvalued facial posture, uses a wheelchair to get around, and speaks in a way that people who don't know her find difficult to understand. Let’s also suppose that Hemlata is a sweet, competent person and a good philosopher. If the psychological literature on implicit bias is any guide, it's likely that it will be more difficult for Hemlata to get credit for intelligence and philosophical skill than it will be for otherwise similar people without her disabilities.

    Now suppose that Hemlata meets Kyle – at a meeting of the American Philosophical Association, say. Kyle’s first, uncontrolled reaction to Hemlata is disgust. But he thinks to himself that disgust is not an appropriate reaction, so he tries to suppress it. He is only partly successful: He keeps having negative emotional reactions looking at Hemlata. He doesn’t feel comfortable around her. He dislikes the sound of her voice. He feels that he should be nice to her; he tries to be nice. But it feels forced, and it’s a relief when a good excuse arises for him to leave and chat with someone else. When Hemlata makes a remark about the talk that they’ve both just seen, Kyle is less immediately disposed to see the value of the remark than he would be if he were chatting with someone non-disabled. But then Kyle thinks he should try harder to appreciate the value of Hemlata's comments, given Hemlata's disability; so he makes an effort to do so. Kyle says to Hemlata that disabled philosophers are just as capable as non-disabled philosophers, and just as interesting to speak with – maybe more interesting! – and that they deserve fully equal treatment and respect. He says this quite sincerely. He even feels it passionately as he says it. But Kyle will not be seeking out Hemlata again. He thinks he will; he resolves to. But when the time comes to think about how he wants to spend the evening, he finds a good enough reason to justify hitting the pub with someone else instead.

    Question: How should we think about Kyle?

    I propose that we give Kyle full credit for his thoughtful egalitarian judgments and intentions but also full blame for his spontaneous, uncontrolled – to some extent uncontrollable – ableism. The fact that his ableist reactions are outside of his control does not mitigate his blameworthiness for them. When the wind blows you somewhere, the fact that you ended up there does not reflect your attitudes or personality. In contrast, in Kyle's case, his ableist reactions, repudiated though they are, are partly constitutive of his attitudes and personality. Hemlata would not be wrong to find Kyle morally blameworthy for his unwelcome ableist reactions.

    Compare with the case of personality traits: Some people are more naturally sweet, some more naturally jerkish than others. Excepting bizarre or pathological cases, we praise or blame people for those dispositions without much attention to whether they worked hard to attain them or came by them easily or can't help but have them. Likewise, if you've been a spontaneous egalitarian as far back as you can remember, great! And if you've worked hard to become a thoroughgoing spontaneous egalitarian despite a strong natural tendency toward bias, also great, in a different way. And someone whose immediate reactions are so deeply, ineradicably sexist, racist, and ableist that there is no hope of ever obliterating those reactions is not thereby excused.

    This is a harder line, I think, than most philosophers take who write about blameworthiness for implicit bias (e.g., Jennifer Saul and Neil Levy).

Part of my thought here is that words and theories and ineffective intentions are cheap. It's easy to say egalitarian things, with a feeling of sincerity. Among 21st-century liberals, you almost have to be a contrarian not to endorse egalitarian views at an intellectual level. It seems reasonable to give ourselves some credit for that, since egalitarianism (about the right things) is good. But we go too easy on ourselves if we think that such conscious endorsements and intentions are the main thing to which credit and blame should attach: Our spontaneous responses to people, our implicit biases, and the actual pattern of decisions we make are often not as handsome as our words and resolutions, and such things can also matter quite a bit to the people against whom we have these unwelcome thoughts, reactions, and biases. It seems a bit like excuse-making to step away from accepting full blame for that aspect of ourselves.

    (This, by the way, is the topic of the talk I'll be giving at the Pacific APA meeting, in the Group Session from 6-9 pm Saturday evening, April 4.)

    [image source]

    ----------------------------------------------
    Appendix:

One compromise approach is to say that people are blameworthy only because, and to the extent that, their reactions are under their indirect control: Although Kyle now can't effectively eliminate his unwelcome reactions to Hemlata, he could earlier have engaged in a course of self-cultivation which could have reduced or eliminated his tendency toward such reactions, for example by repeatedly exposing himself to positive exemplars of disabled people. He should have taken those measures, but he didn't.

Although I'm broadly sympathetic with that line of response, I see at least two problems with insisting that indirect control, at a minimum, is necessary for blameworthiness: First, indirect control comes in degrees. Presumably, for some people, some biases or unwelcome patterns of reaction would be fairly easily controlled if they made the effort, while for other people those same patterns might be practically impossible to eliminate; but in the ordinary course of assigning praise and blame we rarely inquire into such interpersonal differences in difficulty. Second, the full suite of unwelcome thoughts, reactions, and biases, if we consider not only sexism and racism but also the manifold versions of ableism, ageism, classism, bias based on physical attractiveness, and cultural bias, as well as the full pattern of unjustifiable angry, dismissive, insulting, and unkind thoughts we can have about people even apart from bias – well, it's so huge that a self-improvement project focused on eliminating all of them would be hopeless and arguably so time-consuming that it would squeeze out many other things that also deserve attention. We are forced to choose our targets for self-improvement. But the practical impossibility of a program of self-cultivation that eliminates all unwelcome thoughts, reactions, and biases shouldn't excuse us from being blameworthy for those thoughts, reactions, and biases that remain. Given the difficulty, it's appropriately merciful to cut people some slack – but that slack should be something like understanding and forgiveness rather than excuse from praise and blame.

    Update April 3:

    I've been getting a lot of helpful critique, both in the comments section and orally. Let me add two important qualifications:

    (1.) Pathologically obsessive thoughts probably deserve a different approach.

(2.) The case I am most interested in is self-blame and self-critique, especially among those of us with a tendency to want to let ourselves off the hook. Secondarily, I want to affirm Hemlata's mixed reaction to Kyle (and other parallel cases). What I'm least interested in is licensing a person in a position of power to have a low opinion of others because of whatever unwelcome thoughts, reactions, and biases those others might have, biases that the person in power may or may not share.

    Wednesday, March 11, 2015

    Perils of the Sweetheart

    Tonight in Palm Desert, I'm presenting my "Theory of Jerks (and Sweethearts)" to a general audience. (Come!) In my past work on the topic, jerks have got most of the attention. (Don't they always!) A jerk, in my definition, is someone who gives insufficient weight to (or culpably fails to respect) the perspectives of others around him, treating them as tools to be manipulated or fools to be dealt with rather than as moral and epistemic peers.

    The sweetheart is the opposite of the jerk -- someone who very highly values the perspectives of others around him.

    You might think that if being a jerk is bad, being a sweetheart is good. And I do think it's better, overall, to be a bit of a sweetheart if you can. But I'd also argue that it's possible to go too far toward the sweetheart side, overvaluing, or giving excessive weight to, the perspectives of others around you.

    I see three moral and epistemic perils in being too much of a sweetheart.

    First peril: The sweetheart risks being so attuned to others’ goals and interests that he is captured by them, losing track of his own priorities. Consider the person who never says “no” to others – who spends his whole day helping everyone else get their own things done, leaving insufficient time to relax or to satisfy his own long-term goals. The sweetheart might forget that he can also sometimes make his own demands. Sometimes you need to disappoint people. In the extreme, the sweetheart’s complicity in this arrangement becomes in fact a kind of moral failure – a failure of moral duty to a certain person who counts, who ought to be respected, who ought to be cut some slack and given a chance to flourish and discover independent ideals – I’m speaking here, of course, of the duties the sweetheart has to himself.

Second peril: Because the sweetheart has so much respect for the opinions of other people who might disagree with him, he can have trouble achieving sufficient intellectual independence. This is part of the reason that visionary moralists are often not sweethearts. The perfect sweetheart hates disagreeing with others, hates taking controversial stands, prefers the compromise position in which everyone gets to be at least partly right. But not everyone is always partly right. Southerners oppressing black people were not partly right. Physically abusive alcoholic husbands are not partly right. Some people need to be fought against, and the purest sweethearts tend not to have much stomach for the fight. Also, some people, even if not morally wrong, are just factually wrong, and sometimes it takes a clear, confident, disagreeable voice to point this out.

Third peril: To the extent that being a jerk or sweetheart turns on how you react to the people around you, being too much of a sweetheart means risking being too captured by the perspectives of whoever happens to be around you – without, perhaps, enough counterbalancing weight on the interests and perspectives of more distant people. The homeless person right here in front of you might compel you so much that you wrongly set aside other obligations so that you can help her, or you give her money that would be more wisely and effectively given to (say) Oxfam. When you’re with your friends who are liberal you find yourself agreeing with all their liberal positions; when you’re with your friends who are conservative you find yourself agreeing with all their conservative positions. You are blown about by the winds.

If you know the cartoon SpongeBob SquarePants, you'll recognize that the humor and conflict in the show often derive from SpongeBob's excessive sweetness in these three ways.

    I’m not sure there’s a perfect Aristotelian golden mean here: an ideal spot on the spectrum from jerk to sweetheart. Maybe there’s one best way to be – partway toward the sweet side perhaps, but not all the way to doormat – but I’m more inclined to think that perfection is not even a conceivable thing, that one can’t be wholly true to oneself without sinning against others, that one can’t wholly satisfy the legitimate demands of others without sinning against oneself; that everyone is thus deficient in some ways.

    Furthermore, when we try to correct, often we don’t even know what direction to go in. It’s characteristic of the sweetheart to worry that he has been too harsh or insistent when in fact what he really needs is to be more comfortable standing up for himself; it’s characteristic of the jerk to regret moments of softness and compromise.

    (image source)

    Thursday, March 05, 2015

    Zhuangzi's Delightful Inconsistency about Death

    I've been working on a new paper on ancient Chinese philosophy, "Death and Self in the Incomprehensible Zhuangzi" (come hear it Saturday at Pitzer College, if you like). In it, I argue that Zhuangzi has inconsistent views about death, but that that inconsistency is a good thing that fits nicely with his overall philosophical approach.

    Most commentators, understandably, try to give Zhuangzi -- the Zhuangzi of the authentic "Inner Chapters" at least -- a self-consistent view. Of course! This is only charitable, you might think. And this is what we almost always try to do with philosophers we respect.

    There are two reasons not to take this approach to Zhuangzi.

    First, Zhuangzi seems to think that philosophical theorizing is always defective, that language always fails us when we try to force rigid distinctions upon it, and that logical reasoning collapses into paradox when pushed to its natural end (see especially Ch. 2). Thus, you might think that Zhuangzi should want to resist committing to any final, self-consistent philosophical theory.

    Second, Zhuangzi employs a variety of devices that seem intended to frustrate the reader's natural desire to make consistent sense of his work, including: stating patent absurdities with a seeming straight face; putting his words in the mouths of various dubious-seeming sources; using humor, parable, and parody; and immediately challenging or contradicting his own assertions.

    Thus, I think we can't interpret Zhuangzi in the way we'd interpret most other philosophers: He is not, I think, offering us the One Correct Theory or the Philosophical Truth. His task is different, more subtle, more about jostling us out of our usual habits and complacent confidence, while pushing us in certain broad directions.

    Given the brevity of the text, his comments about longevity and death are strikingly frequent. In my view, they exemplify his self-inconsistency in a fun and striking way. I see three strands:

(1.) Living out your full span of years is better than dying young. For example, Zhuangzi appears to advocate that you "live out all your natural years without being cut down halfway" (Ziporyn trans., p. 39). He celebrates trees that are big and useless and thus never chopped down (pp. 8, 30-31). He seems to prefer the useless yak who can't catch rats to the weasel who can and who therefore hurries about, dying in a snare (p. 8). He seems to think it a bad outcome to be killed by a tyrant (pp. 25, 29-30) or to die because well-meaning friends have drilled holes in your head (p. 54). A butcher so skillful in carving oxen that his blade is still as sharp as if straight from the whetstone is described as knowing "how to nourish life" (p. 23).

(2.) Living out your full span of years is not better than dying young. In seemingly more radical moments, Zhuangzi says that although the sage likes growing old, the sage also likes dying young (p. 43), that the "Genuine Human Beings of old understood nothing about delighting in being alive or hating death. They emerged without delight, submerged again without resistance" (p. 40). He seems to admire groups of friends who are not at all distressed by each other's deaths, who "look upon life as a dangling wart or a swollen pimple, and on death as its dropping off, its bursting and draining" (pp. 46-47). Of "early death, old age, the beginning, the end", the sage sees "each of them as good" (p. 43).

    (3.) We don't know whether living out your full span of years is better than dying young. This view fits with the general skepticism Zhuangzi expresses in Chapter 2. It doesn't have as broad a base of direct textual support, but there is one striking passage to this effect:

How, then, do I know that delighting in life is not a delusion? How do I know that in hating death I am not like an orphan who left home in youth and no longer knows the way back? Lady Li was a daughter of the border guard of Ai. When she was first captured and brought to Jin, she wept until tears drenched her collar. But when she got to the palace, sharing the king's luxurious bed and feasting on the finest meats, she regretted her tears. How do I know the dead don't regret the way they used to cling to life? (p. 19)

You could try to reconcile these various strands into a consistent view. For example, you could say that they are targeted to readers of different levels of enlightenment (Allinson), or that they reflect different phases of Zhuangzi's intellectual development (possibly Graham), or you might try to explain away one or the other strand: Maybe he really values death as much as he values life, as part of the infinite series of changes that is life-and-death (possibly Ames or Fraser), or you might think that, on Zhuangzi's view, it's only remote "sages", who are lacking something important, who are unmoved by death (Olberding). But each of these interpretations has substantial weaknesses, if intended as a way to reconcile the text into a self-consistent unity.

[revision 6:40 pm: These statements are too compressed to be entirely accurate to these scholars' views, and Olberding in particular suggests that, in the course of personal mourning (outside the Inner Chapters), Zhuangzi seems to have a shifting attitude.]

    My own approach is to allow Zhuangzi to be inconsistent, since there's textual evidence that Zhuangzi is not trying to present a single, self-consistent philosophical theory. If Zhuangzi thinks that philosophical theorizing is always inadequate in our small human hands, then he might prefer to philosophize in a fragmented, shard-like way, expressing a variety of different, conflicting perspectives on the world. He might wish to frustrate, rather than encourage, our attempts to make neat sense of him, inviting us to mature as philosophers not by discovering the proper set of right and wrong views, but rather by offering his hand as he takes his smiling plunge into confusion and doubt.

    That delightfully inconsistent Zhuangzi is the one I love -- the Zhuangzi who openly shares his shifting ideas and confusions, rather than the Zhuangzi that most others seem to see, who has some stable, consistent theory underneath that for some reason he chooses not to display in plain language on the surface of the text.

    Related posts:
    Skill and Disability in Zhuangzi (Sep. 10, 2014)
    Zhuangzi, Big and Useless -- and Not So Good at Catching Rats (Dec. 19, 2008)
    The Humor of Zhuangzi; the Self-Seriousness of Laozi (Apr. 8, 2013)
    [image source]

    Update April 23:

A full-length draft is now up on my website.

    Wednesday, February 25, 2015

    Depressive Thinking Styles and Philosophy

    Recently I read two interesting pieces that I'd like to connect with each other. One is Peter Railton's Dewey Lecture to the American Philosophical Association, in which he describes his history of depression. The other is Oliver Sacks's New York Times column about facing his own imminent death.

    One of the inspiring things about Sacks's work is that he shows how people with (usually neurological) disabilities can lead productive, interesting, happy lives incorporating their disabilities and often even turning aspects of those disabilities into assets. (In his recent column, Sacks relates how imminent death has helped give him focus and perspective.) It has also always struck me that depression -- not only major, clinical depression but perhaps even more so subclinical depressive thinking styles -- is common among philosophers. (For an informal poll, see Leiter's latest.) I wonder if this prevalence of depression among philosophers is non-accidental. I wonder whether perhaps the thinking styles characteristic of mild depression can become, Sacks-style, an asset for one's work as a philosopher.

    Here's the thought (suggested to me first by John Fischer): Among the non-depressed, there's a tendency toward glib self-confidence in one's theoretical views. (On positive illusions in general among the non-depressed see this classic article.) Normally, conscious human reasoning works like this: First, you find yourself intuitively drawn to Position A. Second, you rummage around for some seemingly good argument or consideration in favor of Position A. Finally, you relax into the comfortable feeling that you've got it figured out. No need to think more about it! (See Kahneman, Haidt, etc.)

Depressive thinking styles are, perhaps, the opposite of this blithe and easy self-confidence. People with mild depression will tend, I suspect, to be less easily satisfied with their first thought, at least on matters of importance to them. Before taking a public stand, they might spend more time imagining critics attacking Position A, and how they might respond. Inclined toward self-doubt, they might be more likely to check and recheck their arguments with anxious care, more carefully weigh up the pros and cons, worry that their initial impressions are off-base or too simple, discard less-than-perfect arguments, worry that there are important objections that they haven't yet considered. Although one needn't be inclined toward depression to reflect in this manner, I suspect that this self-doubting style will tend to come more naturally to those with mild to moderate depressive tendencies, deepening their thought about the topic at hand.

I don't want to downplay the seriousness of depression, its often negative consequences for one's life, including for one's academic career, and the counterproductive nature of repetitive dysphoric rumination (see here and here), which is probably a different cognitive process from the kind of self-critical reflection that I'm hypothesizing here to be its correlate and cousin. [Update, Feb. 26: I want to emphasize the qualifications of that previous sentence. I am not endorsing the counterproductive thinking styles of severe, acute depression. See also Dirk Koppelberg's comment below and my reply.] However, I do suspect that mildly depressive thinking styles can be recruited toward philosophical goals and, if managed correctly, can fit into, and even benefit, one's philosophical work. And among academic disciplines, philosophy in particular might be well-suited for people who tend toward this style of thought, since philosophy seems to be proportionately less demanding than many other disciplines in tasks that benefit from confident, high-energy extraversion (such as laboratory management and people skills) and proportionately more demanding of careful consideration of the pros and cons of complex, abstract arguments and of precise ways of formulating positions to shield them from critique.

    Related posts:
    Depression and Philosophy (July 28, 2006)
    SEP Citation Analysis Continued: Jewish, Non-Anglophone, Queer, and Disabled Philosophers (August 14, 2014)

    Update April 23:

    The full-length circulating draft is now up on my academic website.

    Thursday, February 19, 2015

    Why I Deny (Strong Versions of) Descriptive Cultural Moral Relativism

    Cultural moral relativism is the view that what is morally right and wrong varies between cultures. According to normative cultural moral relativism, what varies between cultures is what really is morally right and wrong (e.g., in some cultures, slavery is genuinely permissible, in other cultures it isn't). According to descriptive cultural moral relativism, what varies is what people in different cultures think is right and wrong (e.g., in some cultures people think slavery is fine, in others they don't; but the position is neutral on whether slavery really is fine in the cultures that think it is). A strong version of descriptive cultural moral relativism holds that cultures vary radically in what they regard as morally right and wrong.

A case can be made for strong descriptive cultural moral relativism. Some cultures appear to regard aggressive warfare and genocide as among the highest moral accomplishments (consider the book of Joshua in the Old Testament); others (ours) think aggressive warfare and genocide are possibly the greatest moral wrongs of all. Some cultures celebrate slavery and revenge killing; others reject those things. Some cultures deem blasphemy punishable by death; others take a more liberal attitude. Cultures vary enormously on women's rights and obligations.

    However, I reject this view. My experience with ancient Chinese philosophy is the central reason.

    Here are the first passages of the Analects of Confucius (Slingerland trans., 2003):

    1.1. The Master said, "To learn and then have occasion to practice what you have learned -- is this not satisfying? To have friends arrive from afar -- is this not a joy? To be patient even when others do not understand -- is this not the mark of the gentleman?"
    1.2. Master You said, "A young person who is filial and respectful of his elders rarely becomes the kind of person who is inclined to defy his superiors, and there has never been a case of one who is disinclined to defy his superiors stirring up rebellion. The gentleman applies himself to the roots. 'Once the roots are firmly established, the Way will grow.' Might we not say that filial piety and respect for elders constitute the root of Goodness?"
    1.3. The Master said, "A clever tongue and fine appearance are rarely signs of Goodness."
    1.4. Master Zeng said, "Every day I examine myself on three counts: in my dealings with others, have I in any way failed to be dutiful? In my interactions with friends and associates, have I in any way failed to be trustworthy? Finally, have I in any way failed to repeatedly put into practice what I teach?"

No substantial written philosophical tradition is culturally farther from the 21st century United States than is ancient China. And yet, while we might not personally endorse these particular doctrines, they are not alien. It is not difficult to enter into the moral perspective of the Analects, finding it familiar, comprehensible, different in detail and emphasis, but at the same time homey. Some people react to the text as a kind of "fortune cookie": full of boring and trite -- that is, familiar! -- moral advice. (I think this underestimates the text, but the commonness of the reaction is what interests me.) Confucius does not advocate the slaughter of babies for fun, nor being honest only when the wind is from the east, nor severing limbs based on the roll of dice. 21st century U.S. undergraduates might not understand the text's depths, but they are not baffled by it as they would be by a moral system that was just a random assortment of recommendations and prohibitions.

You might think, "Of course there would be some similarities!" The ancient Confucians were human beings, after all, with certain natural reactions, and they needed to live in a not-totally-chaotic social system. Right! But then, of course, this is already to step away from the most radical form of descriptive cultural moral relativism.

Still, you might say, the Analects is pretty morally different: the Confucian emphasis on being "filial", for example, is not really a big piece of U.S. culture. It's an important way in which the moral stance of the ancient Chinese differs from ours.

    This response, I think, underestimates two things.

    First, it underestimates the extent to which people in the U.S. do regard it as a moral ideal to care for and respect their parents. The word "filial" is not a prominent part of our vocabulary, but this doesn't imply that attachment to and concern for our parents is minor.

Second, and more importantly, it underestimates the diversity of opinion in ancient China. The Analects is generally regarded as the first full-length Chinese philosophical text. The second full-length text is the Mozi. Mozi argues vehemently against the Confucian ideal of treating one's parents with special concern. He argues that we should have equal concern for all people, with no more concern for our own parents than for anyone else's. Loyalty to one's state and prince he also rejects, as objectionably "partial". One's moral emphasis should be on ensuring that everyone has their basic necessities met -- food, shelter, clothing, and the like. Whereas Confucius is a traditionalist who sees the social hierarchy as central to moral life, Mozi is a radical, cosmopolitan, populist consequentialist!

    And of course, Daoism is another famous moral outlook that traces back to ancient China -- one that downplays social obligation to others and celebrates harmonious responsiveness to nature -- quite different again from Confucianism and Mohism.

    Comparing ancient China and the 21st century U.S., I see greater differences in moral outlook within each culture than I see between the cultures. With some differences in emphasis and in culturally specific manifestations, a similar range of outlooks flourishes in both places. (This would probably be even more evident if we had more than seven full-length philosophical texts from ancient China.)

    So what about slavery, aggressive warfare, women's rights, and the rest? Here's my wager: If you look closely at cultures that seem to differ from ours in those respects, you will see a variety of opinions on those issues, not a monolithic foreignness. Some slaves (and non-slaves) presumably abhor slavery; some women (and non-women) presumably reject traditional gender roles; every culture will have pacifists who despise military conquest; etc. And within the U.S., probably with the exception of slavery traditionally defined, there still is a pretty wide range of opinion about such matters, especially outside mainstream academic circles.

    [image source]