Friday, January 12, 2018

Grounds for a Sliver of Skepticism

Yesterday, Philosophy Bites released a brief podcast interview with me on skepticism. Listening to the interview now, I feel that I didn't frame my project as well as I might have, so I'll add a few remarks here.

I want to think about what grounds we might have for a non-trivial sliver of radically skeptical doubt.

There is, in my mind, an important difference between, for example, "brain-in-a-vat" skepticism and dream skepticism. Brain-in-a-vat skepticism asks you how you know that genius alien neuroscientists didn't remove your brain last night while you were sleeping, drop it into a vat, and start feeding it stimuli as though you were having a normal day. Dream skepticism asks how you know that you are not currently dreaming. The difference is this: There are no grounds for thinking that there's any but an extremely remote chance that you have been envatted, while there are some reasonable grounds for thinking there's a non-trivial sliver of a chance that you are presently dreaming.

It's crucial here to recognize the role played by theories that are probably wrong. It is, I think, probably false that people often have dream experiences just like waking sensory experiences. Dreams are, in my view, sketchy or even merely imagistic, rather than quasi-sensory with rich realistic detail. However, I'm hardly certain of this theory, and some prominent dream theorists argue that dream experiences are often highly realistic or even phenomenologically indistinguishable from waking life (e.g., Revonsuo 1995; Hobson, Pace-Schott & Stickgold 2000; Windt 2010). If I grant that latter view a live chance of being correct, then it seems reasonable for me to harbor some doubt about my current state. Maybe this now is one of those highly realistic dreams.

The idea here is that there are grounds for accepting, as a live possibility, a theoretical view from which it seems to follow that I might be radically wrong about my current situation. I don't prefer that theoretical view; but neither can I reject it with high certainty. It is thus reasonable for me to reserve a non-trivial sliver of doubt about my current wakefulness.

I would argue similarly with respect to two other skeptical possibilities: the idea that we are Artificial Intelligences living in a simulated world, and a somewhat less familiar form of skepticism I call "cosmological skepticism". In both cases, there are grounds, I think, for treating as a live possibility theories that, while probably not correct, would, if correct, imply that you might easily be radically wrong in many of your ordinary beliefs.

In concluding the interview, I also make an empirical conjecture: that seriously entertaining radically skeptical possibilities has the psychological effect of reducing dogmatic self-confidence and increasing tolerance, even regarding non-skeptical possibilities. I hope to explore this more fully in a future post.

Full interview here.

Related papers:

  • 1% Skepticism
  • The Crazyist Metaphysics of Mind
  • Experimental Evidence for the Existence of an External World
  • Zhuangzi's Attitude Toward Language and His Skepticism
    Friday, January 05, 2018

    Imagining Yourself in Another's Shoes Vs. Extending Your Love

    A favorite passage of mine from Mengzi (Mencius) is this one:

    That which people are capable of without learning is their genuine capability. That which they know without pondering is their genuine knowledge. Among babes in arms there are none that do not know to love their parents. When they grow older, there are none that do not know to revere their elder brothers. Treating one's parents as parents is benevolence. Revering one's elders is righteousness. There is nothing else to do but extend these to the world. (7A15, Van Norden trans.)

    One thing I like about this passage is that it assumes love and reverence for one's family as a given, rather than as a special achievement, portraying moral development as a matter of extending that natural love and reverence to new targets.

    Similarly, in passage 1A7, Mengzi notes the kindness that the vicious ruler King Xuan exhibits in saving a frightened ox from slaughter, and he urges King Xuan to extend similar kindness to the people of his kingdom. Mengzi says that such extension is a matter of "weighing" things correctly -- a matter of treating similar things similarly and not overvaluing what merely happens to be nearby.

    Contrast this approach with "The Golden Rule": "Do unto others as you would have others do unto you". Contrast it also with the common advice to imagine yourself in someone else's shoes. Mengzian extension starts by assuming that people are already concerned about nearby others, and takes the developmental or cognitive challenge to be extending that concern beyond a narrow circle. Golden Rule / others' shoes thinking, by contrast, starts by assuming egoistic self-interest, and takes the developmental or cognitive challenge to be generalizing beyond one's own case.

    Maybe we can model Golden Rule / others' shoes thinking as follows:

    (1.) If I were in the situation of Person X, I would want to be treated according to Principle P.
    (2.) Golden Rule: Do unto others as you would have others do unto you.
    (3.) Thus, I will treat Person X according to Principle P.

    In contrast, maybe we can model Mengzian extension as follows:

    (1.) I care about Person X and want to treat them well.
    (2.) Person Y, though perhaps more distant, is relevantly similar.
    (3.) Therefore, I should also treat Person Y well.

    There will be other more careful and detailed formulations, but this simple sketch captures, I hope, how radically different these two ways of modeling moral cognition are. Mengzian extension models general moral concern on the natural concern we already have for those nearby, while Golden Rule / others' shoes thinking models general moral concern on concern for oneself.

    I like Mengzian extension better, for three reasons.

    First, Mengzian extension is more psychologically plausible. People do naturally feel concern and compassion for others around them; explicit exhortations aren't necessary to bring this about. This natural concern and compassion is likely the main seed from which mature moral cognition grows. Our moral reactions to vivid, nearby cases become the bases for more general principles and policies.

    Second, Mengzian extension is less ambitious -- in a good way. The Golden Rule imagines a leap from self-interest to generalized good treatment of others. The Golden Rule is sometimes excellent and helpful advice, perhaps especially for people who are already concerned about others and thinking about how to implement that concern. But Mengzian extension has the advantage of starting the cognitive project much nearer the target, involving less of a leap, since it only moves from natural concern about nearby cases to similar treatment of relevantly similar but more distant cases.

    Third, Mengzian extension could arguably be turned back upon yourself, if you are one of those people who has trouble standing up for your own interests and rights. You would want to stand up for your loved ones and help them flourish. Applying Mengzian extension, extend the same kindness to yourself that you would give to others you care about. If you'd want your sister to be able to take a vacation, realize that you owe the same courtesy to yourself. [Thanks to Yvonne Tam for emphasizing this point in discussion with me. See also my post "Perils of the Sweetheart".]

    Although both Mengzi and Rousseau endorse the motto that "human nature is good", and have views that are similar in important ways (as I explore here), this is one difference between them. In both Emile and Discourse on Inequality, Rousseau emphasizes self-concern as the starting point, treating natural pity or compassion for others as secondary and derivative. He endorses the foundational importance of the Golden Rule, concluding that "Love of men derived from love of self is the principle of human justice" (Emile, trans. Bloom, p. 235).

    This difference between Mengzi and Rousseau does not, I think, reflect a general cultural difference between ancient China and the West. Kongzi (Confucius), for example, endorses something like the Golden Rule: "Do not impose on others what you yourself do not desire" (15.24, Slingerland trans.) Mozi and Xunzi, also writing in the same period, imagine people acting mostly or entirely selfishly, until society artificially imposes regulations upon us, and so they cannot see Mengzian extension as the core of moral development. (However, also see Mozi's argument for impartial concern that starts by assuming that one is concerned for one's parents [ch. 16].) The extension approach is specifically Mengzian rather than generally Chinese.

    [image source]

    Monday, January 01, 2018

    Writings of 2017

    It's a tradition for me now, posting a retrospect of the past year's writings on New Year's Day. (Here are the retrospects of 2012, 2013, 2014, 2015, and 2016.)

    I'm proud of what I've managed to write in the past year. May 2018 be similarly fruitful!

    -----------------------------------

    Full-length non-fiction essays appearing in print in 2017:

    Full-length non-fiction finished and forthcoming:
    Shorter non-fiction:
      "Consciousness, idealism, and skepticism: Reflections on Jay Garfield’s Engaging Buddhism", Sophia (forthcoming).
    Editing work:
      Oneness in philosophy, religion, and psychology (with P.J. Ivanhoe, O. Flanagan, R. Harrison, and H. Sarkissian), Columbia University Press (forthcoming).
    Non-fiction in draft and circulating:
    Science fiction stories:

        - translated into Vietnamese for TẠP CHÍ KHOA HỌC VIỄN TƯỞNG VIỆT NAM, issue 13, 37-60.
    Some favorite blog posts:
    Selected interviews:

    Saturday, December 30, 2017

    Some Universities Have About 20% Women Philosophy Majors and Others About 40%, in Patterns Unlikely to Be Chance

    According to data from the National Center for Education Statistics, about 32% of Philosophy Bachelor's degree recipients in the U.S. are women, a lower percentage than in almost any other discipline outside of engineering and the physical sciences. I would love to see that percentage rise. (I say this without committing to the dubious view that all majors should have gender proportions identical to those of the general population.)

    A case could be made for pessimism. Although academic philosophy has slowly been diversifying in race and ethnicity, the percentage of women receiving philosophy BAs, the percentage receiving philosophy PhDs, and the percentage publishing in elite philosophy journals have remained virtually unchanged since the 1990s. It seems that we have reached a plateau. Could anything move the dial?

    I decided to look, institution-by-institution across the U.S., at the percentage of Philosophy B.A. recipients who are women. If there are some "outlier" institutions that have a much higher percentage of women philosophy majors than is typical, we could look more closely at those institutions. Such institutions might have policies or practices worth imitating.

    Method:

    For this analysis, I downloaded Bachelor's degree completion data from all 7357 "U.S. institutions" in the NCES IPEDS database for the academic years 2009-2010 to 2015-2016. For each institution, I examined four variables: total Bachelor's degrees awarded (1st major), total Bachelor's degrees awarded to women (1st major), total Philosophy Bachelor's degrees awarded (1st or 2nd major, category 38.01), and total Philosophy Bachelor's degrees awarded to women (1st or 2nd major, category 38.01).

    To reduce the likelihood of high or low percentages due to chance, I limited my analysis to the 66 colleges and universities that had awarded at least 200 Bachelor's degrees in Philosophy over the period. Across these 66 institutions, there were 22,150 completed Philosophy majors during the seven-year period, of which 7,120 (32.1%) were women. (Across all institutions, 61,963 Philosophy Bachelor's degrees were awarded, 31.6% to women; so the 66 targeted institutions together awarded about 36% of the Philosophy Bachelor's degrees in the U.S.) I then ranked the institutions by the percentage of Philosophy majors who were women. One would expect most of these institutions to be within a few percentage points of 32%. My interest is in the statistical outliers.
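    For anyone who wants to follow along, here's a minimal sketch of that aggregation in Python with pandas. The filename and column names are hypothetical placeholders, not the actual IPEDS variable codes:

```python
# A minimal sketch of the aggregation described above. The filename and
# column names ("institution", "phil_total", "phil_women") are hypothetical
# placeholders, not actual IPEDS variable codes.
import pandas as pd

df = pd.read_csv("ipeds_completions_2010_2016.csv")

# Sum Philosophy BA counts per institution across the seven academic years.
by_school = df.groupby("institution")[["phil_total", "phil_women"]].sum()

# Keep only institutions awarding at least 200 Philosophy BAs over the period.
targets = by_school[by_school["phil_total"] >= 200].copy()

# Percentage of Philosophy BA recipients who are women, per institution.
targets["pct_women"] = 100 * targets["phil_women"] / targets["phil_total"]

print(targets.sort_values("pct_women"))
```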

    [Raw data for all institutions here. Summary data for the targeted 66 here.]

    I wanted to be conservative in testing for outliers, so here is how I did it.

    First, I performed a two-tailed two-proportion z test, comparing the proportion of women Philosophy majors at each target institution to the proportion of women Philosophy majors in all the remaining institutions combined. To correct for multiple comparisons, I used a Bonferroni correction, lowering the threshold for statistical significance from the conventional p < .05 (representing a 5% chance that results at least as extreme would arise from random sampling error) to p < .0008. Ten of the 66 institutions had proportions significantly different from the overall proportion by this measure.
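    For concreteness, here is a sketch of that first test, using the St Thomas counts reported below (32/329) against the combined totals for the targeted institutions (7,120/22,150). This illustrates the test; it is not necessarily the exact computation behind the reported p values:

```python
# Sketch of a two-tailed two-proportion z test with a Bonferroni-corrected
# threshold. Counts: St Thomas (32 women / 329 Philosophy BAs) vs. the
# remaining targeted institutions combined.
from math import sqrt
from scipy.stats import norm

def two_prop_z(x1, n1, x2, n2):
    """Return (z, two-tailed p) for a two-proportion z test."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))

z, p = two_prop_z(32, 329, 7120 - 32, 22150 - 329)
alpha = 0.05 / 66  # Bonferroni correction for 66 comparisons: about .0008
print(f"z = {z:.2f}, p = {p:.1e}, outlier: {p < alpha}")
```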

    However, some of those results might have been due to unusually large or small proportions of women in the overall undergraduate population of those institutions, so I performed a second test as well. Overall, across all 66 institutions, 1.43% of graduating men completed Philosophy majors, compared to 0.58% of graduating women. Based on these numbers, I calculated an "expected" percentage of women Philosophy majors for each institution, given the number of graduating men and graduating women at that institution: (total graduating women * .0058)/((total graduating men * .0143) + (total graduating women * .0058)). The resulting expected percentages ran from 25% (for Virginia Polytechnic, with a total graduating class of 45% women) to 43% (for Loyola Chicago, with a graduating class of 65% women). I then performed two-tailed one-proportion z-tests comparing the percentage of women Philosophy majors at each institution to this expected percentage for the same institution, using the same p < .0008 threshold to correct for multiple comparisons. Twelve institutions had gender ratios significantly different from chance by this measure.
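    And a sketch of the second test, with made-up counts (so the printed p value won't match any figure reported in this post):

```python
# Sketch of the "expected percentage" calculation and the two-tailed
# one-proportion z test. All counts here are made up for illustration.
from math import sqrt
from scipy.stats import norm

P_MEN, P_WOMEN = 0.0143, 0.0058  # overall Philosophy-major rates by gender

def expected_pct_women(grad_men, grad_women):
    """Expected share of women among Philosophy majors, per the formula above."""
    return (grad_women * P_WOMEN) / (grad_men * P_MEN + grad_women * P_WOMEN)

def one_prop_z(x, n, p0):
    """Return (z, two-tailed p) testing x/n against expected proportion p0."""
    se = sqrt(p0 * (1 - p0) / n)
    z = (x / n - p0) / se
    return z, 2 * norm.sf(abs(z))

p0 = expected_pct_women(grad_men=4000, grad_women=6000)  # hypothetical school
z, p = one_prop_z(100, 250, p0)                          # hypothetical counts
print(f"expected {p0:.1%} women, observed {100/250:.1%}, p = {p:.4f}")
```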

    For my final list of outliers, I included only institutions significantly different from chance by both measures -- that is, schools that both had a significantly higher or lower percentage of women Philosophy majors than did the other institutions and that had a significantly higher or lower percentage of women Philosophy majors than would be expected given the overall gender composition of their undergraduate graduating body.

    Results:

    Five of the 66 institutions were outliers by these criteria. Two were outliers on the low side:

  • University of St Thomas: Women were 9.7% of graduating Philosophy majors (32/329; vs 48% of Bachelor's degree recipients across all majors; two-prop p < .0001; one-prop p < .0001);
  • Franciscan University of Steubenville: 17.3% (43/249; vs 63%; < .0001, < .0001).

    Five more institutions had fewer than 24% women Philosophy majors, with p values < .05 by both measures, but without meeting my stringent criteria for outliers. Due to the nature of multiple comparisons, some of these may appear on that list by chance.

    Three institutions were outliers on the high side:

  • University of Pennsylvania: Women 38.5% of Philosophy BA recipients (343/892; vs 51% among Bachelor's overall, p < .0001, p < .0001);
  • University of Scranton: 46.0% (108/235; < .0001, .0005);

  • and -- but I suspect there's something odd going on here -- University of Washington, Bothell: 72.8% (348/481; < .0001, < .0001).

    UW Bothell doesn't appear to offer a straight Philosophy major, so I suspect these numbers are due to a potentially misleading decision to classify one of their popular interdisciplinary majors (Society, Ethics & Human Behavior?) as "Philosophy" (38.01 in the NCES classification). I'm looking into the matter. Meanwhile, let's bracket that campus.

    Between Penn and Scranton were several campuses with higher percentages of women among Philosophy majors, but smaller total numbers of Philosophy majors, and thus not crossing my stringent statistical threshold for outliers: University of Southern California (39.4%), University of Virginia Main Campus (39.4%), University of California Riverside (go team! 39.6%), CUNY Brooklyn (40.1%), Virginia Polytechnic (41.0%), Georgia State (41.1%), Boston University (41.3%), and Cal State Fresno (43.2%). None of these schools had strikingly high percentages of women BA recipients overall (45%-61%, compared to 57% women BA recipients in the US overall during this period and 54% in the 66 selected universities). Among these, U Virginia, UC Riverside, Virginia Polytechnic, and CSU Fresno all have p values < .05 by both measures. Again, due to the nature of multiple comparisons, some of these may appear on that list by chance.

    I draw the following conclusion:

    Some colleges and universities have unusually high or low percentages of women philosophy majors as a result of factors that are unlikely to be chance, across a range from about 10%-24% at the low end to about 38%-46% at the high end. If these differences are due to differences in policies or practices which differentially draw women and men into the Philosophy major, it might be possible for schools near the lower end of this range to increase their percentage of women Philosophy majors by imitating those at the higher end of the range.

    I don't know what explains the differences. It is striking that the two schools with the lowest percentages of women Philosophy majors are both Catholic affiliated, so maybe that is a factor in some way. (ETA: Scranton, one of the high-end outliers, is also Catholic affiliated!) Gender ratios among permanent faculty don't appear to be the primary determining factor. Judging from gender-typical names and photos on the Philosophy Department homepages, women are 5/24 (excl. 1) and 0/8 of permanent faculty at St Thomas and Franciscan, vs 4/15 and 2/16 at Penn and Scranton.

    Thursday, December 21, 2017

    Philosophy Undergraduate Majors Aren't Very Black, but Neither Are They As White As You Might Have Thought

    Okay, I have some more data from the NCES database on Bachelor's degrees awarded in the U.S. A couple of weeks ago I noted that women have been earning 30-34% of philosophy BAs since the 1980s. Last week I noted the sharp decline in Philosophy, History, and Language majors since 2010.

    Race and ethnicity data are a bit more complicated, since the coding categories change over time. Currently, NCES uses "American Indian or Alaska Native", "Asian", "Black or African American", "Hispanic or Latino", "Native Hawaiian or Other Pacific Islander", "White", "Two or more races", "Race/ethnicity unknown", and "Nonresident alien". The last three of these categories are difficult to interpret, especially given changes over time; and "Native Hawaiian or Other Pacific Islander" was included with Asian before 2010; so I will focus my analysis on the racial/ethnic categories Asian, Black, Latino/Hispanic, Native American, and White. [For more details, see this note.]

    Based on results from my analysis last year on PhDs in Philosophy, I had expected Philosophy majors to be overwhelmingly White. To my surprise, that's not what I found. Although recipients of Bachelor's degrees in Philosophy are somewhat more White than recipients of Bachelor's degrees overall, the difference is not large: 63% of BA recipients in Philosophy identified as White, compared to 60% of all graduating majors in the 2015-2016 academic year.

    In the NCES data, both Latino/Hispanic students and Asian students are approximately proportionately represented among Philosophy majors: 13% and 7% respectively, compared to 12% and 7% of graduating students overall. Of course Latino students are underrepresented among college graduates generally, compared to their prevalence in the U.S. as a whole (about 18% of the U.S. population overall). However, they don't appear to be more underrepresented among Philosophy majors than they are among Bachelor's degree recipients in general.

    Similarly -- though the numbers are very small -- Native Americans are about 0.4% of Philosophy degree recipients and about 0.5% of graduating students overall (and 1.3% in the general population).

    In contrast, Black students are substantially underrepresented: 5% in Philosophy compared to 10% overall (and 13% in the general population).

    Interestingly, these trends also appear to hold over time, back to the beginning of available data in the 1994-1995 academic year. White students are overrepresented in Philosophy by a few percentage points, Black students underrepresented by about as many percentage points, and the other groups are about proportionately represented. These trends persist throughout the broad decline in percentage of White students among Bachelor's recipients overall.

    Here's the graph for White students [update: corrected figure]:

    [click to enlarge and clarify]

    And here's Latino/Hispanic and Asian [update: corrected figure]:

    [click to enlarge and clarify]

    Native American is noisier due to small numbers, but roughly matches over the period:

    The most striking disparity is among students identifying as Black [update: corrected figure]:

    [click to enlarge and clarify]

    Looking at intersection with gender, 42% of Black Philosophy BA recipients in the most recent two years of data were women. (For comparison, 33% of Philosophy BAs overall were women, 57% of Bachelor's degree recipients overall were women, and 64% of Black Bachelor's recipients were women.)

    Evidently, the disproportionate Whiteness of Philosophy PhD recipients in the U.S. (recently in the mid-80%s, excluding nonresident alien and unknown) is not mostly explained by a similarly large disproportion among B.A. recipients, though the underrepresentation of Black students in the discipline does start at the undergraduate level.

    I'd be interested to hear what others make of these patterns.

    ------------------------------------------------------

    Note 1: I looked at all U.S. institutions in the IPEDS database, and I included both first and second majors in Philosophy. Before the 2000-2001 academic year, only first major is recorded. I used the major classification 38.01 specifically for Philosophy, excluding 38.00, 38.02, and 38.99. Only people who completed the degree are included in the data. Before the 2007-2008 academic year, the White and Black categories specify "non-Hispanic", and before 2010-2011, Pacific Islander is included with Asian. In 2007-2008, "two or more races" was introduced as an option.

    Thursday, December 14, 2017

    Sharp Declines in Philosophy, History, and Language Majors Since 2010

    As I was gathering data for last week's post on the remarkably flat gender ratios in philosophy over time, I was struck by a pattern in the data that I hadn't anticipated: a sharp decline in Philosophy Bachelor's degrees awarded in the U.S. since 2010.

    In the 2009-2010 academic year, 9297 students received Bachelor's degrees in Philosophy in the U.S. In 2015-2016 (the most recent available year), the number was only 7507. In the same period, the total number of Bachelor's degrees increased from 1,597,740 (completing 1,684,011 majors, including double majors) to 1,922,705 (2,019,829 including doubles). In 2009-2010, 0.58% of graduating students majored in Philosophy. In 2015-2016, 0.39% did. [See Note 1 for methodological details.]

    Looking more closely at the year-by-year data, the decline in absolute numbers is entirely in the most recent three years, and quite precipitous:

    2010: 9297 philosophy BAs (0.58% of all graduates)
    2011: 9309 (0.56%)
    2012: 9376 (0.54%)
    2013: 9439 (0.53%)
    2014: 8837 (0.47%)
    2015: 8198 (0.43%)
    2016: 7507 (0.39%)
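    (Those percentages are simply Philosophy BAs divided by total graduating students that year; a quick check in Python, using the totals reported above:)

```python
# The percentages above are Philosophy BAs divided by all graduating
# students that year; for example, for 2010 and 2016:
print(f"{9297 / 1_597_740:.2%}")  # 0.58%
print(f"{7507 / 1_922_705:.2%}")  # 0.39%
```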

    As a fan of the Philosophy major, I am alarmed!

    A broader look at the data is partly reassuring, however: There was a similarly precipitous increase in the numbers and percentages of philosophy majors in the early 2000s, as displayed in the graph below. So maybe we're just seeing the pop of a philosophy bubble?

    [click to enlarge and clarify]

    For further context, I examined every other broad category of major with at least 100,000 graduating majors since the 2000-2001 academic year (27 broad majors total). Since 2010, only two other broad majors have declined in absolute number by at least 15%: History and English. Foreign language isn't far behind, with a 13% decline in absolute numbers. So Philosophy's decline seems to be part of a general decline in the main traditional humanities majors. (The three biggest gainers: Computer Science, Health Science, and Natural Resources.)
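    Here's a sketch of that percent-change comparison, with a hypothetical filename and column labels standing in for the CSV linked below:

```python
# Sketch of the percent-change comparison above. Filename and column labels
# are hypothetical stand-ins for the CSV linked below.
import pandas as pd

df = pd.read_csv("majors_by_year.csv", index_col="major")
change = 100 * (df["2015-16"] - df["2009-10"]) / df["2009-10"]

# Broad major categories that shrank at least 15% in absolute numbers.
print(change[change <= -15].sort_values())
```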

    I've graphed the data below. You'll need to click to expand it to see the whole thing legibly; apologies for my incompetence with the blog graphics. (I've thickened and brightened English, History, and Philosophy. Note also that the English line is mostly obscured by the Philosophy line in recent years.)

    [click to enlarge]

    I've put the raw numbers for all major categories in CSV here, if you'd like more detail.

    Finally, I looked at recent trends by institution type (Carnegie 2015 basic classification). As you can see from the chart below, the decline appears to occur across most or all institution types. (The top line, for four-year faith-related institutions, is jagged presumably due to noise, given low total numbers.)

    [click to enlarge]

    I'm not sure what to make of this. I suppose Wittgenstein, who reportedly advised aspiring students to major in anything but philosophy, would have approved. Thoughts (and corrections) welcome!

    ETA (9:55 a.m.): Several people have suggested that it relates to the Great Recession of 2008. I don't think that can be the entire explanation, since the recessions of the early 1990s and early 2000s don't correlate with sharp declines in the major. On Facebook, Todd Yampol offers this interesting analysis:

    In many states, business interests have put pressure on the state government to prepare students for the "jobs of the future". They complain that high school & college graduates don't have the specific skills that they're looking for. Also, there has been more and more emphasis on undergrads to finish in 4 years (for a 4-year degree). In California and many other states, there is a state-sponsored "finish in 4" initiative. They provide funding to the state universities for software & other projects that will help students finish in 4 years. They only way to do this, given all the crazy requirements these days, is to determine your major very early & stick with it. This is definitely the case at CSU. Not sure about UC. I believe this initiative started in CA around 2012. I've been working with CSU since 2014. I'm not saying that this is inherently a bad thing. It's just the environment we're living in. I come from a liberal arts background, and I see the value in being a well-rounded person.

    I think it comes down to the question "what is the purpose of a state university?" In some states like Wisconsin (where I grew up), there has been a lot of pressure from the (ultra-conservative) state government to dismantle the traditional approach to higher education & research, and turn the emphasis towards job training.

    Anyway, the pressure for job skills & to finish in 4 years makes it difficult for young people to explore and find their true interests. How many freshman / sophomores know that they're interested enough in philosophy / linguistics / history / etc to declare it that early? I didn't declare linguistics until my very last semester (long story).

    ETA2 (Dec 18): Some of the bump in the early 2000s is due to changes in NCES reporting. Starting in 2001, second majors are reported, as well as first majors. That accounts for that little jump from 2000 to 2001 (approx 0.40% to 0.48%) but not any of the increase from 2001 onward, after which the reporting remains the same.

    -------------------------------------------------------

    Note 1: Data from the NCES IPEDS database. I looked at all U.S. institutions in the IPEDS database, and I included both first and second majors. Before the 2000-2001 academic year, only first major is recorded. I used the major classification 38.01 specifically for Philosophy, excluding 38.00, 38.02, and 38.99. Only people who completed the degree are included in the data. Some majors have different classification titles and criteria over the period, so I needed to make a few coding/grouping decisions. The most important of these was disaggregating the History subfield from the "Social Sciences and History" category in the 2000-2001 and 2001-2002 data. Although there are some category and coding differences over time in the dataset, the 2011-2012 to 2015-2016 academic years appear to have used exactly the same coding criteria.

    Tuesday, December 12, 2017

    Dreidel: A seemingly foolish game that contains the moral world in miniature

    [Also appearing in today's LA Times. Happy first night of Hannukah!]

    Superficially, dreidel appears to be a simple game of luck, and a badly designed game at that. It lacks balance, clarity, and (apparently) meaningful strategic choice. From this perspective, its prominence in the modern Hannukah tradition is puzzling. Why encourage children to spend a holy evening gambling, of all things?

    This perspective misses the brilliance of dreidel. Dreidel's seeming flaws are exactly its virtues. Dreidel is the moral world in miniature.

    For readers unfamiliar with the game, here's a tutorial. You sit in a circle with friends or relatives and take turns spinning a wobbly top, the dreidel. In the center of the circle is a pot of several foil-wrapped chocolate coins, to which everyone has contributed from an initial stake of coins they keep in front of them. If, on your turn, the four-sided top lands on the Hebrew letter gimmel, you take the whole pot and everyone needs to contribute again. If it lands on hey, you take half the pot. If it lands on nun, nothing happens. If it lands on shin, you put one coin in. Then the next player takes a spin.
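    If it helps to see those rules in one place, here's a toy simulation in Python. It bakes in assumptions that the next paragraphs will complicate: a fair dreidel, a "round down on hey" house rule, and players who can go into debt:

```python
# Toy simulation of the rules above: fair dreidel, "round down on hey",
# players allowed to go into debt. Real games, as described below, agree
# on none of these choices.
import random

def play_dreidel(n_players=4, stake=10, spins=100):
    coins = [stake] * n_players
    pot = 0

    def ante():
        nonlocal pot
        for i in range(n_players):
            coins[i] -= 1
            pot += 1

    ante()
    for turn in range(spins):
        player = turn % n_players
        side = random.choice(["gimmel", "hey", "nun", "shin"])
        if side == "gimmel":        # take the whole pot; everyone re-antes
            coins[player] += pot
            pot = 0
            ante()
        elif side == "hey":         # take half the pot, rounding down
            coins[player] += pot // 2
            pot -= pot // 2
        elif side == "shin":        # put one coin in
            coins[player] -= 1
            pot += 1
        # on "nun", nothing happens
    return coins

print(play_dreidel())
```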

    It all sounds very straightforward, until you actually start to play the game.

    The first odd thing you might notice is that although some of the coins are big and others are little, they all count just as one coin in the rules of the game. This is unfair, since the big coins contain more chocolate, and you get to eat your stash at the end.

    To compound the unfairness, there is never just one dreidel — each player may bring her own — and the dreidels are often biased, favoring different outcomes. (To test this, a few years ago my daughter and I spun a sample of eight dreidels 40 times each, recording the outcomes. One particularly cursed dreidel landed on shin an incredible 27/40 spins.) It matters a lot which dreidel you spin.
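    (If you'd like to check just how surprising that is, a quick binomial test does the trick; here's a sketch using scipy, with chance probability 1/4 per side:)

```python
# How surprising is 27 shins in 40 spins if each side truly comes up with
# probability 1/4? (binomtest requires scipy >= 1.7)
from scipy.stats import binomtest

result = binomtest(27, n=40, p=0.25, alternative="greater")
print(f"p = {result.pvalue:.1e}")  # vanishingly small: the dreidel is biased
```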

    And the rules are a mess! No one agrees whether you should round up or round down with hey. No one agrees when the game should end or how low you should let the pot get before you all have to contribute again. No one agrees how many coins to start with or whether you should let someone borrow coins if he runs out. You could try to appeal to various authorities on the internet, but in my experience people prefer to argue and employ varying house rules. Some people hoard their coins and favorite dreidels. Others share dreidels but not coins. Some people slowly unwrap and eat their coins while playing, then beg and borrow from wealthy neighbors when their luck sours.

    Now you can, if you want, always push things to your advantage — always contribute the smallest coins in your stash, always withdraw the largest coins in the pot when you spin hey, insist on always using what seems to be the "best" dreidel, always argue for rule interpretations in your favor, eat your big coins and use that as a further excuse to contribute only little ones, et cetera. You could do all of this without ever breaking the rules, and you'd probably end up with the most chocolate as a result.

    But here's the twist, and what makes the game so brilliant: The chocolate isn't very good. After eating a few coins, the pleasure gained from further coins is minimal. As a result, almost all of the children learn that they would rather enjoy being kind and generous than hoard the most coins. The pleasure of the chocolate doesn't outweigh the yucky feeling of being a stingy, argumentative jerk. After a few turns of maybe pushing only small coins into the pot, you decide you should put a big coin in next time, just to be fair to others and to enjoy being perceived as fair by them.

    Of course, it also feels bad always to be the most generous one, always to put in big, take out small, always to let others win the rules arguments, and so forth, to play the sucker or self-sacrificing saint.

    Dreidel, then, is a practical lesson in discovering the value of fairness both to oneself and to others, in a context where the rules are unclear and where there are norm violations that aren't rules violations, and where both norms and rules are negotiable, varying from occasion to occasion. Just like life itself, only with mediocre chocolate at stake. I can imagine no better way to spend a holy evening.

    Friday, December 08, 2017

    Women Have Been Earning 30-34% of Philosophy BAs in the U.S. Since Approximately Forever*

    * for values of "forever" ≤ 30 years.

    The National Center for Education Statistics has data on the gender of virtually all Bachelor's degree recipients in the U.S. back into the 1980s, publicly available through the IPEDS database. For Philosophy, the earliest available data cover the 1986-1987 academic year. [For methodological details, see note 1].

    The percentage of Philosophy Bachelor's degrees awarded to women has been remarkably constant over time -- a pattern not characteristic of other majors, many of which have shown at least a modest increase in the percentage of women since 1987. In the 1986-1987 academic year, women received 33.6% of Philosophy BAs. In the most recent available year (preliminary data), 2015-2016, it was 33.7%. Throughout the period, the percentage never strays from the band between 29.9% and 33.7%.

    I have plotted the trends in the graph below, with Philosophy as the fat red line, including a few other disciplines for comparison: English, History, Psychology, the Biological Sciences, and the Physical Sciences. The fat black line represents all Bachelor's degrees awarded.

    [if blurry or small, click to enlarge]

    Philosophy is the lowest of these, unsurprisingly to those of us who have followed gender issues in the discipline. (It is not the lowest overall, however: Some of the physical science and engineering majors are as low or lower.) To me, more striking and newsworthy is the flatness of the line.

    I also thought it might be worth comparing high-prestige research universities (Carnegie classification: Doctoral Universities, Highest Research Activity) versus colleges with much more of a teaching focus (Carnegie classification: Baccalaureate Colleges, Arts & Sciences focus or Diverse Fields).

    Women were a slightly lower percentage of Philosophy BA recipients in the research universities than in the teaching-focused colleges (30% vs. 35%; and yes, p < .001). However, the trends over time were still approximately flat:

    For kicks, I thought I'd also check if my home state of California was any different -- since we'll be seceding from the rest of the U.S. soon (JK!). Nope. Again, a flat line, with women overall 33% of graduating BAs in Philosophy.

    Presumably, if we went back to the 1960s or 1970s, a higher percentage of philosophy majors would be men. But whatever cultural changes there have been in U.S. society in general and in the discipline of philosophy in particular in the past 30 years haven't moved the dial much on the gender ratio of the philosophy major.

    [Thanks to Mike Williams at NCES for help in figuring out how to use the database.]

    -----------------------------------------

    Note 1: I looked at all U.S. institutions in the IPEDS database, and I included both first and second majors. Before the 2000-2001 academic year, only first major is recorded. I used the major classification 38.01 specifically for Philosophy, excluding 38.00, 38.02, and 38.99. Only people who complete the degree are included in the data. Although gender data are available back to 1980, Philosophy and Religious Studies majors are merged from 1980-1986.

    Friday, December 01, 2017

    Aiming for Moral Mediocrity

    I've been working on this essay off and on for years, "Aiming for Moral Mediocrity". I think I've finally pounded it into circulating shape and I'm ready for feedback.

    I have an empirical thesis and a normative thesis. The empirical thesis is that most people aim to be morally mediocre. They aim to be about as morally good as their peers, not especially better, not especially worse. This mediocrity has two aspects. It is peer-relative rather than absolute, and it is middling rather than extreme. We do not aim to be good, or non-bad, or to act permissibly rather than impermissibly, by fixed moral standards. Rather, we notice the typical behavior of people we regard as our peers and we aim to behave broadly within that range. We -- most of us -- look around, notice how others are acting, then calibrate toward so-so.

    This empirical thesis is, I think, plausible on the face of it. It also receives some support from two recent subliteratures in social psychology and behavioral economics.

    One is the literature on following the (im-)moral crowd. I'm thinking especially of the work of Robert B. Cialdini and Cristina Bicchieri. Cialdini argues that "injunctive norms" (that is, social or moral admonitions) most effectively promote norm-compliant behavior when they align with "descriptive norms" (that is, facts about how people actually behave). People are less likely to litter when they see others being neat, more likely to reuse their hotel towels when they learn that others also do so, and more likely to reduce their household energy consumption when they see that they are using more than their neighbors. Bicchieri argues that people are more likely to be selfish in "dictator games" when they are led to believe that earlier participants had mostly been selfish and that convincing communities to adopt new health practices like family planning and indoor toilet use typically requires persuading people that their neighbors will also comply. It appears that people are more likely to abide by social or moral norms if they believe that others are also doing so.

    The other relevant literature concerns moral self-licensing. A number of studies suggest that after having performed good acts, people are likely to behave less morally well than after performing a bad or neutral act. For example, after having done something good for the environment, people might tend to make more selfish choices in a dictator game. Even just recalling recent ethical behavior might reduce people's intentions to donate blood, money, and time. The idea is that people are more motivated to behave well when their previous bad behavior is salient and less motivated to behave well when their previous good behavior is salient. They appear to calibrate toward some middle state.

    One alternative hypothesis is that people aim not for mediocrity but rather for something better than that, though short of sainthood. Phenomenologically, that might be how it seems to people. Most people think that they are somewhat above average in moral traits like honesty and fairness (Tappin and McKay 2017); and maybe then people mostly think that they should more or less stay the course. An eminent ethicist once told me he was aiming for a moral "B+". However, I suspect that most of us who like to think of ourselves as aiming for substantially above-average moral goodness aren't really willing to put in the work and sacrifice required. A close examination of how we actually calibrate our behavior will reveal us wiggling and veering toward a lower target. (Compare the undergraduate who says they're "aiming for B+" in a class but who wouldn't be willing to put in more work if they received a C on the first exam. It's probably better to say that they are hoping for a B+ than that they are aiming for one.)


    My normative thesis is that it's morally mediocre to aim for moral mediocrity. Generally speaking, it's somewhat morally bad, but not terribly bad, to aim for the moral middle.

    In defending this view, I'm mostly concerned to rebut the charge that it's perfectly morally fine to aim for mediocrity. Two common excuses, which I think wither upon critical scrutiny, are the Happy Coincidence Defense and The-Most-You-Can-Do Sweet Spot. The Happy Coincidence Defense is an attractive rationalization strategy that attempts to justify doing what you prefer to do by arguing that it's also for the moral best -- for example, that taking this expensive vacation now is really the morally best choice because you owe it to your family, and it will refresh you for your very important work, and.... The-Most-You-Can-Do Sweet Spot is a similarly attractive rationalization strategy that relies on the idea that if you tried to be any morally better than you in fact are, you would end up being morally worse -- because you would collapse along the way, maybe, or you would become sanctimonious and intolerant, or you would lose the energy and joie de vivre on which your good deeds depend, or.... Of course it can sometimes be true that by Happy Coincidence your preferences align with the moral best or that you are already precisely in The-Most-You-Can-Do Sweet Spot. But this reasoning is suspicious when deployed repeatedly to justify otherwise seemingly mediocre moral choices.

    Another normative objection is the Fairness Objection, which I discussed on the blog last month. Since (by stipulation) most of your peers aren't making the sacrifices necessary for peer-relative moral excellence, it's unfair for you to be blamed for also declining to make such sacrifices. If the average person in your financial condition gives X% to charity, for example, it would be unfair to blame you for not giving more. If your colleagues down the hall cheat, shirk, lie, and flake X amount of the time, it's only fair that you should get to do the same.

    The simplest response to the Fairness Objection is to appeal to absolute moral standards. Although some norms are peer-relative, so that they become morally optional if most of your peers fail to comply with them, other norms aren't like that. A Nazi death camp guard is wrong to kill Jews even if that is normal behavior among his peers. More moderately, sexism, racism, ableism, elitism, and so forth are wrong and blameworthy, even if they are common among your peers (though blame is probably also partly mitigated if you are less biased than average). If you're an insurance adjuster who denies or slow-walks important health benefits on shaky grounds because you guess the person won't sue, the fact that other insurance adjusters might do the same in your place is again at best only partly mitigating. It would likely be unfair to blame you more than your peers are blamed; but if you violate absolute moral standards you deserve some blame, regardless of your peers' behavior.

    -----------------------------------------

    Full length version of the paper here.

    As always, comments welcome either by email to me or in the comments field of this post. Please don't feel obliged to read the full paper before commenting, if you have thoughts based on the summary arguments in this post.

    [Note: Somehow my final round of revisions on this post was lost and an old version was posted. The current version has been revised in attempt to recover the lost changes.]

    Wednesday, November 22, 2017

    Yay Boo Strange Hope

    Happy (almost) Thanksgiving (in the U.S.)! I want to share a recent family tradition which might help you through some of those awkward conversations with others around the table. We call it Yay Boo Strange Hope.

    The rules:

    (1.) Sit in a circle (e.g., around the dinner table).

    (2.) Choose a topic. For example: your schoolday/workday, wilderness camping, Star Wars, the current state of academic philosophy.

    (3.) Arbitrarily choose someone to go first.

    (4.) That person says one good thing about the topic (the Yay), one bad thing (the Boo), one weird or unexpected thing (the Strange), and some wish for the future related to the topic in question (the Hope).

    (5.) Interruptions for clarificatory questions are encouraged.

    (6.) Sympathetic cheers and hisses are welcome, or brief affirmations like "that stinks" or "I agree!" But others shouldn't take the thread off in their own direction. Keep the focus on the opinions and experiences of the person whose turn it is.

    (7.) Repeat with the next person clockwise around the circle until everyone has had a turn.


    Some cool things about this game:

    * It is modestly successful in getting even monosyllabic teenagers talking a bit. Usually they can muster at least a laconic yay boo strange and hope about their day or about a topic of interest to them.

    * It gives quiet people a turn at the center of the conversation, and discourages others from hijacking the thread.

    * Yay Boo Strange Hope typically solicits less predictable and more substantive responses than bland questions like, "So what happened at school today?" Typically, you'll hear about at least three different events (the Yay, Boo, and Strange) and one of those events (the Strange) is likely to be novel.

    * The Boo gives people an excuse to complain (which most people enjoy) and the Yay forces people to find a bright side even on a topic where their opinion is negative.

    * By ending on Hope, each person's turn usually concludes on an up note or a joke.


    Origin:

    When I was touring Pomona College with my son in the summer of 2016, I overheard another group's tour guide describing something like this game as a weekly ritual among her dormmates. I suspect the Pomona College version differs somewhat from my family's version, since I only partly overheard and our practice has evolved over time. If you know variations of this game, I'd be interested to hear from you in the comments.

    Thursday, November 16, 2017

    A Moral Dunning-Kruger Effect?

    In a famous series of experiments, Justin Kruger and David Dunning found that people who scored in the lowest quartile of skill in grammar, logic, and (yes, they tried to measure this) humor tended to substantially overestimate their abilities, rating themselves as a bit above average in these skills. In contrast, people in the top half of ability had more accurate estimations (even tending to underestimate a bit). The average participant in each quartile rated themselves as above average, and the correlation between self-rated skill and measured skill was small.

    For example, here's Kruger and Dunning's chart for logic ability and logic scores:


    (Kruger & Dunning 1999, p. 1129).

    Kruger and Dunning's explanation is that poor skill at (say) logical reasoning not only impairs one's performance at logical reasoning tasks but also impairs one's ability to evaluate one's own performance at logical reasoning tasks. You need to know that affirming the consequent is a logical error in order to realize that you've just committed a logical error in affirming the consequent. Otherwise, you're likely to think, "P implies Q. Q. So P. Right! Hey, I'm doing great!"

    Although popular presentations of the Kruger-Dunning effect tend to generalize it to all skill domains, it seems unlikely that it does generalize universally. In domains where evaluating one's success doesn't depend on the skill in question, and instead depends on simpler forms of observation and feedback, one might expect more realistic self-evaluations by novices. (I haven't noticed a clear, systematic discussion of cases where Dunning-Kruger doesn't apply, though Kahneman & Klein 2009 is related; tips welcome.) For example: footraces. I'd wager that people who are slow runners don't tend to think that they are above average in running speed. They might not have perfect expectations; they might show some self-serving optimistic bias (Taylor & Brown 1988), but we probably won't see the almost flat line characteristic of Kruger-Dunning. You don't have to be a fast runner to evaluate your running speed. You just need to notice that others tend to run faster than you. It's not like logic where skill at the task and skill at self-evaluation are closely related.

    So... what about ethics? Ought we to expect a moral Dunning-Kruger Effect?

    My guess is: yes. Evaluating one's own ethical or unethical behavior is a skill that itself depends on one's ethical abilities. The least ethical people are typically also the least capable of recognizing what counts as an ethical violation and how serious the violation is -- especially, perhaps, when thinking about their own behavior. I don't want to over-commit on this point. Certainly there are exceptions. But as a general trend, this strikes me as plausible.

    Consider sexism. The most sexist people tend to be the people least capable of understanding what constitutes sexist behavior and what makes sexist behavior unethical. They will tend either to regard themselves as not sexist or to regard themselves only as "sexist" in a non-pejorative sense. ("Yeah, so what, I'm a 'sexist'. I think men and women are different. If you don't, you're a fool.") Similarly, the most habitual liars might not see anything bad in lying or just assume that everyone else who isn't just a clueless sucker also lies when convenient.

    It probably doesn't make sense to think that overall morality can be accurately captured in a single unidimensional scale -- just like it probably doesn't make sense to think that there's one correct unidimensional scale for skill at baseball or for skill as a philosopher or for being a good parent. And yet, clearly some baseball players, philosophers, and parents are better than others. There are great, good, mediocre, and crummy versions of each. I think it's okay as a first approximation to think that there are more and less ethical people overall. And if so, we can at least imagine a rough scale.

    With that important caveat, then, consider the following possible relationships between one's overall moral character and one's opinion about one's overall moral character:

    Dunning-Kruger (more self-enhancement for lower moral character):

    [Note: Sorry for the cruddy-looking images. They look fine in Excel. I should figure this out.]

    Uniform self-enhancement (everyone tends to think they're a bit better than they are):

    U-shaped curve (even more self-enhancement for the below average):

    Inverse U (realistically low self-image for the worst, self-enhancement in the middle, and self-underestimation for the best):

    I don't think we really know which of these models is closest to the truth.
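    In lieu of better images, here's a sketch that plots one made-up illustrative curve per model. The precise shapes are pure conjecture; only the qualitative pattern of each matters:

```python
# Made-up illustrative curves, one per model above. The shapes are
# conjecture, chosen only to match the verbal descriptions.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 100, 101)  # true moral character (percentile)

models = {
    # Worst overestimate most; best are roughly accurate.
    "Dunning-Kruger": 50 + 0.45 * x,
    # Everyone inflates by about the same amount.
    "Uniform self-enhancement": np.minimum(x + 15, 100),
    # The below-average inflate even more than under Dunning-Kruger.
    "U-shaped": 70 - 0.5 * x + 0.008 * x**2,
    # Worst are realistic, middle inflate, best underestimate.
    "Inverse U": x + 25 * np.sin(np.pi * x / 100) - 10 * (x / 100) ** 3,
}

for name, y in models.items():
    plt.plot(x, y, label=name)
plt.plot(x, x, "k--", label="perfect self-knowledge")
plt.xlabel("True moral character (percentile)")
plt.ylabel("Self-rated moral character (percentile)")
plt.legend()
plt.show()
```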

    Thursday, November 09, 2017

    Is It Perfectly Fine to Aim to be Morally Average?

    By perfectly fine I mean: not at all morally blameworthy.

    By aiming I mean: being ready to calibrate ourselves up or down to hit the target. I would contrast aiming with settling, which does not necessarily involve calibrating down if one is above target. (For example, if you're aiming for a B, then you should work harder if you get a C on the first exam and ease up if you get an A on the first exam. If you're willing to settle for a B, then you won't necessarily ease up if you happen fortunately to be headed toward an A.)

    I believe that most people aim to be morally mediocre, even if they don't explicitly conceptualize themselves that way. Most people look at their peers' moral behavior, then calibrate toward so-so, wanting neither to be among the morally best (with the self-sacrifice that seems to involve) nor among the morally worst. But maybe "mediocre" is too loaded a word, with its negative connotations? Maybe it's perfectly fine, not at all blameworthy, to aim for the moral middle?


    Here's one reason you might think so:

    The Fairness Argument.

    Let's assume (of course it's disputable) that being among the morally best, relative to your peers, normally involves substantial self-sacrifice. It's morally better to donate large amounts to worthy charities than to donate small amounts. It's morally better to be generous rather than stingy with one's time in helping colleagues, neighbors, and distant relatives who might not be your favorite people. It's morally better to meet your deadlines than to inconvenience others by running late. It's morally better to have a small carbon footprint than a medium-size or large one. It's morally better not to lie, cheat, and fudge in all the small (and sometimes large) ways that people tend to do.

    To be near the moral maximum in every respect would be practically impossible near-sainthood; but we non-saints could still presumably be somewhat better in many of these ways. We just choose not to be better, because we'd rather not make the sacrifices involved. (See The Happy Coincidence Defense and The-Most-I-Can-Do Sweet Spot for my discussion of a couple of ways of insisting that you couldn't be morally better than you in fact are.)

    Since (by stipulation) most of your peers aren't making the sacrifices necessary for peer-relative moral excellence, it's unfair for you to be blamed for also declining to do so. If the average person in your financial condition gives 3% of their income to charity, then it would be unfair to blame you for not giving more. If your colleagues down the hall cheat, shirk, fib, and flake X amount of the time, it's only fair that you get to do the same. Fairness requires that we demand no more than average moral sacrifice from the average person. Thus, there's nothing wrong with aiming to be only a middling member of the moral community -- approximately as selfish, dishonest, and unreliable as everyone else.


    Two Replies to the Fairness Argument.

    (1.) Absolute standards. Some actions are morally bad, even if the majority of your peers are doing them. As an extreme example, consider a Nazi death camp guard in 1941, who is somewhat kinder to the inmates and less enthusiastic about killing than the average death camp guard, but who still participates in and benefits from the system. "Hey, at least I'm better than average!" is a poor excuse. More moderately, most people (I believe) regularly exhibit small to moderate degrees of sexism, racism, ableism, and preferential treatment of the conventionally beautiful. Even though most people do this, one remains criticizable for it -- that you're typical or average in your degree of bias is at most a mitigator of blame, not a full excuser from blame. So although some putative norms might become morally optional (or "supererogatory") if most of your peers fail to comply, others don't show that structure. With respect to some norms, aiming for mediocrity is not perfectly fine.

    (2.) The seeming absurdity of tradeoffs between norm types. Most of us see ourselves as having areas of moral strength and weakness. Maybe you're a warm-hearted fellow, but flakier than average about responding to important emails. Maybe you know you tend to be rude and grumpy to strangers, but you're an unusually active volunteer for good causes in your community. My psychological conjecture is that, in implicitly guiding our own behavior, we tend to treat these tradeoffs as exculpatory or licensing: You forgive yourself for the one in light of the other. You let your excellence in one area justify lowering your aims in another, so that averaging the two, you come out somewhere in the middle. (In these examples, I'm assuming that you didn't spend so much time and energy on the one that the other becomes unfeasible. It's not that you spent hours helping your colleague so that you simply couldn't get to your email.)

    Although this is tempting reasoning when you're motivated to see yourself (or someone else) positively, a more neutral judge might find it strange: "It's fine that I insulted that cashier, because this afternoon I'm volunteering for river clean-up." "I'm not criticizable for neglecting Cameron's urgent email because this morning I greeted Monica and Britney kindly, filling the office with good vibes." Although non-consciously or semi-consciously we tend to cut ourselves slack in one area when we think about our excellence in others, when the specifics of such tradeoffs are brought to light, they often don't stand up to scrutiny.


    Conclusion.

    It's not perfectly fine to aim merely for the moral middle. Your peers tend to be somewhat morally criticizable; and if you aim to be the same, you too are somewhat morally criticizable for doing so. The Fairness Argument doesn't work as a general rule (though it may work in some cases). If you're not aiming for moral excellence, you are somewhat morally blameworthy for your low moral aspirations.

    [image source]

    Thursday, November 02, 2017

    Two Roles for Belief Attribution

    Belief attribution, both in philosophy and in ordinary language, normally serves two distinct roles.

    One role is predicting, tracking, or reporting what a person would verbally endorse. When we attribute belief to someone, we are doing something like indirect quotation, speaking for them, expressing what we think they would say. This view is nicely articulated in (the simple versions of) the origin-myths of belief talk in the thought experiments of Wilfrid Sellars and Howard Wettstein, according to which belief attribution mythologically evolves out of a practice of indirect quotation or of imagining interior analogues of outward speech. The other role is predicting and explaining (primarily) non-linguistic behavior -- what a person will do, given their background desires (e.g., Dennett 1987; Fodor 1987; Andrews 2012).

    We might call the first role testimonial, the second predictive-explanatory. In adult human beings, when all goes well, the two coincide. You attribute to me the belief that class starts at 2 pm. It is true both that I would say "Class starts at 2 pm" and that I would try to show up for class at 2 pm (assuming I want to attend class).

    But sometimes the two roles come apart. For example, suppose that Ralph, a philosophy professor, sincerely endorses the statement "women are just as intelligent as men". He will argue passionately and convincingly for that claim, appealing to scientific evidence, and emphasizing how it fits the egalitarian and feminist worldview he generally endorses. And yet, in his day-to-day behavior Ralph tends not to assume that women are very intellectually capable. It takes substantially more evidence, for example, to convince him of the intelligence of an essay or comment by a woman than a man. When he interacts with cashiers, salespeople, mechanics, and doctors, he tends to assume less intelligence if they are women than if they are men. And so forth. (For more detailed discussion of these types of cases, see here and here.) Or consider Kennedy, who sincerely says that she believes money doesn't matter much, above a certain basic income, but whose choices and emotional reactions seem to tell a different story. When the two roles diverge, should belief attribution track the testimonial or the predictive-explanatory? Both? Neither?

    Self-attributions of belief are typically testimonial. If we ask Ralph whether he believes that women and men are equally intelligent, he would presumably answer with an unqualified yes. He can cite the evidence! If he were to say that he doesn't really believe that, or that he only "kind of" believes it, or that he's ambivalent, or that only part of him believes it, he would risk giving his conversational partner the wrong idea. If he went into detail about his spontaneous reactions to people, he would probably be missing the point of the question.

    On the other hand, consider Ralph's wife. Ralph comes home from a long day, and he finds himself enthusiastically talking to his wife about the brilliant new first-year graduate students in his seminar -- Michael, Nestor, James, Kyle. His wife asks, what about Valery and Svitlana? [names selected by this random procedure] Ah, Ralph says, they don't seem quite as promising, somehow. His wife challenges him: Do you really believe that women and men are equally intelligent? It sure doesn't seem that way, for all your fine, egalitarian talk! Or consider what Valery and Svitlana might say, gossiping behind Ralph's back. With some justice, they agree that he doesn't really believe that women and men are equally intelligent. Or consider Ralph many years later. Maybe after a long experience with brilliant women as colleagues and intellectual heroes, he has left his implicit prejudice behind. Looking back on his earlier attitudes, his earlier evaluations and spontaneous assumptions, he can say: Back then, I didn't deep-down believe that women were just as smart as men. Now I do believe that. Not all belief attribution is testimonial.

    It is a simplifying assumption in our talk of "belief" that these two roles of belief attribution -- the testimonial and the predictive-explanatory -- converge upon a single thing, what one believes. When that simplifying assumption breaks down, something has to give, and not all of our attributional practices can be preserved without modification.

    [This post is adapted from Section 6 of my paper in draft, "The Pragmatic Metaphysics of Belief"]

    [HT: Janet Levin.]

    [image source]

    Tuesday, October 31, 2017

    Rationally Speaking: Weird Ideas and Opaque Minds

    What a pleasure and an honor to have been invited back to Julia Galef's awesome podcast, Rationally Speaking!

    If you don't know Rationally Speaking, check it out. The podcast weaves together ideas and guests from psychology, philosophy, economics, and related fields; and Julia has a real knack for the friendly but probing question.

    In this episode, Julia and I discuss the value of truth, daringness, and wonder as motives for studying philosophy; the hazards of interpreting other thinkers too charitably; and our lack of self-knowledge about the stream of conscious experience.

    Thursday, October 26, 2017

    In 25 Years, Your Employer Will Directly Control Your Moods

    [Edit Oct. 28: After discussion with friends and commenters in social media, I now think that the thesis should be moderated in two ways. First, before direct mood control becomes common in the workplace, it probably first needs to become voluntarily common at home; and thus it will probably take more than 25 years. Second, it seems likely that in many (most?) cases the direct control will remain in the employee's hands, though there will likely be coercive pressure from the employer to use it as the employer expects. (Thanks to everyone for their comments!)]

    Here's the argument:

    (1.) In 25 years, employers will have the technological capacity to directly control their employees' moods.

    (2.) Employers will not refrain from exercising that capacity.

    (3.) Most working-age adults will be employees.

    (4.) Therefore, in 25 years, most working-age adults will have employers who directly control their moods.

    The argument is valid in the sense that if premises (1)-(3) are all true, then the conclusion (4) follows.
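
    For readers who like the structure laid bare, here is a minimal formal sketch of that validity claim -- my own formalization, not part of the original post -- rendered in Lean, with the tacit bridge premise linking (1)-(3) to (4) made explicit. (The atom names Cap, Use, Emp, and Ctrl are hypothetical labels I've introduced for illustration.)

        -- Hypothetical propositional atoms (my labels, for illustration only):
        --   Cap  : employers will have the capacity for direct mood control
        --   Use  : employers will not refrain from exercising that capacity
        --   Emp  : most working-age adults will be employees
        --   Ctrl : most working-age adults will have employers who
        --          directly control their moods
        variable (Cap Use Emp Ctrl : Prop)

        -- With the bridge premise made explicit, the conclusion follows
        -- by a single application of modus ponens.
        example (p1 : Cap) (p2 : Use) (p3 : Emp)
            (bridge : Cap ∧ Use ∧ Emp → Ctrl) : Ctrl :=
          bridge ⟨p1, p2, p3⟩

    Nothing philosophical hangs on the formalism, of course; the real action is in whether premises (1)-(3) are true.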

    Premise 1 seems plausible, given current technological trajectories. Control could be either pharmacological or via direct brain stimulation. Pharmacological control could, for example, be through pills that directly influence your mood, energy levels, ability to concentrate, feeling of submissiveness, or passion for the type of task at hand. Direct brain stimulation could be through a removable TMS helmet that magnetically stimulates and suppresses neural activity in different brain regions, or with some more invasive technology. McDonald's might ask its cashiers to tweak their dials toward perky friendliness. Data entry centers might ask their temp workers to tweak their dials toward undistractable focus. Brothels might ask their strippers to tweak their dials toward sexual arousal.

    Contra Premise 1, society might collapse, of course, or technological growth could stall or proceed much more slowly. If it's just slower, then we can replace "25 years" with 50 or 100 and retain the rest of the argument. It seems unlikely that moods are too complex or finicky to be subject to fairly precise technological control, given how readily they can be influenced by low-tech means.

    I don't know to what extent people in Silicon Valley, Wall Street, and elite universities already use high-tech drugs to enhance alertness, energy, and concentration at work. That might already be a step down this road. Indeed, coffee might partly be seen this way too, especially if you use it to give your all to work, and then collapse in exhaustion when the caffeine wears off and you arrive home. My thought is that in a few decades the interventions might be much more direct, effective, and precisely targeted.

    Premise 2 also seems plausible, given the relative social power of employers vs. employees. As long as there's surplus labor and a scarcity of desirable jobs, then employers will have some choice about whom to hire. If Starbucks has a choice between Applicant A who is willing to turn up the perky-friendly dial and otherwise similar Applicant B who is not so willing, then they will presumably tend to prefer Applicant A. If the Silicon Valley startup wants an employee who will crank out intense 16-hour days one after the next, and the technology is available for people to do so by directly regulating their moods, energy levels, focus, and passion, then the people who take that direct control, for their employers' benefit, will tend to win the competition for positions. If Stanford wants to hire the medical researcher who is publishing article after article, they'll find the researcher who dialed up her appetite for work and dialed down everything else.

    Employees might yield control directly to the employer: The TMS dials might be in the boss's office, or the cafeteria lunch might include the pharmacological cocktail of the day. Alternatively, employees might keep their own hands on the dial, but experience substantial pressure to manipulate it in the directions expected by the employer. If that pressure is high enough and effective enough, then it comes to much the same thing. (My guess is that lower-prestige occupations (the majority) would yield control directly to the employer, while higher-prestige occupations would retain the sheen of self-control alongside very effective pressure to use that "self-control" in certain ways.)

    Contra Premise 2, (a.) collective bargaining might prevent employers from successfully demanding direct mood control; or (b.) governmental regulations might do so; or (c.) there might be a lack of surplus labor.

    Rebuttal to (a): The recent historical trend, at least in the U.S., has been against unionization and collective bargaining, though I suppose that could change.

    Rebuttal to (b): Although government regulations could forbid certain drugs or brain technologies, if there's enough demand for those drugs or technologies, employees will find ways to use them (unless enforcement gets a lot of resources, as in professional sports). Government regulations could specifically forbid employers from requiring that their employees use certain technologies, while permitting such technologies for private use. (No TMS helmets on the job.) But enforcement might again be difficult; and private use vs. use as an employee is a permeable line for the increasing number of jobs that involve working outside of a set time and location. Also, it's easier to regulate a contractual demand than an informal de facto demand. Presumably many companies could say that of course they don't require their employees to use such technologies. It's up to the employee! But if the technology delivers as promised, the employees who "voluntarily choose" to have their moods directly regulated will be more productive and otherwise behave as the company desires, and thus be more attractive to retain and promote.

    Rebuttal to (c): At present there's no general long-term trend toward a shortage of labor; and at least for jobs seen as highly desirable, there will always be more applicants than available positions.

    Premise 3 also seems plausible, especially on a liberal definition of "employee". Most working-age adults (in Europe and North America) are currently employees of one form or another. That could change substantially with the "gig economy" and more independent contracting, but not necessarily in a way that takes the sting out of the main argument. Even if an Uber driver is technically not an employee, the pressures toward direct mood control for productivity ought to be similar. Likewise for computer programmers and others who do piecework as independent contractors. If anything, the pressures may be higher, with less security of income and fewer formal workplace regulations.

    Thinking about Premises 1-3, I find myself drawn to the conclusion that my children's and grandchildren's employers are likely to have a huge amount of coercive control over their moods and passions.

    -------------------------------------

    Related:

    "What Would (or Should) You Do with Administrator Access to Your Mind?" (guest post by Henry Shevlin, Aug 16, 2017).

    "Crash Space" (a short story by R. Scott Bakker for Midwest Studies in Philosophy).

    "My Daughter's Rented Eyes" (Oct 11, 2016).

    [image source: lady-traveler, creative commons]

    Thursday, October 19, 2017

    Practical and Impractical Advice for Philosophers Writing Fiction

    Hugh D. Reynolds has written up a fun, vivid summary of my talk at Oxford Brookes last spring, on fiction writing for philosophers!

    -----------------------------------

    Eric Schwitzgebel has a pleasingly liberal view of what constitutes philosophy. A philosopher is anyone wrestling with the “biggest picture framing issues” of... well, anything.

    In a keynote session at the Fiction Writing for Philosophers Workshop that was held at Oxford Brookes University in June 2017, Schwitzgebel, Professor of Philosophy at the University of California, Riverside, shared his advice–which he stated would be both practical and impractical.

    Schwitzgebel tells us of a leading coiffeur who styles himself as a “Philosopher of Hair”. We laugh – but there’s something in this – the vagary, the contingency in favoured forms of philosophical output. And it’s not just hairdressers that threaten to encroach upon the Philosophy Department’s turf. Given that the foundational issues in any branch of science or art are philosophical in nature, it follows that most people “doing” philosophy today aren’t professional philosophers.

    There are a host of ways one could go about doing philosophy, but of late a consensus has emerged amongst those that write articles for academic journals: the only proper way to “do” philosophy is by writing articles for academic journals. Is it time to re-stock the tool shed? Philosophical nuts come in all shapes and sizes; yet contemporary attempts to crack them are somewhat monotone.

    As Schwitzgebel wrote in a Los Angeles Times op-ed piece:

    Too exclusive a focus on technical journal articles excludes non-academics from the dialogue — or maybe, better said, excludes us philosophers from non-academics’ more important dialogue.

    [Hugh's account of my talk continues here.]

    -----------------------------------

    Thanks also to Helen De Cruz for setting up the talk and to Skye Cleary for finding a home for Hugh's account on the APA blog.

    [image detail from APA Blog]

    Tuesday, October 17, 2017

    Should You Referee the Same Paper Twice, for Different Journals?

    Uh-oh, it happened again. That paper I refereed for Journal X a few months ago -- it's back in my inbox. Journal X rejected it, and now Journal Y wants to know what I think. Would I be willing to referee it for Journal Y?

    In the past, I've tended to say no if I had previously recommended rejection, yes if I had previously recommended acceptance.

    If I'd previously recommended rejection, I've tended to reason thus: I could be mistaken in my negative view. It would be a disservice both to the field in general and to the author in particular if a single stubborn referee prevented an excellent paper from being published by rejecting it again and again from different journals. If the paper really doesn't merit publication, then another referee will presumably reach the same conclusion, and the paper will be rejected without my help.

    If I'd previously recommended acceptance (or an encouraging R&R), I've tended to just permit myself to think that the other journal's decision was probably the wrong call, and that it does no harm to the field or to the author for me to serve as referee again to help this promising paper find the home it deserves.

    I've begun to wonder whether I should just generally refuse to referee the same paper more than once for different journals, even in positive cases. Maybe if everyone followed my policy, that would overall tend to harm the field by skewing the referee pool too much toward the positive side?

    I could also imagine arguments -- though I'm not as tempted by them -- that it's fine to reject the same paper multiple times from different journals. After all, it's hard for journals to find expert referees, and if you're confident in your opinion, you might as well share it widely and save everyone's time.

    I'd be curious to hear about others' practices, and their reasons for and against.

    (Let's assume that anonymity isn't an issue, having been maintained throughout the process.)

    [Cross-posted at Daily Nous]

    Monday, October 16, 2017

    New Essay in Draft: Kant Meets Cyberpunk

    Abstract:

    I defend a how-possibly argument for Kantian (or Kant*-ian) transcendental idealism, drawing on concepts from David Chalmers, Nick Bostrom, and the cyberpunk subgenre of science fiction. If we are artificial intelligences living in a virtual reality instantiated on a giant computer, then the fundamental structure of reality might be very different than we suppose. Indeed, since computation does not require spatial properties, spatiality might not be a feature of things as they are in themselves but instead only the way that things necessarily appear to us. It might seem unlikely that we are living in a virtual reality instantiated on a non-spatial computer. However, understanding this possibility can help us appreciate the merits of transcendental idealism in general, as well as transcendental idealism's underappreciated skeptical consequences.

    Full essay here.

    As always, I welcome comments, objections, and discussion either as comments on this post or by email to my UCR email address.

    Thursday, October 12, 2017

    Truth, Dare, and Wonder

    According to Nomy Arpaly and Zach Barnett, some philosophers prefer Truth and others prefer Dare. I love the distinction. It helps us see an important dynamic in the field. But it's not exhaustive. I think there are also Wonder philosophers.

    As I see the distinction, Truth philosophers sincerely aim to present the philosophical truth as they see it. They tend to prefer modest, moderate, and commonsensical positions. They tend to recognize the substantial truth in multiple different perspectives (at least once they've been around long enough to see the flaws in their youthful enthusiasms), and thus tend to prefer multidimensionality and nuance. Truth philosophers would rather be boring and right than interesting and wrong.

    Dare philosophers reach instead for the bold and unusual. They want to explore the boundaries of what can be defended. They're happy for the sake of argument to champion unusual positions that they might not fully believe, if those positions are elegant, novel, fun, contrarian, or if they think the positions have more going for them than is generally recognized. Dare philosophers sometimes treat philosophy like a game in which the ideal achievement is the breathtakingly clever defense of a position that others would have thought to be patently absurd.

    There's a familiar dynamic that arises from their interaction. The Dare philosopher ventures a bold thesis, cleverly defended. ("Possible worlds really exist!", "All matter is conscious!", "We're morally obliged to let humanity go extinct!") If the defense is clever enough, so that a substantial number of readers are tempted to think "Wait, could that really be true? What exactly is wrong with the argument?" then the Truth philosopher steps in. The Truth philosopher finds the holes and presuppositions in the argument, or at least tries to, and defends a more seemingly sensible view.

    This Dare-and-Truth dynamic is central to the field and good for its development. Sometimes there's more truth in the Dare positions than one would have thought, and without the Dare philosophers out there pushing the limits, seeing what can be said in defense of the seemingly absurd, we as a field wouldn't appreciate those positions as vividly as we might. Also, I think, there's something intrinsically valuable about exploring the boundaries of philosophical defensibility, even if the positions explored turn out to be flatly false. It's part of the magnificent glory of life on Earth that we have fiendishly clever panpsychists and modal realists in our midst.

    Now consider Wonder.

    Why study philosophy? I mean at a personal level. Personally, what do you find cool, interesting, or rewarding about philosophy? One answer is Truth: Through philosophy, you discover answers to some of the profoundest and most difficult questions that people can pose. Another answer is Dare: It's fun to match wits, push arguments, defend surprising theses, win the argumentative game (or at least play to a draw) despite starting from a seemingly indefensible position. Both of those motivations speak to me somewhat. But I think what really delights me more than anything else in philosophy is its capacity to upend what I think I know, its capacity to call into question what I previously took for granted, its capacity to cast me into doubt, confusion, and wonder.

    Unlike the Dare philosopher, the Wonder philosopher is guided by a norm of sincerity and truth. It's not primarily about matching wits and finding clever arguments. Unlike the Truth philosopher, the Wonder philosopher has an affection for the strange and seemingly wrong -- and is willing to push wild theses to the extent that they suspect those theses, wonderfully, surprisingly, might be true.

    But in the Dare-and-Truth dynamic of the field, the Wonder philosopher can struggle to find a place. Bold Dare articles and sensible Truth articles both have a natural home in the journals. But "whoa, I wonder if this weird thing might be true?" is a little harder to publish.

    Probably no one is pure Truth, pure Dare, or pure Wonder. We're all a mix of the three, I suspect. Thus, one approach is to leave Wonder out of your research profile: Find the Truth, where you can, publish that, and leave Wonder for your classroom teaching and private reading. Defend the existence of moderate naturalistically-grounded moral truths in your published papers; read Zhuangzi on the side.

    Still, there are a few publishing strategies for Wonder philosophers. Here are four:

    (1.) Find a Dare-like position that you really do sincerely endorse on reflection, and defend that -- optionally with some explicit qualifications indicating that you are exploring it only as a possibility.

    (2.) Explicitly argue that we should invest a small but non-trivial credence in some Dare-like position -- for example, because the Truth-type arguments against it aren't fully compelling.

    (3.) Find a Truth-like view that generates Wonder if it's true. For example, defend some form of doubt about philosophical method or about the extent of our self-knowledge. Defend the position on sensible, widely acceptable grounds; and then sensibly argue that one possible consequence is that we don't know some of the things that we normally take for granted that we do know.

    (4.) Write about historical philosophers with weird and wonderful views. This gives you a chance to explore the Wonderful without committing to it.

    In retrospect, I think one unifying theme in my disparate work is that it fits under one of these four heads. Much of my recent metaphysics fits under (1) or (2) (e.g., here, here, here). My work on belief and introspection mostly fits under (3) (with some (1) in my bolder moments): We can't take for granted that we have the handsome beliefs (e.g., "the sexes are intellectually equal") that we think we do, or that we have the moral character or types of experience that we think we do. And my interest in Zhuangzi and some of the stranger corners of early introspective psychology fits under (4).