
Friday, February 16, 2024

What Types of Argument Convince People to Donate to Charity? Empirical Evidence

Back in 2020, Fiery Cushman and I ran a contest to see if anyone could write a philosophical argument that convinced online research participants to donate a surprise bonus to charity at rates statistically above control. (Chris McVey, Josh May, and I had failed to write any successful arguments in some earlier attempts.) Contributions were not permitted to mention particular real people or events, couldn't be narratives, and couldn't include graphics or vivid descriptions. We wanted to see whether relatively dry philosophical arguments could move people to donate.

We received 90 submissions (mostly from professional philosophers, psychologists, and behavioral economists, but also from other Splintered Mind readers), and we selected 20 that we thought represented a diversity of the most promising arguments. The contest winner was an argument written by Matthew Lindauer and Peter Singer, highlighting that a donation of $25 can save a child in a developing country from going blind due to trachoma, then asking the reader to reflect on how much they would be willing to donate to save their own child from going blind. (Full text here.)

Kirstan Brodie, Jason Nemirow, Fiery, and I decided to follow up by testing all 90 submitted arguments to see what features were present in the most effective arguments. We coded the arguments according to whether, for example, they mentioned children, or appealed to religion, or mentioned the reader's assumed own economic good fortune, etc. -- twenty different features in all. We recruited approximately 9000 participants. Each participant had a 10% chance of winning a surprise bonus of $10. They could either keep the whole $10 or donate some portion of it to one of six effective charities. Participants decided whether to donate, and how much, before knowing if they were among the 10% receiving the $10.

Now, unfortunately, proper statistical analysis is complicated. Because we were working with whatever came in, we couldn't balance argument features, most arguments had multiple coded features, and the coded features tended to correlate between submissions. I'll share a proper analysis of the results later. Today I'll share a simpler analysis, which looks at the coded features one by one, comparing the average donation among the set of arguments with the feature to the average donation among the set of arguments without the feature.
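In code, the simple analysis amounts to something like this (a minimal sketch, assuming a per-participant table with a donation amount and a 0/1 flag per coded feature; the column and function names are hypothetical, not from our actual analysis pipeline):

```python
# Minimal sketch of the simple per-feature analysis (column and function
# names are hypothetical, not from our actual analysis code).
import pandas as pd
from scipy.stats import ttest_ind

def feature_effect(df: pd.DataFrame, feature: str):
    """Mean-donation difference, N, and p for one coded feature."""
    # Each row is one participant; 'donation' is dollars donated (0-10),
    # and each coded feature is a 0/1 column on the argument they read.
    with_f = df.loc[df[feature] == 1, "donation"]
    without_f = df.loc[df[feature] == 0, "donation"]
    diff = with_f.mean() - without_f.mean()
    # Plain two-sample t test, uncorrected for multiple comparisons.
    stat, p = ttest_ind(with_f, without_f)
    return diff, len(with_f), p

# e.g., feature_effect(data, "mentions_children") -> (diff, N, p)
```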

There is something to be said, I think, for simple analyses even when they aren't perfect: They tend to be easier to understand and to have fewer "researcher degrees of freedom" (and thus less opportunity for p-hacking). Ideally, simple and sophisticated statistical analyses go hand in hand, telling a unified story.

So, what argument features appear to be relatively more versus less effective in motivating charitable giving?

Here are our results, from highest to lowest difference in mean donation. "diff" is the dollar difference in mean donation, N is the number of participants who saw an argument with that feature, n is the number of arguments containing that feature, and p is the statistical p-value in a two-sample t test (without correction for multiple comparisons). All analyses are tentative, pending double-checking, skeptical examination, and possibly some remaining data clean-up.

Predictive Argument Features, Highest to Lowest

Does the argument appeal to the notion of equality?
$3.99 vs $3.39 (diff = $.60, N = 395, n = 4, p < .001)

... mention human evolutionary history?
$3.93 vs $3.39 (diff = $.55, N = 4940, n = 5, p < .001)

... specifically mention children?
$3.76 vs $3.26 (diff = $.49, N = 4940, n = 27, p < .001)

... mention a specific, concrete benefit to others that $10 or a similar amount would bring (e.g., 3 mosquito nets or a specific inexpensive medical treatment)?
$3.75 vs $3.44 (diff = $.41, N = 1718, n = 17, p < .001)

... appeal to the diminishing marginal utility of dollars kept by (rich) donors?
$3.69 vs $3.29 (diff = $.40, N = 2843, n = 27, p < .001)

... appeal to the massive marginal utility of dollars transferred to (poor) recipients?
$3.65 vs $3.25 (diff = $.40, N = 3758, n = 36, p < .001)

... mention, or ask the participant to bring to mind, a particular person who is physically or emotionally near to them?
$3.74 vs $3.34 (diff = $.34, N = 318, n = 3, p = .061)

... mention particular needs or hardships such as clean drinking water or blindness?
$3.56 vs $3.23 (diff = $.30, N = 4940, n = 49, p < .001)

... refer to the reader's own assumed economic good fortune?
$3.58 vs $3.31 (diff = $.27, N = 3544, n = 35, p < .001)

... focus on a single issue (e.g., trachoma)?
$3.61 vs $3.40 (diff = $.21, N = 800, n = 8, p = .07)

... remind people that giving something is better than nothing (i.e., a corrective for drop-in-the-bucket thinking)?
$3.56 vs $3.40 (diff = $.15, N = 595, n = 6, p = .24)

... appeal to the views of experts (e.g. philosophers, psychologists)?
$3.47 vs $3.39 (diff = $.07, N = 2629, n = 27, p = .29)

... reference specific external sources such as news reports or empirical studies?
$3.47 vs $3.40 (diff = $.07, N = 1828, n = 18, p = .41)

... explicitly mention that donation is common?
$3.46 vs $3.41 (diff = $.05, N = 736, n = 7, p = .66)

... appeal to the notion of randomness/luck (e.g., nobody chose the country they were born in)?
$3.43 vs $3.41 (diff = $.02, N = 1403, n = 14, p = .80)

... mention religion?
$3.35 vs $3.42 (diff = -$.07, N = 905, n = 9, p = .48)

... appeal to veil-of-ignorance reasoning or other perspective-taking thought experiments?
$3.29 vs $3.23 (diff = -$.14, N = 4940, n = 8, p = .20)

... mention that giving could inspire others to give (i.e., spark behavioral contagion)?
$3.29 vs $3.43 (diff = -$.14, N = 896, n = 9, p = .20)

... explicitly mention and address specific counterarguments?
$3.29 vs $3.45 (diff = -$.15, N = 1829, n = 19, p = .048)

... appeal to the self-interest of the participant?
$3.22 vs $3.49 (diff = -$.30, N = 2604, n = 22, p < .001)

From this analysis, several argument features appear to be effective in increasing participant donations:

  • mentioning children and appealing to the equality of all people,
  • mentioning concrete benefits (one or several),
  • mentioning the reader's assumed economic good fortune and the relatively large impact of a relatively small sacrifice (the "margins" features), and
  • mentioning evolutionary history (e.g., theories that human beings evolved to care more about near others than distant others).
  • Mentioning a particular near person might also have been effective, but since only three arguments were coded in this category, statistical power was poor.

    In contrast, appealing to the participant's self-interest (e.g., that donating will make them feel good) appears to have backfired. Mentioning and addressing counterarguments to donation (e.g., responding to concerns that donations are ineffective or wasted) might also have backfired.

Now I don't think we should take these results wholly at face value. For example, only five of the ninety arguments appealed to evolutionary history, and all of those arguments included at least two other seemingly effective features: particular hardships, margins, or children. In multiple regression analyses and multi-level analyses that explore how the argument features cluster, it looks like particular hardships, children, and margins might be more robustly predictive -- more on that in a future post. ETA (Feb 19): Where n < 10 (that is, where fewer than ten arguments contain the feature), effects are unlikely to be statistically robust.

What if we combine argument features? There are various ways to do this, but the simplest is to give each argument one point for each of the ten largest-effect features it contains, then regress donation on that score. The resulting model has an intercept of $3.09 and a slope of $.13. Thus, the model predicts that participants who read arguments with none of these features will donate $3.09, while participants who read a hypothetical argument containing all ten features will donate $4.39.
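For the curious, here's roughly what that calculation looks like, run on synthetic stand-in data rather than our actual dataset:

```python
# Sketch of the feature-count regression, with synthetic stand-in data
# (the real per-participant data are not reproduced here).
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n = 9000
# How many of the ten "effective" features the participant's argument had:
counts = rng.integers(0, 11, size=n)
# Simulated donations built around the fitted model, noise sd roughly $3:
donations = 3.09 + 0.13 * counts + rng.normal(0, 3, size=n)

fit = linregress(counts, donations)
print(f"intercept ${fit.intercept:.2f}, slope ${fit.slope:.2f} per feature")
# Model prediction at zero features vs. all ten:
print(fit.intercept, fit.intercept + 10 * fit.slope)  # about 3.09 and 4.39
```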

Further analysis also suggests that the effect of piling up argument features is cumulative: Arguments with at least six of the effective features generated mean donations of $3.89 (vs. $3.37), those with at least seven generated mean donations of $4.46 (vs. $3.38), and the one argument with eight of the ten effective features generated a mean donation of $4.88 (vs. $3.40) (all p's < .001). This eight-feature argument was, in fact, the best-performing argument of the ninety. (However, caution is warranted concerning the estimated effect size for any particular argument: With only approximately 100 participants per argument and a standard deviation of about $3, the 95% confidence intervals for the effect size of individual arguments are about +/- $.50.)
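That caveat is just the usual standard-error arithmetic; a quick check with the approximate numbers above:

```python
# Back-of-the-envelope check on the per-argument confidence interval:
# the 95% CI half-width for a mean is roughly 1.96 * sd / sqrt(n).
import math

n, sd = 100, 3.0  # approx. participants per argument; donation sd
print(round(1.96 * sd / math.sqrt(n), 2))  # ~0.59, on the order of +/- $.50
```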

    ------------------------------------------------------

Last month, I articulated and defended the attractiveness of moral expansion through Mengzian extension. On my interpretation of the ancient Chinese philosopher Mengzi, expansion of one's moral perspective often (typically?) begins with noticing how you react to nearby cases -- whether physically nearby (a child in front of you, about to fall into a well) or relationally nearby (your close family members) -- and proceeds by noticing that remote cases (distant children, other people's parents) are similar in important respects.

None of the twenty coded features captured exactly that. ("Particular near person" was close, but neither necessary nor sufficient: not necessary, because the coders used a stringent standard for when an argument invoked a particular near person, and not sufficient since invoking a particular near person is only the first step in Mengzian extension.) So I asked UCR graduate student Jordan Jackson, who studies Chinese philosophy and with whom I've discussed Mengzian extension, to read all 90 arguments and code them for whether they employed Mengzian-extension-style reasoning. He found six that did.

    In accord with my hypothesis about the effectiveness of Mengzian extension, the six Mengzian extension arguments outperformed the arguments that did not employ Mengzian extension:

    $3.85 vs $3.38 (diff = $.47, N = 612, n = 6, p < .001)

Among those six arguments are both the original 2020 contest winner, written by Lindauer and Singer, and the best-performing argument in the present study -- though as noted earlier, that argument also had many other seemingly effective features.

In case you're curious, here's the full text of that argument, adapted by Alex Garinther, quoting extensively from one of the stimuli in Lindauer et al. 2020:

    HEAR ME OUT ON SOMETHING. The explanation below is a bit long, but I promise reading the next few paragraphs will change you.

    As you know, there are many children who live in conditions of severe poverty. As a result, their health, mental development, and even their lives are at risk from lack of safe water, basic health care, and healthy food. These children suffer from malnutrition, unsanitary living conditions, and are susceptible to a variety of diseases. Fortunately, effective aid agencies (like the Against Malaria Foundation) know how to handle these problems; the issue is their resources are limited.

    HERE'S A PHILOSOPHICAL ARGUMENT: Almost all of us think that we should save the life of a child in front of us who is at risk of dying (for example, a child drowning in a shallow pond) if we are able to do so. Most people also agree that all lives are of equal moral worth. The lives of faraway children are no less morally significant than the lives of children close to us, but nearby children exert a more powerful emotional influence. Why?

    SCIENTISTS HAVE A PLAUSIBLE ANSWER: We evolved in small groups in which people helped their neighbors and were suspicious of outsiders, who were often hostile. Today we still have these “Us versus Them” biases, even when outsiders pose no threat to us and could benefit enormously from our help. Our biological history may predispose us to ignore the suffering of faraway people, but we don't have to act that way.

    By taking money that we would otherwise spend on needless luxuries and donating it to an effective aid agency, we can have a big impact. We can provide safe water, basic health care, and healthy food to children living in severe poverty, saving lives and relieving suffering.

    Shouldn't we, then, use at least some of our extra money to help children in severe poverty? By doing so, we can help these children to realize their potential for a full life. Great progress has been made in recent years in addressing the problem of global poverty, but the problem isn't being solved fast enough. Through charitable giving, you can contribute towards more rapid progress in overcoming severe poverty.

Even a donation of $5 can save a life by providing one mosquito net to a child in a malaria-prone area. FIVE DOLLARS could buy us a large cappuccino, and that same amount of money could be used to save a life.

    Monday, July 18, 2022

    Narrative Stories Are More Effective Than Philosophical Arguments in Convincing Research Participants to Donate to Charity

    A new paper of mine, hot off the presses at Philosophical Psychology, with collaborators Christopher McVey and Joshua May:

    "Engaging Charitable Giving: The Motivational Force of Narrative Versus Philosophical Argument" (freely available final manuscript version here)

Chris, who was then a PhD student here at UC Riverside, had the idea for this project back in 2014 or 2015. He found my work on the not-especially-ethical behavior of ethics professors interesting, but maybe too negative in its focus. Instead of emphasizing what doesn't seem to have any effect on moral behavior, could I turn my attention in a positive direction? Even if philosophical reflection ordinarily has little impact on one's day-to-day choices, maybe there are conditions under which it can have an effect. What might those conditions be?

    Chris (partly under the influence of Martha Nussbaum's work) was convinced that narrative storytelling could bring philosophy powerfully to life, changing people's ethical choices and their lived understanding of the world. In his teaching, he used storytelling to great effect, and he thought we might be able to demonstrate the effectiveness of philosophical storytelling empirically too, using ordinary research participants.

    Chris thus developed a simple experimental paradigm in which research participants are exposed to a stimulus -- either a philosophical argument for charitable giving, a narrative story about a person whose life was dramatically improved by a charitable organization, both the argument and the narrative, or a control text (drawn from a middle school physics textbook) -- and then given a surprise 10% chance of receiving $10. Participants could then choose to donate some portion of that $10 (should they receive it) to one of six effective charities. Chris found that participants exposed to the argument donated about the same amount as those in the control condition -- about $4, on average -- while those exposed to the narrative or the narrative plus argument donated about $1 more, with the narrative-plus-argument showing no detectable advantage over the narrative alone.

    We also developed a five-item scale for measuring attitude toward charitable donation, with similar results: Expressed attitude toward charitable donation was higher in the narrative condition than in the control condition, while the argument-alone condition was similar to the control condition and the narrative-plus-argument condition was similar to the narrative alone. In other words, exposure to the narrative appeared to shift both attitude and behavior, while argument seemed to be doing no work either on its own or when added to the narrative.

    For this study, the narrative was the true story of Mamtha, a girl whose family was saved from slavery in a sand mine by the actions of a charitable organization. The argument was a Peter-Singer-style argument for charitable giving, adapted from Buckland, Lindauer, Rodriguez-Arias, and Veliz 2021. I've appended the full text of both to the end of this blog post.

Here are the results in chart form. (This is actually "Experiment 2" in the published version. Experiment 1 concerned hypothetical donation rather than actual donation, finding essentially the same results.) Error bars represent 95% confidence intervals.

    Chris completed his dissertation in 2020 and went into the tech industry (a separate story and an unfortunate loss for academic philosophy!). But I found his paradigm and results so interesting that with his permission, I carried on research using his approach.

One fruit of this was a contest Fiery Cushman and I hosted on this blog in 2019-2020, aiming to find a philosophical argument that would be effective in motivating research participants to donate to charity at rates higher than a control condition, since Chris and I had tried several without success. We did in fact find some effective arguments this way. (The most effective one, and the contest winner, was written collaboratively by Matthew Lindauer and Peter Singer.) Fiery and I are currently running a follow-up study; more details to come.

    The other fruit was a few follow-up studies I conducted collaboratively with Chris and Joshua May. In these studies, we added more narratives and more arguments -- including the winning arguments from the blog contest. These studies extended and replicated Chris's initial results. Across a series of five experiments, we found that participants exposed to emotionally engaging narratives consistently donated more and expressed more positive attitudes toward charitable giving than did participants exposed to the physics-text control condition. Philosophical arguments showed less consistent positive effects, on average considerably weaker and not always statistically detectable in our sample sizes of about 200-300 participants per condition.

For full details, see the article!

    --------------------------------------------------------

    Narrative: Mamtha

Mamtha’s dreams were simple—the same sweet musings of any 10-year-old girl around the world. But her life was unlike many other girls her age: She had no friends and no time to draw. She was not allowed to attend school or even play. Mamtha was a slave. For two years, her every day was spent under the control of a harsh man who cared little for her family’s health or happiness. Mamtha’s father, Ramesh, had been farming his small plot of land in Tamil Nadu until a drought dried his crops and left him deeply in debt. Around that time, a broker from another state offered an advance to cover his debts in exchange for work on a farm several hours away.

    Leaving their home village would mean uprooting the family and pulling Mamtha from school, but Ramesh had little choice. They needed the work to survive. Once the family moved, however, they learned that much of the arrangement was a lie: They were brought to a sand mine, not a farm, and the small advance soon ballooned with ever-growing interest they couldn’t possibly repay. This was bonded labor slavery.

    Every day, Ramesh, his wife, and the other slaves rose before sunrise to begin working in the mine. For 16 hours a day, they hauled mud and filtered the sand in putrid sewage water. The conditions left them constantly sick and exhausted, but they were never allowed to take breaks or leave for medical care. When Ramesh tried to ask about their low wages, the owner scolded and beat him badly. When he begged for his family to be released, again he was beaten and abused. Ramesh knew the owner was wealthy and well-connected in the community, so escape was not an option. There was nothing he could do.

Mamtha’s family withered from malnutrition before her eyes in the sand mine. Every morning at 5 a.m., she watched with deep sadness as her parents left for another day of hard labor—and spent her day in fear this would soon become her fate. She was left to watch her baby sister, Anjali, and other younger children to keep them out of the way. Her carefree childhood was taken over by responsibility, hard work, and crushed dreams.

Everything changed for Mamtha’s family on December 20, 2013, when the International Justice Mission, a charitable aid organization funded largely by donations from everyday people, worked with a local government team on a rescue operation at the sand mine. Seven adults and five children were brought out of the facility, and government officials filed paperwork to totally shut down the illegal mine. After a lengthy police investigation, the owner will now face charges for deceiving and enslaving these families.

    The next day, the government granted release certificates to all of the laborers. These certificates officially absolve the false debts, document the slaves’ freedom, and help provide protection from the owner. The International Justice Mission aftercare staff helped take the released families back to their home villages to begin their new lives in freedom.

    For Mamtha, starting over in her home village meant making those daydreams come true: She was enrolled back in school and could once again have a normal childhood. She’s got big plans for her future—dreams that never would have been possible if rescue had not come. She says confidently, “Today, I still want to be a doctor. Now that I am back in school, I know I can achieve my dream.”

    Singer-Style Argument:

    1. A great deal of extreme poverty exists, which involves suffering and death from hunger, lack of shelter, and lack of medical care. Roughly a third of human deaths (some 50,000 daily) are due to poverty-related causes.

    2. If you can prevent something bad from happening, without sacrificing anything nearly as important, you ought to do so and it is wrong not to do so.

    3. By donating money to trustworthy and effective aid agencies that combat poverty, you can help prevent suffering and death from lack of food, shelter, and medical care, without sacrificing anything nearly as important.

    4. Countries in the world are increasingly interdependent: you can improve the lives of people thousands of miles away with little effort.

    5. Your geographical distance from poverty does not lessen your duty to help. Factors like distance and citizenship do not lessen your moral duty.

    6. The fact that a great many people are in the same position as you with respect to poverty does not lessen your duty to help. Regardless of whether you are the only person who can help or whether there are millions of people who could help, this does not lessen your moral duty.

    7. Therefore, you have a moral duty to donate money to trustworthy and effective aid agencies that combat poverty, and it is morally wrong not to do so.

    For example, $20 spent in the United States could buy you a fancy restaurant meal or a concert ticket, or instead it could be donated to a trustworthy and effective aid agency that could use that money to reduce suffering due to extreme poverty. By donating $20 that you might otherwise spend on a fancy restaurant meal or a concert ticket, you could help prevent suffering due to poverty without sacrificing anything equally important. The amount of benefit you would receive from spending $20 in either of those ways is far less than the benefit that others would receive if that same amount of money were donated to a trustworthy and effective aid agency.

    Although you cannot see the beneficiaries of your donation and they are not members of your community, it is still easy to help them, simply by donating money that you would otherwise spend on a luxury item. In this way, you could help to reduce the number of people in the world suffering from extreme poverty. You could help reduce suffering and death due to hunger, lack of shelter, lack of medical care, and other hardships and risks related to poverty.

    With little effort, by donating to a trustworthy and effective aid agency, you can improve the lives of people suffering from extreme poverty. According to the argument above, even though the recipients may be thousands of miles away in a different country, you have a moral duty to help if you can do so without sacrificing anything of equal importance.

    Monday, March 14, 2022

    The Parable of the Overconfident Student -- and Why Academic Philosophy Still Favors the Socially Privileged

    If you've taken or taught some philosophy classes in the United States, you know the type: the overconfident philosophy student. Our system rewards these students. The epistemic failing of overconfidence ultimately serves them well. This pattern, I conjecture, helps explain the continuing inequities in philosophy.

    It's the second day of class. You're starting some complex topic, say, the conceivability argument for metaphysical dualism. Student X jumps immediately into the discussion: The conceivability argument is obviously absurd! You offer some standard first-pass responses to his objection, but he stands his ground, fishing around for defenses. Before today, he knew nothing about the issues. It's his first philosophy class, but suddenly, he knows better than every philosopher who thinks otherwise, whose counterarguments he has never heard.

    [image from the classic Onion article "Guy in Philosophy Class Needs to Shut the Fuck Up"]

    It's also Student Y's first philosophy class. Student X and Student Y are similar in intelligence and background knowledge, differing only in that Student Y isn't irrationally overconfident. Maybe Student Y asks a question of clarification. Or maybe she asks how the author would deal with such-and-such an objection. More likely, she keeps quiet, not wanting to embarrass herself or use class time when other, more knowledgeable students presumably have more insightful things to say.

    (I've called Student X a "he", since in my experience most students of this type are men. Student Y types are more common, in any gender.)

    What will happen to Student X and Student Y over time, in the typical U.S. classroom? Both might do well. Student Y is by no means doomed to fail. But Student X's overconfidence wins him several important advantages.

    First, he gets practice asserting his philosophical views in an argumentative context. Oral presentation of one's opinions is a crucial skill in philosophy and closely related to written presentation of one's opinions.

Second, he receives customized expert feedback on his philosophical views. Typically, the professor will restate Student X's views, strengthening them and fitting them better into the existing discourse. The professor will articulate responses to those views, so that the student learns those too. If the student responds to the responses, this second layer of responses will also be charitably reworked and rebutted. Thus, Student X will gain specific knowledge on exactly the issues that engage and interest him most.

    Third, he engages his emotions and enhances his memory. Taking a public stand stirs up your emotions. Asking a question makes your heart race. Being defeated in an argument with your professor burns that argument into your memory. So also does winning an argument or playing to a draw. After his public stand and argument, it matters to Student X, more than it otherwise would have, that the conceivability argument is absurd. This will intensify his engagement with the rest of the course, where he'll latch on to arguments that support his view and develop counterarguments against the opposition. His written work will also reflect this passion.

    Fourth, he wins the support and encouragement of his professor. Unless he is unusually obnoxious or his questions are unusually poor, the typical U.S. professor will appreciate Student X's enthusiasm and his willingness to advance class discussion. His insights will be praised and his mistakes interpreted charitably, enhancing his self-confidence and his sense that he is good at philosophy.

    The combined effect of these advantages, multiplied over the course of an undergraduate education, ensures that most students like Student X thrive in U.S. philosophy programs. What was initially the epistemic vice of overconfidence becomes the epistemic virtue of being a knowledgeable, well-trained philosophy student.

    Contrast with the sciences. If a first-year chemistry student has strong, ignorant opinions about the electronegativity of fluorine, it won't go so well -- and who would have such opinions, anyway? Maybe at the most theoretically speculative end of the sciences, we can see a similar pattern, though. The social sciences and other humanities might also reward the overconfident student in some ways while punishing him in others. Among academic disciplines as practiced in the U.S., I conjecture that philosophy is the most receptive to the Overconfident Student Strategy.

    Success in the Overconfident Student Strategy requires two things: a good sense of what is and is not open for dispute, and comfort in classroom dialogue. Both tend to favor students from privileged backgrounds.

    It's ridiculous to dispute simple matters of fact with one's professor. The Overconfident Student Strategy only works if the student can sniff out a defensible position, one rich for back-and-forth dispute. Student X in our example immediately discerned that the conceivability argument for dualism was fertile ground on which to take a stand. Student X can follow his initial gut impression, knowing that even if he can't really win the argument on day two, arguments for his favored view are out there somewhere. Students with academically strong backgrounds -- who have a sense of how academia works, who have some exposure to philosophy earlier in their education, who are familiar with the back-and-forth of academic argumentation -- are at an advantage in sensing defensible positions and glimpsing what broad shapes an argument might take.

    And of course speaking up in the classroom, especially being willing to disagree with one's professor, normally requires a certain degree of comfort and self-assurance in academic contexts. It helps if classrooms feel like home, if you feel like you belong, if you see yourself as maybe a professor yourself some day.

    For these reasons -- as well as the more general tendency toward overconfidence that comes with social privilege -- we should expect that the Overconfident Student Strategy should be especially available to students with privileged backgrounds: the children of academics, wealthy students who went to elite high schools, White students, men, and non-immigrants, for example. In this way, initial privilege provides advantages that amplify up through one's education.

    I have confined myself to remarks about the United States, because I suspect the sociology of overconfidence plays out differently in some other countries, which might explain the difficulty that international students sometimes have adapting to the style in which philosophy is practiced here.

    I myself, of course, was just the sort of overconfident student I've described -- the son of two professors, raised in a wealthy suburb with great public schools. Arguably, I'm still employing the same strategy, opining publicly on my blog on a huge range of topics beyond my expertise (e.g., Hume interpretation last week, COVID ethics last month), reaping advantages analogous to the overconfident student's four classroom advantages, only in a larger sphere.

Coming up! Some strategies for leveling the playing field.

    ------------------------------------------

    Related: "On Being Good at Seeming Smart" (Mar 25, 2010).

    Tuesday, March 31, 2020

    Charity Argument Contest Update

In October, Fiery Cushman and I announced a contest: Participants were to write a philosophical argument that attempts to convince research participants to donate a surprise bonus to charity. The winner would receive $500, and we would donate an additional $500 to the winner's choice of charity.

    We planned to run the experiment in early 2020 and announce the winner by today, March 31. For a variety of reasons, the experiment has been delayed, but the contest is still on and we will announce the winner as soon as we can.

    In the meantime, I hope Splintered Mind readers and contest entrants are managing well through the chaos.

    Tuesday, May 01, 2018

    Please Rate My Blog Posts for Inclusion in My Next Book

I'm working away on selecting and revising blog posts and op-eds for my next book. Readers' feedback has been very helpful in narrowing down to just the most memorable and interesting posts! My final poll, 22 selected posts on philosophical method and the sociology of philosophy, is live today.

As with all the other polls, this poll contains links to the original posts so you can refresh your memory if you want. But there's no need to rate all of the listed posts! Even if you just remember one or two that you like, it would be useful for me to know that.

    Below are all seven polls.

    Polls 3, 5, and 6 have low response rates. It would be terrific if you could click through and rate a few posts that you like or remember, from one or more of those polls.

    Categories:

  • 1. moral psychology
  • 2. technology
  • 3. belief, desire, and self-knowledge
  • 4. culture and humor
  • 5. cosmology, skepticism, and weird minds
  • 6. consciousness
• 7. philosophical method and the sociology of philosophy (new as of today)

    Tuesday, January 30, 2018

    The Philosophical Overton Window?

It seems like I've been hearing a lot recently about the "Overton Window" in politics. The idea is that there's a range of normal policy positions (within the window), which a politician can adopt without being regarded as radical or extreme; and then there are radical or extreme positions, outside of the window. Over time, what is within the window can change. Gay marriage, for example, was outside of the window in U.S. politics in the 1980s, then entered the window in the 1990s or early 2000s.

    A common thought is that one way to move the window is to prominently voice a position so extreme that a somewhat less extreme position seems moderate in comparison, and perhaps enters the window. After Bernie Sanders starts saying "free college education for everyone!", maybe "only" offering $10,000 toward every student's tuition no longer seems extreme.

    Before going further, a big heaping caveat. I figured I'd go back to the original Overton article to confirm that the picture in the popular press conforms to the scholarship. (Reports of the Dunning-Kruger effect, for example, which is also recently hot in blogs and op-eds, often do not.)

    And... whoops. There is no Overton article! There is no scholarship. Not unless you count Glenn Beck. This Joseph P. Overton was a not-very-well-known libertarian think-tank guy who died in a plane crash before writing the idea up. As far as I can tell, this is as close as we get to the root scholarly source. (See also Laura Marsh's discussion.)

    Still, the idea has some theoretical appeal. Might it capture some of the dynamics in philosophy?

    For it to work, first we'd need some sense of what positions qualify as extreme and what positions qualify as moderate in a philosophical cultural context. Then we'd need some way of measuring (through citations?) the increasing visibility of an extreme position and see if that opens up "moderate" philosophers to positions that they might previously have regarded as too extreme.

    Here's one possibility: Panpsychism is the view that everything in the universe is conscious, even elementary particles. Generally, it's regarded as an extreme position. However, it has recently been gaining visibility. If the Overton Window idea is correct, then we might expect some formerly "extreme" positions in that direction, but not as extreme as panpsychism, to come to seem less extreme or maybe even moderate.

Hmmmmm. I'm not sure it's so. A couple of obvious candidates are group consciousness and plant cognition. These would seem to be less extreme positions in the same direction as panpsychism, since instead of ascribing mind or consciousness to everything, they extend it only to a limited range of things that aren't usually regarded as having mental lives. If the Overton Window idea is right, then, given the increasing visibility of radical panpsychism, group consciousness and plant cognition will come to seem less extreme than they previously were.

    Hard to tell if that's true. Both positions are probably more popular now than they were 15 years ago (in academic Anglophone philosophy), but they'd still probably be considered extreme.

    Eh. You know what? My heart isn't in it. I'm too bummed about the Glenn Beck thing. I wanted this to be an idea with a more solid scholarly foundation.

    [image source]

    Thursday, July 27, 2017

    How Everyone Might Reasonably Believe They Are Much Better Than Average

    In a classic study, Ola Svenson (1981) found that about 80% of U.S. and Swedish college students rated themselves as being both safer and more skilled as drivers than other students in the room answering the same questionnaire. (See also Warner and Aberg 2014.) Similarly, most respondents tend to report being less susceptible to cognitive biases and sexist bias than their peers, as well as more honest and trustworthy -- and so on for a wide variety of positive traits: the "Better-Than-Average Effect".

    The standard view is that this is largely irrational. Of course most people can't be justified in thinking that they are better than most people. The average person is just average! (Just between you and me, don't you kind of wish that all those dumb schmoes out there would have a more realistic appreciation of their incompetence and mediocrity? [note 1])

Particularly interesting are explanations of the Better-Than-Average Effect that appeal to people's idiosyncratic standards. What constitutes skillful driving? Person A might think that the best standard of driving skill is getting there quickly and assertively while still being safe, whereas Person B might think skillful driving is more a matter of being calm, predictable, and within the law. Each person might then prefer the standard that best reflects their own manner of driving, and in that way justify viewing themselves as above average (e.g., Dunning et al. 1991; Chambers and Windschitl 2004).

In some cases, this seems likely to be just typical self-enhancement bias: Because you want to think well of yourself, in cases where the standards are ambiguous, you choose the standards that make you look good. To change the example: if you want to think of yourself as intelligent and you're good at math, you might choose to think of mathematical skill as central to intelligence, while if you're good at practical know-how in managing people, you might choose to think of intelligence more in terms of social skills.

    But in other cases of the Better-Than-Average Effect, the causal story might be much more innocent. There may be no self-flattery or self-enhancement at all, except for the good kind of self-enhancement!

    Consider the matter abstractly first. Kevin, Nicholas, and Ana [note 2] all value Trait A. However, as people will, they have different sets of evidence about what is most important to Trait A. Based on this differing evidence, Kevin thinks that Trait A is 70% Property 1, 15% Property 2, and 15% Property 3. Nicholas thinks Trait A is 15% Property 1, 70% Property 2, and 15% Property 3. Ana thinks that Trait A is 15% Property 1, 15% Property 2, and 70% Property 3. In light of these rational conclusions from differing evidence, Kevin, Nicholas, and Ana engage in different self-improvement programs, focused on maximizing, in themselves, Properties 1, 2, and 3 respectively. In this, they succeed. At the end of their training, Kevin has the most Property 1, Nicholas the most Property 2, and Ana the most Property 3. No important new evidence arrives in the meantime that requires them to change their views about what constitutes Trait A.

    Now when they are asked which of them has the most of Trait A, all three reasonably conclude that they themselves have the most of Trait A -- all perfectly rationally and with no "self-enhancement" required! All of them can reasonably believe that they are better than average.
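Here's a toy numerical version of the example. The 70/15/15 weightings are from the setup above; the post-training property levels are illustrative numbers of my own choosing:

```python
# Toy version of the abstract example: three agents weight three properties
# differently, train up the property they weight most, then score everyone
# by their own standard.
import numpy as np

names = ["Kevin", "Nicholas", "Ana"]

# Each row: how much that agent thinks Properties 1-3 matter to Trait A
# (the 70/15/15 splits from the example above).
weights = np.array([[0.70, 0.15, 0.15],
                    [0.15, 0.70, 0.15],
                    [0.15, 0.15, 0.70]])

# Each row: an agent's levels of Properties 1-3 after self-improvement.
# These particular numbers are illustrative; the example only requires that
# each agent ends up highest on the property they weight most.
levels = np.array([[0.9, 0.3, 0.3],
                   [0.3, 0.9, 0.3],
                   [0.3, 0.3, 0.9]])

scores = weights @ levels.T  # scores[i, j]: agent i's rating of agent j
for i, name in enumerate(names):
    top = names[int(scores[i].argmax())]
    print(f"{name} rates {top} highest on Trait A")
# Output: each agent rates themselves highest -- no self-enhancement needed.
```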

Real-life cases won't perfectly match that abstract example, of course, but many skills and traits might show some of that structure. Consider skill as a historian of philosophy. Some people, as a result of their training and experience, might reasonably come to view deep knowledge of the original language of the text as most important, others might view deep knowledge of the historical context as most important, and still others might view deep knowledge of the secondary literature as most important. Of course all three are important and interrelated, but historians reasonably disagree substantially in their comparative weighting of these types of knowledge -- and, I think, not always for self-serving or biased reasons. It's a difficult matter of judgment. Someone committed to the first view might then invest a lot of energy in mastering the details of the language, someone committed to the second view might invest a lot of energy in learning the broader historical context, and someone committed to the third view might invest a lot of energy in mastering a vast secondary literature. Along the way, they might not encounter evidence that requires them to change their visions of what makes for a good historian. Indeed, they might quite reasonably continue to be struck by the interpretative power they are gaining by close examination of language, historical context, or the secondary literature, respectively. Eventually, each of the three might very reasonably regard themselves as a much better historian of philosophy than the other two, without any irrationality, self-flattery, or self-enhancing bias.

    I think this might be especially true in ethics. A conservative Christian, for example, might have a very different ethical vision than a liberal atheist. Each might then shape their behavior according to this vision. If both have reasonable ethical starting points, then at the end of the process, each person might reasonably regard themselves as morally better than the other, with no irrational self-enhancing bias. And of course, this generalizes across groups.

    I find this to be a very convenient and appealing view of the Better-Than-Average Effect, quite comforting to my self-image. Of course, I would never accept it on those grounds! ;-)

    Friday, April 21, 2017

    Common Sense, Science Fiction, and Weird, Uncharitable History of Philosophy

    Philosophers have three broad methods for settling disputes: appeal to "common sense" or culturally common presuppositions, appeal to scientific evidence, and appeal to theoretical virtues like simplicity, coherence, fruitfulness, and pragmatic value. Some of the most interesting disputes are disputes in which all three of these broad methods are problematic and seemingly indecisive.

    One of my aims as a philosopher is to intervene on common sense. "Common sense" is inherently conservative. Common sense used to tell us that the Earth didn't move, that humans didn't descend from ape-like ancestors, that certain races were superior to others, that the world was created by a god or gods of one sort or another. Common sense is a product of biological and cultural evolution, plus the cognitive and social development of people in a limited range of environments. Common sense only has to get things right enough, for practical purposes, to help us manage the range of environments to which we are accustomed. Common sense is under no obligation to get it right about the early universe, the microstructure of matter, the history of the species, future technologies, or the consciousness of weird hypothetical systems we have never encountered.

    The conservativism and limited vision of common sense leads us to dismiss as "crazy" some philosophical and scientific views that might in fact be true. I've argued that this is especially so regarding theories of consciousness, about which something crazy must be true. For example: literal group consciousness, panpsychism, and/or the failure of pain to supervene locally. Although I don't believe that existing arguments decisively favor any of those possibilities, I do think that we ought to restrain our impulse to dismiss such views out of hand. Fit with common sense is one important factor in evaluating philosophical claims, especially when direct scientific evidence and considerations of general theoretical virtue are indecisive, but it is only one factor. We ought to be ready to accept that in some philosophical domains, our commonsense intuitions cannot be entirely preserved.

    Toward this end, I want to broaden our intuitive sense of the possible. The two best techniques I know are science fiction and cross-cultural philosophy.

    The philosophical value of science fiction consists not only in the potential of science fictional speculations to describe possible futures that we might actually encounter. Historically, science fiction has not been a great predictor of the future. The primary philosophical value of science fiction might rather consist in its ability to flex our minds and disrupt commonsense conservatism. After reading far-out stories about weird utopias, uploading into simulated realities, bizarrely constructed intelligent aliens, body switching, Matrioshka Brains, and alternative universes, philosophical speculations about panpsychism and group consciousness no longer seem quite so intolerably weird. At least that's my (empirically falsifiable) conjecture.

    Similarly, brain-flexing is an important part of the value of reading the history of philosophy -- especially work from traditions other than those with which you are already familiar. Here it's especially important not to be too "charitable" (i.e. assimilative). Relish the weirdness -- "weird" from your perspective! -- of radical Buddhist metaphysics, of medieval Chinese neo-Confucianism, of neo-Platonism in late antiquity, of 19th century Hegelianism and neo-Hegelianism.

    If something that seems crazy must be true about the metaphysics of consciousness, or about the nature of objects and causes, or about the nature of moral value -- as extended philosophical discussions of these topics suggest probably is the case -- then to evaluate the possibilities without excess conservatism, we need to get used to bending our minds out of their usual ruts.

    This is my new favorite excuse for reading Ted Chiang, cyberpunk, and Zhuangzi.

    [image source]

    Tuesday, January 24, 2017

    The Philosopher's Rationalization-O-Meter

    Usually when someone disagrees with me about a philosophical issue, I think they're about 20% correct. Once in a while, I think a comment is just straightforwardly wrong. Very rarely, I find myself convinced that the person who disagrees is correct and my original view was mistaken. But for the most part, it's a remarkable consistency: The critic has a piece of the truth, but I have more of it.

    My inner skeptic finds this to be a highly suspicious state of affairs.

    Let me clarify what I mean by "about 20% correct". I mean this: There's some merit in what the disagreeing person says, but on the whole my view is still closer to correct. Maybe there's some nuance that they're noticing, which I elided, but which doesn't undermine the big picture. Or maybe I wasn't careful or clear about some subsidiary point. Or maybe there's a plausible argument on the other side which isn't decisively refutable but which also isn't the best conclusion to draw from the full range of evidence holistically considered. Or maybe they've made a nice counterpoint which I hadn't previously considered but to which I have an excellent rejoinder available.

    In contrast, for me to think that someone who disagrees with me is "mostly correct", I would have to be convinced that my initial view was probably mistaken. For example, if I argued that we ought to expect superintelligent AI to be phenomenally conscious, the critic ought to convince me that I was probably mistaken to assert that. Or if I argue that indifference is a type of racism, the critic ought to convince me that it's probably better to restrict the idea of "racism" to more active forms of prejudice.

    From an abstract point of view, how often ought I expect to be convinced by those who object to my arguments, if I were admirably open-minded and rational?

    For two reasons, the number should be below 50%:

    1. For most of the issues I write about, I have given the matter more thought than most (not all!) of those who disagree with me. Mostly I write about issues that I have been considering for a long time or that are closely related to issues I've been considering for a long time.

    2. Some (most?) philosophical disputes are such that even ideally good reasoners, fully informed of the relevant evidence, might persistently disagree without thereby being irrational. People might reasonably have different starting points or foundational assumptions that justify persisting disagreement.

    Still, even taking 1 and 2 together, it seems that it should not be a rarity for a critic to raise an interesting, novel objection that I hadn't previously considered and which ought to persuade me. This is clear when I consider other philosophers: Often they get objections (sometimes from me) which, in my judgment, nicely illuminate what is incorrect in their views, and which should rationally lead them to change their views -- if only they weren't so defensively set upon rebutting all critiques! I doubt I am a much better philosopher than they are, wise enough to have wholly excellent opinions; so I must sometimes hear criticisms that ought to cause me to relinquish my views.

    Let me venture to put some numbers on this.

    Let's begin by excluding positions on which I have published at least one full-length paper. For those positions, considerations 1 and 2 plausibly suggest rational steadfastness in the large majority of cases.

    A more revealing target is half-baked or three-quarters-baked positions on contentious issues: anything from a position I have expressed verbally, after a bit of thought, in a seminar or informal discussion, up to approximately a blog post, if the issue is fairly new to me.

Suppose that about 20% of the time what I say is off-base in a way that should be discoverable to me if I gave it more thought, in a reasonably open-minded, even-handed way. Now if I'm defending that off-base position in dialogue with someone substantially more expert than I, or with a couple of peers, or with a somewhat larger group of people who are less expert than I but still thoughtful and informed, maybe I should expect that about half to 3/4 of the time I'll hear an objection that ought to move me. Multiplying and rounding, let's say that about 1/8 of the time, when I put forward a half- or three-quarters-baked idea to some interlocutors, I ought to hear an objection that makes me think, whoops, I guess I'm probably mistaken!
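Spelled out, the multiplication behind that estimate:

```python
# The multiplication behind the "about 1/8" estimate.
p_off_base = 0.20                      # chance a half-baked position is off-base
p_hear_decisive = (0.50 + 0.75) / 2    # midpoint of "half to 3/4 of the time"
print(p_off_base * p_hear_decisive)    # 0.125 -> about 1/8
```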

    I hope this isn't too horrible an estimate, at least for a mature philosopher. For someone still maturing as a philosopher, the estimate should presumably be higher -- maybe 1/4. The estimate should similarly be higher if the half- or three-quarters-baked idea is a critique of someone more expert than you, concerning the topic of their philosophical expertise (e.g., pushing back against a Kant expert's interpretation of a passage of Kant that you're interested in).

    Here then are two opposed epistemic vices: being too deferential or being too stubborn. The cartoon of excessive deferentiality would be the person who instantly withdraws in the face of criticism, too quickly allowing that they are probably mistaken. Students are sometimes like this, but it's hard for a really deferential person to make it far as a professional philosopher in U.S. academic culture. The cartoon of excessive stubbornness is the person who is always ready to cook up some post-hoc rationalization of whatever half-baked position happens to come out of their mouth, always fighting back, never yielding, never seeing any merit in any criticisms of their views, however wrong their views plainly are. This is perhaps the more common vice in professional philosophy in the U.S., though of course no one is quite as bad as the cartoon.

    Here's a third, more subtle epistemic vice: always giving the same amount of deference. Cartoon version: For any criticism you hear, you think there's 20% truth in it (so you're partly deferential) but you never think there's more than 20% truth in it (so you're mostly stubborn). This is what my inner skeptic was worried about at the beginning of this post. I might be too close to this cartoon, always a little deferential but mostly stubborn, without sufficient sensitivity to the quality of the particular criticism being directed at me.

    We can now construct a rationalization-o-meter. Stubborn rationalization, in a mature philosopher, is revealed by not thinking your critics are right, and you are wrong, at least 1/8 of the time, when you're putting forward half- to three-quarters-baked ideas. If you stand firm in 15 out of 16 cases, then you're either unusually wise in your half-baked thoughts, or you're at .5 on the rationalization-o-meter (50% of the time that you should yield you offer post-hoc rationalizations instead). If you're still maturing or if you're critiquing an expert on their own turf, the meter should read correspondingly higher, e.g., with a normative target of thinking you were demonstrably off-base 1/4 or even half the time.
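As a formula: take the rate at which you actually concede, divide by the rate at which you ought to concede, and subtract from one. A hypothetical sketch:

```python
# Hypothetical sketch of the rationalization-o-meter described above.
def rationalization_meter(concessions, occasions, expected_rate=1/8):
    """Fraction of warranted concessions replaced by post-hoc rationalization."""
    observed_rate = concessions / occasions
    return max(0.0, 1 - observed_rate / expected_rate)

print(rationalization_meter(1, 16))        # stood firm 15/16 times -> 0.5
print(rationalization_meter(2, 16))        # conceding at the 1/8 target -> 0.0
print(rationalization_meter(2, 16, 1/4))   # still-maturing target of 1/4 -> 0.5
```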

    Insensitivity is revealed by having too little variation in how much truth you find in critics' remarks. I'd try to build an insensitivity-o-meter, but I'm sure you all will raise somewhat legitimate but non-decisive concerns against it.

    [image modified from source]

    Sunday, January 08, 2017

    Against Charity in the History of Philosophy

    Peter Adamson, host of History of Philosophy Without Any Gaps, recently posted twenty "Rules for the History of Philosophy". Mostly, they are terrific rules. I want to quibble with one.

    Like almost every historian of philosophy I know, Adamson recommends that we be "charitable" to the text. Here's how he puts it in "Rule 2: Respect the text":

    This is my version of what is sometimes called the "principle of charity." A minimal version of this rule is that we should assume, in the absence of fairly strong reasons for doubt, that the philosophical texts we are reading make sense.... [It] seems obvious (to me at least) that useful history of philosophy doesn't involve looking for inconsistencies and mistakes, but rather trying one's best to get a coherent and interesting line of argument out of the text. This is, of course, not to say that historical figures never contradicted themselves, made errors, and the like, but our interpretations should seek to avoid imputing such slips to them unless we have tried hard and failed to find a way of resolving the apparent slip.

At first pass, it seems a good idea to avoid imputing contradictions and errors, and to seek a coherent, sensible interpretation of historical texts "unless we have tried hard and failed to find a way of resolving the apparent slip". This is how, it seems, to best "respect the text".

    To see why I think charity isn't as good an idea as it seems, let me first reveal my main reason for reading history of philosophy: It's to gain a perspective, through the lens of distance, on my own philosophical views and presuppositions, and on the philosophical attitudes and presuppositions of 21st century Anglophone philosophy generally. Twenty-first century Anglophone philosophy tends to assume that the world is wholly material (with the exception of religious dualists and near cousins of materialists, like property dualists). I'm inclined to accept the majority's materialism. Reading the history of philosophy helpfully reminds me that a wide range of other views have been taken seriously over time. Similarly, 21st century Anglophone philosophy tends to favor a certain sort of liberal ethics, with an emphasis on individual rights and comparatively little deference to traditional rules and social roles -- and I tend to favor such an ethics too. But it's good to be vividly aware that wonderful thinkers have often had very different moral opinions. Reading culturally distant texts reminds me that I am a creature of my era, with views that have been shaped by contingent social factors.

    Of course, others might read history of philosophy with very different aims, which is fine.

    Question: If this is my aim in reading history of philosophy, what is the most counterproductive thing I could do when confronting a historical text?

    Answer: Interpret the author as endorsing a view that is familiar, "sensible", and similar to my own and my colleagues'.

    Historical texts, like all philosophical texts -- but more so, given our linguistic and cultural distance -- tend to be difficult and ambiguous. Therefore, they will admit of multiple interpretations. Suppose, then, that there's a text admitting of four possible interpretations: A, B, C, and D, where Interpretation A is the least challenging, least weird, and most sensible, and Interpretation D is the most challenging, weirdest, and least sensible. A simple application of the principle of charity seems to recommend that we favor the sensible, pedestrian Interpretation A. In fact, however, weird and wild Interpretation D would challenge our presuppositions more deeply and give us a more helpfully distant perspective. This is one reason to favor Interpretation D. Call this the Principle of Anti-Charity.

    Admittedly, this way of defending Anti-Charity might seem noxiously instrumentalist. What about historical accuracy? Don't we want the interpretation that's most likely to be true?

    Bracketing post-modern views that reject truth in textual interpretation, I have four responses to that concern:

    1. Being Anti-Charitable doesn't mean that anything goes. You still want to respect the surface of the text. If the author says "P", you don't want to attribute the view not-P. In fact, it is the more "charitable" interpretations that are likely to take the author's claims other than at face value: "The author says P, but really a charitable, sensible interpretation is that the author meant P-prime". In one way, it is actually more respectful to the texts not to be too charitable, and to interpret them superficially, at face value. After all, P is what the author literally said.

    2. What seems "coherent" and "sensible" is culturally variable. You might reject excessive charitableness, while still wanting to limit allowable interpretations to one among several sensible and coherent ones. But this might already be too limiting. It might not seem "coherent" to us to embrace a contradiction, but some philosophers in some traditions seem happy to accept bald contradictions. It might not seem "sensible" to think that the world is nothing but a flux of ideas, such that the existence of rocks depends entirely upon the states of immaterial spirits. So if there's any ambiguity, you might hope to tame views that seem metaphysically idealist, thereby giving those authors a more sensible, reasonable-seeming view. But this might be leading you away from rather than toward interpretative accuracy.

    3. Philosophy is hard and philosophers are stupid. The human mind is not well-designed for figuring out philosophical truths. Timeless philosophical puzzles tend to kick our collective asses. Sadly, this is going to be true of your favorite philosopher too. The odds are good that this philosopher, being a flawed human like you and me, made mistakes, fell into contradictions, changed opinions, and failed to see what seem to be obvious consequences and counterexamples. Respecting the text and respecting the person means, in part, not trying too hard to smooth this stuff away. The warts are part of the loveliness. They are also a tonic against excessive hero worship and a reminder of your own likely warts and failings.

    4. Some authors might not even want to be interpreted as having a coherent, stable view. I have recently argued that this is the case for the ancient Chinese philosopher Zhuangzi. Let's not fetishize stable coherence. There are lots of reasons to write philosophy. Some philosophers might not care if it all fits together. Here, attempting "charitably" to stitch together a coherent picture might be a failure to respect the aims and intentions implicit in the text.

    Three cheers for the weird and "crazy", the naked text, not dressed in sensible 21st century garb!

    -----------------------------------------------

    Related post: In Defense of Uncharitable and Superficial History of Philosophy (Aug 17, 2012)

    (HT: Sandy Goldberg for discussion and suggestion to turn it into a blog post)

    [image source]

    Tuesday, October 06, 2015

    The Laughter of Ethicists

    A guest post by Regina Rini

    You are loitering by the railyard when you see an out-of-control trolley hurtling toward five innocent orphans who’ve been lashed to the track by a mustachioed villain. There is a switch nearby, which would activate an enormous fan and disrupt the air above you. There is a very fat man hang-gliding over the tracks. Since (like most ethicists) you are an expert in the aerodynamics of obesity, you know that the fan would force him to swoop down right into the path of the trolley; the collision would save the five orphans, though the very fat hang-glider would die.

    There is another option. You happen to be carrying one of those t-shirt cannons they use to fire souvenir t-shirts into the stands at sporting events. And the tracks are right next to a nursery for babies born without developed brains, who will surely die within hours in any case. Since (like most ethicists) you are an expert in infant ballistics, you know that you could use the t-shirt cannon to fire ten anencephalic infants at the trolley, and that would be just enough to derail it, saving the orphans, though killing the projectile babies.

    What should you do? Do nothing and let the five orphans die? Flip the switch and blow the very fat hang-glider into the trolley? Or use the t-shirt cannon to fire the ten anencephalic infants at the trolley?

    Actually, don’t answer that. My story is only a parody, though it is not far off from many stories you will find in professional philosophy journals. Moral philosophers have a penchant for inventing goofy thought experiments in which numerous people are oddly imperiled. These stories have a purpose: they are meant to isolate and test some purported moral principle. The absurd details are often unnecessary, though they keep the writing from becoming dull. But we might ask: should we really be amusing ourselves with ethics?

    One possible worry is that amusingly absurd thought experiments can make our moral intuitions less reliable. Some have thought that the frequent use of unrealistic scenarios might make for bad philosophy. Others might point out that being put in a humorous mood changes how people react to moral dilemmas. But I will leave that sort of objection to the side. My question is this: is there something morally inappropriate about constructing amusing moral dilemmas?

    It’s important to keep in mind that these scenarios are often intended to provide simplified models of very troubling moral issues: killing in war, abortion, euthanasia. Even when justified, killing is killing, and it would obviously never be appropriate to laugh at a person wracked by guilt over a justifiable homicide. If our thought experiments are meant to inform reflective moral deliberation, or to model the features of real-world moral dilemmas, then should we really be so irreverent toward our ultimate subject matter? The worry is that our practice of constructing funny thought experiments has caused us to become desensitized to the real human suffering we claim to be studying.

    One response to this worry is that we are simply engaged in gallows humor. Emergency room physicians talk about this phenomenon often. When you are confronted with pain and death every day, and when inevitably people will die in your care, some levity may be necessary to keep yourself functioning. Physicians make jokes about their patients, sometimes even about their patients’ suffering, and perhaps this is just a psychological necessity (though extreme instances give pause to even the most hardened medics [WARNING: this link may be triggering to victims of sexual assault]).

    But this can’t be the right justification for moral philosophers. We don’t actually watch people suffer and die right in front of us, and certainly not under our care. Our professional experience of dying is a pale imitation of what physicians experience.

    However, there may be something to the parallel with medical gallows humor. What moral philosophers are intimately familiar with is the absurdity of human life and choice. It is absurd, the existentialists will remind us, that we invest so much meaning in the lives and the deaths of tiny beings dangling from a vast chain of eternal galactic causation. Yet we do, of course, see our lives as meaningful – and so it is absurd that our meaningful lives can be ended by things that do not matter. People are killed by trolleys. People die hang-gliding. Human pain and mortality are not produced exclusively by wrenching sacrificial choice. Sometimes a three-cent bolt comes loose, sometimes the insulation peels off the wires, sometimes a pebble is in just the wrong place on the bike path – and then a meaningful human life ends with no meaning at all.

    The moral philosopher is responsible for being reflectively aware of the ultimate limits of human life. We do not face concrete instances of death and suffering as physicians must, but we do confront the abstract reality of human limitation, with its inevitable implication of our own personal vulnerability. Perhaps this is a professional hazard of moral philosophy; we are not in a business that allows us to simply look away from unpleasant ultimate realities. Perhaps all philosophers must find some way to sublimate their necessary awareness of life’s fragility. Some bury it under anodyne logical formalism. Others lean into the absurd, mocking death’s dominion over their thought experiment characters – and so, over themselves. Perhaps the laughter of ethicists is not irreverence, but the unyielding desire to find human joy even in the contemplation of human misery.

    Thanks to Tyler Doggett and William Ruddick for helping me think through how to express this idea.

    image credit: 'Tracks' by Clint Losee

    Wednesday, April 08, 2015

    Blogging and Philosophical Cognition

    Yesterday or today, my blog got its three millionth pageview since its launch in 2006. (Cheers!) And at the Pacific APA last week, Nancy Cartwright celebrated "short fat tangled" arguments over "tall skinny neat" arguments. (Cheers again!)

    To see how these two ideas are related, consider this picture of Legolas and his friend Gimli Cartwright. (Note the arguments near their heads. Click to enlarge if desired.) [modified from image source]

    Legolas: tall, lean, tidy! His argument takes you straight like an arrowshot all the way from A to H! All the way from the fundamental nature of consciousness to the inevitability of Napoleon. (Yes, I'm looking at you, Georg Wilhelm Friedrich.) All the way from seven abstract Axioms to Proposition V.42, "it is because we enjoy blessedness that we are able to keep our lusts in check". (Sorry, Baruch, I wish I were more convinced.)

    Gimli: short, fat, knotty! His argument only takes you from versions of A to B. But it does it three ways, so that if one argument fails, the others remain. It does so without need of a string of possibly dubious intermediate claims. And finally, the different premises lend tangly sideways support to each other: A2 supports A1, A1 supports A3, A3 supports A2. I think of Mozi's dozen arguments for impartial concern or Sextus's many modes of skepticism.

    In areas of mathematics, tall arguments can work -- maybe the proof of Fermat's last theorem is one -- long and complicated, but apparently sound. (Not that I would be any authority.) When each step is unshakeably secure, tall arguments go through. But philosophy tends not to be like that.

    The human mind is great at determining an object's shape from its shading. The human mind is great at interpreting a stream of incoming sound as a sly dig at someone's character. The human mind is stupendously horrible at determining the soundness of philosophical arguments, and also at determining the soundness of most individual stages within philosophical arguments. Tall, skinny philosophical arguments -- this was Cartwright's point -- will almost inevitably topple.

    Individual blog posts are short. They are, I think, just about the right size for human philosophical cognition: 500-1000 words, enough to put some flesh on an idea, making it vivid (pure philosophical abstractions being almost impossible to evaluate for multiple reasons), enough to make one or maybe two novel turns or connections, but short enough that the reader can get to the end without having lost track of the path there.

    In the aggregate, blog posts are fat and tangled: Multiple posts can get at the same general conclusion from diverse angles. Multiple posts can lend sideways support to each other. I offer, as an example, my many posts skeptical of philosophical expertise (of which this is one): e.g., here, here, here, here, here, here.

    I have come to think that philosophical essays, too, often benefit from being written almost like a series of blog posts: several shortish sections, each of which can stand semi-independently and which in aggregate lead the reader in a single general direction. This has become my metaphilosophy of essay writing, exemplified in "The Crazyist Metaphysics of Mind" and "1% Skepticism".

    Of course there's also something to be said for Legolas -- for shooting your arrow at an orc halfway across the plain rather than waiting for it to reach your axe -- as long as you have a realistically low credence that you will hit the mark.

    Wednesday, February 25, 2015

    Depressive Thinking Styles and Philosophy

    Recently I read two interesting pieces that I'd like to connect with each other. One is Peter Railton's Dewey Lecture to the American Philosophical Association, in which he describes his history of depression. The other is Oliver Sacks's New York Times column about facing his own imminent death.

    One of the inspiring things about Sacks's work is that he shows how people with (usually neurological) disabilities can lead productive, interesting, happy lives incorporating their disabilities and often even turning aspects of those disabilities into assets. (In his recent column, Sacks relates how imminent death has helped give him focus and perspective.) It has also always struck me that depression -- not only major, clinical depression but perhaps even more so subclinical depressive thinking styles -- is common among philosophers. (For an informal poll, see Leiter's latest.) I wonder if this prevalence of depression among philosophers is non-accidental. I wonder whether perhaps the thinking styles characteristic of mild depression can become, Sacks-style, an asset for one's work as a philosopher.

    Here's the thought (suggested to me first by John Fischer): Among the non-depressed, there's a tendency toward glib self-confidence in one's theoretical views. (On positive illusions in general among the non-depressed see this classic article.) Normally, conscious human reasoning works like this: First, you find yourself intuitively drawn to Position A. Second, you rummage around for some seemingly good argument or consideration in favor of Position A. Finally, you relax into the comfortable feeling that you've got it figured out. No need to think more about it! (See Kahneman, Haidt, etc.)

    Depressive thinking styles are, perhaps, the opposite of this blithe and easy self-confidence. People with mild depression will tend, I suspect, to be less easily satisfied with their first thought, at least on matters of importance to them. Before taking a public stand, they might spend more time imagining critics attacking Position A, and how they might respond. Inclined toward self-doubt, they might be more likely to check and recheck their arguments with anxious care, more carefully weigh up the pros and cons, worry that their initial impressions are off-base or too simple, discard the less-than-perfect, worry that there are important objections that they haven't yet considered. Although one needn't be inclined toward depression to reflect in this manner, I suspect that this self-doubting style will tend to come more naturally to those with mild to moderate depressive tendencies, deepening their thought about the topic at hand.

    I don't want to downplay the seriousness of depression, its often negative consequences for one's life including often for one's academic career, and the counterproductive nature of repetitive dysphoric rumination (see here and here), which is probably a different cognitive process than the kind of self-critical reflection that I'm hypothesizing here to be its correlate and cousin. [Update, Feb. 26: I want to emphasize the qualifications of that previous sentence. I am not endorsing the counterproductive thinking styles of severe, acute depression. See also Dirk Koppelberg's comment below and my reply.] However, I do suspect that mildly depressive thinking styles can be recruited toward philosophical goals and, if managed correctly, can fit into, and even benefit, one's philosophical work. And among academic disciplines, philosophy in particular might be well-suited for people who tend toward this style of thought, since philosophy seems to be proportionately less demanding than many other disciplines in tasks that benefit from confident, high-energy extraversion (such as laboratory management and people skills) and proportionately more demanding of careful consideration of the pros and cons of complex, abstract arguments and of precise ways of formulating positions to shield them from critique.

    Related posts:
    Depression and Philosophy (July 28, 2006)
    SEP Citation Analysis Continued: Jewish, Non-Anglophone, Queer, and Disabled Philosophers (August 14, 2014)

    Update April 23:

    The full-length circulating draft is now up on my academic website.

    Monday, June 23, 2014

    The Calibration View of Moral Reflection

    Oh, when the saints go marching in
    Oh, when the saints go marching in
    Lord, I want to be in that number
    When the saints go marching in.
    No. No you don't, Louis. Not really.

    If you want to be a saint, dear reader, or the secular equivalent, then you know what to do: Abandon those selfish pleasures, give your life over to the best cause you know (or if not a single great cause then a multitude of small ones) -- all your money, all your time. Maybe you'll misfire, but at least we'll see you trying. But I don't think we see you trying.

    Closer to what you really want, I suspect, is this: Grab whatever pleasures you can here on Earth consistent with just squeaking through the pearly gates. More secularly: Be good enough to meet some threshold, but not better, not a full-on saint, not at the cost of your cappuccino and car and easy Sundays. Aim to be just a little bit better, maybe, in your own estimation, than your neighbor.

    Here's where philosophical moral reflection can come in very handy!

    As regular readers will know, Joshua Rust and I have done a number of studies -- eighteen different measures in all -- consistently finding that professors of ethics behave no morally better than do socially similar comparison groups. These findings create a challenge for what we call the booster view of philosophical moral reflection. On the booster view, philosophical moral reflection reveals moral truths, which the person is then motivated to act on, thereby becoming a better person. Versions of the booster view were common in both the Eastern and the Western philosophical traditions until the 19th century, at least as a normative aim for the discipline: From Confucius and Socrates through at least Wang Yangming and Kant, philosophy done right was held to be morally improving.

    Now, there are a variety of ways to duck this conclusion: Maybe philosophical ethics neither does nor should have any practical relevance to the philosophers expert in it; or maybe most ethics professors are actually philosophizing badly; or.... But what I'll call the calibration view is, I think, among the more interesting possibilities. On the calibration view, the proper role of philosophical moral theorizing is not moral self-improvement but rather more precisely targeting the (possibly quite mediocre) moral level you're aiming for. This could often involve consciously deciding to act morally worse.

    Consider moral licensing in social psychology and behavioral economics. When people do a good deed, they then seem to behave worse in follow-up measures than people who had no opportunity to do a good deed first. One possible explanation is something like calibration: You want to be only so good and not more. An unusually good deed inflates you past your moral target; you can adjust back down by acting a bit jerkishly later.

    Why engage in philosophical moral reflection, then? To see if you're on target. Are you acting more jerkishly than you'd like? Seems worth figuring out. Or maybe, instead, are you behaving too much like a sweetheart/sucker/do-gooder, when really you would feel okay taking more goodies for yourself? That could be worth figuring out, too. Do I really need to give X amount to charity to be the not-too-bad person I'd like to think I am? Could I maybe even give less? Do I really need to serve again on such-and-such worthwhile-but-boring committee, or to be a vegetarian, or do such-and-such chore rather than pushing it off on my wife? Sometimes yes, sometimes no. When the answer is no, my applied philosophical moral insight will lead me to behave morally worse than I otherwise would have, in full knowledge that this is what I'm doing -- not because I'm a skeptic about morality but because I have a clear-eyed vision of how to achieve exactly my own low moral standards and nothing more.

    If this is right, then two further things might follow.

    First, if calibration is relative to peers rather than absolute, then embracing more stringent moral norms might not lead to improvements in moral behavior in line with those more stringent norms. If one's peers aren't living up to those standards, one is no worse relative to them if one also declines to do so. This could explain the cheeseburger ethicist phenomenon -- the phenomenon of ethicists tending to embrace stringent moral norms (such as that eating meat is morally bad) while not being especially prone to act in accord with those stringent norms.

    Second, if one is skilled at self-serving rationalization, then attempts at calibration might tend to misfire toward the low side, leading one on average away from morality. The motivated, toxic rationalizer can deploy her philosophical tools to falsely convince herself that although X would be morally good (e.g., not blowing off responsibilities, lending a helping hand) it's really not required to meet the mediocre standards she sets herself and the mediocre behavior she sees in her peers. But in fact, she's fooling herself and going even lower than she thinks. When professional ethicists behave in crappy ways, such mis-aimed low-calibration rationalizing is, I suspect, often exactly what's going on.

    Thursday, March 07, 2013

    Against the One True Kant

    I start with two premises:
    Premise 1: All human beings are bad at philosophy.
    Premise 2: Kant was a human being.
    Therefore, um, uh, let's see....

    It is sufficient for a person's being "bad at philosophy" in the relevant sense that when that person tries to build an ambitious, elaborate philosophical system that addresses the great, enduring questions of metaphysics and epistemology, there will be some serious errors in the system, as a result of the person's cognitive shortcomings (e.g., invisible presuppositions, equivocal arguments). It is very easy to be bad at philosophy in this sense, and we have excellent empirical evidence for Premise 1. Premise 2 also seems well attested. Further supporting evidence for the conclusion comes from the boneheaded things Kant sometimes says when he is speaking clearly and concretely rather than in a difficult-to-evaluate haze of abstracta.

    Here's a vision, then, of Kant:

    Kant has a brilliant sense of what it would be very cool to be able to prove -- or at least a brilliant sense of what lots of philosophers think it would be very cool to be able to prove. For example, it would be very cool to (non-question-beggingly) prove that the external world exists. It would be very cool to prove that immorality is irrational. Kant also has some clever and creative pieces of argumentation that seem like promising elements in potential proofs of this sort. And finally, Kant has an intimidating aura of authority. He creates a fog of jargon through which the pieces of argument appealingly glint, in their coolness and cleverosity. And, voila, he asserts success. If you fail to understand, the fault seems to be yours.

    Maybe this sounds bad. But the thing is: There really are interesting pieces of argument in there! It's just that they don't all fit together. There are gaps in the arguments, and seeming inconsistencies, and different possibilities for the meaning of the jargon. Because these gaps, seeming inconsistencies, and possibilities might be variously resolved, there need be no one right interpretation of Kant. We can be Kant interpretation pluralists. Although there are clearly bad ways of reading Kant (e.g., as an unreconstructed Lockean), there might be no determinately best way, but rather a variety of attractive ways with competing costs and benefits.

    Interpret the terms this way and fill in the gaps that way, and find a Kant who thinks that there's stuff out there independent of our minds that causes our sensory experiences. Interpret the terms this other way and fill in the gaps that other way, and find a Kant who regards such stuff as merely an invention of our minds. Yet another Kant holds that there might be such stuff, but we can't prove that there is. Call these Kant Model 1, Kant Model 2, and Kant Model 3. There will also be Kant Model 4, Kant Model 1a, Kant Model 5f, etc. Similarly across the range of Kantian issues.

    But surely only one of these things is what Kant really thought? No, I wouldn't be sure of that at all! When our terms admit multiple interpretations, when our arguments are gappy and our dispositions unstable, the contents of both our occurrent thoughts and our dispositional opinions can be muddy. When I say, "the only really important thing is to be happy" or "all men are created equal", what exactly do I mean? There might be no exactness about it! (See my dispositional approach to attitudes.) This is as true of philosophers as of anyone else -- and, I would argue, as true of the mortal Kant as of any other philosopher.

    But even if Kant did have absolutely specific private opinions on all the topics of his writings, it doesn't matter. The philosophy of Kant is not that. Maybe in the secret grotto of his soul he was an orthodox Thomist and he invented the critical philosophy only as a joke to amuse his manservant Martin Lampe. This would not render the Critique of Pure Reason a defense of Thomism. Kant's philosophy is embodied in the words he left behind, not in his private opinions about those words. And those words might not, very likely do not, determinately resolve into one single self-consistent philosophical system.

    Historians of philosophy can and should fight about whether to treat Kant Model 2b, Kant Model 5f, or instead some other Kant, as the canonical Kant. But those of us who don't make Kant interpretation our profession should have some liberty to choose among the Kants, as best suits our philosophical purposes -- as long as we bear in mind that Kant Model 2b is no more the One Kant than Hamlet Interpretation 2b is the One Hamlet.