Tuesday, July 28, 2009

Professors appear to think that voting regularly in public elections is about as morally good as donating 10% of one's income to charity. This seems, anyway, to be suggested by the results of a survey Josh Rust and I sent earlier this year to hundreds of U.S. professors, ethicists and non-ethicists, both inside and outside of philosophy. (The survey is also described in a couple of previous posts.)
In one part of the survey, we asked professors to rate various actions on a nine-point scale from "very morally bad" through "morally neutral" to "very morally good". We expected some actions to be rated negatively (e.g., "not consistently responding to student emails"), but there were three we expected to be rated positively by most respondents: "regularly voting in public elections", "regularly donating blood", and "donating 10% of one's income to charity". Later in the survey, we asked related questions about the professors' own behavior, allowing us to compare expressed normative attitudes with self-described behavior. (In some cases we also have direct measures of behavior to compare with the self-reports.)
Looking at the data today, I found it striking how strongly the respondents seemed to feel about voting. Overall, 87.9% of the professors characterized voting in public elections as morally good. Only 12.0% said voting was morally neutral, and a lonely single professor (1 of the 569 respondents or 0.2%) characterized it as morally bad. That's a pretty strong consensus. Political philosophers were no more cynical about voting than the others, with 84.5% responding on the positive side of the scale (a difference well within the range of statistical chance variation). But I was struck, even more than by the percentage who responded on the morally good side of our scale, by the high value they seemed to put on voting. To appreciate this, we need to compare the voting question with the two other questions I mentioned.
On our 1 to 9 scale (with 5 "morally neutral" and 9 "very morally good"), the mean rating of "regularly donating blood" was 6.81, and the mean rating of "donating 10% of one's income to charity" was 7.36. "Regularly voting in public elections" came in just a smidgen above the second of those, at 7.37 (the difference being within statistical chance, of course).
I think we can assume that most people think it's fairly praiseworthy to donate 10% of one's income to charity (for the average professor, this would be about $8,000). Professors seem to be saying that voting is just about equally good. Someone who regularly donates blood can probably count at least one saved life to her credit; voting seems to be rated considerably better than that. (Of course, regularly donating 10% of one's income to charity probably entails saving even more lives, if one gives to life-saving charities, so it makes a kind of utilitarian sense to rate the money donation as better than the blood donation.)
Another measure of the importance professors seem to invest in voting is the rate at which they report doing it. Among professors who described themselves as U.S. citizens eligible to vote, fully 97.8% said they had voted in the November 2008 U.S. Presidential election. (Whether this claim of near-perfect participation is true remains to be seen. We hope to get some data on this shortly.)
Now is it just crazy to say that voting is as morally good as giving 10% of one's income to charity? That was my first reaction. Giving that much to charity seems uncommon to me and highly admirable, while voting... yeah, it's good to do, of course, but not that good. One thought, however -- adapted from Derek Parfit -- gives me pause about that easy assessment. In the U.S. 2008 Presidential election, I'd have said the world would be in the ballpark of $1 trillion better off with one of the candidates than the other. (Just consider the financial and human costs at stake in the Iraq war and the U.S. bank bailouts, for starters.) Although my vote, being only one of about 100,000,000 cast, probably had only about a 1/100,000,000 chance of tilting the election, multiplying that tiny probability by a round trillion leaves a $10,000 expected public benefit from my voting -- not so far from 10% of my salary.
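For concreteness, here is that back-of-the-envelope calculation spelled out in a few lines of Python -- a minimal sketch in which every number is a rough guess from the paragraph above, not a measured quantity:

```python
# Parfit-style expected value of a single vote, on rough guesses only.
stakes = 1e12                  # guessed welfare difference between candidates, in dollars
total_votes = 100_000_000      # approximate number of votes cast in 2008
p_decisive = 1 / total_votes   # crude estimate of the chance one vote tilts the outcome

expected_benefit = stakes * p_decisive
print(f"Expected public benefit of one vote: ${expected_benefit:,.0f}")
# -> Expected public benefit of one vote: $10,000
```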
Of course, that calculation is incredibly problematic in any number of ways. I don't stand behind it, but it helps loosen the grip of my previous intuition that of course it's morally better to donate 10% to charity than to vote.
Update July 29:
As Neil points out in the comments, in this post I seem to have abandoned my usual caution in inferring attitudes from expressions of attitudes. Right: Maybe professors don't think this at all. But I found it a striking result, taken at face value. If it's not to be taken at face value, we might ask: Why would so many professors, who really think donating 10% of income is morally better than voting, mark a bubble more toward the "very morally good" end of the scale in response to the voting question than in response to the donation question? Moral self-defensiveness, perhaps, on the assumption (borne out elsewhere in the data) that few of them themselves donate 10%...?
Tuesday, July 21, 2009
On Relying on Self-Report: Happiness and Charity
To get published in a top venue in sociology or social or personality psychology, one must be careful about many things -- but not about the accuracy of self-report as a measure of behavior or personality. Concerns about the accuracy of self-report tend to receive merely a token nod, after which they are completely ignored. This drives me nuts.
(Before I go further, let me emphasize that the problem here -- what I see as a problem -- is not universal: Some social psychologists -- Timothy Wilson, Oliver John, and Simine Vazire for example -- are appropriately wary of self-report.)
Although the problem is by no means confined to popular books, two popular books have been irking me acutely in this regard: The How of Happiness, by my UC Riverside colleague Sonja Lyubomirsky, and Who Really Cares, by Arthur Brooks (who has a named chair in Business and Government Policy at Syracuse).
The typical -- but not universal -- methodology in work by Lyubomirsky and those she cites is this: (A1.) Ask some people how happy (or satisfied, etc.) they are. (A2.) Try some intervention. (A3.) Ask them again how happy they are. Or: (B1.) Randomly assign some people to two or three groups, one of which receives the key intervention. (B2.) Ask the people in the different groups how happy they are. If people report greater happiness in A3 than in A1, conclude that the intervention increases happiness. If people in the intervention group report greater happiness in B2 than people in the other groups, likewise conclude that the intervention increases happiness.
This makes me pull out my hair. (Sorry, Sonja!) What is clear is that, in a context in which people know they are being studied, the intervention increases reports of happiness. Whether it actually increases happiness is a completely different matter. If the intervention is obviously intended to increase happiness, participants may well report more happiness post-intervention simply to conform to their own expectations, or because they endorse a theory on which the intervention should increase happiness, or because they've invested time in the intervention procedure and they'd prefer not to think of their time as wasted, or for any of a number of other reasons. Participants might think something like, "I reported a happiness level of 3 before, and now that I've done this intervention I should report 4" -- not necessarily in so many words.
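To make the worry concrete, here is a toy simulation -- my own illustration, not anything from Lyubomirsky's work, with all numbers arbitrary assumptions. True happiness is held fixed, participants merely inflate their post-intervention reports a little, and the pre/post design nonetheless registers a statistically significant "improvement":

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100  # hypothetical participants

# True happiness is identical before and after: the intervention does nothing.
true_happiness = rng.normal(5, 1, n)

# Self-reports = truth + noise; post-intervention reports get a small
# demand-effect bump ("I did the intervention, so I should report 4, not 3").
report_pre = true_happiness + rng.normal(0, 1, n)
report_post = true_happiness + rng.normal(0, 1, n) + 0.5  # pure report inflation

t, p = stats.ttest_rel(report_post, report_pre)
print(f"Mean reported 'gain': {np.mean(report_post - report_pre):.2f}, p = {p:.4f}")
# The design certifies a significant gain even though nothing real changed.
```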
As Dan Haybron has emphasized, the vast majority of the U.S. population describe themselves as happy (despite our high rate of depression and anger problems), and self-reports of happiness are probably driven less by accurate perception of one's level of happiness than by factors like the need to see and to portray oneself as a happy person (otherwise, isn't one something of a failure?). My own background assumption, in looking at people's self-reports of happiness, life-satisfaction, and the like, is that those reports are driven primarily by the need to perceive oneself a certain way, by image management, by contextual factors, by one's own theories of happiness, and by pressure to conform to perceived experimenter expectations. Perhaps there's a little something real underneath, too -- but not nearly enough, I think, to justify conclusions about the positive effects of interventions from facts about differences in self-report.
In Who Really Cares, Brooks aims to determine what sorts of people give the most to charity. Brooks bases his conclusions almost (but not quite) entirely on self-reports of charitable giving in large survey studies. His main finding is that self-described political conservatives report giving more to charity (even excluding religious charities) than do self-described political liberals. What he concludes -- as though this were unproblematically the same thing -- is that conservatives give more to charity than liberals do. Now maybe they do; it wouldn't be entirely surprising, and he has a little bit of non-self-report evidence that seems to support that conclusion (though how assiduously he looked for counterevidence is another question). But I doubt that people have any especially accurate sense of how much they really give to charity (even after filling out IRS forms, for the minority who itemize charitable deductions), and even if they did have such a sense, I doubt it would be accurately reflected in self-reports on survey studies.
As with happiness, I suspect self-reports of charitable donation are driven at least as much by the need to perceive oneself, and to have others perceive one, a particular way as by real rates of charitable giving. Rather than assuming, as Brooks seems to, that political conservatives and political liberals are equally subject to such distortional demands in their self-reports and thus attributing differences in self-reported charity to actual differences in giving, it seems to me just as justified -- that is to say, hardly justified at all -- to assume that the real rates of charitable giving are the same and thus attribute differences in reported charity to differences in the degree of distortion in the self-descriptive statements of political conservatives and political liberals.
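The same point as a toy model -- again my own illustration, not Brooks's data, with the distortion factors pure assumptions. Both groups are constructed to give identical amounts in fact; one group merely exaggerates a bit more in self-report, and a survey difference appears anyway:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000  # hypothetical survey respondents per group

# Identical true annual giving in both groups, by construction.
true_giving_a = rng.exponential(1000, n)  # group A, dollars/year
true_giving_b = rng.exponential(1000, n)  # group B, dollars/year

# Group B inflates its self-reports somewhat more (differential distortion).
reported_a = true_giving_a * 1.10
reported_b = true_giving_b * 1.30

print(f"True means:     A = ${true_giving_a.mean():,.0f}, B = ${true_giving_b.mean():,.0f}")
print(f"Reported means: A = ${reported_a.mean():,.0f}, B = ${reported_b.mean():,.0f}")
# Reported giving differs between the groups even though real giving does not.
```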
Underneath sociologists' and social and personality psychologists' tendency to ignore the sources of distortion in self-report is this, I suspect: It's hard to get accurate, real-life measures of things like happiness and overall charitable giving. Such real-life measures will almost always themselves be only flawed and partial. In the face of an array of flawed options, it's tempting to choose the easiest of those options. Both the individual researcher and the research community as a whole then become invested in downplaying the shortcomings of the selected methods.
Tuesday, July 14, 2009
The Smallish Difference Between Belief and Desire
Surely this much at least is true: The belief that P is the case (say, that my illness is gone) and the desire that P be the case are very different mental states -- the possession of one without the other explaining much human dissatisfaction. Less cleanly distinct, however, are the desire that P (or for X) and the belief that P (or having X) would be good.
I don't insist that the desires and believings-good are utterly inseparable. Maybe we sometimes believe that things are good apathetically, without desiring them; surely we sometimes desire things that we don't believe are, all things considered, good. But I'm suspicious of the existence of utter apathy. And if believing good requires believing good all things considered, perhaps we should think genuine desiring, too, is desiring all things considered. Or, conversely, if we allow for conflicting and competing desires that pick up on individual desirable aspects of a thing or state of affairs, then perhaps we should also allow for conflicting and competing believings-good that track individual aspects -- believing that the desired object has a certain good quality (the very quality in virtue of which it is desired). With these considerations in mind, there may be no clear and indisputable case in which desiring and believing good come cleanly apart.
If the mind works by the manipulation of discrete representations with discrete functional roles – inner sentences, say, in the language of thought, with specific linguistic contents – then the desire that P and the belief that P would be good are surely different representational states, despite whatever difficulty there may be in prizing them apart. (Perhaps they’re closely causally related.) But if the best ontology of belief and desire, as I think, treats as basic the dispositional profiles associated with those states – that is, if mental states are best individuated in terms of how the people possessing those states are prone to act and react in various situations – and if dispositional profiles can overlap and be partly fulfilled, then there may be no sharp distinction between the desire that P and the belief that P would be good. The person who believes that Obama’s winning would be good and the person who wants Obama to win act and react – behaviorally, cognitively, emotionally – very similarly: Their dispositional profiles are much the same. The patterns of action and reaction characteristic of the two states largely overlap, even if they don’t do so completely.
This point of view casts in a very different light a variety of issues in philosophy of mind and action, such as the debate about whether beliefs can, by themselves, motivate action or whether they must be accompanied by desires; characterizations of belief and desire as having neatly different "directions of fit"; and functional architectures of the mind that turn centrally on the distinction between representations in the "belief box" and those in the "desire box".
Thursday, July 02, 2009
On Debunking V: The Final Chapter
(by guest blogger Tamler Sommers)
First, let me offer my thanks to Eric for giving me this opportunity and to everyone who commented on my posts. This was fun.
Since my latest post on debunking, I came across a paper called “Evolutionary Debunking Arguments” by Guy Kahane. (Forthcoming in Nous; you can find it on Philpapers.org.) Kahane mounts some careful and compelling criticisms of selective (“targeted”) debunking strategies and global debunking strategies in metaethics, and I strongly recommend the article to anyone interested in the topic. For my last post, I want to focus on a claim from Kahane’s paper that isn’t central to his broader thesis but relates to my earlier posts. Kahane argues that evolutionary debunking arguments (EDAs) implicitly assume an “objectivist account of evaluative discourse.” EDAs cannot apply to subjectivist theories because “subjectivist views claim that our ultimate evaluative concerns are the source of values; they are not themselves answerable to any independent evaluative facts. But if there is no attitude-independent truth for our attitudes to track, how could it make sense to worry whether these attitudes have their distal origins in a truth-tracking process?” (11)
I don’t think Kahane is right about this. Learning about the evolutionary or historical origins of our evaluative judgments can have an effect on those judgments—even for subjectivists. But we need to revise the description of EDAs as follows. Rather than ask whether our attitudes or intuitions have their origins in a truth-tracking process, we need to ask whether they have their origins in a process that we (subjectively) feel ought to bear on the judgments they are influencing.
Consider judgments about art. Imagine that Jack is a subjectivist about aesthetic evaluation. Ultimately, he thinks, there is no fact of the matter about whether a painting is beautiful. He sees a painting by an unknown artist and finds it magnificent. Later he learns that the painter skillfully employs a series of phallic symbols that trigger cognitive mechanisms which cause him to experience aesthetic appreciation. Would knowing this alter his judgment about the quality of the work? I can see two ways in which it might. First, his more general subjectivist ideas about the right way to evaluate works of art may rebel against cheap tricks like this for augmenting appreciation. He doesn’t feel that mechanisms that draw him unconsciously to phallic symbols ought to bear on his evaluation of a work of art. Second, learning this fact may have an effect on his visceral appreciation of the painting. (Now he sees a bunch of penises instead of a mountainous landscape.) In a real sense, then, his initial appreciation of the painting has been debunked.
So how might this work in the moral case? Imagine Jill is an ethical subjectivist who is about to vote on a new law that would legalize consensual incest relationships between siblings as long as they don’t produce children. Jill’s intuition is that incest is wrong. However, she has recently read articles that trace our intuitions about the wrongness of incest to disgust mechanisms that evolved in hominids to prevent genetic disorders. She knows that genetic disorders are not an issue in these kinds of cases, since the law stipulates that preventive measures must be taken. Her disgust, and therefore her intuition, are aimed at something that does not apply in this context. She feels, then, that her intuition ought not to bear on her final judgment. And so she discounts the intuition and defers to other values that permit consensual relationships that do not harm anyone else.
The general point here is that evolutionary or historical explanations of our intuitions can have an effect on our all-things-considered evaluative judgments even if we think those judgments are ultimately subjective. Knowing the origins and mechanisms behind our attitudes can result in judgments that more accurately reflect our core values. This seems like a proper goal of philosophical inquiry in areas where no objectivist analysis is available.