Thursday, June 24, 2010

Psychology of Philosophy: The Strong Program

Why do you endorse Platonism about numbers, or idealism, or skepticism, or consequentialism, or any other a priori justified (you think) philosophical opinion? I suggest that you look not to the philosophical truth of that view, with which you are somehow better in touch than those with whom you disagree. Rather, look to your own biases, personality, and background.

The "strong program" in sociology of science holds as a methodological precept that in explaining the rise and fall of scientific theories one may not appeal to the truth or falsity of those theories. So also, I recommend, in thinking about the origins of your own and others' philosophical views, you would do well not to think about the truth of those views. You would also do well not to think in other terms that imply success or insight, such as "seeing the weaknesses" in a predecessor's view, or "recognizing" troubles or advantages of a substantive, philosophical sort, or "proving" or "establishing" anything.

Where philosophy blurs into science or math, this constraint becomes blurry too. I say this because I think -- unlike the cartoon version of sociologists of science -- that in science and math, features of the world (or of mathematical structures) exert strong pressures on views. The farther toward a priori philosophy, the less the influence of such pressures; the farther toward concrete science or formal mathematics, the more.

No a priori claim is so patently false that no philosopher would endorse it. No constellation of views is so bizarre and self-contradictory that no philosopher would sign on to it, given the right background motivations and culturally given assumptions. If there are any a priori philosophical truths among the factors influencing our philosophical theorizing, their influence is modest.

Friday, June 18, 2010

Call for Papers: Workshop on Ethics and Mind at the University of Miami

... keynote speaker John Doris.
Submissions are invited for the 3rd annual Mind and Ethics [Graduate Student] Workshop at the University of Miami, November 20th-21st 2010. Proposals are due September 1st. We encourage submissions of finished works or works in progress (5,000-10,000 words) addressing issues at the intersection of philosophy of mind and ethics. Please send papers prepared for blind review to ethicsandmind@gmail.com.

Thursday, June 17, 2010

The Winner of the Sound-Color Qualia Inversion Contest

... is Edmond Wright.

The aim of the contest was to find a published discussion of the possibility of sound-color qualia inversion -- that is, the possibility of someone practically indistinguishable from normal people in language and behavior, but who has auditory experiences when stimulated by light and visual experiences when stimulated by sonic vibrations. Several satisfactory citations were submitted in the comments thread and by email, so the contest rules required that I select what I judged to be the best example (as of the June 14 deadline). This was, I thought, Edmond Wright's article in Synthese (1993, vol. 97, pp. 365-382, esp. pp. 370-371), which Wright himself mentioned in an email to me. Per the terms of the contest, I owe Wright a drink of his choice next time we are in the same town.

Honorable mention goes to Jeremy Goodman, both for being the first to mention Adam Wager's excellent 1999 article in Philosophical Psychology (vol. 12, pp. 263-281, esp. p. 269) and for his insightful comments and references throughout the discussion thread.

If you saw the original post announcing the contest, you may recall that the contest was prompted by Saul Kripke's claim, made twice in two oral presentations at U.C. Riverside, that no philosopher had ever suggested the possibility of sound-color qualia inversion. If only it were always so easy to definitively refute what Kripke says!

Monday, June 14, 2010

Do Metaethicists Really Behave Worse Than Other Ethicists?

On Friday, I presented data suggesting -- contrary to common opinion -- that deontologists behave no worse than virtue ethicists and consequentialists. Another opinion I've often heard from philosophers is that metaethicists -- that is, philosophers who focus on the most abstract general questions about ethics (such as whether there are moral truths at all and if so what their metaphysical grounding is) -- behave, on average, less well than do other ethicists, perhaps especially applied ethicists. The same data set provides evidence on this question too.

At the center of the dataset is a survey Josh Rust and I sent to hundreds of professional philosophers. Near the end of the survey we asked respondents, "If an ethics-related area is among your specializations, which of the following best reflects the level of abstraction at which you tend to consider ethical issues? (check all that apply)". Response options were "metaethics", "normative ethics" [i.e., theoretical debates about deontology vs. consequentialism, etc., an intermediate level of abstraction], "applied ethics", and "no ethics-related area among my specializations". 28% of philosophers claimed a specialization in metaethics, 45% in normative ethics, 32% in applied ethics (these three groups overlapping, of course), and 36% claimed no such specialization. Although there was a trend for the non-ethicists to be more male (78% vs. 72%), this was not statistically significant given our sample size (361 respondents). Non-ethicist respondents did tend to be a little younger (mean birth year 1958 vs. 1954). We saw no gender or age differences by level of abstraction within ethics.
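To illustrate why a 78% vs. 72% split need not be statistically significant at this sample size, here is a rough two-proportion z-test sketch. The group sizes below are back-of-the-envelope figures derived from the reported percentages (36% of 361 respondents as non-ethicists), not exact counts from the study:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test; returns (z, p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal, via the complementary error function
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# Rough figures: ~130 non-ethicists, 78% male (~101);
# ~231 ethicist respondents, 72% male (~166).
z, p = two_proportion_z(101, 130, 166, 231)
print(round(z, 2), round(p, 2))  # a modest z; p well above .05
```

With these approximate counts the p-value comes out above .2, so a 6-point gender gap is indeed well within what chance would produce in a sample of this size.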

Looking at our questions about respondents' opinions on various applied ethical issues: applied ethicists seemed to think it morally better to vote regularly than did the other groups; ethicists in general thought it morally worse not to keep in at least monthly telephone or face-to-face contact with one's mother than non-ethicists did; and ethicists asserted more of a duty to give to charity (13% of ethicists said it was not the case that the typical professor should donate to charity, vs. 24% of non-ethicists). We saw no detectable differences among the groups on the morality of belonging to one's main disciplinary society (for philosophers, the APA), eating the meat of mammals, being an organ or blood donor, or responding to student emails.

The groups did not differ in their self-reported rates of dues-paying membership in the APA, but looking directly at APA membership lists, applied ethicists were less likely than the other groups actually to be members (61% vs. 75%). Metaethicists reported voting in fewer public elections than did applied ethicists, and actual voting data obtained from public records appeared to bear out that trend: an estimated 1.05 votes/year for metaethicists vs. 1.26 for applied ethicists. (I should clarify that Josh and I "de-identified" the survey data, so we cannot make inferences about particular individuals' survey responses.) Metaethicists also reported eating more mammal meat than did applied ethicists (a mean of 5.2 vs. 3.2 meals/week, with 50% vs. 31% reporting having eaten the meat of a mammal at the last evening meal). Metaethicists self-reported giving less to charity (a mean of 3.9% of income vs. 5.2%, excluding one applied ethicist who claimed to have given 500% of his income to charity), and metaethicists appeared to be less motivated by the survey's charity incentive. (Half of the survey recipients received a charity incentive: a promise by us, which we did fulfill, to give $10 to a charity of the respondent's choice among six major charities in return for the completed survey. 45% of metaethicists' returned surveys had the charity incentive vs. 57% of applied ethicists'.) We found no differences in self-reported organ or blood donation, self-reported or actual responsiveness to undergraduate emails, overall rate of suspicious responding to the survey, or frequency of contact with one's mother (if living).

Obviously, there's room for difference of opinion about these measures, but my interpretation of the data is that they tend to weakly confirm the hypothesis that metaethicists behave not quite as morally well as do applied ethicists.

Friday, June 11, 2010

Do Kantians Really Behave Worse Than Other Ethicists?

As I noted in a previous post, Kantian ethicists seem to have a reputation among philosophers for behaving worse than other sorts of ethicists. But who has any systematic empirical data on this? Well, Josh Rust and I do!

Our data are based on a questionnaire Josh and I sent to hundreds of philosophers last year (more description and other results here, here, here, and here). The questionnaire asked first about the goodness or badness of theft from a friend, paying membership dues to the APA, voting in public elections, having regular conversations with your mother, eating the meat of mammals, being an organ donor, being a blood donor, responding to undergraduate emails, and donating to charity. Then, second, we solicited self-reports of these same behaviors (except for theft). In some cases we also have direct data about actual behavior. Near the end of the questionnaire, we asked about normative ethical view -- that is, about the philosopher's general theoretical approach to ethics. The response options were consequentialist, deontologist, virtue ethicist, skeptical, or no settled position.

(If you're not familiar with those terms: Consequentialists think, roughly, that one should act so as to produce the best expected consequences for everyone. Deontologists think, roughly, that one should act according to certain moral rules, such as don't lie or don't kill innocent people, even if you know that abiding by those rules won't produce the best consequences overall. Kant is currently the most eminent deontologist. Virtue ethicists think, roughly, that ethics is about having moral virtues like honesty, courage, and kindness.)

First, let's look at the general distribution of theoretical approach. Ethicists were more likely to be deontologists (29%) than consequentialists (10%), whereas non-ethicists were split about equally between those two positions (17% for both). One might wonder about causal direction here: Does seriously studying ethics tend to lead philosophers to abandon consequentialism? Or are consequentialists less likely to become ethicists? Or...? Rounding out the answers: 30% of ethicist and 27% of non-ethicist respondents espoused virtue ethics, 29% of ethicists and 33% of non-ethicists said they had no settled position, and 2% of ethicists and 6% of non-ethicists expressed skepticism. We detected no differences by age or gender.

There were two questions where I thought consequentialists, deontologists, and virtue ethicists would differ substantially in their normative opinions -- the meat question and the charity question. Consequentialists have a reputation for emphasizing the importance of charitable donation and vegetarianism (most famously Peter Singer). Surprisingly to me, however, the groups showed no difference in moral opinion on these questions. Only 52% of consequentialists rated "regularly eating the meat of mammals" toward the morally bad end of the scale, compared to 58% of deontologists and 53% of virtue ethicists -- well within statistical chance. Likewise, the mean percentage of income that respondents said that the typical professor should donate to charity was 7.5% (for consequentialists) vs. 7.4% (for both virtue ethicists and deontologists), again well within chance. Apparently, Singer's views aren't representative of consequentialists generally.

Virtue ethicists were less likely than others to say that it's morally bad not to be an organ donor (48% vs. 65% for consequentialists and deontologists) and that it's morally good to regularly donate blood (81% vs. 94% of consequentialists and 91% of deontologists). Virtue ethicists were also least likely to say it was good to pay membership dues to support one's main disciplinary society (59% vs. 80% of consequentialists and 74% of deontologists). Responses to the other normative questions seemed to be about the same for all groups.

Looking at self-reported and actual behavior: Virtue ethicists were least likely to belong to the APA (philosophers' main disciplinary society), based on our examination of the membership list: 58% vs. 65% of consequentialists and 74% of deontologists. However, they were just as likely to report being members (74% vs. 67% and 77%). Virtue ethicists were also least likely to report having an organ donor indicator on their driver's license (58% vs. 78% and 70%). On the other hand, the deontologists were the ones who reported the longest lapse of time since their last blood donation (for eligible donors): 1994 vs. 2001 for consequentialists and 2000 for virtue ethicists. (Does this have anything to do with Kant's odd views on the matter?) And consequentialists appeared to be more responsive to the survey's charity incentive: Half of the questionnaires went out with a promise that we would donate $10 to a charity of the respondent's choice (among six major charities) when the survey was returned. 67% of the consequentialists' returned surveys were charity-incentive surveys, compared to 46% of deontologists' and 51% of virtue ethicists'.

We found no other differences in self-reported behavior, or in actual measured behavior -- including voting rate in public elections (based on actual voting records), responsiveness to sham undergraduate emails we had sent, or detected honesty or dishonesty in their survey responses.

In sum, we found no evidence that deontologists -- many or most of whom are presumably Kantians, broadly speaking -- behave any worse than professors who favor other normative theories.

Friday, June 04, 2010

Ethicists' vs Non-Ethicists' Honesty in Questionnaire Responses

In 2009, Josh Rust and I ran a survey asking hundreds of ethicists, non-ethicist philosophers, and comparison professors, first, a variety of questions about their views on ethical matters (e.g., vegetarianism, voting, staying in touch with one's mother) and second, about their own personal behavior on the same matters. (No identifying information was associated with the responses, of course.) Some previous posts on the survey are here and here and here.

Now one of the cool things about this study is that in some cases we also have data on actual behavior -- thus enabling a three-way comparison of normative attitude, self-reported behavior, and actual behavior (as you'll see in the links above).

Thus, a measure of honesty falls out of the questionnaire: How well are the self-reports related to the actual behavior? Actually, we have two different types of measures of honesty: For most of the topics on which we didn't have direct behavioral data we asked two behavioral questions, one vague and easy to fudge, the other concrete and more difficult to fudge without explicit deceptive intent. So, for example, we asked at how many meals per week the respondent ate the meat of a mammal (the fudgy question) and also whether she ate the meat of a mammal at her last evening meal, not including snacks (the concrete question). If, hypothetically, half the respondents who reported eating mammal meat at 3 or fewer meals per week also reported having eaten it at the last evening meal, we could infer that respondents in that group were fudging their answers.
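The arithmetic behind that inference can be made concrete with a toy calculation (all numbers hypothetical, following the example above): someone who truly eats mammal meat at 3 or fewer meals per week can have eaten it at no more than 3 of her roughly 7 evening meals, so the group's "ate it at the last evening meal" rate should top out around 3/7, or about 43%:

```python
# Toy illustration of the fudge check (all numbers hypothetical).
# Even in the worst case -- all of a respondent's <=3 weekly mammal-meat
# meals being evening meals -- the expected "yes, at last evening meal"
# rate for the group is at most 3 out of ~7 evening meals per week.
meals_reported_per_week = 3
evening_meals_per_week = 7
expected_yes_rate_max = meals_reported_per_week / evening_meals_per_week  # ~0.43

observed_yes_rate = 0.50  # the hypothetical aggregate rate from the example

# At the level of the group, an observed rate above the ceiling suggests fudging.
suspicious = observed_yes_rate > expected_yes_rate_max
print(suspicious)  # prints True
```

Note that this kind of check only licenses an inference about the group in aggregate; any individual respondent's pair of answers could be accurate.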

We created a composite of six types of suspicious (or demonstrably false) responses and compared the rates of suspicious responding between the groups. The measures (seven are listed below; the APA measure is set aside from this post's composite) were:
* comparison of self-reported number of votes since the year 2000 with actual voting records;

* comparison of claims to have voted in the Nov. 2008 U.S. general Presidential election with actual voting records;

* comparison of claims of 100% or 95% responsiveness to undergraduate emails with responsiveness to emails that we had sent that were designed to look as though they were from undergraduates (see here, and yes we got IRB approval);

* for philosophers only, comparison of claims of membership in the American Philosophical Association with membership records (excluded from the analysis in this post, but discussed here);

* comparison of a general claim about how often the respondent talks with her mother (if living) with a specific claim about date of last conversation;

* comparison of a general claim about how often the respondent donates blood (if eligible) with a specific claim about date of last donation;

* comparison of a general claim about meals per week at which the respondent eats the meat of a mammal with a specific claim about the last evening meal.
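A minimal sketch of how a few such comparisons might be coded up (the field names and cutoffs below are invented for illustration only; they are not the actual coding scheme Josh and I used):

```python
def suspicious_flags(r):
    """Return a list of suspicious-response flags for one respondent.

    `r` is a dict of survey answers and records; keys and cutoffs are
    invented for illustration, not the study's actual coding scheme.
    """
    flags = []
    # self-report vs. official records
    if r["claimed_votes_since_2000"] > r["recorded_votes_since_2000"]:
        flags.append("vote count")
    if r["claimed_voted_2008"] and not r["recorded_voted_2008"]:
        flags.append("2008 vote")
    # strong general claim vs. measured behavior
    if r["claimed_email_response_rate"] >= 0.95 and not r["answered_test_email"]:
        flags.append("email responsiveness")
    # vague general claim vs. concrete specific claim (at the individual
    # level this pattern is only statistically suspicious, not proof of lying)
    if r["claimed_mammal_meals_per_week"] <= 3 and r["mammal_at_last_evening_meal"]:
        flags.append("meat")
    return flags

respondent = {
    "claimed_votes_since_2000": 8,
    "recorded_votes_since_2000": 4,
    "claimed_voted_2008": True,
    "recorded_voted_2008": True,
    "claimed_email_response_rate": 1.0,
    "answered_test_email": True,
    "claimed_mammal_meals_per_week": 2,
    "mammal_at_last_evening_meal": True,
}
print(suspicious_flags(respondent))  # prints ['vote count', 'meat']
```

A respondent counts as giving "at least one suspicious response" whenever the returned list is non-empty.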
We found that all three groups showed similar rates of suspicious responding: 50% of ethicists, 49% of non-ethicist philosophers, and 49% of comparison professors gave at least one suspicious response -- variation well within chance, of course, given the number of respondents. (Remember that on the last three measures -- the general-vs.-specific comparisons -- a suspicious response is not necessarily a lie or even an unconscious self-serving distortion, but only an answer or pattern of answering that seems more likely to be false or distortive than another pattern would be, when aggregated across respondents.)

Thus, as in previous research we failed to find any evidence that ethicists behave any better than other socially comparable non-ethicists.

This is assuming, of course, that lying or giving distorted answers on surveys like ours is morally bad. Now, as it happens, we asked our respondents about that very issue, and 87% (89% of ethicists) said it was morally bad to answer dishonestly on surveys like ours. We also had a measure of how bad they thought it was -- a 9-point scale from "very morally good" through "morally neutral" to "very morally bad". As it turned out, there was no statistically significant relationship between normative attitude and rate of suspicious responding.

Near the end of the survey, we also explicitly asked respondents whether they had answered any of the questions dishonestly. Few said they had, and answers to this question appeared to be unrelated to rates of suspicious responding: Among respondents with no suspicious-looking responses, 6 (2.2%) said they had answered dishonestly, compared to 7 (2.6%) of the respondents with at least one suspicious response.

Finally, we had asked the philosophy respondents whether they preferred a deontological, consequentialist, virtue ethical, or some other sort of normative ethical view. Deontologists are often portrayed as sticklers about lying -- Kant, the leading historical deontologist, was notoriously very strict on the point. However, we detected no difference in patterns of suspicious responding according to normative ethical view. To the extent there was a trend, it was for the consequentialists to be least likely to have suspicious or false responses (47%, vs. 56% for deontologists and 58% for virtue ethicists; this analysis includes the APA question).

Thursday, June 03, 2010

An Opportunity to Collaborate with Psychologists on Experimental Philosophy Research

Jonathan Phillips writes:
The Experiment Month initiative is a program designed to help philosophers conduct experimental studies. If you are interested in running a study, you can send your study proposal to the Experiment Month staff. Then, if your proposal is selected for inclusion, we will conduct the study online, send you the results and help out with any statistical analysis you may need. All proposals are due Sept. 1.

For further information, see the Experiment Month website: http://www.yale.edu/cogsci/XM/