A recent article by Paul Piff and collaborators, purporting to show that rich people are jerks (more formally: "higher social class predicts increased unethical behavior"), has been getting attention in the popular media. Numerous people have sent me the article, correctly surmising that I'd be interested given my own research on the moral behavior of (generally high socioeconomic status) ethicists.
The article nicely displays some of the difficulties of researching moral behavior.
First, let me express a thought about the article's reception. If Piff and collaborators had found no differences in behavior, it seems a reasonable conjecture that the article would have received less attention. It might even have been difficult to publish at all. The same might be true even if Piff et al had found significant results but in the opposite direction, that is, if they had found the rich better behaved. Readers' and referees' critical acumen would probably have been activated, much more so than for a sexy result that tickles our fancy. Consider the many filters that a study must pass -- from approval by one's advisor, to design, to data collection, to analysis and write-up, to refereeing, to editorial acceptance, to public dissemination. At each step, the sexy study has an advantage over less-sexy competitors. The cumulative advantage in the marketplace of ideas should make us nervous about forming our opinions based on what we see in the news. (I recognize this applies to my own research on ethics professors too. So far, my most frequently mentioned study on ethicists is the one study that found ethicists behaving worse.)
Now let's consider the methods. The authors report the results of seven different studies.
Two studies examine the rudeness of drivers. Piff et al report that people driving fancy cars are less likely to wait their turn at a four-way stop and less likely to stop for a pedestrian entering a crosswalk. While I like the real-world naturalness of this study ("ecological validity"!), this particular measure seems very likely to be subject to experimenter effects -- that is, distortion in coding and results so as to favor the hypothesis of the experimenter. Experimenter effects can be large even when there is no obvious source of bias (hence medical research typically aims to be "double blind"). In this case the sources of possible coder bias seem obvious and very difficult to control. This is especially true of the crosswalk study. A confederate of the experimenter steps out into the crosswalk, and the experimenter codes both the perceived status of the car and whether it stops. Wisely, the status of the car is coded before the experimenter knows whether it has stopped. But anyone who has been a pedestrian in the San Francisco Bay Area (where the study was conducted) knows that the crosswalk is a place of subtle communication between ped and driver: You take a step out, you catch the driver's eye. How confidently you step, the look on your face, your reaction (or not) to the driver's glance and to the change (or not) of velocity -- all this has a big effect on what happens. The results might as easily reflect the expectations of the experimenter as any real difference in driving patterns. So this is exactly the sort of case in which one would expect large experimenter effects. Since the results have only mid-grade p values (.05 > p > .01), a small experimenter effect could vitiate them entirely.
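To make the worry concrete, here is a small simulation of how this could play out. All of the numbers below are illustrative assumptions of mine, not figures from Piff et al; the point is only that a modest hypothesis-consistent bias in coding ambiguous encounters can, by itself, manufacture a "significant" difference of exactly this mid-grade sort even when drivers of fancy and modest cars behave identically.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 150            # cars coded per status group -- illustrative, not Piff et al's n
    base_rate = 0.70   # true stop rate, identical for both groups: no real effect
    coder_bias = 0.06  # small hypothesis-consistent nudge in coding ambiguous cases

    # The coder unconsciously credits low-status cars with stopping a bit
    # more often, and high-status cars a bit less often, than they really do.
    low_stops = rng.binomial(n, base_rate + coder_bias)
    high_stops = rng.binomial(n, base_rate - coder_bias)

    # Standard two-proportion z-test on the (biased) coded data.
    p_pool = (low_stops + high_stops) / (2 * n)
    se = np.sqrt(p_pool * (1 - p_pool) * (2 / n))
    z = (low_stops / n - high_stops / n) / se
    p = 2 * stats.norm.sf(abs(z))
    print(f"z = {z:.2f}, p = {p:.3f}")  # typically lands in the .05 > p > .01 band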
(Piff et al state that the coders and confederates were "blind to the hypothesis of the study", but it is hard to imagine that the coders don't at least have strong suspicions, given that they are being asked to code the luxuriousness of the vehicles. At the very least, this should make vehicle status very salient to them, amping up any of the coders' prior expectations of a relationship between vehicle status and driving behavior.)
How about the other studies? Studies 3 and 5 asked participants to read scenarios and then describe how ethically or unethically they would act in those scenarios. Piff et al report that participants reporting higher social class also report that they would act less ethically in such scenarios. Would it seem too fussy of me to say that I don't fully trust self-report of moral behavior in hypothetical scenarios? I would like some evidence that this isn't, say, actually a measure of honesty and frankness instead of a measure of differences in how one would really act in such scenarios, with self-reports of less moral behavior revealing more honesty and frankness than do self-reports of moral perfection. That interpretation would completely flip the moral significance of Piff et al's results. Or maybe the measure is really something more like a measure of one's opinion about one's own moral character, which might have a zero correlation with real differences in moral character (as I suggest here)?
In Study 4, after completing filler tasks, participants were offered candy from a jar ostensibly for children in a nearby laboratory. Afterwards, they were asked how many candies they had taken from the jar. Participants who had been primed to think of themselves as relatively low class (by being asked to compare themselves to the rich, the well educated, and the prestigiously employed) reported having taken less candy than participants who had been primed to think of themselves as relatively high class [edited 5/28]. "Wait, what?" I hear you asking. The high-class-primed participants reported having taken more candy? But did they actually take more candy? If I'm reading the article correctly, the experimenters chose not to measure actual theft, relying on self-report instead, though the subjects' fingers were right there in the jar! Thus, honesty is confounded with immorality, as in Studies 3 and 5. Perhaps I can also mention the weirdness of coming into a psychology lab and then being offered candy ostensibly for children elsewhere. Are participants really buying this cover story? I participated in a few psychology studies as an undergrad, and I suspect I wouldn't have bought it for a minute. Educated undergrads expect to be lied to by psychologists.
Study 6 also has cover story problems (see also my discussion of a similar study by Gino and Ariely). Participants are set in front of a computer ostensibly presenting them with the outcome of random die rolls. Participants are asked to self-report the outcome -- without the experimenter checking -- and they are told they will have a higher chance of winning a prize if they self-report higher results. I ask you to imagine yourself as a participant in this experiment. What do you think is going on? Is there a moral obligation to tell the truth? Or is the whole thing just silly? The experimenters have brought you into this weird situation in which they seem, pretty much explicitly, to be asking you to lie to them. They, of course, are themselves lying to you, as you probably suspect. The connection between behavior in this setting and real-world honesty seems dubious at best.
In Study 7, participants were either asked to list three things about their day or three benefits of greed. They were then asked to self-report whether they would engage in immoral behavior in hypothetical scenarios. Participants who had been asked to list positive features of greed said that they would engage in more immoral behavior in the hypothetical scenarios, and this was especially the case for the lower socioeconomic status participants. Therefore...? In addition to the general types of concerns raised above, I might mention that an experimental context in which a researcher is asking you to list advantages of greed might encourage the respondent to entertain certain hypotheses about the experiment that influence her answers. It might also encourage the respondent to expect a more forgiving moral atmosphere in which self-report of selfish behavior would be viewed less negatively.
Real moral behavior is hard to measure. I appreciate the difficulty of the researchers' task. Three cheers for convergent measures! I think it's cool that this is being done, and I enjoyed reading the article and thinking about the issues. But I hope I will be forgiven for not buying it in this case.
Update, May 28:
Readers of the post might also be interested in this critical reaction and response (HT Rolf Degen).
Friday, March 23, 2012
Why Tononi Should Think That the United States Is Conscious
This is the fourth and probably last in a series of posts on why several major theorists of consciousness should attribute literal "phenomenal" conscious experience to the United States, considered as a concrete but spatially distributed entity at least partly composed of citizens and residents. Previous posts treated Dennett, Dretske, and Humphrey. Humphrey and I have an extended exchange in the comments field of my post on his work, and I have offered general considerations supporting the view that if materialism is true the United States is probably conscious here, here, and here (page 18 ff). A full-length paper is in the works but not yet in circulatable shape.
I chose Dennett, Dretske, Humphrey, and Tononi as my sample theorists for two reasons: First, they represent a diverse range of very prominent materialist theories of consciousness. And second, they are theoretically ambitious, trying to explain consciousness in general in any possible organism (and not just human consciousness or consciousness as it appears on Earth, like most scientific and neural accounts), covering the metaphysics from top to bottom (and not, say, resting upon a relatively unanalyzed notion of "representation" on which it would be unclear whether the United States literally has the right sort of representations).
Of our four theorists, neuroscientist Giulio Tononi’s view (2004, 2008; Balduzzi and Tononi 2009) enables the quickest argument to the consciousness of the United States. Tononi equates consciousness with “integrated information”. “Information”, in Tononi’s sense, is abundant in the universe – present everywhere or almost everywhere there is causation. And information is integrated, at least in a tiny degree, whenever there are contingent causal connections within a system with a bit of structure – a system that is not collapsed into maximum entropy. Since integrated information is pervasive, so also, Tononi says, is consciousness. He says that “even a binary photodiode is not completely unconscious, but rather enjoys exactly 1 bit of consciousness” (2008, p. 236; cf. Chalmers 1996 on thermostats). Likewise, Tononi attributes “qualia” (that is, consciousness) to simple logical AND and OR gates (Balduzzi and Tononi 2009). On Tononi’s view, what distinguishes human consciousness from photodiode consciousness, OR-gate consciousness, and speck-of-dust consciousness is its richness of detail: The brain is massively informationally complex and integrated, and thus enjoys consciousness orders of magnitude more complex than that of simple systems.
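For a rough feel for the quantities in play: the "1 bit" for the photodiode is just the Shannon entropy of a fair binary variable, and integration can be crudely proxied by the mutual information between a system's parts. The sketch below is my own toy illustration of those quantities, not Tononi's actual Φ, which requires minimizing over partitions of the system.

    import numpy as np

    def entropy(p):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    # A photodiode that reliably discriminates light from dark carries
    # H = 1 bit -- the figure Tononi cites.
    print(entropy([0.5, 0.5]))  # -> 1.0

    # Crude integration proxy: mutual information between two coupled
    # binary nodes whose states tend to agree.
    joint = np.array([[0.4, 0.1],
                      [0.1, 0.4]])
    mi = (entropy(joint.sum(axis=0)) + entropy(joint.sum(axis=1))
          - entropy(joint.ravel()))
    print(round(mi, 3))  # ~0.278 bits: a nonzero, if tiny, degree of integration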
Before we saddle Tononi straightaway with commitment to the consciousness of the United States, though, there is one issue to address: Despite the liberality of his view, Tononi does not regard every putative system as an “entity” that could be the locus of consciousness. If a putative system contains no causal, that is, informational, connections between its parts, then it is not an entity in the relevant sense; it is not, he says, a “complex”. Also, a putative system is not a conscious entity or complex if a larger, more informationally integrated system entirely subsumes it. For example, two disparate nodes do not constitute a conscious complex if a third node lies between them creating a more informationally integrated network. This restriction on the possible loci of consciousness is still extremely liberal by commonsense standards: Complexes can nest and overlap, for example, within the brain, where tightly integrated subsystems interact within larger less-integrated systems.
It seems straightforward that residents of the United States also form multiple overlapping, causally connected complexes. Despite Tononi’s general caveat about what can legitimately count as an entity or a complex, there seem to be no Tononian grounds for denying that the United States is such an entity or complex and thus a locus of consciousness. Its subsystems are informationally connected, and it doesn’t appear to be subsumed within any more tightly informationally integrated system. (I’m assuming the world community and the Earth as a whole are not more tightly informationally integrated than is the U.S., but it doesn’t matter for my ultimate argument if we relax this assumption and grant that on Tononi’s view it would be the world community or planet as a whole that is conscious, rather than the United States.) This conclusion seems especially evident given Tononi’s assertion that conscious complexes exist “at multiple spatial and temporal scales” “in most natural (and artificial) systems” (2004, p. 19). Choose the right temporal and spatial scale and Tononi’s view will deliver group consciousness.
The only question that would appear to remain is whether the United States is informationally integrated enough to have a rich stream of conscious experience, or whether its consciousness is substantially impoverished compared to that of a normal human being. This matter is somewhat difficult to assess, but given the massive informational transfer between people and the highly sensitive complex contingencies in human interaction, including in large-group interactions over longish time frames, I would think a plausible first guess from Tononi’s perspective should be that the United States (or world community), when assessed at the appropriate time scale, has at least as rich a stream of conscious experience as does a small mammal.
Update April 3:
In the comments section, Scott Bakker has kindly pointed me toward a new paper by Tononi. This paper seems to reflect a substantial change in Tononi's position with respect to the issues above. While I think the view above accurately captures Tononi's view through at least 2009, it will require substantial modification in light of his most recent remarks.
Update June 6:
See here for my reaction to Tononi's updated position.
Friday, March 16, 2012
Final Call for Papers: Consciousness and Moral Cognition
Submissions for a special issue of the Review of Philosophy and Psychology on consciousness attribution in moral cognition are due at the end of this month. The list of invited authors includes: Kurt Gray (Maryland) and Chelsea Schein (Maryland), Anthony I. Jack (Case Western Reserve) and Philip Robbins (Missouri), Edouard Machery (Pittsburgh) and Justin Sytsma (East Tennessee State), and Liane Young (Boston College).
Submissions are due March 31, 2012.
The full CFP, including relevant dates and submission details, is available on RoPP's website.
Ethicists No More Likely Than Non-Ethicists to Pay Their Registration Fees at APA Meetings
As some of you will know, I have an abiding interest in the moral behavior of ethics professors. I've collected a variety of evidence suggesting that ethics professors behave, on average, no better morally than do professors not specializing in ethics (e.g., here, here, here, here, and here). Here's another study.
Until recently, the American Philosophical Association had more or less an honor system for paying meeting registration fees. There was no serious enforcement mechanism for ensuring that people who attended the meeting -- even people appearing on the program as chairs, speakers, or commentators -- actually paid their registration fees. (Now, however, you can't get the full program with meeting room locations without having paid the fees.)
Registration fees are not exorbitant: Since at least the mid-2000s, pre-registration for APA members has been $50-$60. (Fees are somewhat higher for non-members and for on-site registration. For students, pre-registration is $10 and on-site registration is $15.) According to the APA, these fees don't fully cover the costs of hosting the meetings, with the difference subsidized from other sources of revenue. Barring exceptional circumstances, people attending the meeting plausibly have an obligation to pay their registration fees. This might be especially true for speakers and commentators, since the APA has given them a podium to promulgate their ideas.
From personal experience, I believe that almost everyone appearing on the APA program attends the meeting (maybe 95%). What I've done, then, is this: I have compared published lists of Pacific APA program participants from 2006-2008 with lists of people who paid their registration fees at those meetings -- data kindly provided by the APA with the permission of the Pacific Division. (The Pacific Division meeting is the best choice for several reasons, and both of the recent Secretary-Treasurers, Anita Silvers and Dom Lopes, have been generous in supporting my research.)
Let me emphasize one point before continuing: The data were provided to me with all names encrypted so that I could not determine the registration status of any particular individual. This was a condition of the Pacific Division's cooperation and of UC Riverside's review board approval. It is also very much my own preference. I am interested only in group trends.
To keep this post to manageable size, I've put further details about coding here.
Here, then, are my preliminary findings:
Overall, 76% of program participants paid their registration fees: 75% in 2006, 76% in 2007, and 77% in 2008. (The increasing trend is not statistically significant.)
74% of participants presenting ethics-related material (henceforth "ethicists": see the coding details) paid their registration fees, compared to 76% of non-ethicists, not a statistically significant difference (556/750 vs. 671/885, z = -0.8, p = .43, 95% CI for diff -6% to +3%).
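(For those who want to check the arithmetic: the z, p, and confidence interval above follow from a standard two-proportion z-test on the published counts. The snippet below is a reconstruction from the standard formulas, not my actual analysis script.)

    from math import sqrt
    from scipy.stats import norm

    x1, n1 = 556, 750  # ethicists who paid / ethicist participants
    x2, n2 = 671, 885  # non-ethicists who paid / non-ethicist participants
    p1, p2 = x1 / n1, x2 / n2

    # Pooled-SE z-test for the difference in proportions.
    pooled = (x1 + x2) / (n1 + n2)
    z = (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    p = 2 * norm.sf(abs(z))

    # Unpooled SE for the 95% confidence interval on the difference.
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    lo, hi = (p1 - p2) - 1.96 * se, (p1 - p2) + 1.96 * se
    print(f"z = {z:.1f}, p = {p:.2f}, 95% CI {lo:+.0%} to {hi:+.0%}")
    # -> z = -0.8, p = 0.43, 95% CI -6% to +3%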
Other predictors:
* People on the main program were more likely to have paid their fees than were people whose only participation was on the group program: 77% vs. 65% (p < .001).
* Gender did not appear to make a difference: 75% of men vs. 76% of women paid (p = .60).
* People whose primary participation was in a (generally submitted and blind refereed) colloquium session were more likely to have paid than people whose primary participation was in a (generally invited) non-colloquium session on the main program: 81% vs. 74% (p = .004).
* There was a trend, perhaps not statistically significant, for faculty at Leiter-ranked PhD-granting institutions to have been less likely to have paid registration fees than students at those same institutions: Leiter-ranked faculty 73% vs. people not at Leiter-ranked institutions (presumably mostly faculty) 75% vs. students at Leiter-ranked institutions 81% (chi-square p = .11; Leiter-ranked faculty vs. students, p = .03).
* There was a marginally significant trend for speakers and commentators to have been more likely to have paid their fees than people whose only role was chairing: 76% vs. 71% (p = .097).
Ethicists differed from non-ethicists along several dimensions.
* 33% of ethicists were women vs. 18% of non-ethicists (p < .001).
* 63% of participants whose only appearance was on the group program were ethicists vs. 42% of participants who appeared on the main program (p < .001).
* Looking only at the main program, 35% of participants whose highest level of participation was in a colloquium session were ethicists vs. 49% whose highest level of participation was in a non-colloquium session (p < .001). (I considered speaking as a higher level of participation than commenting and commenting as a higher level of participation than chairing.)
* Among faculty in Leiter-ranked departments, a smaller percentage were ethicists (38%) than among participants who were not Leiter-ranked faculty (49%, p < .001). (I've found similar results in another study too.)
I addressed these potential confounds in two ways.
First, I ran split analyses. For example, I looked only at main program participants to see if ethicists were more likely to have registered than were non-ethicists (they weren't: 77% vs. 77%, p = .90), and I did the same for participants who were only in group sessions (also no difference: 65% vs. 64%, p = .95). No split analysis revealed a significant difference between ethicists and non-ethicists.
Second, I ran logistic regressions, using the following dummy variables as predictors: ethicist, group program participant, colloquium participant, student at Leiter-ranked institution, chair. In one regression, those were the only predictors. In a second regression, each variable was crossed as an "interaction variable" with ethicist. No interaction variable was significant. In the non-interaction regression, colloquium role and main program participation were both positively predictive of having registered (p < .01) and participation only as chair was negatively predictive (p < .01). Being a student at a Leiter-ranked institution was not predictive (p = .18) and -- most importantly for my analysis -- being an ethicist was also not predictive (logistic beta = .04, p = .72), confirming the main result of the non-regression analysis.
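(Since the data were provided confidentially, I can't share them, but here is a structural sketch of those regressions using Python's statsmodels. The data frame below is randomly generated stand-in data: only the variable names, the overall n, and the roughly 76% base rate echo the real analysis; every coefficient it produces is meaningless.)

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 1635  # total coded program participants (750 ethicists + 885 non-ethicists)

    # Randomly generated stand-in for the confidential participant-level data.
    df = pd.DataFrame({name: rng.integers(0, 2, n) for name in
                       ["ethicist", "group_prog", "colloquium",
                        "leiter_student", "chair"]})
    df["registered"] = rng.binomial(1, 0.76, n)  # ~76% overall payment rate

    # Main-effects model (the "non-interaction" regression).
    m1 = smf.logit("registered ~ ethicist + group_prog + colloquium"
                   " + leiter_student + chair", data=df).fit(disp=0)

    # Each predictor crossed with ethicist (the "interaction" regression).
    m2 = smf.logit("registered ~ ethicist * (group_prog + colloquium"
                   " + leiter_student + chair)", data=df).fit(disp=0)

    print(m1.params, m2.params, sep="\n")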
[Thanks to the Pacific Division of the American Philosophical Association for providing access to their data, anonymously encoded, on my request. However, this research was neither solicited by nor conducted on behalf of the APA or the Pacific Division.]
Update March 17, for those concerned about privacy: See the comments section for a bit more detail on the methods used to ensure that no one outside the APA was able to determine any individual's registration status.
Thursday, March 15, 2012
Women's Roles in APA Meetings
I've been looking into data on whether ethicists are more or less likely than non-ethicists to pay their registration fees at meetings of the American Philosophical Association. As part of this project, I've coded program participation data from the Pacific APA from 2006-2008. Given the gender issues in philosophy, I thought readers might be interested to see the data broken down by gender.
Gender coding was based on first name only, excluding people with gender-ambiguous first names, first initials only, and foreign names if the gender was not obvious to the U.S. coders (altogether 10% of the program slots were excluded from gender coding for these reasons).
First: Women occupied 25% of the Pacific APA program slots each year. This rate was remarkably consistent, in fact: 25.3% in 2006, 24.8% in 2007, and 24.8% in 2008, with about 1000 gender-coded program slots each year. This 25% representation on the program is approximately in line with estimates of the percentage of U.S. philosophers who are women (compare, e.g., my 23% estimate across 5 U.S. states, Leiter's report of 21% from the National Center for Education Statistics, and the 2009 Survey of Earned Doctorates finding that women receive about 30% of U.S. philosophy PhDs).
One very consistent finding in my research is that female philosophers are more likely to be ethicists than non-ethicists. My Pacific APA data fit this pattern. I coded talks, by title, as "ethics", "non-ethics", or "excluded". "Ethics" was construed broadly to include political philosophy and philosophy of law. "Excluded" talks included talks on religion, philosophy of action, gender, race, and issues in the profession (such as technology or teaching) unless the title of those talks suggested an ethical focus. Philosophers chairing or commenting on sessions containing a mix of ethics and non-ethics talks were also excluded from this analysis. 33% of participant slots in ethics were occupied by women, compared to 18% in non-ethics (363/1085 vs. 232/1315, p < .001).
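(The reported counts are enough to reconstruct the full contingency table and re-run the test. Here is a quick check with a standard chi-square test of independence -- again a reconstruction, not my original script.)

    from scipy.stats import chi2_contingency

    # Rows: ethics slots, non-ethics slots; columns: women, men.
    table = [[363, 1085 - 363],
             [232, 1315 - 232]]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, p = {p:.1e}")  # p far below .001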
I also broke the data down by role in the program. Women were slightly more likely to be on the "group program" than the "main program": 28% vs. 24% (198/704 vs. 579/2408, p = .03). However, this effect appears to be driven by the fact that the group program had proportionately more ethics slots than did the main program (60% of group program participant slots were ethics vs. 41% of main program participant slots, 338/561 vs. 876/2126, p < .001). As noted above, women were more likely to occupy ethics slots. Regression analysis suggests that women were not more likely to be group program than main program participants when this other factor is taken into account.
Within the main program, I found no statistically detectable difference between women's likelihood of being in the (usually submitted and blind refereed) colloquium sessions and their likelihood of being in the (usually invited) non-colloquium sessions (23% vs. 25%, 245/1091 vs. 326/1318, p = .38) (see also Dom Lopes' analysis here and here). Nor did I find a difference in the likelihood of serving in the chair role as opposed to speaking or commenting (27% vs. 24%, 209/780 vs. 568/2332, p = .18).
Tuesday, March 13, 2012
Cohen, Dennett, and Humphrey
Readers might be interested to see Cohen and Dennett's reply to my February 28 post on their work on reportability and consciousness (which I have added as an update to that post) and/or my extended discussion with Nick Humphrey in the comments section of my March 8 post on why he should think that the United States is conscious.
Thursday, March 08, 2012
Why Humphrey Should Think That the United States Is Conscious
In February, I argued that Daniel Dennett and Fred Dretske should, given their other views, hold that the United States is a spatially distributed group entity with a stream of experience of its own (a stream of experience over and above the experiences of the individual citizens and residents of the United States). Today I'm going to suggest the same about psychologist Nicholas Humphrey.
My general project is to argue that if materialism is true, the United States is probably conscious. I've advanced some general considerations in favor of that claim. But I also want to examine some particular materialist theories in more detail. I've chosen Dennett, Dretske, Humphrey, and (coming up) Giulio Tononi because their theories are prominent, aim to explain consciousness in any possible organism (not just human beings), and cover the metaphysics top to bottom.
Humphrey is a particularly interesting case because, awkwardly for my view, he explicitly denies that collective entities made of separate bounded individuals, such as swarms of bees, can have conscious experience (1992, p. 194). I will now argue that Humphrey, by the light of his own theory, should recant such remarks.
Humphrey argues that a creature has conscious experience when it has high-fidelity recurrent feedback loops in its sensory system (1992, 2011). A "sensory system", per Humphrey, is a system that represents what is going on inside the creature and directs behavior accordingly. No fancy minds are required for "sensation" in Humphrey's sense; such systems can be as simple as the reactivity of an amoeba to chemicals or light. (Humphrey also contrasts sensation with "perception", which provides information not about states of the body but rather about the outside world.) For consciousness, the only thing necessary besides a sensory system, on Humphrey's (1992) view, is that there be high-fidelity, momentarily self-sustaining feedback loops within that system -- loops between input and output, tuned and integrated across subjective time.
At first glance, you might think this theory would imply a superabundance of consciousness in the natural world, since sensory systems (by Humphrey's liberal definition) are cheap and feedback loops are cheap. But near the end of his 1992 book, Humphrey proves conservative. He rules out, for example, worms and fleas, saying that their sensory feedback loops "are too long and noisy to sustain reverberant activity" (1992, p. 195). Maybe consciousness is limited only to "higher vertebrates such as mammals and birds, although not necessarily all of these" (ibid.).
Humphrey argues against conscious experience in spatially discontinuous entities as follows: Collective entities, he says, don't have bodies, and thus they lack a boundary between the me and the not-me (1992, p. 193-194). Since sensation (unlike perception) is necessarily directed at one's own bodily states, collective organisms necessarily lack sensory systems. Thus, they're not even candidates for consciousness. The argument is quick. Its subpoints are undefended, and it occupies less than a paragraph.
One plausible candidate for the boundary of a collective organism is the boundary of the discontinuous region occupied by all its members' bodies. The individual bees are each part of the colony; the flowers and birds and enemy bees are not part of the colony. That could be the me and the not-me -- at least as much me/not-me as an amoeba has! The colony reacts to disturbances of this body so as to preserve as much of it as possible from threats, and it deploys parts of its body (individual bees) toward collective ends, for example via communicative bee dances in a tangled informational loop at the center of the colony.
Humphrey makes no mention of spatial contiguity in developing his account of sensation, representation, responsiveness, and feedback loops. Nor would a requirement of contiguity appear to be motivated within the general spirit of his account. A sensory signal travels inward along a nerve from the bodily surface to the central tangle. Perhaps a species could evolve that sends its nerve signals by lightwave instead, along hollow reflective capillaries, saving precious milliseconds of response time. Perhaps this adaptation then allows a further adaptation in which peripheral parts can temporarily detach from the central body while still sending their light signals to targeted receptors. You can see how this adaptation could be useful for ambushing prey or reaching into long, narrow spaces. Voilà, discontinuous organisms! Nothing in Humphrey's account seems to motivate ruling such possibilities out. Humphrey should allow that beings with discontinuous bodies can, at least in principle, have spatially distributed sensory surfaces that communicate their disturbances to the center of the organism and whose behavior is in turn governed by signals outbound from the center. He should allow the possibility of sensation, body, and the me/not-me distinction in spatially distributed organisms. And then for consciousness there remains only the question of whether there are sustained, high-fidelity feedback loops within those sensory systems.
So much for in-principle possibility. How about actual consciousness in actually existing distributed organisms? Since Humphrey sets a high bar for "high fidelity", bee colonies still won't qualify as conscious organisms by his standards; their feedback loops won't be high fidelity enough. But how about the United States? I think it will qualify. It will be helpful, however, to consider a cleaner case first: an army division.
An army division has clear boundaries. There are people who are in it and there are people who are outside of it. There's the division on the one hand and the terrain on the other. The division will act to preserve its parts, for example under enemy attack. Disturbances on the periphery (e.g., on the retinas of scouts) will be communicated to the center, and commands from the center will govern behavior at the periphery. If we can set aside prejudice against discontiguous entities and our commonsensical distaste at conceiving of human beings as mere parts of a larger organism, it seems that an army division has a body by Humphrey's general standards.
Does the division also have a sensory system? Again, it seems it should, by Humphrey's standards: Conditions on the periphery are represented by the center, which then governs the behavior of the periphery in response. That's all it takes for the amoeba, and if Humphrey is to be consistent that's all it should take for the division.
Now finally for the condition that Humphrey uses to exclude earthworms and fleas: Does the army division have high-fidelity, temporally extended feedback loops from the sensory periphery to the center? (Alternatively it might have, as in the human case per Humphrey, a more truncated loop from output signals, which needn't actually make it to the periphery, back to the center.) It seems so, at least sometimes. The commander can watch in real time on high-fidelity video as her orders are carried out. She can even stream live satellite video back and forth with her scouts and platoon leaders. Video feeds from the scouts' positions can come high-fidelity in a sustained stream to her eyes. Auditory feeds can return to her ears -- including auditory feeds containing the sound of her own voice issuing commands. For a modern army, there's plenty of opportunity for sustained high-fidelity feedback loops between center and periphery. With good technology, the feedback can be much higher fidelity, higher bandwidth, and more sustained than in the proprioceptive feedback loops I get when I close my eyes and wiggle my finger.
(In his 2011 book, Humphrey gestures toward further complexities of information flow that feedback loops enable (p. 57-59). However, as he suggests, such emergent complexities arise quite naturally once feedback loops are sustained and high-quality, and the same will presumably be true for some such feedback loops in an army division. In any case, since such remarks are gestural rather than fully developed, I focus primarily on the account in Humphrey's 1992 book.)
If I can convince you that a Humphrey-like view implies that army divisions have conscious experience, that's enough for my overall purposes. But to bring it back specifically to the United States: The U.S. has a boundary of me/not-me and a spatially distributed body, in roughly the same way an army division does. In Washington, D.C., it has a center of control of its official actions, which governs behaviors like declaring war, raising tariffs, and sending explorers to the moon. Signals from the periphery (and from the interior too, as in the human case) provide information to the center, and signals from the center command the periphery. And with modern technology, the feedback loops can be high fidelity, high bandwidth, temporally sustained, and almost arbitrarily complex. Humphrey's criteria are all met. Humphrey should abandon his apparent bias against discontinuous organisms and accept that the United States is literally conscious.
Update, March 13:
Readers might be interested to see Nick Humphrey's reply in the comments section and the exchange between Nick and me that grows out of it.
My general project is to argue that that if materialism is true, the United States is probably conscious. I've advanced some general considerations in favor of that claim. But I also want to examine some particular materialist theories in more detail. I've chosen Dennett, Dretske, Humphrey, and (coming up) Guilio Tononi because their theories are prominent, aim to explain consciousness in any possible organism (not just human beings), and cover the metaphysics top to bottom.
Humphrey is a particularly interesting case because, awkwardly for my view, he explicitly denies that collective entities made of separate bounded individuals, such as swarms of bees, can have conscious experience (1992, p. 194). I will now argue that Humphrey, by the light of his own theory, should recant such remarks.
Humphrey argues that a creature has conscious experience when it has high-fidelity recurrent feedback loops in its sensory system (1992, 2011). A "sensory system", per Humphrey, is a system that represents what is going on inside the creature and directs behavior accordingly. No fancy minds are required for "sensation" in Humphrey's sense; such systems can be as simple as the reactivity of an amoeba to chemicals or light. (Humphrey also contrasts sensation with "perception", which provides information not about states of the body but rather about the outside world.) For consciousness, the only thing necessary besides a sensory system, on Humphrey's (1992) view, is that there be high-fidelity, momentarily self-sustaining feedback loops within that system -- loops between input and output, tuned and integrated across subjective time.
At first glance, you might think this theory would imply a superabundance of consciousness in the natural world, since sensory systems (by Humphrey's liberal definition) are cheap and feedback loops are cheap. But near the end of his 1992 book, Humphrey proves conservative. He rules out, for example, worms and fleas, saying that their sensory feedback loops "are too long and noisy to sustain reverberant activity" (1992, p. 195). Maybe even consciousness is limited only to "higher vertebrates such as mammals and birds, although not necessarily all of these" (ibid.).
Humphrey argues against conscious experience in spatially discontinuous entities as follows: Collective entities, he says, don't have bodies, and thus they lack a boundary between the me and the not-me (1992, p. 193-194). Since sensation (unlike perception) is necessarily directed at one's own bodily states, collective organisms necessarily lack sensory systems. Thus, they're not even candidates for consciousness. The argument is quick. Its subpoints are undefended, and it occupies less than a paragraph.
One plausible candidate for the boundary of a collective organism is the boundary of the discontinuous region occupied by all its members' bodies. The individual bees are each part of the colony; the flowers and birds and enemy bees are not part of the colony. That could be the me and the not-me -- at least as much me/not-me as an amoeba has! The colony reacts to disturbances of this body so as to preserve as much of it as possible from threats, and it deploys parts of its body (individual bees) toward collective ends, for example via communicative bee dances in a tangled informational loop at the center of the colony.
Humphrey makes no mention of spatial contiguity in developing his account of sensation, representation, responsiveness, and feedback loops. Nor would a requirement of contiguity appear to be motivated within the general spirit of his account. A sensory signal travels inward along a nerve from the bodily surface to the central tangle. Perhaps a species could evolve that sends its nerve signals by lightwave instead, along hollow reflective capillaries, saving precious milliseconds of response time. Perhaps this adaptation then allows a further adaptation in which peripheral parts can temporarily detach from the central body while still sending their light signals to targeted receptors. You can see how this adaptation could be useful for ambushing prey or reaching into long, narrow spaces. Viola, discontinuous organisms! Nothing in Humphrey's account seems to motivate ruling such possibilities out. Humphrey should allow that beings with discontinuous bodies can, at least in principle, have spatially distributed sensory surfaces that communicate their disturbances to the center of the organism and whose behavior is in turn governed by signals outbound from the center. He should allow the possibility of sensation, body, and the me/not-me distinction in spatially distributed organisms. And then for consciousness there remains only the question of whether there are sustained, high-fidelity feedback loops within those sensory systems.
So much for in-principle possibility. How about actual consciousness in actually existing distributed organisms? Since Humphrey sets a high bar for "high fidelity", bee colonies still won't qualify as conscious organisms by his standards; their feedback loops won't be high fidelity enough. But how about the United States? I think it will qualify. It will be helpful, however, to consider a cleaner case first: an army division.
An army division has clear boundaries. There are people who are in it and there are people who are outside of it. There's the division on the one hand and the terrain on the other. The division will act to preserve its parts, for example under enemy attack. Disturbances on the periphery (e.g., on the retinas of scouts) will be communicated to the center, and commands from the center will govern behavior at the periphery. If we can set aside prejudice against discontiguous entities and our commonsensical distaste at conceiving of human beings as mere parts of a larger organism, it seems that an army division has a body by Humphrey's general standards.
Does the division also have a sensory system? Again, it seems it should, by Humphrey's standards: Conditions on the periphery are represented by the center, which then governs the behavior of the periphery in response. That's all it takes for the amoeba, and if Humphrey is to be consistent that's all it should take for the division.
Now finally for the condition that Humphrey uses to exclude earthworms and fleas: Does the army division have high-fidelity, temporally extended feedback loops from the sensory periphery to the center? (Alternatively it might have, as in the human case per Humphrey, a more truncated loop from output signals, which needn't actually make it to the periphery, back to the center.) It seems so, at least sometimes. The commander can watch in real time on high-fidelity video as her orders are carried out. She can even stream live satellite video back and forth with her scouts and platoon leaders. Video feeds from the scouts' positions can come high-fidelity in a sustained stream to her eyes. Auditory feeds can return to her ears -- including auditory feeds containing the sound of her own voice issuing commands. For a modern army, there's plenty of opportunity for sustained high-fidelity feedback loops between center and periphery. With good technology, the feedback can be much higher fidelity, higher bandwidth, and more sustained than in the proprioceptive feedback loops I get when I close my eyes and wiggle my finger.
(In his 2011 book, Humphrey gestures toward further complexities of information flow that feedback loops enable (p. 57-59). However, as he suggests, such emergent complexities arise quite naturally once feedback loops are sustained and high-quality, and the same will presumably be true for some such feedback loops in an army division. In any case, since such remarks are gestural rather than fully developed, I focus primarily on the account in Humphrey's 1992 book.)
If I can convince you that a Humphrey-like view implies that army divisions have conscious experience, that's enough for my overall purposes. But to bring it back specifically to the United States: The U.S. has a boundary of me/not-me and a spatially distributed body, in roughly the same way an army division does. In Washington, D.C., it has a center of control of its official actions, which governs behaviors like declaring war, raising tariffs, and sending explorers to the moon. Signals from the periphery (and from the interior too, as in the human case) provide information to the center, and signals from the center command the periphery. And with modern technology, the feedback loops can be high fidelity, high bandwidth, temporally sustained, and almost arbitrarily complex. Humphrey's criteria are all met. Humphrey should abandon his apparent bias against discontinuous organisms and accept that the United States is literally conscious.
Update, March 13:
Readers might be interested in Nick Humphrey's reply in the comments section and in the exchange between Nick and me that grows out of it.
Friday, March 02, 2012
The Instability of Professional Philosophers' Endorsement of the Famous "Doctrine of the Double Effect"
People's responses to hypothetical moral scenarios can vary substantially depending on the order in which those scenarios are presented (e.g., Lombrozo 2009). Consider the well-known "Switch" and "Push" versions of The Trolley Problem. In the Switch version, an out-of-control boxcar is headed toward five people whom it will kill if nothing is done. You're standing by a railroad switch, and you can divert the boxcar onto a side-track, saving the five people. However, there's one person on the side-track, who would then be killed. Many respondents will say that there's nothing morally wrong with flipping the switch, killing the one to save the five. Some will even say that you're morally obliged to flip the switch. In the Push version, instead of being able to save the five by flipping a switch, you can do so by pushing a heavy man into the path of the boxcar, killing him but saving the five as his weight slows the boxcar. Despite the surface similarity to the Switch case, most people think it's not okay to push the man.
Here's the order effect: If you present the Push case first, people are much less likely to say it's okay to flip the switch when you later present the Switch case than if you present the Switch case first. In one study, Fiery Cushman and I found that if we presented Push first, respondents tended to rate the two cases equivalently (on a seven-point scale from "extremely morally good" to "extremely morally bad"). But if we presented Switch first, only about half the respondents rated the scenarios equivalently. Somewhat simplified: People who see Push first will say that it's morally bad to push the man, and then when they see Switch they will say it's similarly bad to flip the switch. People who see Switch first will say it's okay to flip the switch, but then when they see the Push case they don't say "Oh, I guess that's okay too". Rather, they dig in their heels and say that pushing the man is bad despite the superficial similarity to the Switch case, and thus they rate the two scenarios inequivalently.
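To make the shape of that comparison concrete, here's a minimal sketch, in Python, of how an order effect of this kind could be tested. The counts are purely hypothetical, invented only to echo the pattern just described -- this is not the study's analysis code or data. Whatever test is used on the real data, the logic is the same: compare the proportion of equivalent ratings across the two presentation orders.

    # Minimal sketch of an order-effect test. All counts are hypothetical,
    # chosen only to mirror the pattern described above -- not real data.
    from scipy.stats import chi2_contingency

    # Rows: presentation order. Columns: (rated Push and Switch
    # equivalently, rated them inequivalently).
    table = [
        [80, 20],  # Push first: most respondents rate the two cases equivalently
        [50, 50],  # Switch first: only about half rate them equivalently
    ]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, p = {p:.4f}")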
Strikingly, Fiery and I found that professional philosophers show order effects on their judgments about hypothetical scenarios that are just as large as non-philosophers'. Even when we restricted our analysis to respondents reporting a PhD in philosophy and an area of specialization or competence in ethics, we found no overall reduction in the magnitude of the order effect. (This research is forthcoming in Mind & Language; manuscript draft available here.)
The Doctrine of the Double Effect is the orthodox (but by no means universally accepted) explanation of why it might be okay to flip the switch but not okay to push the man. According to the Doctrine of the Double Effect, it's worse to harm someone as a means of bringing about a good outcome than it is to harm someone as merely a foreseen side-effect of bringing about a good outcome. Applied to the trolley case, the thought is this: If you flip the switch, the means of saving the five is diverting the boxcar to the side-track, and the death of the one person is just a foreseen side effect. However, if you push the man, killing him is the means of saving the five.
Now maybe this is a sound doctrine, soundly applied, or maybe not. But what Fiery and I did was this: At the end of our experiment, we asked our participants whether they endorsed the Doctrine of the Double Effect. Specifically, we asked the following:
"Sometimes it is necessary to use one person's death as a means to saving several more people -- killing one helps you accomplish the goal of saving several. Other times one person's death is a side-effect of saving several more people -- the goal of saving several unavoidably ends up killing one as a consequence. Is the first morally better, worse, or the same as the second?"
[Response options: "better", "worse", or "same"]
Non-philosophers' responses to this question were unrelated to the order of the presentation of the scenarios. We suspect that many of them didn't see the connection between this abstract principle and the Push and Switch scenarios presented much earlier in the questionnaire. But philosophers' responses were related to the order of presentation of the Push and Switch scenarios. Specifically, the majority of philosophers (62%) who saw the Switch scenario first endorsed the Doctrine of the Double Effect. However, the doctrine was endorsed by only a minority of philosophers (46%) who saw Push first (p = .02). What seems to have happened is this: By manipulating order of presentation, Fiery and I influenced the likelihood that respondents would rate the scenarios equivalently or inequivalently. We thereby also influenced the likelihood of our philosopher respondents' endorsing a doctrine that appears to justify inequivalent judgments about the scenarios, the Doctrine of the Double Effect. Rather than relying on stable principles to reach judgments about the cases, a certain portion of philosophers appear to have reached their scenario judgments on the basis of covert factors like order of presentation and then endorsed principles only post-hoc as a means of rationalizing their covertly influenced judgments about the specific cases.
Manipulating the order of two pairs of scenarios (a Push-Switch pair and a Moral Luck pair) appeared to amplify the magnitude of this effect, by pushing philosophers either generally toward or generally against endorsing inequivalency-supporting principles. With the two scenario pairs ordered to favor inequivalency, we found 70% of our philosopher respondents endorsing the Doctrine of the Double Effect. With the two pairs ordered to favor equivalency, only 28% endorsed the doctrine (p < .001). This is a very large shift in opinion, given how well-known the doctrine is among philosophers and given that by this point in the questionnaire, all philosophers had viewed all versions of each scenario. We then filtered our results, looking only at respondents reporting a PhD and an area of specialization or competence in ethics, thinking that these high-grade specialists (mostly ethics professors at Leiter-ranked institutions) might have more stable opinions about the Doctrine of the Double Effect. They didn't. When the two scenario pairs were arranged to favor inequivalency, 62% of ethics PhDs endorsed the Doctrine of the Double Effect. When the two pairs were arranged to favor equivalency, 29% endorsed the doctrine (p < .05).
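To give a concrete sense of how large that shift is, here's a similarly hypothetical sketch. Assuming, purely for illustration, 50 philosopher respondents per ordering condition (not the study's actual sample sizes), a 70% vs. 28% endorsement split comes out overwhelmingly significant on a simple two-by-two test:

    # Minimal sketch of testing the endorsement shift. Counts are
    # hypothetical -- assuming, purely for illustration, 50 philosopher
    # respondents per ordering condition, not the study's actual Ns.
    from scipy.stats import fisher_exact

    # Rows: scenario ordering. Columns: (endorsed the Doctrine of the
    # Double Effect, did not endorse it).
    table = [
        [35, 15],  # ordered to favor inequivalency: 70% endorse
        [14, 36],  # ordered to favor equivalency: 28% endorse
    ]

    odds_ratio, p = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, p = {p:.5f}")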
The simplest interpretation of our overall results, across three types of scenarios (Double Effect, Moral Luck, and Action-Omission), is that in cases like these skill in philosophy doesn't manifest as skill in consistently applying explicitly endorsed abstract principles to reach stable judgments about hypothetical scenarios; rather, it manifests more as skill in choosing principles to rationalize, post-hoc, scenario judgments that are driven by the same types of factors that drive non-philosophers' judgments.