Friday, March 02, 2012

The Instability of Professional Philosophers' Endorsement of the Famous "Doctrine of the Double Effect"

People's responses to hypothetical moral scenarios can vary substantially depending on the order in which those scenarios are presented (e.g., Lombrozo 2009). Consider the well-known "Switch" and "Push" versions of The Trolley Problem. In the Switch version, an out-of-control boxcar is headed toward five people whom it will kill if nothing is done. You're standing by a railroad switch, and you can divert the boxcar onto a side-track, saving the five people. However, there's one person on the side-track, who would then be killed. Many respondents will say that there's nothing morally wrong with flipping the switch, killing the one to save the five. Some will even say that you're morally obliged to flip the switch. In the Push version, instead of being able to save the five by flipping a switch, you can do so by pushing a heavy man into the path of the boxcar, killing him but saving the five as his weight slows the boxcar. Despite the surface similarity to the Switch case, most people think it's not okay to push the man.

Here's the order effect: If you present the Push case first, people are much less likely to say it's okay to flip the switch when you then later present the Switch case than if you present the Switch case first. In one study, Fiery Cushman and I found that if we presented Push first, respondents tended to rate the two cases equivalently (on a seven-point scale from "extremely morally good" to "extremely morally bad"). But if we presented Switch first, only about half the respondents rated the scenarios equivalently. Somewhat simplified: People who see Push first will say that it's morally bad to push the man, and then when they see Switch they will say it's similarly bad to flip the switch. People who see Switch first will say it's okay to flip the switch, but then when they see the Push case they don't say "Oh, I guess that's okay too". Rather, they dig in their heels and say that pushing the man is bad despite the superficial similarity to the Switch case, and thus they rate the two scenarios inequivalently.

Strikingly, Fiery and I found that professional philosophers show order effects on their judgments about hypothetical scenarios just as large as non-philosophers'. Even when we restricted our analysis to respondents reporting a PhD in philosophy and an area of specialization or competence in ethics, we found no overall reduction in the magnitude of the order effect. (This research is forthcoming in Mind & Language; manuscript draft available here.)

The Doctrine of the Double Effect is the orthodox (but by no means universally accepted) explanation of why it might be okay to flip the switch but not okay to push the man. According to the Doctrine of the Double Effect, it's worse to harm someone as a means of bringing about a good outcome than it is to harm someone as merely a foreseen side-effect of bringing about a good outcome. Applied to the trolley case, the thought is this: If you flip the switch, the means of saving the five is diverting the boxcar to the side-track, and the death of the one person is just a foreseen side effect. However, if you push the man, killing him is the means of saving the five.

Now maybe this is a sound doctrine, soundly applied, or maybe not. But what Fiery and I did was this: At the end of our experiment, we asked our participants whether they endorsed the Doctrine of the Double Effect. Specifically we asked the following:

Sometimes it is necessary to use one person’s death as a means to saving several more people—killing one helps you accomplish the goal of saving several. Other times one person’s death is a side-effect of saving several more people—the goal of saving several unavoidably ends up killing one as a consequence. Is the first morally better, worse, or the same as the second?

[Response options: ‘better’ ‘worse’ or ‘same’]
Non-philosophers' responses to this question were unrelated to the order of the presentation of the scenarios. We suspect that many of them didn't see the connection between this abstract principle and the Push and Switch scenarios presented much earlier in the questionnaire. But philosophers' responses were related to the order of presentation of the Push and Switch scenarios. Specifically, the majority of philosophers (62%) who saw the Switch scenario first endorsed the Doctrine of the Double Effect. However, the doctrine was endorsed only by a minority of philosophers (46%) who saw Push first (p = .02). What seems to have happened is this: By manipulating order of presentation, Fiery and I influenced the likelihood that respondents would rate the scenarios equivalently or inequivalently. We thereby also influenced the likelihood of our philosopher respondents' endorsing a doctrine that appears to justify inequivalent judgments about the scenarios, the Doctrine of the Double Effect. Rather than relying on stable principles to reach judgments about the cases, a certain portion of philosophers appear to have reached their scenario judgments on the basis of covert factors like order of presentation and then endorsed principles only post-hoc as a means of rationalizing their covertly influenced judgments about the specific cases.
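
For readers curious about the statistics: a difference like the 62% vs. 46% endorsement split across the two order conditions is the kind of thing typically checked with a two-proportion z-test (or, equivalently, a chi-square test on the 2x2 table of presentation order by endorsement). Here is a minimal sketch; the group sizes of 100 per condition are made up purely for illustration and are not the study's actual Ns.

```python
import math

def two_proportion_ztest(k1, n1, k2, n2):
    """Two-sided z-test for a difference between two independent proportions."""
    p1, p2 = k1 / n1, k2 / n2
    p_pool = (k1 + k2) / (n1 + n2)  # pooled proportion under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF, via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 62 of 100 endorse the DDE after seeing Switch first,
# 46 of 100 after seeing Push first (Ns invented for this sketch).
z, p = two_proportion_ztest(62, 100, 46, 100)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With samples of roughly this size, a 16-point gap in endorsement rates comes out significant at about the level the post reports, which is why such order manipulations are detectable even in modest survey samples.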

Manipulating the order of two pairs of scenarios (a Push-Switch pair and a Moral Luck pair) appeared to amplify the magnitude of this effect, pushing philosophers either generally toward or generally against endorsing inequivalency-supporting principles. With the two scenario pairs ordered to favor inequivalency, we found 70% of our philosopher respondents endorsing the Doctrine of the Double Effect. With the two pairs ordered to favor equivalency, only 28% endorsed the doctrine (p < .001). This is a very large shift in opinion, given how well-known the doctrine is among philosophers and given that by this point in the questionnaire, all philosophers had viewed all versions of each scenario. We then filtered our results, looking only at respondents reporting a PhD and an area of specialization or competence in ethics, thinking that these high-grade specialists (mostly ethics professors at Leiter-ranked institutions) might have more stable opinions about the Doctrine of the Double Effect. They didn't. When the two scenario pairs were arranged to favor inequivalency, 62% of ethics PhDs endorsed the Doctrine of the Double Effect. When the two pairs were arranged to favor equivalency, 29% endorsed the doctrine (p < .05).

The simplest interpretation of our overall results, across three types of scenarios (Double Effect, Moral Luck, and Action-Omission), is that in cases like these skill in philosophy doesn't manifest as skill in consistently applying explicitly endorsed abstract principles to reach stable judgments about hypothetical scenarios; rather, it manifests more as skill in choosing principles to rationalize, post-hoc, scenario judgments that are driven by the same types of factors that drive non-philosophers' judgments.


Unknown said...

It is refreshing to read a philosopher admit to your interpretation. Philosophers might not be purely rational after all... (gasp!) As usual, great post!

Neil said...

I make some related claims here:

Richard said...

So does this imply that we are unable to access the capacities required to overcome the non-reflective responses, even when we adopt rigorous philosophical brooding? Does this relate to the Kierkegaardian idea of the 'ignorant knower' that Jennifer Lockhart writes about? The ignorant knower is someone who:
1. knows something
2. is ignorant of the knowledge (and so acts as if she hasn't the knowledge)
3. is ignorant of the disjunct between her knowledge and her actions.

Lockhart proposes a reading of K that suggests that adding new knowledge directly to such a person is inert because it is added to her knowledge without giving any capacity to act on it. So K's texts are indirect communications of the capacity to act, rather than direct communications of knowledge.

Perhaps in the case of the trolley issue and philosophers, they are being like the ignorant knower, where direct knowledge and further knowledge cannot access the capacity to do more than post hoc rationalise.

Michael Straight said...

Or maybe it's just that The Trolley Problem is such a contrived, muddled, unhelpful approach to thinking about ethics that we shouldn't expect anyone to have consistent responses to it.

Gabe Dupre said...

It would be very interesting to see whether there were any correlations between the ethical views of the respondents and the instability of their endorsements.

Anonymous said...

Remembering overpopulation, the ideal would be to refrain from such statements or actions. Today that sounds like a sort of cynicism. In 100-200 years it will sound very common.

Eric Schwitzgebel said...

Thanks for the comments, folks!

Neil: Yes, I'm in sympathy with your arguments in that article.

Michael: I don't entirely disagree, which is why the primary focus of my work on ethics professors has been on actual "real world" moral behavior.

Gabe: The trend, if I recall, was for ethicists reporting general agreement with consequentialism to show larger order effects than for people reporting a general agreement with deontology or virtue ethics.

Eric Schwitzgebel said...

Richard: I find Lockhart interesting on these issues. Maybe something like that analysis would pan out. This would also connect to my work with Blake Myers-Schulz on knowing that P without believing that P (forthcoming in Nous): Belief, in my view, is about being disposed to act and react belief-that-P-ishly, whereas knowledge is about having the *capacity*, but not necessarily the overall disposition, to act and react on the information that P (or something like that).

Richard said...


So would that mean that instead of Lockhart's ignorant knower we have the unbelieving knower where
1. I know something (and thus have the capacity to act on it)
2. I don't believe and so am not disposed to act on that knowledge
3. I don't acknowledge the disconnect.

In your case what is missing is a disposition to act; in Lockhart's it is the capacity to act. Does Lockhart's ignorant knower seem a more intractable problem than the disbelieving knower? And in the case of the moral philosophers, do you think it is a lack of capacity or a dispositional defect? Can I be disposed to x if I haven't the capacity for x? Wouldn't that be more like wishing (e.g., that I could fly)? Would that involve something like a principle suggesting that I can't will the impossible? However, I can have the capacity to y but be disposed not to y (e.g., because I'm sulking). The intractability of the instability in the moral philosophers' case may suggest ignorant knowers rather than disbelieving knowers, perhaps?

Mike Otsuka said...

In an article published in Utilitas in 2008 I make some related comments:

"It is an interesting question whether the widely shared intuition of moral philosophers that it is permissible to turn the trolley in various looping cases is an artefact of the fact that our initial exposure to this type of case has been so strongly influenced by Thomson's inauguration of the Loop Case in the context of the Trolley Case. Perhaps the history of this branching, looping, spinning, revolving problem in moral philosophy would have gone very differently if the first version of a looping trolley case to which moral philosophers had been exposed was the Loop-Bridge Case that makes its debut in this article. Had that been our first looping case, and had it been presented in the context of the Bridge Case [which is akin to Push -- MO], we might instead have intuited the impermissibility of turning the trolley in this and other looping cases. In fact, I presented the Bridge Case and then the Loop-Bridge Case to undergraduates in a recent lecture course. I did so before I exposed them to the Trolley Case. By a show of hands, a majority of those willing to venture an opinion deemed it impermissible to kill the one in the Loop-Bridge Case as well as the Bridge Case (though the majority was smaller in the Loop-Bridge Case). It is also significant that, as I have noted above, the opinion of those who took Marc Hauser's online Moral Sense Test was divided 50–50 regarding the permissibility of diverting the trolley in a version of a looping case highly similar to Thomson's. Presumably many of these respondents were internet users who had not been previously exposed to Thomson-influenced discussion of the trolley problem. Moreover, Hauser's statistics were drawn only from people's responses to the first case that they encountered in the online survey, thereby screening out the influence of other cases on their convictions regarding permissibility." (pp. 109-10)

Aaron Maltais said...

I wonder what effect it would have if one explained to philosopher respondents that the order of presentation tends to affect both the moral assessment of the cases and the justifications for these assessments. Given the opportunity to re-evaluate their positions in light of this information would there be any significant changing of positions?

If yes, then one might conclude that the problem with their "considered judgements" has been a lack of relevant information, or a lack of attention to relevant information (i.e., information about psychological tendencies once individuals make certain kinds of commitments). If this is the problem, it is not necessarily deeply troubling. However, if there is little meaningful re-evaluation (e.g., respondents assume that these are psychological tendencies that impact only others, or they can't escape the commitments ordering creates despite consciously trying), then...

Eric Schwitzgebel said...

Thanks for that, Mike! That's an interesting historical conjecture. On a related note, Fiery and I took a preliminary look at published papers on the trolley problem, to see if there is any correlation between the order in which the cases are presented and the author's final views. No strong relationship popped out at us, but it might be worth more systematic investigation. Any thoughts about how to test your historical conjecture empirically?

Eric Schwitzgebel said...

Interesting suggestion, Aaron. I don't know how that would turn out! One possibility is that such information would enhance the consistency motivation, which is part of our hypothesized cause of the effect, and so might actually amplify the effect. But it's also very plausible that it would decrease the effect. I wish I had an endless pool of professional philosophers, naive to my hypotheses, on which I could try out such ideas.

I'm not sure that we escape trouble if philosophers' responses stabilize under such conditions. There's a possibility that they're stabilizing in the wrong kind of way that doesn't track the moral truth (or whatever else the target of moral reflection about such cases is supposed to be).

candid_observer said...

This sort of evidence can be taken a variety of ways, and it raises a possibility I've sometimes thought about.

Isn't it quite plausible that our intuitions are, at some level, and perhaps even frequently, simply incoherent? Might it not be that we have to come at our moral principles in some other way, or at least with support from other kinds of considerations, to resolve the contradictions?

Honestly, if you think of these intuitions as being derived from our "moral psychology", and you think of our moral psychology as coming about through our peculiar evolution as a species, why should the apparent incoherence in these intuitions surprise us? Why should we believe on theoretical grounds that such intuitions so evolved would be coherent, or could easily be rendered coherent?

Eric Schwitzgebel said...

@ Candid Observer: I couldn't agree more! I make the case regarding the metaphysics of mind in particular in "The Crazyist Metaphysics of Mind", available in draft on my website.

martin said...

I'm surprised by the large ordering effects found here. This research poses a big challenge for the view that case intuitions are reliable as long as we focus on the intuitions professional ethicists have.

I'd like to see a continuation of surveys that probe deeper. The "ethicist case intuitionist" can, in light of this, specify that the favored method deals only with those intuitions that survive some process of cognitive bias-debugging, and then add "gaining knowledge about the ordering effects you're discovering" as a step in that process.

A related research problem: how to find a large enough pool of subjects for each such refined step.

In the long run some centralized intuition survey system makes sense. One with a large and open bank of test questions (translated into multiple languages) that can easily be rearranged and rerun by different researchers. And a lot of parameters to tweak, like timing, specific wordings, names and genders for people in the narratives and so on. That could speed up replication and test new patterns a lot.

Eric Schwitzgebel said...

That would be pretty cool, Martin! Maybe the Bourget & Chalmers PhilPapers survey is a start?

Ezio Di Nucci said...

I found a similar order effect by presenting people either with the classic Bystander scenario first and then Thomson's recent self-sacrifice version, or with the self-sacrifice version first and then the classic Bystander. The intuition about the permissibility of Bystander disappears.

Paper's here:

Now that I see your results, I wish I had asked professional philosophers too! ;-)


Eric Schwitzgebel said...

Very cool, Ezio!

Jeremy Pierce said...

What's the difference between (1) adopting the principle in some illegitimate post hoc way and (2) finding a philosophical principle that must underlie our moral judgments in the way that particularists say we should? My suspicion is that the difference is whether particularism is true. If it is, then philosophers are just doing (2). If it's not, then they're doing (1). As a particularist, it struck me that you were trying to turn the proper method of doing ethical theory into a fallacy. But maybe I've missed something that's going on here.

seth edenbaum said...

The man who swings the axe is called the "Executioner"; the man who gives the order is called only "Governor". Officers send enlisted men to almost certain death but may not befriend them. Stanley Milgram’s 1963 experiments showed that physical proximity, of authority to subject and subject to “learner”, was the main factor in affecting the level of obedience to the command to cause harm.

Every subset of human society that has "solved" the trolley problem, has done so by separation and orders of taboo. Common sense morality is the morality of equals. Abstract logic will not succeed in changing that.

"The trolley problem has morphed to include many variations, and even its earlier forms included discussion of “the doctrine of double effect” and of intentionality, treating the act of killing to save lives as an unintentional consequence of a moral act. Utilitarianism doesn’t need to nit-pick about intention; it’s simple enough to say “I chose to kill 3 people to save 10”. But the focus on intention denies full moral existence to those who’ve been killed, and I know of no study asking people to imagine themselves as the fat man and asking if they’re able to intuit a moral difference between being pushed by a man’s hand or by a turnstile with someone’s finger on the switch."

Logicians are not very observant. Observation is empiricism, not rationalism.

Sam Rickless said...

I haven't read the paper, and look forward to reading it. Here are a couple of initial thoughts.

First, Switch and Push can be explained using either the DDA or the DDE. Push involves doing harm and, possibly, intending harm (depending on how the case is described). Switch involves diverting a harmful causal sequence, and so, for Foot, counts as morally equivalent to allowing harm, but it also involves merely foreseeing harm. You might want to consider whether some sort of interference between the doctrines might have some connection to your results. More generally, you might want to test intuitions by looking at cases that more clearly isolate doing/allowing from intending/foreseeing, and vice-versa.

Second, in answer to Gabe, you write: "The trend, if I recall, was for ethicists reporting general agreement with consequentialism to show larger order effects than for people reporting a general agreement with deontology or virtue ethics." This should not surprise us. DDA and DDE are, after all, non-consequentialist principles. Those who are already drawn to non-consequentialism are less likely to be manipulable wrt order of presentation.

Third, diversion in Switch involves an action, but the diversion aspect of Switch may be more salient when Switch is presented first, while the action aspect of Switch may be more salient when Push (which involves an action) is presented first. This might explain why subjects see more of a difference between Push and Switch when Switch is presented first, but see less of a difference between Push and Switch when Push is presented first.

Eric Schwitzgebel said...

Thanks for the continuing comments, folks!

@ Jeremy: We're careful not to define "rationalization" in terms of truth. One might, through post-hoc rationalization, happen upon the truth and even defend that truth in a sound way. However, if the endorsement of principles is highly labile, very much influenced by covert factors that the philosopher respondent wouldn't endorse, then that suggests that a consistent adherence to principles isn't driving most philosophers' judgments about such cases and that the principles are to a large extent recruited post-hoc to justify judgments influenced by those unendorsed factors.

Eric Schwitzgebel said...

@ Sam: There's a lot of interesting stuff in that comment!

* We don't disagree that the difference between Switch and Push can be explained by factors other than DDE. Fiery and others have found, for example, that the physical contact involved in Push seems to be an important influence too. In a way, this should make it only *more* surprising that DDE endorsements are so variable with Switch/Push scenario order, since in principle differences in contact or DDA could be invoked instead.

* We have some more explicitly DDA scenarios too, and we ask a version of the DDA principle at the end of the test, but the results were messy. Our two DDA scenarios both had order effects but in the *opposite* direction, limiting the value of those scenarios in the conclusions we can draw about the endorsement of a DDA principle.

* I can see the value in trying to completely isolate DDE, etc., from all other possible factors that might explain the judgment. I've tried my hand at it, but it turns out to be nigh upon impossible if the other factors that one is trying to keep constant include factors that might have a psychological influence (such as physical proximity, gruesomeness, and temporal order of the deaths, etc., etc.!) regardless of their plausibility as moral principles. Also, as one might expect, matching the scenarios more carefully reduces the effect size. Part of what Fiery and I were trying to do was amp up the effect size through covert factors like order and contact/no-contact differences and then see if changes in DDE endorsement followed in consequence *despite* the fact of other differences that could be invoked.

* On consequentialists vs. non-consequentialists: I'm not sure I accept your reasoning. A consequentialist could be firmly committed and stable contra DDE in much the same way as a deontologist could be committed and stable pro DDE, I'd think -- at least hypothetically. I'm not seeing the asymmetry. I think Josh Greene (Fiery's post-doc supervisor at the time) was surprised to see the trend against consequentialists. You can see why Greene's work might suggest, at least superficially, that consequentialists would have more stable judgments.

* That third hypothesis is an interesting suggestion. It might be worth thinking of ways to test that conjecture!

steven said...

Unscientific, but the result of my poll here on the subject was interesting.

Sam Rickless said...

Hi Eric,

This is the first part of a two-part post.

* On DDA (Vest/Oxygen cases in the paper): What we want, ideally, is a set of hypotheses that can explain not just a subset of responses, but all responses. But it may be that, when you consider the survey responses, there is no clean way to explain them all. If this is so, then the best solution might be to treat survey responses in the way that the method of reflective equilibrium says they should be treated, namely as merely the first input in a process of theoretical adjustment that involves a significant amount of thought.

* On the isolation of DDE from DDA. What's wrong with the cases in Quinn (1989)? Terror Bomber and Strategic Bomber do not differ re DDA because they both involve doing harm. They do differ re DDE, because the intentions in the two cases differ. Direction of Resources and Guinea Pig do not differ re DDA because they both involve allowing harm. But they differ re DDE because the intentions in the two cases differ.

* On the Vest and Oxygen cases in the paper
1. I don't think these are the best cases for identifying DDA-type intuitions. One big problem is that the cases involve self-sacrifice. The idea of losing one's own life can play psychological havoc with one's moral theorizing, possibly in unpredictable ways. Anecdotally, I find that students (at least initially) react more positively to doing harm as a means to saving one's own life than they do to doing harm as a means to saving the life of another.
2. Here's another problem. Snatching a life vest from someone else is not, from the point of view of, say, Foot's theory, a case of doing harm at all. It's more like *enabling* harm. Think of it this way. Snatching away the life vest does not initiate a causal sequence that leads to death; rather, there is already a causal sequence (involving the envelopment of the diver in the water) that is potentially harmful to the diver, a sequence that the presence of the life vest is staving off. When the life vest is removed from the diver, the potentially harmful sequence that was already in existence is allowed to take its course. The removal of an obstacle to a potentially harmful causal sequence is what Foot thinks of as "enabling harm". For Foot, enabling harm is morally equivalent to allowing harm, not to doing harm; though, again, because enabling (like diverting) involves an action, initial intuitions of subjects can be messy. The same point can be made about the Oxygen case. Indeed, both Vest and Oxygen are more similar to Foot's classic Respirator case than they are to the kinds of examples that are typically used to illustrate harm-doings.

Sam Rickless said...

This is the second part of the two-part post.

* Still on the Vest and Oxygen cases in the paper
3. Something else you should consider is why you obtained such different ordering effects in Vest and Oxygen. Here the actual description of the cases could be very important. I read over your paper quickly, but did not find the descriptions. Without the descriptions, one can only speculate. One potential difference is that a life vest is an object possession of which under certain sorts of circumstances can be understood as sufficient for conferring rights of ownership, whereas one's mouth being attached to an oxygen line (especially one that does not issue from one's own scuba gear) is less easily understood as sufficient for conferring rights of ownership to the oxygen line (or to the gear to which it is attached). More generally, in DDA cases, one wants to limit potential confounds produced by intuitions about ownership. Think, for example, of what happens to intuitions in the classic Trolley case if the switch is described as owned by the lone person on the side track. Or think of what happens to intuitions in allowing/allowing scenarios (like Thomson's health-pebble case) if the resource that could be given to one or to five is owned by the one.

* On consequentialism/non-consequentialism. I think that consequentialists are not, in fact, as stable contra DDA/DDE as non-consequentialists are stable pro DDA/DDE (at least in one sense of "stable"). Consequentialists, like all people who have their moral heads screwed on straight, recognize the existence of strong non-consequentialist intuitions (even in themselves). They may have sophisticated theoretical arguments designed to discount these intuitions, but the fact that they have these intuitions makes it possible to generate ordering effects. When presented with a Push case first, their non-consequentialist intuitions are triggered. When presented with a Switch case first, their non-consequentialist intuitions are not triggered, and when presented with a Push case second, their non-consequentialist intuitions are likely to take a back seat to the theory to which they officially subscribe (which tells them that the cases don't differ morally). By contrast, when non-consequentialists are presented with a Push case second, their non-consequentialist intuitions do not take a back seat to any theory that tells them that the cases don't differ morally. So, when Push is presented second, whereas there is something that dampens the impact of non-consequentialist intuitions in consequentialists, there is nothing similar to dampen the non-consequentialist intuitions in non-consequentialists.



Eric Schwitzgebel said...

Wow, Sam, a lot of very interesting things to think about in these comments! I have to dash off now, so I don't have time to give them the careful consideration they deserve. I'll be back in touch later. Family vacation coming up, so it might be next week....

Eric Schwitzgebel said...

Sorry for the slow replies, folks. I was supposed to be on family vacation over the weekend but I got stuck with a high fever instead!


* I don't disagree, really, with the reflective equilibrium point. I think it remains an open question how much weight should be put on professional philosophers' "intuitive judgments" about moral scenarios. I find the epistemology here rather murky. It seems to me too sanguine simply to assume that somehow everything will work out right with enough reflection. (Not that you are necessarily assuming that either.)

* While I agree that one can probably manipulate DDE and DDA independently, it is very difficult to create a comprehensible scenario that varies *only* in DDE or DDA and not in anything else that would plausibly be morally or psychologically relevant to the evaluation. Consider Terror Bomber vs. Strategic Bomber, as you suggest. To really make them parallel, except with DDE, you would want to balance the consequences -- so presumably both TB and SB would be bombing the factory. And you would also want them to have equivalent *knowledge* of those consequences -- both the strategic consequences and the terror consequences. So TB and SB would presumably have to have different desires behind their outwardly similar behavior with similar known consequences -- TB would have to value the terror consequences more than the strategic consequences, and SB the reverse, despite their not having any difference of opinion about the effect on the outcome of the war. But then it sounds like they'll have differences in character -- and wouldn't we want to be evaluating the action rather than the underlying character...? You can see how it gets thorny fast! So it's almost inevitable that different principles and psychological factors will get tangled together. Our aim was not a clean test of DDE (or DDA) alone, but rather to see whether philosophers would show smaller order effects on their judgments and principle endorsements.


Eric Schwitzgebel said...

Sam [cont]

* I don't disagree about the different ways in which DDA can be played out (though Foot's account seems a bit strained to me). Again, the main point was not a clean test of DDA -- since I'm not sure that would be practicable, anyway, in a scenario comprehensible to non-specialists -- but simply to see how stable people's responses would be across variations in order, and whether philosophers' responses would be more stable. In retrospect, a different set of DDA-related pairs would probably have worked better!

* Your hypothesis about the explanation of the different directions for Vest/Oxygen is interesting. I wouldn't rule that out. If you're curious about the exact wording of the stimuli, see the Supplementary Online Material (link at the bottom of the page on my website that contains the paper abstract).

* You might be right about consequentialists vs. non-consequentialists. That would fit with the trend in the data.
