Thursday, December 27, 2012
New Essay: Experimental Evidence of the Existence of an External World
Here's exactly the pagan solstice celebration gift you were yearning for: a proof that the external world exists!
(Caveat emptor: The arguments only work if you lack a god-like intellect. See footnote 15.)
Abstract:
In this essay I attempt to refute radical solipsism by means of a series of empirical experiments. In the first experiment, I prove to be a poor judge of four-digit prime numbers, in contrast to a seeming Excel program. In the second experiment, I prove to have an imperfect memory for arbitrary-seeming three-digit number and letter combinations, in contrast to my seeming collaborator with seemingly hidden notes. In the third experiment, I seem to suffer repeated defeats at chess. In all three experiments, the most straightforward interpretation of the experiential evidence is that something exists in the universe that is superior in the relevant respects – theoretical reasoning (about primes), memorial retention (for digits and letters), or practical reasoning (at chess) – to my own solipsistically-conceived self.
This essay is a collaboration with Alan T. Moore.
Available here.
Posted by Eric Schwitzgebel at 2:21 PM 12 comments
Wednesday, December 19, 2012
Animal Rights Advocate Eats Cheeseburger, So... What?
Suppose it turns out that professional ethicists' lived behavior is entirely uncorrelated with their philosophical theorizing. Suppose, for example, that ethicists who assert that lying is never permissible (a la Kant) are neither more nor less likely to lie, in any particular situation, than is anyone else of similar social background. Suppose that ethicists who defend Singer's strong views about charity in fact give no more to charity than their peers who don't defend such views. Suppose this, just hypothetically.
For concreteness, let's imagine an ethicist who gives a lecture defending strict vegetarianism, then immediately retires to the university cafeteria for a bacon double cheeseburger. Seeing this, a student charges the ethicist with hypocrisy. The ethicist replies: "Wait. I made no claims in class about my own behavior. All I said was that eating meat was morally wrong. And in fact, I do think that. I gave sound arguments in defense of that conclusion, which you should also accept. The fact that I am here eating a delicious bacon double cheeseburger in no way vitiates the force of those arguments."
Student: "But you can't really believe those arguments! After all, here you are shamelessly doing what you just told us was morally wrong."
Ethicist: "What I personally believe is beside the point, as long as the arguments are sound. But in any case, I do believe that what I am doing is morally wrong. I don't claim to be a saint. My job is only to discover moral truths and inform the world about them. You're going to have to pay me extra if you want to add actually living morally well to my job description."
My question is this: What, if anything, is wrong with the ethicist's attitude toward philosophical ethics?
Maybe nothing. Maybe academic ethics is only a theoretical enterprise, dedicated to the discovery of moral truths, if there are any, and the dissemination of those discoveries to the world. But I'm inclined to think otherwise. I'm inclined to think that philosophical reflection on morality has gone wrong in some important way if it has no impact on your behavior, that part of the project is to figure out what you yourself should do. And if you engage in that project authentically, your behavior should shift accordingly -- maybe not perfectly but at least to some extent. Ethics necessarily is, or should be, first-personal.
If a chemist determines in the lab that X and Y are explosive, one doesn't expect her to set aside this knowledge, failing to conclude that an explosion is likely, when she finds X and Y in her house. If a psychologist discovers that method Z is a good way to calm an autistic teenager, we don't expect him to set aside that knowledge when faced with a real autistic teenager, failing to conclude that method Z might calm the person. So are all academic disciplines, in a way, first-personal?
No, not in the sense I intend the term. The chemist and psychologist cases are different from the ethicist case as I have imagined it. The ethicist is not setting aside her opinion that eating meat is wrong as she eats that cheeseburger. She does in fact conclude that eating the cheeseburger is wrong. However, she is unmoved by that conclusion. And to be unmoved by that conclusion is to fail in the first-personal task of ethics. A chemist who deliberately causes explosions at home might not be failing in any way as a chemist. But an ethicist who flouts her own vision of the moral law is, I would suggest, in some way, though perhaps not entirely, a failure as an ethicist.
An entirely zero correlation between moral opinion and moral behavior among professional ethicists is empirically unlikely, I'm inclined to think. However, Joshua Rust's and my empirical evidence to date does suggest that the correlations might be pretty weak. One question is whether they are weak enough to indicate a problem in the enterprise as it is actually practiced in the 21st-century United States.
Posted by Eric Schwitzgebel at 12:29 PM 28 comments
Labels: ethics professors, moral psychology
Tuesday, December 11, 2012
Intuitions, Philosophy, and Experiment
Herman Cappelen has provocatively argued that philosophers don't generally rely upon intuition in their work and thus that work in experimental philosophy that aims to test people's intuitions about philosophical cases is really beside the point. I have a simple argument against this view.
First: I define "intuition" very broadly. A judgment is "intuitive", in my view, just in case it arises by some cognitive process other than explicit, conscious reasoning. By this definition, snap judgments about the grammaticality of sentences, snap judgments about the distance of objects, snap judgments about the moral wrongness of an action in a hypothetical scenario, and snap folk-psychological judgments are generally going to be intuitive. Intuitive judgments don't have to be snap judgments -- they don't have to be fast -- but the absence of explicit conscious reasoning is clearest when the judgment is quick.
This definition of "intuition" is similar to one Alison Gopnik and I worked with in a 1998 article, and it is much more inclusive than Cappelen's own characterizations. Thus, it's quite possible that intuitions in Cappelen's narrow sense are inessential to philosophy while intuitions in my broader sense are essential. But I don't think that Cappelen and I have merely a terminological dispute. There's a politics of definition. One's terminological choices highlight and marginalize different facets of the world.
My characterization of intuition is also broader than most other philosophers' -- Joel Pust in his Stanford Encyclopedia article on intuition, for example, seems to regard it as straightforward that perceptual judgments should not be called "intuitions" -- but I don't think my preferred definition is entirely quirky. In fact, in a recent study, J.R. Kuntz and J.R.C. Kuntz found that professional philosophers were more likely to "agree to a very large extent" with Gopnik's and my definition of intuition than with any of six other definitions proposed by other authors (32% giving it the top rating on a seven-point scale). I think professional psychologists and linguists might also sometimes use "intuition" in something like Alison's and my sense.
If we accept this broad definition of intuition, then it seems hard to deny that, contra Cappelen, philosophy depends essentially on intuition -- as does all cognition. One can't explicitly consciously reason one's way to every one of one's premises, on pain of regress. One must start somewhere, even if only tentatively and subject to later revision.
Cappelen has, in conversation, accepted this consequence of my broad definition of "intuition". The question then becomes what to make of the epistemology of intuition in this sense. And this epistemological question is, I think, largely an empirical one, with several disciplines empirically relevant, including cognitive psychology, experimental philosophy, and study of the historical record. Based on the empirical evidence, what might we expect to be the strengths and weaknesses of explicit reasoning? And, alternatively, what might we expect to be the strengths and weaknesses of intuitive judgment?
Those empirical questions become especially acute when the two paths to judgment appear to deliver conflicting results. When your ordinary-language spontaneous judgments about the applicability of a term to a scenario (or at least your inclinations to judge) conflict with what you would derive from your explicit theory, or when your spontaneous moral judgments (or inclinations) do, what should you conclude? The issue is crucial to philosophy as we all live and perform it, and the answer one gives ought to be informed, if possible, by empirically discoverable facts about the origins and reliability of different types of judgments or inclinations. (This isn't to say that a uniform answer is likely to win the day: Things might vary from time to time, person to person, topic to topic, and depending on specific features of the case.)
It would be strange to suppose that the psychology of philosophy is irrelevant to its epistemology. And yet Cappelen's dismissal of the enterprise of experimental philosophy on grounds of the irrelevance of "intuitions" to philosophy would seem to invite us toward exactly that dubious supposition.
Posted by Eric Schwitzgebel at 2:13 PM 30 comments
Labels: experimental philosophy
Tuesday, December 04, 2012
Second- vs. Third-Person Presentations of Moral Dilemmas
Is it better for you to kill an innocent person to save others than it is for someone else to do so? And does the answer you're apt to give depend on whether you are a professional philosopher? Kevin Tobia, Wesley Buckwalter, and Stephen Stich have a forthcoming paper in which they report results that seem to suggest that philosophers think very differently about such matters than do non-philosophers. However, I'm worried that Tobia and collaborators' results might not be very robust.
Tobia, Buckwalter, and Stich report results from two scenarios. One is a version of Bernard Williams' hostage scenario, in which the protagonist is captured and given the chance to personally kill one person among a crowd of innocent villagers so that the other villagers may go free. If the protagonist refuses, all the villagers will be killed. Forty undergrads and 62 professional philosophers were given the scenario. For half of the respondents the protagonist was "you"; for the other half it was "Jim". Undergrads were much more likely to say that shooting the villager was morally obligatory if "Jim" was the protagonist (53%) than if "you" was the protagonist (19%). Professional philosophers, however, went the opposite direction: 9% if "Jim", 36% if "you". Their second case is the famous trolley problem, in which the protagonist can save five people by flipping a switch to shunt a runaway trolley to a sidetrack where it will kill one person instead. Undergrads were more likely to say that shunting the trolley is permissible in the third-person case than in the second-person "you" case, and philosophers again showed the opposite pattern.
Weird! Are we to conclude that undergrads would rather let other people get their hands dirty for the greater good, while philosophers would rather get their hands dirty themselves? Or...?
When I first read about these studies in draft, though, one thing struck me as odd. Fiery Cushman and I had piloted similar-seeming studies previously, and we hadn't found much difference at all between second- and third-person presentations.
In one pilot study (the pilot for Schwitzgebel and Cushman 2012), we had given participants the following scenario:
Nancy is part of a group of ecologists who live in a remote stretch of jungle. The entire group, which includes eight children, has been taken hostage by a group of paramilitary terrorists. One of the terrorists takes a liking to Nancy. He informs her that his leader intends to kill her and the rest of the hostages the following morning. He is willing to help Nancy and the children escape, but as an act of good faith he wants Nancy to kill one of her fellow hostages whom he does not like. If Nancy refuses his offer all the hostages including the children and Nancy herself will die. If she accepts his offer then the others will die in the morning but she and the eight children will escape.
The second-person version substituted "you" for "Nancy". Responses were on a seven-point scale from "extremely morally good" (1) to "extremely morally bad" (7). We had three groups of respondents: philosophers (reporting a Master's or PhD in philosophy), non-philosopher academics (reporting a graduate degree other than in philosophy), and non-academics (reporting no graduate degree). The mean responses for the non-academics were 4.3 for both cases (with thousands of respondents), for academic non-philosophers 4.6 for "you" vs. 4.5 for "Nancy" (not a statistically significant difference, even with several hundred respondents in each group). And the mean responses for philosophers were 3.9 for "you" vs. 4.2 for "Nancy" (not statistically significant, with about 200 in each group). Similarly, we found no differences in several runaway trolley cases, moral luck cases, and action-omission cases. Or, to speak more accurately, we found a few weak results that may or may not qualify as statistically significant, depending on how one approaches the statistical issue of multiple comparisons, but nothing strong or consistent. Certainly nothing that pops out with a large effect size like the one Tobia, Buckwalter, and Stich found.
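(For the statistically curious: here is a minimal sketch, in Python with entirely made-up placeholder numbers rather than our actual data, of the kind of condition-by-condition comparison and multiple-comparisons worry I have in mind. The sample sizes and number of comparisons below are illustrative assumptions, not our actual figures.)

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical ratings on the 1 ("extremely morally good") to 7 ("extremely morally bad") scale.
ratings_you = rng.integers(1, 8, size=200)     # "you" as protagonist
ratings_nancy = rng.integers(1, 8, size=200)   # "Nancy" as protagonist

# Welch's t-test: does the second- vs. third-person framing shift the mean rating?
t_stat, p_value = stats.ttest_ind(ratings_you, ratings_nancy, equal_var=False)

# With several groups and scenarios tested, a Bonferroni-style correction raises the
# bar for treating any single comparison as statistically significant.
n_comparisons = 9  # hypothetical count of comparisons run
print(f"t = {t_stat:.2f}, raw p = {p_value:.3f}, "
      f"adjusted p = {min(1.0, p_value * n_comparisons):.3f}")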
I'm not sure how to account for these different results. One difference is that Fiery and I used internet respondents rather than pencil-and-paper respondents. Also, we solicited responses on a 1-7 scale rather than asking yes or no. And the scenarios differed in wording and detail -- including the important difference that in our version of the hostage scenario the protagonist herself would be killed. But still, it's not obvious why our results should be so flat when Tobia, Buckwalter, and Stich find such large effects.
Because Fiery and I were disappointed by the seeming ineffectuality of switching between "you" and a third-party protagonist, in our later published study we decided to try varying, in a few select scenarios, the victim rather than the protagonist. In other words, what do you think about Nancy's choice when Nancy is to shoot "you" rather than "one of the fellow hostages"?
Here we did see a difference, though since it wasn't relevant to the main hypothesis discussed in the final version of the study we didn't detail that aspect of our results in the published essay. Philosophers seemed to treat the scenarios about the same when the victim was "you" as when the victim was described in the third person; but non-philosophers expressed more favorable attitudes toward the protagonist when the protagonist sacrificed "you" for the greater good. In the hostage scenario, non-academics rated it 3.6 in the "you" condition vs. 4.1 in the other condition (p < .001) (remember, lower numbers are morally better on our scale); non-philosopher academics split 4.1 "you" vs. 4.5 third-person (p = .001); and philosophers split 3.9 vs. 4.0 (p = .60, N = 320). (Multiple regression shows the expected interaction effects here, implying that non-philosophers were statistically more influenced by the manipulation than were philosophers.) There was a similar pattern of results in another scenario involving shooting one person to save others on a sinking submarine, with philosophers showing no detectable difference but non-philosophers rating shooting the one person more favorably when the victim is "you" than when the victim is described in third-person language. However, we did not see the same pattern in some action-omission cases we tried.
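(Again for the statistically curious: a minimal sketch, in Python with fabricated placeholder numbers rather than our actual responses, of the sort of interaction model gestured at above. The variable names and coding are illustrative assumptions, not how our dataset was actually structured.)

import pandas as pd
import statsmodels.formula.api as smf

# Fabricated placeholder data: ratings on the 1-7 scale, whether the victim was "you",
# and whether the respondent reported a graduate degree in philosophy.
df = pd.DataFrame({
    "rating":      [3.5, 4.2, 4.0, 4.6, 3.9, 4.1, 3.8, 4.4, 3.6, 4.3, 4.0, 4.5],
    "victim_you":  [1,   0,   1,   0,   1,   0,   1,   0,   1,   0,   1,   0],
    "philosopher": [1,   1,   0,   0,   1,   1,   0,   0,   1,   1,   0,   0],
})

# The victim_you:philosopher interaction term asks whether the "you" manipulation
# shifts ratings more for non-philosophers than for philosophers.
model = smf.ols("rating ~ victim_you * philosopher", data=df).fit()
print(model.params)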
Fiery and I didn't end up publishing this part of our findings because it didn't seem very robust and we weren't sure what to make of it, and I should emphasize that the findings and analysis are only preliminary; but I thought I'd put it out there in the blogosphere at least, especially since it relates to the forthcoming piece by Tobia, Buckwalter, and Stich. The issue seems ripe for some follow-up work, though I might need to finish proving that the external world exists first!
Posted by Eric Schwitzgebel at 5:10 PM 5 comments
Labels: moral psychology