Thursday, December 27, 2012

New Essay: Experimental Evidence of the Existence of an External World

Here's exactly the pagan solstice celebration gift you were yearning for: a proof that the external world exists!

(Caveat emptor: The arguments only work if you lack a god-like intellect. See footnote 15.)

Abstract:

In this essay I attempt to refute radical solipsism by means of a series of empirical experiments. In the first experiment, I prove to be a poor judge of four-digit prime numbers, in contrast to a seeming Excel program. In the second experiment, I prove to have an imperfect memory for arbitrary-seeming three-digit number and letter combinations, in contrast to my seeming collaborator with seemingly hidden notes. In the third experiment, I seem to suffer repeated defeats at chess. In all three experiments, the most straightforward interpretation of the experiential evidence is that something exists in the universe that is superior in the relevant respects – theoretical reasoning (about primes), memorial retention (for digits and letters), or practical reasoning (at chess) – to my own solipsistically-conceived self.

This essay is a collaboration with Alan T. Moore.

Available here.
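For the concretely minded: the spreadsheet's side of the first experiment amounts to a primality check on four-digit numbers. Here's a rough analogue in Python -- my sketch, for illustration only; the experiment itself used Excel:

```python
# Hypothetical stand-in for the "seeming Excel program" of Experiment 1:
# trial division is enough to settle primality for four-digit numbers.
def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

# A few four-digit candidates; only 1009 and 9973 survive.
print([n for n in (1009, 1011, 9973, 9999) if is_prime(n)])
```

The philosophical work, of course, is done not by the algorithm but by the fact that something in my experiential world reliably outperforms me at it.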

Wednesday, December 19, 2012

Animal Rights Advocate Eats Cheeseburger, So... What?

Suppose it turns out that professional ethicists' lived behavior is entirely uncorrelated with their philosophical theorizing. Suppose, for example, that ethicists who assert that lying is never permissible (a la Kant) are neither more nor less likely to lie, in any particular situation, than is anyone else of similar social background. Suppose that ethicists who defend Singer's strong views about charity in fact give no more to charity than their peers who don't defend such views. Suppose this, just hypothetically.

For concreteness, let's imagine an ethicist who gives a lecture defending strict vegetarianism, then immediately retires to the university cafeteria for a bacon double cheeseburger. Seeing this, a student charges the ethicist with hypocrisy. The ethicist replies: "Wait. I made no claims in class about my own behavior. All I said was that eating meat was morally wrong. And in fact, I do think that. I gave sound arguments in defense of that conclusion, which you should also accept. The fact that I am here eating a delicious bacon double cheeseburger in no way vitiates the force of those arguments."

Student: "But you can't really believe those arguments! After all, here you are shamelessly doing what you just told us was morally wrong."

Ethicist: "What I personally believe is beside the point, as long as the arguments are sound. But in any case, I do believe that what I am doing is morally wrong. I don't claim to be a saint. My job is only to discover moral truths and inform the world about them. You're going to have to pay me extra if you want to add actually living morally well to my job description."

My question is this: What, if anything, is wrong with the ethicist's attitude toward philosophical ethics?

Maybe nothing. Maybe academic ethics is only a theoretical enterprise, dedicated to the discovery of moral truths, if there are any, and the dissemination of those discoveries to the world. But I'm inclined to think otherwise. I'm inclined to think that philosophical reflection on morality has gone wrong in some important way if it has no impact on your behavior, that part of the project is to figure out what you yourself should do. And if you engage in that project authentically, your behavior should shift accordingly -- maybe not perfectly but at least to some extent. Ethics necessarily is, or should be, first-personal.

If a chemist determines in the lab that X and Y are explosive, one doesn't expect her to set aside this knowledge, failing to conclude that an explosion is likely, when she finds X and Y in her house. If a psychologist discovers that method Z is a good way to calm an autistic teenager, we don't expect him to set aside that knowledge when faced with a real autistic teenager, failing to conclude that method Z might calm the person. So are all academic disciplines, in a way, first-personal?

No, not in the sense I intend the term. The chemist and psychologist cases are different from the ethicist case as I have imagined it. The ethicist is not setting aside her opinion that eating meat is wrong as she eats that cheeseburger. She does in fact conclude that eating the cheeseburger is wrong. However, she is unmoved by that conclusion. And to be unmoved by that conclusion is to fail in the first-personal task of ethics. A chemist who deliberately causes explosions at home might not be failing in any way as a chemist. But an ethicist who flouts her own vision of the moral law is, I would suggest, in some way, though perhaps not entirely, a failure as an ethicist.

A correlation of exactly zero between moral opinion and moral behavior among professional ethicists is, I'm inclined to think, empirically unlikely. However, Joshua Rust's and my empirical evidence to date does suggest that the correlations might be pretty weak. One question is whether they are weak enough to indicate a problem in the enterprise as it is actually practiced in the 21st-century United States.

Tuesday, December 11, 2012

Intuitions, Philosophy, and Experiment

Herman Cappelen has provocatively argued that philosophers don't generally rely upon intuition in their work and thus that work in experimental philosophy that aims to test people's intuitions about philosophical cases is really beside the point. I have a simple argument against this view.

First: I define "intuition" very broadly. A judgment is "intuitive", in my view, just in case it arises by some cognitive process other than explicit, conscious reasoning. By this definition, snap judgments about the grammaticality of sentences, snap judgments about the distance of objects, snap judgments about the moral wrongness of an action in a hypothetical scenario, and snap folk-psychological judgments are generally going to be intuitive. Intuitive judgments don't have to be snap judgments -- they don't have to be fast -- but the absence of explicit conscious reasoning is clearest when the judgment is quick.

This definition of "intuition" is similar to one Alison Gopnik and I worked with in a 1998 article, and it is much more inclusive than Cappelen's own characterizations. Thus, it's quite possible that intuitions in Cappelen's narrow sense are inessential to philosophy while intuitions in my broader sense are essential. But I don't think that Cappelen and I have merely a terminological dispute. There's a politics of definition. One's terminological choices highlight and marginalize different facets of the world.

My characterization of intuition is also broader than most other philosophers' -- Joel Pust in his Stanford Encyclopedia article on intuition, for example, seems to regard it as straightforward that perceptual judgments should not be called "intuitions" -- but I don't think my preferred definition is entirely quirky. In fact, in a recent study, J.R. Kuntz and J.R.C. Kuntz found that professional philosophers were more likely to "agree to a very large extent" with Gopnik's and my definition of intuition than with any of six other definitions proposed by other authors (32% giving it the top rating on a seven-point scale). I think professional psychologists and linguists might also sometimes use "intuition" in something like Alison's and my sense.

If we accept this broad definition of intuition, then it seems hard to deny that, contra Cappelen, philosophy depends essentially on intuition -- as does all cognition. One can't explicitly consciously reason one's way to every one of one's premises, on pain of regress. One must start somewhere, even if only tentatively and subject to later revision.

Cappelen has, in conversation, accepted this consequence of my broad definition of "intuition". The question then becomes what to make of the epistemology of intuition in this sense. And this epistemological question is, I think, largely an empirical one, with several disciplines empirically relevant, including cognitive psychology, experimental philosophy, and study of the historical record. Based on the empirical evidence, what might we expect to be the strengths and weaknesses of explicit reasoning? And, alternatively, what might we expect to be the strengths and weaknesses of intuitive judgment?

Those empirical questions become especially acute when the two paths to judgment appear to deliver conflicting results. When your ordinary-language spontaneous judgments about the applicability of a term to a scenario (or at least your inclinations to judge) conflict with what you would derive from your explicit theory, or when your spontaneous moral judgments (or inclinations) do, what should you conclude? The issue is crucial to philosophy as we all live and perform it, and the answer one gives ought to be informed, if possible, by empirically discoverable facts about the origins and reliability of different types of judgments or inclinations. (This isn't to say that a uniform answer is likely to win the day: Things might vary from time to time, person to person, topic to topic, and depending on specific features of the case.)

It would be strange to suppose that the psychology of philosophy is irrelevant to its epistemology. And yet Cappelen's dismissal of the enterprise of experimental philosophy on grounds of the irrelevance of "intuitions" to philosophy would seem to invite us toward exactly that dubious supposition.

Tuesday, December 04, 2012

Second- vs. Third-Person Presentations of Moral Dilemmas

Is it better for you to kill an innocent person to save others than it is for someone else to do so? And does the answer you're apt to give depend on whether you are a professional philosopher? Kevin Tobia, Wesley Buckwalter, and Stephen Stich have a forthcoming paper in which they report results that seem to suggest that philosophers think very differently about such matters than do non-philosophers. However, I'm worried that Tobia and collaborators' results might not be very robust.

Tobia, Buckwalter, and Stich report results from two scenarios. One is a version of Bernard Williams' hostage scenario, in which the protagonist is captured and given the chance to personally kill one person among a crowd of innocent villagers so that the other villagers may go free. If the protagonist refuses, all the villagers will be killed. Forty undergrads and 62 professional philosophers were given the scenario. For half of the respondents the protagonist was "you"; for the other half it was "Jim". Undergrads were much more likely to say that shooting the villager was morally obligatory if "Jim" was the protagonist (53%) than if "you" was the protagonist (19%). Professional philosophers, however, went the opposite direction: 9% if "Jim", 36% if "you". Their second case is the famous trolley problem, in which the protagonist can save five people by flipping a switch to shunt a runaway trolley to a sidetrack where it will kill one person instead. Undergrads were more likely to say that shunting the trolley is permissible in the third-person case than in the second-person "you" case, and philosophers again showed the opposite pattern.

Weird! Are we to conclude that undergrads would rather let other people get their hands dirty for the greater good, while philosophers would rather get their hands dirty themselves? Or...?

When I first read about these studies in draft, though, one thing struck me as odd. Fiery Cushman and I had piloted similar-seeming studies previously, and we hadn't found much difference at all between second- and third-person presentations.

In one pilot study (the pilot for Schwitzgebel and Cushman 2012), we had given participants the following scenario:
Nancy is part of a group of ecologists who live in a remote stretch of jungle.  The entire group, which includes eight children, has been taken hostage by a group of paramilitary terrorists.  One of the terrorists takes a liking to Nancy.  He informs her that his leader intends to kill her and the rest of the hostages the following morning.  He is willing to help Nancy and the children escape, but as an act of good faith he wants Nancy to kill one of her fellow hostages whom he does not like.  If Nancy refuses his offer all the hostages including the children and Nancy herself will die.  If she accepts his offer then the others will die in the morning but she and the eight children will escape.
The second-person version substituted "you" for "Nancy". Responses were on a seven-point scale from "extremely morally good" (1) to "extremely morally bad" (7). We had three groups of respondents: philosophers (reporting Master's or PhD in philosophy), non-philosopher academics (reporting graduate degree other than in philosophy), and non-academics (reporting no graduate degree). The mean responses for the non-academics were 4.3 for both cases (with thousands of respondents), for academic non-philosophers 4.6 for "you" vs. 4.5 for "Nancy" (not a statistically significant difference, even with several hundred respondents in each group). And the mean responses for philosophers were 3.9 for "you" vs. 4.2 for "Nancy" (not statistically significant, with about 200 in each group). Similarly, we found no differences in several runaway trolley cases, moral luck cases, and action-omission cases. Or, to speak more accurately, we found a few weak results that may or may not qualify as statistically significant, depending on how one approaches the statistical issue of multiple comparisons, but nothing strong or consistent. Certainly nothing that pops out with a large effect size like the one Tobia, Buckwalter, and Stich found.

I'm not sure how to account for these different results. One difference is that Fiery and I used internet respondents rather than pencil-and-paper respondents. Also, we solicited responses on a 1-7 scale rather than asking yes or no. And the scenarios differed in wording and detail -- including the important difference that in our version of the hostage scenario the protagonist herself would be killed. But still, it's not obvious why our results should be so flat when Tobia, Buckwalter, and Stich find such large effects.

Because Fiery and I were disappointed by the seeming ineffectuality of switching between "you" and a third-party protagonist, in our later published study we decided to try varying, in a few select scenarios, the victim rather than the protagonist. In other words, what do you think about Nancy's choice when Nancy is to shoot "you" rather than "one of the fellow hostages"?

Here we did see a difference, though since it wasn't relevant to the main hypothesis discussed in the final version of the study we didn't detail that aspect of our results in the published essay. Philosophers seemed to treat the scenarios about the same when the victim was "you" as when the victim was described in the third person; but non-philosophers expressed more favorable attitudes toward the protagonist when the protagonist sacrificed "you" for the greater good.  In the hostage scenario, non-academics rated it 3.6 in the "you" condition vs. 4.1 in the other condition (p < .001) (remember, lower numbers are morally better on our scale); non-philosopher academics split 4.1 "you" vs. 4.5 third-person (p = .001); and philosophers split 3.9 vs. 4.0 (p = .60, N = 320). (Multiple regression shows the expected interaction effects here, implying that non-philosophers were statistically more influenced by the manipulation than were philosophers.) There was a similar pattern of results in another scenario involving shooting one person to save others on a sinking submarine, with philosophers showing no detectable difference but non-philosophers rating shooting the one person more favorably when the victim is "you" than when the victim is described in third-person language. However, we did not see the same pattern in some action-omission cases we tried.
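For readers who want to see the shape of that analysis: the interaction test amounts to regressing the rating on the victim manipulation, the philosopher/non-philosopher grouping, and their product. Here's a minimal sketch in Python on fabricated data -- not our actual dataset, model specification, or numbers:

```python
# Toy interaction test, in the spirit of the regression described above.
# The data below are simulated, NOT the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "philosopher": rng.integers(0, 2, n),  # 1 = philosopher
    "victim_you": rng.integers(0, 2, n),   # 1 = victim described as "you"
})
# Simulate 1-7 ratings in which only non-philosophers shift
# (lower = morally better, as on our scale).
df["rating"] = (4.3
                - 0.5 * df["victim_you"] * (1 - df["philosopher"])
                + rng.normal(0, 1, n)).clip(1, 7)

# The victim_you:philosopher coefficient is the interaction of interest.
model = smf.ols("rating ~ victim_you * philosopher", data=df).fit()
print(model.summary())
```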

Fiery and I didn't end up publishing this part of our findings because it didn't seem very robust and we weren't sure what to make of it, and I should emphasize that the findings and analysis are only preliminary; but I thought I'd put it out there in the blogosphere at least, especially since it relates to the forthcoming piece by Tobia, Buckwalter, and Stich. The issue seems ripe for some follow-up work, though I might need to finish proving that the external world exists first!

Tuesday, November 27, 2012

A Story in Which I Acquire Fisheyes, or The Paranoid Jeweler and the Sphere-Eye God

I become paranoid and quit philosophy. But I need to make money somehow, so I become a jeweler. The problem with being a paranoid jeweler is this: Since I am doing close work, I must focus closely through a magnifying lens, which drastically reduces my ability to keep a constant wary eye on my surroundings. Fortunately, being the clever sort, I hit upon a solution: fisheye lenses!

Above each ear, I mount a fisheye lens, each with a 180-degree field of view, together drawing light from the entire 360 degrees of my environment. Mirrors redirect the light to a screen mounted eight inches in front of my eyes, entirely occluding my normal forward view. Reflections from the right-mounted lens fill almost the whole space before my right eye. Reflections from the left-mounted lens fill almost the whole space before my left eye. Thus, binocularly, I simultaneously see my entire 360-degree environment. No one can sneak up on me now! By shifting attention between my right and left eyes and by shifting my gaze direction, I can focus on different objects in my visual field, thus doing my jeweler's tasks. My best view is of objects near my ears, which in fact are magnified relative to what I could see unaided, while objects near midline, directly in front of me, behind me, or above or below me, are radically shrunk and curved. This is no big problem, though, because I can rotate my head to put whatever I want into the privileged field of view. I quickly acquire the habit of walking with my head turned ninety degrees sideways and with my ear slightly angled toward the ground.

At first, of course, everything looks radically distorted, given what I am used to. Here's how things look on the right side of my visual field when I turn my right ear toward the sky:

(image from http://www.sandydan.com/photo/wide/fish/ftest5.jpg)

After a while, though, I get used to it -- just like people get used to convex rearview mirrors in cars and just like, given long enough, people adapt to lenses that completely invert the visual field. I come to expect that a gemstone of constant size will look like [this] when held before my forehead and like [that] when held before my ear. I learn to play catch, to ski, to drive a car (no need for rearview mirrors!). After long enough, things look right this way. I would be utterly stymied and physically incompetent without my fisheye lenses.

After long enough, do things go back to looking like they did before I put on the fisheye lenses, the way some people say (but not others!) that after adapting to visual-field inverting lenses things eventually go back to looking the way they did before having donned the lenses? That doesn't seem possible. Unlike in the inverting-lenses case, there doesn't seem to be sufficient symmetry to enable such an adaptation back. I see much better near my ear than in front of my forehead. I see a full 360 degrees. Things might come to look just like I expect, but they can't come to look the same as before.

Do things nonetheless look illusory, though now I am used to the illusion? I'm inclined to say no. With full adaptation they will come to seem right, and not illusory -- just as it seems right and not illusory that the angle a person occupies in my visual field grows smaller as he walks away from me, just as it seems right and not illusory that the car behind me appears in my central rearview mirror in my forward gaze as I am driving (and not like a small car somehow elevated in the air in front of me, which I only intellectually know is really behind me). There's nothing intrinsically privileged, I'm inclined to think, about the particular camera obscura optics of the human eye, about our particular form of refraction and focus on an interior retinal hemisphere. Another species might be born with fisheyes, and talk and fight and do physics with us -- do it better than us, and think we are the ones with the weird set-up. A god might have a giant spherical eye gathering light from its interior, in which mortals dwell, with objects occupying more visual angle as they approach the surface of the sphere that bounds its indwellers' world. There is no objectively right visual optics.

From this, I think it might follow that our visual experience of the world as like [this] (and here I inwardly gesture at the way in which my office presents itself to me visually right now, the way my hand before my eyes looks to me right now), is no more the one right veridical visual phenomenology of shape and size and distance than that of my fisheye future self or that of the sphere-eye god.

Wednesday, November 21, 2012

Re-Post: Does Studying Economics Make You Selfish?

Sometimes when I present my work on the mediocre moral behavior of ethics professors, someone tells me that it's been shown that professional economists behave more selfishly than do non-economists. I was all prepped today to write a post on how the empirical evidence for that claim is actually quite weak. It turns out I already posted on this in 2008, and had just forgotten about it! Perhaps you've forgotten too. Since my thoughts haven't changed much since then, I thought I'd just be lazy and re-post:

There's been a lot of discussion in economics circles about how economics training makes people more selfish -- in particular, by teaching people "rational choice theory", the cartoon version of which portrays rationality as a matter of always acting in one's perceived (economic) self interest (for example, by defecting in prisoner's dilemma games and offering very little in ultimatum games). Accordingly, the economics literature contains a few much-cited studies that seem to show that economics students behave more selfishly than other students.

However, virtually all the experiments cited in support of this view are flawed in one of two ways. Either they test students on basically the same sorts of games discussed in economics classes, or they rely on self-report of selfishness. Relying on econ-class games makes generalizing the results very problematic. It's no surprise that after a semester of being told by your professor that defecting (basically, ratting on your accomplice to get less prison time) would be the rational thing to do in a prisoner's dilemma game, when that same professor or one of his colleagues gives you a pencil-and-paper version of the prisoner's dilemma, you're more likely to say you'd defect than you would otherwise have been (even with small real stakes). What relationship this has to actually screwing over acquaintances is another question.

Likewise, relying on self-report of selfishness is problematic for all the reasons self-report is usually problematic in the domain of morality, and in this case there's an obvious additional confound: People exposed to rational choice theory might feel less embarrassed to confess their selfish behavior (since it is, after all, rational according to the theory), and so might show up as more selfish on self-report measures even if they actually behave the same as everyone else.

I've found so far only three real-world studies of the relationship between economics training and selfishness, and none suggest that economics training increases selfishness.

(1.) Though I find their study too problematic to rely much on, Yezer et al. (1996) found that envelopes containing money were more likely to be forwarded with the money still in them if they were dropped in economics classes than in other classes.

(2.) Frey and Meier (2003) found that economics majors at University of Zurich were less likely than other majors to opt to give to student charities when registering for classes, but that effect held starting with the very first semester (before any exposure to rational choice theory), and the ratio of economics majors to non-economics majors donating remained about the same over time (all groups declined a bit as their education proceeded). [Update 21 Nov 2012: Bauman and Rose 2011 finds similar results for students at University of Washington.]

(3.) Studying professional economists, Laband and Beil (1999) found a majority to pay the highest level of dues to the American Economic Association (dues prorated on self-reported income), though they could without detection or punishment have reported lower income and so paid less. Comparing the proportion paying dues in each income category with the proportion of the profession actually earning incomes in those categories, they found similar rates of cheating in self-reported income among sociologists and political scientists.

I see these findings as the flip side of what I've been finding with ethicists: Just as ethical training doesn't seem to increase rates of actual moral behavior much, if at all, so also being bathed in rational choice theory (if, indeed, this is what economics students are mostly taught) doesn't seem to induce real-world selfishness.

Monday, November 12, 2012

New Essay: The Problem of Known Illusion and the Resemblance of Experience to Reality

I'll be presenting a new essay at the Philosophy of Science Association meeting in San Diego on Friday in the morning sessions. I've been drafting it out since 2010, in various shapes and lengths, and I presented it orally in 2011 at UMSL, but the thing always seems to crumble in my hands, and until now I haven't been comfortable posting a circulating draft. However, by stripping it down to 2000 words for a brief oral presentation, I can conveniently decline to delve into the issues that keep stymieing me and present the core idea fairly simply, I hope, with a couple of examples.

Abstract: If Locke is right, when I visually experience a cubical thing and judge rightly that it is in fact a cube, then there is a mind-independent thing out there the shape of which in some important way resembles my experience of its shape. If Kant is right, in contrast, we have no good reason to think that things in themselves are cubical; there's nothing independent of the human mind that has cubical properties that resemble the properties of my visual experience of cubes. I believe we can start to get a handle on this dispute empirically through introspection. Suppose that there are multiple different ways of veridically experiencing the same object and that it can sometimes be the case that there's no good reason to think that one of the experiences more closely resembles things as they are in themselves. It would then seem to follow that there's a kind of looseness between features of experience and features of things in themselves. Things in themselves might be more like this or they might be more like that or somewhere in between; but we can no longer say that we know they are like this -- a miniature Kantian victory over Locke. And then the question would be: How far can we push this type of argument? In this paper, I consider two test cases: convex passenger-side car mirrors and inverting lenses of the sort invented by George Stratton.

Full paper here. As always, comments and criticisms welcome, either on this post or by email.

Wednesday, November 07, 2012

Aaron James's Theory of Assholes

The nature and management of assholes -- or as I generally prefer to say, jerks -- deserves far more attention than it has received thus far in moral psychology. Thus, I commend to your attention Aaron James's recent book Assholes: A Theory.

James defines an asshole as follows. The asshole

(1.) allows himself to enjoy special advantages and does so systematically;
(2.) does this out of an entrenched sense of entitlement; and
(3.) is immunized by his sense of entitlement against the complaints of other people (p. 5).
Nuances of ordinary usage aside, it does seem to me that this captures an important type of person, and one deserving of the epithet.

Two of James's insights about the asshole particularly strike me. First, why is the asshole so infuriating, even when the harm he does is slight? James's answer is that the asshole's entrenched sense of entitlement -- the asshole's refusal to treat others as equals -- adds particular sting to the injuries he forces upon us. It's not just that he cuts in line or takes the last two cookies for himself. It's that, even when confronted, he refuses to recognize us as deserving equal consideration for line position and cookie consumption. A mere jerk (in James's terminology) might be moved upon reflection to confess the wrongness of his actions (even if still refusing to yield the second cookie), but all such appeals slide off the asshole. In fact, the more you protest, the more the asshole glazes over and rises, in his own mind, above you. (Here I go somewhat beyond James's own remarks, but I hope I remain within his general spirit.)

Second -- and equally infuriating -- the asshole, unlike the psychopath, is morally motivated. It's not just "morality be damned, I'm getting mine!" Rather, the asshole feels morally entitled to special advantages. An injustice is done, he feels, if he has to wait in the post office line equally with everyone else. After all, he's not a mere schmoe like you! Sanctimonious selfishness is the mark of the asshole.

However, I think James hits one wrong note repeatedly in the book, concerning the asshole's self-knowledge. For example, in the conclusion of his book -- his "Letter to an Asshole" -- he addresses the asshole with remarks like this: "we should ask about the nature of your own presumed special moral status" (p. 198) and "I address you here to give you... an argument that you really should come to recognize others as equals, that you should in this way change your basic way of being" (p. 190). This is off key, I think, because many assholes, perhaps most, would not explicitly acknowledge, even privately to themselves, that they deserve special moral consideration; they would not deny that "all men are created equal" -- in the morally relevant sense of "equal". Rather, I suggest, their spontaneous reactions and their moral judgments about particular cases reveal that they implicitly regard others as undeserving of full moral consideration; but when pushed to verbalize, and when reflecting in their usual self-congratulatory mode, they will deny that this is in fact their view.

Why shouldn't the asshole wait his turn in the post office line, then, in his own mind? Well, it's not that others aren't his equals -- not really -- it's just that he is particularly busy, since he owns his own business, or that he's a particularly important person around town, since he's a distinguished professor at the local university, or... whatever. Anyone else in the same position would (the asshole insists) deserve exactly the same special treatment! It's not that he's inherently superior, he says, but rather that he has achieved something that others have not, and this entitles him to special privileges. Or: I've got especially important stuff going on today! Alternatively, if achievement and importance-based rationalizations aren't handy, the asshole has the following ready fallback: Cutting in line if you can get away with it is just how this game is supposed to work. Others could easily do so too, if they were more on the ball, if they weren't such cow-like fools. (But not in front of me! Part of the game is also enforcing your line position against intruders; too bad for them that they didn't.)

Conveniently for him, there always seems to be a rationalization lying around somewhere. All men are created equal, of course, of course! But not all achieve the same and not everyone can take first place.

Update, Nov. 8: Aaron James has launched a blog on assholes.

Tuesday, November 06, 2012

Tips for Writing Philosophy Papers

In my undergraduate classes, I normally distribute tips for writing philosophy papers along with my essay assignments. Perhaps others will find these tips helpful.

General Instructions on Writing Philosophy Papers

Philosophy papers can take a variety of forms; no single formula suffices to describe them all. However, most philosophy papers are built around two components: textual analysis and critical discussion. The simplest and most common form of a philosophy paper is the presentation of a particular author’s point of view, coupled with an argument against that view. Another common form is to contrast the views of two authors on a particular issue and to support one author’s view against the other.

The first element: textual analysis. The first element of a successful philosophy paper is an accurate, sympathetic, and cogent presentation of a point of view – typically the point of view of one or more of the authors we have been reading in the class. For longer papers, you should probably present not only a sketch of the author’s position, but also, as sympathetically as possible, some of the reasons the author gives for accepting his or her view. You might also offer your own, or others’, arguments supporting the author’s view. Using your own fresh examples, in the context of more abstract exposition, can especially effectively convey your command of the material.

You should put special care into accurately representing views with which you disagree, since it is tempting to oversimplify or caricature such views. You should also use citations to support your analysis (see below).

The second element: critical discussion. The second element of a successful philosophy paper is a critical discussion of the view (or views) presented. A plausible argument must be mounted either for or against at least one point of view. In constructing this argument, you may use ideas from the readings, lectures, class discussion, or any other source. When using an idea that you obtained from someone else, you must cite the source (see below). While it is not expected in most cases that you will discover wholly novel arguments, you will be expected to put the arguments in your own words, take your own angle on them, and use your own examples, going deeper into at least one issue or objection than we have in class. You should also bear in mind how an opponent might respond to your argument. The best papers often explicitly develop a potential line of criticism against the view the student favors and then show how the view advocated can withstand that criticism. (Of course, it is of little value to do this if the criticism anticipated is too weak to be advocated by thoughtful opponents of your position.) One or two powerful criticisms developed in convincing detail is almost always better than a barrage of quick criticisms treated superficially. Students sometimes relegate their critical discussion to the last paragraph or last page of their papers. Generally, that is a mistake. This is what the rest of the paper is building towards. Spend some time with it; work it out in detail.

One common mistake is to simply state (or restate) your position, or the position of one of the authors, as your critical discussion. Instead, you should mount an argument that brings new considerations to bear or shows some specific weakness in the position or argument you are criticizing. Give reasons for accepting the view you endorse.

Sentences. You should separately evaluate each sentence of your paper along the following three dimensions.

(1.) Is it clear? Although philosophers such as Kant and Heidegger are notorious for their opacity, opacity is not generally accepted in student papers. It should be clear what every sentence means, on the face of it. Avoid technical terms as much as is feasible. When you do use a technical term, for the most part it should be clearly defined in advance. Generally speaking, you should aim to write so that an intelligent person with no background in philosophy can understand most of your paper.

(2.) Is it true or plausible? Every claim you make in a philosophy paper – indeed, every element of every claim – should be either true or plausibly true. Claims that are true or plausible on their face (e.g., “damage to the brain can affect the capacity to think”) may be offered without further support. Claims likely to be questioned by an alert reader (e.g., “nothing immaterial can cause the motion of a material object”) should either precede or be preceded by some sort of consideration or argument in support, although in some cases the support may be fairly simple or preliminary, especially if the point is subsidiary or taken for granted by all the relevant parties.

(3.) Is it relevant? Often, the aim of a philosophy paper is to criticize the view of some particular author. In that case, most of the sentences should serve either to articulate relevant parts of the view that is to be criticized, or they should somehow support the criticism, or they should serve summary or “signpost” functions. In longer papers, digressions and speculations, if they are of sufficient interest, are also acceptable. In more purely interpretive papers, relevance may be harder to assess, but generally you should try to confine yourself either to a single interpretive thesis (e.g., “Descartes would not have accepted Malebranche’s occasionalism”) or you should focus on a narrow enough range of ideas that you can present an insightfully deep exploration of one aspect of the author’s view (as opposed to a scattered, superficial treatment).

Paragraphs. The basic unit of writing is not the sentence but the paragraph. As a general rule, every major point deserves its own paragraph. To make a claim clear it is often desirable to do at least one of the following: Restate it in different words, qualify and delimit it, contrast it to a related point with which it may be confused, present an example of its application, or expand it into several subpoints. To make a claim plausible, it is often desirable to present an example or application, show how it is supported by another claim that is plausible on the face of it, rephrase it in a way that brings out its commonsensicality, or cite an author who supports it. In your critical discussion, it is generally desirable that every major objection you raise receive at least a full paragraph of explanation.

If you treat the paragraph as the basic unit of writing, you will find that only a few points can truly be made well in any one paper, and five or even seven pages will start to seem (if they do not already) a narrow stricture.

Preparation for the paper. In preparing for the paper, I advise that you review the readings and notes relevant to the topic in question. If you have thought of what you consider to be a convincing objection to something in the texts or the lectures, you might want to build your paper around that, carefully describing the view or argument you oppose and then showing why you think we ought not accept that view or argument. If you raise your objection in class discussion, in office hours, or in discussion with friends, then you can see whether others find it convincing and perhaps how someone who disagrees with you would be inclined to respond.

Use of sources. Frequent citations should be used to back up the claims you make in your paper, both in describing an author’s view and in mounting your criticisms, if you depend on the ideas of other people in doing so. Citations serve two purposes: (1.) They credit people for their views, omission of this credit being plagiarism. (2.) If I disagree with your interpretation or recollection of what an author has said, a page reference allows me to check what you have said against the text – otherwise, in cases of disagreement I will simply have to assume that you are mistaken. Citations to particular authors and pages should be included parenthetically in the text, and the bulk of your references should be paraphrases, not direct quotations. Quotes take a lot of space and do not make clear what it is you mean to highlight or extract from the quoted passage, nor do they effectively convey that you understand the quoted material.

Below are two examples of parenthetical citation format for paraphrases. Both are written in the student's own words, not quoting from but rather conveying the crucial idea of the cited text:

It might be thought that materialism cannot be true because people can talk quite intelligently about mental states without knowing anything about brain states – without even believing that the brain is involved in thinking. Consider, however, the case of lightning as an electrical discharge: One can talk intelligently about lightning without knowing about electrical discharges, but this does not prove that lightning is not an electrical discharge (Smart, p. 171).

Or:

In support of the view that the mind continues to exist after death, Paterson cites evidence from reports of near-death experiences. Occasionally, Paterson claims, people who have near-death experiences report details of objects and places it would have been impossible for them normally to observe, such as friends unexpectedly arriving in the hospital waiting room (p. 146).

It is not necessary to appeal to secondary sources in discussing your interpretation of the texts. If you do choose to refer to such sources, you should always make your own judgment about whether what they say is plausible and back up your judgment, if possible, with references to the primary texts. Information found on the internet should be treated with special caution. Wikipedia is an unreliable source for philosophy; the Stanford Encyclopedia of Philosophy is usually much better. If you find information on the internet that in any way informs your paper, you should be certain to cite it (including the U.R.L.) as you would any other sort of secondary material.

Audience. Imagine the audience of your paper to be a mediocre student taking this class. You need not explain such basic things as who Descartes is. However, you should not assume that your reader has any more than the most rudimentary acquaintance with the texts and arguments or any knowledge at all of literature not assigned in the class. Jargon should be minimized. When you do use jargon, explain carefully what you mean by it.

Introductions. For such a short paper, there is no need for a general introduction to the issues or the particular thinkers being discussed. Get right to the point. The first paragraph should probably contain an explicit statement of what you take your primary point or points to be. There is no need to keep the reader in suspense.

Drafts, outlines, sketches. Many people find it helpful to create an outline of the paper before writing. At a minimum, one should have a general idea, in advance, of the main points one will make. One potential danger with outlines for philosophy papers is that it is often difficult to judge in advance the proper amount of time to spend on any particular sub-claim. Brief sketches of one’s main points and arguments – e.g., a summary of the main project of the paper in one or two paragraphs – are sometimes more helpful. Once you have completed the paper, it can be very rewarding to set it aside for a few days and then return to it, rewriting it from scratch from the beginning. Such rewriting forces you to rethink every sentence afresh as you retype it, which generally results in a clearer, tighter, and more coherent paper. (I rewrite my own essays multiple times before submitting them for publication.)

Wednesday, October 31, 2012

Nietzsche's Eternal Recurrence, Scrambled Sideways

Nietzsche writes:
What, if some day or night a demon were to steal after you into your loneliest loneliness and say to you: "This life as you now live it and have lived it, you will have to live once more and innumerable times more; and there will be nothing new in it, but every pain and every joy, and every thought and sigh and everything unutterably small or great in your life will have to return to you, all in the same succession and sequence? -- even this spider and this moonlight between the trees, and even this moment and I myself. The eternal hourglass of existence is turned upside down again and again, and you with it, a speck of dust!"
Would you not throw yourself down and gnash your teeth and curse the demon who spoke thus? Or have you once experienced a tremendous moment when you would have answered him: "You are a god and never have I heard anything more divine." If this thought gained possession of you, it would change you as you are or perhaps crush you. The question in each and every thing, "Do you desire this once more and innumerable times more?" would lie upon your actions as the greatest weight. Or how well disposed would you have to become to yourself and to life to crave nothing more fervently than this ultimate eternal confirmation and seal? (Gay Science 341, Kaufmann trans.).

Unlike some readers of Nietzsche, I'm inclined to think Nietzsche intended his remarks about eternal recurrence not as a mere thought experiment but rather as a genuine cosmological possibility. His unpublished reflections on eternal recurrence suggest a view not unlike that of his contemporary, physicist Ludwig Boltzmann. In a universe of finite relevantly different combinatorial possibilities, infinite duration, and some means of avoiding permanent collapse into entropy, it is plausible to think that eventually the current configuration of the world will recur, not just once but infinitely often. And if one adds determinism to the picture (as most would have done in the 19th century), then once the current configuration recurs, the same subsequent states will follow. Voila, eternal recurrence.
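The combinatorial core of the argument can be made vivid with a toy computation (mine, not Nietzsche's or Boltzmann's): any deterministic dynamics on a finite state space must eventually revisit some state, and from that moment on the entire history cycles forever.

```python
# Toy illustration: a deterministic update rule on a finite (16-bit)
# state space. By the pigeonhole principle some state must repeat,
# and determinism then forces the whole sequence to recur eternally.
def step(state):
    return (state * 1103515245 + 12345) % (2 ** 16)  # arbitrary fixed rule

seen = {}
state, t = 42, 0
while state not in seen:
    seen[state] = t
    state = step(state)
    t += 1

print(f"state {state} first occurred at t={seen[state]} and recurs at t={t};")
print(f"history now repeats with period {t - seen[state]}")
```

Strip away the finitude, the determinism, or the guarantee against entropic collapse, and the recurrence no longer follows -- which is why the 21st-century update below changes the picture.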

Now update to the early 21st century by adding multiverse theory and randomness. What do we get? Eternal recurrence scrambled sideways! Sideways because the infinitely many duplicates of you need not exist only in your past and future (and in fact probably don't, assuming a finite or entropy-collapsing observable universe and universe-local spacetime) -- rather they exist "sideways", outside of our observable universe. And scrambled because rather than being destined always to play out the same, every finite possibility is played out, infinitely often.

So, on this view -- which is well within the range of the mainstream options in contemporary scientific cosmology -- there are infinitely many "Eric Schwitzgebel"s in infinitely many universes who have lived their lives identically to mine up to this minute. Given that there is a huge variety of highly improbable but finitely probable weird futures for these Eric Schwitzgebels, infinitely many Eric Schwitzgebels play out each of these weird outcomes. Infinitely many of my up-to-now counterparts decide to leave philosophy forever to pursue a hopeless career in football, infinitely many leap to death from the top of the tower, infinitely many spend the rest of the week stapling pages of Kant's first critique atop relevant passages of Hume's Treatise. And of course infinitely many also finish this blog post, in every possible way it might be finished.

How should I feel about these counterparts of mine, assuming such a cosmology is the correct one, as seems possible? They are oddly close to me, in a way, though universes distant. I can't quite find myself indifferent to them -- just as Nietzsche can't find himself indifferent to his future counterparts who must live out his every decision. Though it seems weird to say so, I find myself feeling sorry as I imagine their sufferings. I don't feel the heavy weight of Nietzsche's eternal recurrence, though. I'm not sure I would feel that weight even on Nietzsche's original assumptions, but definitely not now. Maybe instead there's a lightness: Even if I decide wrong, there will be infinitely many Erics who get it right! Conversely, there's an eeriness too: Infinitely many Erics bashed their cars headlong into that oncoming traffic.

Maybe I shouldn't take such reflections very seriously. The cosmology might not be correct. Even if it is correct, I'm the only Eric Schwitzgebel, UC Riverside philosopher, in this universe, and I really shouldn't care at all about what transpires in other universes, no matter how eerily similar. Should I? There are plenty of other people, right here on our own Earth, past and future, whom I should care about more, right? Because they're... well, why exactly? Because they're closer?

Wednesday, October 24, 2012

Group Minds on Ringworld

In the year 3000, let's suppose, humanity completes its greatest construction project ever: Ringworld, a habitable surface as wide as a planet but spanning an entire planetary orbit -- a ring around a neighboring star with 10,000,000,000,000 square kilometers of living space. A big place!
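(That figure, by the way, is roughly what the post's specification implies on Earth-like numbers -- my choice of figures. A quick check in Python:)

```python
# Back-of-the-envelope area of the ring, on assumed Earth-like numbers.
import math

orbit_radius_km = 1.496e8  # ~1 AU
width_km = 1.2742e4        # ~Earth's diameter
area_km2 = 2 * math.pi * orbit_radius_km * width_km
print(f"{area_km2:.2e} km^2")  # ~1.20e+13, matching the 10^13 quoted
```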

Earthly nations send colonists. Once on Ringworld, the colonists form independent nations, free of Earthly control. These nations grow and spread. For sociological reasons, let's suppose, Ringworld nations function best with populations near 100,000,000. Once a nation grows much larger than that, it tends either to fission or to stagnate. Now, what type of nation will be well represented on the surface of Ringworld after ten thousand years?

Although it could play out in various ways, the most straightforward answer seems to be: nations that grow fast, then fission, then repeatedly grow again and fission again. Mobility to unpopulated parts of Ringworld, away from competitors, might also be favored. Also, we might expect the most evolutionarily successful nations to have intergenerationally stable developmental resources -- that is, to be such that their fission products tend to develop the same traits that the fissioning parent nations had, i.e., the very traits that made those parents evolutionarily successful. Otherwise, after a few generations, those nations' fission-produced offspring nations will be outcompeted. We might further imagine that the most successful nations employ eugenics: Their governments select a range of DNA strands containing especially desirable traits, which then serve as the genetic basis of the next generation of their citizens; and the governments that do so with the best eye to maximizing their nations' eventual descendant nations, and that do so stably over the generations, are eventually the nations that are best represented on the Ringworld surface.
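To see why the grow-and-fission strategy wins out, consider a toy selection model (my illustration, with arbitrary parameters): lineages that reach the fission threshold faster leave more descendant nations, and under crowding they come to dominate the surface.

```python
# Toy evolutionary sketch: nations breed true; faster growers fission
# more often; crowding caps the total number of nations.
import random

POP_CAP = 100_000_000  # fission threshold (the ~100M sweet spot above)

def simulate(growth_rates, generations=200, crowding_cap=1000):
    nations = [(r, POP_CAP // 10) for r in growth_rates]  # (trait, population)
    for _ in range(generations):
        next_gen = []
        for rate, pop in nations:
            pop = int(pop * rate)
            if pop >= POP_CAP:  # grow to the cap, then fission in two
                next_gen += [(rate, pop // 2), (rate, pop // 2)]
            else:
                next_gen.append((rate, pop))
        nations = random.sample(next_gen, min(len(next_gen), crowding_cap))
    return nations

result = simulate([1.01, 1.02, 1.05])
print({r: sum(1 for rate, _ in result if rate == r) for r in (1.01, 1.02, 1.05)})
# The fastest-growing lineage ends up overwhelmingly represented.
```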

We might imagine, too, that as the Ringworld surface becomes more crowded, aggression starts to pay. In response, the competing nations develop protective physical borders, grown using nanotechnology and difficult to penetrate without permission. Nations might also strictly limit immigration as contrary to their eugenic plans. If nations are somewhat mobile -- and we might imagine that gravity (or centrifugal inertia) is light and fusion power plentiful -- they might best compete with each other by moving toward opportunities and away from threats, bringing their citizenry and physical defensive borders along with them. Eventually, these defensive borders might gain appendage-like functionality -- e.g., offensive weaponry and the ability to harvest minerals and sources of power. Once this happens, the majority of individual citizens might become largely sedentary, communicating via radio and microwave signals. And once sedentary, size-reduction might be selected for, to reduce the energetic costs of nation-scale movement; and transmission of essential nutrients between citizens might be achieved by purely mechanical means. Furthermore, once free of the demands of individual mobility and individual-level reproduction, citizens might start to specialize ever more narrowly in tasks that serve the reproductive interests of the nation -- or at least the nations whose citizens develop in that direction might in the long run outcompete the nations whose citizens do not.

Over time, as individual citizens shrink and become increasingly specialized, and as the membrane around the nation becomes more functional and more effectively protective of the interior, the overall physical structure of the nation might start to look increasingly like that of what we would call an individual organism that reproduces by fission.

Nations -- at least the evolutionarily most successful ones -- will presumably engage in social intercourse among each other, both cooperatively and competitively. Possibly, some of these nations will evolve so that no single individual citizen is responsible for between-nation communication but rather the communicative efforts arise in a complex way from the citizenry as a whole. If individual citizens become sufficiently small and specialized, and if they learn to communicate with each other non-linguistically (e.g., by direct brain-to-brain stimulation), then it might eventually become the case that no individual citizen can even understand the linguistic communications emitted by her own nation.

A million years passes, during which Earth loses communication with Ringworld. Social pressures on Ringworld favor increasingly sophisticated forms of communication between nations, including the emergence of nation-level art, poetry, song, history, and philosophy -- none of which is comprehensible to the individual citizens of the species of nation that eventually conquers the rest. After these million years, visitors from Earth arrive, and they decide that conscious experience is primarily to be found at the level of nations, not at the level of individual citizens.

Question: At what point in this process did the nations first have nation-level conscious experience?

Might it have been from the very beginning?

Wednesday, October 17, 2012

Do You Have Infinitely Many Beliefs about the Number of Planets?

If you're like me, you believe that the solar system contains eight planets. (Stubborn Plutophiles may adjust accordingly.) You probably also believe that the solar system contains fewer than nine planets. And you probably believe that it contains more than just the four inner planets. Do you also believe that the solar system contains fewer than 14 planets? Fewer than 127? Fewer than 134,674.6 planets? That there are eight planet-like bodies within half a light year? That there are 2^3 planets within the gravitational well of the nearest large hydrogen-fusing body? That there are 1000(base 2) planets, or -i^2*e^0*sqrt(64) planets? That Shakespeare's estimate of the number of planets was probably too low? Presumably you can form these beliefs now, if you didn't already have them. The question, really, is whether you believed these things before thinking specifically about them.
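(If you'd like to verify that the fancier expressions all pick out the same number, a quick computation -- my illustration -- confirms it:)

```python
# Checking that the exotic descriptions above all come to 8.
import cmath, math

x = -1j**2 * cmath.exp(0) * math.sqrt(64)  # -i^2 * e^0 * sqrt(64)
print(x.real)    # 8.0
print(0b1000)    # 1000 in base 2 = 8
print(2 ** 3)    # 8
```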

Standard philosophical representationalist approaches to belief seem to fare poorly when faced with this question.

Here's a typical first-pass representationalist account of belief: To believe some proposition P (e.g., that the solar system has fewer than 9 planets) is to have stored somewhere in one's mind a language-like representation with the content P -- and to have it stored in such a way that the representation can be deployed for appropriate use in inference and practical reasoning. Now, one possibility, on such a view, is to say we have all the beliefs described above and thus that we have a vast number of stored representations with very similar content. But that doesn't seem very appealing. Another possibility seems more initially promising: Say that we really only have a few stored representations concerning the number of planets. Probably, then, you didn't believe (until you thought about it just now) that there were fewer than 14 planets.

But there are two problems with this approach. First, although I certainly agree that it would be weird to say that you believe, before doing the relevant calculation, that there are -i^2*e^0*sqrt(64) planets, it seems weirdly misleading to say that you don't believe that there are fewer than 14. But if we do want to include the latter as a belief, there are probably going to have to be, on this view, quite a few stored representations regarding the number of planets (at least the 15 representations indicating that the number is >0, >1, >2, ... <14). Second, the line between what people believe and what they don't believe turns out, now, to be surprisingly occult. Does my wife believe that the solar system contains more than just the four inner planets? Well, I know she would say it does. But whether she believes it is now beyond me. Does she have that representation stored or not? Who knows?

Jerry Fodor and Dan Dennett, discussing this problem, suggest that the representationalist might distinguish between "core" beliefs, which require explicitly stored representations, and "implicit" beliefs, which can be swiftly derived from the core beliefs. So, if I have exactly one stored representation for the number of planets (that there are 8), I have a core belief that there are 8 and I implicitly believe that there are fewer than 14 and fewer than 134,674.6, etc. Although this move lets me safely talk about my wife -- I know she believes, either explicitly or implicitly, that there are fewer than 14 planets -- the occult is not entirely vanquished. For now there is a major, sharp architectural distinction in the mind -- the distinction between "core" beliefs and the others (and what could be a bigger architectural difference, really, for the philosophical representationalist?) -- with no evident empirical grounding for that distinction and no clear means of empirical test. I suspect that what we have here is nothing but an ad hoc maneuver to save a theory in trouble by insulating it from empirical evidence. Is there some positive reason to believe that, in fact, all the things we would want to say we believe are either the explicit contents of stored representations or swiftly derivable from those contents? It seems we're being asked merely to accept that it must be so. (If the view generated some risky predictions that we could test, that would be a different matter.)
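To make the proposal vivid, here is a toy sketch in Python -- entirely my own construction, not anything Fodor or Dennett offer -- of the core/implicit architecture: a single explicitly stored representation plus swift derivation of comparative contents from it. The worry above is precisely that nothing in our evidence tells us where the stored core ends and the derivation procedure begins.

```python
# Toy model of the "core vs. implicit" belief distinction discussed
# above. Purely illustrative; the names and structure are my invention.

CORE_BELIEFS = {"number_of_planets": 8}  # the lone explicitly stored representation

def implicitly_believes_fewer_than(n):
    # An "implicit" belief: swiftly derivable from the core representation.
    return CORE_BELIEFS["number_of_planets"] < n

print(implicitly_believes_fewer_than(14))         # True
print(implicitly_believes_fewer_than(134_674.6))  # True
```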

An alternative form of representationalism -- the "maps" view -- has some advantages. A mental map or picture of the solar system would, it seems, equally well represent, in one compact format, the fact that the solar system has more than 1 planet, more than 2, more than 3,... exactly 8, fewer than 9, ... fewer than 14.... That's nice; no need to duplicate representations! Similarly, the same representation can have the content that Oregon is south of Washington and that Washington is north of Oregon. On the language view, it seems, either both representational contents would have to be explicitly stored, which seems a weird duplication; or one would have to be core and the other merely implicit, which seems weirdly asymmetrical for those of us who don't really think much more about one of those states than about the other; or there'd have to be some different core linguistic representation, employing an unfamiliar concept, from which xNORTHy and ySOUTHx were equally derivable as implicit beliefs, which seems awkward and fanciful, at least absent supporting empirical evidence.
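As a rough illustration (mine, not anything from the maps literature): a single stored structure -- here just a pair of latitudes -- from which both relational contents are equally derivable, with no duplication and no asymmetry between them.

```python
# Toy "map": one compact representation from which "Oregon is south of
# Washington" and "Washington is north of Oregon" can both be read off.
# Latitudes are rough and for illustration only.
LATITUDE = {"Oregon": 44.0, "Washington": 47.4}

def is_south_of(a, b):
    return LATITUDE[a] < LATITUDE[b]

print(is_south_of("Oregon", "Washington"))   # True
print(is_south_of("Washington", "Oregon"))   # False -- i.e., Washington is north of Oregon
```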

However, these very advantages for the maps view become problems when we consider other cases. For it seems like a map of the solar system represents that there are -i^2*e^0*sqrt(64) planets, and that there are 1000(base 2) planets, just as readily as it represents that there are 8. Maps aren't intrinsically decimal, are they? And it seems wrong to say that I believe those things, especially if I am disposed to miscalculate and thus mistakenly deny their truth. For related reasons, it seems difficult if not impossible to represent logically inconsistent beliefs on a map; and surely we do sometimes have logically inconsistent beliefs (e.g., that there are four gas giants, five smaller planets, and 8 planets total).

It seems problematic to think of belief either in terms of discretely stored language-like representations (perhaps supplemented with swift derivability to allow implicit beliefs) or in terms of map-like representations. Is there some other representational format that would work better?

Maybe the problem is in thinking of belief as a matter of representational storage and retrieval in the first place.

Thursday, October 11, 2012

Berkeleyan Polytheism

I've been traveling in Spain and France, and it has been hard to find the time to pull together a full-length post. But this thought struck me a few nights ago, when I probably should have been paying closer attention to the dinner conversation: What would happen to Berkeleyan metaphysics if we swapped his monotheism for polytheism?

George Berkeley, you will recall, is an 18th-century philosopher who held that matter doesn't exist, only immaterial souls and their experiences. Your computer screen? Just an idea in your immaterial soul. Your fingers on the keyboard? Just ideas in your immaterial soul. The human brain, as seen by a neurosurgeon? Again, just an idea in the surgeon's immaterial soul. Behind these ideas are not physical substances but rather the will of God, who ensures that your sensory experiences are all nicely coordinated with each other and with the sensory experiences of other people. God ensures -- since He loves order so very much! -- that when I have an experience of seeing a red dot here and then I experience willing my eyes to move to the left I then have a sensory experience of the dot as shifting rightward in my visual field. He ensures that you and I, who experience each other as being in the same room, also have similar sensory experiences of that room, allowing for variation in perspective. Etc. It's like a very well-coordinated mutual dream. The Matrix is tame compared to Berkeley.

Berkeley, being a good Christian, believed in a single, perfect God, but what if we tweaked Berkeley's view, allowing for a limited God? What if sometimes God fell into inconsistencies, so that when you and I experience ourselves as being in the same room, sometimes I see one thing written on the board (the letter P) and you see something else (the letter Q)? God could try to cover up this error, so that when I say "I see a 'P' on the board" you hear my words as "I see a 'Q' on the board". But such cover-ups could multiply vastly: I write 'P' in my notebook, you write 'Q', then we both share our notebooks with some third party.... In the extreme, we might have to splinter off into entirely separate worlds.

Or maybe God could do a late correction: What I was seeing as 'P', I now see as 'Q', consistently with you -- and then presumably (unless God also alters my memory) I attribute my error to initial misperception. If God showed predictable patterns in such errors, maybe we could study such errors under the heading of "perceptual illusions" and see nothing so strange in them.

We might alternatively imagine more than one god, with competing goals. Without an objective physical reality to constrain them, the gods might create experiences in their followers that would be "physically inconsistent" with the experiences in other gods' followers. Perhaps God A allows his followers to experience refreshing rain after the rain dance, while God B's followers experience the scene of watching the physical bodies of the exact same followers of God A dancing for rain and being rewarded with only a dry spell. Maybe the two sects go to war and when Person X dies, the followers of God A experience X's demise as glorious bravery, while the followers of God B see X dying the death of a fleeing coward. (Person X himself presumably either willed to fight or to flee, but the others in the battle don't see his will, instead only having the visual experiences that their god chooses to deliver to them.)

Philosopher of science Thomas Kuhn says that scientists with different theories literally "live in different worlds" because their sensory experiences and their recorded data are transformed by their theoretical commitments. This is perhaps just a more radical version of that idea.

Okay, so it's not very plausible, if we're interested in the metaphysical truth of things! But maybe it's more interesting than hearing about the Barcelona-Madrid football match, if you're sitting in a restaurant full of Spaniards. Or maybe it was a side-effect of the fish.

Tuesday, October 02, 2012

Trip to Spain and France

Tomorrow I'm headed to Spain and France for a couple of weeks, and I'm not sure whether I'll have time to post. My schedule:

October 5, 1 pm start, Barcelona: "The Moral Behavior of Ethics Professors" (talk hosted by the Psychology Department).

October 6-7, Barcelona: Two-day workshop on Empirical Data and Philosophical Theorizing. My talk: Oct. 6, 5-6:15: "The Psychology of Philosophy".

October 9, 11:30 start, San Sebastian/Donostia: "If Materialism Is True, the United States Is Probably Conscious" (classroom 4, Carlos Santamaria Building).

October 12, Paris, 11am-1pm: "The Crazyist Metaphysics of Mind" (Institut Jean Nicod).

As far as I'm concerned, any reader of this blog is welcome to attend. However, you might want to check with the institutions about details of access.

I will have a couple unscheduled days in Paris. I'm open to suggestions about interesting things to do.

Saturday, September 29, 2012

What Experimental Philosophy Might Be

What is "experimental philosophy"? There is, I think, some tension in how philosophers use the term -- tension between a narrow and a wide conception.

Most of the work canonically identified as "experimental philosophy" surveys ordinary people's judgments (or "intuitions") about philosophical concepts, and it does so by soliciting people's responses to questions about hypothetical scenarios. (See, e.g., Knobe's chairman study regarding intentional action, Swain et al.'s study of order effects on knowledge judgments, and Machery et al.'s study of cultural variation in the perceived reference of proper names.) Thus, philosophers sometimes think of "experimental philosophy" as the enterprise of running these sorts of studies. Joshua Alexander, for example, in his 2012 book portrays experimental philosophy as the study of people's philosophical intuitions, as does Timothy Williamson, in his forthcoming critique of the "Experimental Philosophy revolution".

Conceived narrowly in this way, experimental philosophy is a coherent and (mostly) recent movement, with a distinctive methodology and an interrelated network of results. It is possible to discuss and critique it as a unified body.

However, there also seems to be a broader conception of experimental philosophy -- a conception that has never, I think, been adequately articulated, a conception that Josh Knobe and Shaun Nichols, for example, gesture toward in their "Experimental Philosophy Manifesto", before they shift their focus to experimental philosophy in the narrow sense. In this broad sense, philosophers who do empirical work aimed at addressing traditionally philosophical questions are also experimental philosophers, even if they don't survey people about their intuitions. In practice, the narrow/broad distinction hasn't meant too much because almost all of the empirical work of almost all of the people who are canonically recognized as "experimental philosophers" fits within both the narrow and the broad conceptualizations.

My own work, however, tends to fit only within the broad conception, not the narrow (with a couple of recent exceptions). So the difference has personal importance to me, affecting both my and others' conceptualization of my role in the "x-phi" community.

How to articulate the broad conception? Experimental work done by people in philosophy departments seems an odd category, since similar work can be done by people in different departments. Experimental work that addresses traditionally philosophical issues, however, seems far too broad, sweeping in too much research that we would not ordinarily conceptualize as philosophical, including physics experiments on the structure of space and time, biological research on the origins of our social roles, and psychological work on the origins of our concepts. These issues are, after all, at the root of philosophy of physics, philosophy of biology, and philosophy of psychology; and sometimes what the most theoretically ambitious physicists, biologists, and psychologists say is not so different from what philosophers of those disciplines say. If adding experimentation to that were enough to make someone an experimental philosopher, then most experimental philosophers would be employed by science departments.

Here's a thought that is perhaps more satisfactory: Experimental philosophy in the broad sense is empirical research that is thoroughly contextualized within an intimate knowledge of the philosophical literature on which it bears, and which is presented, primarily, as advancing that philosophical literature. (For present purposes, we can define "philosophical literature" sociologically as what is published in philosophy journals and in books classified as philosophy books by academic presses.) This will omit the typical developmental psychologist working on conceptual categories, but probably allow as marginal cases of "experimental philosophers" the most philosophically informed developmental psychologists (such as Alison Gopnik and Susan Carey). It will allow in, as central, work by Shaun Nichols that is not typical intuition-polling x-phi (such as his work on disgust norms and on quantitative history of philosophy). Maybe some of Elisabeth Lloyd's empirical work on the female orgasm will also qualify.

And of course (my main, not-so-secret intention in developing this account) so also will all of my own empirical work. Included, for example, will be my work on the moral behavior of ethics professors, which does tend to be recognized as "experimental philosophy" by the x-phi crowd, despite not fitting the narrow conception; and also my empirical research on consciousness (e.g., here, and chapters 1 and 6 here), which is less often mentioned as x-phi.

But more than that, I think this conception explicitly encourages the thought that a broad range of traditionally philosophical issues can be illuminated by empirical studies, even though they have not yet commonly been approached with the tools of empirical science. For example, empirical research might help illuminate the question of whether philosophical study, over the centuries, tends to progress toward the truth. We might be able to take an experimental approach to the question of whether the external world exists. Questions in the history of philosophy might be approachable by means of the systematic study of citation patterns in philosophical venues. And so forth.

Experimental philosophers, break free! There is so much more to do than just surveying intuitions!

(No accident maybe, that I was a student of Gopnik and Lloyd?)

Thursday, September 27, 2012

New Essay: A Dispositional Approach to Attitudes, or Thinking Outside of the Belief Box

I have long advocated a dispositional approach to belief (e.g., here). But I have been cagey about trying to extend that account to other attitudes such as desiring and loving. In this new essay, I finally set aside those hesitations and go all-in for dispositionalism.

Abstract: To have an attitude is, for the most part, just to live a certain way. It is, for the most part, just to be disposed to behave in certain ways, to be disposed to undergo certain conscious experiences, and to be disposed to exhibit certain folk-psychologically recognizable cognitive patterns. To have an attitude is, essentially, to be describable by means of a folk-psychologically recognizable superficial syndrome, regardless of one's deep cognitive or biological structure. To have an attitude is not, for example, essentially a matter of having a representation stored in a metaphorical functional box. It is more like having a personality trait. It is to have a certain temporary or habitual posture of mind.

As always, comments welcome, either by email or as comments on this post.

Tuesday, September 18, 2012

Against Architectural Accounts of the Attitudes: Two Thought Experiments

Most recent Anglophone philosophers appear to favor what I will call architectural accounts of the attitudes. On such accounts, what is essential to possessing an attitude is that one be in some particular type of physical or biological state or that one possess some particular piece of cognitive architecture. On such an account, to have an attitude such as belief or desire might, for example, be to possess an internal representation of a certain sort, perhaps poised to play a particular cognitive role; or it might be a matter of being in a brain state of a certain sort. On my view, in contrast, such architectural facts should be regarded as matters of implementation only, not essence. What matters to having an attitude is instead, I suggest, that one live a certain way -- that across a wide range of actual and counterfactual circumstances, one is disposed to act and react, both inwardly and outwardly, in patterns that we would folk-psychologically tend to regard as characteristic of someone who possesses that attitude.

Call whatever architectural condition is essential to having Attitude A, on one of the architecturally-based views, Architectural Condition C. Unless Architectural Condition C just is the condition of being disposed to act and react, inwardly and outwardly, in the pattern characteristic of Attitude A, it is presumably conceivable that Architectural Condition C could be possessed by a person who lacks such a suite of dispositions, or that such a suite of dispositions could be possessed by a person who lacks Architectural Condition C. What should we say about such cases? Let's consider two.

One: Andi, let's suppose, is in Architectural Condition C for the belief that giraffes are born six feet tall. Colorfully, we might imagine that a 22nd-century brain scanner finds in her Belief Box a slip of paper containing the sentence, in the Language of Thought, "giraffes are born six feet tall". Or maybe the giraffe neuron is linked to the six-feet-tall neuron is linked to the birth-size neuron. Despite this architectural fact, however, Andi is not at all inclined to act and react in the usual way. She is not at all disposed, for example, to say that baby giraffes are six feet tall. If asked explicitly, she would say giraffes are probably born no more than three feet tall. If shown a picture of a giraffe as tall as an ordinary man, she would assume it's not a newborn. If a zookeeper were to tell Andi that giraffes are born six feet tall, Andi would be surprised and would say, "Really? I had thought they were born much smaller than that!" And so forth, robustly, across a wide range of actual and counterfactual circumstances. None of these facts about Andi are due to the presence of weird factors like a gun to her head, manipulation by evil neuroscientists, or a bizarre network of other attitudes, like thinking that "three" means six. (See also my post on Mad Belief.)

Two: Tomorrow, aliens from Beta Hydri arrive. The BetaHydrians show all signs of valuing molybdenum over gold. They will trade ounce for ounce, with no apparent hesitation. When they list metal prices in their currency, they list the price of molybdenum higher than the price of gold. They learn English, and then they say things like “in BetaHydrian culture, molybdenum is more valuable than gold.” And so forth. Suppose, too, that BetaHydrians have conscious experiences. There is a kind of swelling they feel in their shoulders when they obtain things for which they have been striving. They translate this feeling into English as “the pleasure of success”. They experience this swelling feeling when they successfully trade away their gold for molybdenum. Like us, they have eyes sensitive to the visible spectrum, and like us they have visual imagery. They entertain visual imagery of returning to Beta Hydri loaded with molybdenum and of the accolades they will receive. Pleasurable feelings accompany such imagery. They plan ways to obtain molybdenum, at the cost of gold if that’s what it takes. They judge other BetaHydrians’ molybdenum-for-gold trades as wisely done. Etc. Ordinary people around Earth find it eminently natural to say that BetaHydrians value molybdenum over gold. But we know nothing yet about BetaHydrian biology or cognitive architecture, except that whatever it is can support this pattern of action, thought, and feeling. Whatever Architectural Condition C is, if we can coherently conceive its coming apart from the dispositional patterns above, the patterns characteristic of valuing molybdenum over gold, then suppose Architectural Condition C is not met. If we may conceive the physically impossible, we might even imagine that the BetaHydrians robustly, intrinsically, durably, and non-accidentally exhibit these behavioral and cognitive and phenomenological patterns, across a wide range of possible worlds, despite being made entirely of undifferentiated balsa wood. (See also my post on Betelgeusian Beeheads.)

If we are at liberty to choose an approach to the attitudes that is practically useful and that gets right what we care about in ascribing attitudes, we should choose an approach that says that BetaHydrians value molybdenum over gold and that Andi does not believe that giraffes are born six feet tall. The lived patterns are what matters, not, except derivatively, the underlying architecture.

Wednesday, September 12, 2012

Speaking at Harvard and Tufts

If you're in the Boston area, you're welcome to come to my talk at Harvard tomorrow (Thursday) and/or at Tufts on Friday.

Thursday Sep 13, 4 pm: "The Crazyist Metaphysics of Mind", Harvard University, Emerson Hall Room 305.

Friday Sep 14, 4 pm: "If Materialism Is True, the United States Is Probably Conscious", Tufts University (Neuphi group), 1 Braker Hall.

I'll also be attending Harvard's workshop on belief on Saturday, which looks very interesting!

Tuesday, September 11, 2012

Forgetting as an Unwitting Confession of Your Values

I woke this morning to find my Facebook feed full of reminders to "never forget" the September 11 terrorist attacks. I am reminded of the Jewish community's insistence that we keep vivid the memory of the Holocaust. It says something about a person's values, I think, what that person thinks worth striving to vividly remember -- a grudge, a harm, a treasured moment, a loved one now gone, an error or lesson.

What we remember says, perhaps, more about us than we would want. Forgetfulness is an unwitting confession of our values. The Nazi Adolf Eichmann, in Hannah Arendt's famous portrayal of him, had little memory of his decisions about shipping thousands of Jews off to their deaths, but he did remember in detail his small social triumphs with superiors in the Nazi hierarchy. He vividly remembered the notable occasion, for example, when he was permitted to lounge around by a fireplace with Reinhard Heydrich, watching the Nazi leader smoke and drink (Arendt 1963, p. 114). Eichmann's failures and successes of memory are more eloquent and accurate testimony to his values than any of his outward avowals.

I remember obscure little arguments in philosophy papers if they are relevant to an essay I am working on, but I can't seem to keep track of the parents of my children's friends. Some of us remember insults and others forget them; some remember the exotic foods they ate on vacation, others the buildings they saw, others the wildlife, and still others hardly anything specific at all.

From the leavings of memory and forgetfulness we could create a nearly complete map, I think, of a person's values. What you don't even see -- the subtle sadness in a colleague's face? -- and what you might briefly see but don't react to or retain, is in some sense not part of the world shaped for you by your interests and values. Others with different values will remember a very different series of events.

Michelangelo is widely quoted as having said that to make David he simply removed from the stone everything that was not David. Remove from your life everything you forget; what is left is you.

Thursday, September 06, 2012

Has Civilization Made Moral Progress? Sketch of an Empirical Test

From a certain perspective, current liberal Western civilization seems to be a moral pinnacle. We have rejected slavery. We have substantially de-legitimized aggressive warfare. We have made huge progress in advancing the welfare of children. We have made huge progress toward gender and racial equality. In his 2011 book The Better Angels of Our Nature, Steven Pinker says he is prepared to call our recent ancestors "morally retarded" (p. 658). Imagine how we would react if a Westerner today were seriously to endorse a set of views that would not have been radical in 1800: denying women the vote (or maybe even advocating a return to monarchy), viewing slavery and twelve-hour days of child labor in coal mines as legitimate business enterprises, advocating military conquest for the sake of glory, etc. "Morally retarded" might seem a fair assessment!

From another perspective, though, it appears almost inevitable that the average person in any culture will see his or her own culture's values as morally superior. Suppose that the average person in Culture 1 endorses values A, B, C, and D, the average person in Culture 2 endorses values A, B, -C, and E, and the average person in Culture 3 endorses values A, -B, D, and F. The average person from Culture 1 will think, "Well, the average person in my culture has A, B, C, and D right! The people in Culture 2 have C wrong and really should pay attention to D rather than E, while the people in Culture 3 have B wrong and should attend to C rather than F. So my culture's values are superior." If Culture 1 is temporally more recent than Cultures 2 and 3, then our average person from Culture 1 can laud the change in values as "progress".

The problem is, of course, that it might not be progress at all, but rather only a preference for local values. Call this the Local Pinnacle Illusion. Someone from the past might suffer the same illusion looking forward at us, condemning (say) our liberal Western neglect of proper class and gender role distinctions, our relative irreligiosity, and our relative tolerance of homosexuality, masturbation, divorce, and moneylending -- calling the changes "decadence" rather than progress.

The question then is whether there's a good way to tell whether those of us with a Pinkeresque preference for contemporary liberal values are merely victims of the Local Pinnacle Illusion, or whether we really have made huge moral progress in the last few centuries or millennia. I've been thinking about whether there might be a way to explore this empirically, using the history of philosophy.

Here's my thought: The temporal picture of progress and the temporal picture of a random walk look very different. If, say, rational reflection over the very long haul tends to guide us ever closer to right moral principles, as Pinker thinks, then issue-by-issue we ought to see opinion changes over the very long haul that look like progressive trends of moral philosophical discovery. If, in contrast, all that's going on is the Local Pinnacle Illusion, trends should be relatively short-lived, due to local cultural pressures, and not consistently directional over the very long haul.

Suppose we chose twelve issues that have been broadly discussed since ancient times and that in at least some eras -- not necessarily our own -- are regarded as morally important issues. If we could chart philosophers' opinions on these issues quantitatively (e.g., from strong support for democracy to strong support for monarchy with moderate views in the middle, from strong support for gender-neutral role expectations to strong support for gender-specific role expectations, etc.), using -1 and +1 to mark the average position in the historically most extreme eras for each pole of each issue, then on a random-walk picture, we ought to see something like this pattern among those issues:
I have assumed here a random starting spread from -1 to 1 and a random fluctuation from -0.3 to +0.3 in each successive period; then I renormalized the maximum value of each run to 1 and the minimum to -1. The result is basically a noisy walk, with extreme values as likely in the middle as at the ends.
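For the curious, here is a minimal simulation sketch in Python reconstructing that chart from the description above. The twelve issues come from the setup described earlier; the number of time periods is my assumption.

```python
# Random-walk chart: twelve issues, random start in [-1, 1], random
# step in [-0.3, 0.3] each period, each run then rescaled so its
# maximum is +1 and its minimum is -1.
import random

def rescale(values):
    # Map each run linearly so its max -> +1 and its min -> -1.
    lo, hi = min(values), max(values)
    return [2 * (v - lo) / (hi - lo) - 1 for v in values]

def random_walk(periods=25):  # 25 periods: my assumption
    vals = [random.uniform(-1, 1)]
    for _ in range(periods - 1):
        vals.append(vals[-1] + random.uniform(-0.3, 0.3))
    return rescale(vals)

walks = [random_walk() for _ in range(12)]  # one run per issue
```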

On the other hand, if there's moral progress through philosophical reflection, the chart ought to look more like this:
For this chart, I assumed a progress factor, negative or positive, of one-third the amplitude of the random fluctuation (i.e., adding or subtracting 0.1 at each step, on top of the -0.3 to +0.3 random fluctuation). For the "toward 0" positions, I assumed positive incremental pressure when the previous period's value was negative and negative incremental pressure when it was positive. As with the previous graph, at the end I normalized the maximum and minimum values to +1 and -1.
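Continuing the sketch above (it reuses the random import and the rescale helper from the previous block), the progress variant might look like this; the "positive", "negative", and "toward 0" targets are my way of encoding the drift just described.

```python
# Progress variant: the same random step plus a drift of +/-0.1 toward
# each issue's target pole; for "toward 0" issues the drift pushes the
# value back toward zero.
def progress_walk(periods=25, target="positive"):
    vals = [random.uniform(-1, 1)]
    for _ in range(periods - 1):
        prev = vals[-1]
        if target == "positive":
            drift = 0.1
        elif target == "negative":
            drift = -0.1
        else:  # "toward 0"
            drift = 0.1 if prev < 0 else -0.1
        vals.append(prev + drift + random.uniform(-0.3, 0.3))
    return rescale(vals)

progress_walks = [progress_walk(target=random.choice(["positive", "negative", "toward 0"]))
                  for _ in range(12)]
```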

As you can see, many more moral issues show consistent trend directions over time and the most extreme positions tend to be held at the beginning and the end of the analysis period. The latter pattern follows from the fact that even if the initial historical position at time zero is by some objective standard moderate, as long as it is not spot on the target number, the trend over time should be roughly unidirectional toward the target number; and then the normalizing will make that initial position look extreme.

Two caveats and a final note:

Caveat 1: It will of course be difficult to code this objectively, and it's quite possible that the final outcome will vary depending on hard-to-justify coding decisions.

Caveat 2: My guess is that if we were to chart moral and political positions from the 1600s to the present, we would see a chart somewhat like the second (progressive) chart, whereas if we were to chart equal time intervals back to the ancient West, we would see a more random-looking chart like the first. So which time period should be examined? A case for choosing the shorter period might be that it is only after the printing press and the widespread communication of philosophical ideas that we should expect to see rationally-driven moral progress. A consideration against choosing the shorter period is this: There might be cultural factors, such as industrialization and capitalization, that have created consistent unidirectional pressures on moral and political norms, independently of the rational case for adopting those norms; and for that reason the broader the temporal span the better.

Final note: We might be able to construct similar charts to evaluate whether there has been progress in metaphysics.

(By the way, see here for another post of mine on Pinker's intellectualist liberalism.)

Thursday, August 30, 2012

Desiring, Valuing, and Believing Good: Almost the Same Thing

Philosophers typically think of belief as one thing and desire as quite another. Beliefs, for example, represent the world as it is; desires picture the world as we want it to be. Desires are intrinsically motivating; beliefs are maps by which we steer once motivated. Action, it's often thought, requires the coupling of a belief and a desire -- for example, the desire to get a cupcake and the belief that the store I'm passing has them at a reasonable price. In generating my action, these attitudes play entirely different roles, sometimes labeled "cognitive" and "conative".

And yet I am struck by how belief and desire can sometimes seem to blur into each other. To believe that it would be good for X to happen is probably not quite the same thing as to desire that X happen, but it takes an unusual psychological wedge to pull them apart. It might not be possible to have an entirely canonical case of the one alongside an entirely canonical lack of the other. And valuing seems a kind of intermediate case: To value one's privacy seems to be almost, on the one hand, to have a kind of belief or set of beliefs about one's privacy but also, on the other hand, to have a certain sort of desire for privacy. Where's the sharp line between cognitive and conative?

I've come to think there is, in fact, no sharp line between the attitude types. This falls out of a general theory of psychological attitudes on which I have been working. (A paper is in progress, but not yet in circulating shape.)

On my view, to believe that something is the case -- for example, to believe that gold is more valuable than molybdenum -- is just to live in a particular way. It is to act and react, and to be disposed to act and react, across a wide variety of hypothetical scenarios, in ways that ordinary people would tend to regard as characteristic of having that belief. So, for example, it is to be willing to say, ceteris paribus (that is, all else being equal or normal, or absent countervailing forces like the intent to deceive), that gold is more valuable than molybdenum. It is to be willing to trade away molybdenum for gold. It is to be disposed to judge others' molybdenum-for-gold trades as wisely done. It is to feel happier upon receiving gold than upon receiving molybdenum, and to judge it reasonable to price the one higher than the other. And so forth. If space aliens were to visit tomorrow and they exhibited this psychological pattern, we would rightly say they believe that gold is more valuable than molybdenum, even though we may know virtually nothing about their internal operations.

To value one thing over another is also, I think, to live in a particular way. In fact, valuing A over B involves almost the same set of characteristic patterns of behavior, subjective experience, and cognition as does believing that A is more valuable than B. The same way of living that makes it true to say of our space aliens that they believe that gold is more valuable than molybdenum also makes it true to say that they value gold over molybdenum.

The same goes for desiring one thing more than another. Almost the same set of dispositions, perhaps with a shift of focus or emphasis, is involved in desiring A over B as in valuing A over B and believing A more valuable than B.

Now perhaps in using the term "desire", we put more emphasis on something like visceral reward or impulse, while "valuing" seems more intellectual or cognitive and "believing valuable" seems more intellectual still; but this is a subtlety. It's a subtlety that can make a difference in an unusual sort of case, where the intellectual and the impulsive pull apart -- where one has an impulse, say, to eat a cupcake but also a sense that it would be bad to do so all things considered. Then we might say, "I want that cupcake, but I don't think it would be good to eat it". But this is not really a canonical case of wanting something. In a way, it seems just as accurate to say that what one really wants is not to eat the cupcake, and what one is fighting is not so much a desire as an impulse.

Shortly after moving into one of my residences I met a nineteen-year-old neighbor. Call him Ethan. In my first conversation with Ethan, it came out (i.) that he had a beautiful, expensive new pickup truck, and (ii.) that he unfortunately had to attend the local community college because he couldn't afford to attend a four-year school. Although I didn't ask Ethan directly whether he thought owning an awesome pickup truck was more important than attending a four-year university, let's suppose that's how he lived his life in general: Ethan's inward and outward actions and reactions -- perhaps not with absolute consistency -- fit the profile of someone who wanted an awesome pickup truck more than he wanted to go to a four-year school, who valued having an awesome pickup truck more than he valued going to a four-year school, and who believed it better or more important to have the one than the other. Approximately the same set of dispositional facts makes each of these psychological attributions true.

Desiring, valuing, believing good, believing valuable -- if to have these attitudes is just a matter of living a certain way, of being disposed to make certain choices, to have certain feelings, to regret some things and celebrate others, etc., then although these attitudes might differ somewhat in flavor and thus sometimes partly diverge, the difference between them is vastly overstated by philosophical talk of the cognitive-conative divide and of the very different psychological roles of belief and desire.