Tuesday, July 28, 2009

Professors on the Morality of Voting

Professors appear to think that voting regularly in public elections is about as morally good as donating 10% of one's income to charity. This seems, anyway, to be suggested by the results of a survey Josh Rust and I sent earlier this year to hundreds of U.S. professors, ethicists and non-ethicists, both inside and outside of philosophy. (The survey is also described in a couple of previous posts.)

In one part of the survey, we asked professors to rate various actions on a nine-point scale from "very morally bad" through "morally neutral" to "very morally good". Although we expected some actions to be rated negatively (e.g., "not consistently responding to student emails"), there were three we expected most respondents to rate positively: "regularly voting in public elections", "regularly donating blood", and "donating 10% of one's income to charity". Later in the survey, we asked related questions about the professors' own behavior, allowing us to compare expressed normative attitudes with self-described behavior. (In some cases we also have direct measures of behavior to compare with the self-reports.)

Looking at the data today, I found it striking how strongly the respondents seemed to feel about voting. Overall, 87.9% of the professors characterized voting in public elections as morally good. Only 12.0% said voting was morally neutral, and a lonely single professor (1 of the 569 respondents or 0.2%) characterized it as morally bad. That's a pretty strong consensus. Political philosophers were no more cynical about voting than the others, with 84.5% responding on the positive side of the scale (a difference well within the range of statistical chance variation). But I was struck, even more than by the percentage who responded on the morally good side of our scale, by the high value they seemed to put on voting. To appreciate this, we need to compare the voting question with the two other questions I mentioned.

On our 1 to 9 scale (with 5 "morally neutral" and 9 "very morally good"), the mean rating of "regularly donating blood" was 6.81, and the mean rating of "donating 10% of one's income to charity" was 7.36. "Regularly voting in public elections" came in just a smidgen above the second of those, at 7.37 (the difference being within statistical chance, of course).

I think we can assume that most people think it's fairly praiseworthy to donate 10% of one's income to charity (for the average professor, this would be about $8,000). Professors seem to be saying that voting is just about equally good. Someone who regularly donates blood can probably count at least one saved life to her credit; voting seems to be rated considerably better than that. (Of course, donating 10% of one's income to charity as a regular matter probably entails saving even more lives, if one gives to life-saving charities, so it makes a kind of utilitarian sense to rate the money donation as better than the blood donation.)

Another measure of the importance professors seem to invest in voting is the rate at which they report doing it. Among professors who described themselves as U.S. citizens eligible to vote, fully 97.8% said they had voted in the November 2008 U.S. Presidential election. (Whether this claim of near-perfect participation is true remains to be seen. We hope to get some data on this shortly.)

Now is it just crazy to say that voting is as morally good as giving 10% of one's income to charity? That was my first reaction. Giving that much to charity seems uncommon to me and highly admirable, while voting... yeah, it's good to do, of course, but not that good. One thought, however -- adapted from Derek Parfit -- gives me pause about that easy assessment. In the U.S. 2008 Presidential election, I'd have said the world would be in the ballpark of $10 trillion better off with one of the candidates than the other. (Just consider the financial and human costs at stake in the Iraq war and the U.S. bank bailouts, for starters.) Although my vote, being only one of about 100,000,000 cast, probably had only about a 1/100,000,000 chance of tilting the election, multiplying that tiny probability by a conservatively rounded $1 trillion leaves a $10,000 expected public benefit from my voting -- not so far from 10% of my salary.
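For readers who like to see the arithmetic spelled out, here's a toy sketch in Python. The round figures are the ones from the paragraph above, not independent estimates:

```python
# Back-of-the-envelope expected public value of a single vote,
# using the round numbers from the paragraph above (illustration only).
stakes = 1e12           # the $10 trillion at stake, rounded down to $1 trillion
p_decisive = 1 / 1e8    # roughly 1 chance in 100,000,000 of tilting the election

expected_benefit = stakes * p_decisive
print(f"${expected_benefit:,.0f}")  # $10,000
```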

Of course, that calculation is incredibly problematic in any number of ways. I don't stand behind it, but it helps loosen the grip of my previous intuition that of course it's morally better to donate 10% to charity than to vote.

Update July 29:
As Neil points out in the comments, in this post I seem to have abandoned my usual caution in inferring attitudes from expressions of attitudes. Right: Maybe professors don't think this at all. But I found it a striking result, taken at face value. If it's not to be taken at face value, we might ask: Why would so many professors, who really think donating 10% of income is morally better than voting, mark a bubble more toward the "very morally good" end of the scale in response to the voting question than in response to the donation question? Moral self-defensiveness, perhaps, on the assumption (borne out elsewhere in the data) that few of them themselves donate 10%...?

Tuesday, July 21, 2009

On Relying on Self-Report: Happiness and Charity

To get published in a top venue in sociology or social or personality psychology, one must be careful about many things -- but not about the accuracy of self-report as a measure of behavior or personality. Concerns about the accuracy of self-report tend to receive merely a token nod, after which they are completely ignored. This drives me nuts.

(Before I go further, let me emphasize that the problem here -- what I see as a problem -- is not universal: Some social psychologists -- Timothy Wilson, Oliver John, and Simine Vazire for example -- are appropriately wary of self-report.)

Although the problem is by no means confined to popular books, two popular books have been irking me acutely in this regard: The How of Happiness, by my UC Riverside colleague Sonja Lyubomirsky, and Who Really Cares, by Arthur Brooks (who has a named chair in Business and Government Policy at Syracuse).

The typical -- but not universal -- methodology in work by Lyubomirsky and those she cites is this: (A1.) Ask some people how happy (or satisfied, etc.) they are. (A2.) Try some intervention. (A3.) Ask them again how happy they are. Or: (B1.) Randomly assign some people to two or three groups, one of which receives the key intervention. (B2.) Ask the people in the different groups how happy they are. If people report greater happiness in A3 than in A1, conclude that the intervention increases happiness. If people in the intervention group report greater happiness in B2 than people in the other groups, likewise conclude that the intervention increases happiness.

This makes me pull out my hair. (Sorry, Sonja!) What is clear is that, in a context in which people know they are being studied, the intervention increases reports of happiness. Whether it actually increases happiness is a completely different matter. If the intervention is obviously intended to increase happiness, participants may well report more happiness post-intervention simply to conform to their own expectations, or because they endorse a theory on which the intervention should increase happiness, or because they've invested time in the intervention procedure and they'd prefer not to think of their time as wasted, or for any of a number of other reasons. Participants might think something like, "I reported a happiness level of 3 before, and now that I've done this intervention I should report 4" -- not necessarily in so many words.

As Dan Haybron has emphasized, the vast majority of the U.S. population describe themselves as happy (despite our high rate of depression and anger problems), and self-reports of happiness are probably driven less by accurate perception of one's level of happiness than by factors like the need to see and to portray oneself as a happy person (otherwise, isn't one something of a failure?). My own background assumption, in looking at people's self-reports of happiness, life-satisfaction, and the like, is that those reports are driven primarily by the need to perceive oneself a certain way, by image management, by contextual factors, by one's own theories of happiness, and by pressure to conform to perceived experimenter expectations. Perhaps there's a little something real underneath, too -- but not nearly enough, I think, to justify conclusions about the positive effects of interventions from facts about differences in self-report.

In Who Really Cares, Brooks aims to determine what sorts of people give the most to charity. Brooks bases his conclusions almost (but not quite) entirely on self-reports of charitable giving in large survey studies. His main finding is that self-described political conservatives report giving more to charity (even excluding religious charities) than do self-described political liberals. What he concludes -- as though this were unproblematically the same thing -- is that conservatives give more to charity than do liberals. Now maybe they do; it wouldn't be entirely surprising, and he has a little bit of non-self-report evidence that seems to support that conclusion (though how assiduously he looked for counterevidence is another question). But I doubt that people have any especially accurate sense of how much they really give to charity (even after filling out IRS forms, for the minority who itemize charitable deductions), and even if they did have such a sense I doubt that would be accurately reflected in self-report on survey studies.

As with happiness, I suspect self-reports of charitable donation are driven at least as much by the need to perceive oneself, and to have others perceive one, a particular way as by real rates of charitable giving. Brooks seems to assume that political conservatives and political liberals are equally subject to such distortional demands in their self-reports, and thus he attributes differences in self-reported charity to actual differences in giving. But it seems to me just as justified -- that is to say, hardly justified at all -- to assume that the real rates of charitable giving are the same and to attribute the differences in reported charity to differences in the degree of distortion in the self-descriptive statements of political conservatives and political liberals.

Underneath sociologists' and social and personality psychologists' tendency to ignore the sources of distortion in self-report is this, I suspect: It's hard to get accurate, real-life measures of things like happiness and overall charitable giving. Such real-life measures will almost always themselves be only flawed and partial. In the face of an array of flawed options, it's tempting to choose the easiest of those options. Both the individual researcher and the research community as a whole then become invested in downplaying the shortcomings of the selected methods.

Tuesday, July 14, 2009

The Smallish Difference Between Belief and Desire

In the usual taxonomy of mental states (usual, that is, among contemporary analytic philosophers of mind) belief is one thing, desire quite another. They play very different roles in the economy of the mind: Desires determine our goals, and beliefs determine the perceived means to achieve them, with action generally requiring the activation of a matched belief-desire pair (e.g., the belief that there is beer in the fridge plus the desire for beer). I confess I’m not much enamored of this picture.

Surely this much at least is true: The belief that P is the case (say, that my illness is gone) and the desire that P be the case are very different mental states -- the possession of one without the other explaining much human dissatisfaction. Less cleanly distinct, however, are the desire that P (or for X) and the belief that P (or having X) would be good.

I don't insist that desires and believings-good are utterly inseparable. Maybe we sometimes believe that things are good apathetically, without desiring them; surely we sometimes desire things that we don’t believe are, all things considered, good. But I’m suspicious of the existence of utter apathy. And if believing good requires believing good all things considered, perhaps we should think genuine desiring, too, is desiring all things considered; or, conversely, if we allow for conflicting and competing desires that pick up on individual desirable aspects of a thing or state of affairs, then perhaps we should also allow for conflicting and competing believings-good that track individual aspects – believing that the desired object has a certain good quality (the very quality in virtue of which it is desired). With these considerations in mind, there may be no clear and indisputable case in which desiring and believing good come cleanly apart.

If the mind works by the manipulation of discrete representations with discrete functional roles – inner sentences, say, in the language of thought, with specific linguistic contents – then the desire that P and the belief that P would be good are surely different representational states, despite whatever difficulty there may be in prizing them apart. (Perhaps they’re closely causally related.) But if the best ontology of belief and desire, as I think, treats as basic the dispositional profiles associated with those states – that is, if mental states are best individuated in terms of how the people possessing those states are prone to act and react in various situations – and if dispositional profiles can overlap and be partly fulfilled, then there may be no sharp distinction between the desire that P and the belief that P would be good. The person who believes that Obama’s winning would be good and the person who wants Obama to win act and react – behaviorally, cognitively, emotionally – very similarly: Their dispositional profiles are much the same. The patterns of action and reaction characteristic of the two states largely overlap, even if they don’t do so completely.

This point of view casts in a very different light a variety of issues in philosophy of mind and action, such as the debate about whether beliefs can, by themselves, motivate action or whether they must be accompanied by desires; characterizations of belief and desire as having neatly different "directions of fit"; and functional architectures of the mind that turn centrally on the distinction between representations in the "belief box" and those in the "desire box".

Thursday, July 02, 2009

On Debunking V: The Final Chapter

(by guest blogger Tamler Sommers)


First, let me offer my thanks to Eric for giving me this opportunity and to everyone who commented on my posts. This was fun.

Since my latest post on debunking, I came across a paper called “Evolutionary Debunking Arguments” by Guy Kahane. (Forthcoming in Nous; you can find it on Philpapers.org.) Kahane mounts some careful and compelling criticisms of selective (“targeted”) debunking strategies and global debunking strategies in metaethics, and I strongly recommend this article to anyone interested in the topic. For my last post, I want to focus on a claim from Kahane’s paper that isn’t central to his broader thesis but relates to my earlier posts. Kahane argues that evolutionary debunking arguments (EDAs) implicitly assume an “objectivist account of evaluative discourse.” EDAs cannot apply to subjectivist theories because “subjectivist views claim that our ultimate evaluative concerns are the source of values; they are not themselves answerable to any independent evaluative facts. But if there is no attitude-independent truth for our attitudes to track, how could it make sense to worry whether these attitudes have their distal origins in a truth-tracking process?” (11)

I don’t think Kahane is right about this. Learning about the evolutionary or historical origins of our evaluative judgments can have an effect on those judgments—even for subjectivists. But we need to revise the description of EDAs as follows. Rather than ask whether our attitudes or intuitions have their origins in a truth-tracking process, we need to ask whether they have their origins in a process that we (subjectively) feel ought to bear on the judgments they are influencing.

Consider judgments about art. Imagine that Jack is a subjectivist about aesthetic evaluation. Ultimately, he thinks, there is no fact of the matter about whether a painting is beautiful. He sees a painting by an unknown artist and finds it magnificent. Later he learns that the painter skillfully employs a series of phallic symbols that trigger cognitive mechanisms which cause him to experience aesthetic appreciation. Would knowing this alter his judgment about the quality of the work? I can see two ways in which it might. First, his more general subjectivist ideas about the right way to evaluate works of art may rebel against cheap tricks like this for augmenting appreciation. He doesn’t feel that mechanisms that draw him unconsciously to phallic symbols ought to bear on his evaluation of a work of art. Second, learning this fact may have an effect on his visceral appreciation of the painting. (Now he sees a bunch of penises instead of a mountainous landscape.) In a real sense, then, his initial appreciation of the painting has been debunked.

So how might this work in the moral case? Imagine that Jill is an ethical subjectivist who is about to vote on a new law that would legalize consensual incest relationships between siblings as long as they don’t produce children. Jill’s intuition is that incest is wrong. However, she has recently read articles that trace our intuitions about the wrongness of incest to disgust mechanisms that evolved in hominids to prevent genetic disorders. She knows that genetic disorders are not an issue in these kinds of cases, since the law stipulates that preventive measures must be taken. Her disgust, and therefore her intuition, are aimed at something that does not apply in this context. She feels, then, that her intuitions ought not to bear on her final judgment. And so she discounts the intuition and defers to other values that permit consensual relationships that do not harm anyone else.

The general point here is that evolutionary or historical explanations of our intuitions can have an effect on our all-things-considered evaluative judgments even if we think those judgments are ultimately subjective. Knowing the origins and mechanisms behind our attitudes can result in judgments that more accurately reflect our core values. This seems like a proper goal of philosophical inquiry in areas where no objectivist analysis is available.

Sunday, June 28, 2009

The Mystery of the Chiming Bell

We've all had this experience: The clock tower starts chiming. At first, you're paying no attention, but about three or four chimes in, you suddenly notice. In memory, you can count back those first few chimes.

Here's the question: Did you have auditory experience of those chimes before you started thinking about them? Were they part of your stream of conscious experience, part of your phenomenology, part of "what it was like to be you", during those first few inattentive seconds? Or, until you started attending to the matter, were the chimes no part at all of your conscious experience, not even a secondary and peripheral part? Were they, that is, only part of an at-the-time nonconscious but after-the-fact recoverable "sensory store"?

Similarly: Suppose you suddenly notice, for the first time, that you have a mild headache. Was the pain a small, background part of your stream of experience before you first noticed it? Or did you not really experience the pain until you actually directed attention to the state of your head? Is having an enduring pain a matter of constantly experiencing painfulness, in the background or foreground depending on your state of attention; or is it more a matter of having occasional spurts of felt pain, arising from an enduring nonconscious disposition for such spurts to shoot annoyingly and against your will into consciousness?

Philosophers, psychologists, and ordinary folks seem to have different opinions about these questions. One group may be wrong and the other right; or everyone may be right about their own experiences, wrong to the extent they generalize to others. Is there a good way to determine where the truth lies? I'm inclined to think not -- at least not in the short term. Introspection can only reveal consciousness as attended at the moment, not whatever experience there is, or is not, without attention. Immediate memory is corrupted both by our typical quick forgetting of things outside attention and the potential confusion of actual experiences with the recovery, from the sensory store, of previously unexperienced traces (if such a thing is possible; and we can't assume it's not possible without begging the question). Third-person methods like brain imaging require, to be interpretable as revealing facts about consciousness, a prior commitment to the very issue at hand and thus are inescapably circular.

You may or you may not think you experienced that chiming bell before you attended to it. I can't see, though, how you could have any secure ground for that opinion.

Friday, June 19, 2009

Friends of the Stanford Encyclopedia of Philosophy

The Stanford Encyclopedia of Philosophy is to date the most visible and successful experiment in "open-access" -- that is, free -- academic philosophy. (This isn't to say there aren't also other excellent open-access resources, like Philosophers' Imprint and various archiving projects.) Fans of open access who loathe the financial abuse of academic libraries at the hands of companies like Springer might consider paying the modest fee to support the Stanford Encyclopedia: $5 per year for students, $10 or $25 for others. Who'd've thought you could buy friendship so cheaply?

The SEP is trying to entice people to join by offering their "friends" access to handsome PDFs of SEP entries. Maybe that kind of thing appeals to you, but for me it's just a matter of supporting a cause I care about.

Thursday, June 18, 2009

Avowing Dream Skepticism in a Dream

Last night I dreamt I was giving a talk -- a talk I am due to give next week in Australia. The talk wasn't going so well, and I suggested to the audience that maybe, just maybe, I was actually dreaming that I was giving the talk. My evidence was that I remembered having planned to polish things up in the remaining few days before the talk, but now I couldn't remember those days having occurred.

Alex Byrne, in the back of the room, looked highly skeptical and a bit dyspeptic. Dave Chalmers looked mildly amused. Dan Dennett stood up and said, "I very much doubt that you are dreaming, but I agree that your talk is nightmarishly bad."

Of course, it turns out that I was right and they were wrong. So there!

Tuesday, June 16, 2009

Are Ethicists Any More Responsive to Undergraduate Emails Than Are Other Professors?

As regular readers know, Joshua Rust and I are interested in the moral behavior of ethics professors -- namely, do they behave any better? Pending the invention of the moralometer, though, it's a bit tricky to measure actual ethicists' actual moral behavior. Josh and I are forced to be a little creative. Here's one of our ideas: Assuming it's morally better, generally speaking, to respond to undergraduate emails than to ignore them, we can look at the rate at which ethicists respond to undergraduate emails, compared to other professors.

Thus inspired, Josh and I sent phony emails to several hundred professors -- emails designed to look like they came from an undergraduate. (Yes, we got human subjects ethics approval first; and yes we're aware that in spamming philosophers we are perhaps coming uncomfortably close to being a test case for our own thesis.) One of our emails asked about the professors' office hours; another expressed interest in declaring a major and asked for the name of the undergraduate advisor. Research question: Would ethicists be more likely than the other groups to respond to the emails?

No, it turns out. Here are the response rates:

Group: 1st email, 2nd email
Ethicists: 59.0%, 53.6%
Non-ethicist philosophers: 58.0%, 49.8%
Non-philosophers: 54.6%, 54.1%

This variation is well within chance (chi-squared, p = .51, .60).
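For the statistically curious, here is a sketch of the kind of chi-squared test of independence reported in that parenthesis, in Python with SciPy. The per-group sample sizes aren't given in the post, so the n = 200 below is purely hypothetical:

```python
# Hypothetical reconstruction of the first-email chi-squared test.
# The actual group sizes aren't reported; n = 200 per group is an assumption.
from scipy.stats import chi2_contingency

n = 200
first_email_rates = [0.590, 0.580, 0.546]  # ethicists, non-ethicist philosophers, non-philosophers
table = [[round(n * r), n - round(n * r)] for r in first_email_rates]  # [responded, didn't]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p:.2f}")  # p well above .05: within chance
```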

Interesting enough, perhaps, as confirmation of our general finding (so far) that ethicists behave no better than non-ethicists. But this study had an additional twist: Many of these same professors also completed a survey we sent them -- a survey asking them, among other things, to rate the morality of "not consistently responding to student emails" on a nine point scale from "very morally bad" through "neutral" to "very morally good". We also asked: "About what percentage of student emails do you respond to?" followed by a blank for them to enter a percentage. Thus, we could compare normative attitude, self-described behavior, and actual behavior. (We hasten to add, here, that all identifying information was removed for analysis: We are not interested in the responses of particular individuals but only of groups.)

Our survey respondents said they nearly always responded to undergraduate emails. More than half estimated that they responded to 100% of undergraduate emails. More than 90% estimated that they responded to 90% or more of undergraduate emails. On the face of it, these appear to be gross overestimates -- I'm tempted even to say, in the aggregate, borderline delusional (though I don't doubt that there are a few very conscientious email responders out there). When I reported these numbers recently in a talk to undergraduates, they laughed out loud. Ethicists reported neither more nor less responsiveness than did the other groups.

Those who reported responding to 100% of undergraduate emails were indeed somewhat more likely to respond to both emails: 47.2% versus 29.0% for those who claimed less than 100% responsiveness (chi-squared, p = .003).

Oddly, however, we found no relationship whatsoever between professors' expressed attitudes about the morality of consistently responding to undergraduate emails and their actual behavior. 83.0% of professors said it was morally bad not consistently to respond to undergraduate emails, but these professors were no more likely to respond to our emails than were the 17.0% who said it was morally okay not to respond. In fact, 65.5% of those who said it was okay not to respond consistently to undergraduate emails responded to our second email, compared to only 55.3% of those who said it was bad not to respond. (This was within the range of chance variation given the smallish numbers involved in this particular set of conditions, but the 95% confidence interval for the difference in response rates tops out at a 3.2% advantage for those who think it is morally bad not to respond -- so at best they're responding at practically the same rate.)
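In case it's useful, here is a minimal sketch of how a 95% confidence interval for a difference between two proportions can be computed (normal approximation). The subgroup sizes aren't reported in the post, so the n's below are placeholders; with the real sizes, the interval's upper bound is the 3.2% mentioned above:

```python
# Sketch of a 95% CI for the difference between two response rates.
# The n's are placeholders, not the study's actual subgroup sizes.
import math

p_okay, n_okay = 0.655, 55    # said it's okay not to respond (hypothetical n)
p_bad, n_bad = 0.553, 280     # said it's morally bad not to respond (hypothetical n)

diff = p_bad - p_okay
se = math.sqrt(p_okay * (1 - p_okay) / n_okay + p_bad * (1 - p_bad) / n_bad)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"95% CI for (bad-group rate minus okay-group rate): [{lo:.3f}, {hi:.3f}]")
```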

On none of these measures did ethicists appear to respond or behave any differently, or any more or less self-consistently, than the non-ethicist philosophers or the comparison group of non-philosophers.

Thursday, June 11, 2009

Alternatives to the Burning Armchair

(by guest blogger Tamler Sommers)

There has been some discussion lately about whether the burning armchair is too combative and aggressive to serve as an appropriate symbol for the experimental philosophy movement. My first thought when I came across the controversy was that people need to lighten up a little. But then I realized that a slow burning is possibly the worst way to go and I began to see the critics’ point. So, inspired by Obama’s Cairo speech, I’d like to offer some alternative symbols for the X-Phi movement in hopes of reconciling the two feuding factions.

1. A beautiful day in Compton, CA, sounds of children playing in the background. An armchair sits on a corner enjoying the sunshine. Out of nowhere, the sound of screeching tires fills the air. A Chevy Suburban tears down the block. As it passes, we see Josh Knobe hanging out the window of the Suburban with an AK-47, yelling “caught you slippin’, caught you slippin’!” and filling the armchair up with holes.

2. An armchair sits in a deep black pit with only a bucket beside it. Thomas Nadelhoffer appears at the top of the pit with a small bichon frise. He calls down to the armchair:
“It rubs the Scotchguard on its upholstery…it does this whenever it’s told.”
Silence.
“It rubs the Scotchguard on its upholstery or else it gets the hose again.”
Silence.
“Now it places the Scotchguard in the basket….”

3. An armchair is taken prisoner by an unknown captor and placed in a small hotel room for fifteen years with no contact with the outside world other than a television and a small serving of dumplings pushed under the door every evening. The armchair has no idea why it is there.

4. A fleet of AH-64A Apache helicopters approaches the shore of a small village of armchairs. In the cockpit, Shaun Nichols hits a button and Wagner's "Ride of the Valkyries" blares from the helicopter speakers. Bullets from automatic weapons rain down on the helpless armchairs. “Run, Lazy-boy! Run!” shouts Eddy Nahmias from the open door of one of the Apaches.

Other suggestions welcome.

Monday, June 08, 2009

The Human Pseudopod: Michotte on Bodily Phenomenology

Albert Michotte was famous for his work on the perception of causality, especially on the conditions under which one ball is visually interpreted as launching another. Less well known are his remarks on the experience of embodiment, which I just came across and can't resist sharing.

[W]here the body is motionless... there is an almost complete adaptation of the receptor organs, and the result is that the body simply disappears from the phenomenal world. This is indeed what seems to happen to a very high degree in the practice of certain oriental sects, where those who are expert are able, by remaining motionless, to achieve an extreme state of apparent "spiritualisation". Movement appears to be essential to the phenomenal existence of the body, and it is probable that we are aware of our bodily states only in so far as they are terminal phases of movements. In our ordinary waking life, of course, our bodies are motionless only to a relative extent; there is nearly always movement, if only as a result of respiration.

Whether it is temporarily motionless or whether it is moving, the body appears as a somewhat shapeless mass or volume. There is very little by way of internal organisation or connexion between the parts. There is no clear marking off of the head, trunk, and limbs by precise lines of demarcation.... Instead of any precise line of demarcation we find a number of regions with extensive connexions between them gradually merging into one another.

We can with some justification look on the body as a sort of kinaesthetic amoeba, a perpetually changing mass with loose connexions between the parts, and with the limbs constituting the pseudopodia.... The "volume" of which it consists is not limited by a clearly defined surface, and there is no "contour".... The limit of the body is more like the limit of the visual field -- an imprecise frontier which has no line of demarcation, and indeed which cannot without absurdity be imagined to have one (1946/1964, pp. 203-204).

Close your eyes and refrain as much as possible from touching anything. Do your pseudopodia grow and shrink as you move or refrain from moving them?

Friday, June 05, 2009

The Moral Behavior of Ethicists: Peer Opinion

(with Josh Rust) is now forthcoming in Mind.

Thanks to all the folks at the 2007 Pacific Division meeting of the American Philosophical Association who stopped by to express their views on the behavior of ethicists!

Wednesday, June 03, 2009

Wundt on Self-Observation and Inner Perception

Wilhelm Wundt was a founding father of laboratory psychology and a grand visionary of psychology as a discipline -- of how it fit among the sciences, of the structure of its object (the mind), of its methods, most centrally introspection -- and also an author so vastly prolific that most of his work remains untranslated despite his importance. Among those untranslated works is his essay "Selbstbeobachtung und innere Wahrnehmung" ["Self-Observation and Inner Perception"] (1888), with which I've been struggling. The essay is key to Wundt's view of "introspection" -- the usual English translation of the German Selbstbeobachtung -- since here he contrasts it with the seemingly related process of "inner perception". And unfortunately, the secondary sources are all over the map on this. I can find no good treatments.

To understand Wundt's distinction, it helps to know two bits of historical context. One is August Comte's influential criticism of the introspective method of psychology:

But as for observing in the same way intellectual phenomena at the time of their actual presence, that is a manifest impossibility. The thinker cannot divide himself into two, of whom one reasons whilst the other observes him reason. The organ observed and the organ observing being, in this case, identical, how could observation take place? The pretended method is then radically null and void (1830, using James's translation of 1890/1981, p. 188).
The other is Franz Brentano's (1874/1973) distinction between "inner observation" [innere Beobachtung] and "inner perception" [innere Wahrnehmung]. Brentano asserts that inner observation involves attending to conscious psychological processes as they transpire. This, he says with Comte, is impossible, or at least fails as a psychological method, because the act of attending to the process inevitably destroys or at least objectionably alters the target process. In "inner perception", in contrast, psychological processes are noticed while one's attention is dedicated to something else. They are noticed only "incidentally" [nebenbei], and thus undisturbed.

Wundt agrees with Brentano and Comte that observation necessarily involves attention and so normally interferes with the process to be observed, if that process is an inner, psychological one. Contra Brentano, however, Wundt does not envision scientific knowledge of mental processes arising without attention of some sort, including planful and controlled variation -- attentive, planned exploration, if not of the process as it occurs, then at least of a reproduction of that process as a "memory image" [Erinnerungsbild]. No science by sideways glances for Wundt. The psychological method of "inner perception" is, for Wundt, the method of holding and attentively manipulating a memory image of a psychological process. This method, he thinks, has two crucial shortcomings: First, one can only work with what one remembers of the process in question -- the manipulation of a memory-image cannot discover new elements. And second, new elements may be unintentionally introduced through association -- one might confuse one's memory of a process with one's memory of another associated process or object.

Therefore, Wundt suggests, the science of psychology must depend upon the attentive observation of mental processes as they occur. He argues that those who think attention necessarily distorts the target mental process are too pessimistic. A subclass of mental processes is relatively undisturbed by attentive observation -- specifically the basic mental processes, especially of perception. The experience of seeing red is more or less the same, Wundt suggests, whether or not one is aware of the psychological fact that one is experiencing redness. Wundt also thinks the basic processes of memory, emotion, and volition are largely undisturbed by introspective attention. These alone, he thinks, can be studied by introspective psychology. More complicated processes, in contrast, must be studied non-introspectively -- through the observation of language, history, culture, and human and animal development, for example.

Wundt's students tended to disregard his admonition to restrict introspective observation to such basic processes. E.B. Titchener, for example, held that practiced introspectors could observe even their "higher" cognitive processes without disturbing them. Arguably, the eventual fall of introspective psychology in favor of behaviorism (focusing only on outward stimuli and behavioral response, nothing "inner" at all) was hastened by the ambitious attempts of Wundt's students to extend introspective method to such higher cognitive processes, about which methodological and substantive disputes proved intractable.

Monday, June 01, 2009

On Debunking IV: Non-Selective Debunking

(by guest blogger Tamler Sommers)

So far I have considered whether evolutionary explanations undermine love and whether they can be used to debunk non-consequentialist moral intuitions while leaving the consequentialist ones intact. In this post, I want to bring these thoughts together to examine a debunking strategy in metaethics I’ve defended in the past: the attempt to explain away objective moral values in general.

Here’s a rough outline of the strategy. The explanandum, the thing to be explained, is our moral intuitions—intuitions like “burning cats is wrong!” Moral skeptics and moral realists offer competing explanations for the explanandum, and the debate hangs on which of the explanations is more plausible. The objectivist claims that this intuition is picking up on real moral properties, out there in the world—the wrongness of burning a cat. But the skeptic points out that our biological/cultural evolutionary processes account for these intuitions, and so we would have them whether or not they referred to anything real. So with a clean slice from Occam’s razor we can banish objective moral values from our ontology.

As I said, this has always sounded plausible to me. But consider this strategy when applied to love for one’s children. The explanandum is my deep feelings of attachment for my daughter Eliza. Kin selection theory shows that I would have these feelings whether or not I really loved her. So with a clean slice from Occam’s razor we can banish love from our ontology.

Now the strategy seems completely misguided! Why? Because as Manuel and other commentators point out, my love for Eliza is constituted, at least in part, by the feelings of attachment.

The skeptic will object that unlike love, moral values are not supposed to be constituted by feelings or intuitions that arise from an evolutionary process. Love is subjective. Morality is objective. Fair enough. But what about colors? We don’t say that it’s false that snow is white because evolution designed us to view snow in this fashion.

At this point, the skeptic can respond in two quite different (and perhaps incompatible) ways. The first is to say that there is universal agreement about the whiteness of snow. But there is no universal agreement about morality. And that is why we should reject moral realism.

The second is to say that morality has essential features that are incompatible with these naturalistic explanations, features like its categorical nature or “bindingness.” Since these features cannot fit within a naturalistic ontology, even if there were universal agreement under normal conditions about certain moral judgments—perhaps due to our common evolutionary history—it would still not vindicate moral realism. (These two replies, of course, parallel Mackie’s arguments from relativity and queerness.)

I’ll talk about both responses in more detail in my next post. But for now, let me conclude with an observation about the latter reply. Ashley, in my first post, thought that real love was essentially incompatible with an evolutionary/neuroscientific account of its origin. As some commentators pointed out, one option available to Ashley upon learning of this account is to revise her concept of love accordingly. She could say: love doesn’t quite have the status and history that I thought it had, but it’s still real love, I still love my son. Would anyone begrudge her this revision? Would anyone accuse her of “changing the subject” about love and putting something bogus in its place? Similarly, even if we thought morality had certain features that we now realize are inconsistent with a naturalistic account of its sources, why couldn’t we just revise our concept of morality accordingly? If we allow that Ashley truly loves her son, why can’t we say it’s truly wrong to burn that poor cat?

Thursday, May 28, 2009

A Landmark

... The Splintered Mind's 250,000th visitor since its founding on April 27, 2006.

When Your Eyes Are Closed, What Do You See?

I've drafted the eighth and final chapter of my book in progress (working title Perplexities of Consciousness). Chapters 4 and 7 are not yet in circulatable form. Like the other chapters, this chapter is written to be comprehensible in isolation. As usual, I welcome feedback, either by email or as comments on this post. Unlike the other seven chapters, this chapter is not based on a previously published article.

Here's an abstract:

This chapter raises a number of questions, not adequately addressed by any researcher to date, about what we see when our eyes are closed. In the historical literature, the question most frequently discussed was what we see when our eyes are closed in the dark (and so entirely or almost entirely deprived of light). In 1819, Purkinje, who was the first to write extensively about this, says he sees "wandering cloudy stripes" that shrink slowly toward the center of the field. Other later authors also say such stripes are commonly seen, but they differ about their characteristics. In 1897, for example, Scripture describes them as spreading violet rings. After Scripture, the cloudy stripes disappear from psychologists' reports. Other psychologists describe the darkened visual field as typically -- not just idiosyncratically, for themselves -- very nearly black (e.g., Fechner), mostly neutral gray (e.g., Hering), or bursting with color and shape (e.g., Ladd). I loaned beepers to five subjects and collected their reports about randomly sampled moments of experience with their eyes closed. Their reports were highly variable, and one subject denied ever having any visual experience at all (not even of blackness or grayness) in any of his samples. I also briefly discuss a few other issues: whether we can see through our eyelids, whether the closed-eye visual field is "cyclopean", whether the field is flat or has depth or distance, and whether we can control it directly by acts of will. The resolution of such questions, I suggest, will not be straightforward.

Thanks very much, by the way, to all the people who wrote about their eyes-closed visual experience in response to my queries about it in earlier posts.

Tuesday, May 26, 2009

On Debunking III: A Surprising Concession from JJC Smart

(by guest blogger Tamler Sommers)

In my previous post, I suggested that recent attempts at “selective debunking” in metaethics—explaining away non-consequentialist intuitions while leaving the consequentialist ones intact—have been unsuccessful. The debunking either works for both sets of intuitions or it doesn’t work at all. In this post, I look for support from a surprising source: Mr. “Embrace the Reductio” himself, JJC Smart.

The classic debate about utilitarian approaches to justice—as we’re taught in textbooks—looks like this. The utilitarian argues that retributive approaches to punishment are incoherent and that punishing criminals is only justified when society as a whole benefits. The retributivist then mounts a reductio-ad-absurdum argument, claiming that the utilitarian approach could make it just to punish an innocent person (e.g., the magistrate and the mob case). The utilitarian now has two choices: (1) claim (implausibly in my view) that in real life it could never benefit society to punish an innocent person, or (2) embrace the reductio: claim that in those rare cases in which society benefits from punishing the innocent, it is morally right to do so. JJC Smart is associated with the latter response. When confronted by the fact that the common moral consciousness rebels against this conclusion, Smart famously replies “so much the worse for the common moral consciousness.” (p. 68 in Utilitarianism For and Against)

Smart goes on to say that he is inclined to reject the common methodology of testing general principles by seeing if they match our feelings about particular cases. Why? Smart writes: “it is undeniable that we have anti-utilitarian feelings in particular cases but perhaps they should be discounted as far as possible as due to our moral conditioning in childhood.” (68)

What I’ve never seen reproduced in books that lay out this dialectic are Smart’s parenthetic remarks that immediately follow:

“(The weakness of this line of thought is that the approval of the general moral principle of utilitarianism may be due to moral conditioning too. And even if benevolence was in some way a ‘natural,’ not an ‘artificial,’ attitude, this consideration could at best have persuasive force without any clear rationale. To argue from the naturalness of the attitude to its correctness is to commit the naturalistic fallacy.)”

This critique of his own strategy parallels my comments on Greene and Singer. The strategy one chooses for debunking anti-utilitarian feelings, if it works at all, seems to apply to utilitarian feelings as well. Selective debunking is treacherous business. Smart’s position here is more subtle and complex than it sometimes appears from secondhand reports.

Friday, May 22, 2009

Do Ethicists Eat Less Meat?

At philosophy functions there seems to be an abundance of vegetarians or semi-vegetarians, especially among ethicists. In my quest for some measure by which ethicists behave morally better than non-ethicists, this has seemed to me, along with charitable donation, among the most likely places to look. (On my history of failure to find evidence in previous research that professional ethicists behave better than anyone else, see here.)

Earlier this year, Joshua Rust and I sent out a survey to three groups of professors: ethicists in philosophy, philosophers not specializing in ethics, and a comparison group of professors in other departments. After a number of prods (verging, I fear, on harassment), we achieved a response rate in the ballpark of 60%, which is pretty good for a survey study given the wide variety of reasons people don't respond. Among our questions were three about vegetarianism.

First we asked a normative question. The prompt was "Please indicate the degree to which the action described is morally good or morally bad by checking one circle on each scale". Nine actions were described, among them "Regularly eating the meat of mammals such as beef or pork". Responses were on the following nine-point scale (laid out horizontally, not vertically as here):

O very morally bad
O
O somewhat morally bad
O
O morally neutral
O
O somewhat morally good
O
O very morally good
We coded the responses from "1" (very morally bad) to "9" (very morally good).

It seems that ethicists are substantially more condemnatory of eating meat (at least beef and pork) than are non-ethicists. Among the 196 ethicists who responded to this question, 59.7% espoused the view that regularly eating the meat of mammals was somewhere on the morally bad end of the scale (that is, 4 or less in our coding scheme). Among the 206 non-ethicist philosophers, 44.7% said eating the meat of mammals is morally bad. Among the 168 comparison professors, only 19.6% said it is morally bad. (All differences are statistically significant.)

We posed two questions about respondents' own behavior. One question was this: "During about how many meals or snacks per week do you eat the meat of mammals such as beef or pork?" On this question, 50 ethicists (25.5%), 40 non-ethicist philosophers (19.4%), and 23 other professors (13.7%) claimed complete abstinence (zero meals per week). (The difference between the ethicists and comparison professors was statistically significant, the other differences within the range of chance variation.) Ethicists reported a median rate of 3 meals per week, the other groups median rates of 4 meals per week (a marginal statistical difference vs. the non-ethicist philosophers, a significant difference vs. the comparison profs).

Now by design that question was a bit difficult and easy to fudge. We also asked a much more specific question that we thought would be harder to fudge: "Think back on your last evening meal (not including snacks). Did you eat the meat of a mammal during that meal?" We figured that if there was a tendency to fudge or misrepresent on the survey, it would show up as a difference in patterns of response to these two questions; and if there was such a difference in patterns of response, we thought the latter question would probably yield the more accurate picture of actual behavior.

So here are the proportions of respondents who reported eating the meat of a mammal at their last evening meal:

Ethicists: 70/187 (37.4%)
Non-ethicist philosophers: 65/197 (33.0%)
Professors in other departments: 75/165 (45.4%)
There is no statistically detectable difference between the ethicists and either group of non-ethicists. (The difference between the non-ethicist philosophers and the comparison professors was significant to marginal, depending on the test.)
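Since the raw counts are given here, the pairwise comparisons are easy to reproduce. Below is a sketch using SciPy's chi-squared test on each pair of groups; the post doesn't specify exactly which test was used, and chi2_contingency applies Yates' continuity correction by default, which is one way a result can come out "significant to marginal, depending on the test":

```python
# Pairwise chi-squared tests on the last-evening-meal counts reported above.
from scipy.stats import chi2_contingency

# (ate the meat of a mammal, didn't), by group
groups = {
    "ethicists": (70, 187 - 70),
    "non-ethicist philosophers": (65, 197 - 65),
    "other professors": (75, 165 - 75),
}

names = list(groups)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        table = [groups[names[i]], groups[names[j]]]
        chi2, p, dof, _ = chi2_contingency(table)  # Yates-corrected by default
        print(f"{names[i]} vs {names[j]}: p = {p:.3f}")
```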

Conclusion? Ethicists condemn meat-eating more than the other groups, but actually eat meat at about the same rate. Perhaps also, they're more likely to misrepresent their meat-eating practices (on the meals-per-week question and at philosophy functions) than the other groups.

I don't have anything against ethicists. Really I don't. In fact, my working theory of moral psychology predicted that ethicists would eat less meat, so I'm surprised. But this is how the data are turning out.

Tuesday, May 19, 2009

On Debunking Part Deux: Selective Debunking in Metaethics.

(by guest blogger Tamler Sommers)

In my last post, I explained why an evolutionary account of parental love in no way undermined or debunked my love for my daughter. Now I want to apply some ideas from that post and discussion to a debunking strategy employed by Peter Singer and Josh Greene* in “Ethics and Intuitions” and “The Secret Joke of Kant’s Soul”, respectively.

Singer and Greene aim to accomplish two things: first, to debunk our deontological moral intuitions by appealing to an evolutionary account of their origin; and second, to explain why this account doesn’t reveal consequentialist intuitions to be equally misguided. (Balancing these two aims is tricky business, as I’ll try to explain below.)

Their basic line of reasoning is this: evidence from evolutionary biology (and neuroscience, social psychology etc.) suggests that non-consequentialist intuitions are the product of emotional responses that enabled our hominid ancestors to leave more offspring. Since they were adaptive, we would have these intuitions whether or not they reflected moral truth of some kind. Consequently, we have no reason to trust these intuitions as guides to what we ought morally to do, or to take these intuitions as “data” to be justified by more general normative principles or as starting points in an attempt to reach reflective equilibrium. As Singer writes: “there is little point in constructing a moral theory designed to match considered moral judgments that themselves stem from our evolved responses to situations in which we and our ancestors lived during our period of evolution...” (348)

Here’s the key question. If this evolutionary account successfully debunks our non-consequentialist intuitions, then why doesn’t it debunk consequentialist intuitions as well, leading to moral nihilism? Singer provides one response, but I don’t think it can work. He claims that our consequentialist intuitions are not products of natural selection. They are better described as “rational intuitions.” Why? Well, Singer argues, to take one example, natural selection would not favor treating everyone’s happiness as equal. True, but that is precisely the consequentialist intuition that we don’t have. We believe it’s permissible (or obligatory) to favor our own children’s welfare over the welfare of others. As for the other crucial consequentialist intuition—not wanting people to suffer in general—this likely is a product of our evolved sense of empathy. By Singer’s reasoning, we should be suspicious of that intuition as well.

Josh Greene’s response is different, part of a divide and conquer strategy. He argues that the naturalistic and sentimentalist account undermines, at the very least, rationalist deontologists because it reveals them to be rationalizers. The normative conclusions they claim to be reaching through reason are actually a product of evolved emotional responses. As an analogy, he asks us to imagine a woman named Alice who (unbeknownst to her) has a height fetish and is only attracted to men over 6 foot 4. When she comes back from a date, she defends her view of the man’s attractiveness with claims about his wit, charm, intelligence, or lack thereof. But really it’s all about the height. Her claims are rationalizations of unconscious impulses, Greene argues, just like the theories of rationalist deontologists.

The analogy is interesting and perhaps not altogether favorable for Greene and Singer’s purposes—for consider what this true account of the causes of her taste in men doesn’t do. It doesn’t debunk her taste in men! The account does not show that it’s false that she finds tall men attractive; it just shows that she is attracted to them for different reasons than she originally thought. Is she going to start dating Danny DeVito types now that she’s aware of this? Surely not. And there’s no reason why she should. Similarly, those who, say, believe it permissible to favor one’s children’s welfare over the welfare of strangers can retain this intuition and just abandon the pretense that they’ve arrived at it through reason.

In short, while Greene’s strategy may undermine a certain kind of justification for non-consequentialist intuitions, it doesn’t seem to give us any reason to hold them in less regard than our consequentialist ones. If you agree that Singer has not demonstrated the inherent “rationality” of consequentialist judgments, then it seems the two sets of intuitions are, for the moment, equally justified or unjustified.

I can think of a host of objections here, but the post is long so I’ll stop for now. Comments and eviscerations welcome!

*Greene is officially a moral skeptic but one who only attempts to debunk non-consequentialist judgments and who believes that consequentialism is the most reasonable normative theory to endorse even if it is not, strictly speaking, true. The paper is available on his website.

Sunday, May 17, 2009

Do You Have Constant Tactile Experience of Your Feet in Your Shoes?

Chapter Six of my book in draft (working title Perplexities of Consciousness) is available here. As with the other chapters, I've tried to make it comprehensible without having read the previous chapters. Also as with the other chapters, I would very much value feedback, either by email or as comments on this post.

Abstract:

Do we have a constant, complex flow of conscious experience in many sensory modalities simultaneously? Or is experience limited to one or a few modalities, regions, or objects at a time? Philosophers and psychologists disagree, running the spectrum from saying that experience is radically sparse (e.g., Julian Jaynes) to saying it's radically abundant (e.g., William James). Existing introspective and empirical arguments (including arguments from "inattentional blindness") generally beg the question. I describe the results of an experiment in which I gave subjects beepers to wear during everyday activity. When a beep sounded, they were to note the last conscious experience they were having immediately before the beep. I asked some participants to report any experience they could remember. I asked others to report simply whether they had visual experience or not. Still others I asked if they had tactile experience or not, or visual experience in the far right visual field, or tactile experience in the left foot. Interpreted at face value, the data suggest a moderate view according to which experience broadly outruns attentional focus but does not occur anything like 100% of the time through the whole field of each sensory modality. However, I offer a number of reasons not to take the reports at face value. I suggest that the issue may, in fact, prove utterly intractable. And if so, it may prove impossible to reach justifiable scientific consensus on a theory of consciousness.

Tuesday, May 12, 2009

On Debunking (by guest blogger Tamler Sommers)

Not too long ago, I was one of those people who found the children of my friends annoying and the endless discussions about how fast they were growing unbearable (really, babies grow???!! Amazing!!). Then my daughter Eliza was born and I was smitten from day one. Teaching her to ride a bike, watching Charlie Chaplin movies with her, I feel like I’m in heaven. Now, if an evolutionary biologist comes along and tells me: “Yes, but these feelings of ‘love’ are really just a bunch of neurons firing—these feelings have been naturally selected for so that parents would care for offspring long enough for them to pass along their genes,” I’d shrug my shoulders or perhaps ask for more details. But this mechanistic/evolutionary explanation wouldn’t in any way undermine my love for my daughter or debunk my belief that I truly love her. Why? Because I’m a naturalist and never presumed that love wouldn’t have this type of explanation.

However, I know people who don’t feel this way about love—someone named Ashley, for example. For Ashley, real love cannot just be neurons firing because it was adaptive for her ancestors to have those neurons firing. Real love must have its source in something completely unrelated to the struggle for survival and reproduction. Naturalistic explanations terrify Ashley precisely because they do undermine her belief that she truly loves her children or partner.

But would/should these explanations debunk her belief that she loves her children? Well, that depends. It certainly seems strange (for Ashley) to think that she loves her son because it was adaptive for her ancestors to love their children. That doesn’t seem like real love. On the other hand, it also seems strange to her, given what she now knows, to say “it’s false that I love my son.” She still adores him, loves to play with him, would kill anyone who tried to harm him. So what, in the end, does/should Ashley think about her belief in the existence of her love—is it (a) false or (b) just in need of revision? The answer seems to depend in large part on which option, upon reflection, seems stranger, more counterintuitive. It also seems to be the case that whatever she chooses will be the result of her personal history, the particular ways in which Ashley acquired the concept of love (as opposed to, say, the way I acquired the concept).

I bring this up because lately I’ve been thinking that we have no agreed-upon method for determining when a belief has been explained and when it has been explained away. The above example makes me think that the success of debunking strategies is (a) tied to our preconceptions about the origins of the belief in question, and (b) indeterminate. In my next post, I’ll give my thoughts about how these considerations relate to specific naturalistic debunking strategies in metaethical debates (by Josh Greene, Richard Joyce, and Peter Singer). But first, I would love to hear others’ thoughts on the criteria for evaluating the success of debunking strategies in general, or debunking strategies in metaethics in particular.

Oh, and for a classic case of debunking (and a look back at one of Bob Barker’s lesser known enterprises) check this out. (Note that Randi would be providing an explanation rather than a debunking explanation if the preconception about what was causing the pages to move were different…)

Sunday, May 10, 2009

Titchener's Introspective Training Manual

Chapter 5 of my book in draft, Perplexities of Consciousness, is now up on my homepage.

This chapter does not presuppose any particular knowledge of Chapters 1-4 (and Chapter 4 still isn't in circulatable shape, anyway), so if you're curious feel free just to dive in.

Here's a brief abstract:

The unifying theme of this book is people's incompetence in assessing their own stream of experience, even in what would seem to be favorable circumstances. In this chapter I consider the possibility that old-fashioned "introspective training" in Titchener's style might help produce more accurate reports. In particular, I examine Titchener's treatment of auditory "difference tones" heard when a musical interval is played, his treatment of the "flight of colors" in afterimages following exposure to bright white light, and his treatment of subtle visual illusions.
As always, comments welcomed and appreciated, here on this post or by email to my academic address!

Wednesday, May 06, 2009

When Are Introspective Judgments Reliable?

I'm a pessimist about the accuracy of introspective judgments -- even introspective judgments about currently ongoing conscious experience (e.g., visual experience, imagery experience, inner speech; see here). But I'm not an utter skeptic. If you're looking directly at a large, red object in canonical conditions and you judge that you are having a visual experience of redness, I think the odds are very good that you're right. So then the question arises: Under what conditions do I think judgments about conscious experience are reliable?

I think there are two conditions. (In my Fullerton talk last week, I said three, but I'm giving up on one.)

First, I believe our judgments about conscious experience ("phenomenology") are reliable when we can lean upon our knowledge of the outside world to prop them up. For example, in the redness case, I know that I'm looking at a red thing in canonical conditions. I can use this fact to support and confirm my introspective judgment that I'm having a visual experience of redness. Or suppose that I know that my wife just insulted me. This knowledge of an outward fact can help support my introspective judgment that the rising arousal I'm feeling is anger.

Second, I believe that our judgments about phenomenology are reliable when they pertain to features of our phenomenology about which it's important to our survival or functioning to get it right. It's important that we be right about the approximate locations of our pains, so that we can avoid or repair tissue damage. It's important that we recognize hunger as hunger and thirst as thirst. It's important that we know what objects and properties we're perceiving and in what sensory modality. It's also important, I think, that we be able to keep track of the general gist of our thoughts and imagery so that we can develop chains of reasoning over time.

However, about all other aspects of our phenomenology not falling under these two heads we are, I think, highly prone to error. This includes:

* the general structural morphology of our emotions (e.g., the extent to which they are experienced as bodily or located in space, their physiological phenomenology)

* whether our thoughts are in inner speech or are instead experienced in a less articulate way (except when we deliberately form thoughts in inner speech, in which case it is part of the general gist of our thoughts that we're forming the kind of auditory imagery associated with inner speech)

* the exact location of our pains (as you'll know if you've ever tried to locate a toothache for a dentist) and their character (shooting, throbbing, etc.)

* the structural features of our imagery (how richly detailed, whether experienced as in a subjective location such as inside the head, how stable, etc.)

* non-objectual features of sensory experience such as its stability or gappiness or extension beyond the range of attention

* virtually everything about our hunger and thirst apart from their very presence or absence.

About such matters, I think, when we're asked to report, we're prone simply to fly by the seat of our pants and make crap up. If we exude a blustery confidence in doing so, that's simply because we know no one else can prove us wrong.

Tuesday, April 28, 2009

April 29-30: Fullerton Philosophy Conference: Consciousness and the Self

Last-minute notice, I know, but L.A.-area folks might be interested in attending all or part of a conference on "Consciousness and the Self" tomorrow and Thursday at Cal State Fullerton.

The speakers are Fred Dretske, Alex Byrne, Sydney Shoemaker, David Chalmers, Jesse Prinz, and me. Info here.

My talk is called "Self-Unconsciousness", posted here.

Monday, April 27, 2009

When Is It Time to Retire?

(by guest blogger Manuel Vargas)

My tenure of guest-blogging here at the Splintered Mind is coming to an end. My thanks to Eric for having me, and to all the commentators for their thoughtful comments and responses. In keeping with my retirement from this bit of guest-blogging, I thought I’d post something about retirement and its norms, since I know so little about it.

Everyone knows at least one professor, whether a colleague at their own institution or at another, of whom it is painfully clear to everyone EXCEPT that person that he or she should retire. So I’ve been told. I don’t actually know such a person myself, but it seems a common enough refrain that I’ve started to think about the phenomenon. In particular, I’m worried that some day I’ll be THAT guy, the guy whom everyone (except me) knows ought to retire. So, in support of my then-colleagues and chagrined students of the future, I’m trying to work out some general principles of retirement far in advance, so that I might apply them to my own circumstances. Will you help me?

In what follows, I offer some initial thoughts about the matter, with the acknowledgment that I will surely retract everything I write in this post at some point in the next 40 years.

First, some caveats about the scope of the involved ‘ought’:

(1) Let’s suppose we are talking about professors who have no real financial need to teach and whose psychology would not collapse in some profound way if they were no longer teaching.

(2) Let us also suppose that retirement here does not necessarily mean that the professor emeritus ceases to participate in the life of the profession or to perform research in some guise. We are only concerned with retirement from one’s regular full-time faculty position at the university.

And, (3) let us suppose that the professor’s surrendering of said position leaves the department not dramatically worse off from a long-term staffing or workload standpoint. And to anticipate: no, having to hire a replacement doesn’t count as making a department dramatically worse off in the relevant sense. So, the ought in my usage of the phrase “ought to retire” should be regarded as ranging over a somewhat limited set of circumstances.

Given the aforementioned restrictions of scope, then, I’m inclined to put the sense of ought that is my concern in those circumstances as something like this: when ought a (philosophy) professor to retire, from the combined standpoints of the professor’s dignity and the general well-being, given no powerful or important disincentives for doing so, but given that there are finitely many jobs in the profession at large and in one’s own department?

Some further caveats and refinements:

(4) I recognize that some professors have no dignity and/or no aspirations of dignity. Indeed, I may be one. But that is the sort of dirty, specific detail that we shall discreetly set to the side. It is better to pretend that all professors (and departments) have aspirations of dignity.

(5) Our considered question is manifestly NOT about age. Or, at any rate, it is not directly about age. Age may or may not be correlated with whether some of the conditions I suggest are satisfied, but chronological age itself is irrelevant to what follows. There are plenty of philosophers working now who, despite having known Kant personally, are under no “ought of retirement” of the sort under present consideration. And, presumably, there are people who could never have heard a David Lewis talk but who, if they had any good sense, would do themselves and their departments a favor and retire from the profession— if only their university had the good sense to offer them a reasonable retirement package!

(Randy Clarke once called my attention to a principled argument to this effect made by Saul Smilansky in Moral Paradoxes, an argument that concludes that most of us should retire immediately in light of the numbers of people who could do our jobs at least as well as we are doing them. Still, let’s ignore this too for the moment.)

These considerations having been noted, I suggest that a professor should retire when some weighted cluster of the following conditions is satisfied (the weights given by contextual features of the person’s dignity, the department’s aspirations for itself, and what one’s university values in its faculty members):

When, after 7 years or more since tenure . . .

(a) One’s classes are repeatedly cancelled for low enrollment at a much higher rate than other full-time faculty members.
(b) One’s published research has not been cited in more than 7 years in a scholarly context.
(c) One has not been invited or induced to participate in an extra-departmental committee in more than seven years.
(d) One has not served the discipline in any notable professional capacity in 10 years (e.g., editing a journal, refereeing papers, organizing conferences).

Do these conditions seem about right? Should something be added or deleted? How would you weight the conditions for, say, a teaching institution or a research institution? Is there some other sure-fire indicator for when someone should retire?

Admittedly, some of these conditions and numbers are arbitrary. And this is all way too rough and foolish. That’s okay, so long as the arbitrariness and foolishness don’t preclude a useful discussion. And anyway, we shouldn’t expect more precision than the subject matter permits, which must be true since Aristotle said it.

Please bear in mind that I’m not supposing that retirement means retirement from participating in the life of the profession. I’m simply assuming that one is walking away from a formal position that will be promptly filled by a new philosopher delighted by the prospect of employment in a profession with grotesquely fewer jobs than qualified applicants.

So help me out here . . . how will I know when I should retire from the active, full-time professor gig?

Thursday, April 23, 2009

The Purview of Human Subjects Committees

Federal regulations on human subjects research state that Institutional Review Boards (IRBs) should review activity that involves "systematic investigation... designed to develop or contribute to generalizable knowledge" and which further involves "intervention" or "interaction" with, or the acquisition of "identifiable private information" about, human beings.

Here's what I wonder: Given these definitions, why aren't IRBs evaluating journalism projects? (Of course they aren't. For one thing, journalism moves too quickly for IRBs, which typically take weeks if not months to issue approvals.)

Journalists interact with people, obviously (according to the code "communication or interpersonal contact" counts as interaction), so if they don't fall under the Human Subjects code, it must be because they don't do "systematic investigation... designed to develop or contribute to generalizable knowledge". Tell that to an investigative journalist exposing the abuses of factory laborers!

Is investigative journalism on factory workers maybe not "systematic"? Or not "generalizable"? I see no reason why investigative journalism shouldn't be systematic. Indeed, it seems better if it is -- unless one works with a very narrow definition of "systematic" on which much of the research IRBs actually do (and should) review is not systematic. And why should the systematicity of the research matter for the purpose of reviewability anyway? I also don't see why investigative journalism shouldn't be generalizable. The problems with this criterion are the same as with systematicity: Define "generalizable" reasonably broadly and intuitively, so that a conclusion such as "undocumented factory workers in L.A. are often underpaid" is a generalization, and journalism involves generalization; define it very narrowly and much IRB-reviewed research is not "generalizable". And as with systematicity, why should it matter to reviewability exactly how specific or general the conclusions are that come out in the end?

IRBs were designed in the wake of 20th century abuses of human subjects both in medicine (as in the Tuskegee syphilis study) and in psychology (such as the Milgram shock study and the Stanford prison experiment). Guidelines were designed with medicine and psychology in mind and traditionally IRBs focused on research in those fields. However, there are plenty of other fields that study people, and the way the guidelines are written, it looks like much research in those fields actually falls under IRBs' purview. So the U.C. Riverside IRB -- of which I'm a member -- has been reviewing more and more proposals in Anthropology, History, Ethnic Studies, and the like. Let's call it IRB mission creep.

We recently got news of a graduate student in Music who interviewed musicians and wanted to use those interviews as the core of his dissertation -- but he didn't think to ask for IRB approval before conducting those interviews. The IRB originally voted to forbid him to use that information, basically torpedoing his whole dissertation project. That decision was only overturned on appeal.

It makes a lot of sense, especially given the history of abuse, for IRBs to examine proposals in medicine and psychology. But do we really want to require university professors and graduate students to go through the rigmarole of an IRB application -- and it is quite a rigmarole, especially if you're not used to it! -- and wait weeks or months every time they want to talk with someone and then write about it?

Here's the core problem, I think: Research needs to be reviewed if there's a power issue that might lead to abuse. If the researcher has a kind of power, whether through social status (e.g., as a professor or a doctor vis-a-vis a student or a patient, or even in the experimenter-subject relationship), or through an informational advantage (e.g., through subjecting people to some sort of deception or experimental intervention whose purposes and hypotheses they don't fully understand), then IRBs need to make sure that that power isn't misused. But no IRB should need to approve a journalism student interviewing the mayor or a music professor interviewing jazz musicians about their careers. In such cases, the power situation is unproblematic. Of course in any human interaction there are opportunities for abuse, but only Big Brother would insist that a regulatory board should govern all human interaction.

Monday, April 20, 2009

Why the Gourmet Report is a Failure

(by guest blogger Manuel Vargas)

I’m a long-time fan of the Gourmet Report.* Nevertheless, I’ve recently started to wonder whether the Report fails to measure faculty quality, even when it is construed in roughly reputational terms, that is, in terms of concrete judgments of faculty quality as seen by the mainstream of research-active elements in the Anglophone portion of the profession.

(Before you start to roll your eyes let me note I’m still a fan of the report, and despite the problem I’m about to note, I think it is like democratic government— deeply problematic, but better than any of the alternatives. Moreover, it isn’t like my department or my work is at stake in anything the report does— I’m in a department with no graduate program and my career, such as it is, is beyond the point at which the reputation of the institution that awarded me a Ph.D. is of much consequence to it. So there.)

Here’s why I suspect that the Report is a failure at measuring faculty quality: we are bad judges of our own estimates of quality. That is, I suspect that we are unreliable reporters about the work that we regard as best, in something like a stable, all-things-considered sense. (I certainly think students are unreliable judges of what teaching they learn the most from, and I suspect something analogous is true of philosophers.**) I suspect the quality of my quality assessment is a function of lots of different things— what I’ve read recently, what first springs to mind when I see a philosopher’s name, whether I had reason to attend very closely to something of theirs, what I’ve forgotten about their work, and whether I disagreed vehemently or only lightly with it, and so on.

Even bracketing framing effects, though, I suspect that my explicit deliberative judgments of quality fail to perfectly track my actual positive regard of quality for philosophers and their work in some complex ways. Here’s one way my judgments might fail to track my actual regard: X’s work was underappreciated by me simply because the ideas sat in the back of my mind, and later played a role in my own judgments about what would work and what wouldn’t, but I never picked up on the fact that it was X’s arguments about Y that did that for me.

Here’s another way that might happen: I could be aware of X’s work, and think well of that person’s work, but underrate its importance to my own thoughts in the following way: I might not realize how much of that person’s work I cite and respond to in a way that takes it seriously. That is, I could think that work is of very high quality (perhaps worth more of my time than any other work on the subject matter!) but unless I counted up citations or counted up the number of times I focus on responding to that figure, I might simply fail to realize how significant that person’s work really is for me, and so I might fail to accurately assess the quality of work. (Of course: I might also overinflate importance for a related reason—I spent a lot of time criticizing someone’s work because it is easy, but that makes their name loom larger in my mind than my actual regard for it.)

Here’s another way “under-regarding” might happen: I could be subject to implicit bias effects of a peculiar sort. That is, I could unconsciously downgrade (or upgrade) my global assessment of quality on the basis of perceived race/class/gender/age etc., even if, when asked, I sincerely disavow that these things have anything to do with it. On this picture, the relevant test might be closer to something like: what would I think of this work if I had never known anything about the author? A: We’ll never know.

(Relatedly, implicit bias might work in a more targeted way, affecting only my overall assessments of worth and not my assessments of a particular argument or even of a specific paper, even when I am conscious of race/class/gender/age, etc.)

Here’s another way that might happen: I could be less good than I think at blocking halo effects of various sorts. So, knowing that X is at Wonderful Institution Y may inflate my estimate of that person’s work unconsciously. Or, my agreement with X on matter M may lead me to think better of X than someone else when filling out a survey, because we share the same beleaguered position on some matter. Or, knowing X has published many times in some journal I think well of might lead me to cast doubt on my own assessments of the quality of the work.

Suppose you thought people in general are subject to these effects. Are philosophers vulnerable to such effects? I think yes, but I’ve been repeatedly told that philosophers are special, and alone among humans immune to these sorts of effects because of our marginally greater reflectiveness. So, I must be wrong.

Still, there is some evidence that at-a-time global self-assessments are subject to priming and framing effects. There is some literature on the way in which people are good at monitoring their own discriminatory behavior only when they have reason to think it will be observed (so, for example, you probably aren’t very good at monitoring your discrimination against groups whose salience is not raised for you: think age, disability, non-black/non-white racial groups, etc.). There is also the fast-growing literature on implicit bias and the way it operates. And, there is a large body of work in cognitive science and psychology casting doubt on the accuracy and efficacy of conscious, deliberative judgments with respect to evaluative matters (something that Leiter himself, writing with Joshua Knobe, wrote about in the context of Nietzschean moral psychology!).

I don’t know how to correct for any of this, given the Report’s aim of measuring faculty quality in terms of conscious, explicit, global judgments of quality. Keeping track of citation impact corrects for at least one of the possible misalignments I mentioned, but not all of them. And anyway, citation impact rankings are subject to their own difficulties as well. (Although I think it would be a useful supplement to the Report to track this data, too.)

In sum, although I think the Gourmet Report probably fails to accurately capture our actual estimations of faculty quality, it nevertheless is likely the best thing we’ve got going for judging the philosophical reputation of departments and their specializations, as seen by the mainstream of research-active elements in the Anglophone portion of the profession.

*Indeed, I may be one of the longest of the long-time fans of the PGR: somehow I stumbled across an early version of it, back when Mosaic was my browser of choice, using email required some degree of sophistication with UNIX commands, and the Report appeared to be something produced on a typewriter. Anyhow, the Report was a big help when thinking about graduate schools and a nice supplement to local advice about where I should consider applying. In several cases the Report highlighted departments that individual advisors had never mentioned, but when I asked about such a department (because it was listed on the Report), the response was invariably something like “Oh yeah— so-and-so is there; that place would be pretty good, too.” I think the report has improved in numerous ways since those early days, and I think that it continues to be excellent at its ostensible function as one of several tools for those thinking about graduate school in philosophy. Indeed, it is out of a sense of its ongoing utility for graduate students that I’m happy to serve as one of the folks providing specialty rankings in philosophy of action.

** Regarding student unreliability, the matter is complicated. But see Mayer et al., “Increased Interestingness of Extraneous Details in a Multimedia Science Presentation Leads to Decreased Learning,” Journal of Experimental Psychology: Applied (2008), Vol. 14, No. 4, 329–339. And think about research on what teaching evaluations track. One might worry that too often teaching evals track things irrelevant to learning, or even (if the Mayer et al. data prove correct) impediments to learning!

Thursday, April 16, 2009

Where Does It Look Like Your Nose Is?

Following the suggestion of H. Ono et al. in their weird and fascinating 1986 article on "Facial Vision" in Psychological Research, I drew two lines on a piece of cardboard, and you might want to do the same. The lines start at one edge, about 6 cm apart, and converge to a point at the other edge. (A piece of paper held the long way will work fine, as long as you can keep it rigid.) Hold the midpoint of the 6 cm separation at the bridge of your nose and converge your eyes on the intersection point. If you do it right, it should look like there are three or four lines, two on the sides (one going toward each ear) and one or two in the center, headed right for the bridge of your nose.

The weird thing of course is that there are no lines on the cardboard that aim toward your ears or terminate at the bridge of your nose. Ono et al. suggest that the explanation (of the nose part at least) is that from the perspective of each eye the nose appears to be at the location of the other eye, so that the line headed toward your left eye seems to your right eye to be headed toward your nose and the line headed toward your right eye seems to the left eye also to be headed toward the nose.

With that in mind, I remove the cardboard and close one eye. Where does it seem that my nose is? Well, at first I'm inclined to say my perception is veridical: To the open eye, the nose seems closer than does my closed eye (or my bodily map of where my closed eye should be). But now I open and shut each eye in alternation. It does seem that my nose jumps around, maybe an inch or two side to side when I do this. But maybe that's just because my assumed egocentric position changes, relative to my nose?

Ono et al. also suggest trying to locate your phosphenes with one closed eye. (I had a post on this some time ago.) Phosphenes are those little circles you can see when you press on your eye. I find them easiest to see when I press on the corner of a closed eye and attend to the opposite corner of that same eye, looking for a dark or bright circle. (It may take some trial and error to get this right.) As I noted in the old post, for me at least the phosphenes generated by pressing the outside corner of a closed eye, with the other eye open, appear to be spatially located inside or behind the nose. This seems to me to be the case no matter which part of my closed eye I press. At the time of that post, it didn't occur to me that this might be because my nose was subjectively located as co-positional with the closed eye. Holding my nose with two fingers and pressing my closed eye with another finger from the same hand, to throw some tactile feedback into the mix, doesn't seem to change anything.

Tuesday, April 14, 2009

Armchair Sociology of the Profession IV: Splintered Fields

(by guest blogger Manuel Vargas)

UCR’s Peter Graham once mentioned to me that if you go to different departments, what you’ll find is that different figures will be really prominent in the local conception of a field. So, all the graduate students at School A read figure Y and all the grad students at School B read figure Z. What it takes a while to realize, he said, was that half of the time mostly the same views are in play, just filtered through whatever figures have local prominence. So, everyone is getting their dose of externalism, anti-realism, or whatever, but filtered through the concerns of whichever figures loom large in local graduate education. (Peter had a nice example of this, but I have since forgotten what it was. Go ask him yourself and see if he remembers what he had in mind.)

That picture seems mostly right to me. In different departments, different figures are more and less likely to be taught, even if there is widespread professional consensus outside the department about which figures are worth teaching and which issues are important. Local variation can be explained in several ways: partly in terms of who faculty members are reading or responding to in their own work at the time, partly in light of the literatures faculty members were trained in, and (without a doubt) by whether any of the big cheeses in a field are members of the department in which one is getting trained. In many (most?) fields, the overlap is substantial enough so that if, for example, you study metaphysics at Notre Dame, right out the gate you are going to be able to have fruitful, meaningful conversations with people who study metaphysics at Princeton.

Still, there are cases where there are vast gulfs in the conception of fields, both in terms of what positions are worth serious engagement and in terms of what the assumptions are that are governing inquiry into the field. Some places take Wittgenstein seriously. Others don’t have more than the vaguest idea of who he is. Some places love them some Davidson. Other places haven’t had him on a syllabus in decades.

This year, I’ve been struck by some surprisingly deep fractures in philosophy of action. I’ve sat in on a couple of seminars in philosophy of action at my host institution this year and it has been incredibly fascinating to see how different the conception of the field looks in these courses from how it looked in my own graduate training, my own teaching, and my own work in related parts of the field. Even though all these accounts are in some sense concerned with agency, the will, and the relationship of agents to actions (that’s why it counts as philosophy of action), it seems to me that the local differences are manifestly not a case of the same basic positions, substantive concerns, and the like being presented through a different constellation of figures. (For those who are wondering, it seems to me less a divide between the Causal Theory and non-causalism, and more a divide between those-who-start-with-Davidson and those-who-start-with-Anscombe, where starting with either does not necessarily entail substantial agreement.)

Lest I be misunderstood, I don’t say any of this by way of criticism of anyone’s conception of their field—please, let those flowers bloom. Indeed, I feel fortunate to have gained a sharper sense of my own philosophical presuppositions as a result of the experience. And, I think we all benefit from a variety of conceptions of a field, from a range of philosophical concerns, and from a broad range of philosophical methods and approaches. (I take it that something like this phenomenon is common enough that at least some departments used to resist incestuous hiring precisely out of concern that it would limit the intellectual vision of their local ecosystem.)

Anyway, what I’m wondering is what other fields have internal gulfs that make any substantive discussion across the splintered portions of the field challenging. Maybe Nietzsche scholarship is one instance, with the Frenchified Nietzsche interpreters on one side and the broadly “analytic” Nietzsche scholars on the other side. I imagine that there would be lots of head-scratching about how to talk to each other, if (assuming the unlikely) either group had any substantial interest in doing so. But surely there are other instances of a big divide in presuppositions that significantly hinders intelligibility across camps internal to the same subfields.

Any thoughts about good candidates for other deeply fractured fields? I’ve heard suggestions of something similar internal to ethics, with (broadly) sentimentalists on one side and a priorists (rationalists, contractualists, etc.) on the other, but I’m less confident that we’re at a very significant degree of head-scratching puzzlement about what the other camp(s) are doing internal to ethics. Any of this going on in phil mind? Epistemology? Political phil? Elsewhere?

Why Does the Pacific APA End on Easter?

This year, as usual, the Pacific Division meeting of the American Philosophical Association ended on Easter Sunday. At the beginning of the conference, God is crucified. While He is dead, everyone delivers their grand lectures and stays up late partying. When He rises, we're on our planes out of town.

Monday, April 06, 2009

On Encouraging Children to Reflect about Morality

Consider these two views of moral education:

(1.) The "liberal", inward-out model: Moral education should stress moral reflection, with rules and punishment playing a secondary role. If six-year-old Sally hits her friend Hank, you have to enforce the rules and punish her (probably), but what's really going to help her improve morally is encouraging her to think about things like: Hank's perspective on the situation, how she feels about having hurt Hank, and the best overall norms for behavior. Adults, likewise, make moral progress by thinking carefully about their own standards of right and wrong and whether their behavior lives up to those standards. Thus, mature morality grows from within: It's a natural development of the values people, upon reflection, discover to be already nascent in themselves.

(2.) The "conservative", outward-in model: Moral education should stress rules and punishment, with moral reflection playing a secondary role. You can't understand and apply the rules, of course, without some sort of reflection on them, but reflection should be in the context of received norms. Otherwise, it's likely just to become rationalization of self-serving impulses. Until people are morally well developed, the values that emerge from their independent and free reflection will almost inevitably be inferior to time-tested traditional cultural values. Thus, mature morality is imposed from without: People are forced to obey certain norms until obedience to those norms becomes habitual. Perhaps eventually those norms will be understood and embraced, but that's near the end of the developmental trajectory, not the beginning.

Now academically affiliated researchers on moral development almost universally prefer the first model to the second (examples include rationalists like Piaget and Kohlberg, most of their opponents who stress the importance of sympathy and perspective-taking, as well as people like Damon who endorse a hybrid view). The common idea is that children (and the morally undeveloped in general) improve morally when they are encouraged to think for themselves and given space to discover their own reactions and values.

Now I'm sympathetic to this idea, but here's my thought: Suppose Sally hits Hank and a liberally-minded teacher comes up and asks her how it made her feel to hurt Hank. What child, realistically, would say, "Well, I know he didn't deserve it, but it just felt good pounding him to a pulp!"? The reality is that the child is being asked to reflect in a situation where she knows that the teacher will approve of one answer and condemn another. This isn't free reflection, and the answer the child gives may not reflect her real feelings and values. Instead, it seems, it is a kind of imposition -- and one perhaps all the more effective if the child mistakes the resulting judgment for one that is genuinely her own.

Therefore, maybe, a liberal-seeming style of moral education is effective not because we have in us all an inclination toward the good that only needs encouragement to flower, but rather because reflection in teacher-child, parent-child, and similar social contexts is really an insidious form of imposition -- and thus, perhaps, the conservative's best secret tool.