Tuesday, June 24, 2008

What Is a Word, If a Baby Can Say It?

When my son Davy (who's now eight) was about ten months old he'd cruise around holding onto the couch saying "da da da da". He'd make the same "da da da" when he wanted to play with me. As an eager parent looking for milestones, I wondered if this was his first word but decided it didn't qualify. Soon he added "dis" to his repertoire: He'd point to something and say "dis", seemingly happy if I named what he was pointing at, but sometimes wanting more. Was that a word? I asked an expert on infant language. Emphatically (almost tyrannically), she said no. Then more gently she added that developmental psychologists generally didn't take such seeming-words seriously until there were at least ten of them. Then they were words.

But ten of what? A hot trend recently in certain circles is teaching babies sign language: For example, if a baby makes a fist with one hand, that means "milk", if she brings her fingertips together, that means "more". Of course we don't want to be prejudiced against sign language: Words needn't be spoken aloud. The fist sign for milk is a word.

But now suppose that we have a psychologically identical case where instead of making a fist, the baby kicks her left foot in a distinctive way when she wants milk, and the parents learn to respond to that and reinforce it. Is that left-foot-kicking a word? Presumably we don't want to say that. To count such left-foot-kicking as linguistic seems to cheapen language too much. Ordinarily we think of language as something advanced, uniquely human or nearly so (except for maybe a few signing apes and Alex the parrot). The kicking doesn't seem to qualify, any more than we say a dog has language if he gets the leash when he wants a walk. In getting the leash, he communicates non-linguistically. So where to put on the brakes? Not every human communication is a word: A red stoplight is not a word, nor is a wink or a flag or a computer icon. (I think!) ;-)

Even very young babies will tighten their fists sometimes as a sign of hunger -- but surely that's not a word for a newborn. And babies seem quite naturally to point. Is pointing a word? My newly adopted sixteen-month-old daughter Kate will raise both hands over her head to communicate that she's "all done" eating. But just as the point might be a formalized reach, the hands up might be a formalized reaching up to be lifted out of her high chair. In fact, Kate has started raising her hands over her head in frustration when she has been trapped in her car seat too long and wants to be "all done" with that.

If a twelve-month-old says "cup" for cup, we call it a word. If she says "mup" for cup, we find it cute and still call it a word. It seems to follow that if she regularly makes a sign language signal for cup, we should call that a word; and likewise it seems that if she makes her own unique sign for cup, we should call that a word too. But now the leash and the left-foot kicking seem to be back in.

So what is a word?

Thursday, June 19, 2008

The Psychology of Philosophy

"Experimental philosophy", as a movement within philosophy, has so far been almost entirely focused on testing people's intuitions and judgments about philosophical puzzle cases. In this post on the Underblog, I argue for a broader vision of experimental philosophy, including the possibility of experiments on:

* introspective claims about the structure of conscious experience (e.g., beeper studies to test claims about ordinary lived experience)

* the causes (including the psychological and cultural factors) influencing philosophers' preferences for particular sorts of philosophical theories (e.g., studies of the psychological correlates of a preference for Kantianism over consequentialism in ethics)

* the real-life consequences of adopting or teaching particular philosophical theories (e.g., does teaching students utilitarianism, or Nietzsche, have any positive (or negative) effects on their behavior?)
I'll be presenting these ideas orally at the Experimental Philosophy Workshop pre-conference at the Society for Philosophy and Psychology meeting in Philadelphia next week. (The program shows my presentation title as "Introspection and Experiment", but I've broadened my topic and thus changed the title.)

Comments welcome!

Tuesday, June 17, 2008

Ethicists and Political Philosophers Vote Less Often, Apparently, Than Other Philosophers

I assume that voting in public elections is a duty (a duty that admits of excuses and exceptions, of course) and that it's morally better to vote conscientiously than not to vote.

In previous research, I've found that:
(1.) ethics books are more likely to be missing from academic libraries than other philosophy books (full essay here),
(2.) philosophy students at Zurich do not give increasing amounts to student charities as their education proceeds, and
(3.) (with Joshua Rust) a majority of philosophers think ethicists behave, on average, no better than non-ethicists of similar social background (full essay here).

With Josh Rust's and my current findings on voting patterns, that's now four consecutive studies suggesting that ethicists behave no better than, or maybe even worse than, comparable non-ethicists.

Looking at voter history data from California, Florida, North Carolina, and Washington State, we found voting rates among professors registered to vote:

Ethicists: 0.97 votes/year (227 records total)
Political philosophers (a subgroup of ethicists): 0.95 votes/year (96 records)
Non-ethicist philosophers: 1.07 votes/year (279 records)
Political scientists: 1.11 votes/year (244 records)
Other professors: 0.93 votes/year
Differences greater than .07 votes/year are statistically significant. The results are stable controlling for age, gender, ethnicity, state of residence, institution type, and political party. Controlling for rank doesn't substantially change the results, except that it raises the voting rate of the comparison group of "other professors" to a rate between that of ethicists and non-ethicists, so that it can't be said that philosophers vote more often than non-philosophers.
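The post doesn't say which statistical test was used for the group comparisons. Purely as an illustration of how a difference in votes/year between two groups might be checked for significance, here is a minimal two-sided permutation test -- all numbers and group sizes in the usage example below are made up, not the actual data:

```python
import random

def perm_test(a, b, n_iter=10_000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Repeatedly shuffles the pooled observations into two groups of the
    original sizes and counts how often the absolute mean difference is
    at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b  # copy; the inputs are left untouched
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            count += 1
    return count / n_iter

# Hypothetical per-professor vote rates (votes/year), NOT the study's data:
group_1 = [1.1, 0.9, 1.2, 1.0, 1.1, 0.8, 1.3, 1.0]
group_2 = [0.9, 1.0, 0.8, 1.1, 0.9, 1.0, 0.7, 1.0]
p = perm_test(group_1, group_2)
```

A permutation test is just one reasonable choice here; the actual analysis (which also controlled for age, gender, and other covariates) would have required regression-based methods.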

Now I'd have thought political philosophers, like political scientists, would be more engaged than average with the political process. Instead -- depressingly (to me; maybe you'll rejoice?) -- it seems that they're less engaged, at least if voting is taken as the measure of engagement.

When I face moral decisions -- decisions like "should I go out and vote even though I'd rather look for Weird Al videos on YouTube?" -- I often reflect on what I should do. I think about it; I weigh the pros and cons; I consider duties and consequences and what people I admire or loathe would do. I am implicitly and deeply committed to the value of reflection in making moral decisions and prompting moral behavior. To suppose that moral reflection is valueless is pretty dark, or at least pretty radical.

Yet if moral reflection does us moral good, you'd think that ethics professors, who are presumably champions of moral reflection, would themselves behave well -- or at least not worse!

(Josh Rust and I will be presenting these results as a poster at the Society for Philosophy and Psychology meeting next week. The full text of the poster will be available shortly on the Underblog.)

Update, June 26:
In the last couple of days, Josh and I were able to do a first analysis of new data from Minnesota. In that state, the ethicists and political philosophers appear to be so conscientious in their voting that it knocked the p-value of our main effect from .03 to .06 -- in other words, the trend in Minnesota was so strong the other direction that we can no longer feel sufficiently confident (employing the usual statistical standards) that the trend we see for ethicists to vote less is not due simply to chance. So we should probably amend our thesis from "ethicists vote less" to the weaker "ethicists vote no more often". However, the Minnesota data also seem to introduce some potential confounds (such as that Minnesota philosophers seem to have unusual job stability) that complicate the interpretation and that we may want to try to compensate for statistically. So the final analysis isn't in!

Saturday, June 14, 2008

Political Scientists Vote More Often Than Other Professors

One theme of my recent research has been the moral behavior of ethics professors -- do they behave any better than others of similar social background? There's good reason to anticipate that they would: Presumably they care a lot and think a lot about morality, and one might hope (at least I would hope!) that would have a positive effect on their behavior.

However, some people don't think we should expect this. After all, doctors smoke, police commit crimes, economists invest badly. Whether they do so any less than anyone else is hard to assess. (However, the evidence I've seen so far suggests that doctors do smoke less and economists do invest better, contra the cynic. I don't know about police.)

Half a year ago I posted a couple of reflections on the lack of data regarding whether political scientists vote more often in public elections than other professors do (here and here). With perhaps more enthusiasm than wisdom, I decided to go out and get the data myself. Josh Rust and I (and some helpful RAs) gathered official voting histories of individuals in California, Florida, North Carolina, and Washington State (Minnesota pending) and matched those records with online information about professors in universities in those states. (The California data included only statewide elections; the other states include at least some local election data.) We looked at the years 2000-2007.

The data suggest that political scientists do vote more often, averaging 1.11 votes/year as opposed to 0.93 votes/year for a comparison group of professors drawn randomly from all other departments except philosophy.

We ruled out gender, political party, state of residence, age, ethnicity, and institution type (research-oriented vs. teaching-oriented) as explanatory factors. All of these factors either had no effect on vote rate (gender, party, institution type) or were balanced between the groups (state, age, ethnicity). The one factor that did have an effect and wasn't balanced between the groups was academic rank: Non-tenure-track faculty voted less often, and there were fewer tenure-track faculty in the comparison group than among the political scientists. However, even looking just at tenure-track faculty, political scientists still vote more: 1.12 votes/year for political scientists, 0.99 for comparison faculty. (Political science department affiliation also remains predictive of vote rate in multiple regression models including rank and other factors.)

These data support my ethicists project in two ways: First, they show at least some relationship between professorial career choice and real-world behavior; and second, since voting is widely (and I think rightly) seen as a duty, it's a measure of one piece of moral behavior. We can see if ethicists (and perhaps especially political philosophers) are more likely to perform this particular duty than are non-ethicists. Results on that soon!

Friday, June 13, 2008

Experimental Philosophy Survey

Thomas Nadelhoffer has posted a new online survey, and he wants philosophical respondents. Link here. I shouldn't reveal the contents, though, lest I worsen the problems of self-selection bias! The survey took me about 15 minutes to complete, going pretty fast.

Monday, June 09, 2008

Political Affiliations of American Philosophers, Political Scientists, and Other Academics

As regular readers will know, I've been working hard over the last year thinking of ways to get data on the moral behavior of ethics professors. As part of this project, I have looked at the public voting records of professors in several states (California, Florida, North Carolina, Washington State, and soon Minnesota), on the assumption that voting is a civic duty. If so, we can compare the rates at which ethicists and non-ethicists perform this duty. Soon I'll start posting some of my preliminary analyses.

First, however, I thought you might enjoy some data on the political affiliation of professors in California, Florida, and North Carolina. (These states make party affiliation publicly available information.) Although U.S. academics are generally reputed to be liberal and Democratic, systematic data are sparser than one might expect. Here's what I found.

Among philosophers (375 records total):

Democrat: 87.2%
Republican: 7.7%
Green: 2.7%
Independent: 1.3%
Libertarian: 0.8%
Peace & Freedom: 0.3%
Among political scientists (225 records total):
Democrat: 82.7%
Republican: 12.4%
Green: 4.0%
Independent: 0.4%
Peace & Freedom: 0.4%
Among a comparison group drawn randomly from all other departments (179 records total):
Democrat: 75.4%
Republican: 22.9%
Independent: 1.1%
Green: 0.6%
By comparison, in California (from which the bulk of the data are drawn), the registration rates (excluding decline to state [19.4%]) are:
Democrat: 54.3%
Republican: 40.3%
Other: 5.3% [source]
Perhaps this accounts for my sense that if there's one thing that's a safe dinner conversation topic at philosophy conferences, it's bashing Republican Presidents.

Now I'm not sure 87.2% of professional philosophers would agree that there's good evidence the sun will rise tomorrow (well, that's a slight exaggeration, but we are an ornery and disputatious lot!), so why the virtual consensus about political party?

Conspiracy theories are out: There is no point in the job interview process, for example, when you would discover the political leanings of an applicant who was not applying in political philosophy. We ask about research, teaching, and that's about it. Even when interviewing a political philosopher (a small minority of philosophers), it will not always be evident whether the interviewee is "liberal" or "conservative", since her research will often be highly abstract or historical.

Self-interest also seems an insufficient explanation: Many professors are at private institutions, and few philosophy professors earn government grants, so even if Democrats are more supportive of funding for universities and research, many philosophy professors will at best profit very indirectly from that. Furthermore, it's not clear to me -- though I'm open to evidence on this -- that Democrats do serve professors' financial interests better than Republicans. For example, social services for the poor and keeping tuition low seem to have a higher priority among liberal Democrats in California than the salaries of professors.

Democrats might be tempted to flatter themselves with this explanation: Professors are smart and informed, and smart and informed people are rarely Republican. That would be interesting if it were true, and it's empirically explorable; but I suspect that in fact a better explanation has to do with the kind of values that lead one to go into academia and that an academic career reinforces -- though I find myself struggling now to discern exactly what those values are (tolerance of difference? more willingness to believe that knowledgeable people can direct society for the better? less respect for the pursuit of wealth as a career goal?).

Wednesday, June 04, 2008

Self-Blindness?

If introspection is essentially a matter of perceiving one's own mind, as philosophers like John Locke and David Armstrong have suggested, then just as one might lose an organ of outer perception, rendering one blind to events in that modality, so also, presumably, could one lose an organ of inner perception, leaving one either totally introspectively blind or blind to some subclass of one's own mental states, such as one's beliefs or one's pains.

Is such self-blindness possible? There are, as far as I can tell, no clear clinical cases -- no cases of people who feel pain but consistently have no introspective awareness of those pains, no cases of people who can tell what they believe only by noticing how they behave. (I'm excluding cases where the being lacks the concept of pain or belief and so can't ascribe those states at all; and with apologies to Nichols and Stich on schizophrenia). Sydney Shoemaker suggests that the absence of such cases flows from a deep conceptual truth: There's a fundamental connection between believing and knowing what you believe, between feeling pain and knowing you're in pain, that there isn't between facing a red thing and knowing you're facing a red thing -- and thus in this respect introspection differs importantly from sensory perception.

Since I'm generally a pessimist about the accuracy of introspective reports, my first inclination is to reject Shoemaker's view and allow the possibility of self-blindness. But then why do there seem to be no clear cases of self-blindness? I suspect that in this matter the cases of pain and belief are different.

Consider pain first. Possibility one: The imagined self-blind person has both the phenomenology of pain and typical pain behavior (such as avoidance of painful stimuli), maybe even saying "ow!" If so, on the basis of this behavior, she could determine (as well as anyone else could, from the outside) that she was sometimes in pain; but she would have no direct, introspective knowledge of that pain. Contra Shoemaker, this seems to me not inconceivable. However, it also seems very likely that a real, plastic neural system would detect regularities in the neural outputs generating pain behavior and respond by creating shortcuts to judgments of, or representations of, pain -- shortcuts not requiring sensory detection of actual outward behavior. For example, the neural system could notice the motor impulse to say "ow!" and base a pain-judgment on that impulse (perhaps even if the actual outward behavior is suppressed). This, then, might start to look like (might even actually become?) "introspection" of the pain.

Alternatively, the self-blind person might show no pain behavior whatsoever. Then the person would behave identically to someone with total pain insensitivity (and cases of total pain insensitivity do exist). But now we're faced with the question: Do people normally classified as utterly incapable of feeling pain really feel no pain, or do they have painful phenomenology somewhere with no means to detect it and no way to act on it? The latter possibility seems extravagant to me but not conceptually impossible. And maybe even a fuller understanding of the neuropsychology of pain and pain insensitivity might help us decide whether there are some actual cases of the latter.

Regarding belief, I'm more in sympathy with Shoemaker. My own view is that to believe something is just a matter of being prone to act and react in ways appropriate to, or that we are apt to associate with, having the belief in question (taking other mental states and excusing conditions into account). Among the actions and reactions appropriate to belief is self-ascription of the belief in question, self-ascription that doesn't rely on the observation of one's own behavior. Someone who had the concept of belief but utterly lacked direct self-ascriptive capacity would be in some way defective not just as a perceiver of her beliefs but as a believer.

(Thanks to Amy Kind and Charles Siewert whose excellent articles criticizing Shoemaker on self-blindness prompted this post.)

Thursday, May 29, 2008

More Philosophy Ph.D. Admissions Data from U.C. Riverside

[Due to objections by some members of the UCR department, this post has been removed.]

Monday, May 26, 2008

Will the Real Issue Please Stand Up? (by guest blogger Bryan Van Norden)

One of the classic debates in ethics is between realism and anti-realism. It's hard to precisely state what is at issue without being tendentious, but one way would be this. Are there moral facts (realism) or are there just individual human or social opinions or reactions (anti-realism)?

I'm not going to say anything in this post about the arguments for or against each side, and I'm intentionally not going to say which side I'm on personally. Instead, I want to just make a couple of sociological observations (with the caveat that my results are purely anecdotal).

(1) Most people feel strongly about this issue, whether or not they have a "philosophical" mind. This topic is a sure-fire discussion starter in any introductory philosophy class.

(2) Whichever side a person agrees with, she generally thinks that the other position is pretty obviously mistaken, and is a little bemused that anyone actually believes the other side.

(3) Realists worry that, if you actually took anti-realism seriously, it would encourage some sort of moral decay, while anti-realists worry that realism is really just a rationale for being dogmatic about morality.

(4) If pressed, most realists will assert that they "know" that anti-realism does NOT actually encourage moral decay, while anti-realists will assert that they "know" that realism is NOT actually just a rationale for being dogmatic about morality.

My sense is that (3) is what accounts for (1). In addition, no matter how often people assert what they do in (4), they still really believe (3) in their gut. This explains (2), because the arguments for one's position are really just rationalizations, while the arguments against one's position don't touch what really motivates one to accept it.

Maybe it would lead to a more productive debate if we talked less about moral realism and anti-realism, and more about how to find the mean between (A) taking one's ethical commitments seriously, and (B) dogmatically sticking to one's commitments? (David Wong has an interesting discussion of this in his recent book, Natural Moralities, pp. 179-272.)

(By the way, sorry for being so behind in replying to comments. I got a copy-edited manuscript this week, which I was rushing to revise for the publisher. But I sent it off, and I'll be back to my contrary self on Monday. *smile* )

Thursday, May 22, 2008

The Unreliability of Naive Introspection...

... that is, my essay of that title, is now out in Philosophical Review. I confess to some pride: The essay is the culmination of years of thought and discussion, including a series of more narrowly focused essays on the same general theme; and Philosophical Review is the most selective and prestigious of all philosophy journals.

A certain very eminent philosopher (who will go unnamed) told me that he thought the essay was perhaps "the chattiest essay ever published in Phil Review". I'm not quite sure what to make of that remark....

Here are the first two paragraphs [footnotes excluded]:

Current conscious experience is generally the last refuge of the skeptic against uncertainty. Though we might doubt the existence of other minds, that the sun will rise tomorrow, that the earth existed five minutes ago, that there's any "external world" at all, even whether two and three make five, still we can know, it's said, the basic features of our ongoing stream of experience. Descartes espouses this view in his first two Meditations. So does Hume, in the first book of the Treatise, and -- as I read him -- Sextus Empiricus. Other radical skeptics like Zhuangzi and Montaigne, though they appear to aim at very general skeptical goals, don't grapple specifically and directly with the possibility of radical mistakes about current conscious experience. Is this an unmentioned exception to their skepticism? Unintentional oversight? Do they dodge the issue for fear that it is too poor a field on which to fight their battles? Where is the skeptic who says: We have no reliable means of learning about our own ongoing conscious experience, our current imagery, our inward sensations -- we are as in the dark about that as about anything else, perhaps even more in the dark?

Is introspection (if that's what's going on here) just that good? If so, that would be great news for the blossoming -- or should I say recently resurrected? -- field of consciousness studies. Or does contemporary discord about consciousness -- not just about the physical bases of consciousness but seemingly about the basic features of experience itself -- point to some deeper, maybe fundamental, elusiveness that somehow escaped the notice of the skeptics, that perhaps partly explains the first, ignoble death of consciousness studies a century ago?

[To continue, see the official Phil Review website, or the version on my own website.]

Monday, May 19, 2008

What Sorts of People Should There Be?

A fascinating question. A reflexive rejection of eugenics is too simplistic. There are so many ways now open (or opening) to shape ourselves and future generations, and the benefits and drawbacks are so diverse and complex, that you'll be glad to hear of

the new blog all about it!

Defining "Consciousness"

Scientists, students, ordinary folks, and even philosophers sometimes find the word "consciousness" baffling or suspicious. Understandably, they want a definition. To the embarrassment of us in "consciousness studies", it proves surprisingly difficult to give one. Why? The reason is this: The two most respectable avenues for scientific definition are both blocked in the case of consciousness.

Analytic definitions break a concept into more basic parts: A "bachelor" is a marriageable but unmarried man. A "triangle" is a closed, three-sided planar figure. In the case of consciousness, analytic definition is impossible because consciousness is already a basic concept. It's not a concept that can be analyzed in terms of something else. One can give synonyms ("stream of experience", "qualia", "phenomenology") but synonyms are not scientifically satisfying in the way analytic definitions are.

Functional definitions characterize terms by means of their causal role: A "heart" is the primary organ for pumping blood; "currency" is whatever serves as the medium of exchange. Someday maybe a functional definition of consciousness will be possible; but first we have to know what kind of causal role (if any) consciousness plays; and we're a long way from knowing this. Various philosophers and psychologists have theories, of course, but to define consciousness in terms of one of these contentious theories begs the question.

So maybe the best we can do is definition by instance and counterinstance: "Furniture" includes these things (tables, desks, chairs, beds) and not these (doors, toys, clothes). "Square" refers to these shapes and not these. Hopefully with enough instances and counterinstances one begins to get the idea. So also with consciousness: Consciousness includes inner speech; visual imagery; felt emotions; dreams; hallucinations; vivid visual, auditory, and other sensations. It does not include: immune system response; early visual processing; myelination of the axons; what goes on in dreamless sleep.

Unfortunately, definition by instance and counterinstance leaves unclear what to do about cases that don't obviously fit with either the given instances or counterinstances: If all the given instances and counterinstances of "square" are in Euclidean space, what does one do with non-Euclidean figures? Are paintings "furniture"?

Now of course some cases are just vague and rightly remain so. But one question that interests me turns on exactly the kinds of cases that definition of consciousness by instance and counterinstance leaves unclear: Whether unattended or peripheral stimuli (the hum of the refrigerator in the background when you're not thinking about it, the feeling of your feet in your shoes) are conscious. It would beg the question to include such cases in one's instances or counterinstances. But this class of potential instances is so large and important that to ignore it risks leaving unclear exactly what sort of phenomenon "consciousness" is.

Now maybe (this is my hope) there really is just one property -- what consciousness researchers call "consciousness" -- that stands out as the obvious referent of any term meant to fit the instances and counterinstances above, so that no human would accidentally glom on to another property, given the definition, just as no human would glom on to "undetached rabbit part" as the referent of the term "rabbit" used in the normal way. But this may not be true; and if it's also not acceptable (as I think it's not) simply to lump cases like the sound of the fridge and the feeling of one's shoes into the class of vague, in-between cases, then even definition by instance and counterinstance fails as a means of characterizing consciousness.

Are we left then with Ned Block's paraphrase of Louis Armstrong: "If you got to ask [what consciousness/jazz is], you ain't never gonna get to know"?

Thursday, May 15, 2008

The Hermeneutic Alternative (by guest blogger Bryan Van Norden)

Philosophy begins in wonder. -- Aristotle

Aristotle was wrong. Philosophy begins when a community of people encounter a problem that outstrips their current methods for problem-solving. For example, in ancient Greece, the Sophists seemingly could argue persuasively for either side in a court case or public policy debate. Or in Eastern Zhou-dynasty China, the traditional Way of organizing society was no longer promoting prosperity and preserving social order. Plato addressed the former problem, and Confucius the latter. (Forgive me for greatly oversimplifying the views of these two subtle and multifaceted philosophers.)

Faced with a philosophy-inducing problem, members of the community continue to share (and hold true) most of their background beliefs. If they did not, they would be unable to communicate about the problem. But whatever the problem is, it will call into question some of their beliefs. (For example, "rational argumentation can arrive at objective truth" or "the Way of the ancients is relevant to contemporary society.") On the basis of their shared beliefs, the members of the community formulate solutions to the problem. (For example, "mathematics provides a paradigm for how rational argumentation can succeed" or "if we ethically cultivate individuals and put them into positions of authority, society can be returned to the ancient Way.")

Any solution must satisfy two criteria. (1) It must answer all plausible, substantive objections that are raised against it by other members of the community (including alternative solutions); and (2) it must fit our interactions with the world. In other words, philosophy always involves two types of dialogue partners: other people and the world. (Obiter dicta, I think Richard Rorty tended to forget or underemphasize the role of the latter dialogue partner.) To continue my earlier examples, Plato had to answer the objection that most people are not convinced by the type of argumentation he recommends (and he replied with the "Myth of the Cave") and Confucius had to answer the objection that brute force was the only plausible method for enforcing social order (and he replied with the concept of sagely "Virtue").

What is the payoff of this "hermeneutic account" of philosophy?

Contrast Descartes, who began with subjective "ideas," and then tried to make the jump from them to the world. The problem is that if you start with subjective ideas, and assume that there is a world independent of those ideas, you will be led to skepticism. Or if you start with subjective ideas, and abandon the notion of some unattainable world beyond them, you will be led to relativism. (I think the influence of the Cartesian picture is part of the reason undergraduates assume that relativism or skepticism is self-evident.) But Descartes' epistemological starting point is arbitrary and unwarranted. We begin as creatures in the world, communicating with other creatures in the world with whom we share many common beliefs about the world. Now, through a subtle process of abstraction we can temporarily adopt a Cartesian standpoint, but we do not start out there, and we are not obligated to go there.

The methodological implication of the hermeneutic approach is that, in order for our position to be justified, we need to (1) know the major objections that have been raised against our "solution," and (2) know the major alternative solutions to the problem we face, so that we can (3) answer the objections, and (4) explain why our solution is superior to the alternatives. Again, a contrast with Descartes is instructive, because his Meditations invites us to think of philosophy as an individual process conducted in isolation from previous beliefs. But, as I noted in an earlier post, one cannot even understand Descartes himself without seeing him as a participant in an ongoing dialogue. So the individualist methodology is self-undermining.

Tuesday, May 13, 2008

Philosophers' Spouses

Between the new daughter and two and a half weeks (so far) of raging sinusitis, it's been hard for me to keep up with the blog. Thankfully, Hagop and Bryan have done some interesting guest posts in the meantime!

But I can't let things go entirely, not without at least tossing out a little food for thought. Here's the food: In my experience, philosophers' spouses (unless they are themselves philosophers) are almost universally disdainful of the value of philosophy -- much more so than the average comparably-educated non-philosopher, it seems to me, and much more so than the spouses of professors in other fields are of the value of those other fields.

Here's a tiny bit of empirical data: Eleven people with no academic affiliation, mostly philosophers' spouses, responded to Josh Rust's and my survey of people's opinion of the moral behavior of ethicists. None of them thought ethicists behaved better on average than non-academics, six thought they behaved the same, and five thought they behaved worse (two-tailed binomial, p = .06). Although the sample size is obviously very small, of all respondents, philosophers' spouses appeared to have the darkest view of ethicists' behavior. (I wonder what ethicists' spouses in particular would say.)
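For readers who want to check the statistic, here is a minimal sketch of a two-tailed exact binomial test. I'm assuming (the post doesn't say) that the six "behaved the same" responses are dropped as ties, leaving 0 "better" vs. 5 "worse" responses tested against a 50/50 null:

```python
# Two-tailed exact binomial test, sketched with only the standard library.
# Assumption (mine): ties are dropped, so we test k = 0 "better" out of
# n = 5 non-tied responses against a null probability of 0.5.
from math import comb

def two_tailed_binomial_p(k, n, p=0.5):
    """Sum the probabilities of all outcomes no more likely than the
    observed count k -- the usual two-tailed exact binomial p-value."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    return sum(q for q in probs if q <= observed + 1e-12)

p_value = two_tailed_binomial_p(0, 5)
print(round(p_value, 4))  # 0.0625 -- the p = .06 reported above
```

With 0 of 5 in one tail, the only outcomes as extreme are 0 and 5, each with probability 1/32, giving p = 2/32 = .0625, matching the reported figure.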

Suppose I'm right about the disdain philosophers' spouses generally have for philosophy. What might explain that? Do they have a clearer view of philosophy than we? Bryan complains that (American) philosophers don't get no respect. Maybe we don't deserve respect, and our spouses are just the ones who know this best?

Update, May 14: Here's one possibility: Philosophy is about confronting questions that resist straightforward resolution and many of which are pretty much timeless. When faced with such a daunting task, most of what we mere mortals can provide is only bunk, even if the bunk-providing philosophers themselves don't realize it. (It doesn't follow that philosophy isn't worth doing.)

Tuesday, May 06, 2008

Philosophers Don't Get No Respect (by guest blogger Bryan Van Norden)

(This is the second in a series of guest blog entries by Bryan.)

Eric wrote an interesting entry two months ago in which he noted how much higher the social status of philosophers is in Iran than in the U.S. I'd like to expand on that a bit.

I think that almost every civilization today or in recorded history has given philosophers (and humanists in general) more respect than does the contemporary US.

Abelard, the medieval philosopher, was greeted like visiting royalty wherever he went. The brilliant and well-born Heloise could have had any husband she wanted, but she famously said to him, "I would rather be your whore than another man's wife."

I knew a fellow graduate student who studied in Taiwan for a while. He came back with a Chinese wife from a wealthy and influential family. She knew that her new husband would never make a lot of money, but she was content because of the prestige that came with being a scholar (or a scholar's wife). When he got his first academic job, and she saw what the social status of professors in the US is, she divorced him and returned to Taiwan.

Jurgen Habermas is routinely consulted by European media for his views on current events, as were Derrida and Bertrand Russell. Here in the US, Larry King has interviewed Sean Penn about his views on the Iraq War, and Jenny McCarthy about the causes of autism. And Paris Hilton is a celebrity because she had the good fortune to be born rich, and the misfortune to appear in a sex tape.

If you look at representations of intellectuals in US media, they are almost always either arrogant and cruel (like Professor Kingsfield of The Paper Chase) or amusingly feckless (like Diane Chambers or Frasier Crane on Cheers). As Eric reminded me, there is a bit of an exception for scientists. Einstein posters still grace a few dorm rooms, and brilliant doctors like "House" are often pop-culture icons. But there is no social cachet for those of us who know that modern natural science would have been impossible without our own intellectual discipline.

I think that the American disdain for intellectuals grew out of a preference for populism and a rejection of what was seen as European elitism. Yet our country is in actuality very elitist. It is not, however, an elitism of intelligence and achievement. It is an elitism of wealth and celebrity.

Saturday, May 03, 2008

Crossing Cultures in Free Will (by Guest Blogger Hagop Sarkissian)

Academic philosophers are a fairly homogeneous bunch--mostly male, mostly white. So, when a philosopher proposes a thought experiment and then goes on to make a general claim of the form "in this case, most people would surely say that P" or "in this case, it is clearly the case that P", one might wonder whether the philosopher's intuitions really are so obvious or widely shared. Perhaps only philosophers, or male philosophers, or Western male philosophers, or Western male philosophers who maintain theory x, would find it obvious (or even entertain the idea) "that P".

Indeed, in recent years, experimental philosophers interested in such intuitions about particular cases (and their role in philosophical theories) have discovered that it's not hard to find significant cross-cultural variation. For example, recent studies have shown that Americans, East Asians and Indians may differ considerably in their intuitions concerning key thought experiments in epistemology and philosophy of language.

Some colleagues and I wanted to see whether this phenomenon held true for beliefs concerning free will and moral responsibility. We asked participants in Colombia, Hong Kong, India, and the United States what they thought about the following case. (For a video presentation of these questions, click here.):

**********

Imagine a universe (Universe A) in which everything that happens is completely caused by whatever happened before it. This is true from the very beginning of the universe, so what happened in the beginning of the universe caused what happened next, and so on right up until the present. For example, one day John decided to have French Fries at lunch. Like everything else, this decision was completely caused by what happened before it. So, if everything in this universe was exactly the same up until John made his decision, then it had to happen that John would decide to have French Fries.

Now imagine a universe (Universe B) in which almost everything that happens is completely caused by whatever happened before it. The one exception is human decision making. For example, one day Mary decided to have French Fries at lunch. Since a person’s decision in this universe is not completely caused by what happened before it, even if everything in the universe was exactly the same up until Mary made her decision, it did not have to happen that Mary would decide to have French Fries. She could have decided to have something different.

The key difference, then, is that in Universe A every decision is completely caused by what happened before the decision – given the past, each decision has to happen the way that it does. By contrast, in Universe B, decisions are not completely caused by the past, and each human decision does not have to happen the way that it does.

1. Which of these universes do you think is most like ours? (circle one)

Universe A-----Universe B

2. In Universe A, is it possible for a person to be fully morally responsible for their actions?

YES-----NO

**********

Previous work in cross-cultural psychology has shown that Westerners and non-Westerners differ in the way they think about moral responsibility, individual agency, and even the more fundamental notion of what it means to be a person; naturally, we expected to find some significant differences in the response patterns of these groups. Not so. In all four cultures, the majority of participants responded as indeterminists and incompatibilists! That is, the majority of participants believed our own universe to be indeterministic, and denied that moral responsibility could be compatible with determinism.

How is it, then, that individuals from such different cultural and religious backgrounds, with divergent ways of understanding the world, who have probably never been instructed on the topic of causal determinism, all tend to embrace the same two theses--indeterminism and incompatibilism? Such cross-cultural similarities in beliefs cry out for one of two kinds of explanation. The first would focus on innate endowment, such as some basic capacities concerning causal cognition or theory of mind. The second would focus on shared experience, such as the phenomenology of moral choice or the unpredictability of human action. Either approach seems worthy of exploration. It is also possible—and indeed quite likely—that the similarities here arise from a complex interaction between innate endowment and shared experience. Future research may shed light on the mechanisms involved. For now, it remains a puzzling finding in an area where one might expect some variation. 

Thursday, May 01, 2008

Unsolicited Advice to Students and Their Advisors (by guest blogger Bryan Van Norden)

This blog entry is by Bryan Van Norden, Professor in the Philosophy Department and the Department of Chinese and Japanese at Vassar College.

Thank you to Eric for kindly allowing me to be a guest blogger for the next few weeks. The first topic I would like to write about is the importance of knowing the secondary literature in one's field.

1. The Problem

I recently wrote a Letter to the Editor that was published in the Proceedings and Addresses of the APA. In it, I described my experience when my department was interviewing job candidates. I noted that we met many terrific young philosophers, and ended up hiring someone we are delighted with. However, we also discovered that many job candidates are not familiar with even the most basic secondary literature on their areas of research (including the work of their own supposed advisors). I concluded the letter by reminding my fellow philosophers of the obvious (I hope) fact that professors have an obligation to train their graduate students. I was writing primarily about "mainstream" philosophy, but my experience has been the same in Chinese philosophy.

So my claim is that it is crucial to know the secondary literature and that far too many people get doctorates without knowing it (or even knowing that it is important).

2. Why Is It a Problem?

I don't know how many people would actually come out and say it, but I think there is a common view that it is not important to know the secondary literature. This view has several sources.

Isn't what's really important that we read the PRIMARY texts?

It's crucial that we read the primary texts! But it is not enough to read the primary texts.

Oh yeah? Why not?

Newton famously said, "If I have seen farther than others, it is because I stand on the shoulders of giants." By this he meant that his work would have been impossible without building upon the previous research of people like Euclid, Copernicus, Galileo and Kepler. Where would we be today if Newton had remained ignorant of them? So even in natural science, which people often think of as an enterprise that can "prove" things independently of tradition, it is impossible to achieve progress without building upon previous research.

But I want to think independently! I don't want to just parrot what people in the past have said on this topic.

Good! But you won't be able to do so unless you become self-aware about what assumptions you bring to the text. Descartes set the tone for much of modern philosophy when he said he was going to reject tradition and custom and just think for himself. Almost everyone today proudly rejects the content of Descartes' claims, but it is far too common to implicitly assume that the methodology (confronting reality with one's individual thoughts) is correct. But the methodology is fundamentally flawed as well. Descartes was certainly original in many ways, but (as any serious historian of modern philosophy will tell you) his work is deeply dependent upon its Platonistic, Aristotelian, Augustinian and Scholastic sources. ("I think therefore I am" is a paraphrase of a line from Augustine's Confessions.)

The issue isn't whether we should be original or not. The issue is whether we can be original and insightful while we are ignorant.

Give me an example of what you are talking about.

Okay. I have heard more than one person ingenuously discuss "Mengzi's claim that human nature is originally good." There's just one problem: Mengzi never says that. Mengzi says that human nature is good, simpliciter. The "originally" is a Neo-Confucian gloss. Even people who have read the primary text often assume the Neo-Confucian reading. But this is the sort of issue raised in the secondary literature.

In general, the problem is that you can't be open-minded if you don't know what the alternatives are to your view.

If the secondary literature is so interesting, just tell me what it says.

What would you say to a student who told you, "I didn't do the reading. Just tell me what it said and I'll argue with you about whether what you say is right"?

But don't you think dialogue is important?

Absolutely! But the secondary literature is PART of the dialogue. Besides, if you don't know the secondary literature and I do, how productive will my conversation with you be?

But research is hard work. It's more fun to just chat about my impressions of the text.

Aw, I suspected that was the root of it all! ;)

3. The Solution

I'd like to conclude with a list of what are, in my opinion, the absolutely essential secondary readings for anyone interested in pre-Qin dynasty Chinese philosophy. (One could easily add to this list, but I think it would be hard to say that anything on it is optional for someone who claims to have an AOS in this area.)

Chan, Alan K.L., ed. Mencius: Contexts and Interpretations (U Hawaii Press).

Cook, Scott, ed. Hiding the World in the World: Uneven Discourses on the Zhuangzi (SUNY Press).

Csikszentmihalyi, Mark and PJ Ivanhoe, eds., Religious and Philosophical Aspects of the Laozi (SUNY Press).

Creel, Herrlee. Confucius and the Chinese Way (out of print).

Fingarette, Herbert. Confucius -- the Secular as Sacred (Harper Torchbooks).

Goldin, Paul. After Confucius (U of Hawaii Press).

Graham, A.C. Disputers of the Tao (Open Court).

Graham, A.C. Studies in Chinese Philosophy and Philosophical Literature (SUNY Press).

Hall, David and Roger Ames. Thinking through Confucius. (Or at least one other book in the trilogy they wrote, which includes Thinking from the Han and Anticipating China.)

Hansen, Chad. Language and Logic in Ancient China (U of Michigan Press). (Or his A Daoist Theory of Chinese Thought.)

Harbsmeier, Christoph. Language and Logic. Vol 7, Part 1 of Joseph Needham, ed., Science and Civilisation in China (Cambridge U Press). (Or A.C. Graham's Later Mohist Logic, Ethics and Science. Some of the same material is also covered in Graham's Disputers of the Tao.)

Ivanhoe, Philip J. Confucian Moral Self Cultivation. 2nd ed. (Hackett).

Kjellberg, Paul and PJ Ivanhoe, eds., Essays on Skepticism, Relativism and Ethics in the Zhuangzi (SUNY Press).

Kline, Thornton and PJ Ivanhoe, eds., Virtue, Nature and Agency in the Xunzi (Hackett).

Kohn, Livia and Michael LaFargue, Lao-tzu and the Tao-te-ching (SUNY Press).

Kupperman, Joel. Learning from Asian Philosophy (Oxford).

Liu, Xiusheng and P.J. Ivanhoe, eds., Essays on the Moral Philosophy of Mengzi (Hackett).

Mair, Victor. Experimental Essays on Chuang-tzu (U of Hawaii).

Nivison, David S. The Ways of Confucianism (Open Court).

Schwartz, Benjamin. The World of Thought in Ancient China (Harvard/Belknap).

Shun, Kwong-loi. Mencius and Early Chinese Thought (Stanford U Press).

Tu, Wei-ming. Centrality and Commonality (SUNY Press).

Van Norden, Bryan W., ed. Confucius and the Analects: New Essays (Oxford). (This is the only secondary anthology in English on this topic.)

Wong, David. Natural Moralities (Oxford).

Yearley, Lee H. Mencius and Aquinas: Theories of Virtue and Conceptions of Courage (SUNY Press).

Monday, April 28, 2008

Does Studying Economics Make You Selfish?

There's been a lot of discussion in economics circles about how economics training makes people more selfish -- in particular, by teaching people "rational choice theory", the cartoon version of which portrays rationality as a matter of always acting in one's perceived (economic) self interest (for example, by defecting in prisoner's dilemma games and offering very little in ultimatum games). Accordingly, the economics literature contains a few much-cited studies that seem to show that economics students behave more selfishly than other students.

However, virtually all the experiments cited in support of this view are flawed in one of two ways. Either they test students on basically the same sorts of games discussed in economics classes, or they rely on self-report of selfishness. Relying on econ-class games makes generalizing the results very problematic. It's no surprise that after a semester of being told by your professor that defecting (basically, ratting on your accomplice to get less prison time) would be the rational thing to do in a prisoner's dilemma game, when that same professor or one of his colleagues gives you a pencil-and-paper version of the prisoner's dilemma, you're more likely to say you'd defect than you would otherwise have been (even with small real stakes). What relationship this has to actually screwing over acquaintances is another question.

Likewise, relying on self-report of selfishness is problematic for all the reasons self-report is usually problematic in the domain of morality, and in this case there's an obvious additional confound: People exposed to rational choice theory might feel less embarrassed to confess their selfish behavior (since it is, after all, rational according to the theory), and so might show up as more selfish on self-report measures even if they actually behave the same as everyone else.

I've found so far only three real-world studies of the relationship between economics training and selfishness, and none suggest that economics training increases selfishness.

(1.) Though I find their study too problematic to rely much on, Yezer et al. (1996) found that envelopes containing money were more likely to be forwarded with the money still in them if they were dropped in economics classes than other classes.

(2.) Frey and Meier (2003) found that economics majors at University of Zurich were less likely than other majors to opt to give to student charities when registering for classes, but that effect held starting with the very first semester (before any exposure to rational choice theory), and the ratio of economics majors to non-economics majors donating remained about the same over time (all groups declined a bit as their education proceeded).

(3.) Studying professional economists, Laband and Beil (1999) found that a majority paid the highest level of dues to the American Economic Association (dues are prorated on self-reported income), though they could, without detection or punishment, have reported lower income and so paid less. By comparing the proportion paying dues in each income category against the proportion of the profession actually earning incomes in those categories, they found similar rates of cheating on self-reported income among sociologists and political scientists.

I see these findings as the flip side of what I've been finding with ethicists: Just as ethical training doesn't seem to increase rates of actual moral behavior much, if at all, so also being bathed in rational choice theory (if, indeed, this is what economics students are mostly taught) doesn't seem to induce real-world selfishness.

Wednesday, April 23, 2008

Qi (Ch'i) and Moral Psychology

The term qi (or ch'i) is known to Westerners mainly through martial arts and new age mumbo-jumbo, and is taken to mean something like spiritual and physical vitality, or the mystical medium of that vitality. The word goes back to classical Chinese, where it originally meant "air" or "breath" and then by extension the vital energy connected with breath, working its way into early Chinese medicine and theories of the body.

Robin Wang has recently been exploring the connections between qi and moral psychology in early Chinese thought. Among other things, she notes the importance of caring for the body in the tradition -- both the moral obligation to do it but also, perhaps, the asserted connection between moral goodness and physical health. (One index of the importance of that connection in the ancient Chinese tradition is the frequency with which it is mocked by Zhuangzi.)

Confucius (if the passage is authentic) notes the connection between morality and qi thus:

"There are three things the gentleman should guard against. In youth when the blood and qi are still unsettled he should guard against the attraction of feminine beauty. In the prime of life when the blood and qi have become unyielding, he should guard against bellicosity. In old age when the blood and qi have declined, he should guard against acquisitiveness" (16.7, Lau trans.).

Thus, disorders of qi are associated with a propensity toward certain moral failings. Mencius even more famously associates a "flood-like qi" with moral rightness (2A2).

There is something attractive in this view. It's uncontroversial that peace of mind is good for your health, and it's very plausible that health is generally good for your peace of mind. It may be hard to muster up the energy or will to do what's right if one is feeling substantially less than vigorous, and certain types of physical shortcomings may lead us more easily to certain sorts of moral temptations.

Yet at the same time, a concept that intimately connects physical health and moral health strikes me as highly noxious. Is the highest degree of moral propriety impossible from a wheelchair, or from one's deathbed, or from someone with chronic fatigue? It seems to me that physical disability often gives as much to us morally as it takes away (e.g., by increasing our sympathy with others or by broadening our perspective).

What the connection is between physical health and moral behavior is an empirical question, of course, but one that would be very difficult to study well. I've heard of no studies. (And any one study, anyway, would have to be highly limited.) In the meantime, I say down with the concept of qi!

P.S.: For those readers interested in Chinese philosophy who haven't noticed it yet, I recommend Manyul Im's Chinese Philosophy Blog.

Friday, April 18, 2008

"Mama" as an Early Expression of Need?

When we first met Kate, our adoptive Chinese daughter, at 13 months old, she was already babbling, saying "mama(-m)" and "baba(-b)". Surprisingly to me, she seemed to use them differentially -- "mamama" when she was upset or needing something, "bababa" in a playful mood. I had noticed the same thing in my son when he was about 10 months old, although for him the second sort of babbling sounded like "dadada". At the time, I figured Davy's differentiated babbling was due to our parental reinforcements and interpretations -- that when he said "mama" we were more likely to hand him to his mom, and when he said "dada" we were more likely to hand him to his dad, and the like. What was striking to me was that, as an orphan, Kate had no mom or dad. (It's not even clear that she had ever seen a man before.)

Since at least the 1960s, linguists have known that a wide range (maybe even a majority) of languages of very different origins use something like the "mama" (or "nana") sound for mother and something like the papa/baba/tata/dada sounds (all related phonologically) for father. For example, in Chinese "mama" and "baba" are baby-ese for mother and father, in Hebrew it's "ima" and "abba", and of course there's Spanish, Italian, etc.

The standard first remark about this is that these words are very easy for babies to say. I wonder if in addition to this, "mamama" goes more naturally with need or distress while the other goes more naturally with fun and exploration -- for example if the "m" phoneme is easier to make with the facial expression of distress (it seems to me that m's are easier with a tight, distressed face, p's and d's easier with an open, relaxed face, but maybe that's just me). That would explain in a way other theories would not why even an orphan would show the sort of differentiation Kate does between those two sounds. Given the different roles mothers and fathers typically play, it's then easy to see why one babble would come to be associated with the mother and the other with the dad.

(Probably some linguist has suggested this before, but for obvious reasons I haven't time right now to do serious reference chasing. Even without the reference chasing, though, I feel comfortable doubting whether anyone has thought to study 12-month-old orphans specifically in this connection.)

Friday, April 11, 2008

Tucson Presentation

On the Underblog I've posted the text of the talk I gave this morning for the Toward a Science of Consciousness conference. Given that the title of the talk predicts the possible demise of consciousness studies, I thought it might be wise to check that the back door was unlocked in case I needed to make a quick escape! (Actually, everyone was very nice.)

The talk basically combined my reflections on the richness or thinness of experience from my 2007 article on the topic, "Do You Have Constant Tactile Experience of Your Feet in Your Shoes?" (advocates of a rich view say yes [and that we also constantly auditorially experience the hum of traffic in the background, etc.], while advocates of a thin view say no), with the pessimistic argument in a recent post that the question is unresolvable -- and, worse, that it's likely impossible to establish a general theory of consciousness without first settling the rich-vs-thin question. Grand conclusion: A general theory of consciousness may be beyond human reach.

The one new thing was this: Ned Block and Michael Tye have recently argued that experience outruns attention (and thus that the thin view can't be right) because one visually experiences at least the gist of the parts of a visual display (for example on a psychologist's computer) to which one doesn't focally attend -- for example in "change blindness" demonstrations. Or to put it another way: Your visual attention right now might be on a few of the words in this sentence, but surely you also visually experience more of what is before you than just those few words, so (they say) this shows that attention is not required for experience.

The problem with that argument, I think, is that it ignores the (quite reasonable) possibility that attention comes in degrees and can spread beyond just a narrow point. In some sense, one is attending to the whole computer display even if the finest, most focused point of attention is just a few words at a time. What remains open -- and what I'm pessimistic about resolving -- is whether you also simultaneously visually experience the picture on the wall behind the computer and the pressure of the chair against your back. (Of course you experience them now, but did you experience them ten minutes ago when you weren't thinking about it? I think neither objective nor subjective methods to address this question can yield a trustworthy answer.)

Anyway, I enjoyed having the chance to pontificate about this for a while for the people at this conference! With the new book and the plenary talk, people are starting to treat me almost like I'm an established philosopher. (Maybe it helps, too, that I'm going a bit gray at the temples.) I think it would be a good thing not to get too used to that.

Wednesday, April 09, 2008

Experiential Blanks

I'm on the road, at the biennial consciousness conference in Tucson. Yesterday, Russ Hurlburt and I led a workshop on the use of beepers to explore the stream of experience (the topic of our recent book). As part of the workshop, we "beeped" the audience -- a few random beeps sounded through speakers, interrupting our PowerPoint presentation, and each audience member was to reflect as best she could on her "last undisturbed moment of inner experience" just before each beep. For each beep, we selected a random audience member to describe her experience, and we interviewed her about it, we argued with each other about it, and other members of the audience pitched in, too. Great fun, I thought.

I was especially struck by one of the audience member's reports. He said that, as best as he could tell, he had no inner experience whatsoever, no consciousness, no phenomenology, at the moment just before the beep. He did have an experience a few moments before the beep -- one of feeling his nametag pressing against his chin (he was absent-mindedly playing with it) -- but at the moment of the beep itself, nothing. Russ was talking, but he didn't have any auditory experience. If I understand him correctly, he felt he had no sensory experience of any sort, nor any emotional experience, nor any imagery, nor any conscious thoughts, no experience at all.

People do sometimes say this when they are sampled with beepers, Russ has found in his decades of study. I've also heard a couple of reports of this sort among the forty or so people I've interviewed using Russ's beeper methodology -- but in both of my own cases, the subjects had fallen asleep and the beep had woken them.

Now I confess that I incline toward a rich view of experience -- according to which we generally have constant visual experience, constant auditory experience, constant tactile experience of our feet in our shoes (though peripherally and faintly, of course!), and much else going on with us experientially at any one time. I'm not at all sure that this view is right, but to think that we have waking moments with no experience whatsoever...!

This is one of the cool things about beeper interviews: People say things you'd never expect them to say, they describe their experience in ways you (and even they) might have thought impossible, with all sincerity. It jars me from my complacency.

Wednesday, April 02, 2008

First Draft to Publication, Fifteen Years

I had hoped to be back to regular posting by now, a week and a half after my return from China, but things are still pretty chaotic!

I posted about a year ago on the Two Envelope Paradox, and my paper with Josh Dever on the topic has finally been published (as of Monday, in Sorites).

In 1993, when Josh and I were both graduate students in Berkeley, he introduced me to the paradox, which is very simple to formulate:

You are presented with the choice between two envelopes, Envelope A and Envelope B. You know that one envelope has half as much money as the other, but you don't know which has more. Arbitrarily, you choose Envelope A. Then you think to yourself, should I switch to Envelope B instead? There's a 50-50 chance it has twice as much money and a 50-50 chance it has half as much money. And since double or nothing is a fair bet, double or half should be more than fair! Using the tools of formal decision theory, you might call "X" the amount of money in Envelope A and then calculate the expectation of switching as (.5)(0.5X) + (.5)(2X) = (5/4)X. So you switch. (But of course that's absurd.)
For some reason, the problem completely took hold of me. I found myself waking in the middle of the night and writing equations. Josh and I bothered just about every graduate student at Berkeley and about half the faculty with the problem. It seemed to me, to us, that the core problem was in the use of a variable with different expectations in different terms of the expected value equation (in the first term, where Envelope A has more, the expectation of X, the value in Envelope A, is higher than it is in the second term, which represents the possibility that Envelope A has less). Just about everyone we spoke to was eventually won over by our reasoning on this, and I presented a paper on it at a graduate student conference later that year.
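The diagnosis above can be checked with a quick simulation. This sketch is my own illustration, not anything from the paper: the function name and the choice of y = 10 for the smaller amount are arbitrary. Fix the two amounts in advance, randomize which one lands in Envelope A, and compare the average payoff of sticking versus switching:

```python
import random

def simulate(trials=100_000, seed=1):
    """Average payoff of sticking with Envelope A vs. switching to B,
    when one envelope holds y and the other holds 2y."""
    rng = random.Random(seed)
    y = 10.0  # illustrative stakes: one envelope has 10, the other 20
    stick = switch = 0.0
    for _ in range(trials):
        if rng.random() < 0.5:   # coin flip decides which envelope is A
            a, b = y, 2 * y
        else:
            a, b = 2 * y, y
        stick += a               # payoff of keeping Envelope A
        switch += b              # payoff of switching to Envelope B
    return stick / trials, switch / trials

stick_avg, switch_avg = simulate()
# Both averages converge to 1.5 * y; switching gains nothing.
```

Both strategies average the same amount, as they must. The naive calculation goes wrong in just the way described above: "X" carries a different expectation in the two terms of the equation, since conditional on Envelope A being the smaller envelope its expected value is y, and conditional on its being the larger, 2y.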

For a while, I flirted with the idea of writing my dissertation on decision theory, but when I decided to work on connections between philosophy and developmental psychology instead, it seemed the practical decision to set the essay aside. (Berkeley had at the time, and maybe still has, a culture of discouraging graduate students from attempting to publish essays based on anything other than a virtually completed dissertation.)

A couple years later, one of our professors, Charles Chihara, published a paper on the problem (in which he generously thanks me) with a solution similar to ours but also in some important ways different -- and not, it seemed to me, very mathematically precise. Other approaches to the problem came out through the mid- and late 1990s, when it was briefly trendy, but all of them seemed to me to miss the point.

In 2002, I had a long conversation about the problem with Terry Horgan, who had published a couple of papers on it, and I felt myself almost convincing him that my solution was better than his own. (He might not agree with this description of our conversation!) He advised that I seek publication again, so I teamed up with Josh and wrote a new version of the essay.

In 2003, we submitted to Mind, which had published other essays on the problem, but which we thought was a longshot. The referee report came back saying that our solution, though correct, was too technical -- although we felt our paper was less technical (and maybe, too, of broader general interest, though that's harder to judge) than the other papers Mind had published on the topic. We received the same reply -- with even less justification, we think -- from Analysis, which has published more essays on the topic than any other journal. We then sent the essay to Theory and Decision, whose referee gave us the first substantive criticism we had received (a helpful simplification of our proof) but who recommended rejecting our solution as "obvious" -- despite the fact that in ten years no other essay on the topic had offered it! We considered Synthese, but Springer is such a noxious and expensive publisher that we decided to send it to an open-access journal instead. We chose the Australian Journal of Logic, which, when we received no reply to several queries sent through various media over the course of a year, we decided had folded. (Though now I see they have a 2007 issue. Hm!) So we withdrew the paper from there to send it to another open-access online journal, Sorites, which we also started to worry about when we got no replies over the course of six months. Finally, when we were about to withdraw the article, we received an apologetic email from the editor. So now, after fifteen years, five journals, and only one minor substantive criticism, it's finally in print.

Anyone out there with a more convoluted publication story?

Sunday, March 30, 2008

Dongguan Orphanage, Guangdong Province, China

It's not philosophy or psychology, but I thought I'd post a few reflections on the orphanage (technically, "Social Welfare Institute") from which we just adopted our 14-month-old daughter, Kate -- partly for general interest and partly because accurate information about Chinese orphanages is hard to come by.

Let me start with a few pictures. From the outside:



Pretty nice looking -- especially compared to the (by American standards) rundown environment of Dongguan and Guangzhou:


Adoptive parents were led to an elegant reception room:


Note the beveled glass tables, elegant walls, leather couches, gleaming hardwood floor. (Those are not things I can afford in my house or office.) I've blocked out the face of our guide for his privacy; the woman standing in the center is the orphanage director.

The orphanage kindly allowed us to see the infants' area, but forbade us to take pictures, so a verbal description will have to do: Two rooms of dingy old rolling steel cribs, each with a baby lying on her back, completely silent. A washroom for dishes, laundry, and babies -- as far as I could tell, all in the same sink. A "playroom" empty but for some blue pads on the floor and a single plastic chair.

How eerie it was to step into an utterly silent room containing 25 babies! I can only think that they learned that crying was useless. All the babies were malnourished. Kate is below the 3rd percentile in weight (even on Chinese charts) and head circumference, with calcium deficiency that shows itself (as is evidently typical in Chinese orphans) in splayed ribs at the bottom of the ribcage -- splayed, I'm guessing, because the top part, where the ribs join in the sternum, is stunted and narrow? For malnourished infants, it may be especially adaptive not to waste energy in useless crying.

I kept wondering how much a little more calcium would cost compared to the statue out front (visible in the first picture) and the elegant grounds and couches. Though we liked the nannies who worked there, especially the one who seemed to have had primary responsibility for Kate -- Kate smiled when she saw her -- I couldn't resist the thought that appearance to the outside world was a higher priority to those running the orphanage than the health of the babies. My wife says I'm being uncharitable: Maybe the elegant grounds and waiting rooms are necessary to attract funding from those who can give it. Universities spend big to wine and dine potential donors. Likely, famine relief organizations get more money if they spend some of their proceeds on indulgences for their contributors (not that donors explicitly want that -- presumably if you asked them they'd advocate sending as much as possible directly to the beneficiaries). If so, there's a morally and psychologically interesting paradox of charity.

Well, it seems I can't avoid thinking about philosophy and psychology after all, even when I try!

Thursday, March 27, 2008

The Problem of De in the Analects: Hard and Easy (pt. 2) (by Guest Blogger Hagop Sarkissian)

A noble knight is about to leave on a mission to an inhospitable, barbarian place. Some are skeptical that his mission will be successful. After all, he must deal with petty, uncouth individuals. The knight, however, is not troubled; he has a nifty trick. When he is among petty people or barbarians, their behavior instantly changes. They are literally transformed in his presence, bending to him as sure as grass bends to wind.

The above passage could be about either a) a Confucian nobleman or b) a jedi knight. That, to me, is the hard problem of de in the Analects. Consider the following passages, which describe the nobleman's de--a kind of power or force that accrues to morally advanced individuals and has telltale effects on others:

9.14 – The master expressed a desire to go and live among the Nine Yi Barbarian tribes. Someone asked him, “How could you bear with their uncouthness?” The Master replied, “If a nobleman were among them, what uncouthness would there be?”

12.19 –The nobleman's de is wind; petty de is grass. When the wind blows, grass bends.

In the Star Wars movies, a jedi knight can--much like the nobleman above--get other, weak-minded sentient beings to bend or yield to him. This feat, dubbed either "force persuasion" or the "jedi mind trick", is accomplished through ritual gesture and verbal incantation--again, similar to the magical effects associated with the performance of rituals (such as wedding rites or ceremonial forms of greeting) in classical Confucianism.

Now, these effects might seem less impressive than those associated with political de (discussed in a previous post)--where the ruler just sits in a ceremonial position and the whole empire is ordered. Then again, the nobleman lacks many of the perks associated with rulership: a) he's not recognized as the Son of Heaven, b) he lacks all the ceremonial regalia that comes with being the Son of Heaven, c) he is considerably lower on the socio-political ladder, and d) he has a pedantic day job preserving traditional rites and ceremonies. Given all this, the passages above seem incredible indeed.

How can we understand this 'force' of the nobleman? What kind of person are the 'little people' and 'barbarians' yielding to? Well, de is frequently linked to practices of self-cultivation (xiu 修 – e.g. 7.3, 12.10, 12.21, 16.1). Perhaps the key to understanding the power of de lies in these practices and the kind of nobleman they were meant to produce. Here is what we find:

The ideal nobleman says the right things at the right times. He's concerned about being a good person and works hard at it. He dresses well (clean and sharp, not flashy). He seems genuine, and has a natural ease about him. He's a good son to his father and a good father to his son, takes care of those close to him and helps others when appropriate. He's got a knack for defusing disagreements (cleverly alluding to classical poetry and folk songs to convey subtle, delicate points). And if you need advice on wedding gifts or funeral attire, he's a godsend.

There are people I've met who've had many of these qualities, and some of them have a knack for getting along with people. So maybe I can understand how a cultivated nobleman, already enjoying a certain standing in the social hierarchy, can have an attractive, disarming charisma about him and command respect in the community. This much might explain, for example, the bending of the petty people in 12.19.

But the transformation of the foreign barbarians? That's really hard to buy. How is it that they magically behave themselves in the presence of a Confucian knight, but are otherwise 'uncouth'? I don't see how this works.

Tuesday, March 25, 2008

Welcome, Kate!

We're back from China with a new family member, Katherine Jieying ("Kate")! When we've settled in a little better, I'll post a few reflections about her orphanage. In the meantime, a few pictures.

The day we picked her up:


And in a Buddhist temple:



She's a sweet-tempered girl who likes congee (Chinese rice soup with chicken), playing in the bathtub (with or without water), and cruising around the house and yard holding onto fingers or furniture. The photos don't sufficiently capture her heart-melting smile when she's at full beam.

Tuesday, March 18, 2008

Objectivism, Relativism, and Squatter's Rights in Metaethics (by Guest Blogger Hagop Sarkissian)

Moral objectivism is the view that moral truth (or justification) is independent of tradition, custom, or social acceptance. Put another way, it's the view that there is an objective fact of the matter whether any given action is morally right or wrong, permissible or impermissible. Moral objectivism is often contrasted with moral relativism, the view that moral truth (or justification) is relative to cultures or other such groups.

To me, moral relativism is just obviously true (as I have argued previously in comments on this blog). But many philosophers have argued that moral relativism does not jibe with ordinary moral discourse. Ordinary moral discourse, they claim, assumes moral objectivism. Philosophers making this claim range from moral realists on the one hand to moral fictionalists (or 'error theorists') on the other.

Consider J.L. Mackie, who argued that ordinary moral claims purport to describe facts about mind-independent, objective moral values. Mackie denies that such values exist. "Although most people in making moral judgments implicitly claim, among other things, to be pointing to something objectively prescriptive, these claims are all false". Mackie thinks this 'error theory' "goes against assumptions ingrained in our thought and built into some of the ways in which language is used"; and "since it conflicts with what is sometimes called common sense, it needs very solid support. It is not something we can accept lightly or casually and then lightly pass on" (35).

Does common sense morality assume objectivity? According to a recent study by Goodwin and Darley, most folk actually don't believe that their moral judgments are objectively true, except for cases involving stock examples such as gunfire, robbery, and cheating. On many other moral issues, most believe their judgments to be opinion and not fact. Goodwin and Darley sum up one of their experiments as follows:

"Participants generally agreed (on a six-point scale) with the goodness of anonymous donations (5.42), the badness of opening gunfire on a crowd (5.79), or of robbing a bank (5.77), and the wrongness of conscious racial discrimination (5.86) or of cheating on a lifeguard exam (5.72). But they varied considerably in how likely they were to regard these statements as true: 36%, 68%, 61%, 54%, and 58%, respectively. Perhaps more strikingly, although participants generally agreed (albeit not as strongly) with the permissibility of abortion (4.12), assisted death (4.36), and stem cell research (4.58) in the way we described them, they were highly reluctant to assign truth to statements expressing this agreement: 2%, 8%, and 2%, respectively."

If these findings are replicated, they suggest that, contrary to what many moral realists and fictionalists claim, people believe that their moral claims are objectively true only in a narrow range of cases that enjoy widespread agreement, the rest of them being no more 'objectively true' than matters such as one's taste in music (4%), film (9%) or art (4%).

This brings me to "squatter's rights". In a recent paper, Eddy Nahmias and colleagues argue that a philosophical theory x has 'squatter's rights' compared to a competing theory y if, all else being equal, theory x accords with our common sense intuitions while theory y doesn't. If objectivism does not turn out to be the common sense view, I find it hard to see how relativism doesn't have squatter's rights in metaethics. It's consistent with common sense morality, does as good a job as any other theory in explaining moral agreement and disagreement, and seems best able to account for the variety of moral traditions existing in the world.

Sunday, March 16, 2008

The Problem of De in the Analects: Hard and Easy (pt. 1) (by Guest Blogger Hagop Sarkissian)

There is a concept in the Analects of Confucius that is of patent importance to his teachings but remains obscure. This is the concept of de 德, which refers to the ability of a person to command awe and attention, to have others comply with his wishes without resorting to coercion.

Some passages describing de are frankly startling, and coming to some plausible explanation of them has proven problematic. But I think it may be helpful to distinguish between 'the hard problem of de' and 'the easy problem of de' (obviously following from Chalmers's example concerning consciousness). The easy problem of de is ruler or political de. The hard problem is the nobleman's de. Today, I want to deal with the former, leaving the latter for a subsequent post.

Consider, for example, the following passages describing ruler de:

2.1--The master said, "One who rules by de is comparable to the Pole Star, which remains in its place and receives the homage of the myriad lesser stars."

2.3--The master said, "Guide them with governance, regulate them with punishments, and the people will evade these with no sense of shame. Guide them with de, regulate them through ceremonial propriety, and the people will have a sense of shame and be orderly."

8.18--The master said, "Majestic! Shun and Yu possessed the entire world without managing it."

15.5--The master said, "Someone who ruled without acting (wu-wei 無為)--was this not Shun? What did he do? He made himself reverent and took his proper position facing south, that is all."

These and related passages (e.g. 13.6) describe the de of a ruler (or sage king). On the one hand, they seem pretty impressive, maybe even quixotic or fantastical. (Could Shun really rule by simply sitting on his throne and facing south?) Indeed, some have thought these passages rife with belief in magical powers. Donald Munro called this the 'mana thesis'. On this view, the king must possess some inner / spiritual / psychic power or energy that emanates outward and magically transforms, orders and harmonizes the kingdom. Such an interpretation is understandable because it does seem hard to explain what's going on in these passages. But in the end, I think the problem of 'ruler de' is actually the easy problem.

The reason is simple. Very early on, there were commentators who explicated the ability to rule 'effortlessly' through de as resulting from much prior effort, such as the ruler's a) effectively discharging his roles of setting policy and appointing capable officials, and b) benefitting his subjects (thereby gaining their loyalty, love, and reciprocity).

One of the ruler's most important functions (emphasized by Confucians, Mohists and Legalists alike) was to attract capable individuals to fill administrative and bureaucratic positions to properly manage the kingdom's affairs. The ruler's personal virtue would be a key factor in attracting such individuals and commanding their loyalty. The operations of this larger bureaucracy explain how the ruler could rule 'effortlessly'--by just sitting on the throne (as it were). (Even the incorruptible, wholly sagacious Shun needed the help of ministers to rule--8.20.) Moreover, with the help of capable bureaucrats and officials, the ruler would be able to meet the needs of his subjects, thereby gaining their loyalty as well.

So there's no real mystery here. An efficient bureaucracy, a loyal and loving population, and a broader political philosophy emphasizing deference and loyalty to those above in the hierarchical chain all seem to explain ruler de rather easily. Indeed, any account of ruler de seems incomplete without these considerations. Am I missing something?

Monday, March 10, 2008

Situationism and the Self-Centeredness of Virtue Ethics (by Guest Blogger Hagop Sarkissian)

As noted in previous posts on this blog, many philosophers of late have been concerned with the implications of situationist social psychology for moral philosophy. Situationism is... well, I'll just use Eric's description from a previous post:

"Recent social psychology has shown that the factors governing human behavior are largely situational rather than characterological. If Robin behaves generously and Sanjay behaves greedily in some particular case, that's more likely to be due to differences in their situation than to differences in their personality."

Think of the Milgram Experiments, the Asch conformity studies, the Princeton Seminary study, etc.

Most philosophers have been concerned with whether situationism discredits virtue ethics, a recently popular ethical theory which underscores the importance of character traits to structure and guide one's conduct and lead to a flourishing moral life. Situationism, by contrast, claims that character traits are rather inefficacious when compared to the influence of external, situational variables.

I don't really have a horse in this race--i.e. whether Aristotelian virtue ethics rests on an untenable psychology, or whether character traits of a robust sort really *exist* or not. To me, the situationist literature is of concern beyond its implications for such philosophical theories. It seems genuinely troubling that one's own behavior could be shaped so decisively by situational factors, that whether one is virtuous or vicious can hinge on minor perturbations in one's environment.

So, what to do in the face of situationism? Here, we have some practical advice. Many philosophers have endorsed what I call a seek/avoid strategy. These philosophers recognize that situational influence is pervasive and weighty. However, they argue that it remains possible, when one is not caught up in novel or unusual situations, to selectively choose the general types of situations one wants to encounter and structure one's life accordingly. Individuals should seek situations that strengthen or support virtuous behavior, and avoid situations that tend toward vice or moral failure. In choosing situations, one chooses to embrace the behavioral tendencies they elicit.

That seems like sage advice to me. But I find it extremely one-sided. Here's what I mean. The seek/avoid strategy is animated by the thought that our behavior is tightly keyed to our situations--oftentimes, to the behavior of others in our situations. It therefore emphasizes one path of influence: from situations to persons. But if other people in our situations can subtly affect our own behavior, then it seems as though we must return the favor. In other words, from our own (actor's) perspective, situational influence can be responsible for our own behavior, but from another (observer's) perspective, our own behavior constitutes part of the situational influence. So, just as we should mind how others are partly responsible for our own behavior (as those motivating the seek/avoid strategy claim), shouldn't we, too, be mindful of how we are partly responsible for the behavior of others?

To me, the real lesson of situationism lies in how it shows, in striking fashion, that no person is an island--that all our behavior is heavily interconnected, and that what I do really affects what you do, and vice-versa.

You know, I've heard it said that virtue ethics--insofar as it aims for flourishing, eudemonia, or individual happiness--is a selfish or self-centered ethical theory, concerned primarily with one's own person and one's own life prospects. I can't help but think that such a self-centered attitude is pervasive in the existing responses to situationism.

Sunday, March 09, 2008

In China

We'll be meeting our new adoptive daughter in about seven hours. But of course I wanted to check my blog first! I can't seem to defeat the Chinese blocks, but since Blogger is open, I can still add posts (as you see). Free speech, censored hearing.

Friday, March 07, 2008

Is It Just Very Different in Iran?

When I was a graduate student at Berkeley, an undergraduate born in Iran told me that the reason he wanted to someday earn a Ph.D. in philosophy was this: Professors are the most respected people in society, and philosophy professors are the most respected of all professors. (A Google search of his name now doesn't seem to return any philosophers.)

Just some food for thought as I prepare to board a big bird to China....