Saturday, February 26, 2011

Does Thinking about Applied Ethics Generate Mainly Remorse Rather Than Behavior Change?

That's what Randy Cohen seems to be saying in his farewell column on ethics for the New York Times Magazine:

Writing the column has not made me even slightly more virtuous. And I didn’t have to be: it was in my contract. O.K., it wasn’t. But it should have been. I wasn’t hired to personify virtue, to be a role model for the kids, but to write about virtue in a way readers might find engaging. Consider sports writers: not 2 in 20 can hit the curveball, and why should they? They’re meant to report on athletes, not be athletes. And that’s the self-serving rationalization I’d have clung to had the cops hauled me off in handcuffs.

What spending my workday thinking about ethics did do was make me acutely conscious of my own transgressions, of the times I fell short. It is deeply demoralizing. I presume it qualifies me for some sort of workers’ comp. This was a particular hazard of my job, but it is also something every adult endures — every self-aware adult — as was noted by my great hero, Samuel Johnson, the person I most quoted in the column: “He that in the latter part of his life too strictly inquires what he has done, can very seldom receive from his own heart such an account as will give him satisfaction.” To grow old is to grow remorseful, both on and off duty.
Consider Cohen's sports-writer analogy. Often when I present my work on the moral behavior of ethicists, people (not a majority, but maybe a majority of ethicists) will respond by tossing out half-baked analogies: Should we expect basketball coaches to be better at basketball? Epistemologists to have more knowledge? Sociology professors to be popular with their peers? The thought behind such analogies appears to be: obviously not -- and so, by analogy, we shouldn't expect anything of ethics professors either.

First: Is it so obviously no? I wouldn't expect sports writers to out-hit professional baseball players, but that's not the issue. The issue is whether writing about sports has *any* relationship to one's sports skills -- and it's not obvious that it wouldn't: We learn sports skills in part by watching; thinking about strategy is not entirely useless; and sports writers might have and sustain an interest in sports that has positive consequences for behavior. The relevant question isn't whether we should expect sports writers to be baseball stars, but whether we should expect them to be a little bit better at sports, on average, than non-sports writers. Analogously, the question I have posed about ethicists, and that Cohen seems to be posing to himself, is not whether we should expect ethicists to be saints -- the major-league sluggers of morality -- but whether we should expect them to be a little morally better behaved than others of similar social background, or (not entirely equivalently, of course) than they would have been had they not studied ethics.

Understood properly, this question is, I think, neither obvious nor trivial. And Cohen notes that his use of the sports-writer analogy is a rationalization -- suggesting, perhaps, that he thinks it might have been reasonable to expect some change for the better in him as a result of his reflections, a change that failed to materialize.

Cohen says his reflections have mainly left him feeling bad about himself. His tone here seems to me to be oddly defeatist. Johnson, in the passage Cohen cites, is expressing the inevitability of remorse about the past, which (being past) is unchangeable. But ethical reflection happens midstream. It is as though Cohen is saying that the only effect of reflecting ethically and discovering that one has done some bad thing is to feel bad about oneself, that it's not realistic to expect actual changes in one's behavior as a result of such reflections. Now, while it might not be realistic to expect Cohen-style applied ethical reflection in ordinary life to transform one overnight into a saint, perhaps we might hope that it would at least nudge one a little bit toward avoiding, in the future, the sort of behavior about which one now feels remorseful. To abandon such hope, to think that such reflection is necessarily behaviorally ineffectual -- isn't that quite a dark view of the relationship between moral reflection and moral behavior?

Wednesday, February 16, 2011

The Wason Selection Task and the Limits of Human Philosophical Cognition

The famous Wason Selection Task runs like this: You are to imagine four cards, each with a number on one side and a letter on the other side. Of two cards, you see only the letter side and you don't know what's on the number side. Of the other two cards, you see only the number side and don't know what's on the letter side. Imagine the four cards laid out as follows:

[Image: four cards laid out in a row -- K, a second letter, 2, and 7 (from http://www.psypress.com/groome/figures/)]

Here's the question: What card or cards do you need to turn over to test the rule "If there is a K on one side, there is a 2 on the other" -- that is, to see if it is violated? The large majority of undergraduates (and of almost every group tested) gets this question wrong. One common answer: the K and the 2. The correct answer is the K and the 7: a K card with something other than a 2 on the back would violate the rule, and so would a 7 card with a K on the back; but the 2 card can't violate it, since the rule doesn't say that only K cards have 2s on their backs.

That K and 7 is the correct answer is much more intuitively evident when we put the task in a context of social cognition or cheating detection rather than in an abstract context. Imagine that instead of numbers and letters the cards featured beverages and ages and the rule in question was "If there is an alcoholic beverage on one side, there is an age of 21 or greater on the other". Then the cards would look like this: Sprite, gin, 32 years old, 17 years old. The large majority of people will correctly say that to check for violations of the rule you need to turn over the gin card and the 17-year-old card. But that's exactly the same logical structure as the much more difficult letter-number task.
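To see how mechanical the task really is, here is a minimal sketch (my illustration, not from the original post; the second letter card is assumed to be an "A") that brute-forces which cards must be turned:

```python
# A card must be turned over iff some possible hidden face would make it
# violate the rule "if P on one side, then Q on the other".

def must_turn(visible, possible_hidden, is_p, is_q):
    def violates(one_side, other_side):
        # A card violates the rule iff it has P on one side and not-Q on the other.
        return is_p(one_side) and not is_q(other_side)
    return any(violates(visible, h) or violates(h, visible)
               for h in possible_hidden)

letters, numbers = ["K", "A"], ["2", "7"]
is_p = lambda face: face == "K"   # antecedent: there is a K
is_q = lambda face: face == "2"   # consequent: there is a 2

for card in letters:
    print(card, must_turn(card, numbers, is_p, is_q))
for card in numbers:
    print(card, must_turn(card, letters, is_p, is_q))
# -> K True, A False, 2 False, 7 True: only the K and the 7 can falsify the rule.
# The drinking-age version is the very same check with is_p = "is alcoholic"
# and is_q = "is 21 or over".
```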

Psychologists have discussed at length the cognitive mechanisms involved and what makes some versions of the task easier than others, but the lesson I want to draw is metaphilosophical: We humans stink at abstract thought. Logically speaking, the Wason selection task is incredibly simple. But still it's hard, even for well-educated people -- and even logic teachers familiar with the task need to stop and think a minute, slowing down to get it right; they don't find it intuitive. Compare this to how intuitively we negotiate the immense complexities of language and visual perception.

Nor is it just the Wason selection task we are horribly bad at, but all kinds of abstract reasoning. I used to write questions for the Law School Admissions Test. One formula I had for a difficult question was simple: several options, each expressed in ordinary language with a conditional and a negation, and no intuitive support. The poor victim of one of my questions would first read a paragraph about the economics of Peru (for example). Then I would ask which of the following must be true if everything in the paragraph is true: (a.) Unless crop subsidies are increased, the exchange rate will decline, (b.) The exchange rate will rise only if crop subsidies are not decreased, (c.) If crop subsidies are decreased, the exchange rate will not rise, etc. My own mind would melt trying to figure out which of the options is correct (or whether, even, some might be equivalent), and all we're talking about is a very simple conditional and negation! So yes, you have me to blame for your less-than-perfect LSAT score if you took the exam in the late 1990s. Sorry!
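In fact, the equivalence worry is warranted for the sample options above: (b) and (c) are contrapositives of each other. A minimal truth-table check (my own sketch, not part of the original question) confirms it:

```python
# D = "crop subsidies are decreased", R = "the exchange rate will rise"
# (hypothetical propositions from the sample question above).
from itertools import product

def implies(p, q):
    return (not p) or q

# (b) "The exchange rate will rise only if crop subsidies are not decreased"
opt_b = lambda D, R: implies(R, not D)
# (c) "If crop subsidies are decreased, the exchange rate will not rise"
opt_c = lambda D, R: implies(D, not R)

print(all(opt_b(D, R) == opt_c(D, R)
          for D, R in product([False, True], repeat=2)))
# -> True: (b) and (c) are logically equivalent -- exactly the kind of fact
# a test-taker has to see, unaided, under time pressure.
```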

Okay, now consider... Hegel. Or even something as structurally simple as the Sleeping Beauty Problem.

Most of our philosophical ambitions are far beyond ordinary human cognitive capacity. Our philosophical opinions are thus much more likely to be driven by sociological and psychological factors that have little to do with the real merit of the arguments for and against. This partly explains why we make so little philosophical progress over the centuries. If we are succeeded by a species (biological or robotic) with greater cognitive capacities, a species that finds the Wason selection task and abstractly combining conditionals and negations to be intuitively simple, they will laugh at our struggles with Sleeping Beauty, with Kant's Transcendental Deduction, with paradoxes of self-reference, etc., in the same way that we, once we are fully briefed and armed on the Wason Selection Task, are tempted to (but shouldn't) laugh at all the undergraduates who fail it.

Friday, February 11, 2011

My New Book, Perplexities of Consciousness

... is now out, here at Amazon and here at Barnes & Noble.

Tuesday, February 08, 2011

German and English Philosophers in 1914: "World War Is a Wonderful Idea!"

I was struck by the following passage, reading Decline of the German Mandarins (Fritz Ringer, 1969):

Early in August of 1914, the war finally came. One imagines that at least a few educated Germans had private moments of horror at the slaughter which was about to commence. In public, however, German academics of all political persuasions spoke almost exclusively of their optimism and enthusiasm. Indeed, they greeted the war with a sense of relief. Party differences and class antagonisms seemed to evaporate at the call of national duty.... intellectuals rejoiced at the apparent rebirth of "idealism" in Germany. They celebrated the death of politics, the triumph of ultimate, apolitical objectives over short-range interests, and the resurgence of those moral and irrational sources of social cohesion that had been threatened by the "materialistic" calculation of Wilhelmian modernity.

On August 2, the day after the German mobilization order, the modernist Ernst Troeltsch spoke at a public rally. Early in his address, he hinted that "criminal elements" might try to attack property and order, now that the army had been moved from the German cities to the front. This is the only overt reference to fear of social disturbance that I have been able to discover in the academic literature of the years 1914-1916.... the German university professors sang hymns of praise to the "voluntary submission of all individuals and social groups to this army." They were almost grateful that the outbreak of war had given them the chance to experience the national enthusiasm of those heady weeks in August. (p. 180-181)
With the notable exception of Bertrand Russell (who lost his academic position and was imprisoned for his pacifism), philosophers in England appear to have been similarly enthusiastic. Wittgenstein never did anything so cheerily, it seems, as head off to fight for Austria. Alfred North Whitehead rebuked his friend and co-author Russell for his pacifism and eagerly sent off to war his sons North and Eric. (Eric Whitehead died.)

If there is anything that seems, in retrospect, not to have been a wonderful idea, it was World War I, which destroyed millions of lives to no purpose. (At best, it should have been viewed as a regrettable necessity in the face of foreign aggression; but that was rarely the attitude in 1914, from what I have read.) Philosophers at the time, evidently, were no more capable of seeing the (seemingly immensely obvious) downsides of world war than was anyone else.

You might ask: Why should philosophers have been more capable of seeing what was wrong with World War I? Isn't it entirely unsurprising that they should be just as enthusiastic as the rest of their compatriots?

Here's a model of philosophical reflection on which philosophers' enthusiasm for World War I is unsurprising: Philosophers -- and everyone -- possess their views on the big questions of life for emotional and sociological reasons that have nothing to do with their philosophical theories and philosophical readings. They recruit Kant, Mill, Locke, Rousseau, Aristotle, etc., only after the fact to justify what they would have believed anyway. Moral and political philosophy is nothing but post-hoc rationalization.

Here's a model of philosophical reflection on which philosophers' enthusiasm for World War I is, in contrast, surprising: Reading Kant, Mill, Locke, Rousseau, Aristotle, etc., helps give one a broadly humanitarian view, helps one see that people everywhere deserve respect, pushes one toward a more encompassing and cosmopolitan worldview, helps one gain a little critical perspective on the political currents of one's own time, helps one better see through the rhetoric of demagogues and narrow politicians.

Which vision of philosophy do you prefer?

Thursday, February 03, 2011

Imagery in Front of One's Forehead

I posted briefly on this in 2006, but I continue to be struck by the following fact: When I interview people about their imagery they often report their images as seeming to be located in front of their foreheads. Often when people say this, they give me a slightly embarrassed look, as though this report surprises them too. Only a minority of subjects report imagery in front of their forehead, but it's a pretty healthy minority (rough estimate: 25%).

I don't think that I am forcing this report on subjects: At first, such reports surprised me, too. Antecedently, I would have thought the most likely possibilities to be: (1.) the image occurs nowhere in egocentric space, not seeming to be subjectively located at all; (2.) the image is subjectively experienced as inside the head; (3.) the image is subjectively experienced as before the eyes. Subjects do sometimes report (1)-(3), though not appreciably more often than in front of the forehead.

As far as I know, there is no serious study of the subjectively experienced location of imagery, including its conditions and variations. Some questions that arise:

(1.) Do people's differences in report about the subjective location of their imagery reflect real differences between people in how they experience imagery, or is the subjective location of imagery, or lack of subjective location, more or less the same for everyone but for some reason difficult to report accurately?

(2.) Can one control the subjective location of imagery? Suppose I am imagining Napoleon vanquished at Waterloo, and it seems to me that that imagery is transpiring inside my head. Can I take that very same imagery and make it transpire, instead, in front of my forehead? (Obviously, I am not talking about where the brain processes occur, but where it seems to me, subjectively, that the imagery is happening, if it seems to me that it is happening somewhere.) Can I make the image transpire down by my right toe? Can I move it over into the next room? Across town? Is there some limit to this?

(3.) Is there a difference between visually imagining an object as being some place relative to you and having one's image of that object transpire in a particular subjective location? Getting clear on this distinction (if it is a valid distinction) seems essential to answering (2) accurately. So, for example, can I have an image in my head, or an image unlocated in subjective space, of someone sitting in the empty chair that I am (really) looking at across the room? If this distinction is hard to conceptualize, consider pictures as an analogy: I can have a picture *in my hand* of myself sitting *in that chair over there*. The picture is located in my hand; its representational content is that I am sitting in that chair over there. Is the same kind of split possible for imagery? I can imagine a mini-Napoleon engaged in a battle near my right toe, while I gaze down at my foot, but that is rather different -- isn't it? -- than having an ordinary image of Napoleon's defeat that is experientially located as transpiring by my right toe. If my imagery is experienced as inside of my head, it's not like I am imagining that little mini-Napoleon having climbed into my head, I think.

(4.) Is there interference between visual perception of outward objects from some location and the subjective location of one's imagery? So, for example, if your image is subjectively experienced as in the upper right quadrant of your visual field, are you less likely to detect an object that subtly appears in that location? (I have a vague feeling that someone has tested this, though maybe not with the distinction I articulate at (3) clearly in mind. Reminders/references appreciated. [I don't just mean Perky 1910, though that seems relevant.])

(5.) Why would in front of the forehead be a more common location in subjective space than, say, down by the cheeks? Does this reflect some cultural supposition about where images must be? (But if so, where in the culture?) Does it reflect some real phenomenon, perhaps some cognitive efficiency gained, by representing one's imagery experiences as positioned there?

Tuesday, February 01, 2011

Ethicists' Courtesy at Philosophy Conferences

In 2008 and 2009, my collaborators and I stalked philosophy conferences noting instances of courteous and discourteous behavior. Our aim was to collect evidence about whether ethicists behave any more courteously, on average, than do other philosophers. We used three measures of courtesy:

* talking audibly while the speaker is talking (vs. remaining silent);

* allowing the door to slam shut while entering or exiting mid-session (vs. attempting to close the door quietly);

* leaving behind clutter at the end of a session (vs. leaving one’s seat tidy).
Ethicists did not behave detectably differently by any of the three measures, thereby proving that they are not Confucian sages. (In fact, there was a session on neo-Confucianism among the coded sessions.)

We assume that disruptively yakking, littering, and slamming doors tend neither to advance the greatest good, nor to flow from universalizable maxims, nor to display morally virtuous character traits. Thus, these results fit with Josh Rust's and my overall finding, across several studies, that ethicists behave no morally better, on average, than do other people of similar social background.

We did find, however, that audiences in environmental ethics sessions tended to litter less.

Full details here.

Friday, January 21, 2011

In Germany, Slacking on the Blog

I've been trying to post at least once a week recently, but I didn't manage to pull it off this week. I'm in Germany until the 28th, with a hectic schedule. It has been very interesting and productive!

Monday, January 10, 2011

Ethicists' Responsiveness to Student Emails: New Essay in draft

by Joshua Rust and Eric Schwitzgebel

Available here.

Abstract:
Do professional ethicists behave any morally better than do other professors? Do they show any greater consistency between their norms and their behavior? In response to a survey question, a large majority of professors (83% of ethicists, 83% of non-ethicist philosophers, and 85% of non-philosophers) expressed the view that “not consistently responding to student emails” is morally bad. A similarly large majority of professors (>80% of all groups) claimed to respond to at least 95% of student emails. We sent these professors, and others, three emails designed to look like queries from students: one concerning office hours, one about declaring a major, and a third about a future course of the professor’s drawn from posted schedules of classes. All three emails were tested against spam filters, and we had direct confirmation that almost all target email addresses were actively used. Professors responded to about 60% of the emails. Ethicists’ email response rates were within statistical chance of the other two groups’. Expressed normative view correlated with self-estimated rate of email responsiveness, especially among the ethicists. However, for all groups of professors, measured email responsiveness was virtually unrelated to either expressed normative view or self-estimated email responsiveness.

Friday, January 07, 2011

Two Approaches to Transparency about Self-Knowledge

According to transparency views of self-knowledge, we learn about our own mental states not by turning our attention inward to detect the presence or absence of those states (as "inner sense" and "self-monitoring" views suggest) but rather by turning our attention to the outside world. In Gareth Evans's (1982) example, if someone asks me if I think there will be a third world war, in answering that question, I don't think about myself; rather, I think about the state of the world.

Suppose I answer "yes" to Evans's question. I have reached some sort of judgment. But what exactly have I reached a judgment about? There are two very different options insufficiently distinguished in the literature. Option 1: I am reaching a judgment about the world. In the context of the question (which was, literally speaking, about what I think), that judgment about the world serves a self-ascriptive function. Option 2: I am reaching a judgment about my mind. I'm not attending to my mind as a means of reaching that judgment, but the judgment is still a self-directed one. Call Option 1 the Topic Shifting approach to transparency and Option 2 the Self-Judgment approach.

Topic Shifting and Self-Judgment have complementary virtues and vices. Topic Shifting fits nicely with the intuitive sense in Evans's and others' examples that I'm not really thinking about my own mind; but then it's not clear why the result is supposed to be self-knowledge. It doesn't even seem to be self-belief. Conversely, on the Self-Judgment approach it's clear why the conclusion might count as self-knowledge, but we seem to have abandoned the core idea that I am thinking about the world, not my own mind, in answering the question.

Why not have it both ways? Combining Options 1 and 2: The transparency procedure produces a judgment that is both about my mind and about the world. This Dual Content approach shares with the Self-Judgment approach that it's clear how the product of the transparency procedure could be self-knowledge. And yet we can retain much of the original transparency intuition: The conclusion of my reflections does involve, perhaps even is mostly, a judgment about the world.

Think about avowals. An avowal, as I intend the term, is an assertion with a dual function: If I avow some proposition P (say "the world is flat") I am doing two things. I am asserting that the world is flat, and I am asserting that I believe (or judge) that the world is flat. This self-attributive aspect of avowals distinguishes them from simple assertions. On the Dual Content approach, the transparency procedure generates avowals.

Consider a spectrum from simple assertion to self-alienated confession: The simple assertion that P is not at all an assertion that I believe that P. The self-alienated confession that P is not at all an assertion that P is true, but only that I (seem to) believe it. Through the middle is a range of avowals with different degrees of emphasis on asserting P vs. asserting belief that P. Assertions containing self-ascriptive phrases like "I think" might tend, on average, to be somewhat more toward the confession side than assertions without self-ascriptive phrases.

Final thought: The public visibility of blog posts, Facebook status updates, and the like creates an atmosphere of self-observation that tends to convert simple assertions into avowals. We are thus becoming an avowal society.

Thursday, December 30, 2010

Nazi Philosophers

Recently, I've done a fair bit of work on the moral behavior of ethics professors (mostly with Josh Rust). We consistently find that ethics professors behave no better than socially comparable non-ethicists. So far, the moral violations we've examined are mostly minor: stealing library books, not voting in public elections, neglecting student emails. One might argue that even if ethicists behave no better in such day-to-day ways, on grand issues of moral importance -- decisions that reflect one's overarching worldview, one's broad concern for humanity, one's general moral vision -- they show greater wisdom.

Enter the Nazis.

Nazism is an excellent test case of the grand-wisdom hypothesis for several reasons: For one thing, everyone now agrees that Nazism is extremely morally odious; for another, Germany had a robust philosophical tradition in the 1930s and excellent records are available on individual professors' participation in or resistance to the Nazi movement. So we can ask: Did a background in philosophical ethics serve as any kind of protection against the moral delusions of Nazism? Or were ethicists just as likely to be swept up in noxious German nationalism as were others of their social class? Did reading Kant on the importance of treating all people as "ends in themselves" (and the like) help philosophers better see the errors of Nazism or, instead, did philosophers tend to appropriate Kant for anti-Semitic and expansionist ends?

Heidegger's involvement with Nazism is famous and much discussed, but I see him as just a single data point. There were, of course, also German philosophers who opposed Nazism. My question is quantitative: Were philosophers any more likely to oppose Nazism -- or any less likely to be enthusiastic supporters -- than were other academics? I'm not aware of any careful, quantitative attempts to address this question (please do let me know if I'm missing something). It can't be an entirely straightforward bean count, because dissent was dangerous and the pressures on philosophers were surely not the same as the pressures on academics in other departments -- probably the pressures were greater than on fields less obviously connected to political issues -- but we can at least start with a bean count.

There's a terrific resource on philosophers' involvement with Nazism: George Leaman's Heidegger im Kontext, which contains a complete list of all German philosophy professors from 1932 to 1945 and provides summary data on their involvement with or resistance to Nazism. I haven't yet found a similar resource for comparison groups of other professors, but Leaman's data are nonetheless interesting.

In Leaman's data set, I count 179 professors with "habilitation" in 1932 when the Nazis started to ascend to power (including Dozents and ausserordentlichers but not assistants). (Habilitation is an academic achievement after the Ph.D., without an equivalent in Britain or the U.S., with requirements roughly comparable to gaining tenure in the U.S.) I haven't attempted to divide these professors, yet, into ethicists vs. non-ethicists, so the rest of this post will just look at philosophers as a group. Of these, 58 (32%) joined the Nazi Party, the SA, or the SS. Jarausch and Arminger (1989) estimate that the percentage of university faculty in the Nazi party was between 21% and 25%. Philosophers were thus not underrepresented in the Nazi party.

The tricky questions come after this first breakdown: To what extent did joining the party reflect enthusiasm for its goals vs. opportunism vs. a reluctant decision under pressure?

I think we can assume that membership in the SA or SS reflects either enthusiastic Nazism or an unusual degree of self-serving opportunism: Membership in these organizations reflected considerable Nazi involvement and was by no means required for continuation in a university position. Among philosophers with habilitation in 1932, two (1%) joined the SS and another 20 (11%) joined (or were already in) the SA (one philosopher joined both), percentages approximately similar to the overall academic participation in those organizations. However, I suspect this estimate substantially undercounts enthusiastic Nazis, since a number of philosophers (including briefly Heidegger) appear to have gone beyond mere membership to enthusiastic support through their writings. I haven't yet attempted to quantify this -- though one further possible measure is involvement with Alfred Rosenberg, the notorious Nazi racial theorist. Combining the SA, SS, and Rosenberg associates yields a minimum of 30 philosophers (17%) on the far right side of Nazism, not even including those who received their university posts after the Nazis rose to power (and thus perhaps partly because of their Nazism).

What can we say about the philosophers who were not party members? Well, 22 (12% of the 179 habilitated philosophers) were Jewish. Another 52 (29%) were deprived of the right to teach, imprisoned, or otherwise severely penalized by the Nazis for Jewish family connections or political unreliability (often both). It's somewhat difficult to tease apart how many of this latter group took courageous stands vs. found themselves insufferable to the Nazis due to family connections or previous political commitments outside of their control. One way to look at the data is this: Among the 157 non-Jewish habilitated philosophy professors, 37% joined the Nazi party and 30% were severely penalized by the Nazis (this second number excludes 5 people who were Nazi party members and also severely penalized), leaving 33% as what we might call "coasters" -- those who neither joined the party nor incurred severe penalty. Most of these coasters had at least token Nazi affiliations, especially with the NSLB (the Nazi organization of teachers), but probably NSLB affiliation alone did not reflect much commitment to the Nazi cause.
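Here is that arithmetic spelled out, a minimal sketch of my own bookkeeping using only the counts reported above:

```python
# Counts from Leaman's data as reported in this post.
habilitated = 179
jewish = 22
party_members = 58          # NSDAP, SA, or SS
penalized = 52              # severely penalized by the Nazis
penalized_and_member = 5    # counted in both of the two groups above

non_jewish = habilitated - jewish                       # 157
penalized_only = penalized - penalized_and_member       # 47
coasters = non_jewish - party_members - penalized_only  # 52

for label, n in [("party members", party_members),
                 ("severely penalized (non-members)", penalized_only),
                 ("coasters", coasters)]:
    print(f"{label}: {n} ({n / non_jewish:.0%})")
# -> 37%, 30%, 33% of the 157 non-Jewish habilitated professors.
```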

Membership in the Nazi party would not reflect a commitment to Nazism (or, also problematic, an unusually strong opportunistic willingness to fake commitment to further one's career) if joining the Nazi party had been necessary simply for getting along as a professor. The fact that about a third of professors could be "coasters" suggests that token gestures of Nazism, rather than actual Nazi party membership, were sufficient for getting along, as long as one did not actively protest or have Jewish affiliations. Nor were the coasters mostly old men on the verge of retirement (though there was a wave of retirements in 1933, the year the Nazis assumed power). If we include only the subset of 107 professors who were not Jewish, habilitated by 1932, and continuing to teach past 1940, we still find 30% coasters (28% if we exclude two emigrants).

Here's what I tentatively conclude from this evidence: Philosophy professors were not forced to join the Nazi party. However, a substantial proportion did so voluntarily, either out of enthusiasm or opportunistically for the sake of career advancement. A substantial minority, at least 19% of the non-Jews, occupied the far right of the Nazi party, as reflected by membership in the SS or SA or by association with Rosenberg. Regardless of how the data look for other academic disciplines, it seems unlikely that we will be able to conclude that philosophers tended to avoid Nazism. Nonetheless, given that 30% of non-Jewish philosophers were severely penalized by the Nazis (including one executed for resistance and two who died in concentration camps), it remains possible that philosophers are overrepresented among those who resisted or were ejected.

Monday, December 27, 2010

My Forthcoming Book

... is at a discount for Jan. 1 release, quoted at $18.45 here at Amazon and $19.10 here at Barnes & Noble. (List price $27.95.) Get 'em while they're hot!

Thursday, December 23, 2010

Friday, December 17, 2010

Philosophers Buying Into Nazi Censorship?

This, from a recent article in Science, examining word usage frequencies using Google's huge corpus of books:

We probed the impact of censorship on a person’s cultural influence in Nazi Germany. Led by such figures as the librarian Wolfgang Hermann, the Nazis created lists of authors and artists whose “undesirable”, “degenerate” work was banned from libraries and museums and publicly burned (26-28). We plotted median usage in German for five such lists: artists (100 names), as well as writers of Literature (147), Politics (117), History (53), and Philosophy (35) (Fig 4E). We also included a collection of Nazi party members [547 names, ref (7)]. The five suppressed groups exhibited a decline. This decline was modest for writers of history (9%) and literature (27%), but pronounced in politics (60%), philosophy (76%), and art (56%). The only group whose signal increased during the Third Reich was the Nazi party members [a 500% increase; ref (7)].
One interpretation, perhaps, is that philosophers socked it to Hitler and suffered most. However, given the rate at which philosophers appear to have co-operated with the Nazis (explored by George Leaman in Heidegger im Kontext and hopefully the subject of a future post), I don't think we should rule out another interpretation: Philosophers tended to accept the Nazi censorship and stopped referring to the censored authors, more so than academics in other fields.

I wonder if there is a way to tease these hypotheses apart....

HT: Bernie Kobes.

Wednesday, December 15, 2010

German Tour in January

I will be in Germany from January 18-28. Here's the schedule of talks, plus one graduate student conference. There are also a few less formal events (seminar discussions and the like). Please feel free to contact me or the host departments if you'll be in the area and interested.

Jan. 20: Osnabrueck: "Shards of Self-Knowledge" (6 p.m. start)

Jan. 21-22: Osnabrueck: Post-graduate conference on "the Work of Eric Schwitzgebel, the Epistemological Status of First-Person Methodology in Science, and the Metaphysics of Belief"; I will present "The Problem of Known Illusion and the Problem of Undetectable Illusion" and "The Moral Behavior of Ethics Professors"

Jan. 24: Berlin: "Knowing What You Believe" (6 p.m. start)

Jan. 26: Bochum: "Knowing What You Believe" (6 p.m. start)

Jan. 27: Mainz: "Shards of Self-Knowledge" (6 p.m. start)

Sunday, December 12, 2010

Luke Muehlhauser Interviews Me about Self-Knowledge of Conscious Experience and about the Moral Behavior of Ethics Professors

... here. Self-knowledge of conscious experience is the topic until 57:12, and then there's about twenty minutes on ethics professors.

Thursday, December 09, 2010

"Objects in Mirror Are Closer Than They Appear"

... so it says, at least, on my passenger side mirror.



I've been worrying, though: are they really closer than they appear?  This might seem a strange thing to worry about, but I refuse to be thus consoled.

Here's a case for saying that objects in the mirror are closer than they appear: The mirror is slightly convex so as to give the driver a wider field of view.  As a result, the expanse of mirror reflecting light from the object into my eye is smaller than the expanse would be if the mirror were flat.  Thus, the size of the object "in the mirror" is smaller than it would be in a flat mirror.  If we assume that flat mirrors accurately convey size, it seems to follow that the size of the object in the mirror is inaccurately small.  Finally, apparent distance in a mirror is determined by apparent size in a mirror, smaller being farther away.
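To make the optics concrete, here is a minimal numeric sketch of that first argument (the focal length and following distance are made-up numbers, not manufacturer specs), using the standard mirror equation 1/d_o + 1/d_i = 1/f:

```python
f = -2.0    # focal length in meters; negative for a convex mirror (assumed value)
d_o = 10.0  # actual distance of the following car, in meters (assumed value)

d_i = (f * d_o) / (d_o - f)  # image distance from the mirror equation
m = -d_i / d_o               # lateral magnification

print(f"virtual image {abs(d_i):.2f} m behind the mirror; magnification {m:.2f}")
# -> magnification ~0.17: the image is about 1/6 the size a flat mirror
#    (magnification 1) would give, so a car of familiar size subtends a
#    "too small" angle -- which can be read as extra distance.
```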

The argument for the other side is, at first blush, much simpler: Objects in the mirror are no closer than they appear, at least for me, because as an experienced driver I never misjudge, or am even tempted to misjudge, their distance.

Now both of these arguments are gappy and problematic.  For example, on the first argument: Why should flat mirrors be normative of apparent size?  And why shouldn't we say that the object is larger than it appears (but appearing the right distance away), rather than closer than it appears (but perhaps appearing the right size)? That is, why does it look like a distant, full-sized car rather than a nearby, smallish car?

You might be tempted to mount a simpler argument for the "closer than they appear" claim: A naive mirror-user will misjudge the distance of objects seen in a slightly convex mirror.  The naive mirror-user's misjudgments are diagnostic of apparent size -- perhaps they are based primarily on "appearances"? -- and this apparent size does not change with experience.  The experienced mirror-user, in contrast, makes no mistakes because she learns to compensate for apparent size.  But this argument is based on the dubious claim that the experience of a novice perceiver is qualitatively the same as the experience of an expert perceiver -- a claim almost universally rejected by contemporary philosophers and psychologists.  It's also unclear whether the naive mirror-viewer would make the mistake if warned that the mirror is convex.  (Can apparent size in a mirror be contingent upon verbally acquired knowledge of whether the mirror is slightly convex or concave?)

Should we, then, repudiate the manufacturers' claim, at least as it applies to experienced drivers?  Should we, perhaps, recommend that General Motors hire some better phenomenologists?  Well, maybe.  But consider carnival mirrors: My image in a carnival mirror looks stretched out, or compressed, or whatever, even if I am not for a moment deceived.  Likewise, the lines in the Poggendorff Illusion look misaligned, even if I have seen the illusion a thousand times and know exactly what is going on.  Things look rippled through a warped window, no matter how often I look through that window.  Perhaps you, too, want to say such things about your experience.  If so, how is the passenger-side mirror case different?

Here is one way it might be different: It takes a certain amount of intellectual stepping back to not be taken in by the carnival mirror or the Poggendorff Illusion or the warped window.  The visual system, considered as a subset of my whole cognitive system, is still fooled.  And maybe this isn't so for the passenger-side mirror case.  But why not?  And does it really take such intellectual stepping back not to be fooled in the other cases?  Perhaps there's a glass of water on my table and the table looks warped through it.  I'm not paying any particular attention to it.  Is my visual system taken in?  Am I stepping back from that experience somehow?  It's not like I just ignore visual input from that area: If the table were to turn bright green in that spot or wiggle strangely, I would presumably notice.  Is my father's visual system fooled by the discontinuity between the two parts of his bifocals?  Is mine fooled by the discontinuities at the edge of my rather strong monofocals as they perch at the end of my nose?  And what if, as Dan Dennett, Mel Goodale, and others have suggested, there are multiple and possibly conflicting outputs from the visual system, some fooled and some not?

Can we say both that objects are closer than they appear in the passenger-side mirror (in one sense) and that they aren't (in some other sense)?  I'm inclined to think that such a "dual aspect" view in this case only doubles our problems, for it's not at all clear what these two senses would be: They can't be the same two senses, it seems, in which a tilted penny is sometimes said to look in one way round and in another way elliptical -- for what would we then say about the tilted penny viewed in a convex mirror?  We would seem to need three answers.

Hey, wait, don't drive off now -- we've only started!

Wednesday, December 08, 2010

Not in JFP: Tenure-track Position at UCR in History of Philosophy

Normally, I would just let Jobs for Philosophers carry the U.C. Riverside Philosophy Department ads, but for whatever reason, this ad still isn't posted over there -- which explains, perhaps, why our applicant pool is looking a little thin!

University of California, Riverside, CA. Asst Prof., tenure-track, available July 1, 2011. 4 courses/year on the quarter system, graduate and undergraduate teaching. Thesis supervision and standard non-teaching duties. AOS: History of Philosophy, with particular interests in Ancient, Early Modern, and/or 19th/20th Century European Philosophy. Requires ABD or Ph.D., and compelling evidence of achievement in and commitment to research and publication. In addition, the successful candidate must be committed to teaching effectively at all levels, including graduate mentoring. Furthermore, he or she will be expected to enhance connections among research groups in the department and, where applicable, within the College of Humanities, Arts and Social Sciences. Salary commensurate with education and experience. Submit a current CV, writing sample, at least three letters of reference, evidence of teaching excellence and a letter of application by January 3, 2011 to: Professor Mark Wrathall, Chair, Search Committee, Department of Philosophy, University of California, Riverside, CA 92521-0201. Review of applications begins on January 3, 2011 and continues until the position is filled. UC Riverside is an Equal Opportunity/Affirmative Action Employer committed to excellence through diversity.
Please spread the word to relevant parties!

Yes, I did say "Jobs for Philosophers". Every year in North America there are a few hundred exceptions to this apparent oxymoron.

Friday, December 03, 2010

Some Awesomely Beautiful Pictures of Structures in the Brain

here.

Unfortunately, the immaterial soul continues to elude photographic capture.

(HT: Theresa Cook.)

Wednesday, November 24, 2010

Professors' Moral Attitudes about Responding to Student Emails Are Almost Completely Unrelated to Their Actual Responsiveness to Student Emails

... or so say Josh Rust and I in an article we are busily writing up.  (We reported some of the data in an earlier blog post here.)

Below is my favorite figure from the current draft.  On the x-axis is professors' expressed normative view about the morality or immorality of "not consistently responding to student emails", in answer to a survey question, with answers ranging from 1 ("very morally bad") through 5 ("morally neutral") to 9 ("very morally good").  (In fact, only 1% of respondents answered on the morally good side of the scale, showing that we aren't entirely bonkers.)  On the y-axis is responsiveness to three emails Josh and I sent to those same survey respondents -- emails designed to look as though they were from undergraduates, asking questions about such things as office hours and future courses.

(I can't seem to get my graphs to display quite right in Blogger.  If the graph is cut off, please click to view the whole thing.  The triplets of bars represent ethicists, non-ethicist philosophers, and professors in departments other than philosophy, respectively.)

Tuesday, November 16, 2010

Carruthers and Schwitzgebel on Knowledge of Attitudes

... a Philosophy TV dialogue that came out a couple of weeks ago, but which I forgot to link to at the time.

Peter and I both deny that we have privileged self-knowledge of our attitudes (at least in any strong sense of "privilege"), but since we're philosophers we still find plenty to disagree about!

Thursday, November 11, 2010

The Phenomenology of Being a Jerk

Most jerks, I assume, don't know that they're jerks. This raises, of course, the question of how you can find out if you're a jerk. I'm not especially optimistic on this front. In the past, I've recommended simple measures like the automotive jerk-sucker ratio -- but such simple measures are so obviously flawed and exception-laden that any true jerk will have ample resources for plausible rationalization.

Another angle into this important issue -- yes, I do think it is an important issue! -- is via the phenomenology of being a jerk. I conjecture that there are two main components to the phenomenology:

First: an implicit or explicit sense that you are an "important" person -- in the comparative sense of "important" (of course, there is a non-comparative sense in which everyone is important). What's involved in the explicit sense of feeling important is, to a first approximation, plain enough. The implicit sense is perhaps more crucial to jerkhood, however, and manifests in thoughts like the following: "Why do I have to wait in line at the post office with all the schmoes?" and in often feeling that an injustice has been done when you have been treated the same as others rather than preferentially.

Second: an implicit or explicit sense that you are surrounded by idiots. Look, I know you're smart. But human cognition is in some ways amazingly limited. (If you don't believe this, read up on the Wason selection task.) Thinking of other people as idiots plays into jerkhood in two ways: The devaluing of others' perspectives is partly constitutive of jerkhood. And perhaps less obviously, it provides a handy rationalization of why others aren't participating in your jerkish behavior. Maybe everyone is waiting their turn in line to get off the freeway on a crowded exit ramp and you (the jerk) are the only one to cut in at the last minute, avoiding waiting your turn (and incidentally increasing the risk of an accident and probably slowing down non-exiting traffic). If it occurs to you to wonder why the others aren't doing the same you have a handy explanation in your pocket -- they're idiots! -- which allows you to avoid more uncomfortable potential explanations of the difference between you and them.

Here's a self-diagnostic of jerkhood, then: How often do you think of yourself as important, how often do you expect preferential treatment, how often do you think you are a step ahead of the idiots and schmoes? If this is characteristic of you, I recommend that you try to set aside the rationalizations for a minute and do a frank self-evaluation. I can't say that I myself show up as well by this self-diagnostic as I would have hoped.

How about the phenomenology of being a sweetie -- if we may take that as the opposite of a jerk? Well, here's one important component, I think: Sweeties feel responsible for the well-being of the people around them. These can be strangers who drop a folder full of papers, job applicants who are being interviewed, their own friends and family.

In my effort to move myself a little bit more in the right direction along the jerk-sweetie spectrum, I am trying to stir up in myself more of that feeling of responsibility and to remind myself of my fallible smallness.

Thursday, November 04, 2010

Not By Argument Alone (by Guest Blogger G. Randolph Mayes)

I just gave a talk at Gonzaga University called “Not by Argument Alone” in which I tried to show how explanatory reasoning figures into the resolution of philosophical problems. It begins with the observation that we sometimes have equally good reasons for believing contradictory claims. This is the defining characteristic of philosophical antinomies, but it is a common feature of everyday reasoning as well.

For example, Frank told me to meet him at his office at 3 PM if I wanted a ride home. But I’ve been waiting for 15 minutes now and still no Frank. This problem can be represented as a contradiction of practical significance: Frank both will and will not be giving me a ride home. One of these claims must go. The problem is that I have very good reasons for believing both. Frank is a very reliable friend, as is my memory for promises made. On the other hand, my ability to observe the time of day and the absence of Frank at that time and location is quite reliable as well.

So how do I decide which claim to toss? I consider the possibility that Frank is not coming, but this immediately raises the following question: Why not? (He forgot; he lied; he was mugged; I am late?) I consider the possibility that Frank will still show. This immediately raises another question: Why isn't he here? (He was delayed; I am early; he is here but I don't see him?) Both of these questions are requests for explanations, and producing good answers to them is essential to the rational resolution of the contradiction. Put differently, I should deny the claim whose associated explanation questions I am best capable of answering.

This is one way of explicating the view that rational belief revision depends on considerations of ‘explanatory coherence.’ The idea is typically traced to Wilfrid Sellars, and it has since been developed along epistemological, psychological, and computational lines. Oddly, however, it has not been explored much as a model for the resolution of philosophical questions. I don’t know why, but I speculate that it is because philosophers don’t naturally represent philosophical thinking in explanatory terms. Typically, a philosophical ‘theory’ is represented not so much as a proposed explanation of some interesting fact as it is a proposed analysis of some problematic concept.

In my view, though, philosophers engage in the creation of explanatory hypotheses all the time. Consider the traditional problem of perception. Just about everyone agrees that we perceive objects. But whereas the physicalist argues that we perceive independently existing physical objects, the phenomenalist is equally persuasive that the objects of perception are mind-dependent. Again, one claim must go. Suppose we deny the phenomenalist's claim. But then how do we explain illusions and hallucinations, which are phenomenologically indistinguishable from perceptions of physical objects? Suppose we deny the physicalist's claim. But then how do we explain the origin of experience itself?

When we explicitly acknowledge that explanation is a necessary step in philosophical inquiry, we thereby acknowledge the responsibility to identify criteria for evaluating the explanations that we propose. Too often philosophical theories are defended simply on the basis of their intuitive appeal. But why would we expect this to reflect anything more than our intuitive preference for believing the claims that they preserve? In science, the ability of a theory to explain things we already know is a paltry achievement. A good explanation must successfully predict novel phenomena or unify familiar phenomena not previously known to be related. Are philosophical explanations subject to the same criteria? If so, then let’s explicitly apply them. If not, well, then I think we’ve got some explaining to do.

This is my last post! Thanks very much for reading and thanks especially to Eric for giving me this opportunity to float some of my thoughts on The Splintered Mind.

Friday, October 29, 2010

The Convincing Explanation (by Guest Blogger G. Randolph Mayes)

The Stone is the new section of the New York Times devoted to philosophy and this week it contains an interesting piece called “Stories vs. Statistics” by John Allen Paulos. It is worth reading in its entirety, but for my money the most important point he makes is this:

The more details there are about them in a story, the more plausible the account often seems. More plausible, but less probable. In fact, the more details there are in a story, the less likely it is that the conjunction of all of them is true.
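Paulos's point here is just the conjunction rule of probability: P(A and B) = P(A) x P(B|A) <= P(A). A minimal sketch, with made-up numbers (my illustration, not Paulos's):

```python
p = 0.8          # P(core claim) -- assumed value
p_detail = 0.9   # P(each added detail | story so far) -- high, but still < 1

for k in range(1, 5):
    p *= p_detail
    print(f"story with {k} added detail(s): P = {p:.2f}")
# -> 0.72, 0.65, 0.58, 0.52: each vivid detail makes the story feel more
#    plausible while strictly lowering the probability of the conjunction.
```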
Our tendency to confuse plausibility with probability is also at the heart of a short essay of mine (forthcoming in the journal Think), called “Beware the Convincing Explanation.” Paulos clarifies the excerpt above by reference to the ‘conjunction fallacy,’ which I discussed in an earlier post. In my essay I try to get at it from a different angle, by distinguishing the respective functions of argument and explanation.

Here is the basic idea: Normally, when we ask for an argument we are asking for evidence, which is to say the grounds for believing some claim to be true. An explanation, on the other hand, is not meant to provide grounds for belief; rather it tells us why something we already believe is so. Almost everyone understands this distinction at an intuitive level. For example, suppose you and I were to have this conversation about our mutual friend Toni.
Me: Boy, Toni is seriously upset.

You: Really? Why?

Me: She’s out in the street screaming and throwing things at Jake.
You can tell immediately that we aren’t communicating. You asked for an explanation, the reason Toni is upset. What I gave you is an argument, my reasons for believing she is upset. But now consider a conversation in which the converse error occurs:
Me: Boy, Toni is seriously upset.

You: Really? How do you know that?

Me: Jake forgot their date tonight and went drinking with his pals.
This time my response actually begs the question. Jake blowing off the date would certainly explain why Toni is upset, but an explanation is only appropriate if we agree that she is. Since your question was a request for evidence, it is clear that you are not yet convinced of this and I’ve jumped the gun by explaining what caused it.

What’s interesting is that people do not notice this so readily. In other words, we often let clearly explanatory locutions pass for arguments. This little fact turns out to be extremely important, as it makes us vulnerable to people who know how to exploit it. For example, chiropractic medicine, homeopathy, faith healing -- not to mention lots of mainstream diagnostic techniques and treatments -- are well known to provide little or no benefit to the consumer. Yet their practitioners produce legions of loyal customers on the strength of their ability to provide convincing explanations of how their methods work. If we were optimally designed for detecting nonsense, we would be highly sensitive to people explaining non-existent facts. We aren’t.

Now, to be fair, there is a sense in which causes can satisfy evidential requirements. After all, Jake blowing off the date can be construed as evidence that Toni will be upset when she finds out. However, it is quite weak evidence compared to actually watching Toni go off on him. So, we can put the point a bit more carefully by saying that what people don’t typically understand is how weak the evidence often is when an explanation gets repurposed as an argument.

Following Paulos, we can say that convincing explanations succeed in spite of their evidential impotence because they are good stories that give us a satisfying feeling of understanding a complex situation. Importantly, this is a feeling that could not be sustained if we were to remain skeptical of the claim in question, as it is now integral to the story.

Belief in the absence of evidence is not the only epistemic mischief that explanations can produce. The presence or absence of an explanation can also inhibit belief formation in spite of strong supporting evidence. The inhibitory effect of explanation was demonstrated in a classic study by Anderson, Lepper and Ross which showed that people are more likely to persist in believing discredited information if they had previously produced hypotheses attempting to explain that information. Robyn Dawes has documented a substantial body of evidence for the claim that most of us are unmoved by statistical evidence unless it is accompanied by a good causal story. Of particular note are studies by Nancy Pennington and Reid Hastie which demonstrate a preference for stories over statistics in the decisions of juries.

Sherlock Holmes once warned Watson of the danger of the convincing explanation: “It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.” Damn good advice from one of the greatest story-tellers of all.

Thursday, October 28, 2010

Mad Belief?

Mad belief -- in David Lewis's sense of "mad" -- would be belief with none of the normal causes and none of the normal effects. Such belief, I suggest, would not be belief at all. Delusions might sometimes be cases of beliefs gone half-mad -- beliefs with enough of the functional role of belief that it's not quite right simply to say that the deluded subject fails to believe, but also diverging enough from the functional role of belief that it's not quite right to say that the subject believes.

So I say, at least, in an essay now available in draft on my website. (As always, comments, revisions, objections welcome -- either attached to this post or emailed directly to me.)

The essay is a commentary on Lisa Bortolotti's recent book Delusions and Other Irrational Beliefs, though it should be readable without prior knowledge of Lisa's book. You might remember Lisa from her recent stint as a guest blogger here at The Splintered Mind.

Sunday, October 24, 2010

Why We Procrastinate (by guest blogger G. Randolph Mayes)

James Surowiecki recently wrote a nice full-length review of The Thief of Time for The New Yorker magazine. It sounds like a fantasy novel by Terry Pratchett, but is actually a collection of mostly pointy-headed philosophical essays about procrastination edited by Chrisoula Andreou and Mark White. Procrastination is a great topic if you are interested in the nature of irrationality, as philosophers and psychologists tend to think of procrastination as something that is irrational by definition. For example, in the lead article of this volume George Ainslie defines procrastination as “generally meaning to put off something burdensome or unpleasant, and to do so in a way that leaves you worse off.”

I recently published an article about cruelty in which I argued that it is a mistake for scientists to characterize the phenomenon of cruelty in a way that respects our basic sense that it is inherently evil. I find myself wondering whether the same sort of point might be raised against the scientific study of procrastination.

Most researchers appear to accept Ainslie’s characterization of procrastination as an instance of "hyperbolic discounting," which is an exaggeration of an otherwise defensible tendency to value temporally proximate goods over more distant ones. Everyone understands that there are situations (like a time-sensitive debt or investment opportunity) when it is rational to prefer to receive 100 dollars today rather than 110 dollars next week. But Ainslie and many others have demonstrated that we typically exhibit this preference even when it makes far more sense to wait for the larger sum.

Hyperbolic discounting subsumes procrastination in a straightforward way. According to Ainslie, whenever we procrastinate we are choosing a more immediately gratifying activity over one whose value can only be appreciated in the long run. When making plans a week in advance, few would choose to schedule the evening before a big exam catching up on neglected correspondence or deleting old computer files. But when the decision is left until then, that’s exactly the sort of thing we find ourselves doing.
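To see the mechanism at work, here is a minimal sketch in Python of the preference reversal that hyperbolic discounting predicts. It assumes the commonly used one-parameter hyperbola V = A / (1 + kD); the discount rate k, the dollar amounts (echoing the $100/$110 example above), and the timings are illustrative choices of mine, not Ainslie's numbers.

def value(amount, delay_days, k=0.015):
    """Hyperbolically discounted present value of a reward delay_days away."""
    return amount / (1 + k * delay_days)

for today in (0, 6):  # deciding a week in advance vs. the night before
    small = value(100, 7 - today)    # $100 arriving on day 7
    large = value(110, 14 - today)   # $110 arriving on day 14
    choice = "smaller-sooner" if small > large else "larger-later"
    print(f"day {today}: {small:.2f} vs {large:.2f} -> {choice}")

# On day 0 the larger-later reward has the higher discounted value; by
# day 6 the smaller-sooner reward has overtaken it, though nothing about
# the rewards themselves has changed. That crossover is the preference
# reversal that makes last-minute choices diverge from plans made a week
# in advance.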

One interesting result of defining procrastination as Ainslie does is that whether we are procrastinating at any given time depends on what happens later, not how we feel about it now. For example, reading this blog is something you might describe as procrastinating on cleaning your filthy apartment. But, according to Ainslie’s definition, you are only procrastinating now if you subsequently fail to get the apartment clean before your guests arrive for dinner (because otherwise you aren’t “worse off”). There is nothing absurd about this, and science certainly has no obligation to be faithful to ordinary usage. But this disparity does highlight an interesting possibility, namely that what Ainslie and his colleagues call procrastination is really just the downside of a generally rational tendency to avoid beginning onerous tasks much before they really, really need to be done.

Why would this be rational? Well, you could start cleaning your apartment right now. But -- wait! -- there is a good chance that if you do, you will become the victim of Parkinson’s Law: Work expands so as to fill the time available for its completion. Putting it off until the last minute can be beneficial because you work much more energetically and efficiently when you are under the gun. (And if you don’t, then you will learn to, which is an important life skill.) Of course, this strategy occasionally backfires. We sometimes underestimate the time we need to meet our goals; unanticipated events, like a computer crashing or guests arriving early, can torpedo the deadlining strategy. But these exceptions, which are often uncritically taken as proof of the irrationality of procrastination, may simply be a reasonable price to pay for the value it delivers when it works.

Most of us think of procrastination as a bad thing, and we tell researchers that we do it too much. But should this kind of self-reporting be trusted? Do we just know intuitively that we would generally be better off if we procrastinated less? Scientists can define procrastination as harmful if they want to, but they might also want to reconsider the wisdom of a definition that makes beneficial procrastination a logical absurdity. In doing so, they may discover that the powerful notion of hyperbolic discounting has made them too quick to accept a universal human tendency as a fundamentally irrational one.

Friday, October 15, 2010

U.C. Regents to Add Air Consumption Fee

Earlier today, the University of California regents unanimously voted to impose a new Air Consumption Fee on students, faculty, and staff. The new fee will go into effect on January 1.

University of California President Mark G. Yudof said, "Most people think of air as free, but they don't realize that it needs to be processed through ventilation systems." Ventilation systems, he added, "cost money both to build and to maintain. In times of economic difficulty for the University of California, we need to look carefully at costs, such as the cost of air."

The new Air Consumption Fee will be $1,210.83 per quarter for students on the quarter system and $1,816.25 per semester for students on the semester system. For faculty and staff, the Air Consumption Fee will be 23% of their base salary. University of California's chief economist for the Office of the President, Muss Erhaben, noted, "That may seem like a lot to pay for air, but recent studies have suggested that demand for air is relatively inelastic" and thus not very sensitive to changes in price.

Student, faculty, and staff advocacy groups were predictably outraged by the move. "The sudden imposition of new fees on students, especially in the middle of the academic year, creates enormous hardships, especially for students already in economic difficulty," commented U.C.L.A. student representative Tengo K. Respirar. "For example, I had been hoping for an iPad for Christmas. Instead, my parents will be buying me air."

Others complained that the fee was unfair to those who use less air. "I can stop my heart and breathing for minutes at a time and consume only a half cup of rice and thin broth every day," said Swami B. Retha Litla. "I should not be expected to pay the same as a football player." Donna M. de l'Air, a U.C. Riverside Associate Professor in Comparative Languages and Literatures, noted that the Air Consumption Fee will be deducted from her salary even though she will be on sabbatical in France for winter quarter, and thus will be consuming no University of California air. In response to this concern, a representative of the Office of the President stated that the University of California is working on exchange arrangements with other universities to ensure that professors and students in residence elsewhere will not be double-charged for air.

In related news, the U.C. regents also voted to institute a new tier for employees. Current employees who wish the university to abide by its previous salary and benefits agreements may elect to join the Traditional Plan tier at an annual cost of 50% of their salary. Alternatively, employees may elect to join the New Plan at no charge. The New Plan involves a 50% reduction in salary. "We are proud that in these difficult budgetary times we have been able to abide by all our agreements and avoid salary cuts, at least for staff who pay to join the Traditional Plan," said President Yudof.

The Illusion of Understanding (by guest blogger G. Randolph Mayes)

Every teacher knows that magic moment when the light snaps on in a student’s head and bitter confusion gives way to the warm glow of understanding. We live for those moments. Subsequent moments can be slightly less magical, however. There is, for example, the moment we begin to grade said student’s exam, and realize that we’ve been had yet again by the faux glow of illusory understanding.

The reliability and significance of our sense of understanding (SOU) has been the subject of research in recent years. I indicated in the previous post that philosophers of science generally agree that there is a tight connection between explanation and understanding. Specifically, they agree that the basic function of explanation is to increase our understanding of the world. But this agreement is predicated on an objective sense of the term ‘understanding,’ typically referring to a more unified belief system or a more complete grasp of causal relations. There is no similar consensus concerning how our subjective SOU relates to ‘real’ understanding, or indeed whether it is of any philosophical interest at all.

One leading thinker who has argued for the relevance of the SOU to the theory of explanation is the developmental psychologist Alison Gopnik. Gopnik is a leading proponent of the view that the developing brains of children employ learning mechanisms that closely mirror the process of scientific inquiry. As the owner of this blog has aptly put it, Gopnik believes that children are invested with ‘a drive to explain,’ a drive she compares to the drive for sex.

For Gopnik, the SOU is functionally similar to an orgasm. It is a rewarding experience that occurs in conjunction with an activity that tends to enhance our reproductive fitness. So just as a full theory of reproductive behavior will show how orgasm contributes to our reproductive success, a full theory of explanatory cognition will show how the SOU contributes to our explanatory success.

Part of the reason Gopnik compares the SOU to the experience of orgasm is that both can be detached from their respective biological purposes. Genital and theoretical masturbation are both pleasurable yet non-(re)productive human activities. Gopnik thinks that just as no one would take the high proportion of non-reproductive orgasms as evidence that orgasm is unrelated to reproduction, no one should take a high frequency of illusory SOU’s as evidence that the SOU is unrelated to real understanding.

But the analogy between orgasm and the SOU has its limits. The SOU cannot really be detached from acts of theorizing as easily as orgasm can be detached from acts of reproduction. One might achieve a free-floating SOU as a result of meditation, mortification, or drug use, but this is relatively unusual in comparison with the ease and frequency with which orgasms can be achieved without reproductive sex. For the most part, SOU’s come about as a result of unprotected intercourse with the world. If illusory SOU’s are common, and this cannot be explained by reference to their detachability, it is reasonable to remain skeptical about the importance of the SOU in producing real understanding.

One such skeptic is the philosopher of science J. D. Trout. Trout does not deny that our SOU may sometimes result from real understanding, but he thinks this is the exception rather than the rule. Moreover, Trout thinks that illusory SOU’s are typically the result of two well-established cognitive biases: overconfidence and hindsight. (Overconfidence bias is the tendency to overestimate the likelihood that our judgments are correct. Hindsight bias is the tendency to believe that past events were more predictable than they really were.) Far from regarding the SOU as a reliable indicator of real understanding, Trout holds that it mostly reinforces a positive illusion we have about our own explanatory abilities. (This view also finds support in the empirical research of Frank Keil, who has documented an ‘illusion of explanatory depth.’)

Is it true that illusory SOU’s are more common than veridical ones? I’m not sure about this. I’m inclined to think most of our daily explanatory episodes occur below the radar of philosophers of science. Consider explanations that occur simply as a result of the limits of memory. My dog is whining, and it occurs to me that I haven’t fed her. The mail hasn’t been delivered, and then I recall it is a holiday. I see a ticket on my windshield and remember that I didn’t feed the meter. I have a dull afternoon headache and realize I’ve had only three cups of coffee. These kinds of explanatory episodes occur multiple times every day. The resulting SOU’s are powerful and only rarely misleading.

But when we are choosing between competing hypotheses or evaluating explanations supplied by others, Trout is surely correct that the intensity of an SOU has little to do with our degree of understanding. We experience very powerful SOU’s from just-so stories and folk explanations that have virtually no predictive value. Often a strong SOU results simply from the fact that an explanation allays our fears or settles cognitive dissonance in an emotionally satisfying way.

In the end, I’m not sure that Trout and Gopnik have a serious disagreement. For one thing, Gopnik’s focus is on the value of the SOU for the developing mind of a child. It may be that the unreflective minds of infants are uncorrupted by overconfidence, hindsight, or the need to believe. It may also be that a pre-linguistic child’s SOU is particularly well calibrated for the kind of learning it is required to do.

Trout does not argue that the SOU is completely unreliable, and Gopnik only needs it to be reliable enough to have conferred a selective advantage on those whose beliefs are reinforced by it. There are different ways that this can happen. As Trout himself points out, the SOU may contribute to fitness simply by reinforcing the drive to explain. But even if our SOU is only a little better than chance at selecting the best available hypothesis at any given time, it could still be tremendously valuable as part of an iterated process that remains sensitive to negative feedback. As I indicated in the previous post, our mistake may be to think of the SOU as something that justifies us in believing our hypotheses. It may simply help us to generate or select hypotheses that are slightly more likely to be true than their competitors.
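That closing thought lends itself to a toy demonstration. Below is a small Python simulation, entirely illustrative, of a guess-test-discard loop in which the SOU picks the true hypothesis only slightly more often than chance. The number of hypotheses and the size of the "edge" are assumptions of mine, not figures from Trout or Gopnik.

import random

def rounds_no_feedback(n=10, rng=random):
    """Pure chance with no memory: guess with replacement until correct."""
    truth = rng.randrange(n)
    rounds = 1
    while rng.randrange(n) != truth:
        rounds += 1
    return rounds

def rounds_with_feedback(n=10, edge=0.0, rng=random):
    """Guess, test, and discard falsified hypotheses. The SOU boosts the
    probability of guessing the true hypothesis by 'edge' over chance."""
    live = list(range(n))
    truth = rng.choice(live)
    rounds = 0
    while True:
        rounds += 1
        if rng.random() < 1 / len(live) + edge:
            return rounds  # hit the true hypothesis this round
        # wrong guess: negative feedback eliminates one false hypothesis
        live.remove(rng.choice([h for h in live if h != truth]))

random.seed(0)
trials = 10_000
base = sum(rounds_no_feedback() for _ in range(trials)) / trials
fb = sum(rounds_with_feedback(edge=0.0) for _ in range(trials)) / trials
sou = sum(rounds_with_feedback(edge=0.05) for _ in range(trials)) / trials
print(f"no feedback, pure chance:       {base:.1f} rounds")  # about 10
print(f"feedback, pure chance:          {fb:.1f} rounds")    # about 5.5
print(f"feedback plus a small SOU edge: {sou:.1f} rounds")   # about 4.6

The negative feedback does most of the work, but even a five-point edge over chance compounds across rounds. The point is not the particular numbers but the shape: a weak signal embedded in an iterated, feedback-sensitive process can be worth much more than the same signal used once.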

Tuesday, October 12, 2010

Poor, Unloved Auguste Comte

... still, almost two centuries later, has no scholarly-quality English translation of his (1830-1842) magnum opus, Cours de Philosophie Positive.  This is, I think, rather a scandal for such an important philosopher.

(How important? Well, Dean Simonton's mid-20th-century measure of the historical importance of thousands of philosophers, according to textbook pages dedicated to them and similar measures, ranks him as the 17th most important philosopher in history, between Rousseau and Augustine -- though I'd guess that Anglophone philosophers in 2010 wouldn't rank him quite so high.)

The standard translation of Cours de Philosophie Positive is The Positive Philosophy of Auguste Comte, "freely translated and condensed by Harriet Martineau" in 1896. Wait, what?!  Freely translated and condensed? What is this, the friggin' Reader's Digest version? You're not planning to quote from it, I hope.

Probably Comte's most famous contribution to philosophy of psychology is his brief argument against the possibility of a science of introspection. Here is Martineau's translation of the passage in which Comte lays out his argument:

In the same manner, the mind may observe all phenomena but its own.  It may be said that a man's intellect may observe his passions, the seat of the reason being somewhat apart from that of the emotions in the brain; but there can be nothing like scientific observation of the passions, except from without, as the stir of the emotions disturbs the observing faculties more or less. It is yet more out of the question to make intellectual observation of intellectual processes. In order to observe, your intellect must pause from activity; yet it is this very activity that you want to observe. If you cannot effect the pause, you cannot observe: if you do effect it, there is nothing to observe (vol. 1, p. 12).
I won't inflict the original French upon you (it is available in Google books here, if you're interested), but for comparison here is William James's translation in his Principles of Psychology:
It is in fact evident that by an invincible necessity, the human mind can observe directly all phenomena except its own proper states.  For by whom shall the observation be made? It is conceivable that a man might observe himself with respect to the passions that animate him, for the anatomical organs of passion are distinct from those whose function is observation. Though we have all made such observations on ourselves, they can never have much scientific value, and the best mode of knowing the passions will always be that of observing them from without; for every strong state of passion... is necessarily incompatible with the state of observation. But as for observing in the same way intellectual phenomena at the time of their actual presence, that is a manifest impossibility. The thinker cannot divide himself into two, of whom one reasons while the other observes him reason.  The organ observed and the organ observing being, in this case, identical, how could observation take place? This pretended psychological method is then radically null and void. On the one hand, they advise you to isolate yourself, as far as possible, from every external sensation, especially every intellectual work, -- for if you were to busy yourself even with the simplest calculation, what would become of internal observation? -- on the other hand, after having with the utmost care attained this state of intellectual slumber, you must begin to contemplate the operations going on in your mind, when nothing there takes place!  Our descendants will doubtless see such pretensions some day ridiculed upon the stage (1890/1981, pp. 187-188).
(The ellipses above mark one phrase James omits: "c'est-à-dire précisément celui qu'il serait le plus essentiel d'examiner", which, in my imperfect French, I would translate as "that is to say, precisely that which it would be the most essential to examine". It is perhaps also worth remarking that no emphasis on "passions" or "intellectual" appears in my edition of Comte, though "intérieure" is italicized.)

Not only does the Martineau translation lose the detail and the color of the original, it is philosophically and psychologically sloppy. For example, Comte makes no reference to the "brain" or the "seat of reason"; instead -- as James indicates -- he talks about "the organs... whose function is observation" ("les organes... destinés aux fonctions observatrices"). And what is this that Martineau says about "the stir of the emotions disturbs the observing faculties more or less"? There is no trace of this clause in the original text. Martineau has inserted into Comte's text an observation that she evidently thinks he should have made.

We should no longer cite Martineau's translation as though it were of scholarly quality. There is no scholarly translation of Comte's most important work.

Wednesday, October 06, 2010

Is Explanation the Foundation? (by guest blogger G. Randolph Mayes)

One of my main interests is explanation. I think there may be no other concept that philosophers lean on so heavily, yet understand so poorly. Here are some examples of how critical the concept of explanation has become to contemporary philosophical debates.

1. A popular defense of scientific realism is that the existence of theoretical entities provides the best explanation of the success of the scientific enterprise.

2. A popular view concerning the nature of inductive rationality is that it rests on an inference to the best explanation.

3. A popular argument for the existence of other minds is that other minds provide the best explanation of the behavior of other bodies.

4. A popular argument for the existence of God is that a divine intelligence is the best explanation of the observed order in the universe.

This is a short list. The concept of explanation has been invoked in similar ways to analyze the nature of knowledge, theories, reduction, belief revision, and abstract entities. Interestingly, few of the very smart people who defend these views tell us what explanation is. The reason is simple: we don’t really know. The dirty secret is that explanation is just no better understood than any of the things that explanation is invoked to explain. In fact, it is actually worse than that. If you spend some time studying the many different theories of explanation that have been developed during the last 60 years or so, you’ll find that most of them give little explicit support to these arguments.

The reason for this is worth knowing. Most philosophical theories of explanation have been developed in an attempt to identify the essential features of a good scientific explanation. The good-making features of explanation were generally agreed to be those that would account for how explanation produces (and expresses) scientific understanding. There are many different views about this, but an assumption common to most of them is that a good scientific explanation must be based on true theories and observations. That sounds pretty reasonable, but here’s the rub: If truth is a necessary condition of explanatory goodness, then it makes no sense at all to claim that a theory’s explanatory goodness is our basis for thinking it is true.

All of the arguments noted above do just this, invoking a principle commonly known as “inference to the best explanation” (IBE, aka ‘abduction’). This idea, first articulated by Charles Peirce, has been the hope of philosophy ever since W.V.O. Quine pounded the last nail into the coffin of classical empiricism. This latter tradition had sought in vain to demonstrate that inductive rationality could ultimately be reduced to logic. For many, IBE is a principle that, while not purely logical, might serve as a new ‘naturalized’ foundation of inductive rationality.

Bas van Fraassen, the great neo-positivist, has blown the whistle on IBE most loudly, arguing that it is actually irrational. One of his criticisms is quite simple: It is literally impossible to infer the best explanation; all we can infer is the best explanation we have come up with so far. It may just be the best of a bad lot.

One way to understand the disconnect between traditional theories of explanation and IBE is to note that there are two fundamentally different ways of thinking about explanation. In one, basically transactional sense, explanations are the sorts of things we seek from pre-existing reserves of expert knowledge. When we ask scientists why the night sky is dark or why it feels good to scratch an itch, we typically accept as true whatever empirical claims they make in answering our question. Our sense of the quality of the explanation is limited to how well we think this information has answered the question we’ve posed. This, I think, is the model implicit in most traditional theories of explanation. The aim is to show in what sense, beyond the mere truth of the claims, science can be said to provide the best answers.

In my view, IBE has more to do with a second sense of explanation, belonging to the context of discovery rather than communication of expert knowledge. In this sense, explaining is a creative process of hypothesis formation in response to novel or otherwise surprising information. It can occur within a single individual, or within a group, but in either case it occurs because of the absence of authoritative answers. It is in this sense of the term that it can make sense to adopt a claim on the basis of its explanatory power.

Interestingly, much of the work done on transactional accounts of explanation is highly relevant to the discovery sense of the term. Many of the salient features of good explanations are the same in both, notably: increased predictive power, simplicity, and consilience. (This point is made especially clearly in the work of philosophically trained cognitive psychologists like Tania Lombrozo.) What is not at all clear, however, is that any of the IBE arguments noted above will have the intended impact when the relevant sense of explanation belongs more to what Reichenbach called “the context of discovery” rather than the “context of justification.”

Tuesday, October 05, 2010

Applying to Graduate School in Philosophy

Time to start getting your act together, if that's your plan!

Regarding M.A. programs, I recommend the guest post by Robert Schwartz of University of Wisconsin, Milwaukee.

Regarding applying to Ph.D. programs, I stand by the advice I gave in 2007, with a few caveats:

  • The academic job market is horrible now, after having been unusually good from about 1999-2007.  Hopefully it will recover in a few years, though to what extent philosophy departments will participate in that recovery is an open question.  Bear these trends in mind when looking at schools' placement records.
  • The non-academic job market is also horrible now.  When the non-academic job market is horrible, graduate school admissions is generally more competitive.
  • In my posts, I may have somewhat underestimated the importance of the GRE.  However, I want to continue to emphasize that different schools, and different admissions committees within the same school over time, take the GRE seriously to different degrees, and thus a low GRE score should by no means doom your hopes.  If you have a GRE score that is not in keeping with your graduate school ambitions, I recommend applying to more than the usual number of schools, so that your application will land among at least a few committees that don't give much weight to the GRE.
The original seven posts have comments threads on which you may post questions or comments.

Thursday, September 30, 2010

Explaining Irrationality (by guest blogger G. Randolph Mayes)

In one of the last papers he wrote before dying almost exactly one year ago, John Pollock posed what he called “the puzzle of irrationality”:

Philosophers seek rules for avoiding irrationality, but they rarely stop to ask a more fundamental question ... [Assuming] rationality is desirable, why is irrationality possible? If we have built-in rules for how to cognize, why aren’t we built to always cognize rationally?
Consider just one example, taken from Philip Johnson-Laird’s recent book How We Reason: Paolo went to get the car, a task that should take about five minutes, yet 10 minutes have passed and Paolo has not returned. What is more likely to have happened?
1. Paolo had to drive out of town.

2. Paolo ran into a system of one way streets and had to drive out of town.
The typical reader of this blog probably knows that the answer is 1. After all (we reason) 2 can’t be more likely, since 1 is true whenever 2 is. But I’ll bet you felt the tug of 2 and may still feel it. (This human tendency to commit the ‘conjunction fallacy’ was famously documented by the Israeli psychologists Daniel Kahneman and Amos Tversky.)
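For readers who like to see the point mechanically: however probabilities are distributed over the possible scenarios, a conjunction can never be more probable than one of its own conjuncts, because the scenarios that make the conjunction true are a subset of those that make the conjunct true. The short Python sketch below illustrates this; the scenario probabilities are made up by me, purely for illustration.

# Each row: (drove_out_of_town, hit_one_way_streets, probability).
# The four probabilities sum to 1; the particular values are invented.
scenarios = [
    (True,  True,  0.10),
    (True,  False, 0.15),
    (False, True,  0.05),
    (False, False, 0.70),
]

p_answer_1 = sum(p for out, _, p in scenarios if out)
p_answer_2 = sum(p for out, one_way, p in scenarios if out and one_way)

print(f"{p_answer_1:.2f} vs {p_answer_2:.2f}")  # 0.25 vs 0.10
# Answer 2's scenarios are a subset of answer 1's, so p_answer_2 can
# never exceed p_answer_1, whatever probabilities we plug in.
assert p_answer_2 <= p_answer_1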

So we feel the pull of wrong answers, yet are (sometimes) capable of reasoning toward the correct ones.

Pollock wanted to know why we are built this way. Given that we can use the rules that lead us to the correct answers, why didn’t evolution just design us to do so all the time? Part of his answer -- well supported by the last 50 years of psychological research -- is that most of our beliefs and decisions are the result of ‘quick and inflexible’ (Q&I) inference modules, rather than explicit reasoning. Quickness is an obvious fitness-conferring property, but the inflexibility of Q&I modules means that they are prone to errors as well. (They will, for example, make you overgeneralize, avoiding all spiders, snakes, and fungi rather than just the dangerous ones.)

Interestingly, though, Pollock does not think human irrationality is simply a matter of the error-proneness of our Q&I modules. In fact, he would not see a cognitive system composed only of Q&I modules as capable of irrationality at all. For Pollock, to be irrational, an agent must be capable of both monitoring the outputs of her Q&I modules and overriding them on the basis of explicit reasoning (just as you may have done above). Irrationality, then, turns out to be any failure to override these outputs when we have the time and information needed to do so. Why we are built to often fail at this task is not entirely clear. Pollock speculates that it is a design flaw resulting from the fact that our Q&I modules are phylogenetically older than our reasoning mechanisms.

I think on the surface this is actually a very intuitive account of irrationality, so much so that it is easy to miss the deeper implications of what Pollock has proposed here. Most people think of rationality as a very special human capacity, the ‘normativity’ of which may elude scientific understanding altogether. But for Pollock, rationality is something that every cognitive system has simply by virtue of being driven by a set of rules. Human rationality is certainly interesting in that it is driven by a system of Q&I modules that can be defeated by explicit reasoning. What really makes us different, though, is not that we are rational, but that we sometimes fail to be.

Brie Gertler and I Argue about Introspection on Philosophy TV

here.  For what it's worth, I thought it went pretty well.  We were able to home in on some of our central points of disagreement and push each other on them a bit.

Tuesday, September 28, 2010

Are Ethicists Any More Likely to be Blood or Organ Donors Than Are Other Professors?

Short answer: no.  Not according to self-report, at least.

These results come from Josh Rust's and my survey of several hundred ethicists, non-ethicist philosophers, and professors in other departments. (Other survey results, and more about the methods, are here, here, here, here, here, here, here, and here.)

Before asking for any self-reports of behavior, we asked survey respondents to rate various behaviors on a nine-point scale from "very morally bad" through "morally neutral" to "very morally good". Among the behaviors were:

Not having on one’s driver’s license a statement or symbol indicating willingness to be an organ donor in the event of death,
and
Regularly donating blood.
In both cases, ethicists were the group most likely to praise or condemn the behavior (though the differences between ethicists and other philosophers were not statistically significant).  60% of ethicists rated not being an organ donor on the "morally bad" side of the scale, compared to 56% of non-ethicist philosophers and 42% of non-philosophers (chi-square, p = .001).  And 84% of ethicists rated regularly donating blood on the "morally good" side of the scale, compared to 80% of non-ethicist philosophers and 72% of non-philosophers (chi-square, p = .01).

In the second part of the questionnaire, we asked for self-report of various behaviors, including:
Please look at your driver’s license and indicate whether there is a statement or symbol indicating your willingness to be an organ donor in the event of death,
and
When was the last time you donated blood?
The groups' responses to these two questions were not statistically significantly different: 67% of ethicists, 64% of non-ethicist philosophers, and 69% of non-philosophers reported having an organ donor symbol or statement on their driver's license (chi-square, p = .75); and 13% of ethicists, 14% of non-ethicist philosophers, and 10% of non-philosophers reported donating blood in 2008 or 2009 (the survey was conducted in spring 2009; chi-square, p = .67).  A related question asking how frequently respondents donate blood also found no detectable difference among the groups.
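For readers curious about the mechanics, here is a sketch of the kind of test reported above: a chi-square test of independence on a groups-by-responses contingency table, using SciPy. The raw counts below are invented (the post reports percentages, not group sizes), so treat this as an illustration of the method, not a reanalysis of the data.

from scipy.stats import chi2_contingency

# Rows: ethicists, non-ethicist philosophers, non-philosophers.
# Columns: organ-donor symbol on license (yes, no).
# Hypothetical counts of 100 per group, matching the reported percentages.
counts = [
    [67, 33],
    [64, 36],
    [69, 31],
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.2f}")
# A large p-value, like the reported .75, means the three groups'
# self-reports are statistically indistinguishable from one another.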

These results fit into an overall pattern that Josh Rust and I have found: Professional ethicists appear to behave no better than do other professors.  Among our findings so far:
  • Arbitrarily selected ethicists are rated as overall no morally better behaved by members of their own departments than are arbitrarily selected specialists in metaphysics and epistemology (Schwitzgebel and Rust, 2009).
  • Ethicists, including specialists in political philosophy, are no more likely to vote than are other professors (though Political Science professors are more likely to vote than are other professors; Schwitzgebel and Rust, 2010).
  • Ethics books, including relatively obscure books likely to be borrowed mostly by professors and advanced students in philosophy, are more likely to be missing from academic libraries than are other philosophy books (Schwitzgebel, 2009).
  • Although ethics professors are much more likely than are other professors to say that eating the meat of mammals is morally bad, they are just about as likely to report having eaten the meat of a mammal at their previous evening meal (Splintered Mind post, May 22, 2009).
  • Ethics professors appear to be no more likely to respond to undergraduate emails than are other professors (Splintered Mind post, June 16, 2009).
  • Ethics professors were statistically marginally less likely to report staying in regular contact with their mothers (Splintered Mind post, August 31, 2010).
  • Ethics professors did not appear to be any more honest, overall, in their questionnaire responses, to the extent that we were able to determine patterns of inaccurate or suspicious responding (Splintered Mind post, June 4, 2010).
Nor is it the case, for the normative questions we tested, that ethicists tend to have more permissive moral views.  If anything (as with organ donation), they tend to express more demanding moral views.

All this evidence, considered together, creates, I think, a substantial explanatory challenge for the approximately one-third of non-ethicist philosophers and approximately one-half of professional ethicists who -- in another of Josh's and my surveys -- expressed the view that on average ethicists behave a little morally better than do others from socially comparable groups.

We do have preliminary evidence, however, that environmental ethicists litter less.  Hopefully we can present that evidence soon.  (First, we have to be sure that we are done with data collection.)