Saturday, February 26, 2011

Does Thinking about Applied Ethics Generate Mainly Remorse Rather Than Behavior Change?

That's what Randy Cohen seems to be saying in his farewell column on ethics for the New York Times Magazine:

Writing the column has not made me even slightly more virtuous. And I didn’t have to be: it was in my contract. O.K., it wasn’t. But it should have been. I wasn’t hired to personify virtue, to be a role model for the kids, but to write about virtue in a way readers might find engaging. Consider sports writers: not 2 in 20 can hit the curveball, and why should they? They’re meant to report on athletes, not be athletes. And that’s the self-serving rationalization I’d have clung to had the cops hauled me off in handcuffs.

What spending my workday thinking about ethics did do was make me acutely conscious of my own transgressions, of the times I fell short. It is deeply demoralizing. I presume it qualifies me for some sort of workers’ comp. This was a particular hazard of my job, but it is also something every adult endures — every self-aware adult — as was noted by my great hero, Samuel Johnson, the person I most quoted in the column: “He that in the latter part of his life too strictly inquires what he has done, can very seldom receive from his own heart such an account as will give him satisfaction.” To grow old is to grow remorseful, both on and off duty.

Consider Cohen's sports-writer analogy. Often when I present my work on the moral behavior of ethicists, people (not a majority of the audience, though perhaps a majority of the ethicists in it) respond by tossing out half-baked analogies: Should we expect basketball coaches to be better at basketball? Epistemologists to have more knowledge? Sociology professors to be popular with their peers? The thought behind such analogies appears to be: obviously no, and so the answer is also no in the analogous case of ethics professors.

First: Is it so obviously no? I wouldn't expect sports writers to out-hit professional baseball players, but that's not the issue. The issue is whether writing about sports has *any* relationship to one's sports skills -- and it's not obvious that it wouldn't: We learn sports skills in part by watching; thinking about strategy is not entirely useless; and sports writers might have and sustain an interest in sports that has positive consequences for behavior. The relevant question isn't whether we should expect sports writers to be baseball stars, but whether we should expect them to be a little better at sports, on average, than non-sports writers. Analogously, the question I have posed about ethicists, and that Cohen seems to be posing to himself, is not whether we should expect ethicists to be saints -- the major-league sluggers of morality -- but whether we should expect them to behave a little morally better than others of similar social background, or (not entirely equivalently, of course) than they would have behaved had they not studied ethics.

Understood properly, this question is, I think, neither obvious nor trivial. And Cohen notes that his use of the sports-writer analogy is a rationalization -- suggesting, perhaps, that he thinks it might have been reasonable to expect some change for the better in him as a result of his reflections, a change that failed to materialize.

Cohen says his reflections have mainly left him feeling bad about himself. His tone here seems to me oddly defeatist. Johnson, in the passage Cohen cites, is expressing the inevitability of remorse about the past, which (being past) is unchangeable. But ethical reflection happens midstream. It is as though Cohen is saying that the only effect of reflecting ethically and discovering that one has done some bad thing is to feel bad about oneself, and that it's not realistic to expect actual changes in one's behavior as a result of such reflections. Now, while it might not be realistic to expect Cohen-style applied ethical reflection in ordinary life to transform one overnight into a saint, perhaps we might hope that it would at least nudge one a little toward avoiding, in the future, the sort of behavior about which one now feels remorseful. To abandon such hope, to think that such reflection is necessarily behaviorally ineffectual -- isn't that quite a dark view of the relationship between moral reflection and moral behavior?

Wednesday, February 16, 2011

The Wason Selection Task and the Limits of Human Philosophical Cognition

The famous Wason Selection Task runs like this: You are to imagine four cards, each with a number on one side and a letter on the other side. Of two cards, you see only the letter side and you don't know what's on the number side. Of the other two cards, you see only the number side and don't know what's on the letter side. Imagine the four cards laid out as follows:

[Image of the four cards; source: http://www.psypress.com/groome/figures/]

Here's the question: Which card or cards do you need to turn over to test whether the rule "If there is a K on one side, there is a 2 on the other" is violated? The large majority of undergraduates (and of almost every group tested) gets this question wrong. One common answer: the K and the 2. The correct answer is the K and the 7.

That K and 7 is the correct answer becomes much more intuitively evident when we put the task in a context of social cognition or cheating detection rather than in an abstract context. Imagine that instead of numbers and letters the cards featured beverages and ages, and the rule in question was "If there is an alcoholic beverage on one side, there is an age of 21 or greater on the other". Then the cards would read: Sprite, gin, 32 years old, 17 years old. The large majority of people will correctly say that to check for violations of the rule you need to turn over the gin card and the 17-year-old card. But that task has exactly the same logical structure as the much more difficult letter-number task.
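To make the shared structure explicit, here is a minimal sketch of the falsification logic behind both versions (my illustration, not anything from the psychological literature). Since the missing image doesn't show the fourth card, I've assumed a hypothetical second letter card, "A"; standard presentations use something similar.

```python
from typing import Callable, List, Union

Face = Union[str, int]

def faces_to_check(visible: List[Face], could_violate: Callable[[Face], bool]) -> List[Face]:
    """A card is worth turning over only if its hidden side could
    complete a violation of the rule."""
    return [face for face in visible if could_violate(face)]

# Letters-and-numbers version: "If there is a K on one side, there is a 2 on the other."
# Only a card with K on its letter side and a non-2 on its number side violates the rule.
def letters_violation_possible(face: Face) -> bool:
    if isinstance(face, str):   # we see the letter side
        return face == "K"      # a K might hide a non-2
    return face != 2            # a non-2 (the 7) might hide a K; the 2 can prove nothing

# Drinks-and-ages version: "If alcohol on one side, an age of 21 or greater on the other."
def drinks_violation_possible(face: Face) -> bool:
    if isinstance(face, str):   # we see the beverage side
        return face == "gin"    # alcohol might hide an under-21 age
    return face < 21            # an under-21 age might hide alcohol

print(faces_to_check(["K", "A", 2, 7], letters_violation_possible))          # ['K', 7]
print(faces_to_check(["Sprite", "gin", 32, 17], drinks_violation_possible))  # ['gin', 17]
```

The two calls pick out exactly the analogous cards, which is the sense in which the two tasks are logically identical.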

Psychologists have discussed at length the cognitive mechanisms and what makes some versions of the task easier than others, but the lesson I want to draw is metaphilosophical: We humans stink at abstract thought. Logically speaking, the Wason selection task is incredibly simple. But still it's hard, even for well-educated people -- and even logic teachers familiar with the task need to stop and think a minute, slow down to get it right; they don't find it intuitive. Compare this to how intuitively we negotiate the immense complexities of language and visual perception.

Nor is it just the Wason selection task we are horribly bad at, but all kinds of abstract reasoning. I used to write questions for the Law School Admission Test. One formula I had for a difficult question was simply to offer several answer options expressed in ordinary language, each combining a conditional and a negation, with no intuitive support. The poor victim of one of my questions would first read a paragraph about, say, the economics of Peru. Then I would ask which of the following must be true if everything in the paragraph is true: (a.) Unless crop subsidies are increased, the exchange rate will decline; (b.) The exchange rate will rise only if crop subsidies are not decreased; (c.) If crop subsidies are decreased, the exchange rate will not rise; etc. My own mind would melt trying to figure out which of the options is correct (or even whether some might be equivalent) -- and all we're talking about is a very simple conditional and a negation! So yes, you have me to blame for your less-than-perfect LSAT score if you took the exam in the late 1990s. Sorry!
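In fact, two of those sample options really are equivalent, which a brute-force truth table makes obvious in a way that unaided reflection doesn't. Here's a small sketch (my illustration, not an actual LSAT item) using two atoms, D for "crop subsidies are decreased" and R for "the exchange rate rises"; option (a) involves different atoms (increase, decline), so it stays distinct.

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material conditional: 'if p then q'."""
    return (not p) or q

# Check every truth-value assignment to D and R.
for D, R in product([False, True], repeat=2):
    option_b = implies(R, not D)  # "The rate will rise only if subsidies are not decreased"
    option_c = implies(D, not R)  # "If subsidies are decreased, the rate will not rise"
    assert option_b == option_c   # contrapositives: they never disagree

print("Options (b) and (c) are logically equivalent.")
```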

Okay, now consider... Hegel. Or even something as structurally simple as the Sleeping Beauty Problem.

Most of our philosophical ambitions are far beyond ordinary human cognitive capacity. Our philosophical opinions are thus much more likely to be driven by sociological and psychological factors that have little to do with the real merit of the arguments for and against. This partly explains why we make so little philosophical progress over the centuries. If we are succeeded by a species (biological or robotic) with greater cognitive capacities -- a species that finds the Wason selection task, and abstract combinations of conditionals and negations, intuitively simple -- its members will laugh at our struggles with Sleeping Beauty, with Kant's Transcendental Deduction, with paradoxes of self-reference, etc., in the same way that we, once fully briefed and armed on the Wason selection task, are tempted to (but shouldn't) laugh at all the undergraduates who fail it.

Friday, February 11, 2011

My New Book, Perplexities of Consciousness

... is now out, here at Amazon and here at Barnes & Noble.

Tuesday, February 08, 2011

German and English Philosophers in 1914: "World War Is a Wonderful Idea!"

I was struck by the following passage while reading Fritz Ringer's Decline of the German Mandarins (1969):

Early in August of 1914, the war finally came. One imagines that at least a few educated Germans had private moments of horror at the slaughter which was about to commence. In public, however, German academics of all political persuasions spoke almost exclusively of their optimism and enthusiasm. Indeed, they greeted the war with a sense of relief. Party differences and class antagonisms seemed to evaporate at the call of national duty.... intellectuals rejoiced at the apparent rebirth of "idealism" in Germany. They celebrated the death of politics, the triumph of ultimate, apolitical objectives over short-range interests, and the resurgence of those moral and irrational sources of social cohesion that had been threatened by the "materialistic" calculation of Wilhelmian modernity.

On August 2, the day after the German mobilization order, the modernist Ernst Troeltsch spoke at a public rally. Early in his address, he hinted that "criminal elements" might try to attack property and order, now that the army had been moved from the German cities to the front. This is the only overt reference to fear of social disturbance that I have been able to discover in the academic literature of the years 1914-1916.... the German university professors sang hymns of praise to the "voluntary submission of all individuals and social groups to this army." They were almost grateful that the outbreak of war had given them the chance to experience the national enthusiasm of those heady weeks in August. (pp. 180-181)

With the notable exception of Bertrand Russell (who lost his academic position and was imprisoned for his pacifism), philosophers in England appear to have been similarly enthusiastic. Wittgenstein, it seems, never did anything so cheerily as head off to fight for Austria. Alfred North Whitehead rebuked his friend and co-author Russell for his pacifism and eagerly sent his sons North and Eric off to war. (Eric Whitehead died.)

If there is anything that seems, in retrospect, not to have been a wonderful idea, it was World War I, which destroyed millions of lives to no purpose. (At best, it should have been viewed as a regrettable necessity in the face of foreign aggression; but that was rarely the attitude in 1914, from what I have read.) Philosophers at the time, evidently, were no more capable of seeing the (seemingly immensely obvious) downsides of world war than was anyone else.

You might ask: Why should philosophers have been more capable of seeing what was wrong with World War I? Isn't it entirely unsurprising that they should be just as enthusiastic as the rest of their compatriots?

Here's a model of philosophical reflection on which philosophers' enthusiasm for World War I is unsurprising: Philosophers -- and everyone -- possess their views on the big questions of life for emotional and sociological reasons that have nothing to do with their philosophical theories and philosophical readings. They recruit Kant, Mill, Locke, Rousseau, Aristotle, etc., only after the fact to justify what they would have believed anyway. Moral and political philosophy is nothing but post-hoc rationalization.

Here's a model of philosophical reflection on which philosophers' enthusiasm for World War I is, in contrast, surprising: Reading Kant, Mill, Locke, Rousseau, Aristotle, etc., helps give one a broadly humanitarian view, helps one see that people everywhere deserve respect, pushes one toward a more encompassing and cosmopolitan worldview, helps one gain a little critical perspective on the political currents of one's own time, helps one better see through the rhetoric of demagogues and narrow politicians.

Which vision of philosophy do you prefer?

Thursday, February 03, 2011

Imagery in Front of One's Forehead

I posted briefly on this in 2006, but I continue to be struck by the following fact: When I interview people about their imagery, they often report their images as seeming to be located in front of their foreheads. Often when people say this, they give me a slightly embarrassed look, as though the report surprises them too. Only a minority of subjects report imagery in front of the forehead, but it's a pretty healthy minority (rough estimate: 25%).

I don't think that I am forcing this report on subjects: At first, such reports surprised me, too. Antecedently, I would have thought the most likely possibilities to be: (1.) the image occurs nowhere in egocentric space, not seeming to be subjectively located at all; (2.) the image is subjectively experienced as inside the head; (3.) the image is subjectively experienced as before the eyes. Subjects do sometimes report (1)-(3), though not appreciably more often than in front of the forehead.

As far as I know, there is no serious study of the subjectively experienced location of imagery, including its conditions and variations. Some questions that arise:

(1.) Do people's differences in report about the subjective location of their imagery reflect real differences between people in how they experience imagery, or is the subjective location of imagery, or lack of subjective location, more or less the same for everyone but for some reason difficult to report accurately?

(2.) Can one control the subjective location of imagery? Suppose I am imagining Napoleon vanquished at Waterloo, and it seems to me that that imagery is transpiring inside my head. Can I take that very same imagery and make it transpire, instead, in front of my forehead? (Obviously, I am not talking about where the brain processes occur, but where it seems to me, subjectively, that the imagery is happening, if it seems to me that it is happening somewhere.) Can I make the image transpire down by my right toe? Can I move it over into the next room? Across town? Is there some limit to this?

(3.) Is there a difference between visually imagining an object as being some place relative to you and having one's image of that object transpire in a particular subjective location? Getting clear on this distinction (if it is a valid distinction) seems essential to answering (2) accurately. So, for example, can I have an image in my head, or an image unlocated in subjective space, of someone sitting in the empty chair that I am (really) looking at across the room? If this distinction is hard to conceptualize, consider pictures as an analogy: I can have a picture *in my hand* of myself sitting *in that chair over there*. The picture is located in my hand; its representational content is that I am sitting in that chair over there. Is the same kind of split possible for imagery? I can imagine a mini-Napoleon engaged in a battle near my right toe, while I gaze down at my foot, but that is rather different -- isn't it? -- from having an ordinary image of Napoleon's defeat that is experientially located as transpiring by my right toe. If my imagery is experienced as inside my head, it's not, I think, that I am imagining a mini-Napoleon who has climbed inside my head.

(4.) Is there interference between visual perception of outward objects from some location and the subjective location of one's imagery? So, for example, if your image is subjectively experienced as in the upper right quadrant of your visual field, are you less likely to detect an object that subtly appears in that location? (I have a vague feeling that someone has tested this, though maybe not with the distinction I articulate at (3) clearly in mind. Reminders/references appreciated. [I don't just mean Perky 1910, though that seems relevant.])

(5.) Why would in front of the forehead be a more common location in subjective space than, say, down by the cheeks? Does this reflect some cultural supposition about where images must be? (But if so, where in the culture?) Does it reflect some real phenomenon, perhaps some cognitive efficiency gained, by representing one's imagery experiences as positioned there?

Tuesday, February 01, 2011

Ethicists' Courtesy at Philosophy Conferences

In 2008 and 2009, my collaborators and I stalked philosophy conferences, noting instances of courteous and discourteous behavior. Our aim was to collect evidence about whether ethicists behave any more courteously, on average, than do other philosophers. We used three measures of courtesy:

* talking audibly while the speaker is talking (vs. remaining silent);

* allowing the door to slam shut while entering or exiting mid-session (vs. attempting to close the door quietly);

* leaving behind clutter at the end of a session (vs. leaving one’s seat tidy).

Ethicists did not behave detectably differently by any of the three measures, thereby proving that they are not Confucian sages. (In fact, there was a session on neo-Confucianism among the coded sessions.)
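To illustrate what "not detectably different" amounts to statistically, here's a toy sketch with invented counts (not the study's actual data), comparing door-slamming rates across session types with a standard chi-square test:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: [slammed the door, closed it quietly]
ethics_sessions     = [40, 160]   # 20% of mid-session entrances/exits slammed
non_ethics_sessions = [45, 155]   # 22.5% slammed

chi2, p, dof, expected = chi2_contingency([ethics_sessions, non_ethics_sessions])
print(f"chi-square p-value: {p:.2f}")  # a large p-value: no detectable difference
```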

We assume that disruptively yakking, littering, and slamming doors tend neither to advance the greatest good, nor to flow from universalizable maxims, nor to display morally virtuous character traits. Thus, these results fit with Josh Rust's and my overall finding, across several studies, that ethicists behave no morally better, on average, than do other people of similar social background.

We did find, however, that audiences in environmental ethics sessions tended to litter less.

Full details here.