Tuesday, January 29, 2013

Metaphysical Skepticism a la Kriegel

I'm totally biased. I admit it. But I love this new paper by Uriah Kriegel.

Taking the metaphysical question of the ontology of objects as his test case, Kriegel argues that metaphysical disputes often fail to admit of resolution. They fail to admit of resolution, Kriegel argues, not because such disputes lack substance or are merely terminological, but for the more interestingly skeptical reason that although there may be a fact of the matter which metaphysical position is correct, we have no good means of discovering that fact. I have argued for a similarly skeptical position about the "mind-body problem", that is, the question of the relationship between mind and matter. (Hence my pro-Kriegel bias.) But Kriegel develops his argument in some respects more systematically.

Consider a set of four fundamental particles joined together in a tetrahedron. How many objects are there really? A conservative ontological view might say that really there are just four objects and no more: the four fundamental particles "arranged tetrahedronwise". A liberal ontological view might say that really there are fifteen objects: each of the four particles, plus each of the six possible pairings of the particles, plus each of the four possible triplets, plus the whole tetrahedron. An intermediate ("common sense"?) view might hold that the individual particles are each real objects, and so is the tetrahedron, but not the pairs and triplets, for a total of five objects.

Now who is right? Kriegel envisions three possible approaches to determining where the truth lies: empirical testing, appeal to "intuition", and appeal to theoretical virtues like simplicity and parsimony. However, none of these approaches seems promising.

Contra empirical testing: There is, it seems, no empirical fact on which the conservative and liberal would disagree. It's not like we could bombard the tetrahedron with radiation of some sort and the conservative would predict one thing, the liberal another.

Contra appeal to intuition: "Intuition" is a problematic concept in metaphilosophy. But maybe it means something like common sense or coherence with pre-theoretical opinion. Intuition in this sense might favor the five-object answer (the four particles plus the whole), but that's not entirely clear. However, Kriegel argues, hewing to intuition means doing only what P.F. Strawson calls "descriptive metaphysics" -- metaphysics that aims merely to reveal the structure of reality implicit in people's (possibly misguided) conceptual schemes. If we're aiming to discover real metaphysical truths, and not merely what's already implicit in ordinary opinion, we are doing instead what Kriegel and Strawson call "revisionary metaphysics"; and although descriptive metaphysics is beholden to intuition, revisionary metaphysics is not.

Contra appeal to theoretical virtue: Theoretical virtues like simplicity and parsimony might be pragmatic or aesthetic virtues but, Kriegel argues, there seems to be no reason to regard them as truth-conducive in metaphysics. Is there reason to think that the world is simple, and thus that a simple metaphysical theory is more likely to be true than a complex one? Is there reason to think the world contains few entities, and thus that a parsimonious metaphysical theory that posits fewer entities than its rivals is more likely to be true? Kriegel suggests not.

As I said, I love this paper and I'm sympathetic with its conclusion. But I'm a philosopher, so I can't possibly agree entirely with another philosopher working in the same area. That's just not in our nature! So let me issue two complaints.

First: I'm not sure object ontology is the best test case for Kriegel's view. Maybe there's a real fact of the matter whether there are four, five, or fifteen objects in our tetrahedron, but it's not obviously so. It seems like a good case for reinterpretation as a terminological dispute, if any case is. If Kriegel wants to make a general case for metaphysical skepticism, he might do better to choose a dispute that's less tempting to dismiss as terminological, such as the dispute about whether there are immaterial substances or properties. (In fairness, I happen to know he is working on this now.)

Second: It seems to me that Kriegel commits to more strongly negative opinions about the epistemic value of intuition and theoretical virtue than is necessary or plausible. It sounds, in places, like Kriegel is committing to saying that there's no epistemic value, for metaphysics, in harmony with pre-theoretical intuition or in theoretical virtues like simplicity. These are strong claims! We can admit, more plausibly I think, that intuitiveness, simplicity, explanatory power, etc., have some epistemic value while still holding that the kinds of positions that philosophers regard as real metaphysical contenders tend not to admit of decisive resolution by appeal to such factors. Real metaphysical contenders will conflict with some intuitions and harmonize with others, and they will tend to have different competing sets of theoretical virtues and vices, engendering debates it's difficult to see any hope of resolving with our current tools and capacities.

Consider this: It seems very unlikely that the metaphysical truth is that there are exactly fourteen objects in our tetrahedron, to wit, all of the combinations admitted by the liberal view except for one of the four triplet combinations. Such a view seems arbitrary, asymmetric, unsimple, and intuitively bizarre, compared to the more standard options. If you agree, then you should accept, contra a strong reading of Kriegel's argument, that those sorts of theoretical considerations can take us some distance, epistemically. It's just that they aren't likely to take us all the way.

Wednesday, January 23, 2013

Oh That Darn Rationality, There It Goes Making Me Greedy Again!

Or something like that?

In a series of studies, David G. Rand and collaborators found that participants in behavioral economics games tended to act more selfishly when they reached decisions more slowly. In one study, participants were paid 40 cents and then given the opportunity to contribute some of that money into a common pool with three other participants. Contributed money would be doubled and then split evenly among the group members. The longer participants took to reach a decision, the less they chose to contribute on average. Other studies were similar, some in physical laboratories, some conducted on the internet, some with half the participants forced into hurried decisions and the other half forced to delay a bit, some using prisoner's dilemma games or altruistic punishment games instead of public goods games. In all cases, participants who chose quickly shared more.

I find the results interesting and suggestive. It's a fun study. (And that's good.) But I'm also struck by how high the authors aim in their introduction and conclusion. They seek to address the question: "are we intuitively self-interested, and is it only through reflection that we reject our selfish impulses and force ourselves to cooperate? Or are we intuitively cooperative, with reflection upon the logic of self-interest causing us to rein in our cooperative urges and instead act selfishly?" (p. 427). Ten experiments later, we have what the authors seem to regard as pretty compelling general evidence in favor of intuition over rationality as the ground of cooperation. The authors' concluding sentence is ambitious: "Exploring the implications of our findings, both for scientific understanding and public policy, is an important direction for future study: although the cold logic of self-interest is seductive, our first impulse is to cooperate" (p. 429).

Now it might seem a minor point, but here's one thing that bothers me about most of these types of behavioral economics games on self-interest and cooperation: It's only cooperation with other participants that is considered to be cooperation. What about a participant's potential concern for the financial welfare of the experimenter? If a participant makes the "cooperative" choice in the public goods game, tossing her money into the pool to be doubled and then split back among the four participants, what she has really done is paid to transfer money from the pockets of the experimenter into the pockets of the other participants. Is it clear that that's really the more cooperative choice? Or is she just taking from Peter to give to Paul? Has Paul done something to be more deserving?

Maybe all that matters is that most people would (presumably) judge it more cooperative for participants to milk all they can from the experimenters in this way, regardless of whether in some sense that is a more objectively cooperative choice? Or maybe it's objectively more cooperative because the experimenters have communicated to participants, through their experimental design, that they are unconcerned about such sums of money? Or maybe participants know or think they know that the experimenters have plenty of funding, and consequently (?) they are advancing social justice when they pay to transfer money from the experimenters to other participants? Or...?

These quibbles feed a larger and perhaps more obvious point. There's a particular psychology of participating in an experiment, and there's a particular psychology of playing a small-stakes economic game with money, explicitly conceptualized as such. And it is a leap -- a huge leap, really -- from such laboratory results, as elegant and well-controlled as they might be, to the messy world outside the laboratory with large stakes, usually non-monetary, and not conceptualized as a game.

Consider an ordinary German sent to Poland and instructed to kill Jewish children in 1942. Or consider someone tempted to cheat on her spouse. Consider me sitting on the couch while my wife does the dishes, or a student tempted to copy another's answers, or someone using a hurtful slur to be funny. It's by no means clear that Rand's study should be thought to cast much light at all on cases such as these.

Is our first impulse cooperative, and does reflection make us selfish? Or is explicit reflection, as many philosophers have historically thought, the best and most secure path to moral improvement? It's a fascinating question. We should resist, I think, being satisfied too quickly with a simple answer based on laboratory studies, even as a first approximation.

Tuesday, January 22, 2013

CFP: Consciousness and Experiential Psychology

The Consciousness and Experiential Psychology Section of the British Psychological Society will be meeting in Bristol, September 6-8. They have issued a call for submissions "particularly, but not exclusively" on the topics of non-verbal expression, affective or emotional responses, and subjectivity and imagination.

Details here. I will be one of four keynote speakers.

About a week later (September 12-13), also in Bristol, the Experimental Philosophy Group UK is planning to hold their annual workshop. I'll be keynoting there too. For details on that one, contact Bryony Pierce (bryonypierce at domain: btinternet.com, and/or experimentalphilosophyuk at domain: gmail.com) or check their website for updates.

Make it a two-fer!

Saturday, January 19, 2013

A Memory from Grad School

Circa 1994: Josh Dever would be sitting on a couch in the philosophy graduate student lounge at U.C. Berkeley. I would propose to him a definition of "dessert" (e.g.: "a sweet food eaten after the main meal is complete"). He would shoot it down (e.g.: "but then it would be a priori that you couldn't eat dessert first!"). Later he would propose a definition to me, which I would shoot down. Over time, the definitions became ever more baroque. Other graduate students participated too.

Eventually Josh decided that he would define a dessert as anything served on a dessert plate. Asked what a dessert plate is, he would say it was intuitively obvious. Presented with an objection ("So you couldn't eat Oreos right out of the bag for dessert?") he would simply state that he was willing to "bite the bullet" and accept a certain amount of revision of our pre-theoretical opinions.

At the time it seemed like cheating. In retrospect, I think Josh saw right to the core.

Thursday, January 17, 2013

Being the World's Foremost Expert on X Takes Time

According to the LA Times, Governor Jerry Brown wants to see "more teaching and less research" in the University of California.

I could see the state of California making that choice. Maybe we U.C. professors teach too few undergraduate courses for our state's needs. (I teach on average one undergraduate course per term. I also advise students individually, supervise dissertations, and teach graduate seminars.) But here's a thought. If it is valuable to have some public universities in which the undergraduate teaching and graduate supervision is done by the foremost experts in the world on the topics in question, then you have to allow professors considerable research time to attain and sustain that world-beating expertise. Being among the world's foremost experts on childhood leukemia, or on the neuroscience of visual illusion, or on the history of early modern political philosophy, is not something one can squeeze in on the side, with a few hours a week.

In my experience, it takes about 15 hours a week to run an undergraduate course (longer if it's your first time teaching the course): three hours of lecture, plus lecture prep time, plus office hours, plus reviewing the assigned readings and freshening up on relevant connected literature, plus grading and test design, plus email exchanges with students and course management. And let's suppose that a typical professor works about 50 hours a week. If Professor X at University of California teaches two undergraduate lecture courses per term, that leaves 20 hours a week for research and everything else (including graduate student instruction and administrative tasks like writing recommendation letters, serving on hiring committees, applying for grants, refereeing for journals, keeping one's equipment up to date...). If Professor Y at University of Somewhere Else teaches one undergraduate lecture course per term, that leaves 35 hours a week for research and everything else. How is Professor X going to keep up with Professor Y? Over time, if teaching load is substantially increased, the top experts will disproportionately be at the University of Somewhere Else, not the University of California.

Of course some people manage brilliantly productive research careers alongside heavy undergraduate teaching loads. I mean them no disrespect. On the contrary, I find them amazing! My point above concerns only what we should expect on average.

Tuesday, January 15, 2013

Blind Photographers

Yes, that's right. Some of the world's leading blind photographers! Here at UCR. What is it like to point your camera, print your photos, and present them, all without seeing? What is your method of selection? What is your relation to your art?

photo by Ralph Baker, from http://www.cmp.ucr.edu/exhibitions/sightunseen/

The Moral Behavior of Ethicists and the Rationalist Delusion

A new essay with Joshua Rust.

In the first part, we summarize our empirical research on the moral behavior of ethics professors.

In the second part, we consider five possible explanations of our finding that ethics professors behave, on average, no better than do other professors.

Explanation 1: Philosophical moral reflection has no material impact on real-world moral behavior. Jonathan Haidt has called the view that explicit moral reflection does have a substantial effect on one's behavior "the rationalist delusion".

Explanation 2: Philosophical moral reflection influences real world moral behavior, but only in very limited domains of professional focus. Those domains are sufficiently narrow and limited that existing behavioral measures can't unequivocally detect them. Also, possibly, any improvement in such narrow domains might be cancelled out, in terms of overall moral behavior, by psychological factors like moral licensing or ego depletion.

Explanation 3: Philosophical moral reflection might lead one to behave more morally permissibly but no morally better. The idea here is that philosophical moral reflection might lead one to avoid morally impermissible behavior while also reducing the likelihood of doing any more than is strictly morally required. Contrast the sometimes-sinner sometimes-saint with the person who never goes beyond the call of duty but also never does anything really wrong.

Explanation 4: Philosophical moral reflection might compensate for deficient moral intuitions. Maybe, from early childhood or adolescence, some people tend to lack strongly motivating moral emotions. And maybe some of those people are also drawn to intellectual styles of moral reflection. Without that moral reflection, they would behave morally worse than the average person, but moral reflection helps them behave better than they otherwise would. If enough ethicists are like this, then philosophical moral reflection might have an important positive effect on the moral behavior of those who engage in it, but looking at ethicists as a group, that effect might be masked if it's compensating for lower rates of moral behavior arising from emotional gut intuitions.

Explanation 5: Philosophical moral reflection might have powerful effects on moral behavior, but in both moral and countermoral directions, approximately cancelling out on average. It might have positive effects, for example, if it leads us to discover moral truths on which we then act. But perhaps equally often it becomes toxic rationalization, licensing morally bad behavior that we would otherwise have avoided.

Josh and I decline to choose among these possibilities. There might be some truth in all or most of them. And there are still other possibilities, too. Ethicists might in fact engage in moral reflection relevant to their personal lives no more often than do other professors. Ethicists might find themselves increasingly disillusioned about the value of morality at the same time they improve their knowledge of what morality in fact requires. Ethicists might learn to shield their personal behavior from the influence of their professional reflections, as a kind of self-defense against the apparent unfairness of being held to higher standards because of their choice of profession....

Full essay here. As always thoughts and comments welcome, either by email or attached to this post.

Friday, January 11, 2013

Fame Through Friend-Citation

Alert readers might raise the following objection to my recent post, Fame Through Self-Citation: By the 500th article of our Distinguished Philosopher's career, his reference list will include 499 self-citations. Journal editors might find that somewhat problematic.

If this concern occurred to you, you are just the sort of perceptive and prophetic scholar who is ready for my Advanced Course in Academic Fame! The advanced course requires that you have five Friends. Each Friend agrees to publish five articles per year, in venues of any quality, and each published article will self-cite five times and cite five articles of each other Friend. (In Year One, each Friend will cite the entire Friendly corpus.)

Assuming that the six Friends' publications can be treated serially, ABCDEFABCDEF..., by the end of Year One, Friend A will have 29 + 23 + 17 + 11 + 5 = 85 citations, Friend B will have 80 citations, Friend C 75, Friend D 70, Friend E 65, and Friend F 60. Friend A will have an h-index of 5 and the others will have indices of 4. By the end of Year Six, the Friends will have 4,935 Friendly citations to distribute among themselves, or 822.5 citations each. If they aim to maximize their h-index, each can arrange to have an h-index of at least 25 by then. (That is, each can have 25 articles that are each cited at least 25 times.) This would give them h-indices exceeding the 20 or so characteristic of leading youngish philosophers like Jason Stanley and Keith DeRose. (See Brian Leiter's discussion here.)

By the end of Year 20, the Friends will have 100 articles each and 17,535 citations to distribute among those articles. Wisely-enough distributed, this will permit them to achieve h-indices in excess of 50, higher than the indices of such leading philosophers as Ned Block, David Chalmers, and Timothy Williamson. By the end of their 50-year careers, they will have 7,422.5 Friendly citations each, permitting h-indices in the mid-80s, substantially in excess of the h-index of any living philosopher I am aware of.
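For readers inclined to check the bookkeeping, here's a minimal sketch of the serial-publication model above. (The friend labels, function names, and loop structure are my own illustration of the scheme, not anything drawn from real citation databases.)

```python
# Six Friends publish serially -- A1, B1, ..., F1, A2, ..., F5 -- for 30
# articles in Year One, and each Year One article cites the entire prior
# Friendly corpus, so the article at position p is cited by every later article.
FRIENDS = "ABCDEF"

def year_one_citations():
    order = [f for _ in range(5) for f in FRIENDS]  # 30 serial positions
    cites = {f: 0 for f in FRIENDS}
    for p, f in enumerate(order):
        cites[f] += len(order) - 1 - p  # cited by all later articles
    return cites

def total_pool(years):
    # From Year Two on, each article makes 5 self-citations plus 5 citations
    # to each of the 5 other Friends: 30 citations x 30 articles = 900/year.
    return sum(year_one_citations().values()) + 900 * (years - 1)

print(year_one_citations())  # {'A': 85, 'B': 80, 'C': 75, 'D': 70, 'E': 65, 'F': 60}
print(total_pool(6))         # 4935 -> 822.5 per Friend
print(total_pool(20))        # 17535
print(total_pool(50))        # 44535 -> 7422.5 per Friend
```

The simulation reproduces the figures in the text: 85 Year One citations for Friend A down to 60 for Friend F, 4,935 pooled citations by Year Six, and 7,422.5 per Friend over a 50-year career.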

But this underestimates the possibilities! These Friends will be asked to referee work for the same journals in which they are publishing. They wouldn't be so tacky as to use the sacred trust of their refereeing positions to insist that other authors cite their own work -- but of course they can recommend the important work of their Friends! If each Friend accepts one refereeing assignment per month and successfully recommends publication for half of the articles they referee, contingent upon the authors' citing one article from each of the other five Friends, that will add 180 more citations per year to the pool. Other scholars, seeing the range of authors citing each of the Friends' work, will naturally regard each Friend as a leading contributor to the field, whom they must also therefore cite, creating a snowball effect. Can h-indices of 100 be far away?

(Any resemblance of this strategy to the behavior of actual academics is purely coincidental.)

Fame Through Self-Citation

Let's say you're not picky about quality or venue. It shouldn't be too hard to get five publications a year. And let's say that every one of your publications cites every one of your previously published or forthcoming works. In Year One, you will have 0 + 1 + 2 + 3 + 4 = 10 citations. By Year Two, you will have 5 + 6 + 7 + 8 + 9 = 35 more citations, for 45 total. Year Three gives you 60 more citations, for 105 total. Year Four, 85 for 190; Year Five, 110 for 300; Year Six, 135 for 435. You're more than ready for tenure!

In one of his blog posts about Google Scholar, Brian Leiter suggests that an "h-index" of about 20 is characteristic of "leading younger scholars" like Keith DeRose and Jason Stanley. You will have that h-index by the end of your eighth year out -- considerably faster than either DeRose or Stanley! In your 18th year, your h-index will match that of David Chalmers (currently in his 19th year out). By the end of the 50th year of your long and industrious career, you will have 250 publications, 31,125 total citations, and an h-index of 125, vastly exceeding that of any living philosopher I am aware of (e.g., Dan Dennett), and approaching that of Michel Foucault.
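The arithmetic above is easy to verify with a few lines of code. (This is a sketch of the five-papers-a-year, cite-everything-prior model; the function name is my own.)

```python
# Five publications per year, each citing all of one's previously published
# papers, so paper k ends up cited once by every later paper.
def self_citation_stats(years):
    n = years * 5                          # papers published so far
    cites = [n - 1 - k for k in range(n)]  # citations to each paper
    total = sum(cites)
    # h-index: the largest h such that h papers have at least h citations each
    h = sum(1 for rank, c in enumerate(sorted(cites, reverse=True), start=1)
            if c >= rank)
    return total, h

print(self_citation_stats(1))   # (10, 2): Year One's 0+1+2+3+4 citations
print(self_citation_stats(6))   # (435, 15): ready for tenure
print(self_citation_stats(8))   # (780, 20): h-index hits 20 in Year Eight
print(self_citation_stats(50))  # (31125, 125)
```

In this model the h-index works out to floor(5N/2) after N years, which is why it reaches 20 at the end of Year Eight and 125 at the end of Year Fifty.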

Self-citation: the secret of success!

Update, Jan. 12: See also my Advanced Course in Academic Fame: Fame Through Friend-Citation.

Wednesday, January 09, 2013

The Emotional Psychology of Lynching

Today, for my big lower-division class on Evil (enrollment 300), I'm teaching about Southern U.S. racial lynching in the early 20th century. My treatment centers on lynching photography, especially photos that include perpetrators and bystanders, which were often proudly circulated as postcards.

Here's one photo, from James Allen et al. (2000) Without Sanctuary:

Here are a couple spectator details from the above photo:
Looking at the first spectator detail, I'm struck by the thought that this guy probably thought that the lynching festivities made for an entertaining date with the girls.

Here's the accompanying text from Allen et al.:

The following account is drawn from James Cameron's book, A Time of Terror: Thousands of Indianans carrying picks, bats, ax handles, crowbars, torches, and firearms attacked the Grant County Courthouse, determined to "get those goddamn Niggers." A barrage of rocks shattered the jailhouse windows, sending dozens of frantic inmates in search of cover. A sixteen-year-old boy, James Cameron, one of the three intended victims, paralyzed by fear and incomprehension, recognized familiar faces in the crowd -- schoolmates, and customers whose lawns he had mowed and whose shoes he had polished -- as they tried to break down the jailhouse door with sledgehammers. Many police officers milled outside with the crowd, joking. Inside, fifty guards with guns waited downstairs.

The door was ripped from the wall, and a mob of fifty men beat Thomas Shipp senseless and dragged him into the street. The waiting crowd "came to life." It seemed to Cameron that "all of those ten to fifteen thousand people were trying to hit him all at once." The dead Shipp was dragged with a rope up to the window bars of the second victim, Abram Smith. For twenty minutes, citizens pushed and shoved for a closer look at the "dead nigger." By the time Abe Smith was hauled out he was equally mutilated. "Those who were not close enough to hit him threw rocks and bricks. Somebody rammed a crowbar through his chest several times in great satisfaction." Smith was dead by the time the mob dragged him "like a horse" to the courthouse square and hung him from a tree. The lynchers posed for photos under the limb that held the bodies of the two dead men.

Then the mob headed back for James Cameron and "mauled him all the way to the courthouse square," shoving and kicking him to the tree, where the lynchers put a hanging rope around his neck.

Cameron credited an unidentified woman's voice with silencing the mob (Cameron, a devout Roman Catholic, believes that it was the voice of the Virgin Mary) and opening a path for his retreat to the county jail....

The girls are holding cloth souvenirs from the corpses. The studio photographer who made this postcard printed thousands of copies over the next ten days, selling for fifty cents apiece.

According to Cameron's later account, Shipp and Smith were probably guilty of murdering a white man and raping a white woman. He insists that he himself had fled the scene before either of those crimes occurred. According to historical records, in only about one-third of racial lynching cases were the victims even accused of a grievous crime such as rape or murder. In about one-third of cases, they were accused of non-grievous crime, such as theft. In about one-third of cases, the victims were accused of no real crime at all, only of being "uppity", or of having consensual sexual relations with a white woman, or the victim was a friend or family member of another lynching victim.

Wednesday, January 02, 2013

On Trusting Your Sense of Fun

Maybe, like me, you're a philosophy dork. Maybe, like me, when you were thirteen, you said to your friends, "Is there really a world behind that closed door? Or does the outside world only pop into existence when I open the door?", and they said, "Dude, you're weird! Let's go play basketball." Maybe, like me, when you were in high school you read science fiction and wondered whether an entirely alien moral code might be as legitimate as our own, and this prevented you from taking your World History teacher entirely seriously.

If you are a deep-down philosophy dork, then you might have a certain underappreciated asset: a philosophically-tuned sense of what's fun. You should trust that sense of fun.

It's fun -- at least I find it fun -- to think about whether there's some way to prove that the external world exists. It's fun to see whether ethics books are any less likely to be stolen than other philosophy books. (They're actually more likely to be stolen, it turns out.) It's fun to think about why people used to say they dreamed in black and white, to think about how weirdly self-ignorant people often are, to think about what sorts of bizarre aliens might be conscious, to think about whether babies know that things continue to exist outside of their perceptual fields. At every turn in my career, I have faced choices about whether to pursue what seems to me to be boring, respectable, philosophically mainstream, and at first glance the better career choice, or whether instead to follow my sense of fun. Rarely have I regretted it in the long term when I have chosen fun.

I see three main reasons a philosophy dork should trust her sense of fun:

(1.) If you truly are a philosophy dork in the sense I intend the phrase -- and I assume most readers of this blog are (consider: this is how you're spending your free time?!) -- then your sense of what's fun will tend to manifest some sort of attunement to what really is philosophically worth pursuing. You might not quite be able to put your finger on why it's worth pursuing, at first. It might even just seem a pointless intellectual lark. But my experience is that the deeper significance will eventually reveal itself. Maybe it's just that everything can be explored philosophically and brought around back to main themes, if one plunges deep enough. But I'm inclined to think it's not just that. The true dork's mind has a horse-sense of where it needs to go next.

(2.) It energizes you. Few things are more dispiriting than doing something tedious because "it's good for your career". You'll find yourself wondering whether this is really the career for you, whether you're really cut out for philosophy. You'll find yourself procrastinating, checking Facebook, spacing out while reading, prioritizing other responsibilities. In contrast, if you chase the fun first, you will find yourself positively eager, at a visceral level, to do your research. And this eagerness can then be harnessed back into a sense of responsibility. Finding your weird passion first, and figuring out what you want to say about it, can energize you to go back later and read what others have said about your topic, so you can fill in the references, connect it with previous research, sophisticate your view in light of others' work. It's much more rewarding to read the great philosophers, and one's older contemporaries, when you have a lens to read them through than when you're slogging through them from a vague sense of duty.

(3.) Fun is contagious. So is boredom. Readers are unlikely to enjoy your work and be enthusiastic about your ideas, if even you don't have that joy and enthusiasm.

These remarks probably generalize across disciplines. I think of Richard Feynman's description of how he recovered from his early-career doldrums (see the last fifth of this autobiographical essay).

Tuesday, January 01, 2013

Essays of 2012

2012 was a good research year for me.

These essays appeared in print in 2012:

These essays are finished and forthcoming: