Sunday, July 01, 2012

Deflating the Moral Thermostat

Ethics professors don't appear to behave any better than do non-ethicists. (See here for my most recent work on this topic, done in collaboration with Josh Rust.) On what I've been calling the "inert discovery" model of philosophical moral reflection, moral reflection still tends to lead to the discovery of moral truths.  It just has no improving effect on one's personal behavior.  So, for example, philosophical moral reflection might lead one to think that eating meat is morally bad (which indeed a majority of ethicists do seem to think), but without any material effect on one's cheeseburger consumption rate (as my studies also seem to show).

Now, on the face of it, the inert discovery model has seemed to me empirically somewhat unlikely.  If one discovers that something is morally bad, shouldn't that add to one's motivations not to do it? And shouldn't one then, on average, do it a bit less?  Shouldn't coming to believe that eating meat is morally bad slightly reduce, at least on average, one's cheeseburger consumption rate?

Here's one possible way, though, to make the inert discovery model work. Plausibly, most people do not -- despite the classic Dixieland song -- actually wish to number among the saints. Instead, maybe, most people have what we might think of as a moral thermostat. We want to be about as morally good as our neighbors, or maybe (in our perhaps somewhat deluded self-conception) somewhat better, or at least not among the first-class jerks.  (Different people might have different thermostat settings.) If we do have moral thermostats, that would explain the moral licensing effect -- the tendency to be more likely to do something a little morally bad after having done something a little morally good. Once you have met or exceeded your target level of moral goodness, you can then better justify getting away with some small violation. And to a large extent, for most people, this moral thermostat might be comparative: If other people are cheating, it is easier to justify allowing yourself to also cheat than if no one else is cheating. Who wants to be the sucker saint?

So maybe, then, if an ethicist who comes to accept high moral standards against eating meat, and in favor of donating large amounts of money to famine relief, etc., does not adjust her moral thermostat relative to others, her behavior will not change despite the moral discovery. She wants, say, to be morally better than 80% of those around her. And even before discovering the badness of eating meat, she conceptualized herself that way. Now, if she ceases eating meat while others do not, she will be (in her own conception) better than, say, 85% of others -- heading too far toward sucker-saint territory for her own tastes.

Thus, the result of an ethicist's accepting high moral standards would be not an improvement in personal behavior but rather the adoption of the view that both she and other people are morally worse, by absolute standards, than she had previously thought.


Eric said...

Maybe the inert model is better explained by Billy Joel ("I'd rather laugh with the sinners than cry with the saints") than my "When The Saints Go Marching In".

Scott Bakker said...

Why not the simplest explanation of all: talk is cheap. In this case, even the talk we tell ourselves.

Dual process cognition models suggest that conscious, deliberative thought is more a social display than an epistemic mechanism. A neurocorporate PR department (which, the more we learn, the more it seems (I think) to mirror structural characteristics of REAL corporate PR departments! post hoc and out-of-the-loop). The sad fact is, the hypocrisy you note is the norm across a whole spectrum of issues. If you plug this into the recent game-theory work on the role of reputation in the evolution of cooperation, then you could surmise that practical defections from explicit moral claims are something we unconsciously game against perceived threats to our reputation. Absent the latter, the gut brain sees no real need to take the deliberative brain all that seriously.

The experimental key to this hypothesis would be to place this explicit 'moral talk' in explicit moral situations: What would happen if this particular instance of hypocrisy actually bore real social/reputational consequences?

My guess is that the brain would call a press conference in a hurry!

clasqm said...

Why do you keep picking on ethicists? Do all the Economics professors arrive at your campus in Lamborghinis? Do all the PolSci faculty resign after they've done their doctorate to run for high office? Come to think of it, has anyone with a doctorate in PolSci ever been elected to any post above municipal dog catcher?

There are plenty of smoking doctors around, and the most complete atheists I know are professional theologians. Not all of them, true, but enough.

So people have different talents. Some are really good at making money, running a country, healing people, or talking to God. Then there are those of us who are good at THINKING about those people and what they do.

Perhaps, just perhaps, what we do can help them do their thing a little better. But why assume that what we do must also make it possible for us to do what they do? And why be flabbergasted when that prediction fails to come about?

Folksy sidenote: in the Dutch language there is a proverb that translates as "the shoemaker's children walk in their bare feet".

You don't become a professor of Ethics because you are an ethical, caring person to start with. If you were, would you waste 15-20 years of your life getting a PhD, fighting for tenure etc? No, you would spend that valuable time at professional do-gooding in some godforsaken slum. You become a professor of Ethics (or anything else) because you discover at a tender age that you are able to think about it a little more clearly than other people and that, if you play your cards right, they will actually pay you a salary for it. But if, in your freshman year, the Ethics course had been cancelled, you might have been an anthropologist now.

Very occasionally you might come across someone who is good at thinking about ethics and also good at being ethical. But you might also come across another equally good ethical thinker who happens to be a champion bridge player. Do we need to create a thermostat theory to connect ethical theory to cardplaying ability? Don't think so either. So why do we need one to connect ethics to Ethics? These are different activities. If either influences the other, that is a happy coincidence. In the same way, the bridge player may well feel that card playing sharpens his intellect generally and helps him in his day job, but we don't incorporate it in the curriculum. Well, at least not on most campuses.

Assuming that our cogitation must necessarily result in improved action, not only among the actors, but also among the thinkers, is precisely the kind of thinking that has brought us to the 21st century university, the primary function of which is to create worker drones for the cubicle farms. But let me not get started on that ...

Rolf Degen said...

Ethicists are not the worst, by far: Most ethicists do not lecture other people about the reprehensibleness of, say, eating meat or not giving more money to the poor. They may think so, privately, but keep it to themselves and go on eating cheeseburgers. Journalists are worse: They routinely do cover stories about how horribly wrong (and planet-killing) it is to eat meat. And these are people who eat blood sausages for breakfast. There seems to be some moral licensing in this: Exposing the moral badness in others earns yourself some moral points you can redeem at the butcher.