Monday, June 23, 2014

The Calibration View of Moral Reflection

Oh, when the saints go marching in
Oh, when the saints go marching in
Lord, I want to be in that number
When the saints go marching in.
No. No you don't, Louis. Not really.

If you want to be a saint, dear reader, or the secular equivalent, then you know what to do: Abandon those selfish pleasures, give your life over to the best cause you know (or if not a single great cause then a multitude of small ones) -- all your money, all your time. Maybe you'll misfire, but at least we'll see you trying. But I don't think we see you trying.

Closer to the truth: what you really want, I suspect, is this: Grab whatever pleasures you can here on Earth consistent with just squeaking through the pearly gates. More secularly: Be good enough to meet some threshold, but no better -- not a full-on saint, not at the cost of your cappuccino and car and easy Sundays. Aim to be just a little bit better, maybe, in your own estimation, than your neighbor.

Here's where philosophical moral reflection can come in very handy!

As regular readers will know, Joshua Rust and I have done a number of studies -- eighteen different measures in all -- consistently finding that professors of ethics behave morally no better than do socially similar comparison groups. These findings create a challenge for what we call the booster view of philosophical moral reflection. On the booster view, philosophical moral reflection reveals moral truths, which the person is then motivated to act on, thereby becoming a better person. Versions of the booster view were common in both the Eastern and the Western philosophical traditions until the 19th century, at least as a normative aim for the discipline: From Confucius and Socrates through at least Wang Yangming and Kant, philosophy done right was held to be morally improving.

Now, there are a variety of ways to duck this conclusion: Maybe philosophical ethics neither does nor should have any practical relevance to the philosophers expert in it; or maybe most ethics professors are actually philosophizing badly; or.... But what I'll call the calibration view is, I think, among the more interesting possibilities. On the calibration view, the proper role of philosophical moral theorizing is not moral self-improvement but rather more precisely targeting the (possibly quite mediocre) moral level you're aiming for. This could often involve consciously deciding to act morally worse.

Consider moral licensing in social psychology and behavioral economics. When people do a good deed, they then seem to behave worse in follow-up measures than people who had no opportunity to do a good deed first. One possible explanation is something like calibration: You want to be only so good and no more. An unusually good deed inflates you past your moral target; you can adjust back down by acting a bit jerkishly later.

Why engage in philosophical moral reflection, then? To see if you're on target. Are you acting more jerkishly than you'd like? Seems worth figuring out. Or maybe, instead, are you really behaving too much like a sweetheart/sucker/do-gooder and really you would feel okay taking more goodies for yourself? That could be worth figuring out, too. Do I really need to give X amount to charity to be the not-too-bad person I'd like to think I am? Could I maybe even give less? Do I really need to serve again on such-and-such worthwhile-but-boring committee, or to be a vegetarian, or do such-and-such chore rather than pushing it off on my wife? Sometimes yes, sometimes no. When the answer is no, my applied philosophical moral insight will lead me to behave morally worse than I otherwise would have, in full knowledge that this is what I'm doing -- not because I'm a skeptic about morality but because I have a clear-eyed vision of how to achieve exactly my own low moral standards and nothing more.

If this is right, then two further things might follow.

First, if calibration is relative to peers rather than absolute, then embracing more stringent moral norms might not lead to improvements in moral behavior in line with those more stringent norms. If one's peers aren't living up to those standards, one is no worse relative to them if one also declines to do so. This could explain the cheeseburger ethicist phenomenon -- the phenomenon of ethicists tending to embrace stringent moral norms (such as that eating meat is morally bad) while not being especially prone to act in accord with those stringent norms.

Second, if one is skilled at self-serving rationalization, then attempts at calibration might tend to misfire toward the low side, leading one on average away from morality. The motivated, toxic rationalizer can deploy her philosophical tools to falsely convince herself that although X would be morally good (e.g., not blowing off responsibilities, lending a helping hand) it's really not required to meet the mediocre standards she sets herself and the mediocre behavior she sees in her peers. But in fact, she's fooling herself and going even lower than she thinks. When professional ethicists behave in crappy ways, such mis-aimed low-calibration rationalizing is, I suspect, often exactly what's going on.


Joshua Rust said...

“On the calibration view, the proper role of philosophical moral theorizing is not moral self-improvement but rather more precisely targeting the (possibly quite mediocre) moral level you're aiming for.”

If the proper role of philosophical moral theorizing is moral self-improvement then perhaps the calibration view offers a compelling alternative. There are, of course, many who would deny the antecedent by claiming, with John Rawls, that the proper role of moral theorizing is to arrive at a worked out set of moral principles. It also seems like one could agree with Rawls and still embrace the booster view (so that the booster view doesn’t necessarily imply that one thinks the aim of moral theorizing is practical): while moral truth is the aim, such investigations might have the additional and happy effect of making one morally better.

So I guess the first task is to argue--with the ancients--that the proper role of moral theorizing is practical, not theoretical. Once that is done, I think the calibration view offers a plausible (and sassy!) alternative as to what those practical aims are.

Anonymous said...

It seems to me that the task of discerning the correct moral rules is separate from the task of motivating people to follow the correct moral rules. If you think, with the internalists, that apprehending the correct moral rules conceptually requires some motivation to be moral, it's going to be very puzzling that moral philosophers, who apparently have better access to the moral rules, behave no better. But motivational internalism might not be correct.

Given your evidence that moral philosophers don't act better than laypeople, we have a couple of possible explanations. One explanation is that moral philosophers really don't have better access to moral truth than the layperson. But another explanation (and maybe just as good) is that moral philosophers might be getting closer to moral truth, but that moral truth does not include any information about how to motivate people to act morally. The correct moral standards might be pretty exacting, and morality in general seems to require that we forgo many of our desires. Moral philosophers have no special training in asceticism (perhaps the opposite, given how much most of us drink during grad school!). What they do have training in is recognizing their moral commitments and following those commitments to their logical conclusions.

Eric Schwitzgebel said...

Josh: Right! Although maybe the thing to do is to drop the "the" -- I know, I said it first -- and replace it with "a". *A* or *one* proper aim of philosophical moral theorizing is practical. Probably easier to argue than "the".

Eric Schwitzgebel said...

Anon 11:12: I guess I'm approximately a motivational internalist. I think that, at least as a matter of empirical fact, an action's being morally good is, for most people most of the time, a motivating consideration in its favor. (Internalism is often cast in terms of strict necessity, in which case I have a complicated "dispositional stereotype view", but I don't need to pull that out here, since I don't think I need strict necessity for the issues presently at hand.) So then it is still somewhat puzzling if moral philosophers get closer to the moral truths in their opinions but their behavior doesn't, **on average**, move in that direction at least a little bit. So the calibration view is intended partly as a response to that puzzle.

I do agree that philosophers aren't experts at motivating people, or themselves. In fact, they seem remarkably bad at it. So that surely is part of the story. But I think even just a wee bit of internalism sauce, plus the view that philosophical ethics discovers moral truths, already starts to generate some explanatory pressure in face of ethicists' and non-ethicists' very similar behavior.

Regina Rini said...

I'm wondering how you explain the motivational force of moral 'calibrating'. When you give the non-secular version it makes sense: do as much good as is necessary to get into heaven (because getting into heaven is good) but then don't do any more (because why waste your time when you've already earned heaven). This just sounds like solid prudential reasoning.

But how does it work for the secular version? What do you *get* for being moral up to such-and-such a threshold? What do you stop getting if you are moral beyond that threshold? This doesn't sound like obvious prudential reasoning anymore, once we remove the heavenly reward.

I'm sympathetic to the view that moral motivation is primitive - i.e. we just are (naturally) motivated to behave morally, and there is no sense in trying to cash that motivation out in some other terms. But that leaves the calibration view with some explaining to do. Why (speaking psycho/biologically here) should we be motivated to be good to some threshold but not higher, and yet go on *judging* actions beyond that threshold as good or admirable?

Rolf Degen said...

Nice theory, Eric. But in appealing to moral licensing theory, you land at the heart of the replication crisis currently haunting the field of psychology. Moral licensing did not fare well.

chinaphil said...

Great post. I'm not certain if this is a real problem, but I'm just a little bit concerned about the apparent opposition being set up here between moral behaviour on the one hand and self-interest on the other. It seems like a bit of a strong assumption, and wouldn't hold with, say, Confucius's view of virtue: when he was 70 he no longer desired to do wrong.
I'm not really a Confucian, but I tend to think that moral behaviour is 90% influenced by environment, and in a good environment one's desires start to line up with the moral good.

Eric Schwitzgebel said...

Thanks for the thoughtful comments, folks!

Regina: I'm inclined to think that people are motivated, in part, by something like a self-theory or a self-image -- or a "moral identity". They want to see themselves as at-least-so-moral (maybe absolute, maybe relative to peers), and that can be pretty powerfully motivating, on top of recognizing the factors that motivate moral behavior in general (e.g. caring about others). I'm also inclined to think that being not-more-than-X-much-more-moral-than-my-peers can be motivated by feelings of fairness or the sense that it would be unjust for you to sacrifice more. Some of this is backed up by social psychology, though as Rolf points out research in this area is still pretty tentative.

Eric Schwitzgebel said...

Rolf: Very interesting! I'd missed your post on that. I'll need to look into that more carefully before citing in the future.

Eric Schwitzgebel said...

chinaphil: I love Analects 2.4. Xunzi, as I'm sure you know, says some similar things. I agree that's a terrific ideal, if you can get there, but there's some question about whether that's a rational path to choose. Xunzi's moral program, I'm inclined to think, must start either by brute force or by lies about the prudential value of morally good behavior -- unless society can be so totally controlled (too controlled!) that morally good behavior reliably pays.

Callan S. said...

It sounds more like an evaluation of people who are essentially whipped into 'the right thing to do', or who do so to avoid social whipping. People who don't wanna be there and will optimise their 'fulfilment'/whipping-avoidance-to-absence ratio. Like some kid who hates schoolwork doing the least he can of it to avoid punishment.

Is there an examination out there of this sort of 'ushering', whether it actually converts any to anything other than that? Or whether it's just a profitable whip to wield in and of itself, making it not so pious to do?

Or is it really taken as the best there is? Or possibly again, is taking it as the best there is profitable, because it enables the whip?

Oh yah, I'm pretty lawyer-ly in rolling this stuff around! Workin' all the tax/sin breaks! ;)

Also with the ethicists, vegetarianism and meat were the studies binary? Or was there a measure of how much they ate and perhaps a reduction in meat consumption might be observed?

I mean, I think I heard that the bible has some reference to never sitting in a chair that a menstruating woman has sat in (from some guy who attempted to follow the bible exactly for a year -- he almost got beaten up for throwing a pebble gently at a self-proclaimed adulterer, to fulfil the stoning clauses). Surely the ethicist can see the historical 'morality' of that attitude? But why would one expect an absolute adoption? Though a partial avoidance of such chairs would seem pretty weird to most of us (the guy's wife went and deliberately sat on every chair in their house in reaction to the whole thing -- he had to go buy a new chair for himself), even as a diet with less meat might seem legit.

Why not expect the ethicist to shun all chairs which menstruating women have sat upon, after recognising the historical morality of the issue (making female ethicists' lives fairly awkward -- though of course the bible simply assumed a male reader)?

Sid K said...

A hybrid of the booster and calibration view: Professional ethicists point out and argue for stringent moral norms. This motivates other people to follow these norms. Ethicists are now more moral because they've made other people more moral. Therefore, the ethicists feel less moral force to adopt the moral norms themselves.

Example: Ethicist points out that vegetarianism is good. Many people turn vegetarian after reading her work. She feels that she herself can get by without being vegetarian, because on net, she has caused more good than harm.

Callan S. said...

Kind of a moral ponzi scheme, Sid?

Rolf Degen said...

Right, Callan. That is the same scheme that Peter Singer seems to follow. When it came out that he does not give away as much to the poor as he demands of others, this was his reaction:

"When asked about this, he forthrightly admitted that he was not living up to his own standards. He insisted that he was doing far more than most and hinted that he would increase his giving when everybody else started contributing similar amounts of their incomes."

And about his vegetarianism, who knows?

Eric Schwitzgebel said...

Thanks for the comments, folks! I'm reading them, but my attempts to reply keep crashing.

Callan S. said...

Hope you are saving those posts to notepad, Eric! It's awful to lose well composed posts to the whims of the glitchy server!

Eric Schwitzgebel said...

Callan/Sid/Rolf -- thanks for those comments. I'm back in Riverside, trying to get on track.

On Singer: I have some sympathy for people with high moral standards who don't entirely live up to them, as long as they are making some motion in that direction. Better than people who toxically rationalize into horribly low standards which they then adhere to!

On the Ponzi scheme: I know of one famous ethicist vegetarian who I am told reasoned exactly as Sid says and now eats meat. I'm hesitant to name names without confirmation from the individual himself, but it's not Singer.

On the binary thing and possibility of reduction: The short answer is that it's complicated when you look at the details. (Too complicated for this comment box.) There are a couple of pages on it in the vegetarianism section of my most recent Phil Psych paper with Rust if you want the full answer. That paper also addresses the issue of holding ethicists not to general norms but to their own explicitly endorsed norms. Here again, they seem to be no different from non-ethicists (neither better nor worse correlation between expressed norms and personal behavior than the correlation for non-ethicists).