Oh, when the saints go marching in
Oh, when the saints go marching in
Lord, I want to be in that number
When the saints go marching in.
If you want to be a saint, dear reader, or the secular equivalent, then you know what to do: Abandon those selfish pleasures, give your life over to the best cause you know (or if not a single great cause, then a multitude of small ones) -- all your money, all your time. Maybe you'll misfire, but at least we'll see you trying. But I don't think we see you trying.
Closer to the truth, I suspect, is this: Grab whatever pleasures you can here on Earth consistent with just squeaking through the pearly gates. More secularly: Be good enough to meet some threshold, but not better, not a full-on saint, not at the cost of your cappuccino and car and easy Sundays. Aim to be just a little bit better, maybe, in your own estimation, than your neighbor.
Here's where philosophical moral reflection can come in very handy!
As regular readers will know, Joshua Rust and I have done a number of studies -- eighteen different measures in all -- consistently finding that professors of ethics behave no better, morally, than do socially similar comparison groups. These findings create a challenge for what we call the booster view of philosophical moral reflection. On the booster view, philosophical moral reflection reveals moral truths, which the person is then motivated to act on, thereby becoming a better person. Versions of the booster view were common in both the Eastern and the Western philosophical traditions until the 19th century, at least as a normative aim for the discipline: From Confucius and Socrates through at least Wang Yangming and Kant, philosophy done right was held to be morally improving.
Now, there are a variety of ways to duck this conclusion: Maybe philosophical ethics neither does nor should have any practical relevance to the philosophers expert in it; or maybe most ethics professors are actually philosophizing badly; or.... But what I'll call the calibration view is, I think, among the more interesting possibilities. On the calibration view, the proper role of philosophical moral theorizing is not moral self-improvement but rather more precisely targeting the (possibly quite mediocre) moral level you're aiming for. This could often involve consciously deciding to act morally worse.
Consider moral licensing in social psychology and behavioral economics. When people do a good deed, they then seem to behave worse in follow-up measures than people who had no opportunity to do a good deed first. One possible explanation is something like calibration: You want to be only so good and not more. An unusually good deed inflates you past your moral target; you can adjust back down by acting a bit jerkishly later.
Why engage in philosophical moral reflection, then? To see if you're on target. Are you acting more jerkishly than you'd like? Seems worth figuring out. Or maybe, instead, are you behaving too much like a sweetheart/sucker/do-gooder, when really you'd feel okay taking more goodies for yourself? That could be worth figuring out, too. Do I really need to give X amount to charity to be the not-too-bad person I'd like to think I am? Could I maybe even give less? Do I really need to serve again on such-and-such worthwhile-but-boring committee, or to be a vegetarian, or do such-and-such chore rather than pushing it off on my wife? Sometimes yes, sometimes no. When the answer is no, my applied philosophical moral insight will lead me to behave morally worse than I otherwise would have, in full knowledge that this is what I'm doing -- not because I'm a skeptic about morality but because I have a clear-eyed vision of how to achieve exactly my own low moral standards and nothing more.
If this is right, then two further things might follow.
First, if calibration is relative to peers rather than absolute, then embracing more stringent moral norms might not lead to improvements in moral behavior in line with those norms. If one's peers aren't living up to those standards, one is no worse relative to them if one also declines to do so. This could explain the cheeseburger ethicist phenomenon: ethicists' tendency to embrace stringent moral norms (such as that eating meat is morally bad) without being especially prone to act in accord with them.
Second, if one is skilled at self-serving rationalization, then attempts at calibration might tend to misfire toward the low side, leading one on average away from morality. The motivated, toxic rationalizer can deploy her philosophical tools to falsely convince herself that although X would be morally good (e.g., not blowing off responsibilities, lending a helping hand) it's really not required to meet the mediocre standards she sets herself and the mediocre behavior she sees in her peers. But in fact, she's fooling herself and going even lower than she thinks. When professional ethicists behave in crappy ways, such mis-aimed low-calibration rationalizing is, I suspect, often exactly what's going on.