In a forthcoming article, Piercarlo Valdesolo and David DeSteno report the following experiment: In part one, participants faced one of two tasks -- one a brief and easy survey, the other a difficult and tedious series of mathematics and mental rotation problems -- and they were given the choice between two decision procedures. Either they could choose the task they preferred, in which case (they were led to believe) the next participant would receive the other task, or they could allow the computer to randomly assign them to one of the two tasks, since "some people feel that giving both individuals an equal chance is the fairest way to assign the tasks". Perhaps unsurprisingly, 93% of participants chose simply to give themselves the easy task.
In part two, participants were asked to express opinions about various aspects of the experiment, including rating how fairly they acted (on a 7-point scale from "extremely fairly" to "extremely unfairly"). Some participants completed these questions under normal conditions; others completed the questions under "cognitive load" -- that is, while simultaneously being asked to remember strings of seven digits. A third group did not complete part one, but watched a confederate of the experimenter complete it, rating the confederate's fairness.
Again unsurprisingly, people rated the choice of the easy task as more unfair when they saw someone else make that choice than when they made that choice themselves. But here's the interesting part: They did not do so when they had to make the judgment under the "cognitive load" of memorizing numbers.
Consider two possible models of rationalization. On the first model, we automatically see whatever we do as okay (or at least more okay than it would be if others did it) and the work of rationalization comes after this immediate self-exculpatory tendency. On the second model, our first impulse is to see our action in the same light we would see the same action done by others, and we have to do some rationalizing work to undercut this first impulse and see ourselves as (relatively) innocent. The current experiment appears to support the second model.
I suspect that moral reflection is bivalent -- that sometimes it helps drive moral behavior but sometimes it serves merely to dig us deeper into our rationalizations and is actually morally debilitating. It is by no means clear to me now which tendency dominates. (I was originally inclined to think that moral reflection was overall morally improving, but my continued reflections on the moral behavior of ethics professors are leading me to doubt this.) Valdesolo and DeSteno's experiment and the second model of rationalization fit nicely with the negative side of the bivalent view: The more we devote our cognitive resources to reflecting on the moral character of our past behavior, the more we tend to make false angels of ourselves.