Innocence is indeed a glorious thing; but, on the other hand, it is very sad that it cannot well maintain itself, and is easily seduced. On this account even wisdom -- which otherwise consists more in conduct than in knowledge -- still has need of science [i.e., scholarship], not in order to learn from it, but to secure for its precepts admission and permanence. Against all the commands of duty which reason represents to the human being as so deserving of respect, he feels in himself a powerful counterweight in his needs and inclinations.... Hence there arises a natural dialectic, that is, a disposition to argue against these strict laws of duty... and if possible, to make them more compatible with our wishes and inclinations.... Thus is the common human reason compelled to go out of its sphere and to take a step into the field of practical philosophy... in order to attain in it information and clear instruction respecting the source of its principle, and the correct determination of it in opposition to the maxims which are based on wants and inclinations, so that it may escape from the perplexity of opposite claims, and not run the risk of losing all genuine moral principles through the equivocation into which it easily falls (Abbott & Denis, trans., 1785/2005, pp. 65-66 [404-405]).

The idea appears to be that without philosophical training, our moral judgments are easily led astray by our "wants and inclinations". We will concoct superficially plausible rationalizations to justify actions or principles that support our (often self-serving) desires. If I want to be able to steal a library book, I will concoct a superficial rationalization to justify that. If I am keen on Donald Trump, I will whip up a breezy story according to which his behavior is admirable. Philosophy, because it taps into the true moral law and has the power to see through bad arguments, can help protect us against those tendencies.
I feel the pull of that thought. Yet I worry that, empirically, things might in fact tend to run in the opposite direction on average. Philosophical training might increase the tendency toward self-serving rationalization. It might do so in three ways: (1) by providing more powerful tools for rationalization (more argument styles and competing principles that can be drawn upon), (2) by giving rationalization a broader field of play (by tossing more of morality into doubt), and (3) by providing more psychological occasion for rationalization (by nurturing the tendency to reflect on principles rather than simply take things for granted). Education in moral philosophy might be less a bulwark against rationalization than a training ground for it.
This sort of claim is hard to test empirically, but I have two small pieces of evidence that seem to favor this pessimistic view over Kant's optimistic one:
First, in work forthcoming in Mind & Language, Fiery Cushman and I found that philosophers, more than other professors and more than non-academics, tended to endorse moral principles in labile ways, matching them to psychologically manipulated intuitions about particular cases. Participants in our experiment were presented with moral puzzle cases in one of two orders: an order that favored rating the two cases equivalently and an order that favored treating the two cases as different. Later, participants were asked whether they endorsed or rejected moral principles that favored treating the cases as different. Philosophers, and especially ethicists, showed the greatest order effects on their judgments about moral principles (ethics PhDs showed the largest effect size overall), suggesting a greater-than-average predilection for post-hoc rationalization of their order-manipulated judgments about the individual scenarios.
Second, in work under submission, Joshua Rust and I found that professional ethicists, more than professors in other fields, seemed to exhibit self-congratulatory rationalization in their normative attitudes about replying to emails from students. In our study, all groups of professors (ethicists, non-ethicist philosophers, and non-philosophers) were similar along several dimensions: In a survey, they all claimed very high rates of responsiveness to student emails (the majority claimed 100% responsiveness); the large majority of all groups (84% overall) rated "not consistently responding to student emails" on the bad side of a moral scale; and all groups replied at the same mediocre rate (about 60%) when we actually sent them emails designed to look as though they were from undergraduates. Also, all groups showed the same very weak to non-existent correlations: between self-reported behavior and actually measured email responsiveness, and between expressed normative attitude and actually measured responsiveness. Despite all these similarities, however, there was one very large difference between the groups: Ethicists showed by far the strongest relationship between normative attitude and self-reported email responsiveness. One natural interpretation of these results, we think, is that professors tend to have very poor self-knowledge of their actual rates of responsiveness to student emails, but that ethicists, more than other professors, rationalizingly adjust their norms to match their illusions about their behavior. (It's also possible, though, that ethicists were more likely to adjust their self-reports to match their previously expressed normative attitudes, thus exhibiting either more outward deception or more self-deception.)