(co-written with Jonathan E. Ellis)
Rationalization, as we understand it, involves a mismatch between the justificatory grounds one offers for one's beliefs and the real underlying causes of those beliefs. A particular conclusion is favored in advance, and reasoning is recruited post-hoc, more with the aim of justifying that conclusion than with the aim of ascertaining the truth, whatever it might be. In rationalization, one wants to establish that a favored conclusion is true, or reasonable to believe, and it is this desire, more than a sober assessment of the evidence, that causally explains one's argumentative moves. (See this post for one attempt to lay this out more formally.)
We think it's possible that a substantial portion of moral and philosophical reasoning -- including among professional philosophers (ourselves included) -- involves rationalization in this sense. The desire to defend a preferred conclusion -- a particular point one has recently made in print, for instance, or a favored philosophical thesis ("God exists," "Everything supervenes on the physical," etc.) -- might surreptitiously bias one's assessments of plausibility, one's methodology, one's sense of epistemic obligation, one's memories, one's starting points, and one's feelings of confidence, so thoroughly that that desire, rather than the reasons one explicitly offers, is the best underlying causal explanation of why one accepts the views one does.
Suppose, hypothetically, that lots of moral and philosophical thinking is like that. (We won't defend that hypothetical here.) An interesting epistemic question is, so what? Would it be epistemically bad if moral and philosophical thinking were, to a substantial extent, highly biased post-hoc rationalization?
Here are three reasons one might think rationalization may not be so bad:
(1.) On some topics -- including perhaps ethical and other philosophical topics -- our intuitive judgments may be more trustworthy than our reasoning processes. You know -- without having to work up some abstract argument for it -- that it's (normally) wrong to harvest organs from an unwilling donor; and if the first abstract argument you try to concoct in its defense doesn't quite get you there, it makes perfect sense to hunt around for another argument rather than to decrease confidence in the moral conclusion, and to give much more careful critical scrutiny to arguments in favor of forced organ harvesting.
(2.) Moral and philosophical reasoning is a group enterprise, and the community benefits from having passionate advocates with a variety of opinions, who defend their views come what may. Even if some of those people fail to be epistemically rational at the individual level, they might contribute to group-level rationality. Maybe the scientific psychological community, for example, needs people who support implausibly extreme versions of nativism and empiricism, to anchor the sides of the debate. Moral and philosophical communities might likewise benefit from passionate advocates of unlikely or disvalued positions. (See Kuhn and Longino on scientific communities.)
(3.) Even if rationalization is not epistemically beneficial, it might not be deleterious, at least in the context of professional philosophy. Who cares why a philosopher has the views she does? All that matters, one might think, is the quality of the arguments that are produced. Low-quality arguments will be quickly shot down, and high-quality arguments will survive even if their origins are not psychologically handsome. To use a famous scientific example: It doesn't matter if a vision of the structure of benzene came to you in a dream, as it reportedly did to Kekulé, as long as you can defend your view of that structure after the fact, in dialogue with your peers.
While acknowledging these three points, we think that the epistemic costs of rationalization far outweigh the benefits.
(A.) Rationalization leads to overconfidence. If one favors conclusion P and systematically pursues and evaluates evidence concerning P in a highly biased manner, it's likely (though not inevitable) that one will end up more confident in the truth of P than is epistemically warranted. One might well end up confidently believing P despite the weight of available evidence supporting the opposite of P. This can be especially dangerous when one is deciding whether to, say, convict a defendant, upbraid a student, or perform a morally questionable action.
(B.) Rationalization impedes peer critique. There's a type of dialectical critique that is, we think, epistemically important in moral and philosophical reasoning -- we might call it "engaged" or "open" dialogue -- in which one aims to offer to an interlocutor, for the interlocutor's examination and criticism, one's real reasons for believing some conclusion. One says, "here's why I think P," with the aim of offering considerations in favor of P that simultaneously play two roles: (i.) they epistemically support P (at least prima facie); and (ii.) acceptance of them is actually causally effective in sustaining one's belief that P is the case. Exposing not only your conclusion but also your reasons for favoring that conclusion offers your interlocutor two entry points for critique rather than just one: not only "is P true or well supported?" but also "is your belief that P well justified?" These can come apart, especially when one's interlocutor is neutral about P but rightly confident that one's basis for belief is insufficient. ("I don't know whether the stock market will rise tomorrow, but seeing some guy on TV say it will rise isn't good grounds for believing it will.") Rationalization disrupts this type of peer critique. One's real basis remains hidden; it's not really up for peer examination, not really exposed to the risk of refutation or repudiation. If one's putative basis is undermined, one is likely simply to hunt around for a new putatively justifying reason.
(C.) In an analogous way, rationalization undermines self-critique. An important type of self-critique resembles peer critique. One steps back to explicitly consider one's putative real reasons for believing P, with the idea that reflection might reveal them to be less compelling than one had initially thought. As in the peer case, if one is rationalizing, the putative reasons don't really explain why one believes, and one's belief is likely to survive any potential undercutting of those putative reasons. The real psychological explanation of why you believe remains hidden, unexamined, not exposed to self-evaluation.
(D.) As a bit of a counterweight to point (2) above, concerning community benefits: At the community level, there's much to be said in favor of a non-rationalizing approach to dialogue, in which one aims to frankly and honestly expose one's real reasons. If you and I are peers, the fact that something moves me is prima facie evidence that it should move you too. In telling you what really moves me to favor P, I am inviting you into my epistemic perspective. You might learn something by charitably considering my point of view. Rationalization disrupts this cooperative enterprise. If I offer you rationalizations instead of revealing the genuine psychological grounds of my belief, I render false the premise of your inference: "my interlocutor believes P because of reason R, so I should seriously consider whether I too ought to believe P for reason R."
The force of consideration (1) in favor of rationalization depends on recognizing that intuition can be more trustworthy than argument in some moral and philosophical domains. It's possible to recognize this fact without endorsing rationalization. One approach is to frankly admit that one believes on intuitive grounds. A non-rationalizing argument for the conclusion in question can then be something like: (i.) I find conclusion X intuitively attractive, and (ii.) it's reasonable for me to accept my intuitive judgments in this domain. That argument can then be exposed to interpersonal critique and self-critique.
It's also unclear how much comfort is really justified by consideration (3), concerning quality detection. In moral and philosophical reasoning, quality can be difficult to assess. We are not confident that a philosophical community full of rationalizers would reject only low-quality arguments, especially if patterns of motivated reasoning don't scatter randomly through the community but instead tend to favor certain conclusions over others for reasons other than epistemic merit.
For these reasons we think we ought to be disappointed and concerned if it turns out that our moral and philosophical reasoning is to a large extent merely post-hoc rationalization.
A special thanks to Facebook friends for their helpful thoughts on an earlier version of this post on my wall.