Thursday, January 07, 2016

So What If Moral and Philosophical Reasoning Is Post-Hoc Rationalization?

(co-written with Jonathan E. Ellis)

Rationalization, as we understand it, involves a mismatch between the justificatory grounds one offers for one’s beliefs and the real underlying causes of one’s beliefs. A particular conclusion is favored in advance, and reasoning is recruited post-hoc, more with the aim of justifying that conclusion than with the aim of ascertaining the truth, whatever it might be. In rationalization, one wants to establish that a favored conclusion is true, or reasonable to believe, and it is this desire, more than a sober assessment of the evidence, that causally explains one’s argumentative moves. (See this post for one attempt to lay this out more formally.)

We think it's possible that a substantial portion of moral and philosophical reasoning -- including among professional philosophers (ourselves included) -- involves rationalization in this sense. The desire to defend a preferred conclusion -- a particular point one has recently made in print, for instance, or a favored philosophical thesis ("God exists," "Everything supervenes on the physical," etc.) -- might surreptitiously bias one's assessments of plausibility, one's choices of methodology, one's sense of epistemic obligation, one's memories, one's starting points, one's feelings of confidence, and so forth, so thoroughly that that desire, rather than the reasons one explicitly offers, is the best underlying causal explanation of why one accepts the views one does.

Suppose, hypothetically, that lots of moral and philosophical thinking is like that. (We won't defend that hypothetical here.) An interesting epistemic question is, so what? Would it be epistemically bad if moral and philosophical thinking were, to a substantial extent, highly biased post-hoc rationalization?

Here are three reasons one might think rationalization may not be so bad:

(1.) On some topics -- including perhaps ethical and other philosophical topics -- our intuitive judgments may be more trustworthy than our reasoning processes. You know -- without having to work up some abstract argument for it -- that it's (normally) wrong to harvest organs from an unwilling donor; and if the first abstract argument you try to concoct in defense doesn't quite get you there, it makes perfect sense to hunt around for another argument rather than to decrease confidence in the moral conclusion, and it makes sense to give much more careful critical scrutiny to arguments in favor of forced organ donation.

(2.) Moral and philosophical reasoning is a group enterprise, and the community benefits from having passionate advocates with a variety of opinions, who defend their views come what may. Even if some of those people fail to be epistemically rational at the individual level, they might contribute to group-level rationality. Maybe the scientific psychological community, for example, needs people who support implausibly extreme versions of nativism and empiricism, to anchor the sides of the debate. Moral and philosophical communities might likewise benefit from passionate advocates of unlikely or disvalued positions. (See Kuhn and Longino on scientific communities.)

(3.) Even if rationalization is not epistemically beneficial, it might not be deleterious, at least in the context of professional philosophy. Who cares why a philosopher has the views she does? All that matters, one might think, is the quality of the arguments that are produced. Low-quality arguments will be quickly shot down, and high-quality arguments will survive even if their origins are not psychologically handsome. To use a famous scientific example: It doesn't matter if a vision of the structure of benzene came to you in a dream, as long as you can defend your view of that structure after the fact, in dialogue with your peers.

While acknowledging these three points, we think that the epistemic costs of rationalization far outweigh the benefits.

(A.) Rationalization leads to overconfidence. If one favors conclusion P and systematically pursues and evaluates evidence concerning P in a highly biased manner, it's likely (though not inevitable) that one will end up more confident in the truth of P than is epistemically warranted. One might well end up confidently believing P despite the weight of available evidence supporting the opposite of P. This can be especially dangerous when one is deciding whether to, say, convict the defendant, upbraid the student, or perform a morally questionable action.

(B.) Rationalization impedes peer critique. There's a type of dialectical critique that is, we think, epistemically important in moral and philosophical reasoning -- we might call it "engaged" or "open" dialogue -- in which one aims to offer to an interlocutor, for the interlocutor's examination and criticism, one's real reasons for believing some conclusion. One says, "here's why I think P", with the aim of offering considerations in favor of P that simultaneously play two roles: (i.) they epistemically support P (at least prima facie); and (ii.) acceptance of them is actually causally effective in sustaining one's belief that P is the case. Exposing not only your conclusion but your reasons for favoring that conclusion offers your interlocutor two entry points for critique rather than just one: not only "is P true or well supported?" but also "is your belief that P well justified?" These can come apart, especially in the case where one's interlocutor might be neutral about P but rightly confident that one's basis for belief is insufficient. ("I don't know whether the stock market will rise tomorrow, but seeing some guy on TV say it will rise isn't good grounds for believing it will.") Rationalization disrupts this type of peer critique. One's real basis remains hidden; it's not really up for peer examination, not really exposed to the risk of refutation or repudiation. If one's putative basis is undermined, one is likely simply to hunt around for a new putatively justifying reason.

(C.) In an analogous way, rationalization undermines self-critique. An important type of self-critique resembles peer critique. One steps back to explicitly consider one's putative real reasons for believing P, with the idea that reflection might reveal them to be less compelling than one had initially thought. As in the peer case, if one is rationalizing, the putative reasons don't really explain why one believes, and one's belief is likely to survive any potential undercutting of those putative reasons. The real psychological explanation of why you believe remains hidden, unexamined, not exposed to self-evaluation.

(D.) As a bit of a counterweight to point (2) above, concerning community benefits: At the community level, there's much to be said in favor of a non-rationalizing approach to dialogue, in which one aims to frankly and honestly expose one's real reasons. If you and I are peers, the fact that something moves me is prima facie evidence that it should move you too. In telling you what really moves me to favor P, I am inviting you into my epistemic perspective. You might learn something by charitably considering my point of view. Rationalization disrupts this cooperative enterprise. If I offer you rationalizations instead of revealing the genuine psychological grounds of my belief, I render false the first premise in your inference from "my interlocutor believes P because of reason R, so I should seriously consider whether I too ought to believe P for reason R".

The force of consideration (1) in favor of rationalization depends on recognizing that intuition can be more trustworthy than argument in some moral and philosophical domains. It's possible to recognize this fact without endorsing rationalization. One approach is to frankly admit that one believes on intuitive grounds. A non-rationalizing argument for the conclusion in question can then be something like: (i.) I find conclusion X intuitively attractive, and (ii.) it's reasonable for me to accept my intuitive judgments in this domain. That argument can then be exposed to interpersonal critique and self-critique.

It’s also unclear how much comfort is really justified by consideration (3), concerning quality detection. In moral and philosophical reasoning, quality can be difficult to assess. We are not confident that a philosophical community full of rationalizers is likely to reject only low-quality arguments, especially if patterns of motivated reasoning don't scatter randomly through the community, but tend to favor certain conclusions over others for reasons other than epistemic merit.

For these reasons we think we ought to be disappointed and concerned if it turns out that our moral and philosophical reasoning is to a large extent merely post-hoc rationalization.

--------------------------------------

A special thanks to Facebook friends for their helpful thoughts on this earlier post on my wall:

Proposition: Most philosophical reasoning is post-hoc rationalization (in the sense that the reasons offered aren't in...

Posted by Eric Schwitzgebel on Friday, December 11, 2015

15 comments:

  1. I'm finding it hard to see how your considerations A-D could possibly outweigh point #1, assuming that all your objections to rationalization are themselves also about "reasoning processes." Of course, such a stark variety of intuitionism might also perhaps imply epistemic nihilism about moral judgments, but even that wouldn't do any more than level the playing field between rationalizers and their critics.

  2. I am worried about the epistemic merit of intuitions in topics of moral and philosophical controversy, so landing too hard on (1) does seem to risk nihilism if one also thinks that non-rationalizing reasoning is a pretty weak tool. But I also think one can be an intuitionist without endorsing rationalization, as long as one is sufficiently metaphilosophical (as we suggest near the end of the post). In fact, I think that's a pretty interesting place to be: confess the intuitiveness, then try to evaluate the merit of relying on intuition for cases of this sort. One model here is Joshua Greene's discussion of when "point-and-shoot" morality should, and should not, be expected to be successful.

  3. Great piece. Great topic. The strange thing to note is that our capacity to rationalize must have been adaptive ancestrally. This means your case actually turns on presuming some radical break between our circumstances and those responsible for filtering out our basic capacities.

    One thing you can say is that they were laid down in cognitive ecologies featuring a great degree of interpersonal reliability and interdependence, and it does seem to be the case that our capacity to be honest brokers is correlated with environmental exigencies (I'm thinking of Sperber's account, here). But with philosophy, you have an ecology that seems tailor-made to encourage rationalization. In other words, there's probably a big difference in the kinds of intuitions we have on the bridge of a ship in a storm, say, and in a philosophy colloquium! If aliens offended by our philosophical views were en route to destroy the earth pending some kind of resolution of our mind/body dilemmas, I suspect you would see philosophers playing a far different game of giving and asking for reasons.

    The point is, once you reference the value of rationalization and the epistemic versus nonepistemic efficacies of intuition to cognitive ecologies/contexts, I actually think your pro/con categorization scheme would quickly become *very* complex. You might even want to retreat to a more prudential line, punting on the complexities with some kind of all-things-being-equal argument.

    For me, the problem with rationalization is primarily that our cognitive ecology *has* changed so much. (Greene espouses a similar view regarding moral reasoning, if I remember aright). The cues we counted on to toggle our cooperative/competitive reasoning dispositions are no longer reliably linked to our problems. This has been my big fear vis-à-vis the internet for years now, the way it removes the communicative constraints provided by geography. Tim Allen's character always had to sort his stupidity out with a faceless Wilson before: now he need only google to find choirs and choirs of faceless confirmation. I actually worry Trump's 'maladaptive rationalizations for sale' campaign could be part of what will prove to be a disastrous cultural drift.

  4. ...'steadily identifying and eliminating rationalizations that lead to contradictions'...
    Then in what form will knowledge reveal itself? John Cottingham proposes sensation, emotion and mentation as epistemological reasons for reason... kindred functions in all of us...

    ref: Socratic Method...

  5. How do you risk nihilism by landing too hard on (1)?

    Maybe a lot of assertions are largely flim flam art pieces with occasional thin lines of practical ramifications woven through them (kind of like there's a lot of junk DNA for every bit of viable/practical DNA).

    Or is that nihilistic? Maybe I've been overly influenced, but I think there probably were ecologies where indulging the flim flams either went nowhere in terms of boon or bane, or aided reproduction, or painfully slowly went toward some practical result - but now that ecology might be degrading as much as the ozone layer is degrading.

    Maybe hitting it hard is the ace we've been sitting on for a long time, in preference for a lot of other plays?

  6. Callan: But as the ecologies change, there's reason to be more rather than less worried about the trustworthiness of our intuitive responses, don't you think?

  7. What you and Jonathan say here sounds spot on.

    One element that may mitigate the epistemic perils of rationalization is that it is very often manifest as such to the hearer. Call this "transparent rationalization". When someone tells you, "It's ok that I hit him; he was being a pain in the ass and needed to be taught a lesson" it's pretty clear what's going on. The hearer is unlikely to be misled about either the rationalizer's reasons for hitting or about the proper justification for violence.

    And in a case like this I reckon the rationalizer knows that his cited reason is not his real reason as well. Call this "self-transparent rationalization". So, you may also want to consider the possibility that rationalizers don't themselves believe the content of their rationalizations. (I argue for this in a couple of papers).

  8. Eric, I'm not sure where you're coming from? You'd said 'I am worried about the epistemic merit of intuitions in topics of moral and philosophical controversy, so landing too hard on (1) does seem to risk nihilism if one also thinks that non-rationalizing reasoning is a pretty weak tool.'

    Was there some way I was advocating the trustworthiness of our intuitive responses by asking how we risk nihilism in hitting (1)?

  9. Jason, the example sounds like response rationalisation - "He doesn't get to whack someone - just because!". I.e., the conclusion he doesn't get to do X comes first, then the reasoning for the conclusion comes after.

    Generally the rationaliser believes their conclusion - which road leads to Rome/their conclusion would generally be optional, of course.

  10. Callan: I guess I was misreading you -- I thought your "ace" comment was meant to suggest the idea that we should rely more on intuition.

  11. Eric, yep, misread me - but to switch topic a little, a little bit yeah in regards to intuition! For example, I think it's pointless to save pandas by having them in an iron lung with food intravenously fed to them. You have to preserve their environment (to some degree) along with the species, to preserve the species.

    I think our intuitions may as well be part of our environment, as human beings. They do need to be preserved (to some degree) in order to preserve us.

    But there's going to have to be some failure of preservation (otherwise it's a Lovecraftian retreat back into the ignorance of a new dark age)

    But that's all switching topic on my part - I didn't mean relying more on intuition. Especially while we alter our environment to render traditional intuitions less reliable!

  12. I think there's a serious problem with this line of thought. It seems to me that it assumes that philosophy is a bit like science in that there is some stuff that we know, and if we make our arguments properly, then we'll be able to work out some new stuff. As you argue (rightly, I think), if we are in fact rationalising a lot then we probably aren't making our arguments properly. And thus we'll end up with some wrong conclusions.

    However, this seems to me to be precisely not what philosophy is like. There simply are no grounds that are accepted by everyone. More than that, it is the very nature of philosophy that whatever your grounds are, philosophers don't accept them. I mean this quite literally: I think that it is precisely the job of philosophy to cast doubt on whatever grounds people are using.

    Given the above, what is philosophy for? One plausible response, I think, is that philosophy is for (1) inventing new ideas, and (2) teasing out relationships between them. I can't immediately think how (1) would be affected by rationalisation. (2) would be subject to mixed effects. One effect would be negative, because as you argue above, rationalisation undermines our ability to generate correct arguments. But another effect would be positive: people seem to be really, really good at rationalisation. In fact, we seem to do it automatically; whereas we are notoriously bad at many other kinds of logical thinking. So in a context where what we lack is not utter accuracy but invention (because we've just invented some new ideas, and we now have to link them into existing idea networks), letting rationalisation take some of the strain could be very positive. I suggest that the sheer volume of argumentation that we generate by allowing rationalisation overwhelms the negative effect on quality. And so long as some other mechanism exists to recognise argument quality - peer review, for example - then rationalisation could be a net positive.

  13. Having skimmed your Facebook discussion - I think what I'm arguing is what Eric Winsberg was saying, plus an explicit claim about how it happens that philosophers do their best work on ideas they're interested in, i.e. that generating arguments is hard, and generating a large volume of arguments is valuable, therefore the flawed mechanism of rationalisation has value.

  14. It's not valuable except to bamboozle/used car salesman someone!

  15. That's an interesting perspective, chinaphil. It fits nicely with some of the things I want to say about philosophy as an opportunity to open up possibilities rather than to settle on the one right theory. But I'd hope that one can have one's cake and eat it too: energetically championing views while at the same time not needing to bamboozle oneself and others with overconfident rationalizations.
