Wednesday, February 21, 2018

Rationalization: Why Your Intelligence, Vigilance and Expertise Probably Don't Protect You (with Jon Ellis)

(with Jonathan E. Ellis; originally appeared at the Imperfect Cognitions blog)

We’ve all been there. You’re arguing with someone – about politics, or a policy at work, or about whose turn it is to do the dishes – and they keep finding all kinds of self-serving justifications for their view. When one of their arguments is defeated, rather than rethinking their position, they just leap to another argument, then maybe another. They’re rationalizing – coming up with convenient defenses for what they want to believe, rather than responding even-handedly to the points you’re making. You try to point it out, but they deny it, and dig in more.

More formally, in recent work we have defined rationalization as what occurs when a person favors a particular view as a result of some factor (such as self-interest) that is of little justificatory epistemic relevance, and then engages in a biased search for and evaluation of justifications that would seem to support that favored view.

You, of course, never rationalize in this way! Or, rather, it doesn’t usually feel like you do. Stepping back, you’ll probably admit you do it sometimes. But maybe less than average? After all, you’re a philosopher, a psychologist, an expert in reasoning – or at least someone who reads blog posts about philosophy, psychology, and reasoning. You're especially committed to the promotion of critical thinking and fair-minded reasoning. You know about all sorts of common fallacies, and especially rationalization, and are on guard for them in your own thinking. Don't these facts about you make you less susceptible to rationalization than people with less academic intelligence, vigilance, and expertise?

We argue that the answer is no. You’re probably at least as susceptible to rationalization as the rest of the population, and maybe more so, though the ways it manifests in your reasoning may be different. Vigilance, academic intelligence, and disciplinary expertise are not overall protective against rationalization. In some cases, they might even enhance one’s tendency to rationalize, or make rationalizations more severe when they occur.

While some biases are less prevalent among those who score high on standard measures of academic intelligence, others appear to be no less frequent or powerful. Stanovich, West, and Toplak (2013), reviewing several studies, find that the degree of myside bias is largely independent of measures of intelligence and cognitive ability. Dan Kahan finds that, on several measures, people who use more “System 2” explicit reasoning show higher rates of motivated cognition rather than lower rates (Kahan 2011, 2013; Kahan et al. 2011). And thinkers who are more knowledgeable have more facts to choose from when constructing a line of motivated reasoning (Taber and Lodge 2006; Braman 2009).

Nor does disciplinary expertise appear to be protective. For instance, Schwitzgebel and Cushman (2012, 2015) presented moral dilemma scenarios to professional philosophers and comparison groups of non-philosophers, followed by the opportunity to endorse or reject various moral principles. Professional philosophers were just as prone to irrational order effects and framing effects as were the other groups, and were also at least as likely to “rationalize” their manipulated scenario judgments by appealing to principles post-hoc in a way that would render those manipulated judgments rational.

Furthermore, since the mechanisms responsible for rationalization are largely non-conscious, vigilant introspection is not liable to reveal to the introspector that rationalization has occurred. This may be one reason for the “bias blind spot”: people tend to regard themselves as less biased than others, sometimes even exhibiting more bias by objective measures the less biased they believe themselves to be (Pronin, Gilovich, and Ross 2004; Uhlmann and Cohen 2005). Indeed, efforts to reduce bias and be vigilant can amplify bias. You examine your reasoning for bias, find none because of your bias blind spot, and then inflate your confidence that your reasoning is not biased: “I really am being completely objective and reasonable!” (as suggested in Ehrlinger, Gilovich, and Ross 2005). People with high estimates of their own objectivity might also be less likely to take protective measures against bias (Scopelliti et al. 2015).

Partisan reasoning can be invisible to vigilant introspection for another reason: it need not occur in one fell swoop, at a single moment or in a particular inference. Rather, it can be the result of a series or network of “micro-instances” of motivated reasoning (Ellis, manuscript). Celebrated cases of motivated reasoning typically involve a person whose evidence clearly points to one thing (that it’s their turn, not yours, to do the dishes) but who believes the very opposite (that it’s your turn). But motives can have much subtler consequences.

Many judgments admit of degrees, and motives can have impacts of small degree. They can affect the likelihood you assign to an outcome, or the confidence you place in a belief, or the reliability you attribute to a source of information, or the threshold for cognitive action (e.g., what would trigger your pursuit of an objection). They can affect these things in large or very small ways.

Such micro-instances (call it motivated reasoning lite) can have significant amplificatory effects. This can happen over time, in a linear fashion. Or it can happen synchronically, spread across many assumptions, presuppositions, and dispositions. Or both. If introspection doesn’t reveal motivated reasoning that happens in one fell swoop, micro-instances are liable to be even more elusive.
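To get a feel for how small tilts can compound, here is a toy sketch (our illustration, with invented numbers, not anything from the studies cited). Suppose each of thirty evidential judgments about a genuinely neutral question is nudged by a mere 0.1 in log-odds toward a favored view – far too small a tilt to notice in any single judgment:

```python
# Toy illustration (invented numbers): many tiny motivated nudges,
# each too small to notice, can compound into a large shift in credence.
import math

def credence(log_odds: float) -> float:
    """Convert log-odds to a probability."""
    return 1.0 / (1.0 + math.exp(-log_odds))

N_JUDGMENTS = 30   # hypothetical number of small evidential judgments
NUDGE = 0.1        # hypothetical motivated tilt per judgment, in log-odds

log_odds = 0.0     # start undecided: credence 0.5 on a neutral question
for _ in range(N_JUDGMENTS):
    evidence = 0.0                 # by stipulation, each item is neutral
    log_odds += evidence + NUDGE   # but each evaluation tilts slightly

print(f"Credence after {N_JUDGMENTS} neutral items: {credence(log_odds):.2f}")
# Prints ~0.95: thirty imperceptible nudges move you from 50/50 to near-certainty.
```

No single step in this sketch looks like rationalization; the bias lives entirely in the accumulation.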

This is another reason for the sobering fact that well-meaning epistemic vigilance cannot be trusted to preempt or detect rationalization. Indeed, people who care most about reasoning, or who have a high “need for cognition”, or who attend to their cognitions most responsibly, may be the most affected of all. Their learned ability to avoid the more obvious types of reasoning errors may naturally come with cognitive tools that enable more sophisticated, but still unnoticed, rationalization.

Coming tomorrow: Why Moral and Philosophical Disagreements Are Especially Fertile Grounds for Rationalization.

Full length article on the topic here.

5 comments:

  1. My compliments. This is a really fascinating topic, and I buy the plausibility of the proposed general mechanism by which certain forms of vigilance, expertise, and intelligence fail to protect against rationalization. I like how direct you guys are in tackling such a difficult and important question.
    That said, I would be really surprised if having expertise in, for instance, rationalization or the existence of certain biases in moral reasoning did not have some positive effect in diminishing those very same biases. I buy your point that there are pathways by which rationalization and bias can come back with a vengeance, and I am sensitive to the fact that diminishing a bias in one respect can have unintended consequences in subsequent moral reasoning (as, for instance, in Small, Loewenstein, and Slovic 2006).
    But still, it would be really surprising if being an expert on motivated reasoning did not have certain effects on motivated reasoning. Wouldn't it? Great post. Thanks.

  2. Thanks for the comment, Hugo! You would think it would. But the evidence seems to be that expertise is not protective overall and may even amplify rationalization. I'd guess that *some* forms of rationalization might become less common or attractive, while others become more common or attractive.

    The most shocking lack-of-expertise-advantage in philosophical thinking that I know of is my 2015 study with Cushman, which surprised even us. We found that self-reported experts on Kahneman and Tversky's loss-aversion cases and on philosophical "trolley problems", more than 90% of whom had PhDs in philosophy, showed exactly the same types of framing effects and order effects in their judgments about classic, well-known versions of those problems – even when they were explicitly encouraged to slow down and give thoughtful, reflective answers.

  3. I understand that there is another type of reflective thinking that avoids rationalization:
    avoiding absolutes and certainties, as Dewey and Toulmin recommended – letting go of either/or thinking and trying to ground opinions in understandings that are contingent, situated, and probable; reasonable as well as rational.

  4. Comment from Howard Berman:

    Do people high on openness to experience show some immunity to the vice you mention?
    I think you need to zero in on just where and how rationalizing happens – which I think you're taking a first pass at – to craft a micro-sociology of rationalization. The best thing to do in these cases, rather than bang heads, is to pinpoint just where the differences lie, as honestly as possible. Though I suspect the adversarial or eristic setup of philosophy helps or hurts.

    make of it what it is

    Phi sci magazine is out of business

  5. "You're especially committed to the promotion of critical thinking and fair-minded reasoning."

    This would probably be worth running a poll on - what does this commitment mean? As I assess it, commitment means nothing - it's like having a gym membership. A membership means nothing in itself; it's whether you go to the gym and do the hard work that matters. Critical thinking and fair-minded reasoning are hard work.

    Do people think just committing is enough - a kind of devotionism? Dare I make a comparison to religious devotion in that regard?

    Could a sense that commitment is all that matters contribute to rationalization occurring? That the commitment is all that matters - no hard work need be done, and especially no sense that it's possible not enough hard work was done?

    I'm pretty sure that having a gym membership but not going to the gym doesn't do anything beneficial.
