Friday, December 04, 2015

A Theory of Rationalization

The U.C. Santa Cruz philosopher Jon Ellis and I are collaborating on a paper on rationalization in the pejorative sense of the term. I'm trying to convince Jon to accept the following four-clause definition of rationalization:

A person -- whom, following long philosophical tradition, we dub S -- rationalizes some claim or proposition P if and only if all of the following four conditions hold:

1. S believes that P.

2. S attempts to explicitly justify her belief that P, in order to make her belief appear rational, either to herself or others.

3. In doing 2, S comes to accept one or more justifications for P as the rational grounds of her belief.

4. The causes of S's belief that P are very different from the rational grounds offered in 3.

Some cases:

Newspaper. At the newsstand, the man selling papers accidentally gives Estefania [see here for my name choice decision procedure] a $20 bill in change instead of a $1 bill. Estefania notices the error right away. Her first reaction is to think she got lucky and doesn't need to point out the error. She thinks to herself, "What a fool! If he can't hand out correct change, he shouldn't be selling newspapers." Walking away, she thinks, "And anyway, a couple of times last week when I got a newspaper from him it was wet. I've been overpaying for his product, so this turnabout is fair. Plus, I'm sure almost everyone just keeps incorrect change when it's in their favor. That's just the way the game works." If Estefania had seen someone else receive incorrect change, she would not have reasoned in this way. She would have thought it plainly wrong for the person to keep it.

Wedding Toast. Adrian gives a wedding toast in which she tells an embarrassing story about her friend Bryan. Adrian doesn't think she crossed the line. Yes, the story was embarrassing, but not impermissible as a wedding toast. Shortly afterward, Bryan pulls Adrian aside and says he can't believe Adrian told that story. A couple of months before, Bryan had specifically asked her not to bring that story up, and Adrian had promised not to mention it. Adrian had forgotten that promise when preparing her toast, but she remembers it now that she has been reminded. She reacts defensively, thinking: "Embarrassing the groom is what you're supposed to do at wedding toasts. Bryan is just being too uptight. Although the story was embarrassing, it also shows a good side of Bryan. And being embarrassed like this in front of family and friends is just the kind of thing Bryan needs to help him be more relaxed and comfortable in the future." It is only because Adrian doesn't want to see herself as having done something wrong that she finds this line of reasoning attractive.

The Kant-Hater. Kant's Groundwork for the Metaphysics of Morals -- a famously difficult text -- has been assigned for a graduate seminar in philosophy. Ainsley, a student in that seminar, hates Kant's opaque writing style and the authoritarian tone he thinks he detects in Kant. He doesn't fully understand the text -- who does? -- or the critical literature on it. But the first critical treatment that he happens upon is harsh, condemning most of the central arguments in the text. Because he loathes Kant's writing style, Ainsley immediately embraces that critical treatment and now deploys it to justify his rejection of Kant's views. More sympathetic treatments of Kant, which he later encounters, leave him cold and unwilling to modify his position.

The Racist Philosopher. A 19th century slave-owner, Philip, goes to university and eventually becomes a philosophy professor. Throughout his education, Philip is exposed to ethical arguments against slave-ownership, but he is never convinced by them. He always has a ready defense. That defense changes over time as his education proceeds and his thinking becomes more sophisticated. What remains constant is not any particular justification Philip offers for the ethical permissibility of slave-ownership but rather only his commitment to its permissibility.

These cases might be fleshed out with further plausible details, but on a natural understanding of them the primary causes of the protagonists' beliefs are not the justifications that they (sincerely) endorse for those beliefs. Rather, the causes are that they want to keep the $20, want not to have wronged a close friend at his wedding, dislike Kant's writing style, or have a selfish or culturally ingrained sense of the permissibility of slave-ownership. It is this disconnection between the epistemic grounds that S employs to defend the rationality of believing P and the psychological grounds that actually drive S's belief that P that is the essence of rationalization in the intended sense of the term.

The condition about which Jon has expressed the most concern is Condition 4: "The causes of S's belief that P are very different from the rational grounds offered in 3." I admit there's something that seems kind of fuzzy or slippery about this condition as currently formulated.

One concern: The causal story behind most beliefs is going to be very complicated, so talk about "the" causes risks sweeping in too much (all the causal history) or too little (just one or two things that we might choose because salient in the context). I'm not sure how to avoid this problem. Alternatives like "the explanation of S's belief" or "the real reason S believes" seem to have the same problems and possibly to invite other problems as well.

Another concern: It's not clear what it is for the causes to be "very different" from the rational grounds that S offers. I hope that it's clear enough in the cases above. Here are some reasons to avoid saying, more simply, that the justifications S offers for P are not among the causes of S's belief that P. First, it seems typical of rationalization that once one finds some putative rational grounds for one's belief, those putative grounds have some causal power in sustaining the belief in the future. Second, if one simply couldn't find anything even vaguely plausible in support of P, one might have given up on P -- so the availability of some superficially plausible justifications probably often plays some secondary causal role in sustaining beliefs that primarily arise from other causes. Third, sometimes one's grounds aren't exactly what one says they are, but close enough -- for example, your putative grounds might be your memory that Isaura said it yesterday, while really it was her husband Jeffrey who said it and what's really effective is your memory that somebody trustworthy said it. When the grounds are approximately what you say they are, it's not rationalization.

So the phrase "the causes... are very different" is meant to capture the idea that if you looked at the whole causal picture, you'd say that neither the putative justifications nor close neighbors of them are playing a major role, or the role you might normatively hope for or expect, in causing or causally sustaining S's belief, even as she is citing them as her justifications.

What do you think? Is this a useful way to conceptualize "rationalization"? Although I don't think we need to hew precisely to pre-theoretical folk intuition, would this account imply any particularly jarring violations of intuition about cases of "rationalization"?

I'd also be happy for reading recommendations -- particularly relevant philosophical accounts or psychological results.

Our ultimate aim is to think about the role of rationalization in moral self-evaluation and in the adoption of philosophical positions. If rationalization is common in such cases, what are the epistemic consequences for moral self-knowledge and for metaphilosophy?


-------------------------------------------

For related posts, see What Is "Rationalization?" (Feb. 12, 2007), and Susanna Siegel's series of blog posts on this topic at the Brains blog last year.

23 comments:

  1. I am also on the fence about the fourth premise. It seems to imply that we need to know the actual causes for certain beliefs, which is what you get at when you talk about the "whole causal picture." But even if you had a 10,000 foot view of someone's life, the causes for certain beliefs might never be apparent. For example (and to refer to a previous post of yours), a person that is ableist might have no idea what has caused them to dislike disabled people in the first place. If you looked at their life from 10,000 feet, it might similarly be impossible to determine what has caused that ableism. The ableism is just there; perhaps a part of that person's personality.

    To get around the fourth premise, you might try something like:

    1. S believes that P.

    2. When S first began believing that P, S had no rational grounds for believing that P.

    3. S attempts to explicitly justify her belief that P, in order to make her belief appear rational, either to herself or others.

    4. In doing 3, S comes to accept one or more justifications for P as the rational grounds of her belief.

    It seems like you still run into similar issues as with your fourth premise in terms of figuring out S's original beliefs and why S originally believed that P. But you no longer have to determine the causes of S's belief that P and figure out whether they are very different from her rationalization. Now you simply have to determine that when S's belief began, S had no rational grounds for that belief.

    I think a lot of rationalization stems from immediate feelings about a situation, feelings that S doesn't immediately have any reasons for. To apply to your examples:

    Newspaper. Estefania initially feels that "she got lucky and doesn't need to point out the error." She has no rational grounds for believing this, but it is her initial feeling about the situation. She later has to rationalize these feelings.

    Wedding Toast. Adrian originally felt that her toast was in good taste. It's not that she had any reasons to think her toast was in good taste; that's just her initial reaction to it. Had she stopped to think about it, she might have realized that it was in poor taste, or she might have rationalized why she felt it was in good taste. Her act of trying to flesh out something for which she has no reasons is her rationalizing.

    And so on, and so on.

    The more I think about it, the more I'm starting to think it's not that different from your conditions. I think there might be something here that makes the fourth premise a little easier to swallow, but whether it's much different from your formulation, I'm not sure. Hmm...

  2. Very nice. A few thoughts:

    1) I usually hear "rationalization" used in regard to actions (or not acting, or thinking about acting), not beliefs tout court. Your examples seem to reinforce this. Is there a way to phrase the account as being about justifications for actions, and not beliefs? (Though I guess the having of a particular thought can be an action, too -- but here the action is the basis of the account, not the thought.) Consider: do we ever try to rationalize beliefs that are mere facts? For example, do we ever rationalize that the sky is blue? Or that it's hot in the Sahara?

    2) Related to 1, rationalization always seems to involve a moral dimension. Does that need to be part of the account?

    3) You touch on this in the worries section, but: depending on what you think the springs of action are, it could turn out that many, many, many of our beliefs are merely "rationalized." And if you don't think reasons as such are causes, they are *all* rationalizations. While I'm oddly sympathetic to that view, I worry that it exposes the account as being too wide.

    Offered merely as possibly-helpful thoughts. I really liked this post.

  3. Hi Eric,

    This is cool. One thought that came to mind when I read through your proposed account of rationalization is that it might be too broadly stated.

    Consider the distinction between discovery and justification in the philosophy of science. In some instances this is invoked, roughly, to distinguish between how a theory came to be conceptualized and how it came to be justified. But given the distinction (which is contentious, but probably points to some intuitive distinction that isn't too problematic), it would seem to follow from your account that to justify a theory is to rationalize it. Let the theory (or some claim partly constitutive of the theory) stand for P in your schema. Then insofar as S believes that P because of certain causes that are very different from the grounds for justifying P, it would seem to follow that S's citing these grounds counts as rationalizing. But that would seem to be the wrong thing to say. Justification is not simply the same thing as rationalization. One can justify a theory (or claim) by citing evidence that had no role in one's initial acceptance of the theory (or claim), without this fitting what we normally think of as rationalization. The history of science seems to me to provide examples of this sort.

    I suspect, however, given your stated aims and examples that you are really after an account that covers only cases where P is a belief with normative content--e.g., that I shouldn't give the money back, or that it was appropriate to tell the story. Am I right about this? If so, you could easily note this in the account. And doing so would block the above worry.

    What do you think?

  4. Hi Eric, I'm not sure if the concerns you are raising about the fourth condition are Jon's as well. But one thing that comes to mind is that the concept you are developing here seems clearly externalist in nature, and it's reasonable to ask whether there is anything essentially internal about rationalization. Is it possible, for example, that sometimes you rationalize your way to the correct answer (which would end up denying the necessity of the 4th condition)?

    A more internalist conception of rationalization would be to describe it in Quinean fashion as an internal commitment to resolving cognitive dissonance in such a way that it preserves a particular viewpoint, come what may. Sometimes this actually works out, but you got lucky. You weren't really ever open to the alternatives. (A possible worlds analysis might be illuminating here.)

    Some other niggling concerns.

    (A) I think there is such a thing as explanatory rationalization, so I'm wondering if (2) is too narrowly focused on justification.

    (B) I'm also concerned about the "in order to" clause. I doubt that you mean that this has to be purposive behavior accessible to the agent. So I'm thinking you mean that the reasoning has the particular function for the agent, whether she realizes it or not. When we say that the purpose or function is to appear rational, it suggests quite a bit that you might not intend, such as that the agent is choosing to appear that way rather than be that way (which strikes me as attributing a level of self-awareness and means-end rationality that isn't necessary). Anyway, I'm just wondering if focusing on the strong commitment to conservatism might be a better approach.

    (C) If you're involved in a naturalistic project, then the idea of explicating any concept in a way that corresponds to a particular intuitive valuation of it, good or bad, is problematic. There just isn't any reason to think that our moral judgments are carving nature at its joints.

    (D) Not a concern, just a thought. I was just recently talking to my epistemology students about Gazzaniga's split brain research and the confabulation he was able to induce with the split screen experiments. There is something essentially confabulatory about rationalization I think. That probably is not something you would need to capture in the definition, though.

  5. Sorry, this is one of my favorite topics, so it's hard to stop thinking about it. (Does Jon know Elliot Aronson, btw? I took an undergraduate seminar from him in the early '80s.)

    It seems to me that the concept of rationalization can also be at least partially unpacked as a kind of expected value calculation, where the action being evaluated is a belief. Typically it's a flawed expected value calculation, at least partially due to hyperbolic time discounting. In the long run it would be better if we changed our view, but in the short run it is very painful to do so.

  6. Suppose a mathematician believes some mathematical claim P in a tricky and difficult area of mathematics; she then tries to find arguments justifying P that she and other mathematicians will accept; following a vague and relatively minor hunch that she developed by reflecting on her original belief, she finds an argument that turns out to be quite good, even if there are still some questions left open, and takes that as the rational grounds of her belief; then it seems that (4) is also met, and that she was rationalizing. Perhaps even at some point she comes to conclude that her original reason for believing P was not a good one, although her new argument is. But this doesn't seem to have anything to do with "rationalization in the pejorative sense of the term".

    An additional worry is that the causes and grounds of our beliefs seem to shift a lot, relatively speaking, and never more so than when we are actively working to try to be consistent and reasonable. Arguments we originally regarded as weak, we sometimes come to think were stronger than we gave them credit for; arguments that we originally regarded as strong, we sometimes come to regard as flawed; claims that we originally believed on authority we sometimes eventually come to confirm ourselves; things we eventually believe on weak grounds we sometimes find strong grounds for; hypotheses and theories originally developed in one context or for one purpose occasionally find astounding confirmation in another context or when put to another purpose; etc. We need to avoid accounts of rationalization that apply the label too widely to these kinds of changes; if 'rationalization' just ends up meaning 'a large part of what everyone does in reasoning', it seems no longer to be a concept of much value.

  7. Rational has some relevance near meta-philosophy-physics, but only as rationing...
    As in instinctive physical emotional and mental reasons for rationalizations...
    Disparagingly we see this when reason rejects all reasons except words...

    Ref...interacting-interactive forces...

  8. Thanks for the helpful comments, folks!

    Ryan: I agree that the set-up makes it very difficult in practice to tell whether rationalization has occurred. In a way, though, that's a feature rather than a bug. It *is* difficult! And part of my agenda here is to undermine people's confidence that they are not rationalizing. On the issue of original causes of the belief, one reason I'm disinclined toward that is that I want to allow for cases where people become convinced of the justifications -- genuinely convinced, maybe for good reasons -- and then the justifications do become causally effective in supporting their belief. I'd prefer not to call that rationalization.

    Brandon: This account can apply to rationalizing actions in this way: the belief is the belief that the action was reasonable or morally acceptable. A non-moral case of rationalization might be a metaphysical view in philosophy to which you are irrationally attracted, perhaps because it's the opposite of the view of someone you loathe. I'd rather not commit on the metaphysics of reasons and whether "reasons are causes" -- but there will be psychological states nearby such as responsiveness to those reasons. One advantage of the squirrelly "very different" language is that I can kind of finesse that, I think. And I do think it's quite possible that many of our moral justifications are rationalizations. That's what Haidt might say, for example!

  9. Ben -- In a way this is a complement of Ryan's comment! What I want to do with the types of cases you mention is to say that the causal basis of the belief shifts (and maybe it becomes a case of overdetermination or redundant causation). Once the proper justification is accepted as properly compelling then it becomes a new important supporting cause of your belief, even if it wasn't before. I am going for more than just normative content beliefs. Jon and I want to focus mostly on issues of moral self-knowledge and metaphilosophy, so that's why we chose our examples, but we could have chosen the Critique of Pure Reason for our Kant-hater example quite as easily as the Groundwork of the Metaphysics of Morals.

    Randy: Ha, your internalist version is pretty close to my 2007 theory of rationalization that I link to at the end of the blog. I'm happy with the externalist version now, though! Or maybe some blended position: It's not enough to avoid rationalization that you come up with the correct answer. If you come up with the correct answer it must also become causally efficacious in supporting your belief (see my reply to Ben above).

    Could you expand more on what you're thinking with (A)? On (B), I don't want to make that too conscious or intellectualistic (which is a move away from Jon's initial phrasing of "wanting to") -- rather something like goal-directedness of the sort you can also see in, say, habitual weight shifting in walking. On (C), yeah we're willing to give up on intuition in places. Respecting it fully is not among our aims. On (D): Yes, the Gazzaniga cases seem like excellent examples of rationalization -- though I have some concerns about Gazzaniga's methodology (which we could talk about if you like). Interesting final thought, too. That might help explain especially the rationalizations that aren't wishful-thinking related. With wishful thinking there is also a desire for P to be true, which is a causal factor.

  10. Eric, yes I would be interested in hearing about your doubts regarding Gazzaniga's methodology, when you have the time.

    Regarding my (A), the idea of explanatory rationalization. In thinking it through, I have to say it's a bit tendentious on my part. If we take the object of rationalization as some believed proposition P, then I think justifying the belief that P is probably the best way to characterize the goal of rationalization.

    My own inclination is a bit more holistic. To steal a phrase from you, we have both a drive to justify and a drive to explain. I see rationality itself as the result of interaction between explanatory and justificatory mechanisms. (Explanations provide theories that need to be justified; justification gives us facts that need to be explained.) So when we rationalize, I think we can compromise that process in one way or another in order to, as you say, appear rational.

    But on this way of thinking, it’s more natural to regard P, not as the object of belief, but as a piece of information that has the power to either challenge or reinforce a set of beliefs and attitudes. So I would just talk about a response to P as a rationalization. An example of an explanatory rationalization is what we sometimes call "explaining P away." Which is what Adrian does when she explains away Bryan’s apparently sincere anger as the result of stress rather than as a legitimate beef.

    One positive result of thinking about it this way is that it allows us to speak naturally of rationalizing our attitudes as well as our decisions and behavior. I mean, we can say that Ainsley is justifying his belief that Kant is a bad philosopher, but it really feels more natural to me to say that he is justifying his hatred of Kant. We can say that Estefania is justifying her belief that it is ok to keep the money, but it feels to me more like she is justifying her decision to keep it. So the belief formulation just feels a little Procrustean. Whereas rationalizing in response to some form of information allows a kind of pluralism about the object of rationalization. (This reminds me a little of your pluralistic attitude toward the mechanisms of self-knowledge.)

  11. Interesting!
    A thought about each condition:
    1. Of course one can rationalize one’s actions, intentions, desires, etc. But you are probably just focusing on belief as a simplifying assumption.
    2. I worry about "appear rational". If I do something to, e.g., "appear honest", that already implies that I know that I'm doing something dishonest. Similarly, it seems like if someone is trying to do something to make her belief appear rational, that implies that she already knows that it isn't. But this knowledge doesn't seem required. One can rationalize without knowing that one is rationalizing. Let's call this person the sincere rationalizer. Why not drop "in order to make her belief appear rational"?
    3. There's also the cynical rationalizer (in contrast to the sincere rationalizer). The cynical rationalizer need not really accept the justification posited in 2. The point is simply to get others to accept P by whatever means necessary.
    4. This seems exactly right to me!
    The cynical rationalizers are unlike the rationalizers in your examples (call them straight rationalizers). However, they might be sincere rationalizers.

  12. I think I'd just describe it as starting with a conclusion ('I want X to be true - X is true'), then going backward, inventing/cherry picking supports for that conclusion.

    It might just miss the actual reasons for doing it, simply from being a really poor form of thought.

    BUT, think about it, what is the punishment for breaking a promise about not telling an embarrassing story?

    What if actually the person who had the promise broken is themselves simply going to make up a conclusion of what happens to promise breakers, then invent/cherry pick supports for that conclusion?

    So perhaps rationalising is a reaction to potential rationalising? Particularly when that potential rationalising is making up a punishment. I mean, who wants a punishment out of the blue - we'd all be outraged if a judge just made up a sentence for someone to have a thumb cut off over an unpaid parking fine. So when the person makes up a rationalisation in an attempt to avoid a rationalised punishment, can we really say we're so far away from their position?

    Off topic: Where can one argue this stuff for money/food? lol :)

  13. It seems to me that "rationalize" commonly means "justify", and this can be true of both senses. My other thought was about sour grapes - where it is a post-hoc rationalisation that you don't take seriously anyway.

  14. Thanks for the helpful continuing comments, folks!

    Randy: On Gazzaniga: It was probably ten years ago now, but I dug into the Gazzaniga material trying to find a good description of the methodology and detailed results, and I never did find one. It was always vague and anecdotal on the most famous stuff. (I'm open to corrections!) And second-hand (I hesitate to name names in a public forum), one researcher said he had been having trouble replicating.

    I can see what you mean about "belief that P" being a bit procrustean as the target of rationalization. On the other hand, there's something nice about having a well-defined (or well-ish defined) target, so I think if we *can* squeeze all the cases we want (or that I want) into that formulation, maybe we should stick with it.

    Josh:
    On 1: On rationalizing actions, intentions, and desires. Our (procrustean?) hope would be that we can do it all with belief (or judgment)! So in rationalizing an action, you are actually rationalizing your belief that the action was a good one (or some related belief); in rationalizing a desire, you are rationalizing your belief that it's reasonable to want that thing (or some related belief). One advantage of doing it in terms of believed P is that it's easier to see how an argument or set of rationalizing justifications can have the belief that P as a conclusion.

    On 2: Yes, maybe that implicature is less than ideal. My thought is that it can be kind of insincere to start, but if it stays insincere then Condition 3 won't be met, because you won't really accept the justifications. I didn't really unpack "accept" though. And then, yes, you get the cynical rationalizer -- who doesn't meet Condition 3 -- and so isn't rationalizing in the target sense of the term.

    Callan: Sure, there can be rationalizations on both sides. Why not?

    David: I do think that there is a non-pejorative sense of "rationalization" which labels the phenomenon of justifying in an epistemically responsible way. There is also an interesting range of cases in which you only half-accept the rationalizations that you offer, giving in-between cases for Condition 3. Sour grapes seems a good source of such cases.

  15. Eric,

    Surely it affects the impetus of figuring out anything about rationalisation, if the impetus itself is rationalisation-driven? I'm speaking for anyone reading the material - some might go 'oh, those darn rationalisers!' without any sense of their own response being a made-up-on-the-spot thing.

  16. Callan -- yes, the critique is self-applying!

  17. I would be interested to see how this would be developed in light of your theory of belief.

    Do you think the dispositional stereotype for a belief in say, consequentialism, includes a disposition to develop arguments in favor of this belief? This would include dispositions to pay close attention in observing facts that might ground premises for this argument, perhaps dispositions to ignore or pay less attention to information that might serve as a counter example.

    My suspicion is that, if we really consider the work philosophers do, belief might well turn out to be prior to argument in most cases. Much of our work, in practice, would then amount to vindicating beliefs that emerge from our education, our cultural biases, what not, by developing deductively valid arguments. The only diversity within one cultural tradition would be different ways of rationalizing our beliefs.

    It strikes me that much epistemology (such as the epistemology surrounding representationalism for belief) operates in a similar way. It is an attempt to prove to ourselves what we are already certain of.

    Some texts in Buddhist philosophy mirror some of the conclusions one might draw from your phenomenal dispositionalism. In particular, they draw a distinction between mere changes in belief, in the sense of what we might assent to, and 'deep realizations'. An account of belief might come to show that dispositions to assent to propositions might often result from the actualizing of dispositions to develop arguments, which themselves may result from a 'deeper' belief that is not yet articulated in language.

  18. I find the definition as given quite convincing. But I have a bit of a problem with the project: is this supposed to be a semantic analysis of the word "rationalization" as it exists in the natural language English? If not, what relation does this concept bear to that word? If it is, then where's the linguistic evidence?

    My feeling is that disapproval is a necessary semantic element in a definition of the English word. Generally, I think the disapproval attaches to both the action being rationalized and the mode of justification, but it probably could attach to only one or the other.

  19. Louis, you write:

    "Do you think the dispositional stereotype for a belief in say, consequentialism, includes a disposition to develop arguments in favor of this belief? This would include dispositions to pay close attention in observing facts that might ground premises for this argument, perhaps dispositions to ignore or pay less attention to information that might serve as a counter example."

    The first seems plausible -- that one would be disposed to develop arguments. The disposition to ignore counterarguments and seek mainly confirmatory evidence, I'm not as sure about. Since the stereotype is grounded in folk psychology, it can come apart from the empirical facts. And I want the most central things in the stereotype to be things that (absent excusers) would seem odd if the believer lacked them.

    So if I say "P" and then you say, "what's your reason?" and I say "no reason" -- there is something a little odd about that, normally. But it doesn't seem odd in the same way for a P-believer to be especially interested and struck by prima facie counterevidence to P. That seems more like an optional matter of epistemic style -- does one tend to dismiss counterevidence or tend to find it interesting and troublesome, seriously considering revision in light of it?

  20. chinaphil --

    I'm not intending an analysis of the English meaning but something more like a Carnapian 'explication' -- a more formal characterization that hopefully isn't too far from ordinary usage but is mainly designed to be theoretically valuable. I do want to retain the pejorative tone of the term (unlike, say, Davidson in his use of "rationalization" in saying that beliefs and desires "rationalize" action).

    Admittedly an explication's success is hard to evaluate before seeing the term put to its theoretical work. But jarring violations of ordinary usage, of the sort that might pop out to philosophers reading a blog, are the type of thing that would tend to go in the negative column for an explication (though they can also be outweighed if there are compensating advantages).

  21. Eric,

    I wonder if there's any point at which it stops being self-applying? Or can one go backwards through each perception, finding a rationalisation in each, and in the following evaluation of the evaluation?

  22. Callan -- I think it's possible to never break out of the cycle. I also think we aren't *always* rationalizing. I also think that it's hard to have self-knowledge about such matters!

  23. Reading recommendation for psychological results: Dan Batson's relentless and sobering new book WHAT'S WRONG WITH MORALITY?
