Showing posts with label epistemology.

Friday, November 11, 2022

Credence-First Skepticism

Philosophers usually treat skepticism as a thesis about knowledge. The skeptic about X holds that people who claim to know X don't in fact know X. Religious skeptics think that people who say they know that God exists don't in fact know that. Skeptics about climate change hold that we don't know that the planet is warming. Radical philosophical skepticism asserts broad failures of knowledge. According to dream skepticism, we don't know we're not dreaming. According to external world skepticism, we lack knowledge about the world beyond our own minds.

Treating skepticism as a thesis about knowledge makes the concept or phenomenon of knowledge crucially important to the evaluation of skeptical claims. The higher the bar for knowledge, the easier it is to justify skepticism. For example, if knowledge requires perfect certainty, then we can establish skepticism about a domain by establishing that perfect certainty is unwarranted in that domain. (Imagine here the person who objects to an atheist by extracting from the atheist the admission that they can't be certain that God doesn't exist and therefore they should admit that they don't really know.) Similarly, if knowledge requires knowing that you know, then we could establish skepticism about X by establishing that you can't know that you know about X. If knowledge requires being able to rule out all relevant alternatives, then we can establish skepticism by establishing that there are relevant alternatives that can't be ruled out. Conversely, if knowledge is cheaper and easier to attain -- if knowledge doesn't require, for example, perfect certainty, or knowledge that you know, or being able to rule out every single relevant alternative -- then skepticism is harder to defend.

But we don't have to conceptualize skepticism as a thesis about knowledge. We can separate the two concepts. Doing so has some advantages. The concept of knowledge is so vexed and contentious that it can become a distraction if our interests in skepticism are not driven by an interest in the concept of knowledge. You might be interested in religious skepticism, or climate change skepticism, or dream skepticism, or external world skepticism because you're interested in the question of whether God exists, whether the climate is changing, whether you might now be dreaming, or whether it's plausible that you could be radically mistaken about the external world. If your interest lies in those substantive questions, then conceptual debates about the nature of knowledge are beside the point. You don't want abstract disputes about the KK principle to crowd out discussion about what kinds of evidence we have or don't have for the existence of God, or climate change, or a stable external reality, and how relatively confident or unconfident we should be in our opinions about such matters.

To avoid distractions concerning knowledge, I recommend that we think about skepticism instead in terms of credence -- that is, degree of belief or confidence. We can contrast skeptics and believers. A believer in X is someone with a relatively high credence in X, while a skeptic is someone with a relatively low credence in X. A believer thinks X is relatively likely to be the case, while a skeptic regards X as relatively less likely. Believers in God find the existence of God likely. Skeptics find it less likely. Believers in the external world find the existence of an external world (with roughly the properties we ordinarily think it has) relatively likely while skeptics find it relatively less likely.

"Relatively" is an important word here. Given that most readers of this blog will be virtually certain that they are not currently dreaming, a reader who thinks it even 1% likely that they're dreaming has a relatively low credence -- 99% instead of 99.999999% or 100%. We can describe this as a moderately skeptical stance, though of course not as skeptical as the stance of someone who thinks it's 50/50.

[Dall-E image of a man flying in a dream]

Discussions of radical skepticism in epistemology tend to lose sight of what is really gripping about radically skeptical scenarios: the fact that, if the skeptic is right, there's a reasonable chance that you're in one. It's not unreasonable, the skeptic asserts, to attribute a non-trivial credence to the possibility that you are currently dreaming or currently living in a small or unstable computer simulation. Whoa! Such possibilities are potentially Earth-shaking if true, since many of the beliefs we ordinarily take for granted as obviously true (that Luxembourg exists, that I'm in my office looking at a computer screen) would be false.

To really assess such wild-seeming claims, we should address the nature and epistemology of dreaming and the nature and epistemology of computer simulations. Can dream experiences really be as sensorily rich and realistic as the experiences that I'm having right now? Or are dream experiences somehow different? If dream experiences can be as rich and realistic as what I'm now experiencing, then that seems to make it relatively more reasonable to assign a non-trivial credence to this being a dream. Is it realistic to think that future societies could create vastly many genuinely conscious AI entities who think that they live in worlds like this one? If so, then the simulation possibility starts to look relatively more plausible; if not, then it starts to look relatively less plausible.

In other words, to assess the likelihood of radically skeptical scenarios, like the dream or simulation scenario, we need to delve into the details of those scenarios. But that's not typically what epistemologists do when considering radical skepticism. More typically, they stipulate some far-fetched scenario with no plausibility, such as the brain-in-a-vat scenario, and then ask questions about the nature of knowledge. That's worth doing. But to put that at the heart of skeptical epistemology is to miss skepticism's pull.

A credence-first approach to skepticism makes skepticism behaviorally and emotionally relevant. Suppose I arrive at a small but non-trivial credence that I'm dreaming -- a 0.1% credence for example. Then I might try some things I wouldn't try if I had a 0% or 0.000000000001% credence I was dreaming. I might ask myself what I would do if this were a dream -- and if doing that thing were nearly cost-free, I might try it. For example, I might spread my arms to see if I can fly. I might see if I can turn this into a lucid dream by magically lifting a pen through telekinesis. I'd probably only try these things if I had nothing better to do at the moment and no one was around to think I'm a weirdo. And when those attempts fail, I might reduce my credence that this is a dream.
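To put the decision-theoretic point in rough form (the symbols here are mine, and nothing is meant to be precise): write p for my credence that this is a dream, V for the value of finding that out through a quick test like trying to fly, and c for the cost of the test if I'm awake. Trying is worth it, roughly, when

\[
p \cdot V \;>\; (1 - p) \cdot c.
\]

With p = 0.001 and c close to zero (a few seconds, a little embarrassment), even a modest V clears the bar; with a vanishingly small p like the 0.000000000001% above, it essentially never does.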

If I take seriously the possibility that this is a simulation, I can wonder about the creators. I become, so to speak, a conditional theist. Whoever is running the simulation is in some sense a god: They created the world and presumably can end it. They exist outside of time and space as I know them, and maybe they have "miraculous" powers to intervene in events around me. Perhaps I have no idea what I could do that might please or displease them, or whether they're even paying attention, but still, it's somewhat awe-inspiring to consider the possibility that my world, our world, is nested in some larger reality, launched by some creator for some purpose we don't understand. If I regard the simulation possibility as a live possibility with some non-trivial chance of being true, then the world might be quite a bit weirder than I would otherwise have thought, and very differently constituted. Skepticism gives me material uncertainty and opens up genuine doubt. The cosmos seems richer with possibility and more mysterious.

We lose all of this weirdness, awe, mystery, and material uncertainty if we focus on extremely implausible scenarios to which we assign zero or virtually zero credence, like the brain-in-a-vat scenario, and focus our argumentative attention only on whether or not it's appropriate to say that we "know" we're not in those admittedly extremely implausible scenarios.

Thursday, October 13, 2022

The Rational Pressure Argument

guest post by Neil Van Leeuwen

Many people feel some rational pressure to get their religious beliefs to cohere with their more mundane factual beliefs. 

The Mormon Church’s position on the ancestry of American Indians furnishes a telling example. Its original view, stemming from the Book of Mormon, appears to have been that American Indians descended from Lamanites, who were supposed to be an offshoot of Israelites that migrated to the Americas when the Ten Tribes went into exile. Some Church texts even just refer to American Indians as Lamanites. 

Yet over time the position has shifted, as the Church started to recognize the overwhelming evidence of migrations from Asia across the Bering Strait as the source of American Indian populations. The Lamanites were thus later billed as “the principal ancestors of American Indians.” Then, in 2006, the Lamanites were put down as “among” the ancestors of American Indians.

That shift allowed the LDS Church to maintain both (a) its traditional narrative to the effect that American Indians descended from Lamanites and (b) that that narrative is not inconsistent with thoroughly documented facts about the Asiatic origins of American Indian populations (archeological facts, DNA evidence, etc.). 

Even the current, watered-down Lamanite narrative, one must grant, has no outside evidence in its favor whatsoever. But that’s not the point. The point is that architects of LDS Church doctrine, on acquiring greater levels of knowledge about the actual provenance of American Indians (encoded in the factual beliefs in their heads on the matter), felt at least some rational pressure to bring their religious beliefs into coherence (or at least lack of obvious inconsistency) with their evidence-based factual beliefs.

The existence of such felt rational pressure was the topic of an exchange between Thomas Kelly and me at the Princeton Belief Workshop in June, which Eric described in his fascinating blog on that exchange last month. 

A little background for those new to this. 

I have long argued that there exists a cognitive attitude I call religious credence, which is effectively a form of imagining that partly constitutes one’s group identity. Religious credence is typically called “belief” in everyday parlance. But its functional dynamics are far different from those of ordinary factual belief: it’s more under voluntary control, more compartmentalized, less widely used in supporting inference, and far less vulnerable to evidence. Religious credence is thus a distinct cognitive attitude from factual belief. And not only should philosophers and psychologists regard religious credence and factual belief as distinct; emerging psycholinguistic and experimental evidence suggests that lay people already do regard them differently. 

To apply my view to our running example, consider these two attitude reports. (Let’s imagine that Jim is a member of the LDS Church and also an historian.)

1) Jim thinks that there was a migration across the Bering Strait from Asia to North America.

2) Jim believes that American Indians descended from the Lamanite offshoot of the lost Ten Tribes of Israel. 

Since attitude and content are independent features of mental states, there is nothing necessary about religious credence being the attitude that goes with what are typically thought of as religious contents or about factual belief being the attitude that goes with what are typically thought of as factual contents. In fact, there are plenty of attitude-content crossovers, and it is a strength of my view that it can characterize such crossovers clearly. Still, content is heuristic of attitude type, so it is fair to attribute to me the view that the attitude reported in 1) is most likely factual belief and the one reported in 2) is most likely religious credence. 

Here’s the objection that Tom raised, as paraphrased in Eric’s blog. (Note that Tom didn’t bring up the Mormon Church example that I focus on here, but it’s still a useful illustration of what he was talking about.)

If [religious] credence and [factual] belief really are different types of attitude, why does it seem like there’s rational tension between them? Normally, when attitude types differ, there’s no pressure to align them: It’s perfectly rational to believe that P and desire or imagine that not-P. You can believe that it is raining and desire or imagine that it is not raining. But with belief and credence as defined by Van Leeuwen, that doesn’t seem to be so. 

In other words, the fact that Jim feels pressure to modify the attitude reported in 2) so that it at least does not contradict the attitude reported in 1) suggests that the attitudes reported in both are of the same type—contrary to what my distinction suggests. 

Let’s call this The Rational Pressure Argument. I’ve encountered objections like it in the past, but none of them have been put quite as well as this one from Tom via Eric. For what it’s worth, I regard it as the most interesting objection to my view that exists.

Yet it fails. 

To see how, let’s consider The Rational Pressure Argument laid out a bit more formally. This version of it will deploy our running example, but the assumption of one who makes the argument would be that the relevant bits generalize. (Let “CA” be a verb for the cognitive attitude whose status is in question: e.g., “Jim CAs that p” means that Jim has this attitude with p as its content.)

1. Jim CAs the religious stories and doctrines of his Church. [premise]

2. Jim has factual beliefs whose contents bear on how likely it is that those stories and doctrines are true. [premise]

3. Jim feels pressure to bring the cognitive attitudes described in 1 into rational coherence with the factual beliefs described in 2 (at least to the point of eliminating obvious contradictions). [premise]

4. People do not feel pressure to bring attitudes distinct from factual belief (e.g., imagining, desire) into rational coherence with factual belief. [premise]

5. The cognitive attitudes described in 1 are factual beliefs. [from 1-4]

To make this logically tight, various points would have to be filled in, but that could be done easily enough and needn’t be done for purposes of this blog. So let’s grant the validity of the argument. 

The question then is whether the generalized versions of the premises are true—in particular 3 and 4. 

In the exchange during the workshop, I cited ethnographic research that suggests that people like Jim—being WEIRD—are not that representative of people around the world in feeling rational pressure to remove inconsistency between religious credence and factual belief. In other words, I cast doubt on the generalizability of premise 3, a point by which I still stand. But the argument would still be irksome to my position, even if Jim were only representative of a culturally limited set of people. So here I want to make a different point: premise 4 is not generally true.

The reason is that a number of cognitive attitudes that aren’t factual belief are nevertheless subject to rational pressure not to contradict one’s factual beliefs. It’s probably true that, as Eric points out, everyday imagining (daydreaming, fantasizing, etc.) isn’t subject to any such pressure (though it does default to being somewhat constrained). But we shouldn’t generalize from that fact about everyday imagining to all types of cognitive attitudes.

Take the attitude of guessing (whatever that amounts to). One can guess that there are 10,002 jellybeans in the jar, or 10,003 jellybeans in the jar, etc. Guessing is voluntary; one’s guesses don’t generally figure into the informational background that guides inference; and so on. Much distinguishes guessing from factual belief, which is why it is a different attitude. Nevertheless, one typically feels some rational pressure for one’s guesses not to contradict one’s factual beliefs: if I learn and hence come to factually believe that there are fewer than 8,000 jellybeans in the jar, this strongly inclines me to revise my guess to below that number. 

Similar points could be made about attitudes like hypothesizing or accepting in a context (see Bratman, 1992): though they are not the attitude of factual belief, they are often accompanied by some rational pressure not to flagrantly contradict factual belief.

So The Rational Pressure Argument can’t be correct; accepting it would commit us to the false view that guessing (etc.) amounts to factually believing—contrary to fact. We thus needn’t feel any pressure from the argument to collapse my distinction between religious credence and factual belief: yes, in some cultural contexts there may be rational pressure for religious credences to have contents that at least don’t contradict factual beliefs; but no, that doesn’t imply that there is only one attitude type there. So given all the other reasons there are for drawing that distinction, we can leave it in place. 

Where does that leave us?

I have much to say here. But basically I think this discussion leaves us with a cluster of large open research questions:

• What is the character of the rational pressure that people seem to feel to not have their religious credences overtly contradict their factual beliefs? It seems to be much weaker, even when it is present at all, than the pressure for factual beliefs to cohere with one another. What else can we say about it?

• Why does this rational pressure seem to be more prevalent in some cultural contexts than others? Rational pressure to make factual beliefs cohere with one another is, as far as I can tell, universal. But rational pressure to make religious credences cohere with factual beliefs is, as far as I can tell, far from universal. What wanderings of cultural evolution made it crop up more in some places than others?

• And finally, the big normative question: even if people don’t in many cases—in point of psychological fact—feel that much rational pressure to bring their religious credences into coherence with their factual beliefs, should they? 

I look forward to seeing more and more research on the first two (descriptive) sorts of questions in the coming years, and I will be participating in such research myself. The third, normative question is perennial and outside the scope of the kind of research I usually do. Yet I hope my descriptive work furnishes conceptual resources for posing such normative questions with greater precision than has been available in the past. If it does that, I believe I’ll have done something useful.

[image: Dall-E rendering of "oil painting of guessing versus believing"]

Friday, December 11, 2020

On Self-Defeating Skeptical Arguments

Usually, self-defeating arguments are bad. If I say "Trust me, you shouldn't trust anyone", my claim (you shouldn't trust anyone), if true, undermines the basis I've offered in support (that you should trust me). Whoops!

In skeptical arguments, however, self-defeat can sometimes be a feature rather than a bug. Michel de Montaigne compared skeptical arguments to laxatives. Self-defeating skeptical arguments are like rhubarb. They flush out your other opinions first and themselves last.

Let's consider two types of self-defeat:

In propositional self-defeat, the argument for proposition P relies on a premise inconsistent with P.

In methodological self-defeat, one relies on a certain method to reach the conclusion P, but that very conclusion implies that the method employed shouldn't be relied upon.

My opening example is most naturally read as methodologically self-defeating: the conclusion P ("you shouldn't trust anyone") implies that the method employed (trusting my advice) shouldn't be relied upon.

Since methods (other than logical deduction itself) can typically be characterized propositionally and then loaded into a deduction, we can model most types of methodological self-defeat propositionally. In the first paragraph, maybe, I invited my interlocutor to accept the following argument (with P1 as shared background knowledge):

P1 (Trust Principle). If x is trustworthy and if x says P, then P.
P2. I am trustworthy.
P3. I say no one is trustworthy.
C. Therefore, no one is trustworthy.

C implies the falsity of P2, on which the reasoning essentially relies. (There are surely versions of the Trust Principle which better capture what is involved in trust, but you get the idea.)

Of course, there is one species of argument in which a contradiction between the premises and the conclusion is exactly what you're aiming for: reductio ad absurdum. In a reductio, you aim to prove P by temporarily assuming not-P and then showing how a contradiction follows from that assumption. Since any proposition that implies a contradiction must be false, you can then conclude that it's not the case that not-P, i.e., that it is the case that P.
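Put schematically (this is just the standard classical rule, not anything specific to skepticism; Γ stands for whatever background premises remain in play and ⊥ marks the contradiction):

\[
\Gamma,\ \neg P \;\vdash\; \bot
\qquad\Longrightarrow\qquad
\Gamma \;\vdash\; P.
\]

Read: if adding not-P to your premises lets you derive a contradiction, you may discharge that assumption and conclude P from the remaining premises alone.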

We can treat self-defeating skeptical arguments as reductios. In Farewell to Reason, Paul Feyerabend is clear that he intends a structure of this sort.[1] His critics, he says, complain that there's something self-defeating in using philosophical reasoning to show that philosophical reasoning shouldn't be relied upon. Not at all, he replies! It's a reductio. If philosophical reasoning can be relied upon, then [according to Feyerabend's various arguments] it can't be relied upon. We must conclude, then, that philosophical reasoning can't be relied upon. (Note that although "philosophical reasoning can't be relied upon" is the P at the end of the reductio, we don't accept it because it follows from the assumptions but rather because it is the negation of the opening assumption.) The ancient skeptic Sextus Empiricus (who inspired Montaigne) appears sometimes to take basically the same approach.

Similarly, in my skeptical work on introspection, I have relied on introspective reports to argue that introspective reports are untrustworthy. Like Feyerabend's argument, it's a methodological self-defeat argument that can be formulated as a reductio. If introspection is a reliable method, then various contradictions follow. Therefore, introspection is not a reliable method.

You know who drives me bananas sometimes? G.E. Moore. It's annoyance at him (and some others) that inspires this post.

Here is a crucial turn in one of Moore's arguments against dream skepticism. (According to dream skepticism, for all you know you might be dreaming right now.)

So far as I can see, one premiss which [the dream skeptic] would certainly use would be this: "Some at least of the sensory experiences which you are having now are similar in important respects to dream-images which actually have occurred in dreams." This seems a very harmless premiss, and I am quite willing to admit that it is true. But I think there is a very serious objection to the procedure of using it as a premiss in favour of the derived conclusion. For a philosopher who does use it as a premiss, is, I think, in fact implying, though he does not expressly say, that he himself knows it to be true. He is implying therefore that he himself knows that dreams have occurred.... But can he consistently combine this proposition that he knows that dreams have occurred, with his conclusion that he does not know that he is not dreaming?... If he is dreaming, it may be that he is only dreaming that dreams have occurred... ("Certainty", p. 270 in the linked reprint).

Moore is of course complaining here of self-defeat. But if the dream skeptic's argument is a reductio, self-contradiction is the aim and the intermediate claims needn't be known.

----------------------------------

ETA 11:57 a.m.: I see from various comments in social media that that last sentence was too cryptic. Two clarifications.

First, although the intermediate claims needn't be known, everything in the reductio needs to be solid except insofar as it depends on not-P. Otherwise, it's not necessarily not-P to blame for the contradiction.

Second, here's a schematic example of one possible dream-skeptical reductio: Assume for the reductio that I know I'm not currently dreaming. If so, then I know X and Y about dreams. If X and Y are true about dreams, then I don't know I'm not currently dreaming.
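Writing K for "I know I'm not currently dreaming" and D for the conjunction of X and Y (the letters are mine, just to make the structure visible), the schematic argument has the form

\[
K \to D, \qquad D \to \neg K \;\;\vdash\;\; \neg K.
\]

Assume K for the reductio; the two conditionals yield D and then not-K, contradicting the assumption; discharging the assumption leaves not-K.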

----------------------------------

[1] I'm relying on my memory of Feyerabend from years ago. Due to the COVID shutdowns, I don't currently have access to the books in my office.

Thursday, August 27, 2020

What is "Validity" in Social Science? Validity As a Property of Inferences vs of Claims

If you want to annoy your psychology and social science friends, I have just the trick!  Gather four of them together and ask them to explain exactly what validity is.  Then step back and watch them descend into confusion and contradiction.  Bring snacks.

We use the term all the time, with a truly bewildering array of modifiers: internal validity, construct validity, content validity, external validity, logical validity, statistical conclusion validity, discriminant validity, convergent validity, face validity, criterion validity....  Is there one thing, validity in general, which undergirds all of these uses?  And if so, what does it amount to?  Or is "validity" more of a family resemblance concept?  Are all true statements in some sense valid?  Or is validity more specific than that -- perhaps a matter of appropriate application of method?  Can a study or a conclusion or a method or an instrument be valid even if it's entirely mistaken, as long as proper techniques have been employed?  Oh, and wait, is validity really a property of studies and conclusions and methods and instruments?  They seem so different and to have such different criteria of success!

[image: A Defence of the Validity of the English Ordinations]

I've found surprisingly few general treatments of validity in the social sciences which articulate the concept with the kind of rigor and consistency that would satisfy an analytic philosopher.  One of the best and most influential recent attempts is Shadish, Cook, and Campbell 2002.  I'm going to poke at their treatment with one question in mind: What is validity a property of?

Shadish, Cook, and Campbell begin with a seemingly clear commitment: validity is a property of inferences:

We use the term validity to refer to the approximate truth of an inference.[1]  When we say something is valid, we make a judgment about the extent to which relevant evidence supports the inference as being true or correct (p. 34).

In the next paragraph, they emphasize again that validity is a property specifically of inferences:

Validity is a property of inferences.  It is not a property of designs or methods, for the same design may contribute to more or less valid inferences under different circumstances....  So it is wrong to say that a randomized experiment is internally valid or has internal validity -- although we may occasionally speak that way for convenience (p. 34).

Characterizing validity as a property of inferences resonates with the use of "validity" in formal logic, where it is also generally treated as a property of deductive inferences (well, more accurately, a property of deductive arguments -- but close enough, if we treat inferences as psychological instantiations of arguments).  In formal logic, an inference or argument is deductively valid if and only if, in virtue of its form, it's impossible for the conclusion of the inference to be false if the premises of the inference are true.  [Okay, fine, maybe it's not that simple, but let's not go there today.]

Consider, for example, modus ponens, the inference form in which "P" and "If P, then Q" serve as premises, and "Q" serves as the conclusion.  (P and Q are propositions.)  Modus ponens is normally viewed as a valid form of inference because under the assumption that the two premises are true, the conclusion must be true.  If it's true that Socrates is a man and also true that If Socrates is a man, then Socrates is mortal, then it must also be true that Socrates is mortal.

Logicians normally distinguish validity from soundness: An inference is sound if and only if the inference is valid and the premises are true.  An inference can of course be valid without being sound, for example: (P1.) I am wearing three hats.  (P2.) If I am wearing three hats, I am a famous actor.  (C.) Therefore, I am a famous actor.  That's a perfectly valid inference to a perfectly false conclusion (thanks to at least one false premise).

Inferences are not true or false.  They are valid or invalid.  What is true or false are propositions: the premises and the conclusion.  Got it?  Good!  Lovely!  Now let's go back for a closer look at Shadish et al.  This time let's not forget footnote 1.

We use the term validity to refer to the approximate truth of an inference.[1]  When we say something is valid, we make a judgment about the extent to which relevant evidence supports the inference as being true or correct.

[1] We might use the terms knowledge claim or proposition in place of inference here, the former being observable embodiments of inferences.  There are differences implied by each of these terms, but we treat them interchangeably.

Okay, now wait.  Is validity a property of an inference or is it a property of a claim or proposition?  An inference is one thing and a claim is another!  Shadish et al., despite emphasizing that validity is a property of inferences, confusingly add that they will treat "inference" and "knowledge claim" interchangeably.  But an inference is not a knowledge claim.  An inference is a process of moving from the hypothesized truth of one or more claims to a conclusion which, if all goes well, is true if the claims are true.

Could we maybe just say that validity is a property of an inference that has a true conclusion at the end, as a result of employing good methods?  (This would make "validity" in Shadish et al.’s sense closer to "soundness" in the logician’s sense.)  Or, differently but relatedly, could we say that validity is a property that a claim has when it is both true and the result of methodologically good inference (and where the truth and inference quality are non-accidentally related)?  Or is validity about justification rather than truth -- "the extent to which relevant evidence supports the inference as being true or correct" (italics added)?  Justification can of course diverge from the truth, since sometimes evidence strongly supports a proposition that turns out to be false in the end.  Or should we go back to process here, as suggested by the term "correct", since presumably an inference can be correct, in the sense that it is the right inference to make given the evidence, without its conclusion being true?

Oy vey.  I wish I could say that Shadish et al. clarify this all later and use their terms consistently throughout their influential book, but that's not so -- as indeed they hint in their remark, quoted above, about sometimes speaking loosely as though experiments (and not just inferences or claims) can be valid.  Their book is a lovely guide to empirical methods, but by the standards of analytic philosophy their definition of validity is a mess.

But this post isn't just about Shadish et al. (despite their 47,473 citations as of today).  It's about the treatment of validity in psychology and the social sciences in general.  Shadish et al. exemplify a conceptual looseness I see almost everywhere.

As a first-pass corrective on this looseness let me propose the following:

Psychologists' and social scientists' claims about validity, in my judgment, make the most sense on the whole and are simplest to interpret if we treat validity as fundamentally a property of claims or propositions rather than as a property of inferences (or methods or instruments or experiments).  A causal generalization, for example, of the form that events of type A cause events of type B in conditions C is "valid" if and only if events of type A do cause events of type B in conditions C.  To say that a psychological instrument (such as an IQ test) is "valid" is fundamentally a matter of saying that the instrument measures what it claims to measure: Validity is a matter of the truth of that claim.  A study is valid if the claims of which it is composed are true (both its claims about its conclusions and its claims about the manner in which its conclusions are supported).  A measure has "face validity" if superficially it looks like the claims that result from applying that measure will be true claims.  Two measures have "discriminant validity" if the following claim is true: They in fact measure different underlying phenomena.

Validity, in the psychologists' and social scientists' sense, is best conceptualized as a property that belongs to claims: the property those claims have when they are true.  Attributions of validity to ontological entities other than claims, such as measures and studies, can all be reinterpreted as commitments to the truth of certain types of claims that are implicitly or explicitly embodied in the application of measures, the publication of studies, the making of inferences, etc.  (That good method has been used to arrive at the claims, I regard as a cancelable implicature.)

Why go this direction?  If we treat "validity" as a matter of the quality of the inference or the degree of justification of the conclusion regardless of whether the conclusion is in fact true, then we will have a plethora of valid inferences and valid conclusions, and by extension valid measures, valid instruments, and valid causal models that are completely mistaken, because science is hard and what you're justified in concluding is often not so.  But that's not how social scientists generally talk: A valid measure is one that is right, one that works, one that measures what it's supposed to measure, not one that we are (perhaps falsely) justified in thinking is right.

I diagnose the confusion as arising from three sources: First, widespread sloppy conceptual practice that uses "valid" loosely as a general term of praise.  Second, a tendency among those who do want to rigorize to notice that the philosophers' logical notion of validity applies to arguments or inferences, and consequently some corresponding pressure to think of it that way in the social sciences too, despite the dominant grain of social science usage running a different direction.  Third, a confusing liberality both about the types of validity and the ontological objects that can be said to have validity, which makes it hard to see the simple core underlying idea behind it all: that validity is nothing but a fancy word for truth.

Wednesday, April 08, 2020

The Unreliability of Naive Introspection

Wesley Buckwalter has a new podcast Journal Entries, in which philosophers spend 30-50 minutes walking listeners through the main ideas of one of their papers, sometimes adding new subsequent reflections or thoughts about future research in the area.

Today's Journal Entry is my 2008 paper, "The Unreliability of Naive Introspection".

I make the case that Descartes had it backwards when he said that the outside world is known better and more directly than our experiences. We are often radically wrong about even basic features of our currently ongoing experience, even when we reflect attentively upon it with sincere effort in favorable conditions.


Thursday, September 27, 2018

Philosophical Skepticism Is, or Should Be, about Credence Rather Than Knowledge

Philosophical skepticism is usually regarded as primarily a thesis about knowledge -- the thesis that we don't know some of the things that people ordinarily take themselves to know (such as that they are awake rather than dreaming or that the future will resemble the past). I prefer to think about skepticism without considering the question of "knowledge" at all.

Let me explain.

I know some things about which I don't have perfect confidence. I know, for example, that my car is parked in Lot 1. Of course it is! I just parked it there ninety minutes ago, in the same part of the parking lot where I've parked for over a year. I have no reason to think anything unusual is going on. Now of course it might have been stolen or towed, for some inexplicable reason, in the past ninety minutes, or I might be having some strange failure of memory. I wouldn't lay 100,000:1 odds on it -- my retirement funds gone if I'm wrong, $10 more in my pocket if I'm right. My confidence or credence isn't 1.00000. Of course there's a small chance it's not where I think it is. Acknowledging all of this, it's still I think reasonable for me to say that I know where my car is parked.
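To make the arithmetic explicit (p here is just my label for my credence that the car is where I think it is): a wager at 100,000:1 odds, risking 100,000 units to gain 1, has positive expected value only when

\[
p \cdot 1 \;>\; (1 - p) \cdot 100{,}000,
\qquad\text{i.e.,}\qquad
p \;>\; \frac{100{,}000}{100{,}001} \approx 0.99999.
\]

So declining the bet is entirely compatible with a very high credence, say 0.999, that the car is there.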

Now we could argue about this; and philosophers will. If I'm not completely certain that my car is in Lot 1, if I can entertain some reasonable doubts about it, if I'm not willing to just entirely take it for granted, then maybe it's best to say I don't really know my car is there. There is something admittedly odd about saying, "Yes, I know my car is there, but of course it might have recently been towed." Admittedly, explicitly allowing that possibility stands in tension, somehow, with simultaneously asserting the knowledge.

In a not-entirely-dissimilar way, I know that I am not currently dreaming. I am almost entirely certain that I am not dreaming, and I believe I have excellent grounds for that high level of confidence. And yet I think it's reasonable to allow myself a smidgen of doubt on the question. Maybe dreams can be (though I don't think so) this detailed and realistic; and if so, maybe this is one such super-realistic dream.

Now let's imagine two sorts of debates that we could have about these questions:

Debate 1: same credences but disagreement about knowledge. Philosopher A and Philosopher B both have 99.9% credence that their car is in Lot 1 and 99.99% credence that they are awake. Their degrees of confidence in these propositions are identical. But they disagree about whether it is correct to say, in light of their reasonable smidgens of doubt, that they know. [ETA 10:11 a.m.: Assume these philosophers also regard their own degrees of credence as reasonable. HT Dan Kervick.]

Debate 2: different credences but agreement about knowledge. Philosopher C and Philosopher D differ in their credences: Philosopher C thinks it is 100% certain (alternatively, 99.99999% certain) that she is awake, and Philosopher D has only a 95% credence; but both agree that they know that they are awake. Alternatively, Philosopher E is 99.99% confident that her car is in Lot 1 and Philosopher F is 99% confident; but they agree that, given their small amounts of reasonable doubt, they don't strictly speaking know.

I suggest that in the most useful and interesting sense of "skeptical", Philosophers A and B are similarly skeptical or unskeptical, despite the fact that they would say something different about knowledge. They have the same degrees of confidence and doubt; they would make (if rational) the same wagers; their disagreement seems to be mostly about a word or the proper application of a concept.

Conversely, Philosophers C and E are much less skeptical than Philosophers D and F, despite their agreement about the presence or absence of knowledge. They would behave and wager differently (for instance, Philosopher D might attempt a test to see whether he is dreaming). They will argue, too, about the types of evidence available or the quality of that evidence.

The extent of one's philosophical skepticism has more to do with how much doubt one thinks is reasonable than with whether, given a fixed credence or degree of doubt, one thinks it's right to say that one genuinely knows.

How much doubt is reasonable about whether you're awake? In considering this issue, there's no need to use the word "knowledge" at all! Should you just have 100% credence, taking it as an absolute certainty foundational to your cognition? Should you allow a tiny sliver of doubt, but only a tiny sliver? Or should you be in some state of serious indecision, giving the alternatives approximately equal weight? Similarly for the possibility that you're a brain in a vat, or that the sun will rise tomorrow. Philosophers in the first group are radically anti-skeptical (Moore, Wittgenstein, Descartes by the end of the Meditations); philosophers in the last group are radically skeptical (Sextus, Zhuangzi in Inner Chapter 2, Hume by the end of Book 1 of the Treatise); philosophers in the middle group admit a smidgen of skeptical doubt. Within that middle group, one might think the amount of reasonable doubt is trivially small (e.g., 0.00000000001%), or one might think that the amount of reasonable doubt is small but not trivially small (e.g., 0.001%). Debate about which of these four attitudes is the most reasonable (for various possible forms of skeptical doubt) is closer to the heart of the issue of skepticism than are debates about the application of the word "knowledge" among those who agree about the appropriate degree of credence.

[Note: In saying this, I do not mean to commit to the view that we can or should always have precise numerical credences in the propositions we consider.]

--------------------------------------------

Related: 1% Skepticism (Nous 2017).

Should I Try to Fly, on the Off-Chance This Might Be a Dream Body? (Dec 18, 2013).

Thursday, February 22, 2018

Why Moral and Philosophical Disagreements Are Especially Fertile Grounds for Rationalization (with Jon Ellis)

(with Jonathan E. Ellis; originally appeared at the Imperfect Cognitions blog)

Last week we argued that your intelligence, vigilance, and academic expertise very likely don't do much to protect you from the normal human tendency towards rationalization – that is, from the tendency to engage in biased patterns of reasoning aimed at justifying conclusions to which you are attracted for selfish or other epistemically irrelevant reasons – and that, in fact, you may be more susceptible to rationalization than the rest of the population. This week we’ll argue that moral and philosophical topics are especially fertile grounds for rationalization.

Here’s one way of thinking about it: Rationalization, like crime, requires a motive and an opportunity. Ethics and philosophy provide plenty of both.

Regarding motive: Not everyone cares about every moral and philosophical issue of course. But we all have some moral and philosophical issues that are near to our hearts – for reasons of cultural or religious identity, or personal self-conception, or for self-serving reasons, or because it’s comfortable, exciting, or otherwise appealing to see the world in a certain way.

On day one of their philosophy classes, students are often already attracted to certain types of views and repulsed by others. They like the traditional and conservative, or they prefer the rebellious and exploratory; they like confirmations of certainty and order, or they prefer the chaotic and skeptical; they like moderation and common sense, or they prefer the excitement of the radical and unintuitive. Some positions fit with their pre-existing cultural and political identities better than others. Some positions are favored by their teachers and elders – and that’s attractive to some, and provokes rebellious contrarianism in others. Some moral conclusions may be attractively convenient, while others might require unpleasant contrition or behavior change.

The motive is there. So is the opportunity. Philosophical and moral questions rarely admit of straightforward proof or refutation, or a clear standard of correctness. Instead, they open into a complexity of considerations, which themselves do not admit of straightforward proof and which offer many loci for rationalization.

These loci are so plentiful and diverse! Moral and philosophical arguments, for instance, often turn crucially on a “sense of plausibility” (Kornblith, 1999); or on one’s judgment of the force of a particular reason, or the significance of a consideration. Methodological judgments are likewise fundamental in philosophical and moral thinking: What argumentative tacks should you first explore? How much critical attention should you pay to your pre-theoretic beliefs, and their sources, and which ones, in which respects? How much should you trust your intuitive judgments versus more explicitly reasoned responses? Which other philosophers, and which scientists (if any), should you regard as authorities whose judgments carry weight with you, and on which topics, and how much?

These questions are usually answered only implicitly, revealed in your choices about what to believe and what to doubt, what to read, what to take seriously and what to set aside. Even where they are answered explicitly, there is no clear set of criteria by which to answer them definitively. And so, if people’s preferences can influence their perceptual judgments (including possibly of size, color, and distance: Balcetis and Dunning 2006, 2007, 2010), what is remembered (Kunda 1990; Mele 2001), what hypotheses are envisioned (Trope and Liberman 1997), what one attends to and for how long (Lord et al. 1979; Nickerson 1998) . . . it is no leap to assume that they can influence the myriad implicit judgments, intuitions, and choices involved in moral and philosophical reasoning.

Furthermore, patterns of bias can compound across several questions, so that with many loci for bias to enter, the person who is only slightly biased in each of a variety of junctures in a line of reasoning can ultimately come to a very different conclusion than would someone who was not biased in the same way. Rationalization can operate by way of a series or network of “micro-instances” of motivated reasoning that together have a major amplificatory effect (synchronically, diachronically, or both), or by influencing you mightily at a crucial step (Ellis, manuscript).
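A toy illustration of the compounding (the numbers are invented, chosen only to show the shape of the effect): suppose each of ten junctures in a line of reasoning nudges the odds you assign to your favored conclusion by a factor of just 1.2. The cumulative effect is

\[
(1.2)^{10} \approx 6.2,
\]

enough to move a claim you would otherwise have put at about 25% (odds of 1:3) up to about 67% (odds of roughly 2:1), even though no single step looks egregiously biased.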

We believe that these considerations, taken together with the considerations we advanced last week about the likely inability of intelligence, vigilance, and expertise to effectively protect us against rationalization, support the following conclusion: Few if any of us should confidently maintain that our moral and philosophical reasoning is not substantially tainted by significant, epistemically troubling degrees of rationalization. This is of course one possible explanation of the seeming intractability of philosophical disagreement.

Or perhaps we the authors of the post are the ones rationalizing; perhaps we are, for some reason, drawn toward a certain type of pessimism about the rationality of philosophers, and we have sought and evaluated evidence and arguments toward this conclusion in a badly biased manner? Um…. No way. We have reviewed our reasoning and are sure that we were not affected by our preferences....

For our full-length paper on this topic, see here.

Wednesday, February 21, 2018

Rationalization: Why Your Intelligence, Vigilance and Expertise Probably Don't Protect You (with Jon Ellis)

(with Jonathan E. Ellis; originally appeared at the Imperfect Cognitions blog)

We’ve all been there. You’re arguing with someone – about politics, or a policy at work, or about whose turn it is to do the dishes – and they keep finding all kinds of self-serving justifications for their view. When one of their arguments is defeated, rather than rethinking their position they just leap to another argument, then maybe another. They’re rationalizing – coming up with convenient defenses for what they want to believe, rather than responding even-handedly to the points you're making. You try to point it out, but they deny it, and dig in more.

More formally, in recent work we have defined rationalization as what occurs when a person favors a particular view as a result of some factor (such as self-interest) that is of little justificatory epistemic relevance, and then engages in a biased search for and evaluation of justifications that would seem to support that favored view.

You, of course, never rationalize in this way! Or, rather, it doesn’t usually feel like you do. Stepping back, you’ll probably admit you do it sometimes. But maybe less than average? After all, you’re a philosopher, a psychologist, an expert in reasoning – or at least someone who reads blog posts about philosophy, psychology, and reasoning. You're especially committed to the promotion of critical thinking and fair-minded reasoning. You know about all sorts of common fallacies, and especially rationalization, and are on guard for them in your own thinking. Don't these facts about you make you less susceptible to rationalization than people with less academic intelligence, vigilance, and expertise?

We argue that they don't. You’re probably just as susceptible to post-hoc rationalization as the rest of the population, maybe even more so, though the ways it manifests in your reasoning may be different. Vigilance, academic intelligence, and disciplinary expertise are not overall protective against rationalization. In some cases, they might even enhance one’s tendency to rationalize, or make rationalizations more severe when they occur.

While some biases are less prevalent among those who score high on standard measures of academic intelligence, others appear to be no less frequent or powerful. Stanovich, West and Toplak (2013), reviewing several studies, find that the degree of myside bias is largely independent of measures of intelligence and cognitive ability. Dan Kahan finds that on several measures people who use more “System 2” type explicit reasoning show higher rates of motivated cognition rather than lower rates (2011, 2013, Kahan et al 2011). Thinkers who are more knowledgeable have more facts to choose from when constructing a line of motivated reasoning (Taber and Lodge 2006; Braman 2009).

Nor does disciplinary expertise appear to be protective. For instance, Schwitzgebel and Cushman (2012, 2015) presented moral dilemma scenarios to professional philosophers and comparison groups of non-philosophers, followed by the opportunity to endorse or reject various moral principles. Professional philosophers were just as prone to irrational order effects and framing effects as were the other groups, and were also at least as likely to “rationalize” their manipulated scenario judgments by appealing to principles post-hoc in a way that would render those manipulated judgments rational.

Furthermore, since the mechanisms responsible for rationalization are largely non-conscious, vigilant introspection is not liable to reveal to the introspector that rationalization has occurred. This may be one reason for the “bias blind spot”: People tend to regard themselves as less biased than others, sometimes even exhibiting more bias by objective measures the less biased they believe themselves to be (Pronin, Gilovich and Ross 2004; Uhlmann and Cohen 2005). Indeed, efforts to reduce bias and be vigilant can amplify bias. You examine your reasoning for bias, find no bias because of your bias blind spot, and then inflate your confidence that your reasoning is not biased: “I really am being completely objective and reasonable!” (as suggested in Ehrlinger, Gilovich and Ross 2005). People with high estimates of their objectivity might also be less likely to take protective measures against bias (Scopelliti et al. 2015).

Partisan reasoning can be invisible to vigilant introspection for another reason: it need not occur in one fell swoop, at a sole moment or a particular inference. Rather, it can be the result of a series or network of “micro-instances” of motivated reasoning (Ellis, manuscript). Celebrated cases of motivated reasoning typically involve a person whose evidence clearly points to one thing (that it’s their turn, not yours, to do the dishes) but who believes the very opposite (that it’s your turn). But motives can have much subtler consequences.

Many judgments admit of degrees, and motives can have impacts of small degree. They can affect the likelihood you assign to an outcome, or the confidence you place in a belief, or the reliability you attribute to a source of information, or the threshold for cognitive action (e.g., what would trigger your pursuit of an objection). They can affect these things in large or very small ways.

Such micro-instances (you might call this motivated reasoning lite) can have significant amplificatory effects. This can happen over time, in a linear fashion. Or it can happen synchronically, spread over lots of assumptions, presuppositions, and dispositions. Or both. If introspection doesn't reveal motivated reasoning that happens in one fell swoop, micro-instances are liable to be even more elusive.

This is another reason for the sobering fact that well-meaning epistemic vigilance cannot be trusted to preempt or detect rationalization. Indeed, people who care most about reasoning, or who have a high “need for cognition”, or who attend to their cognitions most responsibly, may be the most impacted of all. Their learned ability to avoid the more obvious types of reasoning errors may naturally come with cognitive tools that enable more sophisticated, but still unnoticed, rationalization.

Coming tomorrow: Why Moral and Philosophical Disagreements Are Especially Fertile Grounds for Rationalization.

Full length article on the topic here.

Thursday, February 23, 2017

Belief Is Not a Norm of Assertion (but Knowledge Might Be)

Many philosophers have argued that you should only assert what you know to be the case (e.g. Williamson 1996). If you don't know that P is true, you shouldn't go around saying that P is true. Furthermore, to assert what you don't know isn't just bad manners; it violates a constitutive norm, fundamental to what assertion is. To accept this view is to accept what's sometimes called the Knowledge Norm of Assertion.

Most philosophers also accept the view, standard in epistemology, that you cannot know something that you don't believe. Knowing that P implies believing that P. This is sometimes called the Entailment Thesis. From the Knowledge Norm of Assertion and the Entailment Thesis, the Belief Norm of Assertion follows: You shouldn't go around asserting what you don't believe. Asserting what you don't believe violates one of the fundamental rules of the practice of assertion.

However, I reject the Entailment Thesis. This leaves me room to accept the Knowledge Norm of Assertion while rejecting the Belief Norm of Assertion.

Here's a plausible case, I think.

Juliet the implicit racist. Many White people in academia profess that all races are of equal intelligence. Juliet is one such person, a White philosophy professor. She has studied the matter more than most: She has critically examined the literature on racial differences in intelligence, and she finds the case for racial equality compelling. She is prepared to argue coherently, sincerely, and vehemently for equality of intelligence and has argued the point repeatedly in the past. When she considers the matter she feels entirely unambivalent. And yet Juliet is systematically racist in most of her spontaneous reactions, her unguarded behavior, and her judgments about particular cases. When she gazes out on class the first day of each term, she can’t help but think that some students look brighter than others – and to her, the Black students never look bright. When a Black student makes an insightful comment or submits an excellent essay, she feels more surprise than she would were a White or Asian student to do so, even though her Black students make insightful comments and submit excellent essays at the same rate as the others. This bias affects her grading and the way she guides class discussion. She is similarly biased against Black non-students. When Juliet is on the hiring committee for a new office manager, it won’t seem to her that the Black applicants are the most intellectually capable, even if they are; or if she does become convinced of the intelligence of a Black applicant, it will have taken more evidence than if the applicant had been White (adapted from Schwitzgebel 2010, p. 532).

Does Juliet believe that all the races are equally intelligent? On my walk-the-walk view of belief, Juliet is at best an in-between case -- not quite accurately describable as believing it, not quite accurately describable as failing to believe it. (Compare: someone who is extraverted in most ways but introverted in a few ways might be not quite accurately describable as an extravert nor quite accurately describable as failing to be an extravert.) Juliet judges the races to be equally intelligent, but that type of intellectual assent or affirmation is only one piece of what it is to believe, and not the most important piece. More important is how you actually live your life, what you spontaneously assume, how you think and reason on the whole, including in your less reflective, unguarded moments. Imagine two Black students talking about Juliet behind her back: "For all her fine talk, she doesn't really believe that Black people are just as intelligent."

But I do think that Juliet can and should assert that all the races are intellectually equal. She has ample justification for believing it, and indeed I'd say she knows it to be the case. If Timothy utters some racist nonsense, Juliet violates no important norm of assertion if she corrects Timothy by saying, "No, the races are intellectually equal. Here's the evidence...."

Suppose Timothy responds by saying something like, "Hey, I know you don't really or fully believe that. I've seen how you react to your Black students and others." Juliet can rightly answer: "Those details of my particular psychology are irrelevant to the question. It is still the case that all the races are intellectually equal." Juliet has failed to shape herself into someone who generally lives and thinks and reasons, on the whole, as someone who believes it, but this shouldn't compel her to silence or compel her to always add a self-undermining confessional qualification to such statements ("P, but admittedly I don't live that way myself"). If she wants, she can just baldly assert it without violating any norm constitutive of good assertion practice. Her assertion has not gone wrong in the way an assertion goes wrong if it is false or unjustified or intentionally misleading.

Jennifer Lackey (2007) presents some related cases. One is her well-known creationist teacher case: a fourth-grade teacher who knows the good scientific evidence for human evolution and teaches it to her students, despite accepting the truth of creationism personally as a matter of religious faith. Lackey uses this case to argue against the Knowledge Norm of Assertion, as well as (in passing) against a Belief Norm of Assertion, in favor of a Reasonable-To-Believe Norm of Assertion.

I like the creationist teacher case, but it's importantly different from the case of Juliet. Juliet feels unambivalently committed to the truth of what she asserts; she feels no doubt; she confidently judges it to be so. Lackey's creationist teacher is not naturally read as unambivalently committed to the evolutionary theory she asserts. (Similarly for Lackey's other related examples.)

Also, in presenting the case, Lackey appears to commit to the Entailment Thesis (p. 598: "he does not believe, and hence does not know"). Although it is a minority opinion in the field, I think it's not outrageous to suggest that both Juliet and the creationist teacher do know the truth of what they assert (cf. the geocentrist in Murray, Sytsma & Livengood 2013). If the creationist teacher knows but does not believe, then her case is not a counterexample to the Knowledge Norm of Assertion.

A related set of cases -- not quite the same, I think, and introducing further complications -- are ethicists who espouse ethical views without being much motivated to try to govern their own behavior accordingly.

[image from Helen De Cruz]

Friday, December 04, 2015

A Theory of Rationalization

The U.C. Santa Cruz philosopher Jon Ellis and I are collaborating on a paper on rationalization in the pejorative sense of the term. I'm trying to convince Jon to accept the following four-clause definition of rationalization:

A person -- whom, following long philosophical tradition, we dub S -- rationalizes some claim or proposition P if and only if all of the following four conditions hold:

1. S believes that P.

2. S attempts to explicitly justify her belief that P, in order to make her belief appear rational, either to herself or others.

3. In doing 2, S comes to accept one or more justifications for P as the rational grounds of her belief.

4. The causes of S's belief that P are very different from the rational grounds offered in 3.
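If it helps to see the logical shape of the definition, here is a minimal Python sketch -- purely illustrative, with field names of my own invention rather than anything from the paper -- encoding the four conditions as a single conjunction, with condition 4 entering as the failure of a match between the actual causes and the offered grounds:

```python
from dataclasses import dataclass

@dataclass
class Case:
    # Illustrative field names (mine, not from the paper), one per condition:
    believes_p: bool              # 1. S believes that P
    attempts_justification: bool  # 2. S tries to make the belief appear rational
    accepts_grounds: bool         # 3. S accepts some justifications as her rational grounds
    causes_match_grounds: bool    # negation of 4: the actual causes roughly match those grounds

def rationalizes(c: Case) -> bool:
    """S rationalizes P iff conditions 1-3 hold and the causes do NOT match the grounds."""
    return (c.believes_p
            and c.attempts_justification
            and c.accepts_grounds
            and not c.causes_match_grounds)

# Estefania in the Newspaper case, on the natural reading of the story:
estefania = Case(believes_p=True, attempts_justification=True,
                 accepts_grounds=True, causes_match_grounds=False)
print(rationalizes(estefania))  # True
```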

Some cases:

Newspaper. At the newsstand, the man selling papers accidentally gives Estefania [see here for my name choice decision procedure] a $20 bill in change instead of a $1 bill. Estefania notices the error right away. Her first reaction is to think she got lucky and doesn't need to point out the error. She thinks to herself, "What a fool! If he can't hand out correct change, he shouldn't be selling newspapers." Walking away, she thinks, "And anyway, a couple of times last week when I got a newspaper from him it was wet. I've been overpaying for his product, so this turnabout is fair. Plus, I'm sure almost everyone just keeps incorrect change when it's in their favor. That's just the way the game works." If Estefania had seen someone else receive incorrect change, she would not have reasoned in this way. She would have thought it plainly wrong for the person to keep it.

Wedding Toast. Adrian gives a wedding toast where she tells an embarrassing story about her friend Bryan. Adrian doesn’t think she crossed the line. Yes, the story was embarrassing, but not impermissible as a wedding toast. Shortly afterward, Bryan pulls Adrian aside and says he can't believe Adrian told that story. A couple of months before, Bryan had specifically asked her not to bring that story up, and Adrian had promised not to mention it. Adrian had forgotten that promise when preparing her toast, but she remembers it now that she has been reminded. She reacts defensively, thinking: "Embarrassing the groom is what you're supposed to do at wedding toasts. Bryan is just being too uptight. Although the story was embarrassing, it also shows a good side of Bryan. And being embarrassed like this in front of family and friends is just the kind of thing Bryan needs to help him be more relaxed and comfortable in the future." It is only because Adrian doesn't want to see herself as having done something wrong that she finds this line of reasoning attractive.

The Kant-Hater. Kant's Groundwork for the Metaphysics of Morals -- a famously difficult text -- has been assigned for a graduate seminar in philosophy. Ainsley, a student in that seminar, hates Kant's opaque writing style and the authoritarian tone he thinks he detects in it. He doesn't fully understand the text -- who does? -- or the critical literature on it. But the first critical treatment that he happens upon is harsh, condemning most of the central arguments in the text. Because he loathes Kant's writing style, Ainsley immediately embraces that critical treatment and now deploys it to justify his rejection of Kant's views. More sympathetic treatments of Kant, which he later encounters, leave him cold and unwilling to modify his position.

The Racist Philosopher. A 19th century slave-owner, Philip, goes to university and eventually becomes a philosophy professor. Throughout his education, Philip is exposed to ethical arguments against slave-ownership, but he is never convinced by them. He always has a ready defense. That defense changes over time as his education proceeds and his thinking becomes more sophisticated. What remains constant is not any particular justification Philip offers for the ethical permissibility of slave-ownership but rather only his commitment to its permissibility.

These cases might be fleshed out with further plausible details, but on a natural understanding of them the primary causes of the protagonists' beliefs are not the justifications that they (sincerely) endorse for those beliefs -- rather, it's that they want to keep the $20, want not to have wronged a close friend at his wedding, dislike Kant's writing style, have a selfish or culturally-ingrained sense of the permissibility of slave-ownership. It is this disconnection between the epistemic grounds that S employs to defend the rationality of believing P and the psychological grounds that actually drive S's belief that P that is the essence of rationalization in the intended sense of the term.

The condition about which Jon has expressed the most concern is Condition 4: "The causes of S's belief that P are very different from the rational grounds offered in 3." I admit there's something that seems kind of fuzzy or slippery about this condition as currently formulated.

One concern: The causal story behind most beliefs is going to be very complicated, so talk about "the" causes risks sweeping in too much (all the causal history) or too little (just one or two things that we might choose because salient in the context). I'm not sure how to avoid this problem. Alternatives like "the explanation of S's belief" or "the real reason S believes" seem to have the same problems and possibly to invite other problems as well.

Another concern: It's not clear what it is for the causes to be "very different" from the rational grounds that S offers. I hope that it's clear enough in the cases above. Here are some reasons to avoid saying, more simply, that the justifications S offers for P are not among the causes of S's belief that P. First, it seems typical of rationalization that once one finds some putative rational grounds for one's belief, those putative grounds have some causal power in sustaining the belief in the future. Second, if one simply couldn't find anything even vaguely plausible in support of P, one might have given up on P -- so the availability of some superficially plausible justifications probably often plays some secondary causal role in sustaining beliefs that primarily arise from other causes. Third, sometimes one's grounds aren't exactly what one says they are, but close enough -- for example, your putative grounds might be your memory that Isaura said it yesterday, while really it was her husband Jeffrey who said it and what's really effective is your memory that somebody trustworthy said it. When the grounds are approximately what you say they are, it's not rationalization.

So the phrase "the causes... are very different" is meant to capture the idea that if you looked at the whole causal picture, you'd say that neither the putative justifications nor close neighbors of them are playing a major role, or the role you might normatively hope for or expect, in causing or causally sustaining S's belief, even as she is citing them as her justifications.

What do you think? Is this a useful way to conceptualize "rationalization"? Although I don't think we need to hew precisely to pre-theoretical folk intuition, would this account imply any particularly jarring violations of intuition about cases of "rationalization"?

I'd also be happy for reading recommendations -- particularly relevant philosophical accounts or psychological results.

Our ultimate aim is to think about the role of rationalization in moral self-evaluation and in the adoption of philosophical positions. If rationalization is common in such cases, what are the epistemic consequences for moral self-knowledge and for metaphilosophy?

[image source]

-------------------------------------------

For related posts, see What Is "Rationalization?" (Feb. 12, 2007), and Susanna Siegel's series of blog posts on this topic at the Brains blog last year.

Tuesday, December 09, 2014

Knowing Something That You Think Is Probably False

I know where my car is parked. It's in the student lot on the other side of the freeway, Lot 30. How confident am I that my car is parked there? Well, bracketing radically skeptical doubts, I'd say about 99.9% confident. I seem to have a specific memory of parking this morning, but maybe that specific memory is wrong; or maybe the car has been stolen or towed or borrowed by my wife due to some weird emergency. Maybe about once in every three years of parking, something like that will happen. Let's assume (from a god's-eye perspective) that no such thing has happened. I know, but I'm not 100% confident.

Justified degree of confidence doesn't align neatly with the presence or absence of knowledge, at least if we assume that it's true that I know where my car is parked (with 99.9% confidence) but false that I know that my lottery ticket will lose (despite 99.9999% confidence it will lose). (For puzzles about such cases, see Hawthorne 2004 and subsequent discussion.) My question for this post is, how far can this go? In particular, can I know something about which I'm less than 50% confident?

"I know that my car is parked in Lot 30; I'm 99.9% confident it's there." -- although that might sound a little jarring to some ears (if I'm only 99.9% confident, maybe I don't really know?), it sounds fine to me, perhaps partly because I've soaked so long in fallibilist epistemology. "I know that my car is parked in Lot 30; I'm 80% confident it's there." -- this sounds a bit odder, though perhaps not intolerably peculiar. Maybe "I'm pretty sure" would be better than "I know"? But "I know that my car is parked in Lot 30; I'm 40% confident it's there." -- that just sounds like a bizarre mistake.

On the other hand, Blake Myers-Schulz and I have argued that we can know things that we don't believe (or about which we are in an indeterminate state between believing and failing to believe). Maybe some of our cases constitute knowledge of some proposition simultaneously with < 50% confidence in that proposition?

I see at least three types of cases that might fit: self-deception cases, temporary doubt cases, and mistaken dogma cases.

Self-deception. Gernot knows that 250 pounds is an unhealthy weight for him. He's unhappy about his weight; he starts half-hearted programs to lose weight; he is disposed to agree when the doctor tells him that he's too heavy. He has seen and regretted the effects of excessive weight on his health. Nonetheless he is disposed, in most circumstances, to say to himself that he's approximately on the fence about whether 250 pounds is too heavy, that he's 60% confident that 250 is a healthy weight for him and 40% confident he's too heavy.

Temporary doubt. Kate studied hard for her test. She knows that Queen Elizabeth I died in 1603, and that's what she writes on her exam. But in the moment of writing, due to anxiety, she feels like she's only guessing, and she thinks it's probably false that Elizabeth died in 1603. 1603 is just her best guess -- a guess about which she feels only 40% confident (more confident than about any other year).

Mistaken dogma. Kaipeng knows (as do we all) that death is bad. But he has read some Stoic works arguing that death is not bad. He feels somewhat convinced by the Stoic arguments. He'd (right now, if asked) sincerely say that he has only a 40% credence that death is bad; and yet he'd (right now, if transported) tremble on the battlefield, regret a friend's death, etc. Alternatively: Karen was raised a religious geocentrist. She takes an astronomy class in college and learns that the Earth goes around the sun, answering correctly (and in detail) when tested about the material. She now knows that the Earth goes around the sun, though she feels only 40% confident that it does and retains 60% confidence in her religious geocentrism.

The examples -- mostly adapted from Schwitzgebel 2010, Myers-Schulz and Schwitzgebel 2013, and Murray, Sytsma, and Livengood 2013 -- require fleshing out and perhaps also a bit of theory to be convincing. I offer a variety because I suspect different examples will resonate with different readers. I aim only for an existence claim: As long as there is a way of fleshing out one of these examples so that the subject knows a proposition toward which she has only 40% confidence, I'll consider it success.

As I just said, it might help to have a bit of theory here. So consider this model of knowledge and confidence:

You know some proposition P if you have it -- metaphorically! -- stored in your memory and available for retrieval in such a way that we can rightly hold you responsible for acting or not acting on account of it (and P is true, justified, etc.).

You're confident about some proposition P just in case you'd wager on it, and endorse it, and have a certain feeling of confidence in doing so. (If the wagering, expressing, and feeling come apart, it's a non-canonical, in-between case.)

There will be cases where a known proposition -- because it is unpleasant, or momentarily doubted, or in conflict with something else one wants to endorse -- does not effectively guide how you would wager or govern how you feel. But we can accuse you. We can say, "You know that! Come on!"

So why won't you say "I know that P but I'm only 40% confident in P"? Because such utterances, as explicit endorsements, reflect one's feelings of confidence -- exactly what comes apart from knowledge in these types of cases.

Tuesday, December 02, 2014

"I Think There's About a 99.8% Chance That You Exist" Said the Skeptic

Alone in my office, it can seem reasonable to me to have only about a 99% to 99.9% credence that the world is more or less how I think it is, while reserving the remaining 0.1% to 1% credence for the possibility that some radically skeptical scenario obtains (such as that this is a dream or that I'm in a short-term sim).

But in public... hm. It seems an odd thing to say aloud to someone else! The question arises acutely as I prepare to give a talk on 1% Skepticism at the University of Miami this Friday. Can I face an audience and say, "Well, I think there's a small chance that I'm dreaming right now"? Such an utterance seems even stranger than the run-of-the-mill strangeness of dream skepticism in solitary moments.

I've tried it on my teenage son. He knows my arguments for 1% skepticism. One day, driving him to school, apropos of nothing, I said, "I'm almost certain that you exist." A joke, of course. How could he have heard it, or how could I have meant it, in any other way?

One possible source of strangeness is this: My audience knows that they are not just my dream-figures. So it's tempting to say that in some sense they know that my doubts are misplaced.

But in non-skeptical cases, we can view people as reasonable in having non-zero credence in propositions we know to be false, if we recognize an informational asymmetry. The blackjack dealer who knows she has a 20 doesn't think the player a fool for standing on a 19. Even if the dealer sincerely tells the player she has a 20, she might think the player reasonable to say he has some doubt about the truth of the dealer's testimony. So why do radically skeptical cases seem different?

One possible clue is this: It doesn't seem wrong in quite the same way to say "I think that we might all be part of a short-term sim". Being together in skeptical doubt seems fine -- in the right context, it might even be kind of friendly, kind of fun.

Maybe, then, the issue is a matter of respect -- a matter of treating one's interlocutor as an equal partner, metaphysically and epistemically? There's something offensive, perhaps, or inegalitarian, or oppressive, or silencing, about saying "I know for sure that I exist, but I have some doubts about whether you do".

I feel the problem most keenly in the presence of the people I love. I can't doubt that we are in this world together. It seems wrong -- merely a pose, possibly an offensive pose -- to say to my seriously ill father, in seeming sincerity at the end of a philosophical discussion about death and God, "I think there's a 99.8% chance that you exist". It throws a wall up between us.

Or can it be done in a different way? Maybe I could say: "Here, you should doubt me. And I too will doubt you, just a tiny bit, so we are doubting together. Very likely, the world exists just as we think it does; or even if it doesn't, even if nothing exists beyond this room, still I am more sure of you than I am of almost anything else."

There is a risk in radical skepticism, a risk that I will doubt others dismissively or disrespectfully, alienating myself from them. But I believe that this risk can be managed, maybe even reversed: In confessing my skepticism to you, I make myself vulnerable. I show you my weird, nerdy doubts, which you might laugh at, or dismiss, or join me in. If you join me, or even just engage me seriously, we will have connected in a way that I treasure.

Wednesday, July 23, 2014

Wildcard Skepticism

Might there be excellent reasons to embrace radical skepticism, of which we are entirely unaware?

You know brain-in-a-vat skepticism -- the view that maybe last night while I was sleeping, alien superscientists removed my brain, envatted it, and are now stimulating it to create the false impression that I'm still living a normal life. I see no reason to regard that scenario as at all likely. Somewhat more likely, I argue -- not very likely, but I think reasonably drawing a wee smidgen of doubt -- are dream skepticism (might I now be asleep and dreaming?), simulation skepticism (might I be an artificial intelligence living in a small, simulated world?), and cosmological skepticism (might the cosmos in general, or my position in it, be radically different than I think, e.g., might I be a Boltzmann brain?).


"1% skepticism", as I define it, is the view that it's reasonable for me to assign about a 1% credence to the possibility that I am actually now enduring some radically skeptical scenario of this sort (and thus about a 99% credence in non-skeptical realism, the view that the world is more or less how I think it is).

Now, how do I arrive at this "about 1%" skeptical credence? Although the only skeptical possibilities to which I am inclined to assign non-trivial credence are the three just mentioned (dream, simulation, and cosmological), it also seems reasonable for me to reserve a bit of my credence space, a bit of room for doubt, for the possibility that there is some skeptical scenario that I haven't yet considered, or that I've considered but dismissed and should take more seriously than I do. I'll call this wildcard skepticism. It's a kind of meta-level doubt. It's a recognition of the possibility that I might be underappreciating the skeptical possibilities. This recognition, this wildcard skepticism, should slightly increase my credence that I am currently in a radically skeptical scenario.

You might object that I could equally well be overestimating the skeptical possibilities, and that in recognition of that possibility, I should slightly decrease my credence that I am currently in a radically skeptical scenario; and thus the possibilities of over- and underestimation should cancel out. I do grant that I might as easily be overestimating as underestimating the skeptical possibilities. But over- and underestimation do not normally cancel out in the way this objection supposes. Near confidence ceilings (my 99% credence in non-skeptical realism), meta-level doubt should tend overall to shift one's credence down.

To see this, consider a cartoon case. Suppose I would ordinarily have a 99% credence that it won't rain tomorrow afternoon (hey, it's July in southern California), but I also know one further thing about my situation: There's a 50% chance that God has set things up so that from now on the weather will always be whatever I think is most likely, and there's a 50% chance that God has set things up so that whenever I have an opinion about the weather he'll flip a coin to make it only 50% likely that I'm right. In other words, there's a meta-level reason to think that my 99% credence might be an underestimation of the conformity of my opinions to reality or equally well might be an overestimation. What should my final credence in sunshine tomorrow be? Well, 50% times 100% (God will make it sunny for me) plus 50% times 50% (God will flip the coin) = 75%. In meta-level doubt, the down weighs more than the up.
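
To make the arithmetic explicit, here is a minimal Python sketch. The first calculation just reproduces the cartoon case; the second is a toy generalization of my own (assuming meta-doubt takes the form of an equal chance of being too low or too high by the same amount), meant only to illustrate why the ceiling matters:

```python
# Cartoon case: 99% prior credence that it won't rain tomorrow.
# Branch A (50%): God conforms the weather to my opinion, so I'm right for sure.
# Branch B (50%): God flips a coin, so I'm right only half the time.
final = 0.5 * 1.0 + 0.5 * 0.5
print(final)  # 0.75 -- well below my ordinary 0.99

# Toy generalization (an assumption of this sketch, not part of the case above):
# meta-doubt says I'm equally likely to be under- or overestimating my
# reliability by the same amount x. Near a confidence ceiling the upward
# correction is capped at certainty, so the expected credence drops.
def adjusted(prior: float, x: float) -> float:
    up = min(1.0, prior + x)    # "underestimating" branch, capped at 1.0
    down = max(0.0, prior - x)  # "overestimating" branch
    return 0.5 * up + 0.5 * down

print(adjusted(0.99, 0.10))  # ~0.945 < 0.99: the down weighs more than the up
```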

Consider the history of skepticism. In Descartes's day, a red-blooded skeptic might have reasonably invested a smidgen more doubt in the possibility that she was being deceived by a demon than it would be reasonable to invest in that possibility today, given the advance of a science that leaves little room for demons. On the other hand, a skeptic in that era could not even have conceived of the possibility that she might be an artificial intelligence inside a computer simulation. It would be epistemically unfair to such a skeptic to call her irrational for not considering specific scenarios beyond her society's conceptual ken, but it would not be epistemically unfair to think she should recognize that given her limited conceptual resources and limited understanding of the universe, she might be underestimating the range of possible skeptical scenarios.

So now us too. That's wildcard skepticism.

[image source]