Tuesday, March 16, 2010

Knowing What You Don’t Believe

(by Blake Myers-Schulz and Eric Schwitzgebel)

Virtually every introduction to epistemology (online examples include the Stanford Encyclopedia and the Internet Encyclopedia of Philosophy) highlights the debate about what is commonly called the “JTB” theory of knowledge – the view according to which for some subject S to know some proposition P, it is necessary and sufficient that

(1.) P is true.
(2.) S believes that P is true.
(3.) S is justified in believing that P is true.

According to the JTB theory, knowledge is Justified True Belief. Perhaps the most-discussed issue in the last 40 years of epistemology is whether the JTB theory is true. Debate generally centers on whether there is a way of interpreting or revising the third (justification) condition or adding a fourth condition to avoid apparent counterexamples of various sorts (e.g., Gettier examples). Nearly all contemporary analytic philosophers endorse the truth of conditions (1) and (2): You can’t know a proposition that isn’t true, and you can’t know a proposition that you don’t believe. Few assumptions are more central to contemporary epistemology.
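
In symbols (our shorthand, not the encyclopedias'; K for knows, B for believes, J for is justified):

    K(S, P) \leftrightarrow \big( P \land B(S, P) \land J(S, P) \big)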

However, we – Blake and Eric – don’t find it intuitive that (2) is true. We think there are intuitively appealing cases in which someone can know that something is true without believing (or, per Eric, determinately believing) that it is true. We have four examples:

(A.) The unconfident examinee (from Colin Radford 1966, one of the very few deniers of (2)): Kate is asked on an exam to enter the date of Queen Elizabeth’s death. She feels like she is guessing, but enters the date correctly. Does Kate know/believe that Elizabeth died in 1603?

(B.) The absent-minded driver (from Schwitzgebel in draft): Ben reads an email telling him that a bridge he usually takes to work will be closed for repairs. He drives away from the house planning to take an alternate route but absent-mindedly misses the turn and continues toward the bridge on the old route. Does Ben know/believe that the bridge is closed?

(C.) The implicit racist (also from Schwitzgebel in draft): Juliet is implicitly biased against black people, tending to assume of individual black people that they are not intelligent. However, she vehemently endorses the (true and justified, let’s assume) claim that all the races are of equal intelligence. Does Juliet know/believe that all the races are intellectually equal?

(D.) The freaked-out movie-watcher: Jamie sees a horror movie in which vicious aliens come out of water faucets and attack people, and she is highly disturbed by it, though she acknowledges that it is not real. Immediately after the movie, when her friend goes to get water from the faucet, Jamie spontaneously shouts “Don’t do it!” Does Jamie know/believe that only water will come from the faucet?

In each case, we think, it is much more intuitive to ascribe knowledge than belief.

So, naturally (being experimental philosophers!), we checked with the folk. We used fleshed-out versions of the scenarios above (available here). Some subjects were asked whether the protagonist knew the proposition in question. Other subjects were asked whether the protagonist believed the proposition in question.

The results came in as predicted. Across the four scenarios, 75% of respondents (90/120, 1-prop z vs. 50%, p < .001) said that the protagonist knew, while only 35% said the protagonist believed (42/120; 1-prop z vs. 50%, p = .001). Considering each scenario individually, in each case a substantial majority said the protagonist knew, and in no scenario did a majority say the protagonist believed. (A separate group of subjects was asked “Did Kate think that Queen Elizabeth died in 1603?” [and analogously for the other scenarios]. The “think” results were very close to the “believe” results in all scenarios except the unconfident examinee, where they were closer to the “know” results.)
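
For readers who want to check the arithmetic, here is a minimal sketch of the one-proportion z-tests reported above (we assume the tests were two-sided, which reproduces the reported p-values):

    from math import erfc, sqrt

    def one_prop_z(successes, n, p0=0.5):
        # two-sided one-proportion z-test against null proportion p0
        p_hat = successes / n
        z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
        return z, erfc(abs(z) / sqrt(2))  # two-sided p-value

    print(one_prop_z(90, 120))  # "knew": z = 5.48, p < .001
    print(one_prop_z(42, 120))  # "believed": z = -3.29, p = .001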

We think epistemologists should no longer take it for granted that condition (2) of the JTB account of knowledge is intuitively obvious.

[Cross-posted at the Experimental Philosophy Blog.]

24 comments:

Mark said...

Hi Eric (and Blake),

I think these results of yours are pretty interesting, but, as I discussed in my comments on Blake’s paper at the XPS meeting at the Eastern APA, I don’t think they clearly support your conclusion, that belief isn’t required for knowledge. (Well, maybe your conclusion is just that epistemologists shouldn’t take it for granted that it’s intuitively obvious that belief is required for knowledge. But I wouldn’t have thought anyone needed an argument for that! So, I’ll discuss the more interesting potential conclusion.) I think this is an area where a pragmatic explanation of the empirical results is warranted.

I worked this out in detail at the APA, but the basic idea rests on recognizing the different ways in which people in ordinary conversation use the term “belief” and the ways in which these uses differ from the (at least) quasi-theoretical account of belief epistemologists are endorsing when they claim that, intuitively, belief is required for knowledge. I take it that what epistemologists mean by “belief” in (2) is something like the account the Stanford Encyclopedia attributes to contemporary analytic philosophers of mind: “[They] generally use the term "belief" to refer to the attitude we have, roughly, whenever we take something to be the case or regard it as true.” It’s pretty clear that ordinary usage of the term “belief” can depart pretty widely from this. For example, a resistance fighter in occupied France may say, “I believe in the liberty of the French people.” But it’s pretty obvious he’s not claiming that the French people are liberated.

In the same way, and more to the point, I think the term “belief” is sometimes reserved in ordinary conversation for a very strong conviction that something is the case. But this should not be the notion of “belief” epistemologists are endorsing when they claim that belief is required for knowledge. I know and (in the theoretical sense) believe that my car is parked on Grove Street, but crime being what it is in New Haven, I have a much weaker conviction that this is the case than I do that this is a table (which I also know and believe). In other words, there are degrees of belief, and sometimes the term “belief” is reserved for strong belief. (But knowledge doesn’t come in degrees, so we don’t ever reserve the term “knowledge” only for strong knowledge. Consider: “I strongly believe p,” and, “I somewhat believe p,” are each more apt than either, “I strongly know p,” or, “I somewhat know p.”) When, in ordinary conversation, people use the term “belief” to pick out strong beliefs, they are using the term to pick out a more restricted class of mental states than most epistemologists would (or, at least, should) think is required for knowledge.

All that being said, I think the specific problem with your studies is this: they prime people to use a restricted notion of belief (strong belief). The agents in the studies lack a strong belief that the target propositions obtain. So, participants respond that these agents do not believe the target propositions. There is no corresponding priming in the case of knowledge. So, when asked if the agents knew, subjects respond that they did.

(To be continued...)

Mark said...

(...continued)

Now, I particularly hate when people just invoke pragmatics to explain away results without pointing to a particular pragmatic mechanism, so here’s my story about how your vignettes prime people to use a restricted notion of “believes”: Your vignettes (I think) present people with a clear case of a justified true belief. When you ask, after such a vignette, whether, for example, Kate knew that Queen Elizabeth died in 1603, you put participants in a position to provide a maximally informative answer. Roughly, you put them in a position to say whether they think the case of justified true belief you’ve just presented is a case of knowledge. But when you present the same case and ask whether it is belief, then, if they interpret “belief” broadly, your question doesn’t allow them the opportunity to make an informationally sufficient contribution to the conversation. Each of your cases is clearly a case where the agent has at least a weak attitude that the relevant proposition is the case, i.e., a case where the agent has a belief in the sense of interest to epistemologists. Who could be interested in finding out if people think that? But you could plausibly be interested in finding out if participants think the agent has a fairly strong belief that the relevant proposition obtains. So they interpret “believe” as “strongly believe”, and most people think Kate doesn’t strongly believe Queen Elizabeth died in 1603. In other words, if you were asking participants whether Kate believes in the general sense, you would be setting them up to violate Grice’s maxim of quantity. Being good little conversational participants, they interpret your question in a way that doesn’t require them to break a conversational maxim.

In fact, for the XPS session I tested this explanation by running a quick study (with Jonathan Phillips’s help) on Mturk. I asked people in one condition, “Did Kate believe that Queen Elizabeth died in 1603?” In the other, I asked, “Did Kate believe on some level that Queen Elizabeth died in 1603?” The second formulation would force them away from the interpretation available in the first, toward the notion of belief familiar to the epistemologist (the one invoked in (2)). If I was right, we should expect a similar proportion of people to say Kate believed on some level as say she knows, many more than would answer affirmatively to the believes-question. That is what I got: 35% claimed Kate believed, a proportion similar to what you and Blake found, while 80% of participants responded affirmatively to “Did Kate believe on some level that Queen Elizabeth died in 1603?”, a proportion similar to the 88% you and Blake found for the knowledge question on that particular vignette (as reported at the APA). The difference between the two questions I asked was statistically significant: χ²(1, N = 40) = 8.3, p = .004.
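
(For anyone checking the arithmetic, here is a minimal sketch that reproduces that statistic. The per-condition sample sizes are an assumption, since the comment does not report them: 20 participants per condition, i.e. 7/20 for the plain believe-question and 16/20 for the on-some-level question.)

    from math import erfc, sqrt

    def chi2_2x2(a, b, c, d):
        # Pearson chi-square (df = 1, no continuity correction)
        # for the 2x2 table [[a, b], [c, d]]
        n = a + b + c + d
        chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
        return chi2, erfc(sqrt(chi2 / 2))  # survival function at df = 1

    print(chi2_2x2(7, 13, 16, 4))  # roughly (8.29, .004), matching the reported result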

So, I think there’s some support for my pragmatic explanation of your results—enough, anyway, to block us concluding from those results that belief isn’t (even intuitively) required for knowledge (in the way epistemologists mean).

(Sorry this comment was long. As I mentioned, I’d already thought and written about this, and am glad to have another venue to discuss it.)

Mark Phelan

Eric Schwitzgebel said...

Thanks for the thoughtful, detailed comment, Mark! I regret not having had a chance to chat with you about this at the APA.

I am not sure what it is to "strongly believe". One possibility is confidence. Although Kate lacks confidence, Juliet can be read as highly confident. (In the original Schwitzgebel 2010, this is clear, and we could easily tweak the scenario to bring out the confidence more strongly. Would you predict that doing so would flip the results?) Ben and Jamie, also, would be highly confident if they paused for a moment to reflect. Maybe they momentarily lack confidence? I'm not sure how to interpret that. When I am not thinking about the roundness of the Earth, which is most of the time, do I lack confidence in its roundness?

Alternatively, "strongly believe" might mean something like "disposed to act in accord with that belief across a broad variety of situations". If so, then I at least (I can't speak for Blake) would agree with your analysis of the folk response, only adding that it's not clear that the philosophical view of belief is, or should be, any different.

I don't see the pragmatic problem with our questions. If we were asking a question with an *obviously* true or false answer, that might force an alternative interpretation, but I take it that it is not obviously true in all these cases that the protagonist believes; they seem like cases on which opinion might reasonably divide. (The knowledge cases seem more like obviously-yes cases to me, but that problem is the reverse of the one you assert.)

Also, "believing on some level" seems to me a strange sort of attitude. If I were a respondent, I would be torn about whether to ascribe belief in such cases. "Believing on some level" seems like a weak, qualified way of putting it, so I would probably agree to it. "Believing on some level" might be a form of in-between believing. Thus, my own philosophical theory of the relationship between belief and knowledge in these cases (I ascribe determinate knowledge, in-between belief) can easily accommodate it.

T. said...

Mathematics may offer some interesting examples, e.g. the puzzlement over one of Hilbert's finiteness proofs in invariant theory ("this is not mathematics, this is theology"), or a basic existence result in A. Borel's "Groupes lineaires algebriques", whose simple proof Borel himself didn't believe until others confirmed it. In physics, Einstein proved more than he could believe in: his article on light quanta launched quantum theory, and he introduced the cosmological constant only to doubt it later.

Eric Schwitzgebel said...

Yes, T., an interesting possibility. There are also, perhaps, some truths of topology and about the infinite that I know but can't quite believe.

Manolis said...

I think the problem with the definition is that it makes no distinction between intellectual knowledge and, for want of a better term, knowledge within the body.

For instance, you can theoretically know how to do the moonwalk, but if you can't actually do it, then you don't really know how to do it.

Alternatively, you can know how to do the moonwalk in practice without knowing how you do it -- you just do it, so to speak, without any thought involved.

This is very common with master chess players, many of whom seem to know where a piece should be moved without having a complete analytic or intellectual understanding of why that move is best. Instead, they feel it in their bones to be the best move.

The implicit racist is a good example of the difference between intellectual and bodily knowledge. The implicit racist is not intellectually a racist, but bodily she is.

The freaked-out-movie watcher is another good example of this distinction.

Manolis said...

So, in short, justified true belief only seems to apply, if at all, to purely intellectual knowledge, which is almost certainly a simplification of how knowledge really works.

Examples are legion where this intellectual knowledge differs markedly from our bodily knowledge. Nonetheless, Western philosophy privileges intellectual knowledge over bodily knowledge, sometimes for good reason, other times not so much.

And, to continue a discussion from a previous post, the reason I love Kant is that he seems to me to be the first philosopher in Western philosophy to concern himself with human knowledge necessarily being a bodily knowledge and not purely intellectual.

Eric Schwitzgebel said...

Thanks for the comment, Manolis! I'm inclined to agree with you that it is sometimes useful to distinguish purely intellectual from more action-involved knowledge. But I don't think that JTB works even for purely intellectual knowledge, unless the belief is *also* interpreted intellectually. Juliet knows P intellectually but doesn't believe P. But maybe she does believe P *intellectually*. Intellectual-believing would then not imply believing simpliciter.

Mark said...

Thanks for the thoughtful reply, Eric! I think I wasn’t sufficiently clear about the pragmatic story I have in mind. Suppose that the traditionalist is right and, intuitively, knowledge entails belief. Then any intuitively obvious case of knowledge is an intuitively obvious case of belief. We both agree that these are intuitively obvious cases of knowledge, and, indeed, participants’ responses seem to reflect this as well. What you want to test is whether people think knowledge entails belief. But notice: if they do (as I’m assuming on behalf of the traditionalist they do), then people are in a position to conclude that these are obvious cases of belief.

Now, the particular way in which I think your belief question sets people up to be less than maximally informative doesn’t have to do directly with the obviousness of the state being either knowledge or belief. Rather, it has to do with the specificity of the answer you set them up to give in each condition. When you ask them, “Is this knowledge?”, people are in a position to identify the state in a very specific way. When you ask, on the other hand, “Is this belief?”, were they to interpret belief broadly (in the sense of interest to epistemologists), the question would put them in a position where they cannot be sufficiently specific about the state at hand. Since they have readily available distinct uses of the term “belief”, they opt for one of these other interpretations so as to be more specific in their identification of the state at hand.

I think what’s going on in the knowledge/belief cases is roughly analogous to the following: Jack’s team was playing football on the old field behind the high school. The field was fairly level: no obvious holes or hills. Jack had forgotten his cleats, and while trying to make a tackle he slipped and fell. While lying on the ground, he took a closer look and noticed that all around him there were slight perturbations in the surface of the field.

Is the old field flat? Having read this scenario, I’m inclined to say no. Is the old field flat for a football field? I’m inclined to say yes. Is there any sense in which the old field is flat? Yes. Why would I answer that the old field isn’t flat to the first question, when I think that it is, in some sense, flat? Because I’m perfectly accustomed to using different senses for flat, and interpreting your question using a particular (restricted) one of these puts me in a position to be more informative than if I use another. This, I think, is basically what’s going on in your cases (it’s just that we have a lexical entry for “knows” as opposed to “flat for a football field”.)

(That’s the basic idea. I’m about to jump on a non-wifi train. I’ll think about other comments and maybe say something about those later.)

T. said...

Two and a half examples where one "knows" something with hugely productive applications, but where only conditions (2) ("S believes that P is true") and (3) ("S is justified in believing that P is true") are satisfied:

Two of the most influential ideas behind several long-term research projects in modern number theory are "prime numbers 'are' knots in 3-space" and "1 is a prime number too" (the latter is called the "field with one element"). Both ideas have directed number theory research (the search for and formulation of conjectures, the methods of proof) since the 1960s. In both cases conditions (2) and (3) have long been satisfied, but one does not know whether the basic idea can be formulated consistently and is "really" true. And now comes an extra twist: a few years ago, nice formulations of a "field with one element" were found (i.e., condition (1) is now fulfilled for the latter), but somehow this has been, at least so far, a bit of an anticlimax, much less interesting than all the great and beautiful results obtained while condition (1) was still open. Another case (the half example) is "motivic cohomology", a very brave idea of Grothendieck's about a nice common core theory behind a bunch of wildly different ways of doing geometry with "numbers" of all sorts. For about ten years such a theory has existed, but again a bit of an anticlimax occurred. Cases where one is led by an idea satisfying conditions (2) and (3), but not (1), are called "yogas": there is a "yoga des motifs", a "yoga des poids", a "yoga of the field with one element", etc.

Manolis said...

Eric, I would go so far as to say that all knowledge is bodily, even knowledge that is not action-based.

I think that's why quantum physics is so hard -- because it's literally so hard to stomach, so hard for what your body normally accepts as the world around you to be changed by what your mind is saying.

CP said...

One difference between your four examples and those found in traditional works on epistemology is that yours don't involve the formation of knowledge/belief. Asking whether I exist, observing how many coins a man has in his pocket, even looking at my own hands, all involve forming a fresh belief (and, possibly, coming to know something) or going through the same motions I'd use to form such a belief.

Perhaps one could look for a 'JTB' account of coming to know: I come to know P whenever P is true, I come to believe that P is true and I am justified in coming to believe this.

However, in your forgetful driver example, Ben's path may suddenly be blocked by a fallen tree. Then, though he previously believed the road ahead was clear, he comes to believe that it isn't and is clearly right and justified in doing so. By the modified JTB theory, he comes to know that the road ahead is not clear. But it is quite possible that he already knew this - he could have deduced it earlier when told the bridge was under repair, even though he may since have temporarily forgotten. And isn't it somewhat uncomfortable to say he has come to know something he already knew? Some alteration will have to be made, though it may well be enough just to rule out cases where I already know P.

If you are right, at least some items of knowledge are not items of belief (or some mental states corresponding to knowledge are not of the type of mental state that correspond to beliefs, or...). Then it seems plausible that at least some instances of coming to know a fact are not the same (mental) event as coming to believe in that fact. We are certainly well away from forming a belief, and then somehow adding things like verisimilitude and justification until it 'counts as' knowledge. Consequently, we need to explain why my coming to know P is so often seen as ensuring that I come to believe that P is true.

Perhaps there is some analogy with seeing? If I (consciously) see a ball thrown at my face, I necessarily come to believe that a ball is speeding towards my face. I may come to doubt this belief if I know that the ball is possibly illusory, but I nonetheless do believe for a while. (Perhaps this example risks confusing a belief that I will be hit with the instinct to react to perceiving that I will be hit, but I do see and intend a distinction).

I have what I hope will be a suggestive question. Consider that you are staying at a friend's house and are being shown around when you spy that the kitchen lies behind a certain half-open door. You turn around to your friend, who tells you that the kitchen is behind that door. You might answer 'Yes, I saw it' or 'Yes, I believe you'. Does 'Yes, I know' have more in common as an answer with the former or the latter? (Where would remembering/reading/being told that this was the case fit in? Is this all utterly irrelevant?)

T. said...

Set theory examples: (1) an elementary number game whose termination proof needs transfinite numbers. Would you believe that it really terminates in all cases? (2) an algorithm on braids, relying again on transfinite numbers. Would you use that algorithm and trust it? (3) Cantor's simple "diagonal proof": Cantor himself wrote that he could not believe it, even though it is so simple and clear.
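
(T.'s first example sounds like it may be a Goodstein-style sequence; that identification is a guess, since T. doesn't name the game. A minimal sketch of that game: write m in hereditary base-b notation, replace every b by b + 1, subtract 1, and repeat with the next base. The sequence provably reaches 0, but the proof needs transfinite ordinals.)

    def bump(n, b):
        # rewrite n in hereditary base-b notation, then replace every b with b + 1
        total, e = 0, 0
        while n > 0:
            d = n % b
            if d:
                total += d * (b + 1) ** bump(e, b)  # exponents get rewritten too
            n //= b
            e += 1
        return total

    def goodstein(m, steps=6):
        # the first few terms of the Goodstein sequence starting at m
        terms, b = [m], 2
        for _ in range(steps):
            if m == 0:
                break
            m = bump(m, b) - 1
            b += 1
            terms.append(m)
        return terms

    print(goodstein(3))  # [3, 3, 3, 2, 1, 0]: reaches 0 almost immediately
    print(goodstein(4))  # [4, 26, 41, 60, 83, 109, 139]: grows for an unimaginably long time before finally reaching 0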

Eric Schwitzgebel said...

CP and T: Sorry for the slow replies! CP: Those are interesting examples, and you might be right that JTB works better for instances of "coming to know" rather than for dispositional knowledge. I especially like the tree-falling case. Here's how I would put it: Ben already knew that the road was not clear, but he came to *judge* that it was not clear when he saw the fallen tree.

T: Those are interesting examples too, thanks!

T. said...

I'm not sure if this example is apt, but: some years ago an acquaintance, a sociologist, studied how people live in refugee camps in Sri Lanka. One of the camps was built on ground that the government had leased from its owner for a limited number of years. The owner was well known and had said all along that he wanted either government money for his land or to use it himself. Everyone knew that, and knew the timeframe. But somehow all the inhabitants, the now-established middle class with shops, long-distance business activities, and well-educated kids outside the camp, as well as the poorer people with small jobs, strongly believed that nothing would happen. When the time ran out and police with excavators surrounded the camp, the inhabitants still told my acquaintance that it was just a show, that of course nothing would happen. They were in for a big surprise. One issue was probably that the middle-class inhabitants felt (wrongly) secure because they had an impression of economic and personal mobility; the poorer inhabitants perhaps denied their knowledge because they feared being left behind. I wonder whether such processes are of general relevance in our society too. Was the financial crisis really unexpected? What about the ecological one now in progress? Here too we have a culture of widespread denial; here too the economic and political leading classes think they can just walk away, or at worst pay some megabucks for security, geoengineering, etc.

Soluman said...

Eric,
What do you think about the proposition that we infallibly know ourselves to be conscious, even if we may not believe ourselves to be conscious?

I have a tendency to think that our knowledge of our own qualia is infallible, and that we should be able to infallibly discern that we are not, in fact, zombies. On the other hand, some a priori physicalists claim not to be able to tell the difference between themselves and zombies, and so apparently believe that they are zombies... but then I still think that they know they're not, even if they foolishly believe they are. Any help for me here?

Eric Schwitzgebel said...

Interesting thought, Soluman. Maybe! What is it, though, I wonder, to believe that you are conscious? I wonder if there's a philosophical and an ordinary reading of that, so that the eliminativist can both know and believe on the ordinary reading and fail to know and fail to believe on the philosophical reading. Hm. I do like the thought that there might be philosophical positions that deprive you of belief without depriving you of knowledge....

soluman said...

Eric,
I kind of feel like "knowing" is a state that already implies consciousness. We don't, for instance, say that our calendars "know" anything, even though they store information.

I guess it seems like consciousness is required to ground intentionality in general; that's also why I'm not too concerned about why zombie Chalmers was writing his book.

Are there maybe two conceptions of knowledge, one that requires a phenomenal component and one that doesn't? Maybe an eliminativist would claim that the calendar knows his schedule, after all.

Marshall said...

Isn't this just an English vocabulary problem? If we simplify the human cognitive machinery as a bin of memory traces and an executive 'consciousness' that processes some (smallish) set of memory traces to produce actions... then your examples all hang on the distinction between traces that are currently accessible to consciousness and those that exist in memory but are not currently accessible.

('Currently' is a loaded word... it must be taken relative to some action program. The memory datum that the bridge is out is not available to the driving-the-car activity. However in C, Juliet has a memory datum that is accessible to discussing-philosophy but is not accessible to interpersonal-relations. Unhealthy but common.)

So I think it's reasonable English usage to identify 'knowledge' with the permanent memory trace rather than the ephemeral thought. "Belief" is a word that leads in a number of directions, but I think here we can take it in the sense of "S would assert that P", which implies conscious access at least. With reference to JTB, Rule 2 would say that the idea of P is accessible to S, and Rule 3, that the memory bin could provide a good context of related traces.
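
A toy rendering of that picture in code (a sketch; the activity names are made up, not from the vignettes): a stored trace counts as knowledge outright, but counts as belief only relative to an activity that can access it.

    class Agent:
        def __init__(self):
            # each memory trace maps a proposition to the set of
            # activities from which it is currently accessible
            self.traces = {}

        def store(self, proposition, accessible_to):
            self.traces[proposition] = set(accessible_to)

        def knows(self, proposition):
            return proposition in self.traces  # the permanent memory trace

        def believes(self, proposition, activity):
            # conscious access relative to the current action program
            return activity in self.traces.get(proposition, set())

    ben = Agent()
    ben.store("the bridge is closed", accessible_to={"answering-email"})
    print(ben.knows("the bridge is closed"))                        # True
    print(ben.believes("the bridge is closed", "driving-to-work"))  # False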

I think we decided that the absent-minded driver had the knowledge that the bridge was closed, even though he forgot. Does this depend on the bridge being closed, as Rule 1 asserts it should? If it turns out that the person who sent the email was mistaken somehow and the bridge is not closed, do you want to say that the status of the driver's memory trace "the bridge is out" becomes not-knowledge?

-Marshall

Eric Schwitzgebel said...

Right. I'm inclined to think it becomes a decision about how we want to use language. And I wouldn't challenge the truth condition for knowledge.

Marshall said...

Sorry, I don't understand what you mean by "wouldn't challenge". Is it a statement that:
- asserts the truth condition is necessary
- states a lack of personal need to deny truth claims
- recommends that, necessary or not, it's a bad fight to pick
- moralizes, as Wittgenstein p.7, that it's what we must not do
- ?

Eric Schwitzgebel said...

Blue Ridge, that's a bit too compressed for me to understand. We don't intend to deny the truth condition on knowledge, for one thing.

T. said...

New Scientist on "Why are so many people refusing to accept what the evidence is telling them?"