Friday, January 29, 2010

Knowledge Is a Capacity, Belief a Tendency

In epistemology articles and textbooks (e.g., in the Stanford Encyclopedia), you'll often see claims like the following. S (some person) knows that P (some proposition) only if:
(1.) P is true.
(2.) S believes that P.
(3.) S is justified in believing P.
Although many philosophers (following Gettier) dispute whether someone's meeting these three conditions is sufficient for knowing P and a few (like Dretske) also dispute the necessity of condition 3, pretty much everyone accepts that the first two conditions are necessary for knowledge -- or necessary at least for "propositional" knowledge, i.e., knowing that [some proposition is true], as opposed to, for example, knowing how [to do something].
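Put in schematic form (a compact restatement in my own shorthand, not the encyclopedia's notation: K, B, and J abbreviate "knows", "believes", and "is justified in believing"), the claim that conditions (1)-(3) are each necessary is:

\[
K_S(P) \;\rightarrow\; P \,\wedge\, B_S(P) \,\wedge\, J_S(P)
\]

Gettier cases dispute the converse (sufficiency) direction; the argument below disputes the belief conjunct, B_S(P).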

But it's not clear to me that knowing a fact requires believing it. Consider the following case:

Ben the Forgetful Driver: Ben reads an email and learns that a bridge he normally drives across to get to work will be closed for repairs. He immediately realizes that he will have to drive a different route to work. The next day, however, he finds himself on the old route, headed toward the closed bridge. He still knows, I submit, in that forgetful moment, that the bridge is closed. He has just momentarily failed to deploy that knowledge. As soon as he sees the bridge, he'll smack himself on the forehead and say, "The bridge is closed, of course, I know that!" However, contra the necessity of (2) above, it's not clear that, in that forgetful moment as he's driving toward the bridge, he believes (or more colloquially, thinks) the bridge is closed. He is, I think, actually in an in-between state of believing, such that it's not quite right to say that he believes that the bridge is closed but also not quite right to deny that he believes the bridge is closed. It's a borderline case in the application of a vague predicate. (Compare: is a man tall if he is 5 foot 11 inches?) So: We have a clear case of knowledge, but only an in-betweenish, borderline case of belief.

Although I find that a fairly intuitive thing to say, I reckon that that intuition will not be widely shared by trained epistemologists. But I'm willing to wager that a majority of ordinary English-speaking non-philosophers will say "yes" if asked whether Ben knows the bridge is closed and "no" if asked whether he believes or thinks that the bridge is closed. (Actual survey results on related cases are pending, thanks to Blake Myers-Schulz.)

One way of warming up to the idea is to think of it this way: Knowledge is a capacity, while belief is a tendency. Consider knowing how to do something: I know how to juggle five balls if I can sometimes succeed, other than by pure luck, even if most of the time I fail. As long as I have the capacity for appropriate responding, I have the knowledge, even if that capacity is not successfully deployed on most relevant occasions. Ben has the capacity to respond knowledgeably to the closure of the bridge; he just doesn't successfully deploy that capacity. He doesn't call up the knowledge that he has.

Believing that P, on the other hand, involves generally responding to the world in a P-ish way. (If the belief is often irrelevant to actual behavior, this generality might be mostly in counterfactual possible situations.) Believing is about one's overall way of steering cognitively through the world. (For a detailed defense of this view, see here and here.) If one acts and reacts more or less as though P is true -- for example by saying "P is true", by inferring Q if P implies Q, by depending on the truth of P in one's plans -- then one believes. Otherwise, one does not believe. And if someone is mixed up, sometimes steering P-ishly and sometimes not at all P-ishly, then that person's belief state is in between.
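One very rough way to put the capacity/tendency slogan into symbols (my gloss, not the post's own formulation, and it flattens the counterfactual subtleties just noted): let R(S, P, c) say that S responds P-ishly, other than by pure luck, on actual or counterfactual occasion c. Then, approximately:

\[
\text{knowing (given truth and adequate grounds):}\ \exists c\, R(S,P,c)
\qquad
\text{believing:}\ \text{for most relevant } c,\ R(S,P,c)
\]

On this gloss, Ben meets the existential, capacity-style condition but only patchily meets the generic, tendency-style condition -- which is why he reads as a clear knower but only an in-between believer.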

Consider another case:

Juliet the Implicit Racist: Juliet is a Caucasian-American philosophy professor. Like most such professors, she will sincerely assert that all the races are intellectually equal. In fact, she has better grounds for saying this than most: She has extensively examined the academic literature on racial differences in intelligence and she finds the case for intellectual equality compelling. She will argue coherently, authentically, and vehemently for that conclusion. Yet she is systematically racist in most of her day-to-day interactions. She (falsely) assumes that her black students will not be as bright as her white and Asian students. She shows this bias, problematically, in the way she grades their papers and leads class discussion. When she's on a hiring committee for an office manager, she will require much more evidence to become convinced of the intelligence of a black applicant than of a white applicant. And so on.

Does Juliet believe that all the races are intellectually equal? I'd say that the best answer to that question is an in-betweenish "kind of" -- and in some attributional contexts (for example, two black students talking about whether to enroll in one of her classes) a simple "no, she doesn't think black people are as smart as white people" seems a fair assessment. At the same time, let me suggest that Juliet does know that all the races are intellectually equal: She has the information and the capacity to respond knowledgeably even if she often fails to deploy that capacity. She is like the person who knows how to juggle five balls but can only pull it off sporadically or when conditions are just right.

(Thanks to David Hunter, in conversation, for the slogan "knowledge is a capacity, belief a tendency".)

10 comments:

  1. One interesting thing about this point is that many (most?) (almost all?) of the philosophers who propounded this analysis of knowledge would have claimed that Ben continues to believe that the bridge is closed.

  2. Can one be knowledgeable about something based mostly on belief? (For example, BYU’s online course “The Living Prophets.”)

  3. @ Justin: Do you think so? The other option, of course, is to deny that Ben knows. After all, he seems to have momentarily forgotten.

    @ Jett: I don't know about the BYU course, but I'll stick by condition 1 on propositional knowledge. P needs to be *true*. And I'd be disinclined to grant knowledge, even of true propositions, if that knowledge is not grounded in good evidence or generated by some process that reliably enough tracks the truth.

  4. If I took the course, and at the end I could tell you their full names; birth/death dates and places; parents, spouses, and children; education; dates of leadership in the LDS church; content of their "revelations" in their writings - would I have knowledge of "the Living Prophets" - even if I don't believe that they are?

  5. Eric,

    As you know, I like this distinction. However, I wonder if it still borrows a bit too much from the essentialism of analytic philosophy, at least in so far as the cognition underlying "know that" seems to present itself in contexts distinct from "know how". I wonder if even less common uses of "know" might better get at any overarching conceptual metaphor. For example, Adam "knew" Eve and they had a baby. As discussed before, there seems to be a sense in which KNOWING IS SEEING, and, alternatively, I can know empathy, know love, know hate, and know what it is like to be me while swimming. There seems to be a sense of grasping, possession, indwelling, which might be consistent with your comments about the "information" we possess when we 'have' "knowledge". I am still of the opinion that ascriptions of belief are simplifying explanations rooted in linguistic statements, as would be the case with 'know that' as well. Consistent with this take, it seems, we have the illusion that by 'knowing that' we somehow have special possession of the timeless 'content' of a linguistic statement. Just a thought.

  6. Sorry about the slow reply, all!

    @ Jett: My official answer is: "kind of"! One of the things I argue for repeatedly is that "kind of" is a perfectly fine answer to questions about belief. There doesn't have to be a definite yes or no about it.

    @ Michael: Yes, I like that thought. I think that focusing on cases of knowing other than knowing that helps us think about knowledge in a way that brings out the possible dissociations between knowing that and believing that.

    @ Heather: That looks like a useful site, but I'm not interested in any official relationship.

  7. Hi Eric

    I'm not sure the example of Ben convinces me. I'm inclined to say that Ben believed all along that the bridge was closed, but simply was not attending to that belief. For example, I believe, and have believed for many years, that my third grade teacher was a great teacher. But I did not attend to that belief until I thought of that example. Perhaps Ben was in the same state as he drove his car. He was thinking of other things--what he had to do when he got there, what was on the radio, etc. But had he asked himself whether he should continue to drive to work on the normal route, he would have remembered that the bridge was closed.

    Now suppose that had he asked himself that question, he would not have remembered that the bridge was closed. In that case, he does not believe that the bridge was closed. But now I would say that he does not know the bridge was closed either. Upon coming to the closed bridge, I am inclined to say that at that moment he remembers that he once knew and believed that the bridge was closed, but forgot both of these things.

    Maybe I say this because I have been trained in analytical philosophy. I am interested to see the results of the survey on the issue though.

  8. Thanks, Scott. Yes, I recognize that many people -- I'm guessing especially English-speaking analytic philosophers -- will have intuitions like yours. The case of Ben is not, I think, compelling; maybe it's an intuitive tie. (I don't think the "asking yourself" test is a good test of belief, but others might.) I have two resources to break the tie: folk intuition (if it goes my way) and pragmatic arguments about the value of thinking about knowledge and belief in certain ways, given that it's at least *open* to us to think of them in those ways.

  9. A couple of comments/questions in response to your post:

    1) I may be misunderstanding your notion of “in-between beliefs”, but it seems you're presupposing the falsity of epistemic accounts of vagueness. According to an epistemic account of vagueness, for any well-formed statement involving a vague predicate (e.g., “X is tall”) there will be a fact of the matter as to whether the statement is true, even if the truth-value of such a statement is sometimes unknowable (as is typically the case in borderline instances). If this epistemic account is correct, someone might grant your claim that belief is vague, while still insisting there is always a determinate fact of the matter about whether S believes p. Furthermore, such a person might grant that the cases you offer are borderline cases, while refraining from drawing the metaphysical conclusion you appear to draw (namely, that there is some doxastic state which is neither belief nor suspension of belief, but is somehow in between).

    2) One natural response to the case of Juliet is to insist that she has inconsistent beliefs. In your post, you don’t offer any considerations that count against this hypothesis. In your recent paper on this topic ("Acting contrary to our professed beliefs"), you offer some arguments against this claim, but, as far as I can tell, the main arguments appear to be i) that such a proposal leaves open a number of unanswered questions, and ii) that such a proposal is incompatible with a dispositional account of belief. However, this seems much too quick. With regard to i), it should be noted that all current philosophical accounts of belief leave open a number of unanswered questions; I don’t see any reason to suppose that the unanswered questions in this case are in principle unanswerable. With regard to ii), I don’t see why a dispositionalist couldn’t hold the following view: there are various “compartments” in an individual’s belief box; in some circumstances, one compartment might be “open” in the sense that the beliefs stored within are consciously accessible and/or causally efficacious; in other circumstances, the same compartment might be “closed”. On this view, what makes a propositional attitude a belief is still (at least in part) its dispositional profile: if one believes that p, then, necessarily, one acts as though p is true when the compartment containing one’s belief that p is open. Such a dispositionalist will be able to countenance the possibility that a person could simultaneously believe p and believe not-p, as long as the two beliefs are stored in different compartments. (If I recollect correctly, Stalnaker and Lewis both offer accounts of belief that are broadly dispositionalist, while allowing that people could have different belief compartments.) Personally, I find this view attractive, since I think there are strong independent reasons for holding that it is possible for agents to have inconsistent beliefs. (For instance, I think the best solution to the case of Puzzling Pierre is to say that he has inconsistent beliefs about London and Paderewski.)

    3) In your response to Scott, you suggest that “folk intuition” might break the tie between philosophers' conflicting intuitions about your proposed cases. But why should we place confidence in folk intuitions about beliefs? Recent work in experimental philosophy suggests that the folk’s intuitions about mental state possession are highly variable, and are influenced by such effects as order of presentation of cases. (I’m thinking, for instance, of the order effects that Swain, Alexander, and Weinberg have demonstrated using Lehrer’s Truetemp example.) So why think that the folk are reliable indicators about Ben and Juliet’s doxastic states? We don’t think that polling the folk is likely to answer difficult questions in physics; why think that polling the folk is likely to answer difficult questions in psychology?

  10. Anon Feb 13: Thanks for those very thoughtful comments!

    On 1: Yes, I'm just assuming the falsity of epistemicism about vagueness. The position seems very strange to me. Maybe there's a way to translate what I want to say into epistemicist terms, but I haven't tried to do so.

    On 2: It might be interesting to further develop a dispositionalist compartmentalist view. My guess, though, is that the compartments will create worse problems than they solve. My objection to the "part of me believes P, part believes -P" view is not just that there are unanswered questions but that the questions seem unlikely to admit of happy answers. Admittedly, I don't defend or develop that accusation. It would help if I could see a psychologically realistic version of that view, which really tries to address hard questions. I'm thinking that probably it would turn into what I would see as a here's-one-common-way-of-splintering view.

    On 3: Fair enough.
