Friday, January 29, 2010

Knowledge Is a Capacity, Belief a Tendency

In epistemology articles and textbooks (e.g., in the Stanford Encyclopedia), you'll often see claims like the following. S (some person) knows that P (some proposition) only if:

(1.) P is true.
(2.) S believes that P.
(3.) S is justified in believing P.
Although many philosophers (following Gettier) dispute whether someone's meeting these three conditions is sufficient for knowing P, and a few (like Dretske) also dispute the necessity of condition 3, pretty much everyone accepts that the first two conditions are necessary for knowledge -- or necessary at least for "propositional" knowledge, i.e., knowing that [some proposition is true], as opposed to, for example, knowing how [to do something].
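
Put schematically (my notation, not any particular textbook's), with K(S, P) for "S knows that P" and B(S, P) for "S believes that P", the two supposedly uncontroversial conditions are necessity claims:

\[ K(S, P) \rightarrow P \qquad \text{and} \qquad K(S, P) \rightarrow B(S, P) \]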

But it's not clear to me that knowing a fact requires believing it. Consider the following case:

Ben the Forgetful Driver: Ben reads an email and learns that a bridge he normally drives across to get to work will be closed for repairs. He immediately realizes that he will have to drive a different route to work. The next day, however, he finds himself on the old route, headed toward the closed bridge. He still knows, I submit, in that forgetful moment, that the bridge is closed. He has just momentarily failed to deploy that knowledge. As soon as he sees the bridge, he'll smack himself on the forehead and say, "The bridge is closed, of course, I know that!" However, contra the necessity of (2) above, it's not clear that, in that forgetful moment as he's driving toward the bridge, he believes (or more colloquially, thinks) the bridge is closed. He is, I think, actually in an in-between state of believing, such that it's not quite right to say that he believes that the bridge is closed but also not quite right to deny that he believes the bridge is closed. It's a borderline case in the application of a vague predicate. (Compare: is a man tall if he is 5 foot 11 inches?) So: We have a clear case of knowledge, but only an in-betweenish, borderline case of belief.

Although I find that a fairly intuitive thing to say, I reckon that that intuition will not be widely shared by trained epistemologists. But I'm willing to wager that a majority of ordinary English-speaking non-philosophers will say "yes" if asked whether Ben knows the bridge is closed and "no" if asked whether he believes or thinks that the bridge is closed. (Actual survey results on related cases are pending, thanks to Blake Myers-Schulz.)

One way of warming up to the idea is to think of it this way: Knowledge is a capacity, while belief is a tendency. Consider knowing how to do something: I know how to juggle five balls if I can sometimes succeed, other than by pure luck, even if most of the time I fail. As long as I have the capacity for appropriate responding, I have the knowledge, even if that capacity is not successfully deployed on most relevant occasions. Ben has the capacity to respond knowledgeably to the closure of the bridge; he just doesn't successfully deploy that capacity. He doesn't call up the knowledge that he has.

Believing that P, on the other hand, involves generally responding to the world in a P-ish way. (If the belief is often irrelevant to actual behavior, this generality might be mostly in counterfactual possible situations.) Believing is about one's overall way of steering cognitively through the world. (For a detailed defense of this view, see here and here.) If one acts and reacts more or less as though P is true -- for example by saying "P is true", by inferring Q if P implies Q, by depending on the truth of P in one's plans -- then one believes. Otherwise, one does not believe. And if one is mixed up, sometimes steering P-ishly and sometimes not at all P-ishly, then one's belief state is in between.
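
In rough quantificational terms (a loose gloss of mine, not a serious analysis), the capacity/tendency contrast comes to this:

\[ \text{capacity: } \exists o \, [\text{S responds P-ishly, non-luckily, on occasion } o] \]
\[ \text{tendency: for most relevant occasions } o, \text{ S responds P-ishly on } o \]

Knowledge, so glossed, requires only a non-lucky existential; belief requires something closer to a majority. Ben satisfies the first while, in his forgetful moment, only borderline-satisfying the second.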

Consider another case:

Juliet the Implicit Racist: Juliet is a Caucasian-American philosophy professor. Like most such professors, she will sincerely assert that all the races are intellectually equal. In fact, she has better grounds for saying this than most: She has extensively examined the academic literature on racial differences in intelligence, and she finds the case for intellectual equality compelling. She will argue coherently, authentically, and vehemently for that conclusion. Yet she is systematically racist in most of her day-to-day interactions. She (falsely) assumes that her black students will not be as bright as her white and Asian students. She shows this bias, problematically, in the way she grades papers and leads class discussion. When she's on a hiring committee for an office manager, she will require much more evidence to become convinced of the intelligence of a black applicant than of a white applicant. And so on.

Does Juliet believe that all the races are intellectually equal? I'd say that the best answer to that question is an in-betweenish "kind of" -- and in some attributional contexts (for example, two black students talking about whether to enroll in one of her classes) a simple "no, she doesn't think black people are as smart as white people" seems a fair assessment. At the same time, let me suggest that Juliet does know that all the races are intellectually equal: She has the information and the capacity to respond knowledgeably even if she often fails to deploy that capacity. She is like the person who knows how to juggle five balls but can only pull it off sporadically or when conditions are just right.

(Thanks to David Hunter, in conversation, for the slogan "knowledge is a capacity, belief a tendency".)

Monday, January 18, 2010

Supersizing Introspection

I've always enjoyed Andy Clark's work (hence my desire to emulate his drink preferences), but I hadn't ('til now) got around to reading his latest book, Supersizing the Mind. Clark is one of the leading advocates of the view that cognitive processes extend beyond the boundaries of the brain to include aspects of the body and environment. The boundary of skull and skin is no privileged border within which all human cognition must take place. If mental images of Scrabble letters are part of your cognitive process when thinking about your next play, then so also are the actual physical tiles when you manipulate and use them in an analogous way. When you work to create an environment that helps you remember, your knowledge is partly distributed into that environment. The mind is not just in the skull; it is "supersized".

I've been struggling lately to develop a general account of what introspection is. I characterize my view as "pluralist" -- I think a variety of mechanisms drive what are rightly thought of as introspective judgments. It now suddenly dawns on me that what I'm really doing is "supersizing" introspection. Introspective processes -- what are sometimes thought of as the most "inward" things there are -- often include the body and world, and broader aspects of the mind than is generally supposed.

How do I know what emotion I'm in? Do I turn on the inner emotion-scanner mechanism, which then produces the judgment that I'm (say) envious? How do I know my preferences? My imagery? My sensory experience? Philosophical opinion basically divides into two camps: First (probably the mainstream) are those who advocate "detection-after" accounts, according to which I have the experience (or other mental process in question) and, once that process completes (and maybe also while it continues), a separate scanning process of some sort detects the presence or absence of that state.

Second are those who advocate one or another of a variety of non-detection processes. One example is Alex Byrne, who holds that figuring out whether I believe that P (e.g., whether it will rain tomorrow) involves figuring out whether P is true (that is, whether it really will rain tomorrow -- a fact about the outside world) and then applying a belief-formation rule according to which, when P is true, it is permissible to form the belief that one believes that P. On such a view, we know our beliefs not by introspection, but rather by "extrospection" of the outside world, plus the application of some simple inference rule. Similarly, we might learn about our visual experience by attending not to the visual experience itself but rather to the outward objects that we are seeing. We might learn about our emotions just by attending, proprioceptively, to states of our body. There is no turning in, no self-scanning of the mind, in introspection.
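
In schematic form (my paraphrase, not Byrne's exact formulation), the rule is a first-person inference from a worldly premise to a mental conclusion:

\[ \frac{p}{\text{I believe that } p} \]

From the premise that it will rain tomorrow, I am entitled to conclude that I believe it will rain tomorrow; only the conclusion, not the premise, is about my mind.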

It has always seemed to me that both types of view are partly right: Contra the detection-after views, it seems to me unlikely that introspection is the operation of a simple subpersonal scanning module wholly distinct from the cognitive process that is the target of introspection, and outward-looking processes must be part of the story. Contra the outward-looking views, however, it seems to me that outward-looking processes, too, are only part of the story.

Okay, so how do I know that I'm feeling envious? Partly, I look outward: I notice that I am in the type of situation that is apt to promote envy. Someone has something valuable that I don't have. Maybe I look more carefully at or think more carefully about that thing itself. Partly, perhaps, I notice, proprioceptively, my own physical state -- an arousal of a certain sort. Maybe I notice that I have a visual image of the person suffering a painful death. (How do I know what imagery I have? Well, maybe that's by introspection, too, and there will be a pluralist story to tell there also.) Maybe I try turning my thoughts toward what is enviable and not enviable about this person and notice whether my bodily arousal crests and falls. Maybe, in the very labeling of myself as "envious", I partly make it true; I was feeling more diffusely negative before, and the label crystallizes it. On top of such processes, I see no reason to reject the possibility, and I see several reasons to accept the possibility, that there are subpersonal causal processes (not necessarily the operation of dedicated modules) that show some sort of sensitivity directly to the emotional state itself, i.e., that work directly to increase the likelihood of my reaching the judgment that I'm envious, given that I am indeed envious.

I doubt we can usefully carve out some subpart of this multifarious mash-up and say that it, alone, is the "introspective process". I think that introspection, like much of cognition according to Clark, is multi-faceted, partly in short connections in the head, partly in broad interactions in the head, and partly spread out into the body and environment.

This is, of course, not independent of my view that we often get our introspective judgments badly wrong.

Friday, January 08, 2010

British Tour

I'll be speaking around Britain the next couple of weeks. Here's my schedule, if anyone wants to come to a talk or meet for coffee:

Tues Jan 12, 12:30 pm: Arrive in London (overnighting in Oxford until the 19th).

Thurs Jan 14, 12:00 pm, University of London: "Acting Contrary to Our Professed Beliefs, or the Gulf Between Occurrent Judgment and Dispositional Belief" (Institute of Philosophy, School of Advanced Study).

Fri Jan 15, 4:00 pm, Bristol University: "Introspection, What?" (Common Room, Philosophy Department, 9 Woodland Road).

Sat Jan 16, 9:45 am - 6:00 pm, Oxford University: Limitations of Introspection Workshop. 1:45 pm: "Introspection, What?" (JCR Lecture Theatre, St. Catherine's College).

Mon Jan 18, 12:30 pm, Oxford University: "The Moral Behavior of Ethics Professors" (Wellcome Center for Neuroethics, Old Indian Institute).

Tues Jan 19, 6:00 pm, University of Edinburgh: "An Empirical Perspective on the Mencius-Xunzi Debate about Human Nature" (Abden House, Confucius Institute for Scotland).

Wed Jan 20, 11:00 am, University of Edinburgh: Seminar in Philosophy, Psychology, and the Language Sciences.

Wed Jan 20, 5:00 pm, University of Edinburgh: "The Moral Behavior of Ethics Professors" (Department of Philosophy).

Thurs Jan 21, 12:00 pm, University of Leeds: "Acting Contrary to Our Professed Beliefs, or the Gulf Between Occurrent Judgment and Dispositional Belief" (CETL, Philosophy Department).

Thurs Jan 21, 5:00 pm, University of York: "Introspection, What?" (Department of Philosophy).

Fri Jan 22, University of Warwick: "Acting Contrary to Our Professed Beliefs, or the Gulf Between Occurrent Judgment and Dispositional Belief" (Department of Philosophy).

Sat Jan 23, 4:00 pm: Depart from London.

Thursday, January 07, 2010

Might Ethicists Behave More Permissibly but Also No Better?

I've been thinking a fair bit about the relationship between moral reflection and moral behavior -- especially in light of my findings suggesting that ethicists behave no better than non-ethicists of similar social background. I've been working with the default assumption that moral reflection can and often does improve moral behavior; but I'm also inclined to read the empirical evidence as suggesting that people who morally reflect a lot don't behave, on average, better than those who don't morally reflect very much.

Those two thoughts can be reconciled if, about as often as moral reflection is morally salutary, it goes wrong in one of the following ways:

* it leads to moral skepticism or nihilism or egoism,
* it collapses into self-serving rationalization, or
* it reduces our ability to respond unreflectively in good ways.
But all this is rather depressing, since it suggests that if my aim is to behave well, there's no point in morally reflecting -- the downside is as big as the upside. (Or there's no point, at least, unless I can find a good way to avoid those risks, and I have no reason to think I'm a special talent.)

But it occurs to me now that the following empirical claim might be true: The majority of our moral reflection concerns not what it would be morally good to do but rather whether it's permissible to do things that are not morally good. So, for example, most people would agree that donating to well-chosen charities and embracing vegetarianism would be morally good things to do. (On vegetarianism: Even if animals have no rights, producing meat causes more pollution than producing a comparable vegetarian diet.) When I'm reflecting morally about whether to eat the slightly less appealing vegetarian dish or to donate money to Oxfam -- or to kick back instead of helping my wife with the dishes -- I'm not thinking about whether it would be morally good to do those things. I take it for granted that it would be. Rather, I'm thinking about whether not doing those things is morally permissible.
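
In rough deontic notation (my gloss, with O for "obligatory" and P for "permissible"), the question I'm typically asking about some admittedly good act A is not whether A is good but whether

\[ P(\neg A), \quad \text{equivalently} \quad \neg O(A), \]

holds -- that is, whether skipping A is permissible. Acts that are morally good but not obligatory are the supererogatory ones.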

So here, then, is a possibility: Those who reflect a lot about ethics have a better sense of which morally-less-than-ideal things really are permissible and which are not. This might make them behave morally worse in some cases -- for example, when most people do what is morally good but not morally required, mistakenly thinking it is required (e.g., voting? returning library books?); and it might make them behave morally better in others (e.g., vegetarianism?). On average, they might behave just about as well as non-ethicists, doing less that is supererogatory but better meeting their moral obligations. If so, then philosophical moral reflection might be succeeding quite well in its aim of regulating behavior, without actually improving it -- no skepticism or nihilism or rationalization or injury of spontaneous reactions required.