In "The Unreliability of Naive Introspection" (
here and
here), I argue, contra a philosophical tradition going back at least to Descartes,
that we have much better knowledge of middle-sized objects in the world around us than we do of our stream of sensory experience while perceiving those objects.
As I write near the end of that paper:
The tomato is stable. My visual experience as I look at the tomato shifts with each saccade, each blink, each observation of a blemish, each alteration of attention, with the adaptation of my eyes to lighting and color. My thoughts, my images, my itches, my pains – all bound away as I think about them, or remain only as interrupted, theatrical versions of themselves. Nor can I hold them still even as artificial specimens – as I reflect on one aspect of the experience, it alters and grows, or it crumbles. The unattended aspects undergo their own changes too. If outward things were so evasive, they’d also mystify and mislead.
Last Saturday, I defended this view for three hours before commentator Carlotta Pavese and a number of other New York philosophers (including Ned Block, Paul Boghossian, David Chalmers, Paul Horwich, Chris Peacocke, and Jim Pryor).
One question -- raised first, I think, by Paul B. and then later by Jim -- was this: Don't I know that I'm having a visual experience as of seeing a hat at least as well as I know that there is in fact a real hat in front of me? I could be wrong about the hat without being wrong about the visual experience as of seeing a hat; but to be wrong about having a visual experience as of seeing a hat -- well, maybe that's not impossible, but at least it's a weird, unusual case.
I was a bit rustier in answering this question than I would have been in 2009 -- partly, I suspect, because I never articulated in writing my standard response to that concern. So let me do so now.
First, we need to know what kind of mental state this is about which I supposedly have excellent knowledge. Here's one possibility: To have "a visual experience as of seeing a hat" is to have a visual experience of the type that is normally caused by seeing hats. In other words, when I judge that I'm having this experience, I'm making a causal generalization about the normal origins of experiences of the present type. But it seems doubtful that I know better what types of visual experiences normally arise in the course of seeing hats than I know that there is a hat in front of me. In any case, such causal generalizations are not the sort of thing defenders of introspection usually have in mind.
Here's another interpretative possibility: In judging that I am having a visual experience as of seeing a hat, I am reporting an inclination to reach a certain judgment. I am reporting an inclination to judge that there is a hat in front of me, and I am reporting that that inclination is somehow caused by or grounded in my current visual experience. On this reading of the claim, what I am accurate about is that I have a certain attitude -- an inclination to judge. But attitudes are not conscious experiences. Inclinations to judge are one thing; visual experiences another. I might be very accurate in my judgment that I am inclined to reach a certain judgment about the world (and on such-and-such grounds), but that's not knowledge of my stream of sensory experience.
(In a couple of other essays, I discuss self-knowledge of attitudes. I argue that our self-knowledge of our judgments is pretty good when the matter is of little importance to our self-conception and when the tendency to verbally espouse the content of the judgment is central to the dispositional syndrome constitutive of reaching that judgment. Excellent knowledge of such partially self-fulfilling attitudes is quite a different matter from excellent knowledge of the stream of experience.)
So how about this interpretative possibility? To say I know that I am having a visual experience as of seeing a hat is to say that I am having a visual experience with such-and-such specific phenomenal features, e.g., this-shade-here, this-shape-here, this-piece-of-representational-content-there, and maybe this-holistic-character. If we're careful to read such judgments purely as judgments about features of my current stream of visual experience, I see no reason to think we would be highly trustworthy in them. Such structural features of the stream of experience are exactly the kinds of things about which I've argued we are apt to err: what it's like to see a tilted coin at an oblique angle, how fast color and shape experience get hazy toward the periphery, how stable or shifty the phenomenology of shape and color is, how richly penetrated visual experience is with cognitive content. These are topics of confusion and dispute in philosophy and consciousness studies, not matters we introspect with near infallibility.
Part of the issue here, I think, is that certain mental states have both a phenomenal face and a functional face. When I judge that I see something or that I'm hungry or that I want something, I am typically reaching a judgment that is in part about my stream of conscious experience and in part about my physiology, dispositions, and causal position in the world. If we think carefully about even medium-sized features of the phenomenological face of such hybrid mental states -- about what, exactly, it's like to experience hunger (how far does it spread in subjective bodily space, how much is it like a twisting or pressure or pain or...?) or about what, exactly, it's like to see a hat (how stable is that experience, how rich with detail, how do I experience the hat's non-canonical perspective...?) -- we quickly reach the limits of introspective reliability. My judgments about even medium-sized features of my visual experience are dubious. But I can easily answer a whole range of questions about comparably medium-sized features of the hat itself (its braiding, where the stitches are, its size and stability and solidity).
Update, November 25 [revised 5:24 pm]:
Paul Boghossian writes:
I haven't had a chance to think carefully about what you say, but I wanted to clarify the point I was making, which wasn't quite what you say on the blog, that it would be a weird, unusual case in which one misdescribes one's own perceptual states.
I was imagining that one was given the task of carefully describing the surface of a table and giving a very attentive description full of detail of the whorls here and the color there. One then discovers that all along one has just been a brain in a vat being fed experiences. At that point, it would be very natural to conclude that one had been merely describing the visual images that one had enjoyed as opposed to any table. Since one can so easily retreat from saying that one had been describing a table to saying that one had been describing one's mental image of a table, it's hard to see how one could be much better at the former than at the latter.
Roger White then made the same point without using the brain-in-a-vat scenario.
I do feel some sympathy for the thought that you get something right in such a case -- but what exactly you get right, and how dependably... well, that's the tricky issue!