Friday, November 22, 2013

Introspecting My Visual Experience "as of" Seeing a Hat?

In "The Unreliability of Naive Introspection" (here and here), I argue, contra a philosophical tradition going back at least to Descartes, that we have much better knowledge of middle-sized objects in the world around us than we do of our stream of sensory experience while perceiving those objects.

As I write near the end of that paper:

The tomato is stable. My visual experience as I look at the tomato shifts with each saccade, each blink, each observation of a blemish, each alteration of attention, with the adaptation of my eyes to lighting and color. My thoughts, my images, my itches, my pains – all bound away as I think about them, or remain only as interrupted, theatrical versions of themselves. Nor can I hold them still even as artificial specimens – as I reflect on one aspect of the experience, it alters and grows, or it crumbles. The unattended aspects undergo their own changes too. If outward things were so evasive, they’d also mystify and mislead.

Last Saturday, I defended this view for three hours before commentator Carlotta Pavese and a number of other New York philosophers (including Ned Block, Paul Boghossian, David Chalmers, Paul Horwich, Chris Peacocke, and Jim Pryor).

One question -- raised first, I think, by Paul B., then later by Jim -- was this: Don't I know that I'm having a visual experience as of seeing a hat at least as well as I know that there is in fact a real hat in front of me? I could be wrong about the hat without being wrong about the visual experience as of seeing a hat, but to be wrong about having a visual experience as of seeing a hat -- well, maybe that's not impossible, but at least it's a weird, unusual case.

I was a bit rustier in answering this question than I would have been in 2009 -- partly, I suspect, because I had never articulated my standard response to that concern in writing. So let me do so now.

First, we need to know what kind of mental state this is about which I supposedly have excellent knowledge. Here's one possibility: To have "a visual experience as of seeing a hat" is to have a visual experience of the type that is normally caused by seeing hats. In other words, when I judge that I'm having this experience, I'm making a causal generalization about the normal origins of experiences of the present type. But it seems doubtful that I know better what types of visual experiences normally arise in the course of seeing hats than I know that there is a hat in front of me. In any case, such causal generalizations are not the sort of thing defenders of introspection usually have in mind.

Here's another interpretative possibility: In judging that I am having a visual experience as of seeing a hat, I am reporting an inclination to reach a certain judgment. I am reporting an inclination to judge that there is a hat in front of me, and I am reporting that that inclination is somehow caused by or grounded in my current visual experience. On this reading of the claim, what I am accurate about is that I have a certain attitude -- an inclination to judge. But attitudes are not conscious experiences. Inclinations to judge are one thing; visual experiences another. I might be very accurate in my judgment that I am inclined to reach a certain judgment about the world (and on such-and-such grounds), but that's not knowledge of my stream of sensory experience.

(In a couple of other essays, I discuss self-knowledge of attitudes. I argue that our self-knowledge of our judgments is pretty good when the matter is of little importance to our self-conception and when the tendency to verbally espouse the content of the judgment is central to the dispositional syndrome constitutive of reaching that judgment. Excellent knowledge of such partially self-fulfilling attitudes is quite a different matter from excellent knowledge of the stream of experience.)

So how about this interpretative possibility? To say I know that I am having a visual experience as of seeing a hat is to say that I am having a visual experience with such-and-such specific phenomenal features, e.g., this-shade-here, this-shape-here, this-piece-of-representational-content-there, and maybe this-holistic-character. If we're careful to read such judgments purely as judgments about features of my current stream of visual experience, I see no reason to think we would be highly trustworthy in them. Such structural features of the stream of experience are exactly the kinds of things about which I've argued we are apt to err: what it's like to see a tilted coin at an oblique angle, how fast color and shape experience get hazy toward the periphery, how stable or shifty the phenomenology of shape and color is, how richly penetrated visual experience is with cognitive content. These are topics of confusion and dispute in philosophy and consciousness studies, not matters we introspect with near infallibility.

Part of the issue here, I think, is that certain mental states have both a phenomenal face and a functional face. When I judge that I see something or that I'm hungry or that I want something, I am typically reaching a judgment that is in part about my stream of conscious experience and in part about my physiology, dispositions, and causal position in the world. If we think carefully about even medium-sized features of the phenomenal face of such hybrid mental states -- about what, exactly, it's like to experience hunger (how far does it spread in subjective bodily space, how much is it like a twisting or pressure or pain or...?) or about what, exactly, it's like to see a hat (how stable is that experience, how rich with detail, how do I experience the hat's non-canonical perspective...?) -- we quickly reach the limits of introspective reliability. My judgments about even medium-sized features of my visual experience are dubious. But I can easily answer a whole range of questions about comparably medium-sized features of the hat itself (its braiding, where the stitches are, its size and stability and solidity).

Update, November 25 [revised 5:24 pm]:

Paul Boghossian writes:

I haven't had a chance to think carefully about what you say, but I wanted to clarify the point I was making, which wasn't quite what you say on the blog, that it would be a weird, unusual case in which one misdescribes one's own perceptual states.

I was imagining that one was given the task of carefully describing the surface of a table and giving a very attentive description full of detail of the whorls here and the color there. One then discovers that all along one has just been a brain in a vat being fed experiences. At that point, it would be very natural to conclude that one had been merely describing the visual images that one had enjoyed as opposed to any table. Since one can so easily retreat from saying that one had been describing a table to saying that one had been describing one's mental image of a table, it's hard to see how one could be much better at the former than at the latter.

Roger White then made the same point without using the brain-in-a-vat scenario.

I do feel some sympathy for the thought that you get something right in such a case -- but what exactly you get right, and how dependably... well, that's the tricky issue!

9 comments:

  1. I'm not sure I understand the question, but an experiment comes to mind where you tell unwitting volunteers that they are trying out a new voice-operated computer system. They are to ask the computer for details about a hat, one that maybe the volunteers can see distantly/dimly through a view port, for the computer to report (ostensibly to test the computer's language range).

    But it's not a computer; it's just a person, even oneself. You never show up in the experiment, and the volunteers' many questions will be phrased not as if to something that is having a qualia-based experience, even as they take in information and ask further questions based on it.

    (though there might be a range of attitudes taken by various volunteers, some of whom might ascribe qualia to the 'computer' because of something in its responses)

    It would remove the issue we have in conversation, where standard conversation involves us affirming each other (I mean, you need to do that to get points across, yet at the same time that can get hijacked into affirming qualia). It'd be a kind of alienation experiment.

  2. Callan: Interesting thought, if I'm getting you right. Whatever the "computer" is thought to be right about, it can't be its experience, because it doesn't *have* experience. A kind of inverse brain-in-a-vat!

  3. This is very close to the discussion I had with Tim Bayne on Friday, when I mentioned that the interesting thing about the cognitive phenomenology debate wasn't the 'yes/no' divide, but the fact that both sides implicitly agreed that far, far less information was available for metacognition than was available for cognition. In this case the apparent puzzle becomes no more puzzling than that of people arguing over the existence of the bottommost line on an eye chart. He, like your interlocutors, immediately went epistemic: if you realize you've locked your keys in your car, what could be more *obvious* than that you had the thought, 'I locked my keys in my car!' He took this clarity to indicate that cognition possesses phenomenology.

    If you had switched registers, Eric, and started explicitly talking about the *dimensionality* of the information available - which is what you seem to be implicitly referencing with your list of 'functional' differences - then I think you would have pulled your interlocutors onto ground where their intuitions are less likely to confuse them. On a metacognitive heuristic view like my own, the apparent clarity of some low-dimensional information - the fact that you clearly know that you had experience a or thought x and yet aren't able to say much of anything else with any clarity at all - entirely makes sense. This is the only dimension the heuristic has adapted to access. Metacognitive heuristics are 'dimensionally opportunistic,' skimming only as much or as little as needed to solve for whatever evolutionary pressure motivates the crazy metabolic expense of endogenously tracking the most complicated thing we presently know.

  4. Yes, that seems like a helpful way to think of it, Scott -- though I think you need two more theses to get error about phenomenology. First, the richness/subtlety/dimensionality of the phenomenology needs to exceed that of the meta-cognitive judgments about it (Dennett might deny this); and second, we must be prone to confabulate about that phenomenology or theorize about it on dubious grounds when pressed to extend those meta-cognitive judgments beyond their usual fairly safe skeletal contents. Those two theses both seem plausible to me.

    Tim Bayne has an interesting paper with Maja Spener, by the way, that is more nuanced about introspective reliability than comes across in your description above.

  5. I'm pretty much convinced (similar to Dennett) that there's no 'phenomenology' distinct from our various acts of metacognition - only neural activity (if one insists on stuffing the issue into the knower/known format) that we have no possible means of 'intuiting' as such. The neuroendogenous mechanics of the Inverse Problem are so imposing that I can't see how there could be!

    On this way of looking at things, neglect-driven confabulation simply becomes the default: the notion (as Carruthers argues) that the information accessed comes tagged with extra information regarding 'veridicality' for the purposes of a cognitive *invention* like theoretical metacognition becomes the difficult thing to understand. The heuristics involved have to be among the 'fastest and most frugal' that we possess, given the complexities involved.

    Thanks for the tip on the paper!

  6. Scott, I'm not sure about that first point, though I think it might be workable as long as metacognition isn't *too* expensive. If metacognition needs to manifest as full-blown explicit judgment, then it seems to me hard to deny that the stream of experience has a complexity that outruns that. My current visual experience, as I am attending to it now, has a range of structure and detail that I cannot capture, or at least do not capture, in explicit judgment (and about which I can be wrong).

    Would you disagree with that?

  7. Yet you agree there's no 'attending to a range of structure and detail' short of metacognition. On BBT, the sedimentation of any system into 'subject' and 'object' is itself heuristic, an artifact of the difficulty of cognizing systems from within those systems. It works as well as it does in instances of environmental cognition because the systems tracked within the superordinate 'metasystem' (of system-tracking-system) follow a generally independent causal trajectory. This gives the subject/object heuristic a vast problem ecology: namely, all those systems that follow generally independent trajectories. The inverse problem is typically as soluble as we need in these instances. As soon as this independence is compromised, you run into all the well-known problems pertaining to 'observer effects.'

    So on this gloss, metacognitive disputes like those pertaining to cognitive phenomenology simply cannot be solved short of an empirical understanding of the mechanisms involved (of which I'm offering only the haziest sketch). The brain is the only thing to be 'wrong about.' So when you advert to the way the 'richness of experience attended to' outruns your ability to make it explicit, it seems like you have a recipe for good old-fashioned - environmental - error: you have something relatively independent for an explicit metacognitive judgment to be 'wrong about.' But you're already in the observer-effect slurry, using one metacognitive judgment (experience is rich) to gerrymander a scene congenial to subject/object heuristic cognition, thus allowing you to attach all the attendant intuitions regarding evaluability and so on to a second (more explicit?) metacognitive judgment.

    Sorry, I know that's wordy, but it's a tricky thing convincing beetles they're trapped inside boxes!


  8. I agree that the whole thing is goofed up by interactions between the process of judging and the process that is the target of the judgment, and that those processes themselves are not entirely ontologically separable. It's crazy spaghetti all the way down!

    But I don't think that means we should just defer to neuroscience. Two reasons: (1.) The relationship between low-level science and high-level phenomena is complex and maybe even intractable (see Cartwright and Dupré), and (2.) idealism might be right, and all the spatial features that figure in science might depend upon mentality (e.g., as in my post "Kant meets cyberpunk"), or their relationship might be more complex than standard materialist stories allow.

  9. I grant that both are possibilities, but I think their attractiveness turns on the apparent naturalistic intractability of intentionality. Since I think that intractability dissolves once you take the neglect perspective, I'm more inclined to think good old-fashioned mechanical emergence will suffice to tell the story - in fact, I think it already does!
