Tuesday, February 28, 2012

Cohen and Dennett on Reportability and Consciousness

I was struck by the following imaginary dialogue in a recent article by Michael Cohen and Daniel Dennett. In the dialogue, P is a patient and F&L are Fahrenfort and Lamme, the targets of C&D's critique.

F&L: ‘You are conscious of the redness of the apple.’

P: ‘I am? I don’t see any color. It just looks grey. Why do you think I’m consciously experiencing red?’

F&L: ‘Because we can detect recurrent processing in color areas in your visual cortex.’

P: ‘But I really don’t see any color. I see the apple, but nothing colored. Yet you still insist that I am conscious of the color red?’

F&L: ‘Yes, because local recurrency correlates with conscious awareness.’

P: ‘Doesn’t it mean something that I am telling you I’m not experiencing red at all? Doesn’t that suggest local recurrency itself isn’t sufficient for conscious awareness?’
I think we are meant to find F&L's insistence preposterous, or at least misguided.

Is it preposterous or misguided? I feel the attraction of saying so. But imagine this parallel case, in which the patient's report is of a visually seen object and the scientists are speaking to the patient from another room in the building:
Scientists: ‘You are looking directly at an apple that is on the table two feet in front of you.’

Participant: ‘I am? I don’t see an apple. It just looks like an empty table. Why do you think I’m looking at an apple?’

S: ‘Because we put the apple there, and we are monitoring it with a suite of video cameras and other devices [fill in further convincing details].’

P: ‘But I really don’t see an apple. I see the table, but no apple on it. Yet you still insist that there's an apple there?’

S: ‘Yes, because [repeat suite of persuasive empirical evidence].’

P: ‘Doesn’t it mean something that I am telling you I’m not seeing the apple? Doesn’t that suggest that your video cameras, etc., are broken?’
[Update, Mar. 16: Just to be clear: The dispute I am imagining is not about P's mental life. It's about the fact of whether there is actually an apple on the table.]

Here's my thought: This is a fairly bizarre situation. One wonders if the scientists' cameras are broken after all. But whether to believe the scientists or the participant is going to depend on further details. How trustworthy is the scientists' equipment? Is there some plausible reason to think the participant might really be mistaken? For example, what if the participant had been given a post-hypnotic suggestion? What if the participant were viewing the apple monocularly and the apple were being manipulated so as to remain always in the participant's blind spot? Then we might have excellent reason to believe the scientists over the participant.

I would suggest that the introspective case is epistemically similar. It doesn't hands-down favor the patient. It depends on the details. It depends on such things as the trustworthiness of the external measure of consciousness and whether a reasonable explanation of the patient's error is available. We should not, contra Cohen's and Dennett's apparent intention, conclude from this thought experiment that self-reports of experience are in any strong sense "incorrigible".

(Dennett and I have already gone around a few times on this issue: here and here and here.)

Update, March 13:
Cohen and Dennett have given me permission to post the following reply on their behalf:

Thanks to everyone for the comments! Here are just a few points of clarification of our view.

For us, the subject's words are not the ultimate criterion or touchstone of consciousness. Not at all! Change blindness and inattentional blindness and many other phenomena demonstrate how people often overestimate their knowledge of their own experiences. But if we want to understand everyday subjective experience—and that is what people generally have in mind when they speak of consciousness—we have to start with the understanding that in general people have access to their own experience! But theorists like Block, Lamme, and colleagues want to go a step beyond that. They want to say that there are conscious experiences that you can't even access yourself. In other words, there are experiences that you can't talk about, you can't report, you can't remember, you can't make decisions about, you can't plan to act one way or another because of, and even if you're directly probed, you will still not realize you're having them. (This is what Block and Lamme mean when they insist that there can be phenomenal consciousness without access consciousness; see Lamme, 2003, TICS, where he talks about conscious states you can't access or realize you're having.)

We do not at all deny that there are instances in which a scientist can make observations about a subject's neural state that the subject can't report. There are too many instances in the scientific literature to list here. For example, it's easy with current technology to measure fluctuations in primary visual cortex (V1) using fMRI to which the subject has no access. Not only can he not report the change in his V1, he can't even recognize that something on the screen is changing and causing those neural changes. The idea that subjective report is the only tool we have for consciousness studies is not at all our view. Our view is that there are two quite well established categories: conscious experiences to which subjects have access, and unconscious processes to which they do not have access. If Block and Lamme want to propose a new category of phenomenal consciousness with no access, they must motivate it. How are they proposing to distinguish it from unconscious activity? In virtue of what properties are the phenomena at issue rightly called conscious? Freud proposed a similar distinction, between totally unconscious and “preconscious” activities, and more recently Dehaene et al. (TICS, 2006) have updated that proposal with a carefully defended taxonomy of subliminal, preconscious, and conscious activity. Are Block and Lamme proposing to simply rename Dehaene et al.’s ‘preconscious’ category as phenomenal consciousness? If so, what don’t they like about the term ‘preconscious’? If not, if they want their taxonomy to compete with this taxonomy, they need to tell us what special features mark off the phenomenally conscious but unaccessed activities from Dehaene et al.’s preconscious activities.

[For a few of my (ES's) thoughts in reaction, see the comments section, March 13.]

44 comments:

Unknown said...

I wish you had kept the scientists' reasons for there being an apple (i.e. the bit about monitoring the brain for certain activity) in the second scenario. If in the second scenario the subject's brain were active in the way usually associated with seeing an apple (or seeing red), and there were cameras bearing witness to the person being right in front of the apple, then that would imply that seeing an apple or seeing "red" should no longer be associated with said mental activity—barring the possibility of the subject/apple being manipulated in the way you mentioned. So I would trust the subject despite convincing evidence against him. Then again, I am no philosopher of consciousness.

Ewan said...

What if I *am* the subject? I must presumably conclude that (if the scientists are not lying) the awareness != reported brain activity. So in either direction, not to come to this conclusion requires deciding that someone is lying, no? [Even then, what if I set up the equipment myself and use myself as the subject - unless I can somehow lie to myself, then at least for me the issue is resolved. I grant that this should not necessarily convince anyone other than myself.]

Tyler Tretsven said...

I don't think Cohen and Dennett's example makes a very strong point (if their intention is what you think it is; I don't have access to the article).

The scientists made their mistake before they confronted the patient. Regardless of whether the patient has perfect access to their own conscious experience, the scientists' inference that the patient is experiencing color is misguided because it rests on an unwarranted reverse inference from brain activation patterns. While it's one thing to infer that the brain activation detected during a task is related to that task (after the proper controls), it's problematic to take an activation pattern and infer what the brain is doing (e.g., processing or experiencing color).

The scientists are misguided because they can't rationally assert what they do and not because they argue with the patient about what the patient sees – in my reading of it at least.

David Duffy said...

In the first scenario, I guess we must infer that there was a correspondence between brain activity and self-report in previously tested individuals. As a youngster, I was impressed by Charles Tart's idea that individuals differ in their level of conscious access to mental faculties common to everyone. See also the biofeedback movement etc.

Pete Mandik said...

I'm less inclined to read Dennett and Cohen as asserting an incorrigibility claim. I read them instead as saying something like: given the deep pretheoretic entrenchment of the first-person accessibility of our own conscious states, any theory that entails otherwise had better be overwhelmingly kick-ass (and, by the way, Lamme et al. don't even come close in the kick-ass department).

Historical analogy: That light was something you could see, or directly see the effects of, without special apparatus was pretty strongly entrenched prior to the theory that visible light is just an instance of a broader class of phenomena, electromagnetic radiation. EM theory entails the existence of invisible light. If the amount of evidence and explanatory breadth that EM theory had going for it were as paltry as your run-of-the-mill contemporary neuroscientific theory of consciousness, e.g. Lamme's loop theory, EM theory would rightly be laughed out of town.

It strikes me as unscientific to regard any aspect of a phenomenon as absolutely definitive of it: it's always open for future science to overturn our present pretheoretic hunches. But I don't see anything in present science that suffices to overturn first-person accessibility as a marker of conscious states. Voluntary verbal report is the single best test we presently have for whether a subject's state is a conscious one. Screw incorrigibility.

Eric Schwitzgebel said...

Thanks for the comments, folks!

Nick: That seems right to me.

Ewan: I don't think anyone need be lying. Honest mistakes are also a possibility, aren't they?

Tyler: Maybe you're right about the current situation. How much would you generalize? Let's imagine that neuroscience matures a bit more. Would you continue to hold that one can't infer from activation patterns to conscious experiences?

David: Right.

Eric Schwitzgebel said...

Peter: I acknowledge that the interpretation of C&D as committed to incorrigibility doesn't follow from this article considered in isolation. However, given Dennett's endorsement of incorrigibility in other texts (e.g., his 1991 book and articles in 2000 and 2002) and the harmony between these remarks and what he says in those texts, I interpret it as a gesture toward incorrigibility. I could be wrong. Maybe Dennett's view has changed. And I have to confess that I've never fully understood him on this issue, perhaps because I can't shed what he has called industrial-grade phenomenal realism.

If the point is only that Lamme's view isn't sufficiently awesome to overturn self-reports in such situations -- well, maybe so. But even then, it's going to depend on features of the situation that remain invisible in the dialogue. I recommend for your consideration the possibility that reports of subjective experience are often sufficiently unreliable that they would fail to gain epistemic precedence over even a fairly weak theory.

If I could persuade Dennett to join you in saying "screw incorrigibility", I would be most delighted!

Charles T. Wolverton said...

Like a commenter on an older post re Dennett, I like to think in terms of learning simple tasks, eg, "recognizing colors", viz, the ability to respond reliably to exposure to light with certain characteristics by uttering (or otherwise indicating) a color name, say, "red". Assuming that a subject has demonstrated both this ability and a pattern of consistent sincerity, I'd be inclined to ascribe to the subject first person incorrigibility re color recognition since there seems no obvious reason that the subject might need to change already reliable responses. (I'm assuming that the subject's responses are "correct" in the sense that they agree with those of almost everyone else. But even had the subject been mis-trained to respond "red" when almost everyone else responds "green", the subject is "fallible" only in an easily correctable way.)

I infer that your "industrial-grade phenomenal realism" entails that there is in some sense a "true" or "real" phenomenal experience of colors. That may be the case for some neurophysiological reason unknown to me, but I see no obvious need to assume that it is. Even if the phenomenal experiences that cause you and me to respond "red" are dramatically different, neither of us is "right" or "wrong". That's just how we've been trained to respond to retinal stimulation by light with certain characteristics. If we both do it reliably, where is "fallibility"?

One explanation for the "bizarre" C&D experiment could be a version of familiar blindsight experiments. A subject with hemi-field blindsight (incorrigibly, IMO) denies phenomenal-awareness of the color or presence of an object in the hemi-field but nevertheless is able to detect movement and/or color changes, thereby suggesting access-awareness. (The possibility of dual path processing for phenomenal and access awareness was the point of my perhaps cryptic comment on the Dretske-US post.)

Eric Schwitzgebel said...

But, Charles, reliability does not imply perfection! (At least not as the term is generally used.) And without perfection, shouldn't a good enough conspiracy of evidence justifiably convince us that the subject is mistaken about his own phenomenology, contra incorrigibility?

Charles T. Wolverton said...

Well, in the arena of human behavior I don't assume absolutes, so "reliability" as "perfection" didn't occur to me, but neither did "incorrigible" as "under no circumstances whatsoever".

In any event, the point of the second paragraph in my comment is that I don't even know what someone's "being mistaken about his own phenomenology" means. Consider simply recognizing the presence of a color. That is presumably a learned association of neural activity - possibly manifest in a phenomenal experience, but not necessarily - with a word. A person can be mistaken about the association (eg, now say "green" despite having repeatedly said "red" when previously viewing a surface), but I don't know what it would mean to be mistaken about the phenomenology itself (assuming no interim damage in the processing chain). In the blind spot example, the subject isn't mistaken about his phenomenology, only about the claim "no apple is present"; turn the head slightly and the object phenomenally (re)appears. Similarly for blindsight - in my hypothetical model of visual processing, one processing path is working fine but the phenomenal path isn't working at all. And we doubt someone who claims "a pink elephant is over there" not because his phenomenology is "wrong" but because no one else (at least no one sober) confirms that claim.

Or perhaps I'm missing some key point.

Baron P said...

Either or all of them could simply be lying.

Eric Schwitzgebel said...

That second issue, Charles, is a tireless dispute. My thought is that a pretty substantial burden of proof would be on the philosopher who asserts that it makes no sense to say that someone is mistaken about her phenomenology. In my 2011 book, I present lots of cases that seem (to me) naturally to be interpreted as involving mistakes about one's own phenomenology.

Charles T. Wolverton said...

Just to be clear, Eric, I didn't - and wouldn't except perhaps carelessly - say someone's position "makes no sense" re an issue like this on which my own position is highly suspect. I said - intended quite literally - that I don't even know what "mistakes about one's own phenomenology" means. Or for that matter, what being "right" about it means. Relative to what standard? Pointers to relevant online papers would be much appreciated.

Eric Schwitzgebel said...

Maybe I misheard you with my phenomenal realist ears. For a phenomenal realist, it's clear what it means to be wrong: if your phenomenology has property X and you deny that it has property X, you're wrong! Non-realists about phenomenology aren't going to want to say that, and I think that non-realist strand in Dennett is what pushes him toward a type of incorrigibilism.

One angle in on this would be my exchange with Dennett in the links above.

Charles T. Wolverton said...

I just finished your 2008 paper on Naive Introspection and see that our problem is scope. In the complex cases you describe in the paper, I don't have an opinion, other than to agree that judgments made based on PE are often misleading. But I'm addressing only the narrow issue of visual phenomenal experience (V-PE) in the very simple cases in the post: is an object there, and if so is it red? If the subject has a V-PE, he certainly may make mistaken judgments based on it, eg, the "red" object looks green to him, but I wouldn't describe that as his V-PE being "wrong". In the case of blindsight the subject claims to have no V-PE despite being able to describe movement of the object, but again I wouldn't describe that as his being "wrong". That's just what blindsight à la this example amounts to. (OTOH, I suppose one who denies the separable processing that I hypothesize could say that the subject is wrong - but I would then think he was wrong!)

Ie, I think "what we have here is failure to communicate" applies.

Eric Schwitzgebel said...

Yes, maybe so. Philosophy is sometimes that way!

Zach said...

Professor S, I'm here to try to rescue Dennett again.

I have a problem with your parallel case. The scientists inform the subject that she is "looking at" an apple. The subject denies it. The problem is that the disagreement is ambiguous: are they disagreeing (1) about the subject's phenomenology or (2) about whether there is in fact an apple in front of her eyes?

I think you're hovering between these interpretations. The case is relevant only on interpretation (1), but the case returns the intuitive result you want only on interpretation (2).

Cohen+Dennett surely would admit that the subject might be mistaken about whether the apple is in fact before her eyes (or about whether the apple is in fact red). In order to be an effective counterweight, your case needs to show that the subject might be wrong about her phenomenology.

From your post, I don't come away with interpretation (1). First, the scientists cite the video as evidence, which makes me think that they are arguing about (2). Second, you suggest a possible explanation for the subject's insistence that she is not "looking at" an apple: "What if the participant were viewing the apple monocularly and the apple were being manipulated so as to remain always in the participant's blind spot?" If this were so, then the apple would *not* be part of her phenomenology (as it would seem to her that it was not there). And so the subject would not be wrong about her phenomenology.

Here's a better example for you (borrowed from Dennett):

Suppose that the scientists tamper with the subject's brain, inverting the colors of her memories but leaving her visual processing system wholly intact. The subject (P) looks up at a blue sky and an exchange ensues:

P: The sky is orange!
S: No, actually it's blue.
P: Well, I'm definitely *experiencing* it as orange.
S: No, actually you're not! Blue light is hitting your eyes and your visual systems are processing it normally.

This seems like a stronger case for you. In fact, I do believe that even in this case, the subject experiences the sky as orange, but I've done enough blabbing.

Charles T. Wolverton said...

Zach makes the points I was trying to make, only perhaps more clearly. However, I think the description of the example "borrowed from Dennett" is worded in a typically misleading vocabulary. I'd rephrase it like this:

Suppose Antipodean scientists tamper with subject S's brain, swapping the association of visual phenomenal experiences with the color names "orange" and "blue" - but otherwise leaving S's visual processing system unaffected. S looks up at a sky that most people would describe as "looking blue" and this exchange between S and two other persons (N1, N2) ensues:

S: The sky looks orange today!
N1: Strange - it looks blue to me.
S: Well, I associate my current visual phenomenal experience with saying that something "looks orange".
N2: I agree with N1 - it looks blue.
N1: See, S? You're wrong!
N2: Or maybe she's right about the phenomenal experience but wrong about the association.


I'm with N2 (and I take it Zach is as well), though I wouldn't say S is "right", just that S's association is not the typical one and that in principle S could learn a new association that would result in "normal" behavior.

Anonymous said...

@Zach - surely the problem isn't meant to be that S disagrees with N1 and N2. It's meant to be that S claims to have a different experience despite exactly the same physical process of perception having occurred. The problem could probably be formulated even for creatures that don't have concepts, so long as they have phenomenologies and can respond to changes in them.

Is there perhaps an ambiguity in 'normal' at work? There's the normal way of associating sensation-words with surface appearances - normality within a linguistic community - and then there's the normal way for a subject's phenomenology to respond to a particular stimulus - normality within the subject's history of experience.

I took it that the point of all the examples is that if we have a sufficiently well-supported physical explanation of the link between stimulus and phenomenology, then if in some situation we apply the stimulus and the reported phenomenology is not what our physical theory predicts, it can be more likely that the phenomenological report is mistaken than that the theory has failed. My only worry is that, if upholding the theory requires allowing for phenomenological reports to be mistaken, this may itself undermine the theory to some extent. Most of the evidence for the theory will come in the form of correlations between certain physical events and certain phenomenological events, and the only way of gaining access to the latter is through self-reporting, which must be taken at the evidence-gathering stage to be very reliable, though not necessarily perfect.

(Looking back, some of this would probably be explained better using conditional probabilities.)

Anonymous said...

(And shorter sentences.)

Anonymous said...

And sorry - that was a response to Charles' last comment, not to Zach.

Charles T. Wolverton said...

anon -

Yes, my use of "normal" is shorthand for "typical of most people in a person's linguistic community". So, in the following, "normal" should be understood that way.

All that matters is that the association changes. That can be effected (in principle) in two ways:

i) by changing the phenomenal experience (PE) that is "normal" for incident light having "blue" spectral characteristics to the PE that is "normal" for incident light having "orange" spectral characteristics. In that case, S has the "orange" phenomenal experience and says "orange".

ii) by changing the color word associated with the (PE) that is "normal" for incident light having "blue" spectral characteristics from "blue" to "orange". In that case, S has the "blue" phenomenal experience but says "orange".

I say "in principle" because my impression is that we don't know how to effect either change. As you suggest (I think), it might turn out that you can't actually do one of these - or perhaps either - in practice. In any event, it remains unclear to me precisely what a PE "mistake" might be. In neither of these cases is S "wrong" about her phenomenology, it's just that her response to "blue" incident light isn't "normal".

I don't understand your comment about "creatures that don't have concepts". One can imagine a light spectral analyzer hooked up to a voice synthesizer so that incident "blue" light causes the synthesizer to synthesize "blue", etc. Since we would know how to change the spectral analyzer's "PE" or the association of analyzer output to synthesized word, either i) or ii) could be effected in practice. Does that setup have concepts (in any non-trivial sense)?
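To make that setup concrete, here is a minimal sketch in Python (my illustration only; the wavelength bands and word associations are stipulated for the example, and it is change (ii) from above that gets implemented):

    # Hypothetical mapping from wavelength bands (nm) to synthesized color words.
    associations = {
        (450, 495): "blue",
        (590, 620): "orange",
    }

    def synthesize(wavelength_nm):
        # Reflexively map incident light to a color word: pure stimulus-response.
        for (lo, hi), word in associations.items():
            if lo <= wavelength_nm < hi:
                return word
        return "unknown"

    print(synthesize(470))  # -> "blue"

    # Change (ii): swap the words tied to the analyzer's outputs, leaving the
    # "perceptual" analysis itself untouched. (Change (i) would instead alter
    # the analyzer, i.e. the band boundaries.)
    swap = {"blue": "orange", "orange": "blue"}
    associations = {band: swap.get(word, word) for band, word in associations.items()}

    print(synthesize(470))  # -> "orange": same stimulus, different report

Same stimulus, different report, and nothing in the device that could sensibly be called "mistaken about its phenomenology".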

Although I too try to apply conditional probabilities whenever possible since I think lots of confusions could be avoided if they were more widely understood, I don't see a role for them here. Am I missing something?

Or always a real possibility, I may be wrong about any or all of this.

Anonymous said...

The point about creatures without concepts is rubbish and I'm happy to drop it. (After all, it's standard to think that to be in error you need to have made a judgment, and to make judgments you need concepts.) I was just worried that you were focusing more on the communicability of phenomenology reports than their correctness. But this point wasn't much help with that worry.

Here's how I saw the example working with conditional probabilities. Let's say that we have some theory linking brain states and environmental conditions to phenomenology, which is expressed by sentence T. (Ignore the possibility that there's no such sentence.) We're performing an experiment in conditions C. O means 'subject experiences orange', R means 'subject reports experiencing orange', and B means 'subject has brain state S' where the theory predicts that the subject won't experience orange if in state S in these conditions, i.e. T entails (B&C)-->~O.

The subject makes a mistake in reporting their phenomenology if R&~O is true. I'm assuming that the subject is a competent enough speaker that their report reflects their judgment - i.e. they don't make a mistaken report just by not understanding what they're saying. The argument seems to me to involve conceiving of an experiment, i.e. choice of C, in which the subject reports an orange experience, i.e. R is true given C, but P(B|T&C) is high, so P(B&C|T&C) is high, so P(~O|T&C) is high, so P(R&~O|T&C) is high, and if P(T) is independently high enough, then maybe we can get P(R&~O|C) high. Which I would gloss as saying that we can arrange circumstances in which the subject reports experiencing orange but is likely mistaken.

Another approach might just be to say that if, given circumstances C, we can get confident enough of both B and (B&C&T)-->~O, and yet R is true, then if we are confident enough of T we should also be quite confident of R&~O, which says that the subject makes a mistaken phenomenological report.

I'm guessing that you doubt the possibility of an experiment, i.e. choice of C, such that all of R, B and (B&C&T)-->~O come out true. The original concern, I think, was whether we could ever be confident enough of B and T that we should accept R&~O, but I can see why you might say that if there is no possible choice of C in which we are forced to make the choice then the question is a moot one.

But I'm tempted to think that pretty much any judgment (and so any report, given a competent speaker) is possible in any circumstances. I think the cognitive processes responsible for experiencing and for judging would have to be surprisingly closely linked for this not to be the case, but I don't know much about the relevant science. But I have no problem with R being possible in any and every case where ~O is true.

Charles T. Wolverton said...

OK, I see what you're doing. But for my T, P(~O & R | T & C) = 0 for all C.

That's because I assume that color visual phenomenal experience is an illusion. And although inferences about C based on the illusion can be mistaken, I don't know what it means to say that the illusion itself is mistaken.

In any event, you appear to be projecting forward to the day when neurophysiology has advanced to the point where we can monitor the neural activity of a subject exposed to an illuminated object and based on the observed neural activity determine the V-PE that would occur in a "normal" subject. Suppose the observed neural activity (equivalently, the V-PE) for subject S is that which would cause a "normal" subject to report that the object "looks blue", but S reports that it "looks orange". Here are several possible explanations:

1. S is lying. But that's a character defect, not a perceptual error.

2. S has learned to associate the wrong word with the V-PE. But that's an error of educational history, maybe lazy parents. Or perhaps a memory disorder.

3. S has suffered neuronal damage such that although the observed neural activity at one time caused the V-PE associated with "blue", it now causes the V-PE associated with "orange". But then the report isn't mistaken.

4. The V-PE is the correct one for "blue" objects and the association with the word "blue" is correct, but the V-PE is mistakenly identified as the one for "orange" objects. But that's Cartesian thinking - identified by whom/what?

Ie, I don't see a way of describing the situation that makes "S's V-PE report is mistaken" a reasonable assertion. But that may be my deficient imagination. Other possibilities?

Anonymous said...

OK. I think we're now just disagreeing over whether ~O&R is even a possibility (which is a bit different from what the original post was asking, about what evidence would be required to be justified in believing it, though if it can be shown that it's not even possible then there can never be any justification to someone who knows it isn't).

I would accept something like option 4, though in more realist terms. E.g., forgetting about the associations between experiences and objects, let's just say there are two different and mutually exclusive experiences such that the subject can judge whether they are in one or the other, and they judge that they are in the first even though they are in the second.

On the 'identified by whom/what?' question, I'm quite happy for there to be a faculty of judging that is separate from that of experiencing orange and blue, perhaps because they correspond to different types of brain activity. The faculty of judgment is such that it can respond to blue/orange experiences in a knowledge-producing way, though it can also judge incorrectly, and whenever it makes either a correct or incorrect judgment this produces whatever our phenomenology of judgment is.

Of course, this is just a minimal account. There should also be some explanation as to how we can pay more or less attention to our visual phenomenology, and maybe how there can be feedback in the other direction, from judgment to experience (this might be what occurs in some optical illusions or instances of aspect shifting). And maybe we should want to postulate genuine higher-order experiences (experiences of experiences) too. (I wonder whether, if there are such higher-order experiences, even your theory can allow for error: when the experience of orange and the experience of blue are not mixed up, but the experience of the experience of orange and the experience of the experience of blue are. Perhaps this depends on whether you would take words to be used in response to experiences or to experiences of experiences.)

Finally, though I don't know how important this is, I'm interested in what you mean in saying that all experiences of orange and blue are illusions. What are they illusions of? Do you mean that you don't think orangeness and blueness are properties of objects, so the experiences aren't representing anything, but are adding something new?

Baron P said...

"S is lying. But that's a character defect, not a perceptual error."

Lying is a tactic that we all use when deemed by our cultures as acceptable. It's no more a defect than would be a tendency for inappropriate truthfulness.

Charles T. Wolverton said...

I don't see why going from the simple but precise (reporting that an object "looks blue") to the imprecise and unbounded (judging which of two "different and mutually exclusive experiences" the subject "is in") makes anything "more realist". The latter suggests to me a much more cognitive process than seems necessary for the simple examples in the post while leaving the response undefined.

For better or worse, I try to think of such processes in simple stimulus-response terms. That's why I keep referring to incident light and terse reports - no ambiguity about either stimulus or response. Implicit in that simple model is that C doesn't really enter into the process. Of course, if S gets to move around, ask questions, solicit a second opinion, ask for details about lighting, etc, then C does matter and the process becomes much more cognitive. But that wasn't the situation as described in the examples.

I also agree that to get from simple stimulus-response to knowledge requires much more elaborate processing, but my simplified versions of the examples in the post don't require much knowledge. Eg, in Eric's scenario the argumentative exchange is just for dramatic effect. Its essence is captured by:

E: See an apple?
S: No.

Also, statements like "The faculty of judgment is such that it can respond to blue/orange experiences in a knowledge-producing way" make me nervous, smacking as they do of the Myth of the Given. (I buy Sellars' EPM argument, at least to the extent that I understand it.)

I got in the unfortunate habit of referring to V-PE as "illusions" and keep doing so despite having decided that a better - because less inflammatory - descriptor is "artefacts". And the "what" is, at minimum, the mental image of colors. When something "looks red", where does the mental image of "red" come from? And how? And why? The "red" mental image of an object which isn't actually red may be an artefact of visual stimulus processing in somewhat the same sense that the "solid" mental image of an object which isn't actually solid is.

Surprisingly, even answering the Why? question seems quite difficult. Blindsight suggests to me that much - most? all? - of the functionality that we instinctively assume requires V-PE may not. Re your mention of attentiveness and feedback, my guess du jour is that V-PE may have something to do with introducing delay into the stimulus response process in situations where careful attention to candidate response options is important rather than a quick reflexive response. But I can't justify that guess at all.

I think I'm at (perhaps beyond) the threshold of my minimal understanding of all this, so while enjoying the exchange I'm not sure I have much to add.

Anonymous said...

Sorry, "can respond to blue/orange experiences in a knowledge-producing way" is meant to be talking about introspective knowledge of the blue/orange experience only. I certainly wouldn't want to claim that any knowledge of the cause of the experience is produced. I am also attracted by the idea that the existence of such experiences at all is useful mostly to allow double-checking and the direction of the attention to particular features (rather than to serve as some kind of knowledge basis from which conclusions about the world are reached by instinct or reasoning).

I take issue with your stimulus-response way of talking about visual experience because I don't think stimuli are really relevant. We're interested in introspective knowledge of one's internal (probably brain-)states, with how they're brought about being a matter for the lab techs (or perhaps that's not what you're interested in, in which case I just have the wrong end of the stick). Unless your stimuli just are what I'm thinking of as experiences, but in that case what process of reliably getting from each stimulus to the correct response can you think of that doesn't involve gaining knowledge of the stimulus?

I also suspect we are thinking differently about the analogy in the original post. I take it that in the second case the relevant question is 'are you looking at an apple?', not 'do you see one?'. Whether you're looking at an apple is something that can be determined by scientists with cameras, although you can also come to know that you're looking at an apple if you come to know that you're seeing an apple via some easy introspection. The claim is first of all that the subject's evidence that they aren't looking, because they aren't seeing, may be insufficient to overcome the scientists' evidence to the contrary from their cameras. The analogy, as I take it, is between looking at an apple and experiencing red, between seeing an apple and making an introspective judgment of experiencing red, and between the cameras and whatever instruments are used to probe the subject's brain. If this is an evidence-relation-preserving analogy, then the evidential relations in the second case can be repeated in the first, so that the scientists can win even in the first case.

I go through my take on this only because I think that if you want to use your stimulus-response approach in both cases, then if in the second case the stimulus is the apple, or some pattern of light, or whatever, in the first case the stimulus will have to be an experience (of red or grey). Do you think there can be a workable analogy between these object-question pairs: apple-'what are you looking at?' and orange-'what are you experiencing?'? If so, then isn't it plausible that there are judgments separate from experiences in the second case, and that they can be either right or wrong, just as we make judgments about apples that can be right or wrong? And if not, then what do you think people are doing when they think and talk about their own experiences?

(The explanation of your use of 'illusion' makes good sense to me - thanks.)

Charles T. Wolverton said...

Well, since we seem to be fairly close but nonetheless have some communication gaps, let's go thru the gory details and see if we can converge. Since I routinely run into that problem, I'd be interested in trying to locate the source of the gaps.

First, let's nail down the test scenario and some terminology. The subject S is seated in a dark room, eyes directed straight ahead. To make things as simple as possible, I'd prefer that the object in question not be identified as to kind, so that we are focused only on questions about its presence and color. For the same reason, I'd prefer that if present, it not be on a table but just suspended in space. If present, the object can be illuminated so as to result in light with any desired spectral power distribution (SPD) being incident on the subject's retina (it's a thought experiment, after all, so we can hypothesize even technically challenging scenarios). Such light is the "stimulus" for S's visual processing. It's important because the light's SPD is the only entity in the whole setup of which it can be said meaningfully that, for example, it "is red" (see Note 1). And even then, saying that an SPD "is red" is only an abbreviation for "light having this SPD that is incident on a 'normal' subject's retina will evoke from the subject a response involving the word 'red', eg, 'that looks red' or 'that is red'" (see Note 2).

Now assume an object is present and that it is illuminated so that light with some SPD is incident on S's retina. This results in neural activity in S's visual processing subsystem. This "subsystem" is a notional concept, the detailed architecture of which need not be addressed (fortunate, since I don't know it!). It suffices to make the seemingly reasonable assumption that within that subsystem there will be neural activity, some component of which can be considered a "signature" of the SPD of the incident light in that it can (in principle) individuate SPDs (see Note 3).

==========continued =======

Note 1: "is X" and "looks X" are terms of art from Sellars' Empiricism and Phil of Mind. The distinction is level of endorsement, or confidence. The former would apply if the circumstances are such that one is willing to assert unequivocally that, for example, an object "is red". The latter is less confident, perhaps owing to circumstances in which the possibility that one is mistaken is more probable.

Note 2: Recall that "normal" was explicitly defined earlier.

Note 3: This ignores complications like metamerism which can result in different SPDs having the same signature. All that matters here is the idea that (in the sense of every day usage) light of one color can be distinguished from light of a distinctly different color, say blue from orange.

Charles T. Wolverton said...

Now it gets dicey - the following is just my idea, although so far I've found no obvious reason to doubt its feasibility. (I think it's a version of "connectionism", but I haven't yet read enough on that subject to be sure.)

I envision that between the sensory neurons and the motor neurons is a (notional) neural network (biological, not computational!) that is formed by plasticity as a consequence of learning from experience. At the simplest level, an infant is exposed to an arbitrary surface reflecting light that "is red" (in the sense defined above) while simultaneously the teacher (typically a family member) utters the word "red". Over time this results in an "association" (ie, a connection in the neural network) between some sort of representation of the neural signatures for such SPDs (there will, of course, be many technically distinguishable ones that will in practice be effectively equivalent) and the heard word. Later, the (now) child will develop the ability to excite motor neurons so as to mimic uttering "red" when certain SPD signatures occur. Ultimately, this process results in connections via the neural network that cause the occurrence of certain signatures to result in the utterance of corresponding color names. At this early stage of development, the process is completely reflexive. Nothing cognitive (in any meaningful sense) is going on. It is completely analogous to my spectral analyzer-synthesizer example.
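A toy rendering of that learning story, in Python (my own illustration; bare co-occurrence counting stands in for whatever the plasticity actually does, and the signature names are made up):

    from collections import defaultdict

    class ReflexiveNamer:
        # Stand-in for the notional network: co-occurrence counts between
        # SPD "signatures" and heard words drive a reflexive response.
        def __init__(self):
            self.counts = defaultdict(lambda: defaultdict(int))

        def train(self, signature, heard_word):
            # The teacher utters a word while a stimulus with this signature is present.
            self.counts[signature][heard_word] += 1

        def respond(self, signature):
            # Reflexively utter the most strongly associated word, if any.
            words = self.counts[signature]
            return max(words, key=words.get) if words else None

    namer = ReflexiveNamer()
    for _ in range(20):
        namer.train("sig_red", "red")    # typical training history
    namer.train("sig_red", "orange")     # a stray mislabeling doesn't dominate
    print(namer.respond("sig_red"))      # -> "red"

On this picture, the mis-trained subject of my earlier comment is just one whose training history differed; nothing in the mechanism itself is "right" or "wrong".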

Obviously, in a fully developed non-child the process is more complicated in detail but not necessarily in concept. The neural network grows and comes to include inputs from historical memory, the current environment, et al. But this only increases the complexity and flexibility. (Now comes the part you'll presumably hate). But it isn't obvious that it necessarily becomes any less automatic. Ie, it can still be viewed as stimulus-response, although the range of stimuli accommodated and the variety of possible responses dramatically increase. Above some (again, notional) threshold of complexity of response we describe the responses as indicating "cognitive" mental activity.

Notice that in all this, phenomenal experience plays no role. We can address that later, but for now that's enough. Fire away.

Eric Schwitzgebel said...

Anon/Charles/Zach -- I see the conversation got deep while I was off doing my weekendy-family things!

Zach: I did mean your (2). The scientists in my parallel case are saying there is an apple before the subject's eyes. That is, they are saying that there is an apple on the table. They are saying nothing about the subject's phenomenology. At least, that's how I intended it to be interpreted. I wanted to draw the parallel *not* between two different types of phenomenal reports but rather between being wrong in a phenomenal report and being wrong in a report about a straightforwardly observable fact about the outside world.

My thought was this: In *both* phenomenal report cases and in outside-world cases, you can set up the scenario so that it seems like such an obvious fact that is being reported that it is bizarre that someone could go wrong about it. And yet people *could* go wrong about it. No one gets incorrigibility; it is always a weighing up of competing evidence.

At least that was my thought, perhaps ineptly conveyed!

Zach said...

Hm. I should have realized you wouldn't make that mistake. I understand what you were after now. Apologies for the misinterpretation.

Charles T. Wolverton said...

Eric -

This may be beating a dead horse, but there seems to me to be a significant difference between the two cases.

In your example, several possible explanations for the subject's failure to "detect" the presence of an object have been suggested. Also, there is 3rd person observable evidence available to the experimenters for use in establishing the "truth" of the object's presence. So, it's quite clear what it means for the subject to be mistaken and also how the mistake might be corrected. Eg, in the blind spot case, get the object out of the blind spot.

But in the other example, those aspects seem much less clear. Assume there is no reason to doubt either the test equipment or the test subject. The "recurrent processing" detected in the current subject corresponds in most subjects to the phenomenal experience (PE) that they describe as "seeing red". Nonetheless, the current subject claims to have no color PE at all. So, what might explain that and what would be considered "correcting" it? Why wouldn't one instead call the subject's claim an indication of not being "normal"? A "mistake" suggests deviation from some standard of "truth"; what is that standard for this case? (Not the equipment - it's measuring not PE but a presumed correlate.)

To clarify, I now agree that even in this simple scenario a subject's introspection shouldn't be described as "incorrigible". Not because it can be "wrong" but because I don't know what the subject's being "wrong" - or "right" - about her PE means. OTOH, I do know what her PE being normal or abnormal means.

Zach said...

Charles, there are many differences between the cases. You are right that we understand what is going on in the apple case much better; that's why Professor S chose it.

What's important is that although under normal circumstances, we wholeheartedly trust the subject when she sincerely asserts that there is no apple on the table, her assertion does not END the discussion. If scientists presented good enough counterevidence (and an explanation for her heartfelt denial), then we might suspect that the subject was in error. We don't "hands down" favor the subject.

Professor S thinks that the Cohen+Dennett case might be like this. We are inclined to wholeheartedly trust the subject under normal circumstances, but if our counterevidence was strong enough, we might believe that the subject was in error about her own experience.

Unless you want to enter the land of introspective infallibility (which Professor S has written a book arguing against), you have to recognize that Schwitz makes a valid point.

Charles T. Wolverton said...

Zach -

My issue has nothing at all to do with general introspective infallibility. You say that in the Cohen+Dennett case:

if our counterevidence was strong enough, we might believe that the subject was in error about her own experience.

Agreed, assuming one can define precisely what it means for the subject to be in error about her own phenomenal experience and can imagine what might constitute strong enough "counterevidence". I can do neither. (I assume the "recurrent processing" detector - or at least its application to this example - is hypothetical.)

Zach said...

You say that you cannot imagine what it means for a person to be in error about his own phenomenal experience. That *is* introspective infallibility - of a certain sort.

Schwitz is seriously skeptical of that sort of infallibility. His book argues that we are wrong surprisingly often about our own phenomenal experience. He provides many examples.

I highly recommend it, although I disagree with S on a key issue.

Anonymous said...

Charles,

Predictably, I'm most interested in your second-last paragraph addressed to me (starting: "Obviously, in a fully developed non-child...").

My first response is that colour assessment doesn't happen in isolation - it's always interdependent with assessments of shape and position, and possibly others. For one thing, to attribute e.g. blueness to a surface you have to have identified a surface. Also, it's fine presenting a baby with just one patch of blue and asking for a name, but what if you give them an orange patch on the left of a blue one? Just saying both 'orange' and 'blue' in turn won't cut it - they have to be able to respond with something like 'orange left, blue right'. And, though I know you didn't want to get onto this, colour phenomenology is likewise tied up with the rest of visual phenomenology. Do you think that all other aspects of visual phenomenology are not open to misidentification? If not, then what makes the difference between those aspects that are mis-identifiable and the aspect(s) corresponding to colour, which you say aren't?

Anonymous said...

Second, I don't think your stimulus-response theory makes room for some of our abilities with colours - in particular, the ones I think you'd think of as more 'cognitive'. For instance, take the ability to recall colours that were never in the attention. If we stand in the street talking, I can ask you 'what colours were the hats of the last three people to walk by?', and you can probably answer, or at least give one, even if you weren't paying attention. I could also ask 'what colours did the hats of the last three people to walk by look to you to be?' and you could probably answer, though, I expect, with some errors... In any case, I think that being able to recall colours that weren't originally in the attention isn't something a stimulus-response machine can do, as there was no response then and there is no stimulus now they are being recalled.

Another such ability is the use of colours in the imagination. I can reasonably instruct you to 'imagine the cup like this, only green', or to 'imagine the colour of this cup, only darker'. And, of course, you can do this without prompting. (See also, perhaps: http://en.wikipedia.org/wiki/The_Missing_Shade_of_Blue.) I think it is also possible to compare colours between simultaneously present images and across time, and to predict the aesthetic effects of different combinations of colours before trying them out for real. All these abilities seem to me to rely on our being able to reflect on and reflectively re-use our experiences. I think there is enough cognitive distance between the experiences and our reflective faculties that the latter can make mistakes with respect to the former.

(I also, more speculatively, think that colour experiences can have characters beyond their positions in the spectrum, or some more detailed colour space. These are what we might describe metaphorically as the heat of reds as opposed to the coolness of blues, and so on. I.e., there are more to colours, as we experience them, than their relational properties as points in a colour space. And then how can these characters be distinguished without some faculty for doing so, and why can't it make mistakes?)

Finally, I don't know a lot about how children learn about colours (and nor do I know a lot about what the experts know about it, though I suspect it isn't much, all things considered), so I don't want to put too much emphasis on that side of your account. I just wonder what you think the human teacher is imparting to the developing child. It strikes me as very implausible that a human teacher is required to teach a word (or several) for red to a child in order for them to be able to recognise it. The lines between red and yellow or purple, on the other hand, need to be taught. Also, the lines between red and pink or brown, which I think are of a different character. And even if a child will naturally develop concepts like 'redder' and 'less red' - which I think is likely, for physical and evolutionary reasons - they will need to be taught the corresponding words. But both a child and a spectral analyzer are sensitive to differences in colour before they learn responses. The spectral analyzer, for instance, will produce different outputs given different hues before these outputs are inputted to a synthesizer (so in a sense, sensitivity requires there to be different responses already). I think your account (rightly) relies on this. But aren't we really interested in the sensitivity, rather than the ability to give specific responses? Maybe you think that we can't be aware of and reflect on this sensitivity without having been taught words for the differences to which it is sensitive - is that right? - but otherwise, if such reflection is possible without words, then mustn't there be a faculty for recalling and recognising differences and our reactions to them, and then why can't it mis-recognise some reactions?

Charles T. Wolverton said...

anon and Zach -

Thanks for that response, anon. I can address most of the issues you raise, but I'd like to start simple. Comparing the phenomenal experiences of shape and color should be instructive.

We three are the experimenters. In a test room, a figure has been drawn on a white wall before which a subject is seated. We have previously agreed among ourselves that the figure on the wall is a square, so we have consensus on the "correct" answer to the question "what is the figure on the wall?". But the current subject answers "it looks like a circle".

Assume we have a high degree of confidence that the subject is reporting truthfully. Then the report clearly indicates a mistake of some sort. One possibility is that the subject has learned an incorrect association between the phenomenal experience of a square and a word. So, we ask the subject to describe her mental image, and she responds "it's the locus of points on the wall that are equidistant from a specific point on the wall". The subject has a good grasp of the concept of a circle, so it seems likely that the mistake relates to the phenomenal experience itself - it is in some sense "incorrect".

We now ask the subject to estimate the radius of the mental image of the circle, and the subject answers "about a foot".

My point about a subject's making a "mistake" about the phenomenal experience of a color is that the question "what color is the figure" is more like the second test question than the first. I obviously wouldn't argue that because introspection is infallible, "about a foot" must be "correct". But neither would I say that it is "incorrect" - the question is meaningless and therefore has no answer. Similarly, I question the concept of a person making a "mistake" about the phenomenal experience of a color. Not because the subject's answer must be "correct" but because I'm not sure about the meaningfulness of saying that the report is either "correct" or "incorrect". What is analogous to the subject's verbal description of the phenomenal experience of a figure that we might use to judge whether "it looks orange to me" is or isn't correct?

To repeat, I'm asking a real question to which there may well be an obvious answer - in which case I'll be happy to concede that I haven't been thinking clearly. But forget about introspective infallibility - just suggest some answers. One possibility is that the device implicit in the C&D exchange in Eric's post is real and known to reliably correlate readings and experiences. I'm not familiar with such a device, but if there is one I'd certainly like to learn more about it. And, of course, I'll concede the issue.

Eric Schwitzgebel said...

Reflections on the March 13 reply by Cohen and Dennett:

My own position on the issue of inaccessible consciousness is this: I think I understand what it means to say you do access a conscious experience (and report it accurately) and what it means to say you don't access it (and fail to report it, or report it very inaccurately). But to say you can't access a conscious experience – that type of claim I have trouble evaluating.

Put in a maximally extreme form, it doesn’t seem very plausible (e.g., there’s no way, under any conditions or with any sort of training, that someone could access this broad type of experience). At least it would require a powerful argument. (Compare a similar claim about non-mental phenomena, e.g., events outside our light cone?) If we read the claim in an extremely tepid form, then it starts to seem eminently plausible, as long as one isn’t an infallibilist (this person wasn’t able to get at this specific experience in these conditions for some reason).

So I don’t feel I have a good sense of where the turf is that is under dispute, when the debate is couched abstractly. My own preference is to evaluate the likelihood of unreported experience on a case-by-case basis – though very tentatively, given how little we really know about consciousness (in my view) at this juncture.

Dave the Philosopher said...

Let's assume the hypothetical, that scientists possess knowledge of the neural correlates of subjective experiences to a much greater degree than they now do. How did they come to such knowledge?

How do you know that two things are correlated? You do a lot of experiments, and you find that you observe them happening together and never observe one without the other. How do they know that brain state B is correlated with mental state M? They observe brain state B with their physical equipment. How do they observe mental state M?

This is trickier. As I see it there are three options. (1) They assume it, e.g., they showed a sample group a red thing, didn't show it to the control group, and noted the brain states. In such a case I don't think we can think of them as having observed the mental state; rather, they are correlating two physical facts with each other. (2) They ask the test subjects for their subjective report, in which case we have to trust people's subjective reports. If they do this, then why should we trust the person in the current case any less? Or (3) they would infer it from some other data. What other kind of data would that be? It would have to be other observations of the relation of some variable with subjective state, in which case the question repeats itself.

In the second case, where the subject claims not to see the apple that is evidently there, how do the scientists know that there really is in fact an apple there? Because there is one there. They never had to go find out that when you put an apple in front of most people they will see an apple. It is evident.

Therefore I would submit that the first case is much different from the second. In the first case, the entire evidential basis of the scientists' confidence is the subjective report. Subjectivity is the authority. If we observed brain state R, we would have no basis at all for thinking that the person was experiencing red if not for the fact that lots of people reported having seen red while also exhibiting R. Therefore, if someone denies seeing red while still exhibiting R, we have no reason to deny the person authority. But the evidential basis for the belief that the person is seeing the apple at all is not the fact that we've observed that when most people see an apple they report seeing an apple; it is far simpler. It is because there is an apple in front of the person.

Eric Schwitzgebel said...

Dave, I'm inclined to think that even if self-reports are an important part of the epistemic base, individual reports can still be overturned.