Tuesday, October 22, 2024

An Objection to Chalmers's Fading Qualia Argument

[Note: This is a long and dense post. Buckle up.]

In one chapter of his influential 1996 book, David Chalmers defends the view that consciousness arises in virtue of the functional organization of the brain rather than in virtue of the brain's material substrate.  That is, if there were entities that were functionally/organizationally identical to humans but made out of different stuff (e.g. silicon chips), they would be just as conscious as we are.  He defends this view, in part, with what he calls the Fading Qualia Argument.  The argument is enticing, but I think it doesn't succeed.

Chalmers, Robot, and the Target Audience of the Argument

Drawing on thought experiments from Pylyshyn, Savitt, and Cuda, Chalmers begins by imagining two cases: himself and "Robot".  Robot is a functional isomorph of Chalmers, but constructed of different materials.  For concreteness (but this isn't essential), we might imagine that Robot has a brain with the exact same neural architecture as Chalmers' brain, except that the neurons are made of silicon chips.

Because Chalmers and Robot are functional isomorphs, they will respond in the same way to all stimuli.  For example, if you ask Robot if it is conscious, it will emit, "Yes, of course!" (or whatever Chalmers would say if asked that question).  If you step on Robot's toe, Robot will pull its foot back and protest.  And so on.

For purposes of this argument, we don't want to assume that Robot is conscious, despite its architectural and functional similarity to Chalmers.  The Fading Qualia Argument aims to show that Robot is conscious, starting from premises that are neutral on the question.  The aim is to win over those who think that maybe being carbon-based or having certain biochemical properties is essential for consciousness, so that a functional isomorph made of the wrong stuff would only misleadingly look like it's conscious.  The target audience for this argument is someone concerned that for all Robot's similar mid-level architecture and all of its seeming "speech" and "pain" behavior, Robot really has no genuinely conscious experiences at all, in virtue of lacking the right biochemistry -- that it's merely a consciousness mimic, rather than a genuinely conscious entity.

The Slippery Slope of Introspection

Chalmers asks us to imagine a series of cases intermediate between him and Robot.  We might imagine, for example, a series each of whose members differs by one neuron.  Entity 0 is Chalmers.  Entity 1 is Chalmers with one silicon chip neuron replacing a biological neuron.  Entity 2 is Chalmers with two silicon chip neurons replacing two biological neurons.  And so on to Entity N, Robot, all of whose neurons are silicon.  Again, the exact nature of the replacements isn't essential to the argument.  The core thought is just this: Robot is a functional isomorph of Chalmers, but constructed of different materials; and between Chalmers and Robot we can construct a series of cases each of which is only a tiny bit different from its neighbors.
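(If it helps to see the structure laid out mechanically, here is a minimal sketch of the series in Python.  It's purely illustrative: the neuron count, names, and data structure are my own simplifying assumptions, not anything Chalmers commits to.)

```python
# Minimal illustrative sketch of the replacement series.  The neuron count
# and all names here are my own assumptions; nothing in the argument hangs on them.

N = 86_000_000_000  # rough human neuron count; the exact number is inessential

def entity(i: int) -> dict:
    """Entity i: i neurons replaced by functionally identical silicon chips.
    Entity 0 is Chalmers; Entity N is Robot; neighbors differ by one neuron."""
    assert 0 <= i <= N
    return {"silicon_neurons": i, "biological_neurons": N - i}

def verbal_report(i: int) -> str:
    # Functional isomorphism: every entity in the series gives the same report,
    # whatever the truth about its consciousness.
    return "Yes, of course I'm conscious!"
```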

Now if this is a coherent setup, the person who wants to deny consciousness to Robot faces a dilemma.  Either (1.) at some point in the series, consciousness suddenly winks out -- between Entity I and Entity I+1, for some value of I.  Or (2.) consciousness somehow slowly fades away in the series.

Option (1) seems implausible.  Chalmers, presumably, has a rich welter of conscious experience (at least, we can choose a moment at which he does).  A priori, it would be odd if the big metaphysical jump from that rich welter of experience to zero experience occurred with an arbitrarily tiny change between Entity I and Entity I+1.  And empirically, our best understanding of the brain is that tiny, single-neuron-and-smaller differences rarely have such dramatic effects (unless they cascade into larger differences).  Consciousness is a property of large assemblies of neurons, robust to tiny changes.

But option (2) also seems implausible, for it would seem to involve massive introspective error.  Suppose that Entity I is an intermediate case with very much reduced, but not entirely absent, consciousness.  Chalmers suggests that instead of having bright red visual experience, Entity I has tepid pink experience.  (I'm inclined to think that this isn't the best way to think about fading or borderline consciousness, since it's natural to think of pink experiences as just different in experienced content from red cases, rather than less experiential than red cases.  But as I've argued elsewhere, genuinely borderline consciousness is difficult or impossible to imaginatively conceive, so I won't press Chalmers on this point.)

By stipulation, since Entity I is a functional isomorph, it will give the same reports about its experience as Chalmers himself would.  In other words, Entity I -- despite being barely or borderline conscious -- will say "Oh yes, I have vividly bright red experiences -- a whole welter of exciting phenomenology!"  Since this is false of Entity I, Entity I is just wrong about that.  But also, since it's a functional isomorph, there's no weird malfunction going on that would explain this strange report.  We ordinarily think that people are reliable introspectors of their experience; so we should think the same of Entity I.  Thus, option (2), gradual fading, requires believing that Entity I is radically introspectively mistaken -- an implausibly severe degree of introspective error.

Therefore, neither option (1) nor option (2) is plausible.  But if Robot were not conscious, either (1) or (2) would have to be true for at least one Entity I.  Therefore, Robot is conscious.  And therefore, functional isomorphism is sufficient for consciousness.  It doesn't matter what materials an entity is made of.

We Can't Trust Robot "Introspection"

I acknowledge that it's an appealing argument.  However, Chalmers' response to option (2) should be unconvincing to the argument's target audience.

I have argued extensively that human introspection, even of currently ongoing conscious experience, is highly unreliable.  However, my reply today won't lean on that aspect of my work.  What I want to argue instead is that the assumed audience for this argument should not think that the introspection (or "introspection" -- I'll explain the scare quotes in a minute) of Entity I is reliable.

Recall that the target audience for the argument is someone who is antecedently neutral about Robot's consciousness.  But of course by stipulation, Robot will say (or "say") the same things about its experiences that Chalmers will say.  Just like Chalmers, and just like Entity I, it will say "Oh yes, I have vividly bright red experiences -- a whole welter of exciting phenomenology!"  The audience for Chalmers' argument must therefore initially doubt that such statements, or seeming statements, as issued by Robot, are reliable signals of consciousness.  If the audience already trusted these reports, there would be no need for the argument.

There are two possible ways to conceptualize Robot's reports, if they are not accurate introspections: (a.) They might be inaccurate introspections.  (b.) They might not be introspections at all.  Option (a) allows that Robot, despite lacking conscious experience, is capable of meaningful speech and is capable of introspecting, though any introspective reports of consciousness will be erroneous.  Option (b) is preferred if we think that genuinely meaningful language requires consciousness and/or that no cognitive process that fails to target a genuinely conscious experience in fact deserves to be called introspection.  On option (b) Robot only "introspects" in scare quotes.  It doesn't actually introspect.

Option (a) thus assumes introspective fallibilism, while option (b) is compatible with introspective infallibilism.

The audience who is to be convinced by the slow-fade version of the Fading Qualia Argument must trust the introspective reports (or "introspective reports") of the intermediate entities while not trusting those of Robot.  Given that some of the intermediate entities are extremely similar to Robot -- e.g., Entity N-1, who is only one neuron different -- it would be awkward and implausible to assume reliability for all the intermediate entities while denying it for Robot.

Now plausibly, if there is a slow fadeout, it's not going to be still going on with an entity as close to Robot as Entity N-1, so the relevant cases will be somewhere nearer the middle.  Stipulate, then, two values I and J not very far separated (0 < I < J < N) such that we can reasonably assume that if Robot is nonconscious, so is Entity J, while we cannot reasonably assume that if Robot is nonconscious, so is Entity I.  For consistency with their doubts about the introspective reports (or "introspective reports") of Robot, the target audience should have similar doubts about Entity J.  But now it's unclear why they should be confident in the reports of Entity I, which by stipulation is not far separated from Entity J.  Maybe it's a faded case, despite its report of vivid experience.

Here's one way to think about it.  Setting aside introspective skepticism about normal humans, we should trust the reports of Chalmers / Entity 0.  But ex hypothesi, the target audience for the argument should not trust the "introspective reports" of Robot / Entity N.  It's then an open question whether we should trust the reports of the relevant intermediate, possibly experientially faded, entities.  We could either generalize our trust of Chalmers down the line or generalize our mistrust of Robot up the line.  Given the symmetry of the situation, it's not clear which the better approach is, or how far down or up the slippery slope we should generalize the trust or mistrust.

For Chalmers' argument to work, we must be warranted in trusting the reports of Entity I at whatever point the fade-out is happening.  To settle this question, Chalmers needs to do more than appeal to the general reliability of introspection in normal human cases and the lack of functional differences between him, Robot, and the intermediate entities.  Even an a priori argument that introspection is infallible will not serve his purposes, because then the open question becomes whether Robot and the relevant intermediate entities are actually introspecting.

Furthermore, if there is introspective error by Entity I, there's a tidy explanation of why that introspective error would be unsurprising.  For simplicity, assume that introspection occurs in the Introspection Module located in the pineal gland, and that it works by sending queries to other parts of the brain, asking questions like "Hey, occipital lobe, is red experience going on there right now?", reaching introspective judgments based on the signals that it gets in reply.  If Entity I has a functioning, biological Introspection Module but a replaced, silicon occipital lobe, and if there really is no red experience going on in the occipital lobe, we can see why Entity I would be mistaken: Its Introspection Module is getting exactly the same signal from the occipital lobe as it would receive if red experience were in fact present.
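To make the cartoon explicit, here is a toy version of that query-and-reply model in Python.  Everything in it (the class names, the query, the signal strings) is my own illustrative assumption; the only point it encodes is that the module's judgment is fixed by the signal it receives, not by whether experience is actually present at the other end.

```python
# Toy model of the query-based Introspection Module described above.
# All names, queries, and signal strings are my own illustrative assumptions.

class OccipitalLobe:
    def __init__(self, substrate: str, hosts_experience: bool):
        self.substrate = substrate                # "biological" or "silicon"
        self.hosts_experience = hosts_experience  # the disputed fact

    def reply_to_query(self, query: str) -> str:
        # Functional isomorphism: the reply is fixed by the functional role,
        # so it is the same whether or not experience is actually present.
        return "red signal: strong and vivid"

class IntrospectionModule:
    def judge(self, lobe: OccipitalLobe) -> str:
        signal = lobe.reply_to_query("Hey, is red experience going on there right now?")
        if "vivid" in signal:
            return "I have vividly bright red experiences!"
        return "Nothing much is going on."

module = IntrospectionModule()
biological = OccipitalLobe("biological", hosts_experience=True)
silicon = OccipitalLobe("silicon", hosts_experience=False)  # the skeptic's hypothesis

assert module.judge(biological) == module.judge(silicon)
# Identical judgments either way: if experience is in fact absent, the report
# is mistaken, and the mistake is exactly what this setup predicts.
```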

It's highly doubtful that introspection is as neat a process as I've just described.  But the point remains.  If Entity I is introspectively unreliable, a perfectly good explanation beckons: Whatever cognitive processes subserve the introspective reporting are going to generate the same signals -- including misleading signals, if experience is absent -- as they would in the case where experience is present and accurately reported.  Thus, unreliability would simply be what we should expect.

Now it's surely in some respects more elegant if we can treat Chalmers, Robot, and all the intermediate entities analogously, as conscious and accurately reporting their experience.  The Fading Qualia setup nicely displays the complexity or inelegance of thinking otherwise.  But the intended audience of the Fading Qualia argument is someone who wonders whether experience tracks so neatly onto function, someone who suspects that nature might in fact be complex or inelegant in exactly this respect, such that it's (nomologically/naturally/scientifically) possible to have a behavioral/functional isomorph who "reports" experiences but who in fact entirely lacks them.  The target audience who is initially neutral about the consciousness of Robot should thus remain unmoved by the Fading Qualia argument.

This isn't to say I disagree with Chalmers' conclusion.  I've advanced a very different argument for a similar conclusion: The Copernican Argument for Alien Consciousness, which turns on the idea that it's unlikely that, among all behaviorally sophisticated alien species of radically different structure that probably exist in the universe, humans would be so lucky as to be among the special few with just the right underlying stuff to be conscious.  Central to the Fading Qualia argument in particular is Chalmers' appeal to the presumably reliable introspection of the intermediate entities.  My concern is that we cannot justifiably make that presumption.

Dancing Qualia

Chalmers pairs the Fading Qualia argument with a related but more complex Dancing Qualia argument, which he characterizes as the stronger of the two arguments.  Without entering into detail, Chalmers posits, for the sake of reductio ad absurdum, that the alternative medium (e.g., silicon) hosts experiences but of a different qualitative character (e.g., color inverted).  We install a system in the alternative medium as a backup circuit with effectors and transducers to the rest of the brain.  For example, in addition to having a biological occipital lobe, you also have a functionally identical silicon backup occipital lobe.  Initially the silicon occipital lobe backup circuit is powered off.  But you can power it on -- and power off your biological occipital lobe -- by flipping a switch.  Since the silicon lobe is functionally identical to the biological lobe, the rest of the brain should register no difference.

Now, if you switch between normal neural processing and the backup silicon processor, you should have very different experience (per the assumption of the reductio) but you should not be able to introspectively report that different experience (since the backup circuit interacts identically with the rest of the brain).  That would again be a strange failure of introspection.  So (per the rules of reductio) we conclude that the initial premise was mistaken: Normal neural processing should generate the same types of experience as functionally identical processing in a silicon processor.

(I might quibble that you-with-backup-circuit is not functionally isomorphic to you-without-backup-circuit -- after all, you now have a switch and two different parallel processor streams -- and if consciousness supervenes on the whole system rather than just local parts, that's possibly a relevant change that will cause the experience to be different from the experience of either an unmodified brain or an isomorphic silicon brain.  But set this issue aside.)

The Dancing Qualia argument is vulnerable on the introspective accuracy assumption, much as the Fading Qualia argument is.  Again for simplicity, suppose a biological Introspection Module.  Suppose that what is backed up is the portion of the brain that is locally responsible for red experience.  Ex hypothesi, the silicon backup gives rise to non-red experience but delivers to the Introspection Module exactly the same inputs as that module would normally receive from an organic brain part experiencing red.  This is exactly the type of case where we should expect introspection to be unreliable.
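The same toy model, adapted to the backup-circuit case (again entirely illustrative; the class names and the "green" assignment are just stand-ins for the reductio assumption), makes the point vivid: flipping the switch changes nothing downstream, so nothing reportable changes.

```python
# Illustrative sketch of the Dancing Qualia setup under the reductio assumption:
# the silicon backup hosts green rather than red experience but sends exactly
# the same signals downstream.  All names and details are my own assumptions.

class Lobe:
    def __init__(self, substrate: str, hosted_quale: str):
        self.substrate = substrate
        self.hosted_quale = hosted_quale  # what it's "really like", per the reductio

    def downstream_signal(self) -> str:
        # Functional identity: the signal to the rest of the brain does not
        # depend on substrate or on the hosted quale.
        return "red-type activity, high intensity"

class BrainWithBackup:
    def __init__(self):
        self.biological = Lobe("biological", hosted_quale="red")
        self.silicon = Lobe("silicon", hosted_quale="green")  # reductio assumption
        self.active = self.biological

    def flip_switch(self) -> None:
        self.active = self.silicon if self.active is self.biological else self.biological

    def introspective_report(self) -> str:
        # The rest of the brain, including whatever produces the report,
        # sees only the downstream signal from whichever lobe is active.
        signal = self.active.downstream_signal()
        return "I experience red." if "red-type" in signal else "I experience something else."

brain = BrainWithBackup()
before = brain.introspective_report()
brain.flip_switch()
after = brain.introspective_report()
assert before == after  # the switch makes no reportable difference --
# exactly the circumstance in which introspective unreliability is unsurprising
```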

Consider an analogous case of vision.  Looking at a green tree 50 feet away in good light, my vision is reliable.  Now substitute a red tree in the same location and a mechanism between me and the tree such that all the red light is converted into green light, so that I get exactly the same visual input I would normally receive from looking at a green tree.  Even if vision is highly reliable in normal circumstances, it is no surprise in this particular circumstance if I mistakenly judge the red tree to be green!

As I acknowledged before, this is a cartoon model of introspection.  Here's another way introspection might work: What matters is what is represented in the Introspection Module itself.  So if the introspection module says "red", necessarily I experience red.  In that case, in order to get Dancing Qualia, we need to create an alternate backup circuit for the Introspection Module itself.  When we flip the switch, we switch from Biological Introspection Module to Silicon Introspection Module.  Ex hypothesi, the experiences really are different but the Introspection Module represents them functionally in the same way, and the inputs and outputs to and from the rest of the brain don't differ.  So of course there won't be any experiential difference that I would conceptualize and report.  There would be some difference in qualia, but I wouldn't have the conceptual tools or memorial mechanisms to notice or remember the difference.

This is not obviously absurd.  In ordinary life we arguably experience minor versions of this all the time: I experience some specific shade of maroon.  After a blink, I experience some slightly different shade of maroon.  I might entirely fail to conceptualize or notice the difference: My color concepts and color memory are not so fine-grained.  The hypothesized red/green difference in Dancing Qualia is a much larger difference -- so it's not a problem of fineness of grain -- but fundamentally the explanation of my failure is similar: I have no concept or memory suited to track the difference.

On more holist/complicated views of introspection, the story will be more complicated, but I think the burden of proof would be on Chalmers to show that some blend of the two strategies isn't sufficient to generate suspicions of introspective unreliability in the Dancing Qualia cases.

Related Arguments

This response to the Fading Qualia argument draws on David Billy Udell's and my similar critique of Susan Schneider's Chip Test for AI consciousness (see also my chapter "How to Accidentally Become a Zombie Robot" in A Theory of Jerks and Other Philosophical Misadventures).

Although this critique of the Fading Qualia argument has been bouncing around in my head since I first read The Conscious Mind in the late 1990s, it felt a little complex for a blog post but not quite enough for a publishable paper.  But reading Ned Block's similar critique in his 2023 book has inspired me to express my version of the critique.  I agree with Block's observations that "the pathology that [Entity I] has [is] one of the conditions that makes introspection unreliable" (p. 455) and that "cases with which we are familiar provide no precedent for such massive unreliability" (p. 457).

9 comments:

David Duffy said...

I find it really hard to work out what "introspection" actually means here viz access and reporting. Instead of this SFnal thought experiment, I prefer the example of hypnotic analgesia under the claim of Barber that all hypnotic phenomena can occur in "normal consciousness". Then we have Entity0 saying that the cold pressor test is painful, Entity1 saying that it is painful but it doesn't hurt, and Entity2 saying that it is not painful. Traditional hypotheses are (A) that it really is painful, but E1 and E2 are lying about/denying their true experience (maybe they are tachycardic, showing they really are suffering), that (B1) E1 and E2 are not attending to part (E1) or all (E2) of their perceptions and interoceptions, or perhaps equivalently (B2) E1 and E2 are blocking peripheral signals and so perception never occurs. (A) seems akin to doubting the reports of the robot re here a lack of an expected "conscious" experience. I don't know if this really helps.

David Duffy said...

Obviously, I pick pain because of all the philosophers who use it as an example of irrefutable nonignorable conscious experience.

Anonymous said...

Fascinating post. Deep stuff in Chalmers’ arguments here.

My take: True introspection (I see “red”) is the mind’s detection of an immaterial aspect of itself which then has a causal influence upon how the material generating or grounding it behaves. There can be no true PHYSICAL introspection module as mental introspection is not detecting anything physical. So if the immaterial mind fades, the material mind would have to notice it. A zombie might claim to see red because the “physical” or “functional” role of red is being “detected” by the zombie but I doubt the Zombie would necessarily behave the same way as the non-zombie when detecting this. (So, in the strict sense, I suspect true zombies are actually inconceivable because we can know that immaterial states can, at least sometimes have physical effects that wouldn’t be present, or be different, if consciousness was absent, primarily because of the paradox of phenomenal judgement.) I deny the causal closure of the physical, a keystone of Chalmers thought, if closure implies the causal determination of physical processes absent mental influence. This doesn’t mean that mental states have to perform a causal role in the sense of a node in a deterministic chain of causal events, but rather they act as hidden variables collapsing certain indeterminacies in physical organization that look probabilistically determined from “the outside.” (Perhaps via quantum coherence or some such science-y explanation.) I’m convinced that problems with the paradox of phenomenal judgement are unsolvable and imply some kind of interactionist model of mind and matter, though perhaps more subtle than seeing mind as a domino in a collapsing line of dominoes. This paper on the topic was excellent: https://aporia.byu.edu/pdfs/naegle-the_paradox_of_phenomenal_judgment_and_causality.pdf

Arnold said...

Panpsychism: This philosophical view suggests that consciousness is a fundamental property of the universe, present for all things, from atoms, chips, humans and galaxies-beyond...
...Then Earth, as a complex system, possessing consciousness, is for exploring the relationship of the place of human consciousness with the consciousness of Earth in cosmology....

Anonymous said...

This argument is due to Hans Moravec originally, isn't it?

Arnold said...

Dare we forget...'Dennett's work emphasizes the importance of understanding cognition from multiple perspectives.
He challenges traditional dualistic views of mind and body, arguing for a more integrated approach'....my thoughts, edited by Gemini AI...

Anonymous said...

It seems to me it would be likely that Chalmers's robot would be conscious, if it could be constructed, but still I see a problem with Chalmers's argument as outlined.

It centers around these quotes from the post:

“(2.) consciousness somehow slowly fades away in the series”

“But as I've argued elsewhere, genuinely borderline consciousness is difficult or impossible to imaginatively conceive, …”

When reasoning about consciousness, we don't have empirical methods, just introspection. I can imagine a future science that can study consciousness empirically – maybe there is a machine that gives a view into what it is like to be a bat or an earthworm, or even another person. Maybe it just gives a quantitative reading of the qualities of consciousness. When we can build the robot described in the post, we may be close to having a technology to support the empirical study of consciousness. And then we may discover that consciousness varies as much as intelligence, that the introspected feeling of either on or off – conscious or not, is an illusion. Maybe there was no time in the evolution of life when consciousness suddenly happened. Maybe like so many things it evolved slowly.

Eric Schwitzgebel said...

Thanks for the comments, folks!

David: Yes -- and it's hard to know which is the case among those options, though perhaps it's not wholly empirically intractable, if there's good convergent evidence from several approaches. The Fading Qualia thought experiment is less empirically tractable. And if you're a hard core functionalist of a certain type, it might even seem inconceivable that consciousness and introspection could be anything other than a functional process.

Anon Oct 23: Right, interactionist dualism will generate a very different story! On that view, we would have to expect some failure of functional duplication in the replacement process -- and perhaps that would be noticeable.

Anon Oct 25: Cuda 1985 predates Moravec's famous book by a few years -- but maybe there's an earlier formulation by Moravec?

Arnold: Yes, I agree, multiple approaches are good.

Anon Oct 27: It does seem plausible that consciousness emerged slowly and -- if you're willing to allow this -- permits in-between / indeterminate cases. (This is the thesis of a 2023 article of mine.) On a consciousness detector: Maybe! I wouldn't say it's impossible in principle, but given the dissensus in the science of consciousness, it is at best a long way off.

Arnold said...

...Introspection-extrospection-isomorphism; with or without purpose...

At NPR news: "Ever felt so stressed you didn’t know what to do next?...
...try talking to your 'parts"...

My view: try letting your parts talk to you...purposeful in-betweenness...