In his 1996 book and a related 1995 article, David Chalmers offers what he calls the "fading qualia" argument that there's nothing in principle special about neurons (see also Cuda 1985). The basic idea is that, in principle, scientists could swap your neurons out one by one, and you'd never notice the difference. But if your consciousness were to disappear during this process, you would notice the difference. Therefore, your consciousness would not disappear. A similar idea underlies Susan Schneider's "Chip Test" for silicon consciousness: To check whether some proposed cognitive substrate really supports consciousness, slowly swap out your neurons for that substrate, a piece at a time, checking for losses of consciousness along the way.
In a recent article, David Udell and I criticized Schneider's version of the swapping test. Our argument can be adapted to Chalmers's fading qualia argument, and making that adaptation is my project today.
First, a bit more on how the gradual replacement is supposed to work. Suppose you have a hundred billion neurons. Imagine replacing just one of those neurons with a silicon chip. The chemical and electrical signals that serve as inputs to that neuron are registered by detectors connected to the chip. The chip calculates the effects that those inputs would have had on the neuron's behavior -- specifically, what chemical and electrical signals the neuron, had it remained in place, would have given as outputs to other neurons connected to it -- and then delivers those same outputs to those same neurons via effectors attached to the silicon chip at one end and to the target neurons at the other. No doubt this would be complicated, expensive, and bulky; but all that matters to the thought experiment is that it would be possible in principle. A silicon chip could be made to perfectly imitate the behavior of a neuron, taking whatever inputs the neuron would take and converting them into whatever outputs the neuron would emit given those inputs. Given this perfect imitation, no other neurons in the brain would behave differently as a result of the swap: They would all be getting the same inputs from the silicon replacement that they would have received from the original neuron.
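To make the functional point concrete, here's a minimal toy sketch in Python (my own illustration, not Chalmers's; the threshold rule and the function names are invented for the example). The only feature that matters is that the replacement computes the very same input-output mapping as the original, so downstream cells receive exactly the same signals:

    # Toy sketch only: a real neuron is vastly more complex, but the
    # thought experiment requires nothing beyond input-output equivalence.

    def biological_neuron(inputs):
        # Stand-in for the neuron's input-output behavior (invented threshold rule).
        return 1.0 if sum(inputs) > 0.5 else 0.0

    def silicon_chip(inputs):
        # The replacement: by stipulation, it computes the very same mapping.
        return biological_neuron(inputs)

    # Downstream neurons cannot tell the difference:
    for signal in [(0.2, 0.1), (0.4, 0.3), (0.9, 0.0)]:
        assert biological_neuron(signal) == silicon_chip(signal)

Whatever region receives the output gets identical signals either way, which is precisely why the rest of the brain carries on undisturbed.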
So far, we have replaced only a single neuron, and presumably nothing much has changed. Next, we swap another. Then another. Then another, until eventually all one hundred billion have been replaced, and your "neural" structure is now entirely constituted by silicon chips. (If glial cells matter to consciousness, we can extend the swapping process to them also.) The resulting entity will have a mind that is functionally identical to your own at the level of neural structure. This implies that it will have exactly the same behavioral reactions to any external stimuli that you would have. For example, if it is asked, "Are you conscious?" it will say, "Definitely, yes!" (or whatever you would have said), since all the efferent outputs to your muscles will be exactly the same as they would have been had your brain not been replaced. The question is whether the silicon-chipped entity might actually lack conscious experiences despite this behavioral similarity, that is, whether it might be a "zombie" that is behaviorally indistinguishable from you despite having nothing going on experientially inside.
Chalmers's argument is a reductio. Assume for the sake of the reductio that the final silicon-brained you entirely lacks conscious experience. If so, then sometime during the swapping procedure consciousness must either have gradually faded away or suddenly winked out. It's implausible, Chalmers suggests, that consciousness would suddenly wink out with the replacement of a single neuron. (I'm inclined to agree.) If so, then there must be intermediate versions of you with substantially faded consciousness. However, the entity will not report having faded consciousness. Since (ex hypothesi) the silicon chips are functionally identical with the neurons, all the intermediate versions of you will behave exactly the same as they would have behaved if no neurons had been replaced. Nor will there be other neural activity constitutive of believing that your consciousness is fading away: Your unreplaced neurons will keep firing as usual, as if there had been no replacement at all.
However, Chalmers argues, if your consciousness were fading away, you would notice it. It's implausible that the dramatic changes of consciousness that would have to be involved when your consciousness is fading away would go entirely undetected during the gradual replacement process. That would be a catastrophic failure of introspection, which is normally a reliable or even infallible process. Furthermore, it would be a catastrophic failure that occurs while the cognitive (neural/silicon) systems are functioning normally. This completes the reductio. Restated in modus tollens form: If consciousness would disappear during gradual replacement, you'd notice it; but you wouldn't notice it; therefore consciousness would not disappear during gradual replacement.
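For readers who like the logic spelled out, here is the modus tollens schematically (my formalization; Chalmers states it in prose), with D = "consciousness disappears during gradual replacement" and N = "you notice the change":

\[
\begin{aligned}
&\text{(P1)}\quad D \rightarrow N &&\text{(if consciousness disappeared, you would notice)}\\
&\text{(P2)}\quad \neg N &&\text{(you would not notice; all reports and neural activity are unchanged)}\\
&\text{(C)}\quad \therefore\ \neg D &&\text{(consciousness would not disappear)}
\end{aligned}
\]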
As Udell and I frame it in our discussion of Schneider, this argument has an audience problem. Its target audience is someone who worries that, despite in-principle functional identity at the neuronal level, silicon might just not be the right kind of stuff to host consciousness. Someone who has this worry presumably does not trust the introspective reports, or the seemingly introspective reports, of the silicon-brained entity. The silicon-brained entity might say, "Yes, of course I'm conscious! I'm experiencing right now visual sensations of your face, auditory sensations of my voice, and a rising feeling of annoyance at your failure to believe me!" The intended audience remains unconvinced by this apparent introspective testimony. They need an argument to be convinced otherwise -- the Fading Qualia argument.
Let's call the entity (the person) before any replacement surgery r0, and the entity after all their neurons are replaced rn, where n is the total number of neurons replaced. During replacement, this entity passes through stages r1, r2, r3, ... ri, ... rn. By stipulation, our audience doesn't trust the introspective or seemingly introspective judgments of rn. This is the worry that motivates the need for the Fading Qualia argument. In order for the argument to work, there must be some advantage that the intermediate ri entities systematically possess over rn, such that we have reason to trust their introspective reports despite distrusting rn's report.
Seemingly introspective reports about conscious experience may or may not be trustworthy in the normal human case (Schwitzgebel 2011; Irvine 2013). But even if they're trustworthy in the normal human case, they might not be trustworthy in the unusual case of having pieces of one's brain swapped out. One might hold that introspective judgments are always trustworthy (absent a certain range of known defeaters, which we can stipulate are absent) -- in other words, that unless a process accurately represents a target conscious experience, it is not a genuinely introspective process. This is true, for example, on containment views of introspection, according to which properly formed introspective judgments contain the target experiences as a part (e.g., "I'm experiencing [this]"). Infallibilist views of introspection of that sort contrast with functionalist views of introspection, on which introspection is a fallible functional process that garners information about a distinct target mental state.
A skeptic about silicon consciousness might either accept or reject an infallibilist view of introspection. The Fading Qualia argument will face trouble either way.
[Figure: A Trilemma for the Fading Qualia Argument. Optimists about silicon chip consciousness have no need for an argument in favor of rn consciousness, because they are already convinced of its possibility. Skeptics about silicon consciousness, on the other hand, are led to doubt either the presence or the reliability of ri's introspection (depending on their view of introspection), for the same reason they are led to doubt rn's consciousness in the first place.]
If a silicon chip skeptic holds that genuine introspection requires and thus implies genuine consciousness, then they will want to say that a "zombie" rn, despite emitting what looks from the outside like an introspective report of conscious experience, does not in fact genuinely introspect. With no genuine conscious experience for introspection to target, the report must issue, on this view, from some non-introspective process. This raises the natural question of why they should feel confident that the intermediate ris are genuinely introspecting, instead of merely engaging in a non-introspective process similar to rn's. After all, there is substantial architectural similarity between rn and at least the late-stage ris. The skeptic needs, but Chalmers does not provide, some principled reason to think that entities in the ri phases would in fact introspect despite rn's possible failure to do so -- or at least good reason to believe that the ris would successfully introspect their fading consciousness during the most crucial stages of fade-out. Absent this, reasonable doubt about rn's introspection naturally extends into reasonable doubt about introspection in the ri cases as well. The infallibilist skeptic about silicon-based consciousness needs their skepticism about introspection to be assuaged for at least those critical transition points before they can accept the Fading Qualia argument as informative about rn's consciousness.
If a skeptic about silicon-based consciousness believes that genuine introspection can occur without delivering accurate judgments about consciousness, analogous difficulties arise. Either rn does not successfully introspect, merely seeming to do so, in which case the argument of the previous paragraph applies, or rn does introspect and concludes that consciousness has not disappeared or changed in any radical way. The functionalist or fallibilist skeptic about silicon-based consciousness does not trust that rn has introspected accurately. On their view, rn might in fact be a zombie, despite introspectively based claims otherwise. Absent any reason for the fallibilist skeptic about silicon-based consciousness to trust rn's introspective judgments, why should they trust the judgments of the ris -- especially the late-stage ris? If rn can mistakenly judge itself conscious on the basis of its introspection, might someone undergoing the gradual replacement procedure also erroneously judge their consciousness not to be fading away? Gradualness is no assurance against error. Indeed, error is sometimes easier if we (or "we") slowly slide into it.
This concern might be mitigated if loss of consciousness is sure to occur early in the replacement process, when the entity is much closer to r0 than rn, but I see no good reason to make that assumption. And even if we were to assume that phenomenal alterations would occur early in the replacement process, it's not clear why the fallibilist should regard those changes as the sort that introspection would likely detect rather than miss.
The Fading Qualia argument awkwardly pairs skepticism about rn's introspective judgments with unexplained confidence in the ri's introspective judgments, and this pairing isn't theoretically stable on any view of introspection.
The objection can be made vivid with a toy case: Suppose that we have an introspection module in the brain. When the module is involved in introspecting a conscious mental state, it will send query signals to other regions of the brain. Getting the right signals back from those other regions -- call them regions A, B, and C -- is part of the process driving the judgment that experiential changes are present or absent. Now suppose that all the neurons in region B have been replaced with silicon chips. Silicon region B will receive input signals from other regions of the brain, just as neural region B would have, and silicon region B will then send output signals to other brain regions that normally interface with neural region B. Among those output signals will be signals to the introspection module.
When the introspection module sends its query signal to region B, what signal will it receive in return? Ex hypothesi, the silicon chips perfectly functionally emulate the full range of neural processes of the neurons they have replaced; that's just the set-up of the Fading Qualia argument. Given this, the introspection module would of course receive exactly the same signal it would have received from region B had region B not been replaced. If so, then entity ri will presumably infer that activity in region B is conscious. Maybe region B normally hosts conscious experiences of thirst. The entity might then say to itself (or aloud), "Yes, I'm still feeling thirsty. I really am having that conscious experience, just as vividly, with no fading, despite the replacement of that region of my brain by silicon chips." This would be, as far as the entity could tell, a careful and accurate first-person introspective judgment.
(If, on the other hand, the brain region containing the introspection module is the region being replaced, then maybe introspection isn't occurring at all -- at least in any sense of introspection that is committed to the idea that introspection is a conscious process.)
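Here is the toy case rendered as a minimal runnable sketch (again my own illustration, in Python; the "introspection module" and region B are the cartoon fiction above, not real neuroanatomy). The point is that the module's verdict depends only on the returned signal, which is identical by stipulation, so the verdict carries no evidence about whether region B's activity is conscious:

    def region_b_neural(query):
        # Original neural region B: returns its characteristic response signal.
        return "thirst-signal"

    def region_b_silicon(query):
        # Silicon replacement: same inputs, same outputs, by stipulation.
        return region_b_neural(query)

    def introspection_module(region):
        # Judges whether experience has faded, based solely on the returned signal.
        response = region("status-query")
        return "no fading detected" if response == "thirst-signal" else "fading!"

    print(introspection_module(region_b_neural))   # -> no fading detected
    print(introspection_module(region_b_silicon))  # -> no fading detected, identically

Whether or not the silicon region hosts any experience at all, the module's output is the same -- which is exactly the skeptic's worry.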
A silicon-chip consciousness optimist who does not share the skeptical worries that motivate the need for the Fading Qualia argument might be satisfied with that demonstration. But the motivating concern, the reason we need the argument, is that some people doubt that silicon chips could host consciousness even if they can behave functionally identically with neurons. Those theorists, the target audience of the Fading Qualia argument, should remain doubtful. They ought to worry that the silicon chips replacing brain region B don't genuinely host consciousness, despite feeding output to the introspection module that leads ri to conclude that consciousness has not faded at all. They ought to worry, in other words, that the introspective process has gone awry. This needn't be a matter of "sham" chips intentionally designed to fool users. It seems to be just a straightforward engineering consequence of designing chips to exactly mimic the inputs and outputs of neurons.
This story relies on a cartoon model of introspection that is unlikely to closely resemble introspection as it actually occurs. However, the present argument doesn't require that there be an actual introspection module or query process like the one in the toy case above; an analogous story holds for more complex and realistic models. If silicon chips functionally emulate neurons, then someone with the kinds of skeptical worries about silicon-based consciousness that the Fading Qualia argument is designed to address has good reason for a parallel worry: that replacing neurons with functionally perfect silicon substitutes would either create inaccuracies of introspection or replace the introspective process with whatever non-introspective process even zombies engage in.
The Fading Qualia argument thus, seemingly implausibly, combines distrust of the putative introspective judgments of rn with credulousness about the putative introspective judgments of the series of ris between r0 and rn. An adequate defense of the Fading Qualia argument will require careful justification of why someone skeptical about the seemingly introspective judgments of an entity whose brain is entirely silicon should not be similarly skeptical about similar seemingly introspective judgments that occur throughout the gradual replacement process. As it stands, the argument lacks the necessary resources legitimately to assuage the doubts of those who enter it uncertain about whether consciousness would be present in a neuron-for-neuron silicon isomorph.
----------------------------------------
Related:
"Chalmers's Fading/Dancing Qualia and Self-Knowledge" (Apr 22, 2010)
"How to Accidentally Become a Zombie Robot" (Jun 23, 2016)
Much of the text above is adapted with revisions from:
"Susan Schneider's Proposed Tests for AI Consciousness: Promising but Flawed" (with David Billy Udell), Journal of Consciousness Studies, 28 (5-6), 121-144.
Another interesting post, Eric!
Can't say I'm a giant fan of the Fading Qualia argument, even though I have no issue with machine consciousness. As you note, it's a reductio, and the problem with reductio ad absurdum is that history has repeatedly shown us that reality is absurd, at least by the standards we hold prior to each paradigm shifting discovery. (See this year's Nobel prizes in physics.)
My take is that we can't trust rn's introspection, or rj's, or even r0's. We can't even trust our own introspection. Maybe we are an rn right now who just think we're an r0. How could we know that what we think is our own consciousness is real and true consciousness? What evidence could convince us we don't have the real thing? Or that we do?
Maybe the answer is to remember that r0 is a result of natural selection, and that selection can only work against adaptive traits and capabilities. That would seem to make the comparison between rn and r0 easier. But I’m a functionalist, so that would be my take.
Mike
reaction (1) "you'd never notice the difference", but 'noticing' is consciousness...
reaction (2) neural systems vs neurons vs values vs...
reaction (3) consciousness may need different energies by many other evolutionary functions in us, to be what it is as consciousness to us...
reaction (4) I think our consciousness can continue to evolve but with much more introspection...
I'm inclined to think there is a ship of Theseus issue - that piecemeal replacement doesn't mean that, upon complete replacement, you have avoided dying.
Fanciful thinking warning: Some of my own musings on how to be immortal involved a process of electronic replacement - but I think it doesn't work. At best you get a nomadic migration to the new silicon and the hope that this migration forms a strong iteration of you. And the hope that the iteration is a good progression into the future.
But that said - maybe you can die but have zero self reported break in consciousness during it? I'm not quite on topic in saying that but it's an interesting benchmark to consider in regards to consciousness if biological death could occur while self reported consciousness continues.
Currently the construct used for consciousness seems to treat consciousness like it hovers above neurons - even the introspection neurons. So kind of like a table cloth beneath a bowl, if you whip the table cloth out from under consciousness quickly and smoothly then consciousness just stays there? It doesn't seem to treat consciousness as flat with the nature of things - that the introspection neurons interactions are what constitutes the behaviors associated with consciousness.
In some ways it seems we are aware that part of us is flat with nature but then a certain part of us seems to rise and hover above nature and not be a part of it. To me it seems this approach informs the controversy on both sides. One side sees it that the replacement by electronics would lead to 'not real consciousness' because the hovering consciousness wouldn't hang around. The other side seems to treat it that consciousness is hovering and the replacement would lead to that hovering element continuing to be there. Neither side is really engaging with the idea that there is no hovering component at all - it's just an artifact of an organism's inability to detect itself fully (detecting oneself includes detecting the detecting - which runs into all sorts of recursion problems that can't be overcome, but the organism needs some hypothesis - thus the hovering consciousness hypothesis. Scott Bakker's blind brain hypothesis describes the recursion issue in longer form).
But to actually ask, does it seem that consciousness hovers above the material of neurons and even the neurons involved in introspection? And it's a matter of how much you can replace/whip out the tablecloth of neurons from underneath the bowl of consciousness and the bowl remains?
Sorry for the long comment, kind of my jam. Thanks for the post, Eric.
There are times when I am less phenomenally conscious of a particular stimulus than others: when I am passing in and out of sleep, when I am not attending to it, etc. If you ask me, "are you attending to your perception of the color red right now? Are you consciously experiencing it?" and I answer "yes and yes" as I attend less and less to it, the first statement gradually fades from the truth to a lie, and at the end of the process, when I am not attending to it at all but just mechanically responding yes to the question, my answer to the second question is completely unreliable.
Something very similar may be happening as I am gradually replaced with silicon chips, the difference being that I myself would be completely unaware that my attention was fading, because the chips would be reporting to the rest of my brain that I was still being attentive to it.
The thought experiment involves replacing neurons with silicon chips with inputs and outputs identical to those of the neurons replaced. As I understand the scenario, these inputs and outputs are the release of neurotransmitters and the effects they have on the release of neurotransmitters by the neurons or silicon chips they synapse with.
Doesn’t this leave out intraneuronal activity?
Even if all neurons were replaced with silicon chips, resulting in the same input/output pattern as the replaced neurons, this would leave out a lot of neurological activity.
Hi Eric,
I understand the target audience to be the infallibilist skeptic who is also an (ontological) dualist. Chalmers, I think, is addressing those persons (like Searle) who implicitly believe that our direct phenomenal beliefs are constituted by non-physical phenomenal states. Therefore, all Chalmers needs to do to demonstrate (for the skeptic) that intermediate zombies will have genuine introspection is to show that they will have genuine consciousness. Since the target audience accepts the existence of intermediate consciousness, they will believe in genuine introspection.
According to the skeptic, the fading qualia person will still form phenomenal beliefs that are directly constituted by their (fading) phenomenal states, even if they are functionally very similar to the zombie person. The relevant premise here, I take it, is that cognition is supposed to supervene on physical states. Chalmers' reductio is meant to show the skeptic that if you were experiencing fading qualia, you wouldn't actually be able to form direct phenomenal beliefs about it. The skeptic (presumably) never considers (or disagrees) that cognition is physical.
Ironically, I think the skeptic is right here. If you're going to be a dualist concerning phenomenal experience, it seems you have to be a dualist "all the way down". The argument being that we can't avoid epiphenomenal worries if phenomenal states are non-reducible to physical states but cognition is not. Avoiding the paradox of the phenomenal judgment arguably requires not only that cognition is non-physical (for the dualist) but that basically one's entire mental life is non-physical. Helen Yetter-Chappell has a great argument to this effect here: https://philarchive.org/rec/YETDAT.
Interestingly, the paradox of the phenomenal judgment would seem to apply to other forms of phenomenal realism as well, like interactionism and panpsychism. For example, Howell (https://philpapers.org/rec/HOWTRM) argues convincingly (in my opinion) that even on Russellian Monist panpsychism, where intrinsic phenomenal states cause physical effects, such effects still won't occur in virtue of their intrinsicality. In other words, we can just swap the "bare inscrutables" and leave the physical structure intact. If that’s true, then we still face the same problem: namely, how we can have any epistemic contact between cognitive and phenomenal states. Therefore, all phenomenal realist accounts would seem to require dualism "through and through". Of course, this would completely refute Chalmers's argument, since the skeptic would end up being in the right.
The exception would be if we adopted some kind of powers account (like Hedda Hassel Morch’s phenomenal powers theory). On these views, certain physical effects necessarily occur due to certain phenomenal causes. In other words, the right kind of dispositional behavior regarding physical red would be necessarily caused by states that instantiate phenomenal red. Chalmers could try appealing to such a theory to argue that we would have good reason to think that fading qualia states couldn't exist. If they did exist, then that implies a physical-phenomenal mismatch. However, there could be no mismatch under a powers account, because in a universe where silicon chips did instantiate fading qualia, those fading qualia states would (through their dispositional powers) change the physical functionality of silicon chips (presumably by changing the fundamental physical laws?). Since we don't live in such a universe, it would follow that fading qualia are impossible.
Unfortunately, for a lot of reasons, powers theories aren't that popular among most phenomenal realists. In any case, they are typically invoked just for hedonic states (e.g. to explain away fortunate coincidences between phenomenal pain and physical pain) so as to satisfy the evolutionary argument, and not for every single phenomenal state.
Alex and Eric...thanks
All the while were you also meaning-comparing phenomenon to observation implying observation need not be remembered with phenomenon...
...evolution and introspection may not work that way...
Remembrance...minds cannot help theirselves...
...its what they have and are to work and live with...
Thanks for the comments, folks!
Mike/SelfAware: Right! "Reality is absurd" is basically the thesis of my forthcoming book. I agree about introspection, too.
Arnold: Why can't minds help themselves?
Callan: Ship of Theseus issues with consciousness, death, and selves are interesting, since we ordinarily think of these things as discretely yes/no rather than coming in degrees. But almost all of reality comes in degrees, so our intuitions probably fail here (see my paper and posts on "Borderline Consciousness"). On the "hovering" idea, I suspect that almost all philosophers who aren't substance dualists would deny that they are engaging in thinking of that sort -- which doesn't mean that they aren't implicitly doing so, of course!
D: Yes, nicely put. That's basically the worry that Chalmers doesn't adequately address, in my view.
Dan: What sort of intraneuronal activity do you think is being left out? On standard models as I understand them, the neurotransmitters are what's doing the work. You could say, maybe, that the overall shape of the electrical field as measured by EEG would be different, and of course things would look very different on an fMRI. Maybe those things matter?
Alex: Thanks so much for that super interesting and helpful comment. I have always been inclined to think that the account of phenomenal judgment is the thread in Chalmers's view which, when you pull on it, unravels the whole thing, so I think we're basically in agreement here. Thanks for the link to Yetter-Chappell's recent paper. I hadn't noticed it, but she's always interesting on these issues, and not frightened away by a bit of metaphysical bizarreness! Frankish is also interesting on this point, for example in his paper "Panpsychism and the Depsychologization of Consciousness".
Even if neurotransmitters do most of the work, I don't think that the cells/neurons upon which they act only function to release or uptake more neurotransmitters. If that were so, then the thought experiment could be pared down to a scenario where the release and uptake of neurotransmitters could occur at a distance without the substitutions of silicon chips.
Is the hypothesis that all that matters is the occurrence of neurotransmitter presences and uptakes occurring in a particular geometric configuration?
Dan,
In order to engage with this argument, I think we probably need to leave almost everything we know about the science of how the brain works behind.
Consciousness would fade, I believe, as we replaced whatever C factors there are in the brain with something else. Likely unconsciousness and death would be the end point, with Alzheimer's and dementia as stopovers along the way. Does the man who forgot his wife know his consciousness is fading? Maybe some do, at least in the early stages of decline.
Regarding introspection, I would say it can't be greatly more reliable or unreliable than any other aspect of consciousness. It is all produced from the same neurons and other C factors (whatever they may be) that produce the other aspects of consciousness. In a sense there is nothing but introspection. We do not actually see ourselves or our psyches apart from our total model of the world, although we explore this possibility in ideation regarding suicide and death. We know ourselves easily as well as we know the tree in the backyard.
@Eric: Thanks as well for the link. It just so happens that I've already read Frankish's paper. :)
I agree with much of what he has to say in that article, though I don't count myself as an illusionist. Discourse on consciousness arguably belongs to the realm of psychology and neuroscience, not physics. Something has gone terribly wrong with panpsychism if we have to characterize phenomenal consciousness, and therefore other components of the mental, in physics-based terms.
By the way, I sent you an email earlier today about a proposed solution to the hard problem, should you be interested.