David Chalmers defends what he calls a principle of organizational invariance according to which if a system has conscious experiences then any other system with the same fine-grained functional organization will have qualitatively identical experiences. His main arguments for this principle are his "Fading Qualia" and "Dancing Qualia" arguments.
Both arguments are reductios. Let's start with Fading Qualia. Suppose, contra the principle of organizational invariance, that there could be a fine-grained functional isomorph of you without conscious experience -- perhaps a robot (call him Stu) with a brain made of silicon chips instead of neurons. If this is possible, then it should also be possible to create a series of intermediate beings between You and Stu -- perhaps, for example, beings in which different proportions of the neurons are replaced by silicon chips. If You have a hundred billion neurons in your brain, then maybe we can imagine a hundred billion minus one intermediate cases, each with one fewer neuron and one more silicon chip. The question is: What kind of consciousness do these intermediate beings have? Chalmers argues that there is no satisfactory answer.
There seem to be two ways to go. First, consciousness might suddenly disappear somewhere in the progression -- say, between the being with fifty billion and one neurons and the being with fifty billion. But that seems bizarre. How could the replacement of a single neuron make the difference between consciousness and its absence? You and Fifty-Billion-and-One are having vivid visual experiences of a basketball game, say, while poor Fifty-Billion is a complete experiential blank. Surely we don't want to accept that.
Seemingly more plausible is the second option: Consciousness slowly fades out between You and Stu. But then what does Fifty-Billion experience? Half of a visual field? An entire visual field, but hazy or in unsaturated color? Note that since You, Stu, and Fifty-Billion are all identical at the level of functional organization, you will all exhibit exactly the same outward behavior. You will all, when asked to introspect, presumably say something like "I am having vivid visual experience of a basketball game". Stu is wrong about this, of course, if it makes sense to attribute assertions to him at all; but he is just a silicon robot without consciousness, so maybe that's okay. Fifty-Billion, however, is not just a silicon robot. He has some consciousness, yet he seems to be badly wrong about it. His visual experience is not, as he says, vivid and sharp, but rather indistinct, or incomplete, or unsaturated. And Chalmers suggests that it's absurd to attribute that kind of radical error to him. Thus Chalmers completes the reductio: denying the principle of organizational invariance leads to absurdity. You, Stu, and Fifty-Billion must all have qualitatively identical conscious experiences.
I object to the last move in this argument -- the idea that it is absurd that Fifty-Billion could make that kind of mistake. My reason is this: Many of us make exactly the same mistake in ordinary instances of introspection. Some people, for example, when asked how detailed their conscious experience is at any one moment, say that it is extremely rich -- full of precise detail through a wide visual field, and simultaneously full of auditory detail, tactile detail, and detail in other modalities. Others say that their experience is very sparse -- that they only experience one or a few things at a time. On the sparse view, when one is attending to the visual environment, one has no experience of the feet in one's shoes; when one is attending to one part of the visual field, one has no experience of the areas outside of attention; etc. I have argued that this dispute does not turn merely on a disagreement about terminology, and does not reflect radical differences in different people's experiences, but rather is a real, substantive phenomenological dispute. One or both parties must therefore be radically wrong about their experience. This is at least, I think, not an absurd view, given the potential sources of error about the richness of experience, such as the refrigerator light illusion (the possibility that thinking about experience in some modality or region creates experience in that modality or region where none was before, causing us to mistakenly think it was there all along). And if it's not absurd to suppose that ordinary people could be mistaken about how rich and detailed their experience is, it's not absurd to suppose that Fifty-Billion could be mistaken.
Dancing Qualia is a variation of Fading Qualia. It requires two visual processing systems with the same functional organization but different associated visual phenomenology, and it requires the capacity for you to switch swiftly between these systems. Since the functional organization of the two systems is the same, you won't report any difference in experience when you switch from one to the other -- which implies that some of your reports about your experience will be mistaken, implausibly mistaken in Chalmers's view. Therefore, by reductio, the systems cannot really differ in their associated visual phenomenology.
But in cases of "change blindness", people will fail to notice substantial changes in their visual experience -- or at least this is true if experience is relatively rich. Such failures are perhaps not as severe as what might be created by a visual-system switch, and, as Chalmers notes, many of them require that your attention not be on the object of change. However, not all change-blindness cases seem to require lack of attention to the changed stimulus -- consider the case in which the person you are talking to changes after a brief interruption without your noticing (though determining what exactly qualifies as a target of attention may be a difficult matter in such scenarios). And in any case, consideration of such cases should, I think, loosen our commitment to the seeming absurdity of failing, especially in weird scenarios, to notice radical changes in experience.
Furthermore, the Dancing Qualia case seems problematically pre-built to frustrate our ability to notice differences, much as radically skeptical brain-in-a-vat scenarios are pre-built to frustrate the sensory abilities on which we depend, by delivering the same sensory input despite a large change in the far-side objects. The following model is too simplistic, but it conveys the idea I have in mind: Imagine that introspection works by means of an introspection module located near the front of the brain, which receives input from the visual cortex in the back of the brain. The back of the brain has been changed so that experience is radically different (on the assumption of the reductio), but changed only in such a way that the input from the back to the front of the brain is exactly the same. In such a case, it seems not at all absurd to suppose that introspection would fail to notice a difference, despite a real difference in experience. Thus, the Dancing Qualia reductio fails.