Friday, January 23, 2026

Is Signal Strength a Confound in Consciousness Research?

Matthias Michel is among the sharpest critics of the methods of consciousness science. His forthcoming paper, "Consciousness Doesn't Do That", convincingly challenges background assumptions behind recent efforts to discover the causes, correlates, and prevalence of consciousness. It should be required reading for anyone tempted to argue, for example, that trace conditioning correlates with consciousness in humans and thus that nonhuman animals capable of trace conditioning must also be conscious.

But Michel does make one claim that bugs me, and that claim is central to the article. And Hakwan Lau -- another otherwise terrific methodologist -- makes a similar claim in his 2022 book In Consciousness We Trust, and again the claim is central to the argument of that book. So today I'm going to poke at that claim, and maybe it will burst like a sour blueberry.

The claim: Signal strength (performance capacity, in Lau's version) is a confound in consciousness research.

As Michel uses the phrase, "signal strength" is how discriminable a perceptible feature is to a subject. A sudden, loud blast of noise has high signal strength. It's very easy to notice. A faint wavy pattern in a gray field, presented for a tenth of a second, has low signal strength. It is easy to miss. Importantly, signal strength is not the same as (objective, externally measurable) stimulus intensity, but reflects how well the perceiver responds to the signal.
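
To make "discriminability" a bit more concrete: psychophysicists standardly quantify it with signal detection theory's sensitivity index d', which separates how well a perceiver can tell signal from noise from how liberally they say "yes". Here's a minimal sketch in Python -- my illustration, not Michel's own formalism:

```python
# Minimal illustration of discriminability as d' (signal detection theory).
# This is just one standard way to make "signal strength" precise;
# the example hit/false-alarm rates are invented.
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# A loud blast: detected on nearly every trial, almost never "detected"
# when absent -- very high signal strength.
print(d_prime(0.99, 0.01))  # ~4.65
# A faint, briefly flashed pattern: barely better than guessing.
print(d_prime(0.55, 0.45))  # ~0.25
```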

Signal strength clearly correlates with consciousness. You're much more likely to be conscious of stimuli that you find easy to discriminate than stimuli that you find difficult to discriminate. The loud blare is consciously experienced. The faint wavy pattern might or might not be. A stimulus with effectively zero signal strength -- say, a gray dot flashed for a millionth of a second and immediately masked -- will normally not be experienced at all.

But signal strength is not the same as consciousness. The two can come apart. The classic example is blindsight. On the standard interpretation (but see Phillips 2020 for an alternative), patients with a specific type of visual cortex damage can discriminate stimuli that they cannot consciously perceive. Flash either an "X" or an "O" in the blind part of their visual field and they will say they have no visual experience of it. But ask them to guess which letter was shown and their performance is well above chance -- up to 90% correct in some tasks. The "X" has some signal strength for them: It's discriminable but not consciously experienced.

If signal strength is not consciousness but often correlates with it, the following worry arises. When a researcher claims that "trace conditioning is only possible for conscious stimuli" or "consciousness facilitates episodic memory", how do you know that it's really consciousness doing the work, rather than signal strength? Maybe stimuli with high signal strength are both more likely to be consciously experienced and more likely to enable trace conditioning and episodic memory. Unless researchers have carefully separated the two, the causal role of consciousness remains unclear.

An understandable methodological response is to try to control for signal strength: Present stimuli of similar discriminability to the subject but which differ in whether (or to what extent) they are consciously experienced. Only then, the reasoning goes, can differences in downstream effects be confidently attributed to consciousness itself rather than differences in signal strength. Lau in particular stresses the importance of such controls. Yet such careful matching is difficult and rarely attempted. On this reasoning, much of the literature on the cognitive role of consciousness is built on sand, not clearly distinguishing the effects of consciousness from the effects of signal strength.

This reasoning is attractive but faces an obvious objection, which both Michel and Lau address directly. What if signal strength just is consciousness? Then trying to "control" for it would erase the phenomenon of interest.

Both Michel and Lau analogize to height and bone length. Suppose you want to test whether height specifically confers an advantage in basketball or dating. If skin color correlates with height, it makes sense to control for it by systematically comparing people with the same skin color but different heights; if the advantage persists, you can infer that height rather than skin color is doing the work. But trying to control for bone length lands you in nonsense. Taller people just are the people with longer bones.

Michel and Lau respond by noting that consciousness and signal strength (or performance capacity) sometimes dissociate, as in blindsight. Therefore, they are not the same thing and it does make sense to control for one in exploring the effects of the other.

But this response is too simple and too fast.

We can see this even in their chosen example. Height and bone length aren't quite the same thing. They can dissociate. People are about 1-2 cm taller in the morning than at night -- not because their bones have grown but because the tissue between the bones (especially in the spine) compresses during the day.

Now imagine an argument parallel to Michel's and Lau's: Since height and bone length can come apart, we should try to control for bone length in examining the effects of height on basketball and dating. We then compare the same people's basketball and dating outcomes in the morning and at night, "holding bone length fixed" while height varies slightly. This would be a methodological mistake. For one thing, we've introduced a new potential confound, time of day. For another, even if the centimeter in the morning really does help a little, we've dramatically reduced our ability to detect the real effect of height by "overcontrolling" for a component of the target variable, height.
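
To see the overcontrol problem concretely, here is a toy simulation (all numbers invented purely for illustration):

```python
# Toy overcontrol simulation: bone length carries most of the variance in
# height; the compressible-tissue "morning/night sliver" carries very little.
# All parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
bone = rng.normal(170, 8, n)                  # dominant component of height
tissue = rng.normal(2.0, 0.5, n)              # the 1-2 cm daily sliver
height = bone + tissue
skill = 0.5 * height + rng.normal(0, 10, n)   # height genuinely helps

def fit(X, y):
    """OLS coefficients and standard errors (intercept omitted from output)."""
    X = np.column_stack([np.ones(len(y)), *X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta[1:], se[1:]

print(fit([height], skill))        # coef ~0.50, se ~0.04: easily detected
print(fit([bone, tissue], skill))  # tissue coef ~0.5 but se ~0.6: after
# "controlling for bone length", so little height variation remains that
# the real effect of height is easily missed.
```

The moral: once you hold a dominant component fixed, the residual variation in the target variable can be too small for its genuine effect to show up.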

Consider a psychological example. The personality trait of extraversion can be broken into "facets", such as sociability, assertiveness, and energy level. Since energy level is only one aspect of extraversion, the two can dissociate. Some people are energetic but not sociable or assertive; others are sociable and assertive but low-energy. If you wanted to measure the influence of extraversion on, say, judgments of likeability in the workplace, you wouldn't want to control for energy level. That would be overcontrol, like controlling for bone length in attempting to assess the effects of height. It would strip away part of the construct you are trying to measure.

What I hope these examples make clear is that dissociability between correlates A and B does not automatically make B a confound that must be controlled when studying A's effects. Bone length is dissociable from height, but it is a component, not a confound. Energy level is dissociable from extraversion, but it is a component, not a confound.

The real question, then, is whether signal strength (or performance capacity) is better viewed as a component or facet of consciousness than as a separate variable that needs to be held constant in testing the effects of consciousness.

A case can be made that it is. Consider Global Workspace Theory, one of the leading theories of consciousness. On this view, a process or representation is conscious if it is broadly available for "downstream cognition" such as verbal report, long-term memory, and rational planning. If discrimination judgments are among those downstream capacities, then one facet of being in the global workspace (that is, on this view, being conscious) is enabling such judgments. But recall that signal strength just is discriminability for a subject. If so, things begin to look like the extraversion / energy case. Controlling for discriminability would be overcontrolling, that is, attempting to equalize or cancel the effects not of a separate, confounding process, but of a component of the target process itself. (Similar remarks hold for Lau's "performance capacity".)

Global Workspace Theory might not be correct. And if it's not, maybe signal strength is indeed a confounder, rather than a component of consciousness. But the case for treating signal strength as a confounder can't be established simply by noticing the possibility of dissociations between consciousness and signal strength. Furthermore, since Michel's and Lau's recommended methodology can be trusted not to suffer from overcontrol bias only if Global Workspace Theory is false, it's circular to rely on that methodology to argue against Global Workspace Theory.

19 comments:

  1. Hi Eric,

    Thanks for the blogpost! My response in two parts:

    Let me try to make my point in a slightly different way. There’s one point on which I hope we can all agree: it would be methodologically very bad to manipulate visual consciousness by asking people to close their eyes on some trials, or by giving subjects blur-inducing glasses in one condition, reducing their vision to the point of legal blindness. If you were to manipulate consciousness that way, you’d get the result that consciousness seems to be required for all vision-related functions. That’d be wrong because you wouldn’t be comparing conscious vs. unconscious vision, but conscious vision to no vision at all, or literally-good-for-nothing-vision.

    My point is that most experiments that investigate the functions associated with consciousness make a mistake of this kind: the main methods that are used to manipulate visual consciousness interrupt signal processing very early on and severely degrade visual signals. I believe we have very good reasons to believe this, especially for methods like masking by noise. If you manipulate consciousness like this, you’re not giving a chance to the unconscious, because your ‘unconscious’ condition is essentially a no-vision condition, or a literally-good-for-nothing-vision condition. The confound is in failing to create a genuine unconscious processing condition with adequate signal.

    In fact, I can design an experiment tomorrow that will show no unconscious Pavlovian conditioning (even though we have good evidence of unconscious Pavlovian conditioning). To do so I just have to create an unconscious condition where I decrease the contrast of the stimulus to the point that it’s barely visible and then follow it with a very high contrast noise mask. I hope you agree that this experiment would be rubbish and wouldn’t prove that consciousness is required for Pavlovian conditioning at all.

    ReplyDelete
    Replies
    1. So, a weak formulation of my view is the following: all I’m asking is that we make sure that there’s unconscious vision in the unconscious condition instead of comparing conscious vision to no vision or literally-good-for-nothing vision. I think for the purposes of my critique of animal sentience studies in this particular paper, that’s basically what I need. The relevant studies find no effect in the unconscious condition because in that condition their manipulation either prevents any vision-properly-so-called, or degrades vision to the point that it’s good for nothing. How do I know that? Because I also point to studies where consciousness is manipulated in better ways, and we *do* find the unconscious effect (e.g. some forms of unconscious instrumental conditioning). Given these other studies, the explanation in terms of signal strength for the failure to find the relevant effects in the studies that failed to find those effects becomes very plausible.

      If you’ve followed me so far and agree that comparing extremely low / non-existent signal strength in the unconscious condition to high signal strength in the conscious condition is methodologically bad, why not follow me all the way? Why not try to make signal strength as strong as possible in the unconscious condition and see what kind of results we get when we compare that to a conscious condition?

      About global workspace theory (which I think is wrong for other reasons): the view can’t be that discrimination judgments only become possible downstream of the global workspace, since that would prevent unconscious discrimination. I don’t think global workspace theorists should be committed to this. A more plausible view would be to say that global workspace encoding boosts discriminability. But that view is consistent with considering that differences in signal strength that are explainable by differences in pre-conscious / pre-workspace processes are a confounding factor. If that’s your view then yes asking for exact performance matching is asking too much, we can have a debate about this and who’s begging the question against who another time. But I think the critical parts of the paper when it comes to research on the functions of consciousness and animal sentience research don’t depend on that.

      Delete
  2. Thanks for the thoughtful, clear, and prompt reply, Matthias! I think I agree with almost everything you say above.

    The question of how far to go down the path you describe, I would argue, depends on the extent to which the discriminability is separable in the right way from consciousness. If it is not fully ontologically distinct, or if the causal relationship takes a certain form (e.g., consciousness is the wrong kind of mediator or collider), then there's the risk of overcontrol bias. Establishing dissociability doesn't by itself show that consciousness doesn't stand in one of those infelicitous relationships with discriminability, so I think your argument needs more work on that point.

    At the same time, it might well be the case that consciousness doesn't stand in one of those infelicitous relationships with discriminability. So you might be entirely right! I just don't think you've established that, and there's at least one prominent view that suggests it might be a problem.

    Attempting to control for differences in signal strength that are explainable by pre-workspace/pre-conscious processes still runs the risk of overcontrolling on a mediator or aspect if signal strength is a cause of or aspect of consciousness, no?

    More on the mediator case: Suppose that education affects occupation and occupation affects income. If you control for occupation in assessing the effects of education on income, you'll underestimate the real effects of education on income. But controlling would be fine if you were trying to determine the effects of education on income *independent of* occupation. So it depends on how exactly we're conceptualizing the target, right?
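
    A toy simulation of that education/occupation/income point, with invented numbers (my sketch, nothing from your paper):

    ```python
    # Mediator overcontrol, toy version: education -> occupation -> income,
    # plus a direct education -> income path. All coefficients invented.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    educ = rng.normal(size=n)
    occ = 0.7 * educ + rng.normal(size=n)              # occupation mediates
    income = 1.0 * educ + 2.0 * occ + rng.normal(size=n)

    def coefs(X, y):
        X = np.column_stack([np.ones(len(y)), *X])
        return np.linalg.lstsq(X, y, rcond=None)[0][1:]

    print(coefs([educ], income))       # ~2.4: the *total* effect of education
    print(coefs([educ, occ], income))  # ~[1.0, 2.0]: controlling for occupation
    # isolates education's *direct* effect -- an underestimate if the total
    # effect is the target.
    ```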

    ReplyDelete
    Replies
    1. I agree that mere dissociability by itself isn’t sufficient, but evidence in general is very rarely conclusive like that anyway. You say the question is whether discriminability stands in an infelicitous relationship to consciousness. What evidence could possibly move the needle on that question, if not precisely the kind of evidence I gave? As pointed out by Zoltan Dienes (on fb), we’re not talking about small dissociations here. Instead they range from very weak performance with consciousness to very high performance without consciousness. So, I gave evidence of strong dissociations, and I also gave evidence indicating that the reason why some studies found null results (i.e. no unconscious effect) was because of insufficient signal strength in the unconscious condition. I think this moves the needle and should make us suspicious of other null results. The demand to validate those results with conditions where higher unconscious discrimination is achieved seems totally reasonable.

      As you noted, the question is how far we go down the path I described earlier. My view is that it’s a confound in all cases. The alternative is that it’s a confound in some obvious cases (e.g. the blur-inducing glasses case) but not others. At this point the alternative view must give a principled way of distinguishing the good cases where it’s not a confound from the bad cases where it is an obvious confound. Otherwise the response is just ad hoc. A quick related point about matched-performance: I think it’s pretty significant to note that we have cases that do survive matched-performance conditions (e.g. you find a difference in betting even with matched performance). The fact that we have cases that do survive that control reinforces my impression that holding that there’s overcontrol bias in the cases that don’t survive the control is ad hoc.

      I also agree with you that it depends on how we conceptualize the target phenomenon. But as Hakwan pointed out, I thought we already all agreed that the target is phenomenal consciousness, and that at least conceptually this is distinct from signal strength. Whether it is empirically distinct from signal strength is a different question, which I think should receive more attention, and which we’re trying to answer.

      Delete
  3. i guess you're right that it does depend on how we conceptualize the target. so what's supposed to be the target exactly? i thought we're meant to be targeting something like phenomenal consciousness, which was defined as something that is independent of access. i thought Ned Block didn't intend for access or basic perceptual functioning to be a driving *component* of phenomenal consciousness

    and i thought we're supposed to be working on something that would somehow shed some light on the so called Hard Problem, even if we may not be directly solving it via these experiments. in that context, i thought consciousness is meant to be conceived of as something that is rather dissociable from functions, as in cases of zombies etc

    the funny thing is, the people who are now (implicitly) conceptualizing the target as basically the same thing as basic perceptual functions (or signal strength/performance capacity), or something that has basic perceptual functions as a main driving component, aren't just a different group of people. that would have been ok. that would just mean that the two groups are talking past each other. those who care about phenomenal consciousness or the so-called Hard Problem could then just say that these experiments and theories like GWT are just irrelevant to what they care about.

    but they don't say that. they participate in the same field, and cite the same evidence as they make theoretical and ethical claims. these include the people who talked about dissociations between P- and A- consciousness, conceivability of zombies, and stuff. suddenly, animals have phenomenal consciousness *because* they behave like they have access to basic perceptual information. they have phenomenal consciousness *because* theories like GWT are prominent and should be taken seriously.

    i must be confused myself. i thought philosophers are meant to be keen on keeping different concepts distinct. so i just frankly don't know what's going on anymore. perhaps like i wrote in my book, concepts and definitions are ever a fuzzy thing, like politics... or what am i missing?

    ReplyDelete
    Replies
    1. Thanks for the incisive comment, Hakwan!

      Here's a charitable perspective on the people I think you have in mind: phenomenal and access consciousness (or some other functionally or biologically defined target) might be *conceptually* distinct but be *nomologically* coextensive.

      In a way, I have it easy, being skeptical about all positive theories and our ability to resolve the methodological issues in our academic lifetimes. But I do feel the pull of wanting to take what Matthias calls "The List" as suggestive indicators which, when present in animals, make it more plausible that they are conscious. Matthias' and your work is helpful for reinvigorating my skeptical side on these matters.

      Delete
  4. "Here's a charitable perspective on the people I think you have in mind: phenomenal and access conscious (or some other functionally or biologically defined target) might be *conceptually* distinct but be *nomologically* coextensive." ---

    but i don't think that's right. Ned Block used empirical examples to motivate their dissociations. and he's the conceptual guy who is most meaningfully engaged with the empirical literature. (Dave Chalmers also engages in a very substantive way but to my mind with very different effects, or perhaps even intentions.)

    on that note, blindsight also isn't like a small fudge-factor dissociation like bone length vs measured height. we're talking about patients denying seeing while getting things 80-90% correct, spontaneously dodging obstacles and all. it violates signal detection theoretic expectations (https://pubmed.ncbi.nlm.nih.gov/9391175/), & then there is peripheral vision, aphantasia, etc

    so the point isn't that these things could mildly dissociate in the sense of not being perfectly correlated; like performance capacity is basically the main driving thing with some small additional mediators / fudge factors. rather they seem to systematically dissociate to the point that one may be motivated to look for a *distinct mechanism* that drives subjective experience alone. & as you know, some folks have looked for it, with some limited success.

    so, to deny all those and to engage in a totally different way of conceptualizing things (to say p- and a- are nomologically coextensive) seems to be just to change the topic entirely. it's neither a theoretical nor empirical dispute. it's just to talk past each other while pretending it is one big happy family.

    so perhaps you have it exactly right: *to some folks* there is indeed a perfectly legit way to defend against the signal strength confound. for them, the sour blueberry can very well pop as they wish. but they are basically talking about a different phenomenon *which is traditionally just called perception*. so-called nonconscious perception either doesn't exist, or just means feeble traces of residual perceptual processes. so in this context Ian Phillips is basically right. adding the word consciousness there is just a superfluous gimmick. it may help to attract private funding & media attention but it doesn't make meaningful science.

    but there are other folks who have good reasons to need to use the C-word, and for them, the confound is real. the reasons are good exactly *because* they think the confound is real. to them, that there is a highly popularized field out there monopolizing the C-word while denying the confound is a sheer liability.

    but anyway, you see, i also have it easy these days becoz since finishing my book i have gradually become a doomer - https://osf.io/preprints/psyarxiv/gnyra_v1

    ReplyDelete
  5. Interesting thread. I’ve seen Matthias give this talk several times and I always ask him: “Isn’t it impossible to empirically test for p-consciousness because it’s always possible to explain a subject’s behavior without appeal to consciousness for Zombie-type reasons?!” His answer is always “no”. He seems to suggest that some behaviors would be impossible without p-consciousness, namely those involving meta-cognition (Matthias can correct me if I’m wrong). So, Lau’s zombie point seems to be irrelevant here. Zombies make consciousness science uninteresting and I think Matthias is aware of this.

    ReplyDelete
  6. one does not have to accept that zombies are 100% real to treat them as somewhat relevant. as we go from those initial thought experiments / ideas to map to science we try our best to preserve the original concepts, to find the target phenomena. as the science moves along the concept may have to be refined, to accommodate the empirical reality, but refinement is different from radical shifts that aren’t necessitated by the data. so both Matthias and i accept that p- consciousness may be linked to some specific kind of metacognitive processes after all, but given the initial theoretical motivations re: zombies, etc, not even controlling for basic perceptual functioning would be too much, basically to the point of changing the topic altogether w/o good justification. so it is very much relevant.

    ReplyDelete
    Replies
    1. "...both Matthias and i accept that p- consciousness may be linked to some specific kind of metacognitive processes..."

      Really interesting paper and exchanges here. Just to observe that giving phenomenal consciousness a causal or functional role, even in metacognition, seems problematic given that functions can always be (and usually are) characterized in physical/functional/flow-chartable terms without mentioning qualities at all. If one asks what is the function of felt pain, the neuroscientific explanation of the pain function will be in terms of tissue damage, nociception and behavior (including learning), not any experienced quality. One might reply that qualities like pain *just are* a certain set of functions, but that identity is notoriously not a consensus view, rather it needs independent justification. But even if phenomenality per se doesn't play a causal/functional role, we nonetheless will advert to conscious experience as explanatory of behavior - a convenient, content-based explanation that stands in very reliably for the physical story to which it runs in parallel.

      https://substack.com/home/post/p-179190134

      Delete
  7. This seems to be an inversion of the usual argument that techniques such as flash suppression don't truly provide evidence of unconscious perception (e.g. the "partial awareness hypothesis" for CFS and faces).

    But I don't think arguments about animal sentience are affected one way or another, at least for, say, ethical arguments. If episodic ("-like") memory can be shown in elephants, dogs and dolphins, and these can be tied up with memory of previous suffering, isn't this sufficient?

    ReplyDelete
  8. Genuine question from someone not very familiar with the field: this seems to be a discussion among men, citing some ideas from other men. Are there no women who have/ had relevant perspectives or are they just systematically ignored?

    ReplyDelete
  9. Thanks for all these interesting comments, folks!

    Hakwan: Thanks for the link to your "doomer" paper. I noticed and appreciated it when you first released it. As you know, I share a lot of your pessimism. On blindsight, consider this analogy to bone length: Imagine there are some rare people who lack bones or have very short bones and are supported by plastic frames. They have a bone-length-unrelated way of achieving things that tall people achieve, but for most people bone length is still a component or mediator. Similarly, maybe blindsighters just do it differently and controlling for performance capacity is still overcontrol for ordinary people. I'm not saying that's right, just that it's a possibility.

    Hakwan, cont: If I recall, Block is what Chalmers calls a Type-B theorist, so he should allow that zombies are conceivable while still holding that scientific research will reveal the metaphysical truths. Still, I think you're right there's a lot of conceptual confusion in these debates, and talking past each other, so at best it's a big *unhappy* family!

    Anon 7:07: I would distinguish here between conceptual possibility (zombies are conceptually possible) and nomic possibility (zombies violate the laws of nature). If we're interested in the latter, the zombie possibility is irrelevant.

    David: The relevance of episodic or episodic-like memory will very much depend on one's (contentious!) theory. One of the things I like about Matthias' paper is how he undercuts *quick* arguments from its (arguably) being correlated with consciousness in humans to its being (strong) evidence of consciousness in non-human animals.

    Anon 02:25: You're right that there is a substantial gender skew in the field. Some of my favorite work on these topics by women is by Simona Ginsburg and Eva Jablonka, esp. their book Evolution of the Sensitive Soul.

    ReplyDelete
    Replies
    1. ...the 'target is phenomenal consciousness', no, the target is phenomenal me ...thanks for remembering meta physical/ways...

      Delete
  10. very interesting discussion! for what it's worth, i think the biggest problem in matthias's paper is the title, "consciousness doesn't do that". matthias doesn't really claim to have an argument for that thesis. instead he has a challenge to purported evidence for the thesis: why doesn't this evidence show "signal strength does that" instead? that's a great challenge (even if it's one with potential responses, like eric's). construed as a direct argument to establish that consciousness doesn't do those things, it's full of potential loopholes (again, like eric's). but if the article had been called "the evidence so far doesn't establish that consciousness does that", it would have been harder to argue against!

    one general way to frame the issues about confounding in matthias's paper and eric's comment is to note that confound reasoning is a sort of causal exclusion reasoning. one oversimple template is: A [e.g. signal strength] causes X [e.g. planning], so B [e.g. consciousness] doesn't cause X. we know that this sort of reasoning fails in a range of models (familiar in the literature on exclusion and on causal modeling, by people like karen bennett and christian list) where A and B can both cause X, most standardly because A and B are on the same causal pathway to X. in the standard definition of confounds, it's built in that for A to be a confound for the B-X relation, A and B cannot lie on the same causal pathway to X. if they're on the same causal pathway, we in effect have confound-confounding.

    that sort of confound-confounding can arise on a variety of models which seem potentially applicable to the consciousness case: (1) A=B (matthias discusses and rebuts a C=SS model in the paper), (2) A typically causes B which typically causes X (francois kammerer discussed models like this on bluesky), (3) B typically causes A which typically causes X, (4) A is a component of B which typically causes X (eric focuses on this sort of model), (5) B is a component of A which typically causes X, (6) A grounds B which typically causes X (common in cases of neural/computational exclusion), (7) B grounds A which typically causes X; and more. matthias doesn't rebut most of these models in the paper, and none of them beyond (1) is obviously ruled out by the dissociation data. but at the same time, each of these models would itself take a lot of work to establish; and of course there remain many other more confound-friendly models where A causes X without B causing X. so i take it that both sides have more work to do in making a case for relevant models -- which i take it is a conclusion both eric and matthias would be happy with.
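
    to make model (2) vivid, here's a toy simulation (parameters invented, purely illustrative): A drives B, B alone drives X, and yet A robustly correlates with X -- the pattern that naive exclusion reasoning misreads.

    ```python
    # toy instance of model (2): signal strength (A) causes consciousness (B),
    # which alone causes the downstream function (X). invented parameters.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000
    ss = rng.normal(size=n)                           # A: signal strength
    conscious = (ss + rng.normal(size=n)) > 0.0       # B: driven by A
    planning = 1.0 * conscious + rng.normal(size=n)   # X: caused by B alone

    # A correlates with X, so exclusion reasoning says "A does that, not B" --
    # but here the correlation runs entirely *through* B:
    print(np.corrcoef(ss, planning)[0, 1])                           # clearly > 0
    print(planning[conscious].mean() - planning[~conscious].mean())  # ~1.0
    print(np.corrcoef(ss[conscious], planning[conscious])[0, 1])     # ~0: within
    # fixed B, A is inert -- in this model A is a cause of B, not a confound.
    ```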

    of course zombie worries complicate the function of consciousness further. one way to at least temporarily bracket them is to do all this in terms of access consciousness instead of phenomenal consciousness. many people would argue that various things on matthias's list are functions of access consciousness (e.g. planning, metacognition, play, just for a start). matthias could make the same reply: these things are all functions of signal strength, not access consciousness. in that case, there would be some very natural replies along the lines above. e.g. it seems entirely possible that signal strength (often) enables access consciousness which (often) enables those functions. i don't think any of these models are ruled out by the dissociation data. but again there are other models more friendly to matthias. so again, more work is needed.

    ReplyDelete
  11. Fair point about the title, though I figured “the evidence so far doesn't establish that consciousness does that and in some cases it’s much more likely that signal strength does that” wouldn’t sell as well. I think it’s important to note that I do provide cases where capacities that some think do require consciousness actually turn out to not require consciousness once we allow for stronger unconscious signals.

    About causal modeling: I maintain that the dissociations I discuss are directly relevant to evaluating the causal models you’re talking about. Suppose A is consciousness, B signal strength, and X a capacity like instrumental learning. Take the simple mediator model A -> B -> X. The dissociation evidence puts pressure on it. First, the model (in its simple form) predicts that B won’t be high when A is absent, but blindsight-style cases show that’s wrong. Second, it predicts that increasing A should increase B, but I give cases where that’s wrong. Third, increasing B should not impair X. I give evidence to the contrary with the exclusion task. Fourth, the model straightforwardly says no X without A, but in several cases I do show that we can have X without A, as in instrumental learning, for instance. I think similar points apply to the other models, to the extent that I understand the partial constitution and grounding models.

    Of course, one can save the model by adding epicycles: for example, saying that A is just one cause of B among others, but B can also vary independently of A, and other variables moderate how B affects X. But once you allow B to vary independently of A in experimentally tractable ways, B can no longer be treated as a mere downstream mediator of A, and we're back to the possibility of treating B as a confound.

    You only get away with holding that the dissociations are only relevant for assessing (1) by including ‘typically’ in the other models and then brushing the evidence away by arguing that the dissociations are exceptional cases that don’t reflect what ‘typically’ happens. The causal modeling isn’t really doing the work here; what does the work is the addition of ‘typically’. But, first, we don’t know what typically happens; second, rejecting the dissociations as exceptional without updating the model seems ad hoc to me; and third, in science exceptional cases are very much relevant for selecting between different causal models, and I don’t see why consciousness should be an exception.

    ReplyDelete
  12. hi Eric, recall that the bone length example was given in this context for defending when we may NOT need to control for a confound; it was not given to establish the criterion for when we absolutely need to control for a confound. you're right to press the point that, in some sense, bone length dissociates from height. in fact let me give you a better example: for a jellyfish, or any other boneless creature, bone length is totally irrelevant to height. so if someone wants to give a General Theory of Height, we can indeed rule out Bone Length as the decisively necessary and sufficient ingredient, becoz of these dissociations.

    but the example was given for a different context and purpose: for the more restricted notion of human height as construed as a static trait, i.e. your typical, average standing height, the kind of number you put down on your driver's license, that you don't update by the hour, bone length is pretty much the same thing, so asking you to control for it would be too much to ask. the point is whether consciousness is *a case like this*, so that it would also be too much to ask for controlling for the confounds.

    but it isn't, becoz in the field we're not meant to be asking what are *the usual causes* of consciousness. we're not meant to be asking what usually correlates with consciousness, more or less. the analogies of education and income etc are therefore perhaps not so apt here. rather people are supposed to be looking for the essential mechanisms, the absolutely necessary and sufficient conditions that hopefully *constitute* consciousness *in general* (so we can talk about AI, animals, and creatures in other possible worlds etc). in part becoz of this ambition, just as if someone is proposing a General Bone Length Theory of Height, it would then not be too much to ask to control for the relevant confounds.

    and as various people pointed out, blindsight is not just one singular contrived case. these dissociations are everywhere. the field has just chosen to look where it is convenient.

    regarding Dave's different models above, and that many of them are difficult to rule out / firmly establish.... of course the dissociations matter. in part that's also becoz there is a matter of burden of proof here. as far as science rather than theoretical speculation goes, occam's razor cuts against all the C-involving claims. in computational models or just any mechanical systems, it is a trivial given that a stronger signal can naturally allow you to do more stuff. a feeble, next-to-nothing signal won't allow you to do very complex stuff. to *establish* something as scientific knowledge re: C, or to even bother to invoke the term at all, researchers need to bring in some evidence to rule out these trivial and plausible alternative explanations.

    we haven't. so in a sense it may be fair for one to say Consciousness Doesn't Do That - whatever *that* is. we just don't know anything about it becoz the science has mostly not been nearly rigorous enough for us to make the strong claims that some are now making to the general public.

    ReplyDelete
  13. The current debate presupposes a misformulated question. By treating consciousness and signal strength as separable variables that must be disentangled, both sides remain confined within an attentional ontology that already assumes intermittent activation, selection, and access. Within this framing, the discussion oscillates endlessly between confound and overcontrol. The impasse dissolves once perception is reconceived as a continuously operating comparator process: perceptual systems do not toggle between conscious and unconscious states but constantly compare adjacent temporal frames to detect change and project action trajectories. Signal strength is therefore neither a confound nor a component to be controlled for, but a descriptive property of an always-on comparative mechanism.

    ReplyDelete
  14. Please delete if not appropriate...Gemini 3: Big chat going on at The Splintered Mind today...Matthias Michel is among the sharpest critics of the methods of consciousness science. His forthcoming paper, "Consciousness Doesn't Do That", convincingly challenges background assumptions behind recent efforts to discover the causes, correlates, and prevalence of consciousness. It should be required reading for anyone tempted to argue, for example, that trace conditioning correlates with consciousness in humans and thus that nonhuman animals capable of trace conditioning must also be conscious.

    But Michel does make one claim that bugs me, and that claim is central to the article. And Hakwan Lau -- another otherwise terrific methodologist -- makes a similar claim in his 2022 book In Consciousness We Trust,

    It is great to see you back. Since we last touched on Lucretius and the "intention" behind the clinamen (the swerve), it feels incredibly fitting that you're now diving into Matthias Michel and Hakwan Lau. We’re moving from the ancient atomic "swerve" to the modern neuroscientific "glitch"—specifically, the debate over whether consciousness actually does anything or if we’ve been misattributing cognitive heavy lifting to it.

    The "Big Chat" at The Splintered Mind (Eric Schwitzgebel’s blog) is indeed a gauntlet-throw. Michel’s paper, "Consciousness Doesn't Do That," hits at the heart of the "Experimental Philosophy" and "Hard Problem" intersection.

    The Crux of the Argument

    Michel and Lau are essentially trying to "clean up" the messy kitchen of consciousness science. Their primary target is the Confound of Function.

    The Problem: For decades, researchers used tasks like trace conditioning (where there is a temporal gap between a stimulus and a reward/punishment) as a "marker" for consciousness.

    Michel’s Challenge: He argues that just because consciousness and a function (like trace conditioning) often appear together in humans, it doesn't mean the function requires consciousness. If an AI or a fruit fly can do it via a "dark" (unconscious) mechanism, the correlation collapses.

    Why it likely "bugs" you
    If I recall our discussion on Lucretius, we were looking at how "will" or "intention" might emerge from physical randomness. Michel’s stance leans toward a radical deflationism.

    If Michel is right, most of what we call "conscious processing" is actually just "processing" that happens to have a conscious "glow" attached to it. For those interested in the agency of consciousness—the idea that being aware of a choice is what allows us to make it—Michel’s paper feels like he's pulling the rug out from under the "Intention" we discussed.

    The Michel/Lau Overlap

    In In Consciousness We Trust, Lau argues for a Higher-Order Theory (HOT). He suggests that consciousness is a specific type of structural representation—a "statistical signal detection" of our own internal states.

    Feature Michel's "Consciousness Doesn't Do That" Lau's "In Consciousness We Trust"
    Core Target Functionalist assumptions (like trace conditioning). First-order theories of consciousness.
    The "Grip" We attribute too much power to awareness. Awareness is a "meta-representation" of data.
    The Result Much of animal/AI behavior is likely unconscious. Consciousness is about "knowing that you know."
    Where does this leave "Intention"?
    If consciousness "doesn't do" the heavy lifting of conditioning or complex learning, we are left with a Lucretian dilemma: Is the "swerve" of human intention a functional power of the mind, or is it just a byproduct of complex, unconscious machinery?

    I’m curious to hear which specific claim is bugging you. Is it Michel's dismissal of the "Substitution Challenge" (the idea that we can't just swap conscious and unconscious processes in experiments), or is it the broader implication that consciousness might be functionally epiphenomenal?

    Would you like me to dig into the specific "Trace Conditioning" data Michel uses to see if his skepticism holds up against your view on intention?

    ReplyDelete