I aim to defend the existence of "borderline cases" of consciousness, cases in which it's neither determinately true nor determinately false that experience is present, but rather things stand somewhere in between.
The main objection against the existence of such cases is that they seem inconceivable: What would it be like to be in such a state, for example? As soon as you try to imagine what it's like, you seem to be imagining some experience or other -- and thus not imagining a genuinely indeterminate case. A couple of weeks ago on this blog, I argued that this apparent inconceivability is the result of an illegitimately paradoxical demand: the demand that we imagine or remember the determinate experiential qualities of something that does not determinately have any experiential qualities.
But defeating that objection against borderline cases of consciousness does not yet, of course, constitute any positive reason to think that borderline cases exist. I now have a new full-length draft paper on that topic here. I'd be interested to hear thoughts and concerns about that paper, if you have the time and interest.
For this week's blog post, I will adapt a piece of that paper that lays out the main positive argument.
[Escher's Day and Night (1938); image source]
To set up the main argument, first consider this quadrilemma concerning animal consciousness:
(1.) Human exceptionalism. Only human beings are determinately conscious.
(2.) Panpsychism. Everything is determinately conscious.
(3.) Saltation. There is a sudden jump between determinately nonconscious and determinately conscious animals, with no indeterminate, in-between cases.
(4.) Indeterminacy. Some animals are neither determinately nonconscious nor determinately conscious, but rather in the indeterminate gray zone between, in much the same way a color might be indeterminately in the zone between blue and green rather than being determinately either color.
For the sake of today's post, I'll assume that you reject both panpsychism and human exceptionalism. The question, then, is between saltation and indeterminacy.
Contra Saltation, Part One: Consciousness Is a Categorical Property with (Probably) a Graded Basis
Consider some standard vague-boundaried properties: baldness, greenness, and extraversion, for example. Each is a categorical property with a graded basis. A person is either determinately bald, determinately non-bald, or in the gray area between. In that sense, baldness is categorical. However, the basis or grounds of baldness is graded: number of hairs and maybe how long, thick, and robust those hairs are. If you have enough hair, you're not bald, but there's no one best place to draw the categorical line. Similarly, greenness and extraversion are categorical properties with graded bases that defy sharp-edged division.
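To make the structure explicit, here is a minimal sketch (in Python, with purely hypothetical cutoffs of my own choosing) of a categorical property with a graded basis: the verdict is three-valued and categorical, while the basis it rests on, hair count, varies smoothly, and the placement of the cutoffs is arbitrary in just the way the vagueness of "bald" suggests.

```python
from enum import Enum

class Verdict(Enum):
    DETERMINATELY_BALD = "determinately bald"
    BORDERLINE = "neither determinately bald nor determinately not bald"
    DETERMINATELY_NOT_BALD = "determinately not bald"

# Hypothetical cutoffs, for illustration only -- nothing privileges
# these exact numbers, which is the point about vague predicates.
LOWER, UPPER = 10_000, 50_000

def bald_verdict(hair_count: int) -> Verdict:
    """A three-valued categorical verdict grounded in a graded basis."""
    if hair_count < LOWER:
        return Verdict.DETERMINATELY_BALD
    if hair_count > UPPER:
        return Verdict.DETERMINATELY_NOT_BALD
    return Verdict.BORDERLINE  # the gray zone between the poles

for n in (500, 30_000, 120_000):
    print(f"{n} hairs -> {bald_verdict(n).value}")
```

Nothing about the underlying quantity forces LOWER and UPPER to sit where they do; any nearby pair of cutoffs would serve equally well, which is why the middle band is a genuine gray zone rather than a sharp line.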
Consider, in contrast, some non-vague properties, such as an electron's being in the ground orbital or not, or a number's being exactly equal to four or not. Being in the ground orbital is a categorical property without a graded basis. That's the "quantum" insight in quantum theory. Bracketing cases of superposition, the electron is either in this orbital, or that one, or that other one, discretely. There's discontinuity as it jumps, rather than gradations of "close enough". Similarly, although the real numbers are continuous, a three followed by any finite number of nines is discretely different from exactly four. Being approximately four has a graded basis, but being exactly four is sharp-edged.
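To spell out the arithmetic behind that last contrast: for any finite number of nines, the value falls determinately short of four, whereas the infinitely repeating decimal is exactly four; there is no borderline zone.

```latex
\underbrace{3.9\cdots9}_{n\ \text{nines}} \;=\; 4 - 10^{-n} \;<\; 4
\quad \text{for every finite } n,
\qquad \text{while} \qquad 3.\overline{9} \;=\; 4.
```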
Most naturalistic theories of consciousness give consciousness a graded basis. Consider broadcast theories, like Dennett's "fame in the brain" theory (similarly Tye 2000; Prinz 2012). On such views, a cognitive state is conscious if it is sufficiently "famous" in the brain -- that is, if its outputs are sufficiently well-known or available to other cognitive processes, such as working memory, speech production, or long-term planning. Fame, of course, admits of degrees. How much fame is necessary for consciousness? And in what respects, to what systems, for what duration? There's no theoretical support for positing a sharp, categorical line such that consciousness is determinately absent until there is exactly this much fame in exactly these systems (see Dennett 1998, p. 349; Tye 2000, pp. 180-181).
Global Workspace Theories (Baars 1988; Dehaene 2014) similarly treat consciousness as a matter of information sharing and availability across the brain. This also appears to be a matter of degree. Even if a process that crosses a certain threshold typically becomes very widely available quite quickly, in a manner suggestive of a phase transition, measured responses and brain activity are sometimes intermediate between standard "conscious" and "nonconscious" patterns. Looking at non-human cases, the graded nature of Global Workspace theories is even clearer. Even entities as neurally decentralized as jellyfish and snails employ neural signals to coordinate whole-body motions. Is that "workspace" enough for consciousness? Artificial systems, too, could presumably be designed with various degrees of centralization and information sharing among their subsystems. Again, there's no reason to expect a bright line.
Or consider a very different class of theories, which treat animals as conscious if they have the right kinds of general cognitive capacities, such as "universal associative learning", trace conditioning, or the ability to match opportunities with needs using a central motion-stabilized body-world interface organized around a sensorimotor ego-center. These too are capacities that come in degrees. How flexible, exactly, must the learning systems be? How long must a memory trace be capable of enduring in a conditioning task, in what modalities, under what conditions? How stable must the body-world interface be, and how effective in helping match opportunities with needs? Once again, the categorical property of conscious versus nonconscious rests atop what appears to be a smooth gradation of degrees, varying both within and between species, as well as across evolutionary history and individual development.
Similarly, "higher-order" cognitive processes, self-representation, attention, recurrent feedback networks, even just having something worth calling a "brain" -- all of these candidate grounds of consciousness are either graded properties or are categorical properties (like having a brain) that are in turn grounded in graded properties with borderline cases. Different species have these properties to different degrees, as do different individuals within species, as do different stages of individuals during development. Look from one naturalistic theory to the next -- each grounds consciousness in something graded. Probably some such naturalistic theory is true. Otherwise, we are very much farther from a science of consciousness than even most pessimists are inclined to hope. On such views, an entity is conscious if it has enough of property X, where X depends on which theory is correct, and where "enough" is a vague matter. There are few truly sharp borders in nature.
I see two ways to resist this conclusion, which I will call the Phase Transition View and the Luminous Penny View.
Contra Saltation, Part Two: Against the Phase Transition View
Water cools and cools, not changing much, then suddenly it solidifies into ice. The fatigued wooden beam takes more and more weight, bending just a bit more with each kilogram, then suddenly it snaps and drops its load. On the Phase Transition View, consciousness is like that. The basis of consciousness might admit of degrees, but still there's a sharp and sudden transition between nonconscious and conscious states. When water is at 0.1 °C, it's just ordinary liquid water. At 0.0 °C, something very different happens. When the Global Workspace (say) is size X-1, sure, there's a functional workspace where information is shared among subsystems, there's unified behavior of a sort, but no consciousness. When it hits X -- when there's that one last crucial neural connection, perhaps -- bam! Suddenly everything is different. The bright line has been crossed. There's a phase transition. The water freezes, the beam snaps, consciousness illuminates the mind.
I'll present a caveat, a dilemma, and a clarification.
The caveat is: Of course the water doesn't instantly become ice. The beam doesn't instantly snap. If you zoom in close enough, there will be intermediate states. The same is likely true for the bases of consciousness on naturalistic views of the sort discussed above, unless those bases rely on genuine quantum-level discontinuities. Someone committed to the impossibility of borderline cases of consciousness even in principle, even for an instant, as a matter of metaphysical necessity, ought to pause here. If the phase transition from nonconscious to conscious needs to be truly instantaneous, without a millisecond of in-betweenness, then it cannot align neatly with any ordinary, non-quantum, functional or neurophysiological basis. It will need, somehow, to be sharper-bordered than the natural properties that ground it.
The dilemma is: The Phase Transition View is either empirically unwarranted or it renders consciousness virtually epiphenomenal.
When water becomes ice, not only does it change from liquid to solid, but many of its other properties change. You can cut a block out of it. You can rest a nickel on it. You can bruise your toe when you drop it. When a wooden beam breaks, it emits a loud crack, the load crashes down, and you can now wiggle one end of the beam without wiggling the other. Phase transitions like this are notable because many properties change suddenly and in synchrony. But this does not appear always to happen with consciousness. That precipitates the dilemma.
There are phase transitions in the human brain, of course. One is the transition from sleeping to waking. Much changes quickly when you awaken. You open your eyes and gather more detail from the environment. Your EEG patterns change. You lay down long-term memories better. You start to recall plans from the previous day. However, this phase transition is not the phase transition between nonconscious and conscious, or at least not as a general matter, since you often have experiences in your sleep. Although people sometimes say they are "unconscious" when they are dreaming, that's not the sense of consciousness at issue here, since dreaming is an experiential state. There's something it's like to dream. Perhaps there is a phase transition between REM sleep, associated with longer, narratively complex dreams, and non-REM sleep. But that probably isn't the division between conscious and nonconscious either, since people often also report dream experiences during non-REM sleep. Similarly, the difference between being under general anesthesia and being in an ordinary waking state doesn't appear to map neatly onto a sharp conscious/nonconscious distinction, since people can apparently sometimes be conscious under general anesthesia, and there appear to be a variety of intermediate states and dissociable networks that don't change instantly and in synchrony, even if there are also often rapid phase transitions.
While one could speculate that all of the subphases and substates of sleep and anesthesia divide sharply into determinately conscious and determinately nonconscious, the empirical evidence does not provide positive support for such a view. The Phase Transition View, to the extent it models itself on water freezing and beams breaking, is thus empirically unsupported in the human case. Sometimes there are sudden phase transitions in the brain. However, the balance of evidence does not suggest that falling asleep or waking, starting to dream or ceasing to dream, falling into anesthesia or rising out of it, is always a sharp transition between conscious and nonconscious, where a wide range of cognitive and neurophysiological properties change suddenly and in synchrony. The Phase Transition View, if intended as a defense of saltation, is committed to a negative existential generalization: There can be no borderline cases of consciousness. This is a very strong claim, which fits at best uneasily with the empirical data.
Let me emphasize that last point, by way of clarification. The Phase Transition View, as articulated here with respect to the question of whether borderline consciousness is possible at all -- that is, whether borderline consciousness ever exists -- is much bolder than any empirical claim that transitions from nonconscious to conscious states are typically phase-like. The argument here in no way conflicts with empirical claims by, for example, Lee et al. (2011) and Dehaene (2014) that phase transitions are typical and important when a person or cognitive process transitions from nonconscious to conscious.
The Phase Transition View looks empirically even weaker when we consider human development and non-human animals. It could have been the case that, looking across the animal kingdom, we saw something like a "phase transition" between animals with and without consciousness: these animals over here have the markers of consciousness and a wide range of corresponding capacities, and those animals over there do not, with no animals in the middle. Instead, nonhuman animals display something close to a continuum of capacities. Similarly, in human development we could have seen evidence for a moment when the lights turn on, so to speak -- when consciousness arrives in the fetus or the infant and suddenly everything is visibly different. But there is no evidence of such a saltation.
That's the first horn of the dilemma for the Phase Transition View: Accept that the sharp transition between nonconscious and conscious should be accompanied by the dramatic and sudden change of many other properties, then face the empirical evidence that the conscious/nonconscious border does not always involve a sharp, synchronous, wide-ranging transition. The Phase Transition View can escape by retreating to the second horn of the dilemma, according to which consciousness is cognitively, behaviorally, and neurophysiologically unimportant. On second-horn Phase Transition thinking, although consciousness always transitions sharply and dramatically, nothing else need change much. The lights turn on, but the brain need hardly change at all. The lights turn on, but there need be no correspondingly dramatic change in memory, or attention, or self-knowledge, or action planning, or sensory integration, or.... All of the latter still change slowly or asynchronously, in accord with the empirical evidence.
This view is unattractive for at least three reasons. First, it dissociates consciousness from its naturalistic bases. We began by thinking that consciousness is information sharing or self-representation or whatever, but now we are committed to saying that consciousness can change radically in a near-instant, while information sharing or self-representation or whatever hardly changes at all. Second, it dissociates consciousness from the evidence for consciousness. The evidence for consciousness is, presumably, performance on introspective or other cognitive tasks, or neurophysiological conditions associated with introspective reports and cognitive performance; but now we are postulating big changes in consciousness that elude such methods. Third, most readers, I assume, think that consciousness is important, not just intrinsically but also for its effects on what you do and how you think. But now consciousness seems not to matter so much.
The Phase Transition View postulates a sharp border, like the change from liquid to solid, where consciousness always changes suddenly, with no borderline cases. It's this big change that precipitates the dilemma: either the Phase Transition advocate should expect there always to be sudden, synchronous cognitive and neurophysiological changes (in conflict with the most natural reading of the empirical evidence) or they should not expect such changes (making consciousness approximately epiphenomenal).
The saltationist can attempt to escape these objections by jettisoning the idea that the sharp border involves a big change in consciousness. It might instead involve the discrete appearance of a tiny smidgen of consciousness. This is the Luminous Penny View.
Contra Saltation, Part Three: Against the Luminous Penny View
Being conscious might be like having money. You might have a little money, or you might have a lot of money, but having any money at all is discretely different from having not a single cent. [Borderline cases of money are probably possible, but disregard that for the sake of the example.] Maybe a sea anemone has just a tiny bit of consciousness, a wee flicker of experience -- at one moment a barely felt impulse to withdraw from something noxious, at another a general sensation of the current sweeping from right to left. Maybe that's $1.50 of consciousness. You, in contrast, might be a consciousness millionaire, with richly detailed consciousness in several modalities at once. However, both you and the anemone, on this view, are discretely different from an electron or a stone, entirely devoid of consciousness. Charles Siewert imagines the visual field slowly collapsing. It shrinks and shrinks until nothing remains but a tiny gray dot in the center. Finally, the dot winks out. In this way, there might be a quantitative difference between lots of visual consciousness and a minimum of it, and then a discontinuous qualitative difference between the minimum possible visual experience and none at all.
On the Luminous Penny View, there is a saltation from nonconscious to conscious in the sense that there are no in-between states in which consciousness is neither determinately present nor determinately absent. Yet the saltation is to such an impoverished state of consciousness that it is almost empirically indistinguishable from lacking consciousness. Analogously, in purchasing power, having a single penny is almost empirically indistinguishable from complete bankruptcy. Still, that pennysworth of consciousness is the difference between the "lights being on", so to speak, and the lights being off. It is a luminous penny.
The view escapes the empirical concerns that face the Phase Transition View, since we ought no longer to expect big empirical consequences from the sudden transition from nonconscious to conscious. However, the Luminous Penny View faces a challenge in locating the lower bound of consciousness, both for states and for animals. Start with animals. What kind of animal would have only a pennysworth of consciousness? A lizard, maybe? That seems an odd view. Lizards have fairly complex visual capacities. If they are visually conscious at all, it seems natural to suppose that their visual consciousness would approximately match their visual capacities -- or at least that there would be some visual complexity, more than the minimum possible, more than Siewert's tiny gray dot. It's equally odd to suppose that a lizard would be conscious without having visual consciousness. What would its experience be? A bare minimal striving, even simpler than the states imaginatively attributed to the anemone a few paragraphs back? A mere thought of "here, now"?
More natural is to suppose that if a lizard is determinately conscious, it has more than the most minimal speck of consciousness. To find the minimal case, we must then look toward simpler organisms. How about ants? Snails? The argument repeats: These entities have more than minimal sensory capacities, so if they are conscious it’s reasonable to suppose that they have sensory experience with some detail, more than a pennysworth. Reasoning of this sort leads David Chalmers to a panpsychist conclusion: The simplest possible consciousness requires the simplest possible sensory system, such as the simple too-cold/okay of a thermostat.
The Luminous Penny View thus faces its own dilemma: Either slide far down the scale of complexity to a position nearly panpsychist or postulate the existence of some middle-complexity organism that possesses a single dot of minimal consciousness despite having a wealth of sensory sensitivity.
Perhaps the problem is in the initial move of quantifying consciousness, that is, in the commitment to saying that complex experiences somehow involve "more" consciousness than simple experiences? Maybe! But if you drop that assumption, you drop the luminous penny solution to the problem of saltation.
State transitions in adult humans raise a related worry. We have plausibly nonconscious states on one side (perhaps dreamless sleep), indisputably conscious states on the other side (normal waking states), and complex transitional states between them that lack the kind of simple structure one might expect to produce exactly a determinate pennysworth of consciousness and no more.
If consciousness requires sophisticated self-representational capacity (as, for example, on "higher order" views), lizard or garden snail consciousness is presumably out of the question. But what kind of animal, in what kind of state, would have exactly one self-representation of maximally simple content? (Only always "I exist" and nothing more?) Self-representational views fit much better with either phase transition views (if phase transition views could be empirically supported) or with gradualist views that allow for periods of indeterminacy as self-representational capacities slowly take shape and, to quote Wittgenstein, "light dawns gradually over the whole" (Wittgenstein 1951/1969, §141).
If you’re looking for a penny, ask a panpsychist (or a near cousin of a panpsychist, such as an Integrated Information Theorist). Maximally simple systems are the appropriate hunting grounds for maximally simple consciousness, if such a thing as maximally simple consciousness exists at all. From something as large, complicated, and fuzzy-bordered as brain processes, we ought to expect either large, sudden phase transitions or the gradual fade-in of something much richer than a penny.
Full manuscript:
Comments:
Based on your excerpt here, all looks good to me Eric. But then I'm the "consciousness is in the eye of the beholder" and "like us" guy. :-)
I like the "luminous penny" analogy! It reminds me of something I pondered a few years ago, when looking at arguments for sensory consciousness being about images. When does an organism have a visual image? Most of us would say not when all that's present is a singled photoreceptive light sensor cell.
But then what if we have two, so that now light direction can be inferred? What about when we have a dozen? It wouldn't be an image by our standards with our 100 million receptors, but to an early Cambrian fish, it's taking in far more information than the worm with the single photoreceptor. There's no sharp boundary where before we had a collection of pixels (as well as some other visual field "flags" for things like motion), and now an "image."
Nature doesn't care about our little categories, like "life", "species", "brain", "planet", or many others like "consciousness". All of these have borderline cases that will frustrate anyone insisting on categorical absolutes.
Mike
Hi Professor, just as brain surgeons stimulate different sections of the brain to see what they affect, so we could take a person who is conscious and perform different permutations on him or her, to see what causes consciousness. This is practically difficult, but maybe conceptually possible. Perhaps we could rule some things out and even see if all parts of our brain and body have consciousness.
I believe there is a nice confusion here between consciousness, the contents of consciousness, self-awareness, and memory. Consciousness (or conscious experience) is binary: on/off. There are no intermediate states. Now, once consciousness is on, there is a huge range of potential conscious states, from high-level mystic states to blurred states with no solid reference, and many intermediate cases. Then of course these states can be remembered or not.
I think we could restate the luminous penny argument as something like this: there can only be a luminous penny if consciousness is in fact one single thing. I'm not sure it has to be only sensory data - my preferred idea for that kind of definition would be second-order thinking, i.e. thinking about your own thoughts, which would position it perhaps in a more reasonable position on the posited scale - but it definitely has to be one single thing.
And consciousness as we use the term doesn't seem to be one single thing. Apart from anything else, there is at least the material form of consciousness (the neurons or whatever) and the sensory experience. However you look at consciousness in the way the word is used now, it doesn't seem to be one thing. That's a bit handwavy, but I think it could push a luminous penny theorist to either pony up and name the thing or admit that it's more complex.
Some of us are not philosophers but enjoy your openness...
Wouldn't it be easier to understand: between determinately true and determinately false is the meaning of in-between...
You're not defending true or false, you're defending in-between...
That in-between might mean allowance instead of determinance, like a stance in movement...
I didn’t notice anything from the paper that seems to overcome the objections that I provided for the original post, so rather than reiterate I’ll just link to the first of my five comments.
Do I have anything else to add as well? It seems to me that sensibly talking about something that today is as practically amorphous as “consciousness”, should be extra problematic to say the least (and even given the professor’s “innocent” conception, which I consider extremely effective). But let’s try beginning with a situation that’s as discrete as we can manage and then theorize a variety of “consciousness” that might function similarly.
Consider a lightbulb resistor in an electrical circuit. A greater difference in electric potential across the resistor will induce more electrons to flow through it and thus produce photons, which exist in the form of electromagnetic radiation. So in a sense we can say that the circuit is "off" if no photons are produced, though if even one happens to be emitted then it's "on" in that specific regard. Thus no grey areas here.
Note that in a given situation we’ll try to observe whether or not any photons happen to be emitted, and so there won’t be a deductive truth (such as the properties of a triangle once defined), but rather an inductive assessment. And indeed, the metaphysical premise of monism mandates that there can’t ultimately be any discrete states of “on” or “off” in nature given perpetual causal interdependence — here it’s all just one continuous system. Nevertheless in the sense that it’s useful for us to inductively observe the “on” or “off” status of an electrical circuit which potentially produces photons, can the professor’s conception of consciousness also work discretely? I think so.
Johnjoe McFadden theorizes that phenomenal experience does indeed exist in the form of some variety of electromagnetic radiation, which is to say in terms of photons whose existence may at least inductively be assessed in terms of "yes" and "no". So essentially everything phenomenal that constitutes us is thought to exist in the form of an EM field that's produced by means of certain synchronous brain-neuron firing. Furthermore, unlike the vast majority of consciousness theories, this one happens to be falsifiable. In fact, to practically test his theory I propose that we put transmitters in a volunteer's head with firing charges typical of neurons, and see if the fully aware test subject is able to notice anything phenomenally strange when various combinations of exogenous firing are tried. If phenomenal experience does exist by means of such physics, it ought to be possible to detect this way, since waves of a certain variety tend to be altered by other waves of that variety.
After rereading your post, professor, let me give you a taste of my psychology-based "dual computers" model of brain function. I think it could help address your concerns about phase transition.
Like the computers that instruct our robots, originally there should have been no phenomenal experience element to any brain function. (This is perhaps further back than the Cambrian, since the Cambrian explosion may have been somewhat caused by the emergence of phenomenal consciousness.) And also like our robots, these organisms should have been most successful in the very specific environments that their programming could account for, though they should have tended to fail under the more "open" circumstances that it could not. Apparently algorithmic computers can only be programmed so far.
I consider my own consciousness to exist as a fundamentally different form of computer that's purpose-based. Unlike the algorithmic form, for this variety existence can be anywhere from wonderful to horrible in a wide assortment of ways. Thus I think we choose to do what we perceive will promote our purpose-based interests. In any case this sets up a vast algorithmic supercomputer brain that creates a tiny phenomenal computer (which might very well exist in the form of a neuron-produced electromagnetic field). I theorize that however many operations the human brain performs, the phenomenal computer that it creates should do less than a thousandth of one percent as many. Notice that when you tell your hand to do something, for example, it's the algorithmic computer that actually performs the associated robotics.
More to your concerns about phase transition however, how might a phenomenal computer have evolved? Theoretically at first there should have been an epiphenomenal spandrel residing within some of these early biological robots. Why? Because something must exist in at least some capacity before evolution might cause it to also become fully functional. So theoretically when the programming of some of these organisms couldn’t successfully deal with a given circumstance, by chance at times this sort of decision must have partly been influenced by the phenomenal dynamic which only desired to feel good rather than bad, and regardless of its surely impoverished state at that point. I suspect that because there are limits to what non-purpose based computers are able to do, that this dynamic was progressively given more and more informational input and processing capacity to evolve into the medium through which you and I are now experiencing our existence.
So yes, it should be possible for phenomenal experience to be epiphenomenal. Indeed, I presume that brain damage can cause a sort of “locked in” state for us since phenomenal experience might exist with no potential to consciously move. And yes, given whatever physics creates the phenomenal dynamic, the most primitive and basic example of this physics should necessarily be discrete in some sense on the basis of its existence or not. Surely in practice however, more than “one” would be needed for functionality. Thus effective continuity at least, as in the number of lightbulb photons that we’d refer to as “light” in a given situation.
Because all mainstream consciousness theories deny any mechanistic component to phenomenal experience, it’s quite understandable to me that you’d base your paper upon their mediumless algorithm proposals. Without even nodding to an unknown variety of physics which creates phenomenal experience however, I’m able to argue that they depend upon dynamics that are “not of this world”. Apparently algorithms only exist in respect to the mechanisms that they animate, so without a mechanism there can be no algorithm. Note that to remedy this situation they’d merely need to make a nod to such unknown “hard problem physics”. But ha! As Planck noted long ago, “Science progresses one funeral at a time”. And I think science could use some help from philosophy here as well.
There seems to be some ambiguity with the luminous penny idea. Is the luminarian committed to: a) there is some minimal degree of consciousness which may have arbitrarily complex content, or b) there is some degree of consciousness which is minimally there in virtue of having minimal content? You seem to be assuming b, but I think a might be more plausible, especially if one thinks that there is something it is like to have contentless experiences.
If you take a, then I think the luminarian can respond to your claims about it leading to panpsychism by asking why you think there ever comes a point where something becomes determinately nonconscious. They might then say, in essence, "whatever the right theory of consciousness turns out to be, where Eric calls it determinately or indeterminately conscious, we call it conscious." And steal your account of why it doesn't stay indeterminate all the way to panpsychism or near-enough absurdity.
All that said, I am pretty sympathetic to your claim that there is such a thing as borderline consciousness.
The Draft: "Borderline Consciousness, When It's Neither Determinately True nor Determinately False That Experience Is Present"...
My comparison: https://en.wikipedia.org/wiki/Quantum_indeterminacy...
Your blog:..."but rather things stand somewhere in between"...
...Are you proposing determinism stands on evidence...
...or, are you proposing determinism is a stance on evidence...
Isn't it true philosophy is a stance for when love of wisdom is 'present'...
Thanks for the comments, folks, and apologies for falling a bit behind in my replies!
SelfAware/Mike: Agreed!
Howard: That's a promising approach, and TMS can give a version of that. But there are of course methodological complications, including the tricky relation between consciousness and reports of consciousness.
Arnold: Thanks for the kind words and helpful analogies.
Phil E: (1.) That was an interesting back and forth you and Ryan and I had on my previous post regarding imaginary numbers, but I guess I'm not convinced that they are any less real than negative numbers or ordinary counting numbers, so I'm inclined to stick with the example unless I feel the worry getting more traction. This does of course get into some tricky issues in philosophy of math. (2.) I agree that even if nature is continuous throughout you can often get effectively discrete, binary cases. It *could* be that consciousness is like that, with mostly discrete yes/no and the borderline cases being unusual edge cases. (3.) McFadden's theory hasn't got a lot of traction among consciousness researchers yet. It could be worth a look, but testing its specific empirical commitments is beyond the scope of my research. (4.) I wonder how much your view has to partake of not-of-this-world physics. In a way, Higher Order and Global Workspace theories share some commonality with your view. Simple, reflex systems don't require consciousness; consciousness arises when you need some systemwide or top-down control, including motivational systems.
Gumby: Interesting point. I was thinking of the dimension that the luminarian minimizes as being a content dimension, but you're right it could be something else -- maybe vividness or availability for report? But now it's hazier to me what the view is, so I might need a concrete proposal to engage. As to whether they could define "consciousness" to include what I call "borderline consciousness", and then take that redefinition all the way to panpsychism -- well of course one *can* define technical terms in any way one wants! But this might be like defining "green" to include shades on the border between green and non-green, then redefining the borderline again all the way to the point of calling shades of canonical blue "green".
I think what the luminarian is picturing is something like a "fade to black" for consciousness. That suggests vividness, but assuming the substrate for vividness is graded, you can just use your light bulb metaphor and ask how dim it has to be to count as lights out. But then they can just say that it would be the equivalent of no photons emitted... or however high they have to go to avoid you charging them with absurdity.
I'm not sure I've articulated the best case for the luminarian trying to steal your thunder, however. I'm not picturing mere redefinition. I imagine them agreeing with you that panpsychism is absurd and granting that consciousness is graded. Then they ask why you should say that consciousness ever ceases to be at least indeterminately there. Whatever your account happens to be of why atoms and such are not conscious, they will presumably be able to agree with it, and simply say that, where such an account does not hold, the entity in question is conscious. Such a luminarian stays *just* far enough away from panpsychism to keep you from rejecting their view for being too panpsychist.
In other words, once you grant that consciousness is graded, the question of how to avoid panpsychism seems to be a problem for both the luminarian and for you. It is a problem the state-change theorist solves by drawing what seems to be an arbitrary line, and it seems that the luminarian has to draw an arbitrary line, too. You try to avoid drawing an arbitrary line by making the line fuzzy, but it is not clear that you actually solve the problem by doing so. It looks like the boundary region will still be arbitrarily placed, and perhaps reach arbitrarily deep toward atoms, even if it is a wide band rather than a line.
One could map the gradient of consciousness directly onto the gradient in the substrate on which it supervenes, but then you won't get indeterminate consciousness but, rather, determinate consciousness of ever-decreasing amounts, down, perhaps (depending on the theory), to atoms each with a penny's-worth of consciousness. If your argument is that the substrate is a gradient, so consciousness is likely to be one too, then I think you owe us a reason why (we should believe that) consciousness doesn't follow the *same* gradient. Why, after all, should "conscious" be like "bald", rather than like the underlying gradient from which we seem to abstract baldness? Similarly, many vague terms seem to rest on features which have a spectrum: height and tall, green and the various colors, etc. If you stay on the gradient, you risk panpsychism or something absurd enough. If you diverge from it, you risk making it easy to be a saltation theorist. I'm not sure you can dodge both.
Gumby Bush: Thanks so much for that super helpful and thoughtful comment! I'll probably need some time to fully digest this one, since the challenge you raise is so interesting and fundamental. One way to think of it is this. Some properties are such that you have X and one or more not-X properties at opposite poles, with a gray zone between: bald vs hairy, green vs blue, extravert vs introvert. My account models consciousness on this. However other properties are "how-much" properties, like having money or having a size, such that there's a discrete jump between zero and any, and then just a matter of quantity once you have any.
Why do I think of consciousness in the first way rather than the second? My main argument in this paper is that modeling it in the second way threatens a dilemma between panpsychism and locating the lower bound. Your thought, if I'm understanding correctly, is that the luminarian can be kind of squirrely about the lower bound, but that since I'm kind of squirrely about it too, my view doesn't really have the advantage there. (In this paper I stay neutral about whether consciousness is an ungraded categorical property with a graded basis or instead a categorical property that admits of gradations within it.)
I have a further move if the basis of the gradation is content, since it's kind of hard to swallow the idea of a conscious organism with only a pennysworth of content, unless we're verging on panpsychism. But if the basis is something like vividness, that argument isn't available.
So I suppose that here I'll land back on two thoughts: First, there's my general appeal to the tendency of almost everything in nature to come with borderline cases (even most phase transitions, viewed at small temporal scale and/or looking at unusual cases), so some strong reason would be needed to overcome this default. Second, I'll fall back on asking for more particulars about the vividness version of the luminarian view, since my guess is that with those particulars in place I can better evaluate and challenge the view.
But I think there's probably more to be said here by way of general defense of the opposite poles model, which is what I'll need to think some more about!
No worries on imaginary numbers, professor. I personally suspect that a future society of meta-scientists will work math out essentially as an invented language (unlike an evolved language such as English) from which to think. Thus while deductive math may inform inductive physics, I think they'll agree that inductive physics has little potential to inform mathematics. But hey, Sabine Hossenfelder's video showed that even she's interested in the idea that physics might teach us about math.
On your “2”… exactly. Maybe phenomenal experience transpires as McFadden theorizes, and maybe it’s more like Dennett’s “fame in the brain”. The first could be considered relatively discrete while the second seems ripe for all manners of ambiguity (not to mention my “otherworldlyness” accusation).
What I mean by “worldly” is ultimately stuff that’s causal from within our system. Under the premise of naturalism theoretically everything that happens here is caused to occur exactly as it does by means of such dynamics (and yes, even including quantum mechanics). Theoretically any ontological discrepancies would reflect a second kind of stuff commonly known as “substance dualism”. I’m saying that McFadden’s theory is monistic while popular theories such as Global Workspace would require something from outside our system to function as proposed. Consider my reasoning:
We can observe that various machines function on the basis of informational input. Indeed, all information that animates the function of a given machine might thus be referred to as “algorithms”, and not otherwise. Our computer screens for example are clearly animated like this. But just because a given algorithm may exist as such for a given machine, this shouldn’t mean that it will exist as such for another machine. You wouldn’t expect a Betamax tape to work right in a VHS machine, but rather a machine that’s set up to use that sort of information. Thus in a natural world it would seem effective to say that all algorithms should be considered “mechanism specific”.
That's my beef with popular modern consciousness theories. They bizarrely presume the existence of consciousness by means of algorithms that not only aren't mechanism specific, but require no mechanical instantiation whatsoever. So for example it's not the broad information-sharing feature of Global Workspace Theory that I consider problematic. I have no opinion on that. It's that the global information, or whatever is thought to create phenomenal experience, is not proposed to be based upon the function of any sort of mechanism. Here certain algorithms are proposed to exist universally, and they needn't even animate the function of a machine that's set up to produce phenomenal experience.
That sure seems convenient! Thus beyond thought experiments devised by you and other distinguished people, I'm able to observe that various popular theories hold that if the right generic algorithm on paper were properly converted into some specific other generic algorithm on paper, then something here should thus experience what we know of as "a whacked thumb". Each such proposal may be contrasted with the mechanism specific proposal of McFadden. In the early days of his theory, in a 2002 paper, he formally offered theories such as GWT a means of causal instantiation for their proposals. None accepted. Even today few seem to grasp the otherworldly implications of non-mechanism-specific algorithms.
If it makes sense to you that all machine algorithms should only exist as such by means of specific kinds of machines, and thus that there should be no generic algorithms from which to create phenomenal experience in a natural world, then you might take up this cause yourself as a professional philosopher. In that case I’d expect McFadden’s mechanism specific proposal for the creation of phenomenal experience to interest you as a converse proposal that doesn’t violate the premise of naturalism.