If you grew up in a temperate climate, you probably spent some time bothering brown garden snails (Cornu aspersum, formerly known as Helix aspersa). I certainly did. Now, as a grown-up (supposedly) expert (supposedly) on the science and philosophy of consciousness, I've decided to seriously consider a question that didn't trouble me very much when I was seven: Are garden snails conscious?
Being an "experimental philosopher", I naturally started with a Facebook poll of my friends, who obligingly fulfilled my expectations by answering, variously, "yes" (here's why), "no" (here's why not), and "OMG that is the stupidest question". I'll call this last response "*gong*" after The Gong Show, an amateur talent contest in which performers whose acts were sufficiently horrid would be interrupted by a gong and ushered off the stage.
Now that I'm studying them more closely, it turns out that garden snails are even cooler than I thought. Let me fill you in.
Garden Snail Cognition and Behavior
(Most of this material is drawn from Ronald Chase's 2002 book Behavior & Its Neural Control in Gastropod Molluscs.)
The central nervous system of the brown garden snail contains about 40,000 neurons. That's quite a few more neurons than the famously mapped 302 neurons of the Caenorhabditis elegans roundworm, but it's modest compared to the quarter million neurons of an ant or fruitfly. The snail's brain is organized into several clumps of ganglia, mostly in a ring around its esophagus. Gastropod neurons generally resemble vertebrate neurons, with a few notable exceptions. One exception is that gastropod neurons usually don't have a bipolar structure with axons on one side of the cell body and dendrites on the other side. Instead, input and output typically occur on both sides without a clear differentiation between axon and dendrite. Another difference is that although gastropods' small-molecule neurotransmitters are the same as in vertebrates (e.g., acetylcholine, serotonin), their larger-molecule neuropeptides are mostly different.
Snails navigate primarily by chemoreception, or the sense of smell, and mechanoreception, or the sense of touch. They will move toward attractive odors, such as food or mates, and they will withdraw from noxious odors and tactile disturbance. Although garden snails have eyes on the tips of their posterior tentacles, their eyes seem to be sensitive only to light versus dark and the direction of light sources, rather than to the shapes of objects. The internal structure of snail tentacles shows much more specialization for chemoreception, with the higher-up posterior tentacles perhaps better for catching odors on the wind and the lower anterior tentacles better for odors closer to the ground. Garden snails can also sense the direction of gravity, righting themselves and moving toward higher ground to avoid puddles.
Snails can learn. Gastropods fed on a single type of plant will prefer to move toward that same plant type when offered the choice in a Y-shaped maze. They can also learn to avoid foods associated with noxious stimuli, in some cases even after only a single trial. Some species of gastropod will modify their degree of attraction to sunlight if sunlight is associated with tumbling inversion. In the warm-water marine gastropod Aplysia californica, the complex role of the central nervous system in governing reflex withdrawals has been extensively studied. Aplysia reflex withdrawals are centrally mediated and can be inhibited, amplified, and coordinated, maintaining a singleness of action across the body and regulating withdrawal according to circumstances, with both habituation and sensitization possible. Garden snail nervous systems appear to be similarly complex, generating unified action that varies with circumstance.
Garden snails can coordinate their behavior in response to information from more than one modality at once. For example, as mentioned, when they detect that they are surrounded by water, they can seek higher ground. They will cease eating when satiated, refrain from mating while eating despite sexual arousal, and exhibit less withdrawal reflex while mating. Before egg laying, garden snails use their feet to excavate a shallow cavity in soft soil, then insert their heads into the cavity for several hours while they ovulate.
Garden snail mating is famously complex. Cornu aspersum is a simultaneous hermaphrodite, playing the male and female roles at the same time. Courtship and copulation require several hours. Courtship begins with the snails touching heads and posterior tentacles for tens of seconds, then withdrawing and circling to find each other again, often consuming each other's slime trails, or alternatively breaking off courtship. They repeat this process several times. During mating, snails will sometimes bite each other, then withdraw and reconnect. Later in courtship, one snail will shoot a "love dart" consisting of calcium and mucus at the other, succeeding in penetrating the skin about one third of the time; tens of minutes later, the other snail will reciprocate. Courtship continues regardless of whether the darts successfully land. Sex culminates when the partners manage to simultaneously insert their penises into each other, which may require dozens of attempts.
Impressive accomplishments for creatures with brains of only 40,000 neurons! Of course, snail behavior is limited compared to the larger and more flexible behavioral repertoire of mammals and birds.
Garden Snail Consciousness: Three Possibilities
So, knowing all this... are garden snails conscious? Is there something it's like to be a garden snail? Do snails have, for example, sensory experiences?
Suppose you touch the tip of your finger to the tip of a snail's posterior tentacle, and the tentacle retracts. Does the snail have a tactile experience of something touching its tentacle, a visual experience of a darkening as your finger approaches and occludes the eye, an olfactory or chemosensory experience of the smell or taste or chemical properties of your finger, a proprioceptive experience of the position of its now-withdrawn tentacle?
(1.) Yes. It seems like we can imagine that the answer is yes, the snail does have sensory experiences. Any specific experience we try to imagine from the snail's point of view, we will probably imagine too humanocentrically. Withdrawing a tentacle might not feel much like withdrawing an arm; and with 40,000 neurons total, presumably there won't be a wealth of detail in any sensory modality. Optical experience in particular might be so informationally poor that calling it "visual" is already misleading, inviting too much analogy with human vision. Nonetheless, I think we can conceive in a general way how it might be the case that garden snails have sensory experiences of some sort or other.
(2.) No. We can also imagine, I think, that the answer is no, snails entirely lack sensory experiences of any sort -- and thus, presumably, any consciousness at all, on the assumption that if snails are conscious they have at least sensory consciousness. If you have trouble conceiving of this possibility, consider dreamless sleep, toy robots, and the enteric nervous system. (The enteric nervous system is a collection of about half a billion neurons lining your gut, governing motor function and enzyme secretion.) In all three of these cases, most people think, there is no genuine stream of conscious experience, despite some organized behavior and environmental reactivity. It seems that we can coherently imagine snail behavior to be like that: no more conscious than turning over unconsciously in sleep, or than a toy robot, or than the neurons lining your intestines.
We can make sense of both of these possibilities, I think. Neither seems obviously false or obviously refuted by the empirical evidence. One possibility might strike you as intuitively much more likely than the other, but as I've learned from chatting with friends and acquaintances (and from my Facebook poll), people's intuitions vary -- and it's not clear, anyway, how much we ought to trust our intuitions in such matters. You might have a favorite scientific or philosophical theory from which it follows that garden snails are or are not conscious; but there is little consensus on general theories of consciousness and leading candidate theories yield divergent answers. (More on this, I hope, in a follow-up post.)
(3.) *Gong*. To these two possibilities, we can add a third, the one I am calling *gong*. Not all questions deserve a yes or a no. There might be a false presupposition in the question (maybe "consciousness" is an incoherent concept?), or the case might be vague or indeterminate such that neither "yes" nor "no" quite serves as an adequate answer. (Compare vague or indeterminate cases between "green" and "not green" or between "extraverted" and "not extraverted".)
Indeterminacy is perhaps especially tempting. Not everything in the world fits neatly into determinate, dichotomous yes-or-no categories. Consciousness might be one of those things that doesn't dichotomize well. And snails might be right there at the fuzzy border.
Although an indeterminate view has some merits, it is more difficult to sustain than you might think at first pass. To see why, it helps to clearly distinguish between being a little conscious and being in an indeterminate state between conscious and not-conscious. If one is a little conscious, one is conscious. Maybe snails just have the tiniest smear of consciousness -- that would still be consciousness! You might have only a little money. Your entire net worth is a nickel. Still, it is discretely and determinately the case that if you have a nickel, you have some money. If snail consciousness is a nickel to human millionaire consciousness, then snails are conscious.
To say that the dichotomous yes-or-no does not apply to snail consciousness is to say something very different than that snails have just a little smidgen of consciousness. It's to say... well, what exactly? As far as I'm aware (correct me if I'm wrong!), there's no well-developed theory of kind-of-yes-kind-of-no consciousness. We can make sense of a vague kind-of-yes-kind-of-no for "green" and "extravert"; we know more or less what's involved in being a gray-area case of a color or personality trait. We can imagine gray-area cases with money too: Your last nickel is on the table over there, and here comes the creditor to collect it. Maybe that's a gray-area case of having money. But it's much more difficult to know how to think about gray-area cases of being somewhere between a little bit conscious and not at all conscious. So while in the abstract I feel the attraction of the idea that consciousness is not a dichotomous property and garden snails might occupy the blurry in-between region, the view requires entering a theoretical space that has not yet been well explored.
The Possibilities Remain Open
There is, I think, some antecedent plausibility to all three possibilities, yes, no, and *gong*. To really decide among them, to really figure out the answer to our question about snail consciousness, we need an epistemically well-grounded general theory of consciousness, which we can apply to the case.
Unfortunately, we have no such theory. The live possibilities appear to cover the entire spectrum from the panpsychism or near-panpsychism of Galen Strawson and of Integrated Information Theory to very restrictive views, like those of Daniel Dennett and Peter Carruthers, on which consciousness requires some fairly sophisticated self-representational capacities of the sort that are well beyond the capacity of snails.
Actually, I think there's something wonderful about not knowing. There's something marvelous about the fact that I can go into my backyard, lift a snail, and gaze at it, unsure. Snail, you are a puzzle of the universe, right here in my garden, eating the daisies!
[image by Bryony Pierce]
49 comments:
I like 'gong'.
The most primitive feelings may well have been only vaguely conscious. It does make sense that as life evolved it did not suddenly jump to consciousness at some point.
We would not necessarily be able to have such experiences ourselves, now. And it makes sense that we won't have a good theory of the evolution of consciousness until we have developed AI into conscious machines and taken a good look at what we've done.
What if the term “consciousness” is akin to the term “literature”? Here’s the first definition I got when I googled “literature”:
“written works, especially those considered of superior or lasting artistic merit.”
That definition creates a wide variety of opinions as to what counts as literature, but it potentially allows everything from the very minimal (a "Stop" sign?) to the very sophisticated. And then you can talk about the edge cases. Are YouTube videos literature? They aren't written works, but when the term "literature" was coined, writing was the only means of recording words. Maybe "literature" really should mean "recorded works", or "recorded verbal works", or whatever.
So what if there is a theory of consciousness that works like literature? Maybe there is a definition which provides the very minimum requirement. It can be presumed that there would be very few who would attribute consciousness to a real example that has only this minimal level.
So the question then becomes, for each inquirer, what would be the necessary requirements (or constraints) for you, the inquirer, to count an exemplar as an example of consciousness. Maybe we could give that minimal exemplar of Consciousness a name, like, say, a psychule, analogous to molecule, to represent the smallest unit of the mental.
So for example, what if the absolute minimal requirement for consciousness is the ability to interact with the environment? Because every physical thing we know about (or can know about) can interact with its environment, calling that minimal level the psychule would make you a panpsychist. What if every interaction with the environment could be described as an integration of information? Requiring only “integrated information” for the psychule would still make you a panpsychist. Requiring that the minimal interaction serve a purpose for it to be a psychule would make you a Functionalist. And so on up to requiring the interaction to create memory, or to access memory, or to create simple concepts, or complex concepts, or self-referential concepts, or goal-type concepts, or causal inference concepts, and so on. Although there might be something beyond some sort of concept, I can’t imagine (yet) what that would be.
*
[just sayin’]
Thanks for the comments, folks! A couple of votes for *gong*. :-)
Hmmm, let's see. Is there some "thing" that it is like to be a snail? I can't use common sense to answer that question because, according to Einstein, common sense is nothing more than a set of prejudices that we acquire by the age of eighteen. So now I am compelled to rely upon pre-prejudicial senses, let's say those of a five-year-old. I believe a five-year-old would answer yes, there is some "thing" that it is like to be a snail, as a five-year-old would also agree that there is some "thing" that it is like to be mass, spin and charge. See, that wasn't difficult now, was it?
I think it's all in how we choose to define "consciousness."
We can use a moderately wide definition to include any system that has perception, attention, or imagination. It's conceivable that the garden snail might have glimmers of one or more of these, of this primary or sensory consciousness, but it seems like it would be minuscule compared to, say, a fruit fly, much less any vertebrate. This seems like (3) *gong*.
Or we can further widen the definition to anything that responds to its environment, or anything that can be conditioned, or that has goals. If so, then we're at (1) Yes. (It's worth noting that, using this definition, plants may also be a Yes.)
Or we can go narrow and define consciousness in a higher-order manner, as the ability to introspect. If so, then no, garden snails are not conscious. But this may also exclude most non-human animal species. (If this definition seems too narrow, consider how we categorize the conscious and the unconscious in our own minds.) This is a full (2) No.
Overall, I guess I'd be in the *gong* category because I ultimately suspect asking whether garden snails are conscious is like asking how much "humanness" they have.
Eric,
In your essay "Kant Meets Cyberpunk", you commented that "maybe A-type properties are closer to common sense and we ought to stick with common sense unless there is compelling reason to reject it." Why not revise that statement to say that since A-type properties might actually be simpler, the most parsimonious explanation of things might be from the perspective of pre-prejudicial sense, the type of insight that a five-year-old might exhibit. Now one is compelled to consider: Why not stick with the most parsimonious explanation of consciousness unless there is compelling evidence to suggest otherwise?
For example: in response to the question of how physical states give rise to consciousness, the most parsimonious explanation is that physical states in and of themselves are forms of consciousness. Fundamentally, that's a simple enough explanation, and at the moment, I do not see any compelling evidence to suggest otherwise other than our own prejudicial common sense. So in conclusion: Is prejudice a compelling enough reason to reject that simple explanation of consciousness?
But Lee, *which* physical states? I'm not sure five-year-olds will have the best answer here....
SelfAware: I can see that if consciousness is defined in those ways, you get those answers. But if we define consciousness in terms of having a stream of experience or there's being "something it's like", then the question remains unclear, I think. Maybe that's a bad way to define consciousness, which doesn't yield a coherent, usable concept? Sure, if so, then *gong*.
(But Lee, *which* physical states?)
The fundamental building blocks Eric, all of them. If a group of five-year-olds in a kindergarten class were tasked with building a structure using the building blocks in the classroom, and then the teacher placed a doll's head on the top of the structure to represent consciousness, I'm pretty sure the kids would make the correlation between the building blocks and consciousness, simply because it's the most parsimonious explanation, and young children understand simplicity.
You're an adult here Eric, step back into that classroom I just depicted with all of your current wisdom and knowledge, only leave all of your prejudicial common sense behind. What conclusion would you draw? Sorry, I've got to go with Einstein on this one, common sense is nothing more than a catalogue of prejudices that we acquire by the age of eighteen, and beyond the age of eighteen, it only gets worse. "Gong"
I'm reminded of a true story about simplicity. A semi-truck drove under an overpass that was too low and the semi got stuck. A young boy was watching as all of the adults tried to remove the truck. Policemen were there, structural engineers, the tow truck driver and all of the bystanders. Nothing that they tried was working. Reluctantly, the young boy rode his bicycle down and asked the policeman why they didn't just let the air out of the tires and drive the semi out from under the overpass....... "Gong"
Five-year-olds are terrific, but I'm not sure we should trust them on this question -- nor that they'd all agree. Some work in developmental psychology suggests that five-year-olds might be intuitive substance dualists (see Paul Bloom's book Descartes' Baby). Maybe the tires of that semi are already flat, and what really needs to happen is that we pack the back of the semi with ice to shrink the metal just enough that the semi can slip through -- a solution that might not occur to a five-year-old.
Maybe so Eric, but I think you would agree that it would certainly be easier to convince a group of kindergarteners that the reason consciousness arises from physical states is because physical states in and of themselves are forms of consciousness. The only other explanation on the table is magic, and let's face it, all children love magic, and the older ones like ourselves actually prefer the paradigm of magic over parsimonious explanations. Thanks....
I might ask what would a snail have to do to convince me it is conscious? Something along the lines of a behaviour demonstrating a decent modifiable internal model of the world.
I was looking again at the inverting goggles experiments: "humans can adapt almost perfectly, monkeys follow humans, split results in birds, no adaptation in amphibians, fishes and insects". There are lots of mental abilities that might parallel that, but I wondered if it might be a useful pointer. My original interest was in the qualia once you have readapted (another inverted spectrum ;)), following the model that a quale can be more than a simple raw feel, but rather any "elementary phenomenal experience indecomposable to smaller elements of experience" (eg a face). This has the nice property of eliminating the problem of what level of signal pre-processing in the brain is cognition versus perception versus sensation. Then one measure of level of consciousness might be the most complex thing that can give rise to a useful quale (ie not a blur or blob).
Is ‘X’ conscious? I suspect that question can only be answered in a definitive way scientifically, using knowledge we at present have not acquired. Since the bulk of your post consists of scientifically valid observational and anatomical evidence about snails, I imagine you’d agree that a purely philosophical analysis cannot possibly provide an answer to the question. I would expect that a future neuroscience development involving nanoscale probing of whatever brain structure physically produces consciousness will identify the precise structural and neurochemical signature of consciousness, allowing us thereafter to definitively answer the question.
Owing to the empathic functionality of mirror neurons, it seems likely that we assume from early infancy that other humans feel their embodied selves centered in a world just like we do, but we lack the ability to formally and conclusively prove consciousness in others. I believe the best we can do is infer consciousness in another organism, with the strength of the inference dependent on the degree of neurobiological similarity to oneself, the only thoroughly convincing case of a conscious being. As such, I would propose that the inference strength of consciousness in other humans is upwards of 99.9999%, as is the case with all mammals and, in fact, any creature with a brainstem (the reptilian brain), including birds.
We can decidedly rule out consciousness in non-living structures because consciousness is biological and in very simple organisms because they lack the necessary neuronal structure and metabolism. As such, the inference strength of consciousness in garden snails would be non-zero, but, in my opinion, rather low. Octopuses (-podes, -pi), on the other hand, I would rate as very high—they even respond somewhat like we do after ingesting MDMA (aka ecstasy). At the other end of the inference strength scale, I would assign panpsychic elements, whatever they are, a consciousness inference strength of 0%, since they lack any relationship at all to consciousness, which is a strictly biological phenomenon.
Perhaps James of Seattle, SelfAwarePatterns and others in your blog audience would be interested in P. M. S. Hacker’s revealing look at what we mean by the word consciousness in his paper, “The Sad And Sorry History of Consciousness: Being, Among Other Things, a Challenge to the ‘Consciousness Studies Community’” (PDF at http://info.sjc.ox.ac.uk/scr/hacker/docs/Consciousness%20a%20Challenge.pdf). Hacker begins by noting that “The term ‘consciousness’ is a latecomer upon the stage of Western philosophy” and describes the evolution of our use of the word.
Eric, since in this post you again use the “something it’s like” Nagelese dogma, which I’ve always found incoherent regarding what is being asserted, perhaps you’ll enjoy Hacker’s convincing take-down of Nagel’s 1974 “there is something it is like” proposition which, per Hacker, “laid the groundwork for the next forty years of fresh confusion about consciousness.” Although Nagel’s remark is discussed in the latter half of the “Sad and Sorry History” paper, another of Hacker’s papers, “Is There Anything It Is Like to Be a Bat?”, focuses on the “something it is like” analysis alone. That PDF is available at http://info.sjc.ox.ac.uk/scr/hacker/docs/To%20be%20a%20bat.pdf. Interestingly, in regards to Damasio’s definition of consciousness as a feeling, which I find compelling, Hacker’s analysis uses the words “feel” and “feeling” over 60 times when referring to consciousness. I wonder if Hacker, or anyone else, has noticed. Apparently, I feel, therefore I am.
I’ll email you a copy of both of Hacker’s PDFs Eric and, as always, I’m most interested to learn your thoughts and those of your blog readers.
The dialectic in these kinds of questions always seems to start from the presupposition that everything is not conscious (whatever consciousness may mean). I feel as though any specified version of consciousness, whether it be specified in terms of more sophisticated capabilities or in terms of something simple like the experience of qualia, will as a result lead to doubt about the extent of consciousness. Can a creature experience red qualia? This will depend on its optical system, its nervous system, etc... any specification of consciousness will correlate with a specification of the type of organism under consideration that explains that particular kind of experience. Might it be the case that it's in fact intuitive to presuppose a wide extent of consciousness, in spite of the fact that it also has the feature of being something for which it's very difficult to produce an account that answers the question of "how do we know it's there" in some more demanding sense?
I guess my real question is: why is our starting point that everything is not conscious, and we are in the position of having to establish somehow that everything else is (or that each thing is on a case by case basis)?
Eric,
I agree that for the stream of experience and "something it is like" definitions, the answer can be *gong*, although I think we have to bear in mind that when we define consciousness totally from the subjective perspective, as these definitions do, we're doing so solely from the human perspective, the only one we can take. Arguably the snail's perspective is too far from the human one for us to consider it "like anything" if we could somehow access its minuscule perspective. So really, to me, it's *gong* leaning toward No.
Stephen,
Thanks for sharing that paper! I have to admit that I'm occasionally tempted to conclude that the overall concept of consciousness isn't a productive one, and that we'd be better off without it, perhaps just focusing on narrower concepts like perception, memory, attention, imagination, and metacognition.
But while Hacker makes many good points, I think he oversells the case against introspection. His thesis seems to be that we can self-reflect because we have language. I think this gets it backward: we have language because we do have recursive access to our own mental life. That access is unreliable for understanding the architecture of the mind, but very effective at enabling symbolic thought, including language.
Still an interesting paper. Thanks again!
"The snail's brain is organized into several clumps of ganglia, mostly in a ring around its esophagus."
Ah, yes. The secret of consciousness is revealed. It exists so we can eat.
And some wonder what the adaptive advantage of consciousness could possibly be.
Stephen, I agree largely with SelfAware’s remarks on Hacker’s paper. This paper gives a nice historical context to the concept of consciousness, and there are some good insights, especially this:
“Experiences are not in general individuated by reference to what it feels like to have them but by reference to what they are experiences of.”
I’d like to repeat that: experiences are individuated by (distinguished by, categorized by, recognized by, qualified by, “qualiafied” by) reference. That is, the reference of this experience is different from the reference of that experience.
However and unfortunately, it seems to me the main thrust of the paper is misguided. Firstly, Hacker suggests, like some others, that the qualities of an experience are equal to the emotional response to the experience. I, and others (I think), would suggest that the associated normative feelings are subsequent experiences resulting from the original experience. Pain is still pain, even if it doesn’t cause subsequent bad feelings. (Thanks, morphine!).
Secondly, Hacker seems to attack Nagel’s characterization of qualia as “something it feels like” on linguistic grounds. To me, this smacks of Wittgensteinian word games. I agree that “what it feels like” is a horrible description of qualia, but I don’t think that means that the “consciousness studies community [should] retire from the field”. They just need a better handle on what they’re talking about.
*
[the answer is: qualia are individuated semantic references]
Thanks for the continuing comments, folks!
David: Inverting goggles is an interesting measure of adaptability. Snails have *some* adaptability (e.g., habituation and sensitization) but maybe inverting goggles would stump them. Of course, people aren't perfectly adaptable either. Supposing we take adaptability as the relevant criterion, I'm not sure how we determine the sweet spot.
Stephen: Thanks for the papers and for the (as always) thoughtful and interesting comment. I agree about the inference problem, but I am less sure about how far to extend it. For example, some non-biological things might have consciousness (sophisticated robots?) and as you know I'm not as convinced that a brain stem is sufficient (though maybe it is).
Anon: Most people find panpsychism highly unintuitive, as do I, but I wouldn't want to rule it out entirely. It is possible that the problem here is in the supposition that not everything is conscious.
Stephen/James: I'm inclined to agree with James's diagnoses of the problems with Hacker's arguments. I would also add that at crucial points where Hacker asserts that certain kinds of claims don't make sense, he seems to be leaning excessively on Wittgensteinian or verificationist background assumptions. And yet... it's an interesting challenge. I do take seriously the *gong* idea that something is rotten in our/my concept of consciousness, even if I don't find the way Hacker articulates the concerns to be convincing.
In your "no" option you write this:
"We can also imagine, I think, that the answer is no, snails entirely lack sensory experiences of any sort -- and thus, presumably, any consciousness at all, on the assumption that if snails are conscious they have at least sensory consciousness."
But you stated above that they navigate by sense of smell and sense of touch. So they do have sensory consciousness and conceivably an internal map of the external world.
I'm wondering if there is an easy criterion to determine whether an organism is conscious.
Does the organism require sleep?
https://www.mnn.com/earth-matters/animals/stories/do-snails-sleep-new-evidence-says-yes
I believe Hacker understands that his approach and remarks regarding consciousness concepts are controversial. From Wikipedia’s "Peter Hacker" article: “He is known for his detailed exegesis of the work of Ludwig Wittgenstein, and his outspoken conceptual critique of cognitive neuroscience.” Hacker is also the author of the book “Insight and Illusion – Themes in the Philosophy of Wittgenstein” … so it’s not surprising to find Hacker’s work rife with “Wittgensteinian word games.”
While I recommended his paper for its interesting information on the evolution of the philosophical usage of the word consciousness, I find his analyses at times to be overly nit-picky (or Witt-picky?) thrusts that tend to be brutally literal at the expense of understanding, as in, for instance, his (and M. R. Bennett’s) critique of Searle in their “Philosophical Foundations of Neuroscience”. They appear to insist, for instance, that the pain you claim to feel in your foot is really a pain right down there in your foot and nothing else, misconstruing what I believe is Searle’s intended meaning when he says the pain in your foot is in your brain—I believe Searle is saying that physical feelings generated and felt by the brain include a body location component that motivates your accurate remark that you feel the pain in your foot. The phantom limb syndrome alone would seem to support Searle’s position—that pain in the foot you don’t have any more is obviously not in your foot.
That being said, I agree with Hacker’s thesis that Nagel’s “There is something that it is like to be a bat” statement is incoherent and, as I’ve written previously, a single word change resulting in “something that it feels like to be a bat” solves that problem but results in the meaningful but unsurprising assertion that “bats are sentient in (obviously) a bat-like way.” (The word sentient, by the way, means "feeling" rather than "intelligent" as in the common science-fictional misusage).
As I’ve posted previously, here’s a related excerpt from a Chalmers interview, who said: “But there is one kind of consciousness that I am most interested in, and that is consciousness as subjective experience: roughly what it feels like to be thinking, reasoning, or being. In this sense, a system is conscious if there is something it is like to be that system.” Notice how Chalmers immediately discards the meaningful “feels like” for the “is like” construction as he moves from biology to philosophy.
I also cringe a bit at the derivative phrase “what-it's-like-ness” because I don't find it at all meaningful. Specifically, what's the referent? I occasionally wonder if this Nagelese sort of language promotes the evidence-free proposition that consciousness is a “thing” that can somehow be plucked from its biological instantiation and deposited elsewhere, most often into a computer.
Related to that, Eric, I'd be most interested to learn the grounds for your assertion that "some non-biological things might have consciousness" ... I simply cannot imagine how that would be possible. How would we determine that a robot has feelings? How will we know if we have created Artificial Consciousness?
Stephen, I know you asked Eric, not me, why some non-biological things might have consciousness, but I wonder if you have considered my suggestion above that the relevant component of a conscious experience is a symbolic reference. If I’m correct, we have already created artificial consciousness. It’s just not the kind of consciousness we’re used to, i.e., human or creature-like. But, again if I’m correct, making artificial consciousness human-like is just an engineering problem of finding all of the human consciousness-related activities and duplicating them artificially.
*
Actually, James of Seattle (and Hello!), I’m a biological naturalist à la Searle and, like Damasio, I believe that consciousness is a wholly biological feeling which is felt by the organism having the feeling. An organism feeling a feeling—any feeling—is conscious.
Damasio defines core consciousness (aka creature consciousness) as the feeling of being embodied and centered in a world—“the feeling of what happens.” From an evolutionary perspective, the first feelings, and the most easily recognized, were certainly basic physical feelings such as pressure (touch), temperature (hot/cold), hunger and so on, as well as basic emotions. Consciousness is something of a meta-feeling, incorporating feelings derived from various sensory “tracks” for both internal and external events.
More elaborated and sophisticated consciousness, such as our own, is called extended consciousness. Extended consciousness, like core consciousness, is also composed of feelings, some of which are commonly misunderstood as inexplicable non-physical mental operations, such as thought, causing considerable confusion in my opinion. Thought, vision, hearing and so on are feelings as well, although they are not usually perceived as being the same sort of thing as our “physical feelings.”
Consciousness, then, evolved as a production of the brain and is strictly biological. Owing to the completely subjective nature of feelings, consciousness in others can only be inferred, as I wrote originally, and the strength of that inference is dependent on the degree of biological similarity to ourselves—specifically neurostructures and neurochemicals. If, as you suggest, we successfully duplicate human consciousness-related activities, we will have certainly created a simulation of consciousness but will still be unable to infer consciousness—we can call it Commander Data … ;-)
An Artificial Consciousness that incorporates the living brain tissue that we know to be the producer of consciousness could legitimately be inferred to be conscious, but, of course, we must identify the complete working biology of that tissue subsystem before that’s possible. (I’m referring to the science-fictional “biological brain-as-pilot coupled with the systems of a starship scenario” for example). I don’t believe we have, or could ever have, any grounds on which to infer consciousness in any non-biological system. As always, I’m interested in learning of ideas to the contrary—and I’d frankly be amazed by a single verifiable instance.
To respond to a remark from Eric while I’m posting:
Eric, I am still unable to locate a single point of evidence in support of cortical consciousness to weigh against the significant body of evidence in favor of consciousness created by the brainstem complex. The recent reexamination of the split-brain studies has removed the only evidence ever cited for the cortical theory.
I have, however, noticed that the (usually unstated) cortical consciousness hypothesis is assumed to be true by nearly everyone as in, for instance, those who’ve interpreted the timing results from Libet’s experiments. No one explains why cortical stimulation is not perceived almost immediately if consciousness is cortically created. And wouldn’t we expect cortical stimulation to be perceived always before a simultaneous direct sensory stimulation (like a touch) which, like all bodily events, is routed first to the brainstem? Shouldn’t the need for inexplicable proposals like “back dating” and “back referencing” processes be an Occam’s-razor clue that we might be on the wrong track? I wonder how the current theories explaining Libet’s findings would change with an assumption of brainstem-created consciousness. My initial cursory think-through assuming brainstem consciousness appears to yield more straightforward explanations—I wish I had the time to become an expert on neural timings.
Interesting to ponder—why would a fully-functional ear evolve in organisms with only a reptilian brain who have no cortex to create the consciousness of a sound?
Thanks for the thoughtful and detailed comment, Stephen -- as usual! Maybe people love the neocortex because it makes us humans special? People who are drawn toward restrictive views of consciousness involving sophisticated meta-representations might think, for example, that consciousness is so biologically rare and special that it needs to arise from tissue that is more recently evolved.
Stephen,
On cortical consciousness, are you familiar with the phenomenon of blind-sight? It happens when a patient has damage to the occipital lobe, the visual cortex at the back of the neocortex, and so is unable to consciously see. However, when forced to make a decision about whether something is in front of them, their "guess" is significantly more accurate than random chance. They can also often make accurate guesses about coarse properties of objects within their field of vision.
The theories about how this works focus on the path of the optic nerve. It goes from the retina to both the thalamus, which relays the signals to the occipital lobe, and the superior colliculus in the midbrain. The idea is that the "guesses" come from signalling from either the thalamus or the superior colliculus that manages to make it to the movement regions in the basal ganglia.
Regardless of whether the signal comes from the thalamus or the superior colliculus, the processing in these sub-cortical regions does not result in conscious perception of the visual information. That only seems to happen when the visual signals travel through cortical pathways. (The prefrontal cortex and posterior association cortex in particular appear to be essential players.)
There are many other neurological case studies showing the effects of cortical damage on aspects of consciousness. V.S. Ramachandran's fascinating book, 'The Tell-Tale Brain', covers a lot of interesting case studies.
Incidentally, in the recent split-brain patient experiments, my understanding is that the information shared between the hemispheres was very limited and had similar restrictions, indicating that the sub-cortical pathways it was likely flowing through were also below the level of consciousness.
None of this is to say that the brainstem regions aren't critical support and regulatory structures. Damage to them can snuff consciousness out. But that doesn't mean it's where consciousness actually happens.
Consciousness happens in consciousness. Its physiological correlates involve the whole brain (believed by some to be like a unified electromagnetic wave) and perhaps the whole body, so it is pointless to debate whether it is more or less in one part of the brain or another.
One of its distinguishing characteristics is the sense of embodiment. The reticular activating system has both descending and ascending connections. In a sense, it connects the brain with the body. It seems to control wakefulness and transitions between sleep and wakefulness. Damage to it results in coma. If we grant consciousness to evolutionarily older organisms with brains and nervous systems - insects, reptiles, slugs - they either have a reticular formation or probably have cells performing analogous functions, since most of these organisms seem to require sleep.
For starters, I’m delighted to see interest in the topic of brainstem consciousness, an admittedly minority view, but, in this case I believe the evidence supports siding with the crackpots … ;-). To Mike Smith (aka SelfAwarePatterns): I theorize that cortical processing resolves much of the content of consciousness as opposed to creating the conscious “display”—the actual presentation of consciousness—which I believe is created by the brainstem, as discussed below. As such, I expect the blind-sight phenomena will be found to be a consequence of that division of functionality. Also, it’s my understanding that after severing the corpus callosum, direct connectivity still exists between hemispheres, but it’s less optimized for speed, lacking the c. callosum’s myelination. To Jim, the reticular formation—as you know, “a set of interconnected nuclei that are located throughout the brain stem”—has long been known to be intimately connected with consciousness. The brainstem actually activates the cortex, a functionality whose evolution I believe merits consideration.
I’ve posted some of what follows to Eric’s blog before, in the “AI Consciousness” guest post by Susan Schneider (http://schwitzsplinters.blogspot.com/2017/01/ai-consciousness-reply-to-schwitzgebel.html) but, for convenience, I repeat some of that posted content here. The neuroscientific sources of interest are: “Consciousness without a Cerebral Cortex” by Bjorn Merker (at http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.211.7642&rep=rep1&type=pdf) and “Consciousness and the Brainstem” by Antonio Damasio and Josef Parvizi (at https://pdfs.semanticscholar.org/aad7/58c2e6fa977d8a6169197f50ddb44e0b3cea.pdf).
The brainstem consciousness hypothesis is decidedly a minority position. As Damasio writes: “... contrary to tradition and convention, I believe that the mind is not made in the cerebral cortex alone. Its first manifestations arise in the brain stem. The idea that mind processing begins at brain stem level is so unconventional that it is not even unpopular.”
I believe that Merker makes the definitive case, which is supported by compelling evolutionary, experimental and observational evidence. From his paper’s abstract:
“A broad range of evidence regarding the functional organization of the vertebrate brain—spanning from comparative neurology to experimental psychology and neurophysiology to clinical data—is reviewed for its bearing on conceptions of the neural organization of consciousness. ... the principal macrosystems of the vertebrate brain can be seen to form a centralized functional design in which an upper brain stem system organized for conscious function performs a penultimate step in action control. This upper brain stem system retained a key role throughout the evolutionary process by which an expanding forebrain—culminating in the cerebral cortex of mammals—came to serve as a medium for the elaboration of conscious contents. This highly conserved upper brainstem system, which extends from the roof of the midbrain to the basal diencephalon, integrates the massively parallel and distributed information capacity of the cerebral hemispheres into the limited-capacity, sequential mode of operation required for coherent behavior. It maintains special connective relations with cortical territories implicated in attentional and conscious functions, but is not rendered nonfunctional in the absence of cortical input. This helps explain the purposive, goal-directed behavior exhibited by mammals after experimental decortication, as well as the evidence that children born without a cortex are conscious.”
As far as I can tell, no such compelling evolutionary, experimental and observational evidence exists for any cortical consciousness hypothesis—no evidence at all. If you’re aware of any, please let us all know.
My own armchair elaboration of the brainstem consciousness proposition is that pre-conscious images are resolved both by the brainstem subsystem, for core consciousness, and by the cortex for extended consciousness. Then, some as-yet unidentified structure in the brainstem complex creates the conscious “display” of the feelings that are the “movie-in-the-brain” (as it’s been called) from, or by using, those pre-conscious images. The more complex pre-conscious images resolved by the cortex are created from the massive parallel processing that Merker notes and transmitted to the brainstem with its “limited-capacity, sequential mode” for “display,” perhaps via those sweeping electromagnetic waves Jim mentions. It would seem that the centrality and connectivity of the brainstem to the entire nervous system would also allow the physical instantiation of conscious images (the “displaying”) to directly and efficiently affect bodily control, particularly in high-speed interactions—Merker’s “penultimate step in action control”.
It seems most unlikely that another “display” functionality would evolve in the cortex because once a successful biological functionality evolves and is conserved, it’s possibly enhanced but not duplicated. In the case of consciousness, the problem of synchronizing and unifying distributed consciousness into a single presentation would seem a daunting obstacle to the evolution of a distributed cortical consciousness. By the way, that unity of conscious presentation is one of the unsolved “mysteries” of all cortical consciousness hypotheses.
I’d be happy to email the Merker and Damasio PDFs cited above to anyone in response to an email to ERLTalk@outlook.com. Mike, I’ve taken a cursory look at and bookmarked your great website for further reading and I see that we might have much in common—I’m a retired computer programmer with very similar interests. The “ERL” in ERLTalk refers to my hypothesis “The Eternal Re-experiencing of Life” regarding what Einstein called the “eternity of life”—a direct implication of the existence of consciousness in our block universe. ERL challenges everything we believe about the purpose and meaning of life! My paper “Einstein’s Breadcrumbs” can be downloaded from https://drive.google.com/file/d/1e76sHwM4bjKyZwl2sMcrDmTKAtZDG74D/view?usp=sharing.
And, Eric: I’ve noticed that a freshly exposed cortical surface is slippery wet and shiny and, as we all can attest, we are irresistibly drawn to shiny objects … and I suspect that’s the hypnotic attraction at the root of cortical consciousness theories. ;-)
Stephen
Those Merker and Damasio papers are pretty persuasive to me, but I was already on board with the idea of the primacy of the brain stem and reticular formation.
Thanks Stephen. I'd be thrilled to see your thoughts on any post.
I’m sure you’ve heard the common criticism of the internal movie idea and the infinite regress. A movie implies an audience, but how does the audience consume the movie? With its own internal movie and audience? Which in turn has its own movie/audience? You can break the regress by having each nested movie be less sophisticated, but that means the final nested version would be pretty primitive.
The brainstem is generally seen as primarily reflexive in nature because of disturbing experiments done on animals where the cerebrum is separated from the brainstem. This results in the animal displaying nothing but reflexive behavior. An excellent source for information like this is Todd Feinberg and Jon Mallatt's 'Ancient Origins of Consciousness'. It's a fairly technical book on animal consciousness, but it has a lot of discussion on where mental imagery takes place.
So that phylogenetically ancient region is definitely where our most primal reactions to the world originate, the core of what eventually becomes emotional feelings. In that sense, everything above it *is* an elaboration, but I think consciousness is one of those elaborations.
Unless of course you equate consciousness with the reflexes.
Mike, the Feinberg/Mallatt interpretation of decorticate animal behavior as purely reflexive conflicts with Merker’s (and others) view of “… the purposive, goal-directed behavior exhibited by mammals after experimental decortication ...”. I’ve added Feinberg/Mallatt's book and thesis to my lengthy reading list. Thanks for the reference.
I wonder how Feinberg/Mallatt would account for the consciousness of normal, healthy infants, considering that (again, from Merker’s paper): “Their cerebral cortex is quite immature and its connections to brainstem systems are still rapidly developing. … Nevertheless, 3-day-old infants can discriminate their mother’s voice and work to produce it ... Three- to 4-month-old infants can form concepts … and 6-month-old infants can form associations between memory representations that are absent.” Also, “A child without a cortex cannot regulate emotions efficiently or exercise cognitive control of emotion-expression or emotion-related behavior. The same is true of normal young infants.” Apparently we have all been “brainstem babies” yet, as far as I know, no one claims that normal human infants lack consciousness or that they behave in a purely reflexive way.
Regarding the “movie-in-the brain”, the phrase is used by Damasio, Panksepp, Sacks and others metaphorically and not literally. Consciousness is not a display that’s observed by a brain-resident homunculus with a little homunculus in its head, with a little … and so on, ad infinitum. The movie-as-metaphor is not “watched” but is, rather, experienced and the use of a movie as a metaphor refers to the multi-sensory-track flowing experience that is consciousness. The movie as metaphor is why I use the visual word “display” in quotes.
Related to the earlier mentions of change blindness, I noticed this interesting information in Merker’s impressive paper:
“The simulated nature of our body and world is further supported by a number of phenomena that alert us to the synthetic nature of what we typically take to be physical reality itself, that is, phenomena such as inattention blindness, change blindness, and allied effects … Such “deletions from consciousness” can be countered by appropriately placed microstimulation of the superior colliculus … These various indications all support the conclusion that what we confront in sensory consciousness is indeed a simulated (synthetic) world and body.”
So core consciousness, also called “creature consciousness”, that feeling of being embodied and centered in a world, is a simulation and not a faithful representation of ourselves in the world—a fact that I believe is all too often overlooked. The silent, colorless external world is not at all like our conscious representation of it.
So much to think about … so little time … ;-)
As an interesting aside, a movie metaphor is also employed by physicist Brian Greene to describe our feeling of a flowing time, a completely illusory feeling since flowing time (and its “now”) does not exist in the universe, per our repeatedly confirmed Relativity physics. Hence Einstein’s remark, “… the distinction between past, present and future is only a stubbornly persistent illusion." I believe that the use of a movie metaphor in both instances demonstrates the intimate relationship between the stream of consciousness and the flowing time illusion. In my “Einstein’s Breadcrumbs” paper, I propose that the flow/stream of consciousness is the fact of consciousness that we mistakenly interpret as the illusion of flowing time, so that the conclusion that consciousness alone animates the static and unchanging block universe we inhabit seems unavoidable.
Stephen,
I think we have to be careful about conflating two different procedures: decerebrating and decorticating. As I understand it, decerebrating is more severe, severing the connection between the brainstem and the cerebrum. Decorticating involves removing the outer cortical layer, but leaves more lower level structures, such as the thalamus and basal ganglia intact. I think this makes a difference in how much functionality remains. Panksepp in particular in his writing seems to focus on the results of decortication. F&M's analysis is about decerebrated animals.
From what I've read, newborn behavior is dominated by reflexes. But we should remember that even newborns still have a cerebrum, just one that isn't fully myelinated yet and only beginning synaptic pruning, so its functioning is inefficient, but not entirely absent. (Infants put in an fMRI still show substantial cortical activity.) Myelination of the axons in the cerebrum, which begins in the womb, is constantly in progress for an infant, although it isn't fully complete in the frontal lobes until well after puberty.
Damasio in his book 'Self Comes to Mind' discusses hydranencephalics, children born with little or no cerebral cortex. He notes that they seem to exhibit a sort of primal form of consciousness, very similar to newborns, but never developing past that stage. I think this is a factor in his views, and I initially found it compelling. But when I did additional research, I learned that their capabilities vary substantially, and that most retain their thalami. Many also have lower levels of their temporal and frontal lobes. All of which clouds any conclusions about brainstem functionality.
On the movie metaphor, my apologies. I should have realized that you would have a more sophisticated conception. Myself, I think the impression of the movie comes from the interaction between the perceiving regions of the brain and the movement planning ones. To me, the mid-brain region seems awfully small for that. I'll admit that it likely has image maps, for saccades and other reflexive reactions, but at a much lower resolution than what we consciously experience.
The problem is that we can't test the brainstem-movie hypothesis. There's no way to temporarily turn off someone's cerebrum to see if they retain any consciousness during its absence, and subsequently restore it so they can describe the experience to us. I wonder if the work currently being done with patients previously thought to be locked in, but subsequently discovered via brain scans to have some alertness, might eventually shed some light on this.
Damage to the midbrain does tend to wipe consciousness out, but so does sufficient damage to the thalamus, as well as widespread damage to the anterior cingulate and the neocortex overall. Which structures are vital supporting structures, and which are actually part of generating the experience?
"So core consciousness, also called “creature consciousness”, that feeling of being embodied and centered in a world, is a simulation and not a faithful representation of ourselves in the world."
The "feeling of being embodied" may be the original and most basic sense that makes up consciousness that came before vision and smell. The neural mapping of the body and its relation to the external world, simulation though it may be, had the adaptive advantages of regulation of the internal body, orientation of the body, particularly the mouth, and control of eating, swallowing and digestion.
Looks like we’ve drifted a bit from gonging snails, but in a most interesting way. Mike, since the overwhelmingly preferred hypothesis is that cortical tissue creates consciousness, I believe the decortication evidence, as opposed to the evidence from decerebration, is more relevant to the discussion about which brain structure creates consciousness. As such, F&M’s analysis, while certainly interesting, would appear to be of lesser relevance than Merker’s (and others’) contributions.
To back up a bit, though: my primary claim, that there’s no evidence at all for cortical consciousness hypotheses, still seems unrefuted, and that absence of evidence would seem to invalidate all of those evidence-free hypotheses at this point, particularly in light of the considerable body of evolutionary, experimental, and observational evidence supporting the minority brainstem consciousness hypothesis. Although none of that evidence taken by itself is conclusive, taken together it’s very suggestive and persuasive. Beyond the lack of evidence for cortical consciousness, I’ve recently come to believe that there is evidence against cortical consciousness theories: the lack of any explanation for the unity of conscious experience, Libet’s timing data, inexplicable but seemingly necessary propositions like “backward referral,” and so on. Cortical consciousness proponents also cannot explain why consciousness remains whole and intact following the removal of handfuls of cortical tissue and even an entire hemisphere—certainly the content of consciousness changes, but consciousness itself remains unaffected.
From an evolutionary perspective, I have a hard time believing that, over hundreds of millions of years, gazillions of living creatures metabolically quite similar to ourselves but lacking mammalian cortical structures possessed eyes without vision, noses without smells, ears without sounds, and felt no hunger, no pain, nor any proprioceptive feelings whatsoever. I also think it’s a stretch to believe that core consciousness would evolve twice on top of the established brainstem architecture—once for birds with the pallium and again for creatures with a cortex.
Perhaps the focus on the cortex is a simple consequence of its relatively easy accessibility. In contrast, as you say, “we can’t test the brainstem-movie hypothesis,” and, even should the investigative technology become available, probing and manipulating the brainstem complex would remain dangerous, both physically and morally.
That ends my summation. I’ve spent some enjoyable time lately perusing your very interesting selfawarepatterns blog, Mike, and, surprisingly, I didn’t see any references to the block universe of Relativity physics, which I expected in your Science section. I subscribed to your blog earlier today because it’s an interesting and enjoyable read, and also in the hope that you might someday post your thoughts on 4-dimensional spacetime and Einstein’s remarks about “consciousness propagating itself throughout all eternity,” should you get around to reading “Einstein’s Breadcrumbs”. Your blog is an enjoyable discovery in any case … many thanks!
Thanks Stephen. I hope I won't test your patience if I clarify a few things.
Accepting that primary consciousness happens in the cerebrum in humans (and I do think the evidence is quite strong that it does) doesn't mean accepting that species lacking a mammalian cerebrum can't have it. A lot of non-mammalian vertebrates have their sensory processing spread out between the forebrain and midbrain. For example, fish do olfaction in their telencephalon and visual processing in their optic lobes. In mammals, most of this functionality migrated to the cerebrum; in birds, to the nidopallium.
And I wouldn't dismiss the idea of convergent evolution for consciousness. I think many invertebrate species (arthropods, cephalopods, etc.) display compelling signs of primary consciousness in their behavior, even though their brain structures and evolutionary history are very different from those of vertebrates. It seems to me that once we accept that primary consciousness is adaptive, the idea that it might have evolved in independent evolutionary lines shouldn't be a hard sell.
Note that I specified "primary consciousness" above, because if we're talking about introspective self awareness, I think only a few species have that.
I have to admit I'm not familiar with the block universe concept, although I do know about general and special relativity. I do plan to take a look at your paper when I get a chance. Looking forward to our future conversations!
Mike, just a short post regarding your assertion that “... primary consciousness happens in the cerebrum in humans (and I do think the evidence is quite strong that it does)”. Since that statement is directly at odds with my assertion that the proposition is evidence-free, would it be possible for you to provide a list of documented evidence points that I can investigate?
I keep finding information like this, from “A Neuropsychoanalytical Approach to the Hard Problem of Consciousness” by Mark Solms: “Moruzzi & Magoun (1949) demonstrated decades ago—to their own surprise—that decorticated cats remain conscious. The same applies to all other animals, including humans; as Penfield & Jasper (1954) found when they concluded that human consciousness depends upon the integrity, not of cortex, but rather of the upper brainstem—of what they called the ‘centrencephalic’ region.” Solms proceeds to cite the research of Shewmon, Holmes & Byrne (1999) and Merker (2007).
I simply cannot find similar material citing evidence, strong or otherwise, to support your statement. I’ve considered the possibility that our definitions of consciousness might be substantially different, and that your statement might be a consequence of your definition of "primary" consciousness tending towards reflective consciousness rather than the affective definition I’ve provided. A look-see at the sources for the evidence you mention might help me clarify my understanding. If and when you have the time ... thanks in advance.
Hi Stephen,
I do think definitions might be involved. That's one of the problems with discussing consciousness: people are often arguing past each other. It's why I often describe a hierarchy:
1. Reflexes, primal responses to stimuli.
2. Perception, image maps, predictive models of the environment and oneself, expanding the scope in space of what the reflexes react to.
3. Attention, prioritization of what the reflexes react to.
4. Imagination, sensory and action scenario simulations, expanding the scope in time of what the reflexes are reacting to. I think it is here that reflexive reactions become affects, inclinations toward an action rather than automatic action.
5. Introspection, metacognition, self-reflection.
1 is in the brainstem. The brainstem has rough, low-resolution versions of 2 and 3, but not the high-resolution versions we actually experience directly; those are in the thalamo-cortical system, as are 4 and 5 entirely.
What's the evidence? Extensive neurological case studies of various agnosias (failures of sensory processing due to specific brain injuries) and, importantly, anosognosias (the inability to know about the lack of capability resulting from a brain injury). Anosognosia is important because, if consciousness were happening somewhere separate from the capability, the patient should be aware of their agnosia; but often they're not.
An example is hemispatial neglect, where due to cortical injury in the right hemisphere, the patient can't perceive the left side of their field of vision. It's not that they are blind on the left side (although they effectively are), but sometimes that the left side has become *inconceivable* to them.
And large scale damage to the cortex or the thalamus can snuff consciousness out just as thoroughly as damage to the midbrain region.
There are many excellent books that explore some of these case studies. Some I've read and can recommend include:
Cognitive Neuroscience, A Very Short Introduction by Richard Passingham
The Tell-Tale Brain by V.S. Ramachandran
The New Executive Brain by Elkhonon Goldberg (this focuses on the frontal lobes)
Who's in Charge? by Michael Gazzaniga
Of course, you can insist that these are all elaborations on some primal version of consciousness. But if so, I have to ask: what is that primal version supposed to be or do?
Hope this helps.
"The brainstem has rough low resolution versions of 2 and 3, but it's not the high resolution versions we actually experience directly..."
That sounds to me like you are agreeing that some low-resolution consciousness is possible with the brainstem, unless you are restricting consciousness to 4 and 5. If you are restricting it to 4 and 5, then our disagreement seems to be primarily one of definitions.
I still think it is somewhat pointless to argue over where in the brain consciousness resides. Normal, waking human consciousness (2-5 in your list) involves many parts of the brain, from brainstem to cortex. The question is what turns on the switch, and for that I am with Stephen in pointing to the RAS (reticular activating system), which controls the sleep-wake cycle and is directly involved with sensations of pain.
Jim,
Definitely a lot hinges on the definitions.
I do think we have to be careful about our intuitions here. We see an awake organism and intuitively project the full range of capabilities mentally complete humans have (1-5) when we're awake. But that's a major assumption. An organism can be awake with only 1.
I would argue that a system that lacks 4 doesn't have sentience. Feelings require a feeler. And we should remember that in humans we distinguish the conscious from the unconscious mind based on what can be introspected. Where does that leave a system with no introspective ability?
Thanks for the elaboration, Mike.
Your apparent allocation of conscious functionalities 1-3 to the brainstem, with 2 and 3 qualified as “low resolution,” locates a primary consciousness in the brainstem complex, so on that point we are apparently in agreement, as Jim points out. Merker’s hypothesis that the cortex elaborates the contents of consciousness seems to fit as well, and would account for the “high resolution” versions of 2 and 3 that humans experience. I notice that your definition is biological, which perhaps eliminates IIT and panpsychism, as it definitely does for me. Also, rather than being an extended type of consciousness, I would suggest that your number 5, “introspection, metacognition, and self reflection,” constitutes additional contents of human consciousness. I’m puzzled by the claim that there exist varying types and amounts of consciousness because, to my way of thinking, an organism is conscious or it’s not; it is the contents of consciousness that are variable.
I believe cortical consciousness theories face a serious problem here: they leave unexplained how a unified conscious presentation would be maintained between the existing, unitary subcortical consciousness and a newly developed, widely distributed cortical consciousness. The alternative, in which the cortex develops enhanced, pre-conscious images that are then transmitted to the brainstem for incorporation into its unified “display”, simply seems more straightforward and solves the unitary-presentation conundrum.
Also, it’s difficult for me to draw strong conclusions about the location of consciousness creation from agnosia/anosognosia, inattention/change-blindness, and cortical-lesion types of evidence, because the complexity of the interconnected subsystems in the human brain makes it very difficult to determine precisely what causes what. In contrast, I find the persistence of consciousness following a cerebral hemispherectomy very persuasive evidence, as is the strong experimental and observational evidence of consciousness in cortically deficient animals and humans. In those cases, consciousness remains intact and cannot possibly be ascribed to cortical functionality. Certainly the content of consciousness is significantly reduced, but that’s once again explained by the “cortical elaboration of conscious contents” hypothesis.
That being said, both cortical and subcortical theories are still on the table. I don’t expect a resolution anytime soon.
For Jim: I believe the anatomical location of consciousness is important to our eventual discovery of precisely how conscious feelings are generated—we'll never find out how if we're looking in the wrong place. That knowledge will be required prior to any attempt to create Artificial Consciousness, a moral hazard to be sure, but truly irresistible.
My "what causes what" comment is incorrect, as I've just rediscovered in Merker's excellent paper, which I'm revisiting. Apparently, visual experience in the absence of enhanced cortical processing is limited to "the ability to orient to and approach the location of moving visual stimuli in space" but subcortically generated visual experience decidedly exists:
"Complete removal of the posterior visual areas of one hemisphere in the cat (parietal areas included) renders the animal profoundly and permanently unresponsive to visual stimuli in the half of space opposite the cortical removal (Sprague 1966; see also, Sherman 1974; Wallace et al. 1989). The animal appears blind in a manner resembling the cortical blindness that follows radical damage to the geniculostriate system in humans. Yet inflicting additional damage on such a severely impaired animal at the midbrain level restores the animal’s ability to orient to and to localize stimuli in the formerly blind field (Sprague 1966; cf. Sherman 1977; Wallace et al. 1989)."
and ...
"... adding a small amount of damage in the brainstem to the cortical damage “cures” what appeared to be a behavioral effect of massive cortical damage. The restored visual capacity is limited essentially to the ability to orient to and approach the location of moving visual stimuli in space (Wallace et al. 1989). Visual pattern discrimination capacity does not recover after the midbrain intervention (Loop & Sherman 1977), though the midbrain mechanism can be shown to play a role even in such tasks (Sprague 1991)."
Here’s a most interesting development related to the “where is consciousness created” discussion. For those who support any cortical consciousness hypothesis, this recent article at https://www.eurekalert.org/pub_releases/2018-10/tu-sgf101518.php should raise some concerns.
From the article:
“A team of Tufts University-led researchers has developed three-dimensional (3D) human tissue culture models for the central nervous system that mimic structural and functional features of the brain and demonstrate neural activity sustained over a period of many months. … The new 3D brain tissue models overcome a key challenge of previous models—the availability of human source neurons.”
and:
“The researchers are looking ahead to take greater advantage of the 3D tissue models with advanced imaging techniques, and the addition of other cell types, such as microglia and endothelial cells, to create a more complete model of the brain environment and the complex interactions that are involved in signaling, learning and plasticity, and degeneration.”
If you subscribe to any cortical consciousness hypothesis, the experimental creation and maintenance of human neural tissue in this way might be a genuine moral hazard—could these “3D tissue models,” or the more complete and sophisticated "organoids" of future experiments be conscious in any way? Could they be experiencing pain?
Stephen,
Based on everything I've read, any section of cortical tissue in isolation wouldn't by itself be conscious (unless you're a panpsychist). Even the entire neocortex, without the supporting sub-cortical structures, wouldn't by itself be conscious. The feeling of pain seems to require the anterior cingulate cortex, but only as the convergence and culmination of a lot of signalling from supporting structures, both cortical and sub-cortical.
That said, I do think that if enough of the brain were simulated, including emulating all those supporting structures and processes, that the simulation could be conscious, feel pain, etc, but we're a long way off from achieving that (decades, possibly centuries).
Mike, I believe you’re suggesting that the cortical creation of consciousness cannot happen without support of some sort from sub-cortical structures. If, however, the cortex is the brain structure that resolves and then somehow “displays” conscious contents, as cortical consciousness hypotheses propose, then, without contributions from other brain subsystems, it isn’t the experience of pain that’s impossible but the orderly experience of pain. A laboratory cortex cannot generate an orderly experience, because the sensory input that normally precedes such an experience is absent. But, if cortical consciousness hypotheses are correct, a "cortex in a vat" generating disorderly experiences of pain—hallucinations of pain and, indeed, hallucinations of every kind—seems unavoidable.
Stephen,
When considering this question, I think we have to remember that at some point a conscious system must reduce to non-conscious components. Most of us wouldn't argue that an individual neuron in isolation is conscious. (Not to mention the individual proteins and other components of the neuron.) So consciousness comes into being through the interaction of components. (If this seems a bit too mystical, consider that the program you're using to access this web site comes into being in the same manner.)
Considering the neocortex overall, I think we first have to specify whether the thalamus is also present. Without it, most of the communication between the regions of the cortex can't happen. The thalamus serves as a communication hub and also likely modifies some of the signals. I don't think the cortex can be considered separate from it.
Would the thalamo-cortical system in isolation be conscious? It's worth considering what would be missing. Without the reflexive survival circuits in the brainstem, there wouldn't be any underlying cause of emotional feelings, including the sense of self. Without the hippocampus, there'd be no sense of location or ability to form new memories. Without the claustrum, it's possible brain waves throughout the neocortex wouldn't be in sync (technically making it as necessary as the thalamus).
So would the thalamo-cortical-claustrum system maybe be conscious? If so, it seems like it would be an unfeeling type of consciousness, one without sentience.
I'm not a fan of IIT, but I think it gets one thing right. Consciousness is, among other things, a nexus of integrated information. But a nexus can't be a nexus without the surrounding material.
You discuss the possibility of an unordered consciousness. I could see that if enough of the system were present and functional, but if not, I don't think the resulting fragmented signalling would be anything we'd be tempted to call "conscious".
Sorry for the long rambling answer, but I find this stuff fascinating and can ponder it all day.
Perhaps we are talking about two different things, Mike. You seem to be discussing the subsystem interactions that resolve the contents of consciousness, while I’m specifically referring to the neural cell assemblies that ultimately “display” conscious “images”, i.e., the specific brain tissue whose activity actually results in a feeling of pain, for instance.
Are you proposing that a feeling is “displayed” by all of those distributed, interconnected brain structures you mention operating together? You wrote “… there wouldn't be any underlying cause of emotional feelings.” I’m referring instead to the presumably localized functionality that creates a conscious “display” of a feeling without any underlying cause or preprocessing—a feeling of pain, for instance, produced solely by the specific cortical tissue arranged to “display” a pain feeling.
I agree that in the normal, orderly case there’s processing that precedes a feeling—that’s what I call an “orderly” production of a feeling. But my concern for cortical consciousness theorists (and they are legion) is about the possibility of self-assembling cortical organoids in the laboratory “displaying” conscious “images”—particularly painful ones—without any of that normal preprocessing.
I’m trying to understand your conception—are you proposing that the actual “display” of a single resolved feeling is some sort of electronic “field” (or something similar) that’s widely distributed across brain subsystems? If that’s your view, how would such complex, interdependent functionality evolve? It seems consciousness would then be an all-or-nothing proposition, requiring the several interconnected brain subsystems you’ve identified to be fully developed and fully functioning before the first feeling was felt.
Stephen,
I think I need to re-emphasize the point above. Wherever it happens, the experience of the "display" is a mechanism, and that mechanism must itself have components, components that ultimately are not themselves conscious. To believe otherwise is to think that some component of the brain is an antenna of some type for an immaterial aspect of the mind. Under physicalism, sooner or later consciousness must reduce to non-conscious components.
So even if the display is experienced in the midbrain region, it would still have large numbers of neurons involved, with the experience of the display arising from the interaction of those neurons. If you think a broader collection of components requires a ghostly electromagnetic "field", then you'd have to think the same thing of this more compact region.
I think the reality is that the impression of the "display" comes from action planning regions in the brain accessing the mental imagery held in the perceiving regions. So when you see a tree, the frontal lobe receives an impression of "treeness" from the posterior association cortex. When you notice the color of the bark, it comes from the prefrontal cortex deciding to focus on that and retrieving the information from the visual cortex.
Our impression of a visual display comes from the frontal lobes retrieving information from the perception cortices in an ongoing interaction. The frontal lobes can only retrieve a very small part of that information at any one time, but the interactions come thick and fast, adding up to the impression of a theater-like experience. No electromagnetic field required.
You cannot imagine how well this article fits in with where my mind is now.
"Snails may have opioid responses and mussels release morphine when confronted with noxious stimuli. Both reactions suggest that these animals do, in fact, feel pain."
That's really all you need to confirm to tell yourself that, yes, it was wrong to torture snails when you were seven, and yes, you should feel a little guilty about it (which is nature's way of encouraging better behavior).
I am interested in pondering the lack of remorse / sadism found almost exclusively in males in the West, and I would bet most are enacting some sort of childhood trauma (at best, not being encouraged to express yourself emotionally, not being given tools to regulate emotions or self-soothe, etc.; at worst, emotional and/or physical neglect or abuse).
As far as the debate on snails' consciousness goes: I have a couple as pets (which was why I looked this article up ;)) and to me they are much more conscious than plants (which, I would argue, have a form of consciousness). Self-awareness and consciousness aren't the same thing.
I read an article by a dude who posts here:
https://selfawarepatterns.com/2023/03/14/qa-on-the-mind-object-identity-hypothesis/
The bit about brains "carving out" subsets of the world is great.
Incredible stuff and very much resonates with me.