I've just finished reading Simona Ginsburg and Eva Jablonka's tome on consciousness in non-human animals, The Evolution of the Sensitive Soul. It is an impressively wide-ranging work, covering huge swaths of philosophy, biology, and psychology across many different species. (For an article-length version of their view, see here.)
Ginsburg and Jablonka's central idea is that consciousness (i.e., phenomenal consciousness, subjective experience, being an entity that there's "something it's like" to be) requires something they call Unlimited Associative Learning. They argue that we see consciousness and Unlimited Associative Learning in vertebrates, in at least some arthropods (especially insects), and in some mollusks (especially cephalopods), but not in other mollusks (e.g., sea hares) and not in most other animal phyla (e.g., annelids such as earthworms or cnidarians such as jellyfish). If you wonder -- as I do -- where we should draw the line between animal species with consciousness and those without, theirs is one of the most interesting and well-defended proposals.
I'm not convinced, for two broad reasons I discuss here and here. I think all general theories of consciousness suffer from at least the following two epistemic shortcomings. First, all such theories beg the question, right from the start, against plausible views endorsed by leading researchers who see consciousness as either much more abundant or much less abundant in the universe (e.g., panpsychism and Integrated Information Theory on the abundant side, theories that require sophisticated self-representation on the sparse side). Second, all such theories are ineliminably grounded in human introspection and verbal report, creating too narrow an evidence base for confident extrapolation to very different species.
But today I don't want to focus on those broad reasons. As regular readers of this blog know, I love snails. So I was interested to note that Ginsburg and Jablonka specifically highlight two genera of terrestrial gastropod (the Limax slug and the Helix snail) as potentially in the "gray area" between the conscious and nonconscious species (p. 395). And I think that if you pull a bit on the thread they leave dangling here, it exposes some troubles specific to their theory.
Ginsburg and Jablonka's view depends essentially on a distinction between Limited Associative Learning and Unlimited Associative Learning. Associative learning, as you might remember from psychology class, is the usual sort of classical and operant conditioning we see when a dog learns to salivate upon hearing a bell associated with receiving food or when a rat learns to press a lever for a reward. Unlimited Associative Learning (UAL), as Ginsburg and Jablonka define it, "refers to an animal's ability to ascribe motivational value to a compound stimulus or action pattern and to use it as the basis for future learning" (p. 35, italics added). Unlimited Associative Learning allows "open-ended behavioral adjustments" (p. 225) and "has, by definition, enormous generativity. The number of associations among stimuli and the number of possible reinforced actions that can be generated are practically limitless" (p. 347). An animal with Limited Associative Learning (LAL), in contrast, can only associate "simple ('elemental') stimuli and stereotypical actions" (p. 225).
Immediately, one might notice the huge gap between Limited Associative Learning (no learning of compound stimuli, no stringing together of compound actions) and truly open-ended, truly "unlimited" Unlimited Associative Learning with full generativity and "practically limitless" possibilities for learning. Mightn't there be some species in the middle, with some ability to learn compound stimuli, and some ability to string together compound actions, but only a very limited ability to do so, far, far short of full combinatorial generativity? For example... the garden snail?
Terrestrial snails and slugs are not the geniuses of the animal world. With only about 60,000 neurons in their central nervous systems, you wouldn't expect them to be. They don't have the amazing behavioral flexibility and complex learning abilities of monkeys or pigeons. There's not a whole lot they can do. I'd be very surprised, for example, if you could train them to always choose a stimulus of intermediate size between two other stimuli, or if you could train them to engage in long strings of novel behavior. (Certainly, I have heard no reports of this.) But it does seem like they can be trained with some compound stimuli -- not simply "elemental" stimuli. For example, Limax slugs can apparently be trained to avoid the combined scent of A and B while remaining attracted to A and B separately (Hopfield and Gelperin 1989) -- compound stimulus learning. Terrestrial gastropods also tend to have preferred home locations and home ranges, rather than always moving toward attractive stimuli and away from unattractive stimuli in an unstructured way, and it is likely (but not yet proven) that their homing behavior requires some memory of temporally or spatially compound olfactory and possibly other stimuli (Tomiyama 1992; Stringer et al. 2018).
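(To make the contrast vivid, here's a minimal sketch -- my own toy illustration, not anything from Ginsburg and Jablonka -- of why the Limax result seems to exceed purely elemental association. Avoiding the compound scent A+B while remaining attracted to A and B separately is a "negative patterning" task, formally the XOR problem for a linear associator: a simple delta-rule learner whose prediction for a compound is just the sum of its elements' associative weights can't solve it, whereas the same learner with one added "configural" cue that fires only for the compound can.)

# Negative patterning with a delta-rule associative learner.
# A toy illustration, not Ginsburg and Jablonka's model: cue A alone and
# cue B alone are attractive (+1); the compound A+B is aversive (-1).

def train(trials, configural=False, lr=0.2, epochs=500):
    # Associative weights for cue A, cue B, and an optional "configural"
    # cue that fires only when A and B occur together.
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (a, b), target in trials:
            ab = a * b if configural else 0.0       # configural cue: on only for the compound
            pred = w[0] * a + w[1] * b + w[2] * ab  # summed associative strength
            err = target - pred                     # delta-rule prediction error
            w[0] += lr * err * a
            w[1] += lr * err * b
            w[2] += lr * err * ab
    return w

trials = [((1, 0), +1.0),  # scent A alone: approach
          ((0, 1), +1.0),  # scent B alone: approach
          ((1, 1), -1.0)]  # compound A+B: avoid

for configural in (False, True):
    w = train(trials, configural)
    label = "configural" if configural else "elemental"
    for name, (a, b) in [("A", (1, 0)), ("B", (0, 1)), ("A+B", (1, 1))]:
        ab = a * b if configural else 0.0
        print(f"{label:10s} {name:4s} prediction: {w[0]*a + w[1]*b + w[2]*ab:+.2f}")

# The elemental learner's compound prediction is forced to equal V(A) + V(B),
# so it can never make A and B attractive while making A+B aversive; all its
# predictions end up stuck near zero. The configural learner converges to
# V(A) = V(B) = +1 and V(A+B) = -1 -- the pattern Hopfield and Gelperin report.

If slugs really can master this discrimination, then on this sort of model they are doing at least some configural learning -- which is exactly the kind of middle ground I'll worry about below.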
Nor is it clear that even rat learning is fully generative and compoundable. As Ginsburg and Jablonka acknowledge (p. 303), in the 1960s John Garcia and Robert A. Koelling famously found that although rats could readily be trained to associate audiovisual stimuli with electric shock and gustatory stimuli with nausea, the reverse associations (audiovisual with nausea and gustatory with shock) were much more difficult to establish.
Between, on the one hand, "Limited Associative Learning", which is noncompound and reflex-like, and, on the other hand, fully compoundable, fully generative "Unlimited Associative Learning" stands a huge range of potential associative abilities, which, with intentional oxymoron, we might call Semi-Unlimited Associative Learning. Ginsburg and Jablonka's system does not leave theoretical space for this possibility. Terrestrial gastropods might well fall smack in the middle of this space, thus suggesting (once again!) that they are the coolest of animals if you are interested in messing up philosophers' and psychologists' neat theories of consciousness.
[image source: Platymma tweediei]
Why do people think consciousness has anything to do with learning? I don't see why you couldn't be conscious but a poor student -- never learn anything ever.
Second question -- why do slugs drown themselves in beer? It's very fin-de-siècle for something that seems otherwise not to be much of an aesthete. (Do you know about this?)
First question: Ginsburg and Jablonka don't, I think, make a good case for this, other than pointing out that consciousness of stimuli facilitates learning in human beings. Ultimately, I think one of the strengths of UAL is that it's a simple theory relating consciousness to testable behavior and that it gets the line between conscious and nonconscious species about right *if* you are someone attracted to a view on which consciousness is moderately abundant: present in lizards but not in roundworms.
Second question: Slugs seem to like the smell of beer, and they're not very clever about getting out of puddles unless it's clear from tactile information what the best path out is (feeling an upward slope). Their sight is extremely limited, so they probably can't see the higher ground in the distance and aim for it.
It's not just learning, but the type of learning, and what kind of capabilities are necessary for its presence. Global learning, such as UAL if I'm understanding correctly, demonstrates that the animal can learn to make cause and effect predictions, indicating that it has a world model, a model which has to include at least its bodily self.
Note that the predictions must be learned ones; otherwise they're really just reflexes, which might be described as evolution making predictions, but not the animal itself.
This book looks extremely interesting. Thanks for highlighting it, Eric! Just added it to my Kindle. I had heard about UAL but struggled to understand it. The nice thing about these types of books is that, even if you don't buy the authors' specific views on consciousness, you usually learn a lot about the evolution of nervous systems. Another excellent book along these lines is Feinberg and Mallatt's The Ancient Origins of Consciousness.
Anyway, this always gets to the difficulty of actually defining consciousness. I'm not sure there is any one simple definition that addresses all our intuitions, which aren't always consistent. Indeed, lumping animal cognition under "consciousness" often risks oversimplifying a vast spectrum of capabilities across species and evolutionary time.
I’m with Ginsburg and Jablonka on this. From the abstract:
"We define and describe UAL at the behavioral and functional level and argue that the structural-anatomical implementations of this mode of learning in different taxa entail subjective feelings (sentience)."
Note that phenomenal consciousness shouldn't require Unlimited Associative Learning, but conversely something that does have a phenomenal element should thus have the potential for such learning. Why? Because instead of just non-conscious programming (or the Limited Associative Learning of a "robot"), there should also be a teleological dynamic based upon its motivation to feel good rather than bad. Purpose!
Clearly it’s difficult for us to decide if certain strange organisms function with absolutely no phenomenal element, or additionally have a touch of this as well. But notice that there are only two potential options here — either non-conscious, or non-conscious plus conscious. So it could be that snails are amazing robots, or amazing robots that are also armed with a phenomenal element. Our confusion about which shouldn’t mean that a middle exists as well, or at least not ontologically.
I believe that science will get “consciousness” essentially straightened out some day, though not until after philosophy is able to help improve science.
Thanks for the continuing comments folks!
Let me add some nuance to my earlier reply to Kaplan Family: Part of their case for the value of learning is its theoretical relationship with other sorts of sophisticated cognition. Philosophically, maybe we can imagine a cognitively complex system with simply no flexible learning at all, but practically speaking cognitive complexity is going to have to travel with sophisticated learning capacities -- so UAL fits into a general pattern of moderate cognitive sophistication which plausibly co-occurs with consciousness.
SelfAware: You write that UAL shows "cause and effect predictions, indicating that it has a world model, a model which has to include at least its bodily self". I'm not sure UAL implies cause and effect predictions -- it could be something more like Humean association. It is somewhat hard to imagine UAL without the capacity to learn about one's body and its environmental embeddedness, though, so there's some harmony with Merker's influential view on this, which they note.
P Eric: Ginsburg and Jablonka do sometimes note that they can't definitively establish lack of consciousness in animals that lack UAL. I find them a bit hard to read on this. In some places, it seems like they want to boldly draw the line between LAL and UAL, saying that consciousness goes with the latter but not the former. But if you put a lot of weight on these qualifications, then their view is consistent even with panpsychism, which is not how it generally reads at a surface and summary level.
So Ginsburg and Jablonka advocate a view which is consistent with panpsychism? Well, that's troubling. Regardless, I don't mind boldly drawing a line at phenomenal experience to distinguish between evolved creatures with limited associative learning and those with unlimited associative learning, though my ideas reject panpsychism explicitly.
As I understand it, in academia today the most popular way to account for phenomenal experience is "software" based (contra John Searle). I consider this to conflict with my own metaphysics of naturalism, however. Instead I believe that there must be mechanisms in the brain which neuron function animates in order to produce phenomenal experience (somewhat as the computer that I'm working on now animates its screen).
And what brain mechanism might produce phenomenal experience? I suspect that this occurs by means of the electromagnetic radiation associated with neuron firing. Are you aware of such speculation, professor? It's most prominently championed by a UK molecular geneticist by the name of Johnjoe McFadden.
Regardless of the specific phenomenal experience brain mechanism however, here I’m able to make a sharp distinction between limited and unlimited associative learning, and without resorting to the notion that causal properties exist in information beyond such conveyance, or that all matter possesses a phenomenal element.
Thanks for the continuing comments, Eric! Ginsburg and Jablonka are skeptical of panpsychism and (in personal communication) have clarified that they think it's probably not the case that even organisms with only Limited Associative Learning are conscious, though they don't entirely rule out the possibility of LAL consciousness.
As for brain mechanisms... I'm skeptical enough to think it's far too early to have a confident opinion about that, but the closest there is to a majority or plurality view might be something like long-range neural networks with recurrent processing.
Professor,
I notice from the timestamp on your recently provided video that you did postgraduate work at UC Berkeley under no less than John Searle! I don't know how close you were or are with him, though I'm quite impressed that he remains a non-ignorable "fly in the ointment" for the plurality view that phenomenal consciousness is an information-based dynamic sans mechanisms. It's disheartening to me how few have picked up his torch.
One obstacle may be that his elaborate Chinese room thought experiment is based upon “understanding” a given language. This might be too fuzzy an idea for the point to sufficiently hit home. Given that our computers can do various things based upon English sentences typed into them, do they not “understand” as well?
Consider my own version, however:
When my thumb gets whacked, it's presumed that information about this event is transferred to my brain through nerves, and that my brain then does various things which cause me to feel what I know of as "thumb pain". But if thumb pain exists by means of associated information processing alone, then it stands to reason that symbols on paper which correlate with the information provided to my brain could, with a sufficient database, be converted into other symbols on paper to thus produce something which feels what I do when my thumb gets whacked! Is this not ridiculous? Symbols on paper processed into other symbols on paper, which thus create "thumb pain"?
Just as the computers that we use cannot provide us with images when they aren’t hooked up to “viewing mechanisms”, I believe that the information which my brain processes to cause my thumb pain, can only do so by means of “phenomenal mechanisms”. If that’s the case (and to avoid ridiculousness, it sure seems so), then what brain mechanisms might sufficiently preserve the information associated with neuron firing? The only answer I know of that seems like it might be sufficient, would be the electromagnetic fields created by neuron firing.
Yes -- it is hard to imagine how symbol manipulation could give rise to genuine pain. The fact that it's hard to imagine is, I think, some evidence but not decisive evidence that it's impossible.
Thanks, professor! Your granting that I'm presenting "some evidence" that pain is not produced by means of symbol manipulation alone is more than I expected. I wondered if you'd find various holes in my argument (not that others have so far). And indeed, I'm not even suggesting that there's an "impossibility" associated with the most popular phenomenal experience explanation in science today. Gods aren't impossible either. I'm merely suggesting that an "information processing alone" answer violates the metaphysics of causality, just as gods do. Events simply should not occur in a natural realm without causal mechanisms from which to incite their existence. Asserting that phenomenal experience is the unique element of reality which exists as information beyond how it's provided, to me seems quite spooky. (What else do we consider to exist in the form of information alone? Maybe legal entities like a novel or patent, though they're a product of human agreement, or what could be considered a higher-order form of mechanism.)
It seems to me that if modern scientists (such as Steven Pinker, for example) were to realize that they've been championing a supernatural position, some would then be motivated to cultivate their ideas beyond just information processing in order to develop natural positions. Just as the information processing of the computers that we build is only effective when it's permitted to animate mechanisms (like the function of a robot, or a computer monitor), in order for the information processing of the brain to produce phenomenal experience, these scientists should begin looking for "phenomenal mechanisms" associated with neuron firing.
We know that neuron firing creates electromagnetic radiation -- that's what scientists often use to monitor the function of neurons. So rather than generic information processing alone, couldn't phenomenal experience exist by means of the electromagnetic radiation associated with some neuron-based processed information? Or perhaps various other mechanisms, though surely in a causal realm, symbols on paper which are processed into other symbols on paper should not even theoretically create phenomenal experience.
Your view, please: associations are from imagination, or imaginations are from associations, or...
I'd think you could have associations without the sort of thing we think of as "imagination" but probably not vice versa.
Then an image of the mind could be different than an image of the imagination...
...a way to relate for us, sight and sound for our minds, probably...
At "Stanford Encyclopedia of Philosophy", we find "Phenomenal Intentionality" by/at David Bourget , Angela Mendelovici . First published Mon Aug 29, 2016; substantive revision Tue Jan 29, 2019...
..."Phenomenal intentionality is a kind of intentionality, or aboutness, that is grounded in phenomenal consciousness, the subjective, experiential feature of certain mental states. The phenomenal intentionality theory is a theory of intentionality according to which there is phenomenal intentionality, and all other kinds of intentionality at least partly derive from it. In recent years, the phenomenal intentionality theory has increasingly been seen as one of the main approaches to intentionality."...
...philosophy and psychology are wonderful approaches to intentionality; and even theory, probably-maybe, with some luck, might lead, one, to a moment of their own phenomenal presence...
Philosopher E: One view is that what makes something a symbol has to do (in a naturalistic way) with its causal history and the causal relations that it is apt to enter into -- and maybe consciousness depends on those things too. This is one way to make a symbol-based view not "supernatural".
I agree that symbols cannot become so without an apt causal history, professor. Each term that we English speakers use, for example, should have a history by which symbolic meaning has been gained. I'm not satisfied with the recursion associated with proposals for consciousness by means of symbols, though. If it takes a product of consciousness to create consciousness, then in the beginning there would need to be something like a teleological god armed with symbols! Of course we naturalists presume something more like "a blind watchmaker" at work, which is to say that symbols needn't exist at all for consciousness to exist. I now see, however, that to improve my thought experiment, I'll need to replace my "symbol" references with a term associated with non-conscious function, which is to say "information".
Furthermore, this gets me thinking about how different my thought experiment happens to be from Searle's. In his, Chinese symbol questions are manually processed into appropriate Chinese symbol responses through the database of a computer which is able to convincingly pass the Turing test. If manual database processing occurs by means of a person who can't read Chinese, how might anything here "understand" the resulting conversation? Some naturalists say that the actions of this person effectively exist as "understanding", while others aren't able to swallow such a jagged pill, but instead resort to various excuses in order to maintain that any computer which convincingly passes the Turing test thus "understands" what's said.
In my own thought experiment however, if the information which is fed to my brain when my thumb gets whacked is somehow represented on paper, and is also processed by means of the AND, OR, and NOT gates associated with neuron function into yet more information on paper (though here something other than neurons would be converting the input information into output information, and I don’t mind leaving the whole thing for an advanced computer to take care of), could a result be something which experiences what I know of as “thumb pain”? Could information laden paper which is computationally processed into more information laden paper, produce such a phenomenal experience?
While I consider a “Yes” answer to mandate a void in causal dynamics, there’s but one extra step that I consider needed in order to render the scenario here “natural”. If the processed paper were fed into a machine that was set up to convert its information into phenomenal experience, which is to say a machine armed with such physics, then I do believe that something here could feel what I do when my thumb gets whacked. And what sort of physics might produce phenomenal experience? The radiation associated with neuron firing does seem likely to me.
Arguably, information is just causation. There need not be a conflict between a causal model and an information-based model of consciousness.
I don't mean to imply that information processing has no causal properties whatsoever, professor. If that were the case then my computer simply wouldn't function. But apparently every bit of information processing that it does do is only of potential use once it's permitted to animate associated mechanisms. Regardless of how much image processing occurs, no screen pixels will light up when no screen is permitted to implement such information.
If anyone knows of a service or good which the information processing of a computer provides in itself, which is to say without animating associated output mechanisms, then I'd be pleased to learn of it. The only such proposal that I'm currently aware of is that brains theoretically produce phenomenal experience this way. This seems about as convenient as proposing that there must instead be a second kind of stuff responsible. But at least Chalmers and Descartes have acknowledged the metaphysical implications of their proposals.
At 40 years of age, Searle's classic but widely flouted thought experiment seems long in the tooth. Perhaps it's time for another version? If certain information on paper were converted into certain other information on paper, then from the standard position the conversion process itself would result in something that experiences "thumb pain". It seems to me that if many of the naturalists who've been so indoctrinated were to realize the implications of their premise, then some would decide that in a natural world an output mechanism must exist additionally.
Your snails are fun, professor, though if at some point you were to begin repeating this thought experiment yourself, it seems to me that a very important conversation for the science and philosophy of mind might result.