Thursday, March 28, 2019

Against the Mind-Package View of Minds

(adapted from comments I will be giving on Carrie Figdor's Pieces of Mind, at the Pacific Division meeting of the American Philosophical Association on Friday, April 19, 9-12)

We human beings have a favorite size: our own size. We have a favorite pace of action: our own pace. We have a favorite type of organization: the animal, specifically the mammal, and more specifically us. What’s much larger, much smaller, or much different, we devalue and simplify in our imaginations.

It’s true that we’re great. Wow, us! But we tend to forget that other things can also be pretty amazing.

So here’s a naive picture of the world. Some things have minds, and other things don’t have minds. The things with minds might have highly sophisticated minds, capable of appreciating Shakespeare and proving geometric theorems, or they might have simpler minds. But if an entity has a mind at all, then it has sensory and emotional experiences, preferences and dislikes, plans of at least a simple sort, some kind of understanding of its environment, an ability to select among options, and a sense of its location and the boundaries of its body. Let’s call this the Mind Package.

Everything that exists, you might think, is either a thing that has the whole Mind Package or a thing that has no part of the Mind Package. Stones have no part of the Mind Package. They don’t feel anything. They have no preferences. They make no decisions. They have no comprehension of the world around them. There’s nothing it’s like to be a stone. Dogs, we ordinarily assume, do have the Mind Package. My own dog Cocoa enjoys going on walks, prefers the bucket chair to the recliner, gets excited when she hears my wife coming in the front door, and dislikes our cat.

[A recent picture of some of my favorite biological entities. Can you guess which ones have the Mind Package?]

Now it could be the case that everything in the world either has the Mind Package or doesn’t have it, and if something has one piece of the Mind Package, it has all the pieces. Intuitively, this is an attractive idea. What would it be to kind of have a mind? Could a creature have full-blown desires and preferences but no beliefs at all? Could a creature be somewhere between having experiences and not having any experiences? This seems hard to imagine. It’s much easier to think that either the light is on inside or the light is off. Either you’ve got a stone or you’ve got a dog.

But there are a couple of reasons to suspect that the lights-on/lights-off Mind Package view is too simple.

The first reason to be suspicious is that the world is full of slippery slopes. In fetal development and infant development, biological and cognitive complexity emerges gradually. But if you’ve either got the whole package or you don’t, then there must be some moment at which the lights suddenly turned on and you went, in a flash, from being an entity without experiences, preferences, feelings, comprehension, and choice, to being an entity with all of those things. In the gradual flow of maturation, when could this be? Likewise, if we assume, at least for a moment, that jellyfish don’t have the Mind Package but dogs do, similar trouble looms: Across species there’s a gradual continuum of capacities, not, it seems, a sudden break between lights-on and lights-off animals. (Garden snails are an especially fascinating problem case.)

This leads to a second reason to be suspicious of the Mind Package view. As Carrie Figdor emphasizes, bacteria are much more informationally complicated than we tend to think. Plants are much more informationally complicated than we tend to think. Group interactions are much more informationally complicated than we tend to think. The relations of parasite, host, and symbiont are much more informationally complicated than we tend to think. The difference is smaller than we usually imagine between things of our favorite size and pace and other things. The biological world is richly structured with what looks like sophisticated informational processing in response to environmental stimuli. When scientists need to model what’s going on in plants and bacteria and neurons and social groups, they seem to need terms and concepts and models from psychology: signaling, communication, cooperation, decision, memory, detection, learning. Structures other than those of our favorite size and pace seem to show the kinds of informational interactions and responsiveness to environment that we capture with psychological words like these.

Furthermore, there’s no general reason to think that systems usefully described by some of these psychological terms need always also be usefully described by other of these terms. If a scientific community starts to attribute memories or preferences to the entities they research, it doesn’t follow that they will find it fruitful also to ascribe sensory experiences, feelings, or a sense of the difference between body and world. Different aspects of mentality may be separable rather than bundled. They don’t need to stand or fall as a Package. To paraphrase the title of Carrie’s book, the Mind comes in Pieces.

Philosophers of mind love to paint their opponents as clinging to the remnants of Cartesianism. Should I alone resist? The Mind Package view is a remnant of Cartesianism: There’s the Minded stuff, which has this nice suite of cognitive and conscious properties, all as a bundle, and then there’s the non-Minded stuff which is passive and simple. We ought to demolish this Cartesian idea. There is no bright line between the fully and properly Minded and the rest of the world, and there is no need for cognitive properties to all travel on the family plan.

The Mind Package view has a powerful grip on our intuitions. We want to confine “the mental” to privileged spaces – our own heads and the heads of our favorite animals. But if the informational structures of the world are sufficiently complex, this intuitive approach must be jettisoned. Mental processes run wide through the world, different ones in different spaces, defying our intuitive groupings. This radically anti-Cartesian view is the profound and transformative lesson of Carrie’s book, and it takes some getting used to.

If Carrie’s radically anti-Cartesian view of the world is scientifically correct, there are, then, pragmatic grounds to prefer a broad view of the metaphysics of preferences and decisions, according to which many different kinds of entities have preferences and make decisions. It is the view that better respects the evidence that we are continuous with plants, worms, and bacteria, and that the types of patterns of mindedness we see in ourselves resemble what’s happening in them, even if such entities don’t have the whole Mind Package.

Goodbye, Mind-Package rump Cartesianism!

-----------------------------------------------------

Related:

Do Neurons Literally Have Preferences (Nov 4, 2015)

Are Garden Snails Conscious? Yes, No, or *Gong* (Sep 20, 2018)

10 comments:

Lee Roetcisoender said...

Touché Eric... I agree that we should jettison the Cartesian model. But why stop at the threshold of biological forms? There is something "magical" that is responsible for both causality and consciousness, that is a certainty. Could that magic be contained within the construct of mind? This is the fulcrum point where idealism runs off the track. Mind may indeed be a feature and/or intrinsic to the ontological primitive, nevertheless, mind is not the underlying qualitative property as such. The underlying qualitative property is much simpler, all encompassing and more direct than mind. Mind processes information, but if the "thing-in-itself" is the information, mind would not be a qualitative property as such, but merely a feature and/or attribute.

A revision to Kant's transcendental idealism model where the paradox of the "thing-in-itself" being unknowable is no longer hidden would transform the current model to "Transcendental Idealism revision 1.0". This new model could then be utilized to understand both causality and consciousness, rewriting the entire landscape.

Eric Schwitzgebel said...

As far as I see, there's no need to constrain mentality to the biological (as the biological is ordinarily conceived). I do have to confess, though, that I'm pretty skeptical of grand theories that profess to explain the whole story!

David Duffy said...

Helen Keller's autobiography doesn't talk that much about her mental experiences before she learnt to communicate (complicated by what she may or may not have retained from before age 2), but she does describe that famous single moment when the existence of "discursive propositional intentional content" is revealed to her. A lot of people think minds require language, and that the gap to dumb animals is qualitative.

SelfAwarePatterns said...

Well said, and I like the term "mind package".

It's interesting that most people in mind studies explicitly eschew Cartesian dualism, and yet implicitly hold on to dualistic notions, such as the mind package, the ghost in the machine, consciousness as something that "arises" from the brain as though it's an ectoplasmic force separate and apart from what the brain does.

But accepting that substance dualism is false means accepting that the mind is a suite of capabilities, a composite phenomenon, and that it can be present in greater or lesser extents, with some aspects there while others are absent. Anyone who studies the neurological case studies of brain injured patients quickly learns that this is so.

Of course, what counts as a mind is a matter of definition. No matter which definition we choose, nature will almost certainly throw edge cases at us to challenge it.

Eric Schwitzgebel said...

SelfAware: Thanks for your comment. Nicely put!

David Duffy: Yes, that's possible. My own inclinations are not to privilege language so much and to think we overestimate the gulf between us and other creatures. But this is one of those broad perspective issues that's very hard to settle!

Philosopher Eric said...

Interesting post professor. Another way to say this is that even though the “lights are off” for the stone, and “on” for me, there is no single difference between us but rather a gradation. Does a single photon count as “turning the lights on”? Well it could be defined that way, though from this view it takes a considerable number of them in order to count at all — the more photons, the more “light”.

I have nevertheless developed an “on/off” consciousness model, though it certainly avoids the “mind-package” path. Here the brain is a non-conscious computer, and it may or may not output a teleological form of computer, or consciousness. Per the above analogy, the “photon” here is sentience. I consider this to essentially be “the fuel” which drives the conscious form of computer.

You can test out this idea right now if you like. The more pain that you are in, for example, the more conscious that you should be according to this model. Conversely with a perfect elimination of sentience, I think it’s useful to say that consciousness no longer exists. (Of course that’s a great time for surgery!)

I theorize that at some point in the past there were non-conscious brains that began to produce sentience, and that this went on to produce various conscious forms of life such as the human.

-P Eric

Eric Schwitzgebel said...

Thanks for the comment Prof Eric! The photon analogy is interesting. Searle has a similar analogy to money: You can have more or less of it, but if you have a single penny, you have money, and that is discretely different from having no money at all.

Philosopher Eric said...

Coincidentally my friend Steven recently accused me of choosing the “Philosopher Eric” pseudonym to misrepresent myself online as a distinguished person. If he’s paying attention to this post then I’d expect a big “HA!” in his next response to me. Well I’m not changing my online name, even if he’s right!

Since you’re the “Eric” around here, I’ve suggested going by just “P Eric” at your site. Conversely Steven has proposed that I go with an Eric modifier such as “Bad Guy”, “Philistine”, or “Scientism” :-)

One of my main themes is that the topics of philosophy (which is to say, metaphysics, epistemology, and axiology) essentially concern the premises upon which science must build. But then why have “hard” forms of science been able to do as well as they have without generally accepted principles of philosophy from which to work? I suspect that they’re less susceptible than mental and behavioral forms given their empirical subject matter, as well as because they’re inherently less “personal”. Regardless I believe that various useful principles of philosophy will be required in order for our soft sciences to harden. I propose four.

One of academia’s worst conventions, I think, is that we’ve been taught to look for “true” definitions. Is the garden snail “conscious”? GONG! As SAP mentioned earlier, it all depends upon how “consciousness” is defined. If my first principle of epistemology ever becomes widely accepted, the standard convention should actually become known as a fallacy and so permit us to look for “useful” definitions. Better epistemology should disproportionally help our soft sciences.

For a useful consciousness definition I think “sentience” works quite well. And yes like currency or even photons, there should be discrete rather than graduated states of sentience. I don’t know how to measure such states objectively with any precision, though ways should be found if the need becomes apparent.

Eric Schwitzgebel said...

P Eric: I'm fine with your being "Philosopher Eric" or "P Eric" or any such name. There's hardly only one philosopher named Eric!

On definitions: I'm usually a pragmatist, as you describe -- for example in my definition of belief in "The Pragmatic Metaphysics of Belief". But part of pragmatism is also targeting the phenomenon that people mean to be targeting, and I'm inclined to think that people mean a fairly specific phenomenon in discussing "phenomenal consciousness" so that too much definitional flexibility here would be misleading.

Philosopher Eric said...

Agreed on both counts professor. Firstly the author of a given idea needs definitional freedom from which to make his or her points. Thus in your paper I must accept the term “belief” as not just a conviction that I have, but also a conviction that I practically display. Here I will not “believe” that 2 + 2 = 4 (or even 5), unless I also display such a conviction. Furthermore if people in general were to find your distinction between belief and mere conviction useful for their purposes, then modern English should become more effective in this specific regard.

Secondly, what might the protocol be when a person goes beyond what people generally mean by a given term? A panpsychist might define anything that functions by means of causality, for example, to thus display an associated level of “consciousness”. So would my first principle of epistemology thus support panpsychism by permitting such definitions to be made? Not at all! We naturalists believe that all of reality functions by means of causality, but since it’s standard to refer to “consciousness” as a very specific variety of causal function (or the “phenomenal” kind), we can effectively say that their definition does not seem useful. Without my EP1 however, panpsychists seem to be running amok today. Note that if there are “true” and “false” definitions, as standard convention suggests, then so long as panpsychists refrain from saying various untrue things, their position may indeed be convincing!

My main point is that even though philosophy is often considered to be “an Ivory Tower art to potentially appreciate”, it must also evolve to become known as nothing less than the foundation upon which science itself rests. And how might it become such an institution? Apparently we’ll need a respected community which is able to develop its own generally accepted principles of metaphysics, epistemology, and axiology, from which to better found the institution of science. I propose four.