Wednesday, October 13, 2021

Michael Tye on Vagueness about Consciousness

In late August, about two days after I finished drafting my new paper "Borderline Consciousness, When It's Neither Determinately True nor Determinately False That Consciousness Is Present", I learned that Michael Tye had a new book forthcoming on the same topic: Vagueness and the Evolution of Consciousness.

Tye is eminent in consciousness studies, and he has also written influentially about the logic of vagueness. In the past, he had defended, but not in detail, the idea that consciousness is a vague property, admitting borderline cases -- the same thesis I defend in that circulating draft paper. You know that feeling when you discover that someone much better known than you is working on the same thing you've been working on, probably with a similar view and probably a couple of steps ahead? Right. Eek!

So of course I had to read Tye's new book straightaway. I received it last week.

***********************

Chapter 1 is surprising, given Tye's previous work defending vagueness.

First (the unsurprising part), Tye argues that consciousness is vague, that is, that there must be a range of "borderline cases" between being conscious and being non-conscious. His argument is similar to mine: If consciousness is a physical phenomenon or grounded in a physical phenomenon, it pretty much has to have fuzzy boundaries since basically all physical phenomena have fuzzy boundaries, including those normally associated with consciousness (such as having neurons or integrating a certain amount of information). Therefore, between non-conscious bacteria and conscious humans, there must be some animals who are only borderline conscious. I'd add, though it's not Tye's emphasis, that there must also be transitional, borderline states between non-conscious sleep and conscious waking.

Second (the surprising part), Tye argues that consciousness cannot be vague on the grounds that we cannot present examples of, or even conceive of, borderline cases of it. He rejects a couple of putative examples. Feeling groggy upon waking is not borderline consciousness, but rather a type of conscious experience (perhaps with indistinct contents). Also, hearing a tone fade into silence (an example he used in his own earlier work) is not a case of borderline consciousness because one can hear silence, so throughout the fading you are definitely having a conscious experience, though it might be one with vague or indeterminate representational content such as "maybe there's a very quiet tone or maybe there's just silence".

Tye presents these two arguments as a "paradox". On the one hand, we seem to have a good argument that consciousness must be a vague property, admitting of borderline cases. On the other hand, we seem to have a good argument that consciousness cannot be a vague property, admitting of borderline cases. He concludes by saying, "So, consciousness, it seems, is both vague and not vague. What to do? Houston, we have a problem!" (p. 18).

Before presenting Tye's solution to that problem, let me suggest that the two arguments -- one pro-vagueness, one anti-vagueness -- are not equally strong. The first argument is approximately as strong as standard-issue (non-panpsychist) materialism. If consciousness is, or is grounded in, large, floppy, fuzzy-edged properties like having a brain of a certain sort or having cognitive capacities of a certain sort, as basically all ordinary materialist philosophers think, then barring something quite strange, consciousness too must have vague boundaries.

The anti-vagueness argument seems weaker. From the fact that we can't conceive of borderline cases, it doesn't follow that borderline cases don't exist. The problem might be (as I argue in my paper) that there's a failure or limitation in our imaginative capacities. As Tye himself says, "The concept consciousness is such that we cannot conceive of a borderline case and that is *prima facie evidence* that it is sharp" (p. 16, emphasis added). "Prima facie evidence", he says -- not conclusive evidence. Just an initial reason in favor. Shouldn't the response for the standard-issue materialist convinced by the first argument just be to reject the "prima facie evidence" and look for another explanation for our conceptual failure?

***********************

Neither Chapter 2 nor Chapter 3 is mainly about vagueness. Chapter 2 argues against a certain panpsychist way of solving the problem of Chapter 1. Chapter 3 defends Tye's famous representationalist view of consciousness against a family of objections, along the way establishing the important point that representational contents of determinately conscious experiences can be vague. It can determinately be the case, for example, that you're having a visual experience with vague, partly indeterminate contents, as when you are looking through blurry glasses at something you can't quite make out.

***********************

In Chapter 4, Tye delivers his answer to the puzzle of Chapter 1. His answer turns on the concept of consciousness*. Consciousness* is a property

that a state must have to be conscious. Experiencing something, I propose, is a matter of undergoing an inner state (with a quasi-pictorial structure), a state that has the property of being conscious* and that also represents something. Consciousness* is not itself a representational property, nor is it a functional property.... It is, I hold, irreducible and fundamental. And it is consciousness* that is found at the level of quarks. Quarks are conscious* but not conscious (p. 79).

Concerning vagueness, Tye adds:

Consciousness* is sharp whereas consciousness essentially involves content and thus is vague. When we assert baldly that there are no possible borderline states of consciousness, we are wrong; but the borderline cases arise via the vagueness of the representational aspect of consciousness. There is no vagueness in consciousness*, the other key element of consciousness (p. 79).

I confess to finding consciousness* a bit puzzling! What is this new fundamental property? It's philosophically bold, and seemingly empirically unmotivated, to posit that quarks have not only the usual properties that particle physicists attribute to them, such as spin and "color", but also a previously unnoticed property, consciousness*, which isn't consciousness itself but which is intimately related to it. Why should we posit the existence of such a property rather than satisfying ourselves with, say, a simpler representationalism on which the only thing necessary for consciousness is having a cognitive, representational structure of a certain sort? Tye himself, in his earlier work, ranks among the chief proponents of that type of simpler representationalism about consciousness. Consciousness* is a new aspect of his view -- a change in his position (as he admits in the introduction), presumably forced upon him after long thought and dissatisfaction with his previous view.

I can see two main motivations for positing consciousness*. Both depend on treating conceivability as a compelling test of possibility.

The first is zombies. A philosophical zombie is an entity particle-for-particle identical to a person, at the finest level of functional detail, but lacking consciousness. Tye treats such zombies as conceivable and therefore possible (esp. pp. 98-99). Consciousness* gives us a way to make sense of this. Zombies are microphysically like us at the finest-grain functional level, and have all the same representational contents, but their microparticles lack consciousness*. Consciousness* thus plays a role in explaining why we aren't zombies. Consciousness* is a micro-level property that we have and that zombies lack, even though every molecule in our bodies behaves outwardly in the same way (including in producing the same verbal reports of consciousness).

The second reason to posit consciousness* is more central to the vagueness project. Something about consciousness is sharp-bordered, Tye argues. But it can't be the representational content. Consciousness* thus plays the role of being this sharp-bordered property -- either present at the micro level (in us) or absent (in zombies), rather than objectionably fading gradually in as systems become bigger and more complex.

I've long found philosophers' fascination with the zombie thought experiment a little puzzling. Part of me is inclined to doubt that we can reach any substantive conclusions about the nature of consciousness by considering examples that are not even (by most zombie-theorists' own lights) physically possible.[1] Another part of me, however, is happy to concede zombie-theorists their point: Sure, there is some property we have that these hypothetical creatures would lack despite their (posited) physical-functional identity, i.e., the property of being conscious. But properties of this sort are cheap. They don't threaten materialism as a scientific hypothesis concerning what is physically possible. The zombie-business seems separable from the business of figuring out what real creatures have conscious experiences, in virtue of which physical/structural features. It is similarly separable from the business of figuring out what hypothetical but physically possible creatures would have conscious experience if we built them, in virtue of which physical/structural features.

In that concessive mood, maybe I should be fine with consciousness*. Maybe, even, it's just the thing we need to deal with the zombie case. We can then say, sure, all ordinary matter is such that if you organize it in the right way it gives rise to consciousness. All matter we can see and interact with has consciousness*, i.e., is such that it would not give rise to mere zombie fake-consciousness if you swirled it together into the form of a biological person. Hypothetically we can imagine matter that lacks consciousness*. Hypothetically we can also imagine ghosts and Cartesian souls. There's no compelling evidence that our universe contains any such things. So maybe, similarly, microparticles around here have Property NG, the property of being such that they don't require the existence of ghosts to give rise to consciousness when they are organized in the right way. Perhaps this, too, is a previously unlabeled fundamental property? Or is it maybe just the same property as consciousness*?

But let's return to vagueness. In Chapter 1, Tye argued that the fact that we cannot conceive of borderline cases of consciousness is prima facie evidence against the existence of such cases. Now, in Chapter 4, we find him -- in my view wisely -- positing the existence of borderline cases of consciousness after all, stating that it is only consciousness* that cannot be vague. But, strikingly, he doesn't really address the problem he raised in Chapter 1. He still does not present an example of a borderline case of consciousness. He does not tell us how to conceive of such cases. Or at least, he does so no more than he could easily have done without the concept of consciousness*. In Chapter 5 he defends honeybee consciousness and box jellyfish non-consciousness. So presumably he could say something like "since jellyfish aren't conscious and bees are conscious, any borderline-conscious animals would have to be somewhere between those two." But that gestural remark depends not at all on the concept of consciousness*, and it's just the sort of handwavy thing that ordinarily fails to satisfy those who object in principle to the existence of borderline cases of consciousness.

I am left, then, thinking that a piece of the puzzle is missing: an explanation of why we should allow the existence of borderline cases of consciousness despite our difficulty in clearly conceiving of such cases.

Fortunately, that missing piece is just what I supply in my own paper on borderline consciousness.

***********************

[1] Qualification here: Tye treats consciousness* as a physical property and thus zombies as physically possible, though presumably no particles lacking consciousness* have been observed in our universe (there's a question of how we could know that). Zombies are thus, on his view, not physically identical to us but only identical to us with respect to the functional side of their physical properties, i.e., how every particle interacts with other physical particles. This difference between Tye's and others' treatment of zombies doesn't matter, I think, to the argument of this post.

***********************

Related:

Borderline Consciousness, When It's Neither Determinately True nor Determinately False That Consciousness Is Present (article in draft)

An Argument for the Existence of Borderline Cases of Consciousness (Aug 18, 2021).

19 comments:

SelfAwarePatterns said...

Tye's consciousness* conception seems similar to David Chalmers' proto-consciousness, and the overall view to his panprotopsychism.

I think the solution to the discrepancy Tye notes is that consciousness is only sharp introspectively, and introspection is unreliable. I might be in a reduced state of consciousness due to injury, fatigue, or inebriation. When I'm in that state, I might introspectively categorize my state as conscious, in the sense that I can remember an episodic sequence of the situation.

But I'm not conscious of what I'm not conscious of, except by what I might be able to retrospectively infer. I won't be conscious of the things I'm missing that I might otherwise notice, of the gaps in whatever episodic sequence I can piece together. My ability to monitor my own state has become compromised because my consciousness has been reduced. It might be obvious to any third party observer, but not to me. (This is one of the reasons a drunk person is often not in a good position to judge how compromised their current state might be.)

The difficulty would be worse if consciousness were sharp to an outside observer, but that's exactly where its fading nature seems most apparent. Its sharp nature only seems apparent from the inside. (Unless of course I'm missing something.)

Mike

Eric Schwitzgebel said...

Mike/SelfAware: Right, Tye's view does seem to be a form of panprotopsychism, though I don't think he uses that word, so he might see some important difference between his view and panprotopsychism as Chalmers characterizes it. I certainly agree with the unreliability of introspection, and I think it's possible that some cases of injury, fatigue, or inebriation are cases of borderline consciousness. Whether we can introspectively classify such an experience as conscious at the moment it's happening is a tricky question, though. I could imagine that the act of introspection might not be possible without determinate consciousness or might bring into determinate consciousness a state that otherwise would have been borderline.

Howie said...

Then why not argue there is life after death?
If there are some states we're unaware of, namely types of consciousness (we're not conscious, in this world, of our state after death), maybe both viewpoints, the one arguing for states of consciousness and the other for consciousness after death, are equally absurd.

Michael Tye said...

Hi Eric,

Thanks for your comments. To clarify: there are (so to speak) borderline cases of consciousness, but these are really borderline with respect to *what* one is conscious of, *what* one experiences. This point might be put by saying that consciousness itself is sharp, but consciousness itself, in my view, is inherently representational, and (as just noted) this brings in borderline cases. So, what we really should say is that there is an element in consciousness that is itself on/off, such that when a state has that element and represents appropriately, it is a conscious state. Call that element "consciousness*". Consciousness* can't be nonphysical (for all the usual reasons), so it must be physical. But physical properties at the level of complexes allow for borderline cases. So, consciousness* must be there at the most fundamental level, part of the intrinsic nature of fundamental entities. Microphysics doesn't mention such a property, of course, but I take microphysics to concern itself with extrinsic, dispositional properties of fundamental entities, properties like spin and charge. The problem now becomes: how does consciousness* get transferred from fundamental entities to states of the sort we undergo when we are conscious? This is a version of the problem of combination. I offer a solution to that problem in the book.

Eric Schwitzgebel said...

Thanks for the comments, folks!

Howie: I guess I'm looking for empirical evidence of life after death first. Seances would have been one kind of evidence, but my sense is that they didn't turn out to be compelling evidence after all.

Michael: Thanks for the engagement and detailed reply! All of what you say fits with my understanding of what you wrote in your book, and I hope I conveyed it fairly enough in the post. I suppose our main disagreement is that you think that consciousness must have a sharp element, and you are thus led to posit consciousness*, whereas I think we can explain away our inclination to think that consciousness must have a sharp element and thus we need not take the step of positing consciousness*. On the transfer/combination issue, I read you as saying that consciousness* is transferred, and thus states are conscious, when you have a system with representations that play the kind of role posited in broadcast/workspace type theories.

Arnold said...

Mentioned twice..."how does consciousness* get transferred from fundamental entities to states of the sort we undergo when we are conscious."...

These words, sometimes need qualifiers...
...consciousness (presence), transferred (transformation), entities (being), states (attitudes)...

The separation of possible phenomena in philosophical combinations...
...including intentionality and self...

P.D. Magnus said...

If consciousness* is a property of individual particles, then can't we still leverage that to imagine a borderline case?
Imagine that not all fundamental particles actually have consciousness*, and that a box of particles that lack it is brought back from deep space. The payload is spilled on me by accident, so that some of the electrons in my body (which have consciousness*) are exchanged with electrons from the payload (which lack consciousness*). Surely just one or two electrons would not make it so that I wasn't conscious! But what about 100? 1000? 10^23? What about all the electrons in my body but none of the other particles? And so on.

Arnold said...

Is it that fundamental particles are part of a wish to combine for the sake of being in an evolving universe...

Can my pretentiousness change from belief to an actuality...
...perhaps through realities of many, many combinations...

Seems always true every second is new...

Eoin said...

Hi there.

I think you should take a closer look at the concepts inside your own head and try to develop analogies for what is happening in your mind mechanistically.

I'm almost finished with a book at the moment and I'm doing some premarketing.

I have some notions on perception, knowledge and the like.

Please pop over to https://deafinoneeye.com/ and see what you think.

Comments are very much appreciated :-).

Cheers,

e.

Eric Schwitzgebel said...

Thanks for the continuing comments, folks!

PD: Nice example! I can see some possible replies, such as that a single conscious* particle is sufficient or that conscious* and nonconscious* particles repel each other. But those solutions seem inelegant or ad hoc.

P.D. Magnus said...

Even if I accept that one particle with consciousness* is enough: Given that consciousness* is supposed to be a physical property, it is at least conceivable that a particle could be in a superposition of conscious* and not. What then?

Alan said...

There seems to be an explanation of our lack of experience of partial consciousness in "it's not Tye's emphasis, that there must also be transitional, borderline states between non-conscious sleep and conscious waking". If indeed the groggy waking state is still fully "conscious" of the fuzziness of its experience, then perhaps the levels of lesser consciousness are seen only in less mentally complex organisms. That wouldn't be surprising and would make a plausible case for the possible definition of the consciousness of any physical system in terms of its information processing capacity.

But the idea that any other consciousness-related property such as "consciousness*" might exist for quarks just strikes me as completely silly - especially since the proposal is that there is no physical way to detect it. The "philosophical zombie", I'm afraid, is either physical nonsense or ultimate solipsism. If every physical property and action of the zombie is indistinguishable from that of a non-zombie then it will tell me it is conscious just like me even if it is not. So any entity that accepts the possibility of such a thing is basically accepting the possibility that its own consciousness is the only one there is.

Arnold said...

Rereading everything, from our authors, professors, commenters, wikipedia and google, about this 'topic'...

I've concluded...

This 'topic here now' seems to be "what to know when you're not here now"...

Eric Schwitzgebel said...

PD: Right. I see various theoretical options (e.g., that the entity is in a superposition of conscious and non-conscious states, but that's not the same as vagueness in the *relevant* (=?) sense), but it all does seem rather inelegant and removed from empirical testability.

Alan: I basically agree, though with the caveat that every view of consciousness so far proposed has some aspects that seem "silly", so seeming-silliness isn't by itself a decisive objection.

Shane Wagoner said...

According to Tye, allegedly borderline cases of consciousness such as the ones he considers are really only vague with respect to what one is conscious of, not whether one is conscious at all.

There are two ways it might be vague what one is conscious of: First, one might have an experience with determinate content that includes some borderline case (Tye gives the example of a pressure that is a borderline case of pain). Second, it might be indeterminate whether an experience has one content or another. In this latter case, the vagueness does not arise because one is aware of a borderline case. Rather, it is indeterminate whether one is aware of some determinate feature or another.

It seems to me that Tye only provides examples of the first way, not the second. In all of his cases, vagueness only arises because the content of the experience includes determinate properties that are borderline cases. In other cases, such as a gradual transition from noise to silence, there are moments of epistemic indeterminacy where we don’t know whether we hear a sound or not. But these are not experiences with indeterminate content. After all, when one is unsure whether there is a sound, one is unsure because of what the experience is like.

Upon reflection, it is easy to see why Tye does not provide examples of the second type of vagueness. When we consider different kinds of experiences, those kinds are individuated in terms of what the experiences are like. It’s no surprise then that we cannot describe a state with no determinate content in terms of what it’s like since there’s no determinate fact of the matter about what it’s like in the first place.

This point is significant because it seems to provide a way of addressing the puzzle of vagueness that is, strangely enough, available to the very view that Tye is led to abandon in the book. If one endorses a traditional form of strong representationalism, borderline cases of consciousness will be precisely those states for which it is indeterminate whether they represent certain objects and/or properties. Consequently, the view readily explains our inability to provide a subjective characterization of borderline cases and can’t be rejected because of it.

This sort of move is only available for views that identify phenomenal character with representational content. Views that identify phenomenal qualities with neural properties, for instance, will entail the possibility of borderline cases of phenomenal character. Such views are ruled out by Tye’s considerations.

I assume I’m missing something here, but it really seems like the puzzle of vagueness is a very powerful reason to accept Tye’s former view, not reject it.

Eric Schwitzgebel said...

Interesting comment, Shane!

You write: "It’s no surprise then that we cannot describe a state with no determinate content in terms of what it’s like since there’s no determinate fact of the matter about what it’s like in the first place."

I agree, and this is central to my own defense of vagueness against the objection that vague cases of consciousness are inconceivable. I also agree that representationalism can account for vagueness in the way you say. I'm not sure why other views can't make a similar move, however. Couldn't a brain state theorist, for example, allow that it could be indeterminate whether one is in brain state X?

Shane Wagoner said...

Suppose that phenomenal qualities are identified with neural properties. These are physical properties that, as such, will admit of borderline cases. Consequently, a borderline case of phenomenal character will be possible. Tye provides prima facie evidence against this possibility.

By contrast, if phenomenal character is identified with representational content, a borderline case of consciousness will not imply that a borderline case of phenomenal character is possible. A borderline case of a representation is not a representation that has a borderline case of content. Since no borderline case of representational content is possible, the representationalist can reject borderline cases of phenomenal character while maintaining that consciousness is vague.

Alan said...

To this outsider, the question of vagueness all seems a bit, well, vague. But that may just be due to my ignorance of the technical language.

Is the property of vagueness something that philosophers have precisely defined somewhere? I can't tell from this discussion whether it refers to probability (which attempts to quantify our uncertainty as to whether a well-defined statement is true or false) or precision (as in the case of a continuous variable about which we may have certainty that it is in some range but cannot reduce the width of that range to zero).

With regard to the experience of partial consciousness, I would suggest that it is often not noticed because it is very brief, like the partially lit state of a lightbulb between when the current starts to flow and when the filament becomes fully incandescent. But when I do try to imagine or remember states of partial consciousness, what seems to distinguish them in my mind is the (lack of) clarity, completeness, and certainty with which I can recall my immediately prior experience. Does anyone have a definition of consciousness that does not, to some extent, involve memory (and in particular the comparison of more and less immediate past mental states)?

Arnold said...

Conscious state in nature versus...
...mental/neural/algorithmical/semantical conscious state in nature...

Conscious as a function in nature and consciousness as a function in life in nature...
...does nature reconcile itself in functions...

Everything, any philosophy/religion/..., reconciles itself in nature...
...that reconciliation is a force with many other forces in the state of nature...

"have a sense of it, feel your way, think about it, be conscious" in nature...