Tuesday, November 19, 2024

New in Draft: When Counting Conscious Subjects, the Result Needn't Always Be a Determinate Whole Number

(with Sophie R. Nelson)

One philosophical inclination I shared with the late Dan Dennett is a love of weird perspectives on consciousness that sharply violate ordinary, everyday common sense. When I was invited to contribute to a special issue of Philosophical Psychology in his memory, I thought of his intriguing remark in Consciousness Explained against "the myth of selves as brain-pearls, particular, concrete, countable things", lamenting people's stubborn refusal "to countenance the possibility of quasi-selves, semi-selves, transitional selves" (1991, pp. 424-425). As I discussed in a blog post in June, Dennett's "fame in the brain" view of consciousness naturally suggests that consciousness won't always come in discrete, countable packages, since fame is a gradable, multidimensional phenomenon, with lots of gray area and partial overlap.

So I contacted Sophie R. Nelson, with whom I'd published a paper last year on borderline cases of group minds, and we decided to generalize the idea. On a broad range of naturalistic, scientific approaches to consciousness, we ought to expect that conscious subjects needn't always come in determinate, whole number packages. Sometimes, the number of conscious subjects in an environment should be either indeterminate, or a determinate non-whole number, or best modeled by some more complicated mathematical representation. If some of us have commonsense intuitions to the contrary, such intuitions aren't probative.

Our submission is due November 30, and comments are (as always) very welcome -- either before or after the Nov 30 deadline (since we expect at least one round of revisions).

Abstract:

Could there be 7/8 of a conscious subject, or 1.34 conscious subjects, or an entity indeterminate between being one conscious subject and seventeen? Such possibilities might seem absurd or inconceivable, but our ordinary assumptions on this matter might be radically mistaken. Taking inspiration from Dennett, we argue that, on a wide range of naturalistic views of consciousness, the processes underlying consciousness are sufficiently complex to render it implausible that conscious subjects must always arise in determinate whole numbers. Whole-number-countability might be an accident of typical vertebrate biology. We explore several versions of the inconceivability objection, suggesting that the fact that we cannot imagine what it’s like to be 7/8 or 1.34 or an indeterminate number of conscious subjects is no evidence against the possibility of such subjects. Either the imaginative demand is implicitly self-contradictory (imagine the one, determinate thing it’s like to be an entity there isn’t one, determinate thing it’s like to be) or imaginability in the relevant sense isn’t an appropriate test of possibility (in the same way that the unimaginability, for humans, of bat echolocation experiences does not establish that bat echolocation experiences are impossible).

Full draft here.

[Figure 2 from Schwitzgebel and Nelson, in draft: An entity intermediate or indeterminate between one and three conscious subjects. Solid circles represent determinately conscious mental states. Dotted lines represent indeterminate or intermediate unity among those states.]

Friday, November 15, 2024

Three Models of the Experience of Dreaming: Phenomenal Hallucination, Imagination, and Doxastic Hallucination

What are dreams like, experientially?

One common view is that dreams are like hallucinations. They involve sensory or sensory-like experiences just as if, or almost as if, you were in the environment you are dreaming you are in. If you dream of being Napoleon on the fields of Waterloo, taking in the sights and sounds, then you have visual and auditory experiences much like Napoleon might have had in the same position (except perhaps irrational, bizarre, or otherwise different in specific content). This is probably the predominant view among dream researchers (e.g., Hobson and Revonsuo).

Another view, less common but intriguing, is that dreams are like imaginings. Dreaming you are Napoleon on the fields of Waterloo is like imagining or "daydreaming" that you're there. The experience isn't sensory but imagistic (e.g., Ichikawa and Sosa).

These views are very different!

For example, look at your hands. Now close your eyes and imagine looking at your hands. Unless you're highly unusual, you will probably agree that the first experience is very different from the second experience. On the hallucination model of dreams, dream experience is more like the first (sensory) experience. On the imagination model, dream experience is more like the second (imagery) experience. On pluralist models, dream experiences are sometimes like the one, sometimes like the other (e.g., Rosen and possibly Windt's nuanced version of the hallucination model). (Unfortunately, proponents of the hallucination model sometimes confusingly talk about dream "imagery".)

-----------------------------------

I confess to being tempted by the imagination model. My reason is primarily introspective or immediately retrospective. I sometimes struggle with insomnia, and it's not unusual for me to drift in and out of sleep -- lying quietly in bed, eyes closed, allowing myself to drift in daydream, which sometimes seems to merge into sleep, then back into daydream. My immediately remembered dreams seem not so radically different from my eyes-closed daydream imaginations. (Ichikawa describes similar experiences.)

Another consideration is this: Plausibly, the stability and detail of our ordinary sensory experiences depend to a substantial extent on the stabilizing influence of external inputs. It appears both to match my own experience and to be neurophysiologically plausible that the finely detailed, vivid, sharp structure of, say, visual experience would be difficult for my brain to sustain without the constraint of a rich flow of input information. (Alva Noë makes a similar point.)

Now, I don't put a lot of stock in these reflections. There's reason to be skeptical of the accuracy of introspective reports in general, and perhaps dream reports in particular, and I'm willing to apply my own skepticism to myself. But by the same token, what is the main evidence on the other side, in favor of the hallucination model? Mainly, again, introspective report. In particular, it's the fact that people often report their dream experiences as having the rich, sensory-like detail that the hallucination model predicts. Of course, we could just take the easy, obvious, pluralist path of saying that everyone is right about their own experiences. But what fun is that?

-----------------------------------

In fact, I'm inclined to throw a further wrench in things by drawing a distinction between two types of hallucination: phenomenal and doxastic. I introduced this distinction in a blog post in 2013, after reading Oliver Sacks's Hallucinations.

Consider this description, from page 99 of Hallucinations:

The heavens above me, a night sky spangled with eyes of flame, dissolve into the most overpowering array of colors I have ever seen or imagined; many of the colors are entirely new -- areas of the spectrum which I seem to have hitherto overlooked. The colors do not stand still, but move and flow in every direction; my field of vision is a mosaic of unbelievable complexity. To reproduce an instant of it would involve years of labor, that is, if one were able to reproduce colors of equivalent brilliance and intensity.

Here are two ways in which you might come to believe the above about your experience:

(1.) You might actually have visual experiences of the sort described, including of colors entirely new and previously unimagined and of a complexity that would require years of labor to describe.

Or

(2.) You might shortcut all that and simply arrive straightaway at the belief that you are undergoing or have undergone such an experience -- perhaps with the aid of some unusual visual experiences, but not really of the novelty and complexity described.

If the former, you have phenomenally hallucinated wholly novel colors. If the latter, you have only doxastically hallucinated them. I expect that I'm not the first to suggest such a distinction among types of hallucination, but I haven't yet found a precedent.

Mitchell-Yellin and Fischer suggest that some "near death experiences" might also be doxastic hallucinations of this sort. Did your whole life really flash before your eyes in that split second during an auto accident, or did you only form the belief that you had that experience, without the actual experience itself? It's not very neurophysiologically plausible that someone would undergo hundreds or thousands of distinct memory experiences in 500 milliseconds.

-----------------------------------

It seems clear from dream researchers' descriptions of the hallucination model of dreams that they have phenomenal hallucination in mind. But what if dream experiences involve, instead or at least sometimes, doxastic rather than phenomenal hallucinations?

Here, then, is a possibility about dream experience: If I dream I am Napoleon, standing on the fields of Waterloo, I have experiences much like the experiences I have when I merely imagine, in daydream, that I am standing on the fields of Waterloo. But sometimes a doxastic hallucination is added to that imagination: I form the belief that I am having or had rich sensory visual and auditory experience. This doxastic hallucination would explain reports of rich, vivid, detailed sensory-like dream experience without requiring the brain actually to concoct rich, vivid, and detailed visual and auditory experiences.

Indeed, if we go full doxastic hallucination, even the imagination-like experiences would be optional. (Also, if -- following Sosa -- we don't genuinely believe things while dreaming, we could reframe doxastic hallucinations in terms of whatever quasi-belief analogs occur during dreams.)

[The battle at Waterloo: image source]

Monday, November 11, 2024

New in Draft: The Copernican Argument for Alien Consciousness; The Mimicry Argument Against Robot Consciousness

(with Jeremy Pober)

Over the past several years, I've posted a few times on what I call the "Copernican Argument" for thinking that behaviorally sophisticated space aliens would be conscious, even if they are constituted very differently from us (here, here, here, here). I've also posted a few times on what I call the "Mimicry Argument" against attributing consciousness to AI systems or robots that were designed to mimic the superficial signs of human consciousness (including current Large Language Models like ChatGPT and Claude) (here, here, here).

Finally, I have a circulatable paper in draft that deals with these issues, written in collaboration with Jeremy Pober, and tested with audiences at Trent University, Harvey Mudd, New York University, the Agency and Intentions in AI conference in Göttingen, Jagiellonian University, the Oxford Mind Seminar, University of Lisbon, NOVA Lisbon University, University of Hamburg, and the Philosophy of Neuroscience/Mind Writing Group.

It's a complicated paper! Several philosophers have advised me that the Copernican Argument is one paper and the Mimicry Argument is another. Maybe they are right. But I also think that there's a lot to be gained from advancing these arguments side by side: Each shines light on the boundaries of the other. The result, though intricate, is I hope not too intricate to evaluate and comprehend. (I might still change my mind about that.)


Abstract:

On broadly Copernican grounds, we are entitled to default assume that apparently behaviorally sophisticated extraterrestrial entities (“aliens”) would be conscious. Otherwise, we humans would be inexplicably, implausibly lucky to have consciousness, while similarly behaviorally sophisticated entities elsewhere would be mere shells, devoid of consciousness. However, this Copernican default assumption is canceled in the case of behaviorally sophisticated entities designed to mimic superficial features associated with consciousness in humans (“consciousness mimics”), and in particular a broad class of current, near-future, and hypothetical robots. These considerations, which we formulate, respectively, as the Copernican and Mimicry Arguments, jointly defeat an otherwise potentially attractive parity principle, according to which we should apply the same types of behavioral or cognitive tests to aliens and robots, attributing or denying consciousness similarly to the extent they perform similarly. Instead of grounding speculations about alien and robot consciousness in metaphysical or scientific theories about the physical or functional bases of consciousness, our approach appeals directly to the epistemic principles of Copernican mediocrity and inference to the best explanation. This permits us to justify certain default assumptions about consciousness while remaining to a substantial extent neutral about specific metaphysical and scientific theories.

Full paper here.


As always, questions/comments/objections welcome here on the blog, on my social media accounts, or by email to my UCR address.

[image source]