Thursday, January 24, 2019

How to Turn Five Discrete Streams of Consciousness into a Murky Commingling Bog

Here's something you might think you know about consciousness: It's unified and discrete.

When we think about the stream of conscious experience -- the series of conscious thoughts, perceptual experiences, felt emotions, images, etc., that runs through us -- we normally imagine that each person has exactly one such stream. I have my stream, you have yours. Even if we entertain very similar ideas ("the bus is late again!"), each of those ideas belongs determinately to one stream or the other. We share ideas like we might share a personality trait (each having a full version of each idea or trait), not like we share a brownie (each taking half) or a load (each contributing to the mutual support of a single whole). Our streams of experience may be similar, and we may influence each other, but each stream runs separately, without commingling.

Likewise, when we count streams of experience or conscious entities, we stick to whole numbers. It sounds like a joke to say that there are two and a half or 3.72 streams of conscious experience, or conscious entities, here in this room. If you and I are in the room with a snail and an anesthetized patient, there are either two conscious entities in the room with two streams of conscious experience (if neither snails nor people in that type of anesthetized state have conscious experiences), or there are three, or there are four. Even if the anesthetized patient is "half-awake", to be half-awake (or alternatively, dreaming) is to be fully possessed of a stream of experience -- though maybe a hazy stream with confused ideas. Even if the snail isn't capable of explicitly representing itself as a thinker, if there's anything it's like to be a snail, then it has a stream of experience of its own snailish sort, unlike a fern, which (we normally think) has no conscious experiences whatsoever.

I find it hard to imagine how this could be wrong. And yet I think it might be wrong.

To start, let's build a slippery slope. We'll need some science fiction, but nothing too implausible I hope.

At the top of the slope, we have five conscious human beings, or even better (if you'll allow it) five conscious robots. At the bottom of the slope we have a fully merged and unified entity with a single stream of conscious experience. At each step along the way from top to bottom, we integrate the original five entities just a little bit more. If they are humans, we might imagine growing neural connections, one at a time, between their brains, slowly building cross-connections until the final merged brain is as unified as one could wish. If necessary, we could slowly remove and reconfigure neurons during the process so that the final merged brain is exactly like a normal human brain.

Since the human brain is a delicate and bloody thing, it will be much more convenient to do this with robots or AI systems, made of silicon chips or some other digital technology, if we are willing to grant that under some conditions a well-designed robot or AI system could have a genuine stream of consciousness. (As intuitive examples, consider C-3PO from Star Wars or Data from Star Trek.) Such systems could be slowly linked up, with no messy neurosurgery required, and their bodies (if necessary) slowly joined together. At the top of the slope will be five conscious robots; at the bottom, one conscious robot.
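If it helps to make the gradualness vivid, here's a minimal sketch in Python of the merging process. Every detail is invented for illustration -- the brain sizes, the random order of connections, the crude integration measure -- but the shape of the slope is the point:

```python
# A toy rendering of the slope: five disconnected "brains" as graphs,
# merged one cross-connection at a time. All structure here is invented
# purely for illustration.

import itertools
import random

# Five brains, each a small set of nodes.
brains = {i: {f"b{i}n{j}" for j in range(10)} for i in range(5)}

# Start with each brain internally fully connected.
edges = set()
for members in brains.values():
    edges |= set(itertools.combinations(sorted(members), 2))

# All possible cross-connections between distinct brains, in random order.
cross = [(u, v)
         for (_, a), (_, b) in itertools.combinations(brains.items(), 2)
         for u in a for v in b]
random.shuffle(cross)

# Walk down the slope: add one cross-connection at a time.
for step, edge in enumerate(cross, start=1):
    edges.add(edge)
    # A crude integration measure: the fraction of possible
    # cross-connections in place, climbing smoothly from 0.0 to 1.0.
    integration = step / len(cross)
    if step in (1, len(cross) // 2, len(cross)):
        print(f"step {step}: integration = {integration:.3f}")

# Nothing in this smooth climb marks the point at which five streams
# become one; any cut-off is our stipulation, not the network's.
```

Swap in whatever measure of integration you prefer; the climb is still smooth, one connection at a time.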

The tricky bit is in the middle, of course. Either there must be a sudden shift at exactly one point from five streams of experience to one (or four sudden shifts, from exactly five to four to three to two to one), as a result of ever so small a change (a single neural connection), or, alternatively, streams of experience must in some cases not be discretely countable.

To help consider which of these two possibilities is the more plausible, let's try some toy examples.

You and I and our friends are discretely different animals with discretely different brains in discretely different skulls, with only one mouth each. So we are used to thinking of the stream of conscious experience like this:

The red circles contain what is in our streams of conscious experience -- sometimes similar (A and A', which you and I share), sometimes different (C belongs only to you), all reportable out of our discrete mouths, and all available for the guidance of consciously chosen actions.

However, it seems to be only a contingent fact about the biology of Earthly animals that we are designed like this. An AI system, or an alien, might be designed more like this:

Imagine here a complex system with a large pool of representations. There are five distinct verbal output centers (mouths), or five distinct loci of conscious action (independent arms), each of which draws on some but not all of the pool of representations. I have included one redundant representation (F) and a pair of contradictory representations (B and -B) to illustrate some of the possible complexity.

In such a case, we might imagine that there are exactly five streams, though they overlap in some important way.
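For concreteness, here's the same design as a toy sketch in Python. The particular contents and subsets are my own inventions; only the overall shape comes from the figure: one shared pool, five output centers with partial, overlapping access, a redundant F and a contradictory B/-B pair.

```python
# One shared pool of representations, five output centers ("mouths"),
# each with access to an overlapping subset. The contents are invented;
# only the shape of the design comes from the figure.

pool = {
    "A":  "the bus is late again",
    "A'": "the bus is late again",   # near-duplicate of A
    "B":  "it will rain",
    "-B": "it will not rain",        # contradicts B; both sit in the pool
    "C":  "I should call home",
    "F":  "the door is locked",
    "F'": "the door is locked",      # redundant copy of F
}

# Which representations each mouth can draw on for report or action.
access = {
    "mouth1": {"A", "B", "F"},
    "mouth2": {"A'", "B", "C"},
    "mouth3": {"-B", "F", "F'"},
    "mouth4": {"A", "-B", "C"},
    "mouth5": {"B", "C", "F'"},
}

def report(mouth: str, rep: str) -> str:
    """What a given mouth can say about a given representation."""
    return pool[rep] if rep in access[mouth] else "<no access>"

print(report("mouth1", "B"))   # "it will rain"
print(report("mouth3", "B"))   # "<no access>" -- mouth3 draws on -B instead
```

Counting mouths still gives a tidy answer here -- five -- but already the question of whose thought B is has no equally tidy answer.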

But this is still much simpler than it might be. Now imagine these further complications:

1. There is no fixed number of mouths or arms over time.
2. The region of the representational pool that a mouth or arm can access isn't fixed over time.
3. The region of the representational pool that a mouth or arm can access isn't sharp-boundaried but is instead a probability function, where representations functionally nearer to the mouth or arm are very likely to be accessible for reporting or action, and representations far from the mouth or arm are unlikely to be accessible, with a smooth gradation between.

This picture aims to capture some of the features described:

Think of each color as a functional subsystem. Each color's density outside the oval is that system's likelihood, at any particular time, of being available for speech or action in that direction. Each color's density inside the oval is the likelihood of representations in that region being available to that subsystem for speech or action guidance. With a rainbow of colors, we needn't limit ourselves to a discretely countable number of subsystems. The figure also might fluctuate over time, if the probabilities aren't static.
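For the programmatically inclined, here's one toy way to render the fluctuating rainbow, implementing the three complications listed above. All the specifics -- the positions, the Gaussian falloff, the slow wobble -- are my own assumptions, not anything given in the figure:

```python
# A toy fluctuating-rainbow architecture. Representations sit at points
# in the pool; subsystems form a continuum (an angle around the oval)
# rather than a countable set; access falls off smoothly with functional
# distance and drifts over time. All specifics are invented.

import math
import random

representations = {"A": (0.1, 0.3), "B": (0.7, 0.8), "C": (0.4, 0.9)}

def access_probability(angle: float, rep_pos: tuple, t: float,
                       scale: float = 0.5) -> float:
    """Chance that the subsystem at this angle can use this representation.

    Near representations are very likely accessible, far ones unlikely,
    with a smooth gradation between; a slow wobble in the subsystem's
    focus makes the whole figure fluctuate over time.
    """
    focus = (0.5 + 0.5 * math.cos(angle + 0.1 * math.sin(t)),
             0.5 + 0.5 * math.sin(angle + 0.1 * math.cos(t)))
    d2 = (focus[0] - rep_pos[0]) ** 2 + (focus[1] - rep_pos[1]) ** 2
    return math.exp(-d2 / scale)

def can_report(angle: float, rep: str, t: float) -> bool:
    """One probabilistic draw: does this subsystem report rep right now?"""
    return random.random() < access_probability(angle, representations[rep], t)

# "How many streams are there?" The only way to force a whole number out
# of this architecture is to pick an arbitrary threshold -- and different
# thresholds give different counts.
for threshold in (0.3, 0.6, 0.9):
    active = sum(
        1 for deg in range(360)  # sample the continuum of subsystems
        if max(access_probability(math.radians(deg), pos, t=0.0)
               for pos in representations.values()) > threshold
    )
    print(f"threshold {threshold}: {active} of 360 sampled angles 'count'")
```

The printout isn't a count of streams; it's a demonstration that any whole-number count depends on a threshold imposed from outside the architecture.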

In at least the fluctuating rainbow case, I submit, countability and discreteness fail. Yet it is a conceivable architecture -- a possible instantiation of an intermediate case along our slippery slope. If such an entity could host consciousness, and if consciousness is closely related to its fluctuating rainbow of structural/functional features, then the entity's stream(s) of conscious experience cannot be effectively represented with whole numbers. (Maybe we could try with multidimensional vectors.)

Is this too wild? Well, it's not inconceivable that octopus consciousness has some features in this direction (if octopuses are conscious), given the distribution of their cognition across their arms; or that some overlap occurs in unseparated craniopagus twins joined at the head and brain; or even -- reading Daniel Dennett in a certain way -- that we ourselves are structured not as differently from this as we normally suppose.

----------------------------------

Related:

A Two-Seater Homunculus (Apr 1, 2013);

How to Be a Part of God's Mind (Apr 23, 2014);

If Materialism Is True, the United States Is Probably Conscious (Philosophical Studies, 2015);

Are Garden Snails Conscious? Yes, No, or *Gong* (Sep 20, 2018).

15 comments:

  1. I do think we're composed of multiple streams. The brain is a massively parallel processing system. We're just not conscious of the exchanges, seams, and gaps between them. Why should we be? There are no sensory neurons in the brain, and all the interacting subsystems have been evolving together for a long time.

    There are the famous split-brain patients, people who had the corpus callosum fibers between their cerebral hemispheres cut to deal with severe epileptic seizures. Subsequent tests showed that the two hemispheres didn't communicate with each other, yet the patients who had the procedure were oblivious and able to function in their day-to-day lives. The two hemispheres seemed to confabulate stories to maintain the illusion that they were still one system.

    (Often when I mention this these days, people point out the more recent tests that do show sub-cortical communication between hemispheres, but that communication remains very limited, and the patients are still functional despite it.)

    For the thought experiment, a lot depends on how we join the brains. If people are joined through their sensory centers (essentially their I/O systems), then it seems like discrete identities would be maintained.

    If we joined them at the corpus callosum, essentially turning a two-hemisphere system into a network system with several nodes, I'm not sure what would happen, since the brain is used to sharing data across that boundary with itself in the same person. A unified consciousness seems like a possibility, albeit possibly one with conflicting impulses.

    But if we join them in the way described, where each brain's subsystems have access to the other brains' subsystems (and assuming we know how to avoid that resulting in a confused, insane mess), then a unified consciousness might be possible.

    Of course, in the case of humans, the brain does seem to have a body image it maintains, and strong responses when its expectations about it are violated. This could result in the whole system being unavoidably insane. Or after a period of neurosis, maybe it would come to terms with itself. I think cognition is more robust than many assume.

    Not that I'd volunteer to go down that slope :-)

  2. Alternatively, it may be that “consciousness” refers to particular kinds of processes. Thus, an entity has “consciousness” to the extent that it performs one or more flavors of those particular consciousness-related kinds of processes. The “stream of consciousness” of that entity is simply a description of the processes that entity actually performs.

    If that is the case, then what we choose to include in the entity in question is arbitrary. We designate the boundaries of the particular physical system, and then we can identify the particular consciousness-type processes of which it is capable.

    So when looking at your example, we can look at each of the five individual humans or robots and identify the consciousness-type processes each performs, or we could look at the consciousness of the group (committee?) and ask which kinds of processes that group performs. And of course we can, as SelfAwarePatterns suggests, look inside an individual brain, identify sub-parts, and ask which consciousness-type processes each subpart performs.

    *

  3. The science fiction writer Peter Watts has written about a lot of these possibilities, either in his fiction (especially Blindsight) or on his blog.
    e.g.: https://www.rifters.com/crawl/?p=5875
    https://www.rifters.com/crawl/?p=8169

  4. And a most interesting quote [italics added] from Watts' Blindsight:

    "I went to ConSensus for enlightenment and found a whole other self buried below the limbic system, below the hindbrain, below even the cerebellum. It lived in the brain stem and it was older than the vertebrates themselves. It was self-contained: it heard and saw and felt, independent of all those other parts layered over top like evolutionary afterthoughts. It dwelt on nothing but its own survival. It had no time for planning or abstract analysis, spared effort for only the most rudimentary sensory processing. But it was fast, and it was dedicated, and it could react to threats in a fraction of the time it took its smarter roommates to even become aware of them.

    And even when it couldn’t—when the obstinate, unyielding neocortex refused to let it off the leash—still it tried to pass on what it saw, and Isaac Szpindel experienced an ineffable sense of where to reach. In a way, he had a stripped-down version of the Gang in his head. Everyone did."

  5. Thanks for the comments, folks!

    SelfAware: Yes, I agree that it will have to depend on the details of the joining. As for insanity -- maybe no more insane than a committee?

    James: I feel the pull of that thinking, but it seems more anti-realist about phenomenology than I prefer to be. I think there are facts of the matter about conscious experience such that there are right and wrong cuts, rather than its being open to arbitrariness. This makes it hard for me to simply accept the type of view I describe here.

    Steven and Stephen: Thanks for the tips on Watts! I have read Blindsight, but I wasn't familiar with his blog.

  6. Perhaps what is missing here is a recognition that individual consciousness is already more than a simple uni-directional stream. Our individual consciousness can only exist as part of a meaningful world expressed through a shared language. Consciousness grows because it's immersed in a culture that exists through all of us recognising it.

  7. Eric, I don’t think I’m anti-realist about phenomenology. (Maybe I don’t sufficiently understand those terms.) I agree there are facts of the matter about conscious experience, but I think those facts are determined by how one defines or identifies a conscious experience, and I think that is best done by starting with generic processes and then deciding what constraints on a process are required to make it a “conscious experience” type process. And then you can start looking around in the brain (and other places ... committees?) for those types of processes.

    *

  8. I have commented before that we have multiple personality disorder as one model:
    https://www.researchgate.net/profile/John_Morton4/publication/317186301_Interidentity_amnesia_in_dissociative_identity_disorder/links/5968fa1ba6fdcc18ea6f1b17/Interidentity-amnesia-in-dissociative-identity-disorder

    reviews the literature (incl priming and imaging studies) and describes new experimental results. There seems to be a pattern that in some individuals the apparent separate consciousnesses have separate episodic memories. Even if one believes the "Sociocognitive Model" that these individuals are simulating the symptoms, this is completely consistent with the idea that the Self is an illusion anyway.

  9. This has something to do with the fact that the 'stream of thought' metaphor is supposed to shake us loose of bad old 'Cartesian theater of the mind' thinking. But a stream is a different - highly particular - spatio-temporal entity. So we get a bit caught up with various more or less plausible commingling analogies or 'don't cross the streams!' anxieties. I think this is by way of saying: what you say is quite plausible and we are only likely to resist it if we are too attached to the physical associations of 'stream'. Because, by the end of the slippery slope, the intuitive 'streaminess' is gone. But that's just a metaphor dying a possibly natural death.

    I think maybe I just wrote a poem about it, weirdly enough. But that is not dispositive.

    https://twitter.com/jholbo1/status/1090416595912933376

  10. Hey Eric - I really like this post, and meant to comment days ago. I think this thought-experiment is really good, and I agree with the conclusion (I think - though I might phrase it differently). The 8th chapter of my book is actually entirely dedicated to working through a version of exactly this idea (I do it with radio communication between two human brains) - including some of the practical questions SelfAwarePatterns raises. I can share that chapter if you’re interested.

    My reason for saying I might not phrase things quite the way you do is that I don’t try to think in terms of ‘not whole numbers’ of minds, because I don’t have any good way to quantify ‘0.4 of’ a mind or anything like that. I’d prefer to say something like: there’s no objectively privileged threshold for ‘how unified’ two mental events have to be for us to attribute them to a single mind (or, to come at it slightly differently, to call whatever physical system they arise in ‘a single subject’). Using higher or lower thresholds will yield different divisions into minds, and the most natural way to capture this is to think of a single composite mind (all of whose contents meet a fairly low threshold for unity, but not a high one), containing component minds which are more unified (and hence count as distinct subjects if we use a higher threshold). We could also speak of ‘half minds’ or similar: either way we have to recognise that minds are not necessarily discrete from one another, and may share experiences or even entirely contain each other.

    (Oh, and since you mention realism and anti-realism about consciousness, I think the basic non-discreteness of minds has to be and can be recognised by different theories of consciousness, though the details do look rather different. Part of my aim in my book is to disentangle the way these issues arise for panpsychists, who have been worrying about them a lot recently, and for other sorts of theories.)

  11. Thanks for the continuing comments, folks!

    John: Hilarious poem!

    Chris: I'm not sure how that addresses the discreteness issue, though?

    James: Maybe "anti-realist" isn't the right word. Here are two ways of thinking about the brain and consciousness: (1.) Consciousness is this phenomenon that philosophers point to by saying things like "qualia" and "what it's like", and there's an empirical question about what it relates to in the brain. (2.) There are a bunch of different processes in the brain that have functional properties that we associate with consciousness. It's a terminological decision which brain states and which functional properties we call the "conscious" ones, NOT a question of which among those processes really matches up with consciousness in sense (1). I thought I was hearing a (2)-type view.

    David: I have a grad student who is thinking about the same thing!

    Unknown: I wish I knew who you were! I like your phrasing. I think of non-whole numbers not as the right way to go but rather as a way of making vivid the weirdness of discrete steps between whole numbers. I'd be interested to see your chapter if you'd like to send it along.

  12. Yeah, my philosophy is that the main advantage of minds is that they are really funny. So there are some good jokes out there!

  13. Sorry, I must have messed up the commenting system somehow, 'Unknown' is me, Luke Roelofs.

  14. Eric, thanks for engaging. I am suggesting that those two views (“what it is like” v. various different functions) are not mutually exclusive. I’m suggesting that there is a particular type of process that creates a situation that is best explained as qualia or what it’s like. To be explicit, I suggest that a process which has a symbolic sign vehicle (to use a Peirce expression) as input is a conscious-type process. Any reference to that process from a subjective view will necessarily reference the object of the sign, i.e., the “meaning” of the symbol, and only that object. Thus, a process which has a symbol which means “red” as input will necessarily be referred to by the subject as a “feeling of red”, a quale of “red”.

    Does that make sense as a possibility?

    *

  15. James, yes, that makes sense as a possibility to me -- but I think there are many possibilities!
