Here's something you might think you know about consciousness: It's unified and discrete.
When we think about the stream of conscious experience -- the series of conscious thoughts, perceptual experiences, felt emotions, images, etc., that runs through us -- we normally imagine that each person has exactly one such stream. I have my stream, you have yours. Even if we entertain very similar ideas ("the bus is late again!") each of those ideas belongs determinately to each of our streams of experience. We share ideas like we might share a personality trait (each having a full version of each idea or trait), not like we share a brownie (each taking half) or a load (each contributing to the mutual support of a single whole). Our streams of experience may be similar, and we may influence each other, but each stream runs separately without commingling.
Likewise, when we count streams of experience or conscious entities, we stick to whole numbers. It sounds like a joke to say that there are two and a half or 3.72 streams of conscious experience, or conscious entities, here in this room. If you and I are in the room with a snail and an anesthetized patient, there are either two conscious entities in the room with two streams of conscious experience (if neither snails nor people in that type of anesthetized state have conscious experiences), or there are three, or there are four. Even if the anesthetized patient is "half-awake", to be half-awake (or, alternatively, dreaming) is to be fully possessed of a stream of experience -- though maybe a hazy stream with confused ideas. Even if the snail isn't capable of explicitly representing itself as a thinker, if there's anything it's like to be a snail, then it has a stream of experience of its own snailish sort, unlike a fern, which (we normally think) has no conscious experiences whatsoever.
I find it hard to imagine how this could be wrong. And yet I think it might be wrong.
To start, let's build a slippery slope. We'll need some science fiction, but nothing too implausible, I hope.
At the top of the slope, we have five conscious human beings, or even better (if you'll allow it) five conscious robots. At the bottom of the slope we have a fully merged and unified entity with a single stream of conscious experience. At each step along the way from top to bottom, we integrate the original five entities just a little bit more. If they are humans, we might imagine growing neural connections, one at a time, between their brains, slowly building cross-connections until the final merged brain is as unified as one could wish. If necessary, we could slowly remove and reconfigure neurons during the process so that the final merged brain is exactly like a normal human brain.
Since the human brain is a delicate and bloody thing, it will be much more convenient to do this with robots or AI systems, made of silicon chips or some other digital technology, if we are willing to grant that under some conditions a well-designed robot or AI system could have a genuine stream of consciousness. (As intuitive examples, consider C-3PO from Star Wars or Data from Star Trek.) Such systems could be slowly linked up, with no messy neurosurgery required, and their bodies (if necessary) slowly joined together. On the top of the slope will be five conscious robots, on the bottom one conscious robot.
The tricky bit is in the middle, of course. Either there must be a sudden shift at exactly one point from five streams of experience to one (or four sudden shifts from exactly 5 to 4 to 3 to 2 to 1), as a result of ever so small a change (a single neural connection), or, alternatively, streams of experience must in some cases not be discretely countable.
To help consider which of these two possibilities is the more plausible, let's try some toy examples.
You and I and our friends are discretely different animals with discretely different brains in discretely different skulls, with only one mouth each. So we are used to thinking of the stream of conscious experience like this:
The red circles contain what is in our streams of conscious experience -- sometimes similar (A and A', which you and I share), sometimes different (C belongs only to you), all reportable out of our discrete mouths, and all available for the guidance of consciously chosen actions.
However, it seems to be only a contingent fact about the biology of Earthly animals that we are designed like this. An AI system, or an alien, might be designed more like this:
Imagine here a complex system with a large pool of representations. There are five distinct verbal output centers (mouths), or five distinct loci of conscious action (independent arms), each of which draws on some but not all of the pool of representations. I have included one redundant representation (F) and a pair of contradictory representations (B and -B) to illustrate some of the possible complexity.
In such a case, we might imagine that there are exactly five streams, though they overlap in some important way.
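As a concrete (and entirely hypothetical) illustration of that architecture, here is a minimal Python sketch: five output centers, each hard-wired to a sharp but overlapping subset of a shared representational pool. The labels follow the figure, with "F2" standing in for the redundant copy of F; the particular assignments of representations to mouths are invented for the example.

```python
# A shared pool of representations. "F2" is a redundant copy of "F";
# "B" and "-B" are a contradictory pair held in different regions.
POOL = {"A", "A'", "B", "-B", "C", "F", "F2"}

# Five output centers ("mouths"), each with fixed, sharp-boundaried
# access to some but not all of the pool. The regions overlap.
MOUTHS = {
    "mouth1": {"A", "B", "F"},
    "mouth2": {"A", "A'", "C"},
    "mouth3": {"-B", "F2"},        # can assert the contradiction of mouth1's B
    "mouth4": {"A'", "B", "C"},
    "mouth5": {"C", "F", "F2"},
}

def reportable(mouth):
    """The representations this output center can verbally report
    or use to guide action."""
    return MOUTHS[mouth] & POOL

# Overlap is the norm: mouth1 and mouth4 share B, while mouth3 can
# assert -B -- so the system as a whole harbors both B and -B.
shared = reportable("mouth1") & reportable("mouth4")
```

Even in this simple sharp-boundaried version, there is no fact about which single mouth "owns" a shared representation like B.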
But this is still much simpler than it might be. Now imagine these further complications:
1. There is no fixed number of mouths or arms over time.

2. The region of the representational pool that a mouth or arm can access isn't fixed over time.

3. The region of the representational pool that a mouth or arm can access isn't sharp-boundaried but is instead a probability function, where representations functionally nearer to the mouth or arm are very likely to be accessible for reporting or action, and representations far from the mouth or arm are unlikely to be accessible, with a smooth gradation between.
This picture aims to capture some of the features described:
Think of each color as a functional subsystem. Each color's density outside the oval is that system's likelihood, at any particular time, of being available for speech or action in that direction. Each color's density inside the oval is the likelihood of representations in that region being available to that subsystem for speech or action guidance. With a rainbow of colors, we needn't limit ourselves to a discretely countable number of subsystems. The figure also might fluctuate over time, if the probabilities aren't static.
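The probabilistic, fluctuating version can also be made concrete with a small toy model. Here is a minimal Python sketch (entirely my own illustration: the pool contents, their positions in an abstract "functional space", and the Gaussian decay of access probability are all invented for the example):

```python
import math
import random

random.seed(0)  # reproducible sampling for the illustration

# Representations at points in an abstract functional space.
POOL = {"A": (0.1, 0.2), "B": (0.5, 0.5), "-B": (0.55, 0.5),
        "C": (0.9, 0.8), "F": (0.3, 0.9), "F2": (0.32, 0.88)}

def access_prob(mouth_pos, rep_pos, scale=0.3):
    """Probability that a mouth can draw on a representation: near 1
    when functionally close, smoothly approaching 0 far away --
    no sharp boundary (complication 3)."""
    d = math.dist(mouth_pos, rep_pos)
    return math.exp(-(d / scale) ** 2)

def accessible(mouth_pos):
    """Sample which representations this mouth can report or act on
    at this particular moment."""
    return {name for name, pos in POOL.items()
            if random.random() < access_prob(mouth_pos, pos)}

# Two mouths at different loci. Their accessible regions overlap and
# fluctuate from sample to sample; mouth1 also drifts over time
# (complication 2). Complication 1 could be added by creating or
# removing mouths between time steps.
mouth1, mouth2 = (0.2, 0.3), (0.7, 0.7)
snapshots = []
for t in range(3):
    mouth1 = (mouth1[0] + 0.05, mouth1[1])  # slow drift
    snapshots.append((accessible(mouth1), accessible(mouth2)))
```

In this model there is no fact of the matter about exactly which representations belong to which mouth; each mouth has only a graded, moment-to-moment tendency to draw on each region of the pool.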
In at least the fluctuating rainbow case, I submit, countability and discreteness fail. Yet it is a conceivable architecture -- a possible instantiation of an intermediate case along our slippery slope. If such an entity could host consciousness, and if consciousness is closely related to its fluctuating, rainbow-like structural/functional features, then the entity's stream(s) of conscious experience cannot be effectively represented with whole numbers. (Maybe we could try with multidimensional vectors.)
Is this too wild? Well, it's not inconceivable that octopus consciousness has some features in this direction (if octopi are conscious), given the distribution of their cognition across their arms; or that some overlap occurs in unseparated craniopagus twins joined at the head and brain; or even -- reading Daniel Dennett in a certain way -- that we ourselves are structured not as differently from this as we normally suppose.
----------------------------------
Related:
A Two-Seater Homunculus (Apr 1, 2013);
How to Be a Part of God's Mind (Apr 23, 2014);
If Materialism Is True, the United States Is Probably Conscious (Philosophical Studies, 2015);
Are Garden Snails Conscious? Yes, No, or *Gong* (Sep 20, 2018).