Let's walk through one example from the paper, originally suggested by Sophie but jointly written for the final draft. I think it stands on its own without needing the rest of the paper as context. For the purposes of this argument, we assume that broadly human-like cognition and consciousness are possible in computers and that functional and informational processes are what matter to consciousness. (These views are widely but not universally shared among consciousness researchers.)
(Readers who aren't philosophers of mind might find today's post to be somewhat technical and in the weeds. Apologies for that!)
Suppose there are two robots, A and B, who share much of their circuitry. Between them hovers a box in which most of their cognition transpires. Maybe the box is connected by high-speed cables to each of the bodies, or maybe instead the information flows through high-bandwidth radio connections. Either way, the cognitive processes in the hovering box are tightly cognitively integrated with A's and B's bodies and the remainders of their minds -- as tightly integrated as is ordinarily the case within a unified mind. Despite the bulk of their cognition transpiring in the box, some cognition also transpires in each robot's individual body and is not shared with the other robot. Suppose, then, that A has an experience with qualitative character α (grounded in A's local processors), plus experiences with qualitative characters β, γ, and δ (grounded in the box), while B has experiences with qualitative characters β, γ, and δ (grounded in the box), plus an experience with qualitative character ε (grounded in B's local processors).
If indeterminacy concerning the number of minds is possible, perhaps this isn't a system with a whole number of minds. Indeterminacy, we think, is an attractive view, and one of the central tasks of the paper is to argue that such indeterminacy about the number of minds is possible in hypothetical systems like this one.
Our opponent -- whom we call the Discrete Phenomenal Realist -- assumes that the number of minds present in any system is always a determinate whole number. Either there's something it's like to be Robot A, and something it's like to be Robot B, or there's nothing it's like to be those systems, and instead there's something it's like to be the system as a whole, in which case there is only one person or subjective center of experience. "Something-it's-like-ness" can't occur an indeterminate number of times. Phenomenality or subjectivity must have sharp edges, the thinking goes, even if the corresponding functional processes are smoothly graded. (For an extended discussion and critique of a related view, see my draft paper Borderline Consciousness.)
As we see it, Discrete Phenomenal Realists have three options when trying to explain what's going on in the robot case: Impossibility, Sharing, and Similarity. According to Impossibility, the setup is impossible. However, it's unclear why such a setup should be impossible, so pending further argument we disregard this option. According to Sharing, the two determinately distinct minds share tokens of the very same experiences with qualitative characters β, γ, and δ. According to Similarity, there are two determinately distinct minds who have experiences with qualitative characters β, γ, and δ but not the very same experience tokens: A's experiences β1, γ1, and δ1 are qualitatively but not quantitatively identical to B's experiences β2, γ2, and δ2. An initial challenge for Sharing is its violation of the standard view that phenomenal co-occurrence relations are transitive (so that if α and β phenomenally co-occur in the same mind, and β and ε phenomenally co-occur, then so also do α and ε): on Sharing, α co-occurs with β in A's mind and β co-occurs with ε in B's mind, yet α and ε co-occur in no mind. An initial challenge for Similarity is the peculiar doubling of experience tokens: Because the box is connected to both A and B, the processes that give rise to β, γ, and δ each give rise to two instances of each of those experience types, whereas the same processes would presumably give rise to only one instance if the box were connected only to A.
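To put the transitivity worry semi-formally (this notation is ours for the blog post, not anything from the paper): write x ∼ y for "x and y phenomenally co-occur in the same mind". Then a minimal sketch of the problem for Sharing, under that assumed notation, runs:

```latex
% Semi-formal sketch of the transitivity problem for Sharing.
% Assumed notation (ours, not the paper's): x \sim y means
% "x and y phenomenally co-occur in the same mind".
\[
\text{Transitivity: } (x \sim y) \wedge (y \sim z) \rightarrow (x \sim z)
\]
% Sharing appears committed to all three of the following:
\[
\alpha \sim \beta, \qquad \beta \sim \varepsilon, \qquad \neg(\alpha \sim \varepsilon)
\]
% The first holds in A's mind, the second in B's mind, and the
% third because A and B are (on this view) determinately distinct
% minds. Jointly, the three claims contradict Transitivity.
```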
To make things more challenging for the Discrete Phenomenal Realist who wants to accept Sharing or Similarity, imagine a switch that turns off the processes in A and B that give rise to experiences α and ε, so that A's and B's total phenomenal experience comes to have an identical qualitative character. Flipping the switch will either collapse A and B into one mind or it will not. This creates a dilemma for both Sharing and Similarity.
If the defender of Sharing holds that the minds collapse, then they must allow that a relatively small change in the phenomenal field can radically reconfigure the number of minds. The point can be made more dramatic by increasing the number of experiences in the box and the number of robots connected to it. Suppose that 200 robots each have 999,999 experiences arising from the shared box, plus just one experience that's qualitatively unique and localized -- perhaps a barely noticeable circle in the left visual periphery for A, a barely noticeable square in the right visual periphery for B, etc. If a prankster were to flip the switch back and forth repeatedly, then on the collapse version of Sharing the system would shift back and forth between being 200 minds and being one mind, with almost no difference in the phenomenology.

If, however, the defender of Sharing holds that the minds don't collapse, then they must allow that multiple distinct minds could have the very same token-identical experiences grounded in the very same cognitive processors. This view raises the question of what ontologically individuates the minds; on some conceptions of subjecthood, it might not even be coherent. It appears to posit subjects that differ metaphysically but not phenomenologically, contrary to the general spirit of phenomenal realism about minds.
The defender of Similarity faces analogous problems. If they hold that the number of minds collapses to one, then, like the defender of Sharing, they must allow that a relatively small change in the phenomenal field can radically reduce the number of minds. Furthermore, they must allow that distinct, merely type-identical experiences somehow become one and the same when a switch is flipped that barely changes the system's phenomenology. But if they hold that there's no collapse, then they face the awkward possibility of multiple distinct minds with qualitatively identical but numerically distinct experiences arising from the very same cognitive processors. This appears to be an ontologically unparsimonious inflation of the phenomenal.
Maybe it will be helpful to have the possibilities for the Discrete Phenomenal Realist depicted in a figure:

[Figure: the Discrete Phenomenal Realist's options -- Impossibility, Sharing (collapse or no collapse), and Similarity (collapse or no collapse).]