Last fall I gave a couple of talks in Ohio. While there, I met an Oberlin undergraduate named Sophie Nelson, with whom I have remained in touch. Sophie sent some interesting ideas for my draft paper "Introspection in Group Minds, Disunities of Consciousness, and Indiscrete Persons", so I invited her on as a co-author, and we have been jointly revising. Check out today's new version!
Let's walk through one example from the paper, originally suggested by Sophie but mutually written for the final draft. I think it stands on its own without need of the rest of the paper as context. For the purposes of this argument, we assume that broadly human-like cognition and consciousness are possible in computers and that functional and informational processes are what matter to consciousness. (These views are widely but not universally shared among consciousness researchers.)
(Readers who aren't philosophers of mind might find today's post to be somewhat technical and in the weeds. Apologies for that!)
Suppose there are two robots, A and B, who share much of their circuitry. Between them hovers a box in which most of their cognition transpires. Maybe the box is connected by high-speed cables to each of the bodies, or maybe instead the information flows through high-bandwidth radio connections. Either way, the cognitive processes in the hovering box are tightly integrated with A's and B's bodies and the remainders of their minds -- as tightly connected as is typical within an ordinary unified mind. Despite the bulk of their cognition transpiring in the box, some cognition also transpires in each robot's individual body and is not shared by the other robot. Suppose, then, that A has an experience with qualitative character α (grounded in A's local processors), plus experiences with qualitative characters β, γ, and δ (grounded in the box), while B has experiences with qualitative characters β, γ, and δ (grounded in the box), plus an experience with qualitative character ε (grounded in B's local processors).
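If it helps to see the structure laid out explicitly, here's a minimal sketch in Python -- purely illustrative and not from the paper; the names BOX, LOCAL, and total_experience are just labels for this post -- of which qualitative characters figure in each robot's total experience:

```python
# Purely illustrative sketch (not from the paper): treat qualitative characters
# as mere labels, and a robot's total experience as the union of its locally
# grounded experiences and the box-grounded experiences.

BOX = {"beta", "gamma", "delta"}              # experiences grounded in the shared box
LOCAL = {"A": {"alpha"}, "B": {"epsilon"}}    # experiences grounded in each robot's own body

def total_experience(robot):
    """All the qualitative characters in this robot's total experience."""
    return LOCAL[robot] | BOX

print(sorted(total_experience("A")))   # ['alpha', 'beta', 'delta', 'gamma']
print(sorted(total_experience("B")))   # ['beta', 'delta', 'epsilon', 'gamma']
```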
If indeterminacy concerning the number of minds is possible, perhaps this isn't a system with a whole number of minds. Indeterminacy, we think, is an attractive view, and one of the paper's central tasks is to argue for the possibility of indeterminacy in the number of minds in hypothetical systems.
Our opponent -- whom we call the Discrete Phenomenal Realist -- assumes that the number of minds present in any system is always a determinate whole number. Either there's something it's like to be Robot A, and something it's like to be Robot B, or there's nothing it's like to be those systems, and instead there's something it's like to be the system as a whole, in which case there is only one person or subjective center of experience. "Something-it's-like-ness" can't occur an indeterminate number of times. Phenomenality or subjectivity must have sharp edges, the thinking goes, even if the corresponding functional processes are smoothly graded. (For an extended discussion and critique of a related view, see my draft paper "Borderline Consciousness".)
As we see it, Discrete Phenomenal Realists have three options when trying to explain what's going on in the robot case: Impossibility, Sharing, and Similarity. According to Impossibility, the setup is impossible. However, it's unclear why such a setup should be impossible, so pending further argument we disregard this option. According to Sharing, the two determinately different minds share tokens of the very same experiences with qualitative characters β, γ, and δ. According to Similarity, there are two determinately different minds who share experiences with qualitative characters β, γ, and δ but not the very same experience tokens: A's experiences β1, γ1, and δ1 are qualitatively but not quantitatively identical to B's experiences β2, γ2, and δ2. An initial challenge for Sharing is its violation of the standard view that phenomenal co-occurrence relationships are transitive (so that if α and β phenomenally co-occur in the same mind, and β and ε phenomenally co-occur, so also do α and ε). An initial challenge for Similarity is the peculiar doubling of experience tokens: Because the box is connected to both A and B, the processes that give rise to β, γ, and δ each give rise to two instances of those experience types, whereas the same processes would presumably give rise to only one instance if the box were connected only to A.
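One way to see the difference between Sharing and Similarity is as the difference between numerical (token) identity and mere qualitative (type) identity. Here's a rough programming analogy -- purely illustrative, and assuming only for the sake of the analogy that experiences can be crudely modeled as objects with a qualitative character: Sharing is like A and B holding references to the very same object, while Similarity is like each holding its own object that merely compares equal.

```python
# Illustrative analogy only: token identity ("is") vs. qualitative identity ("==").

class Experience:
    def __init__(self, character):
        self.character = character
    def __eq__(self, other):
        # Qualitative (type) identity: same qualitative character.
        return isinstance(other, Experience) and self.character == other.character

# Sharing: A and B have the very same experience token.
beta_token = Experience("beta")
a_beta, b_beta = beta_token, beta_token
print(a_beta is b_beta)    # True -- one token, two "owners"

# Similarity: A and B have distinct but qualitatively identical tokens.
a_beta2, b_beta2 = Experience("beta"), Experience("beta")
print(a_beta2 == b_beta2)  # True  -- same qualitative character
print(a_beta2 is b_beta2)  # False -- two tokens arising from one set of box processes
```

Of course, nothing in the argument depends on experiences really being object-like in this way; the analogy just marks the difference between one token with two owners and two tokens of one type.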
To make things more challenging for the Discrete Phenomenal Realist who wants to accept Sharing or Similarity, imagine that there's a switch that will turn off the processes in A and B that give rise to experiences α and ε, resulting in A's and B's total phenomenal experience having an identical qualitative character. Flipping the switch will either collapse A and B to one mind, or it will not. This leads to a dilemma for both Sharing and Similarity.
If the defender of Sharing holds that the minds collapse, then they must allow that a relatively small change in the phenomenal field can result in a radical reconfiguration of the number of minds. The point can be made more dramatic by increasing the number of experiences in the box and the number of robots connected to the box. Suppose that 200 robots each have 999,999 experiences arising from the shared box, and just one experience that's qualitatively unique and localized -- perhaps a barely noticeable circle in the left visual periphery for A, a barely noticeable square in the right visual periphery for B, and so on. If a prankster were to flip the switch back and forth repeatedly, on the collapse version of Sharing the system would shift back and forth between being 200 minds and being one, with almost no difference in the phenomenology. If, however, the defender of Sharing holds that the minds don't collapse, then they must allow that multiple distinct minds could have the very same token-identical experiences grounded in the very same cognitive processors. This view raises the question of what ontologically individuates the minds; on some conceptions of subjecthood, the view might not even be coherent. It appears to posit subjects that differ metaphysically but not phenomenologically, contrary to the general spirit of phenomenal realism about minds.
The defender of Similarity faces analogous problems. If they hold the number of minds collapses to one, then, like the defender of Sharing, they must allow that a relatively small change in the phenomenal field can result in a radical reduction in the number of minds. Furthermore, they must allow that distinct, merely type-identical experiences somehow become one and the same when a switch is flipped that barely changes the system's phenomenology. But if they hold that there's no collapse, then they face the awkward possibility of multiple distinct minds with qualitatively identical but numerically distinct experiences arising from the same cognitive processors. This appears to be ontologically unparsimonious phenomenal inflation.
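To make the numbers in the 200-robot case vivid, here's a toy bit of bookkeeping on the collapse horn of the dilemma -- again illustrative only, and the function names are ours -- showing how large the shift in the number of minds is relative to the shift in each robot's phenomenology when the prankster flips the switch:

```python
# Toy bookkeeping for the 200-robot case, on the collapse version of Sharing.

N_ROBOTS = 200
SHARED = 999_999    # box-grounded experiences per robot
LOCAL = 1           # qualitatively unique, locally grounded experience per robot

def number_of_minds(switch_on):
    # With the local processes switched off, every robot's total experience is
    # qualitatively identical, so on the collapse view the minds merge into one.
    return 1 if switch_on else N_ROBOTS

def experiences_per_robot(switch_on):
    return SHARED if switch_on else SHARED + LOCAL

for switch_on in (False, True):
    print(number_of_minds(switch_on), experiences_per_robot(switch_on))
# Prints "200 1000000" then "1 999999": a one-in-a-million change in each
# robot's phenomenology, and a 200-to-1 change in the number of minds.
```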
Maybe it will be helpful to have the possibilities for the Discrete Phenomenal Realist depicted in a figure. [Figure: Impossibility, Sharing, and Similarity, with collapse and no-collapse branches for the latter two.]
13 comments:
Four or more kinds of consciousness researchers: ...thanks...
...Consciousness in one, in everyone, outside one, outside everyone, between everyone...
For relating deep semantic brain function processing with meta-ontological realities...
Nice. I've also discussed some of these ideas in a paper on the Hogan twins. https://philpapers.org/rec/COCACO-6
I'm not understanding the setup.
"...experiences with qualitative characters β, γ, and δ (grounded in the box)..."
For the two robots, are these experiences supposed to be grounded by the same events in the box? I wouldn't have any difficulty thinking that a single piece of circuitry could function at one time as a part of the mind of Robot A, and at another time as part of the mind of Robot B. But if you're talking about a single set of physical events grounding β, γ, and δ for both robots simultaneously, then it's a different question.
So, this is still about AI? That means, I guess, anything I might allege, or hold, or claim about human minds is irrelevant. As it should, IMHO, be. Anyone curious may click on today's offering from Oxford's Practical Ethics. Therewith, I admitted it meant little to me. Discretion, in the 'minds' of intelligent machinery, begs better and independent inquiry. But, whoa, we can't teach AI to be discrete---or, can we? What I would say---am saying here---is that AI is property. Not a different holding from the view that robots are property, as was held decades ago. I don't get paid for my contrarian opinions.
The positions taken now, regarding AI and whether or not it may have a capacity for consciousness, remain obscure to me. This is more of the science fiction I loved, fifty years ago. As for AI, the result is the same. If garbage is deposited, garbage emerges.
Thanks for the comments, folks!
Tom: Yes! We cite you on this in the full MS.
Chinaphil: I’m not quite understanding your question. Are you asking about event-individuation? I’m willing to be flexible on those sorts of metaphysical details.
Paul: That’s not an unreasonable metaphysical view, but I also think the opposite is reasonable, so we can speculate on what would follow from one or the other possibility regarding AI consciousness.
Dear Eric:
Now that I have once read your paper with Sophie, I will offer a few observations and some opinion.
*your attention to nuance and detail is admirable. This is a lot of very good work.
*the premise(s) is/are elegant and I am suitably impressed by that as well.
*elegant speculation is a key ingredient to philosophical enquiry, check.
*AI 'minds' are not equivalent to human mind, IMHO.
*human consciousness is difficult to assess, because we still have no certifiable means to measure it, whatever it is---no quantitative or qualitative yardstick.
*I think, mind you only think, Descartes got as near as anyone else has when he announced "cogito ergo sum." That is, as a practical matter, the sum of things.
I am going to whip out my fine-toothed comb and read your paper again, if the internet and my connection thereto hold together. Thanks for allowing me here. Others appear to be having second thoughts, but I cannot prove that appearance. Why?
Because the internet, as a human enterprise, is not always, or in all ways, reliable. I have learned to expect no more than that. Later, then, PDV.
It seems to me like the discrete phenomenal realist has an out if they embrace the view that phenomenal character is essentially a single, unified representation for a conscious subject. On this view, there would be one subject whose experience had its phenomenal character constituted by their isolated parts as well as the parts in the box, and a distinct conscious subject who had the phenomenal character of their experience constituted by the isolated parts in *their* head as well as the parts in the box. If the latter subject's isolated parts were shut off, the former subject would nevertheless retain a complete conscious experience. If the former subject's isolated parts were shut off, the latter subject would still retain their full conscious experience. If the parts in the box were shut off, both subjects would lose their full conscious experience. Where is the problem?
This is not a critique. Of anyone here. The depth of this 'problem' is an abstraction, just as free will, as illusion, is an abstraction. Philosophy loves abstraction because, unless one is more pragmatic than abstract, the 'direction of fit' is more important than construction. Many have thought reality itself is illusion. I don't think so, unless one attaches the qualifier, contextual, to reality, as I did when coining contextual reality. I have said people hold that position, based on interest, preference and motivation. In other words, they make it up as they go. This is my extension of Davidson's propositional attitudes. Exacerbation of contextual reality emerges with the postmodern accent on excess, exaggeration and extremism. Am I making THIS up? No, as a conscious human, I am just paying attention. Carry on... and, best to all.
Interesting article! I like thinking about group minds myself.
Shane: I think the issue Eric raises for that view, from my understanding, is that it would allow a tiny phenomenological change to potentially cause a massive change in the number of minds, as in the example of 200 robots with 999,999 common experiences and only 1 non-shared experience each, where removing only the non-shared experiences would collapse 200 minds into one. However, it may be possible to bite the bullet on this.
I don't have strong feelings on unity at the moment, but I have at least some inclination towards the idea that the definition of a "subject" or "mind" just is a maximal set of unified parts of consciousness. (I think maximal set is correct here? What I think many people, including me, would be inclined to exclude is every subset of a unified set of mental experiences itself counting as a mind, although then again, I know Eric has talked about nested minds before). But anyway, because it's just definitional, it does seem trivial.
I think Bayne (whom Eric has mentioned in discussions of the unity of consciousness) would really like to discuss a more substantial idea of unity, or at least he says something like that in a paper with Chalmers that I read earlier. And I think Eric may also be talking about that idea of unity in previous articles I read. I need to read up again on that and Eric's other articles to get a grip on what that unity is supposed to be, though, and whether it's plausible that it always holds.
Paul D. Van Pelt: I'm also personally inclined to think that AI could be conscious, for various reasons, but of course no one has hard proof. But I'm actually thinking we may not need AI after all for the sort of situation presented in the post! At the very least, it doesn't seem obviously impossible to build a similar structure from biological tissue, such as a "main brain" connected to two smaller brains. And while it's not the same structure, it can be argued there are already cases of brains being deeply connected, for instance in the example Cochrane mentioned above of the Hogan twins. So even a skeptic about the possibility of AI consciousness would need further arguments to claim that no minds could mostly overlap, and I'm not optimistic about the chances of success for such arguments.
Thanks for the heads up. I will return to pen and ink, rather than the easier, yet risky, AI mode I have trusted. I do not know how or why an essay of mine was pirated from my tablet. But, stranger things are happening. Every day. Yes, consciousness is discrete, insofar as living things are discrete. Some of those living things have, as Edelman suggested, primary consciousness. Others, such as us---if you follow his reasoning---have consciousness of a higher order. I believe in temporal proximity, as well as spatial proximity. Finally, I will not commit anything further of importance to this device. Anyone is welcome to question my reasoning(s). Water is physics, not philosophy. Just saying...
This is all really interesting! I have a question and am curious how you’d respond. My question does not concern the indeterminacy issue, however.
Focus on one of the two (determinate) extremes: Aleph and the 200 robots constitute a single mind distributed across 201 locations. And that’s it. There’s exactly one mind here.
When Aleph represents that A1 represents the field as having, say, 60% blue flowers, is that representation introspective? Supposing that A1 and Aleph are part of a single mind might suggest that it *is* introspective.
Yet, as described in your paper, *Aleph* might not represent the field in that way, in which case Aleph wouldn’t (or shouldn’t) represent that it (Aleph) represents the field as having 60% blue flowers. Does Aleph’s representation about A1’s representation have the right form to be introspective? In some ways it’s similar to a representation about how one’s neighbor represents things, which suggests that it’s *not* introspective.
What you write at the top of page 10 suggests that you might express Aleph’s representation about A1 differently than I did above. Instead of “A1 represents the field as having 60% blue flowers”, it might be “my A1 cognition represents the field as having 60% blue flowers”. This is in first-person terms and so seems to have the right form for an introspective representation.
Yet although this is in first-person terms, it can be conjoined with either of the following representations: (a) "… but the field has 50% blue flowers"; or (b) "… but I represent the field as having 50% blue flowers." This makes me wonder whether the initial representation is truly introspective.
So, even if we suppose that A1 and Aleph are part of a single mind, I don’t know that Aleph’s representations about A1’s representations should count as introspective. What do you think about this?
P.s., I recognize that cases of known illusion in normal (non-sci-fi) cases are similar to this. My visual system represents the bottom line as being longer than the top line (Müller-Lyer illusion). Although I do not believe the bottom line is longer (since I am aware of the illusion), my representation that "my visual system represents the bottom line as being longer" seems to be introspective. And this is so even though I could conjoin this with either: "… but the bottom line is not longer", or "… but I do not believe that the bottom line is longer".
Interestingly, in this case the conflicting representations are of a different kind: the one is visual and the other is a belief. By contrast, we could imagine that Aleph and A1 each represents the percentage of blue flowers in the field in the same way, say, visually (though disagreeing on the number). Suppose that Aleph visually represents the percentage as 50 and that A1 visually represents the percentage as 60.
Now suppose that Aleph represents that A1 visually represents the field as having 60% blue flowers. Given that Aleph visually represents the field in a different way, it sounds a bit odd to me to count Aleph’s representation about A1 as introspective, even if that representation is expressed as: “my A1 cognition visually represents the field as having 60% blue flowers”.
Perhaps this is all confused, or too far removed from your main interests in the paper. Apologies if so!
I guess I'm not seeing quite where the problem is. I see this as settling into two situations which are familiar from our own experience.
1) If the central mind has a certain set of events associated with one robot-sensor, and a certain set of events associated with another (e.g. one during the day, and the other at night), then it just seems like a "split personality". This is a split personality with an additional physical component, but just the same as something that already exists(?) in mentally ill people.
2) If the central brain only has one single stream of thought-events, then I can't see any basis for splitting it into more than one "mind". I can feel an object with my left hand, and be conscious of it in a left-hand-plus-brain kind of a way; then feel it with my right hand and be conscious of it in a right-hand-plus-brain kind of a way. None of this makes me think that I represent more than one mind. I can't see how the robot example is different.
Thanks, Phil. There is a lot of complexity wrapped up in this. How much of it represents validity, and how much details interests, preferences and motives, I cannot discern. I try to assume a Nagelian view from nowhere, but that is so twentieth century. Last century, a musician friend and associate said: everybody's got their own album to do. While living in another country and working in a Food Science lab, I learned there were containers of PCB stored under a lab bench. This was revealed to me by the department head. On a brighter note, there was a box, an NMR machine, which sat idle because no one knew how to use it, or for what. Magnetic Resonance Imaging now saves, or at least prolongs, lives. I weighed the responsibilities against expectations. And, left. Connections and comparisons were beginning to bother me.