Over the course of a few related posts, I'll consider various possible architectures for superhuman group minds. Such minds regularly appear in science fiction -- e.g., Star Trek's Borg and the starships in Ann Leckie's Ancillary series -- but rarely do these fictions make the architecture entirely clear.
One cool thing about group minds is that they have the potential to be spatially distributed. The Borg can send an away team in a ship. A starship can send the ancillaries of which it is partly composed down to different parts of the planet's surface. We normally think of social groups as having separate minds in separate places, which communicate with each other. But if mentality (instead or also) happens at the group level, then we should probably think of it as a case of a mind with spatially distributed sensory receptors.
(Elsewhere, I've argued that ordinary human social groups might actually be spatially distributed group minds. We'll come back to that in a future post, I hope.)
So how might perception work, in a group mind?
Central Versus Distributed Perceptual Architecture:
For concreteness, suppose that the group mind is constituted by twenty groups of ten humanoids each, distributed across a planet's surface, in contact via relays through an orbiting ship. (This is similar to Leckie's scenario.)
If the architecture is highly centralized, it might work like this: Each humanoid aims its eyes (or other sensory organs) toward a sensory target, communicating its full bandwidth of data back up to the ship for processing by the central cognitive system (call it the "brain"). This central brain synthesizes these data as if it had two hundred pairs of eyes across the planet, using information from each pair to inform its understanding of the input from other pairs. For example, if the ten humanoids in Squad B are flying in a sphere around an airplane, each viewing the airplane from a different angle, the central brain forms a fully three-dimensional percept of that airplane from all ten viewing angles at once. The central brain might then direct humanoid B2 to turn its eyes to the left because of some input from B3 that makes that viewpoint especially relevant -- something like how, when you hear a surprising sound to your left, you spontaneously turn your eyes in that direction, swiftly and naturally coordinating your senses.
Two disadvantages of this architecture are the bandwidth of information flow from the peripheral humanoids to the central brain and the possible delay of response to new information, as messages are sent to the center, processed in light of the full range of information from all sources, and then sent back to the periphery.
A more distributed architecture puts more of the information processing in the humanoid periphery. Each humanoid might process its sensory input as best it can, engaging in further sensory exploration (e.g., eye movements) in light of only its own local inputs, and then communicate summary results to the others. The central brain might do no processing at all, serving only as a relay point that bounces the streaming messages from each of the 200 humanoids to the others without modification. The ten humanoids around the airplane might then each have a single perspectival percept of the plane, with no integrated all-around percept.
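Purely as an illustrative toy, here's one way the two extremes might be sketched in code. Everything in it is invented for this post -- the RawInput class, the reduction of a sensory stream to a single number, the averaging that stands in for "processing" -- so treat it as a diagram of information flow, not a model of perception.

```python
# Toy contrast between the two extreme architectures. All names and numbers
# are invented; "processing" is just averaging, standing in for genuine
# perceptual synthesis.

from dataclasses import dataclass
from statistics import mean

@dataclass
class RawInput:
    humanoid_id: str
    viewing_angle: float   # where this humanoid's eyes are currently pointed
    signal: float          # stand-in for a full-bandwidth sensory stream

def centralized(inputs: list[RawInput]) -> dict:
    """Every humanoid streams raw data up to the ship; the central brain forms
    one integrated percept and sends orders back down to each humanoid."""
    integrated_percept = {
        "angles_covered": sorted(i.viewing_angle for i in inputs),
        "estimate": mean(i.signal for i in inputs),   # one all-around percept
    }
    # The center sends its integrated percept (and, in a fuller model,
    # eye-movement directives) back down; each humanoid adopts it wholesale.
    orders = {i.humanoid_id: integrated_percept for i in inputs}
    return {"percept": integrated_percept, "orders": orders}

def distributed(inputs: list[RawInput]) -> dict:
    """Each humanoid processes only its own input and broadcasts a summary;
    the central brain is a mere relay, doing no processing of its own."""
    local_percepts = {i.humanoid_id: {"angle": i.viewing_angle,
                                      "estimate": i.signal}
                      for i in inputs}
    # The relay bounces every summary to every humanoid, unmodified.
    relayed = {hid: local_percepts for hid in local_percepts}
    return {"percepts": local_percepts, "relayed": relayed}

squad_b = [RawInput(f"B{n}", viewing_angle=36.0 * n, signal=0.8 + 0.01 * n)
           for n in range(1, 11)]
print(centralized(squad_b)["percept"]["estimate"])
print(distributed(squad_b)["percepts"]["B2"])
```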
Obviously, a variety of compromises are possible here. Some processing might be peripheral and some might be central. Peripheral sources might send both summary information and high-bandwidth raw information for central processing. Local sensory exploration might depend partly on information from others in the same group of ten, from others in the other 19 groups of ten, or from the central brain.
At the extreme end of central processing, you arguably have just a single large being with lots of sensory organs. At the extreme end of peripheral processing, you might not want to think about the system as a "group mind" at all. The most interesting group-mind-ish cases have both substantial peripheral processing and substantial control of the periphery either by the center or by other nodes in the periphery, with a wide variety of ways in which this might be done.
Perceptual Integration and Autonomy:
I've already suggested one high-integration case: having a single spherical percept of an airplane, arising from ten surrounding points of view upon it. The corresponding low-integration case is ten different perspectival percepts, one for each of the viewing humanoids. In the first case, there's a single coherent perceptual map that smoothly integrates all the perceptual inputs; in the second case, each humanoid has its own distinct map (perhaps influenced by knowledge of the others' maps).
This difference is especially interesting in cases of perceptual conflict. Consider an olfactory case: The ten humanoids in Squad B step into a meadow of uniform-looking flowers. Eight register olfactory input characteristic of roses. Two register olfactory input characteristic of daffodils. What to do?
Central dictatorship: All ten send their information to the central brain. The central brain, based on all of the input, plus its background knowledge and other sorts of information, makes a decision. Maybe it decides roses. Maybe it decides daffodils. Maybe it decides that there's a mix of roses and daffodils. Maybe it decides it is uncertain, and the field is 80% likely to be roses and 20% likely to be daffodils. Whatever. It then communicates this result to each of the humanoids, who adopt it as their own local action-guiding representation of the state of the field. For example, if the central brain says "roses", the two humanoids registering daffodil-like input nonetheless represent the field as roses, with no more ambivalence about it than any of the other humanoids.
Winner-take-all vote: There need be no central dictatorship. Eight humanoids might vote roses versus two voting daffodils. Roses wins, and this result becomes equally the representation of all.
Compromise vote: Eight versus two. The resulting shared representation is either a mix of the two flowers, with roses dominating, or some feeling of uncertainty about whether the field is roses (probably) or instead daffodils (possible but less likely).
Retention of local differences: Alternatively, each individual humanoid might retain its own locally formed opinion or representation even after receiving input from the group. A daffodil-smeller might then have a representation something like this: To me it smells like daffodils, even though I know that the group representation is roses. How this informs that humanoid's future action might vary. On a more autonomous structure, that humanoid might behave like a daffodil smeller (maybe saying, "Ah, it's daffodils, you guys! I'm picking one to take back to the daffodil-loving Queen of Mars"), or it might be more deferential to the group (maybe saying, "I know my own input suggests daffodils, but I give that input no more weight than I would give to the input of any other member of the group").
Finally, no peripheral representation at all: An extremely centralized system might involve no perceptual representations at all in the humanoids, with all behavior issuing directly from the center.
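To make that menu of options concrete, here's a toy sketch in the same spirit as the one above (self-contained, so it repeats the setup). Representing a percept as a bare label, the function names, and the eight-to-two split are all simplifications invented for illustration; a real perceptual system obviously would not reduce a meadow to a string.

```python
# Toy sketch of the conflict-resolution policies described above. Representing
# a percept as a bare label, and all the function names, are invented here
# purely for illustration.

from collections import Counter

local_percepts = ["roses"] * 8 + ["daffodils"] * 2   # Squad B in the meadow

def central_dictatorship(percepts, central_verdict):
    # The center decides however it likes; every humanoid adopts its verdict as
    # their own action-guiding representation, with no residual ambivalence.
    return {"group": central_verdict, "humanoids": [central_verdict] * len(percepts)}

def winner_take_all(percepts):
    # Majority vote; the winning label becomes the representation of all.
    winner, _ = Counter(percepts).most_common(1)[0]
    return {"group": winner, "humanoids": [winner] * len(percepts)}

def compromise(percepts):
    # The shared representation is a probability mix rather than a single label.
    mix = {label: n / len(percepts) for label, n in Counter(percepts).items()}
    return {"group": mix, "humanoids": [mix] * len(percepts)}

def retain_local_differences(percepts):
    # Each humanoid keeps its own locally formed representation, alongside
    # knowledge of what the group as a whole has settled on.
    group_view = compromise(percepts)["group"]
    return {"group": group_view,
            "humanoids": [{"mine": p, "group": group_view} for p in percepts]}

print(winner_take_all(local_percepts)["group"])          # roses
print(compromise(local_percepts)["group"])               # {'roses': 0.8, 'daffodils': 0.2}
print(retain_local_differences(local_percepts)["humanoids"][9])  # a daffodil-smeller
```

The fifth option, no peripheral representation at all, would simply drop the "humanoids" entries: the center would drive behavior directly, and there would be nothing humanoid-level left to aggregate.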
Conceptual Versus Perceptual:
There's an intuitive distinction between knowing something conceptually or abstractly and having a perceptual experience of that thing. This is especially vivid in cases of known illusion. Looking at the Müller-Lyer illusion, you know (conceptually) that the two lines minus the tails are the same length, but that's not how you (perceptually) see it.
The conceptual/perceptual distinction can cross-cut most of the architectural possibilities. For example, the minority daffodil smeller might perceptually experience the daffodils but conceptually know that the group judgment is roses. Alternatively, the minority daffodil smeller might conceptually know that her own input is daffodils but perceptually experience roses.
Counting Streams of Experience:
If the group is literally phenomenally conscious at the group level, then there might be 201 streams of experience (one for each humanoid, plus one for the group); or there might be only one stream of experience (for the group); or streams of experience might not be cleanly individuated, with 200 semi-independent streams; or something else besides.
The dictatorship, etc., options can apply to the group-level stream, as well as to the humanoid-level streams, perhaps with different results. For example, the group stream of consciousness might be determined by compromise vote (80% roses), while the humanoid streams of experience retain their local differences (some roses, some daffodils).
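Continuing the toy sketch above (again with invented numbers and, obviously, no pretense that a dictionary is a stream of consciousness), that mix-and-match across levels might look like this:

```python
# Toy illustration of mixing policies across levels: compromise at the group
# level, retention of local differences at the humanoid level. Invented purely
# for illustration.

from collections import Counter

local_percepts = ["roses"] * 8 + ["daffodils"] * 2

# Group-level "stream": an 80/20 compromise over the humanoids' inputs.
group_stream = {label: n / len(local_percepts)
                for label, n in Counter(local_percepts).items()}

# Humanoid-level "streams": each keeps its own locally formed percept.
humanoid_streams = list(local_percepts)

print(group_stream)           # {'roses': 0.8, 'daffodils': 0.2}
print(humanoid_streams[9])    # daffodils
```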
To Come:
Similar issues arise for group level memory, goal-setting, inferential reasoning, and behavior. I'll work through some of these in future posts.
I also want to think about the moral status of the group and the individuals, under different architectural setups -- that is, what sorts of rights or respect or consideration we owe to the individuals vs. the group, and how that might vary depending on the set-up.
------------------------------------------------
I would like to give your proposals full consideration; still, have you thought about how your group-mind approach differs from the society-as-an-organism theory put forth by thinkers such as Durkheim?
The evolution of groups, from everywhere in the Cosmos, working to allow the influence of phenomena for perception...
This is an awesome post, one I’m sure I’ll be linking for years to come. But I gotta ask, why worry about minds?
We understand things by plugging them into larger pictures. ‘Minds’ are a way to do that absent any information regarding the natural processes involved. The problem here is that it really doesn’t matter what kind of minds we foist on them, so long as it renders them tractable. Solutions are the only material ‘facts of the matter.’ As a result, there need not be much in the way of connection between the minds we attribute to them, the minds they attribute to themselves, or the actual physiological processes driving the whole show. They could be centralized or decentralized in physiological fact, and yet construe their mentality in opposite terms. The consciousness a node has versus the consciousness it reports could be wildly different. Raising the question, which ‘mind’ are we after here? Our attribution, their attribution, or -- as you primarily pursue here -- the engines running the show?
But if this last is what we’re after, why not simply talk in terms of information and physical systems? Why even mention ‘minds’? The (quite subtle) architecture you sketch doesn’t need them for one. For another, minds are clearly adapted to solving problems absent knowledge of physical systems! Since this amounts to saying mind talk takes those systems for granted, we can see thought experiments like these as exercises in ‘imaginative ecology swapping,’ pinging the limits of human mind talk by swapping out the physiology it’s primarily adapted to solve.
Think, for instance, of the way it problematizes representations. Representation talk allows us to track information absent access to any natural systems responsible. Only the invariance of that background allows us to neglect it, to solve problems absent any inkling of the systems involved. The more you alter that background, the less applicable representation becomes (the more controversial representation talk becomes). A hive brain might ‘experience’ daisies on a continuum of levels, with a complexity that we could simply not conceive, let alone troubleshoot with a tool so coarse grained and anthropocentric as ‘representation.'
These kinds of thought experiments force us to go ‘post intentional,’ I think.
Interesting comment, Scott! Here, as in some other places, we are so close to each other in certain respects, and yet also maybe so far apart that it's hard to know where to begin.
I am a realist about streams of experience -- and I think that if there are facts about the locus of a group stream, then so also there will be facts about mentality. Intentionality I am more of a reductivist about, but you seem to be closer to an eliminativist. The intentionality of the group might be constituted by the informational patterns but still be real and present. Much more to say here, of course -- a conversation, maybe, more than a comment.
Howie: Human societies are only the beginning of the possibilities. I do think it's possible that they are genuine group minds, though. (See my paper "If Materialism Is True, the United States Is Probably Conscious".)
Does being a realist about streams of experience mean that, for you, a locus can't be askew from where you'd normally position it in time and space? Or does it allow for the possibility of imperceptible currents of experience eluding in principle the reductive--and relational--capacities of human cognition, warping the vector sum toward an alien consciousness in our midst? In other words, is there room to conceive of streams of experience as, shall we say, parallactic as opposed to architectonic?
Take the probable consciousness of the United States. Certainly its own stream of experience includes currents that have no human correlate; otherwise it's simply an anthropomorphic construct.