Thursday, July 12, 2018

Two Ways of Being a Group Mind: Synchronic vs. Diachronic

Based on last week's post, I am now seeing ads for sunglasses everywhere, as if to say "Welcome, Eric, to the internet hypermind! Did you say SUNGLASSES?!"

Speaking of hyperminds....

I want to distinguish two ways of being a group mind, since I know you care immensely about the cognitive architecture of group minds. (I'm a dork.) My thought is that the philosophical issues of group consciousness and personal identity play out differently in the two types of case.


Synchronic Group Minds:

Examples: Star Trek's Borg (mostly), Ann Leckie's ancillaries, highly interconnected nations (according to me), Vernor Vinge's tines.

Synchronic group minds are probably the default conception. A bunch of independent, or semi-independent, or independent-enough entities remain in constant or constant-enough communication with each other. Their communication is sufficiently rich or sufficiently well structured that it gives rise to group-level mental states in the whole. In the most interesting case, the individual entities are distinct enough to have rich mental lives but also the group is well enough coordinated that it also, simultaneously, has a distinctive, rich mental life above and beyond that of its members.


Diachronic Group Minds:

Examples: David Brin's Kiln People, Linda Nagata's ghosts, Benjamin Kinney's forks.

In a diachronic group mind, communication is relatively rare but high bandwidth and transformative. Post-communication, the individuals inherit mental states from the others. In the most interesting case, the inheritance process is very different from just listening credulously to someone's testimony; rather it's more like a direct transfer of memories, plans, and opinions, maybe even values or personality. Imagine "forking" into three versions of yourself, going your separate ways for the day, and then at the end of the day merging back into a single individual, integrating the memories and plans of each, and averaging any general changes in value, opinion, or personality. Tomorrow and every day thereafter, you will fork and merge in the same way.

[Cerberus might not be integrated enough to be a good example of a group mind, but I didn't want to attach another darn picture of the Borg.]

Tradeoffs Between Group-Level and Individual-Level Personhood and Autonomy:

As I have described them here, the fundamental difference between these two types of group mind is the delay between information-transfer episodes: whether the minds are in constant (or constant-enough) communication, or whether instead they communicate only at long intervals. Obviously, temporal distance admits of degree, but this difference in degree creates structural pressures. If communication is infrequent, its effects have to be radical if it is to give rise to an entity sufficiently integrated to be worth calling a "group mind". If a group of friends meets every day to exchange information and plan the next day's activities, in the ordinary way people sometimes do, I suppose that in some weak sense they have formed a group mind. But they haven't done so in the radical science-fictional sense I'm imagining. For example, if there were five friends who did this, there would still be exactly five persons -- entities with serious rights whose destruction would be worth calling murder. For the emergence of something more metaphysically and morally interesting, the exchange has to be radical enough to challenge the boundaries of personal identity.

Conversely, if communication is constant and its effects are radical, it's not clear that we have a group of individuals in any interesting sense: We might just have a single non-group entity that happens to be spatially scattered (as in my Martian Smartspiders).

In other words, for there to be a philosophically interesting group entity, there must be some sort of interestingly autonomous mentality both at the individual level and at the group level. Massive transformative communication (as in diachronic merging of memories and values) radically reduces autonomy: If communication is both massively transformative and very frequent, there's no chance for interesting person-like autonomy at the individual level. If communication is neither massively transformative nor very frequent, there's no chance for interesting person-like autonomy at the group level.


Our intuitive judgments about group-level consciousness are probably pretty crappy (as I've argued here and here). But our general theories about consciousness as they apply to the group level are probably even crappier (as I've argued here and here). At the same time, whether the group as a whole has a stream of conscious experience over and above the consciousness of its individual members seems like a very important question if we're interested in its mentality and whether it deserves moral status as a person. So we're kind of stuck. We'll have to guess.

Plausibly, in the diachronic case there is no stream of consciousness beyond that of the merging individuals. When there's one body at night, there's one stream of consciousness (at most, if it's dreaming). When there are three bodies off doing their thing, there are three streams of consciousness. We might be able to create some problematic boundary cases during the merge, but maybe that's marginal enough to dismiss with a bit of hand waving.

The synchronic case is, I think, more conceptually challenging with respect to consciousness. If we allow that minimally interactive groups do not give rise to group level consciousness and we also allow that a fully informationally integrated but spatially distributed entity does give rise to consciousness, it seems that we can create a slippery slope from one case to the other by adding more integration and communication (for example here). At some point, if there is enough coherent behavior, self-representation, and information exchange at the group level, most standard functionalist views of consciousness (unless they accept an anti-nesting principle) should allow that each individual member of the group would have a stream of experience and also that there would be a further, different stream of experience at the group level. But it's a tricky question how much integration and information exchange, and with what kind of structural properties, is necessary for group-level consciousness to arise.


One interesting issue that arises is the extent to which an individual's beliefs about what counts as "self-interest" and "death" define the boundaries of their personhood. Consider a diachronic case: You are walking back home after your day out and about town, with a wallet full of money and interesting new information about a job opportunity tomorrow, and you are about to merge back together with the two other entities you forked off from this morning. Is this death? Are "you" going to be gone after the merge, your memories absorbed into some entity who is not you (but who you might care about even more than you care about yourself)? In walking back, are you magnanimously sacrificing your life to give your money and information to the entity who will exist tomorrow? Would it be more in your self-interest to run away and blow your wad on something fun for this current body? Or, instead, will it still be "you" tomorrow, post-merge, with that information and that money? To some extent, in unclear cases of this sort, I think it might depend on how you think and feel about it: It's to some extent up to you whether to conceptualize the merging together as death or not.

A parallel issue might arise with synchronic groups, though my hunch is that it would play out differently. Synchronic groups, as I'm imagining them, don't have identity-threatening splits and merges. The individual members of synchronic groups would seem to have the same types of rights that otherwise similar individuals who aren't members of synchronic group minds would have -- rights depending on (for example, but it's not this simple) their capacity to suffer and think and choose as individuals. They might choose, as individuals, to view the group welfare as much more important than their own welfare (as a soldier might choose to die for the sake of country); but unless there's some real loss of autonomy or consciousness, this doesn't threaten their status as persons or redefine the boundaries of what counts as death.


Possible Architectures of Group Minds: Perception (May 4, 2016)

Possible Architectures of Group Minds: Memory (Jun 14, 2016)

Group Minds on Ringworld (Oct 24, 2012)

If Materialism Is True, the United States Is Probably Conscious (academic essay in Philosophical Studies, 2015)

Our Moral Duties to Monsters (Mar 8, 2014)

Choosing to Be That Fellow Back Then: Voluntarism about Personal Identity (Aug 20, 2016)

[image source]

Friday, July 06, 2018

How to Create Immensely Valuable New Worlds by Donning Your Sunglasses

[A satisfied REI customer, creating whole gobs of new worlds.]

One world is good, so two is better, right?

(If you think one world is bad and two are worse, just invert the reflections below.)

Here's one way to look at it: Run a universe, or a world, from beginning to end, sum up all the good stuff, subtract all the bad stuff, and note the (hopefully positive) total. Now consider, from your end-of-the-universe God's-eye point of view: Should you launch another world similar to the previous one? Well, of course you should! There would be even more good stuff, and a higher positive total, after that second world has been run. Similar considerations suggest that two good worlds running in parallel would also be better than a single good world.

(To avoid problems with summing infinitudes, let's assume finite worlds with finite value. For some complexities regarding the value or disvalue of repetition specifically see this earlier post.)

Now on some interpretations of quantum mechanics, every time there's a quantum event with different possible outcomes, all of the outcomes occur, each in a different world. Such "many-worlds" interpretations often describe the world as "splitting" into two worlds -- one world in which Outcome A occurs and one in which Outcome B occurs. You too, the observer, will split: One copy of you goes into World A, observing Outcome A, and the other goes into World B, observing Outcome B.

In a classic article on the many-worlds interpretation of quantum mechanics, Bryce S. DeWitt writes:

This universe is constantly splitting into a stupendous number of branches, all resulting from the measurementlike interactions between its myriads of components. Moreover, every quantum transition taking place on every star, in every galaxy, in every remote corner of the universe is splitting our local world on earth into myriads of copies of itself.... I still recall vividly the shock I experienced on first encountering this multiworld concept. The idea of 10^100+ slightly imperfect copies of oneself all constantly splitting into further copies, which ultimately become unrecognizable.... (DeWitt 1970, p. 33).

Notice that on DeWitt's portrayal, there are a finite but large number of worlds: 10^100+. Now here's the normative question I want to consider: Is there positive value -- would it be ethically or prudentially or aesthetically good -- to increase the amount of splitting, so that there are more worlds rather than fewer?

On the face of it, it seems that, yes, it would be good if there were more worlds. If each world independently considered is good, then plausibly more worlds is better. Suppose World W has positive value V. Suppose now that it splits into two worlds that are very similar but not identical: W1 with value V1 and W2 with value V2. If we assume V1 ~= V2 ~= V and that the value of worlds is additive, then after the split the whole W1+W2 is approximately twice the value of W. We have doubled the amount of value that the cosmos as a whole contains!

Now you might object in one of three ways:

(1.) You might reject the whole splitting-worlds interpretation. Fair enough! But then you're not really playing the game. I'm interested in thinking about the normative consequences assuming that the world does split as DeWitt describes.

(2.) You might reject the assumption that V1 ~= V. For example, you might think that after splitting, each world has only approximately half the value that the world before the split had, so that V1 + V2 ~= V. Then no value would be gained by splitting. A tempting thought, perhaps. But why would splitting make a world half as valuable? It's not clear. One consequence of this view is that our world, constantly splitting, would be constantly halving in value, so that its value is plummeting by orders of magnitude every second. Hmmm, that seems no less bizarre than the idea that splitting doubles the value of the cosmos. Now if you thought that the other worlds already existed before the split, then you might reject V1 ~= V, since the subset of worlds in V1 would be about half (or whatever) of the total number of worlds in V. But the whole idea of splitting worlds is not that the worlds already existed. It's that they are created. So we're talking about whether creating a new valuable world adds value to the cosmos as a whole. Pending a good argument otherwise, it seems like it should.

(3.) You might reject additivity. You might say that although V1 ~= V2 ~= V, you can't simply sum V1 and V2 to get ~= 2V. I can feel the pull of this idea. If the worlds were strictly identical, you might say "well, there's no point in duplicating everything again"! But (a.) The worlds are not strictly identical, and in fact (given chaos theory) they might diverge increasingly over time. And (b.) Duplication plausibly adds value when worlds are temporally separated (at the end of the universe, it seems like not re-running the thing again would involve omitting possible future pleasures that future inhabitants of the world would have), and side-by-side splitting wouldn't seem to be very normatively different in principle.
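The contrast between the additive view and objection (2)'s value-halving view can be made concrete with a toy calculation. Here's a minimal sketch (my illustration, with arbitrary hypothetical numbers, not from the post): under additivity, total cosmic value grows as 2^k with the number of splits k, while under the halving view it stays flat.

```python
def total_value_additive(v, splits):
    """Additive view: each split yields two worlds of value ~v, and totals simply add."""
    return v * (2 ** splits)

def total_value_halving(v, splits):
    """Objection (2): each split halves the value of the resulting worlds,
    so the total across all 2**splits worlds stays constant."""
    return (v / (2 ** splits)) * (2 ** splits)

v = 100.0  # arbitrary starting value for world W
for k in (0, 1, 10, 20):
    print(k, total_value_additive(v, k), total_value_halving(v, k))
```

On the additive view, twenty splits multiply cosmic value by about a million; on the halving view, our world's value is dropping by the same factor over the same interval.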

Okay, let's pretend you're convinced. When new worlds arise through quantum mechanical splitting, that's terrific. Whole realms of value are added! The value of the cosmos as a whole approximately doubles. Now, accepting this, is there anything you should do differently?

Consider the following possible principle:

Conservation of Splitting. Over any time interval t, the world will split N times, no matter what you do.

As far as I can see, there's no reason to accept Conservation of Splitting. It seems like you can create situations in which there would be relatively more splitting or relatively less splitting. You can run more quantum mechanical experiments or fewer. You can make more quantum mechanical measurements or fewer. And if you run more experiments and make more measurements, there will be more splitting, and thus more worlds.

For example, that pair of polarized sunglasses you have, sitting in that dark sunglass case in that dark drawer? When a photon passes or fails to pass through a polarized lens, that's a quantum mechanical event -- a chance event, which, on this interpretation, results in a splitting of worlds. In some worlds the photon goes through; in others it does not. You could take those sunglasses out of the drawer. You could go to the beach and wear them in the sun. Many more photons will pass through those lenses! Maybe about 10^18 more photons per second. If each of those photons has an independent quantum chance of passing or not passing through those lenses, splitting the world, then you're creating 2^10^18 new worlds per second just by sitting on the beach -- worlds that would not have been created had the sunglasses remained cased in the drawer. Think of all the value you're adding to the cosmos!
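For a sense of scale: 2^(10^18) is far too large to write out, so the only practical way to handle the branch count is in logarithms. A quick sketch (using the post's rough figure of 10^18 photons per second; this is a back-of-envelope illustration, not a physical calculation):

```python
import math

photons_per_second = 10 ** 18  # rough figure from the post

# The number of branches generated by n independent binary quantum events
# is 2**n. Since 2**(10**18) cannot be evaluated directly, work in logs:
log10_worlds = photons_per_second * math.log10(2)

print(f"log10(new worlds per second) ~ {log10_worlds:.4e}")
# A number with roughly 3 x 10**17 digits -- per second of beach time.
```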

Effective altruism is a movement that recommends using reason and evidence to do the most good you can do. Normally, effective altruists recommend doing things like donating money to charities that effectively help to alleviate suffering due to poverty. But the value of saving one life has to be tiny compared to the value of creating 2^10^18 new universes per second! So instead of staying indoors in the shade, writing a check to the Against Malaria Foundation, maybe you'd do better to spend the day at the beach.

Even if you have only a 0.001% credence that the splitting worlds interpretation of quantum mechanics is true, the expected utility of sitting on the beach might far exceed that of donating to poverty relief.
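The structure of that expected-utility comparison can be sketched in a few lines (hypothetical numbers throughout; the per-world value and the value of saving a life are placeholders I've made up, and again the arithmetic must be done in logs):

```python
import math

credence = 1e-5  # a mere 0.001% credence in the splitting-worlds interpretation
log10_worlds_per_second = 10 ** 18 * math.log10(2)  # from the sunglasses estimate
value_of_saving_a_life = 1e7  # any finite placeholder figure will do

# EV(beach) = credence * (worlds created) * (value per world, here ~1).
# In log terms, the tiny credence barely dents the astronomical world count:
log10_ev_beach = math.log10(credence) + log10_worlds_per_second
log10_ev_donation = math.log10(value_of_saving_a_life)

print(log10_ev_beach > log10_ev_donation)
```

The comparison comes out the same way no matter how small the (nonzero) credence or how large the finite value of the donation: subtracting 5 from an exponent of ~3 x 10^17 changes nothing.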

I know that it doesn't seem intuitively very plausible that going to the beach is far morally better than donating money to effective, life-saving charities, but consequentialist philosophers are often willing to admit that what's ethically best might not match our intuitions about what's ethically best.

PS: I feel so bad about making this argument that I just donated to Oxfam.


Related Posts:

Duplicating the Universe (Apr 29, 2015)

Goldfish Pool Immortality (May 30, 2014)

How Robots and Monsters Might Break Human Moral Systems (Feb. 3, 2015)

[image source]