Thursday, October 09, 2014

Possible Psychology of a Matrioshka Brain

Enclose the sun inside a layered nest of thin spherical computers. Have the inmost sphere harvest the sun's radiation to drive computational processes, emitting waste heat out its backside. Use this waste heat as the energy input for the computational processes of a second, larger and cooler sphere that encloses the first. Use the waste heat of the second sphere to drive the computational processes of a third. Keep adding spheres until you have an outmost sphere that operates near the background temperature of interstellar space.
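
To put rough numbers on the waste-heat cascade, here is a back-of-envelope sketch in Python. It treats each shell as an ideal blackbody re-radiating the full solar luminosity from its outer surface, ignores inner-surface re-radiation and absorption losses, and uses arbitrary illustrative radii:

```python
# Rough equilibrium temperatures for nested shells re-radiating the Sun's
# luminosity, via the Stefan-Boltzmann law: L = 4*pi*R^2 * sigma * T^4.
# Shell radii are arbitrary illustrative choices; inner-surface
# re-radiation and non-blackbody effects are ignored.
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26   # solar luminosity, W
AU = 1.496e11      # astronomical unit, m

def shell_temperature(radius_m: float) -> float:
    """Blackbody temperature of a sphere radiating L_SUN from radius radius_m."""
    return (L_SUN / (4 * math.pi * radius_m**2 * SIGMA)) ** 0.25

for r_au in (1, 3, 10, 30, 100, 300):
    print(f"shell at {r_au:>3} AU -> ~{shell_temperature(r_au * AU):6.1f} K")
```

On this crude model the innermost shell runs at a few hundred kelvin and temperature falls off as the square root of radius, so the outermost shells have to sit out at hundreds of AU or beyond before they approach interstellar temperatures.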

Congratulations, you've built a Matrioshka Brain! It consumes the entire power output of its star and produces many orders of magnitude more computation per microsecond than all of the current computers on Earth do per year.
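
How many orders of magnitude? A crude upper bound comes from the Landauer limit (roughly kT ln 2 joules per irreversible bit operation). Real hardware operates far above that limit, so the sketch below is only meant to gesture at the scale involved:

```python
# Back-of-envelope ceiling on computation rate if the entire solar
# luminosity were spent at the Landauer limit (k*T*ln2 per bit erased).
# Real hardware runs many orders of magnitude above this bound.
import math

K_B = 1.381e-23    # Boltzmann constant, J/K
L_SUN = 3.828e26   # solar luminosity, W

def landauer_ops_per_second(temperature_k: float) -> float:
    """Maximum irreversible bit operations per second at a given temperature."""
    return L_SUN / (K_B * temperature_k * math.log(2))

for t in (300, 30, 3):
    print(f"T = {t:>3} K -> ~{landauer_ops_per_second(t):.1e} bit-ops/s")
```

Even at a toasty 300 K the ceiling is around 10^47 bit operations per second -- vastly more than Earth's entire current computing stock manages.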

Here's a picture:

(Yes, it's black. Maybe not if you shine a flashlight on it, though.)

A common theme in discussions of super-duper-superintelligence is that we can have no idea what such a being would think about -- that a being so super-duper would be at least as cognitively different from us as we are from earthworms, and thus entirely beyond our ken. But I'd suggest (with Nick Bostrom, Eric Steinhart, and Susan Schneider) that we can think about the psychology of vast supercomputers. Unlike earthworms, we know some general principles of mentality; and, unlike earthworms, we can speculate, at least tentatively, about how these principles might apply to entities with computational power that far exceeds our own.

So...

Let's begin by considering a Matrioshka Brain planfully constructed by intelligent designers. The designers might have aimed at creating only a temporary entity -- a brief art installation, maybe, like a Buddhist sand mandala. These are, perhaps, almost entirely beyond psychological prediction. But if the designers wanted to make a durable Matrioshka Brain, then broad design principles begin to suggest themselves.

Perception and action. If the designers want their Brain to last, it probably needs to monitor its environment and adjust its behavior in response. It needs to be able to detect, say, an incoming comet that might threaten its structure, so that it can take precautionary measures (such as deflecting the comet, opening a temporary pore for it to pass harmlessly through, or grabbing and incorporating it). There will probably be engineering trade-offs between at least three design features here: (1) structural resilience, (2) ability to detect things in its immediate environment, and (3) ability to predict the future. If the structure is highly resilient, then it might be able to ignore threats. Maybe it could even lack outer perception entirely. But such structural resilience might come at a cost: either more expensive construction (or at least fewer options for construction) or loss of desirable computational capacity after construction. So it might make sense to design a Brain less structurally resilient but more responsive to its environment -- avoiding or defeating threats, as it were, rather than just always taking hits to the chin. Here (2) and (3) might trade off: Better prediction of the future might reduce the need for here-and-now perception; better here-and-now perception (coupled with swift responsiveness) might reduce the need for future prediction.
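
One toy way to picture the trade-off (with every number and functional form invented purely for illustration): give the designers a fixed budget to split among resilience, sensing, and prediction, and score each split by expected damage from threats plus the computational capacity sacrificed to armor:

```python
# Toy trade-off model for the three design features above: allocate a
# fixed budget across (1) resilience, (2) sensing, (3) prediction, and
# pick the allocation with the lowest expected loss. All numbers and
# functional forms here are made up for illustration.
from itertools import product

BUDGET = 10
THREAT_RATE = 5.0   # expected threats per unit time (invented)

def expected_loss(resilience: int, sensing: int, prediction: int) -> float:
    p_unnoticed = 1.0 / (1 + sensing + prediction)   # chance a threat is missed
    damage_if_hit = 10.0 / (1 + resilience)          # damage from an unhandled hit
    armor_cost = 0.5 * resilience                    # compute capacity given up
    return THREAT_RATE * p_unnoticed * damage_if_hit + armor_cost

best = min(
    (alloc for alloc in product(range(BUDGET + 1), repeat=3)
     if sum(alloc) == BUDGET),
    key=lambda alloc: expected_loss(*alloc),
)
print("best (resilience, sensing, prediction):", best)
```

Notice that in this toy model sensing and prediction are interchangeable, which is just the (2)-versus-(3) trade-off described above.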

Prediction and planning. Very near-term, practical "prediction" might be done by simple mechanisms (hairs that flex in a certain way, for example, to open a hole for the incoming comet) but long-term prediction and prediction that involves something like evaluating hypothetical responses for effectiveness starts to look like planful cognition (if I deflected the comet this way, then what would happen? if I flexed vital parts away from it in this way, then what would happen?). Presumably, the designers could easily dedicate at least a small portion of the Matrioshka Brain to planning of this sort -- that seems likely to be a high-payoff use of computational resources, compared to having the giant Brain just react by simple reflex (and thus possibly not in the most effective or efficient way).
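
A minimal sketch of what that planful cognition might look like computationally: run each candidate response through a cheap internal forward model and act on the best-scoring one. The candidate actions and the "physics" here are placeholders, not a claim about how a real Brain would model comets:

```python
# Planning by simulated rollout: evaluate "what would happen if I did X?"
# for each candidate action, then choose the least damaging one. The
# forward model below is a stand-in, not real orbital mechanics.
import random

CANDIDATE_ACTIONS = ["deflect", "open pore", "absorb", "do nothing"]

def simulate_outcome(action: str, comet_mass: float) -> float:
    """Hypothetical forward model: expected damage (lower is better)."""
    base = {"deflect": 0.1, "open pore": 0.3, "absorb": 0.5, "do nothing": 5.0}
    return base[action] * comet_mass * random.uniform(0.8, 1.2)

def plan(comet_mass: float, rollouts: int = 100) -> str:
    """Pick the action with the lowest average simulated damage."""
    def avg_damage(action: str) -> float:
        return sum(simulate_outcome(action, comet_mass)
                   for _ in range(rollouts)) / rollouts
    return min(CANDIDATE_ACTIONS, key=avg_damage)

print(plan(comet_mass=3.0))
```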

Unity or disunity. If we assume the speed of light as a constraint, then the Brain's designers must choose between a very slow, temporally unified system or a system with fast, distributed processes that communicate their results across the sphere at a delay. The latter seems more natural if the aim is to maximize computation, but the former might also work as an architecture, if a lot is packed into every slow cycle. A Brain that dedicates too many resources to fighting itself might not survive well or effectively serve other design purposes (and might not even be well thought of as a single Brain), but some competition among the parts might prove desirable (or not), and I see no compelling reason to think that its actions and cognition need be as unified as a human being's.
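
For the record, here is the arithmetic behind that light-speed constraint, for a few example radii (the radii are illustrative, not a claim about how the Brain would actually be built):

```python
# One-way light-speed signal delay across a shell of a given radius.
# Radii are illustrative examples only.
C = 2.998e8     # speed of light, m/s
AU = 1.496e11   # astronomical unit, m

for r_au in (1, 10, 100):
    crossing_s = 2 * r_au * AU / C   # diameter / c
    print(f"shell radius {r_au:>3} AU -> one-way crossing ~{crossing_s / 60:7.1f} minutes")
```

Even at inner-shell scales a signal takes tens of minutes to reach the far side, so any globally unified "thought" has to run on at least that timescale.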

Self-monitoring and memory. It seems reasonable to add, too, some sort of self-monitoring capacities -- both of its general structure (so that it can detect physical damage) and of its ongoing computational processes (so that it can error-check and manage malfunction) -- analogs of proprioception and introspection. And if we assume that the Brain does not start with all the knowledge it could possibly want, it must have some mechanism to record new discoveries and then later have its processing informed by those discoveries. If processing is both distributed and interactive among the parts, then parts might retain traces of their recent processing that influence reactions to input from other parts with which they communicate. Semi-stable feedback loops, for example, might be a natural way to implement error-checking and malfunction monitoring. This in turn suggests the possibility of a distinction between high-detail, quickly dumped, short-term memory, and more selective and/or less detailed long-term memory -- probably in more than just those two temporal grades, and quite possibly with different memories differently accessible to different parts of the system.
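
Here is one toy way the short-term/long-term distinction might be cashed out: a detailed buffer that is continuously overwritten, feeding a sparse durable store that keeps only what crosses an importance threshold. The class, threshold, and events are all invented for illustration:

```python
# Toy two-grade memory: a high-detail, quickly-dumped short-term buffer
# and a selective long-term store. Thresholds and events are invented.
from collections import deque

class GradedMemory:
    def __init__(self, short_term_span: int = 5, keep_threshold: float = 0.7):
        self.short_term = deque(maxlen=short_term_span)  # detailed, volatile
        self.long_term = []                              # sparse, durable
        self.keep_threshold = keep_threshold

    def record(self, event: str, importance: float) -> None:
        self.short_term.append((event, importance))
        if importance >= self.keep_threshold:
            self.long_term.append(event)   # keep only what seems to matter

mem = GradedMemory()
for event, importance in [("sensor ping", 0.1), ("comet detected", 0.9),
                          ("routine self-check", 0.2)]:
    mem.record(event, importance)
print(mem.long_term)   # -> ['comet detected']
```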

Preferences. Presumably, the Matrioshka Brain, to the extent it is unified, would have a somewhat stable ordering of priorities -- priorities that it didn't arbitrarily jettison and shuffle around (e.g., structural integrity of Part A more important than getting the short-term computational outputs from Part B) -- and it would have some record of whether things were "going well" (progress toward satisfaction of its top priorities) vs. "going badly". Priorities that have little to do with self-preservation and functional maintenance, though, might be difficult to predict and highly path-dependent (seeding the galaxy with descendants? calculating as many digits of pi as possible? designing and playing endless variations of Pac-Man?).
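
A minimal sketch of what a "somewhat stable ordering of priorities" plus a record of how things are going might amount to, with goal names and weights invented for illustration:

```python
# Toy stable priority ordering plus a "going well" score that weights
# progress toward higher-ranked goals more heavily. Goals and weights
# are invented for illustration.
PRIORITIES = [   # rank 0 = most important
    "structural integrity of Part A",
    "short-term computational outputs from Part B",
    "calculate more digits of pi",
]

def going_well(progress: dict) -> float:
    """Weighted progress score: higher-ranked goals count for more."""
    return sum(progress.get(goal, 0.0) / (rank + 1)
               for rank, goal in enumerate(PRIORITIES))

print(going_well({"structural integrity of Part A": 0.9,
                  "calculate more digits of pi": 0.5}))
```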

The thing's cognition is starting to look almost human! Maybe that's just my own humanocentric failure of imagination -- maybe! -- but I don't think so. These seem to be plausible architectural features of a large entity designed to endure in an imperfect world while doing lots of computation and calculation.

A Matrioshka Brain that is not intentionally constructed seems likely to have similar features, at least if it is to endure. For example, it might have merged from complex but smaller subsystems, retaining the subsystems' psychological features -- features that allowed them to compete in evolutionary selection against other complex subsystems. Or it might have been seeded from a similar Matrioshka Brain at a nearby star. Alternatively, though, maybe simple, unsophisticated entities in sufficient numbers could create a Matrioshka Brain that endures via dumb rebuilding of destroyed parts, in which case my current psychological conjectures wouldn't apply.

Wilder still: How might a Matrioshka Brain implement long-term memory, remote conjecture, etc.? If it is massively parallel because of light-speed constraints, then it might do so by segregating subprocesses to create simulated events. For example, to predict the effects of catapulting a stored asteroid into a comet cloud, it might dedicate a subpart to simulate the effects of different catapulting trajectories. If it wishes to retain memories of the psychology of its human or post-human creators -- either out of path-dependent intrinsic interest or because it's potentially useful in the long run to have knowledge about a variety of species' cognition -- it might do so by dedicating parts of itself to emulate exactly that psychology. To be realistic, such an emulation might have to engage in real cognition; and to be historically accurate as a memory, such an emulated human or post-human would have to be ignorant of its real nature. To capture social interactions, whole groups of people might be simultaneously emulated, in interaction with each other, via seemingly sensory input.

Maybe the Brain wouldn't do this sort of thing very often, and maybe when it did do it, it would only emulate people in large, stable environments. The Brain, or its creators, might have an "ethics" forbidding the frequent instantiation and de-instantiation of deluded, conscious sub-entities -- or maybe not. Maybe it makes trillions of these subentities, scrambles them up, runs them for a minute, then ends them or re-launches them. Maybe it has an ethics on which pleasure could be instantiated in such sub-entities but suffering would always only be false memory; or maybe the Brain finds it rewarding to experience pleasure via inducing pleasure in sub-entities and so creates lots of sub-entities with peak experiences (and maybe illusory memories) but no real past or future.

Maybe it's bored. If it evolved up from merging social sub-entities, then maybe it still craves sociality -- but the nearest alien contacts are lightyears away. If it was constructed with lots of extra capacity, it might want to "play" with its capacities rather than have them sit idle. This could further motivate the creation of conscious subentities that interact with each other or with which it interacts as a whole.

This is one possible picture of God.

------------------------------------

Related posts:

  • Group Organisms and the Fermi Paradox
  • How to Be a Part of God's Mind
  • Our Possible Imminent Divinity
  • Skepticism, Godzilla, and the Artificial Computerized Many-Branching You

15 comments:

    1. Self-protection wise, if you find a Matrioshka brain on your doorstep you do not want to poke it with a stick. As James Nicoll pointed out, these things are effectively a Kardashev Type II civilization, with the total energy budget of a star on tap. They're also presumably (if you go by Robert Bradbury's original work) composed of a myriad of small computing nodes, presumably communicating node-to-node across the interior using short wavelength radiation (probably X-ray lasers).

      If a Matrioshka brain notices an incoming comet, it can just direct a bunch of its internal networking bandwidth into a phased-array directed energy beam aimed at the comet -- which will then go the same way as a moth caught in a jet engine's exhaust. Actually, the same goes for incoming planets -- our sun's energy output over about a week is equivalent to the gravitational binding energy of the Earth. And I mentioned directionality, didn't I? (An intrinsic requirement of an internal point-to-point communication system functioning across a distance of multiple astronomical units.) MBs basically come with the ultimate weapons system built in -- a phased-array energy weapon able to evaporate Earth-mass planets at a range of hundreds of light years.

    2. Hope this isn't spoilers for any particular book?

      Further, it makes me think of a doomsday movie plot where black threads don't invade Earth; they just enter the solar system and start wrapping around the sun, more and more. Possible twist ending for the movie: everyone has to go live on the surface of it.

      But in the end singular entities have trouble with evolutionary pressures (which we take for granted, given that there are many of us). Why does it care to live? Care being whatever - some sort of pattern-matching algorithm or whatever.

      With the many, those that don't care to live just fall off the evolutionary radar. Even if there's a lot of them. Only the carers continue. With a singular entity, odds are it falls into the don't care and gets wiped out (if incompetence doesn't get it first).

      Actually there's a pitch - maybe 'it' doesn't simulate many smaller beings for fun or because it's bored - but as a forced evolutionary measure, to spawn something by chance that cares amongst the disinterested. As an overall thinking process geared toward survival via such a method.

      Although maybe that's how the human mind works as well - little clusters of synapses perhaps kind of like individuals, so some of them might get past the evolutionary hurdle of caring (and competence).

      "a phased-array energy weapon able to evaporate Earth-mass planets at a range of hundreds of light years."

      Almost some kind of star that causes...death?

      "You may fire when ready!"

    3. Charlie, the Brain certainly could be designed to do that -- seems a natural design choice to give it such weaponry capacity, which would presumably require only a tiny percentage of its resources.

      But maybe it would incorporate the comet, or seed it with life then fling it out, or turn it into a beautiful bouquet for its Matrioshka lover at Tau Ceti?

      I want to keep an open mind about the Brain's particular motivations and behavior while recognizing that it will plausibly employ these sorts of *general* design principles.

      Interesting thought, Callan! I entirely agree about the risks of a singleton species without evolutionary selection. My post on the Fermi Paradox is about this, as is a sci-fi story I currently have in draft, "Last Janitor of the Divine".

      Yeah, that was a genuinely sad story (or at least IMO). Which is strange because it was sci-fi, which isn't usually about genuine sadness.

    6. "Unlike earthworms, we know some general principles of mentality;"

      There's an obvious problem here. What we call mentality might be completely inadequate to capture the level on which the Matrioshka operates. Imagine how an earthworm might look down on an amoeba ("Deterministic single-input stimulus-response? Pah! I respond to multiple stimuli. It's qualitatively different, don't you know.") I see no reason to think that the same problem doesn't apply to us. For example, what if the Matrioshka perceives time differently? What if it literally and directly perceives quantum probabilities? How could those meat possibly think, it would say, when they can't even conceive of the possibilities in their space, because they don't perceive them? That's not thought, that's deterministic response to quantum stimuli.

      Second, I was glad to see preferences in there, but disappointed that it came last. The thing that determines our psychology is our drives, much more than our cognitive calculations. Also, this: "Priorities that have little to do with self-preservation and functional maintenance, though, might be difficult to predict and highly path-dependent" - that would be all of what we'd call psychology, then? Granted, it probably desires to self-preserve (otherwise it wouldn't last long). But we need to know more than that to start thinking about its behaviour.

      To start with a random example, whether or not Matrioshka has a concept of self seems like a pretty fundamental thing to want to know. In humans, it seems like the concept of self is closely tied to the concept of others, which in turn arises strongly because we are a social animal. Has that happened with M? Does it desire to interact with others? If it does, does that desire lead to the development of a self?

      This seems massively relevant at this moment, because the laptop on which I'm typing is indisputably much smarter than I am. And yet the court case in New York is not about whether we should grant personhood to a computer, but whether we should give it to a chimp, which is probably dumber than me.

    7. chinaphil: do the individual cells of your body have a concept of self?

      That's a trick question: they don't even have their own nervous systems; while they exhibit trophism and complex interactions with their environments, they're not capable of independent existence outside the organism.

      And we could ask the same about your mitochondria.

      Now ramp it up a level: corporations or nations are effectively hive organisms composed of lots and lots of individual humans. Are they demonstrably self-aware or conscious? (I'm inclined to say "no", that consciousness is an emergent property of human central nervous processing -- but I might well be wrong.)

      Are properties like consciousness even relevant to something on the scale of an MB?

    8. China / Charlie -- thanks for the interesting continued discussion!

      I think we should be open to the possibility that corporate entities of the right sort might have consciousness and selfhood (whatever that is). They've got the same general type of organization and responsiveness as people do, with respect to the kinds of things that philosophers typically regard as crucial for consciousness. (I argue this point in my forthcoming paper "If Materialism Is True, the United States Is Probably Conscious".)

      I do agree that an MB might have -- probably would have -- all kinds of conceptual capacities that we lack and concerns/priorities that we can't understand. 21st-century bloggers have all kinds of conceptual capacities and priorities that traditional hunter-gatherers lack, and vice versa, and we're even the same species. But I still think it's a good guess that basic structural features like outer perception and short-term and long-term memory will be present.

      I could see the concept of self going either way: There could be a unified "this is the one me and these are my aims" or it might be more of a mushy-distributed thing closer to a group organism. Or maybe something else -- maybe we can imagine a thoroughly Buddhist MB that finally achieves consistent "no self" mindfulness!

    9. Charlie - I'm with you on the corporate consciousness thing. I think it has to do with the speed at which coordination can happen. Basically, there has to be a mechanism by which the being can coordinate itself, and can continually refresh the instructions quickly enough that individual subparts are kept focused on the coordinated goal, and don't drift off on their own missions. (I know Eric's not convinced by this, though.) I would assume that the M can maintain a sense of self if it has the capacity to coordinate all of itself towards a goal.

      Eric - "this is the one me and these are my aims" You might have given me a mini-revelation there. I've been thinking for a while that desires/drives are fundamental in defining selves (much more than cognitive capacity). But I hadn't ever noticed that this is exactly what Buddhism says. Extinguish the desires and you extinguish the self. This may mean that I have a lot of reading to do now :(

    10. Great post and discussion! I try to address something like this issue in my book Posthuman Life via the idea of a hyperplastic entity - a being that has the capacity to understand and intervene in its own structure to an arbitrary degree. The problem with attributing personhood or selfhood arises if we assume that the representational content of any state of the hyperplastic depends on its relationship to other states. I argue (to cut a long story short) that a hyperplastic being can't scan its internal states and isolate the ones that signify a given value (or belief, etc.) because any of these could exist in a configuration in which it means something different. And that goes for any of those configurations in turn. If so, then self-attributing values - or beliefs and desires - is probably pointless for a hyperplastic since no robust generalizations will be extendible to all the modified versions of itself. If this is right, then hyperplastic agents probably won't have much use for belief-desire psychology or concepts like "value". They might be some weird kind of agent, but not "people". A matrioshka brain seems like a pretty good candidate for a hyperplastic - so regardless of its organizational architecture, I'm skeptical as to whether our parochial notions about agents, beliefs, etc. would be applicable here.

    11. Charlie,

      "That's a trick question: they don't even have their own nervous systems; while they exhibit trophism and complex interactions with their environments, they're not capable of independent existence outside the organism."

      Well, neither can the brain?

      With ramping it up, you seem to start on a spectrum of mitochondria, going up to cells, going up to men, going up to nations - but you kind of skip the middle part of the spectrum as being different?

    12. David,

      I'd tend to think that's taking selfhood or personhood as being for us, then looking at how such a being might engage such a thing.

      I think personhood or selfhood are more like particular models/hypotheses toward the goal of survival.

      You can't take a model for survival and mangle it without the organism ending itself (possibly lying still with drool oozing from the corner of its eating hole, with no activity, or perhaps just one perpetual epileptic fit (and not a fit that gets you the kilojoules you need!)).

      Whereas if you take personhood as being for us, it seems like it can be something that can be manipulated and somehow still work out.

    13. David -- I'm partway through Posthuman Life, but I haven't got to that discussion yet. It sounds really interesting! I'm eager to see the argument in detail. My preliminary guess is that I'll think that my own dispositional approach to belief will escape your conclusions in a way that "belief-box" internal-representation views might not. But we'll see!

    14. Eric,

      Thanks for a lovely question.

      Yes, some of it is about self-preservation and planning. But there's a lot of an MB. What else is it doing?

      The synchronization problem is interesting. Natalie Angier's NYT article about giraffes last week pointed out that they move their legs slowly, maybe, because information moves so slowly along biological nerves from brain to feet and back again. At best, a point on an MB communicates to a point on the opposite side in times on the order of tens of minutes.

      Cheers,
      Tony

    15. Cool that you're reading P-Life, Eric. I'm sure (well, know) it offers an embarrassment of targets. And as you say, there may be a response by way of some vehicle-free approach to belief. I still view this argument as in development hell, so I'd be interested to read your response.

      David
