Tuesday, June 14, 2016

Possible Architectures of Group Minds: Memory

by Eric Schwitzgebel and Rotem Herrmann

Suppose you have 200 bodies. "You"? Well, maybe not exactly you! Some hypothetical science-fictional group intelligence.

How might memory work?

For concreteness, let's assume a broadly Ann Leckie "ancillary" setup: two hundred humanoid bodies on a planet's surface, each with an AI brain remotely connected to a central processor on an orbiting starship.

(For related reflections on the architecture of group perception, see this earlier post.)

Central vs. Distributed Storage

For simplicity, we will start by assuming a storage-and-retrieval representational architecture for memory.

A very centralized memory architecture might have the entire memory store in the orbiting ship, which the humanoid bodies access any time they need to retrieve a memory. A humanoid body, for example, might lean down to inspect a flower that it wants to classify, simultaneously sending a request for taxonomic information to the central unit. In contrast, a very distributed memory architecture might have all of the memory storage distributed among the humanoid bodies, so that if a humanoid doesn't have classification information in its own local brain, it will have to send a request around to the other humanoids to see if they have that information stored.

A bit of thought suggests that a completely centralized memory architecture probably wouldn't succeed if the humanoid bodies are to have any local computation (as opposed to being merely dumb limbs). Local computation presumably requires some sort of working memory: If the local humanoid is reasoning from P and (P -> Q) to Q, it will presumably have to retain P in some way while it processes (P -> Q). And if the local humanoid is reaching its arm forward to pluck the flower, it will presumably have to remember its intention over the course of the movement if it is to behave coherently.

It's natural, then, to think that there will be at least a short-term store in each local humanoid, where it retains information relevant to its immediate projects, available for fast and flexible access. There needn't be a single short-term store: There could be one or more ultra-fast working memory modules for quick inference and action, and a somewhat slower short-term or medium-term store for contextually relevant information that might or might not prove useful in the tasks that the humanoid expects to confront in the near future.

Conversely, each local humanoid might also store substantial long-term information that isn't relevant to its immediate tasks. But if there is a lot of potential information that the group mind wants to be able to access -- say, snapshots of the entire internet plus recorded high-resolution video feeds from each of its bodies -- the most efficient solution would seem to be storing that information in the central unit rather than carrying around 200 redundant copies, one in each humanoid. Alternatively, if the central unit is limited in size, different pieces could be distributed among the humanoids, each accessible to the others upon request.
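To make this two-tier picture concrete, here is a minimal Python sketch, assuming a simple key-value memory: each humanoid keeps a small, fast local store and falls back to the central store in orbit on a miss. The class names, the capacity limit, and the eviction rule are our own illustrative inventions, not a proposal about how such a system would really be built.

```python
class CentralStore:
    """Long-term memory held on the orbiting ship."""
    def __init__(self):
        self._data = {}  # key -> stored representation

    def get(self, key):
        return self._data.get(key)

    def put(self, key, value):
        self._data[key] = value


class HumanoidMemory:
    """Per-body memory: a fast local cache backed by the central store."""
    def __init__(self, central, capacity=128):
        self.central = central
        self.capacity = capacity
        self.local = {}  # short-term / working store

    def recall(self, key):
        # Try the fast local store first; on a miss, request from orbit.
        if key in self.local:
            return self.local[key]
        value = self.central.get(key)
        if value is not None:
            self._cache(key, value)  # keep it handy for the current task
        return value

    def _cache(self, key, value):
        if len(self.local) >= self.capacity:
            # Evict the oldest cached entry to make room.
            self.local.pop(next(iter(self.local)))
        self.local[key] = value


# Example: the flower-classifying humanoid pulls taxonomic data from orbit.
central = CentralStore()
central.put("flower/violaceae", "taxonomic details...")
body_17 = HumanoidMemory(central)
body_17.recall("flower/violaceae")  # remote fetch, then cached locally
```

On this sketch the central unit acts roughly like a shared backing store and each body like a cache; the interesting architectural questions are about what gets cached, when, and who decides.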

Procedural memories or skills might also be transferred between long-term and short-term stores, as needed for the particular tasks the humanoids might carry out. Situation-specific skills, for example -- piloting, butterfly catching, Antarean opera singing -- might be stored centrally and downloaded only when necessary, while basic skills such as walking, running, and speaking Galactic Common Tongue might be kept in the humanoid rather than "relearned" or "re-downloaded" for every assignment.

Individual humanoids might also locally acquire skills, or bodily modifications, or body-modifications-blurring-into-skills that are or are not uploaded to the center or shared with other humanoids.

Central vs. Distributed Calling

One of the humanoids walks into a field of flowers. What should it download into the local short-term store? Possibilities might include: a giant lump of botanical information, a giant history of everything known to have happened in that location, detailed algorithms for detecting the presence of landmines and other military hazards, information on soil and wildlife, a language module for the local tribe whose border the humanoid has just crossed, or of course some combination of all these different types of information.

We can imagine the calling decision being reached entirely by the central unit, which downloads information into particular humanoids based on its overview of the whole situation. One advantage of this top-down approach would be that the calling decision would easily reflect information from the other humanoids -- for example, if another one of the humanoids notices a band of locals hiding in the bushes.

Alternatively, the calling decision could be reached entirely by the local unit, based upon the results of local processing. One advantage of this bottom-up approach would be that it avoids delays arising from transmitting local information to the central unit for possibly computationally heavy comparison with other sources of information. For example, if the local humanoid detects a shape that might be part of a predator, it might be useful to prioritize a fast call of information on common predators without having to wait for a call-up decision from orbit.

A third option would allow a local representation in one humanoid, A, to trigger a download into another humanoid, B, either directly from A or via the central unit. Humanoid A might message Humanoid B "Look out, B, a bear!" along with a download of recently stored sensory input from A and an instruction to the central unit to dump bear-related information into B's short-term store.
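Here is a toy sketch of the three calling routes, using plain dictionaries as stores; the function names and the details of the bear example are invented for illustration.

```python
def central_push(central_store, body_store, topics):
    """Top-down: the central unit decides what to download into a body."""
    for topic in topics:
        if topic in central_store:
            body_store[topic] = central_store[topic]


def local_pull(central_store, body_store, topic):
    """Bottom-up: the body requests a download based on its own local processing."""
    if topic not in body_store and topic in central_store:
        body_store[topic] = central_store[topic]


def peer_trigger(central_store, sender_store, receiver_store, topic):
    """Peer-to-peer: Humanoid A warns B and has related information dumped into B."""
    receiver_store["alert"] = sender_store.get("recent_sensory_input")  # direct message from A
    local_pull(central_store, receiver_store, topic)  # central dump into B's short-term store


# Example: A spots a bear; B receives A's snapshot plus the central bear file.
central = {"bear_profile": "claws, speed, typical behavior..."}
a_store = {"recent_sensory_input": "large brown shape at 40 meters"}
b_store = {}
peer_trigger(central, a_store, b_store, "bear_profile")
```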

A well-engineered group mind might, of course, allow all three calling strategies. There will still be decisions about how much weight and priority to give to each strategy, especially in cases of...

Conflict

Suppose the central unit has P stored in its memory, while a local unit has not-P. What to do? Here are some possibilities:

Central dictatorship. Once the conflict is detected, the central unit wins, correcting the humanoid unit. This might make especially good sense if the information in the humanoid unit was originally downloaded from the central unit through a noisy process with room for error, or if the central unit has access to a larger or more reliable set of information relevant to P.

Central subordination. Once the conflict is detected, the local unit might overwrite the central unit. This might make especially good sense if the central store is mostly a repository of constantly updated local information, for example if Humanoid A is uploading a stream of sensory information from its short-term store into the central unit's long-term store.

Voting. If more than one local humanoid has relevant information about P, there might be a winner-take-all vote, resulting in the rewriting of P or not-P across all the relevant subsystems, depending on which representation wins the vote.

Compromise. In cases of conflict there might be compromise instead of dominance. For example, if the central unit has P and one peripheral unit has not-P, they might both write something like "50% likely that P"; analogously if the peripheral units disagree.

Retain the conflict. Another possibility is to simply retain the conflict, rather than changing either representation. The system would presumably want to be careful to avoid deriving conclusions from the contradiction or pursuing self-defeating or contradictory goals. Perhaps contradictory representations could be somehow flagged.

And of course there might be different strategies on different occasions, and the strategies can be weighted, so that if Humanoid A is in a better position than Humanoid B, the compromise result might be 80% in favor of Humanoid A rather than equally weighted.
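As a rough illustration, here is how winner-take-all voting and weighted compromise might look if each unit reports a credence in P together with a reliability weight. The credence-and-weight representation and the particular numbers are our own simplifications, not anything specified above.

```python
def winner_take_all(reports):
    """reports: list of (credence_in_P, weight) pairs. Returns True if P wins."""
    yes = sum(weight for credence, weight in reports if credence >= 0.5)
    no = sum(weight for credence, weight in reports if credence < 0.5)
    return yes >= no


def weighted_compromise(reports):
    """Returns a pooled credence in P, weighted by each unit's reliability."""
    total = sum(weight for _, weight in reports)
    return sum(credence * weight for credence, weight in reports) / total


# The central unit asserts P (weight 1); Humanoid A, better placed, asserts not-P (weight 4).
reports = [(1.0, 1.0), (0.0, 4.0)]
print(winner_take_all(reports))      # False: not-P wins the vote outright
print(weighted_compromise(reports))  # 0.2: the compromise is 80% in favor of A's not-P
```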

Similar possibilities arise for conflicts in memory calling -- for example, if the local processors in Humanoid A represent bear-information download as the highest priority, the local processors in Humanoid B represent language-information download as urgent for Humanoid A, and the central unit represents mine detection as the highest priority.

Reconstructive Memory

So far we've been working with a storage-and-retrieval model of memory. But human memory is, we think, better modeled as partly reconstructive: When we "remember" information (especially complex information like narratives), we are typically partly rebuilding it, figuring out what must have been the case in a way that brings together stored traces with more recent sources of information and with general knowledge. For example, as Bartlett found, narratives retold over time tend to simplify and to incorporate stereotypical elements even if those elements weren't originally present; and as Loftus has emphasized, new information can be incorporated into seemingly old memories without the subject being aware of the change (for example, memories of shattered glass when a car accident is later described as having occurred at high speed).

If the group entity's memory is reconstructive, all of the architectural choices we've described become more complicated, assuming that in reconstructing memories the local units and the central units are doing different sorts of processing, drawing on different pools of information. Conflict between memories might even become the norm rather than the exception. And if we assume that reconstructing a memory often involves calling up other related memories in the process, decisions about calling become mixed in with the reconstruction process itself.
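One crude way to picture reconstruction, as opposed to pure retrieval, is as merging a partial stored trace with a general schema and whatever has been learned since. The dictionary sketch below is of course a drastic simplification of anything a real system would do.

```python
def reconstruct(trace, schema, later_info=None):
    """Rebuild a 'memory' from a partial trace, stereotyped expectations, and post-event information."""
    memory = dict(schema)          # start from the stereotyped schema (Bartlett-style filling in)
    memory.update(trace)           # overwrite with whatever was actually stored at the time
    if later_info:
        memory.update(later_info)  # post-event information blends in, unmarked (Loftus-style)
    return memory


trace = {"event": "car accident", "location": "intersection"}
schema = {"event": "car accident", "speed": "moderate", "glass": "none noted"}
later_info = {"speed": "high", "glass": "shattered windshield"}
print(reconstruct(trace, schema, later_info))
```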

Memory Filling in Perception

Another layer of complexity: An earlier post discussed perception as though memory were irrelevant, but an accurate and efficient perceptual process would presumably involve memory retrieval along the way. As our humanoid bends down to perceive the flower, it might draw exemplars or templates of other flowers of that species from the long-term store, and this might (as in the human case) influence what it represents as the flower's structure. For example, in the first few instants of looking, it might tentatively represent the flower as a typical member of its species and only slowly correct its representation as it gathers specific detail over time.
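A toy numerical version of this filling-in, for a single scalar feature such as petal length: the estimate starts at the stored exemplar's value and shifts toward the incoming observations as detail accumulates. The particular weighting scheme is invented for illustration.

```python
def blended_estimate(exemplar_value, observations):
    """Start from the stored exemplar; weight incoming evidence more heavily over time."""
    estimate = exemplar_value
    for i, obs in enumerate(observations, start=1):
        evidence_weight = i / (i + 1)  # grows toward 1 as observations accumulate
        estimate = (1 - evidence_weight) * estimate + evidence_weight * obs
    return estimate


# Stored exemplar says petal length ~12 mm; successive glances report ~15 mm.
print(blended_estimate(12.0, [15.0, 15.2, 14.9]))  # converges toward the observed value
```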

Extended Memory

In the human case, we typically imagine memories as stored in the brain, with a sharp division between what is remembered and what is perceived. Andy Clark and others have pushed back against this view. In AI cases, the issue arises vividly. We can imagine a range of cases from what is clearly outward perception to what is clearly retrieval of internally stored information, with a variety of intermediate, difficult-to-classify cases in between. For example: on one end, the group has Humanoid A walk into a newly discovered library and read a new book. We can then create a slippery slope in which the book is digitized and stored increasingly close to the cognitive center of the humanoid (shelf, pocket, USB port, internal atrium...), with increasing permanence.

Also, procedural memory might be partly stored in the limbs themselves with varying degrees of independence from the central processing systems of the humanoid, which in turn can have varying degrees of independence from the processing systems of the orbiting ship. Limbs themselves might be detachable, blurring the border between body parts and outside objects. There need be no sharp boundary between brain, body, and environment.


3 comments:

chinaphil said...

Two points struck me. The first is that all of these options seem fine (though it's not obvious to me how exhaustive you've managed to be), but that we don't have much idea what we individual humans are like in these terms. We have different sensate bits that sometimes seem to have a will of their own; we have bicameral brains, which can in extreme cases be completely bisected. Even if we were able to nail down right now how an alien worked, in the terms you describe above, that wouldn't tell us how much it is like us, because we don't know how we work!
The second point is a slight worry about some of the language you use: "voting," "compromise," "decisions." Perhaps I'm being over-cautious, but seeing as your aim is to describe some weird phenomena which may look like many consciousnesses but in fact are only one consciousness, then perhaps best to avoid language which invokes the metaphor of conscious action.

Arnold said...

Is AI only life after original sin (common metaphor for 'will'); Descendant into architecture of place, including new constructions, remodeling, maintenance repairs...

Do we find today a Wish to reconstruct descendant is ascendance into 'How we work'...

Eric Schwitzgebel said...

Thanks for the comments (and apologies for the slow reply -- away on family vacation til yesterday). Chinaphil: I agree with your first point. On the second point, part of my agenda is to raise concerns about these issues. Under what conditions would the humanoids be individually conscious? I don't want the language to beg the question, of course, but I'm not sure such terms need to be carefully avoided either. (And in discussions of the brain they are not avoided, e.g., neurons are regularly described as having "preferences", "voting", etc.)