Doubters of AI consciousness -- such as neuroscientist Anil Seth in a forthcoming target article in Behavioral and Brain Sciences -- sometimes ground their rejection of AI consciousness in the claim that AI systems are not "autopoietic" (conjoined with the claim that autopoiesis is necessary for consciousness). I don't see why autopoiesis should be necessary for consciousness, but setting that issue aside, it's not clear that standard AI systems can't be autopoietic. Today I'll describe a minimally autopoietic AI system.
The idea of autopoiesis was canonically introduced in Maturana and Varela (1972/1980). Drawing on that work, Seth characterizes autopoietic systems as systems that "continually regenerate their own material components through a network of processes... actively maintain[ing] a boundary between the system and its surroundings". Now, could a standard AI system be autopoietic in this sense?
[the cover of Maturana and Varela, Autopoiesis and Cognition; image source]
Consider a hypothetical solar-powered robot designed to move toward light when its charge is low. The system thereby acts to maintain its own functioning. It might employ predictive processing to model the direction of light sources. Perhaps it's bipedal, staying upright by means of a gyroscope and tilt detectors that integrate gravitational and camera inputs. More fancifully, we might imagine it to be composed of modules held together electromagnetically, so that in the absence of electrical power it falls apart.
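To make the light-seeking behavior concrete, here's a rough Python sketch of the kind of control loop such a robot might run. The threshold, sensor interface, and direction names are invented purely for illustration:

```python
# A minimal sketch (my own illustration) of the charge-monitoring loop.
# Sensor and motor interfaces are hypothetical stand-ins.

LOW_CHARGE = 0.2  # assumed threshold below which the robot seeks light

def light_seeking_step(battery_level, brightness_by_direction):
    """Return a movement command given charge and ambient light readings."""
    if battery_level >= LOW_CHARGE:
        return "hold position"  # charge is adequate; no need to seek light
    # Head toward the direction with the strongest measured light -- a crude
    # stand-in for a predictive model of where the light source lies.
    best = max(brightness_by_direction, key=brightness_by_direction.get)
    return "move " + best

print(light_seeking_step(0.15, {"north": 0.4, "east": 0.9, "south": 0.1}))
# -> "move east"
```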
Now let's give the robot error-detection systems and the ability to replace defective parts. When it detects a breakdown in one part -- for example, in the upper portion of its left leg -- it orders a replacement part delivered. Upon delivery, the robot scans the part to determine that it is compatible (rejecting any incompatible parts), then electromagnetically disconnects the damaged part and installs the new one. If the system has sufficient redundancy, even central processing systems could be replaced. A redundant trio of processors might eject a defective processor and run on the remaining processors until the replacement arrives.
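Here's a similarly rough sketch of that repair cycle, with parts represented as simple records rather than real hardware modules. The function names and data are my own illustrative stand-ins:

```python
# A minimal sketch of the detect / order / scan / swap cycle described above.

def deliver_replacement(model_number):
    """Stand-in for ordering a part; a real delivery would take time."""
    return {"model": model_number, "healthy": True}

def is_compatible(part, required_model):
    """Scan step: accept only parts whose model matches what was ordered."""
    return part["model"] == required_model

def repair_cycle(parts):
    for i, part in enumerate(parts):
        if part["healthy"]:
            continue                     # no fault detected in this module
        replacement = deliver_replacement(part["model"])
        if is_compatible(replacement, part["model"]):
            parts[i] = replacement       # disconnect the old part, install the new
        # incompatible deliveries would simply be rejected
    return parts

leg = [{"model": "LEG-UPPER-L", "healthy": False},
       {"model": "LEG-LOWER-L", "healthy": True}]
print(repair_cycle(leg))
```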
A plastic shell maintains the boundary between the system and its surroundings. The system might detect flaws in the shell, for example, by internal sensors that respond to light entering through unexpected cracks, by visually monitoring its exterior, and perhaps by electrostatically detecting cracks or gaps. Defective shell components might be replaced.
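Here's a sketch of how those three detection channels might be combined into a single boundary-integrity check. The threshold and signal names are invented:

```python
# A minimal sketch of shell-flaw detection: flag a breach if any channel
# reports one. Readings and threshold are illustrative only.

def shell_needs_repair(internal_light, exterior_crack_seen, electrostatic_gap):
    """internal_light: interior photosensor reading (light leaking in through cracks)
    exterior_crack_seen: True if visual self-inspection spotted a flaw
    electrostatic_gap: True if the electrostatic scan detected a gap"""
    return internal_light > 0.05 or exterior_crack_seen or electrostatic_gap

print(shell_needs_repair(0.08, False, False))  # interior light leak -> True
```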
If repelling intruders is necessary for autopoiesis, we can challenge our robot with fakes. Shipments might sometimes arrive with a part mislabeled as compatible or visually similar to a compatible part, but ruinous if installed. Detecting and rejecting fakes might become a dupe-and-mimic arms race.
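One simple way the fake-rejection step might work, sketched below: rather than trusting the shipping label, the robot compares measured properties of the delivered part against a stored reference profile. The profile values and tolerance here are invented for illustration:

```python
# A minimal sketch of fake rejection by comparing measurements to a reference.

REFERENCE_PROFILES = {
    "LEG-UPPER-L": {"mass_kg": 1.8, "resistance_ohm": 12.0},
}

def looks_genuine(label, measured, tolerance=0.05):
    """Reject parts whose measurements stray too far from the stored profile."""
    profile = REFERENCE_PROFILES.get(label)
    if profile is None:
        return False                  # unknown label: reject outright
    for key, expected in profile.items():
        if abs(measured[key] - expected) > tolerance * expected:
            return False              # measurement out of tolerance: likely a fake
    return True

print(looks_genuine("LEG-UPPER-L", {"mass_kg": 1.81, "resistance_ohm": 12.1}))  # True
print(looks_genuine("LEG-UPPER-L", {"mass_kg": 2.60, "resistance_ohm": 12.1}))  # False
```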
I see no in-principle obstacles to creating such a system using standard AI and engineering tools. Such a system is, I suggest, minimally autopoietic. It actively maintains itself. It enforces a boundary between itself and its environment. It continually generates, in a sense, its own material components. It employs predictive processing, fights entropy by drawing on external energy, resists dispersion, and has a solar-electric metabolism.
Does the fact that it depends on shipments mean that it does not actually generate its own parts? Humans also depend on nutrients generated from outside, for example vitamins and amino acids that we cannot biologically manufacture. Sometimes these nutrients are shipped to us (for example, ordered online). Also, it's easy enough to imagine the robot not simply installing but in a minimal sense manufacturing a part. Suppose a leg has three modular components. Each component might arrive separately, requiring a simple joining procedure to create the leg as a whole.
In a human, the autopoietic process occurs at multiple levels simultaneously. Cells maintain themselves, and so do organs, and so does the individual as a whole. Our robot does not have the same multi-level autopoiesis. But it's not clear why autopoiesis must be multi-level to count as genuine autopoiesis. In any case, we could recapitulate this imaginative exercise for subsystems within the robot or larger systems embedding the robot. A group-level autopoietic system might comprise several robots who play different roles in the group and who can be recruited or ejected to maintain the integrity of the group and the persistence of its processes.
Perhaps my system does not continually regenerate its own components, and that is a crucial missing feature? It's not clear why strict continuity, rather than periodic replacement as needed, should be essential to autopoiesis. In any case, we can imagine, if necessary, that the robot has some fragile parts that need continual refurbishment. Perhaps it occupies an acidic environment that continually degrades its shell, so that its coating must be continually monitored and replaced through capillaries that emit lacquer as needed from a refillable lacquer bag.
My system does not reproduce, but reproduction, sometimes seen as essential to life, is not standardly viewed as necessary for autopoiesis (Maturana and Varela, 1972/1980, p. 100).
A case could even be made that my desktop computer is already minimally autopoietic. It draws power from its environment, maintaining a low-entropy state without which it will cease to function. It monitors itself for errors. It updates its drivers and operating system. It detects and repels viruses. It does not order and install replacement hardware, but it does continually sustain its intricate electrical configuration. Indirectly, through acting upon me, it does sometimes cause replacement parts to be installed. Alternatively, perhaps, we might view its electrical configuration as an autopoietic system and the hardware as the environment in which that system dwells.
My main thought is: Autopoiesis is a high-level, functional concept. Nothing in the concept appears to require implementation in what we ordinarily think of as a "biological" substrate. Nothing seems to prevent autopoietic processes in AI systems built along broadly familiar lines. An autopoietic requirement on consciousness does not seem in principle to rule out consciousness in standard computational systems.
Maturana and Varela themselves might agree. They write that
The organization of a machine (or system) does not specify the properties of the components which realize the machine as a concrete system, it only specifies the relations which these must generate to constitute the machine or system as a unity (1972/1980, p. 77).

It is clear from context that they intend this remark to apply to autopoietic as well as non-autopoietic machines.