Friday, August 15, 2025

Minimal Autopoiesis in an AI System

Doubters of AI consciousness -- such as neuroscientist Anil Seth in a forthcoming target article in Behavioral and Brain Sciences -- sometimes ground their rejection of AI consciousness in the claim that AI systems are not "autopoietic" (conjoined with the claim that autopoiesis is necessary for consciousness). I don't see why autopoiesis should be necessary for consciousness, but setting that issue aside, it's not clear that standard AI systems can't be autopoietic. Today I'll describe a minimally autopoietic AI system.

The idea of autopoiesis was canonically introduced in Maturana and Varela (1972/1980). Drawing on that work, Seth characterizes autopoietic systems as systems that "continually regenerate their own material components through a network of processes... actively maintain[ing] a boundary between the system and its surroundings". Now, could a standard AI system be autopoietic in this sense?

[the cover of Maturana and Varela, Autopoiesis and Cognition; image source]

Consider a hypothetical solar-powered robot designed to move toward light when its charge is low. The system thereby acts to maintain its own functioning. It might employ predictive processing to model the direction of light sources. Perhaps it's bipedal, staying upright by means of a gyroscope and tilt detectors that integrate gravitational and camera inputs. More fancifully, we might imagine it to be composed of modules held together electromagnetically, so that in the absence of electrical power it falls apart.

Now let's give the robot error-detection systems and the ability to replace defective parts. When it detects a breakdown in one part -- for example, in the upper portion of its left leg -- it orders a replacement part delivered. Upon delivery, the robot scans the part to determine that it is compatible (rejecting any incompatible parts) then electromagnetically disconnects the damaged part and installs the new one. If the system has sufficient redundancy, even central processing systems could be replaced. A redundant trio of processors might eject a defective processor and run on the remaining processors until the replacement arrives.
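The maintenance loop just described can be sketched in a few lines of code. This is a toy illustration only: every class, method, and field name here is a hypothetical stand-in, not a real robotics API.

```python
# Hypothetical sketch of the robot's self-maintenance cycle described above.
# All names (Part, Robot, maintenance_cycle, etc.) are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Part:
    name: str
    compatible: bool = True   # does the part pass the compatibility scan?
    defective: bool = False   # has a breakdown been detected?

@dataclass
class Robot:
    parts: dict = field(default_factory=dict)
    # Redundant trio of processors; True = healthy.
    processors: list = field(default_factory=lambda: [True, True, True])

    def detect_defects(self):
        """Return the names of parts whose self-tests report a breakdown."""
        return [name for name, p in self.parts.items() if p.defective]

    def order_replacement(self, name):
        # Stand-in for ordering a replacement part delivered from outside.
        return Part(name)

    def install(self, name, replacement):
        """Scan the delivered part; reject incompatible ones, else swap it in."""
        if not replacement.compatible:
            return False  # reject the mislabeled or incompatible part
        self.parts[name] = replacement  # electromagnetic disconnect + reattach
        return True

    def maintenance_cycle(self):
        for name in self.detect_defects():
            self.install(name, self.order_replacement(name))
        # Eject any defective processor and run on the remaining ones.
        self.processors = [ok for ok in self.processors if ok]

# Example: the upper portion of the left leg has broken down.
robot = Robot(parts={"left_leg_upper": Part("left_leg_upper", defective=True)})
robot.maintenance_cycle()
```

After one cycle, the defective leg segment has been swapped for a fresh, non-defective part, and the processor trio continues running with whatever healthy members remain.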

A plastic shell maintains the boundary between the system and its surroundings. The system might detect flaws in the shell, for example, by internal sensors that respond to light entering through unexpected cracks, by visually monitoring its exterior, and perhaps by electrostatically detecting cracks or gaps. Defective shell components might be replaced.

If repelling intruders is necessary, we can challenge our robot with fakes. Shipments might sometimes arrive with a part mislabeled as compatible or visually similar to a compatible part, but ruinous if installed. Detecting and rejecting fakes might become a dupe-and-mimic arms race.
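One simple move in such an arms race is to check each delivered part against a stored signature of the genuine specification, so that a visually similar fake fails verification. The sketch below assumes a registry of known-good hashes; the registry and function names are hypothetical.

```python
# Hypothetical sketch: verifying a delivered part against a stored signature,
# so that a mislabeled or visually similar fake is rejected before installation.

import hashlib

# Assumed registry mapping part names to hashes of their genuine specifications.
KNOWN_PARTS = {
    "left_leg_upper": hashlib.sha256(b"left_leg_upper:rev3:spec").hexdigest(),
}

def is_genuine(name: str, scanned_spec: bytes) -> bool:
    """Compare the scanned spec of a delivered part to the registry entry."""
    return KNOWN_PARTS.get(name) == hashlib.sha256(scanned_spec).hexdigest()
```

A genuine part (`is_genuine("left_leg_upper", b"left_leg_upper:rev3:spec")`) passes, while a fake with an altered specification fails, and a sophisticated counterfeiter would then need to reproduce the specification itself rather than merely the part's appearance.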

I see no in-principle obstacles to creating such a system using standard AI and engineering tools. Such a system is, I suggest, minimally autopoietic. It actively maintains itself. It enforces a boundary between itself and its environment. It continually generates, in a sense, its own material components. It employs predictive processing, fights entropy by drawing on external energy, resists dispersion, and has a solar-electric metabolism.

Does the fact that it depends on shipments mean that it does not actually generate its own parts? Humans also depend on nutrients generated from outside, for example vitamins and amino acids that we cannot biologically manufacture. Sometimes these nutrients are shipped to us (for example, ordered online). Also, it's easy enough to imagine the robot not simply installing but in a minimal sense manufacturing a part. Suppose a leg has three modular components. Each component might arrive separately, requiring a simple joining procedure to create the leg as a whole.

In a human, the autopoietic process occurs at multiple levels simultaneously. Cells maintain themselves, and so do organs, and so does the individual as a whole. Our robot does not have the same multi-level autopoiesis. But it's not clear why autopoiesis must be multi-level to count as genuine autopoiesis. In any case, we could recapitulate this imaginative exercise for subsystems within the robot or larger systems embedding the robot. A group-level autopoietic system might comprise several robots who play different roles in the group and who can be recruited or ejected to maintain the integrity of the group and the persistence of its processes.

Perhaps my system does not continually regenerate its own components, and that is a crucial missing feature? It's not clear why strict continuity, rather than periodic replacement as needed, should be essential to autopoiesis. In any case, we can imagine if necessary that the robot has some fragile parts that need continual refurbishment. Perhaps it occupies an acidic environment that continually degrades its shell, so that its shell coating must be continually monitored and replenished through capillaries that emit lacquer as needed from a refillable lacquer bag.

My system does not reproduce, but reproduction, sometimes seen as essential to life, is not standardly viewed as necessary for autopoiesis (Maturana and Varela, 1972/1980, p. 100).

A case could even be made that my desktop computer is already minimally autopoietic. It draws power from its environment, maintaining a low-entropy state without which it will cease to function. It monitors itself for errors. It updates its drivers and operating system. It detects and repels viruses. It does not order and install replacement hardware, but it does continually sustain its intricate electrical configuration. Indirectly, through acting upon me, it does sometimes cause replacement parts to be installed. Alternatively, perhaps, we might view its electrical configuration as an autopoietic system and the hardware as the environment in which that system dwells.

My main thought is: Autopoiesis is a high-level, functional concept. Nothing in the concept appears to require implementation in what we ordinarily think of as a "biological" substrate. Nothing seems to prevent autopoietic processes in AI systems built along broadly familiar lines. An autopoietic requirement on consciousness does not seem in principle to rule out consciousness in standard computational systems.

Maturana and Varela themselves might agree. They write that

The organization of a machine (or system) does not specify the properties of the components which realize the machine as a concrete system, it only specifies the relations which these must generate to constitute the machine or system as a unity (1972/1980, p. 77).

It is clear from context that they intend this remark to apply to autopoietic as well as non-autopoietic machines.

11 comments:

David Duffy said...

Yes, I like the concepts around autopoiesis, enactivism, and the free energy principle on how to think about mind and consciousness and function in biology in a neo-Aristotelian fashion, but how do they address such systems living and working in a simulation? And there are a few SFnal hybrid systems. How about Adam Roberts' Bete, which we could have with a large language model keeping its animal "mount" alive by arguing with slaughterhouse workers, or Karl Schroeder's AIs acting to keep particular elements of the environment running (paperclip maximization for good rather than evil).

Paul D. Van Pelt said...

so, I tried to get a meaning of autopoietic. My tablet referred me to the Cloud. I don't have access to the Cloud---got this tablet used. for nothing. Still seems odd. I seem to recall when thinkers like Seth, Goff, Chalmers and maybe more were arguing about consciousness. Contentious business, then. Don't recall any clear winner---even Bernardo Kastrup weighed in. I just don't know, see. But, in my limited mind's eye, auto signifies some level of intent. If it is claimed AI has intention, then from whence did that arise? If it is intentional, and not merely reactional, that is one thing. If the matter is otherwise, it is something else. Did all these people just give up?

Arnold said...

Seemingly even variables of self creation would need, like everything else in consciousness-to-be-sustained...meanings' come to mind...

James of Seattle said...

Two comments: first, I think a real-life example of artificial autopoiesis is my roomba. It maps the rooms, avoids cliff edges, and monitors its battery. When the battery gets low, it returns to the charging station, charges, and then goes back and continues from where it left off.

Second, a comment on the role of autopoiesis in consciousness (just something to think about): I suggest the fundamental basis of consciousness is information processing, specifically, the communication of pattern recognition. This process necessarily involves a purpose/goal state (reasons available on request). Autopoiesis provides the initial goal state for living things, but other goal states (allopoiesis) also work.

James of Seattle said...

Forgot to mention the roomba also alerts me when parts need to be replaced. Wouldn’t take much to have it order said parts thru Alexa (Amazon’s ai) and then simply refuse to do the work until I replace the part.

Paul D. Van Pelt said...
This comment has been removed by the author.
Arnold said...

Meaning of philosophical phrase "as a unity", 1977...today, the whole is dependent on parts for 'as a unity', consciousness seems there then but lacks purpose by data from the system-from the coder...
...each cyber security system defends itself to remain "as a part" from an authority-as a unity of another system...cryptocurrency as a example of decentralization parts verses unity...

Arnold said...

Today Is AI-decentralization at war against modern day authoritarianism-centralization...cyber security vs narcissism'...


The phrase "AI-decentralization at war against modern day authoritarianism-centralization...cyber security vs narcissism" presents a compelling but complex perspective on the current technological landscape. It suggests that AI and cybersecurity, in their decentralized forms, are a bulwark against the forces of centralized control and ego-driven manipulation. While this framing captures some key tensions, it's more accurate to see these elements in a nuanced, sometimes contradictory, relationship.

AI Decentralization vs. Authoritarianism
The tension between AI decentralization and authoritarianism is a significant ideological fault line. Authoritarian regimes often use centralized AI to consolidate power and control their populations. This is often referred to as "data-centric authoritarianism." These systems use vast amounts of data—from social media to surveillance footage—to monitor dissent, predict citizen behavior, and enforce social control.

Conversely, a decentralized AI model, often associated with technologies like blockchain and distributed networks, can challenge this control. By distributing data, computation, and decision-making across many nodes, it makes it harder for a single entity to gain a monopoly on power. This approach can promote transparency, protect privacy, and empower individuals, aligning with democratic values. However, the reality is more complex. While decentralized technologies can be used for good, they can also be exploited for illicit activities or misinformation, making their impact on democracy a subject of ongoing debate.

Cybersecurity vs. Narcissism
The juxtaposition of cybersecurity and narcissism highlights a key vulnerability in modern security. Narcissism, particularly in the context of technology, can manifest as an over-reliance on one's own judgment and a tendency to overshare personal information online to seek validation. This can make individuals a prime target for cyberattacks, as hackers often use social engineering tactics that exploit human psychological traits like ego and vanity.

In this context, cybersecurity acts as a defense against these vulnerabilities. It's not just about firewalls and software; it's also about promoting user awareness and responsible digital behavior. By teaching people to be more skeptical of flattery, more cautious with their personal data, and more aware of common social engineering tricks, cybersecurity practices can directly counteract the risks posed by narcissistic tendencies. In essence, it's the methodical, objective approach of security professionals against the emotional, ego-driven impulses that can lead to a breach.

Paul D. Van Pelt said...

At the moment, narcissism is winning. So, what would obtain if AI became narcissistic???¿¿¿

Paul D. Van Pelt said...

Or, perhaps it already is? I HAVE been slow on the uptake lately.

Paul D. Van Pelt said...

So, I'll stop my contribution to this seeming endless circle. If some action/response is autoMATIC...occurs because it is "something people do", most of the time, does that equate with autopoietic? The root of the word supplies, de minimis, an inference, I think. I am not autopoetic but have written poetry. OK. I stop there.