Thursday, August 21, 2025

Defining "Artificial Intelligence"

I propose that we define "Artificial Intelligence" in the obvious way. An entity is an AI if it is both artificial (in the relevant sense) and intelligent (in the relevant sense).

Despite the apparent attractiveness of this simple analytic definition of AI, standard definitions of AI are more complex. In their influential textbook Artificial Intelligence, for example, Stuart Russell and Peter Norvig define artificial intelligence as "The designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment"[1]. John McCarthy, one of the founding fathers of AI, defines it as "The science and engineering of making intelligent machines, especially intelligent computer programs". In his influential 1985 book Artificial Intelligence: The Very Idea, philosopher John Haugeland defines it as "the exciting new effort to make computers think... machines with minds, in the full and literal sense" (p. 2).

If we define AI as intelligent machines, we risk too broad a definition. For in one standard sense, the human body is also a machine -- and of course we are, in the relevant sense, "intelligent". "Machine" is either an excessively broad or a poorly defined category.

If instead we treat only intelligent computers as AI, we risk either excessive breadth or excessive narrowness, depending on what counts as a computer. If a "computer" is just something that behaves according to the patterns described by Alan Turing in his standard definition of computation, then humans are computers, since they too sometimes follow such patterns. Indeed, originally the word "computer" referred to a person who performs arithmetic tasks. Cognitive scientists sometimes describe the human brain as literally a type of computer. This is contentious but not obviously wrong, on liberal definitions of what constitutes a computer.

However, if we restrict the term "computer" to the types of digital programmable devices with which we are currently familiar, the definition risks being too narrow, since not all systems worth calling AI need be instantiated in such devices. For example, non-digital analog computers are sometimes conceived and built. Also, many artificial systems are non-programmable, and it's not inconceivable that some subset of these systems could be intelligent. If humans are intelligent non-computers, then presumably in principle some biologically inspired but artificially constructed systems could also be intelligent non-computers.

Russell and Norvig's definition avoids both "machine" and "computer", at the cost of making AI a practice rather than an ontological category: It concerns "designing and building". Maybe this is helpful, if we regard the "artificial" as coextensive with the designed or built. But this definition appears to rule out evolutionary AI systems, which arise through reproduction and selection (for example, in artificial life), and are arguably neither designed nor built, except in the liberal sense in which every evolved entity is.

"Intelligence" is of course also a tricky concept. If we define intelligence too liberally, even a flywheel is intelligent, since it responds to its environment by storing and delivering energy as needed to smooth out variations in angular velocity in the device to which it is attached. If we define intelligence too narrowly, then classic computer programs of the 1960s to 1980s -- arguably, central examples of "AI" as the term is standardly used -- no longer count as intelligent, due to the simplicity and rigidity of the if-then rules governing them.

Russell and Norvig require that AI systems receive percepts from the environment -- but what is a "percept", and why couldn't an intelligence think only about its own internal states or be governed wholly by non-perceptual inputs? They also require that the AI "take action" -- but what counts as an action? And couldn't some AI, at least in principle, be wholly reflective while executing no outward behavior?

Can we fall back on definition by example? Here are some examples: classic 20th-century "good-old-fashioned-AI" systems like SHRDLU, ELIZA, and Cyc; early connectionist and neural net systems like Rosenblatt’s Perceptron and Rumelhart’s backpropagation networks; famous game-playing machines like Deep Blue and AlphaGo; transformer-based architectures like ChatGPT, Grok, Claude, Gemini, DALL-E, and Midjourney; Boston Dynamics robots and autonomous delivery robots; quantum computers.

Extending forward from these examples, we might also imagine future computational systems built along very different lines, for example, partly analog computational systems, or more sophisticated quantum or partly quantum computational systems. We might imagine systems that operate by interference patterns in reflected light, or "organic computing" via DNA. We might imagine biological or partly biological systems which might not be best thought of as computers (unless everything is a "computer"), including frog-cell-based xenobots and systems containing neural tissue. We might imagine systems that look less and less like they are programmed and more and more like they are evolved, selected, and trained. At some point it becomes unclear whether such systems are best regarded as "artificial".

As a community, we actually don’t have a very good sense of what "AI" means. We can easily classify currently existing systems as either AI or not-AI based on similarity to canonical examples and some mushy general principles, but we have only a poor grasp of how to differentiate AI from non-AI systems in a wide range of hypothetical future cases.

The simple, analytic definition I suggested at the beginning of this post is, I think, the best we can do. Something is an Artificial Intelligence if and only if it is both artificial and intelligent, on some vague-boundaried, moderate-strength understanding of both "artificial" and "intelligent" that encompasses the canonical examples while excluding entities that we ordinarily regard as either non-artificial or non-intelligent.

I draw the following lesson from these facts about the difficulty of definition:

General claims about the limitations of AI are almost always grounded in specific assumptions about the nature of AI, such as its digitality or its implementation on "computers". Future AI, on moderately broad understandings of what counts as AI, might not be subject to those same limitations. Notably, two of the most prominent deniers of AI consciousness -- John Searle and Roger Penrose -- both explicitly limit their skepticism to systems designed according to principles familiar in the late 20th century, while expressing openness to conscious AI designed along different lines. No well-known argument aims to establish the in-principle impossibility of consciousness in all future AI systems on a moderately broad definition of what counts as "AI". Of course, the greater the difference from currently familiar architectures, the farther in the future that architecture is likely to be.

[illustration of the AI (?) system in my science fiction story THE TURING MACHINES OF BABEL]

-----------------------------------------

[1] Russell and Norvig are widely cited for this definition, but I don't see this exact quote in my third edition copy. While I wait for UCR's interlibrary loan department to deliver the 4th edition, I'll assume the quote is accurate.

Friday, August 15, 2025

Minimal Autopoiesis in an AI System

Doubters of AI consciousness -- such as neuroscientist Anil Seth in a forthcoming target article in Behavioral and Brain Sciences -- sometimes ground their rejection of AI consciousness in the claim that AI systems are not "autopoietic" (conjoined with the claim that autopoiesis is necessary for consciousness). I don't see why autopoiesis should be necessary for consciousness, but setting that issue aside, it's not clear that standard AI systems can't be autopoietic. Today I'll describe a minimally autopoietic AI system.

The idea of autopoiesis was canonically introduced in Maturana and Varela (1972/1980). Drawing on that work, Seth characterizes autopoietic systems as systems that "continually regenerate their own material components through a network of processes... actively maintain[ing] a boundary between the system and its surroundings". Now, could a standard AI system be autopoietic in this sense?

[the cover of Maturana and Varela, Autopoiesis and Cognition; image source]

Consider a hypothetical solar-powered robot designed to move toward light when its charge is low. The system thereby acts to maintain its own functioning. It might employ predictive processing to model the direction of light sources. Perhaps it's bipedal, staying upright by means of a gyroscope and tilt detectors that integrate gravitational and camera inputs. More fancifully, we might imagine it to be composed of modules held together electromagnetically, so that in the absence of electrical power it falls apart.
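
As a very rough sketch of the control architecture (in Python; the class, thresholds, and sensor stubs below are all invented for illustration, not a specification of any real robot):

```python
import random

LOW_CHARGE = 0.2  # arbitrary threshold, chosen for illustration

class Robot:
    """Stub robot: every sensor and actuator here is a simulated stand-in."""

    def __init__(self):
        self.charge = 1.0

    def battery_level(self) -> float:
        return self.charge

    def predict_light_direction(self) -> float:
        # Stand-in for a predictive-processing model of light-source direction.
        return random.uniform(0, 360)

    def move_toward(self, heading: float) -> None:
        print(f"heading {heading:.0f} degrees toward light; recharging")
        self.charge = min(1.0, self.charge + 0.1)

    def balance(self) -> None:
        pass  # stand-in for integrating gyroscope, tilt, and camera inputs

def maintenance_step(robot: Robot) -> None:
    # Homeostasis: seek light only when stored energy runs low.
    if robot.battery_level() < LOW_CHARGE:
        robot.move_toward(robot.predict_light_direction())
    robot.balance()
    robot.charge -= 0.05  # simulated energy cost of operating

robot = Robot()
for _ in range(30):
    maintenance_step(robot)
```

The point is only that self-maintenance of this simple homeostatic sort is an ordinary engineering pattern, nothing exotic.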

Now let's give the robot error-detection systems and the ability to replace defective parts. When it detects a breakdown in one part -- for example, in the upper portion of its left leg -- it orders a replacement part delivered. Upon delivery, the robot scans the part to determine that it is compatible (rejecting any incompatible parts), then electromagnetically disconnects the damaged part and installs the new one. If the system has sufficient redundancy, even central processing systems could be replaced. A redundant trio of processors might eject a defective processor and run on the remaining two until the replacement arrives.
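
In outline, the repair cycle might look something like this (again a hedged sketch: the part labels, the compatibility check, and the processor-trio detail are invented for illustration):

```python
# Sketch of the self-repair cycle: detect a fault, order a part, verify
# compatibility, swap. All labels and checks invented for illustration.

KNOWN_COMPATIBLE = {"left-leg-upper", "left-leg-lower", "processor-unit"}

def order_part(part_id: str) -> str:
    """Stand-in for ordering a delivery; returns the delivered part's label."""
    return part_id

def scan_is_compatible(label: str) -> bool:
    # "Scanning" here is just a label check; a real system might verify
    # dimensions, connectors, and firmware signatures.
    return label in KNOWN_COMPATIBLE

def repair(installed: set, faulty_part: str) -> None:
    delivered = order_part(faulty_part)
    if not scan_is_compatible(delivered):
        return                      # reject fakes and mislabeled parts
    installed.discard(faulty_part)  # electromagnetic disconnect
    installed.add(delivered)        # install the replacement

installed = {"left-leg-upper", "left-leg-lower"}
repair(installed, "left-leg-upper")  # swap out the degraded leg segment

# Redundancy even in central processing: a trio of processors votes out a
# defective member and runs on the remaining two until resupply arrives.
```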

A plastic shell maintains the boundary between the system and its surroundings. The system might detect flaws in the shell, for example, by internal sensors that respond to light entering through unexpected cracks, by visually monitoring its exterior, and perhaps by electrostatically detecting cracks or gaps. Defective shell components might be replaced.

If repelling intruders is necessary, we can challenge our robot with fakes. Shipments might sometimes arrive with a part mislabeled as compatible or visually similar to a compatible part, but ruinous if installed. Detecting and rejecting fakes might become a dupe-and-mimic arms race.

I see no in-principle obstacles to creating such a system using standard AI and engineering tools. Such a system is, I suggest, minimally autopoietic. It actively maintains itself. It enforces a boundary between itself and its environment. It continually generates, in a sense, its own material components. It employs predictive processing, fights entropy by drawing on external energy, resists dispersion, and has a solar-electric metabolism.

Does the fact that it depends on shipments mean that it does not actually generate its own parts? Humans also depend on nutrients generated from outside, for example vitamins and amino acids that we cannot biologically manufacture. Sometimes these nutrients are shipped to us (for example, ordered online). Also, it's easy enough to imagine the robot not simply installing but in a minimal sense manufacturing a part. Suppose a leg has three modular components. Each component might arrive separately, requiring a simple joining procedure to create the leg as a whole.

In a human, the autopoietic process occurs at multiple levels simultaneously. Cells maintain themselves, and so do organs, and so does the individual as a whole. Our robot does not have the same multi-level autopoiesis. But it's not clear why autopoiesis must be multi-level to count as genuine autopoiesis. In any case, we could recapitulate this imaginative exercise for subsystems within the robot or larger systems embedding the robot. A group-level autopoietic system might comprise several robots who play different roles in the group and who can be recruited or ejected to maintain the integrity of the group and the persistence of its processes.

Perhaps my system does not continually regenerate its own components, and that is a crucial missing feature? It's not clear why strict continuity, rather than periodic replacement as needed, should be essential to autopoiesis. In any case, we can imagine if necessary that the robot has some fragile parts that need continual refurbishment. Perhaps it occupies an acidic environment that continually degrades its shell so that its shell coating must be continually monitored and replaced through capillaries that emit lacquer as needed from a refillable lacquer bag.

My system does not reproduce, but reproduction, sometimes seen as essential to life, is not standardly viewed as necessary for autopoiesis (Maturana and Varela, 1972/1980, p. 100).

A case could even be made that my desktop computer is already minimally autopoietic. It draws power from its environment, maintaining a low-entropy state without which it will cease to function. It monitors itself for errors. It updates its drivers and operating system. It detects and repels viruses. It does not order and install replacement hardware, but it does continually sustain its intricate electrical configuration. Indirectly, through acting upon me, it does sometimes cause replacement parts to be installed. Alternatively, perhaps, we might view its electrical configuration as an autopoietic system and the hardware as the environment in which that system dwells.

My main thought is: Autopoiesis is a high-level, functional concept. Nothing in the concept appears to require implementation in what we ordinarily think of as a "biological" substrate. Nothing seems to prevent autopoietic processes in AI systems built along broadly familiar lines. An autopoietic requirement on consciousness does not seem in principle to rule out consciousness in standard computational systems.

Maturana and Varela themselves might agree. They write that

The organization of a machine (or system) does not specify the properties of the components which realize the machine as a concrete system, it only specifies the relations which these must generate to constitute the machine or system as a unity (1972/1980, p. 77).

It is clear from context that they intend this remark to apply to autopoietic as well as non-autopoietic machines.

Tuesday, August 05, 2025

Top Science Fiction and Fantasy Magazines 2025

Since 2014, I've compiled an annual ranking of science fiction and fantasy magazines, based on nominations for prominent awards and "best of" placements over the previous ten years. If you're curious what magazines tend to be viewed by insiders as elite, check the top of the list. If you're curious to discover reputable magazines that aren't as widely known (or aren't as widely known specifically for their science fiction and fantasy), check the bottom of the list.

Below is my list for 2025. (For previous lists, see here.)

Method and Caveats:

(1.) Only magazines are included (online or in print), not anthologies, standalones, or series.

(2.) I give each magazine one point for each story nominated for a Hugo, Nebula, Sturgeon, or World Fantasy Award in the past ten years; one point for each story appearance in the past ten years in the "best of" anthologies by Dozois, Horton, Strahan, Clarke, Adams, and Tidhar; and half a point for each story appearing in the short story or novelette category of the annual Locus Recommended list. (A schematic version of this scoring rule appears in a code sketch after this list.)

(3.) I am not attempting to include the horror / dark fantasy genre, except as it appears incidentally on the list.

(4.) Prose only, not poetry.

(5.) I'm not attempting to correct for frequency of publication or length of table of contents.

(6.) I'm also not correcting for a magazine's only having published during part of the ten-year period. Reputations of defunct magazines slowly fade, and sometimes they are restarted. Reputations of new magazines take time to build.

(7.) I take the list down to 1.5 points.

(8.) I welcome corrections.

(9.) I confess some ambivalence about rankings of this sort. They reinforce the prestige hierarchy, and they compress complex differences into a single scale. However, the prestige of a magazine is a socially real phenomenon worth tracking, especially for the sake of outsiders and newcomers who might not otherwise know what magazines are well regarded by insiders when considering, for example, where to submit.
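
Here, for concreteness, is a minimal Python sketch of the scoring rule from (2.). The record format and names are invented for illustration; the actual tally is compiled by hand from award and anthology listings.

```python
# Minimal sketch of the scoring rule in (2.); record format invented.

WEIGHTS = {
    "award": 1.0,    # Hugo, Nebula, Sturgeon, or World Fantasy nomination
    "best_of": 1.0,  # Dozois, Horton, Strahan, Clarke, Adams, Tidhar anthology
    "locus": 0.5,    # Locus Recommended, short story or novelette category
}

def score(records):
    """records: iterable of (magazine, kind) pairs, kind a key of WEIGHTS."""
    totals = {}
    for magazine, kind in records:
        totals[magazine] = totals.get(magazine, 0.0) + WEIGHTS[kind]
    # Keep only magazines at or above the 1.5-point cutoff, highest first.
    return sorted(((m, p) for m, p in totals.items() if p >= 1.5),
                  key=lambda pair: -pair[1])

example = [("Clarkesworld", "award"), ("Clarkesworld", "locus"),
           ("Uncanny", "best_of")]
print(score(example))  # [('Clarkesworld', 1.5)]
```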


Results:

1. Clarkesworld (187 points) 

2. Tor.com / Reactor (182.5) 

3. Uncanny (160)

4. Lightspeed (133.5) 

5. Asimov's (124.5) 

6. Fantasy & Science Fiction (100.5) 

7. Beneath Ceaseless Skies (57.5) 

8. Strange Horizons (incl Samovar) (47)

9. Analog (42) 

10. Nightmare (38.5) 

11. Apex (36.5) 

12. FIYAH (24.5) (started 2017) 

13. Slate / Future Tense (23; ceased 2024?) 

14. Fireside (18.5) (ceased 2022)

15. Fantasy Magazine (17.5) (off and on during the period) 

16. Interzone (16.5) 

17. The Dark (16) 

18. Sunday Morning Transport (12.5) (started 2022)

19. The Deadlands (10) (started 2021)

20. The New Yorker (9) 

21. Future Science Fiction Digest (7) (ran 2018-2023) 

22t. Diabolical Plots (6.5)

22t. Lady Churchill's Rosebud Wristlet (6.5)

24t. Conjunctions (6) 

24t. khōréō (6) (started 2021)

26t. GigaNotoSaurus (5.5) 

26t. Omni (5.5) (classic magazine relaunched 2017-2020) 

28t. Shimmer (5) (ceased 2018)

28t. Sirenia Digest (5) 

30t. Boston Review (4) 

30t. Omenana (4)

30t. Terraform (Vice) (4) (ceased 2023)

30t. Wired (4)

34t. B&N Sci-Fi and Fantasy Blog (3.5) (ceased 2019)

34t. McSweeney's (3.5) 

34t. Paris Review (3.5) 

37t. Anathema (3) (ran 2017-2022)

37t. Galaxy's Edge (3) (ceased 2023)

37t. Kaleidotrope (3) 

*37t. Psychopomp (3) (started 2023; not to be confused with Psychopomp Magazine)

41t. Augur (2.5) (started 2018)

41t. Beloit Fiction Journal (2.5) 

41t. Black Static (2.5) (ceased fiction 2023)

*41t. Bourbon Penn (2.5)

41t. Buzzfeed (2.5) 

41t. Matter (2.5) 

47t. Baffling (2) (started 2020)

47t. Flash Fiction Online (2)

47t. Fusion Fragment (2) (started 2020)

47t. Mothership Zeta (2) (ran 2015-2017) 

47t. Podcastle (2)

47t. Science Fiction World (2)

47t. Shortwave (2) (started 2022)

47t. Tin House (2) (ceased short fiction 2019) 

55t. e-flux journal (1.5)

55t. Escape Pod (1.5)

55t. MIT Technology Review (1.5) 

55t. New York Times (1.5) 

55t. Reckoning (1.5) (started 2017)

55t. Translunar Travelers Lounge (1.5) (started 2019)

[* indicates new to the list this year]

--------------------------------------------------

Comments:

(1.) Beloit Fiction Journal, Boston Review, Conjunctions, e-flux Journal, Matter, McSweeney's, The New Yorker, Paris Review, Reckoning, and Tin House are literary magazines that sometimes publish science fiction or fantasy. Buzzfeed, Slate, and Vice are popular magazines, and MIT Technology Review, Omni, and Wired are popular science magazines that publish a bit of science fiction on the side. The New York Times ran a series of "Op-Eds from the Future" from 2019 to 2020. The remaining magazines focus on the science fiction and fantasy (SF) genre or related categories such as horror or "weird". All publish in English, except Science Fiction World, which is the leading science fiction magazine in China.

(2.) It's also interesting to consider a three-year window. Here are those results, down to six points:

1. Clarkesworld (54.5)  
2. Uncanny (47) 
3. Tor / Reactor (35) 
4. Lightspeed (33)
5. Asimov's (22) 
6. Strange Horizons (18) 
7. F&SF (16) 
8. Apex (13)
9. Sunday Morning Transport (12.5) 
10. Beneath Ceaseless Skies (11.5) 
11. FIYAH (10.5)
12t. Fantasy (9.5) 
12t. The Deadlands (9.5) 
14. Nightmare (8)
15. Analog (7.5) 

(3.) Other lists: The SFWA qualifying markets list is a list of "pro" science fiction and fantasy venues based on pay rates and track records of strong circulation. Submission Grinder is a terrific resource for authors, with detailed information on magazine pay rates, submission windows, and turnaround times.

(4.) Over the past decade, the classic "big three" print magazines -- Asimov's, F&SF, and Analog -- have been displaced in influence by the leading free online magazines, Clarkesworld, Tor / Reactor, Uncanny, and Lightspeed (all founded 2006-2014). In 2014, Asimov's and F&SF led the rankings by a wide margin (Analog had already slipped a bit, as reflected in its #5 ranking then). This year, Asimov's, F&SF, and Analog were all purchased by Must Read Publishing, which changed the author contracts objectionably enough to generate a major backlash, with SFWA considering delisting at least Analog from the qualifying markets list. F&SF has not published any new issues since summer 2024. It remains to be seen if the big three classic magazines can remain viable in print format.

(5.) Academic philosophy readers might also be interested in the following magazines that specialize specifically in philosophical fiction and/or fiction by academic writers: AcademFic, After Dinner Conversation, and Sci Phi Journal.