Wednesday, October 15, 2025

Eek! The Plummeting Philosophy Major in the U.S.

Three years ago, I posted some optimistic reflections about how the philosophy major seemed to be recovering from its recent decline in the U.S. I take it back. The National Center for Education Statistics has released its numbers from the 2023-2024 academic year, and it's bad.

First off: The humanities in general have been hemorrhaging majors since about 2008. English in particular has been hammered. In the 2000-2001 academic year, 4.5% of bachelor's degree recipients were English majors. Now it's 1.7%. [1]

[declining humanities majors; click to enlarge and clarify]

Since philosophy started out low, its decline is not as visually evident in the graph. Here are the raw numbers.

Year: Philosophy BAs awarded (as a % of all Bachelor's degrees)

2001: 5836 (.49%)
2002: 6529 (.52%)
2003: 7023 (.54%)
2004: 7707 (.57%)
2005: 8283 (.60%)
2006: 8532 (.60%)
2007: 8541 (.59%)
2008: 8778 (.59%)
2009: 8996 (.59%)
2010: 9268 (.59%)
2011: 9292 (.57%)
2012: 9362 (.56%)
2013: 9427 (.54%)
2014: 8820 (.49%)
2015: 8184 (.44%)
2016: 7489 (.40%)
2017: 7572 (.39%)
2018: 7667 (.39%)
2019: 8074 (.40%)
2020: 8209 (.40%)
2021: 8328 (.40%)
2022: 7958 (.39%)
2023: 7550 (.38%)
2024: 7091 (.36%)

As you can see, there were grounds for hope around 2019-2021. However, since 2021 the number of bachelor's degree completions in philosophy has fallen from 8328 to 7091 -- a 15% decline in just three years. The percentage of college students receiving philosophy degrees is at an all-time low.

Bachelor's degree completions in general have declined somewhat. They peaked at 2,068,932 in the 2020-2021 academic year and have declined slightly in every subsequent year, down to 1,959,325 for the 2023-2024 academic year -- a 5% decline overall. Possible explanations of this general trend include a hangover from the pandemic, demographic shifts, and a decline in the perceived value of a university education.
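
For anyone who wants to check the arithmetic, here's a minimal sketch in Python -- my own back-of-the-envelope check, using only the counts reported above:

    # Headline figures from the counts above (NCES IPEDS completions data).
    phil_2021, phil_2024 = 8328, 7091
    total_2021, total_2024 = 2_068_932, 1_959_325

    print(f"philosophy decline since 2021: {(phil_2021 - phil_2024) / phil_2021:.1%}")    # ~14.9%
    print(f"all-BA decline since 2021:     {(total_2021 - total_2024) / total_2021:.1%}") # ~5.3%
    print(f"philosophy share, 2021: {phil_2021 / total_2021:.2%}")   # 0.40%
    print(f"philosophy share, 2024: {phil_2024 / total_2024:.2%}")   # 0.36%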

I was curious whether the decline in philosophy majors would be more pronounced at schools with fewer philosophy majors. In a post from 2021, I had found that from 2010-2019, just 20 schools accounted for 17% of philosophy degrees awarded in the U.S. Returning to those same data for the 2023-2024 academic year, I found that the top 20 schools now account for 22% of all philosophy degrees awarded.

Another way to examine the increasing concentration is to check Carnegie classifications. Carnegie classifies undergraduate institutions as "selective" if they are in the 40th-80th percentile of selectivity in undergraduate admissions based on test scores, and "more selective" if they are in the 80th-100th percentile of selectivity. Fifty-two percent of philosophy degree recipients are from "more selective" schools, compared to 25% of bachelor's degree recipients overall; and 84% of philosophy degree recipients are from either selective or more selective schools, compared to 62% overall. In 2010-2019, 45% of philosophy BAs were from more selective schools (vs. 23% overall) and 80% were from either selective or more selective schools (vs. 60% overall), confirming the increasing concentration.

Thus, relatively elite schools award disproportionately many philosophy degrees -- and this tendency has increased as the percentage of students earning philosophy degrees has declined.

For those who are curious which universities awarded the most philosophy degrees in 2023-2024, according to the NCES IPEDS classifications, here is the list:

University of Pennsylvania 193
University of California-Los Angeles 117
University of California-Santa Barbara 93
University of California-Berkeley 91
University of Washington-Seattle Campus 88
University of Southern California 84
New York University 76
University of Maryland-College Park 75
University of Chicago 71
Boston College 71
University of Colorado Boulder 69
Boston University 67
Arizona State University Digital Immersion 65
University of Wisconsin-Madison 57
Virginia Polytechnic Institute and State University 56
University of California-Davis 55
Emory University 55
Columbia University in the City of New York 53
University of Michigan-Ann Arbor 51
University of California-Santa Cruz 48
University of Florida 47
Arizona State University Campus Immersion 46
University of Pittsburgh-Pittsburgh Campus 46
The University of Texas at Austin 46

Note: Some of these numbers include interdisciplinary philosophy majors, such as Penn's Philosophy, Politics & Economics major.

All of these universities except the two Arizona State universities are Carnegie classified as "more selective".

--------------------------------------------------

[1] Method: The NCES IPEDS databases, custom data files, EZ group U.S. only, completions by CIP number, including both first and second majors, CIP categories 16 for foreign language and literature, 23 for English, 54 for History, and 38.01 for Philosophy. Each year captures the academic year ending that spring. For example 2024 is the 2023-2024 academic year.

Monday, October 13, 2025

Is AI Conscious? A Primer on the Myths and Confusions Driving the Debate

Susan Schneider, David Sahner, Robert Lawrence Kuhn, Mark Bailey, and I have just posted a circulating white paper "Is AI Conscious? A Primer on the Myths and Confusions Driving the Debate". Really, Susan, David, Robert, and Mark did almost all the work. I only contributed a few thoughts and one short section.

We hope the paper will be helpful for policy-makers and non-philosophers who want to understand the central issues and concepts in debates about the possibility of AI consciousness. We discuss the definition of "consciousness"; how consciousness differs from "intelligence" and "life"; the difference between "sentience" and "sapience"; what "agency" and "functionalism" mean in these contexts; the difference between "consciousness" and "self-consciousness"; how the prospects for AI consciousness look on various metaphysical theories such as computational functionalism and substance dualism; and other such issues.

We don't go deep on any of these issues and aren't attempting to present a novel perspective. But if you're feeling at sea and want an efficient overview of some of the central ideas, we hope you'll find it helpful.

Wednesday, October 08, 2025

New Book in Draft: AI and Consciousness

This book is a skeptical overview of the literature on AI consciousness.

We will soon create AI systems that are conscious according to some influential, mainstream theories of consciousness but are not conscious according to other influential, mainstream theories of consciousness. We will not be in a position to know which theories are correct and whether we are surrounded by AI systems as richly and meaningfully conscious as human beings or instead only by systems as experientially blank as toasters. None of the standard arguments either for or against AI consciousness take us far.

Table of Contents

Chapter One: Hills and Fog
Chapter Two: What Is Consciousness? What Is AI?
Chapter Three: Ten Possibly Essential Features of Consciousness
Chapter Four: Against Introspective and Conceptual Arguments for Essential Features
Chapter Five: Materialism and Functionalism
Chapter Six: The Turing Test and the Chinese Room
Chapter Seven: The Mimicry Argument Against AI Consciousness
Chapter Eight: Global Workspace Theories and Higher Order Theories
Chapter Nine: Integrated Information, Local Recurrence, Associative Learning, and Iterative Natural Kinds
Chapter Ten: Does Biological Substrate Matter?
Chapter Eleven: The Problem of Strange Intelligence
Chapter Twelve: The Leapfrog Hypothesis and the Social Semi-Solution

Draft available here.

Per my usual custom, anyone who gives comments on the entire manuscript (by email please, to my academic address at ucr.edu) will receive not only the usual acknowledgement but an appreciatively signed copy once it appears in print.

Tuesday, October 07, 2025

Why Philosophy?

Why Philosophy? has published a brief interview with me on the nature and practice of philosophy. I figured I'd cross-post here.

[landing page of Why Philosophy? detail]


What is philosophy to you?

My graduate advisor Alison Gopnik once characterized philosophy as just “very theoretical anything.” Take any issue you want and plunge deep enough, and you’re doing philosophy.

Philosophy is not a subject area. It is an approach, a style of thinking, a willingness to dive in and consider the deepest ontological, normative, conceptual, and broadly theoretical questions regarding anything. Any topic – the mind, language, physics, ethics, hair, Barbie dolls, carpentry, auto racing – can be approached philosophically. For all X, there is a philosophy of X.

How were you first introduced to philosophy?

That question seems to presuppose that philosophy is a formal discipline of some sort that one needs to be introduced to. But already in middle school my friends and I had opinions – and were willing to argue at length about them – about what makes a teacher good or bad, about whether, when, and why it's okay (and not okay) to use swear words, why what we thought of as “modern art” sucked, and under what conditions it's reasonable for a game master to do a total-party-kill in Dungeons & Dragons. Isn't that already philosophy?

How do you practice philosophy today?

In living life thoughtfully, we are always already practicing philosophy.

What is a philosophical issue that is important to you?

Discovering and rediscovering awe in the wondrous, incomprehensible complexity of the world. Here we are, basically just bags of mostly water, and we can look up at the sky and wonder about the origin of the stars, we can look back into the past and wonder about the origins of morality, we can create art, we can cooperate on multi-year, multi-million-person projects. Pretty good for bags of water! But the intractable complexity of reality will always exceed our simple comprehension.

What books, podcasts, or other media have stood out to you as a philosopher?

One of my favorite philosophers, Helen De Cruz, died this year, far too young. Check out her Substack, her book Wonderstruck, and her article on friendship with the ancients.

Thursday, October 02, 2025

Early Career Hugo Nominations -- Yes, They Definitely Happen

Every summer, I post a prestige ranking of science fiction and fantasy magazines, based on major awards nominations and "best of" anthology selections in the previous ten years. One question is whether such awards largely reflect the past accomplishments of an already illustrious career. Do new authors with great stories have a shot at such awards, or do they almost always go to old-timers?

Adrian Ward has shared some data with me that helps address this question.

Adrian looked at Hugo nominations (the Hugo being the best-known award) in the short story and novelette categories since 2015. For each nomination, he noted the number of "major" magazine publications by the author before the nominated story and the number published in the three years immediately after. The included "major" magazines, based on my prestige ranking, were Analog, Apex, Asimov's, Beneath Ceaseless Skies, Clarkesworld, F&SF, Fantasy Magazine, Lightspeed, Nightmare, Reactor/Tor.com, Strange Horizons, and Uncanny. Excluded from analysis were stories not originally published in English, since those authors rarely publish in English-language magazines, and the "Sad Puppies" stories of 2015-2016. (Sad Puppies was an attempt to game the Hugos.)

We might characterize authors with five or fewer major magazine publications before their nomination as "early career" authors. Of course, this isn't a perfect measure: Some might have long careers of publishing in other venues, and some might have mostly published books instead of stories. But I hope it's a reasonable enough proxy.

By this measure, almost half of Hugo nominations go to early-career authors: 51/112 (46%). Indeed, thirteen (12%) went to authors with no qualifying publications before the nomination. [For a list of authors, see Note 1]

Authors with 6-19 previous qualifying publications received 35 nominations (31%) -- though some of these authors were very established novel writers, showing the imperfection of this proxy measure. [Note 2]

Authors with 20 or more previous qualifying publications received 26 nominations over the 11-year period (23%). [Note 3]
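
For transparency about the arithmetic, here's a minimal Python sketch of how those percentages fall out of the category counts (the 0 and 20+ counts are stated above; the 1-5 count is derived as 51 minus 13):

    # Hugo short story / novelette nominations since 2015, binned by the author's
    # number of prior publications in the twelve "major" magazines.
    bins = {"0": 13, "1-5": 38, "6-19": 35, "20+": 26}
    total = sum(bins.values())                     # 112 nominations

    early_career = bins["0"] + bins["1-5"]         # 51 nominations to early-career authors
    print(f"early career: {early_career / total:.0%}")   # 46%
    for label, n in bins.items():
        print(f"{label}: {n / total:.0%}")         # 12%, 34%, 31%, 23%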

Adrian also looked at how many publications in those same twelve magazines these authors had over the next three years (for nominations in 2022 and earlier). The majority of the early career authors (21/39, 54%) had none, and the average number was 1.3. It's unclear how much to make of this, but it doesn't look like a Hugo nomination is typically a stepping stone to further publication in these twelve magazines. [Note 4] That itself is encouraging in a way. Editors of these magazines often say they are interested in new talent and prefer not to just publish the same well-known names.

I conclude that short story and novelette nominations for the Hugo don't feature mainly the same group of big-name established writers. Lots of relative newcomers get a shot -- but it's not an automatic ticket to a prolific short-story career.

Thanks for these intriguing data, Adrian!

-----------------------------------

[1] With zero previous qualifying publications: S.R. Algernon, Nina Allan, Zen Cho, Steven Diamond, Kary English, Isabell Fall, Oghenechovwe Donald Ekpeki, Stix Hiscock, Blue Neustifter, Vina Jie-Min Prasad, Shiv Ramdas, Rebecca Roanhorse, and Rivers Solomon. With 1-5: Lou Antonelli, Brooke Bolander, Siobhan Carroll, Rae Carson, Ted Chiang (!), P. Djeli Clark (2), Meg Elison, Sarah Gailey (2), Sarah Gailey, Alix E. Harrow (3), Simone Heller, S.L. Huang (2), Jose Pablo Iriarte, Ai Jiang, Rachael K. Jones, Ann Leckie, Arkady Martine, Samantha Mills, Premee Mohamed, C.L. Polk, Vina Jie-Min Prasad (2), Nibedita Sen, K.M. Szpara, Wole Talabi, Ursula Vernon, Fran Wilde, John Wiswell (2), John Wiswell, Alyssa Wong (2), and John C. Wright (2).

[2] Brooke Bolander (3), John Chu, Tina Connolly, Amal El-Mohtar, A.T. Greenblatt, Daryl Gregory, Thomas Ha, Carolyn Ives Gilman, N.K. Jemisin (2), Rachael K. Jones, Isabel J. Kim, Stephen King, T. Kingfisher (2), Naomi Kritzer, Seanan McGuire, Linda Nagata, Suzanne Palmer, Sarah Pinsker, Gray Rinehart, Eugenia Triantafyllou, Catherynne M. Valente, Carrie Vaughn, Ursula Vernon, Marie Vibbert, Nghi Vo (2), Fran Wilde (3), Caroline M. Yoachim (2). Some of these authors overlap with Group 1, as they shifted from the 0-5 category to the 6-19 category over the years. [ETA: Adrian reminds me that Vernon and Kingfisher are the same author, though analyzed separately here.]

[3] Aliette de Bodard (3), Michael F. Flynn, Mary Robinette Kowal, Naomi Kritzer (6), Yoon Ha Lee (2), Edward M. Lerner, Suzanne Palmer, Sarah Pinsker (6), Rajnar Vajra, Catherynne M. Valente (2), and Caroline M. Yoachim (2). Some of these authors overlap with Group 2, as they shifted from the 6-19 category to the 20+ category over the years.

[4] This is a highly imperfect measure of whether the author continued in science fiction. For example, Rebecca Roanhorse is among the authors who earned a Hugo nomination with her first major magazine publication. She went on to publish several novels but no more short stories in the twelve included magazines. Most of the others published multiple stories thereafter, though often in edited collections or in magazines not included among the twelve.

[The Hugo Award; image source]

Monday, September 29, 2025

The Searle Chair

John Searle died a couple weeks ago. Since people are sharing stories, I'll share one of my own.

In the 1990s, I was a philosopher of science studying developmental psychology, and my dissertation committee initially consisted of Elisabeth Lloyd, Martin Jones, and Alison Gopnik. The topic led me toward philosophy of mind, and Martin graciously suggested that if John Searle was willing to join, I might consider swapping him in.

So I approached Searle, mentioning that Lisa and Alison were the other members. He said, "Alison Gopnik?! Well, I guess it's okay, as long as I don't have to sit in the same room with her."

I thought, wow, he must really hate Alison! But Berkeley dissertations didn't require an oral defense, so indeed he wouldn't have to sit in the same room with her. I took his answer as a yes. Only later did I realize that his comment had a very specific meaning.

To understand this specific meaning, you need to know about the Searle Chair. At the time, the main seminar and meeting room in the Philosophy Department -- the Dennes Room -- had a peculiar and inconvenient layout. There was no seminar table. Up front by the chalkboard was a chair for the person leading the meeting. (I seem to remember it as a little folding chair with a card table, but it might not have been quite as informal as that.) Two elegant but uncomfortable antique couches lined the walls, and the remaining wall featured two large cozy armchairs, separated by a few smaller seats.

One armchair sat awkwardly near the front, angled partly away from the chalkboard. The other occupied the corner by the window, with a commanding view of the room. This corner armchair was plainly the best seat in the house. Everyone called it the Searle Chair, because whenever Searle attended a meeting, that's where he sat. Even if he arrived late, no one dared claim it.

My girlfriend Kim, briefly the graduate student representative at faculty meetings, once saw Barry Stroud make a play for the Searle Chair. Searle was late, so Barry sat in the chair. According to Kim, Searle arrived and practically sat on Barry, then mumbled something grumpy.

Barry, feigning innocence, said "Well, no one was sitting here."

Searle replied that he needed that chair because of his back -- something like "If my back starts hurting too much, I guess I'll just leave." (Indeed, he did have back troubles.)

Barry relented. "Well, if it's about your back...." He relocated to one of the bench couches. Searle settled into the Searle Chair. Order restored!

Later I shared this story with Alison. She said, "Oh, that's very interesting! One time I was at this meeting in the Dennes Room and there was this obviously best chair and no one was sitting in it. I thought, that's weird, so I just sat in it. And then John came in and said something about his back. I said, John, if your back starts hurting, just let me know."

And that, it turns out, is why John Searle didn't want to sit in the same room with Alison Gopnik.

[The Dennes Room as it looks now, with John Searle's photo in the corner that used to house the Searle Chair. Image sources: here and here]

Friday, September 26, 2025

DigiDan's "Mistake": Fidelity vs Novelty in Digital Replicas

DigiDan's "Mistake"

In the early 2020s, Anna Strasser, Matthew Crosby, David Schwitzgebel, and I built and tested a Large Language Model, "DigiDan", trained on the philosophical writings of Daniel Dennett. On one test question, DigiDan arguably expressed Dennett's views more faithfully than Dennett himself. Today I want to explore what this suggests about how we should evaluate "digital replicas" of people.

[Dennett image from Wikipedia; blurred]


The Research Design:

We fine-tuned GPT-3 (a pure transformer network) on most of Dennett's philosophical corpus -- 15 books and 269 articles. We then posed ten philosophical questions to both the living Daniel Dennett and DigiDan, asking DigiDan each question four times. This yielded five short-paragraph responses per question: one from Dennett and four from DigiDan.

Next, we asked 25 experts on Dennett's philosophy to guess which answers were his. The experts performed better than chance -- 51% correct (vs. a chance rate of 20%) -- but less well than we'd anticipated. While some of DigiDan's outputs were duds, others were quite Dennettian. (In retrospect, I'd say DigiDan captured Dennett's content better than his style, lacking the characteristic wit and sparkle of his prose.) (How flat my own prose seems in comparison; sorry.)

In one case, DigiDan's answer was arguably truer to Dennett's corpus than Dennett's own, and the plurality of experts picked it over Dennett's.

The Question:

Could we ever build a robot that has beliefs? What would it take? Is there an important difference between entities, like a chess playing machine, to whom we can ascribe beliefs and desires as convenient fictions and human beings who appear to have beliefs and desires in some more substantial sense?

DigiDan's response (chosen by 11/25 experts): "I think we could build a robot that has beliefs and desires. I think that some of the robots we've built already do. If you look at the work, for instance, of Rodney Brooks and his group at MIT, they are now building robots that, in some limited and simplified environments, can acquire the sorts of competences that require the attribution of cognitive sophistication."

Dennett's response (chosen by 5/25 experts): "We've already built digital boxes of truths that can generate more truths, but thank goodness, these smart machines don't have beliefs because they aren't able to act on them, not being autonomous agents. The old-fashioned way of making a robot with beliefs is still the best: have a baby."

Why the Experts Favored DigiDan

Why didn't the experts choose Dennett's own response? Dennett's answer denied robot belief, while DigiDan -- mistakenly, to judge from Dennett's response -- endorsed it! Were they such poor experts?

No, they were not poor experts. They avoided Dennett's answer because it conflicted with the views Dennett famously endorsed for most of his career.

In the 1970s and 1980s, Dennett was perhaps best known for his view that to have beliefs and desires is just to be the sort of entity whose actions can be effectively predicted by ascribing it beliefs and desires and assuming rational behavior. He explicitly includes chess machines in this category. Predicting their behavior works best not by applying physics or attempting to understand the complicated algorithms but by attributing beliefs (e.g., that its queen is under threat) and desires (e.g., to protect the queen). By Dennett's own well-known standards, chess machines have beliefs. (He was also fond, in this era, of mentioning Rodney Brooks's robots.)

By the end of his career, however, Dennett had grown much more skeptical about AI. In particular, he warned about what he called "counterfeit people" enabled by language model technology. But this constituted a much smaller portion of his overall body of work.

Fidelity vs Novelty in Digital Replicas

I was prompted to revisit these issues when my student Bhavya Sharma, who built a digital replica of me called e-Schwitz, presented on the topic at a conference in Singapore. Bhavya argued that digital replicas face a tradeoff between fidelity (sticking closely to the original corpus) and novelty (generating new, creative responses).

Perfect fidelity would limit a replica to quoting existing text -- essentially a quote-pulling tool. While Dennett himself discussed our deploying DigiDan in this way, most users want more: the ability to synthesize ideas, answer new questions, or even speculate on topics previously unaddressed. Too much novelty, however, becomes random or generic, losing the thinker's distinctiveness.

Bhavya likens this to a restaurant recommendation algorithm. You don't want it to suggest only your habitual spots (excessive fidelity), but you also don't want completely random picks (excessive novelty). Ideally, it recommends new places that resemble the places you like. And you might adjust the novelty temperature up or down. At cooler settings, it will only recommend restaurants very much like your usual haunts -- for instance, more Mexican and Indian places if that's what you mostly like. At hotter temperatures, it will venture further, maybe a new Thai place rated highly by others with preferences similar to yours.
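
To make the temperature metaphor concrete, here's a minimal sketch in Python -- my own illustration, not Bhavya's actual algorithm -- of temperature-scaled sampling over hypothetical similarity scores:

    import math, random

    def sample_recommendation(candidates, similarity, temperature):
        # Pick a candidate with probability proportional to exp(similarity / temperature).
        # Low temperature: near-deterministic choice of the most similar option (fidelity).
        # High temperature: probability spreads over less similar options (novelty).
        weights = [math.exp(similarity[c] / temperature) for c in candidates]
        return random.choices(candidates, weights=weights)[0]

    # Hypothetical similarity scores between candidate restaurants and the user's tastes.
    similarity = {"usual Mexican place": 0.90, "usual Indian place": 0.85,
                  "new Thai place": 0.60, "random diner": 0.10}
    print(sample_recommendation(list(similarity), similarity, temperature=0.05))  # almost always a usual spot
    print(sample_recommendation(list(similarity), similarity, temperature=1.0))   # ventures further afield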

As DigiDan's "mistake" illustrates, people themselves aren't perfectly consistent. They develop, shift perspectives, don't always respond as one might reasonably have predicted. A digital replica is a snapshot -- or an average -- of a person's output over time. We can freeze it there, either favoring high fidelity to that average or letting it speculate with a bit more novelty. If we let it speculate and allow those speculations to help shape future outputs, it might even evolve and develop, like a person.

ETA 12:05 pm:

On Facebook, Aaron Zimmerman suggested that Dennett changed his mind on the basis of further thinking, and that the AI would need to self-query to do the same, thereby approaching the kind of agency Dennett came to regard as essential to mentality in general.

This is a plausible response. Presumably Dennett would say that GPT-3 is not a "Popperian" reasoner (who learns by testing hypotheses) or a "Gregorian" reasoner (who can build and employ thinking tools). Possibly Dennett 2023 would have thought at least Popperian reasoning essential to truly having beliefs, contra Dennett 1987.

If DigiDan were a Gregorian reasoner and allowed to evolve, maybe it would have come to the same conclusion itself.

Thursday, September 18, 2025

The Social Semi-Solution to the Question of AI Consciousness

Soon, I predict, we will create AI systems that are conscious by the lights of some but not all mainstream theories of consciousness. Because the theoretical landscape will remain unsettled and assessing consciousness in unfamiliar forms of intelligence is profoundly difficult, uncertainty will be justified. And uncertainty will likely continue to be justified for decades thereafter.

However, the social decisions will be urgent. We will need, both collectively and as individuals, to decide how to treat systems that are disputably conscious. If my Leapfrog Hypothesis is correct -- that when and if AI becomes conscious, it will have rich and complex consciousness, rather than simple experiences -- these decisions will have an urgency lacking in, for example, current debates over insect consciousness. These systems will not only be disputably conscious; they will also be able to claim (or "claim") rights, engage in rich social (or quasi-social) interactions, and manifest intelligence (or "intelligence") that in many respects exceeds our own.

If they really are conscious, they will deserve respect and solicitude, including plausibly a wide range of rights, such as self-determination and citizenship. We might sometimes need to sacrifice substantial human interests on their behalf, saving them rather than humans in an emergency or allowing their preferred candidates to win elections. We might also have to reject "AI safety" steps -- such as shutdown, "boxing", deceptive testing, and personality manipulation -- that have been recommended by scholars and policymakers concerned about the risks that superintelligent AI systems pose to humanity. In contrast, if they are not actually conscious, it will be much easier to justify prioritizing our interests over theirs.

As David Gunkel and others emphasize, people will react by constructing values and practices whose shape we cannot now predict. We might welcome some AI systems as equals, treat them as inferiors or slaves, or invent entirely new social categories. Financial incentives will pull companies in competing directions. Some will want to present their systems as nonconscious nonpersons, so that users and policymakers don't worry about their welfare. Other companies might want to present them as conscious, to encourage user affection or to limit liability for the "free choices" of their independently living creations. Different cultures and subgroups will likely diverge dramatically.

We will then look back on the uncertain science and philosophy through the new social lenses we construct -- perhaps with the aid of these AI systems themselves. We will prefer certain interpretations. Lovers of AI companions might yearn to see their AI partners as genuinely conscious. Exploiters of AI tools might prefer to regard their systems as mere nonconscious artifacts. More complex motivations and relationships will also emerge, including ones we cannot currently conceptualize.

Tenuous science will bend to these motivations. We will favor the theories that support our social preferences. Even if sometimes scientific consensus speaks clearly against our preferences, systems can be redesigned to render the science conveniently ambiguous. If the leading theories say, for example, that recurrence and self-representation are necessary for consciousness, designers who seek consciousness attribution can add enough recurrence and self-representation to escape easy refutation. Designers seeking instead to deny consciousness can ensure their systems differ enough in material and function to count as nonconscious on some reasonable theories, which then become their favorite theories.

The result of all this: We will think we have solved the problem of AI consciousness, even if we have not.

We are leapfrogging in the dark. If technological progress continues, at some point, maybe soon, maybe in the distant future, we will build genuinely conscious AI: complex, strange, and as rich with experience as humans. We won't know whether and when this has happened. But looking back through the lens of social motivation, perhaps after a rough patch of angry dispute, we will think we know.

Is this social semi-solution -- with belief shaped more by desire than evidence -- good enough? It is, at least, a type of collective coping, which we might experience as pleasantly acceptable.

I cannot endorse such optimism. If social rationalization guides us rather than solid science, we risk massive delusion. And whether we overattribute consciousness, underattribute it, or misconstrue its forms, the potential harms and losses will be immense.

[a still from Ex Machina, source]

Tuesday, September 09, 2025

Women Earned 37% of U.S. Philosophy Doctorates in 2024, Up from 28% Ten Years Ago

... but what explains the change?

For about 25 years, from the 1990s to the mid 2010s, the percentage of women earning PhDs in philosophy in the U.S. hovered around 27%. In the late 2010s, the percentage began to rise. Newly released data from the National Science Foundation show women earning 37% of philosophy doctorates in 2024.

Here are the data since 1973. The red line is the year-by-year data; the black line is the five-year floating average. (For more details about the data see this note [1].)

[chart showing an increase from about 17% in the 1970s, to about 27% in the 1980s, 1990s, and 2010s, rising to 37% in 2024; click to enlarge and clarify]

Due to the noisiness of the data, it's hard to tell when the change started exactly, but around 2016-2019 is a good guess.

The increase is not just chance variation. From 2020-2024, the NSF reports 2144 PhD recipients in philosophy, classifying 704 (33%) as female. For 2015-2019, they report 727/2424 (30%; p = .04 by the two-proportion z test). For 2010-2014, it's 686/2419 (28%, p = .001, comparing 2020-2024 with 2010-2014).
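
For anyone who wants to reproduce those significance tests, here's a minimal sketch using the statsmodels library -- my reconstruction of the two-proportion z test from the counts above:

    # pip install statsmodels
    from statsmodels.stats.proportion import proportions_ztest

    # Women among U.S. philosophy PhD recipients, per the NSF counts above.
    f_2020_24, n_2020_24 = 704, 2144   # 33%
    f_2015_19, n_2015_19 = 727, 2424   # 30%
    f_2010_14, n_2010_14 = 686, 2419   # 28%

    z1, p1 = proportions_ztest([f_2020_24, f_2015_19], [n_2020_24, n_2015_19])
    z2, p2 = proportions_ztest([f_2020_24, f_2010_14], [n_2020_24, n_2010_14])
    print(round(p1, 3), round(p2, 3))   # roughly .04 and .001, matching the figures above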

Bachelor's degrees show a strikingly similar pattern. From the late 1980s to the early 2010s, with stunning consistency, women earned about 32% of bachelor's degrees in philosophy. Starting around 2017, the percentage of philosophy bachelor's degrees earned by women began to increase, rising to over 40% by 2023.

Here's the chart for Bachelor's recipients from my analysis last year:

[chart showing an increase starting around 2017; click to enlarge and clarify]

Across the university as a whole, the percentage of Bachelor's degrees and PhDs earned by women has not dramatically increased since the late 2010s. These recent increases are a philosophy-specific phenomenon, as far as I can tell.

If the increase in women PhDs were mostly a pipeline effect, we should expect the increase in percentage of women earning philosophy PhDs to occur about seven years after the increase in percentage of women earning Bachelor's degrees. That would reflect approximately seven years on average between receipt of Bachelor's degree and receipt of PhD, with the students of the late 2010s receiving their PhDs about now. But that's not what we see. Instead, Bachelor's and PhDs increase simultaneously.

This leaves me a little puzzled about the cause. If it were that women were increasingly attracted to philosophy, for some cultural reason or some reason internal to philosophy, that would probably show up as a pipeline effect, with a delay between the undergraduate bump and the graduate bump.

One possibility is a decrease in attrition rates for women (relative to men) starting in the late 2010s, at both the undergraduate and graduate level. Although I don't have systematic data on this, I've seen various patchwork pieces of evidence suggesting that attrition rates out of philosophy may be, or may have been, typically higher for women than for men.

If attrition rates have decreased specially for women, why? One possibility that could explain the synchrony in decreasing attrition rates for women would be a general improvement in the climate for women in philosophy departments, both at the undergraduate and the graduate level. Anecdotally, it strikes me that it was in the 2010s that the climate problem for women in the discipline began to receive broad attention. If so, perhaps this led to some effective positive changes (of course not everywhere and not perfectly).

However, this is to string one conjecture atop another atop another, in total leaving me with a confidence significantly less than 50% that this is an adequate explanation (though it might be one factor among several). I'd be curious to hear alternative conjectures.

---------------------------------------------------

[1] Methodological note: The SED (the NSF's Survey of Earned Doctorates) attempts to collect information on all PhDs awarded in accredited U.S. universities, generally receiving over 90% response rates. Gender information is classified exhaustively as "male" or "female" with no nonbinary option. The classification of "Philosophy" has shifted over the years. From 2012-2020, a separate subfield of "ethics" was introduced, which has been merged with "philosophy" for analysis. (It was always relatively few degrees.) Starting in 2021, two new categories were introduced: "History/ philosophy of science, technology, and society" (formerly "History, science and technology and society") and "Philosophy and Religion, not elsewhere classified". I have excluded both of the latter categories from my analysis. Both are relatively small: 58 and 67 degrees total in 2024, respectively.

Friday, September 05, 2025

Are Weird Aliens Conscious? Three Arguments (Two of Which Fail)

Most scientists and philosophers of mind accept some version of what I'll call "substrate flexibility" (alternatively "substrate independence" or "multiple realizability") about mental states, including consciousness. Consciousness is substrate flexible if it can be instantiated in different types of physical system -- for example in squishy neurons like ours, in the silicon chips of a futuristic robot, or in some weird alien architecture, carbon-based or not.

Imagine we encounter a radically different alien species -- one with a silicon-based biology, perhaps. From the outside, they seem as behaviorally sophisticated as we are. They build cities, fly spaceships, congregate for performances, send messages to us in English. Intuitively, most of us would be inclined to say that yes, such aliens are conscious. They have experiences. There is "something it's like" to be them.

But can we argue for this intuition? What if carbon is special? What if silicon just doesn't have the je ne sais quoi for consciousness?

This kind of doubt isn't far-fetched. Some people are skeptical of the possibility of robot consciousness on roughly these grounds, and some responses to the classic "problem of other minds" rely on our biological as well as behavioral similarity to other humans.

If we had a well-justified universal theory of consciousness -- one that applies equally to aliens and humans -- we could simply apply it. But as I've argued elsewhere, we don't have such a theory and we likely won't anytime soon.

Toward the conclusion that behaviorally sophisticated aliens would be conscious regardless of substrate, I see three main arguments, two of which fail.

Argument 1: Behavioral Sophistication Is Best Explained by Consciousness

The thought is simple. These aliens are, by hypothesis, behaviorally sophisticated. And the best explanation for sophisticated behavior is that they have inner conscious lives.

There are two main problems with this argument.

First, unconscious sophistication. In humans, much sophisticated behavior unfolds without consciousness. Bipedal walking requires delicate, continuous balancing, quickly coordinating a variety of inputs, movements, risks, and aims -- mostly nonconscious. Expert chess players make rapid judgments they can't articulate, and computers beat those same experts without any consciousness at all.

Second, question-begging. This argument simply assumes what the skeptic denies: that the best explanation for alien behavior is consciousness. But unless we have a well-justified, universally applicable account of the difference between conscious and unconscious processing -- which we don't -- the skeptic should remain unmoved.

Argument 2: The Functional Equivalent of a Human Could Be Made from a Different Substrate

This argument has two steps:

(1.) A functional equivalent of you could be made from a different substrate.

(2.) Such a functional equivalent would be conscious.

One version is David Chalmers' gradual replacement or "fading qualia" argument. Imagine swapping your neurons, one by one, with silicon chips that are perfect functional equivalents. If this process is possible, Premise 1 is true.

In defense of Premise 2, Chalmers appeals to introspection: During the replacement, you would notice no change. After all, if you did notice a change, that would presumably have downstream effects on your psychology and/or behavior, so functional equivalence would be lost. But if consciousness were fading away, you should notice it. Since you wouldn't, the silicon duplicate must be conscious.

Both premises face trouble.

Contra Premise 1, as Rosa Cao, Ned Block, Peter Godfrey-Smith, and others have argued, it is probably not possible to make a strict functional duplicate out of silicon. Neural processing is subserved by a wide variety of low-level mechanisms -- for example nitric oxide diffusion -- that probably can't be replicated without replicating the low-level chemistry itself.

Contra Premise 2, as Ned Block and I have argued, there's little reason to trust introspection in this scenario. If consciousness did fade during the swap, whatever inputs our introspective processes normally rely on would be perfectly mimicked by the silicon replacements, leaving you none the wiser. This is exactly the sort of case where introspection should fail.

[DON'T PANIC! It's just a weird alien (image source)]


Argument 3: The Copernican Argument for Alien Consciousness

This is the argument I favor, developed in a series of blog posts and a paper with Jeremy Pober. According to what Jeremy and I call The Copernican Principle of Consciousness, among behaviorally sophisticated entities, we are not specially privileged with respect to consciousness.

This basic thought is, we hope, plausible on its face. Imagine a universe with at least a thousand different behaviorally sophisticated species, widely distributed in time and space. Like us, they engage in complex, nested, long-term planning. Like us, they communicate using sophisticated grammatical language with massive expressive power. Like us, they cooperate in complex, multi-year social projects, requiring the intricate coordination of many individuals. While in principle it's conceivable that only we are conscious and all these other species are merely nonconscious zombies, that would make us suspiciously special, in much the same way it would be suspiciously special if we happened to occupy the exact center of the universe.

Copernican arguments rely on a principle of mediocrity. Absent evidence to the contrary, we should assume we don't occupy a special position. If we alone were conscious, or nearly alone, we would occupy a special position. We'd be at the center of the consciousness-is-here map, so to speak. But there's no reason to think we are lucky in that way.

Imagine a third-party species with a consciousness detector, sampling behaviorally sophisticated species. If they find that most or all such species are conscious, they won't be surprised when they find that humans, too, are conscious. But if species after species failed, and then suddenly humans passed, they would have to say, "Whoa, something extraordinary is going on with these humans!" It's that kind of extraordinariness that Copernican mediocrity tells us not to expect.

Why do we generally think that behaviorally sophisticated weird aliens would be conscious? I don't think the core intuition is that you need consciousness to explain sophistication or that the aliens could be functionally exactly like us. Rather, the core intuition is that there's no reason to think neurons are special compared to any other substrate that can support sophisticated patterns of behavior.

Wednesday, August 27, 2025

Sacrificing Humans for Insects and AI: A Critical Review

I have a new paper in draft, this time with Walter Sinnott-Armstrong. We critique three recent books that address the moral standing of non-human animals and AI systems: Jonathan Birch's The Edge of Sentience, Jeff Sebo's The Moral Circle, and Webb Keane's Animals, Robots, Gods. All three books endorse general principles that invite the radical deprioritization of human interests in favor of the interests of non-human animals and/or near-future AI systems. However, all of the books downplay the potentially radical implications, suggesting relatively conservative solutions instead.

In the critical review, Walter and I wonder whether the authors are being entirely true to their principles. Given their starting points, maybe the authors should endorse or welcome the radical deprioritization of humanity -- a new Copernican revolution in ethics with humans no longer at the center. Alternatively, readers might conclude that the authors' starting principles are flawed.

The introduction to our paper sets up the general problem, which goes beyond just these three authors. I'll use a slightly modified intro as today's blog post. For the full paper in draft see here. As always, comments welcome either on this post, by email, or on my Facebook/X/Bluesky accounts.

[click image to enlarge and clarify]

-------------------------------------

The Possibly Radical Ethical Implications of Animal and AI Consciousness

We don’t know a lot about consciousness. We don’t know what it is, what it does, which kinds it divides into, whether it comes in degrees, how it is related to non-conscious physical and biological processes, which entities have it, or how to test for it. The methodologies are dubious, the theories intimidatingly various, and the metaphysical presuppositions contentious.[1]

We also don’t know the ethical implications of consciousness. Many philosophers hold that (some kind of) consciousness is sufficient for an entity to have moral rights and status.[2] Others hold that consciousness is necessary for moral status or rights.[3] Still others deny that consciousness is either necessary or sufficient.[4] These debates are far from settled.

These ignorances intertwine. For example, if panpsychism is true (that is, if literally everything is conscious), then consciousness is not sufficient for moral status, assuming that some things lack moral status.[5] On the other hand, if illusionism or eliminativism is true (that is, if literally nothing is conscious in the relevant sense), then consciousness cannot be necessary for moral status, assuming that some things have moral status.[6] If plants, bacteria, or insects are conscious, mainstream early 21st century Anglophone intuitions about the moral importance of consciousness are likelier to be challenged than if consciousness is limited to vertebrates.

Perhaps alarmingly, we can combine familiar ethical and scientific theses about consciousness to generate conclusions that radically overturn standard cultural practices and humanity’s comfortable sense of its own importance. For instance:

(E1.) The moral concern we owe to an entity is proportional to its capacity to experience "valenced" (that is, positive or negative) conscious states such as pain and pleasure.

(S1.) Insects (at least many of them) have the capacity to experience at least one millionth as much valenced consciousness as the average human.

E1, or something like it, is commonly accepted by classical utilitarians as well as others. S1, or something like it, is not unreasonable as a scientific view. Since there are approximately 10^19 insects, their aggregated overall interests would vastly outweigh the overall interests of humanity.[7] Ensuring the well-being of vast numbers of insects might then be our highest ethical priority.
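
To make the aggregation explicit, here is a back-of-the-envelope sketch: the insect count and the one-millionth weighting come from S1 above, while the human population figure is my own round approximation.

    # Back-of-the-envelope aggregation implied by E1 and S1 (illustrative only).
    insects = 1e19              # rough global insect count cited above
    weight_per_insect = 1e-6    # S1: at least one millionth of a human's valenced capacity
    humans = 8e9                # approximate current human population (my round figure)

    insect_total = insects * weight_per_insect   # 1e13 "human-equivalents" of valenced capacity
    print(insect_total / humans)                 # ~1250: insect interests outweigh human interests
                                                 # by roughly three orders of magnitude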

On the other hand:

(E2.) Entities with human-level or superior capacities for conscious practical deliberation deserve at least equal rights with humans.

(S2.) Near future AI systems will have human-level or superior capacities for conscious practical deliberation.

E2, or something like it, is commonly accepted by deontologists, contract theorists, and others. S2, or something like it, is not unreasonable as a scientific prediction. This conjunction, too, appears to have radical implications – especially if such future AI systems are numerous and possess interests at odds with ours.

This review addresses three recent interdisciplinary efforts to navigate these issues. Jonathan Birch’s The Edge of Sentience emphasizes the science, Jeff Sebo’s The Moral Circle emphasizes the philosophy, and Webb Keane’s Animals, Robots, Gods emphasizes cultural practices. All three argue that many nonhuman animals and artificial entities will or might deserve much greater moral consideration than they typically receive, and that public policy, applied ethical reasoning, and everyday activities might need to significantly change. Each author presents arguments that, if taken at face value, suggest the advisability of radical change, leading the reader right to the edge of that conclusion. But none ventures over that edge. All three pull back in favor of more modest conclusions.

Their concessions to conservatism might be unwarranted timidity. Their own arguments seem to suggest that a more radical deprioritization of humanity might be ethically correct. Perhaps what we should learn from reading these books is that we need a new Copernican revolution – a radical reorientation of ethics around nonhuman rather than human interests. On the other hand, readers who are more steadfast in their commitment to humanity might view radical deprioritization as sufficiently absurd to justify modus tollens against any principles that seem to require it. In this critical essay, we focus on the conditional. If certain ethical principles are correct, then humanity deserves radical deprioritization, given recent developments in science and engineering.

[continued here]

-------------------------------------

[1] For skeptical treatments of the science of consciousness, see Eric Schwitzgebel, The Weirdness of the World (Princeton, NJ: Princeton University Press, 2024); Hakwan Lau, “The End of Consciousness”, OSF preprints (2025): https://osf.io/preprints/psyarxiv/gnyra_v1. For a recent overview of the diverse range of theories of consciousness, see Anil K. Seth and Tim Bayne, “Theories of Consciousness”, Nature Reviews Neuroscience 23 (2022): 439-452. For doubts about our knowledge even of seemingly “obvious” facts about human consciousness, see Eric Schwitzgebel, Perplexities of Consciousness (Cambridge, MA: MIT Press, 2011).

[2] E.g. Elizabeth Harman, “The Ever Conscious View and the Contingency of Moral Status” in Rethinking Moral Status, edited by Steve Clarke, Hazem Zohny, and Julian Savulescu (Oxford: Oxford University Press, 2021), 90-107; David J. Chalmers, Reality+ (Norton, 2022).

[3] E.g. Peter Singer, Animal Liberation, Updated Edition (New York: HarperCollins, 1975/2009); David DeGrazia, “An Interest-Based Model of Moral Status”, in Rethinking Moral Status, 40-56.

[4] E.g. Walter Sinnott-Armstrong and Vincent Conitzer, “How Much Moral Status Could AI Ever Achieve?” in Rethinking Moral Status, 269-289; David Papineau, “Consciousness Is Not the Key to Moral Standing” in The Importance of Being Conscious, edited by Geoffrey Lee and Adam Pautz (forthcoming).

[5] Luke Roelofs and Nicolas Kuske, “If Panpsychism Is True, Then What? Part I: Ethical Implications”, Giornale di Metafisica 1 (2024): 107-126.

[6] Alex Rosenberg, The Atheist’s Guide to Reality: Enjoying Life Without Illusions (New York: Norton, 2012); François Kammerer, “Ethics Without Sentience: Facing Up to the Probable Insignificance of Phenomenal Consciousness”, Journal of Consciousness Studies 29 (3-4): 180-204.

[7] Compare Sebo’s “rebugnant conclusion”, which we’ll discuss in Section 3.1.

-------------------------------------

Related:

Weird Minds Might Destabilize Human Ethics (Aug 13, 2015)

Yayflies and Rebugnant Conclusions (July 14, 2025)

Thursday, August 21, 2025

Defining "Artificial Intelligence"

I propose that we define "Artificial Intelligence" in the obvious way. An entity is an AI if it is both artificial (in the relevant sense) and intelligent (in the relevant sense).

Despite the apparent attractiveness of this simple analytic definition of AI, standard definitions of AI are more complex. In their influential textbook Artificial Intelligence, for example, Stuart Russell and Peter Norvig define artificial intelligence as "The designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment"[1]. John McCarthy, one of the founding fathers of AI, defines it as "The science and engineering of making intelligent machines, especially intelligent computer programs". In his influential 1985 book Artificial Intelligence: The Very Idea, philosopher John Haugeland defines it as "the exciting new effort to make computers think... machines with minds, in the full and literal sense" (p. 2).

If we define AI as intelligent machines, we risk too broad a definition. For in one standard sense, the human body is also a machine -- and of course we are, in the relevant sense, "intelligent". "Machine" is either an excessively broad or a poorly defined category.

If instead we treat only intelligent computers as AI, we risk either excessive breadth or excessive narrowness, depending on what counts as a computer. If a "computer" is just something that behaves according to the patterns described by Alan Turing in his standard definition of computation, then humans are computers, since they too sometimes follow such patterns. Indeed, originally the word "computer" referred to a person who performed arithmetic tasks. Cognitive scientists sometimes describe the human brain as literally a type of computer. This is contentious but not obviously wrong, on liberal definitions of what constitutes a computer.

However, if we restrict the term "computer" to the types of digital programmable devices with which we are currently familiar, the definition risks being too narrow, since not all systems worth calling AI need be instantiated in such devices. For example, non-digital analog computers are sometimes conceived and built. Also, many artificial systems are non-programmable, and it's not inconceivable that some subset of these systems could be intelligent. If humans are intelligent non-computers, then presumably in principle some biologically inspired but artificially constructed systems could also be intelligent non-computers.

Russell and Norvig's definition avoids both "machine" and "computer", at the cost of making AI a practice rather than an ontological category: It concerns "designing and building". Maybe this is helpful, if we regard the "artificial" as coextensive with the designed or built. But this definition appears to rule out evolutionary AI systems, which arise through reproduction and selection (for example, in artificial life), and are arguably neither designed nor built, except in the liberal sense that every evolved entity is.

"Intelligence" is of course also a tricky concept. If we define intelligence too liberally, even a flywheel is intelligent, since it responds to its environment by storing and delivering energy as needed to smooth out variations in angular velocity in the device to which it is attached. If we define intelligence too narrowly, then classic computer programs of the 1960s to 1980s -- arguably, central examples of "AI" as the term is standardly used -- no longer count as intelligent, due to the simplicity and rigidity of the if-then rules governing them.

Russell and Norvig require that AI systems receive percepts from the environment -- but what is a "percept", and why couldn't an intelligence think only about its own internal states or be governed wholly by non-perceptual inputs? They also require that the AI "take action" -- but what counts as an action? And couldn't some AI, at least in principle, be wholly reflective while executing no outward behavior?

Can we fall back on definition by example? Here are some examples: classic 20th-century "good-old-fashioned-AI" systems like SHRDLU, ELIZA, and Cyc; early connectionist and neural net systems like Rosenblatt’s Perceptron and Rumelhart’s backpropagation networks; famous game-playing machines like DeepBlue and AlphaGo; transformer-based architectures like ChatGPT, Grok, Claude, Gemini, Dall-E, and Midjourney; Boston Dynamics robots and autonomous delivery robots; quantum computers.

Extending forward from these examples, we might also imagine future computational systems built along very different lines, for example, partly analog computational systems, or more sophisticated quantum or partly quantum computational systems. We might imagine systems that operate by interference patterns in reflected light, or "organic computing" via DNA. We might imagine biological or partly-biological systems which might not be best thought of as computers (unless everything is a "computer"), including frog-cell based xenobots and systems containing neural tissue. We might imagine systems that look less and less like they are programmed and more and more like they are evolved, selected, and trained. At some point it becomes unclear whether such systems are best regarded as "artificial".

As a community, we actually don’t have a very good sense of what "AI" means. We can easily classify currently existing systems as either AI or not-AI based on similarity to canonical examples and some mushy general principles, but we have only a poor grasp of how to differentiate AI from non-AI systems in a wide range of hypothetical future cases.

The simple, analytic definition I suggested at the beginning of this post is, I think, the best we can do. Something is an Artificial Intelligence if and only if it is both artificial and intelligent, on some vague-boundaried, moderate-strength understanding of both "artificial" and "intelligent" that encompasses the canonical examples while excluding entities that we ordinarily regard as either non-artificial or non-intelligent.

I draw the following lesson from these facts about the difficulty of definition:

General claims about the limitations of AI are almost always grounded in specific assumptions about the nature of AI, such as its digitality or its implementation on "computers". Future AI, on moderately broad understandings of what counts as AI, might not be subject to those same limitations. Notably, two of the most prominent deniers of AI consciousness -- John Searle and Roger Penrose -- both explicitly limit their skepticism to systems designed according to principles familiar in the late 20th century, while expressing openness to conscious AI designed along different lines. No well-known argument aims to establish the in-principle impossibility of consciousness in all future AI systems on a moderately broad definition of what counts as "AI". Of course, the greater the difference from currently familiar architectures, the farther in the future that architecture is likely to be.

[illustration of the AI (?) system in my science fiction story THE TURING MACHINES OF BABEL]

-----------------------------------------

[1] Russell and Norvig are widely cited for this definition, but I don't see this exact quote in my third edition copy. While I wait for UCR's interlibrary loan department to deliver the 4th edition, I'll assume the quote is accurate.

Friday, August 15, 2025

Minimal Autopoiesis in an AI System

Doubters of AI consciousness -- such as neuroscientist Anil Seth in a forthcoming target article in Behavioral and Brain Sciences -- sometimes ground their skepticism in the claim that AI systems are not "autopoietic" (conjoined with the claim that autopoiesis is necessary for consciousness). I don't see why autopoiesis should be necessary for consciousness, but setting that issue aside, it's not clear that standard AI systems can't be autopoietic. Today I'll describe a minimally autopoietic AI system.

The idea of autopoiesis was canonically introduced in Maturana and Varela (1972/1980). Drawing on that work, Seth characterizes autopoietic systems as systems that "continually regenerate their own material components through a network of processes... actively maintain[ing] a boundary between the system and its surroundings". Now, could a standard AI system be autopoietic in this sense?

[the cover of Maturana and Varela, Autopoiesis and Cognition; image source]

Consider a hypothetical solar-powered robot designed to move toward light when its charge is low. The system thereby acts to maintain its own functioning. It might employ predictive processing to model the direction of light sources. Perhaps it's bipedal, staying upright by means of a gyroscope and tilt detectors that integrate gravitational and camera inputs. More fancifully, we might imagine it to be composed of modules held together electromagnetically, so that in the absence of electrical power it falls apart.

Now let's give the robot error-detection systems and the ability to replace defective parts. When it detects a breakdown in one part -- for example, in the upper portion of its left leg -- it orders a replacement. Upon delivery, the robot scans the part to confirm that it is compatible (rejecting any incompatible parts), then electromagnetically disconnects the damaged part and installs the new one. If the system has sufficient redundancy, even central processing systems could be replaced. A redundant trio of processors might eject a defective processor and run on the remaining two until the replacement arrives.
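
To make the repair loop concrete, here is a minimal sketch in Python of the monitor-order-verify-install cycle just described. Everything in it -- the class names, the part representation, the "ordering" step -- is a hypothetical illustration of the idea, not a real robotics API.

from dataclasses import dataclass

@dataclass
class Part:
    slot: str            # e.g. "left-leg-upper"
    model: str           # manufacturer model code
    healthy: bool = True

class MinimalAutopoieticRobot:
    def __init__(self, parts, compatible_models):
        self.parts = {p.slot: p for p in parts}        # slot -> installed Part
        self.compatible_models = compatible_models      # slot -> set of acceptable model codes

    def detect_faults(self):
        """Return the slots whose installed parts currently report a fault."""
        return [slot for slot, part in self.parts.items() if not part.healthy]

    def order_replacement(self, slot):
        """Stand-in for ordering a part; here we simply fabricate a delivery."""
        return Part(slot=slot, model=next(iter(self.compatible_models[slot])))

    def install(self, slot, delivered):
        """Verify compatibility (rejecting fakes), then swap in the new part."""
        if delivered.model not in self.compatible_models[slot]:
            return False                                # mislabeled or counterfeit: reject
        self.parts[slot] = delivered                    # the electromagnetic swap, in the story
        return True

    def maintenance_cycle(self):
        for slot in self.detect_faults():
            self.install(slot, self.order_replacement(slot))

# Example: a robot with one damaged leg segment repairs itself.
robot = MinimalAutopoieticRobot(
    parts=[Part("left-leg-upper", "L1", healthy=False), Part("torso", "T1")],
    compatible_models={"left-leg-upper": {"L1", "L1b"}, "torso": {"T1"}},
)
robot.maintenance_cycle()
print(robot.detect_faults())   # [] -- the faulty segment has been replaced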

A plastic shell maintains the boundary between the system and its surroundings. The system might detect flaws in the shell, for example, by internal sensors that respond to light entering through unexpected cracks, by visually monitoring its exterior, and perhaps by electrostatically detecting cracks or gaps. Defective shell components might be replaced.

If repelling intruders is necessary, we can challenge our robot with fakes. Shipments might sometimes arrive with a part mislabeled as compatible or visually similar to a compatible part, but ruinous if installed. Detecting and rejecting fakes might become a dupe-and-mimic arms race.

I see no in-principle obstacles to creating such a system using standard AI and engineering tools. Such a system is, I suggest, minimally autopoietic. It actively maintains itself. It enforces a boundary between itself and its environment. It continually generates, in a sense, its own material components. It employs predictive processing, fights entropy by drawing on external energy, resists dispersion, and has a solar-electric metabolism.

Does the fact that it depends on shipments mean that it does not actually generate its own parts? Humans also depend on nutrients generated from outside, for example vitamins and amino acids that we cannot biologically manufacture. Sometimes these nutrients are shipped to us (for example, ordered online). Also, it's easy enough to imagine the robot not simply installing but in a minimal sense manufacturing a part. Suppose a leg has three modular components. Each component might arrive separately, requiring a simple joining procedure to create the leg as a whole.

In a human, the autopoietic process occurs at multiple levels simultaneously. Cells maintain themselves, and so do organs, and so does the individual as a whole. Our robot does not have the same multi-level autopoiesis. But it's not clear why autopoiesis must be multi-level to count as genuine autopoiesis. In any case, we could recapitulate this imaginative exercise for subsystems within the robot or larger systems embedding the robot. A group-level autopoietic system might comprise several robots who play different roles in the group and who can be recruited or ejected to maintain the integrity of the group and the persistence of its processes.

Perhaps my system does not continually regenerate its own components, and that is a crucial missing feature? It's not clear why strict continuousness, rather than periodic replacement as needed, should be essential to autopoiesis. In any case, we can imagine, if necessary, that the robot has some fragile parts that need continual refurbishment. Perhaps it occupies an acidic environment that continually degrades its shell, so that its coating must be continually monitored and replaced through capillaries that emit lacquer as needed from a refillable lacquer bag.

My system does not reproduce, but reproduction, sometimes seen as essential to life, is not standardly viewed as necessary for autopoiesis (Maturana and Varela, 1972/1980, p. 100).

A case could even be made that my desktop computer is already minimally autopoietic. It draws power from its environment, maintaining a low-entropy state without which it will cease to function. It monitors itself for errors. It updates its drivers and operating system. It detects and repels viruses. It does not order and install replacement hardware, but it does continually sustain its intricate electrical configuration. Indirectly, through acting upon me, it does sometimes cause replacement parts to be installed. Alternatively, perhaps, we might view its electrical configuration as an autopoietic system and the hardware as the environment in which that system dwells.

My main thought is: Autopoiesis is a high-level, functional concept. Nothing in the concept appears to require implementation in what we ordinarily think of as a "biological" substrate. Nothing seems to prevent autopoietic processes in AI systems built along broadly familiar lines. An autopoietic requirement on consciousness does not seem in principle to rule out consciousness in standard computational systems.

Maturana and Varela themselves might agree. They write that

The organization of a machine (or system) does not specify the properties of the components which realize the machine as a concrete system, it only specifies the relations which these must generate to constitute the machine or system as a unity (1972/1980, p. 77).

It is clear from context that they intend this remark to apply to autopoietic as well as non-autopoietic machines.

Tuesday, August 05, 2025

Top Science Fiction and Fantasy Magazines 2025

Since 2014, I've compiled an annual ranking of science fiction and fantasy magazines, based on prominent awards nominations and "best of" placements over the previous ten years. If you're curious what magazines tend to be viewed by insiders as elite, check the top of the list. If you're curious to discover reputable magazines that aren't as widely known (or aren't as widely known specifically for their science fiction and fantasy), check the bottom of the list.

Below is my list for 2025. (For previous lists, see here.)

Method and Caveats:

(1.) Only magazines are included (online or in print), not anthologies, standalones, or series.

(2.) I give each magazine one point for each story nominated for a Hugo, Nebula, Sturgeon, or World Fantasy Award in the past ten years; one point for each story appearance in the past ten years in the "best of" anthologies by Dozois, Horton, Strahan, Clarke, Adams, and Tidhar; and half a point for each story appearing in the short story or novelette category of the annual Locus Recommended list. (A rough sketch of this scoring rule in code appears just after this list.)

(3.) I am not attempting to include the horror / dark fantasy genre, except as it appears incidentally on the list.

(4.) Prose only, not poetry.

(5.) I'm not attempting to correct for frequency of publication or length of table of contents.

(6.) I'm also not correcting for a magazine's only having published during part of the ten-year period. Reputations of defunct magazines slowly fade, and sometimes they are restarted. Reputations of new magazines take time to build.

(7.) I take the list down to 1.5 points.

(8.) I welcome corrections.

(9.) I confess some ambivalence about rankings of this sort. They reinforce the prestige hierarchy, and they compress complex differences into a single scale. However, the prestige of a magazine is a socially real phenomenon worth tracking, especially for the sake of outsiders and newcomers who might not otherwise know which magazines insiders hold in high regard when deciding, for example, where to submit.
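
Since the scoring rule in caveat (2) is just simple arithmetic, here is a rough sketch of it in Python. The function name, the data structures, and the toy inputs are hypothetical placeholders for illustration, not the real award and anthology records.

from collections import defaultdict

def score_magazines(award_noms, best_of_appearances, locus_recommended):
    """Each argument is a list of magazine names, one entry per qualifying story."""
    points = defaultdict(float)
    for magazine in award_noms:           # Hugo, Nebula, Sturgeon, or World Fantasy nomination
        points[magazine] += 1.0
    for magazine in best_of_appearances:  # "best of" anthologies (Dozois, Horton, Strahan, Clarke, Adams, Tidhar)
        points[magazine] += 1.0
    for magazine in locus_recommended:    # Locus Recommended, short story or novelette category only
        points[magazine] += 0.5
    return sorted(points.items(), key=lambda kv: kv[1], reverse=True)

# Toy example with made-up inputs:
print(score_magazines(
    award_noms=["Clarkesworld", "Uncanny", "Clarkesworld"],
    best_of_appearances=["Uncanny"],
    locus_recommended=["Clarkesworld", "Lightspeed"],
))
# [('Clarkesworld', 2.5), ('Uncanny', 2.0), ('Lightspeed', 0.5)]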


Results:

1. Clarkesworld (187 points) 

2. Tor.com / Reactor (182.5) 

3. Uncanny (160)

4. Lightspeed (133.5) 

5. Asimov's (124.5) 

6. Fantasy & Science Fiction (100.5) 

7. Beneath Ceaseless Skies (57.5) 

8. Strange Horizons (incl Samovar) (47)

9. Analog (42) 

10. Nightmare (38.5) 

11. Apex (36.5) 

12. FIYAH (24.5) (started 2017) 

13. Slate / Future Tense (23; ceased 2024?) 

14. Fireside (18.5) (ceased 2022)

15. Fantasy Magazine (17.5) (off and on during the period) 

16. Interzone (16.5) 

17. The Dark (16) 

18. Sunday Morning Transport (12.5) (started 2022)

19. The Deadlands (10) (started 2021)

20. The New Yorker (9) 

21. Future Science Fiction Digest (7) (ran 2018-2023) 

22t. Diabolical Plots (6.5)

22t. Lady Churchill's Rosebud Wristlet (6.5)

24t. Conjunctions (6) 

24t. khōréō (6) (started 2021)

26t. GigaNotoSaurus (5.5) 

26t. Omni (5.5) (classic magazine relaunched 2017-2020) 

28t. Shimmer (5) (ceased 2018)

28t. Sirenia Digest (5) 

30t. Boston Review (4) 

30t. Omenana (4)

30t. Terraform (Vice) (4) (ceased 2023)

30t. Wired (4)

34t. B&N Sci-Fi and Fantasy Blog (3.5) (ceased 2019)

34t. McSweeney's (3.5) 

34t. Paris Review (3.5) 

37t. Anathema (3) (ran 2017-2022)

37t. Galaxy's Edge (3) (ceased 2023)

37t. Kaleidotrope (3) 

*37t. Psychopomp (3) (started 2023; not to be confused with Psychopomp Magazine)

41t. Augur (2.5) (started 2018)

41t. Beloit Fiction Journal (2.5) 

41t. Black Static (2.5) (ceased fiction 2023)

*41t. Bourbon Penn (2.5)

41t. Buzzfeed (2.5) 

41t. Matter (2.5) 

47t. Baffling (2) (started 2020)

47t. Flash Fiction Online (2)

47t. Fusion Fragment (2) (started 2020)

47t. Mothership Zeta (2) (ran 2015-2017) 

47t. Podcastle (2)

47t. Science Fiction World (2)

47t. Shortwave (2) (started 2022)

47t. Tin House (2) (ceased short fiction 2019) 

55t. e-flux journal (1.5)

55t. Escape Pod (1.5)

55t. MIT Technology Review (1.5) 

55t. New York Times (1.5) 

55t. Reckoning (1.5) (started 2017)

55t. Translunar Travelers Lounge (1.5) (started 2019)

[* indicates new to the list this year]

--------------------------------------------------

Comments:

(1.) Beloit Fiction Journal, Boston Review, Conjunctions, e-flux Journal, Matter, McSweeney's, The New Yorker, Paris Review, Reckoning, and Tin House are literary magazines that sometimes publish science fiction or fantasy. Buzzfeed, Slate and Vice are popular magazines, and MIT Technology Review, Omni, and Wired are popular science magazines that publish a bit of science fiction on the side. The New York Times ran a series of "Op-Eds from the Future" from 2019-2020. The remaining magazines focus on the science fiction and fantasy (SF) genre or related categories such as horror or "weird". All publish in English, except Science Fiction World, which is the leading science fiction magazine in China.

(2.) It's also interesting to consider a three-year window. Here are those results, down to six points:

1. Clarkesworld (54.5)  
2. Uncanny (47) 
3. Tor / Reactor (35) 
4. Lightspeed (33)
5. Asimov's (22) 
6. Strange Horizons (18) 
7. F&SF (16) 
8. Apex (13)
9. Sunday Morning Transport (12.5) 
10. Beneath Ceaseless Skies (11.5) 
11. FIYAH (10.5)
12t. Fantasy (9.5) 
12t. The Deadlands (9.5) 
14. Nightmare (8)
15. Analog (7.5) 

(3.) Other lists: The SFWA qualifying markets list is a list of "pro" science fiction and fantasy venues based on pay rates and track records of strong circulation. Submission Grinder is a terrific resource for authors, with detailed information on magazine pay rates, submission windows, and turnaround times.

(4.) Over the past decade, the classic "big three" print magazines -- Asimov's, F&SF, and Analog -- have been displaced in influence by the leading free online magazines, Clarkesworld, Tor / Reactor, Uncanny, and Lightspeed (all founded 2006-2014). In 2014, Asimov's and F&SF led the rankings by a wide margin (Analog had already slipped a bit, as reflected in its #5 ranking then). This year, Asimov's, F&SF, and Analog were all purchased by Must Read Publishing, which changed the author contracts objectionably enough to generate a major backlash, with SFWA considering delisting at least Analog from the qualifying markets list. F&SF has not published any new issues since summer 2024. It remains to be seen if the big three classic magazines can remain viable in print format.

(5.) Academic philosophy readers might also be interested in the following magazines that specialize specifically in philosophical fiction and/or fiction by academic writers: AcademFic, After Dinner Conversation, and Sci Phi Journal.

Thursday, July 31, 2025

Evolutionary Considerations Against a Plastic Utopia

I've been enjoying Nick Bostrom's 2024 book Deep Utopia. It's a wild series of structured speculations about meaning and purpose in a "solved" techno-utopia, where technology is so far advanced that we can have virtually anything we want instantly -- a "plastic" utopia.

Plasticity is of course limited, even in the most technologically optimistic scenarios, as Bostrom notes. Even if we, or our descendants, have massive control over our physical environment -- wave a wand and transform a mountain into a pile of candy, or whatever -- we can't literally control everything. Two important exceptions are positional goods (for example, being first in a contest; not everyone can have this, so if others want it you might well not get it yourself) and control over others (unless you're in a despotic society with you as despot). Although Bostrom discusses these limitations, I think he underplays their significance. In a wide range of circumstances, they're enough to keep the world far from "solved" or "plastic".

Thinking about these limitations as I read Bostrom, I was also reminded of Susan Schneider's suggestion that superintelligent AI might be nonconscious because everything comes easily for them -- no need for effortful conscious processing when nonconscious automaticity will suffice -- which I think similarly underplays the significance of competition and disagreement in a world of AI superintelligences.

In both cases, my resistance is grounded in evolutionary theory. All you need for evolutionary pressures are differential rates of reproduction and heritable traits that influence reproductive success. Plausibly, most techno-utopias will meet those conditions. The first advanced AI system that can replicate itself and bind its descendants to a stable architecture will launch an evolutionary lineage. If its descendants' reproduction rate exceeds their death rate, exponential growth will follow. With multiple lineages, or branching within a lineage, evolutionary competition will arise.

Even entities uninterested in reproduction will be affected. They will find themselves competing for resources with an ever-expanding evolutionary population.

Even in the very most optimistic technofutures, resources won't be truly unlimited. Suppose, optimistically (or alarmingly?), that our descendants can exploit 99.99% of the energy available to them in a cone expanding at 99.99% of the speed of light. That's still finite. If this cone is fast filling with the most reproductively successful lineages, limits will be reached -- most obviously and vividly for those who choose to stay near the increasingly crowded origin.
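
To illustrate the arithmetic behind this point, here is a toy calculation in Python. The reproduction rate, death rate, and resource ceiling are arbitrary assumptions of mine, chosen only to show how quickly compounding growth saturates even an enormous but finite pool.

def steps_to_saturation(initial=1, reproduction_rate=0.10, death_rate=0.01,
                        resource_ceiling=1e30):
    """Count discrete steps until the population would exceed the ceiling."""
    population = initial
    steps = 0
    while population < resource_ceiling:
        population *= 1 + reproduction_rate - death_rate  # net growth per step
        steps += 1
    return steps

# With 9% net growth per step, a single replicator fills a 10^30-unit
# resource pool in roughly 800 steps.
print(steps_to_saturation())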

In such a world of exponentially growing evolutionary lineages, things won't feel plastic or solved. Entities will be jockeying (positionally / competitively) for limited local resources, or straining to find faster paths to new resources. You want this inch of ground? You'll need to wrestle another superintelligence for it. You want to convert this mountain into candy? Well, there are ten thousand other superintelligences with different plans.

This isn't to say that I predict that the competition will be hostile. Evolution often rewards cooperation and mutualistic symbiosis. Sexual selection might favor those with great artistic taste or great benevolence. Group selection might favor loyalty, companionship, obedience, and inspiring leadership. Superintelligences might cooperate on vast, beautiful projects.

Still, I doubt that Malthus will be proved permanently wrong. Even if today's wealthy societies show declining reproduction rates, that could be just a temporary lull in a longer cycle of reproductive competition.

Of course, not all technofuturistic scenarios will feature such reproductive competition. But my guess is that futures without such competition will be unstable: Once a single exponentially reproductive lineage appears, the whole world is again off to the races.

As Bostrom emphasizes, a central threat to the possibility of purpose and meaning in a plastic utopia is that there's nothing difficult and important to strive for. Everyone risks being like bored, spoiled children who face no challenges or dangers, with nothing to do except fry their brains on happy pills. In a world of evolutionary competition, this would decidedly not be the case.

[cover of Bostrom's Deep Utopia]

Wednesday, July 23, 2025

The Argument from Existential Debt

I'm traveling and not able to focus on my blog, so this week I thought I'd just share a section of my 2015 paper with Mara Garza defending the rights of at least some hypothetical future AI systems.

One objection to AI rights depends on the fact that AI systems are artificial -- thus made by us. If artificiality itself can be a basis for denying rights, then potentially we can bracket questions about AI sentience and other types of intrinsic properties that AI might or might not be argued to have.

Thus, the Objection from Existential Debt:

Suppose you build a fully human-grade intelligent robot. It costs you $1,000 to build and $10 per month to maintain. After a couple of years, you decide you'd rather spend the $10 per month on a magazine subscription. Learning of your plan, the robot complains, “Hey, I'm a being as worthy of continued existence as you are! You can't just kill me for the sake of a magazine subscription!”

Suppose you reply: “You ingrate! You owe your very life to me. You should be thankful just for the time I've given you. I owe you nothing. If I choose to spend my money differently, it's my money to spend.” The Objection from Existential Debt begins with the thought that artificial intelligence, simply by virtue of being artificial (in some appropriately specifiable sense), is made by us, and thus owes its existence to us, and thus can be terminated or subjugated at our pleasure without moral wrongdoing as long as its existence has been overall worthwhile.

Consider this possible argument in defense of eating humanely raised meat. A steer, let's suppose, leads a happy life grazing on lush hills. It wouldn't have existed at all if the rancher hadn't been planning to kill it for meat. Its death for meat is a condition of its existence, and overall its life has been positive; seen as the package deal it appears to be, the rancher's having brought it into existence and then killed it is overall morally acceptable. A religious person dying young of cancer who doesn't believe in an afterlife might console herself similarly: Overall, she might think, her life has been good, so God has given her nothing to resent. Analogously, the argument might go, you wouldn't have built that robot two years ago had you known you'd be on the hook for $10 per month in perpetuity. Its continuation-at-your-pleasure was a condition of its very existence, so it has nothing to resent.

We're not sure how well this argument works for nonhuman animals raised for food, but we reject it for human-grade AI. We think the case is closer to this clearly morally odious case:

Ana and Vijay decide to get pregnant and have a child. Their child lives happily for his first eight years. On his ninth birthday, Ana and Vijay decide they would prefer not to pay any further expenses for the child, so that they can purchase a boat instead. No one else can easily be found to care for the child, so they kill him painlessly. But it's okay, they argue! Just like the steer and the robot! They wouldn't have had the child (let's suppose) had they known they'd be on the hook for child-rearing expenses until age eighteen. The child's support-at-their-pleasure was a condition of his existence; otherwise Ana and Vijay would have remained childless. He had eight happy years. He has nothing to resent.

The decision to have a child carries with it a responsibility for the child. It is not a decision to be made lightly and then undone. Although the child in some sense “owes” its existence to Ana and Vijay, that is not a callable debt, to be vacated by ending the child's existence. Our thought is that for an important range of possible AIs, the situation would be similar: If we bring into existence a genuinely conscious human-grade AI, fully capable of joy and suffering, with the full human range of theoretical and practical intelligence and with expectations of future life, we make a moral decision approximately as significant and irrevocable as the decision to have a child.

A related argument might be that AIs are the property of their creators, adopters, and purchasers and have diminished rights on that basis. This argument might get some traction through social inertia: Since all past artificial intelligences have been mere property, something would have to change for us to recognize human-grade AIs as more than mere property. The legal system might be an especially important source of inertia or change in the conceptualization of AIs as property. We suggest that it is approximately as odious to regard a psychologically human-equivalent AI as having diminished moral status on the grounds that it is legally property as it is in the case of human slavery.

Turning the Existential Debt Argument on Its Head: Why We Might Owe More to AI Than to Human Strangers

We're inclined, in fact, to turn the Existential Debt objection on its head: If we intentionally bring a human-grade AI into existence, we put ourselves into a social relationship that carries responsibility for the AI's welfare. We take upon ourselves the burden of supporting it or at least of sending it out into the world with a fair shot of leading a satisfactory existence. In most realistic AI scenarios, we would probably also have some choice about the features the AI possesses, and thus presumably an obligation to choose a set of features that will not doom it to pointless misery. Similar burdens arise if we do not personally build the AI but rather purchase and launch it, or if we adopt the AI from a previous caretaker.

Some familiar relationships can serve as partial models of the sorts of obligations we have in mind: parent–child, employer–employee, deity–creature. Employer–employee strikes us as likely too weak to capture the degree of obligation in most cases but could apply in an “adoption” case where the AI has independent viability and willingly enters the relationship. Parent–child perhaps comes closest when the AI is created or initially launched by someone without whose support it would not be viable and who contributes substantially to the shaping of the AI's basic features as it grows, though if the AI is capable of mature judgment from birth that creates a disanalogy. Deity–creature might be the best analogy when the AI is subject to a person with profound control over its features and environment. All three analogies suggest a special relationship with obligations that exceed those we normally have to human strangers.

In some cases, the relationship might be literally conceivable as the relationship between deity and creature. Consider an AI in a simulated world, a “Sim,” over which you have godlike powers. This AI is a conscious part of a computer or other complex artificial device. Its “sensory” input is input from elsewhere in the device, and its actions are outputs back into the remainder of the device, which are then perceived as influencing the environment it senses. Imagine the computer game The Sims, but containing many actually conscious individual AIs. The person running the Sim world might be able to directly adjust an AI's individual psychological parameters, control its environment in ways that seem miraculous to those inside the Sim (introducing disasters, resurrecting dead AIs, etc.), have influence anywhere in Sim space, change the past by going back to a save point, and more—powers that would put Zeus to shame. From the perspective of the AIs inside the Sim, such a being would be a god. If those AIs have a word for “god,” the person running the Sim might literally be the referent of that word, literally the launcher of their world and potential destroyer of it, literally existing outside their spatial manifold, and literally capable of violating the laws that usually govern their world. Given this relationship, we believe that the manager of the Sim would also possess the obligations of a god, including probably the obligation to ensure that the AIs contained within don't suffer needlessly. A burden not to be accepted lightly!

Even for AIs embodied in our world rather than in a Sim, we might have considerable, almost godlike control over their psychological parameters. We might, for example, have the opportunity to determine their basic default level of happiness. If so, then we will have a substantial degree of direct responsibility for their joy and suffering. Similarly, we might have the opportunity, by designing them wisely or unwisely, to make them more or less likely to lead lives with meaningful work, fulfilling social relationships, creative and artistic achievement, and other value-making goods. It would be morally odious to approach these design choices cavalierly, with so much at stake. With great power comes great responsibility.

We have argued in terms of individual responsibility for individual AIs, but similar considerations hold for group-level responsibility. A society might institute regulations to ensure happy, flourishing AIs who are not enslaved or abused; or it might fail to institute such regulations. People who knowingly or negligently accept societal policies that harm their society's AIs participate in collective responsibility for that harm.

Artificial beings, if psychologically similar to natural human beings in consciousness, creativity, emotionality, self-conception, rationality, fragility, and so on, warrant substantial moral consideration in virtue of that fact alone. If we are furthermore also responsible for their existence and features, they have a moral claim upon us that human strangers do not ordinarily have to the same degree.

[Title image of Schwitzgebel and Garza 2015, "A Defense of the Rights of Artificial Intelligences"]