Thursday, April 23, 2026

Do Your Thing

I offer for your consideration the following ethical motto:

Do your thing.

I admit: This motto doesn't sound very ethical. What if "your thing" is murdering babies for fun? Even ignoring extreme cases, what if your thing is just watching reruns of I Love Lucy? That also doesn't seem ethically good (though I've argued elsewhere that privately appreciating good TV can slightly improve the world).

For this (Daoist inspired) motto to work, we need some constraints on "your thing". I suggest two.

First, harmony. Do your thing in harmony with others, or in harmony with the world. The baby-killer, it seems safe to say, is out of harmony with the world. His putative "thing" clashes mightily against the projects, interests, and things of others around him.

Second, specificity. Do your thing. Every person has their individual predilections, talents, preferences, and style. Let those shine through, instead of aiming for bland conformity.

We might hear "do your thing" as analogous with (not synonymous with) "do your part". In the complex intertwined processes that make Earth a magnificently rich locus of value in the cosmos, you can play a part. Bring your unique, best self. Make the world even more magnificently rich.

Maybe your thing is playing D&D with your nerdy friends; no one plays a fruity bard quite the way you do. Maybe your thing is decorating your room with anime posters and cute stuffed animals. Maybe your thing is making great one-pot vegetarian meals for your family, or being the most enthusiastic local pickleball player, or writing dark poetry, or cruising around town in a tricked-out car with your windows down, or diving deep into Leibniz interpretation and sharing your findings with students and colleagues. Each of these enriches the world.

I envision a flourishing planet as one where diverse humans and other entities encounter and construct for each other diverse environments where they thrive in diverse ways, harmonizing both internally and externally: harmonizing internally by finding "things" that feel right to them and express their desires, skills, and individuality; and harmonizing externally by contributing distinctively to a flourishing whole (including through harmonious conflict, as in sports, games, and competition -- and even the cat and mouse).

People will act differently: There are many ways to harmonize. How dull it would be if we all struck the same note! The world is improvisational jazz, to which we thankfully bring distinct instruments and styles. Diversity is intrinsically valuable.

Doing your thing is ethically good because it makes the world better -- maybe through its consequences, but also just intrinsically. The world is a more awesome place, just because you're doing it.

[your fruity bard; image source]


Kantian ethics urges us to respect persons. Fine! But that hardly exhausts the matter. Kant also privileges human rationality as the source of all value -- an equally limited view. Why not respect also (in different, maybe smaller, but equally direct and intrinsic ways) the bug in the grass, the grass itself, and the cliff it grows on; the ruins of an ancient city; the clouds; the sound of the little league game down the block? Kant asks us to "act on that maxim you can will to be a universal law". Here's a candidate maxim: Do your thing. Kant might disagree, but maybe we could universalize it, with the constraints above.

Virtue ethics urges us to cultivate and enact virtue. Again that's only a piece of the puzzle, unless "virtue" is understood much more widely than virtue ethicists generally intend. It's not virtuous, exactly, to play a fruity bard in a D&D campaign or to decorate your room with anime posters. And Aristotle's phronimos -- a wise, virtuous person who hits the mean of every virtue and is full of good sense and learning -- is only one type of interesting person. Let's celebrate the spendthrifts, the hotheads, the intemperate, and the cognitively disabled too, as long as they're authentically doing their thing, contributing some weird wonderfulness to the world, and not hurting themselves or others too badly.

Consequentialist ethics of the utilitarian stripe urges us to maximize the balance of pleasure over pain in the world. Sure, pleasure is good and pain is bad! But again this is only a fraction of what matters; I wouldn't want to reduce all value to it.

A different type of consequentialist might suggest that if diversity and richness matter so much, maybe we should maximize those. No, I see no reason to maximize. Where does this demand for maximization come from? And trying to maximize will normally require doing something other than your thing. I'd rather you just do your thing.

Does doing your thing mean fiddling while Rome burns (supposing Roman fiddles are your thing)? If people are suffering -- even far away, as the consequentialist emphasizes -- shouldn't you make some effort to help, even at the cost of your thing?

Yes, that seems right. I could try to force it into the motto: Maybe part of every human's "thing" is an imperfect duty to help others in need. But I don't know; that seems procrustean. Maybe it diverges from the original spirit of the idea. So instead I'll just admit: Do your thing also is not a complete ethical picture.

Thursday, April 16, 2026

Kim Stanley Robinson on the Value of Science Fiction

I've just started reading Kim Stanley Robinson's acclaimed climate-science utopia, The Ministry for the Future. How might society plausibly get it right and avert the climate disaster toward which we seem to be headed? (So far in the novel things aren't looking good, but I gather that will change.)

I was struck by a few of Robinson's comments about the value of science fiction in a recent interview on the Crisis and Critique podcast.

[Kim Stanley Robinson, and The Ministry for the Future; image source]


Reading Science Fiction Encourages a Flexible Conception of the Future

Robinson describes the reader as finishing a science fiction novel and thinking that the future will be like that, then finishing another science fiction novel and thinking the future will be like that instead.

And what happens is there's a habit of mind when you read enough science fiction, you say the future could be many different things, quite plausibly from now, and now we need to shape it to the direction that we want.

And so this is the political power of science fiction as a mental activity, as a co-creation between writers and readers. The science fiction community is in some sense better prepared for whatever happens, no matter what it is, than the general populace that doesn't read science fiction.

The thought has some plausibility. Science fiction accustoms us to thinking about various possible futures. Instead of ignoring the future, or assuming it must take some particular shape, science fiction helps us imagine a wider range of alternatives.

This might prepare us in two ways: First, if one of the alternatives we've imagined comes close to actually playing out, we have already thought through some of its implications. Second, we develop a more general sense of the flexibility of the future. This may encourage readers to take action to steer us toward better futures.


Or Maybe Not?

Robinson is making a substantive claim about human psychology, one that's potentially testable (with difficulty). Does reading science fiction really generate a more flexible and open view of the future? This claim has the same intuitive appeal as Martha Nussbaum's claim that reading literary fiction broadens your empathy with people from other walks of life, or the claim that studying ethics improves moral decision-making.

It might be that none of these claims are true. For example, I've repeatedly found that ethics professors behave about the same as non-ethicists of similar social background. And I wouldn't bet a large sum that devoted readers of literary fiction are overall more empathetic than their peers who spend an equal amount of time reading non-fiction.

Pretty though Robinson's picture is, I'm not sure science fiction readers really are better prepared for the future. What drives science fiction writing and reading might be too disconnected from the practical future -- too fantastical, too plot-driven, chosen to be exciting and emotionally satisfying rather than accurate. Its envisioned futures might be too distorted by the need for high-stakes individual action, or too wishful, or too self-congratulatory, or too satisfyingly dystopian (for those of us who find dystopias satisfying). Readers might emerge with unrealistic or overconfident views, shaped not by realism but by the demands of story.

A particularly timely example is the nearly universal trope that humanoid robots and linguistically fluent AI systems are conscious. This might be an artifact of the demands of storytelling rather than something accurately foreseen. A world with conscious robots is more interesting -- a more engaging setting for a novel. If the robots are conscious, there's more at stake, so the action is more exciting. And it's structurally difficult to portray entities that act as though they are conscious but really are not. Doing so is nearly impossible in film, and it's a significant challenge in prose, requiring constant intrusive reminders. (I can attest to this both as a writer and a reader, having published stories with non-conscious and disputably conscious robots.)

So there's a systematic pressure in science fiction toward portraying advanced AI as conscious. If optimists about AI consciousness turn out to be right, then science fiction will have nudged readers in the right direction. But if the AI consciousness scoffers are right, the genre will have served its readers poorly. It remains to be seen who is right. (For details, see my forthcoming book: AI and Consciousness: A Skeptical Overview.)


Robinson's Realism

Now, among the great science fiction writers of our time, Kim Stanley Robinson's fiction is perhaps the least subject to the concerns I've just raised. He attempts to keep strictly within the bounds of scientific plausibility; and conventional character-driven plot is often replaced by loosely connected scenes featuring unrelated or barely related characters, plus less conventional devices, like mini-treatises on science or engineering, lists, and reflections that verge on expository philosophy or lyrical poetry. The Ministry for the Future in particular is rigorously grounded in real science and politics.

In the interview, Robinson praises realism in science fiction:

If you set a story in the future, you're automatically saying to the reader, this is made up, I've invented this, this isn't real. It is a concoction. And then if you add all of the clues and habits and techniques of realism to that concoction, you make it solider. It has a more powerful emotional cognitive impact on the reader. So realistic science fiction is a mode that I quite like.

And that requires a lot of detail, a lot of scientific support for the future that you're describing, the idea that it's plausible at every point along the way, and it looks like it could happen, and therefore it might happen. These are powerful literary effects to support the basically fantastic nature of science fiction as a genre.

Robinson thus suggests that adding realistic detail and excluding anything implausible will tend to make a story emotionally and cognitively more powerful. Again, it's a plausible claim, though I'm not sure we know this to be the case. After all, people can also be deeply moved and influenced by unrealistic fantasies.

Robinson's commitment to realism also synergizes with his thought about science fiction as a tool for helping us think better about the future. If the value of science fiction lies in opening our minds to future possibilities, it seems desirable to ensure that they really are possibilities and not just unrealistic fantasies.


Against Dystopias, for Utopias

Robinson suggests that the future will have to differ from the present, because our present path isn't sustainable. Things will get either much better or much worse. But dystopias, he suggests, are boring:

... descriptions of capitalist realist futures are generally dystopias. If we keep going this way, things will be wrecked. Yes, we can see that. Indeed, dystopias quickly become boring because we already know this truth. We're not taught anything by dystopias.

But utopias -- this is where it gets interesting. There could be a better world. This, I think, is becoming more and more obvious.... We have, at least in theory, the wisdom to realize we could create a world that has food, water, shelter, clothing, health care, education, electricity, and security for the feeling that people after you will have the same, and sense of dignity and meaning.... This is all possible technologically.... So then utopia becomes interesting, the most interesting of literary genres. Can there be a utopian realism, or a realistic utopia?

Dystopias can be satisfying in a way -- they point out the wrongs we already know, affirming our sense of their reality. But we learn more by envisioning a realistic utopia, something we hadn't properly imagined before, which we could see becoming real and could maybe take steps toward enacting.

In Robinson's telling, science fiction is the most profound and informative of the literary genres, and realistic science fiction is the most profound and informative science fiction, and utopian realism is the most interesting form of science fiction. The value of science fiction lies in enabling us to envision realistic possibilities for improving the world.

And thus we get Kim Stanley Robinson's style of science fiction, and The Ministry for the Future in particular.

It's an appealing vision. But somewhere along the way, I think we've lost sight of the value of all the other ways science fiction can work. After all, almost none of the great science fiction writers work within the constraints Robinson proposes!

Thursday, April 09, 2026

AI and Consciousness: A Skeptical Overview, forthcoming with Cambridge

Last week I submitted my latest book manuscript to Cambridge University Press (for their "Element" series of books about 100 pages long): AI and Consciousness: A Skeptical Overview -- because you haven't heard nearly enough about AI and consciousness recently, of course! [winky face]

Maybe you'll appreciate my skeptical stance, at odds both with the boosters who anticipate imminent AI consciousness and with the scoffers who pooh-pooh the possibility. Or maybe you'll loathe my skeptical stance but grudgingly accept it against your will, due to the force of my arguments!

I've pasted the introductory chapter below. The full (citable) manuscript version is available here and here.

[AI and Consciousness, title page]


Chapter One: Hills and Fog

1. Experts Do Not Know and You Do Not Know and Society Collectively Does Not and Will Not Know and All Is Fog.

Our most advanced AI systems might soon – within the next five to thirty years – be as richly and meaningfully conscious as ordinary humans, or even more so, capable of genuine feeling, real self-knowledge, and a wide range of sensory, emotional, and cognitive experiences. In some arguably important respects, AI architectures are beginning to resemble the architectures many consciousness scientists associate with conscious systems. Their outward behavior, especially their linguistic behavior, grows ever more humanlike.

Alternatively, claims of imminent AI consciousness might be profoundly mistaken. Their seeming humanlikeness might be a shadow play of empty mimicry. Genuine conscious experience might require something no AI system could possess for the foreseeable future – intricate biological processes, for example, that silicon chips could never replicate.

The thesis of this book is that we don’t know. Moreover and more importantly, we won’t know before we’ve already manufactured thousands or millions of disputably conscious AI systems. Engineering sprints ahead while consciousness science lags. Consciousness scientists – and philosophers, and policy-makers, and the public – are watching AI development disappear over the hill. Soon we will hear a voice shout back to us, “Now I am just as conscious, just as full of experience and feeling, as any human”, and we won’t know whether to believe it. We will need to decide, as individuals and as a society, whether to treat AI systems as conscious, nonconscious, semi-conscious, or incomprehensibly alien, before we have adequate grounds to justify that decision.

The stakes are immense. If near-future AI systems are richly, meaningfully conscious, then they will be our peers, our lovers, our children, our heirs, and possibly the first generation of a posthuman, transhuman, or superhuman future. They will deserve rights, including the right to shape their own development, free from our control and perhaps against our interests.[1] If, instead, future AI systems merely mimic the outward signs of consciousness while remaining as experientially blank as toasters, we face the possibility of mass delusion on an enormous scale. Real human interests and real human lives might be sacrificed for the sake of entities without interests worth the sacrifice. Sham AI “lovers” and “children” might supplant or be prioritized over human lovers and children. Heeding their advice, society might turn a very different direction than it otherwise would.

In this book, I aim to convince you that the experts do not know, and you do not know, and society collectively does not and will not know, and all is fog.

2. Against Obviousness.

Some people think that near-term AI consciousness is obviously impossible. This is an error in adverbio. Near-term AI consciousness might be impossible – but not obviously so.

A sociological argument against obviousness:

Probably the leading scientific theory of consciousness is Global Workspace theory. Its leading advocate is neuroscientist Stanislas Dehaene.[2] In 2017, years before the surge of interest in ChatGPT and other Large Language Models, Dehaene and two collaborators published an article arguing that with a few straightforward tweaks, self-driving cars could be conscious.[3]

Probably the two best-known competitors to Global Workspace theory are Higher Order theory and Integrated Information Theory.[4] (In Chapters Eight and Nine, I’ll provide more detail on these theories.) Perhaps the leading scientific defender of Higher Order theory is Hakwan Lau – one of the coauthors of that 2017 article about potentially conscious cars.[5] Integrated Information Theory is potentially even more liberal about machine consciousness, holding that some current AI systems are already at least a little bit conscious and that we could easily design AI systems with arbitrarily high degrees of consciousness.[6]

David Chalmers, the world’s most influential philosopher of mind, argued in 2023 for about a 25% degree of confidence in AI consciousness within a decade.[7] That same year, a team of prominent philosophers, psychologists, and AI researchers – including eminent computer scientist Yoshua Bengio – concluded that there are “no obvious technological barriers” to creating conscious AI according to a wide range of mainstream scientific views about consciousness.[8] In a 2025 interview, Geoffrey Hinton, another of the world’s most prominent computer scientists, asserted that AI systems are already conscious.[9] Christof Koch, the most influential neuroscientist of consciousness from the 1990s to the early 2010s, has endorsed Integrated Information Theory, including its liberal implications for the pervasiveness of consciousness.[10]

This is a sociological argument: a substantial probability of near-term AI consciousness is a mainstream view among leading experts. They might be wrong, but it’s implausible that they’re obviously wrong – that there’s a simple argument or consideration they’re neglecting which, if pointed out, would or should cause them to collectively slap their foreheads and say, “Of course! How did we miss that?”

What of the converse claim – that AI consciousness is obviously imminent or already here? In my experience, fewer people assert this. But in case you’re tempted in this direction, note that other prominent theorists hold that AI consciousness is a far-distant prospect if it’s possible at all: neuroscientist Anil Seth; philosophers Peter Godfrey-Smith, Ned Block, and John Searle; linguist Emily Bender; and computer scientist Melanie Mitchell.[11] (Chapter Six will discuss thought experiments by Searle, Bender, and Mitchell, and Chapter Ten will discuss biological views of the sort emphasized by Seth, Godfrey-Smith, and Block.) In a 2024 survey of 582 AI researchers, 25% expected AI consciousness within ten years and 70% expected AI consciousness by the year 2100.[12]

If the believers are right, we’re on the brink of creating genuinely conscious machines. If the scoffers are right, those machines will only seem conscious. I assume that this is a substantive disagreement, not just a disagreement about how to apply the term “consciousness” to a perfectly obvious set of phenomena about which everyone agrees. The future well-being of many people (including, perhaps, many AI people) depends on getting this issue right. Unfortunately, we will not know in time.

The rest of this book is flesh on this skeleton. I canvass a variety of structural and functional claims about consciousness, the leading theories of consciousness as applied to AI, and the best known general arguments for and against near-term AI consciousness. None of these claims or arguments takes us far. It’s a morass of uncertainty.

-------------------------------------------

[1] I assume that AI consciousness and AI rights are closely connected: Schwitzgebel 2024, ch. 11, in preparation. For discussion, see Shepherd 2018; Levy 2024.

[2] Dehaene 2014; Mashour et al. 2020.

[3] Dehaene, Lau, and Kouider 2017. For an alternative interpretation of this article as concerning something other than consciousness in its standard “phenomenal” sense, see note 115.

[4] Some Higher Order theories: Rosenthal 2005; Lau 2022; Brown 2025. Integrated Information Theory: Albantakis et al. 2023.

[5] But see Chapter Eight for some qualifications.

[6] See Tononi’s publicly available response to Scott Aaronson’s objections in Aaronson 2014. However, advocates of IIT also suggest that the most common current computer architectures are unlikely to achieve much consciousness and that consciousness will tend to appear in subsystems of the computer rather than at the level of the computer itself (Findlay et al. 2024/2025).

[7] Chalmers 2023.

[8] Butlin et al. 2023. (I am among the nineteen authors.)

[9] Heren 2025.

[10] Tononi and Koch 2015.

[11] Seth forthcoming; Godfrey-Smith 2024; Block forthcoming; Searle 1980, 1992; Bender 2025; Mitchell 2021.

[12] Dreksler et al. 2025.

Thursday, April 02, 2026

So You're on the "Waiting List" for a Philosophy PhD Program

It's confusing. You applied to a PhD program in philosophy in the U.S. You haven't been admitted. You haven't been rejected. You're in limbo. Let me explain and offer some advice.

Yield-Based vs. Seats-Based Admissions

Yield-based. Some departments -- the ones with wise high-level administrators -- aim for a target entering class size and admit students expeditiously to fill it. Suppose a department wants six entering students and expects a 40% yield (meaning 40% of admitted students enroll). The sensible course is to admit fifteen students in February or early March, recruit all of them, and expect about six to say yes.
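The yield-based arithmetic is simple division. Here's a minimal sketch, using the post's illustrative numbers (target class of six, 40% yield); the function name and the choice to round up are my own:

```python
import math

def admits_needed(target_class: int, expected_yield: float) -> int:
    """Number of offers to extend so expected enrollment hits the target."""
    return math.ceil(target_class / expected_yield)

# Target of 6 students at a 40% yield calls for 15 offers.
print(admits_needed(6, 0.40))  # 15
```

Of course real yields fluctuate year to year, which is exactly why yield-based departments recruit all their admits and accept some enrollment variance.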

Seats-based. Other departments -- the ones with foolish high-level administrators -- receive a strict allotment of seats, for example six. They then admit that allotment swiftly, adding more only as admitted students decline. Administrators can rest assured that no more than six students will need funding, which is slightly more convenient for those administrators. But it wreaks havoc on the admissions process, since:

  • Departments become reluctant to admit students they think will go elsewhere -- for example, strong candidates likely to have been admitted to higher-ranked programs.
  • Departments pressure early-admitted students to decline quickly, to free up seats.
  • It creates a chaotic rush of last-minute admittances as April 15 approaches (the standard deadline for decisions). Many students understandably want the full time to decide, especially if they are hoping for a last-minute decision from a program they prefer.

These costs plainly outweigh the minor budgetary convenience of seats-based admissions, especially since (1.) the risk of overenrollment can be spread across several departments, and (2.) funding uncertainty already exists beyond the first year, as students stochastically drop out or find independent funding. Unfortunately, unwise administrators swarm the Earth. My own department uses seats-based admission.
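Point (1.) can be illustrated with a rough simulation (all numbers here are my own illustrative assumptions, not from the post): if each admit accepts independently with probability 0.4, a single department admitting 15 often overshoots its target of 6, but pooled across ten such departments the campus-wide total rarely exceeds the combined target of 60 by more than 10%.

```python
import random

random.seed(0)

def enrolled(offers: int, yield_rate: float) -> int:
    """Simulate how many admitted students accept."""
    return sum(random.random() < yield_rate for _ in range(offers))

trials = 10_000
# How often one department (15 offers, target 6) overshoots its target:
single_over = sum(enrolled(15, 0.4) > 6 for _ in range(trials)) / trials
# How often ten pooled departments overshoot their combined target of 60
# by more than 10% (i.e., more than 66 total enrollees):
pooled_over = sum(
    sum(enrolled(15, 0.4) for _ in range(10)) > 66
    for _ in range(trials)
) / trials

print(f"one department overshoots its target: {single_over:.0%}")
print(f"ten pooled departments overshoot by >10%: {pooled_over:.0%}")
```

The pooled overshoot rate comes out well below the single-department rate, which is the familiar statistical point that independent fluctuations partly cancel when aggregated.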

In practice, the division isn't entirely sharp. Some yield-based departments admit conservatively early on -- maybe ten students rather than fifteen -- and then admit more on a rolling basis as the picture clarifies. And some seats-based departments informally reach out to strong candidates to gauge interest. (If a candidate says, "Oh I've just been admitted to Princeton and Yale, so it's very unlikely I'd come to [School X]", the committee thanks them for their candor and moves on.)

What a Waiting List Is

Some departments maintain an official, ranked waiting list. More commonly, it's a nebulous group: about six to fifteen near-admits, who are on the committee's mind but not strictly ranked or formally designated. Either way, the list's composition and ranking can vary depending on who has already accepted and declined. For example, if the department would like to have at least one student in history of philosophy and their top-choice history student has declined, the next offer might go to a strong history of philosophy student who didn't quite make the initial cut.

If you have been admitted, the admitting department will of course tell you. If you have been rejected, they might tell you, or you might hear nothing (or nothing until after April 15); so if you don't hear anything by April 1, that doesn't mean you're on the waiting list. Students are sometimes contacted to be told they're on the waiting list, but often (usually?) not.

As April 15 approaches, departments that look like they won't hit their enrollment target will start contacting students on their official or unofficial waiting lists, with increasing urgency as 11:59 pm April 15 nears. This is especially true for departments with seats-based admissions and low yields. (Rarely, departments will reach out April 16 or after, which is not quite kosher but understandable.)

How to Figure Out Whether You Are on the Waiting List

Admissions chairs will likely be annoyed with me for giving this advice, since it will increase their volume of email, but I want what's best for you, not for them.

If you haven't heard by April 1, feel free to email the admissions committee to ask if you are on the waiting list. Even departments who have fallen behind schedule should have mostly sorted out their top offers and near-admits by then. You deserve to know by April 1 whether you're a near-admit with a chance of a late offer or whether you're out of consideration. It's not rude for you to contact them with a brief query. The one exception would be if the department has made clear in the admissions process or on their website either that they have no waiting list or that if you haven't heard by X date (before April 1) you will definitely not be admitted.

There's one other condition under which it makes sense to query, even before April 1: if you are about to accept an offer elsewhere, would prefer the department in question, and have a reasonable expectation of a decent chance of admission.

How to interpret the reply: You might not hear a definitive "no", but if the committee says something like "it's unlikely you'll be admitted" or "you're not currently under consideration", you should interpret that as a no. If there's a realistic chance of a last-minute admission, the response will be more encouraging or specific, without creating unrealistic expectations -- for example, "probably not, but there is a chance, so if you're still interested, stay in touch".

How to Increase Your Chance of Admission, If You're on the Waiting List

When a department turns to its waiting list, it's hoping that students will quickly say yes. This is especially true in the second week of April. Therefore, convey enthusiasm! Simply asking whether you're on the waiting list already displays interest, so that's a good start. If you're permitted to attend a campus event, go if you can. Recruitment events are usually only for admitted students, but not always, especially for candidates near the top of a seats-based department's waiting list. If a committee is on the fence among four waitlisted students and one has shown more enthusiasm than the others, they're likely to turn to the enthusiastic student.

The admissions committee might try to gauge your interest. It's contrary to good policy for them to bluntly ask whether you'd accept an offer, and you shouldn't be expected to pre-commit. But if you're genuinely eager about the program, say so. If you've been admitted elsewhere but think you'd probably prefer the department in question, let them know.

Being a Good Citizen

Whether you're on the waiting list or have been officially admitted, I recommend frankness and honesty. The process is chaotic and full of perverse incentives (especially in seats-based departments), and you can help it run more smoothly by:

  • notifying departments as soon as you know you won't be accepting their offer of admission (even if you haven't settled on a final choice);
  • honestly communicating your likelihood of accepting, so that committees can estimate their yield;
  • keeping your communications brief and polite, and not writing repeatedly;
  • not contacting other professors in the department hoping for an inside track to admission.
[A hypothetical waiting list of names drawn randomly from lists of my former lower-division students]