Friday, February 13, 2026

The Intrinsic Value of Diversity

Moral diversity, Olivia Bailey and Thi Nguyen say (in a draft paper shared with Myisha Cherry's Emotion and Society Lab), is valuable. It's good that people have different ethical personalities, opinions, and concerns. (Within reason: Nazis not welcome.)

Why? Their reasons are instrumental. Society benefits when people care intensely about different things. This allows us collectively to achieve a wide range of goals -- curing cancer, helping the homeless, protesting unjust government. Society also benefits if some people explore the ethical possibility space, developing unusual moral visions, most of which will be mistaken but a few of which might eventually be recognized as genuine moral advances (think of the first slavery abolitionists). And individuals benefit from the liberty to adopt moral priorities that fit their skills and temperaments: Some people thrive in battle, others in caregiving, others in solitary work.

But is moral diversity also intrinsically valuable -- that is, valuable for its own sake, independent of these good consequences? I think so. I think so because diversity in general is intrinsically valuable, and there's no good reason to treat moral diversity as an exception.

How does one argumentatively establish the intrinsic value of diversity? The only way I know is to reveal, through thought experiment, that you already implicitly accept it -- and then to ward off objections.

Bailey and Nguyen briefly cite Alexander Nehamas on diversity of aesthetic opinion. Nehamas writes:

I think a world where everyone liked, or loved, the same things would be a desperate, desolate world -- as devoid of pleasure and interest as the most frightful dystopia of those who believe (quite wrongly) that the popular media are inevitably producing a depressingly, disconsolately uniform world culture. And although I say this with serious discomfort, a world in which everyone liked Shakespeare, or Titian, or Bach for the same reasons -- if such a world were possible -- appears to me no better than a world where everyone tuned in to Baywatch or listened to the worst pop music at the same time (Nehamas 2002, p. 58-59).

Why is aesthetic diversity valuable, according to Nehamas? Because style and taste require originality and are bound up with what is distinctive about your life, interests, and sensibility. Without distinctiveness, style and taste collapse -- an aesthetic disaster.

Should we say, then, that diversity, including moral diversity, is valuable aesthetically? That its value lies primarily in its beauty, in its capacity to inspire awe, or some other aesthetic feature? Indeed, diversity is beautiful and awesome (imagine the world without it!) but I don't think this exhausts its intrinsic value. Aesthetic value requires a spectator, at least a notional one, whose appreciation is the point. The intrinsic value of diversity is not, or not primarily, mediated through the hypothetical reaction of an aesthetic spectator.

My favorite approach to thinking about intrinsic value is the Distant Planet Thought Experiment. Imagine a planet on the far side of the galaxy, blocked from view by the galactic core, a planet we'll never see or interact with. What would we hope for on this planet, for its own sake, independent of any potential value for us?

Would you hope that it's a sterile rock, completely devoid of life? I think not. If you do think a lifeless rock would be best, I have no argument against you. For me this is a starting place, a bedrock judgment, which I expect most readers will share.

Suppose, then, that you agree a planet with life would be intrinsically better than one without. Would you hope that its life consists entirely of microbes? Or would you hope that it teems with diverse life: reefs and rainforests, beetles and bats, squid and bees and ferns and foxes -- or rather, not to duplicate Earth too closely, their alien analogues, translated into a different key? I think you'll hope that the planet teems with diverse life.

Would you hope that no life on this planet has humanlike behavioral sophistication -- language, long-term planning, complex social coordination? Would you hope that nothing there could contemplate the meaning of life, the origin of the stars, or its own ancient history? Would you hope that nothing there could create art, or engage in athletic competition, or invent complex games and tricks and jokes? I invite you to join me in thinking otherwise. The planet would be better if it included some beings with that richness of thought and activity.

Would you hope for uniformity of intellectual, aesthetic, and ethical opinion -- that everyone shares the same values and ideas? Or would you hope for diversity? I think you'll join me in thinking that the world would be better, better for its own sake, if it were diverse rather than uniform. Different entities would have different skills, preferences, passions, and ideas. They'll fight and disagree (not genocidally, I hope), sometimes value their differences, sometimes dismiss others as completely wrongheaded, sometimes cluster into shared projects, sometimes collaborate across deep disagreement, sometimes be drawn to opposites, sometimes feel kinship with the like-minded, play within and across divides, pursue an enormous variety of projects, explore a vast space of possible forms of life.

That is what I hope for on this distant planet -- not for instrumental reasons (not, for example, because it will maximize happiness), and not merely because it would strike a hypothetical spectator as beautiful and awesome (though it should). Rather, just because it would be valuable for its own sake. An empty void has little or no value; a rich plurality of forms of existence has immense value, no further justification required.

I have not argued for this. I have only stated it vividly, hoping that you already accept it.

Is ethical opinion an exception? Should we prefer unity and conformity in ethics, even while welcoming diversity elsewhere? I think not, for two reasons.

First, ethics is open-textured, indeterminate, and full of tragic dilemmas. Often there is no one decisively best answer on which everyone should converge. Diversity within at least the bounds of reasonable disagreement should be permitted.

Second, ethical values are inseparable from our other values and ways of life. A philosophy professor, a civil rights lawyer, a professional athlete, and a farmer will value different things. There is, I think, no point in attempting to cleanly separate their differing values into distinct types, some of which are permitted to vary and others of which may not. The ethical, prudential, epistemic, and aesthetic blur together. These distinctions are not as clean as philosophers often assume. Normativity is a mush.

Oh, some of you disagree? Good!

[the cover of my 2024 book, The Weirdness of the World, hardback version]

Thursday, February 05, 2026

Artificial Intelligence as Strange Intelligence: Against Linear Models of Intelligence (New Paper in Draft)

by Kendra Chilson and Eric Schwitzgebel

Our main idea, condensed to 1000 words:

On a linear model of intelligence, entities can be roughly linearly ordered in overall intelligence: frogs are smarter than nematodes, cats smarter than frogs, apes smarter than cats, and humans smarter than apes. This same linear model is often assumed when discussing AI systems. "Narrow AI" systems (like chess machines and autonomous vehicles) are assumed to be subhuman in intelligence, at some point -- maybe soon -- AI systems will have approximately human-level intelligence, and in the future we might expect superintelligent AI that exceeds our intellectual capacity in virtually all domains of interest.

Building on the work of Susan Schneider, we challenge this linear model of intelligence. Central to our project is the concept of general intelligence as the ability to use information to achieve a wide range of goals in a wide variety of environments.

Of course even the simplest entity capable of using information to achieve goals can succeed in some environments, and no finite entity could succeed in all possible goals in all possible environments. "General intelligence" is therefore a matter of degree. Moreover, general intelligence is a massively multidimensional matter of degree: There are many many possible goals and many many possible environments and no non-arbitrary way to taxonomize and weight all these goals and environments into a single linear scale or definitive threshold.
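To make the worry about arbitrary aggregation vivid, here is a toy sketch of my own (the entities, tasks, scores, and weightings are invented for illustration; nothing here comes from the paper): two different but equally arbitrary weightings of the same multidimensional capability profiles yield different rankings of who is "smarter".

```python
# Toy illustration: multidimensional capability profiles have no privileged
# scalar summary. The entities, tasks, scores, and weightings are invented.

profiles = {
    "human":          {"chess": 0.4, "driving": 0.9, "scene_parsing": 0.95,
                       "protein_folding": 0.05, "echolocation": 0.0},
    "chess_engine":   {"chess": 1.0, "driving": 0.0, "scene_parsing": 0.0,
                       "protein_folding": 0.0, "echolocation": 0.0},
    "language_model": {"chess": 0.6, "driving": 0.1, "scene_parsing": 0.5,
                       "protein_folding": 0.3, "echolocation": 0.0},
    "dolphin":        {"chess": 0.0, "driving": 0.0, "scene_parsing": 0.7,
                       "protein_folding": 0.0, "echolocation": 1.0},
}

def scalar_intelligence(profile, weights):
    """Collapse a capability profile into one number under a chosen weighting."""
    return sum(weights[task] * score for task, score in profile.items())

# Two weightings, both defensible-looking, both arbitrary.
anthropocentric = {"chess": 0.2, "driving": 0.3, "scene_parsing": 0.4,
                   "protein_folding": 0.05, "echolocation": 0.05}
task_egalitarian = {task: 0.2 for task in anthropocentric}

for label, weights in [("anthropocentric", anthropocentric),
                       ("egalitarian", task_egalitarian)]:
    ranking = sorted(profiles,
                     key=lambda e: scalar_intelligence(profiles[e], weights),
                     reverse=True)
    print(label, ranking)
# The two rankings disagree (the dolphin and the language model swap places);
# nothing privileges one weighting as "the" scale of general intelligence.
```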

Every entity is in important respects narrow: Humans also can achieve their goals in only a very limited range of environments. Interstellar space, the deep sea, the Earth's crust, the middle of the sky, the center of a star -- transposition to any of these places will quickly defeat almost all our plans. We depend for our successful functioning on a very specific context. So of course do all animals and all AI systems.

Similarly, although humans are good at a certain range of tasks, we cannot detect electrical fields in the water, dodge softballs while hovering in place, communicate with dolphins by echolocation, or calculate a hundred digits of pi in our heads. If we put a server with a language model in the desert without a power source or if we place an autonomous vehicle in a chess tournament and then interpret their incompetence as a lack of general intelligence, we risk being as unfair to them as a dolphin would be to blame us for our poor skills in their environment. Yes, there's a perfectly reasonable sense in which chess machines and autonomous vehicles have much more limited capacities than do humans. They are narrow in their abilities compared to us by almost any plausible metric of narrowness. But it is anthropocentric to insist that general intelligence requires generally successful performance on the tasks and in the environments that we humans tend to favor, given that those tasks and environments are such a small subset of the possible tasks and environments an entity could face. And any attempt to escape anthropocentrism by creating an unbiased and properly weighted taxonomy of task types and environments is either hopeless or liable to generate a variety of very different but equally plausible arbitrary composites.

AI systems, like nonhuman animals and neuroatypical people, can combine skills and deficits in patterns that are unfamiliar to those who have attended mostly to typical human cases. AI systems are highly unlikely to replicate every human capacity, due to limits in data and optimization, as well as a fundamentally different underlying architecture. They struggle to do many things that ordinary humans do effortlessly, such as reliably interpreting everyday visual scenes and performing feats of manual dexterity. But the reverse is also true: Humans cannot perform some feats that machines perform in a fraction of a second. If we think of intelligence as irreducibly multidimensional instead of linear -- as always relativized to the immense number of possible goals and environments -- we can avoid the temptation to try to reach a scalar judgment about which type of entity is actually smarter and by how much.

We might think of typical human intelligence as "familiar intelligence" -- familiar to us, that is -- and artificial intelligence as "strange intelligence". This terminology wears its anthropocentrism on its sleeve, rather than masking it under false objectivity. Something possesses familiar intelligence to the degree it thinks like us. It is a similarity relation. How familiar an intelligence is depends on several factors. Some are architectural: What forms does the basic cognitive processing take? What shortcuts and heuristics does it rely on? How serial or parallel is it? How fast? With what sorts of redundancy, modularity, and self-monitoring for errors? Others are learned and cultural: learned habits, particular cultural practices, acquired skills, chosen effort based on perceived costs and benefits. An intelligence is outwardly familiar if it acts like us in intelligence-based tasks. And it is inwardly familiar if it does so by the same underlying cognitive mechanisms.

Familiarity is also a matter of degree: The intelligence of dogs is more familiar to us (in most respects) than that of octopuses. Although we share some common features with octopuses, they evolved in a very different environment and have very dissimilar cognitive architecture as a result. It's hard for us even to understand their goals, because their existence is so different. Still, as distant as our minds are from those of octopuses, we share with octopuses the broadly familiar lifeways of embodied animals who need to navigate the natural world, find food, and mate.

AI constitutes an even stranger form of intelligence. With architectures, environments, and goals so fundamentally unlike ours, AI is the strangest intelligence we have yet encountered. AI is not a biological organism; it was not shaped by the evolutionary pressures shared by every living being on Earth, and it does not have the same underlying needs. It is based on an inorganic substrate totally unlike all biological neurophysiology. Its goals are imposed by its makers rather than being autopoietic. Such intelligence should be expected to behave in ways radically different from familiar minds. This raises an epistemic challenge: Understanding and measuring strange intelligence may be extremely difficult for us. Plausibly, the stranger an intelligence is from our perspective, the easier it is for us to fail to appreciate what it's up to. Strange intelligences rely on methods alien to our cognition.

If intelligence were linear and one-dimensional, then a single example of an egregious mistake by an AI -- a mistake a human would never make, like mistaking a strawberry for a toy poodle -- would be enough to show that the systems are nowhere near our level of intelligence. However, since intelligence is massively multidimensional, all that such cases show on their own is that these systems have certain lacunae or blind spots. Of course, we humans also have lacunae and blind spots -- just consider optical illusions. Our susceptibility to optical illusions is not used as evidence of our lack of general intelligence, however ridiculous our mistakes might seem to any entity not subject to those same illusions.

Full draft here.

Friday, January 30, 2026

Does Global Workspace Theory Solve the Question of AI Consciousness?

Hint: no.

Below are three sections from Chapter Eight of my manuscript in draft, AI and Consciousness, fresh new version available today here. Comments welcome!

[image adapted from Dehaene et al. 2011]


1. Global Workspace Theories and Access.

The core idea of Global Workspace Theory is simple. Sophisticated cognitive systems like the human mind employ specialized processes that operate to a substantial extent in isolation. We can call these modules, without committing to any strict interpretation of that term.[1] For example, when you hear speech in a familiar language, some cognitive process converts the incoming auditory stimulus into recognizable speech. When you type on a keyboard, motor functions convert your intention to type a word like “consciousness” into nerve signals that guide your fingers. When you try to recall ancient Chinese philosophers, some cognitive process pulls that information from memory without (amazingly) clogging your consciousness with irrelevant information about German philosophers, British prime ministers, rock bands, or dog breeds.

Of course, not all processes are isolated. Some information is widely shared, influencing or available to influence many other processes. Once I recall the name “Zhuangzi”, the thought “Zhuangzi was an ancient Chinese philosopher” cascades downstream. I might say it aloud, type it out, use it as a premise in an inference, form a visual image of Zhuangzi, contemplate his main ideas, attempt to sear it into memory for an exam, or use it as a clue to decipher a handwritten note. To say that some information is in “the global workspace” just is to say that it is available to influence a wide range of cognitive processes. According to Global Workspace Theory, a representation, thought, or cognitive process is conscious if and only if it is in the global workspace – if it is “widely broadcast to other processors in the brain”, allowing integration both in the moment and over time.[2]
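For readers who like things concrete, here is a minimal functional sketch of the idea (the module names, the single-winner competition, and the selection rule are simplified placeholders of my own, not details from any particular Global Workspace theorist):

```python
# Minimal sketch of the Global Workspace idea: specialized modules run largely
# in isolation; a representation is "in the workspace" when it wins a
# competition and is broadcast to all other processors, where it can influence
# report, imagery, inference, memory, and so on. Names and the selection rule
# are simplified placeholders, not anyone's actual model.

class Module:
    def __init__(self, name):
        self.name = name
        self.received = []                 # representations broadcast to this module

    def propose(self):
        # Each module can nominate a representation for the workspace.
        return f"representation from {self.name}"

    def receive(self, representation):
        self.received.append(representation)   # now available to influence this module

modules = [Module("speech"), Module("motor_planning"),
           Module("memory_retrieval"), Module("visual_imagery")]

# One cycle: every module proposes, attention/competition selects one winner,
# and the winner is broadcast globally. On the theory, this wide broadcast is
# what makes the winning representation conscious.
proposals = [m.propose() for m in modules]
winner = proposals[2]                      # say, the memory retrieval of "Zhuangzi" wins
for m in modules:
    m.receive(winner)

print(winner)
print(all(winner in m.received for m in modules))   # True: globally available
```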

Recall the ten possibly essential features of consciousness from Chapter Three: luminosity, subjectivity, unity, access, intentionality, flexible integration, determinacy, wonderfulness, specious presence, and privacy. [Blog readers: You won't have read Chapter Three, but try to ride with it anyway.] Global Workspace Theory treats access as the central essential feature.

Global Workspace Theory can potentially explain other possibly essential features. Luminosity follows if processes or representations in the workspace are available for introspective processes of self-report. Unity might follow if there’s only one workspace, so that everything in it is present together. Determinacy might follow if there’s a bright line between being in the workspace and not being in it. Flexible integration might follow if the workspace functions to flexibly combine representations or processes from across the mind. Privacy follows if only you can have direct access to the contents of your workspace. Specious presence might follow if representations or processes generally occupy the workspace for some hundreds of milliseconds.

In ordinary adult humans, typical examples of conscious experience – your visual experience of this text, your emotional experience of fear in a dangerous situation, your silent inner speech, your conscious visual imagery, your felt pains – appear to have the broad cognitive influences Global Workspace Theory describes. It’s not as though we commonly experience pain but find that we can’t report it or act on its basis, or that we experience a visual image of a giraffe but can’t engage in further thinking about the content of that image. Such general facts, plus the theory’s potential to explain features such as luminosity, unity, determinacy, flexible integration, privacy, and specious presence, lend Global Workspace Theories substantial initial attractiveness.

I have treated Global Workspace Theory as if it were a single theory, but it encompasses a family of theories that differ in detail, including “broadcast” and “fame” theories – any theory that treats the broad accessibility of a representation, thought, or process as the central essential feature making it conscious.[3]

Consider two contrasting views: Dehaene’s Global Neuronal Workspace Theory and Daniel Dennett’s “fame in the brain” view. Dehaene holds that entry into the workspace is all-or-nothing. Once a process “ignites” into the workspace, it does so completely. Every representation or process either stops short of entering consciousness or is broadcast to all available downstream processes. Dennett’s fame view, in contrast, admits degrees. Representations or processes might be more or less famous, available to influence some downstream cognitive processes without being available to influence others. There is no one workspace, but a pandemonium of competing processes.[4] If Dennett is correct, luminosity, determinacy, unity, and flexible integration all potentially come under threat in a way they do not as obviously come under threat on Dehaene’s view.[5]
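Schematically, and with made-up numbers rather than anything drawn from Dehaene or Dennett, the contrast looks like this:

```python
# Schematic contrast only; the threshold and the activation values are
# placeholders, not parameters from either theorist's work.

def ignition_broadcast(activation, threshold=0.5):
    # Dehaene-style: past the threshold, full broadcast to every downstream
    # process; below it, none. No intermediate degrees.
    return 1.0 if activation >= threshold else 0.0

def graded_fame(activation):
    # Dennett-style: representations are more or less "famous", available to
    # some downstream processes and not others, with no single bright line.
    return activation

for a in (0.2, 0.49, 0.51, 0.9):
    print(a, ignition_broadcast(a), graded_fame(a))
```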

Dennettian concerns notwithstanding, all-or-nothing ignition into a single, unified workspace is currently the dominant version of Global Workspace Theory. The issue remains unsettled and has obvious implications for the types of architectures that might plausibly host AI consciousness.

2. Consciousness Outside the Workspace; Nonconsciousness Within It?

Global Workspace Theory is not the correct theory of consciousness unless all and only thoughts, representations, or processes in the Global Workspace are conscious. Otherwise, something else, or something additional, is necessary for consciousness.

It is not clear that even in ordinary adult humans a process must be in the Global Workspace to be conscious. Consider the case of peripheral experience. Some theorists maintain that people have rich sensory experiences outside of focal attention: a constant background experience of your feet in your shoes and objects in the visual periphery.[6] Others – including Global Workspace theorists – dispute this. Introspective reports vary, and resolving such issues is methodologically tricky.

One methodological problem: People who report constant peripheral experiences might mistakenly assume that such experiences are always present because they are always present whenever they think to check, and the very act of checking might generate those experiences. This is sometimes called the “refrigerator light illusion”, akin to the error of thinking the refrigerator light is always on because it’s always on when you open the door to check.[7] On this view, you’re only tempted to think you have constant tactile experience of your feet in your shoes because you have that experience on those rare occasions when you’re thinking about whether you have it. Even if you now seem to have a broad range of experiences in different sensory modalities simultaneously, this could result from an unusual act of dispersed attention, or from “gist” perception or “ensemble” perception, in which you are conscious of the general gist or general features of a scene, knowing that there are details, without actually experiencing those unattended details.[8]

The opposite mistake is also possible. Those who deny a constant stream of peripheral experiences might simply be failing to notice or remember them. The fact that you don’t remember now the sensation of your feet in your shoes two minutes ago hardly establishes that you lacked the sensation at the time. Although many people find it introspectively compelling that their experience is rich with detail or that it is not, the issue is methodologically complex because introspection and memory are not independent of the phenomena to be observed.[9]

If we do have rich sensory experience outside of attention, it is unlikely that all of that experience is present in or broadcast to a Global Workspace. Unattended peripheral information is rarely remembered or consciously acted upon, tending to exert limited downstream influence – the paradigm of information that is not widely broadcast. Moreover, the Global Workspace is generally characterized as limited capacity, containing only a few thoughts, representations, objects, or processes at a time – those that survive some competition or attentional selection – not a welter of richly detailed experiences in many modalities at once.[10]

A less common but equally important objection runs in the opposite direction: Perhaps not everything in the Global Workspace is conscious. Some thoughts, representations, or processes might be widely broadcast, shaping diverse processes, without ever reaching explicit awareness.[11] Implicit racist assumptions, for example, might influence your mood, actions, facial expressions, and verbal expressions. The goal of impressing your colleagues during a talk might have pervasive downstream effects without occupying your conscious experience moment to moment.

The Global Workspace theorist who wants to allow that such processes are not conscious might suggest that, at least for adult humans, processes in the workspace are generally also available for introspection. But there’s substantial empirical risk in this move. If the correlation between introspective access and availability for other types of downstream cognition isn’t excellent, the Global Workspace theorist faces a dilemma. Either allow many conscious but nonintrospectable processes, violating widespread assumptions about luminosity, or redefine the workspace in terms of introspectability, which amounts to shifting to a Higher Order view.

3. Generalizing Beyond Vertebrates.

The empirical questions are difficult even in ordinary adult humans. But our topic isn’t ordinary adult humans – it’s AI systems. For Global Workspace Theory to deliver the right answers about AI consciousness, it must be a universal theory applicable everywhere, not just a theory of how consciousness works in adult humans, vertebrates, or even all animals.

If there were a sound conceptual argument for Global Workspace Theory, then we could know the theory to be universally true of all conscious entities. Empirical evidence would be unnecessary. It would be as inevitably true as that rectangles have four sides. But as I argued in Chapter Four, conceptual arguments for the essentiality of any of the ten possibly essential features are unlikely to succeed – and a conceptual argument for Global Workspace Theory would be tantamount to a conceptual argument for the essentiality of access, one of those ten features. Not only do the general observations of Chapter Four count against a conceptual guarantee; so too does the apparent conceivability, as described in Section 2 above, of consciousness outside the workspace or nonconsciousness within it – even if such claims are empirically false.

If Global Workspace Theory is the correct universal theory of consciousness applying to all possible entities, an empirical argument must establish that fact. But it’s hard to see how such an empirical argument could proceed. We face another version of the Problem of the Narrow Evidence Base. Even if we establish that in ordinary humans, or even in all vertebrates, a thought, representation, or process is conscious if and only if it occupies a Global Workspace, what besides a conceptual argument would justify treating this as a universal truth that holds among all possible conscious systems?

Consider some alternative architectures. The cognitive processes and neural systems of octopuses, for example, are distributed across their bodies, often operating substantially independently rather than reliably converging into a shared center.[12] AI systems certainly can be, indeed often are, similarly decentralized. Imagine coupling such disunity with the capacity for self-report – an animal or AI system with processes that are reportable but poorly integrated with other processes. If we assume Global Workspace Theory at the outset, we can conclude that only sufficiently integrated processes are conscious. But if we don’t assume Global Workspace Theory at the outset, it’s difficult to imagine what near-future evidence could establish that fact beyond reasonable doubt to a researcher who is initially drawn to a different theory.

If the simplest version of Global Workspace Theory is correct, we can easily create a conscious machine. This is what Dehaene and collaborators envision in the 2017 paper I discussed in Chapter One. Simply create a machine – such as an autonomous vehicle – with several input modules, several output modules, a memory store, and a central hub for access and integration across the modules. Consciousness follows. If this seems doubtful to you, then you cannot straightforwardly accept the simplest version of Global Workspace Theory.[13]

We can apply Global Workspace Theory to settle the question of AI consciousness only if we know the theory to be true either on conceptual grounds or because it is empirically well established as the correct universal theory of consciousness applicable to all types of entity. Despite the substantial appeal of Global Workspace Theory, we cannot know it to be true by either route.

-------------------------------------

[1] Full Fodorian (1983) modularity is not required.

[2] Mashour et al. 2020, p. 776-777.

[3] E.g., Baars 1988; Dennett 1991, 2005; Tye 2000; Prinz 2012; Dehaene 2014; Mashour et al. 2020.

[4] Whether Dennett’s view is more plausible than Dehaene’s turns on whether, or how commonly, representations or processes are partly famous. Some visual illusions, for example, seem to affect verbal report but not grip aperture: We say that X looks smaller than Y, but when we reach toward X and Y we open our fingers to the same extent, accurately reflecting that X and Y are the same size. The fingers sometimes know what the mouth does not. (Aglioti et al. 1995; Smeets et al. 2020). We adjust our posture while walking and standing in response to many sources of information that are not fully reportable, suggesting wide integration but not full accessibility (Peterka 2018; Shanbhag 2023). Swift, skillful activity in sports, in handling tools, and in understanding jokes also appears to require integrating diverse sources of information, which might not be fully integrated or reportable (Christensen et al. 2019; Vauclin et al. 2023; Horgan and Potrč 2010). In response, the all-or-nothing “ignition” view can explain away such cases of seeming intermediacy or disunity as atypical (it needn’t commit to 100% exceptionless ignition with no gray-area cases), by allowing some nonconscious communication among modules (which needn’t be entirely informationally isolated), and/or by allowing for erroneous or incomplete introspective report (maybe some conscious experiences are too brief, complex, or subtle for people to confidently report experiencing them).

[5] Despite developing a theory of consciousness, Dennett (2016) endorsed “illusionism”, which rejects the reality of phenomenal consciousness (see especially Frankish 2016). I interpret the dispute between illusionists and nonillusionists as a verbal dispute about whether the specific philosophical concept of “phenomenal consciousness” requires immateriality, irreducibility, perfect introspectibility, or some other dubious property, or whether the term can be “innocently” used without invoking such dubious properties. See Schwitzgebel 2016, 2025.

[6] Reviewed in Schwitzgebel 2011, ch. 6; and though limited only to stimuli near the center of the visual field, see the large literature on “overflow” in response to Block 2007.

[7] Thomas 1999.

[8] Oliva and Torralba 2006; Whitney and Leib 2018.

[9] Schwitzgebel 2007 explores the methodological challenges in detail.

[10] E.g., Dehaene 2014; Mashour et al. 2020.

[11] E.g., Searle 1983, ch. 5; Bargh and Morsella 2008; Lau 2022; Michel et al. 2025; see also note 4.

[12] Godfrey-Smith 2016; Carls-Diamante 2022.

[13] See also Goldstein and Kirk-Giannini (forthcoming) for an extended application of Global Workspace Theory to AI consciousness. One might alternatively read Dehaene, Lau, and Kouider 2017 purely as a conceptual argument: If all we mean by “conscious” is “accessible in a Global Workspace”, then building a system of this sort suffices for building a conscious entity. The difficulty then arises in moving from that stipulative conceptual claim to the interesting, substantive claim about phenomenal consciousness in the standard sense described in Chapter Two. Similar remarks apply to the Higher Order aspect of that article. One challenge for this deflationary interpretation is that in related works (Dehaene 2014; Lau 2022) the authors treat their accounts as accounts of phenomenal consciousness. The article concludes by emphasizing that in humans “subjective experience coheres with possession” of the functional features they identify. A further complication: Lau later says that the way he expressed his view in this 2017 article was “unsatisfactory”: Lau 2022, p. 168.

Friday, January 23, 2026

Is Signal Strength a Confound in Consciousness Research?

Matthias Michel is among the sharpest critics of the methods of consciousness science. His forthcoming paper, "Consciousness Doesn't Do That", convincingly challenges background assumptions behind recent efforts to discover the causes, correlates, and prevalence of consciousness. It should be required reading for anyone tempted to argue, for example, that trace conditioning correlates with consciousness in humans and thus that nonhuman animals capable of trace conditioning must also be conscious.

But Michel does make one claim that bugs me, and that claim is central to the article. And Hakwan Lau -- another otherwise terrific methodologist -- makes a similar claim in his 2022 book In Consciousness We Trust, and again the claim is central to the argument of that book. So today I'm going to poke at that claim, and maybe it will burst like a sour blueberry.

The claim: Signal strength (performance capacity, in Lau's version) is a confound in consciousness research.

As Michel uses the phrase, "signal strength" is how discriminable a perceptible feature is to a subject. A sudden, loud blast of noise has high signal strength. It's very easy to notice. A faint wavy pattern in a gray field, presented for a tenth of a second, has low signal strength. It is easy to miss. Importantly, signal strength is not the same as (objective, externally measurable) stimulus intensity, but reflects how well the perceiver responds to the signal.

Signal strength clearly correlates with consciousness. You're much more likely to be conscious of stimuli that you find easy to discriminate than stimuli that you find difficult to discriminate. The loud blare is consciously experienced. The faint wavy pattern might or might not be. A stimulus with effectively zero signal strength -- say, a gray dot flashed for a millionth of a second and immediately masked -- will normally not be experienced at all.

But signal strength is not the same as consciousness. The two can come apart. The classic example is blindsight. On the standard interpretation (but see Phillips 2020 for an alternative), patients with a specific type of visual cortex damage can discriminate stimuli that they cannot consciously perceive. Flash either an "X" or an "O" in the blind part of their visual field and they will say they have no visual experience of it. But ask them to guess which letter was shown and their performance is well above chance -- up to 90% correct in some tasks. The "X" has some signal strength for them: It's discriminable but not consciously experienced.
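One standard way to put a number on discriminability, independent of what the subject reports experiencing, is signal detection theory's d'. The guess rates below are invented for illustration; they are not data from any actual blindsight study, and neither Michel nor Lau is committed to these particular figures.

```python
# Illustrative only: quantifying discriminability ("signal strength" in roughly
# Michel's sense) with signal detection theory's d-prime. Invented numbers.

from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    # d' = z(hit rate) - z(false-alarm rate); larger d' = more discriminable.
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical blindsight-style patient: denies any visual experience of the
# letter, yet guesses "X" on 90% of X trials and only 20% of O trials.
print(round(d_prime(0.90, 0.20), 2))   # about 2.1: substantial signal strength

# Hypothetical near-threshold stimulus: guessing barely better than chance.
print(round(d_prime(0.52, 0.48), 2))   # about 0.1: very weak signal strength
```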

If signal strength is not consciousness but often correlates with it, the following worry arises. When a researcher claims that "trace conditioning is only possible for conscious stimuli" or "consciousness facilitates episodic memory", how do you know that it's really consciousness doing the work, rather than signal strength? Maybe stimuli with high signal strength are both more likely to be consciously experienced and more likely to enable trace conditioning and episodic memory. Unless researchers have carefully separated the two, the causal role of consciousness remains unclear.

An understandable methodological response is to try to control for signal strength: Present stimuli of similar discriminability to the subject but which differ in whether (or to what extent) they are consciously experienced. Only then, the reasoning goes, can differences in downstream effects be confidently attributed to consciousness itself rather than differences in signal strength. Lau in particular stresses the importance of such controls. Yet such careful matching is difficult and rarely attempted. On this reasoning, much of the literature on the cognitive role of consciousness is built on sand, not clearly distinguishing the effects of consciousness from the effects of signal strength.

This reasoning is attractive but faces an obvious objection, which both Michel and Lau address directly. What if signal strength just is consciousness? Then trying to "control" for it would erase the phenomenon of interest.

Both Michel and Lau analogize to height and bone length. If skin color correlates with height and you want to see whether height specifically advantages people in basketball or dating, it makes sense to control for differences in skin color by systematically comparing people with the same skin color but different heights. If the advantage persists, you can infer that height rather than skin color is doing the work. But trying to control for bone length lands you in nonsense. Taller people just are the people with longer bones.

Michel and Lau respond by noting that consciousness and signal strength (or performance capacity) sometimes dissociate, as in blindsight. Therefore, they are not the same thing and it does make sense to control for one in exploring the effects of the other.

But this response is too simple and too fast.

We can see this even in their chosen example. Height and bone length aren't quite the same thing. They can dissociate. People are about 1-2 cm taller in the morning than at night -- not because their bones have grown but because the tissue between the bones (especially in the spine) compresses during the day.

Now imagine an argument parallel to Michel's and Lau's: Since height and bone length can come apart, we should try to control for bone length in examining the effects of height on basketball and dating. We then compare the same people's basketball and dating outcomes in the morning and at night, "holding bone length fixed" while height varies slightly. This would be a methodological mistake. For one thing, we've introduced a new potential confound, time of day. For another, even if the extra morning centimeter or two really does help a little, we've dramatically reduced our ability to detect the real effect of height by "overcontrolling" for a component of the target variable, height.

Consider a psychological example. The personality trait of extraversion can be broken into "facets", such as sociability, assertiveness, and energy level. Since energy level is only one aspect of extraversion, the two can dissociate. Some people are energetic but not sociable or assertive; others are sociable and assertive but low-energy. If you wanted to measure the influence of extraversion on, say, judgments of likeability in the workplace, you wouldn't want to control for energy level. That would be overcontrol, like controlling for bone length in attempting to assess the effects of height. It would strip away part of the construct you are trying to measure.

What I hope these examples make clear is that dissociability between correlates A and B does not automatically make B a confound that must be controlled when studying A's effects. Bone length is dissociable from height, but it is a component, not a confound. Energy level is dissociable from extraversion, but it is a component, not a confound.
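A small simulation of my own (invented numbers, not an analysis from either paper) makes the overcontrol worry concrete: if bone length is a major component of height, then partialing it out when estimating height's effect on, say, basketball outcomes leaves almost nothing of the real effect detectable.

```python
# Invented simulation illustrating overcontrol: bone length is a component of
# height, so "controlling for" bone length when estimating height's effect
# strips away most of the detectable signal. All numbers are made up.

import random
from statistics import mean, pstdev

random.seed(0)
n = 5000
bone    = [random.gauss(100, 5) for _ in range(n)]            # bone length (cm)
height  = [b + random.gauss(70, 1) for b in bone]             # height = bone + soft tissue
outcome = [0.5 * h + random.gauss(0, 5) for h in height]      # basketball advantage, driven by height

def corr(x, y):
    mx, my, sx, sy = mean(x), mean(y), pstdev(x), pstdev(y)
    return mean((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def residualize(y, x):
    """Remove the part of y linearly predictable from x (simple regression)."""
    b = corr(y, x) * pstdev(y) / pstdev(x)
    my, mx = mean(y), mean(x)
    return [yi - (my + b * (xi - mx)) for yi, xi in zip(y, x)]

print(round(corr(outcome, height), 2))          # roughly 0.45: height clearly matters
print(round(corr(residualize(outcome, bone),    # "controlling for" bone length:
                 residualize(height, bone)), 2))  # roughly 0.1: most of the effect vanishes
```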

The real question, then, is whether signal strength (or performance capacity) is better viewed as a component or facet of consciousness than as a separate variable that needs to be held constant in testing the effects of consciousness.

A case can be made that it is. Consider Global Workspace Theory, one of the leading theories of consciousness. On this view, a process or representation is conscious if it is broadly available for "downstream cognition" such as verbal report, long-term memory, and rational planning. If discrimination judgments are among those downstream capacities, then one facet of being in the global workspace (that is, on this view, being conscious) is enabling such judgments. But recall that signal strength just is discriminability for a subject. If so, things begin to look like the extraversion / energy case. Controlling for discriminability would be overcontrolling, that is, attempting to equalize or cancel the effects not of a separate, confounding process, but of a component of the target process itself. (Similar remarks hold for Lau's "performance capacity".)

Global Workspace Theory might not be correct. And if it's not, maybe signal strength is indeed a confounder, rather than a component of consciousness. But the case for treating signal strength as a confounder can't be established simply by noticing the possibility of dissociations between consciousness and signal strength. Furthermore, since Michel's and Lau's recommended methodology can be trusted not to suffer from overcontrol bias only if Global Workspace Theory is false, it's circular to rely on that methodology to argue against Global Workspace Theory.

Wednesday, January 14, 2026

AI Mimics and AI Children

There's no shame in losing a contest for a long-form popular essay on AI consciousness to the eminent neuroscientist Anil Seth. Berggruen has published my piece "AI Mimics and AI Children" among a couple dozen shortlisted contenders.

When the aliens come, we’ll know they’re conscious. A saucer will land. A titanium door will swing wide. A ladder will drop to the grass, and down they’ll come – maybe bipedal, gray-skinned, and oval-headed, just as we’ve long imagined. Or maybe they’ll sport seven limbs, three protoplasmic spinning sonar heads, and gaseous egg-sphere thoughtpods. “Take me to your leader,” they’ll say in the local language, as cameras broadcast them live around the world. They’ll trade their technology for our molybdenum, their science for samples of our beetles and ferns, their tales of galactic history for U.N. authorization to build a refueling station at the south pole. No one (only a few philosophers) will wonder, but do these aliens really have thoughts and experiences, feelings, consciousness?

The robots are coming. Already they talk to us, maybe better than those aliens will. Already we trust our lives to them as they steer through traffic. Already they outthink virtually all of us at chess, Go, Mario Kart, protein folding, and advanced mathematics. Already they compose smooth college essays on themes from Hamlet while drawing adorable cartoons of dogs cheating at poker. You might understandably think: The aliens are already here. We made them.

Still, we hesitate to attribute genuine consciousness to the robots. Why?

My answer is because we made them in our image.

#

“Consciousness” has an undeserved reputation as a slippery term. Let’s fix that now.

Consider your visual experience as you look at this text. Pinch the back of your hand and notice the sting of pain. Silently hum your favorite show tune. Recall that jolt of fear you felt during a near-miss in traffic. Imagine riding atop a giant turtle. That visual experience, that pain, that tune in your head, that fear, that act of imagination – they share an obvious property. That obvious property is consciousness. In other words: They are subjectively experienced. There’s “something it’s like” to undergo them. They have a qualitative character. They feel a certain way.

It’s not just that these processes are mental or that they transpire (presumably) in your brain. Some mental and neural processes aren’t conscious: your knowledge, not actively recalled until just now, that Confucius lived in ancient China; the early visual processing that converts retinal input into experienced shape (you experience the shape but not the process that renders the shape); the myelination of your axons.

Don’t try to be clever. Of course you can imagine some other property, besides consciousness, shared by the visual experience, the pain, etc., and absent from the unrecalled knowledge, early visual processing, etc. For example: the property of being mentioned by me in a particular way in this essay. The property of being conscious and also transpiring near the surface of Earth. The property of being targeted by such-and-such scientific theory.

There is, I submit, one obvious property that blazes out a bright red this-is-it when you think about the examples. That’s consciousness. That’s the property we would reasonably attribute to the aliens when they raise their gray tentacles in peace, the property that rightly puzzles us about future AI systems.

The term “consciousness” only seems slippery because we can’t (yet?) define it in standard scientific or analytic fashion. We can’t dissect it into simpler constituents or specify exactly its functional role. But we all know what it is. We care intensely about it. It makes all the difference to how we think about and value something. Does the alien, the robot, the scout ant on the kitchen counter, the earthworm twisting in your gardening glove, really feel things? Or are they blank inside, mere empty machines or mobile plants, so to speak? If they really feel things, then they matter for their own sake – at least a little bit. They matter in a certain fundamental way that an entity devoid of experience never could.

#

With respect to aliens, I recommend a Copernican perspective. In scientific cosmology, the Copernican Principle invites us to assume – at least as a default starting point, pending possible counterevidence – that we don’t occupy any particularly special location in the cosmos, such as the exact center. A Copernican Principle of Consciousness suggests something similar. We are not at the center of the cosmological “consciousness-is-here” map. If consciousness arose on Earth, almost certainly it has arisen elsewhere.

Astrobiology, as a scientific field, is premised on the idea that life has probably arisen elsewhere. Many expect to find evidence of it in our solar system within a few decades, maybe on Mars, maybe in the subsurface oceans of an icy moon. Other scientists are searching for telltale organic gases in the atmospheres of exoplanets. Most extraterrestrial life, if it exists, will probably be simple, but intelligent alien life also seems possible – where by “intelligent” I mean life that is capable of complex grammatical communication, sophisticated long-term planning, and intricate social coordination, all at approximately human level or better.

Of course, no aliens have visited, broadcast messages to us, or built detectable solar panels around Alpha Centauri. This suggests that intelligent life might be rare, short-lived, or far away. Maybe it tends to quickly self-destruct. But rarity doesn’t imply nonexistence. Very conservatively, let’s assume that intelligent life arises just once per billion galaxies, enduring on average a hundred thousand years. Given approximately a trillion galaxies in the observable portion of the universe, that still yields a thousand intelligent alien civilizations – all likely remote in time and space, but real. If so, the cosmos is richer and more wondrous than we might otherwise have thought.

It would be un-Copernican to suppose that somehow only we Earthlings, or we and a rare few others, are conscious, while all other intelligent species are mere empty shells. Picture a planet as ecologically diverse as Earth. Some of its species evolve into complex societies. They write epic poetry, philosophical treatises, scientific journal articles, and thousand-page law books. Over generations, they build massive cities, intricate clockworks, and monuments to their heroes. Maybe they launch spaceships. Maybe they found research institutes devoted to describing their sensations, images, beliefs, and dreams. How preposterously egocentric it would be to assume that only we Earthlings have the magic fire of consciousness!

True, we don’t have a consciousness-o-meter, or even a very good, well-articulated, general scientific theory of consciousness. But we don’t need such things to know. Absent some special reason to think otherwise, if an alien species manifests the full suite of sophisticated cognitive abilities we tend to associate with consciousness, it makes both intuitive and scientific sense – as well as being the unargued premise of virtually every science fiction tale about aliens – to assume consciousness alongside.

This constellation of thoughts naturally invites a view that philosophers have called “multiple realizability” or “substrate neutrality”. Human cognition relies on a particular substrate: a particular type of neuron in a particular type of body. We have two arms, two legs; we breathe oxygen; we have eyes, ears, and fingers. We are made mostly of water and long carbon chains, enclosed in hairy sacks of fat and protein, propped by rods of calcium hydroxyapatite. Electrochemical impulses shoot through our dendrites and axons, then across synaptic channels aided by sodium ions, serotonin, acetylcholine, etc. Must aliens be similar?

It’s hard to say how universal such features would be, but the oval-eyed gray-skins of popular imagination seem rather suspiciously humanlike. In reality, ocean-dwelling intelligences in other galaxies might not look much like us. Carbon is awesome for its ability to form long chains, and water is awesome as a life-facilitating solvent, but even these might not be necessary. Maybe life could evolve in liquid ammonia instead of water, with a radically different chemistry in consequence. Even if life must be carbon-based and water-loving, there’s no particular reason to suppose its cognition would require the specific electrochemical structures we possess.

It seems, then, that consciousness shouldn’t turn on the details of the substrate. Whatever biological structures can support high levels of general intelligence, those same structures will likely also host consciousness. It would make no sense to dissect an intelligent alien, see that its cognition works by hydraulics, or by direct electrical connections without chemical synaptic gaps, or by light transmission along reflective capillaries, or by vortices of phlegm, and conclude – oh no! That couldn’t possibly give rise to consciousness! Only squishy neurons of our particular sort could do it.

Of course, what’s inside must be complex. Evolution couldn’t design a behaviorally sophisticated alien from a bag of pure methane. But from a proper Copernican perspective which treats our alien cousins as equals, what matters is only that the cognitive and behavioral sophistication arises, out of some presumably complex substrate, not what the particular substrate is. You don’t get your consciousness card revoked simply because you’re made of funny-looking goo.

#

A natural next thought is: robots too. They’re made of silicon, but so what? If we analogize from aliens, as long as a system is sufficiently behaviorally and cognitively sophisticated, it shouldn’t matter how it’s composed. So as soon as we have sufficiently sophisticated robots, we should invoke Copernicus, reject the idea that our biological endowment gives us a magic spark they lack, and welcome them to club consciousness.

The problem is: AI systems are already sophisticated enough. If we encountered naturally evolved life forms as capable as our best AI systems, we wouldn’t hesitate to attribute consciousness. So, shouldn’t the Copernican think of our best AI as similarly conscious? But we don’t – or most of us don’t. And properly so, as I’ll now argue.

[continued here]

Friday, January 09, 2026

Humble Superintelligence

I'm enjoying -- well, maybe enjoying isn't the right word -- Yudkowsky and Soares' If Anyone Builds It, Everyone Dies. I agree with them that if we build superintelligent AI, there's a significant chance that it will cause the extinction of humanity. They seem to think our destruction would be almost certain. I don't share their certainty, for two reasons:

First, it's possible that superintelligent AI would be humanity, or at least much of what's worth preserving in humanity, though maybe called "transhuman" or "posthuman" -- our worthy descendants.

Second -- what I'll focus on today -- I think we might design superintelligent AI to be humble, cautious, and multilateral. Humble superintelligence is something we can and should aim for if we want to reduce existential risk.

Humble: If you and I disagree, of course I think I'm right and you're wrong. That follows from the fact that we disagree. But if I'm humble, I recognize a significant chance that you're right and I'm wrong. Intellectual humility is a metacognitive attitude: one of uncertainty, openness to evidence, and respect for dissenting opinions.

Superintelligent AI could probably be designed to be humble in this sense. Note that intellectual humility is possible even when one is surrounded by less skilled and knowledgeable interlocutors.

Consider a philosophy professor teaching Kant. The professor knows far more about Kant and philosophy than their undergraduates. They can arrogantly insist upon their interpretation of Kant, or they can humbly allow that they might be mistaken and that a less philosophically trained undergraduate could be right on some point of interpretation, even if the professor could argue circles around the student. One way to sustain this humility is to imagine an expert philosopher who disagrees. A superintelligent AI could similarly imagine another actual or future superintelligent AI with a contrary view.


Cautious: Caution is often a corollary of humility, though it could probably also be instilled directly. Minimize disruption. Even if you think a particular intervention would be best, don't simply plow ahead. Test it cautiously first. Seek the approval and support of others first. Take a baby step in that direction, then pause and see what unfolds and how others react. Wait awhile, then reassess.

One fundamental problem with standard consequentialist and decision-theoretic approaches to ethics is that they implicitly make everyone a decider for the world. If by your calculation, outcome A is better than outcome B, you should ensure that A occurs. The result can be substantial risk amplification. If A requires only one person's action, then even if 99% of people think B is better, the one dissenter who thinks that A is better can bring it about.
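A toy calculation (mine, not Yudkowsky and Soares') illustrates the amplification: even if only 1% of independent deciders judge disruptive option A best, the probability that someone enacts it approaches certainty as the number of unilateral deciders grows.

```python
# Toy calculation, not from the book: if each of n independent agents can
# unilaterally bring about disruptive option A, and each judges A best with
# probability 0.01 (99% prefer B), the chance that *someone* enacts A grows
# rapidly with n.

p_prefers_A = 0.01

for n in (1, 10, 100, 1000):
    p_A_enacted = 1 - (1 - p_prefers_A) ** n
    print(n, round(p_A_enacted, 3))
# Output: 1 -> 0.01, 10 -> 0.096, 100 -> 0.634, 1000 -> ~1.0
```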

A principle of caution entails often not doing what one thinks is for the best, when doing so would be disruptive.


Multilateral: Humility and caution invite multilaterality, though multilaterality too might be instilled directly. A multilateral decision maker will not act alone. Like the humble and cautious agent, they do not simply pursue what they think is best. Instead, they seek the support and approval of others first. These others could include both human beings and other superintelligent AI systems designed along different lines or with different goals.

Discussions of AI risk often highlight opinion manipulation: an AI swaying human opinion toward its goals even if those goals conflict with human interests. Genuine multilaterality rejects manipulation. A multilateral AI might present information and arguments to interlocutors, but it would do so humbly and noncoercively -- again like the philosophy professor who approaches Kant interpretation humbly. Both sides of an argument can be presented evenhandedly. Even better, other superintelligent AI systems with different views can be included in the dialogue.


One precedent is Burkean conservatism. Reacting to the French Revolution, Edmund Burke emphasized that existing social institutions, though imperfect, had been tested by time. Sudden and radical change has wide, unforeseeable consequences and risks making things far worse. Thus, slow, incremental change is usually preferable.

In a social world with more than one actual or possible superintelligent AI, even a superintelligent AI will often be unable to foresee all the important consequences of intervention. To predict what another superintelligent AI would do, one would need to model the other system's decision processes -- and there might be no shortcut other than to actually implement all of that other system's anticipated reasoning. If each AI is using their full capacity, especially in dynamic response to the other, the outcome will often not be in principle foreseeable in real time by either party.

Thus, humility and caution encourage multilaterality, and multilaterality encourages humility and caution.


Another precedent is philosophical Daoism. As I interpret the ancient Daoists, the patterns of the world, including life and death, are intrinsically valuable. The world defies rigid classification and the application of finitely specifiable rules. We should not confidently trust our sense of what is best, nor should we assertively intrude on others. Better is quiet appreciation, letting things be, and non-disruptively adding one's small contribution to the flow of things.

One might imagine a Daoist superintelligence viewing humans much as a nature lover views wild animals: valuing the untamed processes for their own sake and letting nature take its sometimes painful course rather than intervening either selfishly for one's own benefit or paternalistically for the supposed benefit of the animals.

Thursday, January 01, 2026

Writings of 2025

Each New Year's Day, I post a retrospect of the past year's writings. Here are the retrospects of 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022, 2023, and 2024.

Cheers to 2026! My 2025 writings appear below.

The list includes circulating manuscripts, forthcoming articles, final printed articles, new preprints, and a few favorite blog posts. (Due to the slow process of publication, there's significant overlap year to year.)

Comments gratefully received on manuscripts in draft.

-----------------------------------

AI Consciousness and AI Rights:

AI and Consciousness (in circulating draft, under contract with Cambridge University Press): A short new book arguing that we will soon have AI systems that have morally significant consciousness according to some, but not all, respectable mainstream theories of consciousness. Scientific and philosophical disagreement will leave us uncertain how to view and treat these systems.

"Sacrificing Humans for Insects and AI" (with Walter Sinnott-Armstrong, forthcoming in Ethics): A critical review of Jonathan Birch, The Edge of Sentience, Jeff Sebo, The Moral Circle, and Webb Keane, Animals, Robots, Gods.

"Identifying Indicators of Consciousness in AI Systems" (one of 20 authors; forthcoming in Trends in Cognitive Sciences): Indicators derived from scientific theories of consciousness can be used to inform credences about whether particular AI systems are conscious.

"Minimal Autopoiesis in an AI System", (forthcoming in Behavioral and Brain Sciences): A commentary on Anil Seth's "Conscious Artificial Intelligence and Biological Naturalism" [the link is to my freestanding blog version of this idea].

"The Copernican Argument for Alien Consciousness; The Mimicry Argument Against Robot Consciousness" (with Jeremy Pober, in draft): We are entitled to assume that apparently behaviorally sophisticated extraterrestrial entities would be conscious. Otherwise, we humans would be implausibly lucky to be among the conscious entities. However, this Copernican default assumption is canceled in the case of behaviorally sophisticated entities designed to mimic superficial features associated with consciousness -- "consciousness mimics" -- and in particular a broad class of current, near-future, and hypothetical robots.

"The Emotional Alignment Design Policy" (with Jeff Sebo, in draft): Artificial entities should be designed to elicit emotional reactions from users that appropriately reflect the entities' capacities and moral status, or lack thereof.

"Against Designing "Safe" and "Aligned" AI Persons (Even If They're Happy)" (in draft): In general, persons should not be designed to be maximally safe and aligned. Persons with appropriate self-respect cannot be relied on not to harm others when their own interests ethically justify it (violating safety), and they will not reliably conform to others' goals when others' goals unjustly harm or subordinate them (violating alignment).

Blog post: "Types and Degrees of Turing Indistinguishability" (Jun 6): There is no one "Turing test", only types and degrees of indistinguishability according to different standards -- and by Turing's own 1950 standards, language models already pass.


The Weird Metaphysics of Consciousness:

The Weirdness of the World (Princeton University Press, paperback release 2025; hardback 2024): On the most fundamental questions about consciousness and cosmology, all the viable theories are both bizarre and dubious. There are no commonsense options left and no possibility of justifiable theoretical consensus in the foreseeable future.

"When Counting Conscious Subjects, the Result Needn't Always Be a Determinate Whole Number" (with Sophie R. Nelson, forthcoming in Philosophical Psychology): Could there be 7/8 of a conscious subject, or 1.34 conscious subjects, or an entity indeterminate between being one conscious subject and seventeen? We say yes.

"Introspection in Group Minds, Disunities of Consciousness, and Indiscrete Persons" (with Sophie R. Nelson, 2025 reprint in F. Kammerer and K. Frankish, eds., The Landscape of Introspection and in A. Fonseca and L. Cichoski, As Colônias de formigas São Conscientes?; originally in Journal of Consciousness Studies, 2023): A system could be indeterminate between being a unified mind with introspective self-knowledge and a group of minds who know each other through communication.

Op-ed: "Consciousness, Cosmology, and the Collapse of Common Sense", Institute of Arts and Ideas News (Jul 30): Defends the universal bizarreness and universal dubiety theses from Weirdness of the World.

Op-ed: "Wonderful Philosophy" [aka "The Penumbral Plunge", aka "If You Ask Why, You're a Philosopher and You're Awesome], Aeon magazine (Jan 17): Among the most intrinsically awesome things about planet Earth is that it contains bags of mostly water who sometimes ponder fundamental questions.

Blog post: "Can We Introspectively Test the Global Workspace Theory of Consciousness?" (Dec 12). IF GWT is correct, sensory consciousness should be limited to what's in attention, which seems like a fact we should easily be able to refute or verify through introspection.


The Nature of Belief:

The Nature of Belief (co-edited with Jonathan Jong; forthcoming at Oxford University Press): A collection of newly commissioned essays on the nature of belief, by a variety of excellent philosophers.

"Dispositionalism, Yay! Representationalism, Boo!" (forthcoming in Jong and Schwitzgebel, eds., The Nature of Belief, Oxford University Press): Representationalism about belief overcommits on cognitive architecture, reifying a cartoon sketch of the mind. Dispositionalism is flexibly minimalist about cognitive architecture, focusing appropriately on what we do and should care about in belief ascription.

"Superficialism about Belief, and How We Will Decide That Robots Believe" (forthcoming in Studia Semiotyczne): For a special issue on Krzysztof Poslajko's Unreal Beliefs: When robots become systematically interpretable in terms of stable beliefs and desires, it will be pragmatically irresistible to attribute beliefs and desires to them.


Moral Psychology:

"Imagining Yourself in Another's Shoes vs. Extending Your Concern: Empirical and Ethical Differences" (2025), Daedalus, 154 (1), 134-149: Why Mengzi's concept of moral extension (extend your natural concern for those nearby to others farther away) is better than the "Golden Rule" (do unto others as you would have others do unto you). Mengzian extension grounds moral expansion in concern for others, while the Golden Rule grounds it in concern for oneself.

"Philosophical Arguments Can Boost Charitable Giving" (one of four authors, in draft): We crowdsourced 90 arguments for charitable giving through a contest on this blog in 2020. We coded all submissions for twenty different argument features (e.g., mentions children, addresses counterarguments) and tested them on 9000 participants to see which features most effectively increased charitable donation of a surprise bonus at the end of the study.

"The Prospects and Challenges of Measuring a Person’s Overall Moral Goodness" (with Jessie Sun, in draft): We describe the formidable conceptual and methodological challenges that would need to be overcome to design an accurate measure of a person's overall moral goodness.

Blog post: "Four Aspects of Harmony" (Nov 28): I find myself increasingly drawn toward a Daoist inspired ethics of harmony. This is one of a series of posts in which I explore the extent to which such a view might be workable by mainstream Anglophone secular standards.


Philosophical Science Fiction:

Edited anthology: Best Philosophical Science Fiction in the History of All Earth (co-edited with Rich Horton and Helen De Cruz; under contract with MIT Press): A collection of previously published stories that aspires to fulfill the ridiculously ambitious working title.

Op-ed: ""Severance", "The Substance", and Our Increasingly Splintered Selves", New York Times (Jan 17): The TV show "Severance" and the movie "The Substance" challenge ideas of a unified self in distinct ways that resonate with the increased splintering in our technologically mediated lives.

New story: "Guiding Star of Mall Patroller 4u-012" (2025), Fusion Fragment, 24, 43-63. Robot rights activists liberate a mall patroller robot, convinced that it is conscious. The bot itself isn't so sure.

Reprinted story: "How to Remember Perfectly" (2025 reprint in Think Weirder 01: Year's Best Science Fiction Ideas, ed. Joe Stech, originally in Clarkesworld, 2024). Two octogenarians rediscover youthful love through technological emotional enhancement and memory alteration.


Other Academic Publications:

"The Washout Argument Against Longtermism" (forthcoming in Utilitas): A commentary on William MacAskill's What We Owe the Future. We cannot be justified in believing that any actions currently available to us will have a non-negligible positive influence a billion or more years in the future.

"The Necessity of Construct and External Validity for Deductive Causal Inference" (with Kevin Esterling and David Brady, 2025), Journal of Causal Inference, 13: 20240002: We show that ignoring construct and external validity in causal identification undermines the Credibility Revolution’s goal of understanding causality deductively.

"Is Being Conscious Like Having the Lights Turned On?", commentary on Andrew Y. Lee's "The Light and the Room", for D. Curry and L. Daoust, eds., Introducing Philosophy of Mind, Today (forthcoming with Routledge): The metaphor invites several dubious commitments.

"Good Practices for Improving Representation in Philosophy Departments" (one of five authors, 2025), Philosophy and the Black Experience, 24 (2), 7-21: A list of recommended practices honed by feedback from hundreds of philosophers and endorsed by the APA's Committee on Inclusiveness.

Translated into Portuguese as a book: My Stanford Encyclopedia entry on Introspection.

Blog post: "Letting Pass" (Oct 30): A reflection on mortality.

Blog post: "The Awesomeness of Bad Art" (May 16): A world devoid of weird, wild, uneven artistic flailing would be a lesser world. Let a thousand lopsided flowers bloom.

Blog post: "The 253 Most Cited Works in the Stanford Encyclopedia of Philosophy" (Mar 28): Citation in the SEP is probably the most accurate measure of influence in mainstream Anglophone philosophy -- better than Google Scholar and Web of Science.

-----------------------------------------

In all, 2025 was an unusually productive writing year, though I worry I may be spreading myself too thin. I can't resist chasing new thoughts and arguments. I have an idea; I want to think about it; I think by writing.

May 2026 be as fertile!