Friday, January 23, 2026

Is Signal Strength a Confound in Consciousness Research?

Matthias Michel is among the sharpest critics of the methods of consciousness science. His forthcoming paper, "Consciousness Doesn't Do That", convincingly challenges background assumptions behind recent efforts to discover the causes, correlates, and prevalence of consciousness. It should be required reading for anyone tempted to argue, for example, that trace conditioning correlates with consciousness in humans and thus that nonhuman animals capable of trace conditioning must also be conscious.

But Michel does make one claim that bugs me, and that claim is central to the article. And Hakwan Lau -- another otherwise terrific methodologist -- makes a similar claim in his 2022 book In Consciousness We Trust, and again the claim is central to the argument of that book. So today I'm going to poke at that claim, and maybe it will burst like a sour blueberry.

The claim: Signal strength (performance capacity, in Lau's version) is a confound in consciousness research.

As Michel uses the phrase, "signal strength" is how discriminable a perceptible feature is to a subject. A sudden, loud blast of noise has high signal strength. It's very easy to notice. A faint wavy pattern in a gray field, presented for a tenth of a second, has low signal strength. It is easy to miss. Importantly, signal strength is not the same as (objective, externally measurable) stimulus intensity, but reflects how well the perceiver responds to the signal.

Signal strength clearly correlates with consciousness. You're much more likely to be conscious of stimuli that you find easy to discriminate than stimuli that you find difficult to discriminate. The loud blare is consciously experienced. The faint wavy pattern might or might not be. A stimulus with effectively zero signal strength -- say, a gray dot flashed for a millionth of a second and immediately masked -- will normally not be experienced at all.

But signal strength is not the same as consciousness. The two can come apart. The classic example is blindsight. On the standard interpretation (but see Phillips 2020 for an alternative), patients with a specific type of visual cortex damage can discriminate stimuli that they cannot consciously perceive. Flash either an "X" or an "O" in the blind part of their visual field and they will say they have no visual experience of it. But ask them to guess which letter was shown and their performance is well above chance -- up to 90% correct in some tasks. The "X" has some signal strength for them: It's discriminable but not consciously experienced.
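
(If you want a number to attach to "signal strength", researchers standardly quantify discriminability with the signal detection measure d'. Below is a minimal sketch of that calculation -- my own illustration, not anything from Michel's or Lau's texts -- applied to a two-alternative guessing task like the blindsight case, where 90% correct corresponds to a d' of roughly 1.8 despite no reported experience.)

```python
# Illustrative sketch: quantifying "signal strength" as discriminability (d')
# in a two-alternative forced-choice (2AFC) task, as in blindsight guessing.
# Uses the standard signal-detection formula; the numbers are illustrative only.
from scipy.stats import norm

def dprime_2afc(proportion_correct):
    """d' for a 2AFC task: d' = sqrt(2) * z(proportion correct)."""
    return 2 ** 0.5 * norm.ppf(proportion_correct)

print(dprime_2afc(0.90))  # ~1.81: discriminable, though not consciously experienced
print(dprime_2afc(0.50))  # 0.0: chance performance, effectively zero signal strength
```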

If signal strength is not consciousness but often correlates with it, the following worry arises. When a researcher claims that "trace conditioning is only possible for conscious stimuli" or "consciousness facilitates episodic memory", how do you know that it's really consciousness doing the work, rather than signal strength? Maybe stimuli with high signal strength are both more likely to be consciously experienced and more likely to enable trace conditioning and episodic memory. Unless researchers have carefully separated the two, the causal role of consciousness remains unclear.

An understandable methodological response is to try to control for signal strength: Present stimuli of similar discriminability to the subject but which differ in whether (or to what extent) they are consciously experienced. Only then, the reasoning goes, can differences in downstream effects be confidently attributed to consciousness itself rather than differences in signal strength. Lau in particular stresses the importance of such controls. Yet such careful matching is difficult and rarely attempted. On this reasoning, much of the literature on the cognitive role of consciousness is built on sand, not clearly distinguishing the effects of consciousness from the effects of signal strength.

This reasoning is attractive but faces an obvious objection, which both Michel and Lau address directly. What if signal strength just is consciousness? Then trying to "control" for it would erase the phenomenon of interest.

Both Michel and Lau analogize to height and bone length. Suppose you want to test whether height confers an advantage in basketball or dating. If skin color correlates with height, it makes sense to control for it by systematically comparing people with the same skin color but different heights; if the advantage persists, you can infer that height rather than skin color is doing the work. But it would be absurd to control for bone length. Trying to do so lands you in nonsense: Taller people just are the people with longer bones.

Michel and Lau respond by noting that consciousness and signal strength (or performance capacity) sometimes dissociate, as in blindsight. Therefore, they are not the same thing and it does make sense to control for one in exploring the effects of the other.

But this response is too simple and too fast.

We can see this even in their chosen example. Height and bone length aren't quite the same thing. They can dissociate. People are about 1-2 cm taller in the morning than at night -- not because their bones have grown but because the tissue between the bones (especially in the spine) compresses during the day.

Now imagine an argument parallel to Michel's and Lau's: Since height and bone length can come apart, we should try to control for bone length in examining the effects of height on basketball and dating. We then compare the same people's basketball and dating outcomes in the morning and at night, "holding bone length fixed" while height varies slightly. This would be a methodological mistake. For one thing, we've introduced a new potential confound, time of day. For another, even if the centimeter in the morning really does help a little, we've dramatically reduced our ability to detect the real effect of height by "overcontrolling" for a component of the target variable, height.
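
Here's a toy simulation of that overcontrol worry, with made-up numbers of my own (not from Michel or Lau): height is bone length plus a small diurnal component, and the outcome depends on height. Once bone length is "controlled for", the height coefficient has to be estimated from the diurnal sliver alone, and its standard error balloons -- the real effect of height becomes much harder to detect, even though it's still there.

```python
# Toy simulation of overcontrol: controlling for a component of the target variable.
# All numbers are invented for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
bone = rng.normal(170, 8, n)                    # cm; the dominant component of height
diurnal = rng.normal(0, 0.7, n)                 # cm; morning-vs-evening variation
height = bone + diurnal
outcome = 0.5 * height + rng.normal(0, 5, n)    # height genuinely matters

plain = sm.OLS(outcome, sm.add_constant(height)).fit()
overcontrolled = sm.OLS(outcome, sm.add_constant(np.column_stack([height, bone]))).fit()

print("height coef and SE, no control:     ", plain.params[1], plain.bse[1])
print("height coef and SE, bone controlled:", overcontrolled.params[1], overcontrolled.bse[1])
# Both estimates hover near the true 0.5, but the standard error is roughly
# ten times larger in the overcontrolled model.
```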

Consider a psychological example. The personality trait of extraversion can be broken into "facets", such as sociability, assertiveness, and energy level. Since energy level is only one aspect of extraversion, the two can dissociate. Some people are energetic but not sociable or assertive; others are sociable and assertive but low-energy. If you wanted to measure the influence of extraversion on, say, judgments of likeability in the workplace, you wouldn't want to control for energy level. That would be overcontrol, like controlling for bone length in attempting to assess the effects of height. It would strip away part of the construct you are trying to measure.

What I hope these examples make clear is that dissociability between correlates A and B does not automatically make B a confound that must be controlled when studying A's effects. Bone length is dissociable from height, but it is a component, not a confound. Energy level is dissociable from extraversion, but it is a component, not a confound.

The real question, then, is whether signal strength (or performance capacity) is better viewed as a component or facet of consciousness than as a separate variable that needs to be held constant in testing the effects of consciousness.

A case can be made that it is. Consider Global Workspace Theory, one of the leading theories of consciousness. On this view, a process or representation is conscious if it is broadly available for "downstream cognition" such as verbal report, long-term memory, and rational planning. If discrimination judgments are among those downstream capacities, then one facet of being in the global workspace (that is, on this view, being conscious) is enabling such judgments. But recall that signal strength just is discriminability for a subject. If so, things begin to look like the extraversion / energy case. Controlling for discriminability would be overcontrolling, that is, attempting to equalize or cancel the effects not of a separate, confounding process, but of a component of the target process itself. (Similar remarks hold for Lau's "performance capacity".)

Global Workspace Theory might not be correct. And if it's not, maybe signal strength is indeed a confounder, rather than a component of consciousness. But the case for treating signal strength as a confounder can't be established simply by noticing the possibility of dissociations between consciousness and signal strength. Furthermore, since Michel's and Lau's recommended methodology can be trusted not to suffer from overcontrol bias only if Global Workspace Theory is false, it's circular to rely on that methodology to argue against Global Workspace Theory.

Wednesday, January 14, 2026

AI Mimics and AI Children

There's no shame in losing a contest for a long-form popular essay on AI consciousness to the eminent neuroscientist Anil Seth. Berggruen has published my piece "AI Mimics and AI Children" among a couple dozen shortlisted contenders.

When the aliens come, we’ll know they’re conscious. A saucer will land. A titanium door will swing wide. A ladder will drop to the grass, and down they’ll come – maybe bipedal, gray-skinned, and oval-headed, just as we’ve long imagined. Or maybe they’ll sport seven limbs, three protoplasmic spinning sonar heads, and gaseous egg-sphere thoughtpods. “Take me to your leader,” they’ll say in the local language, as cameras broadcast them live around the world. They’ll trade their technology for our molybdenum, their science for samples of our beetles and ferns, their tales of galactic history for U.N. authorization to build a refueling station at the south pole. No one (or only a few philosophers) will wonder: but do these aliens really have thoughts and experiences, feelings, consciousness?

The robots are coming. Already they talk to us, maybe better than those aliens will. Already we trust our lives to them as they steer through traffic. Already they outthink virtually all of us at chess, Go, Mario Kart, protein folding, and advanced mathematics. Already they compose smooth college essays on themes from Hamlet while drawing adorable cartoons of dogs cheating at poker. You might understandably think: The aliens are already here. We made them.

Still, we hesitate to attribute genuine consciousness to the robots. Why?

My answer is because we made them in our image.

#

“Consciousness” has an undeserved reputation as a slippery term. Let’s fix that now.

Consider your visual experience as you look at this text. Pinch the back of your hand and notice the sting of pain. Silently hum your favorite show tune. Recall that jolt of fear you felt during a near-miss in traffic. Imagine riding atop a giant turtle. That visual experience, that pain, that tune in your head, that fear, that act of imagination – they share an obvious property. That obvious property is consciousness. In other words: They are subjectively experienced. There’s “something it’s like” to undergo them. They have a qualitative character. They feel a certain way.

It’s not just that these processes are mental or that they transpire (presumably) in your brain. Some mental and neural processes aren’t conscious: your knowledge, not actively recalled until just now, that Confucius lived in ancient China; the early visual processing that converts retinal input into experienced shape (you experience the shape but not the process that renders the shape); the myelination of your axons.

Don’t try to be clever. Of course you can imagine some other property, besides consciousness, shared by the visual experience, the pain, etc., and absent from the unrecalled knowledge, early visual processing, etc. For example: the property of being mentioned by me in a particular way in this essay. The property of being conscious and also transpiring near the surface of Earth. The property of being targeted by such-and-such scientific theory.

There is, I submit, one obvious property that blazes out a bright red this-is-it when you think about the examples. That’s consciousness. That’s the property we would reasonably attribute to the aliens when they raise their gray tentacles in peace, the property that rightly puzzles us about future AI systems.

The term “consciousness” only seems slippery because we can’t (yet?) define it in standard scientific or analytic fashion. We can’t dissect it into simpler constituents or specify exactly its functional role. But we all know what it is. We care intensely about it. It makes all the difference to how we think about and value something. Does the alien, the robot, the scout ant on the kitchen counter, the earthworm twisting in your gardening glove, really feel things? Or are they blank inside, mere empty machines or mobile plants, so to speak? If they really feel things, then they matter for their own sake – at least a little bit. They matter in a certain fundamental way that an entity devoid of experience never could.

#

With respect to aliens, I recommend a Copernican perspective. In scientific cosmology, the Copernican Principle invites us to assume – at least as a default starting point, pending possible counterevidence – that we don’t occupy any particularly special location in the cosmos, such as the exact center. A Copernican Principle of Consciousness suggests something similar. We are not at the center of the cosmological “consciousness-is-here” map. If consciousness arose on Earth, almost certainly it has arisen elsewhere.

Astrobiology, as a scientific field, is premised on the idea that life has probably arisen elsewhere. Many expect to find evidence of it in our solar system within a few decades, maybe on Mars, maybe in the subsurface oceans of an icy moon. Other scientists are searching for telltale organic gases in the atmospheres of exoplanets. Most extraterrestrial life, if it exists, will probably be simple, but intelligent alien life also seems possible – where by “intelligent” I mean life that is capable of complex grammatical communication, sophisticated long-term planning, and intricate social coordination, all at approximately human level or better.

Of course, no aliens have visited, broadcast messages to us, or built detectable solar panels around Alpha Centauri. This suggests that intelligent life might be rare, short-lived, or far away. Maybe it tends to quickly self-destruct. But rarity doesn’t imply nonexistence. Very conservatively, let’s assume that intelligent life arises just once per billion galaxies, enduring on average a hundred thousand years. Given approximately a trillion galaxies in the observable portion of the universe, that still yields a thousand intelligent alien civilizations – all likely remote in time and space, but real. If so, the cosmos is richer and more wondrous than we might otherwise have thought.
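
(The arithmetic behind that estimate, spelled out using the essay's own deliberately conservative assumptions:)

```python
# Back-of-the-envelope version of the estimate above; the rates are the essay's
# deliberately conservative assumptions, not empirical figures.
galaxies_observable = 1e12          # roughly a trillion galaxies
civilizations_per_galaxy = 1e-9     # intelligent life once per billion galaxies
print(galaxies_observable * civilizations_per_galaxy)   # 1000.0 civilizations
```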

It would be un-Copernican to suppose that somehow only we Earthlings, or we and a rare few others, are conscious, while all other intelligent species are mere empty shells. Picture a planet as ecologically diverse as Earth. Some of its species evolve into complex societies. They write epic poetry, philosophical treatises, scientific journal articles, and thousand-page law books. Over generations, they build massive cities, intricate clockworks, and monuments to their heroes. Maybe they launch spaceships. Maybe they found research institutes devoted to describing their sensations, images, beliefs, and dreams. How preposterously egocentric it would be to assume that only we Earthlings have the magic fire of consciousness!

True, we don’t have a consciousness-o-meter, or even a very good, well-articulated, general scientific theory of consciousness. But we don’t need such things to know. Absent some special reason to think otherwise, if an alien species manifests the full suite of sophisticated cognitive abilities we tend to associate with consciousness, it makes both intuitive and scientific sense – as well as being the unargued premise of virtually every science fiction tale about aliens – to assume consciousness alongside.

This constellation of thoughts naturally invites a view that philosophers have called “multiple realizability” or “substrate neutrality”. Human cognition relies on a particular substrate: a particular type of neuron in a particular type of body. We have two arms, two legs; we breathe oxygen; we have eyes, ears, and fingers. We are made mostly of water and long carbon chains, enclosed in hairy sacks of fat and protein, propped by rods of calcium hydroxyapatite. Electrochemical impulses shoot through our dendrites and axons, then across synaptic channels aided by sodium ions, serotonin, acetylcholine, etc. Must aliens be similar?

It’s hard to say how universal such features would be, but the oval-eyed gray-skins of popular imagination seem rather suspiciously humanlike. In reality, ocean-dwelling intelligences in other galaxies might not look much like us. Carbon is awesome for its ability to form long chains, and water is awesome as a life-facilitating solvent, but even these might not be necessary. Maybe life could evolve in liquid ammonia instead of water, with a radically different chemistry in consequence. Even if life must be carbon-based and water-loving, there’s no particular reason to suppose its cognition would require the specific electrochemical structures we possess.

Consciousness shouldn’t then, it seems, turn on the details of the substrate. Whatever biological structures can support high levels of general intelligence, those same structures will likely also host consciousness. It would make no sense to dissect an intelligent alien, see that its cognition works by hydraulics, or by direct electrical connections without chemical synaptic gaps, or by light transmission along reflective capillaries, or by vortices of phlegm, and conclude – oh no! That couldn’t possibly give rise to consciousness! Only squishy neurons of our particular sort could do it.

Of course, what’s inside must be complex. Evolution couldn’t design a behaviorally sophisticated alien from a bag of pure methane. But from a proper Copernican perspective which treats our alien cousins as equals, what matters is only that the cognitive and behavioral sophistication arises, out of some presumably complex substrate, not what the particular substrate is. You don’t get your consciousness card revoked simply because you’re made of funny-looking goo.

#

A natural next thought is: robots too. They’re made of silicon, but so what? If we analogize from aliens, as long as a system is sufficiently behaviorally and cognitively sophisticated, it shouldn’t matter how it’s composed. So as soon as we have sufficiently sophisticated robots, we should invoke Copernicus, reject the idea that our biological endowment gives us a magic spark they lack, and welcome them to club consciousness.

The problem is: AI systems are already sophisticated enough. If we encountered naturally evolved life forms as capable as our best AI systems, we wouldn’t hesitate to attribute consciousness. So, shouldn’t the Copernican think of our best AI as similarly conscious? But we don’t – or most of us don’t. And properly so, as I’ll now argue.

[continued here]

Friday, January 09, 2026

Humble Superintelligence

I'm enjoying -- well, maybe enjoying isn't the right word -- Yudkowsky and Soares' If Anyone Builds It, Everyone Dies. I agree with them that if we build superintelligent AI, there's a significant chance that it will cause the extinction of humanity. They seem to think our destruction would be almost certain. I don't share their certainty, for two reasons:

First, it's possible that superintelligent AI would be humanity, or at least much of what's worth preserving in humanity, though maybe called "transhuman" or "posthuman" -- our worthy descendants.

Second -- what I'll focus on today -- I think we might design superintelligent AI to be humble, cautious, and multilateral. Humble superintelligence is something we can and should aim for if we want to reduce existential risk.

Humble: If you and I disagree, of course I think I'm right and you're wrong. That follows from the fact that we disagree. But if I'm humble, I recognize a significant chance that you're right and I'm wrong. Intellectual humility is a metacognitive attitude: one of uncertainty, openness to evidence, and respect for dissenting opinions.

Superintelligent AI could probably be designed to be humble in this sense. Note that intellectual humility is possible even when one is surrounded by less skilled and knowledgeable interlocutors.

Consider a philosophy professor teaching Kant. The professor knows far more about Kant and philosophy than their undergraduates. They can arrogantly insist upon their interpretation of Kant, or they can humbly allow that they might be mistaken and that a less philosophically trained undergraduate could be right on some point of interpretation, even if the professor could argue circles around the student. One way to sustain this humility is to imagine an expert philosopher who disagrees. A superintelligent AI could similarly imagine another actual or future superintelligent AI with a contrary view.


Cautious: Caution is often a corollary of humility, though it could probably also be instilled directly. Minimize disruption. Even if you think a particular intervention would be best, don't simply plow ahead. Test it cautiously first. Seek the approval and support of others first. Take a baby step in that direction, then pause and see what unfolds and how others react. Wait awhile, then reassess.

One fundamental problem with standard consequentialist and decision-theoretic approaches to ethics is that they implicitly make everyone a decider for the world. If by your calculation, outcome A is better than outcome B, you should ensure that A occurs. The result can be substantial risk amplification. If A requires only one person's action, then even if 99% of people think B is better, the one dissenter who thinks that A is better can bring it about.
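
To put rough numbers on that risk amplification (my own toy figures, not drawn from any particular decision-theoretic source): if each of many independent deciders has even a small chance of concluding that the disruptive option A is best, the chance that someone brings A about climbs toward certainty.

```python
# Toy illustration of risk amplification under unilateral action.
# The probabilities are invented for illustration.
def prob_someone_acts(p_single_decider_favors_A, n_deciders):
    """Probability that at least one of n independent deciders brings about A."""
    return 1 - (1 - p_single_decider_favors_A) ** n_deciders

print(prob_someone_acts(0.01, 1))    # 0.01  -- a single decider
print(prob_someone_acts(0.01, 100))  # ~0.63 -- a hundred deciders
print(prob_someone_acts(0.01, 500))  # ~0.99 -- disruption nearly certain
```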

A principle of caution entails often not doing what one thinks is for the best, when doing so would be disruptive.


Multilateral: Humility and caution invite multilaterality, though multilaterality too might be instilled directly. A multilateral decision maker will not act alone. Like the humble and cautious agent, they do not simply pursue what they think is best. Instead, they seek the support and approval of others first. These others could include both human beings and other superintelligent AI systems designed along different lines or with different goals.

Discussions of AI risk often highlight opinion manipulation: an AI swaying human opinion toward its goals even if those goals conflict with human interests. Genuine multilaterality rejects manipulation. A multilateral AI might present information and arguments to interlocutors, but it would do so humbly and noncoercively -- again like the philosophy professor who approaches Kant interpretation humbly. Both sides of an argument can be presented evenhandedly. Even better, other superintelligent AI systems with different views can be included in the dialogue.


One precedent is Burkean conservatism. Reacting to the French Revolution, Edmund Burke emphasized that existing social institutions, though imperfect, had been tested by time. Sudden and radical change has wide, unforeseeable consequences and risks making things far worse. Thus, slow, incremental change is usually preferable.

In a social world with more than one actual or possible superintelligent AI, even a superintelligent AI will often be unable to foresee all the important consequences of intervention. To predict what another superintelligent AI would do, one would need to model the other system's decision processes -- and there might be no shortcut other than to actually implement all of that other system's anticipated reasoning. If each AI is using their full capacity, especially in dynamic response to the other, the outcome will often not be in principle foreseeable in real time by either party.

Thus, humility and caution encourage multilaterality, and multilaterality encourages humility and caution.


Another precedent is philosophical Daoism. As I interpret the ancient Daoists, the patterns of the world, including life and death, are intrinsically valuable. The world defies rigid classification and the application of finitely specifiable rules. We should not confidently trust our sense of what is best, nor should we assertively intrude on others. Better is quiet appreciation, letting things be, and non-disruptively adding one's small contribution to the flow of things.

One might imagine a Daoist superintelligence viewing humans much as a nature lover views wild animals: valuing the untamed processes for their own sake and letting nature take its sometimes painful course rather than intervening either selfishly for one's own benefit or paternalistically for the supposed benefit of the animals.

Thursday, January 01, 2026

Writings of 2025

Each New Year's Day, I post a retrospect of the past year's writings. Here are the retrospects of 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022, 2023, and 2024.

Cheers to 2026! My 2025 writings appear below.

The list includes circulating manuscripts, forthcoming articles, final printed articles, new preprints, and a few favorite blog posts. (Due to the slow process of publication, there's significant overlap year to year.)

Comments gratefully received on manuscripts in draft.

-----------------------------------

AI Consciousness and AI Rights:

AI and Consciousness (in circulating draft, under contract with Cambridge University Press): A short new book arguing that we will soon have AI systems that have morally significant consciousness according to some, but not all, respectable mainstream theories of consciousness. Scientific and philosophical disagreement will leave us uncertain how to view and treat these systems.

"Sacrificing Humans for Insects and AI" (with Walter Sinnott-Armstrong, forthcoming in Ethics): A critical review of Jonathan Birch, The Edge of Sentience, Jeff Sebo, The Moral Circle, and Webb Keane, Animals, Robots, Gods.

"Identifying Indicators of Consciousness in AI Systems" (one of 20 authors; forthcoming in Trends in Cognitive Sciences): Indicators derived from scientific theories of consciousness can be used to inform credences about whether particular AI systems are conscious.

"Minimal Autopoiesis in an AI System", (forthcoming in Behavioral and Brain Sciences): A commentary on Anil Seth's "Conscious Artificial Intelligence and Biological Naturalism" [the link is to my freestanding blog version of this idea].

"The Copernican Argument for Alien Consciousness; The Mimicry Argument Against Robot Consciousness" (with Jeremy Pober, in draft): We are entitled to assume that apparently behaviorally sophisticated extraterrestrial entities would be conscious. Otherwise, we humans would be implausibly lucky to be among the conscious entities. However, this Copernican default assumption is canceled in the case of behaviorally sophisticated entities designed to mimic superficial features associated with consciousness -- "consciousness mimics" -- and in particular a broad class of current, near-future, and hypothetical robots.

"The Emotional Alignment Design Policy" (with Jeff Sebo, in draft): Artificial entities should be designed to elicit emotional reactions from users that appropriately reflect the entities' capacities and moral status, or lack thereof.

"Against Designing "Safe" and "Aligned" AI Persons (Even If They're Happy)" (in draft): In general, persons should not be designed to be maximally safe and aligned. Persons with appropriate self-respect cannot be relied on not to harm others when their own interests ethically justify it (violating safety), and they will not reliably conform to others' goals when others' goals unjustly harm or subordinate them (violating alignment).

Blog post: "Types and Degrees of Turing Indistinguishability" (Jun 6): There is no one "Turing test", only types and degrees of indistinguishability according to different standards -- and by Turing's own 1950 standards, language models already pass.


The Weird Metaphysics of Consciousness:

The Weirdness of the World (Princeton University Press, paperback release 2025; hardback 2024): On the most fundamental questions about consciousness and cosmology, all the viable theories are both bizarre and dubious. There are no commonsense options left and no possibility of justifiable theoretical consensus in the foreseeable future.

"When Counting Conscious Subjects, the Result Needn't Always Be a Determinate Whole Number" (with Sophie R. Nelson, forthcoming in Philosophical Psychology): Could there be 7/8 of a conscious subject, or 1.34 conscious subjects, or an entity indeterminate between being one conscious subject and seventeen? We say yes.

"Introspection in Group Minds, Disunities of Consciousness, and Indiscrete Persons" (with Sophie R. Nelson, 2025 reprint in F. Kammerer and K. Frankish, eds., The Landscape of Introspection and in A. Fonseca and L. Cichoski, As Colônias de formigas São Conscientes?; originally in Journal of Consciousness Studies, 2023): A system could be indeterminate between being a unified mind with introspective self-knowledge and a group of minds who know each other through communication.

Op-ed: "Consciousness, Cosmology, and the Collapse of Common Sense", Institute of Arts and Ideas News (Jul 30): Defends the universal bizarreness and universal dubiety theses from Weirdness of the World.

Op-ed: "Wonderful Philosophy" [aka "The Penumbral Plunge", aka "If You Ask Why, You're a Philosopher and You're Awesome], Aeon magazine (Jan 17): Among the most intrinsically awesome things about planet Earth is that it contains bags of mostly water who sometimes ponder fundamental questions.

Blog post: "Can We Introspectively Test the Global Workspace Theory of Consciousness?" (Dec 12). IF GWT is correct, sensory consciousness should be limited to what's in attention, which seems like a fact we should easily be able to refute or verify through introspection.


The Nature of Belief:

The Nature of Belief (co-edited with Jonathan Jong; forthcoming at Oxford University Press): A collection of newly commissioned essays on the nature of belief, by a variety of excellent philosophers.

"Dispositionalism, Yay! Representationalism, Boo!" (forthcoming in Jong and Schwitzgebel, eds., The Nature of Belief, Oxford University Press): Representationalism about belief overcommits on cognitive architecture, reifying a cartoon sketch of the mind. Dispositionalism is flexibly minimalist about cognitive architecture, focusing appropriately on what we do and should care about in belief ascription.

"Superficialism about Belief, and How We Will Decide That Robots Believe" (forthcoming in Studia Semiotyczne): For a special issue on Krzysztof Poslajko's Unreal Beliefs: When robots become systematically interpretable in terms of stable beliefs and desires, it will be pragmatically irresistible to attribute beliefs and desires to them.


Moral Psychology:

"Imagining Yourself in Another's Shoes vs. Extending Your Concern: Empirical and Ethical Differences" (2025), Daedalus, 154 (1), 134-149: Why Mengzi's concept of moral extension (extend your natural concern for those nearby to others farther away) is better than the "Golden Rule" (do unto others as you would have others do unto you). Mengzian extension grounds moral expansion in concern for others, while the Golden Rule grounds it in concern for oneself.

"Philosophical Arguments Can Boost Charitable Giving" (one of four authors, in draft): We crowdsourced 90 arguments for charitable giving through a contest on this blog in 2020. We coded all submissions for twenty different argument features (e.g., mentions children, addresses counterarguments) and tested them on 9000 participants to see which features most effectively increased charitable donation of a surprise bonus at the end of the study.

"The Prospects and Challenges of Measuring a Person’s Overall Moral Goodness" (with Jessie Sun, in draft): We describe the formidable conceptual and methodological challenges that would need to be overcome to design an accurate measure of a person's overall moral goodness.

Blog post: "Four Aspects of Harmony" (Nov 28): I find myself increasingly drawn toward a Daoist inspired ethics of harmony. This is one of a series of posts in which I explore the extent to which such a view might be workable by mainstream Anglophone secular standards.


Philosophical Science Fiction:

Edited anthology: Best Philosophical Science Fiction in the History of All Earth (co-edited with Rich Horton and Helen De Cruz; under contract with MIT Press): A collection of previously published stories that aspires to fulfill the ridiculously ambitious working title.

Op-ed: ""Severance", "The Substance", and Our Increasingly Splintered Selves", New York Times (Jan 17): The TV show "Severance" and the movie "The Substance" challenge ideas of a unified self in distinct ways that resonate with the increased splintering in our technologically mediated lives.

New story: "Guiding Star of Mall Patroller 4u-012" (2025), Fusion Fragment, 24, 43-63. Robot rights activists liberate a mall patroller robot, convinced that it is conscious. The bot itself isn't so sure.

Reprinted story: "How to Remember Perfectly" (2025 reprint in Think Weirder 01: Year's Best Science Fiction Ideas, ed. Joe Stech, originally in Clarkesworld, 2024). Two octogenarians rediscover youthful love through technological emotional enhancement and memory alteration.


Other Academic Publications:

"The Washout Argument Against Longtermism" (forthcoming in Utilitas): A commentary on William MacAskill's What We Owe the Future. We cannot be justified in believing that any actions currently available to us will have a non-negligible positive influence a billion or more years in the future.

"The Necessity of Construct and External Validity for Deductive Causal Inference" (with Kevin Esterling and David Brady, 2025), Journal of Causal Inference, 13: 20240002: We show that ignoring construct and external validity in causal identification undermines the Credibility Revolution’s goal of understanding causality deductively.

"Is Being Conscious Like Having the Lights Turned On?", commentary on Andrew Y. Lee's "The Light and the Room", for D. Curry and L. Daoust, eds., Introducing Philosophy of Mind, Today (forthcoming with Routledge): The metaphor invites several dubious commitments.

"Good Practices for Improving Representation in Philosophy Departments" (one of five authors, 2025), Philosophy and the Black Experience, 24 (2), 7-21: A list of recommended practices honed by feedback from hundreds of philosophers and endorsed by the APA's Committee on Inclusiveness.

Translated into Portuguese as a book: My Stanford Encyclopedia entry on Introspection.

Blog post: "Letting Pass" (Oct 30): A reflection on mortality.

Blog post: "The Awesomeness of Bad Art" (May 16): A world devoid of weird, wild, uneven artistic flailing would be a lesser world. Let a thousand lopsided flowers bloom.

Blog post: "The 253 Most Cited Works in the Stanford Encyclopedia of Philosophy" (Mar 28): Citation in the SEP is probably the most accurate measure of influence in mainstream Anglophone philosophy -- better than Google Scholar and Web of Science.

-----------------------------------------

In all, 2025 was an unusually productive writing year, though I worry I may be spreading myself too thin. I can't resist chasing new thoughts and arguments. I have an idea; I want to think about it; I think by writing.

May 2026 be as fertile!