Thursday, February 19, 2026

Disunity and Indeterminacy in Artificial Consciousness (and Maybe in Human Consciousness Too)

Our understanding of the nature of consciousness derives mainly from our understanding of the nature of consciousness in our favorite animal (us, of course). But the features of consciousness in our favorite animal might be specific to that animal rather than universal.

Let's consider two such features and whether we should expect them in conscious AI systems, if conscious AI systems are ever possible.

Unity: Our conscious experiences at any given moment are bound together into a single unified experience, rather than transpiring in separate streams. If I'm sitting on a wet park bench, I might (a) visually experience the leafy green trees around me, (b) tactilely experience the cold dampness soaking into my jeans, and (c) consciously recall the smaller trees of yesteryear. Normally -- perhaps necessarily -- three such experiences would not run in disconnected streams. They would join into a composite experience of (a)-with-(b)-with-(c). I experience not just trees, cold dampness, and a memory of yesteryear, but all three together as a unified bundle.

Determinacy: At any given moment, I am either determinately conscious or determinately nonconscious (as in anesthesia or dreamless sleep). Likewise, I either determinately do, or determinately do not, have any particular experience. Gray-area cases are at least unusual and maybe impossible. Even the simplest, barest cases are still determinate. Consider visual experience: We might imagine the visual field narrowing and losing content until only a gray dot remains -- and then the dot winks out. That dot, however minimal, is still determinately experienced. When it winks out, consciousness determinately disappears. There is no half-winked state between the minimal gray dot and complete absence of visual experience.

My thought is that we should not expect unity and determinacy to be general features of conscious AI systems (if conscious AI is possible). To see why, let's start by assuming the Global Workspace Theory of consciousness. I focus on Global Workspace Theory because it's probably the leading scientific theory of consciousness and because its standard formulation (Dehaene's version) invites the assumption of unity and determinacy.

Global Workspace Theory divides the mind into local information processing modules linked by a shared global workspace. Information becomes conscious when it is broadcast into the workspace. Suppose your auditory system registers the faint honk of a distant car horn. You're absorbed in reading philosophy and accustomed to ignoring traffic noise, so this representation isn't selected for further processing. It's not a target of attention, not broadcast into the workspace, and not consciously experienced. (If you think you constantly consciously experience background sounds, you can't hold a standard Global Workspace view.) Once you attend to the noise, for whatever reason, that information "ignites" into the global workspace, becoming available to a wide variety of "downstream" processes: You can think about it, plan around it, verbally report it, store it in long-term memory, and flexibly combine it with other information in the workspace. On Global Workspace Theory, being available in this way just is what it is for the information to be consciously experienced.

This model suggests unity and determinacy. Since there is just one global workspace, and since that workspace enables flexible integration of everything it contains, it makes sense that its various elements will combine into a unified experience. And on Dehaene's version, ignition into the workspace is a sharp-boundaried event: Information either completely ignites, becoming available for all downstream processes, or it does not. There is no (or only rarely) partial ignition. This can explain determinacy.

But future AI systems might not share this structure. They might have multiple or partially overlapping workspaces. Different specialized subsystems might have access to different regions of a partly-shared workspace. Some animals, such as snails and octopuses, distribute processing among multiple ganglia or neural centers that are less tightly coupled than the hemispheres of the human brain. A robot might broadcast information relevant to locomotion to one area and information relevant to speech to another with limited connectivity.

If the subsystems are entirely disconnected, the result might be entirely discrete centers of subjective experience within a single organism or machine. But if they are partly connected, experience might be only partly unified. In the park bench example, the experience of the trees might be unified with the experience of dampness, and the experience of dampness with memories of yesteryear, but the experience of the trees might not be unified with the memories. (Unification would not then be a transitive relation.) Alternatively, some weaker relation of partial unification might hold among the visual, tactile, and memorial experiences. If this seems inconceivable or impossible, see Sophie Nelson's and my article on indeterminate or fractional subjects.

More abstractly: There's no compelling architectural reason why an AI system would have to make information available either to all downstream processes or to none. A workspace defined in terms of downstream availability could be a patchwork of partial availabilities rather than a fully global all-or-nothing broadcast.
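The idea of a patchwork workspace can be made concrete with a toy sketch. (This is purely illustrative -- the module names and the data structure are invented for the example, not a model of any actual cognitive architecture or AI system.) The point is simply that nothing forces a broadcast to reach either all downstream consumers or none:

```python
# Toy sketch of a "patchwork" workspace: each broadcast reaches only
# a chosen subset of downstream modules, rather than all-or-nothing
# global availability. Module names are invented for illustration.

MODULES = {"speech", "planning", "memory", "locomotion"}

class PatchworkWorkspace:
    def __init__(self):
        # Maps each item of information to the set of modules it reaches.
        self.availability = {}

    def broadcast(self, item, targets):
        # A broadcast may be partial: targets need not include all modules.
        self.availability[item] = set(targets) & MODULES

    def available_to(self, item, module):
        return module in self.availability.get(item, set())

ws = PatchworkWorkspace()
ws.broadcast("car-horn", {"speech", "memory"})  # partial availability
ws.broadcast("tree-scene", MODULES)             # full "global" availability

assert ws.available_to("car-horn", "speech")
assert not ws.available_to("car-horn", "locomotion")
```

Here "car-horn" is available for verbal report and memory storage but not for action planning or locomotion -- a pattern of availability that is neither full ignition nor full absence.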

For the same reason, ignition into the workspace needn't be all-or-nothing. Between full ignition with determinate consciousness and no ignition with determinate nonconsciousness, there might be in-between, gray-area half-ignitions that are neither determinately conscious nor determinately nonconscious. Nearly every property with a complex physical or functional basis allows indeterminate, borderline cases: baldness, extraversion, greenness, happiness, whether you're wearing a shoe, whether a country is a democracy. The human global workspace might minimize indeterminacy -- much as it's rarely indeterminate in basketball whether the ball has gone through the hoop. But change the architecture and indeterminacy might become common: a half-hearted ignition, or just enough information-sharing to make it indeterminate whether a workspace even exists. (If indeterminacy about consciousness strikes you as inconceivable or impossible, see my 2023 article on borderline consciousness.)

Global Workspace Theory might of course be wrong. But most other theories of consciousness make my argument at least as easy. Dennett's fame-in-the-brain version of broadcast theory explicitly permits disunity and indeterminacy. Higher Order Theories admit the same fragmentation and, probably, gradualism. So do biological theories and theories that focus on embodiment. (Integrated Information Theory is an exception: Its axioms require bright-line unity and determinacy. But as I've argued, those bright-line axioms lead to unpalatable consequences.)

Recognizing these possibilities for AI systems invites the further thought: Maybe we humans aren't quite as unified as we normally suppose. Maybe indeterminate and disunified consciousness is common. Maybe processes outside of attention hover indeterminately between being conscious and nonconscious. Maybe some processes are only partly unified. If it seems otherwise in introspection and memory, maybe that's because introspection and memory tend to impose unity and determinacy where none was before.

[a Paul Klee painting, untitled 1914: source]

Friday, February 13, 2026

The Intrinsic Value of Diversity

Moral diversity, Olivia Bailey and Thi Nguyen say (in a draft paper shared with Myisha Cherry's Emotion and Society Lab), is valuable. It's good that people have different ethical personalities, opinions, and concerns. (Within reason: Nazis not welcome.)

Why? Their reasons are instrumental. Society benefits when people care intensely about different things. This allows us collectively to achieve a wide range of goals -- curing cancer, helping the homeless, protesting unjust government. Society also benefits if some people explore the ethical possibility space, developing unusual moral visions, most of which will be mistaken but a few of which might eventually be recognized as genuine moral advances (think of the first slavery abolitionists). And individuals benefit from the liberty to adopt moral priorities that fit their skills and temperaments: Some people thrive in battle, others in caregiving, others in solitary work.

But is moral diversity also intrinsically valuable -- that is, valuable for its own sake, independent of these good consequences? I think so. I think so because diversity in general is intrinsically valuable, and there's no good reason to treat moral diversity as an exception.

How does one argumentatively establish the intrinsic value of diversity? The only way I know is to reveal, through thought experiment, that you already implicitly accept it -- and then to ward off objections.

Bailey and Nguyen briefly cite Alexander Nehamas on diversity of aesthetic opinion. Nehamas writes:

I think a world where everyone liked, or loved, the same things would be a desperate, desolate world -- as devoid of pleasure and interest as the most frightful dystopia of those who believe (quite wrongly) that the popular media are inevitably producing a depressingly, disconsolately uniform world culture. And although I say this with serious discomfort, a world in which everyone liked Shakespeare, or Titian, or Bach for the same reasons -- if such a world were possible -- appears to me no better than a world where everyone tuned in to Baywatch or listened to the worst pop music at the same time (Nehamas 2002, pp. 58-59).

Why is aesthetic diversity valuable, according to Nehamas? Because style and taste require originality and are bound up with what is distinctive about your life, interests, and sensibility. Without distinctiveness, style and taste collapse -- an aesthetic disaster.

Should we say, then, that diversity, including moral diversity, is valuable aesthetically? That its value lies primarily in its beauty, in its capacity to inspire awe, or some other aesthetic feature? To be sure, diversity is beautiful and awesome (imagine the world without it!), but I don't think this exhausts its intrinsic value. Aesthetic value requires a spectator, at least a notional one, whose appreciation is the point. The intrinsic value of diversity is not, or not primarily, mediated through the hypothetical reaction of an aesthetic spectator.

My favorite approach to thinking about intrinsic value is the Distant Planet Thought Experiment. Imagine a planet on the far side of the galaxy, blocked from view by the galactic core, a planet we'll never see or interact with. What would we hope for on this planet, for its own sake, independent of any potential value for us?

Would you hope that it's a sterile rock, completely devoid of life? I think not. If you do think a lifeless rock would be best, I have no argument against you. For me this is a starting place, a bedrock judgment, which I expect most readers will share.

Suppose, then, that you agree a planet with life would be intrinsically better than one without. Would you hope that its life consists entirely of microbes? Or would you hope that it teems with diverse life: reefs and rainforests, beetles and bats, squid and bees and ferns and foxes -- or rather, not to duplicate Earth too closely, their alien analogues, translated into a different key? I think you'll hope that the planet teems with diverse life.

Would you hope that no life on this planet has humanlike behavioral sophistication -- language, long-term planning, complex social coordination? Would you hope that nothing there could contemplate the meaning of life, the origin of the stars, or its own ancient history? Would you hope that nothing there could create art, or engage in athletic competition, or invent complex games and tricks and jokes? I invite you to join me in thinking otherwise. The planet would be better if it included some beings with that richness of thought and activity.

Would you hope for uniformity of intellectual, aesthetic, and ethical opinion -- that everyone shares the same values and ideas? Or would you hope for diversity? I think you'll join me in thinking that the world would be better, better for its own sake, if it were diverse rather than uniform. Different entities would have different skills, preferences, passions, and ideas. They would fight and disagree (not genocidally, I hope), sometimes value their differences, sometimes dismiss others as completely wrongheaded, sometimes cluster into shared projects, sometimes collaborate across deep disagreement, sometimes be drawn to opposites, sometimes feel kinship with the like-minded, play within and across divides, pursue an enormous variety of projects, explore a vast space of possible forms of life.

That is what I hope for on this distant planet -- not for instrumental reasons (not, for example, because it will maximize happiness), and not merely because it would strike a hypothetical spectator as beautiful and awesome (though it should). Rather, just because it would be valuable for its own sake. An empty void has little or no value; a rich plurality of forms of existence has immense value, no further justification required.

I have not argued for this. I have only stated it vividly, hoping that you already accept it.

Is ethical opinion an exception? Should we prefer unity and conformity in ethics, even while welcoming diversity elsewhere? I think not, for two reasons.

First, ethics is open-textured, indeterminate, and full of tragic dilemmas. Often there is no one decisively best answer on which everyone should converge. Diversity, at least within the bounds of reasonable disagreement, should be permitted.

Second, ethical values are inseparable from our other values and ways of life. A philosophy professor, a civil rights lawyer, a professional athlete, and a farmer will value different things. There is, I think, no point in attempting to cleanly separate their differing values into distinct types, some of which are permitted to vary and others of which may not. The ethical, prudential, epistemic, and aesthetic blur together. These distinctions are not as clean as philosophers often assume. Normativity is a mush.

Oh, some of you disagree? Good!

[the cover of my 2024 book, The Weirdness of the World, hardback version]

Thursday, February 05, 2026

Artificial Intelligence as Strange Intelligence: Against Linear Models of Intelligence (New Paper in Draft)

by Kendra Chilson and Eric Schwitzgebel

Our main idea, condensed to 1000 words:

On a linear model of intelligence, entities can be roughly linearly ordered in overall intelligence: frogs are smarter than nematodes, cats smarter than frogs, apes smarter than cats, and humans smarter than apes. This same linear model is often assumed when discussing AI systems. "Narrow AI" systems (like chess machines and autonomous vehicles) are assumed to be subhuman in intelligence; at some point -- maybe soon -- AI systems will have approximately human-level intelligence; and in the future we might expect superintelligent AI that exceeds our intellectual capacity in virtually all domains of interest.

Building on the work of Susan Schneider, we challenge this linear model of intelligence. Central to our project is the concept of general intelligence as the ability to use information to achieve a wide range of goals in a wide variety of environments.

Of course even the simplest entity capable of using information to achieve goals can succeed in some environments, and no finite entity could succeed in all possible goals in all possible environments. "General intelligence" is therefore a matter of degree. Moreover, general intelligence is a massively multidimensional matter of degree: There are many, many possible goals and many, many possible environments, and no non-arbitrary way to taxonomize and weight all these goals and environments into a single linear scale or definitive threshold.
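The point that no single score falls out of a multidimensional profile can be made vivid with a toy example. (The dimensions and numbers below are entirely made up for illustration; nothing hangs on the particular values.) Neither profile dominates the other on every dimension, so any "overall" ranking depends on an arbitrary choice of weights:

```python
# Toy illustration with invented numbers: capability profiles across a
# few task dimensions. Neither entity dominates the other, so a scalar
# "overall intelligence" score requires an arbitrary weighting -- and
# different weightings reverse the ranking.

profiles = {
    "human":        {"chess": 0.4, "vision": 0.9, "arithmetic": 0.2, "dexterity": 0.9},
    "chess_engine": {"chess": 1.0, "vision": 0.0, "arithmetic": 0.9, "dexterity": 0.0},
}

def dominates(a, b):
    # a dominates b if a is at least as good everywhere and better somewhere.
    return all(a[d] >= b[d] for d in a) and any(a[d] > b[d] for d in a)

def overall(profile, weights):
    # A scalar score exists only relative to some weighting of dimensions.
    return sum(weights[d] * profile[d] for d in profile)

h, c = profiles["human"], profiles["chess_engine"]
assert not dominates(h, c) and not dominates(c, h)  # no weight-free ordering

# Two equally arbitrary weightings that reverse the ranking:
w1 = {"chess": 0.1, "vision": 0.4, "arithmetic": 0.1, "dexterity": 0.4}
w2 = {"chess": 0.5, "vision": 0.1, "arithmetic": 0.3, "dexterity": 0.1}
assert overall(h, w1) > overall(c, w1)
assert overall(c, w2) > overall(h, w2)
```

The same profiles, two defensible-looking weightings, two opposite verdicts about which entity is "smarter" -- which is just the arbitrariness the paper's argument turns on.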

Every entity is in important respects narrow: Humans, too, can achieve their goals in only a very limited range of environments. Interstellar space, the deep sea, the Earth's crust, the middle of the sky, the center of a star -- transposition to any of these places will quickly defeat almost all our plans. We depend for our successful functioning on a very specific context. So of course do all animals and all AI systems.

Similarly, although humans are good at a certain range of tasks, we cannot detect electrical fields in the water, dodge softballs while hovering in place, communicate with dolphins by echolocation, or calculate a hundred digits of pi in our heads. If we put a server with a language model in the desert without a power source, or if we place an autonomous vehicle in a chess tournament, and then interpret their incompetence as a lack of general intelligence, we risk being as unfair to them as a dolphin would be if it blamed us for our poor skills in its environment. Yes, there's a perfectly reasonable sense in which chess machines and autonomous vehicles have much more limited capacities than do humans. They are narrow in their abilities compared to us by almost any plausible metric of narrowness. But it is anthropocentric to insist that general intelligence requires generally successful performance on the tasks and in the environments that we humans tend to favor, given that those tasks and environments are such a small subset of the possible tasks and environments an entity could face. And any attempt to escape anthropocentrism by creating an unbiased and properly weighted taxonomy of task types and environments is either hopeless or liable to generate a variety of very different but equally plausible arbitrary composites.

AI systems, like nonhuman animals and neuroatypical people, can combine skills and deficits in patterns that are unfamiliar to those who have attended mostly to typical human cases. AI systems are highly unlikely to replicate every human capacity, due to limits in data and optimization, as well as a fundamentally different underlying architecture. They struggle to do many things that ordinary humans do effortlessly, such as reliably interpreting everyday visual scenes and performing feats of manual dexterity. But the reverse is also true: Humans cannot perform some feats that machines perform in a fraction of a second. If we think of intelligence as irreducibly multidimensional instead of linear -- as always relativized to the immense number of possible goals and environments -- we can avoid the temptation to try to reach a scalar judgment about which type of entity is actually smarter and by how much.

We might think of typical human intelligence as "familiar intelligence" -- familiar to us, that is -- and artificial intelligence as "strange intelligence". This terminology wears its anthropocentrism on its sleeve, rather than masking it under false objectivity. Something possesses familiar intelligence to the degree it thinks like us. It is a similarity relation. How familiar an intelligence is depends on several factors. Some are architectural: What forms does the basic cognitive processing take? What shortcuts and heuristics does it rely on? How serial or parallel is it? How fast? With what sorts of redundancy, modularity, and self-monitoring for errors? Others are learned and cultural: learned habits, particular cultural practices, acquired skills, chosen effort based on perceived costs and benefits. An intelligence is outwardly familiar if it acts like us in intelligence-based tasks. And it is inwardly familiar if it does so by the same underlying cognitive mechanisms.

Familiarity is also a matter of degree: The intelligence of dogs is more familiar to us (in most respects) than that of octopuses. Although we share some common features with octopuses, they evolved in a very different environment and have very dissimilar cognitive architecture as a result. It's hard for us even to understand their goals, because their existence is so different. Still, as distant as our minds are from those of octopuses, we share with octopuses the broadly familiar lifeways of embodied animals who need to navigate the natural world, find food, and mate.

AI constitutes an even stranger form of intelligence. With architectures, environments, and goals so fundamentally unlike ours, AI is the strangest intelligence we have yet encountered. AI is not a biological organism; it was not shaped by the evolutionary pressures shared by every living being on Earth, and it does not have the same underlying needs. It is based on an inorganic substrate totally unlike all biological neurophysiology. Its goals are imposed by its makers rather than being autopoietic. Such intelligence should be expected to behave in ways radically different from familiar minds. This raises an epistemic challenge: Understanding and measuring strange intelligence may be extremely difficult for us. Plausibly, the stranger an intelligence is from our perspective, the easier it is for us to fail to appreciate what it's up to. Strange intelligences rely on methods alien to our cognition.

If intelligence were linear and one-dimensional, then a single example of an egregious mistake by an AI -- a mistake a human would never make, like mistaking a strawberry for a toy poodle -- would be enough to show that the systems are nowhere near our level of intelligence. However, since intelligence is massively multidimensional, all these cases show on their own is that these systems have certain lacunae or blind spots. Of course, we humans also have lacunae and blind spots -- just consider optical illusions. Our susceptibility to optical illusions is not used as evidence of our lack of general intelligence, however ridiculous our mistakes might seem to any entity not subject to those same illusions.

Full draft here.