Thursday, February 19, 2026

Disunity and Indeterminacy in Artificial Consciousness (and Maybe in Human Consciousness Too)

Our understanding of the nature of consciousness derives mainly from our understanding of the nature of consciousness in our favorite animal (us, of course). But the features of consciousness in our favorite animal might be specific to that animal rather than universal.

Let's consider two such features and whether we should expect them in conscious AI systems, if conscious AI systems are ever possible.

Unity: Our conscious experiences at any given moment are bound together into a single unified experience, rather than transpiring in separate streams. If I'm sitting on a wet park bench, I might (a.) visually experience the leafy green trees around me, (b.) tactilely experience the cold dampness soaking into my jeans, and (c.) consciously recall the smaller trees of yesteryear. Normally -- perhaps necessarily -- three such experiences would not run in disconnected streams. They would join into a composite experience of (a)-with-(b)-with-(c). I experience not just trees, cold dampness, and a memory of yesteryear, but all three together as a unified bundle.

Determinacy: At any given moment, I am either determinately conscious or determinately nonconscious (as in anesthesia or dreamless sleep). Likewise, I either determinately do, or determinately do not, have any particular experience. Gray-area cases are at least unusual and maybe impossible. Even the simplest, barest cases are still determinate. Consider visual experience: We might imagine the visual field narrowing and losing content until only a gray dot remains -- and then the dot winks out. That dot, however minimal, is still determinately experienced. When it winks out, consciousness determinately disappears. There is no half-winked state between the minimal gray dot and complete absence of visual experience.

My thought is that we should not expect unity and determinacy to be general features of conscious AI systems (if conscious AI is possible). To see why, let's start by assuming the Global Workspace Theory of consciousness. I focus on Global Workspace Theory because it's probably the leading scientific theory of consciousness and because its standard formulation (Dehaene's version) invites the assumption of unity and determinacy.

Global Workspace Theory divides the mind into local information processing modules linked by a shared global workspace. Information becomes conscious when it is broadcast into the workspace. Suppose your auditory system registers the faint honk of a distant car horn. You're absorbed in reading philosophy and accustomed to ignoring traffic noise, so this representation isn't selected for further processing. It's not a target of attention, not broadcast into the workspace, and not consciously experienced. (If you think you constantly consciously experience background sounds, you can't hold a standard Global Workspace view.) Once you attend to the noise, for whatever reason, that information "ignites" into the global workspace, becoming available to a wide variety of "downstream" processes: You can think about it, plan around it, verbally report it, store it in long-term memory, and flexibly combine it with other information in the workspace. On Global Workspace Theory, being available in this way just is what it is for the information to be consciously experienced.
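The all-or-nothing character of this picture can be made vivid with a toy sketch (not from the post; all names and structures are illustrative, and no claim is made that this resembles any actual GWT implementation). Here "conscious" content is simply whatever has been broadcast into the single workspace, which makes it available to every downstream consumer at once:

```python
# Toy sketch of a Dehaene-style global workspace (hypothetical names).
# "Conscious" content = whatever has ignited into the one workspace;
# ignited content is available to *every* downstream process uniformly.

class GlobalWorkspace:
    def __init__(self, consumers):
        self.consumers = consumers   # e.g. ["report", "memory", "planning"]
        self.contents = set()        # currently ignited representations

    def ignite(self, representation):
        # All-or-nothing broadcast: no partial availability.
        self.contents.add(representation)

    def available_to(self, representation, consumer):
        # Availability is uniform across consumers on this model.
        return representation in self.contents

ws = GlobalWorkspace(["report", "memory", "planning"])

# The faint car horn is registered locally but not attended:
# it never ignites, so it is available to no consumer -- unconscious.
assert not ws.available_to("car horn", "report")

# Attention selects it; ignition makes it available to all consumers at once.
ws.ignite("car horn")
assert all(ws.available_to("car horn", c) for c in ws.consumers)
```

The key design feature, for present purposes, is that `available_to` never depends on which consumer is asking: a content is in the workspace for all downstream processes or for none.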

This model suggests unity and determinacy. Since there is just one global workspace, and since that workspace enables flexible integration of everything it contains, it makes sense that its various elements will combine into a unified experience. And on Dehaene's version, ignition into the workspace is a sharp-boundaried event: Information either completely ignites, becoming available for all downstream processes, or it does not. There is no (or only rarely) partial ignition. This can explain determinacy.

But future AI systems might not share this structure. They might have multiple or partially overlapping workspaces. Different specialized subsystems might have access to different regions of a partly-shared workspace. Some animals, such as snails and octopuses, distribute processing among multiple ganglia or neural centers that are less tightly coupled than the hemispheres of the human brain. A robot might broadcast information relevant to locomotion to one area and information relevant to speech to another with limited connectivity.

If the subsystems are entirely disconnected, the result might be entirely discrete centers of subjective experience within a single organism or machine. But if they are partly connected, experience might be only partly unified. In the park bench example, the experience of the trees might be unified with the experience of dampness, and the experience of dampness with memories of yesteryear, but the experience of the trees might not be unified with the memories. (Unification would not then be a transitive relation.) Alternatively, some weaker relation of partial unification might hold among the visual, tactile, and memorial experiences. If this seems inconceivable or impossible, see Sophie Nelson's and my article on indeterminate or fractional subjects.

More abstractly: There's no compelling architectural reason why an AI system would have to make information available either to all downstream processes or to none. A workspace defined in terms of downstream availability could be a patchwork of partial availabilities rather than a fully global all-or-nothing broadcast.
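The contrast can be sketched with a second toy model (again hypothetical, not any system described in the post): instead of a single membership set, each content is available to an arbitrary subset of downstream consumers, and "co-conscious" contents are those that some downstream process can flexibly combine:

```python
# Sketch of a "patchwork" workspace: each content is available to some
# subset of consumers rather than to all or none. Contents and consumer
# names are illustrative, keyed to the park-bench example.

availability = {
    "trees":                {"report", "planning"},
    "dampness":             {"report", "memory"},
    "memory-of-yesteryear": {"memory"},
}

def shared_consumers(a, b):
    """Treat two contents as (partially) unified only if at least one
    downstream process has access to both and so can combine them."""
    return availability[a] & availability[b]

# Unification fails to be transitive: trees with dampness, dampness with
# the memory, but no consumer has access to both trees and the memory.
assert shared_consumers("trees", "dampness")
assert shared_consumers("dampness", "memory-of-yesteryear")
assert not shared_consumers("trees", "memory-of-yesteryear")
```

Nothing in the architecture forces `availability` to be the same set for every content, which is just the patchwork point in code form.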

For the same reason, ignition into the workspace needn't be all-or-nothing. Between full ignition with determinate consciousness and no ignition with determinate nonconsciousness, there might be in-between, gray-area half-ignitions that are neither determinately conscious nor determinately nonconscious. Nearly every property with a complex physical or functional basis allows indeterminate, borderline cases: baldness, extraversion, greenness, happiness, whether you're wearing a shoe, whether a country is a democracy. The human global workspace might minimize indeterminacy -- much as it's rarely indeterminate in basketball whether the ball has gone through the hoop. But change the architecture and indeterminacy might become common: a half-hearted ignition, or just enough information-sharing to make it indeterminate whether a workspace even exists. (If indeterminacy about consciousness strikes you as inconceivable or impossible, see my 2023 article on borderline consciousness.)
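Graded ignition can be sketched the same way (a minimal illustration with made-up thresholds, not a proposal about where any real boundary lies): once broadcast strength is a quantity rather than a binary event, a gray zone opens up between determinate consciousness and determinate nonconsciousness:

```python
# Sketch of graded ignition: broadcast strength in [0, 1] instead of a
# binary event. The thresholds are purely illustrative.

def classify(ignition_strength, low=0.2, high=0.8):
    """Between the thresholds lies a gray zone that is neither
    determinately conscious nor determinately nonconscious."""
    if ignition_strength >= high:
        return "determinately conscious"
    if ignition_strength <= low:
        return "determinately nonconscious"
    return "indeterminate (borderline)"

assert classify(0.95) == "determinately conscious"
assert classify(0.05) == "determinately nonconscious"
assert classify(0.5) == "indeterminate (borderline)"
```

A human-like architecture might compress the middle band to nearly nothing; a differently built system might spend much of its time there.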

Global Workspace Theory might of course be wrong. But most other theories of consciousness make my argument at least as easy. Dennett's fame-in-the-brain version of broadcast theory explicitly permits disunity and indeterminacy. Higher Order Theories admit the same fragmentation and, probably, gradualism. So do biological theories and theories that focus on embodiment. (Integrated Information Theory is an exception: Its axioms require bright-line unity and determinacy. But as I've argued, those bright-line axioms lead to unpalatable consequences.)

Recognizing these possibilities for AI systems invites the further thought: Maybe we humans aren't quite as unified as we normally suppose. Maybe indeterminate and disunified consciousness is common. Maybe processes outside of attention hover indeterminately between being conscious and nonconscious. Maybe some processes are only partly unified. If it seems otherwise in introspection and memory, maybe that's because introspection and memory tend to impose unity and determinacy where none was before.

[a Paul Klee painting, untitled 1914: source]

12 comments:

Arnold said...

AI could be assigned to determine unity, returning us again and again to learning/teaching ourselves toward hereness...hereness defined as phenomenon without tense...

Benjamin C. Kinney said...

I largely agree with you here - I am unconvinced about determinacy and unity in humans, let alone as necessities. But your insights point toward one potentially compelling alternative...

I think you're absolutely right that "introspection and memory impose unity and determinacy." Our memory certainly organizes things around narrative & causality, and I'd hate to try to distinguish that process from unity & determinacy.

But if "introspection" does this, that has a fuzzy boundary with "consciousness" - after all, sometimes introspection is the sole content of our consciousness! So if this reshaping is performed by introspection, is the reshaping performed by consciousness? And then, as I always ask when comparing two mental processes: are these actually separable?

The point being, it's possible that consciousness is the process that creates unity and/or determinacy. In humans.

Which seems circular but points toward a new direction. Our "memory as narrative-shaping" is an evolved mechanism, presumably for inferring cause-and-effect. Therefore, unity and determinacy could also be specific products of an evolutionary history.

Which doesn't actually address the question of whether these features are *necessary* for consciousness, alas. (Is it one route toward consciousness or the only route?) But perhaps there's something of value there.

Paul D. Van Pelt said...

The line, beginning with *our understanding of the nature...*, seems circular, or paradoxical, yes? Ideas have been floated on the consciousness thing, in one shape or another, since Descartes; maybe prior. I sorta think about the notion of Artificial Consciousness the way I think about Artificial Intelligence. I even like to mess with AI generated phone calls, by either giving a non-usual opening, such as SPEAK, or answering my phone in Spanish, French, Baltic, or some totally off-the-wall phrase such as the Icelandic, du bis valkomen (you are welcome). When I am met with silence from "the caller", I know my work is done. I'm getting fewer "robo-calls", and that is just alright with me.

Consciousness is an attribute of living entities, at whatever level one may decide that begins, earthworm or cephalopod: my vote leans towards cephalopod. Artificial Consciousness is a construct, in my opinion. In much the same way as Artificial Intelligence. We all have our Interests, Motives, and Preferences (those darned IMPs) -- our own album(s) to do.
Still having fun, see.

Paul D. Van Pelt said...

Are some people more conscious than others, and, if so, why? I think they are because of neural capacity, interest, ambitions, talents and so on. My deceased sister-in-law was a bright, engaging, vibrant human with wide-ranging interests and an encompassing curiosity about most everything. Then, after her children were grown and successful, she took an increasingly common turn: Alzheimer's. She suffered with that for years, and at least once asked my brother: are they ever going to find a cure for this? She died six months before my own wife, under circumstances common to Alzheimer's patients. My wife's death circumstances were different, although she suffered cognitive decline during the half-dozen preceding years.

These loved ones were very different in backgrounds and experiences. My brother and I survived our wives. There is no rational explanation for this.
History has shown that Van Pelt men outlive their spouses, if, and only if, they survive would-be murderers and useless wars. Brother and I are batting well there. Is our survival genetic or dumb luck? I sure as hell don't know.


Paul D. Van Pelt said...

End Note:
There is something around INTENTION I can't nail down. I sense intentionality is critical to consciousness, though I do not know how. Other beings must possess intention. Elsewise, they should be gone---as would we? Otherwise, the notion is false, and intention is only assistive---not mandatory.
Yawn. Goodnight.

Howie said...

There are people with aphantasia, such as myself, who feel scenes and perceptions without their quite making their way into consciousness. It's an even more ambiguous case than blindsight, in that the issue is not that subjects can react to stimuli they don't consciously experience, but that they almost see or hear things. I'm sure this can be reconciled with global workspace.

Paul D. Van Pelt said...

Noted and, nearly understood. Please don't take this as a slight. Sometimes I feel or sense things, without understanding why. It has been useful at times; troubling, at others; embarrassing, occasionally and, at worst, dangerous. I don't know if it IS aphantasia, but, it is confusing and scary. Then, I retreat into my administrative law mode, a more comfortable space. That centers me and neutralizes anxiety. We are all good, coping in different ways.
Be wise, my friend. Still teaching my tablet Spanish and French.
Good night.

Anonymous said...

Aphantasia, as you may know, is the inability to visualize or experience any sense through imagination. It sounds like we may experience the same thing, but differently. I've been working to improve my mental functioning. It is like an aniconic presentation of the iconic. In my case it is related to an improvement of episodic memory. It's hard to know what is going on under the hood, or to repair it even as you are driving on the highway, so to speak.

Paul D. Van Pelt said...

Well and succinctly put, Thanks! My memory is far better than I might like.

Arnold said...

Relationships of: hearing-listening, seeing-observing, sensing-feeling, consciousness-awareness, phenomenon-experience...toward a self...

Paul D. Van Pelt said...

I seem to recall something more concerning aphantasia. It was written by the late Dr. Oliver Sacks. I can't recall the book. But, I enjoyed reading Sacks' work.

Paul D. Van Pelt said...

Hmmm...Disunity and Indeterminacy. Seems similar to philosophy, doesn't it? How different are disunity and indeterminacy from doubt and uncertainty? Distinctions are thin, I think, but I further SUPPOSE I am offending someone by proposing a cross-disciplinary relationship. After all, psychology and sociology are disunified and indeterminate. Medicine engenders D and I because different practitioners with similar training have different assessments of known facts (a.k.a. diagnoses).

I dismantle complexity by considering interests, motives and preferences [IMPs]...placing those within related context. This approach is not popular. That does not bother me. Everyone has their own reality to do.