Wednesday, October 22, 2025

Two-Dimensionalism about Other Minds, and Its Implications for Brain Organoids and Robots

You know (I hope!) that you are conscious. How do you know that other people are conscious too? This is the classic "problem of other minds".

The question isn't mainly developmental or psychological, but epistemic: What justifies you in believing that others have conscious experiences like yours -- feelings of joy and pain, thoughts in inner speech, dreams, sensory experiences -- instead of being, so to speak, automata who are all dark inside?

One common answer appeals to analogy: You are justified on grounds of others' similarity to you. It would be strange if entities so behaviorally and physiologically similar didn't also have similar streams of inner experience.

John Stuart Mill expresses it thus:

By what evidence do I know, or by what considerations am I led to believe, that there exist other sentient creatures; that the walking and speaking figures which I see and hear, have sensations and thoughts, or in other words, possess Minds?... I conclude that other human beings have feelings like me, because, first, they have bodies like me, which I know, in my own case, to be the antecedent condition of feelings; and because, secondly, they exhibit the acts, and other outward signs, which in my own case I know by experience to be caused by feelings (An Examination of Sir William Hamilton's Philosophy, 3rd ed., 1867, p. 237).

Notice that Mill appeals to two very different types of similarity: similarity of body and similarity of acts and outward signs.

[title page of John Stuart Mill, An Examination of Sir William Hamilton's Philosophy, 3rd edition]

In a recent paper, Ned Block makes a similar distinction between first-order realizer properties, like being made of a certain kind of "meat", and second-order functional role properties, like being the kind of thing that causes crying.

Block's functionalist jargon would have been unfamiliar to Mill, but the idea is much the same. Mill writes:

I am conscious in myself of a series of facts connected by a uniform sequence, of which the beginning is modifications of my body, the middle is feelings, the end is outward demeanor. In the case of other human beings I have the evidence of my senses for the first and last links of the series, but not for the intermediate link; which must either be the same in others as in myself, or a different one.... (p. 237-238).

Mill, like a good functionalist, seeks something to fill the middle link of a causal chain from cause to X to effect. The filler or "realizer" of this functional role property could potentially be anything, though in his own case it is a feeling.

For example, in me, a mosquito bite and the resulting red bump lead to a feeling of itchiness, which in turn leads to scratching. In others, I see the same bite and bump and the same scratching, but I cannot see the itchiness in between.

At first, Mill suggests that it's reasonable to assume the intermediate feeling (the itchiness) simply on the grounds that "no other force need be supposed" (p. 238). But later he also supports the claim by appealing to physiological similarity:

I look about me, and though here is only one... body... which is connected with all my sensations in this peculiar manner, I observe that there is a great multitude of other bodies, closely resembling in their sensible properties... this particular one, but whose modifications do not call up, as those of my own body do, a world of sensations in my consciousness. Since they do not do so in my consciousness, I infer that they do it out of my consciousness, and that to each of them belongs a world of consciousness of its own... (p. 238-239).

Because others' bodies are like mine, I infer that the intermediate X -- the feeling of itchiness, in our example -- is also similar.

Let's call this view two-dimensionalism about other minds: Only when another entity is both physiologically and functionally (that is, in terms of typical causes and effects) similar to me am I justified in inferring that it has experiences like mine. When the two dimensions diverge, skepticism follows.

Human babies are physiologically similar to adult humans but functionally quite different. In the bad old days, I gather, there were doubts about whether babies were conscious, for example, whether they could actually feel pain (and thus anesthesia was not regularly used on them). Yet because the causes and effects of their pain responses, as well as their physiology, are similar to ours, such doubt was misplaced.

Brain organoids are a more difficult case. Human brain cells can be grown in vitro, in clusters of tens of millions of neurons. Could consciousness arise in such systems? Functionally, brain organoids are radically impoverished compared to ordinary humans. But if what matters is neurophysiology, maybe a sufficiently large or well-structured brain organoid would be conscious.

Robots present a complementary case: Language models are becoming similar to us in linguistic behavior. We might guess or imagine that some future robots will become functionally or behaviorally similar to us in other ways too, while remaining physiologically very different. Block argues in his recent paper, as well as in earlier work, that we don't know that the physiology doesn't matter. Maybe only "meat machines" can be conscious, while silicon machines, even if functionally very similar to us, could never be conscious.

The crux of the matter lies, perhaps, in whether two-dimensionalism or one-dimensionalism is the right response to the problem of other minds. The one-dimensionalist -- as Mill at first appears to be -- holds that if we see the right types of causal relationships between inputs and outputs, similar to those in our own case, that's enough to justify attributing consciousness (perhaps on grounds of simplicity or parsimony: "no other force need be supposed"). The two-dimensionalist, like Block, thinks doubt is justified unless there's both functional and physiological similarity.

Two-dimensionalists are thereby committed to doubting AI consciousness, unless we someday create AI that is not only functionally but also physiologically similar to us.

Must one-dimensionalist functionalists reject organoid consciousness? That's not as clear. I see at least two paths by which they might accept it. First, they might define the functional roles in terms of features internal to neural systems -- not mosquito bites and scratching, but things like information sharing across a global workspace. Second, they might use the functional role to identify a physiological type, and then, à la David Lewis, attribute consciousness whenever that physiological type is present, even if it isn't -- in that particular system -- playing its typical functional role.
