Wednesday, July 10, 2024

How the Mimicry Argument Against Robot Consciousness Works

A few months ago on this blog, I presented a "Mimicry Argument" against robot consciousness -- or more precisely, an argument that aims to show why it's reasonable to doubt the consciousness of an AI that is built to mimic superficial features of human behavior.  Since then, my collaborator Jeremy Pober and I have presented this material to philosophy audiences in Sydney, Hamburg, Lisbon, Oxford, Krakow, and New York, and our thinking has advanced.

Our account draws on work on mimicry in evolutionary biology.  On this account, a mimic is an entity:

  • with a superficial feature (S2) that is selected or designed to resemble a superficial feature (S1) of some model entity
  • for the sake of deceiving, delighting, or otherwise provoking a particular reaction in some particular audience or "receiver"
  • because the receiver treats S1 in the model entity as an indicator of some underlying feature F.

Viceroy butterflies have wing coloration patterns (S2) that resemble the wing coloration patterns (S1) of monarch butterflies for the sake of misleading predators who treat S1 as an indicator of toxicity.  Parrots emit songs that resemble the songs or speech of other birds or human caretakers for social advantage.  If the receiver is another parrot, the song in the model (but not necessarily the mimic) indicates group membership.  If the receiver is a human, the speech in the model (but not necessarily the mimic) indicates linguistic understanding.  As the parrot case illustrates, not all mimicry needs to be deceptive, and the mimic might or might not possess the feature the receiver attributes.

Here's the idea in a figure:

[Figure: the mimic's superficial feature S2 resembles the model's superficial feature S1, which the receiver treats as an indicator of underlying feature F.]
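
For concreteness, here is a minimal sketch of this structure in Python.  It is purely illustrative: the class and field names are invented for this post and carry no theoretical weight.

    from dataclasses import dataclass

    # Purely illustrative sketch: the names below are invented for
    # exposition and are not part of our formal account.

    @dataclass
    class MimicryRelation:
        mimic: str     # entity bearing the superficial feature S2
        s2: str        # superficial feature selected/designed to resemble S1
        model: str     # entity bearing the superficial feature S1
        s1: str        # superficial feature of the model
        receiver: str  # audience whose reaction the resemblance is for
        f: str         # underlying feature the receiver takes S1 to indicate

    # The viceroy/monarch case from the text:
    viceroy_case = MimicryRelation(
        mimic="viceroy butterfly",
        s2="monarch-like wing coloration",
        model="monarch butterfly",
        s1="monarch wing coloration",
        receiver="predator",
        f="toxicity",
    )
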
Pober and I define a "consciousness mimic" as an entity whose S2 resembles an S1 that, in the model entity, normally indicates consciousness.  So, for example, a toy that says "hello" when powered on is a consciousness mimic: For the sake of a receiver (a child), it has a superficial feature (S2, the sound "hello" from its speakers) which resembles a superficial feature (S1) in an English-speaking human that normally indicates consciousness (since humans who say "hello" are normally conscious).

Arguably, Large Language Models like ChatGPT are consciousness mimics in this sense.  They emit strings of text modeled on human-produced text, for the sake of users who interpret that text as having the semantic content such text normally has when produced by conscious humans.

Now, if something is a consciousness mimic, we can't straightforwardly infer its consciousness from its possession of S2 in the same way we can normally infer the model's consciousness from its possession of S1.  The "hello" toy isn't conscious.  And if ChatGPT is conscious, that will require substantial argument to establish; it can't be inferred in the same ready way that we infer consciousness in a human from human utterances.

Let me attempt to formalize this a bit:

(1.) A system is a consciousness mimic if:
a. It possesses superficial features (S2) that resemble the superficial features (S1) of a model entity.
b. In the model entity, the possession of S1 normally indicates consciousness.
c. The best explanation of why the mimic possesses S2 is the mimicry relationship described above.

(2.) Robots or AI systems – at least an important class of them – are consciousness mimics in this sense.

(3.) Because of (1c), if a system is a consciousness mimic, inference to the best explanation does not permit inferring consciousness from its possession of S2.

(4.) Some other argument might justify attributing consciousness to the mimic; but if the mimic is a robot or AI system, any such argument, for the foreseeable future, will be highly contentious.

(5.) Therefore, we are not justified in attributing consciousness to the mimic.
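
To make the logical shape explicit, here is a toy encoding of conditions (1a)-(1c) and premise (3), again purely illustrative, with invented predicate names.  The point is only that (1c) blocks the usual inference from S2 to consciousness; it does not establish that the mimic is unconscious.

    from dataclasses import dataclass

    @dataclass
    class System:
        has_s2: bool                      # (1a) superficial features resembling S1
        s1_indicates_consciousness: bool  # (1b) in the model, S1 normally indicates consciousness
        mimicry_best_explains_s2: bool    # (1c) the mimicry relationship best explains S2

    def is_consciousness_mimic(x: System) -> bool:
        # Conditions (1a)-(1c) jointly define a consciousness mimic.
        return (x.has_s2 and x.s1_indicates_consciousness
                and x.mimicry_best_explains_s2)

    def s2_licenses_consciousness_inference(x: System) -> bool:
        # Premise (3): once mimicry is the best explanation of S2,
        # inference to the best explanation no longer licenses the move
        # from S2 to consciousness.  Note that this only blocks the
        # inference; it does not show the system is unconscious
        # (cf. premises 4 and 5).
        return (x.has_s2 and x.s1_indicates_consciousness
                and not x.mimicry_best_explains_s2)

    hello_toy = System(has_s2=True,
                       s1_indicates_consciousness=True,
                       mimicry_best_explains_s2=True)
    assert is_consciousness_mimic(hello_toy)
    assert not s2_licenses_consciousness_inference(hello_toy)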

AI systems designed to produce human-looking outputs will understandably tempt users to attribute consciousness on the basis of those superficial features, but we should be cautious about such attributions.  The inner workings of Large Language Models and other AI systems are causally complex, and they are designed to generate outputs of the kinds humans produce, so that humans can interpret them.  But causal complexity does not by itself imply consciousness, and the superficial resemblance to the behaviorally sophisticated patterns we associate with consciousness is misleading evidence if such patterns can arise without consciousness.

The main claim is intended to be weak and uncontroversial: When the mimicry structure is present, significant further argument is required before attributing consciousness to an AI system based on superficial features suggestive of consciousness.

Friends of robot or AI consciousness may note two routes by which to escape the Mimicry Argument.  They might argue, contra premise (2), that some important target types of artificial systems are not consciousness mimics.  Or they might present an argument that the target system, despite being a consciousness mimic, is also genuinely conscious – an argument they believe is uncontentious (contra 4) or that justifies attributing consciousness despite being contentious (contra the inference from 4 to 5).

The Mimicry Argument is not meant to apply universally to all robots and AI systems.  Its value, rather, is to clarify the assumptions implicit in arguments against AI consciousness on the grounds that AI systems merely mimic the superficial signs of consciousness.  We can then better see both the merits of that type of argument and means of resisting it.

10 comments:

SelfAwarePatterns said...

I think a key consideration here is how long the behavior remains convincing. It seems like the traditional Turing test was based on a throwaway remark Turing made in his 1950 paper: that by the year 2000, there would be systems able to fool 30% of human players, after five minutes of conversation, into thinking they were human. I've heard conflicting accounts of how well current LLMs do against this standard, but I think most people agree it's a weak one, and no one should attribute consciousness or understanding based on it.

The question is what happens if the system is still convincing 90% of us after days, weeks, or months? At what point do we consider it more extraordinary that it's still only mimicking the real thing?

Howard said...

Eric

What's the difference between us and AI? We're consciousness mimics too. My understanding is that Dennett would advance that argument. How different are the algorithms that fire up AI and us?
A Cartesian meditator might argue that AI and all other humans are consciousness mimics, mightn't she?

Arnold said...

Human between-ness between physic-ness and psychic-ness for possessing place-ness...
...being an entity seems towards purpose-if human, towards choice-if AI...

Jim Cross said...

My takeaway is that you can never rely on superficial features to determine consciousness.

A system could be conscious yet exhibit few of the superficial features we associate with consciousness. Conversely, a system that exhibits nearly perfect human behavior could be unconscious. We need to bring in the internal dynamics of the system to determine consciousness.

Anonymous said...

I think you are right to insist on an internal condition and not to rely on Turing tests, which only look at behavioural outputs. My own money is on those capacities that depend on being living beings with bodies, sensitivity to situations, etc.

Milan said...

@Howard,
Strictly speaking that would be an argument for an error theory about consciousness, not for the reality of AI consciousness... As I understand, Eric grants that certain behaviours allow the (defeasible) inference that the behaviour must somehow be due to a conscious being. However, in the case of consciousness mimics, that conscious being is not the mimic itself. It is the model. In the case of humans, the only available models are other humans. Thus, even if we'd accept that humans are consciousness mimics, by the inference that has been granted, at least some humans must be conscious, and so why not all of us? (Unless we all merely mimic the consciousness of God, or of other alien progenitors...)

Milan said...

(That should have read 'our alien progenitors', though I suppose the divine can be quite alien to us mortals.)

Jim Cross said...

@Milan

"Eric grants that certain behaviours allow the (defeasible) inference that the behaviour must somehow be due to a conscious being"

Which and what kind of behaviors?

This actually leads almost directly to the "hard" problem.

If there is some unique behavior that only a conscious entity can perform, we have answered the "hard" problem. We have the "thing" that consciousness is needed to explain. We may not understand the mechanism, but at least we know there is a reason for consciousness and that consciousness has material effects. If it has material effects, then it must be in some way material or, at least, there must be some kind of mind/matter interaction.

But we don't actually have anything like that, as far as I am aware.

Eddy Nahmias said...

Hi Eric, really interesting argument. Have y'all thought about convergent evolution in this context? In the context of A.I. (and even LLMs), they develop through an evolutionary process, one which their designers don't fully understand, so even if they (and their designers) begin by mimicking consciousness (understanding language, creativity, etc.), these systems might develop in a convergent way to actually have the property (S1), even if the underlying features (processes) allowing it are not similar enough to the way humans (and other animals) have S1 to be called F. The argument might look like ones we offer for octopus consciousness, or alien consciousness if we meet them. This move might just be a way of developing premise 4 to defuse the contentiousness claim. Relatedly, one might think that at some point the best (or only) way to mimic some property S1 is to reproduce it in a new way. If a butterfly evolved to mimic the poisonous property of Monarchs (rather than just the look), it would be poisonous, even if not using the same chemical structure F.

Arnold said...

And reductions-productions-evolution from cosmological evaluations-CE ...
...what is the human being feature-F towards...CE, S1, S2, transubstantiation-T...

For remembering what we are here now...