Friday, March 08, 2024

The Mimicry Argument Against Robot Consciousness

Suppose you encounter something that looks like a rattlesnake.  One possible explanation is that it is a rattlesnake.  Another is that it mimics a rattlesnake.  Mimicry can arise through evolution (other snakes mimic rattlesnakes to discourage predators) or through human design (rubber rattlesnakes).  Normally, it's reasonable to suppose that things are what they appear to be.  But this default assumption can be defeated -- for example, if there's reason to suspect sufficiently frequent mimics.

Linguistic and "social" AI programs are designed to mimic superficial features that ordinarily function as signs of consciousness.  These programs are, so to speak, consciousness mimics.  This fact justifies skepticism about whether the programs actually possess consciousness, despite those superficial features.

In biology, deceptive mimicry occurs when one species (the mimic) resembles another species (the model) in order to mislead another species such as a predator (the dupe).  For example, viceroy butterflies evolved to visually resemble monarch butterflies in order to mislead predator species that avoid monarchs due to their toxicity.  Gopher snakes evolved to shake their tails in dry brush in a way that resembles the look and sound of rattlesnakes.

Social mimicry occurs when one animal emits behavior that resembles the behavior of another animal for social advantage.  For example, African grey parrots imitate each other to facilitate bonding and to signal in-group membership, and their imitation of human speech arguably functions to increase the care and attention of human caregivers.

In deceptive mimicry, the signal normally doesn't correspond with possession of the model's relevant trait.  The viceroy is not toxic, and the gopher snake has no venomous bite.  In social mimicry, even if there's no deceptive purpose, the signal might or might not correspond with the trait suggested by the signal: The parrot might or might not belong to the group it is imitating, and Polly might or might not really "want a cracker".

All mimicry thus involves three traits: the superficial trait (S2) of the mimic, the corresponding superficial trait (S1) of the model, and an underlying feature (F) of the model that is normally signaled by the presence of S1 in the model.  (In the Polly-want-a-cracker case, things are more complicated, but let's assume that the human model is at least thinking about a cracker.)  Normally, S2 in the mimic is explained by its having been modeled on S1 rather than by the presence of F in the mimic, even if F happens to be present in the mimic.  Even if viceroy butterflies happen to be toxic to some predator species, their monarch-like coloration is better explained by their modeling on monarchs than as a signal of toxicity.  Unless the parrot has been specifically trained to say "Polly want a cracker" only when it in fact wants a cracker, its utterance is better explained by modeling on the human than as a signal of desire.

Figure: The mimic's possession of superficial feature S2 is explained by mimicry of superficial feature S1 in the model.  S1 reliably indicates F in the model, but S2 does not reliably indicate F in the mimic.
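
To make the schema explicit, here is a minimal sketch in Python (my own illustration; the names are not from any published formalization).  It simply encodes the point that the surface trait licenses an inference to F in the model but not in the mimic:

```python
# A toy encoding of the S1/S2/F schema (illustration only).
# In the model, superficial trait S1 reliably indicates underlying feature F.
# The mimic's trait S2 is explained by copying S1, so S2 does not license
# inferring F -- even if F happens to be present in the mimic.

from dataclasses import dataclass
from typing import Optional


@dataclass
class ModelOrganism:
    s1: str   # superficial trait, e.g., monarch-style coloration
    f: bool   # underlying feature, e.g., toxicity, reliably indicated by S1


@dataclass
class Mimic:
    s2: str                     # superficial trait copied from the model's S1
    modeled_on: ModelOrganism   # the modeling relation that explains S2
    f: Optional[bool] = None    # F may or may not be present; S2 doesn't say


def surface_trait_indicates_f(entity) -> bool:
    """The inference from surface trait to F is licensed only for the model,
    where the trait reliably tracks F; for a mimic it is blocked, because the
    trait is better explained by the modeling relation."""
    return isinstance(entity, ModelOrganism)


monarch = ModelOrganism(s1="orange-and-black wings", f=True)
viceroy = Mimic(s2="orange-and-black wings", modeled_on=monarch)

print(surface_trait_indicates_f(monarch))  # True
print(surface_trait_indicates_f(viceroy))  # False
```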


This general approach to mimicry can be adapted to superficial features normally associated with consciousness.

Consider a simple case, where S1 and S2 are emission of the sound "hello" and F is the intention to greet.  The mimic is a child's toy that emits that sound when turned on, and the model is an ordinary English-speaking human.  In an ordinary English-speaking human, emitting the sound "hello" normally (though of course not perfectly) indicates an intention to greet.  However, a child's toy has no intention to greet.  (Maybe its designer, years ago, had an intention to craft a toy that would "greet" the user when powered on, but that's not the toy's intention.)  F cannot be inferred from S2, and S2 is best explained by modeling on S1.
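
For illustration only, here is a minimal sketch of such a toy in Python (my own example, not a claim about how any real toy is implemented).  The point is that S2 is produced by a fixed, designer-installed trigger, with nothing in the system corresponding to an intention to greet:

```python
# A minimal "hello" toy (illustration only).  S2 (the sound "hello") is
# produced when the toy is switched on; F (an intention to greet) exists
# nowhere in the system -- at most it existed, years ago, in the designer.

class HelloToy:
    GREETING = "hello"  # fixed by the designer; the toy merely replays it

    def power_on(self) -> str:
        # Nothing here represents an addressee, a goal, or a desire to greet.
        return self.GREETING


toy = HelloToy()
print(toy.power_on())  # "hello" -- S2 without F
```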

Large Language Models like GPT, PaLM, and LLaMA are more complex, but structurally they too are mimics.

Suppose you ask ChatGPT-4 "What is the capital of California?" and it responds "The capital of California is Sacramento."  The relevant superficial feature, S2, is a text string correctly identifying the capital of California.  The best explanation of why ChatGPT-4 exhibits S2 is that its outputs are modeled on human-produced text that also correctly identifies the capital of California as Sacramento.  Human-produced text with that content reliably indicates the producer's knowledge that Sacramento is the capital of California.  But we cannot infer corresponding knowledge when ChatGPT-4 is the producer.  Maybe "beliefs" or "knowledge" can be attributed to sufficiently sophisticated language models, but that requires further argument.  A much simpler model, trained on a small set of data containing a few instances of "The capital of California is Sacramento," might output the same text string for essentially similar reasons, without being describable as "knowing" this fact in any literal sense.
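
To make the point about the "much simpler model" concrete, here is a minimal sketch (a deliberately trivial toy of my own; real LLMs are of course vastly more complex and do not work by simple lookup).  The system emits the correct sentence purely because that sentence appears in its tiny training corpus:

```python
# A deliberately trivial "language model" (illustration only): it memorizes
# prompt/continuation pairs from a tiny corpus and replays them.  Its correct
# output is explained by modeling on human-produced text (S1-type features),
# not by anything we would literally call knowledge (F).

training_corpus = [
    ("What is the capital of California?",
     "The capital of California is Sacramento."),
    ("What is the capital of Texas?",
     "The capital of Texas is Austin."),
]

# "Training" here is nothing more than building a lookup table.
learned = {prompt: continuation for prompt, continuation in training_corpus}


def generate(prompt: str) -> str:
    # Replay a memorized continuation if one exists; otherwise output nothing.
    return learned.get(prompt, "")


print(generate("What is the capital of California?"))
# -> The capital of California is Sacramento.
```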

When a Large Language Model outputs a novel sentence not present in the training corpus, S2 and S1 will need to be described more abstractly (e.g., "a summary of Hamlet" or even just "text interpretable as a sensible answer to an absurd question").  But the underlying considerations are the same.  The LLM's output is modeled on patterns in human-generated text and can be explained as mimicry of those patterns, leaving open the question of whether the LLM has the underlying features we would attribute to a human being who gave a similar answer to the same prompt.  (See Bender et al. 2021 for an explicit comparison of LLMs and parrots.)

#

Let's call something a consciousness mimic if it exhibits superficial features best explained by having been modeled on the superficial features of a model system, where in the model system those superficial features reliably indicate consciousness.  ChatGPT-4 and the "hello" toy are consciousness mimics in this sense.  (People who say "hello" or answer questions about state capitals are normally conscious.)  Given the mimicry, we cannot infer consciousness from the mimics' S2 features without substantial further argument.  A consciousness mimic exhibits traits that superficially look like indicators of consciousness, but which are best explained by the modeling relation rather than by appeal to the entity's underlying consciousness.  (Similarly, the viceroy's coloration pattern is best explained by its modeling on the monarch, not as a signal of its toxicity.)

"Social AI" programs, like Replika, combine the structure of Large Language Models with superficial signals of emotionality through an avatar with an expressive face.  Although consciousness researchers are near consensus that ChatGPT-4 and Replika are not conscious to any meaningful degree, some ordinary users, especially those who have become attached to AI companions, have begun to wonder.  And some consciousness researchers have speculated that genuinely conscious AI might be on the near (approximately ten-year) horizon (e.g., Chalmers 2023; Butlin et al. 2023; Long and Sebo 2023).

Other researchers -- especially those who regard biological features as crucial to consciousness -- doubt that AI consciousness will arrive anytime soon (e.g., Godfrey-Smith 2016; Seth 2021).  It is therefore likely that we will enter an era in which it is reasonable to wonder whether some of our most advanced AI systems are conscious.  Consciousness experts and the ordinary public alike are likely to disagree among themselves, raising difficult questions about the ethical treatment of such systems (for some of my alarm calls about this, see Schwitzgebel 2023a, 2023b).

Many of these systems, like ChatGPT and Replika, will be consciousness mimics.  They might or might not actually be conscious, depending on what theory of consciousness is correct.  However, because of their status as mimics, we will not be licensed to infer that they are conscious from the fact that they have superficial features (S2-type features) that resemble features in humans (S1-type features) that, in humans, reliably indicate consciousness (underlying feature F).

In saying this, I take myself to be saying nothing novel or surprising.  I'm simply articulating in a slightly more formal way what skeptics about AI consciousness say and will presumably continue to say.  I'm not committing to the view that such systems would definitely not be conscious.  My view is weaker, and probably acceptable even to most advocates of near-future AI consciousness.  One cannot infer the consciousness of an AI system that is built on principles of mimicry from the fact that it possesses features that normally indicate consciousness in humans.  Some extra argument is required.

However, any such extra argument is likely to be uncompelling.  Given the highly uncertain status of consciousness science, and widespread justifiable dissensus, any positive argument for these systems' consciousness will almost inevitably be grounded in dubious assumptions about the correct theory of consciousness (Schwitzgebel 2014, 2024).

Furthermore, given the superficial features, it might feel very natural to attribute consciousness to such entities, especially among non-experts who are unfamiliar with their architecture and perhaps open to, or even enthusiastic about, the possibility of AI consciousness in the near future.

The mimicry of superficial features of consciousness isn't proof of the nonexistence of consciousness in the mimic, but it is grounds for doubt.  And in the context of highly uncertain consciousness science, it will be difficult to justify setting aside such doubts.

None of these remarks would apply, of course, to AI systems that somehow acquire features suggestive of consciousness by some process other than mimicry.

23 comments:

  1. If a mimic system can consistently convince most of us for an extended period of time, I think we have to be open to considering it conscious. At some point it becomes more incredible that a successful mimic is not conscious than that it is.

    That said, I'm pretty skeptical we'll get a conscious system that way.

  2. I understand the view of proponents of the possibility of conscious AI as claiming that, with sufficiently complex programming and appropriate hardware capable of running the program, consciousness will occur/emerge.
    This possibility is not predicated on solving the so-called hard problem. That difficulty could remain even after computer consciousness had been created, e.g., the "privacy", "subjectivity", etc. associated with the mind-body problem.

    The elements of such consciousness are the programming strings, commands, etc. that determine the output of whatever thing the program will be instantiated in. But unlike the components of chemical compounds, atoms, molecules, etc., that can be combined with predictable properties, or often with unexpected results, the components of what would presumably make up AI consciousness cannot be combined to predictably form any sort of consciousness. There are no examples of such consciousness.

    While experimentation in chemistry and physics may result in expected outcomes, such outcomes/properties are because of unknown properties of matter. What are the analogous properties of computer programs?



  3. The last sentence should be, “…may result in unexpected outcomes.” If somehow one stumbles upon a new compound or physical phenomenon through experimentation (through chemistry or physics), it has something to do with chemical or physical properties previously unknown.
    If somehow by stringing commands together conscious AI occurred, poof, what would be learned about these programming principles not known before? It’s not as if we have even a basic knowledge of how programming contributes to anything consciousness-like. If conscious AI occurred it wouldn’t be because of some additional factor added to our understanding of how programming creates consciousness-like AI.

  4. Good analysis, Eric! More depth than I would be able to deliver, but along the same line of reasoning. Returned me quickly to my immediately previous musings about superficial vs. responsive consciousness. Unlike several experts, I just can't think about AI in any other sense than superficiality. This topic is complicated and, now, finally, I understand why some pros are hesitant to proffer a hardline position thereon. The piranha pond is treacherous.

  5. Thanks for the comments, folks!

    Self-aware and anon: I agree that we can’t rule out the possibility of consciousness in AI, and that sophisticated enough programming or mimicry *might* require it. My thought is that we probably won’t *know* the AI is conscious, given the dubiousness of all theories of consciousness, and for AI systems that are designed as mimics, we can’t rely on superficial cues as reliable indicators of underlying consciousness.

  6. The challenge I see is twofold:
    1. By your logic we have no way of knowing if another human is conscious
    2. You posit that mimicry inherently discounts a process without examining whether mimicry is the original source process underlying our presumed consciousness. It seems reasonable to posit that we are not conscious at birth, and that the acquisition of learning and the structural development of our kind are almost entirely mimicry and reinforcement learning.

    The underlying processes of the mind are very likely based on a relatively simple structural algorithm that achieves complexity and emergence through iteration, replication and structures of representation. If a machine has a similar (even if not identical or even analogous) underlying capacity, then the underlying processes are the same, and mimicry is just knowledge acquisition and use.

  7. I wonder about the relationship between this argument and the Copernican arguments you've considered against the "specialness" of human consciousness. I tend to think that Copernican arguments tell in favor of robot consciousness (even among robo-mimics) about as much as they tell in favor of alien consciousness. If robo-mimics are significantly less likely than non-mimics to be conscious, then we non-mimic conscious systems are a privileged subset of the systems that behave in cognitively and linguistically sophisticated ways. But just as in the original Copernican argument, we should avoid the conclusion that we occupy a privileged subset of the systems that behave in cognitively and linguistically sophisticated ways. So we should avoid the conclusion that non-mimics are significantly more likely to be conscious than mimics.

    In my view, it's not mimicking per se that is evidence against genuine consciousness. It's more that most mimics are only *imperfect mimics*, in the sense that they seem to mimic conscious beings within a relatively small range of circumstances, but fail to mimic conscious beings outside of those circumstances. If we developed a perfect or near-perfect mimic, I'd attribute consciousness on Copernican grounds.

  8. Insofar as I have no knowledge of what Copernicus thought or knew about robots or robot mimicry, I can't make an informed judgement here. I did not know robotics was an issue then, so can only infer that the matter, in itself, is based on some latter-day interpretation of Copernican thinking, transferred towards the modernity of AI and its siblings. For my simple understanding, this resembles speculation. And, speculation is metaphysical, sorta like my brother's characterization thereof as a "wild assed guess". So, carry on. It does not move me, even a millimeter.

  9. Suppose a consciousness-mimicking process occurs that not only perfectly mimics a conscious model’s superficial traits relevant to consciousness but also its physical makeup and processes.
    In this case, being behaviorally and physically identical to the model, it would be conscious. It would essentially be a clone. It couldn’t fail to be conscious. Its consciousness would be a matter of necessity.

    I think functionalism would involve the same necessity, only their position would be that when perfect consciousness mimicry occurred in an appropriate though radically different substrate (computer), it couldn’t fail to be conscious. It would be conscious by necessity.
    I think there are some logical problems with functionalism:

    If A=B, then necessarily A=B.
    Sometimes A=C & -(A=B). If A=C, then necessarily A=C.

  10. The argument for consciousness in the mimic has an underlying assumption that the intelligent behavior we can observe in the mimic requires consciousness. This may not be true for the mimic, based on our knowledge of its construction, but it also may not be true for the non-mimic. We don't actually know what role consciousness plays in the brain's ability to generate intelligent behavior. It could be epiphenomenal, as some suggest, or it could be that consciousness plays only an indirect role. Consciousness could be playing the role of AI trainer by reflecting back updated information to the unconscious brain rather than being the actual AI itself.

  11. The continuum of consciousness toward animals; from my chat with AI...
    ...aspects of consciousness allow also for feeling conscious...

    That Universities can teach us to ask questions of AI about the feelings of all animals...
    ...allowing conscious aspects for thought to merge with conscious aspects for feeling...

    That Universities would teach...
    ...our questions allow AI to provide desired answers for us to...

  12. Am I to believe that, sitting in front of my laptop knowing that I am not engaged in an exchange with a conscious being (e.g., asking current ChatGPT questions), with more sophisticated outputs I will move to being engaged in an exchange with a conscious being only because of the output, my laptop remaining unchanged?
    If somehow I was tempted to believe I was interacting with a conscious being, I would assume it was with a person typing responses rather than with my newly conscious MacBook Pro.

    Believing that a magician actually saws a woman in half is a matter of gullibility.
    How would we know whether we were similarly gullible regarding supposed conscious AI?

  13. You may believe, or disbelieve, whatever you wish. I seek proof. As a pragmatist, I expect no less. My entry into speculation was my choice when I chose to enter the morass of philosophy. The swamp got deeper; clarity, cloudier. Need I say more?

  14. ...is infinite questioning the nature of our/my consciousness...

    Does being here help dissuade bots/me from mimicry...

  15. Philosophy of mind arguments typically make two primary mistakes. In the first, Turing equivalence is denied, typically through a misunderstanding of how computation works (Searle's secondary mistake). Using knowledge known only to the maker of a computing system falls into this category. And this mistake is made here at least twice: by reference to the intent of the toy's maker and to knowledge of how LLMs are programmed. It is impossible to tell, from construction alone, what a computing device computes. All we have to go on is how it interacts with the external world. This is true for humans as well. (cf. The Inner Mind).

    But if Turing equivalence is granted (and it must be), then the next mistake is to assume that brains and computers aren't Turing equivalent. But this is a denial, even if implicitly, of the Church-Turing hypothesis. Circular reasoning is then used to prove the conclusion. (Searle's primary mistake). One could also assume it to be true and then arrive at the conclusion that it's true.

    The Church-Turing hypothesis poses a unique problem for philosophers. It cannot be shown to be true or false simply by thinking about it.

  16. Isn't circular reasoning fallacious? If, and only if, that is the case, then anyone who uses it is committing a primary error. AI is mimicry, period, and, regardless of whether consciousness in humans (or other life forms) precedes, or is preceded by, intelligence, there seems to be something tying those faculties to one another. Whether we discover reliable measurability of that is *the hard problem*. Or, one of them, anyway.

  17. When I left the previous remarks, and reviewed the published comment, I was disappointed. My second sentence was garbled, beyond belief, showing that AI Context has a life of its own. But, wait! How can that be right? If, as it seems, AI possesses ITS own contextual reality, how did/does that emerge? Here is what I mean: AI does not have life. It is a creation of human ingenuity. Artificiality does not = reality. There is a clue, I think. Having offered the IMPs notion, further affiant sayeth not.

  18. Is being here consciousness of oneself...
    ...then searching for oneself in consciousness can begin...

    Is 'Everything needed right in front of us'...
    ...analog, digital, robots and AI included...

    'Returning' to this again and again...

  19. This is important. I remember that in 2001, the news report about HAL used the phrase that he (HAL) “reproduces, some say mimic, the functions of the human brain.”

  20. Let me see. Is mimicry the point here, or only a side issue, around outcome(s)? What HAL was programmed to do was predicated upon thinking at the time. And, maybe, fears around what AI, as we know(?) it, is/could be doing now? Visionaries, such as Kubrick, and later, Toffler, were saying *what if*, and, *how about*, eventualities other dreamers had only dreamed about---and were, likely, reticent to address head-on. Futuristic prognostications are not for the timid. Those "dreamers" practiced the art of uncertainty as well as any avowed philosopher might. Seeing is, uh, believing. Mimicry is expectation---often based on interests, motives and preferences (IMPs). If, and only if, my notion(s) about contextual reality have validity, this falls together neatly with where we are---and are likely to go...it is not hard, only complicated.

  21. I fully agree that GPT4 is no more than a consciousness mimic, but I'd like to point out that your examples were all clearly chosen with your conclusion in mind.

    In each case, the mimicry involved displaying or outputting something that had minimal sophistication: a butterfly wing pattern, a canned recording of "Hello", a fairly inert and unimaginative linguistic factoid about what city is the capital of what state. In many cases, what allowed the mimicry was the complete arbitrariness of the linkage between S1 and F: wing pattern and toxicity, "Hello" sound and greeting intent. There is nothing intrinsically linking S1 and F, in these cases, and faking S1 can be easier than reproducing F.

    There are many alternate examples where the mimicry concept fails to apply. If I write a program that mimics a world-class chess player, I have not created a mimic of a chess player; I have created a chess player. If I create a humanoid robot that mimics world-class dancing, then I have created a dancer. If I earn my living by writing books that mimic literature, without any plagiarism, and I win a literary award, I have not merely mimicked writing. The fruits are themselves evidence of a sophisticated underlying function: we are seeing direct evidence of F, not some arbitrary external display.

    So, a robot saying, "I think therefore I am", might be a mimic. But at some stage, mimicry of intelligence is intelligence, and it makes no sense to call it mimicry.

    More argument is needed to determine whether consciousness falls under the same heading as writing, dancing, playing chess or being intelligent, or under the same heading as your simpler examples. I think it is somewhere in the middle, but it is closer to being a functional characteristic reliably revealed in behaviour than it is a secret inner property that can be falsely signalled through an arbitrary external token.

    For all that, I think consciousness refers to a specific cognitive architecture, and I think brute-force mimics will be widespread before that architecture is adopted in AIs, and many people will be fooled, so I think you are right in your ultimate conclusion.

  22. ...at some point...? Sorry, I do not know where that point is.

  23. Someone had something to say today about you and your book. I could not access that blog, and know little about either the *someone* or his work. Therefore, I cannot say further...
    Still with you,
    PDV.
