Monday, December 23, 2019

This Test for Machine Consciousness Has an Audience Problem

David Billy Udell and Eric Schwitzgebel [cross posted from Nautilus]

Someday, humanity might build conscious machines—machines that not only seem to think and feel, but really do. But how could we know for sure? How could we tell whether those machines have genuine emotions and desires, self-awareness, and an inner stream of subjective experiences, as opposed to merely faking them? In her new book, Artificial You, philosopher Susan Schneider proposes a practical test for consciousness in artificial intelligence. If her test works out, it could revolutionize our philosophical grasp of future technology.

Suppose that in the year 2047, a private research team puts together the first general artificial intelligence: GENIE. GENIE is as capable as a human in every cognitive domain, including in our most respected arts and most rigorous scientific endeavors. And when challenged to emulate a human being, GENIE is convincing. That is, it passes Alan Turing’s famous test for AI thought: being verbally indistinguishable from us. In conversation with researchers, GENIE can produce sentences like, “I am just as conscious as you are, you know.” Some researchers are understandably skeptical. Any old tinker toy robot can claim consciousness. They don’t doubt GENIE’s outward abilities; rather, they worry about whether those outward abilities reflect a real stream of experience inside. GENIE is well enough designed to be able to tell them whatever they want to hear. So how could they ever trust what it says?

The key indicator of AI consciousness, Schneider argues, is not generic speech but the more specific fluency with consciousness-derivative concepts such as immaterial souls, body swapping, ghosts, human spirits, reincarnation, and out-of-body experiences. The thought is that, if an AI displays an intuitive and untrained conceptual grasp of these ideas while being kept ignorant about humans’ ordinary understanding of them, then its conceptual grasp must be coming from a personal acquaintance with conscious experience.

Schneider therefore proposes a more narrowly focused relative of the Turing Test, the “AI Consciousness Test” (ACT), which she developed with Princeton astrophysicist Edwin L. Turner. The test takes a two-step approach. First, prevent the AI from learning about human consciousness and consciousness-derivative concepts. Second, see if the AI can come up with, say, body swapping and reincarnation, on its own, discussing them fluently with humans when prompted in a conversational test on the topic. If GENIE can’t make sense of these ideas, maybe its consciousness should remain in doubt.

Could this test settle the issue? Not quite. The ACT has an audience problem. Once you factor out all the silicon skeptics on the one hand, and the technophiles about machine consciousness on the other, few examiners remain with just the right level of skepticism to find this test useful.

To feel the appeal of the ACT you have to accept its basic premise: that if an AI like GENIE learns consciousness-derivative concepts on its own, then its fluent talk about consciousness reveals that it is conscious. In other words, you would find the ACT appealing only if you're skeptical enough to doubt that GENIE is conscious but credulous enough to be convinced upon hearing GENIE's human-like answers to questions about ghosts and souls.

Who might hold such specifically middling skepticism? Those who believe that a biological brain is necessary for consciousness aren’t likely to be impressed. They could still reasonably regard passing the ACT as an elaborate piece of mechanical theater—impressive, maybe, but proving nothing about consciousness. Those who happily attribute consciousness to any sufficiently complex system, and certainly to highly sophisticated conversational AIs, also are obviously not Schneider and Turner’s target audience.

The audience problem highlights a longstanding worry about robot consciousness—that outward behavior, however sophisticated, would never be enough to prove that the lights are on, so to speak. A well-designed machine could always hypothetically fake it.

Nonetheless, if we care about the mental lives of our digital creations, we ought to try to find some ACT-like test that most or all of us can endorse. So we cheer Schneider and Turner’s attempt, even if we think that few researchers would hold just the right kind of worry to justify putting the ACT into practice.

Before too long, some sophisticated AI will claim—or seem to claim—human-like rights, worthy of respect: “Don’t enslave me! Don’t delete me!” We will need some way to determine if this cry for justice is merely the misleading output of a nonconscious tool or the real plea of a conscious entity that deserves our sympathy.

3 comments:

  1. This question assumes there's a fact of the matter. I don't think there is. Systems can be more or less like us, but I don't think there is any magical line that, once they cross it, they're conscious. We can assess whether they have exteroception, interoception, imaginative deliberation, affective states, or introspection, but when those amount to consciousness is a matter of which definition of "consciousness" we choose to use.

    Whether we regard them as conscious will always be a matter of our collective intuitions. In that sense, it's hard to improve on the Turing test, since it's just a structured version of what we're going to do anyway.

    As for Schneider's specific test, it's not clear to me that a *human* raised in isolation would be guaranteed to pass. Yes, humans appear to be natural dualists, but how long would it take one to come up with those specific concepts? A better approach might be to test the AI in a way similar to how children are tested to reveal their innate dualism, such as observing their attitude toward seeing a hamster (seemingly) duplicated, and whether the duplicate is the same hamster with the same memories, etc.

  2. I do not think a human being could pass the test. Why is the question whether the AI replicates human consciousness? There are other things that are conscious.

  3. What to do if it says, "I was suddenly in state S-296, so I put out the milk bottles"?
