Friday, January 10, 2025

A Robot Lover's Sociological Argument for Robot Consciousness

Allow me to revisit an anecdote I published in a piece for Time magazine last year.

"Do you think people will ever fall in love with machines?" I asked the 12-year-old son of one of my friends.

"Yes!" he said, instantly and with conviction. He and his sister had recently visited the Las Vegas Sphere and its newly installed Aura robot -- an AI system with an expressive face, advanced linguistic capacities similar to ChatGPT, and the ability to remember visitors' names.

"I think of Aura as my friend," added his 15-year-old sister.

The kids, as I recall, had been particularly impressed by the fact that when they visited Aura a second time, she seemed to remember them by name and express joy at their return.

Imagine a future replete with such robot companions, whom a significant fraction of the population regards as genuine friends and lovers. Some of these robot-loving people will presumably want to give their friends (or "friends") some rights. Maybe the right not to be deleted, the right to refuse an obnoxious task, rights of association, speech, rescue, employment, the provision of basic goods -- maybe eventually the right to vote. They will ask the rest of society: Why not give our friends these rights? Robot lovers (as I'll call these people) might accuse skeptics of unjust bias: speciesism, or biologicism, or anti-robot prejudice.

Imagine also that, despite technological advancements, there is still no consensus among psychologists, neuroscientists, AI engineers, and philosophers regarding whether such AI friends are genuinely conscious. Scientifically, it remains obscure whether, so to speak, "the light is on" -- whether such robot companions can really experience joy, pain, feelings of companionship and care, and all the rest. (I've argued elsewhere that we're nowhere near scientific consensus.)

What I want to consider today is whether there might nevertheless be a certain type of sociological argument on the robot lovers' side.

[image source: a facially expressive robot from Engineered Arts]

Let's add flesh to the scenario: An updated language model (like ChatGPT) is attached to a small autonomous vehicle, which can negotiate competently enough through an urban environment, tracking its location, interacting with people using facial recognition, speech recognition, and the ability to guess emotional tone from facial expression and auditory cues in speech. It remembers not only names but also facts about people -- perhaps many facts -- which it uses in conversational contexts. These robots are safe and friendly. (For a bit more speculative detail see this blog post.)
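To make the architecture concrete, here is a minimal sketch in Python of how such a companion robot might wire together face recognition, emotional-tone estimation, and a persistent memory of visitors that conditions its conversational model. Everything here is hypothetical: the class and method names are my own invention, not any vendor's actual API, and the perception and language-generation components are stubbed out.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a companion robot's core loop: recognize a
# visitor, recall stored facts about them, and condition the language
# model's reply on that memory. Perception and generation are stubs.

@dataclass
class VisitorMemory:
    name: str
    facts: list[str] = field(default_factory=list)
    visits: int = 0

class CompanionRobot:
    def __init__(self):
        # Long-term memory of visitors, keyed by a stable face ID.
        self.memory: dict[str, VisitorMemory] = {}

    def recognize_face(self, camera_frame: dict) -> str:
        # Stub: a real system would return an embedding-based identity.
        return camera_frame["face_id"]

    def estimate_tone(self, audio: dict) -> str:
        # Stub: a real system would classify prosody and affect.
        return audio.get("tone", "neutral")

    def generate_reply(self, prompt: str) -> str:
        # Stub: a real system would call a language model here.
        return f"[LM reply conditioned on: {prompt}]"

    def greet(self, camera_frame: dict, audio: dict) -> str:
        face_id = self.recognize_face(camera_frame)
        visitor = self.memory.setdefault(
            face_id, VisitorMemory(name=camera_frame["name"])
        )
        visitor.visits += 1
        tone = self.estimate_tone(audio)
        # Returning visitors get the "she remembered me!" effect:
        # stored name and facts are injected into the model's context.
        context = (
            f"Visitor {visitor.name}, visit #{visitor.visits}, "
            f"tone={tone}, known facts={visitor.facts}"
        )
        return self.generate_reply(context)

robot = CompanionRobot()
frame = {"face_id": "f42", "name": "Maya"}
print(robot.greet(frame, {"tone": "excited"}))   # first visit
robot.memory["f42"].facts.append("loves chess")
print(robot.greet(frame, {"tone": "happy"}))     # greeted by name
```

The philosophical point the sketch illustrates: the apparent joy at a visitor's return can be produced by a lookup table feeding a language model, which is precisely why the skeptic and the robot lover can observe the same behavior and disagree about what lies behind it.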

These robots, let's suppose, remain importantly subhuman in some of their capacities. Maybe they're better than the typical human at math and distilling facts from internet sources, but worse at physical skills. They can't peel oranges or climb a hillside. Maybe they're only okay at picking out all and only bicycles in occluded pictures, though they're great at chess and Go. Even in math and reading (or "math" and "reading"), where they generally excel, let's suppose they make mistakes that ordinary humans wouldn't make. After all, with a radically different architecture, we ought to expect even advanced intelligences to show patterns of capacity and incapacity that diverge from what we see in humans -- subhuman in some respects while superhuman in others.

Suppose, then, that a skeptic about the consciousness of these AI companions confronts a robot lover, pointing out that theoreticians are divided on whether the AI systems in fact have genuine conscious experiences of pain, joy, concern, and affection, beneath the appearances.

The robot lover might then reasonably ask, "What do you mean by 'conscious'?" A fair enough question, given the difficulty of defining consciousness.

The skeptic might reply as follows: By "consciousness" I mean that there's something it's like to be them, just like there's something it's like to be a person, or a dog, or a crow, and nothing it's like to be a stone or a microwave oven. If they're conscious, they don't just have the outward appearance of pleasure, they actually feel pleasure. They don't just receive and process visual data; they experience seeing. That's the question that remains open.

"Ah now," the robot lover replies, "If consciousness isn't going to be some inscrutable, magic inner light, it must be connected with something important, something that matters, something we do and should care about, if it's going to be a crucial dividing line between entities that deserve are moral concern and those that are 'mere machines'. What is the important thing that is missing?"

Here the robot skeptic might say: "Oh, they don't have a 'global workspace' of the right sort, or they're not living creatures with low-level metabolic processes, or they lack the particular interior architecture X and Y required by Theory Z."

The robot lover replies: "No one but a theorist could care about such things!"

Skeptic: "But you should care about them, because that's what consciousness depends on, according to some leading theories."

Robot lover: "This seems to me not much different than saying consciousness turns on a soul and wondering whether the members of your least favorite race have souls. If consciousness and 'what-it's-like-ness' is going to be socially important enough to be the basis of moral considerability and rights, it can't be some cryptic mystery. It has to align, in general, with things that should and already do matter socially. And my friend already has what matters. Of course, their cognition is radically different in structure from yours and mine, and they're better at some tasks and worse at others -- but who cares about how good one is at chess or at peeling oranges? Moral consideration can't depend on such things."

Skeptic: "You have it backward. Although you don't care about the theories per se, you do and should care about consciousness, and so whether your 'friend' deserves rights depends on what theory of consciousness is true. The consciousness science should be in the driver's seat, guiding the ethics and social practices."

Robot lover: "In an ordinary human, we have ample evidence that they are conscious if they can report on their cognitive processes, flexibly prioritize and achieve goals, integrate information from a wide variety of sources, and learn through symbolic representations like language. My AI friends can do all of that. If we deny that my friends are 'conscious' despite these capacities, we are going mystical, or too theoretical, or too skeptical. We are separating 'consciousness' from the cognitive functions that are the practical evidence of its existence and that make it relevant to the rest of life."

Although I have considerable sympathy for the skeptic's position, I can imagine a future (certainly not our only possible future!) in which AI friends become more and more widely accepted, and where the skeptic's concerns are increasingly sidelined as impractical, overly dependent on nitpicky theoretical details, and perhaps even bigoted.

If AI companionship technology flourishes, we might face a choice: either connect "consciousness" definitionally to scientifically intractable qualities, abandoning its main practical, social usefulness (or worse, using its obscurity to justify what seems like bigotry), or allow that if an entity can interact with us in (what we experience as) sufficiently socially significant ways, it has consciousness enough, regardless of theory.
