Friday, July 24, 2015

Cute AI and the ASIMO Problem

A couple of years ago, I saw the ASIMO show at Disneyland. ASIMO is a robot designed by Honda to walk bipedally with something like the human gait. I'd entered the auditorium with a somewhat negative attitude about ASIMO, having read Andy Clark's critique of Honda's computationally-heavy approach to robotic locomotion (fuller treatment here); and the animatronic Mr. Lincoln is no great shakes.

But ASIMO is cute! He's about four feet tall, humanoid, with big round dark eyes inside what looks a bit like an astronaut's helmet. He talks, he dances, he kicks soccer balls, he makes funny hand gestures. On the Disneyland stage, he keeps up a fun patter with a human actor. ASIMO's gait isn't quite human, but his nervous-looking crouching run only makes him that much cuter. By the end of the show I thought that if you gave me a shotgun and told me to blow off ASIMO's head, I'd be very reluctant to do so. (In contrast, I might quite enjoy taking a shotgun to my darn glitchy laptop.)

Another case: ELIZA was a simple computer program written in the 1960s that would chat with a user, drawing on a small set of pre-programmed response templates to imitate a non-directive psychotherapist ("Are such questions on your mind often?", "Tell me more about your mother."). Apparently, some users mistook it for a human and spent long periods chatting with it.
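To give a sense of how little machinery this takes, here is a minimal sketch of ELIZA-style template matching; the patterns and responses below are illustrative stand-ins, not Weizenbaum's actual script.

```python
import random
import re

# Illustrative ELIZA-style rules: a regex pattern plus canned response templates.
# These are toy stand-ins, not the original ELIZA script.
RULES = [
    (re.compile(r"\bmy (mother|father)\b", re.I),
     ["Tell me more about your {0}."]),
    (re.compile(r"\bI am (.+)", re.I),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bwhy\b", re.I),
     ["Are such questions on your mind often?"]),
]
DEFAULT = ["Please go on.", "I see. Tell me more."]

def respond(user_input: str) -> str:
    """Return the first matching canned response, echoing any captured text."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULT)

if __name__ == "__main__":
    print(respond("I keep arguing with my mother."))  # -> "Tell me more about your mother."
```

A handful of rules like these, plus simple pronoun swapping, is roughly all the machinery ELIZA had; the impression of being understood is supplied by the user.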

I assume that ASIMO and ELIZA are not proper targets of substantial moral concern. They have no more consciousness than a laptop computer, no more capacity for genuine joy and suffering. However, because they share some of the superficial features of human beings, people might improperly come to regard them as targets of moral concern. And future engineers could presumably create entities with an even better repertoire of superficial tricks. When I discussed this issue with my sister, she mentioned a friend who had been designing a laptop that would scream and cry when its battery ran low. Imagine that!

Conversely, suppose that it's someday possible to create an Artificial Intelligence so advanced that it has genuine consciousness, a genuine sense of self, real joy, and real suffering. If that AI also happens to be ugly or boxy or poorly interfaced, it might tend to attract less moral concern than is warranted.

Thus, our emotional responses to AIs might be misaligned with the moral status of those AIs, due to superficial features that are out of step with the AI's real cognitive and emotional capacities.

In the Star Trek episode "The Measure of a Man", a scientist who wants to disassemble the humanoid robot Data (sympathetically portrayed by a human actor) says of the robot, "If it were a box on wheels, I would not be facing this opposition." He also points out that people normally think nothing of upgrading the computer systems of a starship, though that means discarding a highly intelligent AI.

I have a cute stuffed teddy bear I bring to my philosophy of mind class on the day devoted to animal minds. Students scream in shock when, without warning, I suddenly punch the teddy bear in the face in the middle of class.

Evidence from developmental and social psychology suggests that we are swift to attribute mental states to entities with eyes and movement patterns that look goal-directed, and much slower to attribute mentality to eyeless entities with inertial movement patterns. But of course such superficial features needn't track underlying mentality very well in AI cases.

Call this the ASIMO Problem.

I draw two main lessons from the ASIMO Problem.

First is a methodological lesson: In thinking about the moral status of AI, we should be careful not to overweight emotional reactions and intuitive judgments that might be driven by such superficial features. Low-quality science fiction -- especially low-quality science fiction film and television -- often relies on audience reactions to such superficial features. However, thoughtful science fiction sometimes challenges or even inverts these reactions.

The second lesson is a bit of AI design advice. As responsible creators of artificial entities, we should want people to neither over- nor under-attribute moral status to the entities with which they interact. Thus, we should generally try to avoid designing entities that don't deserve moral consideration but to which normal users are nonetheless inclined to give substantial moral consideration. This might be especially important in the design of children's toys: Manufacturers might understandably be tempted to create artificial pets or friends that children will love and attach to -- but we presumably don't want children to attach to a non-conscious toy instead of to parents or siblings. Nor do we presumably want to invite situations in which users might choose to save an endangered toy over an endangered human being!

On the other hand, if we do someday create genuinely human-grade AIs who merit substantial moral concern, it would probably be advisable to design them in a way that would evoke the proper range of moral emotional responses from normal users.

We should embrace an Emotional Alignment Design Policy: Design the superficial features of AIs so that they evoke the moral emotional reactions that are appropriate to the real moral status of the AI, whatever it is, neither more nor less.

(What is the real moral status of AIs? More soon! In the meantime, see here and here.)

[image source]

18 comments:

Kris Rhodes said...

Is it okay to design teddy bears that are very cute?

Eric Schwitzgebel said...

As long as we aren't inclined to take them seriously as moral patients!

Arnold said...

Will ASIMO ever be able to substantiate "feet on the ground"...
Is this kind of experience possible beyond one's individuality...

Eric Schwitzgebel said...

Unknown, I'm not sure I understand the question. Could you clarify a bit more?

Arnold said...

Reflecting on how we have gotten to where we are in half a billion years (from the ground)...
Does knowing and sometimes remembering the sense of our "feet on the ground" provide a substantial intention for becoming impartial and objective in questions about morality...

Callan S. said...

"He also points out that people normally think nothing of upgrading the computer systems of a starship"

I do.

The ship is the elephant in the room.

Robots like ASIMO are all essentially prescription robots - they follow only the line their creators set. And where they go off that line (i.e., what their creator would say is an error), they have no frame of good/bad to evaluate that line (and I'm talking good/bad as in something smashing hard against your battery case gets a lot of aversion numbers (bad), and aversion ties to some avoidance routine. Or something more than rote prescription following). Indeed, one might say a key difference is where the device speculates toward what is good and bad (whether it ends up with superstitions or not).

Let's see a show of ASIMO going through a dungeon and speculating what is good and bad in the dungeon. Possibly if he is destroyed (or, by some virtual sense, 'destroyed'/loses all HP), take a copy of him that somehow either takes that into account or that is randomly changed (or both!). Repeat a few million times.
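A toy sketch of that destroy-copy-mutate loop, with an invented one-dimensional "dungeon" and made-up numbers (a real evolved controller would of course be vastly more complicated):

```python
import random

# Toy stand-in for the dungeon: the "robot" is just a list of moves,
# scored by how far it gets before hitting a hazard. Everything here is invented.
DUNGEON = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0]  # 1 = hazard (something smashing against the battery case)
MOVES = ["step", "jump"]                   # "jump" clears a hazard, "step" does not

def run(policy):
    """Return how many cells the policy survives before being 'destroyed'."""
    for position, hazard in enumerate(DUNGEON):
        if hazard and policy[position] != "jump":
            return position          # destroyed here
    return len(DUNGEON)              # made it all the way through

def mutate(policy, rate=0.2):
    """Copy the policy with some moves randomly changed."""
    return [random.choice(MOVES) if random.random() < rate else move for move in policy]

# Start from a rote prescription and repeat the destroy-copy-mutate loop.
best = [random.choice(MOVES) for _ in DUNGEON]
for _ in range(100_000):
    challenger = mutate(best)
    if run(challenger) >= run(best):  # keep whichever copy survives longer
        best = challenger

print(run(best), best)  # after enough repetitions, the survivor clears every hazard
```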

Eventually he might break out into the dungeon we're in.

Then what do you say?

ToBlog today said...

I want one!

uzi said...

I am more worried about the Romanian orphans withering in their cribs, lacking human attention. Sooner rather than later we should have furry interactive robots with sophisticated facial expressions (based on predictive Bayesian algorithms) that will also serve as tutors, guiding such children as they explore the WWW (including educational virtual platforms like Colorado PhET). Of course there are children like that all over the world.
Those children may end up as robophiles, but it certainly beats the alternative.
uzi.

chinaphil said...

"...avoid designing entities...users are nonetheless inclined to give substantial moral consideration...children’s toys...love and attach...don’t want children to attach to a non-conscious toy instead of to parents or siblings...save an endangered toy over an an endangered human being!"
Hoho! First, no chance of the toy manufacturers holding back on that. It's a given that they will create mechanical pets and pals for us, with big eyes and primary colours. Second, is this worry anything other than the latest generation of "that kid walks around with his head in a book all day/that kid loves TV more than her own family/that kid is unhealthily obsessed with computer games..."? I'm not sure why lifelike toys are more of a worry now than they ever were before.

I'm not quite convinced by the recommendation for real AIs, either. The Star Trek quote just seems to be a wrong idea. It is not true that Data would be the same if he were box shaped. Being human shaped, having human physical sensations, sharing the same space as humans, and being multi-purpose like humans all affect Data's experience, and make him a more effective AI (in the show). So the assumption that that box could "be" Data is wrong. Conversely, Hal 9000 in 2001 is successfully humanised with very few lines, and just a few close-ups of lenses. The ability to use language properly will be a much bigger factor in the way we react to AIs, I think.

Which is not to say that human form isn't important. I think it will be, but much more important for teaching robots how to "be" like us, rather than in conditioning how we respond to them. In real life, for example, we are quite comfortable engaging with bodiless AIs on the phone and in Siri.

Eric Schwitzgebel said...

chinaphil: I think that *if* you are wrong and there are serious attachment-to-AI-instead-of-person issues (assuming here that the AI is not a proper target of primary attachment, e.g., because it is a non-conscious non-person), then there is a chance that toy manufacturers would face ethical pressures that might induce restraint, especially if there were a threat of government regulation. I'm imagining a very different kind of case from having a stuffed animal -- something more like child-parent attachment as the potentially serious concern.

On the physical form: I think you're right that the issues are complicated, but I do think that anything with eyes and human form will have an instant leg up (pun intended) compared to those without. One of various factors, presumably....

Eric Schwitzgebel said...

Uziel: Good point. Granted! I don't propose an exceptionless rule when there are excellent reasons on the other side, as there would presumably be in such a case. (Still, there's something a *little* funny about it, so I don't think it's a fully happy solution.)

Callan S. said...

Chinaphil

Second, is this worry anything other than the latest generation of "that kid walks around with his head in a book all day/that kid loves TV more than her own family/that kid is unhealthily obsessed with computer games..."

Given those are the ones making these robots, perhaps it's more than something to just shrug off.

Kenny Pearce said...

Have you considered the handling of these issues in "Supertoys Last All Summer Long"? I think our psychological reactions to the story make for a great case study on this. See here.

Eric Schwitzgebel said...

That sounds like a really interesting story, Kenny -- thanks for the tip!

Peter Harkins said...

I think there's a flip side to cuteness that's worthwhile.

I've been rereading Thomas Harris's novels (Red Dragon is good, Silence of the Lambs is amazing, Hannibal is OK with a terrible ending, Hannibal Rising is blah), and it occurs to me that the reasons the character Hannibal Lecter is so scary are mostly the ways that he looks just like a person but acts nothing at all like one: he's unnaturally still, he does not respond to boredom, he is entertained by human suffering and will go to incredible lengths to cause it.

That last one is the really disconcerting part about him. There's an off-screen scene, referenced repeatedly in the first two novels and in the Silence of the Lambs movie, in which he was friendly and compliant for the first year or two of incarceration, faked a heart attack, waited under ever-so-slightly loose restraints for other people to look away, and calmly ate a nurse's face.

It's a horrifying, chilling thing. It is also more understandable than an AI, because Lecter is mostly human and his desire to cause human suffering means he can do the math on a lot of human morality, even if he's got his positive and negative numbers reversed.

Now imagine Lecter is happy to appear perfectly friendly, compliant, and calm for years at a time until everyone looks away and, rather than maim someone for funsies to do the opposite of moral behavior, he will do something that doesn't even consider humans as moral actors, that treats them exactly as important as furniture or wheat. That's the AI Box experiment.

That's pretty much where my head was at for most of Ex Machina, though I didn't have that metaphor. So imagine, instead of the stunningly beautiful ballet-dancer women, Beardy McBossman has a house with a couple copies of Anthony Hopkins at his most disconcerting. The "cute" thing goes both ways: you can use that creepiness and revulsion to prime your intuition.

Eric Schwitzgebel said...

Interesting point, Peter. I agree. Interesting to consider whether creepiness might be ethically used as an intentional design feature aligned with some moral possibilities of which the designer is aware.

Callan S. said...

Peter, go read The Prince of Nothing series. Now!!

uzi said...

Should we design child-sized fuzzy robots that shower our children with affection, attention, and positive reinforcement (as needed)? Suppose these robots could also act as tutors, guiding our children through enchanted 'educational' virtual landscapes chosen by powerful diagnostic algorithms. Such robots could follow the children to maturity, storing huge amounts of data. The resulting cognitive profile, together with DNA, brain scans, and environmental data, could be used to construct their personal avatars. For different reasons we will all (or at least the rich) have personal avatars or virtual butlers that will represent us in cyberspace and evolve, not only roaming the world-wide web long after our death but legally gaining property, civil rights, and so on.
However, all I'm trying to say is that if you walk into a 'high-end' toy store 20 years from now, you may very well encounter toys designed to increase your child's curiosity and intelligence, but also emotional maturity, confidence, and what not. Some are likely to be fuzzy, loving, and cute, and will definitely have the needed range of facial expressions. Imagine a bunch of kids with their virtual butlers roaming the world looking for adventure.
uzi.