Tuesday, March 01, 2022

Do Androids Dream of Sanctuary Moon?

guest post by Amy Kind

In the novel that inspired the movie Blade Runner, Philip K. Dick famously asked whether androids dream of electric sheep. Readers of the Murderbot series by Martha Wells might be tempted to ask a parallel question: Do androids dream of The Rise and Fall of Sanctuary Moon?

Let me back up a moment for those who haven’t read any of the works making up The Murderbot Diaries.[1] The series’ titular character is a SecUnit (short for Security Unit). SecUnits are bot-human constructs, and though they are humanoid in form, they generally don’t act especially human-like and they have all sorts of non-human attributes including a built-in weapons system. Like other SecUnits, Murderbot has spent most of its existence providing security to humans who are undertaking various scientific, exploratory, or commercial missions. But unlike other SecUnits, Murderbot has broken free of the tight restrictions and safeguards that are meant to keep it in check. About four years prior to the start of the series, Murderbot had hacked its governor module, the device that monitors a SecUnit and controls its behavior, sometimes by causing it pain, sometimes by immobilizing it, and sometimes by ending its existence.

So how has Murderbot taken advantage of its newfound liberty? How has it kept itself occupied in its free time? The answer might initially seem surprising: Murderbot has spent an enormous amount of its downtime watching and rewatching entertainment media. In particular, it’s hooked on a serial drama called The Rise and Fall of Sanctuary Moon. We’re not told very much about Sanctuary Moon, or why it would be especially captivating to a SecUnit, though we get some throw-away details now and then over the course of the series. We know it takes place in space, that it involves murder, sex, and legal drama, and that it has at least 397 episodes. In a 2020 interview with Newsweek, Wells said that the show “is kind of based on How to Get Away with Murder, but in space, on a colony, with all different characters and hundreds more episodes, basically.”

It's not uncommon for the sophisticated AIs of science fiction to adopt hobbies and pursue various activities in their leisure time. Andrew, the robot in Asimov’s “The Bicentennial Man,” takes up wood carving, while Data, the android of Star Trek, takes up painting and spends time with his cat, Spot. HAL, the computing system built into the Discovery One spaceship in 2001: A Space Odyssey, plays chess. But it does seem fairly unusual for an AI to spend so much of its time binge-watching entertainment media. Murderbot’s obsession (one might even say addiction) is somewhat puzzling, at least to its human clients. In All Systems Red, when one of these clients reviews the SecUnit’s personal logs to see what it’s been up to, he discovers that it has downloaded 700 hours of media in the short time since their spacecraft landed on the planet they are exploring. The client hypothesizes that Murderbot must be using the media for some hidden, possibly nefarious purpose, perhaps to mask other data. As the client says, “It can’t be watching it, not in that volume; we’d notice.” (One has to love Murderbot’s response: “I snorted. He underestimated me.”)

Over the course of the series, as we learn more and more about Murderbot, the puzzle starts to dissipate. Certainly, Sanctuary Moon is entertainment for Murderbot. It’s an amusing diversion from its daily grind of security work. But it’s also much more than that. As Murderbot explicitly tells us, rewatching old episodes calms it down in times of stress. It borrows various details from Sanctuary Moon to help it in its work, as when it adopts one of the character’s names as an alias or when it decides what to do based on what characters on the show have done in parallel scenarios. And watching this serial helps Murderbot to process emotions. As it states on more than one occasion, it doesn’t like to have emotions about real life and would much prefer to have them about the show.

Though Murderbot is not comfortable engaging in self-reflection and prefers to avoid examination of its feelings and motivations, it cannot escape this altogether. We do see occasional moments of introspection. One particularly illuminating moment comes during an exchange between the SecUnit and Mensah, the human to whom it is closest. In the novella Exit Strategy, when Mensah asks why it likes Sanctuary Moon so much, it doesn’t know how to answer at first. But then, once it pulls up the relevant memory, it’s startled by what it discovers and says more than it means to: “It’s the first one I saw. When I hacked my governor module and picked up the entertainment feed. It made me feel like a person.”

When Mensah pushes Murderbot for more, for why Sanctuary Moon would make it feel that way, it replies haltingly:

“I don’t know.” That was true. But pulling the archived memory had brought it back, vividly, as if it had all just happened. (Stupid human neural tissue does that.) The words kept wanting to come out. It gave me context for the emotions I was feeling, I managed not to say. “It kept me company without…”

“Without making you interact?” she suggested.

Not only does Murderbot want to avoid having emotions about events in real life, it also wants to avoid emotional connections with humans. It is scared to form such connections. But a life without any connection is a lonely one. For Murderbot, watching media is not just about combatting boredom. It’s also about combatting loneliness.

As it turns out, then, Murderbot is addicted to Sanctuary Moon for many of the same reasons that any of us humans are addicted to the shows we watch – whether it’s Ted Lasso or Agents of S.H.I.E.L.D. or Buffy the Vampire Slayer. These shows are diverting, yes, but they also bring us comfort, they give us outlets for our emotions, and they help us to fight against isolation. (Think of all the pandemic-induced binge-watching of the last two years.) So even though it might seem surprising at first that a sophisticated AI would want to devote so much of its time to entertainment media, it really is no more surprising than the fact that so many of us want to devote so much of our time to the same thing. Though it seems tempting to ask why an AI would do this, the only real answer is simply: Why wouldn’t it?

The reflections in this post thus bring us to a further moral about science fiction and what we can learn from it about the nature of artificial intelligence. In our abstract thinking about AI, we tend to get caught up in some Very Big Questions: Could they really be intelligent? Could they be conscious? Could they have emotions? Could we love them, and could they love us? None of these questions is easy to answer, and sometimes it’s hard to see how we could make progress on them. So perhaps what we need to do is to step back and think about some smaller questions. It’s here, I think, that science fiction can prove especially useful. When we try to imagine an AI existence, as works of science fiction help us to do, we need to imagine that life in a multi-faceted way. By thinking about what a bot’s daily life might be like, not just how a bot would interact with humans but how it would make sense of those interactions, or how it would learn to get better at them, or even just by thinking about what a bot would do in its free time, we start to flesh out some of our background assumptions about the capabilities of AI. In making progress on these smaller questions, perhaps we’ll also find ourselves better able to make progress on the bigger questions as well. To understand better the possibilities of AI sentience, we have to better understand the contours of what sentience brings along with it.

Ultimately, I don’t know whether androids would dream of Sanctuary Moon, or even of anything at all.[2] But thinking about why they might be obsessed with entertainment media like this can help us to get a better big-picture understanding of the sentience of an AI system like Murderbot… and perhaps even a better understanding of our own sentience as well.


[1] And if you haven’t read them yet, what are you waiting for? I highly recommend them – and rest assured, this post is free of any major spoilers.

[2] Though see Asimov’s story “Robot Dreams” for further reflection on this.



Postscript by Eric Schwitzgebel:

This concludes Amy Kind's guest blogging stint at The Splintered Mind. Thanks, Amy, for this fascinating series of guest posts!

You can find all six of Amy's posts under the label Amy Kind.


D said...

I've never really been clear to what extent Murderbot is a cyborg like Major Makoto Kusanagi (a human brain with fully android body) or Robocop (the same, but with a few other human parts like the lower half of a face) or whether it is more properly considered an AI. I wonder whether Wells left it ambiguous so that the reader could take it in whatever way they needed to so that we could accept that it has its own conscious point of view?

Daniel Polowetzky said...

The fictional scenario of highly developed conscious AI in framing these questions may spur the imagination and motivate self-reflection, but it does so because it begs the question of its possibility in the first place.
Assuming that it is possible to manufacture consciousness as described in the science fiction series, it wouldn’t be surprising that such AI has a human-like inner life. It’s presupposed by the scenario.
The Lassie story presupposes a dog that can communicate that Timmy is stuck in a well.

Arnold said...

Amy Kind, could you say something about where 'intension' of sentience is, in human sensation emotion mentation, today...
...I.e.: the internal content of a concept. Google

Callan said...

One way of looking at it is it found religion.

chinaphil said...

I happened to be rewatching the movie Her yesterday, and it reflects some similar interests. The format of Her makes it hard to gain that full picture - we see "Her" (Samantha, a conscious version of Siri) exclusively through the lens of her relationship with a human, so we're not getting a fully rounded view. But she does sometimes talk about her life away from him; her conversations with other AIs and her goals and dreams, which end up diverging sharply from the human's.
This seems like an example of thick storytelling, as opposed to thin thought experiments, as you discussed in an earlier post.