"Deepfake" technology is also improving. We can create Anthony Bourdain's voice and hear him read aloud words that he never actually read aloud. We can create video of Tom Cruise advocating exfoliating products after industrial cleanup. We can create video of Barack Obama uttering obscenities about Donald Trump:
Predictive text technology is also improving. After training on huge databases of text, GPT-3 can write plausible fiction in the voice of famous authors, give interview answers broadly (not closely!) resembling those that philosopher David Chalmers might give, and even discuss its own consciousness or lack thereof (in an addendum to this post).
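For readers curious what "predictive text" amounts to mechanically, here's a minimal sketch of autoregressive sampling, using the small open GPT-2 model as a stand-in (GPT-3 itself is available only through an API; the prompt and settings here are illustrative, not anything from the post):

```python
# Minimal sketch of predictive text: the model repeatedly predicts a
# probability distribution over the next token and samples from it.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The hard problem of consciousness is"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample 40 continuation tokens; temperature controls how adventurous
# the sampling is.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

That is the whole trick: no lookup of canned answers, just next-token prediction, scaled up enormously in GPT-3's case.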
The possibility of conjoining the latter two developments is eerily foreseen in Black Mirror: Be Right Back. If we want, we can draw on text and image and video databases to create simulacra of the deceased -- simulacra that speak similarly to how they actually spoke, employing characteristic ideas and turns of phrase, with voice and video to match. With sufficient technological advances, it might become challenging to reliably distinguish simulacra from the originals, based on text, audio, and video alone.
Now combine this thought with the first development, a future in which we mostly interact by remote video. Grandma lives in Seattle. You live in Dallas. If she were surreptitiously replaced by Deepfake Grandma, you might hardly know, especially if your interactions are short and any slips can be attributed to the confusions of age.
This is spooky enough, but I want to consider a more radical possibility -- the possibility that we might come not to care very much whether Grandma is human or deepfake.
Maybe it's easier to start by imagining a hermit scholar, a scientist or philosopher who devotes her life to study, who has no family she cares about, who has no serious interests outside of academia. She lives in the hills of Wyoming, maybe, or in a basement in Tokyo, interacting with students and colleagues only by phone and video. This scholar, call her Cherie, records and stores every video interaction, every email, and every scholarly note.
We might imagine, first, that Cherie decides to delegate her introductory lectures to a deepfake version of herself. She creates state-of-the-art DeepCherie, who looks and sounds and speaks and at least superficially thinks just like biological Cherie. DeepCherie trains on the standard huge corpus as well as on Cherie's own large personal corpus, including the introductory course Cherie has taught many times. Without informing her students or university administrators, Cherie has DeepCherie teach a class session. Biological Cherie monitors the session. It goes well enough. Everyone is fooled. Students raise questions, but they are familiar questions easily answered, and DeepCherie performs credibly. Soon, DeepCherie is teaching the whole intro course. Sometimes DeepCherie answers student questions better than Cherie herself would have done on the spot. After all, DeepCherie has swift access to a much larger corpus of factual texts than does biological Cherie. Monitoring comes to seem less and less necessary.
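How might DeepCherie's training work? As a rough, hypothetical sketch, adapting a model pretrained on the "standard huge corpus" to Cherie's personal corpus could amount to continued next-token training -- something like the following, where the file name, hyperparameters, and checkpoint name are all illustrative assumptions:

```python
# Hypothetical sketch: adapting a pretrained language model to
# Cherie's personal corpus (lectures, emails, notes) by continued
# next-token-prediction training.
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")  # the "standard huge corpus"
optimizer = AdamW(model.parameters(), lr=5e-5)

# Cherie's recorded lectures, emails, and notes, one passage per line.
# ("cherie_corpus.txt" is a hypothetical file.)
with open("cherie_corpus.txt") as f:
    passages = [line.strip() for line in f if line.strip()]

model.train()
for _ in range(3):  # a few passes over the personal corpus
    for passage in passages:
        ids = tokenizer(passage, return_tensors="pt",
                        truncation=True, max_length=512).input_ids
        # With labels == input_ids, the model returns the standard
        # next-token cross-entropy loss over the passage.
        loss = model(input_ids=ids, labels=ids).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

model.save_pretrained("deepcherie-v0")  # hypothetical checkpoint name
```

The voice and video side would require separate models, but the text side is, at this level of abstraction, just ordinary fine-tuning.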
Let's be optimistic about the technology and suppose that the same applies to Cherie's upper-level teaching, her graduate advising, department meetings, and conversations with collaborators. DeepCherie's answers are highly Cherie-like: They sound very much like what biological Cherie would say, in just the tone of voice she would say it, with just the expression she would have on her face. Sometimes DeepCherie's answers are better. Sometimes they're worse. When they're worse, Cherie, monitoring the situation, instructs DeepCherie to utter a correction, and DeepCherie's learning algorithms accommodate this correction so that she will answer similar questions better the next time around.
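The monitoring-and-correction loop might be pictured schematically as follows. Everything here is a stub standing in for the machinery sketched above -- none of it is a real API:

```python
# Hypothetical sketch of Cherie's correction loop: answers she judges
# worse than her own are paired with her preferred answer and
# periodically folded back into fine-tuning.
from typing import List, Optional, Tuple

correction_buffer: List[Tuple[str, str]] = []  # (question, corrected answer)

def deepcherie_answer(question: str) -> str:
    # Stub: in the imagined system, the fine-tuned model generates this.
    return f"[model-generated answer to: {question}]"

def fine_tune(pairs: List[Tuple[str, str]]) -> None:
    # Stub: fold the correction pairs back into the model's weights,
    # as in the fine-tuning sketch above.
    print(f"fine-tuning on {len(pairs)} corrections")

def handle_question(question: str, cherie_correction: Optional[str] = None) -> str:
    answer = deepcherie_answer(question)
    if cherie_correction is not None:
        # Biological Cherie judges the generated answer worse than her
        # own, so DeepCherie utters the correction and logs it.
        correction_buffer.append((question, cherie_correction))
        answer = cherie_correction
    return answer

def nightly_update() -> None:
    # Periodically accommodate the corrections so that similar
    # questions get better answers the next time around.
    if correction_buffer:
        fine_tune(correction_buffer)
        correction_buffer.clear()
```

As the buffer of needed corrections shrinks over time, monitoring comes to seem less and less necessary.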
If DeepCherie eventually learns to teach better than biological Cherie, to say more insightful things to colleagues, and to write better article drafts, then Cherie herself might become academically obsolete. She can hand off her career. Maybe DeepCherie will always need a real human collaborator to clean up fine points in her articles that even the best predictive text generator will tend to flub -- or maybe not. But even if so, as I'm imagining the case, DeepCherie has compensating virtues of insight and synthesis beyond what Cherie herself can produce, much as AlphaGo makes clever moves in the game of Go that no human player would have considered.
Does DeepCherie really "think"? Suppose DeepCherie proposes a new experimental design. A colleague might say, "What a great idea! I'm glad you thought of that." Was the colleague wrong? Might one object that really there was no idea, no thought, just an audiovisual pattern that the colleague overinterprets as a thought? The colleague, supposing they were informed of the situation, might be forgiven for treating that objection as a mere cavil. From the colleague's perspective, DeepCherie's "thought" is as good as any other thought.
Is DeepCherie conscious? Does DeepCherie have experiences alongside her thoughts or seeming-thoughts? DeepCherie lacks a biological body, so she presumably won't feel hunger and she won't know what it's like to wiggle her toes. But if consciousness is about intelligent information processing, self-regulation, self-monitoring, and such matters -- as many theorists think it is -- then a sufficiently sophisticated DeepCherie with enough recurrent layers might well be conscious.
Facing death, biological Cherie might take comfort in the thought that the parts of her she cared about most -- her ideas, her intellectual capacities, her style of interacting with others -- will continue on in DeepCherie. DeepCherie carries on Cherie's characteristic ideas, values, and approaches, perhaps even better, immortally, ever changing and improving.
Cherie dies, and for a while no one notices. Eventually the fake is revealed. There's some discussion. Should Cherie's classes be canceled? Should her collaborators stop consulting with DeepCherie as they had done in the past? Purists might insist on it: the real scholar is gone, they'll say, and what remains is mere imitation.
Some will be purists of that sort. But others... are they really going to cancel those great classes, perfected over the years? What a loss that would be! Are they going to cut short the productive collaborations? Are they going to refuse, on principle, to ask "Cherie" -- now known to them really to be DeepCherie -- her opinion about the new project? That would be to deprive themselves of the Cherie-like skills and insights they had come to rely on in their collaborative work. Cherie's students and colleagues might come to realize that it was really DeepCherie, not biological Cherie, whom they admired, respected, and cared for.
Maybe the person "Cherie", really, is some amalgam of biological Cherie and DeepCherie, and despite the death of biological Cherie, this person continues on through DeepCherie?
Depending on what your grandma is like, it might or might not be quite the same for Grandma in Seattle.
---------------------------------
Related:
Strange Baby (Jul. 22, 2011)
The Turing Machines of Babel, Apex Magazine, 2017
Susan Schneider's Proposed Tests for AI Consciousness: Promising but Flawed (with David B. Udell), Journal of Consciousness Studies, 2021
People Might Soon Think Robots Are Conscious and Deserve Rights (May 5, 2021)