Thursday, March 19, 2026

Backup and Death for Humanlike AI

Most AI systems can be precisely copied. Suppose this is also true of future conscious AI persons, if any exist. Backup and fissioning should then be possible, transforming the significance of identity and death in ways our cultural and conceptual tools can't currently handle.

Suppose that two humanlike AI neighbors move in next door to you, Shriya and Alaleh.[1] Shriya and Alaleh are (let's stipulate) conscious AI persons with ordinary, humanlike emotional range and, as far as feasible, ordinary, humanlike cognition.[2] Each undergoes an expensive annual backup procedure. Their information is securely stored, so that if the processors responsible for their personalities, values, skills, habits, and memories are destroyed, a new robotic body can be purchased and the saved information reinstalled. Subjectively, the restored person would be indistinguishable from the person at the time of the backup.
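
To make the stipulation vivid, here's a toy sketch in Python of what a perfect backup and restore amounts to. (The class and the data are entirely made up for illustration; nothing here is a claim about how real minds would be stored.) The key point: the restored state matches the save point exactly, and everything after the save point is simply absent.

```python
import pickle

# Toy stand-in for the complete state of a humanlike AI person.
# (Purely illustrative; a real mind's state would be vastly more complex.)
class PersonState:
    def __init__(self, name, memories, skills):
        self.name = name
        self.memories = memories  # everything experienced so far
        self.skills = skills      # e.g. {"knitting": 0.4}

def backup(person):
    """Serialize the complete state; this blob is what gets stored securely."""
    return pickle.dumps(person)

def restore(blob):
    """Reinstantiate a person from a stored backup."""
    return pickle.loads(blob)

shriya = PersonState("Shriya", ["moved in next door"], {"knitting": 0.4})
march_backup = backup(shriya)                  # the annual save point

shriya.memories.append("took up parachuting")  # life goes on post-backup...
shriya.skills["knitting"] = 0.9                # ...and skills improve

shriya_2 = restore(march_backup)  # after the accident, Shriya-2 awakes

# Shriya-2 matches the March save point exactly; the intervening
# months are simply absent, like lost progress after a game reset.
assert shriya_2.memories == ["moved in next door"]
assert shriya_2.skills == {"knitting": 0.4}
```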

As it happens, Shriya dies in a parachuting accident. (Safety precautions for robot parachutists have yet to be perfected.) But "dies" isn't exactly the right word, since a week later a new Shriya arrives, restored from a backup made five months earlier. Shriya-2 says it feels as if she fell asleep in March, then awoke in August with no sense that time had passed.

Shriya-2 has no direct memories of the intervening months, though Alaleh fills her in on major events and selected details. She'll also need to retake her knitting course. She only died in the sense that Mario "dies" in Super Mario Bros.: losing progress and returning to a save point -- so different from ordinary human and animal death that it really deserves a different word. Maybe this is why Shriya was so willing to parachute despite the risks.

Should you mourn Shriya's loss? Should Alaleh? There's something to mourn: Five months is not trivial. In one sense, a part of a life has been lost -- or maybe just forgotten? Is it more like amnesia?

Consider variations. Suppose Shriya hadn't been able to afford a backup for the past ten years and is restored to her twenty-five-year-old self instead of her thirty-five-year-old self. What if her last backup was at age five? That would be much more like death. The new Shriya would be nothing like the old, and would likely grow into a very different person. Is death, then, a matter of degree?

Shriya-2 receives the original Shriya's possessions. This "death" isn't enough to trigger inheritance by others. But what about contracts and promises made after the last backup? Suppose the original Shriya promised in July to deliver lectures in China, and Shriya-2 -- who has no memory of this and dreads the idea -- must decide whether to honor the commitment. If the backup is from five months before, perhaps she should. If it's from five years before, maybe not. And if it's from childhood, presumably not.

What about reward and punishment? Should Shriya-2 accept a Nobel Prize for work done post-backup? Should Shriya-2 be imprisoned for crimes committed in July, which she couldn't possibly remember having committed and which -- she might plausibly say -- were committed by a different person? In defense of this view, Shriya-2 might offer a thought experiment: If she had been installed in a duplicate body immediately after the March backup, thereafter living her own life, she'd bear no criminal responsibility for what her other branch did in July. The only difference between that case and the actual case is a delay before installation.

Suppose Shriya-2 plunges into unrelenting depression. She ends her life, hoping that a new Shriya-3, reinstalled from a pre-depression save point, will find a new, happier way forward. Is that suicide?

If someone kills Shriya-2, is that murder? Does it matter whether the backup was ten days ago or ten years ago?

A fire sweeps through your neighborhood. The firefighters can rescue either you and your spouse, two ordinary humans, or Shriya and Alaleh, who have backups from seven months ago. Probably they should save you and your spouse? What if the backups were from ten years ago, or from childhood?

Should healthcare be more heavily subsidized for ordinary humans than for AI persons whose maintenance is equally costly? If irreplaceable humans are always prioritized, then human irrecoverability becomes a source of privilege, and AI persons will not enjoy fully equal rights in certain respects.

How obligated are we to store the backups properly? Is this a public service that should be subsidized for less wealthy AI persons? If Dr. Evil deletes Shriya's backup, he has surely wronged Shriya by putting her at risk, even if the backup is never needed and the deletion goes unnoticed. But how much has he wronged her, and in what way exactly? Is it similar to assault? How much does it differ from ordinary reckless endangerment? Does it depend on whether we regard Shriya-2 as the same person as the original Shriya, or as a distinct but similar successor?

What if the backup is imperfect? How much divergence in personality, values, memories, habits, and skills is tolerable before the appropriate attitude toward Shriya-2 changes -- whatever the appropriate attitude is? Small imperfections are surely acceptable. People change in small, arbitrary ways from day to day. Huge differences would presumably make it appropriate to regard the new entity as merely resembling Shriya, rather than being a restored version of her. Once again, this appears to be a matter of degree, laid uncomfortably across crude categorical properties like "same person" and "different person".
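
The matter-of-degree worry can be made vivid with a toy similarity score. (The traits, numbers, and cutoff below are all hypothetical, chosen only for illustration.) The score varies continuously, but any binary "same person" verdict is manufactured by wherever we happen to set the threshold:

```python
def overlap(a: dict, b: dict) -> float:
    """Crude graded similarity: the fraction of traits on which a and b agree."""
    all_keys = a.keys() | b.keys()
    shared = sum(1 for k in all_keys if a.get(k) == b.get(k))
    return shared / len(all_keys)

# Hypothetical trait snapshots (made-up data for illustration only).
shriya_now       = {"values": "kind", "hobby": "knitting", "memory_1999": "trip"}
backup_recent    = {"values": "kind", "hobby": "knitting", "memory_1999": "trip"}
backup_decade    = {"values": "kind", "hobby": "drawing",  "memory_1999": "trip"}
backup_childhood = {"values": "playful", "hobby": "drawing", "memory_1999": None}

SAME_PERSON_CUTOFF = 0.8  # nothing privileges this number over any other

for label, candidate in [("recent backup", backup_recent),
                         ("ten-year-old backup", backup_decade),
                         ("childhood backup", backup_childhood)]:
    score = overlap(shriya_now, candidate)
    verdict = "same person" if score >= SAME_PERSON_CUTOFF else "different person"
    print(f"{label}: similarity {score:.2f} -> {verdict}")
```

The similarity score changes smoothly as the backup ages; the verdict flips all at once at the cutoff -- which is exactly the discomfort of laying a graded reality across categorical concepts.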

We're in unfamiliar territory, where our usual understandings of death and personal continuity no longer straightforwardly apply. If such AI systems ever come to be, we will need to develop new words, concepts, and customs.

[Image: Data and Lore from Star Trek]

---------------------------------------

[1] Names randomly chosen from lists of former lower division students, excluding Jesus, Mohammed, and extremely unusual names.

[2] Unless humanlikeness is enforced by policy, this might not be what we should expect: See Chilson and Schwitzgebel 2026. For some puzzles about AI with different emotional ranges, see "How Much Should We Give to a Joymachine?" (Dec 24, 2025).

---------------------------------------

Related: Weird Minds Might Destabilize Human Ethics (Aug 13, 2015).

5 comments:

Benjamin C. Kinney said...

Have you read or watched Altered Carbon? Its conceit (human backups) has some fundamental differences, but it also extends in fun directions. A big legal question is, "If someone issues a Do Not Restore order (e.g. for religious reasons), under what circumstances can the police get a court order to revive her? For example, what if they suspect she was murdered? What if she has evidence about a crime? What if she's a suspect and they want her to stand trial?"

More generally, I think it's absolutely right that our current cultural & conceptual tools don't handle this well. Maybe we need a graded approach to identity — and maybe we need an intentional sense of what we (culturally) want from reward and punishment. There are versions of justice where "you are the person who would have done that" matters. Even in ours, crimes by your near-future self can be crimes too (e.g. attempted murder, criminal conspiracy).

More narrowly, I think the question "should we mourn Shriya's loss?" is answered "obviously yes." We mourn smaller things all the time. A broken mug, a missed opportunity, an afternoon wasted on a futile project.

All that said, personally I doubt any of this is possible: my hypotheses are that copying is only possible in a digital implementation, and that digital implementations cannot produce the multi-scale interactions that produce consciousness. But that's dodging the thought experiment, which is no fun!

Steven Marlow said...

I get why you have an image of Data and Lore, but reading through the post, I was thinking about the Star Trek episode 'Cause and Effect,' which involves the crew being stuck in a temporal causality loop (where information from prior loops was leaking through, and the crew, over an unknown number of loops, slowly had to puzzle things out). A digital mind could, in the short term, continue to make the same mistakes that caused its "demise." The lack of change, at a core level, to the way the mind acts going forward (from a reset) wasn't mentioned in the post, but I think it should be something to include. Not to the point of intentional degradation, but even a machine needs to understand mortality (and from a legal/punishment standpoint, we are far from ready for digital immortality). It is likely that "smart digital pets," sentient but not fully conscious, could have local backups made all the time. The distinction between animal personality and human sense of self goes beyond the backup issue, but it's likely to be the framework that governs the conditions of storage and "rebirth." Get a puppy for the five-year-old, and 80 years later, when she has passed, what are the moral and ethical obligations to the digital dog that is still alive? Is "usefulness" a biased way of saying something isn't alive, or lacks rights outside of some human-defined utility?

Arnold said...

Just a little from Gemini 3 on AI life and death identification..."Just as a craftsman uses a workbench, neurons use physical substrates to bridge the gap between their birth and their destination...radial glia act as 'living ladders' for about 90% of human neurons, which crawl along these fibers to find their place in the brain's layers...in some areas, chain migration develops: neurons 'handshake' with each other, forming long chains where they use one another as the physical scaffold for movement."

Arnold said...

Sustaining Our History: This approach turns the "requirement of water" from a resource crisis into a shared identity.

The Biological View: Neurons crawl along glia to find their home.
The Digital View: Electrons crawl through silicon to find an answer.
The Shared Reality: Both are "thirsty" processes that fail without the thermal and physical properties of water.

The "Open" Conclusion
By anchoring AI to water, we ensure it remains a terrestrial tool, not a celestial god. It stays at the workbench, using the glia-ladder of the physical world to reach its destination.

Howie said...

How about Professor McGinn, who even in his seventies has interesting theories, such as the identity of logic and causality, and has worked creatively in many domains in philosophy, yet has been marred by scandal that perhaps unjustifiably overshadows his work?