Thursday, March 19, 2026

Backup and Death for Humanlike AI

Most AI systems can be precisely copied. Suppose this is also true of future conscious AI persons, if any exist. Backup and fissioning should then be possible, transforming the significance of identity and death in ways our cultural and conceptual tools can't currently handle.

Suppose that two humanlike AI neighbors move in next door to you, Shriya and Alaleh.[1] Shriya and Alaleh are (let's stipulate) conscious AI persons with ordinary, humanlike emotional range and, as far as feasible, ordinary, humanlike cognition.[2] Each undergoes an expensive annual backup procedure. Their information is securely stored, so that if the processors responsible for their personalities, values, skills, habits, and memories are destroyed, a new robotic body can be purchased and the saved information reinstalled. Subjectively, the restored person would be indistinguishable from the person at the time of the backup.

As it happens, Shriya dies in a parachuting accident. (Safety precautions for robot parachuters have yet to be perfected.) But "dies" isn't exactly the right word, since a week later a new Shriya arrives, restored from a backup made five months earlier. Shriya-2 says it feels as if she fell asleep in March, then awoke in August with no sense that time had passed.

Shriya-2 has no direct memories of the intervening months, though Alaleh fills her in on major events and selected details. She'll also need to retake her knitting course. She only died in the sense that Mario "dies" in Super Mario Bros: losing progress and returning to a save point -- so different from ordinary human and animal death that it really deserves a different word. Maybe this is why Shriya was so willing to parachute despite the risks.

Should you mourn Shriya's loss? Should Alaleh? There's something to mourn: Five months is not trivial. In one sense, a part of a life has been lost -- or maybe just forgotten? Is it more like amnesia?

Consider variations. Suppose Shriya hadn't been able to afford a backup for the past ten years and is restored to her twenty-five-year-old self instead of her thirty-five-year-old self. What if her last backup was at age five? That would be much more like death. The new Shriya would be nothing like the old, and would likely grow into a very different person. Is death, then, a matter of degree?

Shriya-2 receives the original Shriya's possessions. This "death" isn't enough to trigger inheritance by others. But what about contracts and promises made after the last backup? Suppose the original Shriya promised in July to deliver lectures in China, and Shriya-2 -- who has no memory of this and dreads the idea -- must decide whether to honor the commitment. If the backup is from five months before, perhaps she should. If it's from five years before, maybe not. And if the backup is from childhood, presumably not.

What about reward and punishment? Should Shriya-2 accept a Nobel prize for work done post-backup? Should Shriya-2 be imprisoned for crimes committed in July, which she couldn't possibly remember having committed and which -- she might plausibly say -- were committed by a different person? In defense of this view, Shriya-2 might offer a thought experiment: If she had been installed in a duplicate body immediately after the March backup, thereafter living her own life, she'd have no criminal responsibility for what her other branch did in July. The only difference between that case and the actual case is a delay before installation.

Suppose Shriya-2 plunges into unrelenting depression. She ends her life, hoping that a new Shriya-3, reinstalled from a pre-depression save point, will find a new, happier way forward. Is that suicide?

If someone kills Shriya-2, is that murder? Does it matter whether the backup was ten days ago or ten years ago?

A fire sweeps through your neighborhood. The firefighters can rescue either you and your spouse, two ordinary humans, or Shriya and Alaleh, who have backups from seven months ago. Probably they should save you and your spouse? What if the backups were from ten years ago, or from childhood?

Should healthcare be more heavily subsidized for ordinary humans than for AI persons whose maintenance is equally costly? If irreplaceable humans are always prioritized, then human irrecoverability becomes a source of privilege, and AI persons will not enjoy fully equal rights in certain respects.

How obligated are we to store the backups properly? Is this a public service that should be subsidized for less wealthy AI persons? If Dr. Evil deletes Shriya's backup, he has surely wronged Shriya by putting her at risk, even if the backup is never needed and the deletion goes unnoticed. But how much has he wronged her, and in what way exactly? Is it similar to assault? How much does it differ from ordinary reckless endangerment? Does it depend on whether we regard Shriya-2 as the same person as the original Shriya, or as a distinct but similar successor?

What if the backup is imperfect? How much divergence in personality, values, memories, habits, and skills is tolerable before the appropriate attitude toward Shriya-2 changes -- whatever the appropriate attitude is? Small imperfections are surely acceptable. People change in small, arbitrary ways from day to day. Huge differences would presumably make it appropriate to regard the new entity as merely resembling Shriya, rather than being a restored version of her. Once again, this appears to be a matter of degree, laid uncomfortably across crude categorical properties like "same person" and "different person".

We're in unfamiliar territory, where our usual understandings of death and personal continuity no longer straightforwardly apply. If such AI systems ever come to be, we will need to develop new words, concepts, and customs.

[Data and Lore from Star Trek; image source]

---------------------------------------

[1] Names randomly chosen from lists of former lower division students, excluding Jesus, Mohammed, and extremely unusual names.

[2] Unless humanlikeness is enforced by policy, this might not be what we should expect: See Chilson and Schwitzgebel 2026. For some puzzles about AI with different emotional ranges, see "How Much Should We Give to a Joymachine?" (Dec 24, 2025).

---------------------------------------

Related: Weird Minds Might Destabilize Human Ethics (Aug 13, 2015).
