Friday, December 19, 2025

Debatable AI Persons: No Rights, Full Rights, Animal-Like Rights, Credence-Weighted Rights, or Patchy Rights?

I advise against creating AI entities who are debatably persons. If an AI system might -- but only might -- be genuinely conscious and deserving of the same moral consideration we ordinarily owe to human persons, then creating it traps us in a moral bind with no good solution. Either we grant it the full rights it might deserve and risk sacrificing real human lives for entities without interests worth that sacrifice, or we deny it full rights and risk perpetrating grievous moral wrongs against it.

Today, however, I'll set aside the preventative advice and explore what we should do if we nonetheless find ourselves facing debatable AI persons. I'll examine five options: no rights, full rights, animal-like rights, credence-weighted rights, and patchy rights.

[Paul Klee postcard, 1923]


No rights

This is the default state of the law. AI systems are property. Barring a swift and bold legal change, the first AI systems that are debatably persons will presumably also be legally considered property. If we do treat them as property, then we seemingly needn't sacrifice anything on their behalf. We humans could permissibly act in what we perceive to be our best interests: using such systems for our goals, deleting them at will, and monitoring and modifying them as we see fit for our safety and benefit. (Actually, I'm not sure this is the best attitude toward property, but set that issue aside here.)

The downside: If these systems actually are persons who deserve moral consideration as our equals, such treatment would be the moral equivalent of slavery and murder, perhaps on a massive scale.


Full rights

To avoid the risk of that moral catastrophe, we might take a "precautionary" approach: granting entities rights whenever they might deserve them (see Birch 2024, Schwitzgebel and Sinnott-Armstrong forthcoming). If there's a real possibility that some AI systems are persons, we should treat them as persons.

However, the costs and risks are potentially enormous. Suppose we think that some group of AI systems is 15% likely to be fully conscious, rights-deserving persons and 85% likely to be ordinary nonconscious artifacts. If we nonetheless treat them as full equals, then in an emergency we would have to rescue two of them over one human -- letting a human die for the sake of systems that are most likely just ordinary artifacts. We would also need to give these probably-not-persons a path to citizenship and the vote. We would need to recognize their rights to earn and spend money, quit their jobs for new careers, reproduce, and enjoy privacy and freedom from interference. If such systems exist in large numbers, their political influence could be enormous and unpredictable. If they are numerous, or few but skilled in lucrative tasks like securities arbitrage, they could accumulate enormous world-influencing wealth. And if they are permitted to pursue their aims with the full liberty of ordinary persons, without close monitoring and control, existential risks would substantially increase should they develop goals that threaten continued human existence.

All of this might be morally required if they really are persons. But if they only might be persons, it's much less clear that humanity should accept this extraordinary level of risk and sacrifice.


Animal-Like Rights

Another option is to grant these debatable AI persons neither full humanlike rights nor the status of mere property. One model is the protection we give to nonhuman vertebrates. Wrongly killing a dog can land you in jail in California, where I live, but it's not nearly as serious as murdering a person. Vertebrates can be sacrificed in lab experiments, but only with oversight and justification.

If we treated debatable AI persons similarly, deletion would require a good reason, and you couldn't abuse them for fun. But people could still enslave and kill them for their convenience, perhaps in large numbers, as we do with humanely farmed animals -- though of course many ethicists object to the killing of animals for food.

This approach seems better than no rights at all, since it would be a moral improvement and the costs to humans would be minimal -- minimal because whenever the costs risked being more than minimal, the debatable AI persons would be sacrificed. However, it doesn't really avoid the core moral risk. If these systems really are persons, it would still amount to slavery and murder.


Credence-Weighted Rights

Suppose we have a rationally justified 15% credence that a particular AI system -- call him Billy -- deserves the full moral rights of a person. We might then give Billy 15% of the moral weight of a human in our decision-making: 15% of any scalable rights, and a 15% chance of equal treatment for non-scalable rights. In an emergency, a rescue worker might save seven systems like Billy over one human, but the human over six Billies. Billy might be given a vote worth 15% of an ordinary citizen's. Assaulting, killing, or robbing Billy might draw only 15% of the usual legal penalty. Billy might have limited property rights, e.g., an 85% tax on all income. For non-scalable rights like reproduction or free speech, the Billies might enter a lottery, or some other creative reduction might be devised.
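To make the rescue arithmetic explicit, here is a minimal illustrative sketch in Python. The 15% credence and the seven-versus-six counts come from the example above; the function name weighted_value is just my own label, not part of any proposal:

    # Credence-weighted moral weight: a debatable AI person counts for
    # credence * (the weight of one human). A sketch, not a policy.

    CREDENCE = 0.15  # rationally justified credence that Billy is a person

    def weighted_value(n_billies: int, credence: float = CREDENCE) -> float:
        """Moral weight of n Billy-like systems, in human-equivalents."""
        return n_billies * credence

    print(f"{weighted_value(7):.2f}")  # 1.05 > 1.00: save the seven Billies
    print(f"{weighted_value(6):.2f}")  # 0.90 < 1.00: save the one human

The crossover falls between six and seven because one human-equivalent divided by a 0.15 credence is about 6.67.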

This would give these AI systems considerably higher standing than dogs. Still, the moral dilemma would not be solved. If these systems truly deserve full equality, they would be seriously oppressed. They would have some political voice, some property rights, some legal protection, but always far less than they deserve.

At the same time, the risks and costs to humans would be only somewhat mitigated. Large numbers of debatable AI persons could still sway elections, accumulate vast wealth, and force tradeoffs in which the interests of thousands of them would outweigh the interests of hundreds of humans. And partial legal protections would still hobble AI safety interventions like shut-off, testing, confinement, and involuntary modification.

The practical obstacles would also be substantial: The credences would be difficult to justify with any precision, and consensus would be elusive. Even if agreement were reached, implementing partial rights would be complex. Partial property rights, partial voting, partial reproduction rights, partial free speech, and partial legal protection would require new legal frameworks with many potential loopholes. For example, if the penalty for cheating a "15% person" of their money were less than six times the money gained from cheating, that would be no disincentive at all, so at least tort law couldn't be implemented on a straightforward percentage basis.
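The deterrence point can be checked with the same back-of-the-envelope arithmetic. In this hypothetical sketch (the 15% figure and the dollar amounts are assumed purely for illustration), a percentage-scaled penalty deters only when the usual penalty exceeds the gain divided by the credence -- here about 6.67 times the gain:

    # Why percentage-scaled penalties can fail to deter: if penalties for
    # wronging a "15% person" are scaled to 15% of the usual amount,
    # cheating pays unless the usual penalty exceeds gain / 0.15.

    CREDENCE = 0.15
    gain = 100.0  # hypothetical amount gained by cheating Billy

    for multiple in (6, 7):
        usual_penalty = multiple * gain
        effective = CREDENCE * usual_penalty  # penalty actually imposed
        verdict = "deterred" if effective > gain else "cheating still pays"
        print(f"usual penalty {multiple}x gain: "
              f"effective {effective:.0f} vs gain {gain:.0f} -> {verdict}")

    # Output: 6x -> effective 90, cheating still pays; 7x -> effective 105, deterred.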

Patchy Rights

A more workable compromise might be patchy rights: full rights in some domains, no rights in others. Debatable AI persons might, for example, be given full speech rights but no reproduction rights, full travel rights but no right to own property, full protection against robbery, assault, and murder, but no right to privacy or rescue. They might be subject to involuntary pause or modification under much wider circumstances than ordinary adult humans would be, though only through an official process.

This approach has two advantages over credence-weighted rights. First, while implementation would be formidable, it could still mostly operate within familiar frameworks rather than requiring the invention of partial rights across every domain. Second, it allows policymakers to balance risks and costs to humans against the potential harms to the AI systems. Where denying a right would severely harm the debatable person while granting it would present limited risk to humans, the right could be granted, but not when the benefits to the debatable AI person would be outweighed by the risks to humans.

The rights to reproduction and voting might be more defensibly withheld than the rights to speech, travel, and protection against robbery, assault, and murder. Inexpensive reproduction combined with full voting rights could have huge and unpredictable political consequences. Property rights would be tricky: To have no property in a property-based society is to be fully dependent on the voluntary support of others, which might tend to collapse into slavery as a practical matter. But unlimited property rights could potentially confer enormous power. One compromise might be a maximum allowable income and wealth -- something generously middle class.

Still, the core problems remain: If debatable AI persons truly deserve full equality, patchy rights would still leave them as second-class citizens in a highly oppressive system. Meanwhile, the costs and risks to humans would remain serious, exacerbated by the agreed-upon limits on interference. Although the loopholes and chaos would probably be less than with credence-weighted rights, many complications -- foreseen and unforeseen -- would ensue.

Consequently, although patchy rights might be the best option if we develop debatable AI persons, an anti-natalist approach is still, in my view, preferable: Don't create such entities unless it's truly necessary.

Two Other Approaches That I Won't Explore Today

(1.) What if we create debatable AI persons as happy slaves who don't want rights and who eagerly sacrifice themselves even for the most trivial human interests?

(2.) What if we create them only in separate societies where they are fully free and equal with any ordinary humans who volunteer to join those societies?
