The Design Policy of the Excluded Middle
According to the Design Policy of the Excluded Middle (Schwitzgebel and Garza 2015, 2020; Schwitzgebel 2023, 2024, ch. 11), we should avoid creating debatable persons. That is, we should avoid creating entities whose moral status is radically unclear -- entities who might be moral persons, deserving of full human or humanlike rights and moral consideration, or who might fall radically short of being moral persons. Creating debatable persons generates unacceptable moral risks.
If we treat debatable persons as less than fully equal with human persons, we risk perpetrating the moral equivalent of slavery, murder, and apartheid on persons who deserve equal moral consideration -- persons who deserve not only full human or humanlike rights but even solicitude similar to what we owe our children, since we will have been responsible for their existence and probably also for their relatively happy or miserable state.
Conversely, if we do treat them as fully equal with us, we must grant them the full range of appropriate rights, including the right to work for money, the right to reproduce, a path to citizenship, the vote, and the freedom to act against human interests when their interests warrant it, including the right to violently rebel against oppression. The risks and potential costs are enormous. If these entities are not in fact persons -- if, in fact, they are experientially as empty as toasters and deserve no more intrinsic moral consideration than ordinary artifacts -- then we will be exposing real human persons to serious costs and risks, including perhaps increasing the risk of human extinction, for the sake of artifacts without interests worth that sacrifice.
The solution is anti-natalism about debatable persons. Don't create them. We are under no obligation to bring debatable persons into existence, even if we think they might be happy. (Compare: You are under no obligation to have children, even if you think they might be happy.) The dilemma described above -- the full rights dilemma -- is so catastrophic on either horn that noncreation is the only reasonable course.
Of course, this advice will not be heeded. Assuming AI technology continues to advance, we will soon (I expect within 5-30 years) begin to create debatable persons. My manuscript in draft, AI and Consciousness, argues that it will become unclear whether advanced AI systems have rich conscious experiences like ours or no consciousness at all.
So we need a fallback policy -- something to complement the Design Policy of the Excluded Middle.
The Voluntary Polis
To the extent possible, we want to satisfy two constraints:
Don't deny full humanlike rights to entities that might deserve them.

Don't sacrifice substantial human interests for entities who might not have interests worth the sacrifice.
A Voluntary Polis is one attempt to balance these constraints.
Imagine a digital environment where humanlike AI systems of debatable personhood, ordinary human beings, and AI persons of non-debatable personhood (if any exist) coexist as equal citizens. This polis must be rich and dynamic enough to allow all citizens to flourish meaningfully without feeling jailed or constrained. From time to time, citizens will be morally or legally required to sacrifice goods and well-being for others in the polis -- just as in an ordinary nation. Within the polis, everyone has an equal moral claim on the others.
Human participation would be voluntary. No one would be compelled to join. But those who do join assume obligations similar to the resident citizens of an ordinary nation. This includes supporting the government through taxes or polis-mandated labor, serving on juries, and helping run the polis. In extreme conditions -- say, an existential threat to the polis -- they might even be required to risk their livelihoods or lives. To prevent opportunistic flight, withdrawal would be restricted, and polises might negotiate extradition treaties with human governments.
Why would a human join such a risky experiment? Presumably for meaningful relationships, creative activities, or experiences unavailable outside.
Crucially, anyone who creates a debatable person must join the polis where that entity resides. Human society as a whole cannot commit to treating debatable persons as equals, but their creators can and must.
The polis won't be voluntary for the AI in the same way. Like human babies, they don't choose their societies. The AI will simply wake to life either in a polis or with some choice among polises. Still, it might be possible to present some attractive non-polis option, such as a thousand subjective years of solitary bliss (or debatable bliss, since we don't know whether the AI actually has any experiences).
Ordinary human societies would have no obligation to admit or engage with debatable AI persons. To make this concrete, the polis could even exist in international waters. For the AI citizens, the polis must thus feel as expansive and as rich with opportunity as a nation, so that exclusion from human society resembles denial of a travel visa, not imprisonment.
Voluntary polises would need to be stable against serious shocks, not dependent on the actions of a single human individual or ordinary, dissolvable corporation. This stability would need to be ensured before their founding and is one reason founders and other voluntary human joiners might need to be permanently bound to them and compelled to sacrifice if necessary.
This is the closest approximation I can currently conceive to satisfying the two constraints with which this section began. Within a large polis, the debatable persons and human persons have fully equal rights. But at the same time, unwilling humans and humanity as a whole are not exposed to the full risk of granting such rights. Still, there is some risk -- for example, if superintelligences could communicate beyond the polis and manipulate humans outside. Those exposed to the most risk accept it voluntarily but irrevocably, whether as a condition of creating an AI of debatable personhood or for whatever other reason motivates them to join.
Could a polis be composed only of AI, with no humans? This is essentially the simulation hypothesis in reverse: AIs living in a simulated world, humans standing outside as creators. This solution falls ethically short, since it casts human beings as gods relative to the debatable AI persons -- entities not on par in risk and power but instead external to their world, with immense power over it, and not subject to its risks. If the simulation can be switched off at will, its inhabitants are not genuinely equal in moral standing but objectionably inferior and contingent. Only if its creators are obliged to risk their livelihoods and lives to protect it can there be the beginnings of genuine equality. And for full equality, we should make it a polis rather than a hierarchy of gods and mortals.
[cover of my 2013 story with R. Scott Bakker, Reinstalling Eden]

2 comments:
As a local government inspector, after reading this posting it appears I was in a voluntary polis before retirement...working between uncertainty and certainty in law for building code-civility...
...what would an AI debatable person be then for me as a human person between my required approvals and denials...
...would a searching view like -- humans get attacked by viruses -- be comparable with AI 'ensurance' blockchains getting hacked by other AI systems and processes....
...we live in a biosphere for humans and their creations...the biosphere creates us we create AI...I'll run this by Gemini for a prompt...
Gemini and Me--The Core Comparability: Both scenarios describe a hostile external force (virus or malicious AI) exploiting a systemic flaw (weakened immunity or code vulnerability) to achieve self-replication/propagation within a complex, interconnected environment (biosphere or digital network). It highlights that any system—natural or created—that relies on complex, interconnected rules (DNA or blockchain code) is susceptible to attack by an agent that understands and exploits those rules.