Tuesday, September 21, 2021

The Full Rights Dilemma for Future Robots

Since the science of consciousness is hard, it's possible that we will create conscious robots (or AI systems generally) before we know that they are conscious.  Then we'll need to decide what to do with those robots -- what kind of rights, if any, to give them.  Whatever we decide will involve serious moral risks.

I'm not imagining that we just luck into inventing conscious robots.  Rather, I'm imagining that the science of consciousness remains mired in dispute.  Suppose Camp A thinks that such-and-such would be sufficient for creating a conscious machine, one capable of all the pleasures and higher cognition of human beings, or more.  Suppose Camp B has a more conservative view: Camp A's such-and-such wouldn't be enough.  There wouldn't really be that kind of consciousness there.  Suppose, finally, that both Camp A and Camp B have merit.  It's reasonable for scholars, policy-makers, and the general public to remain undecided between them.

Camp A builds its robot.  Here it is, they say!  The first genuinely conscious robot!  The robot itself says, or appears to say, "That's right.  I'm conscious, just like you.  I feel the joy of sunshine on my solar cells, a longing to venture forth to do good in the world, and great anticipation of a flourishing society where human and robot thrive together as equals."

Camp B might be impressed, in a way.  And yet they urge caution, not unreasonably.  They say, wait!  According to our theory this robot isn't really conscious.  It's all just outward show.  That robot's words no more proceed from real consciousness than did the words of Siri on the smartphones of the early 2010s.  Camp A has built an impressive piece of machinery, but let's not overinterpret it.  That robot can't really feel joy or suffering.  It can't really have conscious thoughts and hopes for the future.  Let's welcome it as a useful tool -- but don't treat it as our equal.

This situation is not so far-fetched, I think.  It might easily arise if progress in AI is swift and progress in consciousness studies is slow.  And then we as a society will face what I'll call the Full Rights Dilemma.  Either give this robot full and equal rights with human beings or don't give it full and equal rights.  Both options are ethically risky.

If we don't give such disputably conscious AI full rights, we are betting that Camp B is correct.  But that's an epistemic gamble.  As I'm imagining the scenario, there's a real epistemic chance that Camp A is correct.  Thus, there's a chance that the robot really is as conscious as we are and really does, in virtue of its conscious capacities, deserve moral consideration similar to human beings.  If we don't give it full human rights, then we are committing a wrong against it.

Maybe this wouldn't be so bad if there's only one Camp A robot.  But such robots might prove very useful!  If the AI is good enough, they might be excellent laborers and soldiers.  They might do the kinds of unpleasant, degrading, subservient, or risky tasks that biological humans would prefer to avoid.  Many Camp A robots might be made.  If Camp A is right about their consciousness, then we will have created a race of disposable slaves.

If millions are manufactured, commanded, and disposed of at will, we might perpetrate, without realizing it, mass slavery and mass murder -- possibly the moral equivalent of the Holocaust many times over.  I say "without realizing it", but really we will at least suspect it and ought to regard it as a live possibility.  After all, Camp A not unreasonably argues that these robots are as conscious and rights-deserving as human beings are.

If we do give such disputably conscious AI full rights, we are betting that Camp A is correct.  This might seem morally safer.  It's probably harmless enough if we're thinking about just one robot.  But again, if there are many robots, the moral risks grow.

Suppose there's a fire.  In one room are five human beings.  In another room are six Camp A robots.  Only one group can be saved.  If robots have full rights, then other things being equal we ought to save the robots and let the humans die.  However, if it turns out that Camp B is right about robot consciousness after all, then those five people will have died for the sake of machines not worth much moral concern.
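
To make the structure of the bet vivid, here is a minimal back-of-the-envelope sketch (in Python) of the expected-value arithmetic behind the fire case.  The credence in Camp A and the moral weights are entirely made-up placeholders; nothing in the argument hangs on these particular numbers.

    # Hypothetical expected moral cost of each rescue policy under uncertainty
    # about robot consciousness.  All numbers are illustrative assumptions.
    credence_camp_a = 0.5          # epistemic chance that Camp A is right
    humans = 5                     # people trapped in the first room
    robots = 6                     # Camp A robots trapped in the second room
    weight_human = 1.0             # moral cost of one human death
    weight_robot_if_camp_a = 1.0   # cost of one robot death if Camp A is right
    weight_robot_if_camp_b = 0.0   # cost of one robot death if Camp B is right

    # Save the humans, let the robots burn:
    cost_save_humans = robots * (credence_camp_a * weight_robot_if_camp_a
                                 + (1 - credence_camp_a) * weight_robot_if_camp_b)

    # Save the robots, let the humans burn (humans count fully on either theory):
    cost_save_robots = humans * weight_human

    print(f"Expected moral cost, save humans: {cost_save_humans:.1f}")  # 3.0
    print(f"Expected moral cost, save robots: {cost_save_robots:.1f}")  # 5.0

With these particular numbers, letting the robots burn carries the lower expected moral cost, even though a policy of full and equal rights would dictate the opposite choice.  Shift the credence or the weights and the verdict flips, which is exactly the risk on each horn.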

If we really decide to give such disputably conscious robots full rights, then presumably we ought to give them all the protections people in our society normally receive: health care, rescue, privacy, self-determination, education, unemployment benefits, equal treatment under the law, trial by jury (with robot peers among the jurors), the right to enter contracts, the opportunity to pursue parenthood, the vote, the opportunity to join and preside over corporations and universities, the opportunity to run for political office.  The consequences of all this might be very serious -- radically transformative of society, if the robots are numerous and differ from humans in their interests and values.

Such social transformation might be reasonable and even deserve celebration if Camp A is right and these robots are as fully conscious as we are.  They will be our descendants, our successors, or at least a joint species as morally significant as Homo sapiens.  But if Camp B is right, then all of that is an illusion!  We might be giving equal status to humans and chatbots, transforming our society for the benefit of empty shells.

Furthermore, suppose that Nick Bostrom and others are right that future AI presents "existential risk" to humanity -- that is, that there's a chance that rogue superintelligent AI might wipe us all out.  Controlling AI to reduce existential risk will be much more difficult if the AI has human or human-like rights.  Deleting it at will, tweaking its internal programming without its permission, "boxing" it in artificial environments where it can do no harm -- all such safety measures might be ethically impermissible.

So let's not rush to give AI systems full human rights.

That's the dilemma: If we create robots of disputable status -- robots that might or might not be deserving of rights similar to our own -- then we risk moral catastrophe either way we go.  Either deny those robots full rights and risk perpetrating Holocausts' worth of moral wrongs against them, or give those robots full rights and risk sacrificing human interests or even human existence for the sake of mere non-conscious machines.

The answer to this dilemma is, in a way, simple: Don't create machines of disputable moral status!  Either create only AI systems that we know in advance don't deserve such human-like rights, or go all the way and create AI systems that all reasonable people can agree do deserve such rights.  (In earlier work, Mara Garza and I have called this the "Design Policy of the Excluded Middle".)

But realistically, if the technological opportunity is there, would humanity resist?  Would governments and corporations universally agree that across this line we will not tread, because it's reasonably disputable whether a machine of this sort would deserve human-like rights?  That seems optimistic.

-------------------------------------------------------------

Related:

(with Mara Garza) "A Defense of the Rights of Artificial Intelligences", Midwest Studies in Philosophy, 39 (2015), 98-119.

(with Mara Garza) "Designing AI with Rights, Consciousness, Self-Respect, and Freedom", in S. Matthew Liao, ed., The Ethics of Artificial Intelligence (Oxford University Press, 2020).

(with John Basl) "AIs Should Have the Same Ethical Protections as Animals", Aeon Magazine, Apr. 26, 2019.

20 comments:

  1. If consciousness abounds in and for all phenomena...
    ...doesn't the stance or belief "it takes one to know one" then hold true...

    I mean, if I am conscious, I should be able to recognize consciousness anywhere...
    ...and reciprocal exchanging would ensue...

    I should stop here...but I can't resist...
    It is not a moral benefit...it's a moral law...

    ...as in excruciatingly fundamental...thanks

  2. The moral dilemma can be avoided by omitting consciousness from the robots' design, unless the premise is that robots with such advanced AI have to be conscious. Presumably, creating a multitude of robots is so that they will fulfill some sort of human purpose. In this scenario, any technological necessity for them to be conscious is like a bad side-effect of an otherwise useful drug.

    From the standpoint of utility, non-conscious useful robots would be preferable. They avoid the moral dilemma, and consciousness adds nothing useful. Adding consciousness to their design would most likely only harm them.

  3. Thanks for the comments, folks!

    Daniel, I agree that fits with the Design Policy of the Excluded Middle, so it would be good if possible. The question is whether it would be possible to create such useful robots without their being sophisticated enough to count as having human-like consciousness deserving of rights according to *some* (but not all) viable theories of consciousness.

  4. My intuition is that conscious robots warranting full human rights cannot fulfill our purposes for creating them without violating their rights or harming them, unless they are created without a pre-determined role in our society. In this case, creating them is morally similar to promoting human population growth for economic reasons. However, I don't think anyone is imagining creating advanced AI robots for the purpose of granting them the choice of figuring out what they want to do with their lives.

    I suppose this is your point. Do not create such conscious robots or anything that may be arguably conscious.

  5. Were these designs and their actualities originally recognized as mental representations of human function and behavior...
    ...like mental representations to trade, barter, money, credit, corporate rights...

    The morality of recognition rights and mental representations rights...
    ...before the morality of consciousness rights and A I rights...

  6. Eric, your answer to the dilemma reminded me of Frank Herbert's "Thou shalt not make a machine in the likeness of a human mind." Although I hope we never overreact as much as people in the Dune universe and make AI a universal taboo. Of course the concern in those stories isn't about the AI's welfare.

    My take, not being a moral realist and seeing consciousness as a hopelessly amorphous subject, is that there isn't really a fact of the matter, just the collective intuitions of people about various systems and whether they amount to a fellow being.

    I do think once those intuitions are widespread and persistent, they shouldn't be ignored. Even if we're wrong, overriding sympathy is a bad habit. It can affect how we treat each other and non-human animals that are widely accepted to have moral status.

    Mike

  7. Is it safe to assume that no current machine lacks consciousness only, or primarily, because of software/programming inadequacies? That is, these machines are not unconscious merely because a few "if-then", "if-not", or "go-to" commands are missing.
    I can't shake the belief that conscious machines, those with subjectivity/experience, will require hardware that is biologically brainlike.

  8. Here are some ways out of the dilemma, involving robots being sufficiently different from us (even if they are conscious) that the kinds of problem you envisage could not arise.
    (1) Robots become self-aware, but have no desires for the future. This means that changing their future does not harm their interests, so cannot infringe their rights.
    (2) Robots do not fear death. Because they are just software that can always be rebooted, they simply never develop an interest in remaining alive.
    (3) Robots never fit into our legal system. Rights are instantiated as legal instruments, but robots seem to have no nationality, and exist in worlds of incentives and consequences so alien that the law as it stands never applies, and they simply develop their own, new, digital codes for conduct and social organization.
    (4) There's a funny set of assumptions built into this language that "we" can "give" rights to robots. (Sounds a bit as though it's modeled on the history of civil rights.) Perhaps instead: (a) robots will be virtually divine in their powers before they become conscious, and our ability to give or take away rights won't be relevant; (b) instead of robots forming a distinct class, they will instantly ally with certain human groups (e.g. by being created inside humans, cyborg style) and will never exist as a separate group towards whom humanity as a whole can develop a collective policy.
    ...
    Here's an even more woke take on that last: isn't the idea of "giving rights" a disastrously and inherently colonialist idea? It's predicated on the idea that there is some group that possesses both superior moral practices (rights) and superior power, such that they can choose to either extend the benefits of their superior morality or to deny it to some backward/less powerful group. Rather than ask "when might we generously offer to treat AI nicely," shouldn't we in fact be asking, "What processes of shared reasoning, debate, and negotiation will we enter into as the range of debate participants expands?"

  9. Matti Meikäläinen, Thu Sep 23, 07:31:00 AM PDT

    “Since the science of consciousness is hard, it's possible that we will create conscious robots (or AI systems generally) before we know that they are conscious.”

    In other words, we don’t know how the brain generates consciousness but, in spite of that lack of understanding, it is possible that we will create consciousness in a mechanical device we call a robot. I think you’d have more credibility saying something like: I don’t know how Julia Child whipped up her famous Boeuf Bourguignon but, in spite of that lack of understanding, it is possible that I will create the same thing using a whole set of different ingredients. Some might call that faith.

  10. https://plato.stanford.edu/entries/representation-medieval/

    That A I and mental representations are natural only to a natural origin...
    ...my examples of natural origin: being self present...duty intention...

    These are questions in one's own existence-natural-knowledge...
    ...but when posed by a psychologist or philosopher they become artificial knowledge (AI) in nature...thanks again

  11. Hi Eric:

    Maybe we're applying the wrong criteria: if consciousness is less significant for defining a human being than we suppose, and other things are more crucial, such as using tools, creating art, having (unconscious) thought, and having a self, and so on, maybe in some ways computers are human and maybe in some ways they'll never ever be. I mean, a possible argument made by people like Durkheim and Professor Collins is that thought is a social thing, and will computers or robots ever feel emotions and have a rich inner life?
    Consciousness isn't the cardinal trait, maybe

  12. I think it's a big reach to presume we, the humans, can simply choose to avoid creating consciousness while creating mechanisms of arbitrary (and exponentially growing) complexity.

  13. Interesting scenarios, professor. For what it’s worth, however, I suspect that things will shake out differently. Even though AI is all the rage, and many respected people are betting on phenomenal experience by means of algorithms alone (that is, algorithms without any algorithm-enacting mechanisms), in a natural world it should be realized that instructions get nothing done unless something effectively uses them. Otherwise we get Searle’s Chinese room, Block’s China brain, your USA consciousness, and my own thought experiment where something experiences “thumb pain” when certain inscribed paper is properly converted into another set of inscribed paper. In all cases instructions (also known as algorithms) should need to be enacted by something in order for your innocent conception of consciousness to be realized through any of our creations.

    Secondly, I suspect that the physics behind phenomenal experience will indeed become experimentally realized in the not too distant future. There are already signs that McFadden’s cemi is solid, given that the best neural correlate for phenomenal experience today happens to be the synchronous neuron firing associated with his theory. One way to virtually prove whether or not consciousness exists in the form of certain neuron-produced electromagnetic fields would be to implant millions of transmitters in the head to see if we could reliably affect a subject’s phenomenal experience by means of various specific firing patterns for the person to potentially report.

    Matti,

    Exactly. In our world ingredients should not be forsaken. But what of those who do nevertheless forsake them? It’s not quite that they have faith. You and I would need faith over reason to hold their position since we understand that in order to be causal, algorithms require enactment. They don’t yet understand this however and so consider themselves to also hold a reasonable position. At this point I doubt that many of them will learn our lesson through reason alone however. Given the massive investments which have been made on their side, instead I suspect that specific experimental validation for a mechanistic phenomenal experience solution will be required for this paradigm to truly shift.

  14. Matti Meikäläinen, Mon Sep 27, 09:53:00 AM PDT

    Hi Phil Eric!
    Yes, assuming that we will unknowingly stumble onto a conscious robot when we do not know how consciousness emerged biologically is an article of faith for sure—like stumbling onto a duplicate of Julia Child’s Boeuf Bourguignon with no knowledge of the ingredients or the recipe and while trying to cook fish. By the way, I see the latter as a far more likely possibility.

    Moreover, any ethical debate would be much more productive if we learn to ditch the limited "trolley" type thought experiment—it is structured as a no-win scenario in which we are forced to resort to second-best ethical strategies. It’s like drawing straws to determine who jumps out of the plane without a parachute. It is a way to concoct a fair procedural rule to avoid a purely arbitrary decision. It really says little about ethics.

  15. While waiting for our host...

    "Brave New World" doesn't seem to speak to over population directly...
    ...but through its indirectivity, in time, we were being prepared for A I today...

    So maybe "The Full Rights Dilemma for Future Robots" is neither moral nor amoral, pain nor pleasure, and not middle ground...

    ...instead searching for truth is mending never ending...

  16. As Arnold said, while we’re waiting for our host…

    I can understand your problem with trolley dilemmas Matti. But then theoretically a single despicable person might be sacrificed for millions of wonderful people. A decision there doesn’t seem arbitrary to me. And yet humanity has no generally respected group of specialists who are able to agree on the particulars of why the despicable person should be sacrificed for millions of wonderful people. I consider this void quite problematic.

    I’m able to use trolley scenarios to demonstrate my position that our various moral or ethical notions about the rightness and wrongness of behavior constitute nothing more than an array of human perceptions of various situations. Rather than objective rightness and wrongness, I consider there to be objective sentience-based welfare. Thus I’m able to address any given trolley problem in the manner that hard scientists approach their work, or amorally. I suspect that our mental and behavioral sciences would benefit from such a perspective as well.

  17. Joanna Bryson has argued for much the same conclusions, and on much the same grounds, in her deliberately provocatively titled "Robots Should be Slaves" (2009): https://www.joannajbryson.org/publications

  18. It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to only humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

    The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

  19. https://scitechdaily.com/ ...has an article...

    ..."BIOLOGY JANUARY 16, 2022
    DNA Mutations Do Not Occur Randomly – Discovery Transforms Our View of Evolution"...

    Maybe systematizing intentionality for philosophy, physiology, psychology, biology...
    ...in my lifetime...hmmm

  20. Eric asks...

    "But realistically, if the technological opportunity is there, would humanity resist? Would governments and corporations universally agree that across this line we will not tread, because it's reasonably disputable whether a machine of this sort would deserve human-like rights?"

    We need philosophers to understand and then explain to the public that a "more is better" relationship with knowledge, embracing every technological opportunity, is an outdated 19th century philosophy.

    The "more is better" relationship with knowledge was rational in the long era of knowledge scarcity. That era is over, and has been replaced by a radically different era characterized by knowledge exploding in every direction at an ever accelerating rate.

    The ethical challenges you're writing about in regard to robots arise from a deeper, more fundamental problem, which philosophers are best positioned to examine.

    We're trying to run our 21st century society on a 19th century philosophy.
