I feel some of the pull of this objection. But ultimately, I think it's off the mark.
The objector appears to see a conflict between thinking about the rights of hypothetical robots and thinking about the rights of real human beings. I'd argue, in contrast, that there's a synergy, or at least that there can be a synergy. Those of us interested in robot rights can be fellow travelers with those advocating better recognition and implementation of human rights.
In a certain limited sense, there is of course a conflict. Every word that I speak about the rights of hypothetical robots is a word I'm not speaking about the rights of disempowered ethnic groups or disabled people, unless I'm making statements so general that they apply to all such groups. In this sense of conflict, almost everything we do conflicts with the advocacy of human rights. Every time you talk about mathematics, or the history of psychology, or the chemistry of fluoride, you're speaking of those things instead of advocating human rights. Every time you chat with a friend about Wordle, or make dinner, or go for a walk, you're doing something that conflicts, in this limited sense, with advocating human rights.
But that sort of conflict can't be the heart of the objection. The people who raise this objection to work on robot rights don't also object in the same way to work on fluoride chemistry or to your going for a walk.
Closer to the heart of the matter, maybe, is that the person working on robot rights appears to have some academic expertise on rights in general -- unlike the chemistry professor -- but chooses to squander that expertise on hypothetical trivia instead of issues of real human concern.
But this can't quite be the right objection either. First, some people's expertise is a much more natural fit for robot rights than for human rights. I come to the issue primarily as an expert on theories of consciousness, applying my knowledge of such theories to the question of the relationship between robot consciousness and robot rights. Kate Darling entered the issue as a roboticist interested in how people treat toy robots. Second, even people who are experts on human rights shouldn't need to spend all of their time working on that topic. You can write about human rights sometimes and other issues at other times, without -- I hope -- being guilty of objectionably neglecting human rights in those moments you aren't writing about them. (In fact, in a couple of weeks at the American Philosophical Association I'll be presenting work on the mistreatment of cognitively disabled people [Session 1B of the main program].)
So what's the root of the objection? I suspect it's an implicit (or maybe explicit) sense that rights are a zero-sum game -- that advocating for the rights of one group means advocating for their rights over the rights of other groups. If you work advocating for the rights of Black people, maybe it seems like you care more about Black people than about other groups -- women, or deaf people, for example -- and you're trying to nudge your favorite group to the front of some imaginary line. If this is the background picture, then I can see how attending to the issue of robot rights might come across as offensive! I completely agree that fighting for the rights of real groups of oppressed and marginalized people is far more important, globally, than wondering under what conditions hypothetical future robots would merit our moral concern.
But the zero-sum game picture is wrong -- backward, even -- and we should reject it. There are synergies between thinking about the rights of women, disempowered ethnic groups, and disabled people. Similar dynamics (though of course not entirely the same) can occur, so that thinking about one kind of case, or thinking about intersectional cases, can help one think about others; and people who care about one set of issues often find themselves led to care about others. Advocates of one group more typically are partners with, rather than opponents of, advocates of the other groups. Think, for example, of the alliance of Blacks and Jews in the 20th century U.S. civil rights movement.
In the case of robot rights in particular, this is perhaps less so, since the issue remains largely remote and hypothetical. But here's my hope, as the type of analytic philosopher who treasures thought experiments about remote possibilities: Thinking about the general conditions under which hypothetical entities warrant moral concern will broaden and sophisticate our thinking about rights and moral status in general. If you come to recognize that, under some conditions, entities as different from us as robots might deserve serious moral consideration, then when you return to thinking about human rights, you might do so in a more flexible way. If robots would deserve rights despite great differences from us, then of course others in our community deserve rights, even if we're not used to thinking about their situation. Thinking hypothetically about robot rights should, I hope, leave us more thoughtful and open in general, encouraging us to celebrate the wide diversity of possible ways of being. It should help us crack our narrow prejudices.
Science fiction has sometimes been a leader in this. Consider Star Trek: The Next Generation, for example. Granting rights to the android named Data (as portrayed in this famous episode) conflicts not at all with recognizing the rights of his human friend Geordi La Forge (who relies on a visor to see and whom viewers would tend to racialize as Black). Thinking about the rights of the one in no way impairs, but instead complements and supports, thinking about the rights of the other. Indeed, from its inception, Star Trek was a leader in U.S. television, aiming to imagine (albeit not always completely successfully) a fair, egalitarian, multi-racial society in which not only do people of different sexes and races interact as equals, but so do hypothetical creatures, such as aliens, robots, and sophisticated non-robotic A.I. systems.
[Riker removes Data's arm, as part of his unsuccessful argument that Data deserves no rights, being merely a machine]
------------------------------------------
Thanks to the audience at Ruhr University Bochum for helpful discussion (unfortunately not recorded in the linked video), especially Luke Roelofs.
I respectfully disagree—slightly. On one level you are correct. Philosophizing about the rights of robots (i.e., a mythical, perhaps future, being) does not directly contradict efforts at philosophizing about the basic meaning of rights (both positive and negative rights), nor the rights of men, women, racial minorities, people of religious faith and atheists, transgender people, citizens, aliens, children, the disabled, combatants and non-combatants in war zones, and even the “group rights” of the above, etc. That is true enough. However, it does allocate certain scarce intellectual resources to a mythical being instead of real people. That, I submit, is a value judgement that deserves debate. I know which side I come down on.
One argument that sways me is that inquiries about the constellation of various possible human “rights” listed above may provide a richer vocabulary of useful tools for debating the rights of hypothetical robots, should that ever become a pressing issue. That is, the so-called synergy argument works in the other direction! In fact, not knowing how a future robot might display an intelligence that may deserve rights protection, it would be useful to have developed a rich vocabulary of rights.
One thing I have noticed is that some who like to debate the rights of robots are required to assume too much about such beings, putting such discussions far out on the speculative limb and thus making the result as useful as the solution to a crossword puzzle. Moreover, they are sloppy. A right is rarely even defined, as if it were just one simple thing.
I think that some of the resistance to considerations of robot rights is that it isn’t clear that participants in these discussions consider themselves as engaged in a thought experiment.
As a thought experiment, robots can be considered as rhetorical devices like Twin Earth scenarios. As such, they assist us in sussing out general ethical concepts.
On the other hand, to those outside of the AI community, the importance of determining the nature of robot rights, per se, may not seem pressing.
As an analytic philosopher who treasures thought experiments about remote possibilities, are you going to present about...
Mr. Flavell's 'division of metacognitive knowledge into three categories...knowledge of person variables, task variables and strategy variables'...
Mr. Flavell's categories seem a diversified approach to human and AI rights...
...next: the complete genome sequence variables vs AI complete sequence variables...
That crowding-out concern might be some of it, although that might just be the rationalization. I suspect what's actually happening in a lot of cases is an "Oh come on!" reaction, as in exasperation, as if someone is just trying too hard to find something to worry about.
Along those lines, there might be something to exploring what is necessary and sufficient for a system to be a subject of moral concern. A common reaction might be, "Can it suffer?" But then what does it mean exactly to suffer? What are the components of suffering? And how likely is that arrangement to show up in AI designs? Answering that question, I think, also provides insights into animal rights questions.
If it walks, looks and quacks like a metaphysical duck, it is probably a metaphysical duck. Speculation is, in its investigative sense, useful. I am certain many scientific achievements, breakthroughs and discoveries were aided by speculation. As you ably illustrated in your article, we need not fret over this becoming an interference or impediment to furtherance of the rights of humans. But, I am unconvinced of the notion of rights for autonomous robots. As near as I can understand this, such entities, should they emerge, will be of human creation. Their autonomy---sentient or no---will be enabled through human effort, ingenuity and invention. Insofar as this be the case, they, unlike other humans, will be property...to do with, as we choose. If they are given three laws that must not be violated, that will be our doing, not their own. As to metaphysics itself, one account deems it pointless. Works for me.
Would you agree that if robots were to have consciousness identical to human consciousness, that would be a problem for the argument that they are mere property lacking rights?
What are the categories of consciousness...
...In what category is consciousness itself...
I suspect that the main problem people have with talking about imparting human rights to robots is not quite that they see a zero-sum game here with human rights, even if that's the problem they cite. I suspect what they really dislike is how likely it is that, as our robots progressively “speak” more effectively with us and become more and more cute, laws will be passed which make it illegal to “harm” such machines. Apparently there is already a “People for the Ethical Treatment of…” for artificial intelligence, though they call it “reinforcement learners”. The top of their mission statement says:
“We take the view that humans are just algorithms implemented on biological hardware. Machine intelligences have moral weight in the same way that humans and non-human animals do. There is no ethically justified reason to prioritise algorithms implemented on carbon over algorithms implemented on silicon.”
Furthermore, popular consciousness theories in general essentially conform with their position, though unlike this radical group I suppose most presume that the algorithms our computers use are not the proper ones. So no worries, yet…
Beyond all this chatbot welfare business, I suspect that the actual science of sentience will advance some day and thus make some progress on this “hard problem”. Here it should be considered morally good to build machines which tend to feel good, and generally illegal to build machines that feel bad. And what about rights? Maybe, if we can make something sentient speak reasonably like a human. I’m doubtful.
I’m not sure equating a debate over robot rights to Hilary Putnam’s Twin Earth thought experiment is persuasive. For one, Putnam was concerned with scientific concepts, not ethical concepts. Moreover, some argue that “twin water” and “XYZ” are not even concepts; they are mere names. I suggest that this argument gains greater force when we move away from scientific concepts and toward ethical concepts.
In addition, I argued that those who enjoy debating robot rights are sloppy. My example was that there’s usually no attempt to define the ethical concept of rights—a complex family of concepts. Let me suggest that is because these same people generally are skeptical of any ethical concepts. Many hold old positivist and emotivist assumptions. In short, such persons start the robot rights debate from a limited perspective.
Two weird questions: first, does it depend on the type of society (i.e., democratic, capitalist, traditionalist, etc.); second, won't robots agitate for their own rights? (even so far as demanding the right to vote!)
A great and thoughtful post, as always.
I am of at least two minds on this. On the one hand, I think it is fantastic that you (Eric) and other thoughtful researchers and writers are taking the questions of robot rights seriously. In fact, it would strike me as odd if no one, given how much our lives are influenced by AIs of various kinds, was taking this project seriously, getting us to expand our horizons by considering the possibility of robot rights, etc. And you in particular, given your expertise, are equipped to deal with the topic in a clear and compelling fashion.
My other mind, though, can't help but find the larger conversation, the public conversation, around robots and self-driving cars and neuro-link technologies, to be a bit stifling. In short--and let me help myself to some speculation and psychologizing--it feels like people (I'm thinking members of the public here) are often interested in the "are robots conscious" question because it taps some kind of intersection between a fetishization of technological development as savior and a terror of what it would mean to actually make a monster that can think.
That is, there are lots of conversations about how we can "hack" things to fix them: use AI or new tech to hack the environment, the social world, education, relationships, etc. A lot of us put a lot into our hopes for what technologies, especially things broadly labeled AI, can do to fix our existential problems as humans. And, at the same time, it probably horrifies us, or seduces us, to think that we might create a robot with consciousness. And so, what I see so often is people being absolutely enthralled by "AI/ethics" and "self-driving cars/trolley problem" because it packages itself as a path to salvation: if we could only program the cars right and get the software bots to help us allow for just the right amount of information flow so as to filter out misinformation, then--finally--we would be more peaceful, more happy, and more communal. So, I think many of us like these questions because they are not unlike petitionary prayers to larger forces.
And in terms of feeling terror in the presence of AI that might be conscious, this captures our worry that we might not have as much control in the outcomes as we might currently want. In other words, we might successfully hack the climate, and solve all our worldly problems, but in so doing bring about the monstrosity of a conscious robot that we now might have unwittingly oppressed.
I guess, then, for me, the reason the conversation in the public sphere around robots and consciousness is a bit flat isn't that people care about robots and their potential rights, but that there isn't as much discussion about why we are so seduced by such questions. I think it probably says more about our existential worries and desire for control than a pure interest--for many of us--in robots as such. Maybe what I'm complaining about is that the conversation around robots in our lives, at least where I've read and talked, could use more influence from folks in religious studies, literature, history, art, etc. It often is presented in technological or brain-science terms, and I might just crave a more expansive discussion. (Please forgive the ramble and the genuine possibility that there is just this kind of literature all around; I'm mostly referencing what I see as the popular conversation around robots, AI, technology, etc.)
Let me click below that "I'm not a robot" and see if this mess I wrote makes sense.
Thanks for the continuing comments, folks!
Howie: I'm not inclined to think it should depend on the society. And no, to me it's not clear that robots will agitate for their own rights, for example if they're programmed to be subservient.
Kyle: Super interesting comment! I'm inclined to agree. Yes, it's a huge and interesting question why we are so intrigued by these technological possibilities, both as possible salvation and as a source of terror. Maybe terror and hope for salvation are the obverse and reverse of power beyond our control, which it is fascinating to contemplate unleashing.
I don't think I can speak for why anyone else might be exasperated by robot rights arguments, and I have never been irritated by Schwitzgebel's arguments. But I have occasionally been put off by arguments on this topic made by other writers, so I'll explain the reason for that:
It's that when discussing "rights for robots," some writers tend to treat the question of human rights as though it's a simple, finished argument. The writer seems to assume that everyone agrees on what rights humans have, and that in general, humans' rights are respected by the people and institutions around them. On that basis, he (that's a slightly gendered pronoun, because I believe I've seen more of this kind of argumentation from men) then goes on to reason about whether or how rights should be extended to robots.
This kind of argument irritates me when it feels like it is being done thoughtlessly. It would be perfectly acceptable to say at the beginning of a discussion of robot rights, "I'm going to use X's conception of human rights, and assume that in general, human rights are respected." You don't have to deal with the tough issues around human rights. But when a writer doesn't put that sentence in, I lose trust in his understanding of how human rights came to be. If the writer doesn't understand that human rights as they exist today only exist precariously for many, and were only established through a lot of pain and conflict, then I'm not going to be very interested in that writer's views on how rights might or might not be extended to robots.
So that's the trigger that sets me off sometimes. I wouldn't word it as, "Why waste our time talking about hypothetical robot rights?" In the situation I describe, it would be, "As you don't seem to know enough about human rights, I'm afraid that your discussion of robot rights will be a waste of my time." And my objection can be defanged by a little properly-constructed disclaimer at the outset.