Thursday, April 28, 2022

Will Today's Philosophical Work Still Be Discussed in 200 Years?

I'm a couple days late to this party. Evidently, prominent Yale philosopher Jason Stanley precipitated a firestorm of criticism on Twitter by writing:

I would regard myself as an abject failure if people are still not reading my philosophical work in 200 years. I have zero intention of being just another Ivy League professor whose work lasts as long as they are alive.

(Stanley has since deleted the tweet, but he favorably retweeted a critique that discusses him specifically, so I assume he wouldn't object to my also doing so.)

Now "abject failure" is too strong -- Stanley has a tendency toward hyperbole on Twitter -- but I think it is entirely reasonable for him to aspire to create philosophical work that will still be read in 200 years and to be somewhat disheartened by the prospect that he will be entirely forgotten. Big-picture philosophy needn't aim only at current audiences. It can aspire to speak to future generations.

How realistic is such an aim? Well, first, we need to evaluate how likely it is that history of philosophy will be an active discipline in 200 years. The work of our era -- Stanley and others -- will of course be regarded as historical by then. Maybe there will be no history of philosophy. Humanity might go extinct or collapse into a post-apocalyptic dystopia with little room for recondite historical scholarship. Alternatively, humanity or our successors might be so cognitively advanced that they regard us early 21st century philosophers as the monkey-brained advocates of simplistic views that are correct only by dumb luck if they are correct at all.

But I don't think we need to embrace dystopian pessimism; and I suspect that even if our descendants are super-geniuses, there will remain among them some scholars who appreciate the history of 21st century thought, at least in an antiquarian spirit. ("How fascinating that our monkey-brained ancestors were able to come up with all of this!") And of course another possibility is that society proceeds more or less on its current trajectory. Economic growth continues, perhaps at a more modest rate, and with it a thriving global academic culture, hosting ever more researchers of all stripes, with historians in India, Indonesia, Illinois, and Iran specializing in ever more recondite subfields. It's not unreasonable, then, to guess that there will be historians of philosophy in 200 years.

What will they think of our era? Will they study it at all? It seems likely they will. After all, historians of philosophy currently study every era with a substantial body of written philosophy, and as academia has grown, scholars have been filling in the gaps between our favorite eras. I have argued elsewhere that the second half of the 20th century might well be viewed as a golden age of philosophy -- a flourishing of materialism, naturalism, and secularism, as 19th- and early 20th-century dualism and idealism were mostly jettisoned in favor of approaches more straightforwardly grounded in physics and biology. You might not agree with that conjecture. But I think you should still agree that at least in terms of the quantity of work, the variety of topics explored, and the range of views considered, the past fifty years compares favorably with, say, the early medieval era, and indeed probably pretty much any relatively brief era.

So I don't think historians will entirely ignore us. And given that English is now basically the lingua franca of global academia (for better or worse), historians of our era will not neglect English-language philosophers.

Who will be read? The historical fortunes of philosophers rise and fall. Gottlob Frege and Friedrich Nietzsche didn't receive much attention in their day, but are now viewed as historical giants. Christian Wolff and Henri Bergson were titans in their lifetimes but are little read now. On the other hand, the general tendency is for influential figures to continue to be seen as influential, and we haven't entirely forgotten Wolff and Bergson. A good historian will recognize at least that a full understanding of the eras in which Wolff and Bergson flourished requires appreciating the impact of Wolff and Bergson.

Given the vast number of philosophers writing today and in recent decades, an understanding of our era will probably focus less on understanding the systems of a few great figures and more on understanding the contributions of many scholars to prominent topics of debate -- for example, the rise of materialism, functionalism, and representationalism in philosophy of mind (alongside the major critiques of those views); or the division of normative ethics into consequentialist, deontological, and virtue-ethical approaches. A historian of our era will want to understand these things. And that will require reading David Lewis, Bernard Williams, and other leading figures of the late 20th century as well as, probably, David Chalmers and Peter Singer among others writing now.

As I imagine it, scholars of the 23rd century will still have archival access to our major books and journals. Specialists, then, will thumb through old issues of Noûs and Philosophical Review. Some will notice minor scholars who are in dialogue with the leading figures of our era. They might find some of this work intriguing or insightful -- a valuable critique, perhaps, of the views of the leading figures, maybe prefiguring positions that are more prominently and thoroughly developed by better-known subsequent scholars.

It is not unreasonable, I think, for Stanley to aspire to be among the leading political philosophers and philosophers of language of our era, who will still be read by some historians and students, and still perhaps viewed as having some good ideas that are worth continued discussion and debate.

For my own part, I doubt I will be viewed that way. But I still fantasize that some 23rd-century specialist in the history of philosophy of our era will stumble across one of my books or articles and think, "Hey, some of the work of this mostly-forgotten philosopher is pretty interesting! I think I'll cite it in one of my footnotes." I don't write mainly with that future philosopher in mind, but it still pleases me to think that my work might someday provoke that reaction.

[image generated by wombo.art]

Friday, April 22, 2022

Let's Hope We Don't Live in a Simulation

reposting from the Los Angeles Times, where it appears under a different title[1]

------------------------------------------

There’s a new creation story going around. In the beginning, someone booted up a computer. Everything we see around us reflects states of that computer. We are artificial intelligences living in an artificial reality — a “simulation.”

It’s a fun idea, and one worth taking seriously, as people increasingly do. But we should very much hope that we’re not living in a simulation.

Although the standard argument for the simulation hypothesis traces back to a 2003 article from Oxford philosopher Nick Bostrom, 2022 is shaping up to be the year of the sim. In January, David Chalmers, one of the world’s most famous philosophers, published a defense of the simulation hypothesis in his widely discussed new book, Reality+. Essays in mainstream publications have declared that we could be living in virtual reality, and that tech efforts like Facebook’s quest to build out the metaverse will help prove that immersive simulated life is not just possible but likely — maybe even desirable.

Scientists and philosophers have long argued that consciousness should eventually be possible in computer systems. With the right programming, computers could be functionally capable of independent thought and experience. They just have to process enough information in the right way, or have the right kind of self-representational systems that make them experience the world as something happening to them as individuals.

In that case, the argument goes, advanced engineers should someday be able to create artificially intelligent, conscious entities: “sims” living entirely in simulated environments. These engineers might create vastly many sims, for entertainment or science. And the universe might have far more of these sims than it does biologically embodied, or “real,” people. If so, then we ourselves might well be among the sims.

The argument requires some caveats. It’s possible that no technological society ever can produce sims. Even if sims are manufactured, they may be rare — too expensive for mass manufacture, or forbidden by their makers’ law.

Still, the reasoning goes, the simulation hypothesis might be true. It’s possible enough that we have to take it seriously. Bostrom estimates a 1-in-3 chance that we are sims. Chalmers estimates about 25%. Even if you’re more doubtful than that, can you rule it out entirely? Any putative evidence that we aren’t in a sim — such as cosmic background radiation “proving” that the universe originated in a Big Bang — could, presumably, be simulated.

Suppose we accept this. How should we react?

Chalmers seems unconcerned: “Being in an artificial universe seems no worse than being in a universe created by a god” (p. 328). He compares the value of life in a simulation to the value of life on a planet newly made inhabitable. Bostrom acknowledges that humanity faces an “existential risk” that the simulation will shut down — but that risk, he thinks, is much lower than the risk of extinction by a more ordinary disaster. We might even relish the thought that the cosmos hosts societies advanced enough to create sims like us.

In simulated reality, we’d still have real conversations, real achievements, real suffering. We’d still fall in and out of love, hear beautiful music, climb majestic “mountains” and solve the daily Wordle. Indeed, even if definitive evidence proved that we are sims, what — if anything — would we do differently?

But before we adopt too relaxed an attitude, consider who has the God-like power to create and destroy worlds in a simulated universe. Not a benevolent deity. Not timeless, stable laws of physics. Instead, basically gamers.

Most of the simulations we run on our computers are games or scientific studies. They run only briefly before being shut down. Our low-tech sims live partial lives in tiny worlds, with no real history or future. The cities of Sim City are not embedded in fully detailed continents. The simulated soldiers dying in war games fight for causes that don’t exist. They are mere entertainments to be observed, played with, shot at, surprised with disasters. Delete the file, uninstall the program, or recycle your computer and you erase their reality.

But I’m different, you say: I remember history and have been to Wisconsin. Of course, it seems that way. The ordinary citizens of Sim City, if they were somehow made conscious, would probably be just as smug. Simulated people could be programmed to think they live on a huge planet with a rich past, remembering childhood travels to faraway places. Their having these beliefs in fact makes for a richer simulation.

If the simulations that we humans are familiar with reveal the typical fate of simulated beings, long-term sims are rare. Alternatively, if we can’t rely on the current limited range of simulations as a guide, our ignorance about simulated life runs even deeper. Either way, there are no good grounds for confidence that we live in a large, stable simulation.

Taking the simulation hypothesis seriously means accepting that the creator might be a sadistic adolescent gamer about to unleash Godzilla. It means taking seriously the possibility that you are alone in your room with no world beyond, reading a fake blog post, existing only as a short-lived subject or experiment. You might know almost nothing about reality beyond and beneath the simulation. The cosmos might be radically different from anything you could imagine.

The simulation hypothesis is wild and wonderful to contemplate. It’s also radically skeptical. If we take it seriously, it should undermine our confidence about the past, the future and the existence of Milwaukee. What or whom can we trust? Maybe nothing, maybe no one. We can only hope our simulation god is benevolent enough to permit our lives to continue awhile.

Really, we ought to hope the theory is false. A large, stable planetary rock is a much more secure foundation for reality than bits of a computer program that can be deleted at a whim.

Postscript:

In Reality+, Chalmers argues against the possibility that we live in a local or a temporary simulation on grounds of simplicity (pp. 442-447). I am not optimistic that this response succeeds. In general, simplicity arguments against skepticism tend to be underdeveloped and unconvincing -- in part because simplicity is itself complex to evaluate (see my paper with Alan T. Moore, "Experimental Evidence for the Existence of an External World"). And more specifically, it's not clear why it would be easier or simpler to create a giant, simulated world than to create a small simulation with fake indicators of a giant world -- perhaps only enough indicators to effectively fool us for the brief time we exist or on the relatively few tests we run. (And plausibly, our creators might be able to control or predict what thoughts we have or tests we will run, and thus create exactly and only the portions of reality that they know we will examine.) Continuing the analogy from Sim City, our current sims are more easily constructed if they are small, local, and brief, or if they are duplicated off a template, than if each is a giant, unique run of a whole universe from the beginning. I see no reason why this fact wouldn't generalize to more sophisticated simulations containing genuinely conscious artificial intelligences.

------------------------------------------

[1] The Los Angeles Times titled the piece "Is life a simulation? If so, be very afraid". While I see how one might draw that conclusion from the piece, my own view is that we probably should react emotionally as we react to other small but uncontrollable risks -- not with panic, but rather with a slight shift toward favoring short-term outcomes over long-term ones. See my discussion in "1% Skepticism" and Chapter 4 of my book in draft, The Weirdness of the World. I have also added links, a page reference, and altered the wording for clarity in a few places.

[image generated from inputting the title of this piece into wombo.art's steampunk generator]

Tuesday, April 12, 2022

Let Everyone Sparkle: Psychotechnology in the Year 2067

My latest science fiction story, in Psyche.

Thank you, everyone, for coming to my 60th birthday celebration! I trust that you all feel as young as ever. I feel great! Let’s all pause a moment to celebrate psychotechnology. The decorations and Champagne are not the only things that sparkle. We ourselves glow and fizz as humankind never has before. What amazing energy drinks we have! What powerful and satisfying neural therapies!

If human wellbeing is a matter of reaching our creative and intellectual potential, we flourish now beyond the dreams of previous generations. Sixth-graders master calculus and critique the works of Plato, as only college students could do in the early 2000s. Scientific researchers work 16-hour days, sleeping three times as efficiently as their parents did, refreshed and eager to start at 2:30am. Our athletes far surpass the Olympians of the 2030s, and ordinary fans, jazzed up with attentional cocktails, appreciate their feats with awesome clarity of vision and depth of understanding. Our visual arts, our poetry, our dance and craftwork – all arguably surpass the most brilliant artists and performers of a century ago, and this beauty is multiplied by audiences’ increased capacity to relish the details.

Yet if human wellbeing is a matter not of creative and intellectual flourishing but consists instead in finding joy, tranquility and life satisfaction, then we attain these things too, as never before. Gone are the blues. Our custom pills, drinks and magnetic therapies banish all dull moods. Gone is excessive anxiety. Gone even are grumpiness and dissatisfaction, except as temporary spices to balance the sweetness of life. If you don’t like who you are, or who your spouses and children are, or if work seems a burden, or if your 2,000-square-foot apartment seems too small, simply tweak your emotional settings. You need not remain dissatisfied unless you want to. And why on Earth would anyone want to?

Gone are anger, cruelty, immorality and bitter conflict. There can be no world war, no murderous Indian Partition, no Rwandan genocide. There can be no gang violence, no rape, no crops rotting in warehouses while the masses starve. With the help of psychotechnology, we are too mature and rational to allow such things. Such horrors are fading into history, like a bad dream from which we have collectively woken – more so, of course, among advanced societies than in developing countries with less psychotechnology.

We are Buddhists and Stoics improved. As those ancient philosophers noticed, there have always been two ways to react if the world does not suit your desires. You can struggle to change the world – every success breeding new desires that leave you still unhappy – or you can, more wisely, adjust your desires to match the world as it already is, finding peace. Ancient meditative practices delivered such peace only sporadically and imperfectly, to the most spiritually accomplished. Now, spiritual peace is democratised. You need only twist a dial on your transcranial stimulator or rebalance your morning cocktail.

[continued here]

Wednesday, April 06, 2022

New Essay in Draft: Dehumanizing the Cognitively Disabled: Commentary on Smith's Making Monsters

by Eric Schwitzgebel and Amelie Green[1]

Since the essay is short, we post the entirety of it below. This is a draft. Comments, corrections, and suggestions welcome.

-----------------------------

“No one is doing better work on the psychology of dehumanization than David Livingstone Smith, and he brings to bear an impressive depth and breadth of knowledge in psychology, philosophy, history, and anthropology. Making Monsters is a landmark achievement which will frame all future work on the psychology of dehumanization.” So says Eric Schwitzgebel on the back cover of the book, and we stand by that assessment. Today we aim to extend Smith’s framework to cases of cognitive disability.

According to Smith, “we dehumanize others when we conceive of them as subhuman creatures” (p. 9). However, Smith argues, since it is rarely possible to entirely eradicate our inclination to see other members of our species as fully human, dehumanization typically involves having contradictory beliefs, or at least contradictory representations. On the one hand, the Nazi looks at the Jew, or the southern slaveowner looks at the Black slave, and they can’t help but represent them as human. On the other hand, the Nazi and the slaveowner accept an ideology according to which the Jew and the Black slave are subhuman. The Jew and the Black slave are thus, on Smith’s view, cognitively threatening. They are experienced as confusing and creepy. They seem to transgress the boundaries between human and non-human, violating the natural order. Smith briefly discusses disabled people. Sometimes, disabled people appear to be dehumanized in Smith’s sense. Smith quotes the Nazi doctor Wilhelm Bayer as saying that the fifty-six disabled children he euthanized “could not be qualified as ‘human beings’” (p. 250). Perhaps more commonly, however, people guilty of ableism regard disabled people as humans, but humans who are “chronically defective, incomplete, or deformed” (p. 261). Even in the notorious tract which set the stage for the Nazi euthanasia program, “Permission to Destroy Life Unworthy of Life”, Karl Binding and Alfred Hoche describe those they seek to destroy as “human” (Menschen).

However, we recommend not relying exclusively on explicit language in thinking about dehumanization of people with disabilities. It is entirely possible to represent people as subhuman while still verbally describing them as “human” when explicitly asked. Dehumanization in Smith’s sense involves powerful conflicting representations of the other as both human and subhuman. Verbal evidence is important (and we will use it ourselves), but dehumanization does not require that both representations be verbalized.

We focus on the case of adults with severe cognitive disabilities. Amelie Green is the daughter of Filipino immigrants who worked as live-in caregivers in a small residential home for severely cognitively disabled “clients”. Throughout her childhood and early adulthood, Amelie witnessed the repeated abuse of cognitively disabled people at the hands of caregivers. This includes psychological abuse, physical assault, gross overmedication, needless binding, and nutritional deprivation, directly contrary to law and any reasonable ethical standard. This abuse is possible because the monitoring of these institutions is extremely lax. Surprise visits by regulators rarely occur. Typically, inspections are scheduled weeks or months in advance, giving residential institutions ample time to create the appearance of humane conditions in a brief, pleasing show for regulators. Since the clients are severely cognitively disabled, few are able to communicate their abuse to regulators. Many do not even recognize that they are being abused.

We’ll describe one episode as Amelie recorded it – far from the worst that Amelie has witnessed – to give the flavor and as a target for analysis. The client’s name has been changed for confidentiality.

As I stepped out of the kitchen, I heard a sharp scream, followed by a light thud. The screams continued, and, out of curiosity, I found myself walking towards the back of the house, drawn to two individuals shouting. Halfway towards the commotion, I stopped. I witnessed a caregiver strenuously invert an ambulatory woman strapped to her wheelchair. Both of the patient’s legs pointed towards the ceiling, and her hands clutched the wheelchair’s sidearm handles. As the wailing grew louder, the caregiver proceeded to wedge the patient’s left shoe inside her mouth, muffling the screams.

My initial reaction was to walk away from the scene to compose my thoughts quickly. Upon reflection, I assumed that the soft thud I heard was the impact of Anna’s wheelchair. Anna’s refusal to stop crying must have prompted the caregiver to stuff a shoe inside Anna’s mouth. I assumed that Anna was punished for complaining. After some thought, I noticed that I involuntarily defended the act of physical abuse by conceptualizing the caregiver’s response as a “punishment,” insinuating my biased perspective in favor of the workers. From afar, I caught the female staff outwardly explaining to Anna that she would continue to physically harm her if she made “too much loud noise.” From personal observation, Anna struggled to control her crying spells, oblivious of the commotion she was creating. Nonetheless, Anna involuntarily continued screaming, and the female staff thrust the shoe deeper.

Amelie has witnessed staff members kicking clients in the head; binding them to their beds with little cause; feeding a diabetic client large amounts of sugary drinks with the explicit aim of harming them; eating clients’ attractive food, leaving the clients with a daily diet of mostly salads, eggs, and prunes; falsifying time stamps for medication and feeding; and attempting to control clients by dosing them with psychiatric medications intended for other clients, against medical recommendations. It is not just a few caregivers who engage in such abusive behaviors. In Amelie’s experience, a majority of caregivers are abusive, though to different degrees.

Why do caregivers of the severely cognitively disabled so frequently behave like this? We have three hypotheses.

Convenience. Abuse might be the easiest or most effective means of achieving some practical goal. For example, striking or humiliating a client might keep them obedient, easier to manage than would be possible with a more humane approach. Although humane techniques exist for managing people with cognitive disabilities, they might work more slowly or require more effort from caregivers, who might understandably feel overtaxed in their jobs and frustrated by clients’ unruly behavior. Poorly paid workers might also steal attractive food that would otherwise not be easy for them to afford, justifying it with the thought that the clients won’t know the difference.

Sadism. According to the clinical psychologist Erich Fromm (1974), sadistic acts are acts performed on helpless others that aim at exerting maximum control over those helpless others, usually by inflicting harm on them but also by subjecting those others to arbitrary rules or forcing them to do pointless activities. It is crucial to sadistic control that it lack practical value, since power is best manifested when the chosen action is arbitrary. People typically enact sadism, according to Fromm, when they feel powerless in their own lives. Picture the man who feels frustrated and powerless at work who then comes home and kicks his dog. Cognitively disabled adults might be particularly attractive targets for frustrated workers’ sadistic impulses, since they are mostly powerless to resist and cannot report abuse.

Dehumanization. Abuse might arise from metaphysical discomfort of the sort Smith sees in racial dehumanization. The cognitively disabled might be seen as unnatural and metaphysically threatening. The cognitively disabled might seem creepy, occupying a gray area that defies familiar categories, at once both human and subhuman. Caregivers with conflicting representations of cognitively disabled people both as human and as subhuman might attempt to resolve that conflict by symbolically degrading their clients – implicitly asserting their clients’ subhumanity as a means of resolving this felt tension in favor of the subhuman. If the caregivers have already been mistreating the clients due to convenience or sadism, symbolic degradation might be even more attractive. If they can reinforce their representation of the client as subhuman, sadistic abuse or mistreatment for sake of convenience will seem to matter less.

Consider the example of Anna. To the extent the caregiver’s motivation is convenience, she might be hoping that inverting Anna in the wheelchair and shoving a shoe in her mouth will be an effective punishment that will encourage Anna not to cry so much or so loudly in the future. To the extent the motivation is sadism, the caregiver might be acting out of frustration and a feeling of powerlessness, either in general in her working life or specifically regarding her inability to prevent Anna from crying or both. By inverting Anna and shoving a shoe in her mouth, the caregiver can feel powerful instead of powerless, exerting sadistic control over a helpless other. To the extent the motivation is dehumanization, the worker is symbolically removing Anna’s humanity by literally physically turning her upside-down, into a position that human beings don’t typically occupy. Dogs bite shoes, and humans typically do not, and so arguably Anna is symbolically transformed into a dog. Furthermore, the shoe symbolically and perhaps actually prevents Anna from using her mouth to make humanlike sounds.

These three hypotheses about caregivers’ motives make different, empirically distinguishable predictions about who will be abusive, and to whom, and which abusive acts they tend to choose. To the extent convenience is the explanation, we should expect experienced caregivers to choose effective forms of abuse. They will not engage in abuse with no clear purpose, and if a particular form of abuse seems not to be achieving its goal, they will presumably learn to stop that practice. To the extent sadism is the explanation, we should expect that the caregivers who feel most powerless should engage in it and that they should choose as victims clients who are among the most powerless while still being capable of controllable activity. Sadistic abuse should manifest especially in acts of purposeless cruelty and arbitrary control, almost the opposite of what would be chosen if convenience were the motive. To the extent dehumanization is the motive, we should expect the targets of abuse to be disproportionately the clients who are most cognitively and metaphysically threatening – the ones who, in addition to being cognitively disabled, are perceived as having a “deformed” physical appearance, or who seem to resemble non-human animals in their behavior (for example, crawling instead of upright walking), or who are negatively racialized. Acts manifesting dehumanizing motivations should be acts with symbolic value: treating the person in ways that are associated with the treatment of non-human animals, or symbolically altering or preventing characteristically human features or behaviors such as speech, clothing, upright walking, and dining.

We don’t intend convenience, sadism, and dehumanization as an exhaustive list of motives. People do things for many reasons, including sometimes against their will at the behest of others. Nor do we intend these three motives as exclusive. Indeed, as we have already suggested, they might to some extent support each other: Dehumanizing motives might be more attractive once a caregiver has already abused a client for reasons of convenience or sadism. Also, different caregivers might exhibit these motivations in different proportions.

Convenience alone cannot always be the motive. Caregivers often mistreat clients in ways that, far from making things easier for themselves, require extra effort. Adding extra sugar to a diabetic client’s drink serves no effective purpose and risks creating medical complications that the caregiver would then have to deal with. Another client was regularly told lies about his mother, such as that she had died or that she had forgotten about him, seemingly only to provoke a distressed reaction from him. This same client had a tendency to hunch forward and grunt, and caregivers would imitate his slouching and grunting, mocking him in a way that often flustered and confused him. Also, caregivers would go to substantial lengths to avoid sharing the facility’s elegant dining table with clients, even though there was plenty of room for both workers and clients to eat together at opposite ends. Instead, caregivers would rearrange chairs and tablecloths and a large vase before every meal, forcing clients to eat separately at an old, makeshift table. Relatedly, they meticulously ensured that caregivers’ and clients’ dishes and cutlery were never mixed, cleaning them with separate sponges and drying them in separate racks, as if clients were infectious.

But do caregivers really have dehumanizing representations in Smith’s sense? Here, we follow Smith’s method of examining the caregivers’ words. In Amelie’s experience over the years, she has observed that caregivers frequently refer to their clients as “animals” or “no better than animals”. In abusing them, they say things like, “you have to treat them like the animals they are”. Caregivers also commonly treat clients in a manner associated with dogs – for example, whistling for them to come over, saying “Here [name]!” in the same manner you would call a dog, and feeding them food scraps from the table. (These scraps are often food officially bought on behalf of the clients but which the caregivers are eating for themselves.) The caregivers Amelie has observed also commonly refer to their clients with the English pronoun “it” instead of “he” or “she”, though of course they are aware of their clients’ gender. Some employ “it” so habitually that they accidentally refer to clients as “it” in front of the clients’ relatives, during relatives’ visits. This pronoun is perhaps especially telling, since there is no practical justification for using it, and often no sadistic justification either, since many clients aren’t linguistically capable of understanding pronoun use. The use of “it” appears to emerge from an implicit or explicit dehumanizing representation of the client.

Despite speech patterns suggestive of dehumanization, caregivers also explicitly refer to the clients as human beings. In their reflective moments, Amelie has observed them to say things like “It’s hard to remember sometimes that they’re people. When they behave like this, you sometimes forget.” In Amelie’s judgment, the caregivers typically agree when reminded that the clients are people with rights who should be treated accordingly, though they often seem uncomfortable in acknowledging this.

Although the evidence is ambiguous, given caregivers’ patterns of explicitly referring to their cognitively disabled clients both as people and as non-human animals or “it”s, plus non-verbal behavior that appears to suggest dehumanizing representations, we think it’s reasonable to suppose, in accordance with Smith’s model of dehumanization, that many caregivers have powerful contradictory representations of their clients, seeing them simultaneously as human and as subhuman, finding them confusing, creepy, and in conflict with the natural order of things. If so, then it is plausible that they would feel the same kind of cognitive and metaphysical discomfort that Smith identifies in racial dehumanization, and that this discomfort would sometimes lead to inappropriate behavior of the sort described.

There’s another way to reassert the natural order of things, of course. Instead of dehumanizing cognitively disabled clients, you might embrace their humanity. There are two ways of doing this. One involves preserving a certain narrow, traditional sense of the “human” – a sense into which cognitively disabled people don’t easily fit – and then attempting to force the cognitively disabled into that conception. Visiting relatives sometimes seem to do this. One pattern is for a relative to comment with excessive appreciation on a stereotypically human trait that the client has, such as the beauty of their hair – as if to prove to themselves or others that their cognitively disabled relative is a human after all. While this impulse is admirable, it might be rooted in a narrow conception of the human, according to which cognitively disabled people are metaphysical category-straddlers or at best lesser humans.

A different approach to resolving the metaphysical problem – the approach we recommend – involves a more capacious understanding of the human. Plenty of people have disabilities. A person with a missing leg is no less of a human than a person with two legs, nor is the person with a missing leg somehow defective in their humanity. However, our culture appears to have instilled in many of us – perhaps implicitly and even against our better conscious judgment – a tendency to think of high levels of cognitive ability as essential to being fully and non-defectively human. Perhaps historically this has proven to be a useful ideology for eliminating, warehousing, drugging, and binding people who are inconvenient to have around. We suspect that changing this conception would reduce the abuse that caregivers routinely inflict on their cognitively disabled clients.

----------------------------------

[1] "Amelie Green" is a pseudonym chosen to protect Amelie and her family.

Friday, April 01, 2022

Work on Robot Rights Doesn't Conflict with Work on Human Rights

Sometimes I write and speak about robot rights, or more accurately, the moral status of artificial intelligence systems -- or even more accurately, the possible moral status of possible future artificial intelligence systems. I occasionally hear the following objection to this whole line of work: Why waste our time talking about hypothetical robot rights when there are real people, alive right now, whose rights are being disregarded? Let's talk about the rights of those people instead! Some objectors add the further thought that there's a real risk that, under the influence of futurists, our society might eventually treat robots better than some human beings -- ethnic minorities, say, or disabled people.

I feel some of the pull of this objection. But ultimately, I think it's off the mark.

The objector appears to see a conflict between thinking about the rights of hypothetical robots and thinking about the rights of real human beings. I'd argue, in contrast, that there's a synergy, or at least that there can be a synergy. Those of us interested in robot rights can be fellow travelers with those of us advocating better recognition and implementation of human rights.

In a certain limited sense, there is of course a conflict. Every word that I speak about the rights of hypothetical robots is a word I'm not speaking about the rights of disempowered ethnic groups or disabled people, unless I'm making statements so general that they apply to all such groups. In this sense of conflict, almost everything we do conflicts with the advocacy of human rights. Every time you talk about mathematics, or the history of psychology, or the chemistry of fluoride, you're speaking of those things instead of advocating human rights. Every time you chat with a friend about Wordle, or make dinner, or go for a walk, you're doing something that conflicts, in this limited sense, with advocating human rights.

But that sort of conflict can't be the heart of the objection. The people who raise this objection to work on robot rights don't also object in the same way to work on fluoride chemistry or to your going for a walk.

Closer to the heart of the matter, maybe, is that the person working on robot rights appears to have some academic expertise on rights in general -- unlike the chemistry professor -- but chooses to squander that expertise on hypothetical trivia instead of issues of real human concern.

But this can't quite be the right objection either. First, some people's expertise is a much more natural fit for robot rights than for human rights. I come to the issue primarily as an expert on theories of consciousness, applying my knowledge of such theories to the question of the relationship between robot consciousness and robot rights. Kate Darling entered the issue as a roboticist interested in how people treat toy robots. Second, even people who are experts on human rights shouldn't need to spend all of their time working on that topic. You can write about human rights sometimes and other issues at other times, without -- I hope -- being guilty of objectionably neglecting human rights in those moments you aren't writing about them. (In fact, in a couple of weeks at the American Philosophical Association I'll be presenting work on the mistreatment of cognitively disabled people [Session 1B of the main program].)

So what's the root of the objection? I suspect it's an implicit (or maybe explicit) sense that rights are a zero-sum game -- that advocating for the rights of one group means advocating for their rights over the rights of other groups. If you work advocating the rights of Black people, maybe it seems like you care more about Black people than about other groups -- women, or deaf people, for example -- and you're trying to nudge your favorite group to the front of some imaginary line. If this is the background picture, then I can see how attending to the issue of robot rights might come across as offensive! I completely agree that fighting for the rights of real groups of oppressed and marginalized people is far more important, globally, than wondering under what conditions hypothetical future robots would merit our moral concern.

But the zero-sum game picture is wrong -- backward, even -- and we should reject it. There are synergies between thinking about the rights of women, disempowered ethnic groups, and disabled people. Similar (though of course not identical) dynamics can occur, so that thinking about one kind of case, or thinking about intersectional cases, can help one think about others; and people who care about one set of issues often find themselves led to care about others. Advocates for one group are more typically partners with, rather than opponents of, advocates for other groups. Think, for example, of the alliance of Blacks and Jews in the 20th century U.S. civil rights movement.

In the case of robot rights in particular, this is perhaps less so, since the issue remains largely remote and hypothetical. But here's my hope, as the type of analytic philosopher who treasures thought experiments about remote possibilities: Thinking about the general conditions under which hypothetical entities warrant moral concern will broaden and sophisticate our thinking about rights and moral status in general. If you come to recognize that, under some conditions, entities as different from us as robots might deserve serious moral consideration, then when you return to thinking about human rights, you might do so in a more flexible way. If robots would deserve rights despite great differences from us, then of course others in our community deserve rights, even if we're not used to thinking about their situation. In general, I hope, thinking hypothetically about robot rights should leave us more thoughtful and open in general, encouraging us to celebrate the wide diversity of possible ways of being. It should help us crack our narrow prejudices.

Science fiction has sometimes been a leader in this. Consider Star Trek: The Next Generation, for example. Granting rights to the android named Data (as portrayed in this famous episode) conflicts not at all with recognizing the rights of his human friend Geordi La Forge (who relies on a visor to see and whom viewers would tend to racialize as Black). Thinking about the rights of the one in no way impairs, but instead complements and supports, thinking about the rights of the other. Indeed, from its inception, Star Trek was a leader in U.S. television, aiming to imagine (albeit not always completely successfully) a fair and egalitarian, multi-racial society, in which not only people of different sexes and races interact as equals, but so also do hypothetical creatures, such as aliens, robots, and sophisticated non-robotic A.I. systems.

[Riker removes Data's arm, as part of his unsuccessful argument that Data deserves no rights, being merely a machine]

------------------------------------------

Thanks to the audience at Ruhr University Bochum for helpful discussion (unfortunately not recorded in the linked video), especially Luke Roelofs.

[image source]