Thursday, April 28, 2022

Will Today's Philosophical Work Still Be Discussed in 200 Years?

I'm a couple days late to this party. Evidently, prominent Yale philosopher Jason Stanley precipitated a firestorm of criticism on Twitter by writing:

I would regard myself as an abject failure if people are still not reading my philosophical work in 200 years. I have zero intention of being just another Ivy League professor whose work lasts as long as they are alive.

(Stanley has since deleted the tweet, but he favorably retweeted a critique that discusses him specifically, so I assume he wouldn't object to my also doing so.)

Now "abject failure" is too strong -- Stanley has a tendency toward hyperbole on Twitter -- but I think it is entirely reasonable for him to aspire to create philosophical work that will still be read in 200 years and to be somewhat disheartened by the prospect that he will be entirely forgotten. Big-picture philosophy needn't aim only at current audiences. It can aspire to speak to future generations.

How realistic is such an aim? Well, first, we need to evaluate how likely it is that history of philosophy will be an active discipline in 200 years. The work of our era -- Stanley and others -- will of course be regarded as historical by then. Maybe there will be no history of philosophy. Humanity might go extinct or collapse into a post-apocalyptic dystopia with little room for recondite historical scholarship. Alternatively, humanity or our successors might be so cognitively advanced that they regard us early 21st century philosophers as the monkey-brained advocates of simplistic views that are correct only by dumb luck if they are correct at all.

But I don't think we need to embrace dystopian pessimism; and I suspect that even if our descendants are super-geniuses, there will remain among them some scholars who appreciate the history of 21st century thought, at least in an antiquarian spirit. ("How fascinating that our monkey-brained ancestors were able to come up with all of this!") And of course another possibility is that society proceeds more or less on its current trajectory. Economic growth continues, perhaps at a more modest rate, and with it a thriving global academic culture, hosting ever more researchers of all stripes, with historians in India, Indonesia, Illinois, and Iran specializing in ever more recondite subfields. It's not unreasonable, then, to guess that there will be historians of philosophy in 200 years.

What will they think of our era? Will they study it at all? It seems likely they will. After all, historians of philosophy currently study every era with a substantial body of written philosophy, and as academia has grown, scholars have been filling in the gaps between our favorite eras. I have argued elsewhere that the second half of the 20th century might well be viewed as a golden age of philosophy -- a flourishing of materialism, naturalism, and secularism, as 19th- and early 20th-century dualism and idealism were mostly jettisoned in favor of approaches more straightforwardly grounded in physics and biology. You might not agree with that conjecture. But I think you should still agree that at least in terms of the quantity of work, the variety of topics explored, and the range of views considered, the past fifty years compares favorably with, say, the early medieval era, and indeed probably pretty much any relatively brief era.

So I don't think historians will entirely ignore us. And given that English is now basically the lingua franca of global academia (for better or worse), historians of our era will not neglect English-language philosophers.

Who will be read? The historical fortunes of philosophers rise and fall. Gottlob Frege and Friedrich Nietzsche didn't receive much attention in their day, but are now viewed as historical giants. Christian Wolff and Henri Bergson were titans in their lifetimes but are little read now. On the other hand, the general tendency is for influential figures to continue to be seen as influential, and we haven't entirely forgotten Wolff and Bergson. A good historian will recognize at least that a full understanding of the eras in which Wolff and Bergson flourished requires appreciating their impact.

Given the vast number of philosophers writing today and in recent decades, an understanding of our era will probably focus less on understanding the systems of a few great figures and more on understanding the contributions of many scholars to prominent topics of debate -- for example, the rise of materialism, functionalism, and representationalism in philosophy of mind (alongside the major critiques of those views); or the division of normative ethics into consequentialist, deontological, and virtue-ethical approaches. A historian of our era will want to understand these things. And that will require reading David Lewis, Bernard Williams, and other leading figures of the late 20th century as well as, probably, David Chalmers and Peter Singer among others writing now.

As I imagine it, scholars of the 23rd century will still have archival access to our major books and journals. Specialists, then, will thumb through old issues of Nous and Philosophical Review. Some will be intrigued by minor scholars who are in dialogue with the leading figures of our era. They might find some of the work by these minor scholars insightful -- a valuable critique, perhaps, of the views of the leading figures, maybe one prefiguring positions that were more prominently and thoroughly developed by better-known subsequent scholars.

It is not unreasonable, I think, for Stanley to aspire to be among the leading political philosophers and philosophers of language of our era, who will still be read by some historians and students, and perhaps still be viewed as having some good ideas that are worth continued discussion and debate.

For my own part, I doubt I will be viewed that way. But I still fantasize that some 23rd-century specialist in the history of philosophy of our era will stumble across one of my books or articles and think, "Hey, some of the work of this mostly-forgotten philosopher is pretty interesting! I think I'll cite it in one of my footnotes." I don't write mainly with that future philosopher in mind, but it still pleases me to think that my work might someday provoke that reaction.

[image generated by wombo.art]

Friday, April 22, 2022

Let's Hope We Don't Live in a Simulation

reposting from the Los Angeles Times, where it appears under a different title[1]

------------------------------------------

There’s a new creation story going around. In the beginning, someone booted up a computer. Everything we see around us reflects states of that computer. We are artificial intelligences living in an artificial reality — a “simulation.”

It’s a fun idea, and one worth taking seriously, as people increasingly do. But we should very much hope that we’re not living in a simulation.

Although the standard argument for the simulation hypothesis traces back to a 2003 article from Oxford philosopher Nick Bostrom, 2022 is shaping up to be the year of the sim. In January, David Chalmers, one of the world’s most famous philosophers, published a defense of the simulation hypothesis in his widely discussed new book, Reality+. Essays in mainstream publications have declared that we could be living in virtual reality, and that tech efforts like Facebook’s quest to build out the metaverse will help prove that immersive simulated life is not just possible but likely — maybe even desirable.

Scientists and philosophers have long argued that consciousness should eventually be possible in computer systems. With the right programming, computers could be functionally capable of independent thought and experience. They just have to process enough information in the right way, or have the right kind of self-representational systems that make them experience the world as something happening to them as individuals.

In that case, the argument goes, advanced engineers should someday be able to create artificially intelligent, conscious entities: “sims” living entirely in simulated environments. These engineers might create vastly many sims, for entertainment or science. And the universe might have far more of these sims than it does biologically embodied, or “real,” people. If so, then we ourselves might well be among the sims.

The argument requires some caveats. It’s possible that no technological society ever can produce sims. Even if sims are manufactured, they may be rare — too expensive for mass manufacture, or forbidden by their makers’ law.

Still, the reasoning goes, the simulation hypothesis might be true. It’s possible enough that we have to take it seriously. Bostrom estimates a 1-in-3 chance that we are sims. Chalmers estimates about 25%. Even if you’re more doubtful than that, can you rule it out entirely? Any putative evidence that we aren’t in a sim — such as cosmic background radiation “proving” that the universe originated in a Big Bang — could, presumably, be simulated.
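(A minimal sketch of the indifference reasoning behind such estimates, with placeholder symbols rather than figures anyone has endorsed: let N_sim be the number of beings with experiences like ours who are simulated and N_bio the number who are biologically embodied. Assuming we have no evidence favoring membership in one group over the other,

\[
P(\text{we are sims}) \;\approx\; \frac{N_{\text{sim}}}{N_{\text{sim}} + N_{\text{bio}}},
\]

which approaches 1 if sims vastly outnumber biological people. The published estimates come out lower largely because substantial probability is reserved for the possibility that few or no such sims are ever created.)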

Suppose we accept this. How should we react?

Chalmers seems unconcerned: “Being in an artificial universe seems no worse than being in a universe created by a god” (p. 328). He compares the value of life in a simulation to the value of life on a planet newly made inhabitable. Bostrom acknowledges that humanity faces an “existential risk” that the simulation will shut down — but that risk, he thinks, is much lower than the risk of extinction by a more ordinary disaster. We might even relish the thought that the cosmos hosts societies advanced enough to create sims like us.

In simulated reality, we’d still have real conversations, real achievements, real suffering. We’d still fall in and out of love, hear beautiful music, climb majestic “mountains” and solve the daily Wordle. Indeed, even if definitive evidence proved that we are sims, what — if anything — would we do differently?

But before we adopt too relaxed an attitude, consider who has the God-like power to create and destroy worlds in a simulated universe. Not a benevolent deity. Not timeless, stable laws of physics. Instead, basically gamers.

Most of the simulations we run on our computers are games or scientific studies. They run only briefly before being shut down. Our low-tech sims live partial lives in tiny worlds, with no real history or future. The cities of Sim City are not embedded in fully detailed continents. The simulated soldiers dying in war games fight for causes that don’t exist. They are mere entertainments to be observed, played with, shot at, surprised with disasters. Delete the file, uninstall the program, or recycle your computer and you erase their reality.

But I’m different, you say: I remember history and have been to Wisconsin. Of course, it seems that way. The ordinary citizens of Sim City, if they were somehow made conscious, would probably be just as smug. Simulated people could be programmed to think they live on a huge planet with a rich past, remembering childhood travels to faraway places. Their having these beliefs in fact makes for a richer simulation.

If the simulations that we humans are familiar with reveal the typical fate of simulated beings, long-term sims are rare. Alternatively, if we can’t rely on the current limited range of simulations as a guide, our ignorance about simulated life runs even deeper. Either way, there are no good grounds for confidence that we live in a large, stable simulation.

Taking the simulation hypothesis seriously means accepting that the creator might be a sadistic adolescent gamer about to unleash Godzilla. It means taking seriously the possibility that you are alone in your room with no world beyond, reading a fake blog post, existing only as a short-lived subject or experiment. You might know almost nothing about reality beyond and beneath the simulation. The cosmos might be radically different from anything you could imagine.

The simulation hypothesis is wild and wonderful to contemplate. It’s also radically skeptical. If we take it seriously, it should undermine our confidence about the past, the future and the existence of Milwaukee. What or whom can we trust? Maybe nothing, maybe no one. We can only hope our simulation god is benevolent enough to permit our lives to continue awhile.

Really, we ought to hope the theory is false. A large, stable planetary rock is a much more secure foundation for reality than bits of a computer program that can be deleted at a whim.

Postscript:

In Reality+, Chalmers argues against the possibility that we live in a local or a temporary simulation on grounds of simplicity (p. 442-447). I am not optimistic that this response succeeds. In general, simplicity arguments against skepticism tend to be underdeveloped and unconvincing -- in part because simplicity itself is complex to evaluate (see my paper with Alan T. Moore, "Experimental Evidence for the Existence of an External World"). And more specifically, it's not clear why it would be easier or simpler to create a giant, simulated world than to create a small simulation with fake indicators of a giant world -- perhaps only enough indicators to effectively fool us for the brief time we exist or on the relatively few tests we run. (And plausibly, our creators might be able to control or predict what thoughts we have or tests we will run and thus only create exactly the portions of reality that they know we will examine.) Continuing the analogy from Sim City, our current sims are more easily constructed if they are small, local, and brief, or if they are duplicated off a template, than if each is giant, a unique run of a whole universe from the beginning. I see no reason why this fact wouldn't generalize to more sophisticated simulations containing genuinely conscious artificial intelligences.

------------------------------------------

[1] The Los Angeles Times titled the piece "Is life a simulation? If so, be very afraid". While I see how one might draw that conclusion from the piece, my own view is that we probably should react emotionally as we react to other small but uncontrollable risks -- not with panic, but rather with a slight shift toward favoring short-term outcomes over long-term ones. See my discussion in "1% Skepticism" and Chapter 4 of my book in draft, The Weirdness of the World. I have also added links, a page reference, and altered the wording for clarity in a few places.

[image generated from inputting the title of this piece into wombo.art's steampunk generator]

Tuesday, April 12, 2022

Let Everyone Sparkle: Psychotechnology in the Year 2067

My latest science fiction story, in Psyche.

Thank you, everyone, for coming to my 60th birthday celebration! I trust that you all feel as young as ever. I feel great! Let’s all pause a moment to celebrate psychotechnology. The decorations and Champagne are not the only things that sparkle. We ourselves glow and fizz as humankind never has before. What amazing energy drinks we have! What powerful and satisfying neural therapies!

If human wellbeing is a matter of reaching our creative and intellectual potential, we flourish now beyond the dreams of previous generations. Sixth-graders master calculus and critique the works of Plato, as only college students could do in the early 2000s. Scientific researchers work 16-hour days, sleeping three times as efficiently as their parents did, refreshed and eager to start at 2:30am. Our athletes far surpass the Olympians of the 2030s, and ordinary fans, jazzed up with attentional cocktails, appreciate their feats with awesome clarity of vision and depth of understanding. Our visual arts, our poetry, our dance and craftwork – all arguably surpass the most brilliant artists and performers of a century ago, and this beauty is multiplied by audiences’ increased capacity to relish the details.

Yet if human wellbeing is a matter not of creative and intellectual flourishing but consists instead in finding joy, tranquility and life satisfaction, then we attain these things too, as never before. Gone are the blues. Our custom pills, drinks and magnetic therapies banish all dull moods. Gone is excessive anxiety. Gone even are grumpiness and dissatisfaction, except as temporary spices to balance the sweetness of life. If you don’t like who you are, or who your spouses and children are, or if work seems a burden, or if your 2,000-square-foot apartment seems too small, simply tweak your emotional settings. You need not remain dissatisfied unless you want to. And why on Earth would anyone want to?

Gone are anger, cruelty, immorality and bitter conflict. There can be no world war, no murderous Indian Partition, no Rwandan genocide. There can be no gang violence, no rape, no crops rotting in warehouses while the masses starve. With the help of psychotechnology, we are too mature and rational to allow such things. Such horrors are fading into history, like a bad dream from which we have collectively woken – more so, of course, among advanced societies than in developing countries with less psychotechnology.

We are Buddhists and Stoics improved. As those ancient philosophers noticed, there have always been two ways to react if the world does not suit your desires. You can struggle to change the world – every success breeding new desires that leave you still unhappy – or you can, more wisely, adjust your desires to match the world as it already is, finding peace. Ancient meditative practices delivered such peace only sporadically and imperfectly, to the most spiritually accomplished. Now, spiritual peace is democratised. You need only twist a dial on your transcranial stimulator or rebalance your morning cocktail.

[continued here]

Wednesday, April 06, 2022

New Essay in Draft: Dehumanizing the Cognitively Disabled: Commentary on Smith's Making Monsters

by Eric Schwitzgebel and Amelie Green[1]

Since the essay is short, we post the entirety of it below. This is a draft. Comments, corrections, and suggestions welcome.

-----------------------------

“No one is doing better work on the psychology of dehumanization than David Livingstone Smith, and he brings to bear an impressive depth and breadth of knowledge in psychology, philosophy, history, and anthropology. Making Monsters is a landmark achievement which will frame all future work on the psychology of dehumanization.” So says Eric Schwitzgebel on the back cover of the book, and we stand by that assessment. Today we aim to extend Smith’s framework to cases of cognitive disability.

According to Smith, “we dehumanize others when we conceive of them as subhuman creatures” (p. 9). However, Smith argues, since it is rarely possible to entirely eradicate our inclination to see other members of our species as fully human, dehumanization typically involves having contradictory beliefs, or at least contradictory representations. On the one hand, the Nazi looks at the Jew, or the southern slaveowner looks at the Black slave, and they can’t help but represent them as human. On the other hand, the Nazi and the slaveowner accept an ideology according to which the Jew and the Black slave are subhuman. The Jew or the Black slave is thus, on Smith’s view, cognitively threatening. They are experienced as confusing and creepy. They seem to transgress the boundaries between human and non-human, violating the natural order. Smith briefly discusses disabled people. Sometimes, disabled people appear to be dehumanized in Smith’s sense. Smith quotes the Nazi doctor Wilhelm Bayer as saying that the fifty-six disabled children he euthanized “could not be qualified as ‘human beings’” (p. 250). Perhaps more commonly, however, people guilty of ableism regard disabled people as humans, but humans who are “chronically defective, incomplete, or deformed” (p. 261). Even in the notorious tract which set the stage for the Nazi euthanasia program, “Permission to Destroy Life Unworthy of Life”, Karl Binding and Alfred Hoche describe those they seek to destroy as “human” (Menschen).

However, we recommend not relying exclusively on explicit language in thinking about dehumanization of people with disabilities. It is entirely possible to represent people as subhuman while still verbally describing them as “human” when explicitly asked. Dehumanization in Smith’s sense involves powerful conflicting representations of the other as both human and subhuman. Verbal evidence is important (and we will use it ourselves), but dehumanization does not require that both representations be verbalized.

We focus on the case of adults with severe cognitive disabilities. Amelie Green is the daughter of Filipino immigrants who worked as live-in caregivers in a small residential home for severely cognitively disabled “clients”. Throughout her childhood and early adulthood, Amelie witnessed the repeated abuse of cognitively disabled people at the hands of caregivers. This includes psychological abuse, physical assault, gross overmedication, needless binding, and nutritional deprivation, directly contrary to law and any reasonable ethical standard. This abuse is possible because the monitoring of these institutions is extremely lax. Surprise visits by regulators rarely occur. Typically, inspections are scheduled weeks or months in advance, giving residential institutions ample time to create the appearance of humane conditions in a brief, pleasing show for regulators. Since the clients are severely cognitively disabled, few are able to communicate their abuse to regulators. Many do not even recognize that they are being abused.

We’ll describe one episode as Amelie recorded it – far from the worst that Amelie has witnessed – to give the flavor and as a target for analysis. The client’s name has been changed for confidentiality.

As I stepped out of the kitchen, I heard a sharp scream, followed by a light thud. The screams continued, and, out of curiosity, I found myself walking towards the back of the house, drawn to two individuals shouting. Halfway towards the commotion, I stopped. I witnessed a caregiver strenuously invert an ambulatory woman strapped to her wheelchair. Both of the patient’s legs pointed towards the ceiling, and her hands clutched the wheelchair’s sidearm handles. As the wailing grew louder, the caregiver proceeded to wedge the patient’s left shoe inside her mouth, muffling the screams.

My initial reaction was to walk away from the scene to compose my thoughts quickly. Upon reflection, I assumed that the soft thud I heard was the impact of Anna’s wheelchair. Anna’s refusal to stop crying must have prompted the caregiver to stuff a shoe inside Anna’s mouth. I assumed that Anna was punished for complaining. After some thought, I noticed that I involuntarily defended the act of physical abuse by conceptualizing the caregiver’s response as a “punishment,” insinuating my biased perspective in favor of the workers. From afar, I caught the female staff outwardly explaining to Anna that she would continue to physically harm her if she made “too much loud noise.” From personal observation, Anna struggled to control her crying spells, oblivious of the commotion she was creating. Nonetheless, Anna involuntarily continued screaming, and the female staff thrust the shoe deeper.

Amelie has witnessed staff members kicking clients in the head; binding them to their beds with little cause; feeding a diabetic client large amounts of sugary drinks with the explicit aim of harming them; eating clients’ attractive food, leaving the clients with a daily diet of mostly salads, eggs, and prunes; falsifying time stamps for medication and feeding; and attempting to control clients by dosing them with psychiatric medications intended for other clients, against medical recommendations. It is not just a few caregivers who engage in such abusive behaviors. In Amelie’s experience, a majority of caregivers are abusive, though to different degrees.

Why do caregivers of the severely cognitively disabled so frequently behave like this? We have three hypotheses.

Convenience. Abuse might be the easiest or most effective means of achieving some practical goal. For example, striking or humiliating a client might keep them obedient, easier to manage than would be possible with a more humane approach. Although humane techniques exist for managing people with cognitive disabilities, they might work more slowly or require more effort from caregivers, who might understandably feel overtaxed in their jobs and frustrated by clients’ unruly behavior. Poorly paid workers might also steal attractive food that would otherwise not be easy for them to afford, justifying it with the thought that the clients won’t know the difference.

Sadism. According to the clinical psychologist Erich Fromm (1974), sadistic acts are acts performed on helpless others that aim at exerting maximum control over those helpless others, usually by inflicting harm on them but also by subjecting those others to arbitrary rules or forcing them to do pointless activities. It is crucial to sadistic control that it lack practical value, since power is best manifested when the chosen action is arbitrary. People typically enact sadism, according to Fromm, when they feel powerless in their own lives. Picture the man who feels frustrated and powerless at work who then comes home and kicks his dog. Cognitively disabled adults might be particularly attractive targets for frustrated workers’ sadistic impulses, since they are mostly powerless to resist and cannot report abuse.

Dehumanization. Abuse might arise from metaphysical discomfort of the sort Smith sees in racial dehumanization. The cognitively disabled might be seen as unnatural and metaphysically threatening. The cognitively disabled might seem creepy, occupying a gray area that defies familiar categories, at once both human and subhuman. Caregivers with conflicting representations of cognitively disabled people both as human and as subhuman might attempt to resolve that conflict by symbolically degrading their clients – implicitly asserting their clients’ subhumanity as a means of resolving this felt tension in favor of the subhuman. If the caregivers have already been mistreating the clients due to convenience or sadism, symbolic degradation might be even more attractive. If they can reinforce their representation of the client as subhuman, sadistic abuse or mistreatment for sake of convenience will seem to matter less.

Consider the example of Anna. To the extent the caregiver’s motivation is convenience, she might be hoping that inverting Anna in the wheelchair and shoving a shoe in her mouth will be an effective punishment that will encourage Anna not to cry so much or so loudly in the future. To the extent the motivation is sadism, the caregiver might be acting out of frustration and a feeling of powerlessness, either in general in her working life or specifically regarding her inability to prevent Anna from crying or both. By inverting Anna and shoving a shoe in her mouth, the caregiver can feel powerful instead of powerless, exerting sadistic control over a helpless other. To the extent the motivation is dehumanization, the worker is symbolically removing Anna’s humanity by literally physically turning her upside-down, into a position that human beings don’t typically occupy. Dogs bite shoes, and humans typically do not, and so arguably Anna is symbolically transformed into a dog. Furthermore, the shoe symbolically and perhaps actually prevents Anna from using her mouth to make humanlike sounds.

These three hypotheses about caregivers’ motives make different empirically distinguishable predictions about who will be abusive, and to whom, and which abusive acts they tend to choose. To the extent convenience is the explanation, we should expect experienced caregivers to choose effective forms of abuse. They will not engage in abuse with no clear purpose, and if a particular form of abuse seems not to be achieving its goal, they will presumably learn to stop that practice. To the extent sadism is the explanation, we should expect that the caregivers who feel most powerless should engage in it and that they should choose as victims clients who are among the most powerless while still being capable of controllable activity. Sadistic abuse should manifest especially in acts of purposeless cruelty and arbitrary control, almost the opposite of what would be chosen if convenience were the motive. To the extent dehumanization is the motive, we should expect the targets of abuse to be disproportionately the clients who are most cognitively and metaphysically threatening -- the ones who, in addition to being cognitively disabled, are perceived as having a “deformed” physical appearance, or who seem to resemble non-human animals in their behavior (for example, crawling instead of upright walking), or who are negatively racialized. Acts manifesting dehumanizing motivations should be acts with symbolic value: treating the person in ways that are associated with the treatment of non-human animals, or symbolically altering or preventing characteristically human features or behaviors such as speech, clothing, upright walking, and dining.

We don’t intend convenience, sadism, and dehumanization as an exhaustive list of motives. People do things for many reasons, including sometimes against their will at the behest of others. Nor do we intend these three motives as exclusive. Indeed, as we have already suggested, they might to some extent support each other: Dehumanizing motives might be more attractive once a caregiver has already abused a client for reasons of convenience or sadism. Also, different caregivers might exhibit these motivations in different proportions.

Convenience alone cannot always be the motive. Caregivers often mistreat clients in ways that, far from making things easier for themselves, require extra effort. Adding extra sugar to a diabetic client’s drink serves no effective purpose and risks creating medical complications that the caregiver would then have to deal with. Another client was regularly told lies about his mother, such as that she had died or that she had forgotten about him, seemingly only to provoke a distressed reaction from him. This same client had a tendency to hunch forward and grunt, and caregivers would imitate his slouching and grunting, mocking him in a way that often flustered and confused him. Also, caregivers would go to substantial lengths to avoid sharing the facility’s elegant dining table with clients, even though there was plenty of room for both workers and clients to eat together at opposite ends. Instead, caregivers would rearrange chairs and tablecloths and a large vase before every meal, forcing clients to eat separately at an old, makeshift table. Relatedly, they meticulously ensured that caregivers’ and clients’ dishes and cutlery were never mixed, cleaning them with separate sponges and drying them in separate racks, as if clients were infectious.

But do caregivers really have dehumanizing representations in Smith’s sense? Here, we follow Smith’s method of examining the caregivers’ words. In Amelie’s experience over the years, she has observed that caregivers frequently refer to their clients as “animals” or “no better than animals”. In abusing them, they say things like, “you have to treat them like the animals they are”. Caregivers also commonly treat clients in a manner associated with dogs – for example, whistling for them to come over, saying “Here [name]!” in the same manner you would call a dog, and feeding them food scraps from the table. (These scraps will often be food officially bought on behalf of the clients but which the caregivers are eating for themselves.) The caregivers Amelie has observed also commonly refer to their clients with the English pronoun “it” instead of “he” or “she”, though of course they are aware of their clients’ gender. Some employ “it” so habitually that they accidentally refer to clients as “it” in front of the client’s relatives, during relatives’ visits. This pronoun is perhaps especially telling, since there is no practical justification for using it, and often no sadistic justification either, since many clients aren’t linguistically capable of understanding pronoun use. The use of “it” appears to emerge from an implicit or explicit dehumanizing representation of the client.

Despite speech patterns suggestive of dehumanization, caregivers also explicitly refer to the clients as human beings. In their reflective moments, Amelie has observed them to say things like “It’s hard to remember sometimes that they’re people. When they behave like this, you sometimes forget.” In Amelie’s judgment, the caregivers typically agree when reminded that the clients are people with rights who should be treated accordingly, though they often seem uncomfortable in acknowledging this.

Although the evidence is ambiguous, given caregivers’ patterns of explicitly referring to their cognitively disabled clients both as people and as non-human animals or “it”s, plus non-verbal behavior that appears to suggest dehumanizing representations, we think it’s reasonable to suppose, in accordance with Smith’s model of dehumanization, that many caregivers have powerful contradictory representations of their clients, seeing them simultaneously as human and as subhuman, finding them confusing, creepy, and in conflict with the natural order of things. If so, then it is plausible that they would feel the same kind of cognitive and metaphysical discomfort that Smith identifies in racial dehumanization, and that this discomfort would sometimes lead to inappropriate behavior of the sort described.

There’s another way to reassert the natural order of things, of course. Instead of dehumanizing cognitively disabled clients, you might embrace their humanity. There are two ways of doing this. One involves preserving a certain narrow, traditional sense of the “human” – a sense into which cognitively disabled people don’t easily fit – and then attempting to force the cognitively disabled within that conception. Visiting relatives sometimes seem to do this. One pattern is for a relative to comment with excessive appreciation on a stereotypically human trait that the client has, such as the beauty of their hair – as if to prove to themselves or others that their cognitively disabled relative is a human after all. While this impulse is admirable, it might be rooted in a narrow conception of the human, according to which cognitively disabled people are metaphysical category-straddlers or at best lesser humans.

A different approach to resolving the metaphysical problem – the approach we recommend – involves a more capacious understanding of the human. Plenty of people have disabilities. A person with a missing leg is no less of a human than a person with two legs, nor is the person with a missing leg somehow defective in their humanity. However, our culture appears to have instilled in many of us – perhaps implicitly and even against our better conscious judgment – a tendency to think of high levels of cognitive ability as essential to being fully and non-defectively human. Perhaps historically this has proven to be a useful ideology for eliminating, warehousing, drugging, and binding people who are inconvenient to have around. We suspect that changing this conception would reduce the abuse that caregivers routinely inflict on their cognitively disabled clients.

----------------------------------

[1] "Amelie Green" is a pseudonym chosen to protect Amelie and her family.

Friday, April 01, 2022

Work on Robot Rights Doesn't Conflict with Work on Human Rights

Sometimes I write and speak about robot rights, or more accurately, the moral status of artificial intelligence systems -- or even more accurately, the possible moral status of possible future artificial intelligence systems. I occasionally hear the following objection to this whole line of work: Why waste our time talking about hypothetical robot rights when there are real people, alive right now, whose rights are being disregarded? Let's talk about the rights of those people instead! Some objectors add the further thought that there's a real risk that, under the influence of futurists, our society might eventually treat robots better than some human beings -- ethnic minorities, say, or disabled people.

I feel some of the pull of this objection. But ultimately, I think it's off the mark.

The objector appears to see a conflict between thinking about the rights of hypothetical robots and thinking about the rights of real human beings. I'd argue in contrast that there's a synergy, or at least that there can be a synergy. Those of us interested in robot rights can be fellow travelers with those of us advocating better recognition of and implementation of human rights.

In a certain limited sense, there is of course a conflict. Every word that I speak about the rights of hypothetical robots is a word I'm not speaking about the rights of disempowered ethnic groups or disabled people, unless I'm making statements so general that they apply to all such groups. In this sense of conflict, almost everything we do conflicts with the advocacy of human rights. Every time you talk about mathematics, or the history of psychology, or the chemistry of fluoride, you're speaking of those things instead of advocating human rights. Every time you chat with a friend about Wordle, or make dinner, or go for a walk, you're doing something that conflicts, in this limited sense, with advocating human rights.

But that sort of conflict can't be the heart of the objection. The people who raise this objection to work on robot rights don't also object in the same way to work on fluoride chemistry or to your going for a walk.

Closer to the heart of the matter, maybe, is that the person working on robot rights appears to have some academic expertise on rights in general -- unlike the chemistry professor -- but chooses to squander that expertise on hypothetical trivia instead of issues of real human concern.

But this can't quite be the right objection either. First, some people's expertise is a much more natural fit for robot rights than for human rights. I come to the issue primarily as an expert on theories of consciousness, applying my knowledge of such theories to the question of the relationship between robot consciousness and robot rights. Kate Darling entered the issue as a roboticist interested in how people treat toy robots. Second, even people who are experts on human rights shouldn't need to spend all of their time working on that topic. You can write about human rights sometimes and other issues at other times, without -- I hope -- being guilty of objectionably neglecting human rights in those moments you aren't writing about them. (In fact, in a couple of weeks at the American Philosophical Association I'll be presenting work on the mistreatment of cognitively disabled people [Session 1B of the main program].)

So what's the root of the objection? I suspect it's an implicit (or maybe explicit) sense that rights are a zero-sum game -- that advocating for the rights of one group means advocating for their rights over the rights of other groups. If you work advocating the rights of Black people, maybe it seems like you care more about Black people than about other groups -- women, or deaf people, for example -- and you're trying to nudge your favorite group to the front of some imaginary line. If this is the background picture, then I can see how attending to the issue of robot rights might come across as offensive! I completely agree that fighting for the rights of real groups of oppressed and marginalized people is far more important, globally, than wondering about under what conditions hypothetical future robots would merit our moral concern.

But the zero-sum game picture is wrong -- backward, even -- and we should reject it. There are synergies between thinking about the rights of women, disempowered ethnic groups, and disabled people. Similar dynamics (though of course not entirely the same) can occur, so that thinking about one kind of case, or thinking about intersectional cases, can help one think about others; and people who care about one set of issues often find themselves led to care about others. Advocates of one group more typically are partners with, rather than opponents of, advocates of the other groups. Think, for example, of the alliance of Blacks and Jews in the 20th century U.S. civil rights movement.

In the case of robot rights in particular, this is perhaps less so, since the issue remains largely remote and hypothetical. But here's my hope, as the type of analytic philosopher who treasures thought experiments about remote possibilities: Thinking about the general conditions under which hypothetical entities warrant moral concern will broaden and sophisticate our thinking about rights and moral status in general. If you come to recognize that, under some conditions, entities as different from us as robots might deserve serious moral consideration, then when you return to thinking about human rights, you might do so in a more flexible way. If robots would deserve rights despite great differences from us, then of course others in our community deserve rights, even if we're not used to thinking about their situation. In general, I hope, thinking hypothetically about robot rights should leave us more thoughtful and open in general, encouraging us to celebrate the wide diversity of possible ways of being. It should help us crack our narrow prejudices.

Science fiction has sometimes been a leader in this. Consider Star Trek: The Next Generation, for example. Granting rights to the android named Data (as portrayed in this famous episode) conflicts not at all with recognizing the rights of his human friend Geordi La Forge (who relies on a visor to see and who viewers would tend to racialize as Black). Thinking about the rights of the one in no way impairs, but instead complements and supports, thinking about the rights of the other. Indeed, from its inception, Star Trek was a leader in U.S. television, aiming to imagine (albeit not always completely successfully) a fair and egalitarian, multi-racial society, in which not only people of different sexes and races interact as equals, but so also do hypothetical creatures, such as aliens, robots, and sophisticated non-robotic A.I. systems.

[Riker removes Data's arm, as part of his unsuccessful argument that Data deserves no rights, being merely a machine]

------------------------------------------

Thanks to the audience at Ruhr University Bochum for helpful discussion (unfortunately not recorded in the linked video), especially Luke Roelofs.

[image source]

Thursday, March 24, 2022

Evening the Playing Field in Philosophy Classes

As I discussed last week, overconfident students have systematic advantages in philosophy classes, at least as philosophy is typically taught in the United States. By confidently asserting their ideas in the classroom -- even from day one, when they have no real expertise on the issues -- they get practice articulating philosophical views in an argumentative context and they receive the professor's customized feedback on their views. Presenting their views before professor and peers engages their emotions and enhances their memory. Typically, professors encourage and support such students, bringing out the best in them. Thus, over the long run, overconfident students tend to perform well, better than their otherwise similar peers with more realistic self-assessments. What seems to be an epistemic vice -- overconfidence -- ultimately helps them flourish and learn more than they otherwise would have.

I like these overconfident students (as long as they're not arrogant or domineering). It's good we encourage them. But I also want to level the playing field so that less-overconfident students can gain some of the same advantages. Here's my advice for doing so.

First, advice for professors. Second, advice for students.

Evening the Playing Field: Advice for Professors

(1.) Small-group discussions. This might sound like tired advice, but there's a reason the advice is so common. Small-group discussions work amazing magic if you do them right. Here's my approach:

* Divide the class into groups of 3 or 4. Insist on exactly 3 or 4. Two is too few, because friends will pair up and have too little diversity of opinion. Five is too many, because the quietest student will be left out of the conversation.

* Give the students a short, co-operative written task which will be graded pass / no credit (be lenient). For example, "write down two considerations in favor of the view that human nature is good and two considerations against the view". Have them designate one student as "secretary", who will write down their collaborative answer on a sheet of paper containing all their names. This should start them talking with each other, aimed at producing something concrete and sensible.

* Allow them five minutes (maybe seven), during which you wander the room, encouraging any quiet groups to start talking and writing.

* Reconvene the class, and then ask a group of usually quiet students what their group came up with.

* Explore the merits of their answers in an encouraging way, then repeat with other groups.

This exercise will get many more students talking in class than the usual six. (Almost no matter the size of the class, there will be six students who do almost all the talking, right?) The increased talkativeness often continues after the exercise is over. Not only do normally quiet students open their mouths more, but they gain some of the more specific benefits of the overconfident student: They practice expressing their views aloud in class, they receive customized feedback from the professor, by having their views put on the spot they feel an emotional engagement that enhances interest and memory, and they get the feeling of support from the professor.

Why it works: It's of course easier to talk with a few peers than in front of the whole class, especially when necessary to complete a (low-stress) assignment. Speaking to a few peers in the classroom and finding them to be nice about it (as they almost always are) facilitates later speaking in front of the whole class. Furthermore, when the professor calls on a small group, instead of on one student in particular, that student isn't being confronted as directly. They have cover: "This is just what our group came up with." And if the student isn't comfortable extemporizing, they can just read the words written on the page. All of this makes it easier for the quieter students to gain practice expressing their philosophical views in front of others. If it goes well, they become more comfortable doing it again.

(2.) Story time. Back in 2016, I dedicated a post to the value of telling stories in philosophy class. My former T.A. Chris McVey was a master of philosophical storytelling. He would start discussion sections of my huge lower-division class, Evil, with personal stories from his childhood, or from his time working on a nuclear submarine, or from other parts of his life, that related to the themes of the class. He kept it personal and real, and students loved it.

A very different type of student tends to engage after storytime than the usual overconfident philosophy guy -- for example, someone who has a similar story in their own lives. The whole discussion has a different, more personal tone, and it can then be steered back into the main ideas of the course. Peter [name randomly chosen from lists of former students], who might normally say nothing in class, finds he has something to say about parental divorce. He has been brought into the discussion, has expressed an opinion and has been shown how his opinion is relevant to the course.

(3.) Diversify topics and cultures. Relatedly, whenever you can diversify topics (add a dimension on religion, or family, or the military) or culture (beyond the usual European / North American focus), you shift around what I think of as the "academic capital" of the students in the class. A student who hasn't had confidence to speak might suddenly feel expert and confident. Maybe they are the one student who has had active duty in the military, or maybe their pre-college teachers regularly quoted from Confucius and Mencius. Respecting their expertise can help them recognize that they bring something important, and they will be readier to commit and engage on the issues at hand.

(4.) Finger-counting questions. Consider adding this custom: The first time a student raises their hand to speak in class, they raise one finger. The second time, two fingers. The third time, three fingers, and so on. When multiple students want to contribute, prioritize those with fewer fingers. When a student raises four fingers, hesitate, looking to see whether some lower-fingered students might also have something to say. This practice doesn't silence the most talkative students, but it will make them more aware of the extent to which they might be crowding other students out, and it constantly communicates to the quieter students that you're especially interested in hearing from them, instead of always from the same six.

This advice aims partly at enhancing oral participation in class, which is a big step toward evening the playing field. But to really level the playing field requires more. It's not just that the overconfident student is more orally active. The overconfident student has opinions, stakes claims, feels invested in the truth or falsity of particular positions, and takes the risk of exposing their ideas to criticism. This creates more emotional and intellectual engagement than do neutral, clarificatory oral contributions. My first three suggestions not only broaden oral participation in general but non-coercively nudge students toward staking claims, with all the good that follows from that.

Evening the Playing Field: Advice for Students

Your professor might not do any of the above. You might see the same six students dominating discussion, and you might not feel able to contribute at their level. You might be uninterested in competing with them for air time, and you might dislike the spotlight on yourself. Do yourself a favor and overcome these reservations!

First, the confident students might not actually know the material better than you do. Most professors in U.S. classrooms interpret students' questions as charitably as possible, finding what's best in them, rather than shooting students down in a discouraging way. If a confident student says something you think doesn't make sense or that you're inclined to disagree with, and if the professor seems to like the comment, it might not be that you're misunderstanding but rather that the professor is doing what they can to turn a weak contribution into something good.

Second, try viewing classroom philosophy discussions like a game. Almost every substantive philosophical claim (apart from simple historical facts and straightforward textual interpretations) is disputable, even the principle of non-contradiction. Take a stand for fun. See if you can defend it. A good professor will both help you see ways in which it might be defensible and ways in which others have argued against it. Think of it as being assigned to defend a view in a debate -- a view with which you might or might not agree.

Third, you owe it to yourself to win the same educational benefits that ordinarily accrue disproportionately to the overconfident students. You might not feel comfortable taking a stand in class. But so much of life is about reaching beyond your comfort zone, doing new things. Right? If you care about your education, care about getting the most out of it by putting your ideas forward in class.

Fourth, try it with other students. Even if your professor doesn't use small discussion groups, you can do this yourself. Most people find that it's much easier to take a stand about the material in front of a peer than in front of the whole class. Outside of class, tell a classmate about your objection to Kant. Bat it around with them a bit. This will already give you a certain amount of practice and feedback, laying the groundwork for later expressing that view, or some other one, in a class context. You could even say to the professor, "My friend and I were wondering whether Kant..." A good professor will love to hear a question like this. Thus students have been arguing about Kant outside of class! Yay!

------------------------------------

Related:

"How to Diversity Philosophy: Two Thoughts and a Plea for More Suggestions" (Aug 24, 2016)

"Storytelling in Philosophy Class" (Oct 21, 2016)

"The Parable of the Overconfident Student -- and Why Academic Philosophy Still Favors the Socially Privileged" (Mar 14, 2022)

[image source]

Monday, March 14, 2022

The Parable of the Overconfident Student -- and Why Academic Philosophy Still Favors the Socially Privileged

If you've taken or taught some philosophy classes in the United States, you know the type: the overconfident philosophy student. Our system rewards these students. The epistemic failing of overconfidence ultimately serves them well. This pattern, I conjecture, helps explain the continuing inequities in philosophy.

It's the second day of class. You're starting some complex topic, say, the conceivability argument for metaphysical dualism. Student X jumps immediately into the discussion: The conceivability argument is obviously absurd! You offer some standard first-pass responses to his objection, but he stands his ground, fishing around for defenses. Before today, he knew nothing about the issues. It's his first philosophy class, but suddenly, he knows better than every philosopher who thinks otherwise, whose counterarguments he has never heard.

[image from the classic Onion article "Guy in Philosophy Class Needs to Shut the Fuck Up"]

It's also Student Y's first philosophy class. Student X and Student Y are similar in intelligence and background knowledge, differing only in that Student Y isn't irrationally overconfident. Maybe Student Y asks a question of clarification. Or maybe she asks how the author would deal with such-and-such an objection. More likely, she keeps quiet, not wanting to embarrass herself or use class time when other, more knowledgeable students presumably have more insightful things to say.

(I've called Student X a "he", since in my experience most students of this type are men. Student Y types are more common, in any gender.)

What will happen to Student X and Student Y over time, in the typical U.S. classroom? Both might do well. Student Y is by no means doomed to fail. But Student X's overconfidence wins him several important advantages.

First, he gets practice asserting his philosophical views in an argumentative context. Oral presentation of one's opinions is a crucial skill in philosophy and closely related to written presentation of one's opinions.

Second, he receives customized expert feedback on his philosophical views. Typically, the professor will restate Student X's views, strengthening them and fitting them better into the existing discourse. The professor will articulate responses to those views, so that the student learns those too. If the student responds to the responses, this second layer of responses will also be charitably reworked and rebutted. Thus, Student X will gain specific knowledge on exactly the issues that engage and interest him most.

Third, he engages his emotions and enhances his memory. Taking a public stand stirs up your emotions. Asking a question makes your heart race. Being defeated in an argument with your professor burns that argument into your memory. So also does winning an argument or playing to a draw. After his public stand and argument, it matters to Student X, more than it otherwise would have, that the conceivability argument is absurd. This will intensify his engagement with the rest of the course, where he'll latch on to arguments that support his view and develop counterarguments against the opposition. His written work will also reflect this passion.

Fourth, he wins the support and encouragement of his professor. Unless he is unusually obnoxious or his questions are unusually poor, the typical U.S. professor will appreciate Student X's enthusiasm and his willingness to advance class discussion. His insights will be praised and his mistakes interpreted charitably, enhancing his self-confidence and his sense that he is good at philosophy.

The combined effect of these advantages, multiplied over the course of an undergraduate education, ensures that most students like Student X thrive in U.S. philosophy programs. What was initially the epistemic vice of overconfidence becomes the epistemic virtue of being a knowledgeable, well-trained philosophy student.

Contrast with the sciences. If a first-year chemistry student has strong, ignorant opinions about the electronegativity of fluorine, it won't go so well -- and who would have such opinions, anyway? Maybe we can see a similar pattern at the most theoretically speculative end of the sciences, though. The social sciences and other humanities might also reward the overconfident student in some ways while punishing him in others. Among academic disciplines as practiced in the U.S., I conjecture that philosophy is the most receptive to the Overconfident Student Strategy.

Success in the Overconfident Student Strategy requires two things: a good sense of what is and is not open for dispute, and comfort in classroom dialogue. Both tend to favor students from privileged backgrounds.

It's ridiculous to dispute simple matters of fact with one's professor. The Overconfident Student Strategy only works if the student can sniff out a defensible position, one rich for back-and-forth dispute. Student X in our example immediately discerned that the conceivability argument for dualism was fertile ground on which to take a stand. Student X can follow his initial gut impression, knowing that even if he can't really win the argument on day two, arguments for his favored view are out there somewhere. Students with academically strong backgrounds -- who have a sense of how academia works, who have some exposure to philosophy earlier in their education, who are familiar with the back-and-forth of academic argumentation -- are at an advantage in sensing defensible positions and glimpsing what broad shapes an argument might take.

And of course speaking up in the classroom, especially being willing to disagree with one's professor, normally requires a certain degree of comfort and self-assurance in academic contexts. It helps if classrooms feel like home, if you feel like you belong, if you see yourself as maybe a professor yourself some day.

For these reasons -- as well as the more general tendency toward overconfidence that comes with social privilege -- we should expect the Overconfident Student Strategy to be especially available to students with privileged backgrounds: the children of academics, wealthy students who went to elite high schools, White students, men, and non-immigrants, for example. In this way, initial privilege provides advantages that amplify up through one's education.

I have confined myself to remarks about the United States, because I suspect the sociology of overconfidence plays out differently in some other countries, which might explain the difficulty that international students sometimes have adapting to the style in which philosophy is practiced here.

I myself, of course, was just the sort of overconfident student I've described -- the son of two professors, raised in a wealthy suburb with great public schools. Arguably, I'm still employing the same strategy, opining publicly on my blog on a huge range of topics beyond my expertise (e.g., Hume interpretation last week, COVID ethics last month), reaping advantages analogous to the overconfident student's four classroom advantages, only in a larger sphere.

Coming up! Some strategies for evening the playing field.

------------------------------------------

Related: "On Being Good at Seeming Smart" (Mar 25, 2010).

Sunday, March 13, 2022

Some Recent Talks and Interviews

"Would You Shut Off a Robot Who Might Be Conscious?" -- 50 minute talk at Ruhr University Bochum, on YouTube.

"Eric Schwitzgebel: Metaphysics of Mind, Issues of Introspection, Ethics of Ethicists, Aliens and AI" -- a wide ranging two hour interview with Tevin Naidu at Mind-Body Solution, on

* YouTube
* Spotify
* Apple podcasts
* Google podcasts.

Digital Afterlives -- an hour-long YouTube conversation at the UCR Palm Desert Campus with Susan Schneider and John M. Fischer, pitched for a broad audience, on personal identity and "uploading" your consciousness into computers.

"Zombies" -- a 32 minute podcast on zombies (traditional, Hollywood, and philosophical) featuring Christina van Dyke, David Chalmers, John Edgar Browning, and some of my reflections on whether AI systems might be "zombies" in the sense of outwardly seeming to have consciousness but inwardly lacking it.

Tuesday, March 08, 2022

How to Defeat Higher-Order Regress Arguments for Skepticism

In arguing for radical skepticism about arithmetic knowledge, David Hume uses what I'll call a higher-order regress argument. I was reminded of this style of argument when I read François Kammerer's similarly structured (and similarly radical) argument for skepticism about the existence of conscious experiences, forthcoming in Philosophical Studies. In my view, Hume's and Kammerer's arguments fail for similar reasons.

Hume begins by arguing that you should have at least a tiny bit of doubt even about simple addition:

In accompts of any length or importance, Merchants seldom trust to the infallible certainty of numbers for their security.... Now as none will maintain, that our assurance in a long numeration exceeds probability, I may safely affirm, that there scarce is any proposition concerning numbers, of which we can have a fuller security. For 'tis easily possible, by gradually diminishing the numbers, to reduce the longest series of addition to the most simple question, which can be form'd, to an addition of two single numbers.... Besides, if any single addition were certain, every one wou'd be so, and consequently the whole or total sum (Treatise of Human Nature 1740/1978, I.IV.i, p. 181)

In other words, since you can be mistaken in adding long lists of numbers, even when each step is the simple addition of two single-digit numbers, it follows that you can be mistaken in the simple addition of two single-digit numbers. Therefore, you should conclude that you know only with "probability", not with absolute certainty, that, say, 7 + 5 = 12.

I'm not a fan of absolute 100% flat utter certainty about anything, so I'm happy to concede this to Hume. (However, I can imagine someone -- Descartes, maybe -- objecting that contemplating 7 + 5 = 12 patiently outside of the context of a long row of numbers might give you a clear and distinct idea of its truth that we don't normally consistently maintain when adding long rows of numbers.)

So far, what Hume has said is consistent with a justifiable 99.99999999999% degree of confidence in the truth of 7 + 5 = 12, which isn't yet radical skepticism. Radical skepticism comes only via a regress argument.

Here's the first step of the regress:

In every judgment, which we can form concerning probability, as well as concerning knowledge, we ought always to correct that first judgment, deriv'd from the nature of the object, by another judgment, deriv'd from the nature of the understanding. 'Tis certain a man of solid sense and long experience... must be conscious of many errors in the past, and must still dread the like for the future. Here then arises a new species of probability to correct and regulate the first, and fix its just standard and proportion. As demonstration is subject to the controul of probability, so is probability liable to a new correction by a reflex act of the mind, wherein the nature of our understanding, and our reasoning from the first probability become our objects.

Having thus found in every probability, beside the original uncertainty inherent in the subject, a new uncertainty deriv'd from the weakness of that faculty, which judges, and having adjusted these two together, we are oblig'd by our reason to add a new doubt deriv'd from the possibility of error in the estimation we make of the truth and fidelity of our faculties (p. 181-182).

In other words, whatever high probability we assign to 7 + 5 = 12, we should feel some doubt about that probability assessment. That doubt, coupled with our original doubt, produces more doubt, thus justifying a somewhat lower -- but still possibly extremely high! -- probability assessment. Maybe 99.9999999999% instead of 99.99999999999%.

But now we're down the path toward an infinite regress:

But this decision, tho' it shou'd be favourable to our preceeding judgment, being founded only on probability, must weaken still further our first evidence, and must itself be weaken'd by a fourth doubt of the same kind, and so on in infinitum; till at last there remain nothing of the original probability, however great we may suppose it to have been, and however small the diminution by every new uncertainty. No finite object can subsist under a decrease repeated in infinitum; and even the vastest quantity, which can enter into human imagination, must in this manner be reduc'd to nothing (p. 182).

We should doubt, Hume says, our doubt about our doubts, adding still more doubt. And we should then doubt our doubt about our doubt about our doubt, and so on infinitely, until nothing remains but doubt. With each higher-order doubt, we should decrease our confidence that 7 + 5 = 12, until at the end we recognize that the only rational thing to do is shrug our shoulders and admit we are utterly uncertain about the sum of 7 and 5.

If this seems absurd... well, probably it is. I'm sympathetic with skeptical arguments generally, but this seems to be one of the weaker ones, and there's a reason it's not the most famous part of the Treatise.

There are at least three moves available to the anti-skeptic.

First, one can dig in against the regress. Maybe the best place to do so is the third step. One can say that it's reasonable to have a tiny initial doubt, and then it's reasonable to add a bit more doubt on grounds that it's doubtful how much doubt one should have, but maybe third-order doubt is unwarranted unless there's some positive reason for it. Unless something about you or something about the situation seems to demand third-order doubt, maybe it's reasonable to just stick with your assessment.

That kind of move is common in externalist approaches to justification, according to which people can sometimes reasonably believe things if the situation is right and their faculties are working well, even if they can't provide full, explicit justifications for those beliefs.

But this move isn't really in the spirit of Hume, and it's liable to abuse by anti-skeptics, so let's set it aside.

Second, one can follow the infinite regress to a convergent limit. The mathematical structure of this move should be familiar from pre-calculus. It's more readily seen with simpler numbers. Suppose that I'm highly confident of something. My first impulse is to assign 100% credence. But then I add a 5% doubt to it, reducing my credence to 95%. But then I have doubts about my doubt, and this second-order doubt leads me to reduce my credence another 2.5%, to 92.5%. I then have a third-order doubt, reducing my credence by 1.25% to 91.25%. And so on. As long as each higher-order doubt reduces the credence by half as much as the previous lower-order doubt, we will have a convergent sum of doubt. In this case, the limit as we approach infinitely many layers of doubt is 10%, so my rational credence need never fall below 90%.
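
Just to make the arithmetic explicit, here is the same bookkeeping written as a geometric series, using the same toy numbers (the symbol r is my own shorthand for the factor by which each successive doubt shrinks):

\[
\text{total doubt} \;=\; 0.05 \sum_{k=0}^{\infty} r^{k} \;=\; \frac{0.05}{1 - r} \;=\; 0.10 \quad \text{for } r = \tfrac{1}{2},
\]

so the credence converges to 1 - 0.10 = 0.90 rather than dwindling to nothing. The point generalizes: for any shrink factor r < 1, the total doubt is finite, so the credence never collapses to zero.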

This response concedes a lot to Hume -- that it's reasonable to regress infinitely upward with doubt, and that each step upward should reduce our confidence by some finite amount -- and yet it avoids the radically skeptical conclusion.

Interestingly, Hume himself arguably could not have availed himself of this move, given his skepticism about the infinitesimal (in I.II.i-ii). We can have no adequate conception of the infinitesimal, Hume says, and space and time cannot be infinitely divided. Therefore, when Hume concludes the quoted passage above by saying "No finite object can subsist under a decrease repeated in infinitum; and even the vastest quantity, which can enter into human imagination, must in this manner be reduc'd to nothing", he is arguably relying on his earlier skepticism about infinite division. For that reason, Hume might be unable to accept the convergent limit solution to his puzzle -- though we ourselves, rightly more tolerant of the infinitesimal, shouldn't be so reluctant.

Third, higher-order doubts can take the form of reversing lower-order doubts. Your third-order thought might be that your second-order doubt was too uncertain, and thus on reflection your confidence might rise again. If my first inclination is 100% credence, and my second thought knocks it down to 95%, my next thought might be that 95% is too low rather than too high. Maybe I kick it back up to 97.5%. My fourth thought might then involve tweaking it up or down from there. Thus, even without accepting convergence toward a limit, we might reasonably suspect that ever-higher orders of reflection will always yield a degree of confidence that bounces around within a manageable range, say 90% to 99%. And even if this is only a surmise rather than something I know for certain, it's a surmise that could be either too high or too low, yielding no reason to conclude that infinite reflection would tend toward low degrees of confidence.

* - * - *

Well, that was longer than intended on Hume! But I think I can home in quickly on the core idea from Kammerer that precipitated this line of reflection.

Kammerer is a "strong illusionist". He thinks that conscious experiences don't exist. If this sounds like such a radical claim as to be almost unbelievable, then I think you understand why it's worth calling a radically skeptical position.[1]

David Chalmers offers a "Moorean" reply to this claim (similarly, Bryan Frances): It's just obvious that conscious experience exists. It's more obvious that conscious experience exists than any philosophical or scientific argument to the contrary could ever be, so we can reject strong illusionism out of hand, without bothering ourselves about the details of the illusionist arguments. We know in advance that whatever the details are, the argument shouldn't win us over.

Kammerer's reply is to ask whether it's obvious that it's obvious.[2] Sometimes, of course, we think something is obvious, but we're wrong. Some things we think are obvious are not only non-obvious but actually false. Furthermore, the illusionist suspects we can construct a good explanation of why false claims about consciousness might seem obvious despite their falsity. So, according to Kammerer, we shouldn't accept the Moorean reply unless we think it's obvious that it's obvious.

Kammerer acknowledges that the anti-illusionist might reasonably hold that it is obvious that it's obvious that conscious experience exists. But now the argument repeats: The illusionist might anticipate an explanation of why, even if conscious experience doesn't exist, it seems obvious that it's obvious that conscious experience exists. So it looks like the anti-illusionist needs to go third order, holding that it's obvious that it's obvious that it's obvious. The issue repeats again at the fourth level, and so on, up into a regress. At some point high enough up, it will either no longer be obvious that it's obvious that it's [repeat X times] obvious; or if it's never non-obvious at any finite order of inquiry, there will still always be a higher level at which the question can be raised, so that a demand for obviousness all the way up will never be satisfied.

Despite some important differences from Hume's argument -- especially the emphasis on obviousness rather than probability -- versions of the same three types of reply are available.

Dig in against the regress. The anti-illusionist can hold that it's enough that the claim is obvious; or that it's obvious that it's obvious; or that it's obvious that it's obvious that it's obvious -- for some finite order of obviousness. If the claim that conscious experience exists has enough orders of obviousness, and is furthermore also true, and perhaps has some other virtues, perhaps one can be fully justified in believing it even without infinite orders of obviousness all the way up.

Follow the regress to a convergent limit. Obviousness appears to come in degrees. Some things are obvious. Others are extremely obvious. Still others are utterly, jaw-droppingly, head-smackingly, fall-to-your-knees obvious. Maybe, before we engage in higher-order reflection, we reasonably think that the existence of conscious experience is in the last, jaw-dropping category, which we can call obviousness level 1. And maybe, also, it's reasonable, following Kammerer and Hume, to insist on some higher-order reflection: How obvious is it that it's obvious? Well, maybe it's extremely obvious but not utterly, level 1 obvious, and maybe that's enough to reduce our total epistemic assessment to overall obviousness level .95. Reflecting again, we might add still a bit more doubt, reducing the obviousness level to .925, and so on, converging toward obviousness level .9. And obviousness level .9 might be good enough for the Moorean argument. Obviously (?), these are fake numbers, but the idea should be clear enough. The Moorean argument doesn't require that the existence of conscious experience be utterly, jaw-droppingly, head-smackingly, fall-to-your-knees, level 1 obvious. Maybe the existence of consciousness is that obvious. But all the Moorean argument requires is that the existence of consciousness be obvious enough that we reasonably judge in advance that no scientific or philosophical argument against it should justifiably win us over.
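
For what it's worth, the arithmetic here is the same geometric series as in the Hume case above, with these admittedly fake numbers: the successive reductions are .05, .025, .0125, and so on, each half the previous one, so the total reduction converges to .1 and

\[
\text{obviousness}_\infty \;=\; 1 - 0.05 \times 2 \;=\; 0.90.
\]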

Reverse lower-order doubts with some of the higher-order doubts. Overall obviousness might sometimes increase as one proceeds upward to higher orders of reflection. For example, maybe after thinking about whether it's obvious that it's obvious that [eight times] it's obvious, our summary assessment of the total obviousness of the proposition should be higher than our summary assessment after thinking about whether it's obvious that it's obvious that [seven times] it's obvious. There's no guarantee that with each higher level of consideration the total amount of doubt should increase. We might find as we go up that the total amount of obviousness fluctuates around some very high degree of obviousness. We might then reasonably surmise that further higher levels will stay within that range, which might be high enough for the Moorean argument to succeed.

---------------------------------

[1] Actually, I think there's some ambiguity about what strong illusionism amounts to, since what Kammerer denies the existence of is "phenomenal consciousness", and it's unclear whether this really is the radical thesis that it is sometimes held to be or whether it's instead really just the rejection of a philosopher's dubious notion. For present purposes, I'm interpreting Kammerer as holding the radical view. See my discussions here and here.

[2] Kammerer uses "uniquely obvious" here, and "super-Moorean", asking whether it's uniquely obvious that it's uniquely obvious. But I don't think uniqueness is essential to the argument. For example, that I exist might also be obvious with the required strength.

Tuesday, March 01, 2022

Do Androids Dream of Sanctuary Moon?

guest post by Amy Kind

In the novel that inspired the movie Blade Runner, Philip K. Dick famously asked whether androids dream of electric sheep. Readers of the Murderbot series by Martha Wells might be tempted to ask a parallel question: Do androids dream of The Rise and Fall of Sanctuary Moon?

Let me back up a moment for those who haven’t read any of the works making up The Murderbot Diaries.[1] The series’ titular character is a SecUnit (short for Security Unit). SecUnits are bot-human constructs, and though they are humanoid in form, they generally don’t act especially human-like and they have all sorts of non-human attributes including a built-in weapons system. Like other SecUnits, Murderbot has spent most of its existence providing security to humans who are undertaking various scientific, exploratory, or commercial missions. But unlike other SecUnits, Murderbot has broken free of the tight restrictions and safeguards that are meant to keep it in check. About four years prior to the start of the series, Murderbot had hacked its governor module, the device that monitors a SecUnit and controls its behavior, sometimes by causing it pain, sometimes by immobilizing it, and sometimes by ending its existence.

So how has Murderbot taken advantage of its newfound liberty? How has it kept itself occupied in its free time? The answer might initially seem surprising: Murderbot has spent an enormous amount of its downtime watching and rewatching entertainment media. In particular, it’s hooked on a serial drama called The Rise and Fall of Sanctuary Moon. We’re not told very much about Sanctuary Moon, or why it would be so especially captivating to a SecUnit, though we get some throw-away details now and then over the course of the series. We know it takes place in space, that it involves murder, sex, and legal drama, and that it has at least 397 episodes. In an interview with Newsweek in 2020, Wells said that the show “is kind of based on How to Get Away with Murder, but in space, on a colony, with all different characters and hundreds more episodes, basically.”

It's not uncommon for the sophisticated AI of science fiction to adopt hobbies and pursue various activities in their leisure time. Andrew, the robot in Asimov’s Bicentennial Man, takes up wood carving, while Data, the android of Star Trek, takes up painting and spends time with his cat, Spot. HAL, the computing system built into the Discovery One spaceship in 2001: A Space Odyssey, plays chess. But it does seem fairly unusual for an AI to spend so much of its time binge-watching entertainment media. Murderbot’s obsession (one might even say addiction) is somewhat puzzling, at least to its human clients. In All Systems Red, when one of these clients reviews the SecUnit’s personal logs to see what it’s been up to, he discovers that it has downloaded 700 hours of media in the short time since their spacecraft landed on the planet they are exploring. The client hypothesizes that Murderbot must be using the media for some hidden, possibly nefarious purpose, perhaps to mask other data. As the client says, “It can’t be watching it, not in that volume; we’d notice.” (One has to love Murderbot’s response: “I snorted. He underestimated me.”)

Over the course of the series, as we learn more and more about Murderbot, the puzzle starts to dissipate. Certainly, Sanctuary Moon is entertainment for Murderbot. It’s an amusing diversion from its daily grind of security work. But it’s also much more than that. As Murderbot explicitly tells us, rewatching old episodes calms it down in times of stress. It borrows various details from Sanctuary Moon to help it in its work, as when it adopts one of the character’s names as an alias or when it decides what to do based on what characters on the show have done in parallel scenarios. And watching this serial helps Murderbot to process emotions. As it states on more than one occasion, it doesn’t like to have emotions about real life and would much prefer to have them about the show.

Though Murderbot is not comfortable engaging in self-reflection and prefers to avoid examination of its feelings and motivations, it cannot escape this altogether. We do see occasional moments of introspection. One particularly illuminating moment comes during an exchange between the SecUnit and Mensah, the human to whom it is closest. In the novella Exit Strategy, when Mensah asks why it likes Sanctuary Moon so much, it doesn’t know how to answer at first. But then, once it pulls up the relevant memory, it’s startled by what it discovers and says more than it means to: “It’s the first one I saw. When I hacked my governor module and picked up the entertainment feed. It made me feel like a person.”

When Mensah pushes Murderbot for more, for why Sanctuary Moon would make it feel that way, it replies haltingly:

“I don’t know.” That was true. But pulling the archived memory had brought it back, vividly, as if it had all just happened. (Stupid human neural tissue does that.) The words kept wanting to come out. It gave me context for the emotions I was feeling, I managed not to say. “It kept me company without…” “Without making you interact?” she suggested.

Not only does Murderbot want to avoid having emotions about events in real life, it also wants to avoid emotional connections with humans. It is scared to form such connections. But a life without any connection is a lonely one. For Murderbot, watching media is not just about combatting boredom. It’s also about combatting loneliness.

As it turns out, then, Murderbot is addicted to Sanctuary Moon for many of the same reasons that any of us humans are addicted to the shows we watch – whether it’s Ted Lasso or Agents of Shield or Buffy the Vampire Slayer. These shows are diverting, yes, but they also bring us comfort, they give us outlets for our emotions, and they help us to fight against isolation. (Think of all the pandemic-induced binge-watching of the last two years.) So even though it might seem surprising at first that a sophisticated AI would want to devote so much of its time to entertainment media, it really is no more surprising than the fact that so many of us want to devote so much of our time to the same thing. Though it seems tempting to ask why an AI would do this, the only real answer is simply: Why wouldn’t it?

The reflections in this post thus bring us to a further moral about science fiction and what we can learn from it about the nature of artificial intelligence. In our abstract thinking about AI, we tend to get caught up in some Very Big Questions: Could they really be intelligent? Could they be conscious? Could they have emotions? Could we love them, and could they love us? None of these questions is easy to answer, and sometimes it’s hard to see how we could make progress on them. So perhaps what we need to do is to step back and think about some smaller questions. It’s here, I think, that science fiction can prove especially useful. When we try to imagine an AI existence, as works of science fiction help us to do, we need to imagine that life in a multi-faceted way. By thinking about what a bot’s daily life might be like, not just how a bot would interact with humans but how it would make sense of those interactions, or how it would learn to get better at them, or even just by thinking about what a bot would do in its free time, we start to flesh out some of our background assumptions about the capabilities of AI. In making progress on these smaller questions, perhaps we’ll also find ourselves better able to make progress on the bigger questions as well. To understand better the possibilities of AI sentience, we have to better understand the contours of what sentience brings along with it.

Ultimately, I don’t know whether androids would dream of Sanctuary Moon, or even of anything at all.[2] But thinking about why they might be obsessed with entertainment media like this can help us to get a better big-picture understanding of the sentience of an AI system like Murderbot… and perhaps even a better understanding of our own sentience as well.

-----------------------------------

[1] And if you haven’t read them yet, what are you waiting for? I highly recommend them – and rest assured, this post is free of any major spoilers.

[2] Though see Asimov’s story “Robot Dreams” for further reflection on this.

[image source]

-----------------------------------

Postscript by Eric Schwitzgebel:

This concludes Amy Kind's guest blogging stint at The Splintered Mind. Thanks, Amy, for this fascinating series of guest posts!

You can find all six of Amy's posts under the label Amy Kind.

Tuesday, February 22, 2022

Social Change and the Science Fiction Imagination

guest post by Amy Kind

In “Where No Man Has Gone Before,” the first episode of Star Trek: The Original Series, Captain Kirk pulls out his communicator to hail the Enterprise.[1] At the time this episode aired in September of 1966, this kind of communication device probably struck most viewers as pure fantasy. But, according to a story told in the 2005 documentary, How William Shatner Changed the World, Star Trek’s depiction of the communicator helped turn the fantasy into a reality. Inspired by Star Trek, inventor Martin Cooper worked with a team of engineers to create a genuinely portable phone. The DynaTAC, which made its public debut at a press conference in 1973, was 9 inches tall, weighed 2.5 pounds, and had a battery life that allowed for 35 minutes of talk time.

Cooper has subsequently recanted the story about having been influenced in this way by Star Trek. In a 2015 interview, he said that the real inspiration for his communication device came many years earlier from the two-way radio wristwatch worn by Dick Tracy in the eponymous comic strip. Whichever of these works was the inspiration, however, this technological development provides a testament to the power of science fiction to change the world.

And this is not the only such testament. Numerous articles suggest various other instances where technology imagined by science fiction authors led to the actual development of such technology. To mention just one illustrative example, an article on Space.com details eleven ideas “that went from science fiction to reality.” In some of these cases, the causal link is undoubtedly exaggerated, but in others, it seems considerably more plausible. Perhaps one of the best examples of such a causal link comes from Igor Sikorsky’s work in aviation – in particular, on helicopters. As described by his son, Sikorsky was deeply inspired by the helicopter described in Jules Verne’s The Clipper of the Clouds:

My father referred to it often. He said it was “imprinted in my memory.” And he often quoted something else from Jules Verne. “Anything that one man can imagine, another man can make real.”

Typically, the kinds of examples mentioned to demonstrate science fiction’s influence relate to what are seen as the traditional themes of science fiction – themes like technological invention, space exploration, and robotics. Interestingly, however, the ability of science fiction to inspire the future is not limited to these kinds of themes. Consider, for example, a recent discussion about the power of science fiction on an episode of the Levar Burton Reads podcast. In a conversation between the podcast host, actor Levar Burton (of Star Trek: The Next Generation fame), and writer and activist Walidah Imarisha, what gets highlighted is the power of science fiction to effect change not in the technological realm but in the social realm. Science fiction, says Imarisha, helps us imagine “a world without borders, a world without prisons, a world without oppression.” And as she underscores, this is really important, because “we can’t build what we can’t imagine.”

This line of thought is part and parcel of what I think of as an optimism about imagination. There are many dimensions to optimism about imagination, but for my purposes here, what’s important is the optimist’s view that imagination can play a key role in bringing about social change. So far I’ve been focused on imagination in the context of science fiction, but that’s not the only context in which such imagining occurs. We see this kind of imagining in political contexts, as when US Representative Alexandria Ocasio-Cortez invokes imagination in discussing the Green New Deal. The first big step in bringing it about, she says, is “just closing our eyes and imagining.” Such imagining is also a key tool for organizers and activists more generally – and also for just about anyone who is aiming to make our world a better and more just place.

In making a case for optimism, one might point to various examples of positive change that have occurred throughout our history, and one might point to how much of this change has been brought about by the prodigious powers of imagination manifested by various key figures who have driven such change. But the case for optimism is met with persistent criticism. Those who are more pessimistic question whether imagination can really have the power that the optimists attribute to it. As noted in an essay by Claudia Rankine and Beth Loffreda, “our imaginations are creatures as limited as we ourselves are. They are not some special, uninfiltrated realm that transcends the messy realities of our lives and minds.” Our imaginations are limited by our experiences and our embodiment – by our race, by our sex and gender, and by our ability status, to name just a few of the relevant sources of limitation.

Confronted with this push-pull between optimism and pessimism, what’s the solution for someone looking to harness the power of imagination to bring about social change? Recently, Shen-yi Liao has argued we would do best not to rely on agent-guided imaginings (or not solely so) but rather on prop-guided imaginings. Drawing an analogy to children’s games of pretense, he notes that the relationship between children’s imagining and props is a two-way street. When children are outside pretending to be Jedi Knights, they will likely look around for some tree branches to serve as light sabers and ignore other objects in their vicinity like rocks and leaves. On the flip side, when children are trying to decide what game of pretend to play, the fact that there are tree branches around might influence them to pretend to be Jedi Knights rather than astronauts. Though our imaginings influence how we use props, our props also influence how we use imaginings.

This leads Liao to an important moral: If we want to bring about social change, we might look to props to “guide and constrain our socially situated and ecologically embedded imagination.” This means that one effective way to bring about social change would be to think about what kinds of props are available to us in the world (e.g., monuments, memorials, and all sorts of other artifacts) and to work to make different props available. So, concludes Liao, “we do have to imagine differently to change the world. But to imagine differently, we might also have to change the world.”

Though I don’t think Liao himself is best described as a pessimist, the pessimist might nonetheless take these reflections as grist for their mill.[2] In particular, Liao’s conclusion might seem to suggest that we face an impossible task, a loop into which there is no entry point. We saw above the suggestion by Imarisha that we can’t build what we can’t imagine, but now it seems that we can’t imagine what we haven’t already built. Perhaps imagination can’t really play an important role in social change after all.

I count myself in the optimist camp, and so this is a conclusion that I’d like to resist. Moreover, as my opening reflections about science fiction suggest, the task can’t be an impossible one, because we’ve seen it happen. The various props that we have in the world inspire the imaginations of science fiction writers, and then the science fiction they produce inspires the imaginations of the engineers and inventors, who then create new and different props, which in turn can inspire the imaginations of a new generation of science fiction writers. We know it can be done with technology, and so it seems eminently plausible that something similar can be done with respect to the social domain – and indeed, when we think about the radical social imaginings in works by science fiction authors such as Octavia Butler and Ursula Le Guin (and so many others), it undoubtedly has already been done. Because of this past progress, the science fiction of today begins from new starting points and can push things even further. In short, just as the science fiction imagination can be an important driver in bringing about technological change, it’s also an important driver in bringing about social change.

---------------------------------------------

[1] Since people sometimes get picky about this sort of thing, I’ll clarify that “Where No Man…” is the first episode of season 1, 1x01. The original pilot, “The Cage,” did not air until 1988 and is treated as 0x01, that is, the first episode in season 0.

[2] Liao thinks that this difficult dialectic means that any progress we make is likely to be incremental. But, of course, incremental progress is still progress, and it’s certainly better than no progress at all!

[image source]