Thursday, May 26, 2022

After a Taste-Bud Hiatus, Experiencing Candy Like a Six-Year-Old

I used to blog quite a bit about weird aspects of sensory experience, back when my central research interest concerned the vagaries of introspection and the strange things people say about their streams of experience. (See a few sample posts; some articles; two books.) I thought I'd share another today -- something striking to me -- though actually not that weird, I suppose.

About a month ago, I accidentally bit down hard on an unpopped popcorn kernel, "bruising" the teeth on the left side of my mouth. (Yes, that's a thing. My dentist tells me nothing is broken or cracked; it just needs time to heal.) It was remarkably painful to chew on that side, and for weeks I chewed entirely on the right side of my mouth, barely even letting food drift to the left side. Last week, I resumed gently chewing on the left again -- just soft things, carefully, experimentally. Having dessert one night, I was suddenly struck by how much sweeter the dessert tasted on the left side than on the right side. Remarkably sweeter. Different enough that the fact really jumped out at me, though I wasn't at all expecting or looking for it.

I was eating an "orange slice" candy. You know, one of these guys:

On the right side of my mouth, the candy was blandly sweet with a simple citrus flavor. On the left side, I experienced the candy as vividly sweet, zinging with orange. The contrast persisted as I moved the mass of candy around in my mouth. When I shifted the bulk to the right, it seemed to instantly lose flavor, like a piece of gum chewed too long. When I shifted it back to the left, the flavor brightened again.

I experimented with other candies over the next few days: lemon and lime slices, chocolate, peppermint sticks. I consistently found the left side sweeter than the right -- and not only sweeter, but also more vividly flavored in other ways. However, I found no similarly noticeable difference for savory flavors, tea, pure salt, straight lemon juice, or many of the other things I have eaten since. The effect was mostly or entirely confined to sweetness, and to the flavors associated with sweet things.

I remember loving fruit slice candies when I was six. I would savor them for fifteen minutes, driving my parents nuts as they waited for me at the end of meals. (Now I tend to wolf down desserts: See my defense of dessert-wolfing.) The flavor of the orange slice resonated with my memories of youth. It was like my taste buds -- or the related sensory regions in my brain -- were six years old again. The orange slice seemed to taste to me now, on the left side of my mouth, in that amazing way it had tasted when I was a child; when I shifted it to the right side, it fell back into the blandness I have since become accustomed to.

I'm not sure why the effect was limited to sweetness. In general, taste sensitivity declines with age, but the decline seems to be as strong for salty, savory, and bitter tastes as for sweet ones. My taste experience has probably dulled in multiple respects. Why sweetness only should rejuvenate, I have no idea -- even more confusingly, not simple sweetness only but the more complex flavors tangled up with sweetness, such as chocolate and sweet orange.

A week later, I find that the effect is still present, though diminishing. I want the vivid sweetness back! The experience acutely reminds me of how much of what we vividly experience recedes into a fog with aging. A comparison point: I remember getting glasses as a teenager after years of slightly blurry vision and loving how sharp the world became. Now even the best prescription I can find will never make the world that sharp. I also find that when I read fiction I don't imagine the scenes quite as vividly as I used to.

Middle age has its compensating advantages. I'm mellower, more settled. Even sensorily, things are open to me that weren't before: Presumably because of my diminished taste sensitivity, I can enjoy bitter coffee and sharp cheese. But the hiatus from left-side chewing, followed by some fleeting new candy raptures, has given a sharp new tang to my thoughts about sensory loss with age.

[image source]

Tuesday, May 17, 2022

Our Infinite Predecessors: Flipping the Doomsday Argument on Its Head

The Doomsday Argument purports to show, probabilistically, that humanity will not endure for much longer: Likely, at least 5% of the humans who will ever live have already lived. If 60 billion have lived so far, then probably no more than 1.2 trillion humans will live, ever. (This gives us a maximum of about eight more millennia at the current birth rate of 140 million per year.) According to this argument, the odds that humanity colonizes the galaxy with many trillions of inhabitants are vanishingly small.
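The arithmetic behind these figures is easy to check. Here's a minimal sketch, using the rough numbers quoted above (60 billion past humans, the 5% threshold, 140 million births per year):

```python
# Doomsday-style arithmetic: if at least 5% of all humans who will
# ever live have already lived, total births are capped at past / 0.05.
past_humans = 60e9    # rough count of humans born so far
birth_rate = 140e6    # rough current births per year

max_total = past_humans / 0.05        # cap on humans ever: 1.2 trillion
remaining = max_total - past_humans   # future births still permitted
years_left = remaining / birth_rate   # roughly eight more millennia

print(f"max total: {max_total:.1e}, years left: {years_left:.0f}")
# → max total: 1.2e+12, years left: 8143
```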

Why think we are doomed? The core idea, as developed by Brandon Carter (see p. 143), John Leslie, Richard Gott, and Nick Bostrom, is this. It would be statistically surprising if we -- you and I and our currently living friends and relatives -- were very nearly the first human beings ever to live. Therefore, it's unlikely that we are in fact very nearly the first human beings ever to live. But if humanity continues on for many thousands of years, with many trillions of future humans, then we would in fact be very nearly the first human beings ever to live. Thus, we can infer, with high probability, that humanity is doomed before too much longer.

Consider two hypotheses: On one hypothesis, call it Endurance, humanity survives for many millions more years, and many, many trillions of people live and die. On the other, call it Doom, humanity survives for only a few more centuries or millennia. On Endurance, we find ourselves in a surprising and unusual position in the cosmos -- very near the beginning of a very long run! This, arguably, would be as strange and un-Copernican as finding ourselves in some highly unusual spatial position, such as very near the center of the cosmos. The longer the run, the more surprisingly unusual our position. In contrast, Doom suggests that we are in a rather ordinary temporal position, roughly the middle of the pack. Thus, the reasoning goes, unless there's some independent reason to think Endurance to be much more plausible than Doom, we ought to conclude that Doom is likely.

Let me clarify by showing how Doomsday-style reasoning would work in a few more intuitive cases. But first, here's an inverted mushroom cloud to symbolize that I'll soon be flipping the argument over.

Imagine two lotteries. One has ten numbers, the other a hundred numbers. You don't know which one you've entered into, but you go ahead and draw a number. You discover that you have ticket #6. Upon finding this out, you ought to guess that you probably drew from the ten-number lottery rather than the hundred-number lottery, since #6 would be a surprisingly low draw in a hundred-number lottery. Not impossible, of course, just relatively unlikely. If your prior credence was split 50-50 between the two lotteries, you can use Bayesian inference to derive a posterior credence of about 91% that you are in the ten-number lottery, given that you see a number among the top ten. (Of course, if you have other evidence that makes it very likely that you were in the hundred-number lottery, then you can reasonably retain that belief even after drawing a relatively low number.)
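The 91% figure follows directly from Bayes' theorem. A quick sketch, assuming the 50-50 prior from the example (the function name is mine, just for illustration):

```python
# Bayes' theorem for the two-lottery example: you hold ticket #6 and
# want the posterior probability that you're in the ten-number lottery.
def posterior_ten(prior_ten=0.5):
    p_six_given_ten = 1 / 10        # P(draw #6 | ten-number lottery)
    p_six_given_hundred = 1 / 100   # P(draw #6 | hundred-number lottery)
    prior_hundred = 1 - prior_ten
    numerator = prior_ten * p_six_given_ten
    return numerator / (numerator + prior_hundred * p_six_given_hundred)

print(round(posterior_ten(), 3))  # → 0.909, i.e., about 91%
```

Note that any ticket in the top ten gives the same 10-to-1 likelihood ratio, so the posterior is the same whether the evidence is "ticket #6" or "a ticket among the top ten."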

Alternatively, imagine that you're one of a hundred people who have been blindfolded and imprisoned. You know that 90% of the prison cells are on the west side of town and 10% are on the east side. Your blindfold is removed, but you don't see anything that reveals which side of town you're on. Nonetheless, you ought to think it's likely you're on the west side of town.

Or imagine that you know that 10,000 people, including you, have been assigned in some order to view a newly discovered painting by Picasso, but you don't know in what order people actually viewed the painting. Exiting the museum, you should think it unlikely that you were either among the very first or very last.

The reasoning of the Doomsday argument is intended to be analogous: If you don't know where you're temporally located in the run of humans, you ought to assume it's unlikely that you're in the unusual position of being among the first 5% (or 1% or, amazingly, .001%).

Now various disputes and seeming paradoxes arise with respect to such probabilistic approaches to "self-location" (e.g., Sleeping Beauty), and a variety of objections have been raised to Doomsday Argument reasoning in particular (Leslie's book has a good discussion; see also here and here). But let's bracket those objections. Grant that the reasoning is sensible. Today I want to add a pair of observations that have the potential to flip the Doomsday Argument on its head, even if we accept the general style of reasoning.

Observation 1: The argument assumes that only about 60 billion humans have existed so far, rather than vastly many more. Of course this seems plausible, but as we will see there might be reason to reject it.

Observation 2: Standard physical theory appears to suggest that the universe will endure infinitely long, giving rise to infinitely many future people like us.

There isn't room here to get into depth on Observation 2. I am collaborating with a physicist on this issue now; draft hopefully available soon. But the main idea is this. There's no particular reason to think that the universe has a future temporal edge, i.e., that it will entirely cease. Instead, standard physical theory suggests that it will enter permanent "heat death", a state of thin, high-entropy chaos. However, there will from time to time be low-probability events in which people, or even much larger systems, spontaneously congeal from the chaos, by freak quantum or thermodynamic chance. There's no known cap on the size of such spontaneous fluctuations, which could even include whole galaxies full of evolving species, eventually containing all non-zero-probability life forms. (See the literature on Boltzmann brains.) Perhaps there will even be new cosmic inflations, for example, caused by black holes or spontaneous fluctuations. Vanilla cosmology thus appears to imply an infinite future containing infinitely many people like us, to any arbitrarily specified degree of similarity, perhaps in very large chance fluctuations or perhaps in newly nucleated "pocket universes".

Now if we accept this, then by reasoning similar to that of the Doomsday Argument, we ought to be very surprised to find ourselves among the first 60 billion people like us, or living in the first 14 billion years of an infinitely existing cosmos. We'd be among the first 60 billion out of infinity. A tiny chance indeed! On Doomsday-style reasoning, it would be much more reasonable, if we think the future is infinite, to think that the past must be infinite too. Something existed before the Big Bang, and that something contained observers like us. That would make us appropriately mediocre. Then, in accordance with the Copernican Principle, we'd be in an ordinary location in the cosmos, rather than the very special location of being within 14 billion years of the beginning of an infinite duration.

The situation can be expressed as follows. Doomsday reasoning implies the following conditional statement:

Conditional Doom: If only 60 billion humans, or alternatively human-like creatures, have existed so far, then it's unlikely that many trillions more will exist in the future.

If we take as a given that only 60 billion have existed so far, we can apply modus ponens (concluding Q from P and if P then Q) and conclude Doom.

But alternatively, if we take as a given that (at least) many trillions will exist in the future, we can apply modus tollens (concluding not-P from not-Q and if P then Q) and conclude that many more than 60 billion have already existed.

The modus ponens version is perhaps more plausible if we think in terms of our species, considered as a local group of genetically related animals on Earth. But if we think in terms of humanlike creatures generally, rather than our local species specifically, and if we accept an infinite future likely containing many humanlike creatures, then the modus tollens version becomes more plausible, and we can conclude a long past as well as a long future, full of humanlike creatures extending infinitely forward and back.

Call this the Infinite Predecessors argument. From infinite successors and Doomsday-style self-location reasoning, we can conclude infinite predecessors.



Related posts:

Almost Everything You Do Causes Almost Everything (Mar 18, 2021)

My Boltzmann Continuants (Jun 6, 2013)

How Everything You Do Might Have Huge Cosmic Significance (Nov 29, 2016)

And Part 4 of A Theory of Jerks and Other Philosophical Misadventures.

[image adapted from here]

Thursday, May 12, 2022

Draft Good Practice Guide: Sexual Harassment, Caregivers, and Student-Staff Relationships

The Demographics in Philosophy project is seeking feedback on a proposed "Good Practice" guide. Help us make this document better!

[cross-posted at Daily Nous]

This is part one of several.


Good Practice Policy: Sexual Harassment

Sexual harassment can be carried out by persons of any gender, and persons of any gender may be victims. Although harassment of students by staff is often the focus of discussions, departments need to be aware that power differentials of this sort are not essential to sexual harassment. Sexual harassment may occur between any members of the department. Departments should attend equally seriously to harassment committed both by students and by staff, as both can have dramatically negative effects on particular individuals and on departmental culture. Departments should also be aware that sexual harassment may interact with and be modified by issues of race, ethnicity, religion, class and disability status.

There is good evidence that the proportion of incidents of sexual harassment that get reported, even informally, in philosophy departments is very low, and that this has created serious problems for some staff and students. We therefore urge even those staff who do not believe that harassment is a problem in their own departments to give serious consideration to the recommendations below.

In the United States, ‘sexual harassment’ is defined as unwanted sexual advances, requests for sexual favors, and other verbal or physical conduct of a sexual nature when:

1. Submission to such conduct is made either explicitly or implicitly a term or condition of an individual’s employment;

2. Submission to or rejection of such conduct by an individual is used as a basis for employment decisions affecting such individual; or

3. Such conduct has the purpose or effect of unreasonably interfering with an individual’s work performance or creating an intimidating, hostile, or offensive working environment.

Institutional definitions of ‘sexual harassment’ differ greatly from one another. Some institutional definitions focus solely on sexual conduct, while others also include non-sexual harassment related to sex.

While departments need to attend to their institution’s definition of ‘sexual harassment’, and to make use of institutional procedures where appropriate, this is not the end of their responsibilities. Where sexist or sexual behavior is taking place that contributes to an unwelcoming environment for underrepresented groups, departments should act whether or not formal procedures are possible or appropriate.

We note that sexual harassment in philosophy can be present even when it does not meet the formal definitions above. Sexual harassment involves conduct of a sexual nature with the purpose or effect of violating the dignity of a person, or creating an intimidating, hostile, degrading, humiliating or offensive environment. This includes both harassment related to sex, sexual orientation, or gender identity (e.g. hostile and dismissive though not sexual comments about women, gay, lesbian, transgender, or nonbinary people) and harassment of a sexual nature. Note that sexual harassment is not limited to one-to-one interactions but may include, for example, general comments made in lectures or seminars that are not aimed at an individual.

General Suggestions

1. All members of the department—undergraduates, graduate students, academic and non-academic staff—should be made aware of the regulations that govern sexual harassment in their university.

a. In particular, they should know the university’s definition of ‘sexual harassment’ and who to contact in possible cases of sexual harassment.

b. They should also know who has standing to file a complaint (in general, and contrary to widespread belief, the complainant need not be the victim).

c. They should be made aware of both formal and informal measures available at their university.

d. Departments may wish to consider including this information in induction sessions for both students and staff, and in training for teaching assistants.

Where the University or Faculty has a list of Harassment Contacts, all staff—including non-academic staff—and students should be made aware of it. If no such list exists, the department should consider suggesting this approach to the university. It is very important for department members to be able to seek advice outside their department.

2. All members of staff should read the available advice on how to deal with individuals who approach them to discuss a particular incident.

3. All of the information listed above should be made permanently available to staff (including non-academic staff) and students, e.g. through a stable URL and/or staff and student handbooks, rather than only in the form of a one-off email communication.

4. The department head and others with managerial responsibilities (such as Directors of Graduate and Undergraduate Studies) should ensure that they have full knowledge of university procedures regarding sexual harassment.

Departmental Culture

1. Seriously consider the harms of an atmosphere rife with dismissive or sexualizing comments and behavior, and address these should they arise. (It is worth noting, however, that the right way to deal with this may vary.)

2. Cultivate—from the top down—an atmosphere in which maintaining a healthy climate for all department members, especially those from under-represented groups and including non-academic staff, is considered everyone’s responsibility. What this entails will vary from person to person and situation to situation. But at a minimum it includes a responsibility to reflect on the consequences (including unintended consequences) of one’s own behavior towards individuals from underrepresented groups. It may also include a responsibility to intervene, either formally or informally. (For more on the range of responses available, see Saul, op. cit.)

3. Ensure, as far as possible, that those raising concerns about sexual harassment are protected against retaliation.

4. Offer bystander training either to staff, or to staff and graduate students, if this is available or can be made available by the institution. This can help bystanders to feel comfortable intervening when they witness harassing behavior. (See the Good Practice website for more information.)


Good Practice Policy: Care Givers

Staff members and students with caregiving responsibilities—whether parental or other—face constraints on their time that others often do not. There are simple measures that departments can take to minimize the extent to which caregivers are disadvantaged.

General Suggestions

Departments should adopt an explicit policy concerning caregivers, which covers as many of the following points as is practically possible:

1. Schedule important events, as far as possible, between 9 and 5 (the hours when childcare is more readily available). When an event has to be scheduled outside of these hours, give plenty of advance notice so that caregivers can make the necessary arrangements. Consider using online scheduling polls to find times that work for as many as possible.

2. Seriously consider requests from staff of any background for part-time and flexible working. (This is largely, but not exclusively, an issue for caregivers—requests from non-caregivers should also be taken seriously.) Also be receptive, as far as possible, to requests for unpaid leave.

3. As far as possible, account for caregiving commitments when scheduling teaching responsibilities.

4. Be aware that students, not just staff, may have caregiving responsibilities. Have a staff contact person for students who are caregivers. Take student requests for caregiving accommodations seriously.

5. Ensure that students and staff are made fully aware of any university services for caregivers.

6. Ensure that staff have an adequate understanding of what caregiving involves. (E.g., don't expect a PhD student to make lots of progress on the dissertation while on parental leave.)

7. Ensure that parental leave funds provided by the university are actually used to cover for parental leave, rather than being absorbed into department or faculty budgets.

8. Those involved in performance evaluations should be fully informed about current policies regarding output reduction for caregivers and take caregiving responsibilities into account where possible.


Good Practice Policy: Staff-Student Relationships

Romantic or sexual relationships that occur in the student-teacher context or in the context of supervision, line management and evaluation present special problems. The differences in power, and the respect and trust often present, between a teacher and student, supervisor and subordinate, or senior and junior colleague in the same department or unit make these relationships especially vulnerable to exploitation. They can also have unfortunate unintended consequences.

Such relationships can also generate perceived, and sometimes real, inequalities that affect other members of the department, whether students or staff. For example, a relationship between a senior and junior member of staff may raise issues concerning promotion, granting of sabbatical leave, and allocation of teaching. This may happen even if no preferential treatment actually occurs, and even if the senior staff member in question is not directly responsible for such decisions. In the case of staff-student relationships, questions may arise concerning preferential treatment in seminar discussions, marking, decisions concerning graduate student funding, and so on. Again, these questions may well emerge and be of serious concern to other students even if no preferential treatment actually occurs.

At the same time, we recognize that such relationships do indeed occur, and that they need not be damaging, but may be both significant and long-lasting.

We suggest that departments adopt the following policy with respect to the behavior of members of staff at all levels, including graduate student instructors.

Please note that the recommendations below are not intended to be read legalistically. Individual institutions may have their own policies, and these will constitute formal requirements on staff and student behavior. The recommendations below are intended merely as departmental norms, and to be adopted only where not in conflict with institutional regulations.

General Suggestions

The department’s policy on relationships between staff and students (and between staff) should be clearly advertised to all staff and students in a permanent form, e.g. intranet or staff/student handbooks. The policy should include clear guidance about whom students or staff might consult in the first instance if problems (real or perceived) arise.

Undergraduate Students

1. Staff and graduate student teaching assistants should be informed that relationships between teaching staff and undergraduates are very strongly discouraged, for the reasons given above.

2. If such a relationship does occur, the member of staff in question should:

a. inform a senior member of the department—where possible, the department head—as soon as possible;

b. withdraw from all small-group teaching involving that student (in the case of teaching assistants, this may involve swapping tutorial groups with another TA), unless practically impossible;

c. withdraw from the assessment of that student, even if anonymous marking is used.

d. withdraw from writing references and recommendations for the student in question.

e. It should be made clear to staff and students that, while responsibility for taking the above steps lies with the member of staff concerned, an undergraduate student who has entered into a relationship with a member of staff (including a TA) is equally entitled to report the relationship to another member of staff (e.g., the Head of Department, if appropriate) and to request that the above steps be taken.

Graduate Students

1. Staff and graduate students should be informed that relationships between academic members of teaching staff and graduate students are very strongly discouraged, especially between a supervisor and a graduate supervisee.

2. If such a relationship occurs between a member of staff and a graduate student, the member of staff should:

a. inform a senior member of staff—where possible, the department head—as soon as possible;

b. withdraw from supervising the student, writing letters of recommendation for them, and making any decisions (e.g. distribution of funding) where preferential treatment of the student could in principle occur;

c. withdraw from all small-group teaching involving that student, unless practically impossible;

d. withdraw from the assessment of that student, even if anonymous marking is used.

e. As far as possible, the department should encourage a practice of full disclosure where such relationships continue. This avoids real or perceived conflicts of interest, as well as embarrassment for others.

Academic Staff

Between members of academic staff where there is a large disparity in seniority (e.g. Associate Professor/Lecturer; Head of Department/Assistant Professor):

1. Disclosure of any such relationship should be strongly encouraged, in order to avoid real or perceived conflicts of interest.

2. Any potential for real or perceived conflicts of interest should be removed by, e.g., removal of the senior member of staff from relevant decision-making (e.g. promotions, appointment to permanent positions).

Friday, May 06, 2022

Everything Is Valuable

A couple of weeks ago, I was listening to a talk by Henry Shevlin titled "Which Animals Matter?" The apparent assumption behind the title is that some animals don't matter -- not intrinsically, at least. Not in their own right. Maybe jellyfish (with neurons but no brains) or sponges (without even neurons) matter to some extent, but if so it is only derivatively, for example because of what they contribute to ecosystems on which we rely. You have no direct moral obligation to a sponge.

Hearing this, I was reminded of a contrasting view expressed in a famous passage by the 16th-century Confucian philosopher Wang Yangming:

[W]hen they see a child [about to] fall into a well, they cannot avoid having a mind of alarm and compassion for the child. This is because their benevolence forms one body with the child. Someone might object that this response is because the child belongs to the same species. But when they hear the anguished cries or see the frightened appearance of birds or beasts, they cannot avoid a sense of being unable to bear it. This is because their benevolence forms one body with birds and beasts. Someone might object that this response is because birds and beasts are sentient creatures. But when they see grass or trees uprooted and torn apart, they cannot avoid feeling a sense of sympathy and distress. This is because their benevolence forms one body with grass and trees. Someone might object that this response is because grass and trees have life and vitality. But when they see tiles and stones broken and destroyed, they cannot avoid feeling a sense of concern and regret. This is because their benevolence forms one body with tiles and stones (in Tiwald and Van Norden, eds., 2014, pp. 241-242).

My aim here isn't to discuss Wang Yangming interpretation, nor to critique Shevlin (whose view is more subtle than his title suggests), but rather to express a thought broadly in line with Wang Yangming, one with which I find myself sympathetic: Everything is valuable. Nothing exists to which we don't owe some sort of moral consideration.

When thinking about value, one of my favorite exercises is to consider what I would hope for on a distant planet -- one on the far side of the galaxy, for example, blocked by the galactic core, which we will never see and never have any interaction with. What would be good to have going on over there?

What I'd hope for, and what I'd invite you to join me in hoping for, is that it not just be a sterile rock. I'd hope that it has life. That would be, in my view, a better planet -- richer, more interesting, more valuable. Microbial life would be cool, but even better would be multicellular life, weird little worms swimming in oceans. And even better than that would be social life -- honeybees and wolves and apes. And even better would be linguistic, technological, philosophical, artistic life, societies full of alien poets and singers, scientists and athletes, philosophers and cosmologists. Awesome!

This is part of my case for thinking that human beings are pretty special. We're central to what makes Earth an amazing planet, a planet as amazing as that other one I've just imagined. The world would be missing something important, something that makes it rich and wonderful, if we suddenly vanished.

Usually I build the thought experiment up to us at the pinnacle (that is, the pinnacle so far; maybe we'll have even more awesome descendants); but also I can strip it down, in the pattern of Wang Yangming. A distant planet without us but with wolves and honeybees would still be valuable. Without the wolves and honeybees but with the worms, it also would still be valuable. With only microbes, it would still have substantial value -- after all, it would have life. Let's not forget how intricately amazing life is.

But even if there's no life -- even if it's a sterile rock after all -- well, in my mind, that's better than pure vacuum. A rock can be beautiful, and beauty has value even if there's no one to see it. Alternatively, even if we're stingy about beauty and regard the rock as a neutral or even ugly thing, well, mere existence is something. It's better that there's something rather than nothing. A universe of things is better than mere void. Or so I'd say, and so I invite you also to think. (It's hard to know how to argue for this other than simply to state it with the right garden path of other ideas around it, hoping that some sympathetic readers agree.)

I now bring this thinking back to Earth. Looking at the pebbles on the roof below my office window, I find myself feeling that they matter. Earth is richer for their existence. The universe is richer for their existence. If they were replaced with vacuum, that would be a loss. (Not that there isn't something cool about vacuums, too, in their place.) Stones aren't high on my list of valuable things that I must treat with care, but neither do I feel that I should be utterly indifferent to their destruction. I'm not sure my "benevolence forms one body" with the stones, but I can get into the mood.

[image source]

Thursday, April 28, 2022

Will Today's Philosophical Work Still Be Discussed in 200 Years?

I'm a couple days late to this party. Evidently, prominent Yale philosopher Jason Stanley precipitated a firestorm of criticism on Twitter by writing:

I would regard myself as an abject failure if people are still not reading my philosophical work in 200 years. I have zero intention of being just another Ivy League professor whose work lasts as long as they are alive.

(Stanley has since deleted the tweet, but he favorably retweeted a critique that discusses him specifically, so I assume he wouldn't object to my also doing so.)

Now "abject failure" is too strong -- Stanley has a tendency toward hyperbole on Twitter -- but I think it is entirely reasonable for him to aspire to create philosophical work that will still be read in 200 years and to be somewhat disheartened by the prospect that he will be entirely forgotten. Big-picture philosophy needn't aim only at current audiences. It can aspire to speak to future generations.

How realistic is such an aim? Well, first, we need to evaluate how likely it is that history of philosophy will be an active discipline in 200 years. The work of our era -- Stanley and others -- will of course be regarded as historical by then. Maybe there will be no history of philosophy. Humanity might go extinct or collapse into a post-apocalyptic dystopia with little room for recondite historical scholarship. Alternatively, humanity or our successors might be so cognitively advanced that they regard us early 21st century philosophers as the monkey-brained advocates of simplistic views that are correct only by dumb luck if they are correct at all.

But I don't think we need to embrace dystopian pessimism; and I suspect that even if our descendants are super-geniuses, there will remain among them some scholars who appreciate the history of 21st century thought, at least in an antiquarian spirit. ("How fascinating that our monkey-brained ancestors were able to come up with all of this!") And of course another possibility is that society proceeds more or less on its current trajectory. Economic growth continues, perhaps at a more modest rate, and with it a thriving global academic culture, hosting ever more researchers of all stripes, with historians in India, Indonesia, Illinois, and Iran specializing in ever more recondite subfields. It's not unreasonable, then, to guess that there will be historians of philosophy in 200 years.

What will they think of our era? Will they study it at all? It seems likely they will. After all, historians of philosophy currently study every era with a substantial body of written philosophy, and as academia has grown, scholars have been filling in the gaps between our favorite eras. I have argued elsewhere that the second half of the 20th century might well be viewed as a golden age of philosophy -- a flourishing of materialism, naturalism, and secularism, as 19th- and early 20th-century dualism and idealism were mostly jettisoned in favor of approaches more straightforwardly grounded in physics and biology. You might not agree with that conjecture. But I think you should still agree that at least in terms of the quantity of work, the variety of topics explored, and the range of views considered, the past fifty years compares favorably with, say, the early medieval era, and indeed probably pretty much any relatively brief era.

So I don't think historians will entirely ignore us. And given that English is now basically the lingua franca of global academia (for better or worse), historians of our era will not neglect English-language philosophers.

Who will be read? The historical fortunes of philosophers rise and fall. Gottlob Frege and Friedrich Nietzsche didn't receive much attention in their day, but are now viewed as historical giants. Christian Wolff and Henri Bergson were titans in their lifetimes but are little read now. On the other hand, the general tendency is for influential figures to continue to be seen as influential, and we haven't entirely forgotten Wolff and Bergson. A good historian will recognize at least that a full understanding of the eras in which Wolff and Bergson flourished requires appreciating the impact of Wolff and Bergson.

Given the vast number of philosophers writing today and in recent decades, an understanding of our era will probably focus less on understanding the systems of a few great figures and more on understanding the contributions of many scholars to prominent topics of debate -- for example, the rise of materialism, functionalism, and representationalism in philosophy of mind (alongside the major critiques of those views); or the division of normative ethics into consequentialist, deontological, and virtue-ethical approaches. A historian of our era will want to understand these things. And that will require reading David Lewis, Bernard Williams, and other leading figures of the late 20th century as well as, probably, David Chalmers and Peter Singer among others writing now.

As I imagine it, scholars of the 23rd century will still have archival access to our major books and journals. Specialists, then, will thumb through old issues of Noûs and Philosophical Review. Some will be intrigued by minor scholars who are in dialogue with the leading figures of our era. They might find some of the work by these minor scholars to be intriguing or insightful -- a valuable critique, perhaps, of the views of the leading figures, maybe prefiguring positions that are more prominently and thoroughly developed by better-known subsequent scholars.

It is not unreasonable, I think, for Stanley to aspire to be among the leading political philosophers and philosophers of language of our era, who will still be read by some historians and students, and still perhaps be viewed as having some good ideas that are worth continued discussion and debate.

For my own part, I doubt I will be viewed that way. But I still fantasize that some 23rd-century specialist in the history of philosophy of our era will stumble across one of my books or articles and think, "Hey, some of the work of this mostly-forgotten philosopher is pretty interesting! I think I'll cite it in one of my footnotes." I don't write mainly with that future philosopher in mind, but it still pleases me to think that my work might someday provoke that reaction.

[image generated by]

Friday, April 22, 2022

Let's Hope We Don't Live in a Simulation

reposting from the Los Angeles Times, where it appears under a different title[1]


There’s a new creation story going around. In the beginning, someone booted up a computer. Everything we see around us reflects states of that computer. We are artificial intelligences living in an artificial reality — a “simulation.”

It’s a fun idea, and one worth taking seriously, as people increasingly do. But we should very much hope that we’re not living in a simulation.

Although the standard argument for the simulation hypothesis traces back to a 2003 article from Oxford philosopher Nick Bostrom, 2022 is shaping up to be the year of the sim. In January, David Chalmers, one of the world’s most famous philosophers, published a defense of the simulation hypothesis in his widely discussed new book, Reality+. Essays in mainstream publications have declared that we could be living in virtual reality, and that tech efforts like Facebook’s quest to build out the metaverse will help prove that immersive simulated life is not just possible but likely — maybe even desirable.

Scientists and philosophers have long argued that consciousness should eventually be possible in computer systems. With the right programming, computers could be functionally capable of independent thought and experience. They just have to process enough information in the right way, or have the right kind of self-representational systems that make them experience the world as something happening to them as individuals.

In that case, the argument goes, advanced engineers should someday be able to create artificially intelligent, conscious entities: “sims” living entirely in simulated environments. These engineers might create vastly many sims, for entertainment or science. And the universe might have far more of these sims than it does biologically embodied, or “real,” people. If so, then we ourselves might well be among the sims.

The argument requires some caveats. It’s possible that no technological society ever can produce sims. Even if sims are manufactured, they may be rare — too expensive for mass manufacture, or forbidden by their makers’ law.

Still, the reasoning goes, the simulation hypothesis might be true. It’s possible enough that we have to take it seriously. Bostrom estimates a 1-in-3 chance that we are sims. Chalmers estimates about 25%. Even if you’re more doubtful than that, can you rule it out entirely? Any putative evidence that we aren’t in a sim — such as cosmic background radiation “proving” that the universe originated in a Big Bang — could, presumably, be simulated.

Suppose we accept this. How should we react?

Chalmers seems unconcerned: “Being in an artificial universe seems no worse than being in a universe created by a god” (p. 328). He compares the value of life in a simulation to the value of life on a planet newly made inhabitable. Bostrom acknowledges that humanity faces an “existential risk” that the simulation will shut down — but that risk, he thinks, is much lower than the risk of extinction by a more ordinary disaster. We might even relish the thought that the cosmos hosts societies advanced enough to create sims like us.

In simulated reality, we’d still have real conversations, real achievements, real suffering. We’d still fall in and out of love, hear beautiful music, climb majestic “mountains” and solve the daily Wordle. Indeed, even if definitive evidence proved that we are sims, what — if anything — would we do differently?

But before we adopt too relaxed an attitude, consider who has the God-like power to create and destroy worlds in a simulated universe. Not a benevolent deity. Not timeless, stable laws of physics. Instead, basically gamers.

Most of the simulations we run on our computers are games or scientific studies. They run only briefly before being shut down. Our low-tech sims live partial lives in tiny worlds, with no real history or future. The cities of Sim City are not embedded in fully detailed continents. The simulated soldiers dying in war games fight for causes that don’t exist. They are mere entertainments to be observed, played with, shot at, surprised with disasters. Delete the file, uninstall the program, or recycle your computer and you erase their reality.

But I’m different, you say: I remember history and have been to Wisconsin. Of course, it seems that way. The ordinary citizens of Sim City, if they were somehow made conscious, would probably be just as smug. Simulated people could be programmed to think they live on a huge planet with a rich past, remembering childhood travels to faraway places. Their having these beliefs in fact makes for a richer simulation.

If the simulations that we humans are familiar with reveal the typical fate of simulated beings, long-term sims are rare. Alternatively, if we can’t rely on the current limited range of simulations as a guide, our ignorance about simulated life runs even deeper. Either way, there are no good grounds for confidence that we live in a large, stable simulation.

Taking the simulation hypothesis seriously means accepting that the creator might be a sadistic adolescent gamer about to unleash Godzilla. It means taking seriously the possibility that you are alone in your room with no world beyond, reading a fake blog post, existing only as a short-lived subject or experiment. You might know almost nothing about reality beyond and beneath the simulation. The cosmos might be radically different from anything you could imagine.

The simulation hypothesis is wild and wonderful to contemplate. It’s also radically skeptical. If we take it seriously, it should undermine our confidence about the past, the future and the existence of Milwaukee. What or whom can we trust? Maybe nothing, maybe no one. We can only hope our simulation god is benevolent enough to permit our lives to continue awhile.

Really, we ought to hope the theory is false. A large, stable planetary rock is a much more secure foundation for reality than bits of a computer program that can be deleted at a whim.


In Reality+, Chalmers argues against the possibility that we live in a local or a temporary simulation on grounds of simplicity (pp. 442-447). I am not optimistic that this response succeeds. In general, simplicity arguments against skepticism tend to be underdeveloped and unconvincing -- in part because simplicity itself is complex to evaluate (see my paper with Alan T. Moore, "Experimental Evidence for the Existence of an External World"). And more specifically, it's not clear why it would be easier or simpler to create a giant, simulated world than to create a small simulation with fake indicators of a giant world -- perhaps only enough indicators to effectively fool us for the brief time we exist or on the relatively few tests we run. (And plausibly, our creators might be able to control or predict what thoughts we have or tests we will run, and thus create exactly and only the portions of reality that they know we will examine.) Continuing the analogy from Sim City, our current sims are more easily constructed if they are small, local, and brief, or if they are duplicated off a template, than if each is a giant, unique run of a whole universe from the beginning. I see no reason why this fact wouldn't generalize to more sophisticated simulations containing genuinely conscious artificial intelligences.


[1] The Los Angeles Times titled the piece "Is life a simulation? If so, be very afraid". While I see how one might draw that conclusion from the piece, my own view is that we probably should react emotionally as we react to other small but uncontrollable risks -- not with panic, but rather with a slight shift toward favoring short-term outcomes over long-term ones. See my discussion in "1% Skepticism" and Chapter 4 of my book in draft, The Weirdness of the World. I have also added links, a page reference, and altered the wording for clarity in a few places.

[image generated from inputting the title of this piece into's steampunk generator]

Tuesday, April 12, 2022

Let Everyone Sparkle: Psychotechnology in the Year 2067

My latest science fiction story, in Psyche.

Thank you, everyone, for coming to my 60th birthday celebration! I trust that you all feel as young as ever. I feel great! Let’s all pause a moment to celebrate psychotechnology. The decorations and Champagne are not the only things that sparkle. We ourselves glow and fizz as humankind never has before. What amazing energy drinks we have! What powerful and satisfying neural therapies!

If human wellbeing is a matter of reaching our creative and intellectual potential, we flourish now beyond the dreams of previous generations. Sixth-graders master calculus and critique the works of Plato, as only college students could do in the early 2000s. Scientific researchers work 16-hour days, sleeping three times as efficiently as their parents did, refreshed and eager to start at 2:30am. Our athletes far surpass the Olympians of the 2030s, and ordinary fans, jazzed up with attentional cocktails, appreciate their feats with awesome clarity of vision and depth of understanding. Our visual arts, our poetry, our dance and craftwork – all arguably surpass the most brilliant artists and performers of a century ago, and this beauty is multiplied by audiences’ increased capacity to relish the details.

Yet if human wellbeing is a matter not of creative and intellectual flourishing but consists instead in finding joy, tranquility and life satisfaction, then we attain these things too, as never before. Gone are the blues. Our custom pills, drinks and magnetic therapies banish all dull moods. Gone is excessive anxiety. Gone even are grumpiness and dissatisfaction, except as temporary spices to balance the sweetness of life. If you don’t like who you are, or who your spouses and children are, or if work seems a burden, or if your 2,000-square-foot apartment seems too small, simply tweak your emotional settings. You need not remain dissatisfied unless you want to. And why on Earth would anyone want to?

Gone are anger, cruelty, immorality and bitter conflict. There can be no world war, no murderous Indian Partition, no Rwandan genocide. There can be no gang violence, no rape, no crops rotting in warehouses while the masses starve. With the help of psychotechnology, we are too mature and rational to allow such things. Such horrors are fading into history, like a bad dream from which we have collectively woken – more so, of course, among advanced societies than in developing countries with less psychotechnology.

We are Buddhists and Stoics improved. As those ancient philosophers noticed, there have always been two ways to react if the world does not suit your desires. You can struggle to change the world – every success breeding new desires that leave you still unhappy – or you can, more wisely, adjust your desires to match the world as it already is, finding peace. Ancient meditative practices delivered such peace only sporadically and imperfectly, to the most spiritually accomplished. Now, spiritual peace is democratised. You need only twist a dial on your transcranial stimulator or rebalance your morning cocktail.

[continued here]

Wednesday, April 06, 2022

New Essay in Draft: Dehumanizing the Cognitively Disabled: Commentary on Smith's Making Monsters

by Eric Schwitzgebel and Amelie Green[1]

Since the essay is short, we post the entirety of it below. This is a draft. Comments, corrections, and suggestions welcome.


“No one is doing better work on the psychology of dehumanization than David Livingstone Smith, and he brings to bear an impressive depth and breadth of knowledge in psychology, philosophy, history, and anthropology. Making Monsters is a landmark achievement which will frame all future work on the psychology of dehumanization.” So says Eric Schwitzgebel on the back cover of the book, and we stand by that assessment. Today we aim to extend Smith’s framework to cases of cognitive disability.

According to Smith, “we dehumanize others when we conceive of them as subhuman creatures” (p. 9). However, Smith argues, since it is rarely possible to entirely eradicate our inclination to see other members of our species as fully human, dehumanization typically involves having contradictory beliefs, or at least contradictory representations. On the one hand, the Nazi looks at the Jew, or the southern slaveowner looks at the Black slave, and they can’t help but represent them as human. On the other hand, the Nazi and the slaveowner accept an ideology according to which the Jew and the Black slave are subhuman. The Jew or the Black slave are thus, on Smith’s view, cognitively threatening. They are experienced as confusing and creepy. They seem to transgress the boundaries between human and non-human, violating the natural order. Smith briefly discusses disabled people. Sometimes, disabled people appear to be dehumanized in Smith’s sense. Smith quotes the Nazi doctor Wilhelm Bayer as saying that the fifty-six disabled children he euthanized “could not be qualified as ‘human beings’” (p. 250). Perhaps more commonly, however, people guilty of ableism regard disabled people as humans, but humans who are “chronically defective, incomplete, or deformed” (p. 261). Even in the notorious tract which set the stage for the Nazi euthanasia program, “Permission to Destroy Life Unworthy of Life”, Karl Binding and Alfred Hoche describe those they seek to destroy as “human” (Menschen).

However, we recommend not relying exclusively on explicit language in thinking about dehumanization of people with disabilities. It is entirely possible to represent people as subhuman while still verbally describing them as “human” when explicitly asked. Dehumanization in Smith’s sense involves powerful conflicting representations of the other as both human and subhuman. Verbal evidence is important (and we will use it ourselves), but dehumanization does not require that both representations be verbalized.

We focus on the case of adults with severe cognitive disabilities. Amelie Green is the daughter of Filipino immigrants who worked as live-in caregivers in a small residential home for severely cognitively disabled “clients”. Throughout her childhood and early adulthood, Amelie witnessed the repeated abuse of cognitively disabled people at the hands of caregivers. This includes psychological abuse, physical assault, gross overmedication, needless binding, and nutritional deprivation, directly contrary to law and any reasonable ethical standard. This abuse is possible because the monitoring of these institutions is extremely lax. Surprise visits by regulators rarely occur. Typically, inspections are scheduled weeks or months in advance, giving residential institutions ample time to create the appearance of humane conditions in a brief, pleasing show for regulators. Since the clients are severely cognitively disabled, few are able to communicate their abuse to regulators. Many do not even recognize that they are being abused.

We’ll describe one episode as Amelie recorded it – far from the worst that Amelie has witnessed – to give the flavor and as a target for analysis. The client’s name has been changed for confidentiality.

As I stepped out of the kitchen, I heard a sharp scream, followed by a light thud. The screams continued, and, out of curiosity, I found myself walking towards the back of the house, drawn to two individuals shouting. Halfway towards the commotion, I stopped. I witnessed a caregiver strenuously invert an ambulatory woman strapped to her wheelchair. Both of the patient’s legs pointed towards the ceiling, and her hands clutched the wheelchair’s sidearm handles. As the wailing grew louder, the caregiver proceeded to wedge the patient’s left shoe inside her mouth, muffling the screams.

My initial reaction was to walk away from the scene to compose my thoughts quickly. Upon reflection, I assumed that the soft thud I heard was the impact of Anna’s wheelchair. Anna’s refusal to stop crying must have prompted the caregiver to stuff a shoe inside Anna’s mouth. I assumed that Anna was punished for complaining. After some thought, I noticed that I involuntarily defended the act of physical abuse by conceptualizing the caregiver’s response as a “punishment,” insinuating my biased perspective in favor of the workers. From afar, I caught the female staff outwardly explaining to Anna that she would continue to physically harm her if she made “too much loud noise.” From personal observation, Anna struggled to control her crying spells, oblivious of the commotion she was creating. Nonetheless, Anna involuntarily continued screaming, and the female staff thrust the shoe deeper.

Amelie has witnessed staff members kicking clients in the head; binding them to their beds with little cause; feeding a diabetic client large amounts of sugary drinks with the explicit aim of harming them; eating clients’ attractive food, leaving the clients with a daily diet of mostly salads, eggs, and prunes; falsifying time stamps for medication and feeding; and attempting to control clients by dosing them with psychiatric medications intended for other clients, against medical recommendations. It is not just a few caregivers who engage in such abusive behaviors. In Amelie’s experience, a majority of caregivers are abusive, though to different degrees.

Why do caregivers of the severely cognitively disabled so frequently behave like this? We have three hypotheses.

Convenience. Abuse might be the easiest or most effective means of achieving some practical goal. For example, striking or humiliating a client might keep them obedient, easier to manage than would be possible with a more humane approach. Although humane techniques exist for managing people with cognitive disabilities, they might work more slowly or require more effort from caregivers, who might understandably feel overtaxed in their jobs and frustrated by clients’ unruly behavior. Poorly paid workers might also steal attractive food that would otherwise not be easy for them to afford, justifying it with the thought that the clients won’t know the difference.

Sadism. According to the clinical psychologist Erich Fromm (1974), sadistic acts are acts performed on helpless others that aim at exerting maximum control over those helpless others, usually by inflicting harm on them but also by subjecting those others to arbitrary rules or forcing them to do pointless activities. It is crucial to sadistic control that it lack practical value, since power is best manifested when the chosen action is arbitrary. People typically enact sadism, according to Fromm, when they feel powerless in their own lives. Picture the man who feels frustrated and powerless at work who then comes home and kicks his dog. Cognitively disabled adults might be particularly attractive targets for frustrated workers’ sadistic impulses, since they are mostly powerless to resist and cannot report abuse.

Dehumanization. Abuse might arise from metaphysical discomfort of the sort Smith sees in racial dehumanization. The cognitively disabled might be seen as unnatural and metaphysically threatening. The cognitively disabled might seem creepy, occupying a gray area that defies familiar categories, at once both human and subhuman. Caregivers with conflicting representations of cognitively disabled people both as human and as subhuman might attempt to resolve that conflict by symbolically degrading their clients – implicitly asserting their clients’ subhumanity as a means of resolving this felt tension in favor of the subhuman. If the caregivers have already been mistreating the clients due to convenience or sadism, symbolic degradation might be even more attractive. If they can reinforce their representation of the client as subhuman, sadistic abuse or mistreatment for sake of convenience will seem to matter less.

Consider the example of Anna. To the extent the caregiver’s motivation is convenience, she might be hoping that inverting Anna in the wheelchair and shoving a shoe in her mouth will be an effective punishment that will encourage Anna not to cry so much or so loudly in the future. To the extent the motivation is sadism, the caregiver might be acting out of frustration and a feeling of powerlessness, either in general in her working life or specifically regarding her inability to prevent Anna from crying or both. By inverting Anna and shoving a shoe in her mouth, the caregiver can feel powerful instead of powerless, exerting sadistic control over a helpless other. To the extent the motivation is dehumanization, the worker is symbolically removing Anna’s humanity by literally physically turning her upside-down, into a position that human beings don’t typically occupy. Dogs bite shoes, and humans typically do not, and so arguably Anna is symbolically transformed into a dog. Furthermore, the shoe symbolically and perhaps actually prevents Anna from using her mouth to make humanlike sounds.

These three hypotheses about caregivers’ motives make different empirically distinguishable predictions about who will be abusive, and to whom, and which abusive acts they tend to choose. To the extent convenience is the explanation, we should expect experienced caregivers to choose effective forms of abuse. They will not engage in abuse with no clear purpose, and if a particular form of abuse seems not to be achieving its goal, they will presumably learn to stop that practice. To the extent sadism is the explanation, we should expect that the caregivers who feel most powerless should engage in it and that they should choose as victims clients who are among the most powerless while still being capable of controllable activity. Sadistic abuse should manifest especially in acts of purposeless cruelty and arbitrary control, almost the opposite of what would be chosen if convenience were the motive. To the extent dehumanization is the motive, we should expect the targets of abuse to be disproportionately the clients who are most cognitively and metaphysically threatening – the ones who, in addition to being cognitively disabled, are perceived as having a “deformed” physical appearance, or who seem to resemble non-human animals in their behavior (for example, crawling instead of upright walking), or who are negatively racialized. Acts manifesting dehumanizing motivations should be acts with symbolic value: treating the person in ways that are associated with the treatment of non-human animals, or symbolically altering or preventing characteristically human features or behaviors such as speech, clothing, upright walking, and dining.

We don’t intend convenience, sadism, and dehumanization as an exhaustive list of motives. People do things for many reasons, including sometimes against their will at the behest of others. Nor do we intend these three motives as exclusive. Indeed, as we have already suggested, they might to some extent support each other: Dehumanizing motives might be more attractive once a caregiver has already abused a client for reasons of convenience or sadism. Also, different caregivers might exhibit these motivations in different proportions.

Convenience alone cannot always be the motive. Caregivers often mistreat clients in ways that, far from making things easier for themselves, require extra effort. Adding extra sugar to a diabetic client’s drink serves no effective purpose and risks creating medical complications that the caregiver would then have to deal with. Another client was regularly told lies about his mother, such as that she had died or that she had forgotten about him, seemingly only to provoke a distressed reaction from him. This same client had a tendency to hunch forward and grunt, and caregivers would imitate his slouching and grunting, mocking him in a way that often flustered and confused him. Also, caregivers would go to substantial lengths to avoid sharing the facility’s elegant dining table with clients, even though there was plenty of room for both workers and clients to eat together at opposite ends. Instead, caregivers would rearrange chairs and tablecloths and a large vase before every meal, forcing clients to eat separately at an old, makeshift table. Relatedly, they meticulously ensured that caregivers’ and clients’ dishes and cutlery were never mixed, cleaning them with separate sponges and drying them in separate racks, as if clients were infectious.

But do caregivers really have dehumanizing representations in Smith’s sense? Here, we follow Smith’s method of examining the caregivers’ words. In Amelie’s experience over the years, she has observed that caregivers frequently refer to their clients as “animals” or “no better than animals”. In abusing them, they say things like, “you have to treat them like the animals they are”. Caregivers also commonly treat clients in a manner associated with dogs – for example, whistling for them to come over, saying “Here [name]!” in the same manner you would call a dog, and feeding them food scraps from the table. (These scraps will often be food officially bought on behalf of the clients but which the caregivers are eating for themselves.) The caregivers Amelie has observed also commonly refer to their clients with the English pronoun “it” instead of “he” or “she”, though of course they are aware of their clients’ gender. Some employ “it” so habitually that they accidentally refer to clients as “it” in front of the client’s relatives, during relatives’ visits. This pronoun is perhaps especially telling, since there is no practical justification for using it, and often no sadistic justification either, since many clients aren’t linguistically capable of understanding pronoun use. The use of “it” appears to emerge from an implicit or explicit dehumanizing representation of the client.

Despite speech patterns suggestive of dehumanization, caregivers also explicitly refer to the clients as human beings. In their reflective moments, Amelie has observed them to say things like “It’s hard to remember sometimes that they’re people. When they behave like this, you sometimes forget.” In Amelie’s judgment, the caregivers typically agree when reminded that the clients are people with rights who should be treated accordingly, though they often seem uncomfortable in acknowledging this.

Although the evidence is ambiguous, given caregivers’ patterns of explicitly referring to their cognitively disabled clients both as people and as non-human animals or “it”s, plus non-verbal behavior that appears to suggest dehumanizing representations, we think it’s reasonable to suppose, in accordance with Smith’s model of dehumanization, that many caregivers have powerful contradictory representations of their clients, seeing them simultaneously as human and as subhuman, finding them confusing, creepy, and in conflict with the natural order of things. If so, then it is plausible that they would feel the same kind of cognitive and metaphysical discomfort that Smith identifies in racial dehumanization, and that this discomfort would sometimes lead to inappropriate behavior of the sort described.

There’s another way to reassert the natural order of things, of course. Instead of dehumanizing cognitively disabled clients, you might embrace their humanity. There are two ways of doing this. One involves preserving a certain narrow, traditional sense of the “human” – a sense into which cognitively disabled people don’t easily fit – and then attempting to force the cognitively disabled into that conception. Visiting relatives sometimes seem to do this. One pattern is for a relative to comment with excessive appreciation on a stereotypically human trait that the client has, such as the beauty of their hair – as if to prove to themselves or others that their cognitively disabled relative is a human after all. While this impulse is admirable, it might be rooted in a narrow conception of the human, according to which cognitively disabled people are metaphysical category-straddlers or at best lesser humans.

A different approach to resolving the metaphysical problem – the approach we recommend – involves a more capacious understanding of the human. Plenty of people have disabilities. A person with a missing leg is no less of a human than a person with two legs, nor is the person with a missing leg somehow defective in their humanity. However, our culture appears to have instilled in many of us – perhaps implicitly and even against our better conscious judgment – a tendency to think of high levels of cognitive ability as essential to being fully and non-defectively human. Perhaps historically this has proven to be a useful ideology for eliminating, warehousing, drugging, and binding people who are inconvenient to have around. We suspect that changing this conception would reduce the abuse that caregivers routinely inflict on their cognitively disabled clients.


[1] "Amelie Green" is a pseudonym chosen to protect Amelie and her family.

Friday, April 01, 2022

Work on Robot Rights Doesn't Conflict with Work on Human Rights

Sometimes I write and speak about robot rights, or more accurately, the moral status of artificial intelligence systems -- or even more accurately, the possible moral status of possible future artificial intelligence systems. I occasionally hear the following objection to this whole line of work: Why waste our time talking about hypothetical robot rights when there are real people, alive right now, whose rights are being disregarded? Let's talk about the rights of those people instead! Some objectors add the further thought that there's a real risk that, under the influence of futurists, our society might eventually treat robots better than some human beings -- ethnic minorities, say, or disabled people.

I feel some of the pull of this objection. But ultimately, I think it's off the mark.

The objector appears to see a conflict between thinking about the rights of hypothetical robots and thinking about the rights of real human beings. I'd argue in contrast that there's a synergy, or at least that there can be a synergy. Those of us interested in robot rights can be fellow travelers with those of us advocating better recognition of and implementation of human rights.

In a certain limited sense, there is of course a conflict. Every word that I speak about the rights of hypothetical robots is a word I'm not speaking about the rights of disempowered ethnic groups or disabled people, unless I'm making statements so general that they apply to all such groups. In this sense of conflict, almost everything we do conflicts with the advocacy of human rights. Every time you talk about mathematics, or the history of psychology, or the chemistry of fluoride, you're speaking of those things instead of advocating human rights. Every time you chat with a friend about Wordle, or make dinner, or go for a walk, you're doing something that conflicts, in this limited sense, with advocating human rights.

But that sort of conflict can't be the heart of the objection. The people who raise this objection to work on robot rights don't also object in the same way to work on fluoride chemistry or to your going for a walk.

Closer to the heart of the matter, maybe, is that the person working on robot rights appears to have some academic expertise on rights in general -- unlike the chemistry professor -- but chooses to squander that expertise on hypothetical trivia instead of issues of real human concern.

But this can't quite be the right objection either. First, some people's expertise is a much more natural fit for robot rights than for human rights. I come to the issue primarily as an expert on theories of consciousness, applying my knowledge of such theories to the question of the relationship between robot consciousness and robot rights. Kate Darling entered the issue as a roboticist interested in how people treat toy robots. Second, even people who are experts on human rights shouldn't need to spend all of their time working on that topic. You can write about human rights sometimes and other issues at other times, without -- I hope -- being guilty of objectionably neglecting human rights in those moments you aren't writing about them. (In fact, in a couple of weeks at the American Philosophical Association I'll be presenting work on the mistreatment of cognitively disabled people [Session 1B of the main program].)

So what's the root of the objection? I suspect it's an implicit (or maybe explicit) sense that rights are a zero-sum game -- that advocating for the rights of one group means advocating for their rights over the rights of other groups. If you work advocating the rights of Black people, maybe it seems like you care more about Black people than about other groups -- women, or deaf people, for example -- and you're trying to nudge your favorite group to the front of some imaginary line. If this is the background picture, then I can see how attending to the issue of robot rights might come across as offensive! I completely agree that fighting for the rights of real groups of oppressed and marginalized people is far more important, globally, than wondering under what conditions hypothetical future robots would merit our moral concern.

But the zero-sum game picture is wrong -- backward, even -- and we should reject it. There are synergies between thinking about the rights of women, disempowered ethnic groups, and disabled people. Similar dynamics (though of course not entirely the same) can occur, so that thinking about one kind of case, or thinking about intersectional cases, can help one think about others; and people who care about one set of issues often find themselves led to care about others. Advocates of one group more typically are partners with, rather than opponents of, advocates of the other groups. Think, for example, of the alliance of Blacks and Jews in the 20th century U.S. civil rights movement.

In the case of robot rights in particular, this is perhaps less so, since the issue remains largely remote and hypothetical. But here's my hope, as the type of analytic philosopher who treasures thought experiments about remote possibilities: Thinking about the general conditions under which hypothetical entities warrant moral concern will broaden and sophisticate our thinking about rights and moral status in general. If you come to recognize that, under some conditions, entities as different from us as robots might deserve serious moral consideration, then when you return to thinking about human rights, you might do so in a more flexible way. If robots would deserve rights despite great differences from us, then of course others in our community deserve rights, even if we're not used to thinking about their situation. In general, I hope, thinking hypothetically about robot rights should leave us more thoughtful and open in general, encouraging us to celebrate the wide diversity of possible ways of being. It should help us crack our narrow prejudices.

Science fiction has sometimes been a leader in this. Consider Star Trek: The Next Generation, for example. Granting rights to the android named Data (as portrayed in this famous episode) conflicts not at all with recognizing the rights of his human friend Geordi La Forge (who relies on a visor to see and whom viewers would tend to racialize as Black). Thinking about the rights of the one in no way impairs, but instead complements and supports, thinking about the rights of the other. Indeed, from its inception, Star Trek was a leader in U.S. television, aiming to imagine (albeit not always completely successfully) a fair and egalitarian, multi-racial society, in which not only people of different sexes and races interact as equals, but so also do hypothetical creatures, such as aliens, robots, and sophisticated non-robotic A.I. systems.

[Riker removes Data's arm, as part of his unsuccessful argument that Data deserves no rights, being merely a machine]


Thanks to the audience at Ruhr University Bochum for helpful discussion (unfortunately not recorded in the linked video), especially Luke Roelofs.

[image source]

Thursday, March 24, 2022

Evening the Playing Field in Philosophy Classes

As I discussed last week, overconfident students have systematic advantages in philosophy classes, at least as philosophy is typically taught in the United States. By confidently asserting their ideas in the classroom -- even from day one, when they have no real expertise on the issues -- they get practice articulating philosophical views in an argumentative context and they receive the professor's customized feedback on their views. Presenting their views before professor and peers engages their emotions and enhances their memory. Typically, professors encourage and support such students, bringing out the best in them. Thus, over the long run, overconfident students tend to perform well, better than their otherwise similar peers with more realistic self-assessments. What seems to be an epistemic vice -- overconfidence -- ultimately helps them flourish and learn more than they otherwise would have.

I like these overconfident students (as long as they're not arrogant or domineering). It's good we encourage them. But I also want to level the playing field so that less-overconfident students can gain some of the same advantages. Here's my advice for doing so.

First, advice for professors. Second, advice for students.

Evening the Playing Field: Advice for Professors

(1.) Small-group discussions. This might sound like tired advice, but there's a reason the advice is so common. Small-group discussions work amazing magic if you do them right. Here's my approach:

* Divide the class into groups of 3 or 4. Insist on exactly 3 or 4. Two is too few, because friends will pair up and have too little diversity of opinion. Five is too many, because the quietest student will be left out of the conversation.

* Give the students a short, co-operative written task which will be graded pass / no credit (be lenient). For example, "write down two considerations in favor of the view that human nature is good and two considerations against the view". Have them designate one student as "secretary", who will write down their collaborative answer on a sheet of paper containing all their names. This should start them talking with each other, aimed at producing something concrete and sensible.

* Allow them five minutes (maybe seven), during which you wander the room, encouraging any quiet groups to start talking and writing.

* Reconvene the class, and then ask a group of usually quiet students what their group came up with.

* Explore the merits of their answers in an encouraging way, then repeat with other groups.

This exercise will get many more students talking in class than the usual six. (Almost no matter the size of the class, there will be six students who do almost all the talking, right?) The increased talkativeness often continues after the exercise is over. Not only do normally quiet students open their mouths more, but they gain some of the more specific benefits of the overconfident student: They practice expressing their views aloud in class; they receive customized feedback from the professor; by having their views put on the spot, they feel an emotional engagement that enhances interest and memory; and they get the feeling of support from the professor.

Why it works: It's of course easier to talk with a few peers than in front of the whole class, especially when it's necessary to complete a (low-stress) assignment. Speaking to a few peers in the classroom and finding them to be nice about it (as they almost always are) facilitates later speaking in front of the whole class. Furthermore, when the professor calls on a small group, instead of on one student in particular, that student isn't being confronted as directly. They have cover: "This is just what our group came up with." And if the student isn't comfortable extemporizing, they can just read the words written on the page. All of this makes it easier for the quieter students to gain practice expressing their philosophical views in front of others. If it goes well, they become more comfortable doing it again.

(2.) Story time. Back in 2016, I dedicated a post to the value of telling stories in philosophy class. My former T.A. Chris McVey was a master of philosophical storytelling. He would start discussion sections of my huge lower-division class, Evil, with personal stories from his childhood, or from his time working on a nuclear submarine, or from other parts of his life, that related to the themes of the class. He kept it personal and real, and students loved it.

A very different type of student tends to engage after story time than the usual overconfident philosophy guy -- for example, someone who has a similar story in their own life. The whole discussion has a different, more personal tone, and it can then be steered back into the main ideas of the course. Peter [name randomly chosen from lists of former students], who might normally say nothing in class, finds he has something to say about parental divorce. He has been brought into the discussion, has expressed an opinion, and has been shown how his opinion is relevant to the course.

(3.) Diversify topics and cultures. Relatedly, whenever you can diversify topics (add a dimension on religion, or family, or the military) or culture (beyond the usual European / North American focus), you shift around what I think of as the "academic capital" of the students in the class. A student who hasn't had confidence to speak might suddenly feel expert and confident. Maybe they are the one student who has had active duty in the military, or maybe their pre-college teachers regularly quoted from Confucius and Mencius. Respecting their expertise can help them recognize that they bring something important, and they will be readier to commit and engage on the issues at hand.

(4.) Finger-counting questions. Consider adding this custom: The first time a student raises their hand to speak in class, they raise one finger. The second time, two fingers. The third time, three fingers, and so on. When multiple students want to contribute, prioritize those with fewer fingers. When a student raises four fingers, hesitate, looking to see whether some lower-fingered students might also have something to say. This practice doesn't silence the most talkative students, but it will make them more aware of the extent to which they might be crowding other students out, and it constantly communicates to the quieter students that you're especially interested in hearing from them, instead of always from the same six.

This advice aims partly at enhancing oral participation in class, which is a big step toward evening the playing field. But to really level the playing field requires more. It's not just that the overconfident student is more orally active. The overconfident student has opinions, stakes claims, feels invested in the truth or falsity of particular positions, and takes the risk of exposing their ideas to criticism. This creates more emotional and intellectual engagement than do neutral, clarificatory oral contributions. My first three suggestions not only broaden oral participation in general but non-coercively nudge students toward staking claims, with all the good that follows from that.

Evening the Playing Field: Advice for Students

Your professor might not do any of the above. You might see the same six students dominating discussion, and you might not feel able to contribute at their level. You might be uninterested in competing with them for air time, and you might dislike the spotlight on yourself. Do yourself a favor and overcome these reservations!

First, the confident students might not actually know the material better than you do. Most professors in U.S. classrooms interpret students' questions as charitably as possible, finding what's best in them, rather than shooting students down in a discouraging way. If a confident student says something you think doesn't make sense or that you're inclined to disagree with, and if the professor seems to like the comment, it might not be that you're misunderstanding but rather that the professor is doing what they can to turn a weak contribution into something good.

Second, try viewing classroom philosophy discussions like a game. Almost every substantive philosophical claim (apart from simple historical facts and straightforward textual interpretations) is disputable, even the principle of non-contradiction. Take a stand for fun. See if you can defend it. A good professor will both help you see ways in which it might be defensible and ways in which others have argued against it. Think of it as being assigned to defend a view in a debate -- a view with which you might or might not agree.

Third, you owe it to yourself to win the same educational benefits that ordinarily accrue disproportionately to the overconfident students. You might not feel comfortable taking a stand in class. But so much of life is about reaching beyond your comfort zone, doing new things. Right? If you care about your education, care about getting the most out of it by putting your ideas forward in class.

Fourth, try it with other students. Even if your professor doesn't use small discussion groups, you can do this yourself. Most people find that it's much easier to take a stand about the material in front of a peer than in front of the whole class. Outside of class, tell a classmate about your objection to Kant. Bat it around with them a bit. This will already give you a certain amount of practice and feedback, laying the groundwork for later expressing that view, or some other one, in a class context. You could even say to the professor, "My friend and I were wondering whether Kant..." A good professor will love to hear a question like this. Students have been arguing about Kant outside of class! Yay!



"How to Diversify Philosophy: Two Thoughts and a Plea for More Suggestions" (Aug 24, 2016)

"Storytelling in Philosophy Class" (Oct 21, 2016)

"The Parable of the Overconfident Student -- and Why Academic Philosophy Still Favors the Socially Privileged" (Mar 14, 2022)

[image source]