Tuesday, June 20, 2017

The Dauphin's Metaphysics, read by Tatiana Grey at PodCastle

My alternative-history story about love and low-tech body switching through hypnosis has just been released in audio at PodCastle. Terrific reading by Tatiana Grey!

PodCastle 475: The Dauphin's Metaphysics

This has been my best-received story so far, recommended by Locus Online, translated into Chinese and Hungarian for leading SF magazines in those languages, and assigned as required reading in at least two philosophy classes in the US.

The setting is Beijing circa 1700, post-European invasion and collapse, resulting in a mashup of European and Chinese institutions. Dauphin Jisun Fei takes a metaphysics class with the Academy's star woman professor and conceives a plan for radical life extension.

Story originally published in Unlikely Story, fall 2015.

Thursday, June 15, 2017

On Not Distinguishing Too Finely Among One's Motivations

I'm working through Daniel Batson's latest book, What's Wrong with Morality?

Batson distinguishes between four different types of motives for seemingly moral behavior, each with a different type of ultimate goal. Batson's taxonomy is helpful -- but I want to push back against distinguishing as finely as he does among people's motives for doing good.

Suppose I offer a visiting speaker a ride to the airport. That seems like a nice thing to do. According to Batson, I might have one (or more) of the following types of motivation:

(1.) I might be egoistically motivated -- acting in my own perceived self-interest. Maybe the speaker is the editor of a prestigious journal and I think I'll have a better shot publishing and advancing my career if the speaker thinks well of me.

(2.) I might be altruistically motivated -- aiming primarily to benefit the speaker herself. I just want her to have a good visit, a good experience at UC Riverside, and giving her a ride is a way of advancing that goal I have.

(3.) I might be collectivistically motivated -- aiming primarily to benefit a group. I want UC Riverside's Philosophy Department to flourish, and giving the speaker a ride is a way of advancing that thing I care about.

(4.) I might be motivated by principle -- acting according to a moral standard, principle, or ideal. Maybe I think driving the speaker to the airport will maximize global utility, or that it is ethically required given my social role and past promises.

Batson characterizes his view of motivation as "Galilean" -- focused on the underlying forces that drive behavior (pp. 25-26). The idea seems to be that when I make that offer to the visiting speaker, that action must have been induced by some particular motivational force inside me that is egoistic, altruistic, collectivist, or principled, or some specific combination of those. On this view, we don't understand why I am offering the ride until we know which of these interior forces is the one that caused me to offer the ride. Principled morality is rare, Batson argues, because it requires being caused to act by the fourth type of motivation, and people are more normally driven by the first three.

I'm nervous about appeals to internal causes of this sort. My best guess is that these sorts of simple, familiar folk (or quasi-folk) categories don't map neatly onto the real causal processes generating our behavior, which are likely to be much more complicated, and also misaligned with categories that come naturally to us. (Compare connectionist structures and deep learning.)

Rather than try to articulate an alternative positive account, which would be too much to add to this post, let me just suggest the following. It's plausible that our motivations are often a tangled mess, and when they are a tangled mess, attempting to distinguish finely among them is usually a mistake.

For example, there are probably hypothetical conditions under which I would decline to drive the speaker because it conflicted with my self-interest, and there are probably other hypothetical conditions under which I would set aside my self-interest and choose to drive the speaker anyway. I doubt these hypothetical conditions line up neatly, so that I decline to drive the speaker if and only if it would require sacrificing X amount or more of self-interest. Some situations might just channel me into driving her, even at substantial personal cost, while others might more easily invite the temptation to wiggle out.

The same is likely true for the other motivations. Hypothetically, if the situation were different so that it was less in the collective interest of the department, or less in the speaker's interest, or less compelled by my favorite moral principles, I might drive or not drive the speaker depending partly on each of these but also partly on other factors of situation and internal psychology, habits, scripts, potential embarrassment -- probably in no tidy pattern.

Furthermore, egoistic, altruistic, collectivist, and principled aims come in many varieties, difficult to disentangle. I might be egoistically invested in the collective flourishing of the department as a way of enhancing my own stature in the profession. I might be drawn to different, conflicting moral principles. I might altruistically desire both that the speaker get to her flight on time and that she enjoy the company of the cleverest conversationalist in the department (me!). I might enjoy showing off the sights of the L.A. basin through the windows of my car, with a feeling of civic pride. Etc.

Among all of these possible motivations -- indefinitely many possible motivations, perhaps, if we decide to slice finely among them -- does it make sense to try to determine which one or few are the real motivations that are genuinely causally responsible for my choosing to drive the speaker?

Now if my actual and hypothetical choices were all neatly aligned with my perceived self-interest, then of course self-interest would be my real motive. Similarly, if my pattern of actual and hypothetical choices were all neatly aligned with one particular moral principle, then we could say I was mainly moved by that principle. But if my patterns of choice are not so neatly explained, if my choices arise from a tangle of factors far more complex than Batson's four, then each of Batson's factors is only a simplified label for a pattern that I don't very closely match, rather than a deep Galilean cause of my choice.

The four factors might, then, not compete with each other as starkly as Batson seems to suppose. Each of them might, to a first approximation, capture my motivation reasonably well, in those fortunate cases where self-interest, other-interest, collective interest, and moral principle all tend to align. I have lots of reasons for driving the speaker! This might be so even if, in hypothetical cases, I diverge from the predicted patterns, probably in different and complex ways. My motivations might be described, with approximately equal accuracy, as egoistic, altruistic, collectivist, and principled, when these four factors tend to align across the relevant range of situations -- not because each type of motivation contributes equal causal juice to my behavior but rather because each attribution captures well enough the pattern of choices I would make in the types of cases we care about.

Wednesday, June 07, 2017

Academic Pyramids, Academic Tubes

Greetings from Cambridge! Traveling around Europe and the UK, I am struck by the extent to which different countries have relatively pyramid-like vs relatively tube-like academic systems. This has moved me to think, also, about the extent to which US academia has recently been becoming more pyramidal.

Please forgive my ugly sketch of a pyramid and a tube:

The German system is quite pyramidal: There is a small group of professors at the top, and many stages between undergraduate and professor, at any one of which you might suddenly find yourself ejected from the system: undergraduate, then Master's, then PhD, then one or more postdocs and/or assistantships before moving up or out; and at each stage one needs to actively seek a position and typically move locations if successful.

In contrast, the US system, as it stood about twenty years ago, was more tubular: fewer transition stages requiring application and moving, with much sharper cutdowns between each stage. To a first approximation, undergraduates applied to PhD programs, very few got in, and then if they completed there was one more transition from completing the PhD to gaining a tenure-track job (and typically, though of course not always, tenure after 6-7 years on the tenure track).

Philosophy in the US is becoming more pyramidal, I believe, with more people pursuing terminal Master's degrees before applying to PhD programs, and with the increasing number of adjunct positions and postdoctoral positions for newly-minted PhDs. Instead of approximately three phases (undergrad, grad/PhD, tenure-track/tenured professor), we are moving closer to a five-phase system (undergrad, MA, PhD, adjunct/post-doc, tenure-track/tenured).

This more pyramidal system has some important advantages. One advantage is that it provides more opportunities for people from nonelite backgrounds to advance through the system. It has always been difficult for students from nonelite undergraduate universities to gain acceptance to elite PhD programs (and it still is); similarly for students who struggled a bit in their undergraduate careers before finding philosophy. With the increasing willingness of PhD programs to accept students with Master's degrees, a broader range of students can earn a shot at academia: They can compete to get into a Master's program (typically easier to do for people with nonelite backgrounds than being admitted to a comparably-ranked PhD program) and then possibly shine there, gaining admittance to a range of PhD programs that would otherwise have been closed to them. A similar pattern sometimes occurs with postdocs.

The other advantage of the pyramid is that being exposed to a variety of institutions, advisors, and academic subcultures has advantages both for the variety of perspectives it provides and for meeting more people in the academic community. A Master's program or a postdoctoral fellowship can be a rewarding experience.

But I am also struck by the downside of pyramidal structures. In Europe, I met many excellent philosophers in their 30s or 40s, post-PhD, unsure whether they would make the next jump up the pyramid or not, unable to settle down securely into their careers. This used to be relatively uncommon in the US, though it has become more common. It is hard on marriages and families; and it's hard to face the prospects of a major career change in mid-life after devoting a dozen or more years to academia.

The sciences in the US have tended to be more pyramidal than philosophy, with one or more postdocs often expected before the tenure-track job. This is partly, I suspect, just due to the money available in science. There are lots of post-docs to be had, and it's easier to compete for professor positions with that extra postdoctoral experience. One possibly unintended consequence of the increased flow of money into philosophical research projects, through the Templeton Foundation and government research funding organizations, is to increase the number of postdocs, and thus the pyramidality of the discipline.

Of course, the rise of inexpensive adjunct labor is a big part of this -- bigger, probably, than the rise of terminal Master's programs as a gateway to the PhD and the rise of the philosophy post-doc -- but all of these contribute in different ways to making our discipline more pyramidal than it was a few decades ago.

Thursday, June 01, 2017

The Social-Role Defense of Robot Rights

Daniel Estrada's Made of Robots has launched a Hangout on Air series in philosophy of technology. The first episode is terrific!

Robot rights cheap yo.

Cheap: Estrada's argument for robot rights doesn't require that robots have any conscious experiences, any feelings, any reinforcement learning, or (maybe) any cognitive processing at all. Most other defenses of the moral status of robots assume, implicitly or explicitly, that robots who are proper targets of moral concern will exist only in the future, once they have cognitive features similar to humans or at least similar to non-human vertebrate animals.

In contrast, Estrada argues that robots already deserve rights -- actual robots that currently exist, even simple robots.

His core argument is this:

1. Some robots are already "social participants" deeply incorporated into our social order.

2. Such deeply incorporated social participants deserve social respect and substantial protections -- "rights" -- regardless of whether they are capable of interior mental states like joy and suffering.

Let's start with some comparison cases. Estrada mentions corpses and teddy bears. We normally treat corpses with a certain type of respect, even though we think they themselves aren't capable of states like joy and suffering. And there's something that seems at least a little creepy about abusing a teddy bear, even though it can't feel pain.

You could explain these reactions without thinking that corpses and teddy bears deserve rights. Maybe it's the person who existed in the past, whose corpse is now here, who has rights not to be mishandled after death. Or maybe the corpse's relatives and friends have the rights. Maybe what's creepy about abusing a teddy bear is what it says about the abuser, or maybe abusing a teddy harms the child whose bear it is.

All that is plausible, but another way of thinking emphasizes the social roles that corpses and teddy bears play and the importance to our social fabric (arguably) of our treating them in certain ways and not in other ways. Other comparisons might be: flags, classrooms, websites, parks, and historic buildings. Destroying or abusing such things is not morally neutral. Arguably, mistreating flags, classrooms, websites, parks, or historic buildings is a harm to society -- a harm that does not reduce to the harm of one or a few specific property owners who bear the relevant rights.

Arguably, the destruction of hitchBOT was like that. HitchBOT was a cute ride-hitching robot, who made it across the length of Canada but who was destroyed by pranksters in Philadelphia when its creators sent it to repeat the feat in the U.S. Its destruction not only harmed its creators and owners, but also the social networks of hitchBOT enthusiasts who were following it and cheering it on.

It might seem overblown to say that a flag or a historic building deserves rights, even if it's true that flags and historic buildings in some sense deserve respect. If this is all there is to "robot rights", then we have a very thin notion of rights. Estrada isn't entirely explicit about it, but I think he wants more than that.

Here's the thing that makes the robot case different: Unlike flags, buildings, teddy bears, and the rest, robots can act. I don't mean anything too fancy here by "act". Maybe all I mean or need to mean is that it's reasonable to take the "intentional stance" toward them. It's reasonable to treat them as though they had beliefs, desires, intentions, goals -- and that adds a new richer dimension, maybe different in kind, to their role as nodes in our social network.

Maybe that new dimension is enough to warrant using the term "rights". Or maybe not. I'm inclined to think that whatever rights existing (non-conscious, not cognitively sophisticated) robots deserve remain derivative on us -- like the "rights" of flags and historic buildings. Unlike human beings and apes, such robots have no intrinsic moral status, independent of their role in our social practices. To conclude otherwise would require more argument or a different argument than Estrada gives.

Robot rights cheap! That's good. I like cheap. Discount knock-off rights! If you want luxury rights, though, you'll have to look somewhere else (for now).

[image source] Update: I changed "have rights" to "deserve rights" in a few places above.

Thursday, May 25, 2017

Lynching, the Milgram Experiments, and the Question of Whether "Human Nature Is Good"

At The Deviant Philosopher Wayne Riggs, Amy Olberding, Kelly Epley, and Seth Robertson are collecting suggestions for teaching units, exercises, and primers that incorporate philosophical approaches and philosophers that are not currently well-represented in the formal institutional structures of the discipline. The idea is to help philosophers who want suggestions for diversifying their curriculum. It looks like a useful resource!

I contributed the following to their site, and I hope that others who are interested in diversifying the philosophical curriculum will also contribute something to their project.

Lynching, the Milgram Experiments, and the Question of Whether "Human Nature Is Good"

Primary Texts

  • Allen, James, Hilton Als, John Lewis, and Leon F. Litwack (2000). Without sanctuary: Lynching photography in America. Santa Fe: Twin Palms. Pp. 8-16, 173-176, 178-180, 184-185, 187-190, 194-196, 198, 201 (text only), and plates #20, 25, 31, 37-38, 54, 57, 62-65, 74, and 97.
  • Wells-Barnett, Ida B. (1892/2002). On lynchings. Ed. P.H. Collins. Amherst, NY: Humanity. Pp. 42-46.
  • Mengzi (3rd c. BCE/1970). Trans. B.W. Van Norden. Indianapolis: Hackett. 1A7, 1B5, 1B11, 2A2 (p. 35-41 only), 2A6, 2B9, 3A5, 4B12, 6A1 through 6A15, 6B1, 7A7, 7A15, 7A21, 7B24, 7B31.
  • Rousseau, Jean-Jacques (1755/1995). Discourse on the origin of inequality. Trans. F Philip. Ed. P. Coleman. Oxford: Oxford. Pp. 45-48.
  • Xunzi (3rd c. BCE/2014). Xunzi: The complete text. Trans. E. Hutton. Princeton, NJ: Princeton. Pp. 1-8, 248-257.
  • Hobbes, Thomas (1651/1996). Leviathan. Ed. R. Tuck. Cambridge: Cambridge. Pp. 86-90.
  • Doris, John M. (2002). Lack of character. Cambridge: Cambridge. Pp. 28-61.
  • The Milgram video on Obedience to Authority.
Secondary Texts for Instructor
  • Dray, Philip (2002). At the hands of persons unknown. New York: Modern Library.
  • Ivanhoe, Philip J. (2000). Confucian moral self cultivation, 2nd ed. Indianapolis: Hackett. 
  • Schwitzgebel, Eric (2007). Human nature and moral education in Mencius, Xunzi, Hobbes, and Rousseau. History of Philosophy Quarterly, 24, 147-168.
Suggested Courses
  • Introduction to Ethics
  • Ethics
  • Introduction to Philosophy
  • Evil
  • Philosophy of Psychology
  • Political Philosophy
Overview

This is a two-week unit. Day one is on the history of lynching in the United States, featuring lynching photography and Ida B. Wells. Day two is Mengzi on human nature (with Rousseau as secondary reading). Day three is Xunzi on human nature (with Hobbes as secondary reading). Days four and five are the Milgram video and John Doris on situationism.

The central question concerns the psychology of lynching perpetrators and Milgram participants. On a “human nature is good” view, we all have some natural sympathies or an innate moral compass that would be revolted by our participation in such activities, if we were not somehow swept along by bad influences (Mengzi, Rousseau). On a “human nature is bad” view, our natural inclinations are mostly self-serving and morality is an artificial human construction; so if one’s culture says “this is the thing to do” there is no inner source of resistance unless you have already been properly trained (Xunzi, Hobbes). Situationism (which is not inconsistent with either of these alternatives) suggests that most people can commit great evil or good depending on what seem to be fairly moderate situational pressures (Doris, Milgram).

Students should be alerted in advance about the possibly upsetting photographs, and they must be encouraged to look closely at the faces of the perpetrators rather than being too focused on the bodies of the victims (which may be edited out if desired for classroom presentation). You might even consider giving the students possible alternative readings if they find the lynching material too difficult (such as an uplifting chapter from Colby & Damon 1992).

On Day One, a point of emphasis should be that most of the victims were not even accused of capital crimes, and focus can be both on the history of lynching in general and on the emotional reactions of the perpetrators as revealed by their behavior described in the texts and by their faces in the photos.

On Day Two, the main emphasis should be on Mengzi’s view that human nature is good. King Xuan and the ox (1A7), the child at the well (2A6), and the beggar refusing food insultingly given (6A10) are the most vivid examples. The metaphor of cultivating sprouts is well worth extended attention (as discussed in the Ivanhoe and Schwitzgebel readings for the instructor). If the lynchers had paused to reflect in the right way, would they have found in themselves a natural revulsion against what they were doing, as Mengzi would predict? Rousseau’s view is similar (especially as developed in Emile) but puts more emphasis on the capacity of philosophical thinking to produce rationalizations of bad behavior.

On Day Three, the main emphasis should be on Xunzi’s view that human nature is bad. His metaphor of straightening a board is fruitfully contrasted with Mengzi’s of cultivating sprouts. For example, in straightening a board, the shape (the moral structure) is imposed by force from outside. In cultivating a sprout, the shape grows naturally from within, given a supportive, nutritive, non-damaging environment. Students can be invited to consider cartoon versions of “conservative” moral education (“here are the rules, like it or not, follow them or you’ll be punished!”) versus “liberal” moral education (“don’t you feel bad that you hurt Ana’s feelings?”).

Day Four you might just show the Milgram video.

Day Five the focus should be on articulating situationism vs dispositionism (or whatever you want to call the view that broad, stable, enduring character traits explain most of our moral behavior). I recommend highlighting the elements of truth in both views, and then showing how there are both situationist and dispositionist elements in both Mengzi and Xunzi (e.g., Mengzi says that young men are mostly cruel in times of famine, but he also recommends cultivating stable dispositions). Students can be encouraged to discuss how well or poorly the three different types of approach explain the lynchings and the Milgram results.

If desired, Day Six and beyond can cover material on the Holocaust. Hannah Arendt’s Eichmann in Jerusalem and Daniel Goldhagen’s Hitler’s Willing Executioners make a good contrast (with Mengzian elements in Arendt and Xunzian elements in Goldhagen). (If you do use Goldhagen, be sure you are aware of the legitimate criticisms of some aspects of his view by Browning and others.)

Discussion Questions
  • What emotions are the lynchers feeling in the photographs?
  • If the lynchers had stopped to reflect on their actions, would they have been able to realize that what they were doing was morally wrong?
  • Mengzi lived in a time of great chaos and evil. Although he thought human nature was good, he never denied that people actually commit great evil. What resources are available in his view to explain actions like those of the lynch mobs, or other types of evil actions?
  • Is morality an artificial cultural invention? Or do we all have natural moral tendencies that only need to be cultivated in a nurturing environment?
  • In elementary school moral education, is it better to focus on enforcing rules that might not initially make sense to the children, or is it better to try to appeal to their sympathies and concerns for other people?
  • How effectively do you think people can predict what they themselves would do in a situation like the Milgram experiment or a lynch mob?
  • Are there people who are morally steadfast enough to resist even strong situational pressures? If so, how do they become like that?
Activities (optional)

On the first day, an in-class assignment might be for them to spend 5-7 minutes writing down their opinion on whether human nature is good or evil (or in-between, or alternatively that the question doesn’t even make sense as formulated). They can then trade their written notes with a neighbor or two and compare answers. On the last day, they can review what they wrote on the first day and discuss whether their opinions have changed.
[Greetings from Graz, Austria, by the way!]

Friday, May 19, 2017

Pre-Excuse

I'm heading off to Europe tomorrow for a series of talks and workshops. Nijmegen, Vienna, Graz, Lille, Leuven, Antwerp, Oxford, Cambridge -- whee! Then back to Riverside for a week and off to Iceland with the family to celebrate my son's high school graduation. Whee again! I return to sanity July 5.

I've sketched out a few ideas for blog posts, but nothing polished.

If I descend into incoherence, I have my pre-excuse ready! Jetlag and hotel insomnia.

[image source]

Thursday, May 18, 2017

Hint, Confirm, Remind

You can't say anything only once -- not when you're writing, not if you want the reader to remember. People won't read the words exactly as you intend them, or they will breeze over them; and often your words will admit of more interpretations than you realize, which you rule out by clarifying, angling in, repeating, filling out with examples, adding qualifiers, showing how what you say is different from some other thing it might be mistaken for.

I have long known this about academic writing. Some undergraduates struggle to fill their 1500-word papers because they think that every idea gets one sentence. How do you have eighty ideas?! It becomes much easier to fill the pages -- indeed the challenge shifts from filling the pages to staying concise -- once you recognize that every idea in an academic paper deserves a full academic-sized paragraph. Throw in an intro and conclusion and you've got, what, five ideas in a 1500-word paper? Background, a main point, one elaboration or application, one objection, a response -- done.

It took a while for me to learn that this is also true in writing fiction. You can't just say something once. My first stories were too dense. (They are now either trunked or substantially expanded.) I guess I implicitly figured that you say something, maybe in a clever oblique way, the reader gets it, and you're done with that thing. Who wants boring repetition and didacticism in fiction?

Without being didactically tiresome, there are lots of ways to slow things down so that the reader can relish your idea, your plot turn, your character's emotion or reaction, rather than having the thing over and done in a sentence. You can break it into phases; you can explicitly set it up, then deliver; you can repeat in different words (especially if the phrasings are lovely); you can show different aspects of the scene, relevant sensory detail, inner monologue, other characters' reactions, a symbolic event in the environment.

But one of my favorite techniques is hint, confirm, remind. You can do this in a compact way (as in the example I'm about to give), but writers more commonly spread HCRs throughout the story. Some early detail hints or foreshadows -- gives the reader a basis for guessing. Then later, when you hit it directly, the earlier hint is remembered (or if not, no biggie, not all readers are super careful), and the alert reader will enjoy seeing how the pieces come together. Still later, you remind the reader -- more quickly, like a final little hammer tap (and also so that the least alert readers finally get it).

Neil Gaiman is a master of the art. As I was preparing some thoughts for a fiction-writing workshop for philosophers I'm co-leading next month, I noticed this passage about "imposter syndrome", recently going around. Here's Gaiman:

Some years ago, I was lucky enough to be invited to a gathering of great and good people: artists and scientists, writers and discoverers of things. And I felt that at any moment they would realise that I didn’t qualify to be there, among these people who had really done things.

On my second or third night there, I was standing at the back of the hall, while a musical entertainment happened, and I started talking to a very nice, polite, elderly gentleman about several things, including our shared first name. And then he pointed to the hall of people, and said words to the effect of, "I just look at all these people, and I think, what the heck am I doing here? They’ve made amazing things. I just went where I was sent."

And I said, "Yes. But you were the first man on the moon. I think that counts for something."

And I felt a bit better. Because if Neil Armstrong felt like an imposter, maybe everyone did.

Hint: an elderly gentleman, same first name as Gaiman, famous enough to be backstage among well known artists and scientists. Went where he was sent.

Confirm: "You were the first man on the moon".

Remind: "... if Neil Armstrong..."

The hints set up the puzzle. It's unfolding fast before you, if you're reading at a normal pace. You could slow way down and treat it as a riddle, but few of us would do that.

The confirm gives you the answer. Now it all fits together. Bonus points to Gaiman for making it natural dialogue rather than flat-footed exposition.

The remind here is too soon after the confirm to really be a reminder, as it would be if it appeared a couple of pages later in a longer piece of writing. But the basic structure is the same: The remind hammer-taps the thing that should already be obvious, to make sure the reader really has it -- but quickly, with a light touch.

If you want the reader to remember, you can't just say it only once.

[image source]

Thursday, May 11, 2017

The Sucky and the Awesome

Here are some things that "suck":

  • bad sports teams;
  • bad popular music groups;
  • getting a flat tire, which you try to change in the rain because you're late to catch a plane for that vacation trip you've been planning all year, but the replacement tire is also flat, and you get covered in mud, miss the plane, miss the vacation, and catch a cold;
  • me, at playing Sonic the Hedgehog.
It's tempting to say that all bad things "suck". There probably is a legitimate usage of the term on which you can say of anything bad that it sucks; and yet I'm inclined to think that this broad usage is an extension from a narrower range of cases that are more central to the term's meaning.

Here are some bad things that it doesn't seem quite as natural to describe as sucking:

  • a broken leg (though it might suck to break your leg and be laid up at home in pain);
  • lying about important things (though it might suck to have a boyfriend/girlfriend who regularly lies);
  • inferring not-Q from (i) P implies Q and (ii) not-P (though you might suck at logic problems);
  • the Holocaust.
  • The most paradigmatic examples of suckiness combine aesthetic failure with failure of skill or functioning. The sports team or the rock band, instead of showing awesome skill and thereby creating an awesome audience experience of musical or athletic splendor, can be counted on to drop the ball, hit the wrong note, make a jaw-droppingly stupid pass, choose a trite chord and tacky lyric. Things that happen to you can suck in a similar way to the way it sucks to be stuck at a truly horrible concert: Instead of having the awesome experience you might have hoped for, you have a lousy experience (getting splashed while trying to fix your tire, then missing your plane). There's a sense of waste, lost opportunity, distaste, displeasure, and things going badly. You're forced to experience one stupid, rotten thing after the next.

    Something sucks if (and only if) it should deliver good, worthwhile experiences or results, but it doesn't, instead wasting people's time, effort, and resources in an unpleasant and aesthetically distasteful way.

    The opposite of sucking is being awesome. Notice the etymological idea of "awe" in "awesome": Something is awesome if it does or should produce awe and wonder at its greatness -- its great beauty, its great skill, the way everything fits elegantly together. The most truly sucky of sucky things instead produces wonder at its badness. Wow, how could something be that pointless and awful! It's amazing!

    That "sucking" focuses our attention on the aesthetic and experiential is what makes it sound not quite right to say that the Holocaust sucked. In a sense, of course, the Holocaust did suck. But the phrasing trivializes it -- as though what is most worth comment is not the moral horror and the millions of deaths but rather the unpleasant experiences it produced.

    Similarly for other non-sucky bad things. What's central to their badness isn't aesthetic or experiential. To find nearby things that more paradigmatically suck, you have to shift to the experiential or to a lack of (awesome) skill or functioning.

    All of this is very important to understand as a philosopher, of course, because... because...

    Well, look. We wouldn't be using the word "sucks" so much if it wasn't important to us whether or not things suck, right? Why is it so important? What does it say about us, that we think so much in terms of what sucks and what is awesome?

    Here's a Google Ngram of "that sucks, this sucks, that's awesome". Notice the sharp rise that starts in the mid-1980s and appears to be continuing through the end of the available data.


    We seem to be more inclined than ever to divide the world into the sucky and the awesome.

    To see the world through the lens of sucking and awesomeness is to evaluate the world as one would evaluate a music video: in terms of its ability to entertain, and generate positive experiences, and wow with its beauty, magnificence, and amazing displays of skill.

    It's to think like Beavis and Butt-Head, or like the characters in The Lego Movie.

    That sounds like a superficial perspective on the world, but there's also something glorious about it. It's glorious that we have come so far -- that our lives are so secure that we expect them to be full of positive aesthetic experiences and maestro performances, so that we can dismissively say "that sucks!" when those high expectations aren't met.

    --------------------------------------

    For a quite different (but still awesome!) analysis of the sucky and the awesome, check out Nick Riggle's essay "How Being Awesome Became the Great Imperative of Our Time".

    Many thanks to my Facebook friends and followers for the awesome comments and examples on my public post about this last week.

    Wednesday, May 03, 2017

    On Trump's Restraint and Good Judgment (I Hope)

    Yesterday afternoon, I worked up the nerve to say the following to a room full of (mostly) white retirees in my politically middle-of-the-road home town of Riverside, California.

    (I said this after giving a slightly trimmed version of my Jan 29 L.A. Times op-ed What Happens to Democracy If the Experts Can't Be Both Factual and Balanced.)

    Our democracy requires substantial restraints on the power of the chief executive. The president cannot simply do whatever he wants. That's dictatorship.

    Dictatorship has arrived when other branches of government -- the legislature and the judiciary -- are unable to thwart the president. This can happen either because the other branches are populated with stooges or because the other branches reliably fail in their attempts to resist the president.

    President Trump appears to have expressed admiration for undemocratic chief executives who seize power away from judiciaries and legislatures.

    Here's something that could occur. President Trump might instruct the security apparatus of the United States -- the military, the border patrol, police departments -- to do something, for example to imprison or deport groups of people he describes as a threat. And then a judge or a group of judges might decide that Trump's instructions should not be implemented. And Trump might persist rather than deferring. He might insist that the judge or judges who aim to block him are misinterpreting or misusing the law. He might demand that his orders be implemented despite the judicial outcome.

    Here's one reason to think that won't occur: In January, Trump issued an executive order banning travel from seven majority-Muslim countries. When judges decided to block the order, Trump backed down. He insulted the judges and derided the decision, saying it left the nation less safe. But he did not demand that the security apparatus of the United States ignore the decision.

    So that's good.

    Probably Trump will continue to defer to the judiciary in that way. He has not been as aggressive about seizing power as he could have been, if he were set upon maximizing executive power.

    But if, improbably, Trump in the future decides to continue with an order that a judge is attempting to halt -- if, for some reason, Trump decides to insist that the executive branch disregard what he sees as an unwise and unjust judicial decision -- then quite suddenly our democracy would be compromised.

    Democracy depends on the improbable capacity of a few people who sit in courtrooms and study the law to convince large groups of people with guns to do things that those people with guns might not want to do, including things that the people with guns regard as contrary to the best interest of their country and the safety of their communities. It's quite amazing. A few people in black robes -- perhaps themselves with divided opinions -- versus the righteous desires of an army.

    If Trump says do this, and a judge in Hawaii says no, stop, and then Trump says army of mine, ignore that judge, what will the people with the guns do?

    It won’t happen. I don’t think it will happen.

    We as a country have chosen to wager our democracy on Trump's restraint and good judgment.

    [image source]

    Tuesday, May 02, 2017

    Is My Usage of "Crazy" Ableist?

    In 2014, I published a paper titled "The Crazyist Metaphysics of Mind". Since the beginning, I have been somewhat ambivalent about my use of the word "crazy".

    Some of my friends have expressed the concern that my use of "crazy" is ableist. I do agree that the use of "crazy" can be ableist -- for example, when it is used to insult or dismiss someone with a perceived psychological disability.

    I have a new book contract with MIT Press. The working title of the book is "How to Be a Crazy Philosopher". Some of my friends have urged me to reconsider the title.

    I disagree that the usage is ableist, but I am open to being convinced.

    I define a position as "crazy" just in case (1) it is highly contrary to common sense, and (2) we are not epistemically compelled to believe it. "Crazyism" about some domain is the view that something that meets conditions (1) and (2) must be true in that domain. I defend crazyism about the metaphysics of mind, and in some other areas. In these areas, something highly contrary to common sense must be true, but we are not in a good epistemic position to know which of the "crazy" possibilities is the true one. For example, panpsychism might be true, or the literal group consciousness of the United States, or the transcendental ideality of space, or....

    I believe that this usage is not ableist in part because (a) I am using the term with a positive valence, (b) I am not labeling individual people, and (c) the term is often used with a positive valence in our culture when it is not used to label people (e.g., "that's some crazy jazz!", "we had a crazy good time in Vegas"). I'm inclined to think that usages like those are typically morally permissible and not objectionably ableist.

    I welcome discussion, either in comments on this post or by email, if you have thoughts about this.

    Update: On my public post on Facebook, Daniel Estrada writes:

    I think the critical thing is to explicitly acknowledge and appreciate how the term "crazy" has been used to stigmatize and mystify issues around mental health. I don't think it's wrong to use any term, as long as you appreciate its history, and how your use contributes to that history. I think the overlap on "mystification" in your use is the extra prickly thorn in this nest. Contributing an essay (maybe just the preface?) where you address these complications explicitly seems like basic due diligence.

    I like that idea. If I keep the title and the usage, perhaps we can premise further discussion on the assumption that I do something like what Daniel has suggested.

    My Next Book...

    I've signed a contract with MIT Press for my next book. Working title: How to Be a Crazy Philosopher.

    The book will collect, revise, and to some extent integrate selected blog posts, op-eds, and longform journalism pieces, plus some new material. It will not be thematically unified around "crazyism" although of course it will include some of my material on that theme.

    Readers, if any of my posts have struck you as especially memorable and worth including, I'd be interested to hear your opinion, either in the comments to this post or by email.

    -----------------------------------

    Some friends have expressed concerns about my use of "crazy" in the working title, since they view the usage as ableist. I am ambivalent about my use of the word, though I have been on the hook for it since at least 2014, when I published "The Crazyist Metaphysics of Mind". I will now create a separate post for discussion of that issue.

    Thursday, April 27, 2017

    The Happy Coincidence Defense and The-Most-I-Can-Do Sweet Spot

    Here are four things I care intensely about: being a good father, being a good philosopher, being a good teacher, and being a morally good person. It would be lovely if there were never any tradeoffs among these four aims.

    Explicitly acknowledging such tradeoffs is unpleasant -- sufficiently unpleasant that it's tempting to try to rationalize them away. It's distinctly uncomfortable to me, for example, to acknowledge that I would probably be better as a father if I traveled less for work. (I am writing this post from a hotel room in England.) Similarly uncomfortable is the thought that the money I'll be spending on a family trip to Iceland this summer could probably save a few people from death due to poverty-related causes, if given to the right charity.

    Today I'll share two of my favorite techniques for rationalizing the unpleasantness away. Maybe you'll find these techniques useful too!

    The Happy Coincidence Defense. Consider travel for work. I don't have to travel around the world, giving talks and meeting people. It's not part of my job description. No one will fire me if I don't do it, and some of my colleagues do it considerably less than I do. On the face of it, I seem to be prioritizing my research career at the cost of being a somewhat less good father, teacher, and global moral citizen (given the luxurious use of resources and the pollution of air travel).

    The Happy Coincidence Defense says, no, in fact I am not sacrificing these other goals at all! Although I am away from my children, I am a better father for it. I am a role model of career success for them, and I can tell them stories about my travels. I have enriched my life, and then I can mingle that richness into theirs. I am a more globally aware, wiser father! Similarly, although I might cancel a class or two and de-prioritize my background reading and lecture preparation, since research travel improves me as a philosopher, it improves my teaching in the long run. And my philosophical work, isn't that an important contribution to society? Maybe it's important enough to morally justify the expense, pollution, and waste: I do more good for the world traveling around discussing philosophy than I could do leading a more modest lifestyle at home, donating more money to charities, and working within my own community.

    After enough reflection of this sort, it can come to seem that I am not making any tradeoffs at all among these four things I care intensely about. Instead, I am maximizing them all! This trip to England is the best thing I can do, all things considered, as a philosopher and as a father and as a teacher and as a citizen of the moral community. Yay!

    Now that might be true. If so, that would be a happy coincidence. Sometimes there really are such happy coincidences. But the pattern of reasoning is, I think you'll agree, suspicious. Life is full of tradeoffs among important things. One cannot, realistically, always avoid hard choices. Happy Coincidence reasoning has the odor of rationalization. It seems likely that I am illegitimately convincing myself that something I want to be true really is true.

    The-Most-I-Can-Do Sweet Spot. Sometimes people try so hard at something that they end up doing worse as a result. For example, trying too hard to be a good father might make you a father who is overbearing, who hovers too much, who doesn't give his children sufficient distance and independence. Teaching sometimes goes better when you don't overprepare. And sometimes, maybe, moral idealists push themselves so hard in pursuit of their ideals that they would have been better off pursuing a more moderate, sustainable course. For example, someone moved by the arguments for vegetarianism who immediately attempts the very strictest veganism might be more likely to revert to cheeseburger eating after a few months than someone who sets their sights a bit lower.

    The-Most-I-Can-Do Sweet Spot reasoning harnesses these ideas for convenient self-defense: Whatever I'm doing right now is the most I can realistically, sustainably do! Were I to try any harder to be a good father, I would end up being a worse father. Were I to spend any more time reading and writing philosophy than I actually do, I would only exhaust myself. If I gave any more to charity, or sacrificed any more for the well-being of others in my community, then I would... I would... I don't know, collapse from charity-fatigue? Or seethe so much with resentment at how more awesomely moral I am than everyone else that I'd be grumpy and end up doing some terrible thing?

    As with Happy Coincidence reasoning, The-Most-I-Can-Do Sweet Spot reasoning can sometimes be right. Sometimes you really are doing the most you can do about everything you care intensely about. But it would be kind of amazing if this were reliably the case. It wouldn't be that hard for me to be a somewhat better father, or to give somewhat more to my students -- with or without trading off other things. If I reliably think that wherever I happen to be in such matters, that's the Sweet Spot, I am probably rationalizing.

    Having cute names for these patterns of rationalization helps me better spot them as they are happening, I think -- both in myself and sometimes, I admit, somewhat uncharitably, also in others.

    Rather than think of something clever to say as the kicker for this post, I think I'll give my family a call.

    Friday, April 21, 2017

    Common Sense, Science Fiction, and Weird, Uncharitable History of Philosophy

    Philosophers have three broad methods for settling disputes: appeal to "common sense" or culturally common presuppositions, appeal to scientific evidence, and appeal to theoretical virtues like simplicity, coherence, fruitfulness, and pragmatic value. Some of the most interesting disputes are disputes in which all three of these broad methods are problematic and seemingly indecisive.

    One of my aims as a philosopher is to intervene on common sense. "Common sense" is inherently conservative. Common sense used to tell us that the Earth didn't move, that humans didn't descend from ape-like ancestors, that certain races were superior to others, that the world was created by a god or gods of one sort or another. Common sense is a product of biological and cultural evolution, plus the cognitive and social development of people in a limited range of environments. Common sense only has to get things right enough, for practical purposes, to help us manage the range of environments to which we are accustomed. Common sense is under no obligation to get it right about the early universe, the microstructure of matter, the history of the species, future technologies, or the consciousness of weird hypothetical systems we have never encountered.

    The conservatism and limited vision of common sense lead us to dismiss as "crazy" some philosophical and scientific views that might in fact be true. I've argued that this is especially so regarding theories of consciousness, about which something crazy must be true. For example: literal group consciousness, panpsychism, and/or the failure of pain to supervene locally. Although I don't believe that existing arguments decisively favor any of those possibilities, I do think that we ought to restrain our impulse to dismiss such views out of hand. Fit with common sense is one important factor in evaluating philosophical claims, especially when direct scientific evidence and considerations of general theoretical virtue are indecisive, but it is only one factor. We ought to be ready to accept that in some philosophical domains, our commonsense intuitions cannot be entirely preserved.

    Toward this end, I want to broaden our intuitive sense of the possible. The two best techniques I know are science fiction and cross-cultural philosophy.

    The philosophical value of science fiction consists not only in the potential of science fictional speculations to describe possible futures that we might actually encounter. Historically, science fiction has not been a great predictor of the future. The primary philosophical value of science fiction might rather consist in its ability to flex our minds and disrupt commonsense conservatism. After reading far-out stories about weird utopias, uploading into simulated realities, bizarrely constructed intelligent aliens, body switching, Matrioshka Brains, and alternative universes, philosophical speculations about panpsychism and group consciousness no longer seem quite so intolerably weird. At least that's my (empirically falsifiable) conjecture.

    Similarly, brain-flexing is an important part of the value of reading the history of philosophy -- especially work from traditions other than those with which you are already familiar. Here it's especially important not to be too "charitable" (i.e. assimilative). Relish the weirdness -- "weird" from your perspective! -- of radical Buddhist metaphysics, of medieval Chinese neo-Confucianism, of neo-Platonism in late antiquity, of 19th century Hegelianism and neo-Hegelianism.

    If something that seems crazy must be true about the metaphysics of consciousness, or about the nature of objects and causes, or about the nature of moral value -- as extended philosophical discussions of these topics suggest probably is the case -- then to evaluate the possibilities without excess conservatism, we need to get used to bending our minds out of their usual ruts.

    This is my new favorite excuse for reading Ted Chiang, cyberpunk, and Zhuangzi.

    [image source]

    Friday, April 14, 2017

    We Who Write Blogs Recommend... Blogs!

    Here's The 20% Statistician, Daniel Lakens, on why blogs have better science than Science.

    Lakens observes that blogs (usually) have open data, sources, and materials; open peer review; no eminence filter; easy error correction; and open access.

    I would add that blogs are designed to fit human cognitive capacities. To reach a broad audience, they are written to be broadly comprehensible -- and as it turns out, that's a good thing for science (and philosophy), since it reduces the tendency to hide behind jargon, technical obscurities, and dubious shared subdisciplinary assumptions. The length of a typical substantive blog post (500-1500 words) is also, I think, a good size for human cognition: long enough to have some meat and detail, but short enough that the reader can keep the entire argument in view. These features make blog posts much easier to critique, enabling better evaluation by specialists and non-specialists alike.

    Someone will soon point out, for public benefit, the one-sidedness of Lakens' and my arguments here.

    [HT Wesley Buckwalter]

    Sunday, April 09, 2017

    Does It Matter If the Passover Story Is Literally True?

    My opinion piece in today's LA Times.

    You probably already know the Passover story: How Moses asked Pharaoh to let his enslaved people leave Egypt, and how Moses’ god punished Pharaoh — bringing about the death of the Egyptians’ firstborn sons even as he passed over Jewish households. You might even know the ancillary tale of the Passover orange. How much truth is there in these stories? At synagogues this time of year, myth collides with fact, tradition with changing values. Negotiating this collision is the puzzle of modern religion.

    Passover is a holiday of debate, reflection, and conversation. Last Passover, as my family and I and the rest of the congregation waited for the feast at our Reform Jewish temple, our rabbi prompted us: “Does it matter if the story of Passover isn’t literally true?”

    Most people seemed to shake their heads. No, it doesn’t matter.

    I was imagining the Egyptians’ sons. I am an outsider to the temple. My wife and teenage son are Jewish, but I am not. My 10-year-old daughter, adopted from China at age 1, describes herself as “half Jewish.”

    I nodded my head. Yes, it does matter if the Passover story is literally true.

    “Okay, Eric, why does it matter?” Rabbi Suzanne Singer handed me the microphone.

    I hadn’t planned to speak. “It matters,” I said, “because if the story is literally true, then a god who works miracles really exists. It matters if there is such a god or not. I don’t think I would like the moral character of that god, who kills innocent Egyptians. I’m glad there is no such god.”

    “It is odd,” I added, “that we have this holiday that celebrates the death of children, so contrary to our values now.”

    The microphone went around, others in the temple responding to me. Values change, they said. Ancient war sadly and necessarily involved the death of children. We’re really celebrating the struggle for freedom for everyone....

    Rabbi Singer asked if I had more to say in response. My son leaned toward me. “Dad, you don’t have anything more to say.” I took his cue and shut my mouth.

    Then the Seder plates arrived with the oranges on them.

    Seder plates have six labeled spots: two bitter herbs, charoset (fruit and nuts), parsley, a lamb bone, a boiled egg, each with symbolic value. There is no labeled spot for an orange.

    The first time I saw an orange on a Seder plate, I was told this story about it: A woman was studying to be a rabbi. An orthodox rabbi told her that a woman belongs on the bimah (pulpit) like an orange belongs on the Seder plate. When she became a rabbi, she put an orange on the plate.

    A wonderful story — a modern, liberal story. More comfortable than the original Passover story for a liberal Reform Judaism congregation like ours, proud of our woman rabbi. The orange is an act of defiance, a symbol of a new tradition that celebrates gender equality.

    Does it matter if it’s true?

    Here’s what actually happened. Dartmouth Jewish Studies professor Susannah Heschel was speaking to a Jewish group at Oberlin College in Ohio. The students had written a story in which a girl asks a rabbi if there is room for lesbians in Judaism, and the rabbi rises in anger, shouting, “There’s as much room for a lesbian in Judaism as there is for a crust of bread on the Seder plate!” Heschel, inspired by the students but reluctant to put anything as unkosher as leavened bread on the Seder plate, used a tangerine instead.

    The orange, then, is not a wild act of defiance, but already a compromise and modification. The shouting rabbi is not an actual person but an imagined, simplified foe.

    It matters that it’s not true. From the two stories of the orange, we learn the central lesson of Reform Judaism: that myths are cultural inventions built to suit the values of their day, idealizations and simplifications, changing as our values change — but also that only limited change is possible in a tradition-governed institution. An orange, but not a crust of bread.

    In a way, my daughter and I are also oranges: a new type of presence in a Jewish congregation, without a marked place, welcomed this year, unsure we belong, at risk of rolling off.

    In the car on the way home, my son scolded me: “How could you have said that, Dad? There are people in the congregation who take the Torah literally, very seriously! You should have seen how they were looking at you, with so much anger. If you’d said more, they would practically have been ready to lynch you.”

    Due to the seating arrangement, I had been facing away from most of the congregation. I hadn’t seen those faces. Were they really so outraged? Was my son telling me the truth on the way home that night? Or was he creating a simplified myth of me?

    In belonging to an old religion, we honor values that are no longer entirely ours. We celebrate events that no longer quite make sense. We can’t change the basic tale of Passover. But we can add liberal commentary to better recognize Egyptian suffering, and we can add a new celebration of equality.

    Although the new celebration, the orange, is an unstable thing atop an older structure that resists change, we can work to ensure that it remains. It will remain only if we can speak the story of it compellingly enough to give our new values too the power of myth.

    -------------------------------------

    Revised and condensed from my blogpost Orange on the Seder Plate (Apr 27, 2016).

    Wednesday, April 05, 2017

    Only 4% of Editorial Board Members of Top-Ranked Anglophone Philosophy Journals Are from Non-Anglophone Countries

    If you're an academic aiming to reach a broad international audience, it is increasingly the case that you must publish in English. Philosophy is no exception. This trend gives native English speakers an academic advantage: They can more easily reach a broad international audience without having to write in a foreign language.

    A related question is the extent to which people who make their academic home in Anglophone countries control the English-language journals in which so much of our scholarly communication takes place. One could imagine the situation either way: Maybe the most influential academic journals in English are almost exclusively housed in Anglophone countries and have editorial boards almost exclusively composed of people in those same countries; or maybe English-language journals are a much more international affair, led by scholars from a diverse range of countries.

    To examine this question, I looked at the editorial boards of the top 15 ranked journals in Brian Leiter's 2013 poll of "top philosophy journals without regard to area". I noted the primary institution of every board member. (For methodological notes see the supplement at the end.)

    In all, 564 editorial board members were included in the analysis. Of these, 540 (96%) had their primary academic affiliation with an institution in an Anglophone country. Only 4% of editorial board members had their primary academic affiliation in a non-Anglophone country.

    The following Anglophone countries were represented:

    USA: 377 philosophers (67% of total)
    UK: 119 (21%)
    Australia: 26 (5%)
    Canada: 13 (2%)
    New Zealand: 5 (1%)

    The following non-Anglophone countries were represented:

    Germany: 6 (1%)
    Sweden: 5 (1%)
    Netherlands: 3 (1%)
    China (incl. Hong Kong): 2 (<1%)
    France: 2 (<1%)
    Belgium: 1 (<1%)
    Denmark: 1 (<1%)
    Finland: 1 (<1%)
    Israel: 1 (<1%)
    Singapore: 1 (<1%) [N.B.: English is one of four official languages]
    Spain: 1 (<1%)

    Worth noting: Synthese showed much more international participation than any of the other journals, with 13/31 (42%) of its editorial board from non-Anglophone countries.
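    For readers who want to double-check the arithmetic, the reported percentages can be recomputed from the per-country counts listed above. Here is a minimal Python sketch using the figures exactly as given in the post (the variable names are my own, not from the original analysis):

```python
# Per-country counts of editorial board members, as reported above.
anglophone = {"USA": 377, "UK": 119, "Australia": 26, "Canada": 13, "New Zealand": 5}
non_anglophone = {
    "Germany": 6, "Sweden": 5, "Netherlands": 3, "China (incl. Hong Kong)": 2,
    "France": 2, "Belgium": 1, "Denmark": 1, "Finland": 1,
    "Israel": 1, "Singapore": 1, "Spain": 1,
}

total = sum(anglophone.values()) + sum(non_anglophone.values())
print(total)  # 564 board members in all

# Share of board members based in Anglophone countries.
print(round(100 * sum(anglophone.values()) / total))  # 96

# Per-country shares, rounded to whole percents as in the post.
for country, n in anglophone.items():
    print(country, round(100 * n / total))

# Synthese's non-Anglophone share: 13 of 31 board members.
print(round(100 * 13 / 31))  # 42
```

Running this reproduces the totals and rounded percentages reported in the post (540 of 564, or 96%, Anglophone; USA at 67%; and so on).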

    It seems to me that if English is to continue in its role as the de facto lingua franca of philosophy (ironic foreign-language use intended!), then the editorial boards of the most influential journals ought to reflect substantially more international participation than this.

    -------------------------------------------------

    Related Posts:

    How Often Do Mainstream Anglophone Philosophers Cite Non-Anglophone Sources? (Sep 8, 2016)

    SEP Citation Analysis Continued: Jewish, Non-Anglophone, Queer, and Disabled Philosophers (Aug 14, 2014)

    -------------------------------------------------

    Methodological Notes:

    The 15 journals were Philosophical Review, Journal of Philosophy, Nous, Mind, Philosophy & Phenomenological Research, Ethics, Philosophical Studies, Australasian Journal of Philosophy, Philosopher's Imprint, Analysis, Philosophical Quarterly, Philosophy & Public Affairs, Philosophy of Science, British Journal for the Philosophy of Science, and Synthese. Some of these journals are "in house" or have a regional focus in their editorial boards. I did not exclude them on those grounds. It is relevant to the situation that the two top-ranked journals on this list are edited by the faculty at Cornell and Columbia respectively.

    I excluded editorial assistants and managers without full-time permanent academic appointments (typically grad students or publishing or secretarial staff). I included editorial board members, managers, consultants, and staff with full-time permanent academic appointments, including emeritus.

    I used the institutional affiliation listed at the journal's "editorial board" website when that was available (even in a few cases where I knew the information to be no longer current), otherwise I used personal knowledge or a web search. In each case, I tried to determine the individual's primary institutional affiliation or most recent primary affiliation for emeritus professors. In a few cases where two institutions were about equally primary, I used the first-listed institution either on the journal's page or on a biographical or academic source page that ranked highly in a Google search for the philosopher.

    I am sure I have made some mistakes! I've made the raw data available here. I welcome corrections. However, I will only make corrections in accord with the method above. For example, it is not part of my method to update inaccurate affiliations on the journals' websites. Trying to do so would be unsystematic, disproportionately influenced by blog readers and people in my social circle.

    A few mistakes are inevitable in projects of this sort and shouldn't have a large impact on the general findings.

    -------------------------------------------------

    [image source]

    Thursday, March 30, 2017

    On Being Accused of Ableism

    Like many (most?) 21st-century North Americans, I hate to be told I’ve done something ableist (or racist, or sexist). Why does it sting so much, and how should I think about such a charge, when it is leveled against me?

    Short answer: It stings so much because it’s usually partly, if only partly, true—and partly true criticisms are the ones that sting worst. And the best reaction to the charge is, usually, to recognize its partial, if only partial, truth.

    First, let’s remind ourselves of a quote from the great Confucius:

    How fortunate I am! If I happen to make a mistake, others are sure to inform me.
    (Analects 7.31, Slingerland trans.)

    (As it happens, bloggers are fortunate in just the same way.)

    Confucius might have been speaking partly ironically in that particular passage. A couple of centuries later, another Confucian, Xunzi, speaks not at all ironically:

    He who rightly criticizes me acts as a teacher to me, and he who rightly supports me acts as friend to me, while he who flatters and toadies to me acts as a villain toward me. Accordingly, the gentleman exalts those who act as teachers toward him....
    (ch 2, Hutton trans., p. 9)

    This is difficult advice to heed.

    Note, though: If I make a mistake. He (she, they) who rightly criticizes me. Someone who criticizes me wrongly is no teacher, only an annoying pest! And if you’re anything like me, then your gut reaction to charges of ableism will usually be to want to swat back at the pest, to assume, defensively, that the criticism must be off-target, because of course you’re a good egalitarian, committed to fighting unjustified prejudice!

    No. Here’s the thing. We all have ableist reactions and engage in ableist practices sometimes, to some degree. Disability is so various, and the ableist structures of our culture so deep and pervasive, that it would be superhuman to be immune. Maybe you are immune to ableism toward people who use wheelchairs. Maybe your partner of many years uses a wheelchair and you see wheelchair-use as just one of the many diverse human ways of comporting oneself, with its challenges and (sometimes) benefits, just like every other way of getting around. But how do you react to someone who stutters? How do you react to someone who is hard of hearing? How do you react to someone with depression or PTSD? Someone with facial burns or another skin condition you find unappealing? Or a very short man? What sorts of social structures do you manifest and reinforce in your behavior? In your choice of words? In your implicit assumptions? In what you expect (and don’t expect) people to be able to do?

    Here’s my guess: You don’t always act in ways that are free of unjustified prejudice. If someone calls you out on ableism, they might well be right.

    You might sincerely and passionately affirm that "all people are equal"—whatever that amounts to, which is really hard to figure out!—and you might even pay some substantial personal costs for the sake of a more just and equal society. In this respect, you are not ableist. You are even anti-ableist. But you are not a unified thing. Unless you are an angel walking upon the Earth, our society’s ableism acts through you.

    An absurd charge does not sting. If someone tells me I spend too much time watching soccer, the charge is merely ridiculous. I don’t watch soccer. But if someone charges me with ableism, the partial truth of it does sting, or at least the plausibility of it stings. Maybe I shouldn’t have used the particular word that I used. Maybe I shouldn’t have made that particular assumption or dismissed that particular person. Maybe, deep down, I’m not the egalitarian I thought I was. Ouch.

    Your ableist actions and reactions can be hard to recognize and admit if you implicitly assume that people have unified attitudes. If people have unified attitudes, they are either prejudiced against disabled people or they are not. If people have unified attitudes, then evidence of ableist behavior is evidence that you are one of the prejudiced, one of the bad guys. No one wants to think that about themselves. If people have unified attitudes, then it’s easy to assume that because you explicitly reject ableism you cannot be simultaneously enacting the very ableism that you are fighting against.

    [Image description: psychedelic art "shifting realities", explosion of mixing colors, white on right through blue on the left]

    The best empirical evidence suggests that people are highly disunified—inconstant across situations, capable of both great sacrifice and appalling misbehavior, variable in word and deed, spontaneously enacting our cultural practices for both good and bad. If this is true, then you ought to expect that charges of ableism against you will sometimes stick. You should be unsurprised if they do. But you should also celebrate that these charges are only very partial: The whole you is not like that! The whole you is a tangled chaos with many beautiful, admirable parts!

    If you accept your disunity, you ought also to be forgiving. You ought to be forgiving especially if you cast your eye more broadly to the many forms of prejudice and injustice in which we participate. Suppose, impossibly, that you were utterly free of any ableist tendencies, practices, or background assumptions. It would be a huge life project to achieve that. Are you equally free of racism, classism, sexism, ageism, bias against those who are not conventionally beautiful? Are you saving the environment, fighting international poverty, phoning your senators about prisons and wage justice, volunteering in your community?

    We must pick our projects. A more vivid appreciation of our own disunity, flaws, and abandoned good intentions ought to make us both more ready to see the truth in charges of prejudice against us and also more forgiving of the disunity, flaws, and abandoned good intentions in others.

    [image source]

    [Cross-posted at Discrimination and Disadvantage; HT Shelley Tremain for the invitation and editorial feedback]

    Wednesday, March 22, 2017

    What Kinds of Universities Lack Philosophy Departments? Some Data

    University administrators sometimes think it's a good idea to eliminate their philosophy departments. Some of these efforts have been stopped, others not. This has led me to wonder how prevalent philosophy departments are in U.S. colleges and universities, and how their presence or absence relates to institution type.

    Here's what I did. I pulled every ranked college and university from the famous US News college ranking site, sorting them into four categories: national universities, national liberal arts colleges, regional universities (combining the four US News categories for regional universities: north, south, midwest, and west), and regional colleges (again combining north, south, midwest, and west). I randomly selected twenty schools from each of these four lists. Then I attempted to determine from the school's website whether it had a philosophy department and a philosophy major. [See note 1 on "departments".]

    Since some schools combine philosophy with another department (e.g. "Philosophy and Religion") I distinguished standalone philosophy departments from combined departments that explicitly mention "philosophy" in the department name along with something else.

    I welcome corrections! The websites are sometimes a little confusing, so it's likely that I've made an error or two.
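    For readers curious about the mechanics, the sampling procedure above amounts to simple stratified random sampling: twenty schools drawn without replacement from each of the four category lists. Here is a minimal illustrative sketch in Python; the school names are placeholders, not the actual US News lists.

```python
import random

# Placeholder stand-ins for the four US News category lists.
# (The real lists would be scraped or copied from the ranking site.)
us_news_lists = {
    "national_universities": [f"NatU-{i}" for i in range(1, 201)],
    "national_liberal_arts": [f"NLAC-{i}" for i in range(1, 181)],
    "regional_universities": [f"RegU-{i}" for i in range(1, 301)],  # N/S/MW/W combined
    "regional_colleges": [f"RegC-{i}" for i in range(1, 121)],      # N/S/MW/W combined
}

random.seed(2017)  # fix the seed so the draw is reproducible

# Draw twenty schools from each category, without replacement
sample = {category: random.sample(schools, 20)
          for category, schools in us_news_lists.items()}

for category, schools in sample.items():
    print(category, len(schools))
```

    Each sampled school would then be checked by hand against its own website for a philosophy department and major, as described above.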

    ***************************************************

    Results

    National Universities:

    Eighteen of the twenty sampled "national universities" have standalone philosophy departments (or equivalent: note 1) and majors. The only two that do not are institutes of technology: Georgia Tech (ranked #34) and Florida Tech (#171).

    Virginia Tech (#74), however, does have a Department of Philosophy and a philosophy major -- as do Stanford, Duke, Rice, Rochester, Penn State, UT Austin, Rutgers-New Brunswick, Baylor, U Mass Amherst, Florida State, Auburn, Kansas, Biola, Wyoming (for now), North Carolina-Charlotte, Missouri-St Louis, and U Mass Boston.

    National Liberal Arts Colleges:

    Similarly, seventeen of the twenty sampled "national liberal arts colleges" have standalone philosophy departments, and eighteen offer the philosophy major. Offering neither department nor major are Virginia Military Institute (#72) and the very small science/engineering college Harvey Mudd (#21) (circa 735 students, part of the Claremont consortium). Beloit College (#62, circa 1358 students) offers the philosophy major within a "Department of Philosophy and Religious Studies".

    The seventeen sampled schools with both major and standalone department are: Swarthmore, Carleton, Hamilton, Wesleyan, Richmond, DePauw, Puget Sound, Westmont, Hollins, Lake Forest, Stonehill, Hanover, Guilford, Carthage, Oglethorpe, Franklin (not to be confused with Franklin & Marshall), and Georgetown College (not to be confused with Georgetown University).

    Some of these colleges are very small. According to Wikipedia estimates, two have fewer than a thousand students: Hollins (639) and Georgetown (984). Another four are below 1300: Franklin (1087), Hanover (1133), Oglethorpe (1155), and Westmont (1298).

    Regional Universities:

    Nine of the twenty sampled regional universities have standalone philosophy departments, and another three have a combined department with philosophy in its name. Twelve offer the philosophy major (not exactly the same twelve). Seven offer neither major nor department: Ramapo College of New Jersey, Wentworth Institute of Technology, Delaware Valley University, Stephens College, Mount St Joseph, Elizabeth City State, and Robert Morris. Two of these are specialty schools: Wentworth is a technical institute, and Stephens specializes in arts and fashion.

    Offering major and/or standalone or combined department: Simmons, Whitworth, Mansfield of Pennsylvania, Rosemont, U of Northwestern-St Paul, Central Washington, Towson, Gannon, North Park, Wisconsin-Oshkosh, Northern Michigan, Mount Mary, and Appalachian State.

    Regional Colleges:

    Seven of the twenty sampled regional colleges have a standalone philosophy department, and another four have a combined department with philosophy in its name. Seven offer a philosophy major, and one (Brevard) has a "Philosophy and Religion" major. Offering neither major nor department: California Maritime Academy, Marymount California U (not to be confused with Loyola Marymount), Paul Smith's College (not to be confused with Smith College), Alderson Broaddus, Dickinson State, North Carolina Wesleyan, Crown College, and Iowa Wesleyan. Four of these are specialty schools: California Maritime Academy and Marymount California each offer only six majors total, Paul Smith's focuses on tourism and service industries, and Iowa Wesleyan offers only three Humanities majors: Christian Studies, Digital Media Design, and Music.

    Offering major and/or standalone or combined department: Carroll, Mount Union, Belmont Abbey, La Roche, St Joseph's, Blackburn, Messiah, Tabor, Ottawa University (not to be confused with University of Ottawa), Northwestern College (not to be confused with Northwestern University), and Cazenovia College.

    Summary

    In my sample of forty nationally ranked universities and liberal arts colleges, each one has a standalone philosophy department and offers a philosophy major, with the following exceptions: three science/engineering specialty schools, one military institute, and one school offering a philosophy major within a department of "Philosophy and Religious Studies".

    Even among the smallest nationally ranked liberal arts colleges, with 1300 or fewer students, all have philosophy majors and standalone philosophy departments (or similar administrative units), with the exception of one science/engineering specialty college.

    The schools that US News describes as "regional" are mixed. In this sample of forty, about half offer philosophy majors and about half have standalone philosophy departments. Among the fifteen with neither department nor major in philosophy, six are specialty schools.

    I'll refrain from drawing causal or normative conclusions here.

    ***************************************************

    Update 8:53 a.m.: Expanding the Sample:

    I'm tempted to conclude that, with the exception of specialty schools, almost every nationally ranked university and liberal arts college, no matter how small, has a philosophy major and a large majority have a standalone philosophy department. But maybe that's too strong a claim to draw from a sample of forty? So I've doubled the sample.

    Doubling the sample supports this claim. Among the additional twenty universities sampled, nineteen offer the philosophy major, and the one that does not, UC Merced, is a new campus that plans to add the philosophy major soon. Sixteen have standalone Philosophy Departments, and three have combined departments: Philosophy and Religion at Northeastern and Tulsa, Politics and Philosophy at University of Idaho. The sampled universities with both standalone philosophy departments and the philosophy major are Tennessee, Nevada-Reno, Colorado State, South Dakota, New Mexico, Dartmouth, UC San Diego, U of Oregon, Columbia, Indiana-Bloomington, Kentucky, Alabama-Huntsville, Brandeis, George Washington, Azusa Pacific, and UC Riverside.

    Adding twenty more nationally ranked liberal arts colleges also confirms my initial results. Nineteen offer the major, with the only exception being Thomas Aquinas College, which appears to offer only one major to all students (Liberal Arts). Three colleges have combined departments, all with religion: Washington College, Wartburg, and College of Idaho. Sixteen have both major and standalone department: Wooster, Wheaton, Hampden-Sydney, Muhlenberg, Houghton, Colgate, Middlebury, Washington & Lee, New College of Florida, Transylvania, Sweet Briar, Knox College, Colorado College, Oberlin, Luther, and Pomona.

    ***************************************************

    Note 1: Some schools don't appear to have "departments" or have very broad "departments" that encompass many majors. If a school had fewer than fifteen "departments" I attempted to assess whether it had a department-like administrative unit for philosophy, or if that assessment wasn't possible, whether it hosted a philosophy major apparently on administrative par with popular majors like psychology and biology.

    [image source]

    Thursday, March 16, 2017

    My Defense of Anger and Empathy: Flanagan's, Bloom's, and Others' Responses

    Last week I posted a defense of anger and empathy against recent critiques by Owen Flanagan and Paul Bloom. The post drew a range of lively responses in social media, including from Flanagan and Bloom themselves.

    My main thought was just this: Empathy and anger are part of the rich complexity of our emotional lives, intrinsically valuable insofar as having rich emotional lives is intrinsically valuable.

    We can, of course, also debate the consequences of empathy and anger, as Flanagan and Bloom do -- and if the consequences of one or the other are bad enough we might be better off in sum without them. But we shouldn't look only at consequences. There is also an intrinsic value in having a rich emotional life, including anger and empathy.


    1. Adding Nuance.

    I have presented Flanagan's and Bloom's views simply: Flanagan and Bloom argue against anger and empathy, respectively. Their detailed views are more nuanced, as one might expect. One interpretive question is whether it is fair to set aside this nuance in critiquing their views.

    Well, how do they themselves summarize their views?

    Flanagan argues in defense of the Stoic and Buddhist program of entirely "eliminating" or "extirpating" anger, against mainstream "containment" views which hold that anger is a virtue when it is moderate, appropriate to the situation, and properly contained (p. 160). Although this is where he puts his focus and energy, he adds a few qualifications like this: "I do not have a firm position [about the desirability of entirely extirpating anger]. I am trying to explore varieties of moral possibility that we rarely entertain, but which might be genuine possibilities for us" (p. 215).

    Bloom titles his book Against Empathy. He says that "if we want to make the world a better place, then we are better off without empathy" (p. 3) and "On balance, empathy is a negative in human affairs" (p. 13). However, Bloom also allows that he wouldn't want to live in a world without empathy, anger, shame, or hate (p. 9). At several points, he accepts that empathy can be pleasurable and play a role in intimate relationships.

    It's helpful to distinguish between the headline view and the nuanced view.

    Here's what I think the typical reader -- including the typical academic reader -- recalls from their reading, two weeks later: one sentence. Maybe "Bloom is against empathy because it's so biased and short-sighted". Maybe "Flanagan thinks we should try to eliminate anger, like a Buddhist or Stoic sage". These are simplifications, but they come close enough to how Bloom and Flanagan summarize and introduce their positions that it's understandable if that's how readers remember their views. In writing academic work, especially academic work for a broad audience, it's crucial to keep our eye on the headline view -- the practical, memorable takeaway that is likely to be the main influence on readers' thoughts down the road.

    As an author, you are responsible for both the headline view and the nuanced view. Likewise, as a critic, I believe it's fair to target the headline view as long as one also acknowledges the nuance beneath.

    In their friendly replies on social media, both Bloom and Flanagan seemed to acknowledge the value of engaging first at the headline level; but they both also pushed me on the nuance.

    Hey, before I go farther, let me not forget to be friendly too! I loved both these books. Of course I did. Otherwise, I wouldn't have spent my time reading them cover-to-cover and critiquing them. Bloom and Flanagan challenge my presuppositions in helpful ways, and my thinking has advanced in reacting to them.

    For more on the downsides of nuance, see Kieran Healy.

    2. Bloom's Response.

    In this tweet, Bloom appears to be suggesting that empathy is fine as long as you don't use it to guide moral judgment. (He makes a similar claim in a couple of Facebook comments on my post.) Similarly, at the end of his book, he says he worries "that I have given the impression that I am against empathy" (p. 240). An understandable worry, given the title of his book! (I am sure he is aware of this and speaking partly tongue in cheek.) He clarifies that he is against empathy "only in the moral domain... but there is more to life than morality" (p. 240-241). Empathy, he says, can be an immense source of pleasure.

    The picture seems to be that the world would be morally better without empathy, but that there can be excellent selfish reasons to want to experience empathy nonetheless.

    If the picture here is that there are some decisions to which morality is irrelevant and that it's fine to be guided by empathy in those decisions, I would object as follows. Every decision is a moral decision. Every dollar you spend on yourself is a dollar that could instead be donated to a good cause. Every minute you spend is a minute you could have spent doing something kinder or more helpful than what you actually did. Every person you see, you could greet warmly or grumpily, give them a kind word or not bother. Of course, it's exhausting to think this way! But still, there is I believe no such thing as a morally innocent choice. If you purge empathy from moral decision-making, you purge it from decision-making.

    Here's what seems closer to right, to me -- and what I think is one of the great lessons of Bloom's book. Public policy decisions and private acts directed toward distant strangers (e.g., what charities to support) are perhaps, on average, better made in a mood of cool rationality, to the extent that is possible. But it's different for personal relationships. Bloom argues that empathy might make us "too-permissive parents and too-clingy friends" (p. 163). This is a possible risk, sure. Sometimes empathic feelings should be set aside or even suppressed. Of course, there are risks to attempting to set aside empathy in favor of cool rationality as well (see, e.g., Lifton on Nazi doctors). Let's not over-idealize either process! In some cases, it might be morally best to experience empathy and to be able to act otherwise if necessary, rather than not to feel empathy.

    Furthermore, it might be partly constitutive of the syndrome of full-bodied friendship and loving-parenthood that one is prone to empathy. I am Aristotelian or Confucian enough to see the flourishing of such relationships as central to morality.


    3. Flanagan's Response.

    On Facebook, Flanagan also added nuance to his view, writing:

    There are varieties of anger. 1. Payback anger - you hurt me, I hurt you; 2. Pain-passing -- I am hurting (not because of you) I pass pain to you. 3. Instrumental anger. I aim you to get you to do what is right (this might hurt your feelings etc. but that is not my aim; 4 Political anger. I am outraged at racist or sexist etc. practices and want them to end; 5. Impersonal anger. At the gods or heaven for awful states of affairs, the dying child. I am concerned about 1 & 2. I worry about 3-4 if and when the desire to pass pain or payback gets too much of a grip....

    This is helpful -- and also not entirely Buddhist or Stoic (which of course is fine, especially since Flanagan presented his earlier arguments against anger as only something worth exploring rather than his final view).

    In his thinking on this, Flanagan has partly been influenced by Myisha Cherry's and others' work on anger as a force for social change.

    I appreciate the defense of anger as a path toward social justice. But I also want to defend anger's intrinsic value, not just its instrumental value; and specifically I want to defend the intrinsic value of payback anger.

    The angry jerk is an ugly thing. Grumping around, feeling his time is being wasted by the incompetent fools around him, feeling he hasn't been properly respected, enraged when others' ends conflict with his own. He should settle down, maybe try some empathy! But consider, instead, the angry sweetheart.

    I see the "sweetheart" as the opposite of the jerk -- someone who is spontaneously and deeply attuned to the interests, values, and attitudes of other people, full of appreciation, happy to help, quick to believe that he rather than the other might be in the wrong, quick to apologize, and, in extreme cases, so attuned to others' perspectives that he risks losing track of his own interests, values, and attitudes. SpongeBob SquarePants, Forrest Gump, sweet subordinate sitcom mothers from the 1950s and 1960s. These people don't feel enough anger. We should, I think, cheer their anger when it finally rises. We should let them relish their anger, the sense that they have been harmed and that the wrongdoer should pay them back.

    I don't want sweethearts always to be bodhisattvas toward those who wrong them. Anger manifests the self-respect that they should claim, and it's part of the emotional range of experience that they might have too little of.


    4. More.

    Shoot, I've already gone on longer than intended, and I haven't got to all the comments by others that I'd wanted to address! Just quickly:

    Some people suggested that eliminating anger might open up other ranges of emotion, in the right kind of sage. Interesting thought! I'd also add that there's a kind of between-person richness that I'd celebrate. If sages can eliminate anger as a great personal and moral accomplishment, I think that's wonderful. My concern is more with the ideal of a blanket extirpation as general advice.

    Some people pointed out that the anger of the oppressed is particularly worth cultivating -- and that there may even be whole communities of oppressed people who feel too little anger. Yes!

    Others wondered about whether I would favor adding brand-new unheard-of negative emotions just to improve our emotional range. This would make a fascinating speculative fiction thought experiment.

    More later, I hope. In addition to the comments section at The Splintered Mind, the public Facebook conversation was lively and fun.

    [image source]