Wednesday, November 22, 2017

Yay Boo Strange Hope

Happy (almost) Thanksgiving (in the U.S.)! I want to share a recent family tradition which might help you through some of those awkward conversations with others around the table. We call it Yay Boo Strange Hope.

The rules:

(1.) Sit in a circle (e.g., around the dinner table).

(2.) Choose a topic. For example: your school day or workday, wilderness camping, Star Wars, the current state of academic philosophy.

(3.) Arbitrarily choose someone to go first.

(4.) That person says one good thing about the topic (the Yay), one bad thing (the Boo), one weird or unexpected thing (the Strange), and some wish for the future related to the topic in question (the Hope).

(5.) Interruptions for clarificatory questions are encouraged.

(6.) Sympathetic cheers and hisses are welcome, or brief affirmations like "that stinks" or "I agree!" But others shouldn't take the thread off in their own direction. Keep the focus on the opinions and experiences of the person whose turn it is.

(7.) Repeat with the next person clockwise around the circle until everyone has had a turn.

Some cool things about this game:

* It is modestly successful in getting even monosyllabic teenagers talking a bit. Usually they can muster at least a laconic Yay, Boo, Strange, and Hope about their day or about a topic of interest to them.

* It gives quiet people a turn at the center of the conversation, and discourages others from hijacking the thread.

* Yay Boo Strange Hope typically elicits less predictable and more substantive responses than bland questions like, "So what happened at school today?" Typically, you'll hear about at least three different events (the Yay, Boo, and Strange), and one of those events (the Strange) is likely to be novel.

* The Boo gives people an excuse to complain (which most people enjoy) and the Yay forces people to find a bright side even on a topic where their opinion is negative.

* By ending on Hope, each person's turn usually concludes on an up note or a joke.


When I was touring Pomona College with my son in the summer of 2016, I overheard another group's tour guide describing something like this game as a weekly ritual among her dormmates. I suspect the Pomona College version differs somewhat from my family's version, since I only partly overheard it and our practice has evolved over time. If you know variations of this game, I'd be interested to hear from you in the comments.

Thursday, November 16, 2017

A Moral Dunning-Kruger Effect?

In a famous series of experiments, Justin Kruger and David Dunning found that people who scored in the lowest quartile of skill in grammar, logic, and (yes, they tried to measure this) humor tended to substantially overestimate their abilities, rating themselves as a bit above average in these skills. In contrast, people in the top half of ability had more accurate estimations (even tending to underestimate a bit). The average participant in each quartile rated themselves as above average, and the correlation between self-rated skill and measured skill was small.

For example, here's Kruger and Dunning's chart for logic ability and logic scores:

(Kruger & Dunning 1999, p. 1129).

Kruger and Dunning's explanation is that poor skill at (say) logical reasoning not only impairs one's performance at logical reasoning tasks but also impairs one's ability to evaluate one's own performance at logical reasoning tasks. You need to know that affirming the consequent is a logical error in order to realize that you've just committed a logical error in affirming the consequent. Otherwise, you're likely to think, "P implies Q, and Q is true, so P must be true. Right! Hey, I'm doing great!"
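The point that catching the error requires the very skill being tested can be made concrete with a quick truth-table check. This is my own illustrative sketch, not anything from Kruger and Dunning: it mechanically searches for an assignment where both premises of affirming the consequent are true but the conclusion is false.

```python
from itertools import product

def implies(p, q):
    # Material conditional: "P implies Q" is false only when P is true and Q is false.
    return (not p) or q

# Affirming the consequent: from "P implies Q" and "Q", infer "P".
# The inference is invalid if some assignment makes both premises true
# while the conclusion is false.
counterexamples = [
    (p, q) for p, q in product([True, False], repeat=2)
    if implies(p, q) and q and not p
]

print(counterexamples)  # [(False, True)] -- premises true, conclusion false
```

The single counterexample (P false, Q true) is exactly what someone who lacks the relevant logical skill fails to notice.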

Although popular presentations of the Dunning-Kruger effect tend to generalize it to all skill domains, it seems unlikely that it does generalize universally. In domains where evaluating one's success doesn't depend on the skill in question, and instead depends on simpler forms of observation and feedback, one might expect more realistic self-evaluations by novices. (I haven't noticed a clear, systematic discussion of cases where Dunning-Kruger doesn't apply, though Kahneman & Klein 2009 is related; tips welcome.) For example: footraces. I'd wager that people who are slow runners don't tend to think that they are above average in running speed. They might not have perfect expectations; they might show some self-serving optimistic bias (Taylor & Brown 1988), but we probably won't see the almost flat line characteristic of Dunning-Kruger. You don't have to be a fast runner to evaluate your running speed. You just need to notice that others tend to run faster than you. It's not like logic, where skill at the task and skill at self-evaluation are closely related.

So... what about ethics? Ought we to expect a moral Dunning-Kruger Effect?

My guess is: yes. Evaluating one's own ethical or unethical behavior is a skill that itself depends on one's ethical abilities. The least ethical people are typically also the least capable of recognizing what counts as an ethical violation and how serious the violation is -- especially, perhaps, when thinking about their own behavior. I don't want to over-commit on this point. Certainly there are exceptions. But as a general trend, this strikes me as plausible.

Consider sexism. The most sexist people tend to be the people least capable of understanding what constitutes sexist behavior and what makes sexist behavior unethical. They will tend either to regard themselves as not sexist or to regard themselves only as "sexist" in a non-pejorative sense. ("Yeah, so what, I'm a 'sexist'. I think men and women are different. If you don't, you're a fool.") Similarly, the most habitual liars might not see anything bad in lying or just assume that everyone else who isn't just a clueless sucker also lies when convenient.

It probably doesn't make sense to think that overall morality can be accurately captured in a single unidimensional scale -- just like it probably doesn't make sense to think that there's one correct unidimensional scale for skill at baseball or for skill as a philosopher or for being a good parent. And yet, clearly some baseball players, philosophers, and parents are better than others. There are great, good, mediocre, and crummy versions of each. I think it's okay as a first approximation to think that there are more and less ethical people overall. And if so, we can at least imagine a rough scale.

With that important caveat, then, consider the following possible relationships between one's overall moral character and one's opinion about one's overall moral character:

Dunning-Kruger (more self-enhancement for lower moral character):

[Note: Sorry for the cruddy-looking images. They look fine in Excel. I should figure this out.]

Uniform self-enhancement (everyone tends to think they're a bit better than they are):

U-shaped curve (even more self-enhancement for the below average):

Inverse U (realistically low self-image for the worst, self-enhancement in the middle, and self-underestimation for the best):

I don't think we really know which of these models is closest to the truth.
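The four candidate relationships can be made concrete as simple functions from true moral character to self-estimated character, both on a 0-100 scale. This is a toy sketch; every functional form and constant below is my own illustrative assumption, not drawn from any data.

```python
import math

def dunning_kruger(true_score):
    # Nearly flat line: the worst overestimate the most;
    # the best are roughly accurate or slightly underestimate.
    return 60 + 0.3 * (true_score - 60)

def uniform_enhancement(true_score):
    # Everyone adds about the same optimistic bump.
    return min(true_score + 10, 100)

def u_shaped(true_score):
    # Like Dunning-Kruger, but with extra self-enhancement below average.
    estimate = 60 + 0.3 * (true_score - 60)
    if true_score < 50:
        estimate += 0.4 * (50 - true_score)
    return estimate

def inverse_u(true_score):
    # Roughly accurate at the bottom, self-enhancement in the middle,
    # self-underestimation at the top.
    return true_score + 15 * math.sin(math.pi * true_score / 100) - 0.2 * true_score
```

Plotting these over the 0-100 range would reproduce the qualitative shapes described above; which shape the real data would show is exactly the open question.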

Thursday, November 09, 2017

Is It Perfectly Fine to Aim to be Morally Average?

By perfectly fine I mean: not at all morally blameworthy.

By aiming I mean: being ready to calibrate ourselves up or down to hit the target. I would contrast aiming with settling, which does not necessarily involve calibrating down if one is above target. (For example, if you're aiming for a B, then you should work harder if you get a C on the first exam and ease up if you get an A on the first exam. If you're willing to settle for a B, then you won't necessarily ease up if you happen fortunately to be headed toward an A.)

I believe that most people aim to be morally mediocre, even if they don't explicitly conceptualize themselves that way. Most people look at their peers' moral behavior, then calibrate toward so-so, wanting neither to be among the morally best (with the self-sacrifice that seems to involve) nor among the morally worst. But maybe "mediocre" is too loaded a word, with its negative connotations? Maybe it's perfectly fine, not at all blameworthy, to aim for the moral middle?

Here's one reason you might think so:

The Fairness Argument.

Let's assume (of course it's disputable) that being among the morally best, relative to your peers, normally involves substantial self-sacrifice. It's morally better to donate large amounts to worthy charities than to donate small amounts. It's morally better to be generous rather than stingy with one's time in helping colleagues, neighbors, and distant relatives who might not be your favorite people. It's morally better to meet your deadlines than to inconvenience others by running late. It's morally better to have a small carbon footprint than a medium-size or large one. It's morally better not to lie, cheat, and fudge in all the small (and sometimes large) ways that people tend to do.

To be near the moral maximum in every respect would be practically impossible near-sainthood; but we non-saints could still presumably be somewhat better in many of these ways. We just choose not to be better, because we'd rather not make the sacrifices involved. (See The Happy Coincidence Defense and The-Most-I-Can-Do Sweet Spot for my discussion of a couple of ways of insisting that you couldn't be morally better than you in fact are.)

Since (by stipulation) most of your peers aren't making the sacrifices necessary for peer-relative moral excellence, it's unfair for you to be blamed for also declining to do so. If the average person in your financial condition gives 3% of their income to charity, then it would be unfair to blame you for not giving more. If your colleagues down the hall cheat, shirk, fib, and flake X amount of the time, it's only fair that you get to do the same. Fairness requires that we demand no more than average moral sacrifice from the average person. Thus, there's nothing wrong with aiming to be only a middling member of the moral community -- approximately as selfish, dishonest, and unreliable as everyone else.

Two Replies to the Fairness Argument.

(1.) Absolute standards. Some actions are morally bad, even if the majority of your peers are doing them. As an extreme example, consider a Nazi death camp guard in 1941, who is somewhat kinder to the inmates and less enthusiastic about killing than the average death camp guard, but who still participates in and benefits from the system. "Hey, at least I'm better than average!" is a poor excuse. More moderately, most people (I believe) regularly exhibit small to moderate degrees of sexism, racism, ableism, and preferential treatment of the conventionally beautiful. Even though most people do this, one remains criticizable for it -- that you're typical or average in your degree of bias is at most a mitigator of blame, not a full excuser from blame. So although some putative norms might become morally optional (or "supererogatory") if most of your peers fail to comply, others don't show that structure. With respect to some norms, aiming for mediocrity is not perfectly fine.

(2.) The seeming absurdity of tradeoffs between norm types. Most of us see ourselves as having areas of moral strength and weakness. Maybe you're a warm-hearted fellow, but flakier than average about responding to important emails. Maybe you know you tend to be rude and grumpy to strangers, but you're an unusually active volunteer for good causes in your community. My psychological conjecture is that, in implicitly guiding our own behavior, we tend to treat these tradeoffs as exculpatory or licensing: You forgive yourself for the one in light of the other. You let your excellence in one area justify lowering your aims in another, so that averaging the two, you come out somewhere in the middle. (In these examples, I'm assuming that you didn't spend so much time and energy on the one that the other becomes infeasible. It's not that you spent hours helping your colleague so that you simply couldn't get to your email.)

Although this is tempting reasoning when you're motivated to see yourself (or someone else) positively, a more neutral judge might tend to find it strange: "It's fine that I insulted that cashier, because this afternoon I'm volunteering for river clean-up." "I'm not criticizable for neglecting Cameron's urgent email because this morning I greeted Monica and Britney kindly, filling the office with good vibes." Although non-consciously or semi-consciously we tend to cut ourselves slack in one area when we think about our excellence in others, when the specifics of such tradeoffs are brought to light, they often don't stand up to scrutiny.


It's not perfectly fine to aim merely for the moral middle. Your peers tend to be somewhat morally criticizable; and if you aim to be the same, you too are somewhat morally criticizable for doing so. The Fairness Argument doesn't work as a general rule (though it may work in some cases). If you're not aiming for moral excellence, you are somewhat morally blameworthy for your low moral aspirations.


Thursday, November 02, 2017

Two Roles for Belief Attribution

Belief attribution, both in philosophy and in ordinary language, normally serves two different types of role.

One role is predicting, tracking, or reporting what a person would verbally endorse. When we attribute belief to someone, we are doing something like indirect quotation: speaking for them, expressing what we think they would say. This view is nicely articulated in (the simple versions of) the origin-myths of belief talk in the thought experiments of Wilfrid Sellars and Howard Wettstein, according to which belief attribution mythologically evolves out of a practice of indirect quotation or imagining interior analogues of outward speech. The other role is predicting and explaining (primarily) non-linguistic behavior -- what a person will do, given their background desires (e.g., Dennett 1987; Fodor 1987; Andrews 2012).

We might call the first role testimonial, the second predictive-explanatory. In adult human beings, when all goes well, the two coincide. You attribute to me the belief that class starts at 2 pm. It is true both that I would say "Class starts at 2 pm" and that I would try to show up for class at 2 pm (assuming I want to attend class).

But sometimes the two roles come apart. For example, suppose that Ralph, a philosophy professor, sincerely endorses the statement "women are just as intelligent as men". He will argue passionately and convincingly for that claim, appealing to scientific evidence, and emphasizing how it fits the egalitarian and feminist worldview he generally endorses. And yet, in his day-to-day behavior Ralph tends not to assume that women are very intellectually capable. It takes substantially more evidence, for example, to convince him of the intelligence of an essay or comment by a woman than a man. When he interacts with cashiers, salespeople, mechanics, and doctors, he tends to assume less intelligence if they are women than if they are men. And so forth. (For more detailed discussion of these types of cases, see here and here.) Or consider Kennedy, who sincerely says that she believes money doesn't matter much, above a certain basic income, but whose choices and emotional reactions seem to tell a different story. When the two roles diverge, should belief attribution track the testimonial or the predictive-explanatory? Both? Neither?

Self-attributions of belief are typically testimonial. If we ask Ralph whether he believes that women and men are equally intelligent, he would presumably answer with an unqualified yes. He can cite the evidence! If he were to say that he doesn't really believe that, or that he only "kind of" believes it, or that he's ambivalent, or that only part of him believes it, he risks giving his conversational partner the wrong idea. If he went into detail about his spontaneous reactions to people, he would probably be missing the point of the question.

On the other hand, consider Ralph's wife. Ralph comes home from a long day, and he finds himself enthusiastically talking to his wife about the brilliant new first-year graduate students in his seminar -- Michael, Nestor, James, Kyle. His wife asks, what about Valery and Svitlana? [names selected by this random procedure] Ah, Ralph says, they don't seem quite as promising, somehow. His wife challenges him: Do you really believe that women and men are equally intelligent? It sure doesn't seem that way, for all your fine, egalitarian talk! Or consider what Valery and Svitlana might say, gossiping behind Ralph's back. With some justice, they agree that he doesn't really believe that women and men are equally intelligent. Or consider Ralph many years later. Maybe after a long experience with brilliant women as colleagues and intellectual heroes, he has left his implicit prejudice behind. Looking back on his earlier attitudes, his earlier evaluations and spontaneous assumptions, he can say: Back then, I didn't deep-down believe that women were just as smart as men. Now I do believe that. Not all belief attribution is testimonial.

It is a simplifying assumption in our talk of "belief" that these two roles of belief attribution -- the testimonial and the predictive-explanatory -- converge upon a single thing, what one believes. When that simplifying assumption breaks down, something has to give, and not all of our attributional practices can be preserved without modification.

[This post is adapted from Section 6 of my paper in draft, "The Pragmatic Metaphysics of Belief"]

[HT: Janet Levin.]
