I have an empirical thesis and a normative thesis. The empirical thesis is that most people aim to be morally mediocre. They aim to be about as morally good as their peers, not especially better, not especially worse. This mediocrity has two aspects. It is peer-relative rather than absolute, and it is middling rather than extreme. We do not aim to be good, or non-bad, or to act permissibly rather than impermissibly, by fixed moral standards. Rather, we notice the typical behavior of people we regard as our peers and we aim to behave broadly within that range. We -- most of us -- look around, notice how others are acting, then calibrate toward so-so.
This empirical thesis is, I think, plausible on the face of it. It also receives some support from two recent subliteratures in social psychology and behavioral economics.
One is the literature on following the (im-)moral crowd. I'm thinking especially of the work of Robert B. Cialdini and Cristina Bicchieri. Cialdini argues that "injunctive norms" (that is, social or moral admonitions) most effectively promote norm-compliant behavior when they align with "descriptive norms" (that is, facts about how people actually behave). People are less likely to litter when they see others being neat, more likely to reuse their hotel towels when they learn that others also do so, and more likely to reduce their household energy consumption when they see that they are using more than their neighbors. Bicchieri argues that people are more likely to be selfish in "dictator games" when they are led to believe that earlier participants had mostly been selfish and that convincing communities to adopt new health practices like family planning and indoor toilet use typically requires persuading people that their neighbors will also comply. It appears that people are more likely to abide by social or moral norms if they believe that others are also doing so.
The other relevant literature concerns moral self-licensing. A number of studies suggest that after having performed good acts, people are likely to behave less morally well than after performing a bad or neutral act. For example, after having done something good for the environment, people might tend to make more selfish choices in a dictator game. Even just recalling recent ethical behavior might reduce people's intentions to donate blood, money, and time. The idea is that people are more motivated to behave well when their previous bad behavior is salient and less motivated to behave well when their previous good behavior is salient. They appear to calibrate toward some middle state.
One alternative hypothesis is that people aim not for mediocrity but rather for something better than that, though short of sainthood. Phenomenologically, that might be how it seems to people. Most people think that they are somewhat above average in moral traits like honesty and fairness (Tappin and McKay 2017); and maybe then people mostly think that they should more or less stay the course. An eminent ethicist once told me he was aiming for a moral "B+". However, I suspect that most of us who like to think of ourselves as aiming for substantially above-average moral goodness aren't really willing to put in the work and sacrifice required. A close examination of how we actually calibrate our behavior will reveal us wiggling and veering toward a lower target. (Compare the undergraduate who says they're "aiming for B+" in a class but who wouldn't be willing to put in more work if they received a C on the first exam. It's probably better to say that they are hoping for a B+ than that they are aiming for one.)
My normative thesis is that it's morally mediocre to aim for moral mediocrity. Generally speaking, it's somewhat morally bad, but not terribly bad, to aim for the moral middle.
In defending this view, I'm mostly concerned to rebut the charge that it's perfectly morally fine to aim for mediocrity. Two common excuses, which I think wither upon critical scrutiny, are the Happy Coincidence Defense and The-Most-You-Can-Do Sweet Spot. The Happy Coincidence Defense is an attractive rationalization strategy that attempts to justify doing what you prefer to do by arguing that it's also for the moral best -- for example, that taking this expensive vacation now is really the morally best choice because you owe it to your family, and it will refresh you for your very important work, and.... The-Most-You-Can-Do Sweet Spot is a similarly attractive rationalization strategy that relies on the idea that if you tried to be any morally better than you in fact are, you would end up being morally worse -- because you would collapse along the way, maybe, or you would become sanctimonious and intolerant, or you would lose the energy and joie de vivre on which your good deeds depend, or.... Of course it can sometimes be true that by Happy Coincidence your preferences align with the moral best or that you are already precisely in The-Most-You-Can-Do Sweet Spot. But this reasoning is suspicious when deployed repeatedly to justify otherwise seemingly mediocre moral choices.
Another normative objection is the Fairness Objection, which I discussed on the blog last month. Since (by stipulation) most of your peers aren't making the sacrifices necessary for peer-relative moral excellence, it's unfair for you to be blamed for also declining to make such sacrifices. If the average person in your financial condition gives X% to charity, for example, it would be unfair to blame you for not giving more. If your colleagues down the hall cheat, shirk, lie, and flake X amount of the time, it's only fair that you should get to do the same.
The simplest response to the Fairness Objection is to appeal to absolute moral standards. Although some norms are peer-relative, so that they become morally optional if most of your peers fail to comply with them, other norms aren't like that. A Nazi death camp guard is wrong to kill Jews even if that is normal behavior among his peers. More moderately, sexism, racism, ableism, elitism, and so forth are wrong and blameworthy, even if they are common among your peers (though blame is probably also partly mitigated if you are less biased than average). If you're an insurance adjuster who denies or slow-walks important health benefits on shaky grounds because you guess the person won't sue, the fact that other insurance adjusters might do the same in your place is again at best only partly mitigating. It would likely be unfair to blame you more than your peers are blamed; but if you violate absolute moral standards you deserve some blame, regardless of your peers' behavior.
-----------------------------------------
Full length version of the paper here.
As always, comments welcome either by email to me or in the comments field of this post. Please don't feel obliged to read the full paper before commenting, if you have thoughts based on the summary arguments in this post.
[Note: Somehow my final round of revisions on this post was lost and an old version was posted. The current version has been revised in attempt to recover the lost changes.]
Not to nitpick, but how do you measure morality accurately? The issues that come up in real life or in the literature are the marquee issues that might not be a representative sampling of our broader ethical performance.
Our measurement of ethical performance currently might be comparable to the measurement of time before clocks.
Your argument is plausible and cool, but there must be a way experimentally and conceptually to measure ethical performance more vividly and securely.
Am I being unduly skeptical? You've given this issue a lot of consideration, I gather.
Your evidence seems circumstantial.
I like your analogy to measuring time before clocks. I don't think we *can* measure morality accurately -- but that doesn't mean that we can't estimate it roughly. And that's good enough for some purposes, including present purposes. Roughly speaking, but only roughly speaking, people tend to aim for the moral middle.
Hi Eric,
I have a couple of comments, questions, and perhaps objections:
1. In the case of removing petrified wood, (C) might signal that:
a. People who remove petrified wood are generally not caught.
b. There might not be any rules against removing it.
The rates of theft may have increased because more people reckoned they weren't likely to get caught (vs. what would otherwise have happened), and even more people reckoned it was less likely that there was a rule against it. Similarly, when they're told not to remove it, that makes it clear that there is a rule, but also, visitors who already know that there is a rule (probably the vast majority) would rationally think it's more probable that the rule will be effectively enforced (more probable than it would be if they weren't told about it).
2. In the case of energy consumption, aiming to consume as others do may not be aiming for mediocrity. People may well think there is nothing wrong in the average consumption, that that's a reasonable amount of electricity consumption, and so on - and when they're told that they're consuming more than average, then they might reckon that perhaps they can have a similar quality of life with less energy, and they should make an effort.
In fact, maybe they believe the local non-written rule is "it's permissible to consume such-and-such amount", where the amount is what others generally consume.
3. It seems to me a person may believe that behaving as their neighbors do is morally permissible - that that is in line with the local rules, and that there is no moral prohibition overriding them.
On that note, there are plenty of behaviors that are morally permissible unless they're explicitly banned by a local rule - written or not. Looking at the behavior of others may be a way of ascertaining what the local rules are - which actually do not need to be written, and in fact may be in conflict with the written laws. Human societies have had local rules for much longer than they've had written laws (and other animals also have local rules).
On that note, you point out that "Even just recalling recent ethical behavior might reduce people's intentions to donate blood, money, and time."
However, they may well not believe they have a moral obligation to donate blood, money and time, or they may believe they have an obligation to do some of those (or other) things but not all, and after they did something, that's enough and they've met their obligations. And I think it's permissible to aim at meeting one's moral obligations, even if one does not aim at engaging in supererogatory actions. Of course, even if that's a general aim, one may well fail and behave immorally, even as a result of an improper way of assessing moral permissibility. But that does not seem to be the same as aiming at moral mediocrity - then again, you don't seem to consider that aiming has to be conscious, so I'm not sure I'm getting what you mean by that.
Also, you say "If the average person in your financial condition gives X% to charity, for example, it would be unfair to blame you for not giving more. If your colleagues down the hall cheat, shirk, lie, and flake X amount of the time, it's only fair that you should get to do the same."
But there may well be an implicit assessment in that context (even if a false one) that it's not immoral on your part to behave as your neighbors do. If so, in a sense the person is not aiming at moral mediocrity. They're aiming (even if improperly) at doing what they like while not behaving immorally. They're mistaken (in some cases) about what's immoral.
Yet, your reply (e.g., the insurance example) suggests that you consider that that too is aiming at mediocrity. So, I'd like to ask for clarification on what you mean.
I guess the problem that I have with the line of reasoning behind this article is the thought that we can make sense of propositions about "morality", as a coherent, well-defined space of reason-giving considerations distinct from the space of non-moral, practical-prudential, considerations which are logically distinct from their moral counterparts.
The idea that individuals tend to aim for moral mediocrity is, from one standpoint, attractive and I'm even inclined to agree with the spirit of the point. But I do worry that, lacking a coherent concept of "the moral", there is a similar lack of sense attaching to (i) propositions about "being more moral" and (ii) parallel thoughts about how such moral considerations are meant to enter into the deliberations of a practically rational agent -- i.e., how is mediocrity meant to be assessed, and how is an individual qua moral agent meant to aim at maximizing his/her moral actions?
Just based on the remarks here, I'm not seeing that there is any clear way to answer these questions. It appears there's a lot of heavy lifting being done by a notion of "moral" here, alongside conceptions of practical reason and practically-rational agency, and without some further elaboration on the role of these background notions -- and why they do/ought to matter ethically speaking -- I'm not sure I buy the argument as presented.
I come at this from a perspective of evolutionary ethics so I don't see these good vs. bad examples you cite as always being examples of moral mediocrity. They may just be individuals striking a balance between cooperation and free ridership. "Good" morality doesn't ask us to always sacrifice self-interest. Morality asks us to find the optimum tradeoffs between self and society, humans vs. ecosystems, now vs. then, competition vs. cooperation, etc. So I don't like to hear a kind of wise balancing labeled as mediocrity even though misbalanced morals can be mediocre.
ReplyDeleteYou may be right and wrong, for thinking to establish morality as mediocre...
...Is it because philosophy has not yet risen to a worldly-planetary view of itself...
Subjective morality got us this far...Is Objective morality next...
...To be or not to be--life in the universe...What would fairness be then...
I like this paper very much indeed; I think it makes a lot of sense.
Here's one possible line of objection: It seems like in making this argument you have to accept the existence of two separate kinds of moral reckoning. One kind must be a level of moral quality of a person, and moreover a level of moral quality that is constituted by the actions that person takes; the other kind must be some kind of moral measure that can be attached to acts. Your pluralism makes this fairly straightforward, but any moral theory which relies on there being a single moral substance would pretty much kill this idea. If act utilitarianism is true, then there can be no coherent moral principle that could lead you to try to be a mediocre person; at best it would be some kind of odd proxy measure. If some kind of virtue ethic is true, the same thing.
Another possible objection would be to construing virtue as an effort. If virtue is always an effort, then are we to see good people as harder workers? This seems radically at odds with ideas like Confucius's training until virtue becomes ingrained in (part of?) your character; or the Christian idea of loving God (though very much in line with the idea of being "tested" by God).
Those said, I really like the paper for what it implies about the possible teachability and improvability of morality. We can crank that moral mediocre standard higher as time goes by! While moral mediocrity may not be much of a personal goal, as a social tool it may be the key to progress.