Friday, October 27, 2023

Utilitarianism and Risk Amplification

A thousand utilitarian consequentialists stand before a thousand identical buttons.  If any one of them presses their button, ten people will die.  The benefits of pressing the button are more difficult to estimate.  Ninety-nine percent of the utilitarians rationally estimate that fewer than ten lives will be saved if any of them presses a button.  One percent rationally estimate that more than ten lives will be saved.  Each utilitarian independently calculates expected utility.  Since ten utilitarians estimate that more lives will be saved than lost, they press their buttons.  Unfortunately, as the 99% would have guessed, fewer than ten lives are saved, so the result is a net loss of utility.

This cartoon example illustrates what I regard as a fundamental problem with simple utilitarianism as a decision procedure: It deputizes everyone to act as risk-taker for everyone else.  As long as anyone has both (a.) the power and (b.) a rational utilitarian justification to take a risk on others' behalf, the risk will be taken, even if a majority would judge the risk not to be worth it.

Consider this exchange between Tyler Cowen and Sam Bankman-Fried (pre-FTX-debacle):

COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?

BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I’m assuming these are noninteracting universes. Is that right? Because to the extent they’re in the same universe, then maybe duplicating doesn’t actually double the value because maybe they would have colonized the other one anyway, eventually.

COWEN: But holding all that constant, you’re actually getting two Earths, but you’re risking a 49 percent chance of it all disappearing.

BANKMAN-FRIED: Again, I feel compelled to say caveats here, like, “How do you really know that’s what’s happening?” Blah, blah, blah, whatever. But that aside, take the pure hypothetical.

COWEN: Then you keep on playing the game. So, what’s the chance we’re left with anything? Don’t I just St. Petersburg paradox you into nonexistence?

BANKMAN-FRIED: Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That’s the other option.

There are, I think, two troubling things about Bankman-Fried's reasoning here.  (Probably more than two, but I'll restrain myself.)

First is the thought that it's worth risking everything valuable for a small chance of a huge gain.  (I call this the Black Hole Objection to consequentialism.)
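To see why Cowen's repeated version of the game looks so alarming, here's a quick back-of-the-envelope sketch (my own illustration; only the 51/49 odds come from the exchange, and the choice of twenty rounds is mine).  The expected value of the double-or-nothing game grows with every round, while the probability that anything at all survives shrinks geometrically toward zero:

```python
# Rough illustration (my own numbers beyond the 51/49 odds): repeated play of
# Cowen's hypothetical game -- 51% chance the world doubles, 49% chance
# everything disappears.  Expected value keeps growing, but the chance that
# anything survives shrinks geometrically with each round.

p_win = 0.51          # probability that value doubles on a given round
rounds = 20           # number of consecutive double-or-nothing plays

expected_value = 1.0      # expected number of "Earths" after each round
p_anything_left = 1.0     # probability that the world still exists

for n in range(1, rounds + 1):
    expected_value *= 2 * p_win    # each round multiplies EV by 1.02
    p_anything_left *= p_win       # every round must be won to keep anything
    print(f"round {n:2d}: EV = {expected_value:5.2f} Earths, "
          f"P(anything left) = {p_anything_left:.6f}")
```

After twenty rounds the expected value is still climbing (about one and a half Earths), while the probability that anything remains is down to under two in a million.  That is the shape of Cowen's worry about St. Petersburg paradoxing us into nonexistence.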

Second, I don't want Sam Bankman-Fried making that decision.  That's not (just) because of who in particular he is.  I wouldn't want anyone making that decision -- at least not unless they were appropriately deputized with that authority through an appropriate political process, and maybe not even then.  No matter how rational and virtuous you are, I don't want you deciding to take risks on behalf of the rest of us simply because that's what your consequentialist calculus says.  This issue subdivides into two troubling aspects: the issue of authority and the issue of risk amplification.

The authority issue is: We should be very cautious in making decisions that sacrifice others or put them at high risk.  Normally, we should do so only in constrained circumstances where we are implicitly or explicitly endowed with appropriate responsibility.  Our own individual calculation of high expected utility (no matter how rational and well-justified) is not normally, by itself, sufficient grounds for substantially risking or harming others.

The risk amplification issue is: If we universalize utilitarian decision-making in a way that permits many people to risk or sacrifice others whenever they reasonably calculate that it would be good to do so, we render ourselves collectively hostage to whoever has the most sacrificial reasonable calculation.  That was the point illustrated in the opening scenario.

[Figure: Simplified version of the opening scenario.  Five utilitarians have the opportunity to sacrifice five people to save an unknown number of others.  The button will be pressed by the utilitarian whose estimate errs highest.]

My point is not that some utilitarians might be irrationally risky, though certainly that's a concern.  Rather, my point is that even if all utilitarians are perfectly rational, if they differ in their assessments of risk and benefit, and if all it takes to trigger a risky action is one utilitarian with the power to choose that action, then the odds of a bad outcome rise dramatically.
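A minimal simulation (my own sketch, with invented numbers patterned on the opening scenario -- none of these figures come from any real case) makes the amplification effect vivid.  Each agent forms an honest but noisy estimate of the benefit; the true benefit falls short of the cost; yet the action is triggered whenever the single most optimistic estimate crosses the threshold:

```python
# Sketch of the risk-amplification effect, with invented numbers patterned on
# the opening scenario: 1000 rational utilitarians each form an honest but
# noisy estimate of the lives saved by pressing.  The true benefit is below
# the cost, yet the button is pressed whenever ANY single estimate exceeds it.

import random

random.seed(0)

TRUE_BENEFIT = 7.0    # actual lives saved by pressing (less than the cost)
COST = 10.0           # lives lost if anyone presses
N_AGENTS = 1000       # independent utilitarian estimators
NOISE_SD = 1.3        # honest estimation error; ~1% of estimates exceed 10
TRIALS = 1000

someone_presses = 0
for _ in range(TRIALS):
    estimates = [random.gauss(TRUE_BENEFIT, NOISE_SD) for _ in range(N_AGENTS)]
    # A single over-the-threshold estimate is enough to trigger the action.
    if max(estimates) > COST:
        someone_presses += 1

print(f"share of estimates favoring pressing (last trial): "
      f"{sum(e > COST for e in estimates) / N_AGENTS:.1%}")
print(f"probability that at least one agent presses: "
      f"{someone_presses / TRIALS:.1%}")
```

With these made-up numbers, only about one estimate in a hundred favors pressing, yet the probability that someone presses -- and so that the net-loss outcome occurs -- is essentially 100%.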

Advocates of utilitarian decision procedures can mitigate this problem in a few ways, but I'm not seeing how to escape it without radically altering the view.

First, a utilitarian could adopt a policy of decision conciliationism -- that is, if you see that most others aren't judging the risk or cost worth it, adjust your own assessment of the benefits and likelihoods, so that you fall in line with the majority.  However, strong forms of conciliationism are pretty radical in their consequences; and of course this only works if the utilitarians know that there are others in similar positions deciding differently.

Second, a utilitarian could build some risk aversion and loss aversion into their calculus.  This might be a good idea on independent grounds.  Unfortunately, aversion corrections only shift the weights around.  If the anticipated gains are sufficiently high, as judged by the most optimistic rational utilitarian, they will outweigh any discounts due to risk or loss aversion.

Third, they could move to rule utilitarianism: Endorse some rule according to which you shouldn't generally risk or sacrifice others without the right kind of authority.  Plausibly, the risk amplification argument above is exactly the sort of argument that might motivate a utilitarian to adopt rule utilitarianism as a decision procedure rather than trying to evaluate the consequences of each act individually.  That is, it's a utilitarian argument in favor of not always acting according to utilitarian calculations.  However, the risk amplification and authority problems are so broad in scope (even with appropriate qualifications) that moving to rule utilitarianism to deal with them is to abandon act utilitarianism as a general decision procedure.

Of course, one could also design scenarios in which bad things happen if everyone is a rule-following deontologist!  Picture a thousand "do not kill" deontologists who will all die unless one of them kills another.  Tragedy.  We can cherry-pick scenarios in which any view will have unfortunate results.

However, I don't think my argument is that unfair.  The issues of authority and risk amplification are real problems for utilitarian decision procedures, as brought out in these cartoon examples.  We can easily imagine, I think, a utilitarian Robespierre, a utilitarian academic administrator, or Sam Bankman-Fried with his hand on the destroy-or-duplicate button, calculating reasonably and too easily inflicting well-intentioned risk on the rest of us.

9 comments:

Richard Y Chappell said...

Great post. You might be interested in discussion of this concern amongst Effective Altruists, where it goes by the label "The Unilateralist's Curse":
https://forum.effectivealtruism.org/topics/unilateralist-s-curse

Of course, since it would have predictably bad results for people to act upon their naive / first-pass expected value judgments, utilitarianism itself recommends that we avoid doing this.

That said, if thinking about real-life cases, I suspect that irrational risk aversion still causes vastly more harm than risk amplification. I discuss several real-life examples of this in my paper on pandemic ethics and status-quo risk: https://philpapers.org/rec/CHAPEA-10

Eric Schwitzgebel said...

Thanks for these links, Richard! I'd thought of tagging you on the post, since you usually respond so helpfully to my critiques of consequentialism. The brief summary of the Unilateralist's Curse does accurately capture the risk amplification problem as I see it. I'll check out the links there as well as your own paper. I'm thinking of a follow-up post endorsing a version of "status-quo bias".

Arnold said...

Isn't it primarily the nature of Being, is always to be consequentially More...
...as in more involution-expansion of sensation, mind, self...

That philosophical 'status-quo bias' is devolution...
...away from Observation...

Howie said...

In everyday life, someone who loses big by taking risks gets punished -- take Bankman-Fried, for example -- though sometimes people get rewarded for bad decisions.
A decision maker cannot be insulated from his or her track record or the feedback of his or her community.
How does that impact your thought experiment?

chinaphil said...

I'm not sure that this argument applies particularly to utilitarians rather than anyone else. I was thinking about how this has played out in history: do people with the power to make life-or-death decisions that affect others in general make them in good ways? And the answer seems to be obviously not. Kings and chiefs and popes have often spent most of their time sending their subjects to fight wars, and they were not driven by utilitarian considerations.
I think this post makes the problem specific to utilitarians by saying "If we universalize utilitarian decision-making in a way that permits many people to risk or sacrifice others..." But I don't see much reason to think that this would happen. Crucially, I don't think that utilitarianism calls for such a thing to happen. If someone manages to create a very dangerous thing, the correct thing for the inventor to do with that thing is to strictly restrict access to it. That's as true under a utilitarian calculus as it would be under any other.

Richard Y Chappell said...

To expand on chinaphil's point:

(1) Many non-utilitarians place more weight on preventing suffering than on securing positive welfare. Depending on how strong an in-principle asymmetry one goes for, and one's empirical estimates of the prospects for future suffering and flourishing, one might end up with the strong anti-natalist view that future existence is bad and ought to be prevented.

So if you gave a million non-utilitarians a button that would (if pressed) prevent any future lives from coming into existence, predictably *someone* would press the button, even if *most* would agree that pressing the extinction button is an utterly horrendous thing to do.

(2) Many unreflective deontologists (though generally not more reflective ones) have qualms about allowing research trial participants to altruistically volunteer for socially beneficial research that involves some risk to themselves, such as vaccine challenge trials.

So if you gave a million deontologists each a veto over socially valuable research that involves some risk to participants (e.g. vaccine challenge trials), this would amplify to a near-certainty the risk that *someone* would veto the socially valuable research, no matter how many millions of lives it would expectably save and how negligible the risk to participants.

So I think this is a nice example of a problem for everyone that gets misattributed to utilitarians specifically, perhaps just because the implications of utilitarianism are especially clear and easy to think about. But the structural issue is really more general.

Eric Schwitzgebel said...

Thanks for the continuing comments, chinaphil and Richard!

I continue to think that the situation isn't parallel for act utilitarian decision procedures vs. deontological ones. I agree that there are problems for any decision procedure in which one person is deciding on behalf of others, whether that's utilitarian or deontological. But the deontologist has a resource that the act utilitarian doesn't have: A set of rules that say "don't do that!" So a *good* set of deontological rules can incorporate rules to avoid these problems. It's less clear that the act utilitarian can do so, without becoming a rule utilitarian.

Anonymous said...

If nobody presses the button

Anonymous said...

What happens if nobody presses a button?