A thousand utilitarian consequentialists stand before a thousand identical buttons. If any one of them presses their button, ten people will die. The benefits of pressing the button are more difficult to estimate. Ninety-nine percent of the utilitarians rationally estimate that fewer than ten lives will be saved if any of them presses a button. One percent rationally estimate that more than ten lives will be saved. Each utilitarian independently calculates expected utility. Since the ten utilitarians in the optimistic one percent estimate that more lives will be saved than lost, they press their buttons. Unfortunately, as the 99% would have guessed, fewer than ten lives are saved, so the result is a net loss of utility.
This cartoon example illustrates what I regard as a fundamental problem with simple utilitarianism as a decision procedure: It deputizes everyone to act as a risk-taker for everyone else. As long as anyone has both (a.) the power and (b.) a rational utilitarian justification to take a risk on others' behalf, the risk will be taken, even if a majority would judge the risk not to be worth it.
Consider this exchange between Tyler Cowen and Sam Bankman-Fried (pre-FTX-debacle):
COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?
BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I’m assuming these are noninteracting universes. Is that right? Because to the extent they’re in the same universe, then maybe duplicating doesn’t actually double the value because maybe they would have colonized the other one anyway, eventually.
COWEN: But holding all that constant, you’re actually getting two Earths, but you’re risking a 49 percent chance of it all disappearing.
BANKMAN-FRIED: Again, I feel compelled to say caveats here, like, “How do you really know that’s what’s happening?” Blah, blah, blah, whatever. But that aside, take the pure hypothetical.
COWEN: Then you keep on playing the game. So, what’s the chance we’re left with anything? Don’t I just St. Petersburg paradox you into nonexistence?
BANKMAN-FRIED: Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That’s the other option.
There are, I think, two troubling things about Bankman-Fried's reasoning here. (Probably more than two, but I'll restrain myself.)
First is the thought that it's worth risking everything valuable for a small chance of a huge gain. (I call this the Black Hole Objection to consequentialism.)
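To see how stark the arithmetic of Cowen's repeated game is, here's a quick sketch of my own (a toy calculation, not anything from the exchange itself, using the 51/49 odds Cowen stipulates). Each round multiplies the expected number of Earths by 2 × 0.51 = 1.02, while the chance that anything at all survives shrinks by a factor of 0.51:

```python
# Toy calculation (my own illustration) of repeated 51/49 double-or-nothing:
# expected value climbs without bound while the chance of survival collapses.

def repeated_double_or_nothing(n_rounds: int, p_win: float = 0.51):
    """Return (probability anything survives, expected number of Earths) after n rounds."""
    survival_probability = p_win ** n_rounds
    expected_earths = (2 * p_win) ** n_rounds
    return survival_probability, expected_earths

for n in (1, 10, 100, 1000):
    surv, ev = repeated_double_or_nothing(n)
    print(f"after {n:4d} rounds: P(anything left) = {surv:.3g}, expected Earths = {ev:.3g}")
```

On these assumptions, a thousand rounds leaves an expected payoff in the hundreds of millions of Earths and a probability of well under 10^-290 that anything exists at all -- which is exactly why maximizing expected value here looks so troubling.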
Second, I don't want Sam Bankman-Fried making that decision. That's not (just) because of who in particular he is. I wouldn't want anyone making that decision -- at least not unless they were deputized with that authority through an appropriate political process, and maybe not even then. No matter how rational and virtuous you are, I don't want you deciding to take risks on behalf of the rest of us simply because that's what your consequentialist calculus says. This issue subdivides into two troubling aspects: the issue of authority and the issue of risk amplification.
The authority issue is: We should be very cautious in making decisions that sacrifice others or put them at high risk. Normally, we should do so only in constrained circumstances where we are implicitly or explicitly endowed with appropriate responsibility. Our own individual calculation of high expected utility (no matter how rational and well-justified) is not normally, by itself, sufficient grounds for substantially risking or harming others.
The risk amplification issue is: If we universalize utilitarian decision-making in a way that permits many people to risk or sacrifice others whenever they reasonably calculate that it would be good to do so, we render ourselves collectively hostage to whoever reasonably arrives at the most sacrificial calculation. That was the point illustrated in the opening scenario.
[Figure: Simplified version of the opening scenario. Five utilitarians have the opportunity to sacrifice five people to save an unknown number of others. The button will be pressed by the utilitarian whose estimate errs highest.]

My point is not that some utilitarians might be irrationally risky, though certainly that's a concern. Rather, my point is that even if all utilitarians are perfectly rational, if they differ in their assessments of risk and benefit, and if all it takes to trigger a risky action is one utilitarian with the power to choose that action, then the odds of a bad outcome rise dramatically.
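To put rough numbers on that (my own toy model, assuming purely for illustration that each perfectly rational utilitarian independently has a 1% chance of landing on an overestimate that favors pressing): the chance that at least one of them presses rises quickly with the number of people empowered to press.

```python
# Toy model (my own illustration): if each of n utilitarians independently has a
# small chance p of rationally but mistakenly concluding that the benefits exceed
# the costs, and any one such conclusion triggers the action, the chance that
# someone presses the button grows rapidly with n.

def p_button_pressed(n_utilitarians: int, p_overestimate: float) -> float:
    """Probability that at least one utilitarian's estimate triggers the action."""
    return 1 - (1 - p_overestimate) ** n_utilitarians

for n in (1, 10, 100, 1000):
    print(f"{n:4d} utilitarians, 1% each: P(someone presses) = {p_button_pressed(n, 0.01):.4f}")
```

With a single decision-maker the risk is 1%; with a thousand independent decision-makers it is over 99.99%. The more hands on the button, the more surely the most optimistic estimate prevails.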
Advocates of utilitarian decision procedures can mitigate this problem in a few ways, but I'm not seeing how to escape it without radically altering the view.
First, a utilitarian could adopt a policy of decision conciliationism -- that is, if you see that most others aren't judging the risk or cost worth it, adjust your own assessment of the benefits and likelihoods, so that you fall in line with the majority. However, strong forms of conciliationism are pretty radical in their consequences; and of course this only works if the utilitarians know that there are others in similar positions deciding differently.
Second, a utilitarian could build some risk aversion and loss aversion into their calculus. This might be a good idea on independent grounds. Unfortunately, aversion corrections only shift the weights around. If the anticipated gains are sufficiently high, as judged by the most optimistic rational utilitarian, they will outweigh any discounts due to risk or loss aversion.
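For instance (numbers of my own invention, with an assumed loss-aversion multiplier in the spirit of prospect theory): a penalty on expected losses blocks modest gambles, but a sufficiently rosy estimate of the upside still swamps it.

```python
# Toy illustration (my own, with an assumed loss-aversion weight of 2.25): penalizing
# expected losses rejects modest gambles, but a large enough estimated gain
# still outweighs any fixed penalty.

LOSS_AVERSION = 2.25  # assumed multiplier on expected losses

def press_button(p_gain: float, gain: float, loss: float) -> bool:
    """Loss-averse expected-utility test: press iff weighted benefit exceeds weighted cost."""
    return p_gain * gain > LOSS_AVERSION * (1 - p_gain) * loss

print(press_button(p_gain=0.1, gain=50, loss=10))    # False: modest upside, gamble rejected
print(press_button(p_gain=0.1, gain=5000, loss=10))  # True: optimistic upside swamps the penalty
```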
Third, they could move to rule utilitarianism: Endorse some rule according to which you shouldn't generally risk or sacrifice others without the right kind of authority. Plausibly, the risk amplification argument above is exactly the sort of argument that might motivate a utilitarian to adopt rule utilitarianism as a decision procedure rather than trying to evaluate the consequences of each act individually. That is, it's a utilitarian argument in favor of not always acting according to utilitarian calculations. However, the risk amplification and authority problems are so broad in scope (even with appropriate qualifications) that moving to rule utilitarianism to deal with them is to abandon act utilitarianism as a general decision procedure.
Of course, one could also design scenarios in which bad things happen if everyone is a rule-following deontologist! Picture a thousand "do not kill" deontologists who will all die unless one of them kills another. Tragedy. We can cherry-pick scenarios in which any view will have unfortunate results.
However, I don't think my argument is that unfair. The issues of authority and risk amplification are real problems for utilitarian decision procedures, as brought out in these cartoon examples. We can easily imagine, I think, a utilitarian Robespierre, a utilitarian academic administrator, or a Sam Bankman-Fried with his hand on the destroy-or-duplicate button, calculating reasonably and too easily inflicting well-intentioned risk on the rest of us.