Longtermism is the view that what we choose to do now should be substantially influenced by its expected consequences for the trillions of people who might possibly exist in the longterm future. Maybe there's only a small chance that trillions of people will exist in the future, and only a minuscule chance that their lives will go appreciably better or worse as a result of what you or I do now. But however small that chance is, if we multiply it by a large enough number of possible future people -- trillions? trillions of trillions? -- the effects are worth taking very seriously.
Longtermism is a hot topic in the effective altruism movement, and William MacAskill's What We Owe the Future, released last week, has made a splash in the popular media, including The New Yorker, NPR, The Atlantic, and Boston Review. I finished the book Sunday. Earlier this year, I argued against longtermism on several grounds. Today, I'll expand on one of those arguments, which (partly following Greaves and MacAskill 2021) I'll call the Washout Argument.
The Washout Argument comes in two versions, infinite and finite.
The Washout Argument: Infinite Version
Standard cosmology suggests that the universe will eventually reach "heat death": a state of near-maximum entropy in which particles are thinly and almost uniformly dispersed. But the heat death of the universe is only the beginning! Standard cosmological models don't generally envision a limit to future time. So after heat death, we should expect the universe to just keep enduring and enduring. In this state, there will be occasional events in which particles enter unlikely configurations, by chance. For example, from time to time six particles will by chance converge on the same spot, or six hundred will, or -- very, very rarely (but we have infinitude to play with) -- six hundred trillion will. Under various plausible assumptions, any finitely probable configuration of a finite number of particles should occur eventually, and indeed infinitely often.
This relates to the famous Boltzmann brain problem, because some of those chance configurations will be molecule-for-molecule identical with human brains. These unfortunate brains might be having quite ordinary thoughts, with no conception that they are mere chance configurations amid post-heat-death chaos.
Now consider the causal ripples from the particles you perturbed yesterday by raising your right hand. Those ripples are still echoing through this post-heat-death universe.
Suppose that, by freak chance, a human brain in a state of great suffering appears at spatiotemporal location X, a location touched by a ripple of causation arising from your having raised your hand. That brain wouldn't have appeared in that location had you not raised your hand; chancy events are sensitive in that way. Thus, one extremely longterm consequence of your action is that Boltzmann brain's suffering. Of course, things of great value will also arise that wouldn't have arisen if you hadn't raised your hand -- indeed, whole amazing worlds that wouldn't otherwise have come into being. What awesome power you have!
[For a more careful treatment see Schwitzgebel and Barandes forthcoming.]
Consequently, from a longterm perspective, everything you do has an expected value of positive infinity plus negative infinity -- a sum that is normally undefined. Even if you employed some fancy mathematics to weigh these infinitudes against each other, finding that, say, the good would overall outweigh the bad, there would still be a washout, since almost certainly nothing you do now would have any bearing on the balance of those two infinitudes. (Note, by the way, that my argument here is not simply that adding a finite value to an infinite value is of no consequence, though that is arguably also true.) Whatever the expected effects of your actions are in the short term, they will eventually be washed out by infinitely many good and bad consequences in the long term.
Should you then go murder people for fun, since ultimately it makes no difference to the longterm expected balance of good to bad in the world? Of course not. I consider this argument a reductio ad absurdum of the idea that we should evaluate actions by their longterm consequences, regardless of when those consequences occur, with no temporal discounting. We should care more about the now than about the far distant future, contra at least the simplest formulations of longtermism.
You might object: Maybe my physics is wrong. Sure, maybe it is! But as long as you allow that there's even a tiny chance that this cosmological story is correct, you end up with infinite positive and negative expected values. Even if it's 99.9% likely that your actions only have finite effects, to get an expected value in the standard way, you'll need to add in a term accounting for 0.1% chance of infinite effects, which will render the final value infinite or undefined.
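Here's a toy sketch of that arithmetic in Python, with made-up credences and a made-up finite payoff (nothing here comes from MacAskill or from the formal treatment in Schwitzgebel and Barandes; it's just my illustration, using floating-point infinities as stand-ins for the mathematical ones):
```python
# Toy expected-value calculation: a 99.9% chance of an ordinary finite payoff,
# plus tiny chances of infinitely good and infinitely bad consequences.
POS_INF = float("inf")
NEG_INF = float("-inf")

finite_term = 0.999 * 42.0      # 99.9% chance of some ordinary finite payoff
good_term = 0.0005 * POS_INF    # tiny chance of infinitely good effects -> +inf
bad_term = 0.0005 * NEG_INF     # tiny chance of infinitely bad effects -> -inf

print(finite_term + good_term)             # inf: one infinite term swamps any finite stake
print(finite_term + good_term + bad_term)  # nan: positive infinity plus negative infinity is undefined
```
However you shuffle the numbers, any nonzero credence in the infinite outcomes dominates the finite term.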
The Washout Argument: Two Finite Versions
Okay, what if we forget about infinitude and just truncate our calculations at heat death? There will be only finitely many people affected by your actions (bracketing some worries about multiverse theory), so we'll avoid the problems above.
Here the issue is knowing what will have a positive versus negative longterm effect. I recommend radical skepticism. Call this Skeptical Washout.
Longtermists generally think that the extinction of our species would be bad for the longterm future. There are trillions of people who might have led happy lives who won't do so if we wipe ourselves out in the next few centuries!
But is this so clear?
Here's one argument against it: We humans love our technology. It's our technology that creates the big existential risks of human extinction. Maybe the best thing for the longterm future is for us to extinguish ourselves as expeditiously as possible, so as to clear the world for another species to replace us -- one that, maybe, loves athletics and the arts but not technology quite so much. Some clever descendants of dolphins, for example? Such a species might have a much better chance than we do of actually surviving a billion years. The sooner we die off, maybe, the better, before we wipe out too many more of the lovely multicellular species on our planet that have the potential to eventually replace and improve on us.
Here's another argument: Longtermists like MacAskill and Toby Ord typically think that these next few centuries are an unusually crucial time for our species -- a period of unusual existential risk which, if we get through it safely, will be followed by a precipitous fall in the odds of extinction. (This assumption is necessary for their longtermist views to work, since if every century carries an independent extinction risk of, say, 10%, the chance that our species survives for millions of years is vanishingly small; see the toy arithmetic just after this paragraph.) What's the best way to tide us through these next few especially dangerous centuries? Well, one possibility is a catastrophic nuclear war that kills 99% of the population. The remaining 1% might learn the lesson of existential risk so well that they will be far more careful with future technology than we are now. If we avoid nuclear war now, we might soon develop even more dangerous technologies that would increase the risk of total extinction, such as engineered pandemics, rogue superintelligent AI, out-of-control nanotech replicators, or even more destructive warheads. So perhaps it's best from the longterm perspective to let us nearly destroy ourselves as soon as possible, setting our technology back and teaching us a hard lesson, rather than blithely letting technology advance far enough that a catastrophe is more likely to be 100% fatal.
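To spell out the arithmetic in that parenthetical, here's a quick sketch with my own toy numbers; nothing hangs on the exact figures:
```python
# With an independent 10% extinction risk per century, the probability of
# surviving N centuries is 0.9 ** N.
per_century_survival = 0.9

for centuries in (10, 100, 1000, 10000):   # 10,000 centuries = one million years
    print(centuries, "centuries:", per_century_survival ** centuries)

# 0.9 ** 10000 underflows to 0.0 in ordinary floats; the true value is roughly
# 10 ** -458 -- vanishingly small unless per-century risk drops steeply.
```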
Look, I'm not saying these arguments are correct. But in my judgment they're not especially less plausible than the other sorts of futurist forecasting that longtermists engage in, such as the assumption that we will somehow see ourselves safely past catastrophic risk if we survive the next few centuries.
The lesson I draw is not that we should try to destroy or nearly destroy ourselves as soon as possible! Rather, my thought is this: We really have no idea what the best course is for the very long term future, millions of years from now. The best course might be something we find intuitively good, like world peace and pandemic preparedness, or something we find intuitively horrible, like human extinction or nuclear war.
If we could be justified in thinking that it's 60% likely that peace in 2023 is better than nuclear war in 2023 in terms of its impact on the state of the world over the entire course of the history of the planet, then the longtermist logic could still work (bracketing the infinite version of the Washout Argument). But I don't think we can be justified even in that relatively modest commitment. Regarding what actions now will have a positive expected impact on the billion-year future, I think we have to respond with a shoulder shrug. We cannot use billion-year expectations to guide our decisions.
Even if you don't quite want to shrug your shoulders, there's another way the finite Washout Argument can work. Call this Negligible Probability Washout.
Let's say you're considering some particular action. You think that action has a small chance of creating an average benefit of -- to put a toy number on it -- one unit to each future person who exists. Posit that there are a trillion future people. Now consider: how small is that small chance? If it's less than one in a trillion, then on a standard consequentialist calculus it would be better to create a sure one-unit benefit for one person who exists now.
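Here's that comparison as a minimal sketch, using only the toy numbers just given:
```python
# A gamble on benefiting a trillion future people (one unit each) versus a
# sure one-unit benefit to one person who exists now.
future_people = 10 ** 12           # one trillion
benefit_per_person = 1.0           # one unit each, if the gamble pays off
sure_benefit_now = 1.0             # one unit to one existing person, for certain

def expected_benefit(chance: float) -> float:
    return chance * future_people * benefit_per_person

print(expected_benefit(1e-11) > sure_benefit_now)   # True: above the 1-in-a-trillion threshold
print(expected_benefit(1e-13) > sure_benefit_now)   # False: below it, the sure thing wins
```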
What are reasonable odds to put on the chance that some action you do will materially benefit a trillion people in the future? To put this in perspective, consider the odds that your one vote will decide the outcome of your country's election. There are various ways to calculate this, but the answer should probably be tiny: one in a hundred thousand at most (if you're in a swing state in a close U.S. election), more likely one in a million or one in ten million, or longer odds still. And that's a nearby event whose structure we understand. It's reasonable to vote on those grounds, by the utilitarian calculus. If I think that my vote has a one in ten million chance of making my country ten billion dollars better off, then -- if I'm right -- my vote is a public good worth an expected $1000 (ten billion times one in ten million).
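Spelled out, with the same toy numbers:
```python
# Expected public value of a single vote, on the post's toy numbers.
chance_decisive = 1 / 10_000_000          # one in ten million
public_benefit = 10_000_000_000           # ten billion dollars

print(chance_decisive * public_benefit)   # 1000.0 -- an expected $1000 of public value
```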
My vote is a small splash in a very large pond, though a splash worth making. But the billion-year future of Earth is a much, much larger pond. It seems reasonable to conjecture that the odds that some action you do now will materially improve the lives of trillions of people in the future should be many orders of magnitude lower than one in a million -- low enough to be negligible, even if (contra the first part of this argument) you can accurately predict the direction.
On the Other Hand, the Next Few Centuries
... are (moderately) predictable! Nuclear war would be terrible for us and our immediate descendants. We should care about protecting ourselves from pandemics, and dangerous AI systems, and environmental catastrophes, and all those other things that the longtermists care about. I don't in fact disagree with most of the longtermists' priorities and practical plans. But the justification should be the long term future in the more ordinary sense of "long term" -- fifteen years, fifty years, two hundred years, not ten million years. Concern about the next few generations is reason enough to be cautious with the world.
[Thanks to David Udell for discussion.]