Longtermism is the view that what we choose to do now should be substantially influenced by its expected consequences for the trillions of people who might possibly exist in the longterm future. Maybe there's only a small chance that trillions of people will exist in the future, and only a minuscule chance that their lives will go appreciably better or worse as a result of what you or I do now. But however small that chance is, if we multiply it by a large enough number of possible future people -- trillions? trillions of trillions? -- the effects are worth taking very seriously.
Longtermism is a hot topic in the effective altruism movement, and William MacAskill's What We Owe the Future, released last week, has made a splash in the popular media, including The New Yorker, NPR, The Atlantic, and Boston Review. I finished the book Sunday. Earlier this year, I argued against longtermism on several grounds. Today, I'll expand on one of those arguments, which (partly following Greaves and MacAskill 2021) I'll call the Washout Argument.
The Washout Argument comes in two versions, infinite and finite.
The Washout Argument: Infinite Version
But the heat death of the universe is only the beginning! Standard cosmological models don't generally envision a limit to future time. So post heat death, we should expect the universe to just keep enduring and enduring. In this state, there will be occasional events in which particles enter unlikely configurations, by chance. For example, from time to time six particles will by chance converge on the same spot, or six hundred will, or -- very, very rarely, but we have infinitude to play with -- six hundred trillion will. Under various plausible assumptions, any finitely probable configuration of a finite number of particles should occur eventually, and indeed infinitely often.
This relates to the famous Boltzmann brain problem, because some of those chance configurations will be molecule-for-molecule identical with human brains. These unfortunate brains might be having quite ordinary thoughts, with no conception that they are mere chance configurations amid post-heat-death chaos.
Now remember, the causal ripples from the particles you perturbed yesterday by raising your right hand are still echoing through this post-heat-death universe.
Suppose that, by freak chance, a human brain in a state of great suffering appears at spatiotemporal location X that has been influenced by a ripple of causation arising from your having raised your hand. That brain wouldn't have appeared in that location had you not raised your hand. Chancy events are sensitive in that way. Thus, one extremely longterm consequence of your action was that Boltzmann brain's suffering. Of course, there are also things of great value that arise which wouldn't have arisen if you hadn't raised your hand -- indeed, whole amazing worlds that wouldn't otherwise have come into being. What awesome power you have!
[For a more careful treatment see Schwitzgebel and Barandes forthcoming.]
Consequently, from a longterm perspective, everything you do has an expected value of positive infinity plus negative infinity -- a value that is normally undefined. Even if you employed some fancy mathematics to weigh these infinitudes against each other, finding that, say, the good would overall outweigh the bad, there would still be a washout, since almost certainly nothing you do now would have any bearing on the balance of those two infinitudes. (Note, by the way, that my argument here is not simply that adding a finite value to an infinite value is of no consequence, though that is arguably also true.) Whatever the expected effects of your actions are in the short term, they will eventually be washed out by infinitely many good and bad consequences in the long term.
Should you then go murder people for fun, since ultimately it makes no difference to the longterm expected balance of good to bad in the world? Of course not. I consider this argument a reductio ad absurdum of the idea that we should evaluate actions by their longterm consequences, regardless of when those consequences occur, with no temporal discounting. We should care more about the now than about the far distant future, contra at least the simplest formulations of longtermism.
You might object: Maybe my physics is wrong. Sure, maybe it is! But as long as you allow that there's even a tiny chance that this cosmological story is correct, you end up with infinite positive and negative expected values. Even if it's 99.9% likely that your actions have only finite effects, to get an expected value in the standard way, you'll need to add in a term accounting for the 0.1% chance of infinite effects, which will render the final value infinite or undefined.
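To make the expected-value arithmetic vivid, here's a minimal sketch in Python (the 99.9%/0.1% split comes from the paragraph above; the 100-unit finite payoff is a number I've made up purely for illustration):

```python
# Toy sketch only: a probability-weighted mixture of a finite outcome and an
# "infinitely good plus infinitely bad" outcome. IEEE floating point makes the
# point concretely: inf + (-inf) is NaN, i.e., undefined.
finite_branch = 0.999 * 100.0                             # 99.9% credence: some finite expected benefit (made-up payoff)
infinite_branch = 0.001 * (float("inf") + float("-inf"))  # 0.1% credence: infinite good plus infinite bad

expected_value = finite_branch + infinite_branch
print(expected_value)  # nan -- the overall expectation is undefined
```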
The Washout Argument: Two Finite Versions
Okay, what if we forget about infinitude and just truncate our calculations at heat death? There will be only finitely many people affected by your actions (bracketing some worries about multiverse theory), so we'll avoid the problems above.
Here the issue is knowing what will have a positive versus negative longterm effect. I recommend radical skepticism. Call this Skeptical Washout.
Longtermists generally think that the extinction of our species would be bad for the longterm future. There are trillions of people who might have led happy lives who won't do so if we wipe ourselves out in the next few centuries!
But is this so clear?
Here's one argument against it: We humans love our technology. It's our technology that creates the big existential risks of human extinction. Maybe the best thing for the longterm future is for us to extinguish ourselves as expeditiously as possible, so as to clear the world for another species to replace us -- one that, maybe, loves athletics and the arts but not technology quite so much. Some clever descendants of dolphins, for example? Such a species might have a much better chance than we do of actually surviving a billion years. The sooner we die off, maybe, the better, before we wipe out too many more of the lovely multicellular species on our planet that have the potential to eventually replace and improve on us.
Here's another argument: Longtermists like MacAskill and Toby Ord typically think that these next few centuries are an unusually crucial time for our species -- a period of unusual existential risk, after which, if we safely get through, the odds of extinction fall precipitously. (This assumption is necessary for their longtermist views to work, since if every century carries an independent risk of extinction of, say, 10%, the chance is vanishingly small that our species will survive for millions of years.) What's the best way to tide us through these next few especially dangerous centuries? Well, one possibility is a catastrophic nuclear war that kills 99% of the population. The remaining 1% might learn the lesson of existential risk so well that they will be far more careful with future technology than we are now. If we avoid nuclear war now, we might soon develop even more dangerous technologies that would increase the risk of total extinction, such as engineered pandemics, rogue superintelligent AI, out-of-control nanotech replicators, or even more destructive warheads. So perhaps it's best from the longterm perspective to let us nearly destroy ourselves as soon as possible, setting our technology back and teaching us a hard lesson, rather than blithely letting technology advance far enough that a catastrophe is more likely to be 100% fatal.
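To see why that parenthetical matters, here's a quick back-of-the-envelope check (a sketch that treats the 10%-per-century figure as constant and independent, which is of course itself an assumption):

```python
import math

# Sketch: with an independent 10% extinction risk each century,
# survival probabilities shrink multiplicatively.
per_century_survival = 0.9  # i.e., a 10% chance of extinction per century

for centuries in (10, 100, 10_000):  # 1,000 years; 10,000 years; 1,000,000 years
    log10_p = centuries * math.log10(per_century_survival)
    print(f"{centuries:>6} centuries: survival probability ~ 10^{log10_p:.1f}")

# Roughly 0.35 after 1,000 years, about 3 in 100,000 after 10,000 years,
# and around 10^-458 after a million years -- vanishingly small, as claimed.
```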
Look, I'm not saying these arguments are correct. But in my judgment they're not especially less plausible than the other sorts of futurist forecasting that longtermists engage in, such as the assumption that we will somehow see ourselves safely past catastrophic risk if we survive the next few centuries.
The lesson I draw is not that we should try to destroy or nearly destroy ourselves as soon as possible! Rather, my thought is this: We really have no idea what the best course is for the very longterm future, millions of years from now. The best course might be something we find intuitively good, like world peace and pandemic preparedness, or something we find intuitively horrible, like human extinction or nuclear war.
If we could be justified in thinking that it's 60% likely that peace in 2023 is better than nuclear war in 2023 in terms of its impact on the state of the world over the entire course of the history of the planet, then the longtermist logic could still work (bracketing the infinite version of the Washout Argument). But I don't think we can be justified even in that relatively modest commitment. Regarding what actions now will have a positive expected impact on the billion-year future, I think we have to respond with a shoulder shrug. We cannot use billion-year expectations to guide our decisions.
Even if you don't quite want to shrug your shoulders, there's another way the finite Washout Argument can work. Call this Negligible Probability Washout.
Let's say you're considering some particular action. You think that action has a small chance of creating an average benefit of -- to put a toy number on it -- one unit to each future person who exists. Posit that there are a trillion future people. Now consider: how small is that small chance? If it's less than one in a trillion, then on a standard consequentialist calculus, it would be better to create a sure one-unit benefit for one person who exists now.
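For concreteness, here's that toy comparison in a few lines of Python (the trillion-person population and the one-unit benefits are the toy numbers above; the probabilities plugged in at the end are hypothetical examples of mine):

```python
# Toy expected-value comparison: a chancy benefit to a trillion future people
# versus a sure one-unit benefit to one person now.
future_people = 1e12
benefit_per_person = 1.0
sure_benefit_now = 1.0

breakeven_probability = sure_benefit_now / (future_people * benefit_per_person)
print(breakeven_probability)  # 1e-12 -- below this, the sure present benefit wins

def far_future_option_wins(p):
    # Standard consequentialist calculus: compare expected values.
    return p * future_people * benefit_per_person > sure_benefit_now

print(far_future_option_wins(1e-13))  # False: the chance is too slim
print(far_future_option_wins(1e-7))   # True: the far-future bet would win
```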
What are reasonable odds to put on the chance that some action you do will materially benefit a trillion people in the future? To put this in perspective, consider the odds that your one vote will decide the outcome of your country's election. There are various ways to calculate this, but the answer should probably be tiny: one in a hundred thousand at most (if you're in a swing state in a close U.S. election), maybe one in a million, one in ten million, or even less. That's a very near-term event, whose structure we understand. It's reasonable to vote on those grounds, by the utilitarian calculus. If I think that my vote has a one in ten million chance of making my country ten billion dollars better off, then -- if I'm right -- my vote is a public good worth an expected $1000 (ten billion times one in ten million).
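Spelled out as a calculation (using the post's own toy figures for the vote):

```python
# The voting example as a one-line expected-value calculation.
p_decisive = 1 / 10_000_000           # one-in-ten-million chance my vote decides the outcome
benefit_if_decisive = 10_000_000_000  # a ten-billion-dollar difference to the country
print(p_decisive * benefit_if_decisive)  # 1000.0 -- an expected $1000 public good
```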
My vote is a small splash in a very large pond, though a splash worth making. But the billion-year future of Earth is a much, much larger pond. It seems reasonable to conjecture that the odds that some action you do now will materially improve the lives of trillions of people in the future should be many orders of magnitude lower than one in a million -- low enough to be negligible, even if (contra the first part of this argument) you can accurately predict the direction.
On the Other Hand, the Next Few Centuries
... are (moderately) predictable! Nuclear war would be terrible for us and our immediate descendants. We should care about protecting ourselves from pandemics, and dangerous AI systems, and environmental catastrophes, and all those other things that the longtermists care about. I don't in fact disagree with most of the longtermists' priorities and practical plans. But the justification should be the long term future in the more ordinary sense of "long term" -- fifteen years, fifty years, two hundred years, not ten million years. Concern about the next few generations is reason enough to be cautious with the world.
[Thanks to David Udell for discussion.]
Your position versus longtermism seems similar to that of rule utilitarianism versus act utilitarianism.
While one can endorse act utilitarianism as the ultimate basis for action, given the uncertainties of the consequences of a specific act, for practical purposes we rely on rule utilitarianism.
I googled Jacob Barandes to see what he has found about physics and knowledge...
... with the googled question: How does an electron know it's being observed?...
This seems to me the state of affairs of philosophy today...
...that knowing and knowledge are different activities for us humans...
Throw in understanding...
...and 'evolution in/of time' may begin to appear...
Really great reads...
[Posting this comment in two parts, to fit within the character limit]
I agree that there's huge uncertainty over the long-term consequences of almost any action you take, such that in almost all respects it's basically hopeless to try and predict even the sign of things. (And that things get very murky with unbounded utility functions, especially once you add uncertainty over your world model.)
But I think you're mischaracterizing the longtermist arguments for pursuing certain courses of action in the 21st century. The claim is not that we can work out, with high certainty, which butterfly wing flaps will end up averting more hurricanes than they cause. The (much weaker and more plausible, in my view) claim is specifically that (1) human extinction this century would be extremely bad and (2) that we have some nonzero information right now about how to reduce its chances.
1. Human extinction this century would be extremely bad
As you point out, we have a lot of moral uncertainty over how things should go in the future - perhaps a more grown-up human civilization would decide that the best course of action involves things that seem bizarre or wrong to us intuitively.
But preventing extinction now is not committing to any one course of action! It's buying us time to figure out what it is we want, what we value, what the right course of action is, how our choices will affect things billions of years down the line. Maybe human existence is very bad for the world in ways we don't understand, and it'll take us 10,000 years to work that out. But those extra 10,000 years seem quite a small price to pay when there's a good chance we might have wanted to continue on for 10^100.
Not going extinct gives us an option on the future, one we can choose to exercise or not when we're older and wiser as a species. Unilaterally denying that choice to our potential descendants, to figure out how the future should go with much better information than we have now, seems to me a much more rash choice than extending our current state of affairs forward another few thousand years to figure stuff out.
2. We have some nonzero information about how to reduce the odds of extinction
Predicting the future even a few years out is tough, but I don't think it's so intractable that we need to throw our hands up in defeat. I'll happily defend 60+% confidence in the truth of claims like "Careful non-capabilities-advancing research into AI alignment reduces the odds of human extinction this century" or "Creating far better PPE than we have in 2022 will make it more likely that extremely virulent pandemics don't kill everyone". A thriving human civilization is very hard to predict out a billion years, but a dead one is very easy - if I want to bring down the chance of the world where everyone is dead in 10 billion years, I can straightforwardly do that by making it less likely that everyone is dead in 2040.
I think "the assumption that we will somehow see ourselves safely past catastrophic risk if we survive the next few centuries" is a pretty reasonable one. Perhaps it ends up being the case that in the limit of technological capability and cultural development, offense is fundamentally easier than defense, and single actors will be able to unilaterally bring down stable systems. But perhaps not - I would personally put well over 80% on the existence of stable enduring ways for the world to be that are extremely unlikely to destroy themselves - and if there are such possibilities, it seems to me that we stand a good chance of finding them given some time to work things out.
I agree that in the absence of further information, you should expect that your odds of affecting extremely large groups of people substantially are quite small. But we don't have an absence of further information - there are all these giant, glaring, dangerous problems that we can go make material progress on! The pathways to reducing extinction risk don't come from armchair reasoning, but from people looking at the world and seeing concrete ways we might prevent that from happening.
I find the focus on preventing human extinction a bit peculiar. For me, any appeal of longtermism is based on the assumption that humans will exist in the distant future and that our actions should be aimed at minimizing their suffering and maximizing their welfare, or something of that ilk. Merely assuring their existence, independent of the quality of their lives, has a religious quality.
Would longtermists focus on preventing human extinction in the distant future if there were reason to believe that life would not be worth living?
It's not even clear to me that the next few centuries are particularly predictable. Historically, it seems like predictions made more than a few decades out tend to be increasingly wrong. The predictions are typically dominated by the preoccupations of whatever time they're made in.
Overall, my impression is that longtermers are rationalizing things they want to see happen but can't find any reasonably foreseeable justifications for.
Thanks for the comments, everyone!
Dan: Yes, I see a relation in the skeptical motivations for both.
Arnold: Thanks for the kind words, and of course the "measurement problem" is a huge morass right at the core of philosophy of quantum mechanics.
Drake: Thanks for the detailed comment! I agree with your Point 2, so I'll focus on Point 1. You write, "But preventing extinction now is not committing to any one course of action! It's buying us time to figure out what it is we want, what we value, what the right course of action is, how our choices will affect things billions of years down the line." I agree, but I think that statement is incomplete, since it could be *costing* some better species time for the same types of reflection -- or rather, not just time, but the very chance at existence. I hope it's obvious that I don't actually support human extinction, but I think that my reasoning in this part of the argument stands: The longer we take to go extinct (as I think we probably will, and within a few hundred or few thousand years), the more likely it is that we will have done irreparable harm to the planet, such as wiping out good candidate replacement species -- harm that will prevent a more durable species from taking our place. So although what you say is correct, the benefits of giving ourselves more time must be placed in the balance against the downsides of giving ourselves more time.
Daniel: Yes, that's a good point. To be fair to MacAskill, he addresses that question explicitly and makes a case that future humans' lives would likely overall be worth living (because our lives are and because it's likely that the technological potential to improve lives will stay at present levels or higher -- though of course that's just a best guess).
SelfAware: I suspect that it's salient to us when predictions about the future go wrong, but of course many go right. For example, the writers of the Constitution of the United States anticipated that there would still be countries in 200 years, and that it would make sense to have some form of representative government. Benjamin Franklin invested $1000 to collect interest and be paid out for the public benefit after 200 years of accumulation, and that turned out pretty well. Early works of science fiction in the 1700s that portrayed the distant future imagined that cities would still have horses, which was wrong, but they also imagined that people would want transportation, care about clothing, get in arguments with each other, and live in a society of unequal wealth. Lots has *not* changed. While I can't be totally sure, I do feel like it's a good guess that a large-scale nuclear war next year would be something that future humans, if they exist, would look back on with regret.
I confess ignorance of the term. But the premise for pursuit of it just does not feel right, somehow. Planning and organization trump stochasticism in any real-world setting. But not everything undertaken can be assumed or expected to achieve desired ends... that is pie-in-the-sky enthusiasm, not real-world reality.
ReplyDeleteHi Eric
Do you doubt human survival in the centuries and millennia ahead? Is that by some kind of Asimovian Psychohistory?
Is it simply making the case that we are, or some of us are, naturally violent, and will acquire even more violent tools? Some, like Bostrom, make that argument; but I'm not sure his facts are facts by fiat, ceteris paribus.
I think these arguments confuse the future with history - it makes for lively conversation, but even statistically the centuries ahead are in principle unknowable.
Longtermism seems to be the complete opposite of anti-natalism. It asserts a strong obligation to the wellbeing of a huge number of future generations. It further seems to assert an obligation for procreation. Presumably, such a position would have an answer to anti-natalism, given the enormous harm anti-natalism implies.
If anti-natalism is correct, or even has a tiny probability of being correct, then procreation should be approached with caution.
I am not sure whether anti-natalists like Benatar believe that human life could evolve to warrant abandoning their position.
I have a couple of quibbles. One is with the model of causation and its interaction with utility across cosmological times. You assume some strong non-linearity cum classical chaos in the propagation of utility into the future from any single event, so ending up with actual infinities. At one level, this is just the usual Knightian uncertainty argument against utilitarianism or any kind of planning, so we don't actually need the infinities - as you comment later, the immediate extinction of humanity _could_ be a long term gain for universal goodness depending on how you specify utility. Ditto my becoming quadriplegic after being hit by falling SpaceX debris - maybe my net long term life satisfaction might be increased!
But in reality, much of the evolution of physics seems pretty regular, even if the fine details are completely indeterministic. So it seems just as attractive to assume a predictable moral progress made up of the massed actions of billions of well-intentioned people (including post-humans). So we must merely act in such a way as to maximize the opportunity of those future generations to do well (ObSF, you might see me as channelling Graydon Saunders here).
Even 'indefinitely' is a presumption of persistent effects...
...but from affects there may be room to wonder...
The affects of quanta and qualia and in between...
...what in movement and change is up to us...
Philosophies of psychology and mind for...
...phenomena of wonder... from affects of persistence...
Consider the man with a loaded gun in his mouth, who is typically too bored by the gun to bother discussing it. Rational?
Now consider the man who is willing to discuss the gun in sweeping philosophical terms, but expresses little to no interest in practical steps for removing the gun. Rational?
This is who is conjuring up theories about longtermism and other such philosophies: we who have thousands of hydrogen bombs aimed down our own throats, we who consider practical discussions of how to remove that gun to be mere politics largely unworthy of our attention.
If we are unable or unwilling to understand ourselves and our relationship of denial with the existential threat that we face today, it's hard to imagine why anything we might have to say about the longterm should be considered credible.
Critical thinking involves not only how we approach any particular topic, but also the ability to prioritize which topics most merit our attention.
What is the rational argument for prioritizing any topic over the gun that is in our mouth right now today?
If we're unable to achieve clarity on that question there's no point in discussing the long term, because there isn't going to be one.
Regarding the book "What We Owe the Future", I think reduce-suffering has a rather theoretical essay on erasing past suffering. The basic premise is "why should humans/living beings in the future merit our help more than those in the past?"
Food for thought! Regardless of whether it's possible or not (to affect the past)!