Wednesday, December 20, 2023

The Washout Argument Against Longtermism

I have a new essay in draft, "The Washout Argument Against Longtermism". As always, thoughts, comments, and objections welcome, either as comments on this post or by email to my academic address.

Abstract:

We cannot be justified in believing that any actions currently available to us will have a non-negligible positive influence on the billion-plus-year future. I offer three arguments for this thesis.

According to the Infinite Washout Argument, standard decision-theoretic calculation schemes fail if there is no temporal discounting of the consequences we are willing to consider. Given the non-zero chance that the effects of your actions will produce infinitely many unpredictable bad and good effects, any finite effects will be washed out in expectation by those infinitudes.

According to the Cluelessness Argument, we cannot justifiably guess what actions, among those currently available to us, are relatively more or less likely to have positive effects after a billion years. We cannot be justified, for example, in thinking that nuclear war or human extinction would be more likely to have bad than good consequences in a billion years.

According to the Negligibility Argument, even if we could justifiably guess that some particular action is likelier to have good than bad consequences in a billion years, the odds of good consequences would be negligibly tiny due to the compounding of probabilities over time.

For more details see the full-length draft.

A brief, non-technical version of these arguments is also now available at the longtermist online magazine The Latecomer.

[Midjourney rendering of several happy dolphins playing]

Excerpt from full-length essay

If MacAskill’s and most other longtermists’ reasoning is correct, the world is likely to be better off in a billion years if human beings don’t go extinct now than if human beings do go extinct now, and decisions we make now can have a non-negligible influence on whether that is the case. In the words of Toby Ord, humanity stands at a precipice. If we reduce existential risk now, we set the stage for possibly billions of years of thriving civilization; if we don’t, we risk the extinction of intelligent life on Earth. It’s a tempting, almost romantic vision of our importance. I also feel drawn to it. But the argument is a card-tower of hand-waving plausibilities. Equally breezy towers can be constructed in favor of human self-extermination or near-self-extermination. Let me offer....

The Dolphin Argument. The most obvious solution to the Fermi Paradox is also the most depressing. The reason we see no signs of intelligent life elsewhere in the universe is that technological civilizations tend to self-destruct in short order. If technological civilizations tend to gain increasing destructive power over time, and if their habitable environments can be rendered uninhabitable by a single catastrophic miscalculation or a single suicidal impulse by someone with their finger on the button, then the odds of self-destruction will be non-trivial, might continue to escalate over time, and might cumulatively approach nearly 100% over millennia. I don’t want to commit to the truth of such a pessimistic view, but in comparison, other solutions seem like wishful thinking – for example, that the evolution of intelligence requires stupendously special circumstances (the Rare Earth Hypothesis) or that technological civilizations are out there but sheltering us from knowledge of them until we’re sufficiently mature (the Zoo Hypothesis).

Anyone who has had the good fortune to see dolphins at play will probably agree with me that dolphins are capable of experiencing substantial pleasure. They have lives worth living, and their death is a loss. It would be a shame if we drove them to extinction. Suppose it’s almost inevitable that we wipe ourselves out in the next 10,000 years. If we extinguish ourselves peacefully now – for example, by ceasing reproduction as recommended by antinatalists – then we leave the planet in decent shape for other species, including dolphins, which might continue to thrive. If we extinguish ourselves through some self-destructive catastrophe – for example, by blanketing the world in nuclear radiation or creating destructive nanotech that converts carbon life into gray goo – then we probably destroy many other species too and maybe render the planet less fit for other complex life.

To put some toy numbers on it, in the spirit of longtermist calculation, suppose that a planet with humans and other thriving species is worth X utility per year, a planet with other thriving species but no humans is worth X/100 utility (generously assuming that humans contribute 99% of the value to the planet!), and a planet damaged by a catastrophic human self-destructive event is worth an expected X/200 utility. If we destroy ourselves in 10,000 years, the billion-year sum of utility is 10^4 * X + (approx.) 10^9 * X/200 = (approx.) 5 * 10^6 * X. If we peacefully bow out now, the sum is 10^9 * X/100 = 10^7 * X. Given these toy numbers and a billion-year, non-human-centric perspective, the best thing would be humanity's peaceful exit.
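For readers who want to check the arithmetic, here is a minimal sketch of that toy calculation in Python. Every number is a stipulated toy value from the paragraph above, not an empirical estimate.

# Toy billion-year utility comparison (stipulated values, not estimates).
X = 1.0                    # utility per year of a planet with humans and other thriving species
HORIZON = 10**9            # billion-year horizon, in years

u_with_humans = X          # humans plus other thriving species
u_no_humans = X / 100      # other thriving species, no humans
u_damaged = X / 200        # expected utility after a catastrophic human self-destruction

# Scenario 1: humans destroy themselves after 10,000 years.
t_catastrophe = 10**4
utility_catastrophe = t_catastrophe * u_with_humans + (HORIZON - t_catastrophe) * u_damaged

# Scenario 2: humans peacefully bow out now.
utility_peaceful_exit = HORIZON * u_no_humans

print(f"Self-destruction after 10,000 years: ~{utility_catastrophe:.1e} (approx. 5 * 10^6 * X)")
print(f"Peaceful exit now:                   ~{utility_peaceful_exit:.1e} (10^7 * X)")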

Now the longtermists will emphasize that there's a chance we won't wipe ourselves out in a terribly destructive catastrophe in the next 10,000 years; and even if it's only a small chance, the benefits could be so huge that it's worth risking the dolphins. But this reasoning ignores a counterbalancing chance: that if human beings stepped out of the way, a better species might evolve on Earth. Cosmological evidence suggests that technological civilizations are rare; but it doesn't follow that intelligent species are rare. There has been a general tendency on Earth, over long evolutionary time scales, for species with moderately high intelligence to emerge. This tendency toward increasing intelligence might continue. We might imagine the emergence of a highly intelligent, creative species that is less destructively Promethean than we are – one that values play, art, games, and love rather more than we do, and technology, conquest, and destruction rather less – descendants of dolphins or bonobos, perhaps. Such a species might have lives every bit as good as ours (though less visible to any ephemeral high-tech civilizations that might be watching from distant stars), and they and any like-minded descendants might have a better chance of surviving for a billion years than species like ours that toy with self-destructive power. The best chance for Earth to host such a species might, then, be for us humans to step out of the way as expeditiously as possible, before we do too much harm to complex species that are already partway down this path.

Think of it this way: Which is the likelier path to a billion-year happy, intelligent species: that we self-destructive humans manage to keep our fingers off the button century after century after century somehow for ten million centuries, or that some other more peaceable, less technological clade finds a non-destructive stable equilibrium? I suspect we flatter ourselves if we think it’s the former.

This argument generalizes to other planets that our descendants might colonize in other star systems. If there’s even a 0.01% chance per century that our descendants in Star System X happen to destroy themselves in a way that ruins valuable and much more durable forms of life already growing in Star System X, then it would be best overall for them never to have meddled, and best for us now to peacefully exit into extinction rather than risk producing descendants who will expose other star systems to their destructive touch.
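As a rough illustration of how that hypothetical 0.01%-per-century figure compounds over a billion years, here is a minimal sketch; the assumption that the risk is constant and independent from century to century is mine, added only to make the compounding concrete.

import math

# Compounding a small per-century catastrophe risk over a billion years.
# Assumes the 0.01%-per-century risk is constant and independent across centuries.
p_per_century = 0.0001      # 0.01% chance per century of ruinous self-destruction
centuries = 10**9 // 100    # a billion years is 10^7 centuries

# Work in log space to avoid floating-point underflow.
log_p_never = centuries * math.log1p(-p_per_century)
print(f"log probability of never causing the catastrophe: {log_p_never:.1f}")
# Roughly exp(-1000): the chance of a clean billion-year record is effectively zero.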

...

My aim with the Dolphin Argument... is not to convince readers that humanity should bow out for the sake of other species.... Rather, my thought is this: It’s easy to concoct stories about how what we do now might affect the billion-year future, and then to attach decision-theoretic numbers to those stories. We lack good means for evaluating these stories. We are likely just drawn to one story or another based on what it pleases us to think and what ignites our imagination.

11 comments:

Arnold said...

I try, for now, finding possibilities abound, limitlessness abounds...
...everything else, as is said, comes and goes...

Duncan Webb said...

I would be interested to hear your thoughts on how your arguments relate to the formal framework for the Washing Out Hypothesis that I described on the EA Forum.

https://forum.effectivealtruism.org/posts/z2DkdXgPitqf98AvY/formalising-the-washing-out-hypothesis

Paul D. Van Pelt said...

I had to take a look at a definition of this in order to form any opinion(s). I have read a bit about altruism lately: its connection with philanthropy, and so on. The washout notion seems sound if, but not only if, Murphy's prime directive is sound. If anything can go wrong and will, at the worst possible time, then longtermist altruism is a self-defeating proposition, a priori. To my intuition, longtermist altruism = washout, all good intentions notwithstanding. Thanks, Eric!

Eric Schwitzgebel said...

Thanks for the comments, folks!

Duncan: Thanks for the link. For the finite case, I think I'm okay with the formal model, as a first-pass reaction, insofar as I'm okay with formal models in general. *However*, I do also think that formal models over long time horizons are more likely to mislead than enlighten, so from the perspective of higher-order epistemology, I might resist approaches of this sort in general. For the infinite case, things are more complicated. I think it's reasonable to have a more complex model in which you expect both good *and* bad effects of your actions, with some non-zero probability for virtually any action. Then the model sums to negative infinity plus positive infinity, which can't be evaluated. Various ways of trying to formally dodge this result appear also to fail. For more detail on that point see here:
https://faculty.ucr.edu/~eschwitz/SchwitzAbs/Infinitude.htm
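A toy illustration of why the infinite case resists evaluation, using floating-point infinities to stand in for the unboundedly good and bad terms (the 50/50 split is an arbitrary placeholder, not anything from the model):

# If an action has non-zero probability of both infinitely good and infinitely bad effects,
# its expected value takes an (infinity minus infinity) form, which is undefined.
p_good, p_bad = 0.5, 0.5                    # arbitrary placeholder probabilities
expected_value = p_good * float("inf") + p_bad * float("-inf")
print(expected_value)                       # nan -- the expectation cannot be evaluated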

Alex Popescu said...

Kind of a tangent, but with respect to the dolphin argument I'm surprised you consider the super-late filter hypothesis to be the most plausible solution to the Fermi paradox. Surely a much more likely answer is that life is just really rare, on account of the improbability of life formation via abiogenesis? There is still radical uncertainty regarding the nature of abiogenesis, so we just don't know to what degree there might have existed intermediate steps to increase the likelihood of protein formation and other complex structures. It might be that the intermediate steps aren't that great, and you still get something like a 1 in 10^43 chance of life starting on an Earth-like planet, making us almost certainly the only life in the observable universe.

The "life is rare" hypothesis also seems way better because the improbabilities have the potential to be so much higher in comparison. It's very hard to believe that only 1 in 10^43 civilizations don't go extinct, but not at all hard to believe that only 1 in 10^43 earth-like planets develop life.

That said, here's an excellent article by David Kipping which argues that the early emergence of life on Earth provides at least some evidence (though by no means conclusive) against the early filter hypothesis: https://www.pnas.org/doi/10.1073/pnas.1921655117

Arnold said...

Goldbach's conjecture comes to mind about lack of oneness or wholeness...

Because our sensing and our evolution appear as in conflict of purpose...

We do have some observation, though, which can allude to oneness...

Eric Schwitzgebel said...

Thanks for the continuing comments, folks!

Alex: Thanks for the link. Right, it's hard to evaluate the improbability of life formation. *Maybe* it's vastly improbable. But, right, its early occurrence on Earth is a weak Bayesian consideration in favor of its not being too difficult. And amino acids are easy to form, and it's not so hard to imagine some kind of chemical evolution among amino acids, given the vast number of opportunities on early Earth. Also, the late filter hypothesis can be conjoined with other hypotheses, such as that technological life isn't always easy to detect. The various hypotheses can complement each other, of course.

Arnold said...

Gravitational biology:

https://www.bing.com/search?q=gravity%20of%20evolution&FORM=ARPSEC&PC=ARPL&PTAG=30122#

https://plato.stanford.edu/search/r?entry=/entries/biology-developmental/&page=1&total_hits=553&pagesize=10&archive=None&rank=2&query=gravitational%20biology

Arnold said...

I forgot to make a comment...
...does everything become more sensible when we include the gravity of our situation...

jmvu said...

Your dolphin argument (why risk 'volatile' human ascendance, when it's very possible that other sentient and intelligent [not necessarily sapient per se] species will derive a lot of eudaimonia living here on Earth for a billion years?) is potent, but I think it can be immediately challenged by an argument that serves as a near-coup de grâce:

It may be that humans (or our biological/genetic descendant species) are not the best arbiters of long-term eudaimonia (note that I am not a longtermist, nor do I refer to longtermism here; but even to a non-longtermist it can be a factor, though of course not the all-determining singular one), because the tech we create leads to volatile outcomes. Additionally, you can make the claim that we as a species are actually psychologically quite violent and vengeful, and the claim that we willfully and completely disregard the welfare of other species (see factory farming), *despite* intellectual/moral awareness, is not even an item of discussion.

But all of this matters not, because if you allow such a volatile, techy species as humans to "take the reins", it means that 100,000 planets across the galaxy will be colonized, whereas more comfortable and easygoing species, like your example of some sapient bonobo successors, might permanently content themselves with establishing a global Earth civilization with Renaissance-style conditions, where conflicts are only waged in courtrooms or in Venetian carnival-like ballrooms.

It genuinely matters that there would then be 100,000 such planets, rather than just one or a few. It can't be intellectually reduced away by making the discussion sufficiently abstract.

Such a pro-human-expansion advocate might say: it doesn't matter under what parameters the expansion of sentient life beyond Earth is initially achieved, *as long as it happens*. In fact, if you claim humans are unsuitable and tend to self-destruct, then, as long as it's a certain kind of self-destruction, humans dying away once their work is completed and the galaxy is peopled (or dying out at a stage before that, but with the trajectory already "on the rails" toward it) is more of a feature than a bug.

Now, of course, I didn't gloss over the fact that your statement really must be understood as counting separately for every life-bearing (terraformed) star system. But the point of challenge is that it's a non-trivial fact that we very possibly will NOT achieve an expansion beyond Earth *unless* a species like humans is the vehicle, in order to get over the initial speedbump (the local maximum of difficulty, which can, under one hypothesis, be assumed to be insurmountable for non-techy dolphin, bonobo, et al. species).

Eric Schwitzgebel said...

Thanks for this thoughtful comment, jmvu. I meant to implicitly address this argument with the following reasoning:

"This argument generalizes to other planets that our descendants might colonize in other star systems. If there’s even a 0.01% chance per century that our descendants in Star System X happen to destroy themselves in a way that ruins valuable and much more durable forms of life already growing in Star System X, then it would be best overall for them never to have meddled, and best for us now to peacefully exit into extinction rather than risk producing descendants who will expose other star systems to their destructive touch."

The argument about colonization might work if complex, non-technological life is rare, so that most target star systems (or other locales) are devoid of life. But if even one in a hundred star systems hosts complex, non-technological life, then my generalization might plausibly apply. Of course, this is difficult to estimate.