Thursday, April 13, 2023

The Black Hole Objection to Longtermism and Consequentialism

According to consequentialism, we should act to maximize good consequences. According to longtermism, we should act to benefit the long-term future. If either is correct, then it would be morally good to destroy Earth to seed a new, life-supporting universe.

Hypothetically, it might someday become possible to generate whole new universes. Some cosmological theories, for example, hypothesize that black holes seed new universes -- universes causally disconnected from our own universe, each with its own era of Big-Bang-like inflation, resulting in vastly many new galaxies. Maybe our own universe is itself the product of a black hole in a prior universe. If we artificially generate a black hole of the right sort, we might create a whole new universe.

Now let's further suppose that generating such black holes is catastrophically expensive or dangerous: the only way to create a new black-hole-seeded universe is to sacrifice Earth. Maybe to do it, we need to crash Earth into something else, or maybe the black hole needs to be large enough that it swallows us up rather than harmlessly dissipating.

So there you are, facing a choice: Flip a switch and you create a black hole that destroys Earth and births a whole new universe, or don't flip the switch and let things continue as they are.

Let's make it more concrete: You are one of the world's leading high-energy physicists. You are in charge of a very expensive project that will be shut down tomorrow and likely never repeated. You know that if tonight you launch a certain process, it will irreversibly create a universe-generating black hole that will quickly destroy Earth. The new universe will be at least the size of our own universe, with at least as many galaxies abiding by the same general laws of physics. If you don't launch the process tonight, it's likely that no one in the future ever will. A project with this potential may never be approved again before the extinction of humanity, or if it is, it will likely have safety protocols that prevent black holes.

[Image: Midjourney rendition of a new cosmos exploding out of the back of a black hole]

If you flip the switch, you kill yourself and everyone you know. You break every promise you ever made. You destroy not only all of humanity but every plant and animal on Earth, as well as the planet itself. You destroy the potential of any future biological species or AI that might replace or improve upon us. You become by far the worst mass murderer and genocidaire that history has ever known. But... a whole universe worth of intelligent life will exist that will not exist if you don't flip the switch.

Do you flip the switch?

From a simple consequentialist or longtermist perspective, the answer seems obvious. Flip the switch! Assume you estimate that the future value of all life on, or deriving from, Earth is X. Under even conservative projections about the prevalence of intelligent life in galaxies following laws like our own, the value of a new universe should be at least a billion times X. If we're thinking truly long term, launching the new universe seems to be by far the best choice.

Arguably, even if you think there's only a one in a million chance that a new universe will form, you ought to flip that switch. After all, here's the expected value calculation:

  • Flip switch: 0 + 0.000001 * 1,000,000,000X = 1000X.
  • Don't flip switch: X + 0 = X.
(In each equation, the first term reflects the expected value of Earth's future given the decision and the second term reflects the expected value generated or not generated in the seeded universe.)

Almost certainly, you would simply destroy the whole planet, with no compensating good consequences. But if there's a one in a million chance that by doing so you'd create a whole new universe of massive value, the thinking goes, it's worth it!
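
To spell the arithmetic out a bit more explicitly, here is a minimal sketch of the comparison, using only the stipulated numbers above (a one-in-a-million chance of seeding and a new universe worth a billion X); the variable names are mine, purely for illustration.

```python
# Back-of-envelope expected values, in units of X (the estimated future value
# of all life on, or deriving from, Earth). The probability and payoff below
# are the post's illustrative stipulations, not real estimates.

P_SEED = 1e-6            # one-in-a-million chance the black hole seeds a new universe
V_NEW_UNIVERSE = 1e9     # stipulated value of the new universe, in units of X
V_EARTH_FUTURE = 1.0     # Earth's future is, by definition, worth X

ev_flip = 0.0 + P_SEED * V_NEW_UNIVERSE   # Earth is lost for certain if you flip
ev_dont_flip = V_EARTH_FUTURE + 0.0       # Earth continues; no new universe

print(ev_flip, ev_dont_flip)              # 1000.0 vs. 1.0
```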

Now I'm inclined to think that it wouldn't be morally good to completely destroy Earth to launch a new universe, and I'm even more strongly inclined to think it wouldn't be morally good to completely destroy Earth for a mere one in a million chance of launching a new universe. I suspect many (not all) of you will share these inclinations.

If so, then either the consequentialist and longtermist thinking displayed here must be mistaken, or the consequentialist or longtermist has some means of wiggling out of the black hole conclusion. Call this the Black Hole Objection.

Could the consequentialist or longtermist wiggle out by appealing to some sort of discounting of future or spatiotemporally disconnected people? Maybe. But there would have to be a lot of discounting to shift the balance of considerations, and it's in the spirit of standard consequentialism and longtermism that we shouldn't discount distant people and the future too much. Still, a non-longtermist, highly discounting consequentialist might legitimately go this route.

Could the consequentialist or longtermist wiggle out by appealing to deontological norms -- that is, ethical rules that would be violated by flipping the switch? For example, maybe you promised not to flip the switch. Also, murder is morally forbidden -- especially mass murder, genocide, and the literal destruction of the entire planet. But the core idea of consequentialism is that what justifies such norms is only their consequences. Lying and murder are generally bad because they lead to bad consequences, and when the overall consequences tilt in the other direction, one should lie (e.g., to save a friend's life) or murder (e.g., to stop Hitler). So it doesn't seem like the consequentialist can wiggle out in this way. A longtermist needn't be a consequentialist, but almost everyone agrees that consequences matter substantially. And if the longtermist is committed to weighting long-term and short-term goods equally, this seems to be a case where the long-term goods would massively outweigh the short-term goods.

Could the consequentialist or longtermist wiggle out by appealing to the principle that we owe more to existing people than to future people? As Jan Narveson puts it, "We are in favor of making people happy, but neutral about making happy people" (1973, p. 80). Again, any strong application of this principle seems contrary to the general spirit of consequentialism and longtermism. The longtermist, especially, cares very much about ensuring that the future is full of happy people.

Could they wiggle out by suggesting that intelligent entities, on average, have zero or negative value, so that creating more of them is neutral or even bad? For example, maybe the normal state of things is that negative experiences outweigh positive ones, and most creatures have miserable lives not worth living. This is either a dark view on which we would be better off never having been born, or a view on which humanity somehow luckily has positive value despite the miserable condition of the space aliens. The first option seems too dark (though check out Schopenhauer) and the second unjustified.

Could they wiggle out by appealing to infinite expectations? Maybe our actions now have infinite long-term expected value, through their unending echoes through the future universe, so that adding a new positive source of value is as pointless as trying to sum two infinitudes into a larger infinitude. (Infinitudes come in different cardinalities, but one generally doesn't get larger infinitudes by summing two of them.) As I've argued in an earlier post, this is more of a problem for longtermism and consequentialism than a promising solution.
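
As a side note, the cardinal-arithmetic fact gestured at in that parenthesis can be stated precisely. A minimal LaTeX rendering, assuming the axiom of choice (an assumption the post does not discuss), would be:

```latex
% Adding two infinitudes never yields a strictly larger infinitude
% (assuming the axiom of choice): cardinal addition is absorptive.
\[
  \aleph_0 + \aleph_0 = \aleph_0,
  \qquad
  \kappa + \lambda = \max(\kappa, \lambda) \quad \text{for infinite } \kappa, \lambda.
\]
```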

Could they wiggle out by appealing to risk aversion -- that is, the principle of preferring outcomes with low uncertainty? Maybe, but the principle is contentious and difficult to apply. Too strict an application of it is probably inconsistent with longtermist thinking. The long-term future is highly uncertain, and thus risk aversion seemingly justifies its sacrifice for more certain short-term goods. (As with discounting, this escape might be more available to a consequentialist than a longtermist.)

Could they wiggle out by assuming a great future for humanity? Maybe humanity will someday populate the universe far beyond Earth, which would substantially increase the value of X. Let's generously assume that if we populate the universe far beyond Earth, the value of our descendants' lives equals the value of the whole universe you could create tonight by generating a black hole. Even so, given the substantial uncertainty about whether humanity will have so great a future, you should still flip the switch. Suppose you think there's a 10% chance. The expectations then become .1*X (don't flip the switch) vs. X (flip the switch). Only if you think it more likely that humanity has that great future than that the black hole generates some other species, or set of species, whose value is comparable to that hypothetical future would it make sense to refrain from flipping the switch.
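
A minimal sketch of that comparison, under the paragraph's two stipulations (a 10% credence in the great human future, and that future generously valued as highly as the new universe), might look like this; again, the names are mine and purely illustrative.

```python
# Expected values under the "great future for humanity" assumption, in units of
# the stipulated value of the new universe. Both numbers below come from the
# paragraph above; nothing here is an independent estimate.

P_GREAT_FUTURE = 0.10    # credence that humanity populates the universe far beyond Earth
V_GREAT_FUTURE = 1.0     # generously set equal to the value of the new universe
V_NEW_UNIVERSE = 1.0     # the universe seeded (treated here as certain) if you flip

ev_dont_flip = P_GREAT_FUTURE * V_GREAT_FUTURE   # 0.1
ev_flip = V_NEW_UNIVERSE                         # 1.0

print(ev_dont_flip, ev_flip)                     # 0.1 vs. 1.0 -- flipping still wins
```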

If we add the thought that our descendants might generate black holes, which generate new universes which generate new black holes, which generate new universes which generate new black holes, and so on, then we're back into the infinite expectations problem.

Philosophers are creative! I'm sure there are other ways the consequentialist or longtermist could try to wiggle out of the Black Hole Objection. But my inclination is to think the most natural move is for them simply to "bite the bullet": Admit that it would be morally good to destroy Earth to seed a new cosmic inflation, then tolerate the skeptical looks from those of us who would prefer you not to be so ready (hypothetically!) to kill us in an attempt to create something better.

23 comments:

Howard said...

The problem with consequentialism is that it can only reasonably apply to the present and short term: my pleasure of reading your post is now gone and inconsequential.
I'd say this physicist will destroy all the great art and literature on Earth without being sure the new universe will have great art too, or that its nature will be as beautiful as ours, or that its physical laws will be as sublime as ours.
If we had a way to hand on all of the great things above through this wormhole, then maybe.
Also as an aside: maybe we can find a way to stretch out earth's life instead

Anonymous said...

This seems pretty similar to the utility monster argument (which, to be clear, I think is a devastating critique of consequentialism). If one hyper-hedonic utility monster is implausible, you can just hypothesize a planet full of monsters that watch the Earth and feel happy when we suffer and vice versa. If the planet is big enough, it follows absurdly that we are obliged to please the monsters. The black hole argument is more physically plausible than a planet of sadistic voyeur aliens, but you can increase the plausibility of the monsters by saying we live in a VR world and they're watching us or some such. Or maybe they're Boltzmann brains and there are an infinite number of them over the infinite expanse of time…

Eric Schwitzgebel said...

Thanks for the comments, Howard and Carl!

Howard: Stretching out Earth's life sounds good. On your main point: Is it the uncertainty or the distance, or both, that carries the weight in your thinking? If uncertainty, then risk aversion might apply; if distance, then discounting might apply.

Carl: Yes, I'm inclined to agree about utility monsters, and I think you're right that there's a similarity between the two arguments.

P.D. Magnus said...

Another possible reply is to deny that the probabilities of what happens behind black holes are sufficiently objective to support this kind of calculation. Even if the evidence favors belief in black-hole-backside multiverses, it does not narrow down the numerical value enough. It will too much reflect whatever subjective priors you started out with.
If we stipulate in the thought experiment that the probabilities are tightly constrained by evidence, then it is far enough from the real world that I'm not sure what my intuitions are. I'm not even sure what the evidence could look like which would sufficiently establish probabilities for what goes on inside black holes.

P.D. Magnus said...

Just to follow up: If that reply works for the black hole case, then the longtermist would still need to show that probabilities about the distant future are more constrained by evidence.

Eric Schwitzgebel said...

Thanks for those thoughts, P.D.! My inclination is to read the standard version of longtermism and consequentialism as relying on warranted credences rather than objective probabilities, since credences are what feed naturally into decision-theoretical calculations. You're right that objective probabilities might be another way to go, though. Taking up that idea, I offer this dilemma: If we're cautious about assigning objective probabilities, then we are left without warrant for launching the black hole but also without warrant for planning a long-term future, which is also hard to evaluate with objective probabilities (as I think you are suggesting in your second comment). If we're liberal about assigning objective probabilities, then we might allow our best physical theory to guide us, and then it's not a remote possibility that the best physical theory says that black holes of a certain sort are ~100% likely to seed universes.

chinaphil said...

As with the last Nick Riggle post, I think this argument understates the importance of our imperfect knowledge. You suggest, "there's only a one in a million chance that a new universe will form" - but given how bad human brains are at dealing with large numbers, that probability could be out by 20 orders of magnitude. And perhaps just as important, I'm not sure that expected value is meaningful for single decisions. It may be simply the wrong calculation to do. The thought experiment begins by assuming we have knowledge good enough to make predictions, and I'd agree with you that *if* we have that kind of knowledge, we should bite the bullet. Real world comparison: doctor says operation has an X% chance of curing you and a Y% chance of killing you - this really happens, and individual patients decide whether they want to take those odds all the time. That's realistic. But we should be aware that on the Earth level, the thought experiment remains wild fantasy because we don't have any level of understanding of those odds.
My second reaction is, if we accept the premises of the thought experiment, there still seems to be something different about the possibility of exterminating all intelligent life. There seems to be a sense in which that risk is always too big... Perhaps because it's self-denying? If the black holes are barren, and the earth is gone, then there is no more moral value. Therefore the bet on greater moral value didn't just fail, but ended the game. By allowing the extermination of value arbiters, you preclude the possibility of value ever existing, so those barren universes are not just zero value, they're not even on the moral value scale... I'm not sure if this argument works, but something about the risk of complete extermination feels different to other kinds of risk. Not commensurable.

William S. Robinson said...

Thank you for the quote from Jan Narveson. I think your discussion is a reductio of any view that doesn't take his point to heart. (Not that it's easy to work out exactly how to draw the implied line!)

Paul D. Van Pelt said...

This is only a random aside. The first term you describe sounds appropriate for utilitarianism. Longtermism seems to coincide, in part, with a long view of events and histories, which is part of my notions about philosophy.

P.D. Magnus said...

Eric: I'm not sure exactly what you mean by "warranted credences". If my credences obey the axioms of probability and I update them by conditionalizing, then they are warranted in a sense. However, if I haven't gotten much evidence to update on, then my credences still mostly reflect my priors. One might insist that the evidence has been sufficient to wash out differences in priors, so that we consider probabilities in a range of the credences which would be held by almost all agents. And that's what I think is missing in the thought experiment. The uncertainties involved in theorizing about black holes are enormous.
So I think a version of my worry arises for subjective credences as much as for objective probabilities. (This is similar, I think, to chinaphil's first worry.)

Anonymous said...

Is this meant to be an argument for giving extra weight to the status quo? What position are you arguing for?

Eric Schwitzgebel said...

Thanks for the continuing comments, folks!

chinaphil: On your first point: If we accept standard decision theory, which is generally the default background approach of both consequentialism and longtermism, then we have to make do with our best guesses, even if we acknowledge that our subjective credences might drastically mismatch the "objective" chances (however those are conceptualized), so I think they are stuck with the problem as it is set up. On your second point: Yes, I'm inclined to agree that the destruction of all Earth is a qualitatively different catastrophe from even the destruction of a single life, which makes the usual utilitarian/longtermist ways of thinking a poor fit for the case.

Bill: I feel the pull of Narveson, but I'm inclined to moderate it. We owe much *more* to currently existing people than to future people. But complete neutrality about making happy people seems wrong. If there were no cost to creating a universe full of happy people, wouldn't it be good to do it?

Paul: "Longtermism" in the sense I use it isn't just long-term thinking in the ordinary sense of having, say, 20 or 200 years in view. It's the more radical view (e.g., in MacAskill) that our decisions now should be strongly influenced by expected outcomes for futures measured in millions or billions of years.

PD: I think if we're going to do standard decision theory, we're stuck with subjective credences, for all their weaknesses. This might be an argument against standard decision theory -- but then, all the worse for standard applications of consequentialism and longtermism, which rely on it.

Anon Apr 14: I'm arguing *against* standard-issue utilitarianism and longtermism, not for any particular position. I'm skeptical of ambitious, general moral theories, so I don't have a specific substitute in mind. But I do accept the general idea that we owe much more to those socially and spatiotemporally close to us, and to existing people, than to those who are very distant or who will exist in the future.

Howard said...

Why grant a mere physicist such potent powers? Wouldn't it be a democratic choice by a body such as the UN? Or a committee of physicists? If, moreover, a physicist is comparable to a physician, isn't he obliged to do no harm to his universe?

Arnold said...

For this largesse view of ethics...
...do we need to constate any relative's of time...

Wouldn't there be standard developments in the flipping of switches...
...to values observations, plus minus equals...

Anonymous said...

"I'm arguing *against* standard-issue utilitarianism and longtermism, not for any particular position. But I do accept the general idea that we owe much more to those socially and spatiotemporally close to us, and to existing people, than to those who are very distant or who will exist in the future."
That's what I thought but just wanted to make sure. I'm surprised we don't see more formal models of this. All models are wrong, but some are useful.

Philosopher Eric said...

I like how this thought experiment takes longtermists (who I think already believe ridiculous things about the potential longevity of humanity) and then also throws in all sorts of other scenarios that I also consider ridiculous, to potentially create a “gotcha” moment. But does this scenario bother longtermists? I’m not exactly sure it should. They might even resort to selective reason by saying something like, “Oh come on, in coming centuries scientists will probably conclude that what you’re talking about is essentially impossible.” Yes of course scientists should come to believe that, though shouldn’t they also believe that probability mandates that we or something else will kill us off in a timely manner, given an ever-present possibility of being killed off? Shouldn’t scientists grasp that our relatively new set of annihilation dice should not continue to be rolled with favorable results for thousands and thousands of years, let alone billions? We’ve only been rolling those dice for about a century.

Regarding consequentialism, it seems to me that this position is tailor made for moral parody given that it’s all about effect rather than cause. Here whenever evil actions lead to good results, evil must also be considered good. In a moral capacity how might one embarrass someone if they already formally advocate selective evil?

My suggestion for all would be to go beyond the notion of what’s moral (which is to say the theorized rightness and wrongness of human behavior), and instead address the mind-based nature of welfare in itself. Here what’s good/bad for any defined subject should amount to the magnitude of how good/bad it feels over a specific period of time, whether an individual or any number of them. Notice that the relatively hard behavioral science of economics is already founded upon this premise. Conversely, the quite soft behavioral science of psychology hasn’t yet made this or any conclusion regarding a founding motivational premise from which to potentially model human behavior. That seems in need of correction.

Paul D. Van Pelt said...

I like your approach, Eric. Pragmatic, seems to me. Terms we use (morality, axiology, deontology, etc.) are ways of talking about our human concepts, ideas, and ideals. I have asserted that those are altered or revised over time. Such alterations and revisions lead to some ludicrous notions which situate themselves in mass/popular culture. Those are, at bottom, outgrowths of interest, preference, and motive. This is not to say all such trends, fads, and the like are vacuous or even harmful. A few can prove useful. But, comparatively, I hold those are few indeed. Great thinking and execution, PE.

Arnold said...

Complaints have been posted reminding us 'even nothing is something'...
...like: 'top down values' and 'bottom up values', are, when aired, apparent...

Parent: "a group from which another arises", Merriam-Webster...
...thanks...

Philosopher Eric said...

Thanks Paul. Peer validation is a much coveted but rare gift for me. Still I do feel an obligation to furnish you with material from which to potentially recant… or even for us to get to know each other better. The following is a long recent discussion with a new friend. Then here’s another with some friends that I’ve known for quite a while. Either should better clue you in on the radical sorts of things that I happen to believe.

Tim Smith said...

I'm late to the party as usual.

From an anthropocentric view, we are already sacrificing our Earth. The main concern is that thinking we could seed anything is fantasy. Unfortunately, science is unequivocal, and science fiction is not. Not to comment on this misalignment in your post would be an oversight, if only a nit to pick, Eric, so I will do that here.

The idea that we could ever transit to Mars, much less build sustainable culture there, is questionable. Sending a human outside the Oort Cloud will never happen, at least not with sanity and ethical concern. Getting a human body to a meaningful place outside our solar system is impossible. I suppose Splintered Mind isn't a science blog, but I will defend these claims anywhere and be wrong somewhere. The raw sensibility here is off base and vital to killing the Utility Monster in the room.

The philosophical concern, whether based on scientific and partial knowledge or not, is that arguments like this are accelerating anthropocentric harm and functionally harming a greater good to be had here on Earth. I love me some NASA, but it is bought into the SpaceX and Blue Origin industry, which is self-serving and directed.

Granted, most of the space industry is pointed back to our Earth for now. The only long-term view of space that philosophically holds water (and that is the issue in the end) is to view the sky and the 14+ billion-year-old vestige we will never visit except in our heads.

Eric Schwitzgebel said...

I fear you're right, Tim!

Tim Smith said...

I regret it, if so Eric.

The majority of the damage has occurred in my lifetime, and responsibility is deferred by my colleagues, and even by the newest generation. Getting the affect right is important. Way more important than pointing the finger.