Last night, I finished Toby Ord's fascinating and important book, The Precipice: Existential Risk and the Future of Humanity. This has me thinking about "longtermism" in ethics.
I feel the pull of longtermism. There's something romantic in it. It's breathtaking in scope and imagination. Nevertheless, I'm against it.
Longtermism, per Ord,
is especially concerned about the impacts of our actions on the longterm future. It takes seriously the fact that our own generation is but one page in a much longer story, and that our most important role may be how we shape -- or fail to shape -- that story (p. 46).
By "longterm future", Ord means very longterm. He means not just forty years from now, or a hundred years, or a thousand. He means millions of years from now, hundreds of millions, billions! In Ord's view, as his book title suggests, we are on an existential "precipice": Our near-term decisions (over the next few centuries) are of crucial importance for the next million years plus. Either we will soon permanently ruin ourselves, or we will survive through a brief "period of danger", thereafter achieving "existential security", with the risk of self-destruction permanently minimal and humanity continuing onward into a vast future.
Given the uniquely dangerous period we face, Ord argues, we must prioritize the reduction of existential risks to humanity. Even a one in a billion chance of saving humanity from permanent destruction is worth a huge amount, when multiplied by something like a million future generations. For some toy numbers, ten billion lives times a hundred million years is 10^18 lives. An action with a one in a billion chance of saving that many lives has an expected value of 10^18 / 10^9 = a billion lives. Surely that's worth at least a trillion dollars of the world's economy (not much more than the U.S. annual military budget)? To be clear, Ord doesn't work through the numbers in so concrete a way, seeming to prefer vaguer and more cautious language about future value -- but I think this calculation is broadly in his spirit, and other longtermists do talk this way.
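To make the toy arithmetic fully explicit, here's a minimal sketch (using the same illustrative numbers as above, not Ord's own estimates):

```python
# Toy expected-value calculation -- illustrative numbers only, not Ord's.
future_lives = 10e9 * 100e6           # ten billion lives times a hundred million years ~ 1e18
p_save = 1e-9                         # a one-in-a-billion chance of averting permanent destruction
expected_lives_saved = p_save * future_lives
print(f"{expected_lives_saved:.0e}")  # ~1e+09: about a billion lives in expectation
```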
Now I am not at all opposed to prioritizing existential risk reduction. I favor doing so, including for very low risks. A one in a billion chance of the extinction of humanity is a risk worth taking seriously, and a one in a hundred chance of extinction ought to be a major focus of global attention. I agree with Ord that people in general treat existential risks too lightly. Thus, I accept much of Ord's practical advice. I object only to justifying this caution by appeal to expectations about events a million years from now.
What is wrong with longtermism?
First, it's unlikely that we live in a uniquely dangerous time for humanity, from a longterm perspective. Ord and other longtermists suggest, as I mentioned, that if we can survive the next few centuries, we will enter a permanently "secure" period in which we no longer face serious existential threats. Ord's thought appears to be that our wisdom will catch up with our power; we will be able to foresee and wisely avoid even tiny existential risks, in perpetuity or at least for millions of years. But why should we expect so much existential risk avoidance from our descendants? Ord and others offer little by way of argument.
I'm inclined to think, in contrast, that future centuries will carry more risk for humanity, if technology continues to improve. The more power we have to easily create massively destructive weapons or diseases -- including by non-state actors -- and in general the more power we have to drastically alter ourselves and our environment, the greater the risk that someone makes a catastrophic mistake, or even engineers our destruction intentionally. Only a powerful argument for permanent change in our inclinations or capacities could justify thinking that this risk will decline in a few centuries and remain low ever after.
You might suppose that, as resources improve, people will grow more cooperative and more inclined toward longterm thinking. Maybe. But even if so, cooperation carries risks. For example, if we become cooperative enough, everyone's existence and/or reproduction might come to depend on the survival of the society as a whole. The benefits of cooperation, specialization, and codependency might be substantial enough that more independent-minded survivalists are outcompeted. If genetic manipulation is seen as dangerous, decisions about reproduction might be centralized. We might become efficient, "superior" organisms that reproduce by a complex process different from traditional pregnancy, requiring a stable web of technological resources. We might even merge into a single planet-sized superorganism, gaining huge benefits and efficiencies from doing so. However, once a species becomes a single organism the same size as its environment, a single death becomes the extinction of the species. Whether we become a supercooperative superorganism or a host of cooperative but technologically dependent individual organisms, one terrible miscalculation or one highly unlikely event could potentially bring down the whole structure, ending us all.
A more mundane concern is this: Cooperative entities can be taken advantage of. As long as people have differential degrees of reproductive success, there will be evolutionary pressure for cheaters to free-ride on others' cooperativeness at the expense of the whole. There will always be benefits for individuals or groups who let others be the ones who think longterm, making the sacrifices necessary to reduce existential risks. If the selfish groups are permitted to thrive, they could employ for their benefit technology with, say, a 1/1000 or 1/1000000 annual risk of destroying humanity, flourishing for a long time until the odds finally catch up. If, instead, such groups are aggressively quashed, that might require warlike force, with the risks that war entails, or it might involve complex webs of deception and counterdeception in which the longtermists might not always come out on top.
There's something romantically attractive about the idea that the next century or two are uniquely crucial to the future of humanity. However, it's much likelier that selective pressures favoring a certain amount of short-term self-interest, either at the group or the individual level, will prevent the permanent acquisition of the hyper-cautious wisdom Ord hopes for. All or most or at least many future generations with technological capabilities matching or exceeding our own will face substantial existential risk -- perhaps 1/100 per century or more. If so, that risk will eventually catch up with us. Humanity can't survive existential risks of 1/100 per century for a million years.
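Here's the back-of-the-envelope check behind that last sentence (treating the 1/100-per-century figure as constant, which is of course a simplification):

```python
# Chance of surviving a million years at a constant 1/100 extinction risk per century.
risk_per_century = 0.01
centuries = 1_000_000 // 100                  # 10,000 centuries in a million years
p_survive = (1 - risk_per_century) ** centuries
print(f"{p_survive:.0e}")                     # ~2e-44: survival is astronomically unlikely
```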
If this reasoning is correct, it's very unlikely that there will be a million-plus year future for humanity that is worth worrying about and sacrificing for.
Second, the future is hard to see. Of course, my pessimism could be mistaken! Next year is difficult enough to predict, much less the next million years. But to the extent that this is true, it cuts against longtermism in a different way. We might think that the best approach to the longterm survival of humanity is to do X -- for example, to be cautious about developing superintelligent A.I. or to reduce the chance of nuclear war. But that's not at all clear. Risks such as nuclear war, unaligned A.I., or a genetically engineered pandemic would have been difficult to imagine even a century ago. We too might have a very poor sense of what the real sources of risk will be a century from now.
It could be that the single best thing we could do to reduce the risk of completely destroying humanity in the next two hundred years is to almost destroy humanity right now. The biggest sources of existential risk, Ord suggests, are technological: out-of-control artificial intelligence, engineered pandemics, climate change, and nuclear war. However, as Ord also argues, no such event -- not even nuclear war -- is likely to completely wipe us out, if it were to happen now. If a nuclear war were to destroy most of civilization and most of our capacity to continue on our current technological trajectory, that might postpone our ability to develop even more destructive technologies in the next century. It might also teach us a fearsome lesson about existential risk. Unintuitively, then, if we really are on the precipice, our best chance for longterm survival might be to promptly blast ourselves nearly to oblivion.
Even if we completely destroy humanity now, that might be just the thing the planet needs for another, better, and less self-destructive species to arise.
I'm not, of course, saying that we should destroy or almost destroy ourselves! My point is only this: We currently have very little idea what present action would be most likely to ensure a flourishing society a million years in the future. It could quite easily be the opposite of what we're intuitively inclined to think.
What we do know is that nuclear war would be terrible for us, for our children, and for our grandchildren. That's reason enough to avoid it. Tossing speculations about the million-year future into the decision-theoretic mix risks messing up that straightforward reasoning.
Third, it's reasonable to care much more about the near future than the distant future. In Appendix A, Ord has an interesting discussion of the logic of temporal discounting. He argues on technical grounds that a "pure time preference" for a benefit simply because it comes earlier should be rejected. (For example, if it's non-exponential, you can be "Dutch booked", that is, committed to a losing gamble; but if it's strictly exponential it leads to highly unintuitive results such as caring about one death in 6000 years much more than about a billion deaths in 9000 years.) The rejection of temporal discounting is important to longtermism, since it's the high weight we are supposed to give to distant future lives that renders the longterm considerations so compelling.
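To see concretely why strict exponential discounting generates that unintuitive result, here's a rough sketch (the 1% annual discount rate is my own illustrative assumption, not a figure from the book):

```python
# With exponential discounting, one death in 6000 years can outweigh a billion deaths
# in 9000 years -- illustrative numbers only.
r = 0.01                                      # assumed pure time preference of 1% per year

def discounted(value, years):
    return value * (1 - r) ** years

near = discounted(1, 6000)                    # one death, 6000 years from now
far = discounted(1e9, 9000)                   # a billion deaths, 9000 years from now
print(near > far)                             # True: the single earlier death "matters more"
```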
But we don't need to be pure temporal discounters to care much more about the near future than the distant future. We can instead care about particular people and their particular near-term descendants. In Confucian ethics, for example, one ought to care most about near family, next about more distant family, next about neighbors, next about more distant compatriots, etc. I can -- rationally, I think -- care intensely about the welfare of my children, care substantially about the welfare of the children they might eventually have, care somewhat about their potential grandchildren, and only dimly and about equally about their sixty-greats-grandchildren and their thousand-greats-grandchildren. I can care intensely about the well-being of my society and the world as it now exists, substantially about society and the world as it will exist a hundred years after my death, and much less, but still somewhat, about society and the world in ten thousand or a million years. Since this isn't pure temporal discounting but instead concern about particular individuals and societies, it needn't lead to the logical or intuitive troubles Ord highlights.
Fourth, there's a risk that fantasizing about extremely remote consequences becomes an excuse to look past the needs and interests of the people living among us, here and now. I don't accuse Ord in particular of this. He also works on applied issues in global healthcare, for example. He concludes Precipice with some sweet reflections on the value of family and the joys of fatherhood. But there's something dizzying or intoxicating about considering the possible billion-year future of humanity. Persistent cognitive focus in this direction has at least the potential to turn our attention away from more urgent and personal matters, perhaps especially among those prone to grandiose fantasies.
Instead of longtermism, I recommend focusing on the people already among us and what's in the relatively foreseeable future of several decades to a hundred years. It's good to emphasize and work to prevent existential risks, yes. And it's awe-inspiring to consider the million-year future! Absolutely, we should let ourselves imagine what incredible things might lie before our distant descendants if the future plays out well. But practical decision-making today shouldn't ride upon such far-future speculations.
ETA Jan. 6: Check out the comments below and the public Facebook discussion for some important caveats and replies to interesting counterarguments -- also Richard Yetter Chappell's blogpost today with point-by-point replies to this post.
------------------------------------------
Related:
Group Minds on Ringworld (Oct 24, 2012)
Group Organisms and the Fermi Paradox (May 16, 2014)
How to Disregard Extremely Remote Possibilities (Apr 16, 2015)
Against the "Value Alignment" of Future Artificial Intelligence (Dec 22, 2021)
[image generated by wombo.art]
Hi Eric, thanks for posting on longtermism! As someone who leans longtermist but not confidently so, I often have trouble finding well-thought-out arguments against it, so I really appreciated your post. Here are some of my thoughts on your points:
1: I think there is a good chance that, as you say, future centuries will carry greater risk than the present. If this is the case, then humanity's future will be short and there's not anything we can do about that. But from a utilitarian perspective (which I take), this isn't hugely important for our actions. Since good long-term futures have immense potential, the expected utility from those outcomes still wildly exceeds that of the outcomes where risk increases. So if, for example, there's a 99% chance that future risk will be unavoidably higher and a 1% chance that it can be kept permanently low, my estimate for total future utility would drop by 99%, but the possibility of low future risk would still be the most important, since its probability-weighted potential would still be so much greater.
I also think that the probability of permanently low existential risk is pretty high. If humans or the conscious beings that follow manage to start expanding throughout the galaxy at a fair percentage of the speed of light, then we may become numerous and spread out enough to be hard to eliminate, except with perfectly correlated existential risks like vacuum collapse.
2: I agree with this point. Aside from preventing near-term existential risks, it's hard to say what will have a positive impact on the future. I have a weak belief that, generally, things that make the present and near-term future better are likely to have a positive impact on the long-term future, but I'm not sure how justified that is.
3: I think Ord tried to write the book to appeal to people with a broad range of ethical views, when really it is most justified under certain consequentialist ones. Many people do believe it's morally correct to value close kin and currently living people more, which can undermine longtermism. Like the first point, this one is irrelevant under utilitarianism, strengthening my view that longtermism is much more justified under utilitarianism than other moral beliefs.
From a practical perspective, I think that focusing on the nearer term like you suggest may be the best way to operate even if longtermism is correct, at least as long as we continue to have deep uncertainty about the long-term future.
This was a fun read! I haven't read Ord's book, so maybe he addresses this, but one thing that kept coming to mind as I read your post was this: In my mind, as soon as humans become a spacefaring, multi-planetary species, many of the things we currently view as existential threats will become far less "existential". For instance, destroying our planet through climate change, nuclear war, or some other unforeseen event, is far less of a threat to our species as a whole if a sufficient number of us are on one or more other planets. It is also hard to catch a super virus from a lightyear away.
So, the real question in my mind for how existentially risky the future might be is how quickly we can diversify our intergalactic real estate portfolio.
What would you say about an argument like this?
(1) Humanity has reached or will continue to reach toward a condition of having near total control over the environment and the form that human life will take in the future.
(2) It is taken to be important for an ordinary human individual who has a significant degree of control over her life and the course that it takes to not overly emphasize the present at the cost of the future and to prioritize certain important future projects at the expense of indulgence in quick and easy present pleasures.
(3) This is because any creature which has such control and the corresponding responsibility is under an obligation to ensure that her long-term well being is taken care of prior to any less important (on this view) transient ends (generalization from the case of 2/free standing principle? not sure what to do with this)
(4) Humanity taken as a whole can be considered such a creature as described in 3. (from 1)
(5) Therefore humanity is under an obligation to ensure that its long-term well being is taken care of prior to its pursuit of any transient, less long-term ends. (from 3 and 4)
I wasn't sure how to put this into a formalism exactly, but I hope the idea is clear enough - humanity, insofar as it reaches this near total control over the environment in the ways it has interfered with it, imposes further burdens and responsibilities on itself that we wouldn't ordinarily think it has; and so, just as an individual with such obligations must first and foremost ensure that her long-term well-being is taken care of, so too must humanity as a whole.
I don’t think your first argument holds up to your second - as you say, it is extremely difficult to predict the path of the long-term future, so in particular we should have lots of uncertainty over whether prosperous stable states of civilization can be attained.
It could be the case that some actors will always have the capacity to trigger existential risks, and that every century carries a 1% chance or more of extinction. It might also be the case that there exist truly stable forms of governance, robustly aligned AI, etc., which can ensure their long-term stability and propagation of human values much better than any modern entities. Given the range of possibilities here (many of which it is likely no one has thought of yet), it seems premature to assert with confidence that no such stable state is within reach of humanity if we can make it through a millennium or two of trying.
Additionally, there are some axes along which we know (up to our current understanding of physical laws) that this century must be fairly special from among the next million years; we can't even make it to the year 10000 with current levels of GDP growth, as Holden Karnofsky talks about here. So we have reasons to expect that something qualitatively special is going on around now, and so there might be unusually important things to do in the 21st century - even if we're unsure exactly what they'll be, making progress on figuring them out seems very important.
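To put rough numbers on that point (assuming a steady 2% annual growth rate, which is just my illustrative figure):

```python
# How much larger the world economy would be by the year 10000 at 2% annual growth.
growth_rate = 0.02                            # assumed, roughly recent world GDP growth
years = 10_000 - 2022
factor = (1 + growth_rate) ** years
print(f"{factor:.1e}")                        # ~4e+68: growth that quickly outruns any plausible physical resource base
```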
As another perspective, when we think about humans 100,000 years ago we tend to think of their lives as mattering for their own sake - I experience a sort of curiosity and general compassion for people living what I assume were more difficult lives than we generally lead today, but I mostly think about their actions in the context of those actions' local consequences. In contrast, when I consider events in the 1600s-1800s, a lot of what matters to me has to do with the consequences of those events on today's world, and the ways that things might have been different if certain political or scientific endeavors had gone otherwise. There's a sense of deep gratitude for certain important contributions, and of loss from things that went badly wrong. I suspect humans in the distant future (if they still exist) will have similar attitudes about 2022-humans, living in these unusually-pivotal times.
Thanks for these thoughtful comments, folks!
timduffy: On my argument 1: Maybe a 1% chance that existential risk can be kept permanently low is reasonable, with the most plausible-seeming scenario in that direction being humanity's dispersion among the stars far enough from each other to sufficiently decorrelate the risks. If so, then argument 1 can't work alone without some help from arguments 2-4. One reason to think it's unlikely that we will survive longterm and disperse among the stars is the Fermi Paradox, which ought to add credal weight to technological self-destruction and non-spacefaring scenarios, in my view -- though how much weight is very much up for debate.
On my argument 2: This argument could carry the weight alone if we can justify radical uncertainty about what would be longterm best. I'm thinking here of radical uncertainty as either the right kind of indifferent distribution or, if it can be made sense of, a decision-theoretical posture of taking particular "uncertain" topics and discarding them from one's decision-theoretic calculations.
On my argument 3: Yes, that seems right to me.
On your concluding point: The biggest practical risk of divergence between my view and the longtermists' would be if there's, say, a 51% chance that something very bad over the next few hundred years (like nuclear war) would be better from a longterm perspective. Then the longtermist might have to advocate nuclear war.
Chris: Right! Ord does discuss that near the end of the book. He argues that merely colonizing the Solar System or a few near stars wouldn't decorrelate the risks sufficiently. And yet, a truly intergalactic civilization would presumably have highly uncorrelated local risks as long as the speed of light remains a constraint. Even a small justifiable credence in that outcome could then outweigh all near-term considerations unless we either adopt radical uncertainty about what we could do now to increase the likelihood of that outcome or we have some value function that very much overweights the near and/or soon. For this reason, argument 1 might not stand wholly on its own, unless the credence in intergalacticity is extremely small.
Anon Jan 05: I kind of like that argument. I don't think that it needs to conflict with the reasoning in this post, however. By analogy: If I as an individual think it's extremely unlikely I will live to be 1000 years old, then taking care of my longterm interests won't involve giving much weight to my hypothetical interests at that age (cf. argument 1). If I as an individual have no idea what would give me the best chance of surviving another 1000 years, then even if I do want to think about my 1000-year future there's nothing I can rationally choose now with the aim of increasing that likelihood (cf. argument 2). Also, it might be reasonable to care more about myself now than to care about my hypothetical future 1000-year-old self (cf. argument 3).
Drake: Yes, I am inclined to agree that the first argument maybe shouldn't be considered decisive on its own without some added considerations from arguments 2-4 (see also my replies to timduffy and Chris). And yes, in *some* ways our century is likely to be special (e.g., in terms of rate of economic growth). But of course the crucial question for the longterm calculations is whether it is unique or extremely rare in terms of existential risk in particular. And that's what I have never seen a good argument for from any longtermist (though I am open to reading suggestions). And of course your last point depends upon the truth of your second point.
Thanks for writing this! I'm excited to see more critiques of longtermism.
You may already know this, but I think your critique is sometimes referred to as "cluelessness" in the philosophical literature. The conclusion is less clear than your post suggests, though: there is some probability that nuclear war would be good, and some probability that it would be bad, but it would be very surprising if these probabilities perfectly canceled each other out such that we didn't have to worry about the long-term implications of nuclear war.
You are right to reject this so-called ethical issue. I would place your fourth objection first as it is fully sufficient to reject Ord and so-called Longtermism—“fantasizing about extremely remote consequences” is an excuse to avoid real everyday ethical problems. It is essentially philosophical daydreaming akin to science fiction. A product of our times I suppose.
ReplyDeleteI agree with that I think, but wouldn't it best to assume an indefinite future in cases in which one is unsure of when the end will come? My thought was that this might be the case with humanity as a whole (and so a point of disanalogy with individual humans, but not important to the reasoning of the argument I tried to present.) If I am playing a board game that does not have a time limit, I feel as though I have to treat the game as if it is indefinite in time, and to make sure that I am focused and making the best plays possible at each step of the way.
ReplyDeleteThanks for the continuing comments, folks:
Ben: Right! I'm aware of the Lenman/Greaves debate, though probably there are recent iterations I should catch up on. I agree that it would be in some sense surprising if the most rational credence for nuclear war's longterm impact was such that the good exactly balanced the bad. However, I'm still inclined to think that something like longterm cluelessness might be sustainable. Complexities about indifference, partitioning, and balancing aside, we might be able to rationally disregard speculations about a particular domain (e.g., the longterm future) in our decision-making without having to adopt cluelessness about every action in general.
Matti: Ah, but I love philosophical daydreaming and science fiction! I don't have quite so negative a view as you seem to be expressing. However, I do think we should be careful not to get lost in these speculations and overweight them.
Anon Jan 06: Your comment makes me think of the St Petersburg paradox. And (if argument 1 can be sustained) maybe the solution is the same: Over time, the odds of a huge, huge win from good luck upon good luck become small enough that they can reasonably be disregarded, despite the size of the win.
Professor,
I have zero problem with good science fiction—as a literary genre—intended to illuminate the human condition by placing characters in novel and unique situations. That helps gain insight into our moral boundaries. That, in my opinion, is proper philosophical daydreaming. What Toby Ord is attempting to do is far from that and, in my opinion, far from any type of genuine moral philosophizing. It is no wonder that his fellow countryman and consequentialist, Peter Singer, praises his work. Consequentialism hit a dead end at least half a century ago and it seems to seek new and strange frontiers to give its true believers something to do. As I said, you are correct to reject it. I just wonder why you gave it the time of day.
Matti: I am not a consequentialist but I do have a lot of respect for it.
Toby Ord's consequentialist ethics, based on some vague end times scenario along with a mythic and heroic future of humanity, is like so many prophetic preachings of old. Ord's preaching, of course, is scientific and atheistic for our times. But it's the same old story. In spite of his good works, what turns me off the most is not his jeremiad of future doom but his lack of genuine regard for real people—which ought to be the focus of any ethics. His aim is on a mythic evolved human species—as Toby Ord says, "It is not a human that is remarkable, but humanity." Thus our potential is the Good for him. As he says, "I think that we have barely begun the ascent."
ReplyDeleteA pseudo-scientific ethic of recent past was Marx’s concern for the “species being” or what we would evolve into under pure communism. A philosopher friend of mine used to remark that whenever we try to perfect mankind it usually involves killing people. Right, Marx didn’t preach that—it fell upon Stalin to usher forth the Soviet man!
I’m impressed by the objectivity that you display here professor. Many seem to get so immersed in fantasy speculation that they disregard the odds in favor of sci-fi fun. Statistically this makes no sense. The more time that we spend as powerful creatures that can destroy ourselves, the more time that possible annihilation dice must continue to be rolled. I expect us to last hundreds of years more, or even thousands, but the odds on millions seem ridiculously long.
Regarding the nearer-term future, however, it seems to me that few people today grasp how powerful the Chinese might soon become. With their social credit system they are effectively creating a massive unified non-conscious organism made up of the Chinese people themselves. Here individuals are highly monitored as well as rewarded and punished on the basis of government aims. Consider how tremendous their education, technology and general industry theoretically might become by means of such incentive. Conversely we in liberal societies can only fight things out amongst ourselves. Obvious costs are displayed in crime and legal disputes. I suspect that many Americans will gladly give up their liberty to become Chinese government tools in order to potentially enjoy the sorts of lives that general Chinese people will tend to have. An end to liberty should not be an end to humanity however.
So how do I suspect that we’ll meet our end? Imagine a creature on a different planet where feeling good constitutes value to it, while feeling bad punishes it. Furthermore imagine that it becomes technically powerful enough to make itself feel millions of times better than normal by directly affecting its biology rather than having the “real” experiences that evolution had implemented to get it that far. Shouldn’t it progressively transition to more and more fake rather than real experiences, and so rely more and more upon machines to facilitate what it requires for survival? I think so, and that those machines should eventually glitch to kill them off when they’re too vulnerable to otherwise take care of themselves. Us too.
Matti: I worry about the kind of thinking you describe, yes. I don't think I saw so much lack of concern for current human beings in the book, so I don't want to attribute it specifically to Ord.
Phil E: It is certainly the case that China might soon surpass the United States as the most powerful country on Earth. I hope the dystopian scenario you describe doesn't come to pass. As different as the Chinese and U.S. governments might be, my sense is that the people of the U.S. and China are not so very different from each other at root. Self-destruction through increasing attention to "fake experiences" is a distant future possibility that I agree has some plausibility -- for example in Chapter 37 of my Theory of Jerks:
http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/JerksZombieRobots-181130.htm
I understand what you are saying and I do give Ord credit for his many good works. However! His ethical theme is consequentialism on steroids! And within this argument, I submit, lies a dangerous seed. The basic argument is simply to take the moral problems we face in contemporary society and compare them to, as he says, "the immense value of humanity's potential. … For longtermism is animated by a moral re-orientation toward the vast future." Ord phrases it thusly: "Rather than looking at the morality of an individual human's actions as they bear on others, we can address the dispositions and character of humanity as a whole." In other words, to paraphrase Commander Spock, the needs of the few are outweighed by the needs of the billions upon billions. The abstract needs of future generations are compared to the real problems of today. That is a dangerous seed!
In her New Yorker review of the book (Nov 2020), Corinne Purtill summarized Ord's ethical starting point as follows: "Humanity's potential is worth preserving, Toby Ord argues, not because we are so great now but because of the possibility, however small, that we are a bridge to something far greater."
If BLOG proposes life and AI are a constant syntheses for value also...
...then math syntheses themselves become objects of observation...
Isn't it we are earth's objects, of a solar system's object, of a galaxy's object...
...can we remember ourselves in a constant syntheses for value also...
Happy New Year...
I hope you don’t mind me quoting this from you professor:
"For me, the greatest philosophical thrill is realizing that something I'd long taken for granted might not be true, that some "obvious" apparent truth is in fact doubtable – not just abstractly and hypothetically doubtable, but really, seriously, in-my-gut doubtable. The ground shifts beneath me…"
Yes, me too. For much of my adult life I figured that humanity would eventually straighten things out even given its countless problems. That presumption began to crumble when I learned the specifics of China's plan. I don't see how they'll fail to produce far more valuable goods and services this way per capita than any liberal country is able to manage. Such productivity should bring massive wealth that earns the derision of the so-called "liberated", as well as their jealousy.
If humanity does go this way, or creates a single world government in which citizens are continually rewarded and punished on the basis of the observed choices they make in their lives, would this inherently be “dystopian”? Regardless of endless ways that this point is made in literature, I suppose I’m optimistic enough to think that perhaps such a beast might be tamed at least somewhat.
In the Matrioshka brain it’s interesting to me that you began with something that might not have any psychology, but then observed that if it did then it might alter its goals such that it would succeed by doing nothing and so gain perfect bliss perpetually. Conversely I’ve taken something which clearly does have psychology and reasoned that it would do something similar given its eventual grasp of neuroscience. We wouldn’t simply reprogram ourselves to be as happy as possible and be done however. Here there should be tremendous worries about permitting humanity to become too weak to survive. Thus policies set up to counter this end should be instituted, and I think especially in a world in which citizens exist as government tools. Here the annihilation dice would continue to be rolled however.
A couple of things occurred to me while reading this.
First, one possible reason to take long-termism more seriously is something like "initial conditions." It seems plausible to say that humanity is going through a kind of phase change at the moment. Just 200 years ago, we were still essentially an agricultural species. In 200 years' time, it seems likely that we will no longer be bound by the Earth's ecosystem (we'll probably be living on other planets; and we'll probably have the technological ability to produce all the food we need without using plants and animals). So the current phase is a transition period. And plausibly, the way we get through a transition period could have a massive and ongoing impact on our post-natural future, by setting its "initial conditions."
Second, simple uncertainty about the (far) future could easily outweigh any of the factors mentioned here. That uncertainty might be built into the structure of the universe (quantum randomness); a feature of systems (mathematical chaos); or a function of how prediction works (turns out to take as much computational power to predict a universe as it does to be a universe). If that uncertainty is higher than a (probably very low) threshold, then there may be literally no value in worrying about the long-term future at all.
These two points are in tension, and all of the arguments given by Prof S and commenters above are good as well, so I don't know what to make of it!
Tensions may appear only as random and chaotic mechanicalness...
...but value, in the big scheme of things, seems to underline it all...
As we appear to be the only ones to see purpose experience value...
Matti: Yes, I agree that is a dangerous seed. Well put. I'm not sure Ord would (or should) disagree that it is dangerous, though unfortunately he does not specifically address its danger at any length in the book.
Phil E: I'm wary of emphasizing China specifically. However, I agree completely about the catastrophic risks involved in governments gaining increasing control over their citizens through technology. I think a similar thing can happen in a liberal society too, through corporate control exercised indirectly by motivating and rewarding people who tweak themselves best toward the company's ends. The catastrophic power of excessive emotional control could be getting lost in "happiness"/reward, or it could be "productivity", or.... I'm actually working on a couple of short pieces about this topic now.
Chinaphil: I like your characterization of the plausibility of both sides of the argument. I would make a couple of tweaks, though, that tend to shift the balance toward the second side. First, I'm skeptical that in 200 years there will be a large, self-sustaining population living elsewhere in the Solar System. Earth is just so much vastly better an environment for humans that it's hard to imagine sustaining the vast expense and risk of living elsewhere except for small, dependent colonies of researchers or miners. Compare Antarctica, which is vastly more favorable a place for humans than the Moon or Jupiter. Second, I think there's also substantial medium-term uncertainty in which the odds quite plausibly flip the opposite direction of what Ord and Yetter Chappell (in his reply to me) suggest. For example, as I suggest in the post, it's not at all implausible to me that a catastrophic but not extinction-making nuclear war could sufficiently slow down humanity and scare us into caution so that, IF we are currently in a uniquely dangerous phase, we can wait out that phase. I would not at all recommend nuclear war! But the combination of longtermism and the idea that we just need to be extra cautious for a few centuries until we're safe makes it, in my mind, as plausible that nuclear war would be good long term as that it would be bad.
Professor,
I emphasized China here specifically because their plans have been widely reported. Does this mean that such instruments aren't being rolled out in North Korea, Russia, or an assortment of countries in the Middle East? I suspect that also. If anyone would like to read a recent short article regarding the specifics of what China is doing, this was the first hit that came up for me in a search.
The difference between a government doing this sort of thing versus a big company such as Google or Facebook, is that theoretically big companies can be restricted by means of government. And indeed, here in America privacy advocates seem to have strong political clout. I’m amazed that China’s plan has generated so little interest among such groups. Is race a factor? Perhaps few grasp what I do. And what exactly is that?
Under its SCS I believe that in decades to come the Chinese people will progressively become far more educated and generally productive. Thus their superior ability to produce goods and services should make them substantially more wealthy than the people of liberal nations. Observe that in order to achieve unified control, Aldous Huxley figured that government would need to use a drug such as his "Soma". I think he got that wrong. All that should actually be needed is to reward people for doing what it wants them to in their daily lives, and otherwise punish them.
In the past liberal nations fighting totalitarian governments could at least depend upon the support of repressed citizens. Conversely the Chinese people not only shouldn’t feel repressed in general, but the citizens of liberal nations should naturally become progressively more jealous.
Or if I’m wrong about this, what have I missed?