Thursday, January 27, 2022

What Is It Like to Be a Plant?

guest post by Amy Kind

Is there something that it’s like to be a plant? I suspect that most people hearing this question would unhesitatingly answer in the negative. In this respect, plants seem quite different from animals. In fact, it’s this difference that undoubtedly helps to explain why so many people who feel squeamish about eating animal products don’t feel at all squeamish about eating fruits, vegetables, and other plant products.

Philosophical assessment of the consciousness of plants and animals is generally in line with this common-sense judgment. Though there’s disagreement about how far consciousness extends throughout the animal kingdom (Are ants conscious? What about garden snails?), and though there’s disagreement about whether and to what extent we can understand the nature of non-human consciousness (can we know, for example, what it’s like to be a bat?), there is general philosophical agreement that at least some non-human animals are conscious. In contrast, very few (if any) philosophers have defended the claim that plants are conscious. Even philosophers such as Chauncey Maher who have recently argued that plants have minds tend not to commit to the claim that there’s something that it’s like to be a plant.

In refraining from this commitment, Maher suggests that the predicate “consciousness” is not determinate when it comes to plants, i.e., there is (currently) no fact of the matter about whether plants are or are not conscious. Maybe they are; maybe they’re not. But given our present understanding of the nature of consciousness, Maher claims that “our standards don’t yet settle whether plants belong in the extension of the term.” When it comes to plants, we simply don’t yet have an adequate understanding of what it would be for them to be conscious.

On this score, however, a recent science fiction duology by Sue Burke helps provide some important insight. Semiosis, the first book in the duology, takes place in the 2060s. Fleeing the wars and environmental crises that have engulfed earth, a group of humans united by pacifist ideals travel to a distant planet they name Pax. In their efforts to build a settlement, they eventually come across signs of another sentient animal species on the planet, the Glassmakers. But the Glassmakers are not their only encounter with sentience. As they soon discover, the bamboo-like plant (“rainbow bamboo”) that grows rampant in the area they are settling is also sentient. This plant, whom they name Stevland, initiates communication with the humans and eventually becomes integrated into their society as a full and valuable member – even taking on a leadership role.

We are told even more about Stevland in Interference, the second book in the duology, which takes place about a hundred years after the events of Semiosis when a team of earth scientists travel to Pax to find out what had happened to the original expedition (with whom they had long since lost contact). With the more sophisticated equipment these scientists bring, it’s discovered that the rainbow bamboo on Pax has nerve tissue, and that Stevland has a collection of neurons that deserves to be called a brain.

Rainbow bamboo, and Stevland in particular, is clearly very different from bamboo plants on earth. For one thing, earth plants do not adopt names and pronouns for themselves. More to the point for our discussion here, earth bamboo plants lack neurons. Earth bamboo plants also lack the kinds of linguistic and emotional capacities that Stevland has, and they cannot develop and execute complex and temporally extended plans. But when we set aside these capacities that Stevland has in virtue of his sentience, he nonetheless exhibits a lot of the properties that seem essential or constitutive of plants in general. He relies on sunlight and water, and he is in competition with other nearby plants for these resources. He does not need to sleep. He has a long life span. He is situated in one place with no capacity to move himself to an entirely different place. But he is distributed over a large area, and he can extend his presence to connected terrain. He can survive significant damage to various of his parts and can even survive their destruction. Moreover, such parts can be regrown.

All of these features of plants seem relevant to how they would experience the world. What would give a plant joy? What would make it angry? Given that a plant lacks visual and auditory sense capacities, how would it gain an understanding of its environment? How would it initiate communication? How would it form relationships – whether friendly or unfriendly – with others? Given that a plant lacks mobility, how would it execute plans? How would it strike at its enemies? (Interestingly, and non-coincidentally, Burke herself started thinking seriously about the nature of plants and plant behavior after she witnessed one of her house plants “attacking” another.) Thus, even though Stevland’s neuronal system makes him different from earth plants – so different that, as plant biologist Laci Gerhart has complained, a “scientific reader will struggle with … the seemingly preposterous abilities of Pax’s plant life compared to their terran equivalents” – the important similarities that he shares with earth plants mean that Burke’s exploration of his sentience can help us understand more generally what plant sentience might be like. Unsurprisingly, this was something I thought a lot about as I read the books.

So how does Stevland do the kinds of things just mentioned? Much of his behavior proceeds via his root system. He learns about his environment by way of his roots. He can extend his roots in new directions, and he does so for many purposes, whether exploring new terrain or connecting and communicating with other plants. He makes additional communicative efforts by generating smells, displaying specific leaf patterns, or distributing chemicals through the production of fruit. His relationships with other plants are driven largely by need. And he has a great deal of patience.

When we think about the features of plants delineated above, this all makes a lot of sense. For example, the relatively long life span of plant species like bamboo – and the fact that they are situated in a single place – suggests that their temporal experience, and correspondingly their patience, would likely be different from that of humans. Moreover, because their situatedness means their only options for relationships are those nearby, we could reasonably expect that plant relationships would be different from human relationships. Here I’ll note that a related point is made by Brandon Sanderson in Cytonic, the third novel in his Skyward series. As a sentient alien from a (non-animal) crystalline species tells the human protagonist: “my species evolved as motionless individuals who would spend decades next to one another. It’s not in our nature to argue. Unlike motile species, we cannot simply walk away if we make one another angry.”

So, is there something it is like to be a plant? Maybe there is, maybe there’s not. Burke’s duology doesn’t really address this question. But in helping us understand something about what it might be like to be a plant if there were something it were like to be a plant, the books pave the way towards a better understanding of the standards we should use in attributing consciousness to other non-human entities. Thus, even if Maher is right that it’s currently indeterminate whether the predicate “consciousness” should apply to plants, reflections on Burke’s extended thought experiment help us make progress in resolving the indeterminacy.

[image source]

Monday, January 24, 2022

Reflections on Science Fiction as Philosophy, Plus Zombie Robots

Last weekend, two interviews of me came out. One is a long interview (about 6000 words) with Nigel Warburton at Five Books on science fiction as a way of doing philosophy, including my recommendation of five great books of philosophical science fiction.

From the interview:

You could say that science fiction is a good teaching tool -- that it’s not really philosophy, but it’s good for popularising philosophical questions or getting people who might not otherwise be attracted to philosophy to think about philosophical questions. But serious philosophy takes the form of the expository essay, the journal article, the monograph. I don’t agree with that. I think serious philosophy can take a variety of forms.

Consider a classic of recent moral philosophy, Bernard Williams’ essay ‘Moral Luck’. That essay turns on an imaginary version of the story of Gauguin. Had Williams’ treatment of Gauguin been more detailed and more complex, it might have been even more philosophically interesting, as some subsequent commentators have pointed out. The more detail, the more we understand the complex dilemma that Gauguin faced, concerning his hopes for being a great artist and what the difficulties of leaving his family might be....

There’s a reason that philosophers sometimes reach for sketching mini-fictions in their writing. Those mini-fictions achieve something that can’t be as effectively achieved through more abstract prose. But as long as it remains a mini-fiction contained within an essay, it’s going to be somewhat impoverished as a fiction... It’s a kind of historical accident that philosophers almost exclusively write expository essays now. That’s not historically been the case.

Check out the interview also for discussion of the philosophical ideas in my five recommended books:

Ted Chiang, Stories of Your Life and Others
Greg Egan, Diaspora
Kazuo Ishiguro, Klara and the Sun
Ursula K. Le Guin, The Dispossessed
Olaf Stapledon, Sirius

Also last weekend, Barry Lam dropped the latest episode of his philosophical podcast Hi-Phi Nation -- this one on zombies. Philosophers who work on consciousness will be unsurprised to hear that David Chalmers features centrally in the episode. Christina Van Dyke and John Edgar Browning are also featured.

The episode concludes with some of my reflections on what I've called the Full Rights Dilemma for Future Robots -- the question of what we should do if we ever create machines whose moral status is unclear, machines who might or might not genuinely have conscious experiences like ours and thus might or might not deserve moral consideration similar to that of human beings. Do we give them the full rights of human beings, including rights to health care, rescue, and the vote, and thus risk (if they aren't actually conscious) sacrificing real human interests for empty machines without the moral status to warrant the sacrifice? Or do we deny them full rights, and risk (if they do actually have rich conscious lives like ours) perpetrating mass slavery and murder?

Wednesday, January 19, 2022

Learning from Science Fiction

guest post by Amy Kind

Thanks to Eric for inviting me to take a stint as guest blogger here at The Splintered Mind. I’ve been having a lot of fun putting together this series of posts, all of which will focus in some way on philosophical issues raised by science fiction.

Like Eric, and indeed like many philosophers, I read a lot of science fiction. I won’t try to sort through the many possible explanations for why philosophers tend to be attracted to science fiction, but I’ll highlight one such explanation of which I’m especially fond. As Hugo Award winner Robert J. Sawyer has noted, science fiction would be better known as philosophical fiction, as “phi-fi not sci-fi” (see the back cover of Susan Schneider’s Science Fiction and Philosophy). Kate Wilhelm, another Hugo Award winner, makes a similar point in her introduction to an edited collection of Nebula Award winning stories from 1973:

The future. Space travel, or cosmology. Alternate universes. Time travel. Robots. Marvelous inventions. Immortality. Catastrophes. Aliens. Superman. Other dimensions. Inner space, or the psyche. These are the ideas that are essential to science fiction. The phenomena change, the basic ideas do not. These ideas are the same philosophical concepts that have intrigued [humankind] throughout history.

In fact, we might naturally take these claims by Sawyer and Wilhelm one step further. Not only does science fiction concern itself with the kinds of issues and problems that are of interest to philosophers, but it is also thought to provide its readers with important insight into these issues and problems. By engaging with science fiction, we can learn more about them.

As intuitive as this idea is, however, once we start to think about it more closely, we confront a puzzle. After all, science fiction is fiction, and the defining characteristic of fiction is that it’s made up. So how can we learn from it? If we really wanted to learn about space exploration or robots or time travel, wouldn’t we do better to consult textbooks or refereed journal articles focusing on astronomy or robotics or quantum physics?

I suspect that some think this puzzle can be easily resolved. The way that science fiction enriches our understanding is by providing us with thought experiments (TEs). Just as we can learn from TEs presented to us in philosophy – from Jackson’s case of Mary the color scientist to Thomson’s plugged-in violinist – we can learn from TEs presented to us in science fiction. The question of how we learn from philosophical TEs is the subject of considerable philosophical debate: Are they simply disguised arguments, or do they function in some other way? But the claim that we learn from them is widely accepted (though see the work of Kathleen Wilkes for one notable exception).

In his recent book Imagining and Knowing, however, Greg Currie has called into question this resolution of the puzzle. Though he is focused on fiction more generally rather than just on science fiction, his discussion is directly relevant here. In the course of an argument that we should be skeptical of the claim that imaginative engagement with fiction provides readers with any significant knowledge, Currie takes up the question whether one might be able to defend the claim that we can gain knowledge from works of fiction by treating them as TEs. Ultimately, his answer is no. In his view, there are good reasons to doubt that fictional narratives can provide the same kind of epistemic benefits provided by philosophical or scientific TEs. Here I’ll consider just one of the reasons he offers – what might be called the argument from simplicity. I focus on this one because I think consideration of science fiction in particular helps to show why it is mistaken.

As Currie notes, the epistemically successful TEs found in philosophy are notably simple and streamlined. We don’t need to know anything about what Mary the color scientist looks like, or anything about her desires and dreams, in order to evaluate what happens when she leaves her black and white room and sees a ripe tomato for the first time. But even the most pared down fictional narratives are considerably more complex, detailed, and stylized than philosophical TEs. These embellishments of detail and style are likely to detract from the epistemic power of the TE presented by the fiction. A reader won’t know whether they’re reacting to the extraneous details or to the essential content. Philosophical TEs would get worse, not better, if they were elaborated and told with lots of panache. And that’s just what fiction does.

In response to this argument, I want to make two points.

First, though epistemically successful TEs are generally simple, they do contain some level of detail, and those details might well sway readers’ reactions – as Currie himself notes. Someone who has bad memories of their childhood violin lessons, or who associates violin music with a particularly toxic former relationship, might react differently to Thomson’s case from someone whose beloved partner excels at the instrument. Dennett’s TEs in “Quining Qualia” include lots of cutesy details, and so does Parfit’s description of his well-known teletransporter case. But despite this, we nonetheless think we can learn from these cases. Currie is undoubtedly right that we need to exercise care when engaging with TEs, and we need to guard against being swayed by extraneous details. But in philosophical contexts, we generally seem able to do so – perhaps not perfectly, but well enough. So why wouldn’t that be the case in fiction as well?

Second, and here’s where consideration of SF becomes especially important, it’s not clear to me that simplicity is always the best policy. The seemingly extraneous details need not be seen as so extraneous. Consider Andy Weir’s Project Hail Mary, and in particular, the character Rocky. Rocky is a sentient and intelligent alien hailing from the 40 Eridani star system. Members of the Eridian species do not have eyes and navigate the world primarily by using sound and vibration. In many ways, Weir is presenting us with an extended thought experiment about what kind of civilization such a species would develop. What would their interpersonal reactions be like? How would they make scientific progress? How would they achieve space flight? Trying to understand what it’s like to be an Eridian is a lot like trying to understand what it’s like to be a bat – something Thomas Nagel has claimed cannot be done. But trying to understand what Eridian society might be like is not similarly out of reach, and Weir’s discussion helps enormously in achieving this understanding. Here’s a place where more complexity was helpful, not hurtful. To gain the understanding that I believe myself to have gained, I needed the fuller picture that the book provided. Without the details, I’m pretty sure my understanding would have been considerably impoverished.

While this is just one example, I think the point extends widely across science fiction and the TEs presented to us in this genre. Perhaps in other genres of fiction, Currie’s argument from simplicity might have more bite. But to my mind, when it comes to science fiction, the complexity of the thought experiments presented can help explain not only why philosophers would be so attracted to these kinds of works but also why we can gain such insight from them.

[image source]

Thursday, January 13, 2022

Ethical Efficiencies

I've been writing a post on whether and why people should behave morally better after studying ethics. So far, I remain dissatisfied with my drafts. (The same is true of a general theoretical paper I am writing on the topic.) So this week I'll share a piece of my thinking on a related issue, which I'll call ethical efficiency.

Let's say that you aim -- as most people do -- to be morally mediocre. You aim, that is, not to be morally good (or non-bad) by absolute standards but rather to be about as morally good as your peers, neither especially better nor especially worse. Suppose also that you are in some respects ethically ignorant. You think that A, B, C, D, E, F, G, and H are morally good and that I, J, K, L, M, N, O, and P are morally bad, but in fact you're wrong 25% of the time: A, B, C, L, E, F, G, and P are good and the others are bad. (It might be better to do this exercise with a "morally neutral" category also, and 25% is a toy error rate that is probably too high -- but let's ignore such complications, since this is tricky enough as it is.)

Finally, suppose that some of these acts you'd be inclined to do independently of their moral status: They're enjoyable, or they advance your interests. The others you'd prefer not to do, except to the extent that they are morally good and you want to do enough morally good things to hit the sweet zone of moral mediocrity. The acts you'd like to do are A, B, C, D, I, J, K, and L. This yields the following table, with the acts whose moral valence you're wrong about marked with an asterisk:

Act:             A  B  C  D  E  F  G  H  I  J  K  L  M  N  O  P
Inclined to do:  Y  Y  Y  Y  N  N  N  N  Y  Y  Y  Y  N  N  N  N
You believe:     G  G  G  G  G  G  G  G  B  B  B  B  B  B  B  B
Actually:        G  G  G  B* G  G  G  B* B  B  B  G* B  B  B  G*

(Y = inclined to do, N = disinclined; G = morally good, B = morally bad)

Now, what acts will you choose to perform? Clearly A, B, C, and D, since you're inclined toward them and you think they are morally good (e.g., hugging your kids). And clearly not M, N, O, and P, since you're disinclined toward them and you think they are morally bad (e.g., stealing someone's bad sandwich). Acts E, F, G, and H are contrary to your inclinations but you figure you should do your share of them (e.g., serving on annoying committees, retrieving a piece of litter that the wind whipped out of your hand). Acts I, J, K, and L are tempting: You're inclined toward them but you see them as morally bad (e.g., taking an extra piece of cake when there isn't quite enough to go around). Suppose then that you choose to do E, F, I, and J in addition to A, B, C, and D: two good acts that you'd otherwise be disinclined to do (E and F) and two bad acts that you permit yourself to be tempted into (I and J).

Continuing our unrealistic idealizations, let's count up a prudential (that is, self-interested) score and moral score by giving each act a +1 or -1 in each category. Your prudential score will be +4 (+6 for A, B, C, D, I, and J, and -2 for E and F). Your own estimation of your moral score will also be +4 (+6 for A, B, C, D, E, and F, and -2 for I and J). This might be the mediocre sweet spot you're aiming for, short of the self-sacrificial saint (prudential 0, moral +8) but not as bad as the completely selfish person (prudential +8, moral 0). Looking around at your peers, maybe you judge them to be on average around +2 or +3, so a +4 lets you feel just slightly morally better than average.

Of course, in this model you've made some moral mistakes. Your actual moral score will only be +2 (+5 for A, B, C, E, and F, and -3 for D, I, and J). You're wrong about D. (Maybe D is complimenting someone in a way you think is kind but is actually objectionably sexist.) Thus, unsurprisingly, in moral ignorance we might overestimate our own morality. Aiming to be a little morally better than average might, on average, result in hitting the moral average, given moral ignorance.
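
For readers who like to check the arithmetic, here is a minimal sketch in Python (the variable names and structure are mine, not part of the post's model) that reproduces the +4 prudential, +4 self-estimated moral, and +2 actual moral scores from the setup above.

```python
# A minimal sketch of the toy model above (assumed variable names).
inclined      = set("ABCDIJKL")   # acts you'd like to do anyway
believed_good = set("ABCDEFGH")   # acts you believe are morally good
actually_good = set("ABCLEFGP")   # the truth: you're wrong about D, H, L, and P

chosen = set("ABCDEFIJ")          # the acts performed in the example

def score(acts, good_set):
    """+1 for each chosen act in good_set, -1 for each chosen act outside it."""
    return sum(+1 if a in good_set else -1 for a in acts)

print(score(chosen, inclined))       # prudential score: +4
print(score(chosen, believed_good))  # moral score by your own lights: +4
print(score(chosen, actually_good))  # actual moral score: +2
```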

Let's think of "ethical efficiency" as one's ability to squeeze the most moral juice from the least prudential sacrifice. If you're aiming for a moral score of +4, how can you do so with the least compromise to your prudential score? Your ignorance impairs you. You might think that by doing E and F and refraining from K and L, you're hitting +4, while also maintaining a prudential +4, but actually you've missed your moral target. You'd have done better to choose L instead of D -- an action as attractive as D but moral instead of (as you think) immoral (maybe you're a religious conservative and L is starting up a homosexual relationship with a friend who is attracted to you). Similarly, H would have been an inefficient choice: a prudential sacrifice for a moral loss instead of (as you think) a moral gain (e.g., fighting for a bad cause).

Perhaps this is schematic to the point of being silly. But I think the root idea makes sense. If you're aiming for some middling moral target rather than being governed by absolute standards, and if in the course of that aiming you are balancing prudential goods against moral goods, the more moral knowledge you have, the more effective you ought to be in efficiently trading off prudential goods for moral ones, getting the most moral bang for your buck. This is even clearer if we model the decisions with scalar values: If you know that E is +1.2 moral and -0.6 prudential then it would make sense to choose it over F which is +0.9 moral and -0.6 prudential. If you're ignorant about the relative morality of E and F you might just flip a coin, not realizing that E is the more efficient choice.
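
To make the scalar point concrete, here is a tiny illustrative sketch using the toy values from the previous paragraph (the function name is mine, introduced only for illustration):

```python
# Toy scalar version, using the +1.2/+0.9 moral and -0.6 prudential values above.
acts = {
    "E": {"moral": 1.2, "prudential": -0.6},
    "F": {"moral": 0.9, "prudential": -0.6},
}

def moral_bang_per_buck(name):
    """Moral gain per unit of prudential sacrifice."""
    return acts[name]["moral"] / -acts[name]["prudential"]

best = max(acts, key=moral_bang_per_buck)
print(best, moral_bang_per_buck(best))  # E 2.0 -- the more efficient choice
```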

In some ways this resembles the consequentialist reasoning behind effective altruism, which explores how to give resources to others in a way that most effectively benefits those others. However, ethical efficiency is more general, since it encompasses all forms of moral tradeoff including free-riding vs. contributing one's share, lying vs. truth-telling, courageously taking a risk vs. playing it safe, and so on. Also, despite having mathematical features of the sort generally associated with the consequentialist's love of calculations, one needn't be a consequentialist to think this way. One could also reason in terms of tradeoffs in strengths and weaknesses of character (I'm lazy in this, but I make up for it by being courageous about that) or the efficient execution of deontological imperfect duties. Most of us do, I suspect, to some extent weigh up moral and prudential tradeoffs, as suggested by the phenomena of moral self-licensing (feeling freer to do a bad thing after having done a good thing) and moral cleansing (feeling compelled to do something good after having done something bad).

If all of this is right, then one advantage of discovering moral truths is discovering more efficient ways to achieve your mediocre moral targets with the minimum of self-sacrifice. That is one, perhaps somewhat peculiar, reason to study ethics.

Wednesday, January 05, 2022

Against Longtermism

Last night, I finished Toby Ord's fascinating and important book, The Precipice: Existential Risk and the Future of Humanity. This has me thinking about "longtermism" in ethics.

I feel the pull of longtermism. There's something romantic in it. It's breathtaking in scope and imagination. Nevertheless, I'm against it.

Longtermism, per Ord,

is especially concerned about the impacts of our actions on the longterm future. It takes seriously the fact that our own generation is but one page in a much longer story, and that our most important role may be how we shape -- or fail to shape -- that story (p. 46).

By "longterm future", Ord means very longterm. He means not just forty years from now, or a hundred years, or a thousand. He means millions of years from now, hundreds of millions, billions! In Ord's view, as his book title suggests, we are on an existential "precipice": Our near-term decisions (over the next few centuries) are of crucial importance for the next million years plus. Either we will soon permanently ruin ourselves, or we will survive through a brief "period of danger" thereafter achieving "existential security" with the risk of self-destruction permanently minimal and humanity continuing onward into a vast future.

Given the uniquely dangerous period we face, Ord argues, we must prioritize the reduction of existential risks to humanity. Even a one in a billion chance of saving humanity from permanent destruction is worth a huge amount, when multiplied by something like a million future generations. For some toy numbers, ten billion lives times a hundred million years is 10^18 lives. An action with a one in a billion chance of saving that many lives has an expected value of 10^18 / 10^9 = a billion lives. Surely that's worth at least a trillion dollars of the world's economy (not much more than the U.S. annual military budget)? To be clear, Ord doesn't work through the numbers in so concrete a way, seeming to prefer vaguer and more cautious language about future value -- but I think this calculation is broadly in his spirit, and other longtermists do talk this way.
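
Here is that toy calculation spelled out, for anyone who wants the powers of ten made explicit (these are the post's illustrative numbers, not figures from Ord's book):

```python
# The toy expected-value calculation from the paragraph above, spelled out.
future_lives = 10e9 * 100e6   # ten billion lives times a hundred million years = 1e18
chance       = 1e-9           # a one in a billion chance of averting extinction
expected_lives_saved = future_lives * chance
print(f"{expected_lives_saved:.0e}")  # 1e+09, i.e., about a billion lives in expectation
```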

Now I am not at all opposed to prioritizing existential risk reduction. I favor doing so, including for very low risks. A one in a billion chance of the extinction of humanity is a risk worth taking seriously, and a one in a hundred chance of extinction ought to be a major focus of global attention. I agree with Ord that people in general treat existential risks too lightly. Thus, I accept much of Ord's practical advice. I object only to justifying this caution by appeal to expectations about events a million years from now.

What is wrong with longtermism?

First, it's unlikely that we live in a uniquely dangerous time for humanity, from a longterm perspective. Ord and other longtermists suggest, as I mentioned, that if we can survive the next few centuries, we will enter a permanently "secure" period in which we no longer face serious existential threats. Ord's thought appears to be that our wisdom will catch up with our power; we will be able to foresee and wisely avoid even tiny existential risks, in perpetuity or at least for millions of years. But why should we expect so much existential risk avoidance from our descendants? Ord and others offer little by way of argument.

I'm inclined to think, in contrast, that future centuries will carry more risk for humanity, if technology continues to improve. The more power we have to easily create massively destructive weapons or diseases -- including by non-state actors -- and in general the more power we have to drastically alter ourselves and our environment, the greater the risk that someone makes a catastrophic mistake, or even engineers our destruction intentionally. Only a powerful argument for permanent change in our inclinations or capacities could justify thinking that this risk will decline in a few centuries and remain low ever after.

You might suppose that, as resources improve, people will grow more cooperative and more inclined toward longterm thinking. Maybe. But even if so, cooperation carries risks. For example, if we become cooperative enough, everyone's existence and/or reproduction might come to depend on the survival of the society as a whole. The benefits of cooperation, specialization, and codependency might be substantial enough that more independent-minded survivalists are outcompeted. If genetic manipulation is seen as dangerous, decisions about reproduction might be centralized. We might become efficient, "superior" organisms that reproduce by a complex process different from traditional pregnancy, requiring a stable web of technological resources. We might even merge into a single planet-sized superorganism, gaining huge benefits and efficiencies from doing so. However, once a species becomes a single organism the same size as its environment, a single death becomes the extinction of the species. Whether we become a supercooperative superorganism or a host of cooperative but technologically dependent individual organisms, one terrible miscalculation or one highly unlikely event could potentially bring down the whole structure, ending us all.

A more mundane concern is this: Cooperative entities can be taken advantage of. As long as people have differential degrees of reproductive success, there will be evolutionary pressure for cheaters to free-ride on others' cooperativeness at the expense of the whole. There will always be benefits for individuals or groups who let others be the ones who think longterm, making the sacrifices necessary to reduce existential risks. If the selfish groups are permitted to thrive, they could employ for their benefit technology with, say, a 1/1000 or 1/1000000 annual risk of destroying humanity, flourishing for a long time until the odds finally catch up. If, instead, such groups are aggressively quashed, that might require warlike force, with the risks that war entails, or it might involve complex webs of deception and counterdeception in which the longtermists might not always come out on top.

There's something romantically attractive about the idea that the next century or two are uniquely crucial to the future of humanity. However, it's much likelier that selective pressures favoring a certain amount of short-term self-interest, either at the group or the individual level, will prevent the permanent acquisition of the hyper-cautious wisdom Ord hopes for. All or most or at least many future generations with technological capabilities matching or exceeding our own will face substantial existential risk -- perhaps 1/100 per century or more. If so, that risk will eventually catch up with us. Humanity can't survive existential risks of 1/100 per century for a million years.
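
A quick sanity check of that last claim, under the simplifying assumption that the per-century risk is independent:

```python
# Survival chance over a million years at a 1-in-100 risk of extinction per century,
# treating each century as independent (a simplifying assumption).
risk_per_century = 0.01
centuries = 1_000_000 // 100             # 10,000 centuries
survival = (1 - risk_per_century) ** centuries
print(survival)                          # ~2e-44 -- effectively zero
```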

If this reasoning is correct, it's very unlikely that there will be a million-plus year future for humanity that is worth worrying about and sacrificing for.

Second, the future is hard to see. Of course, my pessimism could be mistaken! Next year is difficult enough to predict, much less the next million years. But to the extent this is true, this cuts against longtermism in a different way. We might think that the best approach to the longterm survival of humanity is to do X -- for example, to be cautious about developing superintelligent A.I. or to reduce the chance of nuclear war. But that's not at all clear. Risks such as nuclear war, unaligned A.I., or a genetically engineered pandemic would have been difficult to imagine even a century ago. We too might have a very poor sense of what the real sources of risk will be a century from now.

It could be that the single best thing we could do to reduce the risk of completely destroying humanity in the next two hundred years is to almost destroy humanity right now. The biggest sources of existential risk, Ord suggests, are technological: out-of-control artificial intelligence, engineered pandemics, climate change, and nuclear war. However, as Ord also argues, no such event -- not even nuclear war -- is likely to completely wipe us out, if it were to happen now. If a nuclear war were to destroy most of civilization and most of our capacity to continue on our current technological trajectory, that might postpone our ability to develop even more destructive technologies in the next century. It might also teach us a fearsome lesson about existential risk. Unintuitively, then, if we really are on the precipice, our best chance for longterm survival might be to promptly blast ourselves nearly to oblivion.

Even if we completely destroy humanity now, that might be just the thing the planet needs for another, better, and less self-destructive species to arise.

I'm not, of course, saying that we should destroy or almost destroy ourselves! My point is only this: We currently have very little idea what present action would be most likely to ensure a flourishing society a million years in the future. It could quite easily be the opposite of what we're intuitively inclined to think.

What we do know is that nuclear war would be terrible for us, for our children, and for our grandchildren. That's reason enough to avoid it. Tossing speculations about the million-year future into the decision-theoretic mix risks messing up that straightforward reasoning.

Third, it's reasonable to care much more about the near future than the distant future. In Appendix A, Ord has an interesting discussion of the logic of temporal discounting. He argues on technical grounds that a "pure time preference" for a benefit simply because it comes earlier should be rejected. (For example, if it's non-exponential, you can be "Dutch booked", that is, committed to a losing gamble; but if it's strictly exponential it leads to highly unintuitive results such as caring about one death in 6000 years much more than about a billion deaths in 9000 years.) The rejection of temporal discounting is important to longtermism, since it's the high weight we are supposed to give to distant future lives that renders the longterm considerations so compelling.
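
To see how the strictly exponential case produces that unintuitive result, here is a toy calculation (the 1% annual discount rate is my assumption for illustration, not a figure from Ord):

```python
# Strictly exponential discounting at an assumed 1% per year (delta = 0.99).
delta = 0.99
one_death_in_6000_years      = 1.0 * delta ** 6000
billion_deaths_in_9000_years = 1e9 * delta ** 9000
print(one_death_in_6000_years / billion_deaths_in_9000_years)
# ~1.2e4: the single earlier death counts roughly 12,000 times more.
```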

But we don't need to be pure temporal discounters to care much more about the near future than the distant future. We can instead care about particular people and their particular near-term descendants. In Confucian ethics, for example, one ought to care most about near family, next about more distant family, next about neighbors, next about more distant compatriots, etc. I can -- rationally, I think -- care intensely about the welfare of my children, care substantially about the welfare of the children they might eventually have, care somewhat about their potential grandchildren, and only dimly and about equally about their sixty-greats-grandchildren and their thousand-greats-grandchildren. I can care intensely about the well-being of my society and the world as it now exists, substantially about society and the world as it will exist a hundred years after my death, and much less, but still somewhat, about society and the world in ten thousand or a million years. Since this isn't pure temporal discounting but instead concern about particular individuals and societies, it needn't lead to the logical or intuitive troubles Ord highlights.

Fourth, there's a risk that fantasizing about extremely remote consequences becomes an excuse to look past the needs and interests of the people living among us, here and now. I don't accuse Ord in particular of this. He also works on applied issues in global healthcare, for example. He concludes Precipice with some sweet reflections on the value of family and the joys of fatherhood. But there's something dizzying or intoxicating about considering the possible billion-year future of humanity. Persistent cognitive focus in this direction has at least the potential to turn our attention away from more urgent and personal matters, perhaps especially among those prone to grandiose fantasies.


Instead of longtermism, I recommend focusing on the people already among us and what's in the relatively foreseeable future of several decades to a hundred years. It's good to emphasize and prevent existential risks, yes. And it's awe-inspiring to consider the million-year future! Absolutely, we should let ourselves imagine what incredible things might lie before our distant descendants if the future plays out well. But practical decision-making today shouldn't ride upon such far-future speculations.

ETA Jan. 6: Check out the comments below and the public Facebook discussion for some important caveats and replies to interesting counterarguments -- also Richard Yetter Chappell's blogpost today with point-by-point replies to this post.

------------------------------------------

Related:

Group Minds on Ringworld (Oct 24, 2012)

Group Organisms and the Fermi Paradox (May 16, 2014)

How to Disregard Extremely Remote Possibilities (Apr 16, 2015)

Against the "Value Alignment" of Future Artificial Intelligence (Dec 22, 2021)

[image generated by wombo.art]

Saturday, January 01, 2022

Writings of 2021

Every New Year's Day, I post a retrospect of the past year's writings. Here are the retrospects of 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, and 2020.

The biggest project was my new book The Weirdness of the World (under contract with Princeton University Press). Although most of the chapters are based on previously published essays, much of my writing energy for the year was expended in updating and revising those essays, integrating them into a book, and writing new material for the book. The biggest negative impact was on my fiction and public philosophy. In 2022, I hope to be able to write more in both genres.

-----------------------------------


Books

Appearing in print:

In draft:

    The Weirdness of the World (under contract with Princeton University Press). Check out the draft.
      I'd really appreciate and value comments! Anyone who gives me comments on the entirety will receive a free signed copy from me when it appears in print, plus my undying gratitude and a toenail clipping from my firstborn grandchild.
Under contract / in progress:

    As co-editor with Helen De Cruz and Rich Horton, a yet-to-be-titled anthology with MIT Press containing great classics of philosophical SF.


Full-length non-fiction essays

Appearing in print:

Finished and forthcoming:
In draft and circulating:
    "Inflate and explode". (I'm trying to decide whether to trunk this one or continue revising it.)

Shorter non-fiction

    "Does the heart revolt at evil? The case of racial atrocities", Journal of Confucian Philosophy and Culture (forthcoming).

Science fiction stories

    No new published stories this year. [sad emoji] I drafted and trunked a couple. Back in the saddle in 2022!

Some favorite blog posts

Reprints and Translations

    "Fish Dance", reprinted in Ralph M. Ambrose, Vital: The Future of Health Care (Inlandia Books).