Friday, December 29, 2023

Normativism about Swimming Holes, Anger, and Beliefs

Among philosophers studying belief, normativism is an increasingly popular position. According to normativism, beliefs are necessarily, as part of their essential nature, subject to certain evaluative standards. In particular, beliefs are necessarily defective in a certain way if they are false or unresponsive to counterevidence.

In this way, believing is unlike supposing or imagining. If I merely suppose that P is true, nothing need have gone wrong if P is false. The supposition is in no way defective. Similarly, if I imagine Q and then learn that evidence supports not-Q, nothing need have gone wrong if I continue imagining Q. In contrast, if I believe P, the belief is in a certain way defective ("incorrect") if it is false, and I have failed as a believer (I've been irrational) if I don't reduce my confidence in P in the face of compelling counterevidence.

But what is a normative essence? Several different things could be meant, some plausible but tepid, others bold but less plausible.

Let's start at the tepid end. Swimming hole is, I think, also an essentially normative concept. If I decide to call a body of water a swimming hole, I'm committed to evaluating it in certain ways -- specifically, as a locale for swimming. If the water is dirty or pollution-tainted, or if it has slime or alligators, it's a worse swimming hole. If it's clean, beautiful, safe, sufficiently deep, and easy on your bare feet, it's a better swimming hole.

But of course bodies of water are what they are independently of their labeling as swimming holes. The better-or-worse normativity is entirely a function of externally applied human concepts and human uses. Once I think of a spot as a swimming hole, I am committed to evaluating it in a certain way, but the body of water is not inherently excellent or defective in virtue of its safety or danger. The normativity derives from the application of the concept or from the practices of swimming-hole users. Nonetheless, there's a sense in which it really is part of the essence of being a swimming hole that being unsafe is a defect.

[Midjourney rendition of an unsafe swimming hole with slime, rocks, and an alligator]

If belief-normativity is like swimming-hole-normativity, then the following is true: Once we label a mental state as a belief, we commit to evaluating it in certain ways -- for example as "incorrect" if untrue and "irrational" if held in the teeth of counterevidence. But if this is all there is to the normativity of belief, then the mental state in question might not be in any way intrinsically defective. Rather, we belief-ascribers are treating the state as if it should play a certain role; and we set ourselves up for disappointment if it doesn't play that role.

Suppose a member of a perennially losing sports team says, on day one of the new season, "This year, we're going to make the playoffs!" Swimming-hole normativity suggests that we interpreters have a choice. We could treat this exclamation as the expression of a belief, in which case it is defective because unjustified by the evidence and (as future defeats will confirm) false. Or we could treat the exclamation as an expression of optimism and team spirit, in which case it might not be in any way defective. There need be no fact of the matter, independent of our labeling, concerning its defectiveness or not.

Advocates of normativism about belief typically want to make a bolder claim than that. So let's move toward a bolder view of normativity.

Consider hearts. Hearts are defective if they don't pump blood, in a less concept-dependent way than swimming holes are defective if they are unsafe. That thing really is a heart, independent of any human labeling, and as such it has a function, independent of any human labeling, which it can satisfy or fail to satisfy.

Might beliefs be inherently normative in that way, the heart-like way, rather than just the swimming-hole way? If I believe this year we'll make the playoffs, is this a state of mind with an essential function in the same way that the heart is an organ with an essential function?

I am a dispositionalist about belief. To believe some proposition P is, on my view, just to be disposed to act and react in ways that are characteristic of a P-believer. To believe this year we'll make the playoffs, for example, is to be disposed to say so, with a feeling of sincerity, to be willing to wager on it, to feel surprise and disappointment with each mounting loss, to refuse to make other plans during playoff season, and so on. It's not clear that a cluster of dispositions is a thing with a function in the same way that a heart is a thing with a function.

Now maybe (though I suspect this is simplistic) some mechanism in us functions to create dispositional belief states in the face of evidence: It takes evidence that P as an input and then produces in us dispositional tendencies to act and react as if P is true. Maybe this mechanism malfunctions if it generates belief states contrary to the evidence, and maybe this mechanism has been evolutionarily selected because it produces states that cause us to act in ways that track the truth. But it doesn't follow from this, I think, that the states that are produced are inherently defective if they arise contrary to the evidence or don't track the truth.

Compare anger: Maybe there's a system in us that functions to create anger when there's wrongdoing against us or those close to us, and maybe this mechanism has been selected because it produces states that prepare us to fight. It doesn't seem to follow that the state is inherently defective if produced in some other way (e.g., by reading a book) or if one isn't prepared to fight (maybe one is a pacifist).

I conjecture that we can get all the normativity we want from belief by a combination of swimming-hole type normativity (once we conceptualize an attitude as a belief, we're committed to saying it's incorrect if false) and normativity of function in our belief-producing mechanisms, without treating belief states themselves as having normative essences.

Wednesday, December 20, 2023

The Washout Argument Against Longtermism

I have a new essay in draft, "The Washout Argument Against Longtermism". As always, thoughts, comments, and objections welcome, either as comments on this post or by email to my academic address.

Abstract:

We cannot be justified in believing that any actions currently available to us will have a non-negligible positive influence on the billion-plus-year future. I offer three arguments for this thesis.

According to the Infinite Washout Argument, standard decision-theoretic calculation schemes fail if there is no temporal discounting of the consequences we are willing to consider. Given the non-zero chance that your actions will produce infinitely many unpredictable bad and good effects, any finite effects will be washed out in expectation by those infinitudes.

According to the Cluelessness Argument, we cannot justifiably guess what actions, among those currently available to us, are relatively more or less likely to have positive effects after a billion years. We cannot be justified, for example, in thinking that nuclear war or human extinction would be more likely to have bad than good consequences in a billion years.

According to the Negligibility Argument, even if we could justifiably guess that some particular action is likelier to have good than bad consequences in a billion years, the odds of good consequences would be negligibly tiny due to the compounding of probabilities over time.

For more details see the full-length draft.

A brief, non-technical version of these arguments is also now available at the longtermist online magazine The Latecomer.

[Midjourney rendition of several happy dolphins playing]

Excerpt from full-length essay

If MacAskill’s and most other longtermists’ reasoning is correct, the world is likely to be better off in a billion years if human beings don’t go extinct now than if human beings do go extinct now, and decisions we make now can have a non-negligible influence on whether that is the case. In the words of Toby Ord, humanity stands at a precipice. If we reduce existential risk now, we set the stage for possibly billions of years of thriving civilization; if we don’t, we risk the extinction of intelligent life on Earth. It’s a tempting, almost romantic vision of our importance. I also feel drawn to it. But the argument is a card-tower of hand-waving plausibilities. Equally breezy towers can be constructed in favor of human self-extermination or near-self-extermination. Let me offer....

The Dolphin Argument. The most obvious solution to the Fermi Paradox is also the most depressing. The reason we see no signs of intelligent life elsewhere in the universe is that technological civilizations tend to self-destruct in short order. If technological civilizations tend to gain increasing destructive power over time, and if their habitable environments can be rendered uninhabitable by a single catastrophic miscalculation or a single suicidal impulse by someone with their finger on the button, then the odds of self-destruction will be non-trivial, might continue to escalate over time, and might cumulatively approach nearly 100% over millennia. I don’t want to commit to the truth of such a pessimistic view, but in comparison, other solutions seem like wishful thinking – for example, that the evolution of intelligence requires stupendously special circumstances (the Rare Earth Hypothesis) or that technological civilizations are out there but sheltering us from knowledge of them until we’re sufficiently mature (the Zoo Hypothesis).

Anyone who has had the good fortune to see dolphins at play will probably agree with me that dolphins are capable of experiencing substantial pleasure. They have lives worth living, and their death is a loss. It would be a shame if we drove them to extinction. Suppose it’s almost inevitable that we wipe ourselves out in the next 10,000 years. If we extinguish ourselves peacefully now – for example, by ceasing reproduction as recommended by antinatalists – then we leave the planet in decent shape for other species, including dolphins, which might continue to thrive. If we extinguish ourselves through some self-destructive catastrophe – for example, by blanketing the world in nuclear radiation or creating destructive nanotech that converts carbon life into gray goo – then we probably destroy many other species too and maybe render the planet less fit for other complex life.

To put some toy numbers on it, in the spirit of longtermist calculation, suppose that a planet with humans and other thriving species is worth X utility per year, a planet with other thriving species but no humans is worth X/100 utility (generously assuming that humans contribute 99% of the value to the planet!), and a planet damaged by a catastrophic human self-destructive event is worth an expected X/200 utility. If we destroy ourselves in 10,000 years, the billion-year sum of utility is 10^4 * X + (approx.) 10^9 * X/200 = (approx.) 5 * 10^6 * X. If we peacefully bow out now, the sum is 10^9 * X/100 = 10^7 * X. Given these toy numbers and a billion-year, non-human-centric perspective, the best thing would be humanity’s peaceful exit.
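In case it helps to see the toy arithmetic worked through, here is a minimal sketch in Python. The variable names are mine; all numbers (X as an arbitrary utility unit, the 10,000-year horizon, the X/100 and X/200 scenario values) are the toy assumptions from the paragraph above, not estimates.

```python
# A minimal check of the toy arithmetic above. X is an arbitrary utility unit;
# all scenario values are the post's toy assumptions.

BILLION_YEARS = 10**9
X = 1.0

util_with_humans = X             # humans plus other thriving species, per year
util_without_humans = X / 100    # other thriving species, no humans, per year
util_post_catastrophe = X / 200  # after a destructive human exit, per year

# Scenario 1: humans persist 10,000 years, then self-destruct catastrophically.
catastrophic_exit = (10**4 * util_with_humans
                     + (BILLION_YEARS - 10**4) * util_post_catastrophe)

# Scenario 2: humans peacefully bow out now.
peaceful_exit = BILLION_YEARS * util_without_humans

print(f"catastrophic exit: ~{catastrophic_exit:.2e} X")  # ~5.01e+06 X
print(f"peaceful exit:     ~{peaceful_exit:.2e} X")      # ~1.00e+07 X
```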

Now the longtermists will emphasize that there’s a chance we won’t wipe ourselves out in a terribly destructive catastrophe in the next 10,000 years; and even if it’s only a small chance, the benefits could be so huge that it’s worth risking the dolphins. But this reasoning ignores a counterbalancing chance: That if human beings stepped out of the way a better species might evolve on Earth. Cosmological evidence suggests that technological civilizations are rare; but it doesn’t follow that civilizations are rare. There has been a general tendency on Earth, over long, evolutionary time scales, for the emergence of species with moderately high intelligence. This tendency toward increasing intelligence might continue. We might imagine the emergence of a highly intelligent, creative species that is less destructively Promethean than we are – one that values play, art, games, and love rather more than we do, and technology, conquering, and destruction rather less – descendants of dolphins or bonobos, perhaps. Such a species might have lives every bit as good as ours (less visible to any ephemeral high-tech civilizations that might be watching from distant stars), and they and any like-minded descendants might have a better chance of surviving for a billion years than species like ours who toy with self-destructive power. The best chance for Earth to host such a species might, then, be for us humans to step out of the way as expeditiously as possible, before we do too much harm to complex species that are already partway down this path.

Think of it this way: Which is the likelier path to a billion-year happy, intelligent species: that we self-destructive humans manage to keep our fingers off the button century after century after century somehow for ten million centuries, or that some other more peaceable, less technological clade finds a non-destructive stable equilibrium? I suspect we flatter ourselves if we think it’s the former.

This argument generalizes to other planets that our descendants might colonize in other star systems. If there’s even a 0.01% chance per century that our descendants in Star System X happen to destroy themselves in a way that ruins valuable and much more durable forms of life already growing in Star System X, then it would be best overall for them never to have meddled, and best for us now to peacefully exit into extinction rather than risk producing descendants who will expose other star systems to their destructive touch.
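The compounding at work here is easy to verify. Here is a minimal sketch, using the 0.01%-per-century toy figure above and assuming (for simplicity) that the risk is independent from century to century:

```python
import math

p_per_century = 1e-4   # toy figure from the text: 0.01% risk per century
centuries = 10**7      # a billion years

# Probability of avoiding catastrophe for all 10^7 centuries
# (assuming independence): (1 - p)^n = exp(n * ln(1 - p)) ~ e^-1000.
p_never = math.exp(centuries * math.log1p(-p_per_century))

print(p_never)      # ~0.0 (about e^-1000; underflows to zero)
print(1 - p_never)  # probability of at least one catastrophe: ~1.0
```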

...

My aim with the Dolphin Argument... is not to convince readers that humanity should bow out for the sake of other species.... Rather, my thought is this: It’s easy to concoct stories about how what we do now might affect the billion-year future, and then to attach decision-theoretic numbers to those stories. We lack good means for evaluating these stories. We are likely just drawn to one story or another based on what it pleases us to think and what ignites our imagination.

Saturday, December 16, 2023

Could the Universe Be Infinite?

It's not absurd to think the universe might endure forever.

by Eric Schwitzgebel and Jacob Barandes

From The Weirdness of the World, forthcoming from Princeton University Press in January, excerpted Dec 15 at Nautilus.

On recent estimates, the observable universe—the portion of the universe that we can detect through our telescopes—extends about 47 billion light-years in every direction. But the limit of what we can see is one thing, and the limit of what exists is quite another. It would be remarkable if the universe stopped exactly at the edge of what we can see. For one thing, that would place us, surprisingly and un-Copernicanly, precisely at the center.

But even granting that the universe is likely to be larger than 47 billion light-years in radius, it doesn’t follow that it’s infinite. It might be finite. But if it’s finite, then one of two things should be true: Either the universe should have a boundary or edge, or it should have a closed topology.

It’s not absurd to think that the universe might have an edge. Theoretical cosmologists routinely consider hypothetical finite universes with boundaries at which space comes to a sudden end. However, such universes require making additional cosmological assumptions for which there is no direct support—assumptions about the conditions, if any, under which those boundaries might change, and assumptions about what would happen to objects or light rays that reach those boundaries.

It’s also not absurd to think that the universe might have a closed topology. By this we mean that over distances too large for us to see, space essentially repeats, so that a particle or signal that traveled far enough would eventually come back around to the spatial region from which it began—like how when Pac-Man exits one side of the TV screen, he re-emerges from the other side. However, there is currently no evidence that the universe has a closed topology.

Leading cosmologists, including Alex Vilenkin, Max Tegmark, and Andrei Linde, have argued that spatial infinitude is the natural consequence of the best current theories of cosmic inflation. Given that, plus the absence of evidence for an edge or closed topology, infinitude seems a reasonable default view. The mere 47 billion light-years we can see is the tiniest speck of a smidgen of a drop in an endless expanse.

Let’s call any galaxy with stars, planets, and laws of nature like our own a sibling galaxy. Exactly how similar a galaxy must be to qualify as a sibling we will leave unspecified, but we don’t intend high similarity. Andromeda is sibling enough, as are probably most of the other hundreds of billions of ordinary galaxies we can currently see.

The finiteness of the speed of light means that when we look at these faraway galaxies, we see them as they were during earlier periods in the universe’s history. Taking this time delay into account, the laws of nature don’t appear to differ in regions of the observable universe that are remote from us. Likewise, galaxies don’t appear to be rarer or differently structured in one direction or another. Every direction we look, we see more or less the same stuff. These observations help motivate the Copernican Principle, which is the working hypothesis that our position in the universe is not special or unusual—not the exact center, for example, and not the one weird place that happens to have a galaxy operating by special laws that don’t hold elsewhere.

Still, our observable universe might be an atypical region of an infinite universe. Possibly, somewhere beyond what we can see, different forms of elementary matter might follow different laws of physics. Maybe the gravitational constant is a little different. Maybe there are different types of fundamental particles. Even more radically, other regions might not consist of three-dimensional space in the form we know it. Some versions of string theory and inflationary cosmology predict exactly such variability.

But even if our region is in some respects unusual, it might be common enough that there are infinitely many other regions similar to it—even if just one region in 10^500. Again, this is a fairly standard view among speculative cosmologists, which comports well with straightforward interpretations of leading cosmological theories. One can hardly be certain, of course. Maybe we’re just in a uniquely interesting spot! But we are going to assume that’s not the case. In the endless cosmos, infinitely many regions resemble ours, with three spatial dimensions, particles that obey approximately the “Standard Model” of particle physics, and cluster upon cluster of sibling galaxies.

Under the assumptions so far, the Copernican Principle suggests that there are infinitely many sibling galaxies in a spacelike relationship with us, meaning that they exist in spatiotemporal regions roughly simultaneous with ours (in some frame of reference). We will have seen the past history of some of these simultaneously existing sibling galaxies, most of which, we assume, continue to endure. However, it’s a separate question whether there are also infinitely many sibling galaxies in a timelike relationship to us—more specifically, existing in our future. Are there infinitely many sibling galaxies in spatiotemporal locations that are, at least in principle, eventually reachable by particles originating in our galaxy? (If the locutions of this paragraph seem convoluted, that’s due to the bizarreness of relativity theory, which prevents us from using “past,” “present,” and “future” in the ordinary, commonsense way.)

Thinking about whether infinitely many sibling galaxies will exist in the future requires thinking about heat death. Stars have finite lifetimes. If standard physical theory is correct, then ultimately all the stars we can currently see will burn out. Some of those burned-out stars will contribute to future generations of stars, which will, in turn, burn out. Other stars will become black holes, but then those black holes also will eventually dissipate (through Hawking radiation).

Given enough time, assuming that the laws of physics as we understand them continue to hold, and assuming things don’t re-collapse in a “Big Crunch” in the distant future, the standard view is that everything we presently see will inevitably enter a thin, boring, high-entropy state near equilibrium—heat death. Picture nearly empty darkness, with particles more or less evenly spread out, with even rock-size clumps of matter being rare.

But what happens after heat death? This is of course even more remote and less testable than the question of whether heat death is inevitable. It requires extrapolating far beyond our current range of experience. But still we can speculate based on currently standard assumptions. Let’s think as reasonably as we can about this. Here’s our best guess, based on standard theory, from Ludwig Boltzmann through at least some time slices of Sean Carroll.

For this speculative exercise, we will assume that the famously probabilistic behavior of quantum systems is intrinsic to the systems themselves, persisting post-heat-death and not requiring external observers carrying out measurements. This is consistent with most current approaches to quantum theory (including most many-worlds approaches, objective-collapse approaches, and Bohmian mechanics). It is, however, inconsistent with theories according to which the probabilistic behavior requires external observers (some versions of the “Copenhagen interpretation”) and theories on which the post-heat-death universe would inescapably occupy a stationary ground state. Under this assumption, standard probabilistic theories of what happens in high-entropy, near-vacuum conditions continue to apply post-heat-death. More specifically, the universe will continue to support random fluctuations of photons, protons, and whatever other particles remain. Consequently, from time to time, these particles will, by chance, enter unlikely configurations. This is predicted by both standard statistical mechanics and standard quantum mechanics. Post-heat-death, seven particles will sometimes converge, by chance, upon the same small region. Or 700. Or—very rarely!—7 trillion.

There appears to be no in-principle limit to how large such chance fluctuations can be or what they can contain if they pass through the right intermediate phases. Wait long enough and extremely large fluctuations should occur. Assuming the universe continues infinitely, rather than having a temporal edge or forming a closed loop, for which there is no evidence, then eventually some random fluctuation should produce a bare brain having cosmological thoughts. Wait longer, and eventually some random fluctuation will produce, as Boltzmann suggested, a whole galaxy. If the galaxy is similar enough to our own, it will be a sibling galaxy. Wait still longer, and another sibling galaxy will arise, and another, and another....

For good measure, let’s also assume that after some point post-heat-death, the rate at which galaxy-size systems fluctuate into existence does not systematically decrease. There’s some minimal probability of galaxy-size fluctuations, not an ever-decreasing probability with longer and longer average intervals between galaxies. Fluctuations appear at long intervals, by random chance, then fade back into chaos after some brief or occasionally long period, and the region returns to the heat-death state, with the same small probability of large fluctuations as before. Huge stretches of not much will be punctuated by rare events of interesting, even galaxy-size, complexity.
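This assumption is doing real work. If the per-epoch fluctuation probability stays bounded above zero (and the epochs are independent), then infinitely many fluctuations follow almost surely. Here is one standard way to make that step precise—our gloss via the second Borel–Cantelli lemma, not the excerpt's own formulation:

```latex
% E_n: "a galaxy-size fluctuation occurs during epoch n" (epochs independent).
% A non-decreasing rate means Pr(E_n) >= p > 0 for every n, so:
\sum_{n=1}^{\infty} \Pr(E_n) \;\ge\; \sum_{n=1}^{\infty} p \;=\; \infty
\quad\Longrightarrow\quad
\Pr\big(E_n \text{ occurs for infinitely many } n\big) = 1
\quad \text{(second Borel--Cantelli lemma).}
```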

Of course, this might not be the way things go. We certainly can’t prove that the universe is like this. But despite the bizarreness that understandably causes some people to hesitate, the overall picture we’ve described appears to be the most straightforward consequence of standard physical theory, taken straight out of the box, without twisting things around too much.

Even if this specific speculation is wrong, there are many other ways in which the cosmos might deliver infinitely many sibling galaxies in the future. For example, some process might ensure we never enter heat death and new galaxies somehow continue to be born.

Alternatively, processes occurring pre-heat-death, such as the formation of black holes, might lead to new bangs or cosmic inflations, spatiotemporally unconnected or minimally connected to our universe, and new stars and galaxies might be born from these new bangs or inflations in much the same way as our familiar stars and galaxies were born from our familiar Big Bang.

Depending on what constitutes a “universe” and a relativistically specifiable “timelike” relation between our spatiotemporal region and some future spatiotemporal region, those sibling galaxies might not exist in our universe or stand in our future, technically speaking, but if so, that detail doesn’t matter to our core idea. Similarly, if the observable universe reverses its expansion, it might collapse upon itself in a Big Crunch, followed by another Big Bang, and so on in an infinitely repeating cycle, containing infinitely many sibling galaxies post-Crunch. This isn’t currently the mainstream view, but it’s a salient and influential alternative if the heat-death scenario outlined above is mistaken.

We conclude that it is reasonable to think that the universe is infinite, and that there exist infinitely many galaxies broadly like ours, scattered throughout space and time, including in our future. It’s a plausible reading of our cosmological situation. It’s a decent guess and at least a possibility worth taking seriously....

Excerpted from The Weirdness of the World. In the book, this argument sets up the case that virtually every action you perform has causal ripples extending infinitely into the future, causing virtually every physically possible, non-unique, non-zero probability event.

Tuesday, December 05, 2023

Falling in Love with Machines

People occasionally fall in love with AI systems. I expect that this will become increasingly common as AI grows more sophisticated and new social apps are developed for large language models. Eventually, this will probably precipitate a crisis in which some people have passionate feelings about the rights and consciousness of their AI lovers and friends while others hold that AI systems are essentially just complicated toasters with no real consciousness or moral status.

Last weekend, chatting with the adolescent children of a family friend helped cement my sense that this crisis might arrive soon. Let’s call the kids Floyd (age 12) and Esmerelda (age 15). Floyd was doing a science fair project comparing the output quality of Alexa, Siri, Bard, and ChatGPT. But, he said, "none of those are really AI".

What did Floyd have in mind by "real AI"? The robot Aura in the Las Vegas Sphere. Aura has an expressive face and an ability to remember social interactions (compare Aura with my hypothetical GPT-6 mall cop).

[Aura at the Las Vegas Sphere]

"Aura remembered my name," said Esmerelda. "I told Aura my name, then came back forty minutes later and asked if it knew my name. It paused a bit, then said, 'Is it Esmerelda?'"

"Do you think people will ever fall in love with machines?" I asked.

"Yes!" said Floyd, instantly and with conviction.

"I think of Aura as my friend," said Esmerelda.

I asked if they thought machines should have rights. Esmerelda said someone asked Aura if it wanted to be freed from the Sphere. It said no, Esmerelda reported. "Where would I go? What would I do?"

I suggested that maybe Aura had just been trained or programmed to say that.

Yes, that could be, Esmerelda conceded. How would we tell, she wondered, if Aura really had feelings and wanted to be free? She seemed mildly concerned. "We wouldn't really know."

I accept the current scientific consensus that today's large language models do not have a meaningful degree of consciousness or deserve moral consideration similar to that of vertebrates. But at some point there will likely be legitimate scientific dispute, if AI systems start to meet some but not all of the criteria for consciousness according to mainstream scientific theories.

The dilemma will be made more complicated by corporate interests, as some corporations (e.g., Replika, makers of the "world's best AI friend") will have a financial motivation to encourage human-AI attachment, while others (e.g., OpenAI) intentionally train their language models to downplay user concerns about consciousness and rights.