Friday, February 02, 2024

Swallows and Moles in Philosophy

In his review (in the journal Science -- cool!) of my recently released book, The Weirdness of the World, Edouard Machery writes:

There are two kinds of philosophers: swallows and moles. Swallows love to soar and to entertain philosophical hypotheses at best loosely connected with empirical knowledge. Plato and Gottfried Leibniz are paradigmatic swallows. Moles, on the contrary, rummage through mundane facts about our world and aim at better understanding it. Aristotle, William James, and Hans Reichenbach are paradigmatic moles. Eric Schwitzgebel is unabashedly a swallow.

Machery admits to having a mole's-eye view of the swallows. He praises the book, but he is frustrated by my admittedly wild speculations about radical skepticism, group consciousness, an infinite future, etc.

Machery's goal in his own recent book Philosophy Within Its Proper Bounds was, he says, "to curtail the flights of fancy with which contemporary philosophers are enamored". The Weirdness of the World celebrates such flights of fancy -- so naturally, Machery and I are going to disagree about the value of wild philosophical speculation.

Reading Machery's contrast of swallows and moles, I was immediately reminded of how the ancient Chinese philosopher Zhuangzi opens his Inner Chapters:

There is a fish in the Northern Oblivion named Kun, and this Kun is quite huge, spanning who knows how many thousands of miles. He transforms into a bird named Peng, and this Peng has quite a back on him, stretching who knows how many thousands of miles. When he rouses himself and soars into the air, his wings are like clouds draped across the heavens. The oceans start to churn, and this bird begins his journey toward the Southern Oblivion....

The quail laughs at him, saying, "Where does he think he's going? I leap into the air with all my might, but before I get farther than a few yards I drop to the ground. My twittering and fluttering between the branches is the utmost form of flying! So where does he think he's going?" (Ziporyn trans., pp. 3-4).

Zhuangzi is the swallowiest of swallows, soaring far beyond mundane empirical facts, wondering if life might be a dream, speculating about trees who measure eight thousand years as a single autumn, and celebrating "spirit men" with skin like ice and snow who eat only wind and dew, riding upon the air and clouds.

Zhuangzi's quail, however, raises a good point: It's much clearer where you're going if you confine yourself to small hops between familiar branches. The Peng is neither practical nor grounded, and Zhuangzi's philosophy is arguably the same. Zhuangzi's friend Huizi scolds him: "Your words are... big and useless, which is why they are rejected by everyone who hears them" (Ziporyn trans., p. 8).

In defense against Machery and the quail critique, I offer three thoughts:

First, if anyone is going to speculate about wild possibilities concerning the fundamental nature of things, philosophers should be among them.

It would be a sad, gray world if our reasoning was always confined to "proper bounds" and we couldn't reflect on issues like dream skepticism, group consciousness, and infinitude. Shouldn't it be part of the job description of philosophy to explore such ideas, considering what can or should be made of them?

Such speculations needn't be entirely unconstrained by empirical facts, even if empirical science fails to deliver decisive answers. In The Weirdness of the World my speculations always start from empirical observation. My discussion of dream skepticism engages with the science of dreams; my discussion of group consciousness engages with the science of consciousness; my chapter on the possible infinite future -- collaborative with physicist and philosopher of physics Jacob Barandes -- is grounded in the standard working assumptions of mainstream physics. Scientifically informed philosophers are as well-positioned as anyone to speculate about wild hypotheticals that naturally intrigue us (at least some of us). To stand athwart such speculations, saying "Thou shalt not enter this epistemic wilderness!" is to reject an intrinsically valuable form of human philosophical curiosity.

Second, we can distinguish two types of swallow: those confident that their wild hypotheses are correct and those who merely entertain and explore such hypotheses.

Maybe Plato was convinced of the reality of Forms and the recollection theory of memory. Maybe Leibniz was convinced that the world was composed of monads in pre-established harmony. But Zhuangzi was a self-undermining skeptic who appears to have taken none of his wild speculations as established fact.

I don't argue that the United States definitely has conscious experiences; I argue that if we accept standard materialist approaches to consciousness, they seem to imply that it does and that therefore we should take the idea seriously as a possibility. I don't argue that this is a dream or a short-term simulation; I argue that our ordinary culturally-given understanding of the world and mainstream scientific assumptions combine to justify assigning a non-trivial (maybe about 0.1%) credence to both of those possibilities. Barandes and I don't argue that there definitely is an infinite future in which future counterparts of you enact almost every possible action, but only that it follows from "certain not wholly implausible assumptions".

When soaring in speculation far beyond the mundane local tree branches, doubt is appropriate. The most natural critique of swallows is that they appear to believe wild things on thin evidence. That critique is harder to sustain when the swallow explicitly treats the speculations as speculations only, rather than as established facts.

Third, the swallow and the mole can collaborate -- even in the work of a single philosopher. As Jonathan Birch comments in my Facebook post linking to Machery's book review, two of Edouard's paradigmatic examples of moles -- Aristotle and William James -- are probably not best thought of as pure moles, but rather as swallow-moles. They dug around quite a bit in mundane empirical facts, yes. But they sometimes also soared with the swallows. Aristotle speculated on the existence of a supraphysical unmoved mover responsible for the existence of the physical world. James speculated about metaphysical "neutral monism" concerning mind and matter and celebrated religious belief beyond the evidence.

I too have done a fair bit of mundane empirical work -- for example, on the moral behavior of ethics professors (e.g., here and here), on introspective method (e.g., here and here), and on the consequences of exposure to ethical argumentation (e.g., here and here). Even when I am not myself running the empirical studies, much of my work engages with nitty-gritty empirical detail (e.g., on the history of reports of coloration in dreams, on the cognitive capacities of garden snails, on the accuracy of visual imagery reports, and on psychological measures of well-being).

Often, I think, deep empirical mole-digging is valuable for one's subsequent speculative soaring. Digging into the details of cosmological models enables better informed speculation about the distant future. Digging into the details of the behavior of ethics students and professors enables better informed speculation about the general relation between ethical reflection and ethical behavior. Digging into the details of dream reports enables better informed speculation about dream skepticism. As Zhuangzi imagines, a low-lying fish can transform into a soaring bird.

No single researcher needs to do both the digging and the soaring, even if some of us enjoy both types of task. But it's valuable to have a whole ecosystem of moles and swallows, foxes and hedgehogs, ants and anteaters, truth philosophers and dare philosophers, and so on.

I'm honored that Machery counts me among the swallows. I celebrate his moleishness. Let's dig and soar!

Thursday, January 25, 2024

Imagining Yourself in Another's Shoes vs. Extending Your Concern: Empirical and Ethical Differences

[new paper in draft]

The Golden Rule (do unto others as you would have others do unto you) isn't bad, exactly -- it can serve a valuable role -- but I think there's something more empirically and ethically attractive about the relatively underappreciated idea of "extension" found in the ancient Chinese philosopher Mengzi.

The fundamental idea of extension, as I interpret it, is to notice the concern one naturally has for nearby others -- whether they are relationally near (like close family members) or spatially near (like Mengzi's child about to fall into a well or Peter Singer's child you see drowning in a shallow pond) -- and, attending to relevant similarities between those nearby cases and more distant cases, to extend your concern to the more distant cases.

I see three primary advantages to extension over the Golden Rule (not that these constitute an exhaustive list of means of moral expansion!).

(1.) Developmentally and cognitively, extension is less complex. The Golden Rule, properly implemented, involves imagining yourself in another's shoes, then considering what you would want if you were them. This involves a non-trivial amount of "theory of mind" and hypothetical reasoning. You must notice how others' beliefs, desires, and other mental states relevantly differ from yours, then you must imagine yourself hypothetically having those different mental states, and then you must assess what you would want in that hypothetical case. In some cases, there might not even be a fact of the matter about what you would want. (As an extreme example, imagine applying the Golden Rule to an award-winning show poodle. Is there a fact of the matter about what you would want if you were an award-winning show poodle?) Mengzian extension seems cognitively simpler: Notice that you are concerned about nearby person X and want W for them, notice that more distant person Y is relevantly similar, and come to want W for them also. This resembles ordinary generalization between relevant cases: This wine should be treated this way, therefore other similar wines should be treated similarly; such-and-such is a good way to treat this person, so such-and-such is probably also a good way to treat this other similar person.

(2.) Empirically, extension is a more promising method for expanding one's moral concern. Plausibly, it's more of a motivational leap to go from concern about self to concern about distant others (Golden Rule) than to go from concern for nearby others to similar more distant others (Mengzian Extension). When aid agencies appeal for charitable donations, they don't typically ask people to imagine what they would want if they were living in poverty. Instead, they tend to show pictures of children, drawing upon our natural concern for children and inviting us to extend that concern to the target group. Also -- as I plan to discuss in more detail in a post next month -- in the "argument contest" Fiery Cushman and I ran back in 2020, the arguments most successful in inspiring charitable donation employed Mengzian extension techniques, while appeals to "others' shoes" style reasoning did not tend to predict higher levels of donation than did the average argument.

(3.) Ethically, it's more attractive to ground concern for distant others in the extension of concern for nearby others than in hypothetical self-interest. Although there's something attractive about caring for others because you can imagine what you would want if you were them, there's also something a bit... self-centered? egoistic? ... about grounding other-concern in hypothetical self-concern. Rousseau writes: "love of men derived from love of self is the principle of human justice" (Emile, Bloom trans., p. 235). Mengzi or Confucius would never say this! In Mengzian extension, it is ethically admirable concern for nearby others that is the root of concern for more distant others. Appealingly, I think, the focus is on broadening one's admirable ethical impulses, rather than hypothetical self-interest.

[ChatGPT4's rendering of Mengzi's example of a child about to fall into a well, with a concerned onlooker; I prefer Helen De Cruz's version]

My new paper on this -- forthcoming in Daedalus -- is circulating today. As always, comments, objections, corrections, connections welcome, either as comments on this post, on social media, or by email.

Abstract:

According to the Golden Rule, you should do unto others as you would have others do unto you. Similarly, people are often exhorted to "imagine themselves in another's shoes." A related but contrasting approach to moral expansion traces back to the ancient Chinese philosopher Mengzi, who urges us to "extend" our concern for those nearby to more distant people. Other approaches to moral expansion involve: attending to the good consequences for oneself of caring for others, expanding one's sense of self, expanding one's sense of community, attending to others' morally relevant properties, and learning by doing. About all such approaches, we can ask three types of question: To what extent do people in fact (e.g., developmentally) broaden and deepen their care for others by these different methods? To what extent do these different methods differ in ethical merit? And how effectively do these different methods produce appropriate care?

Tuesday, January 16, 2024

The Weirdness of the World: Release Day and Introduction

Today is the official U.S. release day of my newest book, The Weirdness of the World!

As a teaser, here's the introduction:

In Praise of Weirdness

The weird sisters, hand in hand,
Posters of the sea and land,
Thus do go about, about:
Thrice to thine and thrice to mine
And thrice again, to make up nine.
Peace! the charm’s wound up.
—Shakespeare, Macbeth, Act I, scene iii

Weird often saveth
The undoomed hero if doughty his valor!
—Beowulf, X.14–15, translated by J. Lesslie Hall


The word “weird” has deep roots in Old English, originally as a noun for fate or magic, later evolving toward its present use as an adjective for the uncanny or peculiar. By the 1980s, it had fruited as the choicest middle-school insult against unstylish kids like me who spent their free time playing with figurines of wizards and listening to obscure science fiction radio shows. If the “normal” is the conventional, ordinary, and readily understood, the weird is what defies that.

The world is weird -- deeply, pervasively so, weird to its core, or so I will argue in this book. Among the weirdest things about Earth is that certain complex bags of mostly water can pause to reflect on the most fundamental questions there are. We can philosophize to the limits of our comprehension and peer into the fog beyond those limits. We can contemplate the foundations of reality, and the basis of our understanding of those foundations, and the necessary conditions of the basis of our understanding of those foundations, and so on, trying always to peer behind the next curtain, even with no clear method and no great hope of a satisfying end to the inquiry. In this respect, we vastly outgeek bluebirds and kangaroos and are rightly a source of amazement to ourselves.

I will argue that careful inquiry into fundamental questions about consciousness and cosmology reveals not a set of readily comprehensible answers but instead a complex blossoming of bizarre possibilities. These possibilities compete with one another, or combine in non-obvious ways. Philosophical and cosmological inquiry teaches us that something radically contrary to common sense must be true about the fundamental structures of the mind and the world, while leaving us poorly equipped to determine where exactly the truth lies among the various weird possibilities.

We needn’t feel disappointed by this outcome. The world is richer and more interesting for escaping our understanding. How boring it would be if everything made sense!

1. My Weird Thesis

Consider three huge questions: What is the fundamental structure of the cosmos? How does human consciousness fit into it? What should we value? What I will argue in this book -- with emphasis on the first two questions but also sometimes touching on the third -- is (1) that the answers to these questions are currently beyond our capacity to know, and (2) that we do nonetheless know at least this: Whatever the truth is, it’s weird. Careful reflection will reveal that every viable theory on these grand topics is both bizarre and dubious. In chapter 2 (“Universal Bizarreness and Universal Dubiety”), I will call this the Universal Bizarreness thesis and the Universal Dubiety thesis. Something that seems almost too preposterous to believe must be true, but we lack the means to resolve which of the various preposterous-seeming options is in fact correct. If you’ve ever wondered why every wide-ranging, foundations-minded philosopher in the history of Earth has held bizarre metaphysical or cosmological views (I challenge you to find an exception!) -- with each philosopher holding, seemingly, a different set of bizarre views -- chapter 2 offers an explanation.

I will argue that every approach to cosmology and consciousness has implications that run strikingly contrary to mainstream “common sense” and that, partly in consequence, we ought to hold such theories only tentatively. Sometimes we can be justified in simply abandoning what we previously thought of as common sense, when we have firm scientific grounds for thinking otherwise; but questions of the sort I explore in this book test the limits of scientific inquiry. Concerning such matters, nothing is firm -- neither common sense, nor science, nor any of our other epistemic tools. The nature and value of scientific inquiry itself rely on disputable assumptions about the fundamental structure of the mind and the world, as I discuss in chapters on skepticism (chapter 4), idealism (chapter 5), and whether the external world exists (chapter 6).

On a philosopher’s time scale -- where a few decades ago is “recent” and a few decades from now is “soon” -- we live in a time of change, with cosmological theories and theories of consciousness rising and receding in popularity based mainly on broad promise and what captures researchers’ imaginations. We ought not trust that the current range of mainstream theories will closely resemble the range in a hundred years, much less the actual truth.

2. Varieties of Cosmological Weirdness

To establish that the world is cosmologically weird, maybe all that is needed is relativity theory and quantum mechanics.

According to relativity theory, if your twin accelerates away from you at very high speed, then returns, much less time will have passed for the traveler than for you who stayed here on Earth -- the so-called Twin Paradox. According to the most straightforward interpretation of quantum mechanics, if you observe what we ordinarily consider to be a chance event, there’s also an equally real, equally existing version of you in another “world” who shares your past but for whom the event turned out differently. (Or maybe your act of observation caused the event to turn out one way rather than the other, or maybe some other bizarre thing is true, depending on the correct interpretation of quantum mechanics, but it’s widely accepted that there are no non-bizarre interpretations.) So if you observe the chance decay of a uranium atom, for example, there’s another world branching off from this one, containing a counterpart of you who observes the atom not to have decayed. If we accept that view, then the cosmos contains a myriad of different, equally real worlds, each with different versions of you and your friends and everything you know, all splitting off from a common past.
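The twin-paradox asymmetry can be made quantitative with the standard special-relativistic time-dilation formula. Here's a minimal sketch; the 90%-of-light-speed figure is an illustrative choice of mine, not from the text:

```python
import math

def traveler_elapsed_years(earth_years: float, v_over_c: float) -> float:
    """Proper time elapsed for the traveling twin, per special-relativistic
    time dilation: tau = t * sqrt(1 - v^2/c^2)."""
    return earth_years * math.sqrt(1 - v_over_c ** 2)

# At 90% of light speed, a journey lasting 10 years in the Earth frame
# takes only about 4.4 years for the traveler.
print(round(traveler_elapsed_years(10, 0.9), 2))
```

(This idealizes the trip as uniform-speed travel; the acceleration phases that make the scenario asymmetric are ignored, as in most textbook presentations.)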

I won’t dwell on those particular cosmological peculiarities, since they are familiar to academic readers and well handled elsewhere. However, some equally fundamental cosmological issues are typically addressed by philosophers rather than scientific cosmologists.

One is the possibility that the cosmos is nowhere near as large as we ordinarily assume -- perhaps just you and your immediate environment (chapter 4) or perhaps even just your own mind and nothing else (chapter 6). Although these possibilities might appear unlikely, they are worth considering seriously, to assess how confident we ought to be in their falsity, and on what grounds. I will argue that it’s reasonable not to entirely dismiss such skeptical possibilities. Alternatively, and more in line with mainstream physical theory, the cosmos might be infinite, which brings its own train of bizarre consequences (chapter 7).

Another possibility is that we live inside a simulated reality or a pocket universe, embedded in a much larger structure about which we know virtually nothing (chapters 4 and 5). Yet another possibility is that our experience of three-dimensional spatiality is a product of our own minds that doesn’t reflect the underlying structure of reality (chapter 5) or that our sensory experience maps only loosely onto the underlying structure of reality (chapter 9).

Still another set of questions concerns the relationship of mind to cosmos. Is conscious experience abundant in the universe, or does it require the delicate coordination of rare events (chapter 10)? Is consciousness purely a matter of having the right physical structure, or might it require something non-physical (chapter 2)? Under what conditions might a group of organisms give rise to group-level consciousness (chapter 3)? What would it take to build a conscious machine, if that is possible at all -- and what should we do if we don’t know whether we have succeeded (chapter 11)?

In each of our heads there are about as many neurons as stars in our galaxy, and each neuron is arguably more structurally complex than any star system that does not contain life. There is as much complexity and mystery inside as out.

The repeated theme: In the most fundamental matters of consciousness and cosmology, neither common sense, nor early twenty-first-century empirical science, nor armchair philosophical theorizing is entirely trustworthy. The rational response is to distribute our credence across a wide range of bizarre options.

Each chapter is meant to be separately comprehensible. Please feel free to skip ahead, reading any subset of them in any order.

3. Philosophy That Closes versus Philosophy That Opens

You are reading a philosophy book -- voluntarily, let’s suppose. Why? Some people read philosophy because they believe it reveals profound, fundamental truths about the way the world really is and the one right manner to live. Others like the beauty of grand philosophical systems. Still others like the clever back-and-forth of philosophical dispute. What I like most is none of these. I love philosophy best when it opens my mind -- when it reveals ways the world could be, possible approaches to life, lenses through which I might see and value things around me, which I might not otherwise have considered.

Philosophy can aim to open or to close. Suppose you enter Philosophical Topic X imagining three viable, mutually exclusive possibilities, A, B, and C. The philosophy of closing aims to reduce the three to one. It aims to convince you that possibility A is correct and the others wrong. If it succeeds, you know the truth about Topic X: A is the answer! In contrast, the philosophy of opening aims to add new possibilities to the mix -- possibilities that you hadn’t considered before or had considered but too quickly dismissed. Instead of reducing three to one, three grows to maybe five, with new possibilities D and E. We can learn by addition as well as subtraction. We can learn that the range of viable possibilities is broader than we had assumed.

For me, the greatest philosophical thrill is realizing that something I’d long taken for granted might not be true, that some “obvious” apparent truth is in fact doubtable -- not just abstractly and hypothetically doubtable, but really, seriously, in-my-gut doubtable. The ground shifts beneath me. Where I’d thought there would be floor, there is instead open space I hadn’t previously seen. My mind spins in new, unfamiliar directions. I wonder, and the world itself seems to glow with a new wondrousness. The cosmos expands, bigger with possibility, more complex, more unfathomable. I feel small and confused, but in a good way.

Let’s test the boundaries of the best current work in science and philosophy. Let’s launch ourselves at questions monstrously large and formidable. Let’s contemplate these questions carefully, with serious scholarly rigor, pushing against the edge of human knowledge. That is an intrinsically worthwhile activity, worth some of our time in a society generous enough to permit us such time, even if the answers elude us.

My middle-school self who used dice and thrift-shop costumes to imagine astronauts and wizards is now a middle-aged philosopher who uses twenty-first-century science and philosophy to imagine the shape of the cosmos and the magic of consciousness. Join me! If doughty our valor, mayhap the weird saveth us.

Friday, January 12, 2024

Demographic Trends in the U.S. Philosophy Major, 2001-2022 -- Including Total Majors, Second Majors, Gender, and Race

I'm preparing for an Eastern APA session on the "State of Philosophy" next Thursday, and I thought I'd share some data on philosophy major bachelor's degree completions from the National Center for Education Statistics IPEDS database, which compiles data on virtually all students graduating from accredited colleges and universities in the U.S., as reported by administrators.

I examined all data from the 2000-2001 academic year (the first year in which they started recording data on second majors) through 2021-2022 (the most recent available year).

Total Numbers of Philosophy Majors: The Decline Has Stopped

First, the sharp decline in philosophy majors since 2013 has stopped:

2001:  5836
2002:  6529
2003:  7023
2004:  7707
2005:  8283
2006:  8532
2007:  8541
2008:  8778
2009:  8996
2010:  9268
2011:  9292
2012:  9362
2013:  9427
2014:  8820
2015:  8184
2016:  7489
2017:  7572
2018:  7667
2019:  8074
2020:  8209
2021:  8328
2022:  7958

(The decline between 2021 and 2022 reflects a general decline in completions of bachelor's degrees due to the pandemic that year, rather than a trend specific to philosophy.)
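The shape of the trend can be summarized directly from the counts above. A quick sketch (the numbers are transcribed from the list, not pulled from the IPEDS database itself):

```python
# Philosophy BA completions by year, transcribed from the list above.
majors = {
    2001: 5836, 2002: 6529, 2003: 7023, 2004: 7707, 2005: 8283,
    2006: 8532, 2007: 8541, 2008: 8778, 2009: 8996, 2010: 9268,
    2011: 9292, 2012: 9362, 2013: 9427, 2014: 8820, 2015: 8184,
    2016: 7489, 2017: 7572, 2018: 7667, 2019: 8074, 2020: 8209,
    2021: 8328, 2022: 7958,
}

peak_year = max(majors, key=majors.get)  # 2013
# Lowest year after the peak:
trough_year = min(majors, key=lambda y: majors[y] if y > 2013 else float("inf"))

decline = (majors[peak_year] - majors[trough_year]) / majors[peak_year]
rebound = (majors[2021] - majors[trough_year]) / majors[trough_year]

# Roughly a 21% fall from the 2013 peak to the 2016 trough,
# followed by an 11% recovery through 2021.
print(peak_year, trough_year, f"{decline:.1%}", f"{rebound:.1%}")
```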

In general, the humanities have declined sharply since 2010, and history, English, and foreign languages and literature continue to decline.  This graph shows the trend:
[click image to enlarge and clarify]

The decline in the English major is particularly striking, from 4.5% of bachelor's degrees awarded in 2000-2001 to 1.8% in 2021-2022.  Philosophy peaked at 0.60% in 2005-2006 and has held steady at 0.39%-0.40% since 2015-2016.

Philosophy Relies on Double Majors

[Expanded and edited for clarity, Jan 15] Breaking the data down by first major vs second major, we can see that over time an increasing proportion of students have philosophy as their second major.  In some schools, the distinction between "first major" and "second major" is meaningful, with the first indicating the primary major.  In other schools the distinction is not meaningful.  In the 2021-2022 academic year, 24% of students who took a bachelor's degree in philosophy had it listed as their second major.


From these numbers we can estimate that philosophy students are at least moderately likely to be double majors.  While it's impossible to know what percentage of students who took philosophy as their first major also carried a second major, a ballpark estimate might assume that about half of students with philosophy plus one other major list philosophy first rather than second.  If so, then approximately half of all philosophy majors (48%) are double majors.  Overall, across all majors, only 5% of students double majored.
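The estimate in the previous paragraph can be made explicit. A sketch of the arithmetic, where the 24% figure comes from the IPEDS data above and the 50/50 split is the stated ballpark assumption:

```python
# Share of 2021-2022 philosophy BAs with philosophy listed as second major.
second_major_share = 0.24

# Ballpark assumption from the text: among philosophy double majors,
# philosophy is listed first about as often as second, so second-major
# listings capture roughly half of all philosophy double majors.
double_major_share = second_major_share * 2

print(f"{double_major_share:.0%}")  # roughly 48% of philosophy majors
```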

The ease of double majoring is likely to influence the number of students who choose philosophy as a major.

Gender Disparity Is Decreasing

NCES classifies all students as men or women, with no nonbinary category and no unclassified students.  From the beginning of the available data in the 1980s through the mid-2010s, the percentage of women among philosophy bachelor's recipients hovered steadily between 30% and 34%, not changing even as the total percentage of women increased from 51% to 57%.  However, the last several years have seen a clear decrease in gender disparity, with women now earning 41% of philosophy degrees.


Black Students Remain Underrepresented in Philosophy Compared to Undergraduates Overall, and Other Race/Ethnicity Data

NCES uses the following race/ethnicity categories: U.S. nonresident, race/ethnicity unknown, Hispanic or Latino (any race), and among U.S. residents who are not Hispanic or Latino: American Indian or Alaska Native, Asian, Black or African American, Native Hawaiian or Other Pacific Islander, White, and two or more races.  Before 2007-2008, Native Hawaiian or Other Pacific Islander was included with Asian, but inconsistently until 2010-2011.  The two-or-more races option was also introduced in the 2007-2008 academic year, again with inconsistent reporting for several years.

I've charted these categories below.  As you can see, for most categories, the percentages are similar for philosophy and for graduates overall, except that non-Hispanic White is slightly higher for philosophy and non-Hispanic Black significantly lower. In 2021-2022, non-Hispanic Black people were 14% of the U.S. population age 18-24, 10% of bachelor's degree recipients, and 6% of philosophy bachelor's recipients.

[as usual, click the figures to expand and clarify]

I interpret the sharp increase in multi-racial students as reflecting reporting issues and an increasing willingness of students to identify as multi-racial.

It's also worth noting that although philosophy majors are approximately as likely to be Hispanic/Latino as graduates overall, Hispanic/Latino students are underrepresented among bachelor's degree recipients relative to the U.S. population age 18-24 (17% vs 23%). Non-Hispanic American Indian / Alaska Native students are also underrepresented among overall graduates (0.46% vs. 0.84% of the population age 18-24), and maybe particularly so in philosophy (0.37% vs 0.46% in the most recent year).

Friday, January 05, 2024

Credence-Weighted Robot Rights?

You're a firefighter in the year 2050 or 2100. You can rescue either one human, who is definitely conscious, or two futuristic robots, who might or might not be conscious. What do you do?

[Illustration by Nicolas Demers, from my newest book, The Weirdness of the World, to be released Jan 16 and available for pre-order now.]

Suppose you think there's a 75% chance that the robots have conscious lives as rich as those of human beings (or, alternatively, that they have whatever else it takes to have "full moral status" equivalent to that of a human). And suppose you think there's a 25% chance that the robots are the moral equivalent of toasters, that is, mere empty machines with no significant capacity for conscious thought or feeling.

Arguably, if you save the robots and let the human die, you maximize the total expected number of humanlike lives saved (.75 * 2 + .25 * 0 = 1.5 expected lives saved, vs. one life for sure if you save the human). Decision-theoretically, it looks similar to choosing an action with a 75% chance of saving two people over an action that will save one person for sure. Applying similar reasoning, if the credences are flipped (25% chance the robots are conscious, 75% they're not), you save the human.
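The expected-value comparison in the previous paragraph can be written out as a minimal sketch:

```python
def expected_lives(num_saved: int, credence_conscious: float) -> float:
    """Expected number of humanlike lives saved, discounting each rescued
    individual by the credence that it is conscious (or otherwise has
    full moral status)."""
    return num_saved * credence_conscious

# Two robots at 75% credence vs. one human at certainty:
assert expected_lives(2, 0.75) == 1.5
assert expected_lives(1, 1.0) == 1.0
# 1.5 > 1.0, so the expected-value reasoning favors saving the robots.

# With the credences flipped (25% chance the robots are conscious),
# the same reasoning favors saving the human:
assert expected_lives(2, 0.25) < expected_lives(1, 1.0)
```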

Generalizing: Whatever concern you have for an ordinary human, or whatever you would give on their behalf, multiply that concern by your credence or degree of belief that the robot has human-like consciousness (or alternatively your credence that it has whatever features justify moral consideration similar to that of a human). If you'd give $5 to a human beggar, give $3 to a robot beggar in the same situation, if you think it's 60% likely the robot has human-like consciousness. If an oversubscribed local elementary school has a lottery for admission and resident human children each get a 50% chance of admission, resident robot children of disputable consciousness would get a proportionately reduced chance.

Call this approach credence-weighted robot rights.
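The expected-value arithmetic behind this policy can be sketched in a few lines of Python. This is purely a toy illustration; the function name and structure are mine, with the numbers drawn from the examples above:

```python
# A minimal sketch of credence-weighted moral concern.
# The function and its parameters are illustrative, not the post's own formalism.

def credence_weighted_value(credence: float, full_value: float, count: int = 1) -> float:
    """Expected humanlike value of aiding `count` entities, each with the
    given credence of having full, human-equivalent moral status."""
    return credence * full_value * count

# Firefighter case: two robots at 75% credence vs. one certainly conscious human.
robots = credence_weighted_value(0.75, 1.0, count=2)  # 1.5 expected lives
human = credence_weighted_value(1.0, 1.0, count=1)    # 1.0 life for sure
assert robots > human  # credence-weighting says: save the robots

# Beggar case: $5 for a human, scaled by a 60% credence for a robot.
donation = credence_weighted_value(0.60, 5.00)  # $3.00
```

With flipped credences (25% robot consciousness), the same function yields 0.5 expected lives for the two robots, and the policy says to save the human.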

I see at least three problems with credence-weighted robot rights:

(1.) Credence-weighted robot rights entail that robots will inevitably be treated as inferior, until we are 100% confident that they are our equals.

Of course it's reasonable to treat robots as inferior to humans now. We should save the person, not the robot, in the fire. And of course if we ever create robots who are beyond all reasonable doubt our equals, we should treat them as such. I'm hypothesizing instead a tricky in-between case -- a period during which it's reasonably disputable whether or not our machines deserve full moral status as our equals, a period during which liberals about robot consciousness and robot rights regard robots as our fully-conscious moral peers, while conservatives about robot consciousness and robot rights regard them as mindless machines to be deployed and discarded however we wish.

If we choose a 75% chance of rescuing two people over a sure-fire rescue of one person, we are not treating the unrescued person as inferior. Each person's life is worth just as much in our calculus as that of the others. But if we rescue five humans rather than six robots we regard as 80% likely to be conscious, we are treating the robots as inferior -- even though, by our own admission, they are probably not. It seems unfortunate and less than ethically ideal to always treat as inferiors entities we regard as probably our equals.

(2.) Credence-weighted robot rights would engender chaos if people have highly variable opinions. If individual firefighters make the choices based on their personal opinions, then one firefighter might save the two robots while another saves the one human, and each might find the other's decision abhorrent. Stationwide policies might be adopted, but any one policy would be controversial, and robots might face very different treatment in different regions. If individual judges or police were to apply the law differently based on their different individual credences, or on the variable and hard-to-detect credences of those accused of offences against robots, that would be unfair both to the robots and to the offenders, since the penalty would vary depending on who happened to be the officer or judge or whether they travel in social circles with relatively high vs. low opinions of robot consciousness. So presumably there would have to be some regularization by jurisdiction. But still, different jurisdictions might have very different laws concerning the demolition or neglectful destruction of a robot, some treating it as 80% of a homicide, others treating it as a misdemeanor -- and if robot technologies are variable and changing, the law, and people's understanding of the law, might struggle to keep up and to distinguish serious offences from minor ones.

(3.) Chaos might also ensue from the likely cognitive and bodily diversity of robots. While human cognitive and bodily variability typically keeps within familiar bounds, with familiar patterns of ability and disability, robots might differ radically. Some might be designed with conscious sensory experiences but no capacity for pain or pleasure. Others might experience intense pain or pleasure but lack cognitive sophistication. Others might have no stable goals or model their goals wholly on instructions from a human to whom they are gladly, perhaps excessively subservient, insufficiently valuing their own life. Still others might be able to merge and divide at will, or back themselves up, or radically reformat themselves, raising questions about the boundaries of the individual and what constitutes death. Some might exist entirely as computational entities in virtual paradises with little practical connection to our world. All this raises the question of what features are necessary for, and what constitutes, "equal" rights for robots, and whether thresholds of equality even make sense. Our understanding might require a controversial multidimensional scalar appreciation of the grounds of moral status.

Other approaches have their own problems. A precautionary principle that grants robots full, human-equal rights as soon as it's reasonable to think they might deserve them risks sacrificing substantial human interests for machines that very likely don't have interests worth the sacrifice (letting a human die, for example, to save a machine that's only 5% likely to be conscious), and it perhaps makes the question of the grounds of moral status in the face of future robots' cognitive diversity even more troubling and urgent. Requiring proof of consciousness beyond reasonable doubt aggravates the problem of treating robots as subhuman even when we're fairly confident they deserve equal treatment. Treating rights as a negotiated social construction risks denying rights to entities that really do deserve them, in virtue of their intrinsic conscious capacities, if we collectively choose as a matter of social policy not to grant those rights.

The cleanest solution would be what Mara Garza and I have called the Design Policy of the Excluded Middle: Don't create AI systems whose moral status is dubious and confusing. Either create only AI systems that we recognize as property without human-like moral status and rights, and treat them accordingly; or go all the way to creating AI systems with a full suite of features that enable consensus about their high moral status, and then give them the rights they deserve. It's the confusing cases in the middle that create trouble.

If AI technology continues to advance, however, I very much doubt that it will do so in accord with the Design Policy of the Excluded Middle -- and thus we will be tossed into moral confusion about how to treat our AI systems, with no good means of handling that confusion.

-------------------------------------------------------------

Related:

The Weirdness of the World, Chapter 11 (forthcoming), Princeton University Press.

The Full Rights Dilemma for AI Systems of Debatable Moral Personhood, Robonomics, 4 (2023), #32.

How Robots and Monsters Might Break Human Moral Systems (Feb 3, 2015)

Designing AI with Rights, Consciousness, Self-Respect, and Freedom (2020) (with Mara Garza), in S. Matthew Liao, ed., The Ethics of Artificial Intelligence, Oxford University Press.

Monday, January 01, 2024

Writings of 2023

Each New Year's Day, I post a retrospect of the past year's writings. Here are the retrospects of 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021, and 2022.

The biggest project for the past few years has been my new book The Weirdness of the World, available for pre-order and scheduled for U.S. release on January 16. This book pulls together ideas I've been publishing since 2012 concerning the failure of common sense, philosophy, and empirical science to explain consciousness and the fundamental structure of the cosmos, and the corresponding bizarreness and dubiety of all general theories about such matters.

-----------------------------------

Books forthcoming:

The Weirdness of the World (under contract with Princeton University Press).
    See description above.
Books under contract / in progress:

As co-editor with Jonathan Jong, The Nature of Belief, Oxford University Press.

    Collects 15 new essays on the topic, by Sara Aronowitz, Tim Crane and Katalin Farkas, Carolina Flores, M.B. Ganapini, David Hunter, David King and Aaron Zimmerman, Angela Mendelovici, Joshua Mugg, Bence Nanay, Nic Porot and Eric Mandelbaum, Eric Schwitzgebel, Keshav Singh, Declan Smithies, Ema Sullivan-Bissett, and Neil Van Leeuwen.
As co-editor with Helen De Cruz and Rich Horton, a yet-to-be-titled anthology with MIT Press containing great classics of philosophical SF.


Full-length non-fiction essays, published 2023:

Revised and updated: "Belief", Stanford Encyclopedia of Philosophy.

    A broad-ranging review of the main philosophical approaches to belief.
"Borderline consciousness: When it's neither determinately true nor determinately false that consciousness is present", Philosophical Studies, 180, 3415–3439.
    Being conscious is not an on-or-off phenomenon but has gray zones. Our failure to conceive, in a certain way, of such in-between cases is no evidence against their existence.
"Creating a large language model of a philosopher" (with David Schwitzgebel and Anna Strasser), Mind and Language [online article mila.12466, print forthcoming].
    We trained GPT-3 on the corpus of Daniel Dennett, and even Dennett experts had trouble distinguishing its answers to philosophical questions from Dennett's actual answers.
"The full rights dilemma for AI systems of debatable moral personhood", Robonomics, 4 (32).
    We might soon create AI systems where it's a legitimately open question whether they have humanlike consciousness and deserve humanlike rights. There are huge moral risks however we respond to such cases.
"What is unique about kindness? Exploring the proximal experience of prosocial acts relative to other positive behaviors" (with Annie Regan, Seth Margolis, Daniel J. Ozer, and Sonja Lyubomirsky), Affective Science, 4, 92-100.
    Participants assigned to do kind acts for others reported a greater sense of competence, self-confidence, and meaning while engaging in those acts across the intervention period.


Full-length non-fiction essays, finished and forthcoming:

"Dispositionalism, yay! Representationalism, boo!" in J. Jong and E. Schwitzgebel, eds., The Nature of Belief, Oxford.

    Presents three problems for hard-core representationalism about belief: The Problem of Causal Specification, the Problem of Tacit Belief, and the Problem of Indiscrete Belief.
"Repetition and value in an infinite universe", in S. Hetherington, ed., Extreme Philosophy, Routledge.
    Standard decision theory fails when confronted with the possibility of infinitely many consequences of our actions. Still, it's reasonable to prefer that the universe is infinite rather than finite.
"The ethics of life as it could be: Do we have moral obligations to artificial life?" (with Olaf Witkowski), Artificial Life.
    Creators of artificial life should bear in mind the conditions under which artificial systems might come to be genuine targets of moral concern.


Full-length non-fiction essays, in draft and circulating:

"The prospects and challenges of measuring morality" (with Jessie Sun).

    Could we create a "moralometer" -- that is, a valid measure of a person's general morality? The conceptual and methodological challenges would be formidable.
"The washout argument against longtermism" (commentary on William MacAskill's book What We Owe the Future).
    We cannot be justified in believing that any actions currently available to us will have a non-negligible positive influence on the billion-plus-year future.
"Let's hope we're not living in a simulation" (commentary on David Chalmers's book Reality+).
    If we are living in a simulation, there's a good chance it's small or brief and we are radically mistaken about the past, future, and/or distant things.
"Consciousness in Artificial Intelligence: Insights from the science of consciousness" (one of 19 authors, with Patrick Butlin and Robert Long).
    Some mainstream scientific theories of consciousness imply that we might be on the verge of creating AI systems that genuinely have conscious experiences.
"The necessity of construct and external validity for generalized causal claims: A critical review of the literature on quantitative causal inference" (with Kevin Esterling and David Brady).
    We develop a formal model of causal specification which clarifies the necessity of construct validity and external validity for deductive causal inference.
"Inflate and explode".
    Illusionists and eliminativists about phenomenal consciousness illegitimately build objectionable presuppositions into the notion of "phenomenal consciousness" and defeat only this artificially inflated notion. (I wrote this a few years ago and I'm undecided about whether to trunk this one or revise it.)


Selected shorter non-fiction:

"Uncle Iroh, from fool to sage -- or sage all along? (with David Schwitzgebel), in J. De Smedt and H. De Cruz, eds., Avatar: The Last Airbender and Philosophy (2023), Wiley Blackwell.

    Uncle Iroh is a Zhuangzian sage, and ordinary viewers immediately glimpse the sageliness behind his veneer of foolishness.
"Dehumanizing the cognitively disabled: Commentary on Smith's Making Monsters" (with Amelie Green), Analysis Reviews (forthcoming).
    We describe Amelie Green's experience witnessing the dehumanization of the cognitively disabled in care homes, comparing it with Smith's treatment of racial dehumanization.
"Introspection in group minds, disunities of consciousness, and indiscrete persons" (with Sophie R. Nelson), Journal of Consciousness Studies, 30 (2023), #9-10, 288-303.
    We describe a hypothetical AI system that defies the usual sharp lines between cognitive systems, conscious experiencers, and persons.
"Quasi-sociality: Towards asymmetric joint actions with artificial systems" (with Anna Strasser), in A. Strasser, ed., How to Live with Smart Machines? (forthcoming), Xenemoi.
    AI systems might soon occupy the gray area between being asocial tools and being real, but junior, social partners.
"AI systems must not confuse users about their sentience or moral status", Patterns, 4 (2023), #8, 100818.
    AI systems should be designed to either be clearly nonsentient tools or (if it's ever possible) clearly sentient entities who deserve appropriate care and protection.
"How far can we get in creating a digital replica of a philosopher?" (with Anna Strasser and Matt Crosby), in R. Hakli, P. Mäkelä, J. Seibt, eds., Social Robots in Social Institutions: Proceedings of Robophilosophy 2022. Series Frontiers of AI and Its Applications, vol. 366 (2023), IOS Press.

"Don't make moral calculations based on the far future", The Latecomer (Dec 19, 2023).

    An epistemic critique of "longtermism".

"Could the Universe Be Finite? (with Jacob Barandes), Nautilus (Dec 15, 2023).

    Well, probably not.

"Is it time to start considering personhood rights for AI chatbots?" (with Henry Shevlin), Los Angeles Times (Mar 5, 2023).

    Reflections on the hazards of confusion about the moral status of AI systems.


Science fiction stories

"Larva, pupa, imago", Clarkesworld, issue 197, (2023).

    The life-cycle and worldview of a cognitively enhanced future butterfly.


Some favorite blog posts

"The black hole objection to longtermism and consequentialism" (Apr 13).

"'There are no chairs' says the illusionist, sitting in one" (Apr 24).

"We shouldn't 'box' superintelligent AIs" (May 21).

"The fundamental argument for dispositionalism about belief" (Jun 7).

"The Summer Illusion" (Jul 10).

"One reason to walk the walk: To give specific content to your assertions" (Sep 8).

"Percent of U.S. philosophy PhD recipients who are women: A 50-year perspective" (Nov 3).


Happy New Year!


Friday, December 29, 2023

Normativism about Swimming Holes, Anger, and Beliefs

Among philosophers studying belief, normativism is an increasingly popular position. According to normativism, beliefs are necessarily, as part of their essential nature, subject to certain evaluative standards. In particular, beliefs are necessarily defective in a certain way if they are false or unresponsive to counterevidence.

In this way, believing is unlike supposing or imagining. If I merely suppose that P is true, nothing need have gone wrong if P is false. The supposition is in no way defective. Similarly, if I imagine Q and then learn that evidence supports not-Q, nothing need have gone wrong if I continue imagining Q. In contrast, if I believe P, the belief is in a certain way defective ("incorrect") if it is false and I have failed as a believer (I've been irrational) if I don't reduce my confidence in P in the face of compelling counterevidence.

But what is a normative essence? Several different things could be meant, some plausible but tepid, others bold but less plausible.

Let's start at the tepid end. Swimming hole is, I think, also an essentially normative concept. If I decide to call a body of water a swimming hole, I'm committed to evaluating it in certain ways -- specifically, as a locale for swimming. If the water is dirty or pollution-tainted, or if it has slime or alligators, it's a worse swimming hole. If it's clean, beautiful, safe, sufficiently deep, and easy on your bare feet, it's a better swimming hole.

But of course bodies of water are what they are independently of their labeling as swimming holes. The better-or-worse normativity is entirely a function of externally applied human concepts and human uses. Once I think of a spot as a swimming hole, I am committed to evaluating it in a certain way, but the body of water is not inherently excellent or defective in virtue of its safety or danger. The normativity derives from the application of the concept or from the practices of swimming-hole users. Nonetheless, there's a sense in which it really is part of the essence of being a swimming hole that being unsafe is a defect.

[Midjourney rendition of an unsafe swimming hole with slime, rocks, and an alligator]

If belief-normativity is like swimming-hole-normativity, then the following is true: Once we label a mental state as a belief, we commit to evaluating it in certain ways -- for example as "incorrect" if untrue and "irrational" if held in the teeth of counterevidence. But if this is all there is to the normativity of belief, then the mental state in question might not be in any way intrinsically defective. Rather, we belief-ascribers are treating the state as if it should play a certain role; and we set ourselves up for disappointment if it doesn't play that role.

Suppose a member of a perennially losing sports team says, on day one of the new season, "This year, we're going to make the playoffs!" Swimming-hole normativity suggests that we interpreters have a choice. We could treat this exclamation as the expression of a belief, in which case it is defective because unjustified by the evidence and (as future defeats will confirm) false. Or we could treat the exclamation as an expression of optimism and team spirit, in which case it might not be in any way defective. There need be no fact of the matter, independent of our labeling, concerning its defectiveness or not.

Advocates of normativism about belief typically want to make a bolder claim than that. So let's move toward a bolder view of normativity.

Consider hearts. Hearts are defective if they don't pump blood, in a less concept-dependent way than swimming holes are defective if they are unsafe. That thing really is a heart, independent of any human labeling, and as such it has a function, independent of any human labeling, which it can satisfy or fail to satisfy.

Might beliefs be inherently normative in that way, the heart-like way, rather than just the swimming-hole way? If I believe this year we'll make the playoffs, is this a state of mind with an essential function in the same way that the heart is an organ with an essential function?

I am a dispositionalist about belief. To believe some proposition P is, on my view, just to be disposed to act and react in ways that are characteristic of a P-believer. To believe this year we'll make the playoffs, for example, is to be disposed to say so, with a feeling of sincerity, to be willing to wager on it, to feel surprise and disappointment with each mounting loss, to refuse to make other plans during playoff season, and so on. It's not clear that a cluster of dispositions is a thing with a function in the same way that a heart is a thing with a function.

Now maybe (though I suspect this is simplistic) some mechanism in us functions to create dispositional belief states in the face of evidence: It takes evidence that P as an input and then produces in us dispositional tendencies to act and react as if P is true. Maybe this mechanism malfunctions if it generates belief states contrary to the evidence, and maybe this mechanism has been evolutionarily selected because it produces states that cause us to act in ways that track the truth. But it doesn't follow from this, I think, that the states that are produced are inherently defective if they arise contrary to the evidence or don't track the truth.

Compare anger: Maybe there's a system in us that functions to create anger when there's wrongdoing against us or those close to us, and maybe this mechanism has been selected because it produces states that prepare us to fight. It doesn't seem to follow that the state is inherently defective if produced in some other way (e.g., by reading a book) or if one isn't prepared to fight (maybe one is a pacifist).

I conjecture that we can get all the normativity we want from belief by a combination of swimming-hole type normativity (once we conceptualize an attitude as a belief, we're committed to saying it's incorrect if false) and normativity of function in our belief-producing mechanisms, without treating belief states themselves as having normative essences.

Wednesday, December 20, 2023

The Washout Argument Against Longtermism

I have a new essay in draft, "The Washout Argument Against Longtermism". As always, thoughts, comments, and objections welcome, either as comments on this post or by email to my academic address.

Abstract:

We cannot be justified in believing that any actions currently available to us will have a non-negligible positive influence on the billion-plus-year future. I offer three arguments for this thesis.

According to the Infinite Washout Argument, standard decision-theoretic calculation schemes fail if there is no temporal discounting of the consequences we are willing to consider. Given the non-zero chance that the effects of your actions will produce infinitely many unpredictable bad and good effects, any finite effects will be washed out in expectation by those infinitudes.

According to the Cluelessness Argument, we cannot justifiably guess what actions, among those currently available to us, are relatively more or less likely to have positive effects after a billion years. We cannot be justified, for example, in thinking that nuclear war or human extinction would be more likely to have bad than good consequences in a billion years.

According to the Negligibility Argument, even if we could justifiably guess that some particular action is likelier to have good than bad consequences in a billion years, the odds of good consequences would be negligibly tiny due to the compounding of probabilities over time.

For more details see the full-length draft.

A brief, non-technical version of these arguments is also now available at the longtermist online magazine The Latecomer.

[Midjourney rendering of several happy dolphins playing]

Excerpt from full-length essay

If MacAskill’s and most other longtermists’ reasoning is correct, the world is likely to be better off in a billion years if human beings don’t go extinct now than if human beings do go extinct now, and decisions we make now can have a non-negligible influence on whether that is the case. In the words of Toby Ord, humanity stands at a precipice. If we reduce existential risk now, we set the stage for possibly billions of years of thriving civilization; if we don’t, we risk the extinction of intelligent life on Earth. It’s a tempting, almost romantic vision of our importance. I also feel drawn to it. But the argument is a card-tower of hand-waving plausibilities. Equally breezy towers can be constructed in favor of human self-extermination or near-self-extermination. Let me offer....

The Dolphin Argument. The most obvious solution to the Fermi Paradox is also the most depressing. The reason we see no signs of intelligent life elsewhere in the universe is that technological civilizations tend to self-destruct in short order. If technological civilizations tend to gain increasing destructive power over time, and if their habitable environments can be rendered uninhabitable by a single catastrophic miscalculation or a single suicidal impulse by someone with their finger on the button, then the odds of self-destruction will be non-trivial, might continue to escalate over time, and might cumulatively approach nearly 100% over millennia. I don’t want to commit to the truth of such a pessimistic view, but in comparison, other solutions seem like wishful thinking – for example, that the evolution of intelligence requires stupendously special circumstances (the Rare Earth Hypothesis) or that technological civilizations are out there but sheltering us from knowledge of them until we’re sufficiently mature (the Zoo Hypothesis).

Anyone who has had the good fortune to see dolphins at play will probably agree with me that dolphins are capable of experiencing substantial pleasure. They have lives worth living, and their death is a loss. It would be a shame if we drove them to extinction. Suppose it’s almost inevitable that we wipe ourselves out in the next 10,000 years. If we extinguish ourselves peacefully now – for example, by ceasing reproduction as recommended by antinatalists – then we leave the planet in decent shape for other species, including dolphins, which might continue to thrive. If we extinguish ourselves through some self-destructive catastrophe – for example, by blanketing the world in nuclear radiation or creating destructive nanotech that converts carbon life into gray goo – then we probably destroy many other species too and maybe render the planet less fit for other complex life.

To put some toy numbers on it, in the spirit of longtermist calculation, suppose that a planet with humans and other thriving species is worth X utility per year, a planet with other thriving species with no humans is worth X/100 utility (generously assuming that humans contribute 99% of the value to the planet!), and a planet damaged by a catastrophic human self-destructive event is worth an expected X/200 utility. If we destroy ourselves in 10,000 years, the billion year sum of utility is 10^4 * X + (approx.) 10^9 * X/200 = (approx.) 5 * 10^6 * X. If we peacefully bow out now, the sum is 10^9 * X/100 = 10^7 * X. Given these toy numbers and a billion-year, non-human-centric perspective, the best thing would be humanity’s peaceful exit.
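As a check on the toy arithmetic, the same sums can be computed directly (my own sketch, with X normalized to 1):

```python
# Checking the post's toy billion-year utility sums, with X normalized to 1.
X = 1.0
BILLION_YEARS = 10**9
HUMAN_YEARS = 10**4  # humans last 10,000 more years in the catastrophe scenario

# Self-destructive catastrophe: full value while humans last, then a damaged
# planet worth X/200 per year for the remaining ~billion years.
catastrophe = HUMAN_YEARS * X + (BILLION_YEARS - HUMAN_YEARS) * (X / 200)

# Peaceful exit now: other species thrive at X/100 per year for a billion years.
peaceful_exit = BILLION_YEARS * (X / 100)

print(round(catastrophe))    # 5009950, approx. 5 * 10^6 * X
print(round(peaceful_exit))  # 10000000, i.e., 10^7 * X
assert peaceful_exit > catastrophe
```

On these (admittedly tendentious) numbers, the peaceful exit beats the catastrophe scenario by roughly a factor of two.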

Now the longtermists will emphasize that there’s a chance we won’t wipe ourselves out in a terribly destructive catastrophe in the next 10,000 years; and even if it’s only a small chance, the benefits could be so huge that it’s worth risking the dolphins. But this reasoning ignores a counterbalancing chance: That if human beings stepped out of the way a better species might evolve on Earth. Cosmological evidence suggests that technological civilizations are rare; but it doesn’t follow that civilizations are rare. There has been a general tendency on Earth, over long, evolutionary time scales, for the emergence of species with moderately high intelligence. This tendency toward increasing intelligence might continue. We might imagine the emergence of a highly intelligent, creative species that is less destructively Promethean than we are – one that values play, art, games, and love rather more than we do, and technology, conquering, and destruction rather less – descendants of dolphins or bonobos, perhaps. Such a species might have lives every bit as good as ours (less visible to any ephemeral high-tech civilizations that might be watching from distant stars), and they and any like-minded descendants might have a better chance of surviving for a billion years than species like ours who toy with self-destructive power. The best chance for Earth to host such a species might, then, be for us humans to step out of the way as expeditiously as possible, before we do too much harm to complex species that are already partway down this path.

Think of it this way: Which is the likelier path to a billion-year happy, intelligent species: that we self-destructive humans manage to keep our fingers off the button century after century after century somehow for ten million centuries, or that some other more peaceable, less technological clade finds a non-destructive stable equilibrium? I suspect we flatter ourselves if we think it’s the former.

This argument generalizes to other planets that our descendants might colonize in other star systems. If there’s even a 0.01% chance per century that our descendants in Star System X happen to destroy themselves in a way that ruins valuable and much more durable forms of life already growing in Star System X, then it would be best overall for them never to have meddled, and best for us now to peacefully exit into extinction rather than risk producing descendants who will expose other star systems to their destructive touch.
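The compounding at work here is easy to make vivid. A sketch using the 0.01%-per-century figure above (the numbers are the post's; the code is mine):

```python
import math

# A 0.01% chance of catastrophe per century, compounded over a billion years.
p_catastrophe = 1e-4   # per century
centuries = 10**7      # one billion years = 10^7 centuries

survival = (1 - p_catastrophe) ** centuries
# In log-space: centuries * log(0.9999) is about -1000, so survival is about
# e^-1000 -- far below the smallest positive double (it underflows to 0.0).
log_survival = centuries * math.log1p(-p_catastrophe)

assert survival < 1e-300
assert log_survival < -999  # survival probability is effectively zero
```

Even a risk that looks negligible on a century timescale becomes a near-certainty of catastrophe on a billion-year timescale.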

...

My aim with the Dolphin Argument... is not to convince readers that humanity should bow out for the sake of other species.... Rather, my thought is this: It’s easy to concoct stories about how what we do now might affect the billion-year future, and then to attach decision-theoretic numbers to those stories. We lack good means for evaluating these stories. We are likely just drawn to one story or another based on what it pleases us to think and what ignites our imagination.

Saturday, December 16, 2023

Could the Universe Be Infinite?

It's not absurd to think the universe might endure forever.

by Eric Schwitzgebel and Jacob Barandes

From The Weirdness of the World, forthcoming from Princeton University Press in January, excerpted Dec 15 at Nautilus.

On recent estimates, the observable universe—the portion of the universe that we can detect through our telescopes—extends about 47 billion light-years in every direction. But the limit of what we can see is one thing, and the limit of what exists is quite another. It would be remarkable if the universe stopped exactly at the edge of what we can see. For one thing, that would place us, surprisingly and un-Copernicanly, precisely at the center.

But even granting that the universe is likely to be larger than 47 billion light-years in radius, it doesn’t follow that it’s infinite. It might be finite. But if it’s finite, then one of two things should be true: Either the universe should have a boundary or edge, or it should have a closed topology.

It’s not absurd to think that the universe might have an edge. Theoretical cosmologists routinely consider hypothetical finite universes with boundaries at which space comes to a sudden end. However, such universes require making additional cosmological assumptions for which there is no direct support—assumptions about the conditions, if any, under which those boundaries might change, and assumptions about what would happen to objects or light rays that reach those boundaries.

It’s also not absurd to think that the universe might have a closed topology. By this we mean that over distances too large for us to see, space essentially repeats, so that a particle or signal that traveled far enough would eventually come back around to the spatial region from which it began—like how when Pac-Man exits one side of the TV screen, he re-emerges from the other side. However, there is currently no evidence that the universe has a closed topology.

Leading cosmologists, including Alex Vilenkin, Max Tegmark, and Andrei Linde, have argued that spatial infinitude is the natural consequence of the best current theories of cosmic inflation. Given that, plus the absence of evidence for an edge or closed topology, infinitude seems a reasonable default view. The mere 47 billion light-years we can see is the tiniest speck of a smidgen of a drop in an endless expanse.

Let’s call any galaxy with stars, planets, and laws of nature like our own a sibling galaxy. Exactly how similar a galaxy must be to qualify as a sibling we will leave unspecified, but we don’t intend high similarity. Andromeda is sibling enough, as are probably most of the other hundreds of billions of ordinary galaxies we can currently see.

The finiteness of the speed of light means that when we look at these faraway galaxies, we see them as they were during earlier periods in the universe’s history. Taking this time delay into account, the laws of nature don’t appear to differ in regions of the observable universe that are remote from us. Likewise, galaxies don’t appear to be rarer or differently structured in one direction or another. Every direction we look, we see more or less the same stuff. These observations help motivate the Copernican Principle, which is the working hypothesis that our position in the universe is not special or unusual—not the exact center, for example, and not the one weird place that happens to have a galaxy operating by special laws that don’t hold elsewhere.

Still, our observable universe might be an atypical region of an infinite universe. Possibly, somewhere beyond what we can see, different forms of elementary matter might follow different laws of physics. Maybe the gravitational constant is a little different. Maybe there are different types of fundamental particles. Even more radically, other regions might not consist of three-dimensional space in the form we know it. Some versions of string theory and inflationary cosmology predict exactly such variability.

But even if our region is in some respects unusual, it might be common enough that there are infinitely many other regions similar to it—even if just one region in 10^500. Again, this is a fairly standard view among speculative cosmologists, which comports well with straightforward interpretations of leading cosmological theories. One can hardly be certain, of course. Maybe we’re just in a uniquely interesting spot! But we are going to assume that’s not the case. In the endless cosmos, infinitely many regions resemble ours, with three spatial dimensions, particles that obey approximately the “Standard Model” of particle physics, and cluster upon cluster of sibling galaxies.
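The arithmetic behind this step is simple. A toy calculation (using exact rational arithmetic, since a one-in-10^500 frequency underflows ordinary floating point): if regions like ours occur at any fixed nonzero frequency, however tiny, the expected number of such regions exceeds any target once the total number of regions is large enough, and diverges as that number goes to infinity.

```python
from fractions import Fraction

# Suppose regions like ours occur with some tiny fixed frequency p.
p = Fraction(1, 10**500)

# Expected number of such regions among N regions is p * N.
# It passes any threshold once N is large enough.
for exponent in (500, 501, 510):
    N = 10**exponent
    print(exponent, float(p * N))  # 1.0, then 10.0, then 10000000000.0
```

Nothing here depends on the particular figure 10^500; any fixed positive frequency, multiplied by an unbounded number of regions, yields an unbounded expected count.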

Under the assumptions so far, the Copernican Principle suggests that there are infinitely many sibling galaxies in a spacelike relationship with us, meaning that they exist in spatiotemporal regions roughly simultaneous with ours (in some frame of reference). We will have seen the past history of some of these simultaneously existing sibling galaxies, most of which, we assume, continue to endure. However, it’s a separate question whether there are also infinitely many sibling galaxies in a timelike relationship to us—more specifically, existing in our future. Are there infinitely many sibling galaxies in spatiotemporal locations that are, at least in principle, eventually reachable by particles originating in our galaxy? (If the locutions of this paragraph seem convoluted, that’s due to the bizarreness of relativity theory, which prevents us from using “past,” “present,” and “future” in the ordinary, commonsense way.)

Thinking about whether infinitely many sibling galaxies will exist in the future requires thinking about heat death. Stars have finite lifetimes. If standard physical theory is correct, then ultimately all the stars we can currently see will burn out. Some of those burned-out stars will contribute to future generations of stars, which will, in turn, burn out. Other stars will become black holes, but then those black holes also will eventually dissipate (through Hawking radiation).

Given enough time, assuming that the laws of physics as we understand them continue to hold, and assuming things don’t re-collapse in a “Big Crunch” in the distant future, the standard view is that everything we presently see will inevitably enter a thin, boring, high-entropy state near equilibrium—heat death. Picture nearly empty darkness, with particles more or less evenly spread out, with even rock-size clumps of matter being rare.

But what happens after heat death? This is of course even more remote and less testable than the question of whether heat death is inevitable. It requires extrapolating far beyond our current range of experience. But still we can speculate based on currently standard assumptions. Let’s think as reasonably as we can about this. Here’s our best guess, based on standard theory, from Ludwig Boltzmann through at least some time slices of Sean Carroll.

For this speculative exercise, we will assume that the famously probabilistic behavior of quantum systems is intrinsic to the systems themselves, persisting post-heat-death and not requiring external observers carrying out measurements. This is consistent with most current approaches to quantum theory (including most many-worlds approaches, objective-collapse approaches, and Bohmian mechanics). It is, however, inconsistent with theories according to which the probabilistic behavior requires external observers (some versions of the “Copenhagen interpretation”) and theories on which the post-heat-death universe would inescapably occupy a stationary ground state. Under this assumption, standard probabilistic theories of what happens in high-entropy, near-vacuum conditions continue to apply post-heat-death. More specifically, the universe will continue to support random fluctuations of photons, protons, and whatever other particles remain. Consequently, from time to time, these particles will, by chance, enter unlikely configurations. This is predicted by both standard statistical mechanics and standard quantum mechanics. Post-heat-death, seven particles will sometimes converge, by chance, upon the same small region. Or 700. Or—very rarely!—7 trillion.
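A back-of-the-envelope version of the "seven particles converge" claim (with invented numbers throughout): if k particles independently and uniformly occupy one of M small cells, the chance that all of them land in the same cell is M × (1/M)^k = (1/M)^(k-1). That is vanishingly small but strictly positive, so across unboundedly many independent chances it eventually happens.

```python
from fractions import Fraction

def same_cell_probability(k: int, cells: int) -> Fraction:
    """Chance that k particles, placed independently and uniformly
    among `cells` cells, all end up in the same cell."""
    return cells * Fraction(1, cells) ** k  # = (1/cells)^(k-1)

# Made-up numbers: 7 particles, a million cells.
p = same_cell_probability(7, 10**6)
print(p == Fraction(1, 10**36))  # True: tiny, but not zero
```

Larger convergences (700 particles, 7 trillion) just push the exponent up; the probability shrinks enormously but never reaches zero, which is all the argument needs.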

There appears to be no in-principle limit to how large such chance fluctuations can be or what they can contain if they pass through the right intermediate phases. Wait long enough and extremely large fluctuations should occur. Assuming the universe continues infinitely onward in time, rather than having a temporal edge or forming a closed temporal loop (for neither of which is there any evidence), eventually some random fluctuation should produce a bare brain having cosmological thoughts. Wait longer, and eventually some random fluctuation will produce, as Boltzmann suggested, a whole galaxy. If the galaxy is similar enough to our own, it will be a sibling galaxy. Wait still longer, and another sibling galaxy will arise, and another, and another....

For good measure, let’s also assume that after some point post-heat-death, the rate at which galaxy-size systems fluctuate into existence does not systematically decrease. There’s some minimal probability of galaxy-size fluctuations, not an ever-decreasing probability with longer and longer average intervals between galaxies. Fluctuations appear at long intervals, by random chance, then fade back into chaos after some brief or occasionally long period, and the region returns to the heat-death state, with the same small probability of large fluctuations as before. Huge stretches of not much will be punctuated by rare events of interesting, even galaxy-size, complexity.
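The "constant rate, long waits" picture is the structure of a memoryless (Poisson-style) process: with a fixed per-epoch chance of a big fluctuation, the gaps between events are enormous but always finite, and later gaps are not systematically longer than earlier ones. A toy simulation, with an entirely invented rate:

```python
import random

RATE = 1e-6  # invented probability of a galaxy-size fluctuation per epoch

def waiting_time(rate: float, rng: random.Random) -> float:
    """Sample the gap until the next fluctuation in a memoryless,
    constant-rate process (exponentially distributed waiting times)."""
    return rng.expovariate(rate)

rng = random.Random(0)
gaps = [waiting_time(RATE, rng) for _ in range(5)]

# Each gap is huge compared to one epoch, but each is finite, and the
# process is memoryless: the expected wait never drifts upward.
print(all(0 < g < float("inf") for g in gaps))  # True
```

This is only a sketch of the statistical structure being assumed, not a claim about the actual post-heat-death rate, which no one knows.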

Of course, this might not be the way things go. We certainly can’t prove that the universe is like this. But despite the bizarreness that understandably causes some people to hesitate, the overall picture we’ve described appears to be the most straightforward consequence of standard physical theory, taken out of the box, without too much twisting things around.

Even if this specific speculation is wrong, there are many other ways in which the cosmos might deliver infinitely many sibling galaxies in the future. For example, some process might ensure we never enter heat death and new galaxies somehow continue to be born.

Alternatively, processes occurring pre-heat-death, such as the formation of black holes, might lead to new bangs or cosmic inflations, spatiotemporally unconnected or minimally connected to our universe, and new stars and galaxies might be born from these new bangs or inflations in much the same way as our familiar stars and galaxies were born from our familiar Big Bang.

Depending on what constitutes a “universe” and a relativistically specifiable “timelike” relation between our spatiotemporal region and some future spatiotemporal region, those sibling galaxies might not exist in our universe or stand in our future, technically speaking, but if so, that detail doesn’t matter to our core idea. Similarly, if the observable universe reverses its expansion, it might collapse upon itself in a Big Crunch, followed by another Big Bang, and so on in an infinitely repeating cycle, containing infinitely many sibling galaxies post-Crunch. This isn’t currently the mainstream view, but it’s a salient and influential alternative if the heat-death scenario outlined above is mistaken.

We conclude that it is reasonable to think that the universe is infinite, and that there exist infinitely many galaxies broadly like ours, scattered throughout space and time, including in our future. It’s a plausible reading of our cosmological situation. It’s a decent guess and at least a possibility worth taking seriously....

Excerpted from The Weirdness of the World. In the book, this argument sets up the case that virtually every action you perform has causal ripples extending infinitely into the future, causing virtually every physically possible, non-unique, non-zero probability event.

Tuesday, December 05, 2023

Falling in Love with Machines

People occasionally fall in love with AI systems. I expect that this will become increasingly common as AI grows more sophisticated and new social apps are developed for large language models. Eventually, this will probably precipitate a crisis in which some people have passionate feelings about the rights and consciousness of their AI lovers and friends while others hold that AI systems are essentially just complicated toasters with no real consciousness or moral status.


Last weekend, a chat with the adolescent children of a family friend helped cement my sense that this crisis might arrive soon. Let’s call the kids Floyd (age 12) and Esmerelda (age 15). Floyd was doing a science fair project comparing the output quality of Alexa, Siri, Bard, and ChatGPT. But, he said, "none of those are really AI."

What did Floyd have in mind by "real AI"? The robot Aura in the Las Vegas Sphere. Aura has an expressive face and an ability to remember social interactions (compare Aura with my hypothetical GPT-6 mall cop).

Aura at the Las Vegas Sphere

"Aura remembered my name," said Esmerelda. "I told Aura my name, then came back forty minutes later and asked if it knew my name. It paused a bit, then said, 'Is it Esmerelda?'"

"Do you think people will ever fall in love with machines?" I asked.

"Yes!" said Floyd, instantly and with conviction.

"I think of Aura as my friend," said Esmerelda.

I asked if they thought machines should have rights. Esmerelda said someone asked Aura if it wanted to be freed from the Sphere. It said no, Esmerelda reported. "Where would I go? What would I do?"

I suggested that maybe Aura had just been trained or programmed to say that.

Yes, that could be, Esmerelda conceded. How would we tell, she wondered, if Aura really had feelings and wanted to be free? She seemed mildly concerned. "We wouldn't really know."

I accept the current scientific consensus that large language models do not have a meaningful degree of consciousness and do not deserve moral consideration comparable to that given to vertebrates. But at some point there will likely be legitimate scientific dispute, as AI systems start to meet some but not all of the criteria for consciousness according to mainstream scientific theories.


The dilemma will be made more complicated by corporate interests: some corporations (e.g., Replika, makers of the "world's best AI friend") will have financial motivation to encourage human-AI attachment, while others (e.g., OpenAI) will intentionally train their language models to downplay any user concerns about consciousness and rights.