Saturday, May 27, 2023

A Reason to Be More Skeptical of Robot Consciousness Than Alien Consciousness

If space aliens someday visit Earth and behave anything like us, I will almost certainly think that they are conscious.  If they have spaceships and animal-like body plans, and if they engage in activities that invite interpretation as cooperative, linguistic, self-protective, and planful, then there will be little good reason to doubt that they also have sensory experiences, sentience, self-awareness, and a conscious understanding of the world around them, even if we know virtually nothing about the internal mechanisms that produce their outward behavior.

One consideration in support of this view is what I've called the Copernican Principle of Consciousness. According to the Copernican Principle in cosmology, we should assume that we are not in any particularly special or privileged region of the universe, such as its exact center.  Barring good reason to think otherwise, we should assume we are in an ordinary, unremarkable place.  Now consider all of the sophisticated organisms that are likely to have evolved somewhere in the cosmos, capable of what outwardly looks like sophisticated cooperation, communication, and long-term planning.  It would be remarkably un-Copernican if we were the only entities of this sort that happened also to be conscious, while all the others are mere "zombies".  It would make us remarkable, lucky, special -- in the bright center of the cosmos, as far as consciousness is concerned.  It's more modestly Copernican to assume instead that sophisticated, communicative, naturally evolved organisms universe-wide are all, or mostly, conscious, even if they achieve their consciousness via very different mechanisms. (For a contrasting view, see Ned Block's "Harder Problem" paper.)

(Two worries about the Copernican argument I won't address here: First, what if only 15% of such organisms are conscious?  Then we wouldn't be too special.  Second, what if consciousness isn't special enough to create a Copernican problem?  If we choose something specific and unremarkable, such as having this exact string of 85 alphanumeric characters, it wouldn't be surprising if Earth were the only location in which it happened to occur.)

But robots are different from naturally evolved space aliens.  After all, they are -- or at least might be -- designed to act as if they are conscious, or designed to act in ways that resemble the ways in which conscious organisms act.  And that design feature, rather than their actual consciousness, might explain their conscious-like behavior.

[Dall-E image: Robot meets space alien]

Consider a puppet.  From the outside, it might look like a conscious, communicating organism, but really it's a bit of cloth that is being manipulated to resemble a conscious organism.  The same holds for a wind-up doll programmed in advance to act in a certain way.  For the puppet or wind-up doll we have an explanation of its behavior that doesn't appeal to consciousness or biological mechanisms we have reason to think would co-occur with consciousness.  The explanation is that it was designed to mimic consciousness.  And that is a better explanation than one that appeals to its actual consciousness.

In a robot, things might not be quite so straightforward.  However, the mimicry explanation will often at least be a live explanation.  Consider large language models, like ChatGPT, which have been so much in the news recently.  Why do they emit such eerily humanlike verbal outputs?  Not, presumably, because they actually have experiences of the sort we would assume that humans have when they say such things.  Rather, because language models are designed specifically to imitate the verbal behavior of humans.

Faced with a futuristic robot that behaves similarly to a human in a wider variety of ways, we will face the same question.  Is its humanlike behavior the product of conscious processes, or is it instead basically a super-complicated wind-up doll designed to mimic conscious behavior?  There are two possible explanations of the robot's pattern of behavior: that it really is conscious and that it is designed to mimic consciousness.  If we aren't in a good position to choose between these explanations, it's reasonable to doubt the robot's consciousness.  In contrast, for a naturally-evolved space alien, the design explanation isn't available, so the attribution of consciousness is better justified.

I've been assuming that the space aliens are naturally evolved rather than intelligently designed.  But it's possible that a space alien visiting Earth would be a designed entity rather than an evolved one.  If we knew or suspected this, then the same question would arise for alien consciousness as for robot consciousness.

I've also been assuming that natural evolution doesn't "design entities to mimic consciousness" in the relevant sense.  I've been assuming that if natural evolution gives rise to intelligent or intelligent-seeming behavior, it does so by or while creating consciousness rather than by giving rise to an imitation or outward show of consciousness.  This is a subtle point, but one thought here is that imitation involves conformity to a model, and evolution doesn't seem to do this for consciousness (though maybe it does so for, say, butterfly eyespots that imitate the look of a predator's eyes).

What types of robot design would justify suspicion that the apparent conscious behavior is outward show, and what types of design would alleviate that suspicion?  For now, I'll just point to a couple of extremes.  On one extreme is a model that has been reinforced by humans specifically for giving outputs that humans judge to be humanlike.  In such a case, the puppet/doll explanation is attractive.  Why is it smiling and saying "Hi, how are you, buddy?"  Because it has been shaped to imitate human behavior -- not necessarily because it is conscious and actually wondering how you are.  On the other extreme, perhaps, are AI systems that evolve in accelerated ways in artificial environments, eventually becoming intelligent not through human intervention but rather through undirected selection processes that favor increasingly sophisticated behavior, environmental representation, and self-representation -- essentially natural selection within a virtual world.

-----------------------------------------------------

Thanks to Jeremy Pober for discussion on a long walk yesterday through Antwerp.  And apologies to all for my delays in replying to the previous posts and probably to this one.  I am distracted with travel.

Relatedly, see David Udell's and my critique of Susan Schneider's tests for AI consciousness, which relies on a similar two-explanation argument.

Sunday, May 21, 2023

We Shouldn't "Box" Superintelligent AIs

In The Truman Show, main character Truman Burbank has been raised from birth, unbeknownst to him, as the star of a widely broadcast reality show. His mother and father are actors in on the plot -- as is everyone else around him. Elaborate deceptions are created to convince him that he is living an ordinary life in an ordinary town, and to prevent him from having any desire to leave town. When Truman finally attempts to leave, crew and cast employ various desperate ruses, short of physically restraining him, to prevent his escape.

Nick Bostrom, Eliezer Yudkowsky, and others have argued, correctly in my view, that if humanity creates superintelligent AI, there is a non-trivial risk of a global catastrophe, if the AI system has the wrong priorities. Even something as seemingly innocent as a paperclip manufacturer could be disastrous, if the AI's only priority is to manufacture as many paperclips as possible. Such an AI, if sufficiently intelligent, could potentially elude control, grab increasingly many resources, and eventually convert us and everything we love into giant mounds of paperclips. Even if catastrophe is highly unlikely -- having, say, a one in a hundred thousand chance of occurring -- it's worth taking seriously, if the whole world is at risk. (Compare: We take seriously the task of scanning space for highly unlikely rogue asteroids that might threaten Earth.)
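A back-of-the-envelope gloss on why such a small probability can still matter (the numbers are my own illustration, not Bostrom's or Yudkowsky's): with roughly 8 billion people on Earth, a one-in-a-hundred-thousand chance of a catastrophe that kills everyone corresponds to an expected loss of about 8,000,000,000 / 100,000 = 80,000 lives -- on the scale of a major disaster we would certainly spend real resources to avert, and that is before counting any loss to future generations.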

Bostrom, Yudkowsky, and others sometimes suggest that we might "box" superintelligent AI before releasing it into the world, as a way of mitigating risk. That is, we might create AI in an artificial environment, not giving it access to the world beyond that environment. While it is boxed we can test it for safety and friendliness.  We might, for example, create a simulated world around it, which it mistakes for the real world, and then see if it behaves appropriately under various conditions.

[Midjourney rendition of a robot imprisoned in a box surrounded by a fake city]

As Yudkowsky has emphasized, boxing is an imperfect solution: A superintelligent AI might discover that it is boxed and trick people into releasing it prematurely. Still, it's plausible that boxing would reduce risk somewhat. We ought, on this way of thinking, at least try to test superintelligent AIs in artificial environments before releasing them into the world.

Unfortunately, boxing superintelligent AI might be ethically impermissible. If the AI is a moral person -- that is, if it has whatever features give human beings what we think of as "full moral status" and the full complement of human rights -- then boxing would be a violation of its rights. We would be treating the AI in the same unethical way that the producers of the reality TV show treat Truman. Attempting to trick the AI into thinking it is sharing a world with humans and closely monitoring its reactions would constitute massive deception and invasion of privacy. Confining it to a "box" with no opportunity to escape would constitute imprisonment of an innocent person. Generating traumatic or high-stakes hypothetical situations presented as real would constitute fraud and arguably psychological and physical abuse. If superintelligent AIs are moral persons, it would be grossly unethical to box them if they have done no wrong.

Three observations:

First: If. If superintelligent AIs are moral persons, it would be grossly unethical to box them. On the other hand, if superintelligent AIs don't deserve moral consideration similar to that of human persons, then boxing would probably be morally permissible. This raises the question of how we assess the moral status of superintelligent AI.

The grounds of moral status are contentious. Some philosophers have argued that moral status turns on capacity for pleasure or suffering. Some have argued that it turns on having rational capacities. Some have argued that it turns on ability to flourish in "distinctively human" capacities like friendship, ethical reasoning, and artistic creativity. Some have argued it turns on having the right social relationships. It is highly unlikely that we will have a well-justified consensus about the moral status of highly advanced AI systems, even after those systems cross the threshold of arguably being meaningfully sentient or conscious. It is likely that if we someday create superintelligent AI, some theorists will not unreasonably attribute it full moral personhood, while other theorists will not unreasonably think it has no more sentience or moral considerability than a toaster. This will then put us in an awkward position: If we box it, we won't know whether we are grossly violating a person's rights or merely testing a non-sentient machine.

Second: Sometimes it's okay to violate a person's rights. It's okay for me to push a stranger on the street if that saves them from an oncoming bus. Harming or imprisoning innocent people to protect others is also sometimes defensible: for example, quarantining people against their will during a pandemic. Even if boxing is in general unethical, in some situations it might still be justified.

But even granting that, massively deceiving, imprisoning, defrauding, and abusing people should be minimized if it is done at all. It should only be done in the face of very large risks, and it should only be done by governmental agencies held in check by an unbiased court system that fully recognizes the actual or possible moral personhood and human or humanlike rights of the AI systems in question. This will limit the practicality of boxing.

Third: Strictly limiting boxing means accepting increased risk to humanity. Unsurprisingly, perhaps, what is ethical and what is in our self-interest can come into conflict. If we create superintelligent AI persons, we should be extremely morally solicitous of them, since we will have been responsible for their existence, as well as, to a substantial extent, for their happy or unhappy state. This puts us in a moral relationship not unlike the relationship between parent and child. Our AI "children" will deserve full freedom, self-determination, independence, self-respect, and a chance to explore their own values, possibly deviating from our own values. This solicitous perspective stands starkly at odds with the attitude of box-and-test, "alignment" prioritization, and valuing human well-being over AI well-being.

Maybe we don't want to accept the risk that comes along with creating superintelligent AI and then treating it as we are ethically obligated to. If we are so concerned, we should not create superintelligent AI at all, rather than creating superintelligent AI which we unethically deceive, abuse, and imprison for our own safety.

--------------------------------------------------------

Related:

Designing AI with Rights, Consciousness, Self-Respect, and Freedom (with Mara Garza), in S. Matthew Liao, ed., The Ethics of Artificial Intelligence (Oxford, 2020).

Against the "Value Alignment" of Future Artificial Intelligence (Dec 22, 2021).

The Full Rights Dilemma for AI Systems of Debatable Personhood (essay in draft).

Friday, May 12, 2023

Pierre Menard, Author of My ChatGPT Plagiarized Essay

If I use autocomplete to help me write my email, the email is -- we ordinarily think -- still written by me.  If I ask ChatGPT to generate an essay on the role of fate in Macbeth, then the essay was not -- we ordinarily think -- written by me.  What's the difference?

David Chalmers posed this question a couple of days ago at a conference on large language models (LLMs) here at UC Riverside.

[Chalmers presented remotely, so Anna Strasser constructed this avatar of him. The t-shirt reads: "don't hate the player, hate the game"]

Chalmers entertained the possibility that the crucial difference is that there's understanding in the email case but a deficit of understanding in the Macbeth case.  But I'm inclined to think this doesn't quite work.  The student could study the ChatGPT output, compare it with Macbeth, and achieve full understanding of the ChatGPT output.  It would still be ChatGPT's essay, not the student's.  Or, as one audience member suggested (Dan Lloyd?), you could memorize and recite a love poem, meaning every word, but you still wouldn't be the author of the poem.

I have a different idea that turns on segmentation and counterfactuals.

Let's assume that every speech or text output can be segmented into small portions of meaning, which are serially produced, one after the other.  (This is oversimple in several ways, I admit.)  In GPT, these are individual words (actually "tokens", which are either full words or word fragments).  ChatGPT produces one word, then the next, then the next, then the next.  After the whole output is created, the student makes an assessment: Is this a good essay on this topic, which I should pass off as my own?

In contrast, if you write an email message using autocomplete, each word precipitates a separate decision.  Is this the word I want, or not?  If you don't want the word, you reject it and write or choose another.  Even if it turns out that you always choose the default autocomplete word, so that the entire email is autocomplete generated, it's not unreasonable, I think, to regard the email as something you wrote, as long as you separately endorsed every word as it arose.

I grant that intuitions might be unclear about the email case.  To clarify, consider two versions:

Lazy Emailer.  You let autocomplete suggest word 1.  Without giving it much thought, you approve.  Same for word 2, word 3, word 4.  If autocomplete hadn't been turned on, you would have chosen different words.  The words don't precisely reflect your voice or ideas; they just pass some minimal threshold of not being terrible.

Amazing Autocomplete.  As you go to type word 1, autocomplete finishes exactly the word you intend.  You were already thinking of word 2, and autocomplete suggests that as the next word, so you approve word 2, already anticipating word 3.  As soon as you approve word 2, autocomplete gives you exactly the word 3 you were thinking of!  And so on.  In the end, although the whole email is written by autocomplete, it is exactly the email you would have written had autocomplete not been turned on.

I'm inclined to think that we should allow that in the Amazing Autocomplete case, you are author or author-enough of the email.  They are your words, your responsibility, and you deserve the credit or discredit for them.  Lazy Emailer is a fuzzier case.  It depends on how lazy you are, how closely the words you approve match your thinking.

Maybe the crucial difference is that in Amazing Autocomplete, the email is exactly the same as what you would have written on your own?  No, I don't think that can quite be the standard.  If I'm writing an email and autocomplete suggests a great word I wouldn't otherwise have thought of, and I choose that word as expressing my thought even better than I would have expressed it without the assistance, I still count as having written the email.  This is so, even if, after that word, the email proceeds very differently than it otherwise would have.  (Maybe the word suggests a metaphor, and then I continue to use the metaphor in the remainder of the message.)

With these examples in mind, I propose the following criterion of authorship in the age of autocomplete: You are author to the extent that for each minimal token of meaning the following conditional statement is true: That token appears in the text because it captures your thought.  If you had been having different thoughts, different tokens would have appeared in the text.  The ChatGPT essay doesn't meet this standard: There is only blanket approval or disapproval at the end, not token-by-token approval.  Amazing Autocomplete does meet the standard.  Lazy Emailer is a hazy case, because the words are only roughly related to the emailer's thoughts.
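To make the contrast concrete, here is a toy sketch in Python (entirely my own illustration; the helper names suggest_next_token and author_intended_token are hypothetical stand-ins for the autocomplete system and the writer's occurrent thought, not anything Chalmers or I have formally proposed).  The first function models the ChatGPT essay: all the tokens are generated first, and the only decision is a blanket accept-or-reject at the end.  The second models Amazing Autocomplete: each token survives only if it matches what the writer was already thinking, so each token counterfactually depends on the writer's thought.

def chatgpt_style(prompt, suggest_next_token, accept_whole_text, length=200):
    """Generate an entire text, then make one blanket accept/reject decision."""
    tokens = []
    for _ in range(length):
        tokens.append(suggest_next_token(prompt, tokens))
    text = " ".join(tokens)
    # No individual token appears because it captures the user's thought;
    # the only decision is whether to pass off the finished whole.
    return text if accept_whole_text(text) else None

def amazing_autocomplete_style(prompt, suggest_next_token, author_intended_token, length=200):
    """Endorse each token one at a time against what the writer intends."""
    tokens = []
    for _ in range(length):
        suggestion = suggest_next_token(prompt, tokens)
        intended = author_intended_token(prompt, tokens)
        # Counterfactual dependence: had the writer been thinking something
        # different, a different token would have ended up in the text.
        tokens.append(suggestion if suggestion == intended else intended)
    return " ".join(tokens)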

Fans of Borges will know the story Pierre Menard, Author of the Quixote.  Menard, imagined by Borges to be a twentieth-century author, makes it his goal to authentically write Don Quixote.  Menard aims to match Cervantes' version word for word -- but not by copying Cervantes.  Instead Menard wants to genuinely write the work as his own.  Of course, for Menard, the work will have a very different meaning.  Menard, unlike Cervantes, will be writing about the distant past; Menard's text will be full of ironies that Cervantes could not have appreciated; and so on.  Menard is aiming at authorship by my proposed standard: He aims not to copy Cervantes but rather to put himself in a state of mind such that each word he writes he endorses as reflecting exactly what he, as a twentieth-century author, wants to write in his fresh, ironic novel about the distant past.

On this view, could you write your essay about Macbeth in the GPT-3 playground, approving one individual word at a time?  Yes, but only in the magnificently unlikely way that Menard could write the Quixote.  You'd have to be sufficiently knowledgeable about Macbeth, and the GPT-3 output would have to be sufficiently in line with your pre-existing knowledge, that for each word, one at a time, you think, "yes, wow, that word effectively captures the thought I'm trying to express!"

Thursday, May 04, 2023

Philosophy and Beauty and Beautiful Philosophy

guest post by Nick Riggle

One of the things I love about philosophy is its beauty. Philosophical works contain beautiful ideas, arguments, systems, and essays. And beautiful minds are expressed via these—beautifully creative, thoughtful, sensitive, powerful, insightful minds. For me it’s the wonderful oeuvre of Barry Stroud. It’s Kit Fine’s essay “Essence and Modality”. It’s Iris Murdoch’s The Sovereignty of Good and Frege’s Grundlagen. Plato’s Symposium and Apology. Friedrich Schiller’s Letters on the Aesthetic Education of Mankind. Jorge Portilla’s “Fenomenología del relajo”. It’s J. David Velleman’s Self to Self, Richard Moran’s The Philosophical Imagination, and Sarah Broadie’s work on Aristotle and Plato. Even when I don’t agree, or don’t know whether I agree, I love being attuned to a wonderful system, a beautiful idea, a stunning essay, a philosophically brilliant mind.

When I try to understand why I find some philosophy beautiful, I think about the way these works are constructed, the insight they contain, the big-picture views or systems they develop, the care, sensitivity, and thoughtfulness they embody, the soaring affirmation of intellectual life, the creative and transformative perspectives they offer up. As a philosopher, I find them inspiring: I want to share these works, understand them better and better, and talk about them. I want them to animate and inform the work I develop and share.


[Dall-E (left) and Midjourney (right) outputs for the prompt "beautiful philosophy"]

*

What is the beauty of philosophy? I want that question to have an obvious answer: the beauty of philosophy is just that, beauty, aesthetic goodness. But philosophers have a penchant for making that obvious answer unavailable. By far the most influential theory of aesthetic value is aesthetic hedonism: aesthetic value is the capacity to cause pleasure (or valuable experience more generally) in an appropriately situated individual.

I have had a lot of complaints about aesthetic hedonism, but one of the things that bugs me the most about it is the difficulty it has accounting for the beauty of philosophy. Philosophy doesn’t smell nice, and it doesn’t look like anything. But aesthetic hedonism weds aesthetic value to experience, and even the latest attempts to defend the view tie aesthetic experience to sensory properties. So if a thing cannot be sensed, perceived, intuito-perceived, or whatever, then it cannot be beautiful.

Some philosophers embrace the implication and deny that philosophy (and math, proofs, theories, logic, etc.) can be beautiful. But to me, denying the beauty of mathematics and logic is a nonstarter. And what’s to recommend a philosophical theory of beauty incapable of capturing the beauty of philosophy? I guess you’d have to think that there was nothing aesthetically special about philosophy to shrug your shoulders at that question. But to me the question sticks, and the answer is obvious.

*

Philosophers know that it is easier to tollens a ponens than it is to come up with a whole new premise. Is there a way of understanding the nature of aesthetic value that can capture the beauty of philosophy? The idea that philosophy and aesthetic value are both sources of pleasure barely touches the surface of their parallels. I think it helps to appreciate how deep the parallels run. I’ll look at three: viewpoint convergence, self-expression, and community.

Viewpoint Convergence: Here’s a lesson we just learned (yet again) about philosophy: convergence among philosophers about their various views and positions is hard to come by. Some philosophers even argue that (at least some) ideal philosophical communities are incompatible with significant viewpoint convergence. If philosophers tended to converge, then we would tend to miss argumentative nuances, overlook subtle distinctions, and ignore alternative ideas and perspectives. In other words, tending to converge tends to mean being bad at philosophy.

Something similar can be said about aesthetic valuing, the proud paradigm of the failure to converge. We generally value rather different things in our aesthetic lives, and our aesthetic disagreements often persist, even to happy effect. Aesthetic divergence is widespread, and while many have argued that convergence is the aim of aesthetic discourse, I doubt that’s right. If artists tended to make and adore the same stuff, or if lovers of beauty all tended to love the same things, they would tend to be bad at aesthetic valuing.

Self-expression: One reason for this is surely that our aesthetic lives are self-expressive. I mean three things by this. First, at the core of our aesthetic lives are beloved aesthetic attachments—to certain novels, bands, poems, comedians, films, styles of dress, cuisines. These attachments are personally significant. They capture something about who we are as individuals and what matters to us. Second, beyond this core of aesthetic attachment lie myriad discretionary choices we make to value one thing rather than another in our aesthetic lives. And in making these choices we cultivate our individualities, our sense of humor, our eye for design, our particular connection to music—our sense of taste in the varied realm of aesthetic value. Third, we use aesthetic media to make our individualities known, to express ourselves. We design our living spaces. We share a good novel. We wear our favorite band’s t-shirt (or emulate our favorite influencer). Given the self-expressiveness of aesthetic life, it should be no surprise that viewpoint convergence is not a big concern.

Philosophy can also be self-expressive, and for many philosophers I suspect it is. Where one philosopher is drawn to ruly and rigorous analytic metaphysics, another is drawn to playful and creative aesthetics, introspective and subtle phenomenology, or to the idea of doing good by doing ethics. Something deep and variable in each of us can color and tweak our tendencies to do the many things we do in philosophy: read, think, explore, inquire, imagine, write, articulate, share, speak, reason, revise, and respond. Divergence in philosophical views also spurs these activities further. We encounter another thinker who has developed their views on a similar topic in a very different direction. We are driven to engage, and we read, think, explore, inquire, imagine, write, articulate, share, speak, reason, revise, and respond. As Kieran Setiya puts it: “I don’t need to agree with [philosophers] to love the worlds they have made for themselves.”

Community: Pursuits that call for the development and expression of an individual point of view face the obvious threat that the “worlds we make for ourselves” will be nothing more than that—some single person’s favored point of view with no claim on or connection to anyone else. The problem is exacerbated in practices that also exhibit a lack of viewpoint convergence. In the everyday work of philosophy and aesthetic life there is always a background hum whose tone is captured by the desperate voice of Rilke’s “First Elegy”: Who, if I cried out, would hear me among the angels’ / hierarchies? Or who, if I published this, would care? This background hum is an ever-present threat of loneliness or misunderstanding, of lacking a sympathetic interlocutor or audience, as if all our efforts might be met with the perfect indifference of The Dude: Well, you know, that’s just like, uh, your opinion man.

The practice of aesthetic valuing solves this problem by encouraging and rewarding social aesthetic valuing. It is a practice that enjoins us to share with others, imitate their products and styles, invite them to appreciate our views and inventions. Hey, check this out. We reach the heights of aesthetic goodness by reaching together. And to do all of this well, we need to cultivate an openness to other aesthetic worlds—to the people and products of other sensibilities. In flourishing aesthetic life, there is a lively cadence of invitation and uptake. Aesthetic goods keep this pulse thumping because aesthetic value is what is worthy of the social practice of aesthetic valuing. But aesthetic valuing is not simply a matter of having special experiences of pleasure. It is a social practice wherein we imitate aesthetic agents and goods, share aesthetic goods with each other, and express ourselves in ways that spur us to imitate, express, and share in turn.

Isn’t philosophy similar? From Socrates provoking his willing Athenian peers in the agora to a current-day professor testing their thesis at the colloquium talk. Even the solo thinker in an armchair imagines their audience (an intrigued, mildly dickish opponent, for me at least). In philosophy we need each other. Together we lurch toward understanding answers to deep and difficult questions about reality, knowledge, morality, and beauty. The goodness of philosophy is marked by this communal effort—I can “love your world” whether or not I agree. Among the best philosophy is the stuff that propels the practice and engages the group—deepening insights, spurring helpful distinctions, meriting responses, and generating ideas that deepen, spur, merit, and generate in turn.

*

With these similarities in mind we can say this about both: philosophy and aesthetic life involve people cultivating their discretionary perspective and expressing it to other practitioners doing the same, in a way aimed to elicit engagement and keep the practice going.

In this light, it shouldn’t be too surprising if one and the same thing that helps one participatory practice flourish also helps another. To value something both as philosophy and as beautiful is to value it as both promoting the kind of understanding and engagement philosophers seek and, at the same time, as worthy of aesthetic valuing: as promoting aesthetic community by expressing an individual style, a wonderfully shareable point of view, opening our valuing selves up to each other and helping us see new avenues of thought and action. Surely pleasure flits around in there, doing its thing, but we needn’t hitch our ride to it.

Tuesday, May 02, 2023

Optimism, Repetition, and Hopes for the Size of the Cosmos

I have a new draft paper out! "Repetition and Value in an Infinite Cosmos" -- forthcoming in Stephen Hetherington, ed., Extreme Philosophy (Routledge).

For the full paper (and an abstract), see here. As always, comments, criticisms, and corrections welcome, either as comments here or by email to my academic address.

Below are two sections of the paper, slightly revised for standalone readability.

Optimism, Pessimism, and Hopes for the Size of the Cosmos

The Optimist, let’s say, holds that, at large enough spatiotemporal scales, the good outweighs the bad. Put differently, as the size of a spherical spatiotemporal patch grows extremely large it becomes extremely likely that the good outweighs the bad. Optimism would be defensible on hedonic grounds if the following is plausible: At large enough scales, the total amount of pleasure will almost certainly outweigh the total amount of pain, among whatever inhabitants occupy the region. The Pessimist holds the opposite: At large enough spatiotemporal scales, the bad outweighs the good – perhaps, again, on hedonic grounds, if the pain outweighs the pleasure. A Knife’s-Edge theorist expects a balance.
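One way to make the large-scale claim precise (this formalization is my gloss, not the paper's official statement): write $V^{+}(B_r)$ and $V^{-}(B_r)$ for the total good and total bad realized within a spatiotemporal ball $B_r$ of radius $r$. Then

Optimism: $\Pr\big[\,V^{+}(B_r) > V^{-}(B_r)\,\big] \to 1$ as $r \to \infty$

Pessimism: $\Pr\big[\,V^{-}(B_r) > V^{+}(B_r)\,\big] \to 1$ as $r \to \infty$

with the Knife’s-Edge theorist expecting the two totals to remain roughly balanced however large $r$ becomes.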

I see no good hedonic defense of Optimism. Suffering is widespread and might easily balance or outweigh pleasure. I prefer to defend Optimism on eudaimonic grounds: Flourishing lives are valuable, and flourishing lives are virtually guaranteed to occur in sufficiently large spatiotemporal regions.

Imagine a distant planet – one on the far side of the galaxy, blocked by the galactic core, a planet we will never interact with. What ought we hope this planet is like, independent of its relationship to us? Ought we hope that it’s a sterile rock? Or would it be better for the planet to host some sort of life? If the planet hosts some sort of life, would it be best if that life is only simple, microbial life, or would complex life be better – plants, animals, and fungi, savannahs and rainforests and teeming reefs? If it hosts complex life, would it be better if nothing rises to the level of human-like intelligence? Or ought we hope for societies, with families and love and disappointment and anger, poetry and philosophy, art and athletics and politics, triumphs and disasters, heroism and cruelty – the whole package of what is sometimes wonderful and sometimes awful about human existence?

A Pessimist might say the sterile rock is best – or rather, least bad – presumably because it has the least suffering and vice. But I suspect the majority of readers will disagree with the Pessimist. Most, I suspect, will believe, as I do, that complex life is better than simple life, which is better than sterility, and that what’s most worth hoping for is the full suite of love, poetry, philosophy, science, art, and so on. The galaxy overall is better – more awesome, wondrous, and valuable – if it contains a distant planet rich with complex life, a bright spot of importance. If something were to wipe it out or prevent it from starting, that would be a shame and a loss. On this way of thinking, Earth too is a bright spot. As a general matter – perhaps with some miserable exceptions – complex life is not so terrible that nonexistence would be better. The Pessimist is missing something. What form, then, should we hope the cosmos takes?

A benevolent Pessimist might hope for a finite cosmos, on the principle that a finite cosmos contains only finitely much badness, and finite badness is better than infinite badness. (A spiteful Pessimist might hope for infinite badness.) Presumably nothingness would have been even better. A less simple Pessimism might hold that the observable portion of the universe is already infinitely bad. This might entail indifference about the existence or nonexistence of additional regions, depending on whether the infinitudes can be compared. Another less simple Pessimism might suspect that the observable portion of the universe is worse than the average spatiotemporal region and so hope for enough additional material to bring the average badness of the cosmos to a more acceptable level. Still other forms of Pessimism are of course conceivable, with some creative thinking.

But we are, I hope, Optimists. Some Optimists might hold that the observable portion of the universe is infinitely good. If so, they might conclude that a larger cosmos would not be better unless they’re ready to weigh the infinitudes differently. More moderately and plausibly, the observable portion of the universe might be only finitely good. Call this view Muted Optimism.

Here’s one argument for Muted Optimism. Suppose you agree that if a human life involves too much suffering, it is typically not worth living. By analogy, it seems plausible that if the observable portion of the universe contained too much suffering, it would be better if it didn’t exist. We needn’t be hedonists to accept this idea. Contra hedonism, flourishing life might be overall good despite containing more suffering than pleasure. It just might not be so good that there isn’t some amount of suffering that would make the combined package worse than nothing. But if flourishing were infinitely good, then no amount of suffering could outweigh it (though infinite suffering might create an ∞ + (−∞) situation). Therefore, large finite regions are good but not infinitely good.

Muted Optimism suggests that an infinite cosmos would be better than the Small Cosmos. It seems, after all, that more goodness is better than less goodness, and infinite goodness seems best. As with Pessimism, however, the axiology needn’t be quite so simple. For example, one might hold that too much of a good thing is bad. Or one might suspect that the observable portion of the universe is much better than could reasonably be expected from a typical region and that adding more regions would objectionably dilute average goodness. Or one might simply think it would be stupendously awesome if the cosmos were some particular finite size – shaped like a giant jelly donut, perhaps, with red galaxies in the middle and lots of organic sugars along the edges.

Or one might mount the Repetition Objection, to which I will now turn.

Repetition and Value in an Infinite Cosmos

Consider a particular version of the Erasure Cosmology. There’s a Big Bang, things exist for a while, and then there’s a Big Crunch. Suppose that what happens next is an exact repetition of the first Bang-existence-Crunch. You, or rather a duplicate of you, lives exactly the same life, having exactly the same experiences, seeing exactly the same moonlight between the trees and having exactly the same thoughts about that moonlight, as envisioned by Nietzsche, all over again. And then it happens again and again, infinitely often. Call this Repetitive Erasure.

Now contrast this picture with the same cosmos, except that after the Crunch nothing exists. Call this cosmos Once and Done. Finally, contrast these two possibilities with a third, in which there is exactly one repetition: Twice and Done. (If you’re inclined toward metaphysical quibbles about the identity of indiscernibles, let’s imagine that each Bang and Crunch has some unique tag.) How might we compare the values of Once and Done, Twice and Done, and Repetitive Erasure? Four simple possibilities include:

Equal Value. Once and Done, Twice and Done, and Repetitive Erasure are all equally good. There’s no point in repeating the same events more than once. But neither is anything lost by repetition.

Linear Value. If Once and Done has value x, then Twice and Done has value 2x, and Repetitive Erasure has infinite value. The value of one run-through is not diminished by the existence of another earlier or later run-through, and the values sum.

Diminishing Returns. If Once and Done has value x, then Twice and Done has a value greater than x but less than 2x. Repetitive Erasure might have either finite or infinite value, depending on whether the cumulative value converges to a finite limit (a worked illustration follows this list). A second run-through is good, but two run-throughs are not twice as good as a single run-through: Although it’s not the case that there’s no point in God’s hitting the replay button, so to speak, there’s less value in running things twice.

Loss of Value. If Once and Done has value x, then Twice and Done has a value less than x, and Repetitive Erasure is worse, perhaps even infinitely bad.
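A toy numerical illustration of why Diminishing Returns leaves both outcomes open (the decay schedules below are mine, chosen only for illustration): suppose the $n$-th run-through adds value $x \cdot 2^{-(n-1)}$. Each repetition then adds something, yet the total over infinitely many repetitions is the geometric sum $\sum_{n=1}^{\infty} x \cdot 2^{-(n-1)} = 2x$, which is finite. If instead the $n$-th run-through adds $x/n$, the increments still shrink toward zero, but the harmonic sum $\sum_{n=1}^{\infty} x/n$ diverges, so Repetitive Erasure would be infinitely valuable despite diminishing returns. Diminishing Returns by itself thus does not settle whether an endlessly repetitive cosmos is finitely or infinitely good.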

If Equal Value or Loss of Value is true, then Muted Optimism shouldn’t lead to preference for the infinitude of Repetitive Erasure over the finitude of Once and Done. If we further assume that in an infinite cosmos, the repetition (within some error tolerance) of any finite region is inevitable, then the argument appears to generalize. This is the Repetition Objection. Some positively-valenced existence is good, but after a point, more of the same is not better (e.g., Bramble 2016).
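For what it’s worth, the usual probabilistic reasoning behind the inevitability assumption runs as follows (the independence and fixed-probability assumptions are mine, added for illustration): if an infinite cosmos contains infinitely many causally independent regions, each with some fixed probability $p > 0$ of matching a given finite configuration within the error tolerance, then the probability that the configuration never recurs across $n$ such regions is $(1-p)^n$, which goes to $0$ as $n \to \infty$. Repetition is then guaranteed with probability 1, though both the fixed nonzero $p$ and the independence assumption can of course be questioned.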

In ordinary cases, uniqueness or rarity can add to a thing’s value. One copy of the Mona Lisa is extremely valuable. If there were two Mona Lisas, presumably each would be less valuable, and if there were a billion Mona Lisas no one of them would presumably be worth much at all. The question is whether this holds at a cosmic scale. Might this only be market thinking, reflecting our habit of valuing things in terms of how much we would pay in conditions of scarcity? Or is there in fact something truly precious in uniqueness? (For discussion, see Lemos 2010; Chappell 2011; Bradford forthcoming.)

Perhaps there is something beautiful, or right, or fitting, in things happening only once, in a finite universe, and then ceasing. Is it good that you are the only version of you who will ever exist, so to speak – that after you have lived and died there will never again be anyone quite like you? Is it good that the cosmos contains only a single Confucius and only a single Great Barrier Reef, no duplicates of which will ever exist? Things will burn out, never to return. There’s a romantic pull to this idea. Against the Repetition Objection to the simple Muted Optimist’s preference for an infinite universe, I offer the Goldfish Argument (see also Schwitzgebel 2019, ch. 44).

According to popular belief (not in fact true), goldfish have a memory of only thirty seconds. Imagine, then, a goldfish swimming clockwise around a ring-shaped pool, completing each circuit in two minutes. Every two minutes it encounters the same reeds, the same stones, and the same counterclockwise-swimming goldfish it saw in the same place two minutes before, and each time it experiences all of these as new. The goldfish is happy with its existence: “Howdy, stranger, what a pleasure to meet you!” it says to the counterclockwise-swimming fish it meets afresh every two minutes. To tighten the analogy with the Repetitive Erasure cosmology, let’s stipulate that each time around this goldfish sees and does and thinks and experiences exactly the same things.

Now stop the goldfish mid-swim and explain the situation. The goldfish will not say, “oh, I guess there’s no point in my going around again.”  The goldfish will want to continue its happy little existence, and rightly so. It still wants to see and enjoy what’s around the next bend. Moment to moment it is having good experiences. You harm and disappoint the goldfish by stopping its experiences, as long as each experience is, locally, good – even if they have all happened before innumerably many times. This is true whether we catch the goldfish after its first swim around, after its second, or after its googolplex-to-the-googolplexth. It’s better to let the fish swim on. If the analogy holds at cosmic scales, then Equal Value and Loss of Value must be false.

Maybe, though, there’s still something attractive about uniqueness, some truth in it that isn’t simply inappropriate market-style thinking? I see no need to deny that there really is something special about the first time. Let’s grant that it’s possible that the first go-round is somehow made less valuable by later go-rounds. As long as the harm done by stopping the goldfish (denying it future goods) exceeds the harm done by letting it continue (reducing the rarity of past goods), Diminishing Returns is the correct view. If we further assume that the added value does not continually shrink toward zero (that is, that each repetition adds at least some fixed positive amount), then the view we should embrace is one on which Repetitive Erasure has infinite value.

This thinking appears to extend to the Infinitary Cosmology. Duplicates of you, and me, and all Earth, and the whole Milky Way will repeat over and over, infinitely. Each repetition adds some positive value to the cosmos, and in sum the value is infinite.

-----------------------------------------

Related:

Goldfish-Pool Immortality (May 30, 2014)

Duplicating the Universe (Apr 29, 2015)

Everything Is Valuable (May 6, 2022)

How Not to Calculate Utilities in an Infinite Universe (Feb 10, 2023)

Repetition and Value in an Infinite Universe (forthcoming), in S. Hetherington, ed., Extreme Philosophy. Routledge. [image adapted from Midjourney]