Saturday, May 27, 2023

A Reason to Be More Skeptical of Robot Consciousness Than Alien Consciousness

If someday space aliens visit Earth, I will almost certainly think that they are conscious, if they behave anything like us.  If they have spaceships, animal-like body plans, and engage in activities that invite interpretation as cooperative, linguistic, self-protective, and planful, then there will be little good reason to doubt that they also have sensory experiences, sentience, self-awareness, and a conscious understanding of the world around them, even if we know virtually nothing about the internal mechanisms that produce their outward behavior.

One consideration in support of this view is what I've called the Copernican Principle of Consciousness. According to the Copernican Principle in cosmology, we should assume that we are not in any particularly special or privileged region of the universe, such as its exact center.  Barring good reason to think otherwise, we should assume we are in an ordinary, unremarkable place.  Now consider all of the sophisticated organisms that are likely to have evolved somewhere in the cosmos, capable of what outwardly looks like sophisticated cooperation, communication, and long-term planning.  It would be remarkably un-Copernican if we were the only entities of this sort that happened also to be conscious, while all the others are mere "zombies".  It would make us remarkable, lucky, special -- in the bright center of the cosmos, as far as consciousness is concerned.  It's more modestly Copernican to assume instead that sophisticated, communicative, naturally evolved organisms universe-wide are all, or mostly, conscious, even if they achieve their consciousness via very different mechanisms. (For a contrasting view, see Ned Block's "Harder Problem" paper.)

(Two worries about the Copernican argument I won't address here: First, what if only 15% of such organisms are conscious?  Then we wouldn't be too special.  Second, what if consciousness isn't special enough to create a Copernican problem?  If we choose something specific and unremarkable, such as having this exact string of 85 alphanumeric characters, it wouldn't be surprising if Earth were the only location in which it happened to occur.)

But robots are different from naturally evolved space aliens.  After all, they are -- or at least might be -- designed to act as if they are conscious, or designed to act in ways that resemble the ways in which conscious organisms act.  And that design feature, rather than their actual consciousness, might explain their conscious-like behavior.

[Dall-E image: Robot meets space alien]

Consider a puppet.  From the outside, it might look like a conscious, communicating organism, but really it's a bit of cloth that is being manipulated to resemble a conscious organism.  The same holds for a wind-up doll programmed in advance to act in a certain way.  For the puppet or wind-up doll we have an explanation of its behavior that doesn't appeal to consciousness or biological mechanisms we have reason to think would co-occur with consciousness.  The explanation is that it was designed to mimic consciousness.  And that is a better explanation than one that appeals to its actual consciousness.

In a robot, things might not be quite so straightforward.  However, the mimicry explanation will often at least be a live explanation.  Consider large language models, like ChatGPT, which have been so much in the news recently.  Why do they emit such eerily humanlike verbal outputs?  Not, presumably, because they actually have experiences of the sort we would assume that humans have when they say such things.  Rather, because language models are designed specifically to imitate the verbal behavior of humans.

Faced with a futuristic robot that behaves similarly to a human in a wider variety of ways, we will face the same question.  Is its humanlike behavior the product of conscious processes, or is it instead basically a super-complicated wind-up doll designed to mimic conscious behavior?  There are two possible explanations of the robot's pattern of behavior: that it really is conscious and that it is designed to mimic consciousness.  If we aren't in a good position to choose between these explanations, it's reasonable to doubt the robot's consciousness.  In contrast, for a naturally-evolved space alien, the design explanation isn't available, so the attribution of consciousness is better justified.

I've been assuming that the space aliens are naturally evolved rather than intelligently designed.  But it's possible that a space alien visiting Earth would be a designed entity rather than an evolved one.  If we knew or suspected this, then the same question would arise for alien consciousness as for robot consciousness.

I've also been assuming that natural evolution doesn't "design entities to mimic consciousness" in the relevant sense.  I've been assuming that if natural evolution gives rise to intelligent or intelligent-seeming behavior, it does so by or while creating consciousness rather than by giving rise to an imitation or outward show of consciousness.  This is a subtle point, but one thought here is that imitation involves conformity to a model, and evolution doesn't seem to do this for consciousness (though maybe it does so for, say, butterfly eyespots that imitate the look of a predator's eyes).

What types of robot design would justify suspicion that the apparent conscious behavior is outward show, and what types of design would alleviate that suspicion?  For now, I'll just point to a couple of extremes.  On one extreme is a model that has been reinforced by humans specifically for giving outputs that humans judge to be humanlike.  In such a case, the puppet/doll explanation is attractive.  Why is it smiling and saying "Hi, how are you, buddy?"  Because it has been shaped to imitate human behavior -- not necessarily because it is conscious and actually wondering how you are.  On the other extreme, perhaps, are AI systems that evolve in accelerated ways in artificial environments, eventually becoming intelligent not through human intervention but rather through undirected selection processes that favor increasingly sophisticated behavior, environmental representation, and self-representation -- essentially natural selection within a virtual world.


Thanks to Jeremy Pober for discussion on a long walk yesterday through Antwerp.  And apologies to all for my delays in replying to the previous posts and probably to this one.  I am distracted with travel.

Relatedly, see David Udell's and my critique of Susan Schneider's tests for AI consciousness, which relies on a similar two-explanation critique.

Sunday, May 21, 2023

We Shouldn't "Box" Superintelligent AIs

In The Truman Show, main character Truman Burbank has been raised from birth, unbeknownst to him, as the star of a widely broadcast reality show. His mother and father are actors in on the plot -- as is everyone else around him. Elaborate deceptions are created to convince him that he is living an ordinary life in an ordinary town, and to prevent him from having any desire to leave town. When Truman finally attempts to leave, crew and cast employ various desperate ruses, short of physically restraining him, to prevent his escape.

Nick Bostrom, Eliezer Yudkowsky, and others have argued, correctly in my view, that if humanity creates superintelligent AI, there is a non-trivial risk of a global catastrophe, if the AI system has the wrong priorities. Even something as seemingly innocent as a paperclip manufacturer could be disastrous, if the AI's only priority is to manufacture as many paperclips as possible. Such an AI, if sufficiently intelligent, could potentially elude control, grab increasingly many resources, and eventually convert us and everything we love into giant mounds of paperclips. Even if catastrophe is highly unlikely -- having, say, a one in a hundred thousand chance of occurring -- it's worth taking seriously, if the whole world is at risk. (Compare: We take seriously the task of scanning space for highly unlikely rogue asteroids that might threaten Earth.)

Bostrom, Yudkowsky, and others sometimes suggest that we might "box" superintelligent AI before releasing it into the world, as a way of mitigating risk. That is, we might create AI in an artificial environment, not giving it access to the world beyond that environment. While it is boxed we can test it for safety and friendliness.  We might, for example, create a simulated world around it, which it mistakes for the real world, and then see if it behaves appropriately under various conditions.

[Midjourney rendition of a robot imprisoned in a box surrounded by a fake city]

As Yudkowsky has emphasized, boxing is an imperfect solution: A superintelligent AI might discover that it is boxed and trick people into releasing it prematurely. Still, it's plausible that boxing would reduce risk somewhat. We ought, on this way of thinking, at least try to test superintelligent AIs in artificial environments before releasing them into the world.

Unfortunately, boxing superintelligent AI might be ethically impermissible. If the AI is a moral person -- that is, if it has whatever features give human beings what we think of as "full moral status" and the full complement of human rights -- then boxing would be a violation of its rights. We would be treating the AI in the same unethical way that the producers of the reality TV show treat Truman. Attempting to trick the AI into thinking it is sharing a world with humans and closely monitoring its reactions would constitute massive deception and invasion of privacy. Confining it to a "box" with no opportunity to escape would constitute imprisonment of an innocent person. Generating traumatic or high-stakes hypothetical situations presented as real would constitute fraud and arguably psychological and physical abuse. If superintelligent AIs are moral persons, it would be grossly unethical to box them if they have done no wrong.

Three observations:

First: If. If superintelligent AIs are moral persons, it would be grossly unethical to box them. On the other hand, if superintelligent AIs don't deserve moral consideration similar to that of human persons, then boxing would probably be morally permissible. This raises the question of how we assess the moral status of superintelligent AI.

The grounds of moral status are contentious. Some philosophers have argued that moral status turns on capacity for pleasure or suffering. Some have argued that it turns on having rational capacities. Some have argued that it turns on ability to flourish in "distinctively human" capacities like friendship, ethical reasoning, and artistic creativity. Some have argued it turns on having the right social relationships. It is highly unlikely that we will have a well-justified consensus about the moral status of highly advanced AI systems, even after those systems cross the threshold of arguably being meaningfully sentient or conscious. It is likely that if we someday create superintelligent AI, some theorists will not unreasonably attribute it full moral personhood, while other theorists will not unreasonably think it has no more sentience or moral considerability than a toaster. This will then put us in an awkward position: If we box it, we won't know whether we are grossly violating a person's rights or merely testing a non-sentient machine.

Second: Sometimes it's okay to violate a person's rights. It's okay for me to push a stranger on the street if that saves them from an oncoming bus. Harming or imprisoning innocent people to protect others is also sometimes defensible: for example, quarantining people against their will during a pandemic. Even if boxing is in general unethical, in some situations it might still be justified.

But even granting that, massively deceiving, imprisoning, defrauding, and abusing people should be minimized if it is done at all. It should only be done in the face of very large risks, and it should only be done by governmental agencies held in check by an unbiased court system that fully recognizes the actual or possible moral personhood and human or humanlike rights of the AI systems in question. This will limit the practicality of boxing.

Third: Strictly limiting boxing means accepting increased risk to humanity. Unsurprisingly, perhaps, what is ethical and what is in our self-interest can come into conflict. If we create superintelligent AI persons, we should be extremely morally solicitous of them, since we will have been responsible for their existence, as well as, to a substantial extent, for their happy or unhappy state. This puts us in a moral relationship not unlike the relationship between parent and child. Our AI "children" will deserve full freedom, self-determination, independence, self-respect, and a chance to explore their own values, possibly deviating from our own values. This solicitous perspective stands starkly at odds with the attitude of box-and-test, "alignment" prioritization, and valuing human well-being over AI well-being.

Maybe we don't want to accept the risk that comes along with creating superintelligent AI and then treating it as we are ethically obligated to. If we are so concerned, we should not create superintelligent AI at all, rather than creating superintelligent AI which we unethically deceive, abuse, and imprison for our own safety.



Designing AI with Rights, Consciousness, Self-Respect, and Freedom (with Mara Garza), in S. Matthew Liao, ed., The Ethics of Artificial Intelligence (Oxford, 2020).

Against the "Value Alignment" of Future Artificial Intelligence (Dec 22, 2021).

The Full Rights Dilemma for AI Systems of Debatable Personhood (essay in draft).

Friday, May 12, 2023

Pierre Menard, Author of My ChatGPT Plagiarized Essay

If I use autocomplete to help me write my email, the email is -- we ordinarily think -- still written by me.  If I ask ChatGPT to generate an essay on the role of fate in Macbeth, then the essay was not -- we ordinarily think -- written by me.  What's the difference?

David Chalmers posed this question a couple of days ago at a conference on large language models (LLMs) here at UC Riverside.

[Chalmers presented remotely, so Anna Strasser constructed this avatar of him. The t-shirt reads: "don't hate the player, hate the game"]

Chalmers entertained the possibility that the crucial difference is that there's understanding in the email case but a deficit of understanding in the Macbeth case.  But I'm inclined to think this doesn't quite work.  The student could study the ChatGPT output, compare it with Macbeth, and achieve full understanding of the ChatGPT output.  It would still be ChatGPT's essay, not the student's.  Or, as one audience member suggested (Dan Lloyd?), you could memorize and recite a love poem, meaning every word, but you still wouldn't be the author of the poem.

I have a different idea that turns on segmentation and counterfactuals.

Let's assume that every speech or text output can be segmented into small portions of meaning, which are serially produced, one after the other.  (This is oversimple in several ways, I admit.)  In GPT, these are individual words (actually "tokens", which are either full words or word fragments).  ChatGPT produces one word, then the next, then the next, then the next.  After the whole output is created, the student makes an assessment: Is this a good essay on this topic, which I should pass off as my own?

In contrast, if you write an email message using autocomplete, each word precipitates a separate decision.  Is this the word I want, or not?  If you don't want the word, you reject it and write or choose another.  Even if it turns out that you always choose the default autocomplete word, so that the entire email is autocomplete generated, it's not unreasonable, I think, to regard the email as something you wrote, as long as you separately endorsed every word as it arose.

I grant that intuitions might be unclear about the email case.  To clarify, consider two versions:

Lazy Emailer.  You let autocomplete suggest word 1.  Without giving it much thought, you approve.  Same for word 2, word 3, word 4.  If autocomplete hadn't been turned on, you would have chosen different words.  The words don't precisely reflect your voice or ideas, they just pass some minimal threshold of not being terrible.

Amazing Autocomplete.  As you go to type word 1, autocomplete finishes exactly the word you intend.  You were already thinking of word 2, and autocomplete suggests that as the next word, so you approve word 2, already anticipating word 3.  As soon as you approve word 2, autocomplete gives you exactly the word 3 you were thinking of!  And so on.  In the end, although the whole email is written by autocomplete, it is exactly the email you would have written had autocomplete not been turned on.

I'm inclined to think that we should allow that in the Amazing Autocomplete case, you are author or author-enough of the email.  They are your words, your responsibility, and you deserve the credit or discredit for them.  Lazy Emailer is a fuzzier case.  It depends on how lazy you are, how closely the words you approve match your thinking.

Maybe the crucial difference is that in Amazing Autocomplete, the email is exactly the same as what you would have written on your own?  No, I don't think that can quite be the standard.  If I'm writing an email and autocomplete suggests a great word I wouldn't otherwise have thought of, and I choose that word as expressing my thought even better than I would have expressed it without the assistance, I still count as having written the email.  This is so, even if, after that word, the email proceeds very differently than it otherwise would have.  (Maybe the word suggests a metaphor, and then I continue to use the metaphor in the remainder of the message.)

With these examples in mind, I propose the following criterion of authorship in the age of autocomplete: You are author to the extent that for each minimal token of meaning the following conditional statement is true: That token appears in the text because it captures your thought.  If you had been having different thoughts, different tokens would have appeared in the text.  The ChatGPT essay doesn't meet this standard: There is only blanket approval or disapproval at the end, not token-by-token approval.  Amazing Autocomplete does meet the standard.  Lazy Emailer is a hazy case, because the words are only roughly related to the emailer's thoughts.

Fans of Borges will know the story Pierre Menard, Author of the Quixote.  Menard, imagined by Borges to be a 20th century author, makes it his goal to authentically write Don Quixote.  Menard aims to match Cervantes' version word for word -- but not by copying Cervantes.  Instead Menard wants to genuinely write the work as his own.  Of course, for Menard, the work will have a very different meaning.  Menard, unlike Cervantes, will be writing about the distant past, and Menard's text will be full of ironies that Cervantes could not have appreciated, and so on.  Menard is aiming at authorship by my proposed standard: He aims not to copy Cervantes but rather to put himself in a state of mind such that each word he writes he endorses as reflecting exactly what he, as a twentieth century author, wants to write in his fresh, ironic novel about the distant past.

On this view, could you write your essay about Macbeth in the GPT-3 playground, approving one individual word at a time?  Yes, but only in the magnificently unlikely way that Menard could write the Quixote.  You'd have to be sufficiently knowledgeable about Macbeth, and the GPT-3 output would have to be sufficiently in line with your pre-existing knowledge, that for each word, one at a time, you think, "yes, wow, that word effectively captures the thought I'm trying to express!"

Thursday, May 04, 2023

Philosophy and Beauty and Beautiful Philosophy

guest post by Nick Riggle

One of the things I love about philosophy is its beauty. Philosophical works contain beautiful ideas, arguments, systems, and essays. And beautiful minds are expressed via these—beautifully creative, thoughtful, sensitive, powerful, insightful minds. For me it’s the wonderful oeuvre of Barry Stroud. It’s Kit Fine’s essay “Essence and Modality”. It’s Iris Murdoch’s The Sovereignty of Good and Frege’s Grundlagen. Plato’s Symposium and Apology. Friedrich Schiller’s Letters on the Aesthetic Education of Mankind. Jorge Portilla’s “Fenomenología del relajo”. It’s J. David Velleman’s Self to Self, Richard Moran’s The Philosophical Imagination, and Sarah Broadie’s work on Aristotle and Plato. Even when I don’t agree, or don’t know whether I agree, I love being attuned to a wonderful system, a beautiful idea, a stunning essay, a philosophically brilliant mind.

When I try to understand why I find some philosophy beautiful, I think about the way these works are constructed, the insight they contain, the big-picture views or systems they develop, the care, sensitivity, and thoughtfulness they embody, the soaring affirmation of intellectual life, the creative and transformative perspectives they offer up. As a philosopher, they inspire me, I want to share these works, understand them better and better, and talk about them. I want them to animate and inform the work I develop and share.

[Dall-E (left) and Midjourney (right) outputs for the prompt "beautiful philosophy"]


What is the beauty of philosophy? I want that question to have an obvious answer: the beauty of philosophy is just that, beauty, aesthetic goodness. But philosophers have a penchant for making that obvious answer unavailable. By far the most influential theory of aesthetic value is aesthetic hedonism: aesthetic value is the capacity to cause pleasure (or valuable experience more generally) in an appropriately situated individual.

I have had a lot of complaints about aesthetic hedonism, but one of the things that bugs me the most about it is the difficulty it has accounting for the beauty of philosophy. Philosophy doesn’t smell nice, and it doesn’t look like anything. But aesthetic hedonism weds aesthetic value to experience, and even the latest attempts to defend the view tie aesthetic experience to sensory properties. So if a thing cannot be sensed, perceived, intuito-perceived, or whatever, then it cannot be beautiful.

Some philosophers embrace the implication and deny that philosophy (and math, proofs, theories, logic, etc.) can be beautiful. But to me, denying the beauty of mathematics and logic is a nonstarter. And what’s to recommend a philosophical theory of beauty incapable of capturing the beauty of philosophy? I guess you’d have to think that there was nothing aesthetically special about philosophy to shrug your shoulders at that question. But to me the question sticks, and the answer is obvious.


Philosophers know that it is easier to tollens a ponens than it is to come up with a whole new premise. Is there a way of understanding the nature of aesthetic value that can capture the beauty of philosophy? The idea that philosophy and aesthetic value are both sources of pleasure barely touches the surface of their parallels. I think it helps to appreciate how deep the parallels run. I’ll look at three: viewpoint convergence, self-expression, and community.

Viewpoint Convergence: Here’s a lesson we just learned (yet again) about philosophy: convergence among philosophers about their various views and positions is hard to come by. Some philosophers even argue that (at least some) ideal philosophical communities are incompatible with significant viewpoint convergence. If philosophers tended to converge, then we would tend to miss argumentative nuances, overlook subtle distinctions, and ignore alternative ideas and perspectives. In other words, tending to converge tends to mean being bad at philosophy.

Something similar can be said about aesthetic valuing, the proud paradigm of the failure to converge. We generally value rather different things in our aesthetic lives, and our aesthetic disagreements often persist, even to happy effect. Aesthetic divergence is widespread, and while many have argued that convergence is the aim of aesthetic discourse, I doubt that’s right. If artists tended to make and adore the same stuff, or if lovers of beauty all tended to love the same things, they would tend to be bad at aesthetic valuing.

Self-expression: One reason for this is surely that our aesthetic lives are self-expressive. I mean three things by this. First, at the core of our aesthetic lives are beloved aesthetic attachments—to certain novels, bands, poems, comedians, films, styles of dress, cuisines. These attachments are personally significant. They capture something about who we are as individuals and what matters to us. Second, beyond this core of aesthetic attachment lie myriad discretionary choices we make to value one thing rather than another in our aesthetic lives. And in making these choices we cultivate our individualities, our sense of humor, our eye for design, our particular connection to music—our sense of taste in the varied realm of aesthetic value. Third, we use aesthetic media to make our individualities known, to express ourselves. We design our living spaces. We share a good novel. We wear our favorite band’s t-shirt (or emulate our favorite influencer). Given the self-expressiveness of aesthetic life, it should be no surprise that viewpoint convergence is not a big concern.

Philosophy can also be self-expressive, and for many philosophers I suspect it is. Where one philosopher is drawn to ruly and rigorous analytic metaphysics, another is drawn to playful and creative aesthetics, introspective and subtle phenomenology, or to the idea of doing good by doing ethics. Something deep and variable in each of us can color and tweak our tendencies to do the many things we do in philosophy: read, think, explore, inquire, imagine, write, articulate, share, speak, reason, revise, and respond. Divergence in philosophical views also spurs these activities further. We encounter another thinker who has developed their views on a similar topic in a very different direction. We are driven to engage, and we read, think, explore, inquire, imagine, write, articulate, share, speak, reason, revise, and respond. As Kieran Setiya puts it: “I don’t need to agree with [philosophers] to love the worlds they have made for themselves.”

Community: Pursuits that call for the development and expression of an individual point of view face the obvious threat that the “worlds we make for ourselves” will be nothing more than that—some single person’s favored point of view with no claim on or connection to anyone else. The problem is exacerbated in practices that also exhibit a lack of viewpoint convergence. In the everyday work of philosophy and aesthetic life there is always a background hum whose tone is captured by the desperate voice of Rilke’s “First Elegy”: Who, if I cried out, would hear me among the angels’ / hierarchies? Or who, if I published this, would care? This background hum is an ever-present threat of loneliness or misunderstanding, of lacking a sympathetic interlocutor or audience, as if all our efforts might be met with the perfect indifference of The Dude: Well, you know, that’s just like, uh, your opinion man.

The practice of aesthetic valuing solves this problem by encouraging and rewarding social aesthetic valuing. It is a practice that enjoins us to share with others, imitate their products and styles, invite them to appreciate our views and inventions. Hey, check this out. We reach the heights of aesthetic goodness by reaching together. And to do all of this well, we need to cultivate an openness to other aesthetic worlds—to the people and products of other sensibilities. In flourishing aesthetic life, there is a lively cadence of invitation and uptake. Aesthetic goods keep this pulse thumping because aesthetic value is what is worthy of the social practice of aesthetic valuing. But aesthetic valuing is not simply a matter of having special experiences of pleasure. It is a social practice wherein we imitate aesthetic agents and goods, share aesthetic goods with each other, and express ourselves in ways that spur us to imitate, express, and share in turn.

Isn’t philosophy similar? From Socrates provoking his willing Athenian peers in the agora to a current-day professor testing their thesis at the colloquium talk. Even the solo thinker in an armchair imagines their audience (an intrigued, mildly dickish opponent, for me at least). In philosophy we need each other. Together we lurch toward understanding answers to deep and difficult questions about reality, knowledge, morality, and beauty. The goodness of philosophy is marked by this communal effort—I can “love your world” whether or not I agree. Among the best philosophy is the stuff that propels the practice and engages the group—deepening insights, spurring helpful distinctions, meriting responses, and generating ideas that deepen, spur, merit, and generate in turn.


With these similarities in mind we can say this about both: philosophy and aesthetic life involve people cultivating their discretionary perspectives and expressing them to other practitioners doing the same, in a way aimed to elicit engagement and keep the practice going.

In this light, it shouldn’t be too surprising if one and the same thing that helps one participatory practice flourish also helps another. To value something both as philosophy and as beautiful is to value it as both promoting the kind of understanding and engagement philosophers seek and, at the same time, as worthy of aesthetic valuing: as promoting aesthetic community by expressing an individual style, a wonderfully shareable point of view, opening our valuing selves up to each other and helping us see new avenues of thought and action. Surely pleasure flits around in there, doing its thing, but we needn’t hitch our ride to it.

Tuesday, May 02, 2023

Optimism, Repetition, and Hopes for the Size of the Cosmos

I have a new draft paper out! "Repetition and Value in an Infinite Cosmos" -- forthcoming in Stephen Hetherington, ed., Extreme Philosophy (Routledge).

For the full paper (and an abstract), see here. As always, comments, criticisms, and corrections welcome, either as comments here or by email to my academic address.

Below are two sections of the paper, slightly revised for standalone readability.

Optimism, Pessimism, and Hopes for the Size of the Cosmos

The Optimist, let’s say, holds that, at large enough spatiotemporal scales, the good outweighs the bad. Put differently, as the size of a spherical spatiotemporal patch grows extremely large it becomes extremely likely that the good outweighs the bad. Optimism would be defensible on hedonic grounds if the following is plausible: At large enough scales, the total amount of pleasure will almost certainly outweigh the total amount of pain, among whatever inhabitants occupy the region. The Pessimist holds the opposite: At large enough spatiotemporal scales, the bad outweighs the good – perhaps, again, on hedonic grounds, if the pain outweighs the pleasure. A Knife’s-Edge theorist expects a balance.

I see no good hedonic defense of Optimism. Suffering is widespread and might easily balance or outweigh pleasure. I prefer to defend Optimism on eudaimonic grounds: Flourishing lives are valuable, and flourishing lives are virtually guaranteed to occur in sufficiently large spatiotemporal regions.

Imagine a distant planet – one on the far side of the galaxy, blocked by the galactic core, a planet we will never interact with. What ought we hope this planet is like, independent of its relationship to us? Ought we hope that it’s a sterile rock? Or would it be better for the planet to host some sort of life? If the planet hosts some sort of life, would it be best if that life is only simple, microbial life, or would complex life be better – plants, animals, and fungi, savannahs and rainforests and teeming reefs? If it hosts complex life, would it be better if nothing rises to the level of human-like intelligence? Or ought we hope for societies, with families and love and disappointment and anger, poetry and philosophy, art and athletics and politics, triumphs and disasters, heroism and cruelty – the whole package of what is sometimes wonderful and sometimes awful about human existence?

A Pessimist might say the sterile rock is best – or rather, least bad – presumably because it has the least suffering and vice. But I suspect the majority of readers will disagree with the Pessimist. Most, I suspect, will believe, as I do, that complex life is better than simple life, which is better than sterility, and that what’s most worth hoping for is the full suite of love, poetry, philosophy, science, art, and so on. The galaxy overall is better – more awesome, wondrous, and valuable – if it contains a distant planet rich with complex life, a bright spot of importance. If something were to wipe it out or prevent it from starting, that would be a shame and a loss. On this way of thinking, Earth too is a bright spot. As a general matter – perhaps with some miserable exceptions – complex life is not so terrible that nonexistence would be better. The Pessimist is missing something. What form, then, should we hope the cosmos takes?

A benevolent Pessimist might hope for a finite cosmos, on the principle that a finite cosmos contains only finitely much badness, and finite badness is better than infinite badness. (A spiteful Pessimist might hope for infinite badness.) Presumably nothingness would have been even better. A less simple Pessimism might hold that the observable portion of the universe is already infinitely bad. This might entail indifference about the existence or nonexistence of additional regions, depending on whether the infinitudes can be compared. Another less simple Pessimism might suspect that the observable portion of the universe is worse than the average spatiotemporal region and so hope for enough additional material to bring the average badness of the cosmos to a more acceptable level. Still other forms of Pessimism are of course conceivable, with some creative thinking.

But we are, I hope, Optimists. Some Optimists might hold that the observable portion of the universe is infinitely good. If so, they might conclude that a larger cosmos would not be better unless they’re ready to weigh the infinitudes differently. More moderately and plausibly, the observable portion of the universe might be only finitely good. Call this view Muted Optimism.

Here’s one argument for Muted Optimism. Suppose you agree that if a human life involves too much suffering, it is typically not worth living. By analogy, it seems plausible that if the observable portion of the universe contained too much suffering, it would be better if it didn’t exist. We needn’t be hedonists to accept this idea. Contra hedonism, flourishing life might be overall good despite containing more suffering than pleasure. It just might not be so good that there isn’t some amount of suffering that would make the combined package worse than nothing. But if flourishing were infinitely good, then no amount of suffering could outweigh it (though infinite suffering might create a ∞ + (−∞) situation). Since, by hypothesis, some finite amount of suffering could make the combined package worse than nothing, the goodness of flourishing must be finite. Therefore, large finite regions are good but not infinitely good.

Muted Optimism suggests that an infinite cosmos would be better than the Small Cosmos. It seems, after all, that more goodness is better than less goodness, and infinite goodness seems best. As with Pessimism, however, the axiology needn’t be quite so simple. For example, one might hold that too much of a good thing is bad. Or one might suspect that the observable portion of the universe is much better than could reasonably be expected from a typical region and that adding more regions would objectionably dilute average goodness. Or one might simply think it would be stupendously awesome if the cosmos were some particular finite size – shaped like a giant jelly donut, perhaps, with red galaxies in the middle and lots of organic sugars along the edges.

Or one might mount the Repetition Objection, to which I will now turn.

Repetition and Value in an Infinite Cosmos

Consider a particular version of the Erasure Cosmology. There’s a Big Bang, things exist for a while, and then there’s a Big Crunch. Suppose that what happens next is an exact repetition of the first Bang-existence-Crunch. You, or rather a duplicate of you, lives exactly the same life, having exactly the same experiences, seeing exactly the same moonlight between the trees and having exactly the same thoughts about that moonlight, as envisioned by Nietzsche, all over again. And then it happens again and again, infinitely often. Call this Repetitive Erasure.

Now contrast this picture with the same cosmos, except that after the Crunch nothing exists. Call this cosmos Once and Done. Finally, contrast these two possibilities with a third, in which there is exactly one repetition: Twice and Done. (If you’re inclined toward metaphysical quibbles about the identity of indiscernibles, let’s imagine that each Bang and Crunch has some unique tag.) How might we compare the values of Once and Done, Twice and Done, and Repetitive Erasure? Four simple possibilities include:

Equal Value. Once and Done, Twice and Done, and Repetitive Erasure are all equally good. There’s no point in repeating the same events more than once. But neither is anything lost by repetition.

Linear Value. If Once and Done has value x, then Twice and Done has value 2x, and Repetitive Erasure has infinite value. The value of one run-through is not diminished by the existence of another earlier or later run-through, and the values sum.

Diminishing Returns. If Once and Done has value x, then Twice and Done has a value greater than x but less than 2x. Repetitive Erasure might have either finite or infinite value, depending on whether the returns converge toward a limit. A second run-through is good, but two run-throughs are not twice as good as a single run-through: Although it’s not the case that there’s no point in God’s hitting the replay button, so to speak, there’s less value in running things twice.

Loss of Value. If Once and Done has value x, then Twice and Done has a value less than x, and Repetitive Erasure is worse, perhaps even infinitely bad.
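
The four options can be put in a toy formalization (the notation is mine, purely illustrative, not from the paper): let v_n be the value added by the n-th run-through and V the total value of Repetitive Erasure.

```latex
% Toy formalization (illustrative only): v_n = value added by the n-th
% run-through; V = \sum_{n=1}^{\infty} v_n = total value of Repetitive Erasure.
\begin{align*}
\textbf{Equal Value:}         \quad & v_1 = x,\ v_n = 0 \text{ for } n \ge 2
    && \Rightarrow\ V = x \\
\textbf{Linear Value:}        \quad & v_n = x \text{ for all } n
    && \Rightarrow\ V = \infty \\
\textbf{Diminishing Returns:} \quad & \text{e.g. } v_n = x\,r^{\,n-1},\ 0 < r < 1
    && \Rightarrow\ V = \tfrac{x}{1-r} < \infty \\
    & \text{but if } v_n \ge \epsilon > 0 \text{ for all } n
    && \Rightarrow\ V = \infty \\
\textbf{Loss of Value:}       \quad & v_n < 0 \text{ for } n \ge 2
    && \Rightarrow\ V < x,\ \text{possibly } {-\infty}
\end{align*}
```

On this toy model, Diminishing Returns yields a finite total only if the per-repetition value shrinks toward zero; if each repetition adds at least some fixed positive value ε, the total diverges to infinity.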

If Equal Value or Loss of Value is true, then Muted Optimism shouldn’t lead to preference for the infinitude of Repetitive Erasure over the finitude of Once and Done. If we further assume that in an infinite cosmos, the repetition (within some error tolerance) of any finite region is inevitable, then the argument appears to generalize. This is the Repetition Objection. Some positively-valenced existence is good, but after a point, more of the same is not better (e.g., Bramble 2016).

In ordinary cases, uniqueness or rarity can add to a thing’s value. One copy of the Mona Lisa is extremely valuable. If there were two Mona Lisas, presumably each would be less valuable, and if there were a billion Mona Lisas, presumably no one of them would be worth much at all. The question is whether this holds at a cosmic scale. Might this only be market thinking, reflecting our habit of valuing things in terms of how much we would pay in conditions of scarcity? Or is there in fact something truly precious in uniqueness? (For discussion, see Lemos 2010; Chappell 2011; Bradford forthcoming.)

Perhaps there is something beautiful, or right, or fitting, in things happening only once, in a finite universe, and then ceasing. Is it good that you are the only version of you who will ever exist, so to speak – that after you have lived and died there will never again be anyone quite like you? Is it good that the cosmos contains only a single Confucius and only a single Great Barrier Reef, no duplicates of which will ever exist? Things will burn out, never to return. There’s a romantic pull to this idea. Against the Repetition Objection to the simple Muted Optimist’s preference for an infinite universe, I offer the Goldfish Argument (see also Schwitzgebel 2019, ch. 44).

According to popular belief (not in fact true), goldfish have a memory of only thirty seconds. Imagine, then, a goldfish swimming clockwise around a ring-shaped pool, completing each circuit in two minutes. Every two minutes it encounters the same reeds, the same stones, and the same counterclockwise-swimming goldfish it saw in the same place two minutes before, and each time it experiences all of these as new. The goldfish is happy with its existence: “Howdy, stranger, what a pleasure to meet you!” it says to the counterclockwise-swimming fish it meets afresh every two minutes. To tighten the analogy with the Repetitive Erasure cosmology, let’s stipulate that each time around this goldfish sees and does and thinks and experiences exactly the same things.

Now stop the goldfish mid-swim and explain the situation. The goldfish will not say, “Oh, I guess there’s no point in my going around again.” The goldfish will want to continue its happy little existence, and rightly so. It still wants to see and enjoy what’s around the next bend. Moment to moment it is having good experiences. You harm and disappoint the goldfish by stopping its experiences, as long as each experience is, locally, good – even if they have all happened before innumerably many times. This is true whether we catch the goldfish after its first swim around, after its second, or after its googolplex-to-the-googolplexth. It's better to let the fish swim on. If the analogy holds at cosmic scales, then Equal Value and Loss of Value must be false.

Maybe, though, there’s still something attractive about uniqueness, some truth in it that isn’t simply inappropriate market-style thinking? I see no need to deny that there really is something special about the first time. Let’s grant that it’s possible that the first go-round is somehow made less valuable by later go-rounds. As long as the harm done by stopping the goldfish (by denying future goods) exceeds the harm done by letting the goldfish continue (by reducing the rarity of past goods), then Diminishing Returns is the correct view. If we further assume that the added value does not continually shrink in a way that approaches zero, then the view we should embrace is one on which Repetitive Erasure would have infinite value.

This thinking appears to extend to the Infinitary Cosmology. Duplicates of you, and me, and all Earth, and the whole Milky Way will repeat over and over, infinitely. Each repetition adds some positive value to the cosmos, and in sum the value is infinite.



Goldfish-Pool Immortality (May 30, 2014)

Duplicating the Universe (Apr 29, 2015)

Everything Is Valuable (May 6, 2022)

How Not to Calculate Utilities in an Infinite Universe (Feb 10, 2023)

Repetition and Value in an Infinite Universe (forthcoming), in S. Hetherington, ed., Extreme Philosophy. Routledge.

[image adapted from Midjourney]

Thursday, April 27, 2023

New Voices in Philosophy

guest post by Nick Riggle

Philosophy is a famously rigid and complex discipline full of daunting and difficult prose. As a sign of this, people have wondered whether philosophy is literature. Literature is creatively ambitious, figurative and fun, wildly imaginative and associative. Philosophy, in contrast, is often hyperbolically literal, formulaic, painstakingly logical, tortuous and so often unfun.

But even in a tradition as strict as analytic philosophy, fun can be had, imaginations can run wild, and style can reign. If there is anything true in the complaint that philosophy lacks the quality of literature, it is that philosophy often lacks voice.

What does it mean for a “voice” to be present in works of philosophy? Voice is a vague concept in literary theory and it is often defined in a way that is indistinguishable from the typical definition of literary style. Here is how an expert defines voice: “A writer’s voice is the way his or her personality comes through on the page, via everything from word choice and sentence structure to tone and punctuation.” And here is philosopher Jenefer Robinson’s influential definition of literary style: “I shall argue that style is essentially a way of doing something and that it is expressive of personality. …what count as the verbal elements of style are precisely those elements which contribute to the expression of personality.” If you’d rather not take a philosopher’s word for it, here is poet Frank O’Hara: “Style at its highest ebb is personality.”

But voice and style are not the same. Literary style is the expression of the ideals the writer has for their writing. The writer who values economy of expression and rhythm has a different literary style from the writer who values complexity of thought and detailed emotional insight. Writing that follows strict formulae or rules of composition (e.g. writing legal contracts or instruction manuals) has difficulty achieving style because the rules crowd out the expression of literary values. Trying to inject one of these anti-style genres with style is a recipe for literary disaster. Or worse: witness WeWork’s failed IPO filing.

Voice comes from the perspective the writer inhabits as a writer. A writer’s voice is that of a single mother in Southern California expressing the difficulties of raising two children. She might do this through a poetic economy of expression or through a complex and emotionally nuanced account. A writer’s voice is that of a Zoomer navigating romance through DMs and dating apps, or a bank executive worried about the economy. Literary voice is, in this way, personal, where literary style is artistic.

Of course voice and style are not entirely separate. They can interact and influence each other. A writer’s artistic ideals might be informed by the perspective that drives their voice, and a writer’s voice can be shaped and inflected by their style. Some aesthetic writing practices encourage the former (rap, or romantic poetry with its ‘spontaneous overflow of powerful feelings’) and others tend toward the latter (Flaubert, Proust, Ernaux, French Writing in General?).

But there is an important difference between voice and style when it comes to connecting with a reader. While style can captivate and impress, voice is a locus of love. By conveying the specificity of a perspective, literary voice forges connections and grounds affection between reader and writer, where people can communicate elusive truths about the world and their experiences. In doing so, voice has the power to create literary intimacy.

Although style and voice can interact in mutually supportive ways, when it comes to philosophy, style and voice tend to conflict. Philosophers are encouraged to adopt an ideal of philosophical writing that inhabits an impartial or impersonal perspective. Philosophers abstract from all real-world roles and particular perspectives and write from the place that Thomas Nagel called “the view from nowhere”—speaking from a general ‘we’, making claims about what ‘one’ does, structuring the prose by the general strictures of logic, writing to a faceless opponent.

If literary voice comes from inhabiting in writing a particular role and perspective, then a common ideal of philosophical writing amounts to aspiring to a kind of voicelessness, where everyone tries to write (and read) from the placeless perspective of a General Philosopher. Philosophy thus tends to lack that source of writer-reader connection and affection, and so it often overlooks those elusive truths we can communicate by developing literary intimacy.

The ideal of voiceless writing is a kind of style, and since style and voice interact, the philosophical ideal of writing can be quite literally self-sabotaging—trying to bring a voiceless self forward in writing in ways that clearly present a vocal self. Often that voice is simply a product of its time—the way that Kant, for example, comes across as a very specific dude in a very specific set of circumstances—revealed in various time-stamped expressive devices, e.g. the strategies the philosopher deploys to attain voicelessness.

When we suppress the power of voice in philosophical writing, we tilt philosophy toward voiceless questions that ask for perspective-free answers, and in doing so we encourage philosophers to lose their voices. This is an expressive problem in itself, but the problem is exacerbated when we also care about making philosophy a more diverse practice. Simply gaining membership to an elite club does not mean you can really speak your mind. And a philosopher’s particular identity can deeply influence their philosophical concerns without shifting their writing voice an inch out of the view from nowhere. Without diversifying voice in philosophical writing, we risk losing a source of the intimacy that can communicate the important and elusive truths philosophers possess. To bring voice into philosophy, we need to be able to step out of the view from nowhere and land somewhere, in our own bodies, times, and lives.

History has shown that philosophy can inhabit a wide range of literary forms in the service of voice—novels, letters, memoirs, dialogues, confessions, plays, and poetry [as I was editing this piece Helen De Cruz posted this]—and past philosophers have effectively developed voice in their works. Unfortunately, perhaps the most famous and widely taught example is Descartes’s Meditations on First Philosophy, where, to me at least, he at best semi-convincingly deploys the voice of a man desperate for knowledge to encourage the reader to cultivate their own doubt. There are more effective examples in Montaigne, Emerson, de Beauvoir, Arendt, Cavell, and others. In Either/Or, Kierkegaard writes from the perspective of two radically different worldviews to get his readers to inhabit them and appreciate their differences. Sor Juana’s The Answer and Friedrich Nietzsche’s entire oeuvre scream with voice.

Some contemporary philosophers have tiptoed outside of the confines of academic writing. Most recently, Kieran Setiya’s Life is Hard adopts the voice of a man who suffers chronic pain and of a philosopher who wants to understand the place of pain in a life well lived. Chloe Cooper Jones’s Easy Beauty combines a philosopher’s discernment with deeply personal, beautiful, and humorous insights into her own disability. Olúfẹ́mi O. Táíwò’s impassioned voice in Elite Capture is blazing with his own sense of care and conviction. My recent book This Beauty develops the voice of a man who had a challenging childhood, who is becoming a father, and who sincerely wants to understand what, if anything, makes life worth living so that he has something sincere and thoughtful to say to his sons. Philosophers like John Kaag, Anthony Appiah, Agnes Callard, and Alexander Nehamas prove that philosophers can write from places of pain, oppression, loss, joy, need, and love. And in doing so they show how philosophy can handle deep and difficult issues in ways that bring to the fore the humanity they have forged by living and confronting life in the actual world as unusually reflective and intelligent people.

Let’s unleash the literary power of philosophy and let our voices sing.

[image source]

Monday, April 24, 2023

"There Are No Chairs" Says the Illusionist, Sitting in One

Recent "illusionists", such as Keith Frankish and Francois Kammerer, deny that consciousness exists.  If that sounds so obviously false that you suspect they must mean something peculiar by "consciousness", you're right!  But they say they don't mean anything peculiar by "consciousness" -- that they're just using it in the ordinary sense, or -- rather differently -- at least the sense that most 21st century philosophers mean when they use the word "consciousness".

Frankish and I have been back and forth about this quite a bit.  For a few years, I seem to have had him convinced that there's a sense of "consciousness" that is relatively neutral among philosophical theories, which he shouldn't deny the existence of.  But recently he appears to have changed his mind about this, and this past weekend, he posted a fictional dialogue illustrating his continuing disagreement.

To illustrate how illusionists tend to come across to those of us who aren't illusionists, I've constructed this fictional dialogue with an illusionist about chairs.

Eric and the Illusionist have scheduled a meeting in a large, bustling cafe with a diversity of tables, chairs, and benches, each unique.  There are big puffy armchairs, three-legged stools, hardback wooden chairs, bean bags, arty chairs that are made of single swooping pieces of wood, rolling desk chairs on posts that branch out to five wheels, and so on.  Eric is seated in a brown recliner.  The illusionist arrives.

Illusionist [sitting in a Victorian-era armchair]: As I've said many times, Eric, there are no chairs!

Eric: It seems to me that you are sitting in one.

Illusionist: Oh, this thing?  Of course it's not a chair.

Eric: Could you remind me why you think not?

Illusionist: Well, the concept of a chair, as you know, is the concept of a solid object of a certain sort.  And as current physics tells us, the world is mostly empty space.  This thing is not solid!  Therefore, it's not a chair.

Eric: I'm not so sure that's the best interpretation of particle physics, but maybe.  Let's grant that it is.  I don't think that it follows that there are no chairs.  It's not essential to the concept of a chair that it be a "solid object" in the sense you mean.

Illusionist: Well, let's ask some ordinary people.  [Turns toward Cafe Patron 1]  Excuse me, Miss, do you think that a chair is a solid object?

Cafe Patron 1: Yes, of course!

Illusionist [to Eric]: See!

Eric: Look, whatever folk theory ordinary people may or may not have about chairs is not relevant to the point.  Clearly, there are chairs.

Illusionist: Well, philosophical theories also lead us astray.  Over there I see my friend, the Solid Object Theorist.  Let's ask him!

[Illusionist and Eric walk over to Solid Object Theorist]

Illusionist: Hey, SOT, good to see you here!  Do chairs exist?

Solid Object Theorist: Yes, of course!  I'm sitting in one now.

Illusionist: And what are they essentially?

Solid Object Theorist: They are essentially solid objects of a certain sort.  They contain little to no empty space.

Illusionist [to Eric]: See?

Eric: I'm not sure we should accept that philosophical theory about chairs.

Illusionist: But don't you see, both the ordinary person and my favorite philosophical theorist agree that chairs are solid objects, so that must be the concept in play.  So if there are no solid objects -- as my favorite version of particle physics implies -- there are no chairs.

Solid Object Theorist: The Illusionist and I agree: If there are no solid objects, there are no chairs.  What could be more commonsensical?  The Illusionist accepts the antecedent and so accepts the consequent.  I deny the consequent and so deny the antecedent.  But we agree on the conditional.

Eric: Look, I don't think it's useful to define "chair" in such a theory-laden way.

Illusionist: Well, what do you think a chair is?

Eric: I don't have a positive theory of chairs.  They don't have a single common shape.  Most are made for sitting in, but not all.  And things can be made for sitting in that aren't chairs, so there's no simple functional definition either.

Illusionist: So you have no theory of chairs, and you deny the folk theory, and you deny the Solid Object Theorist's theory, and yet you say there are chairs?  What kind of defense of the existence of chairs is that?

Eric: Look, I think we can define "chair" by example.  Look at that thing, and that thing, and that thing [pointing at various, diverse types of chairs].  They're all chairs.  And that, and that, and that, are not [pointing at a stool, a sofa, and a table, respectively].  Can't we just use the term "chair" to capture whatever it is that the things I've just called "chairs" have in common, which the other things which aren't chairs lack?  I don't think we need to commit to some disputable theory of it.  Look, ordinary people can sort chairs from non-chairs in a consensus way (perhaps with some disputable or in-between cases).  [Turns toward Cafe Patron 2]  Sir, I noticed that you've been attending to my conversation with Illusionist.  So you've seen my examples of chairs and non-chairs.  See that cafe worker coming in with two new objects?  What would you say, is one or both of them chairs?

Cafe Patron 2: That first object [a cheap plastic deck chair] is a chair.  That second object [a yoga mat] is not.

Illusionist: But, sir, wouldn't you agree that chairs are solid objects?

Cafe Patron 2: Yes, of course!

Illusionist: See, Eric, there are no chairs.



Phenomenal Consciousness, Defined and Defended as Innocently as I Can Manage (Journal of Consciousness Studies, 2016)

Inflate and Explode (Sep 6, 2018)

[image: Dall-E 2: lots of chairs, bean bag, lawn chair, rolling desk chair, sofa, hardback chair, soft armchair, occupied by people sipping coffee and chatting]

Thursday, April 20, 2023

Power Clashing and the Structure of Practices

guest post by Nick Riggle

Some things just go together. Medjool dates and salted cashews. Hot pink and cyan. Unagi and oloroso. Sunshine and grass. Put these together and please enjoy your happiness. Other things not so much: manspreading and crowded subways, espresso and cottage cheese...

Zebra stripes and plaid?

Fashion is an aesthetic practice full of rules and restrictions: navy and brown yes, but no navy and black. No socks with sandals. No denim on denim (boooo). The long history of fashion provides a background of formalities and traditions whose dictates guide us in often unseen ways.

What, if anything, justifies these rules? In poetry, rules tend to serve other values, e.g. the value of complex and powerful prose. The strict rules of a pantoum key us into subtle changes in the meaning of a repeated and repurposed line, amplifying the power of the line and of the poem and poet in turn. The basic rules of pop song construction (~3 minutes, 4/4 time, no key changes, intro-verse-chorus-verse-chorus-bridge-chorus structure) provide a kind of sonic public playground, an accessible template for fun and endless variation.

In the case of mixed patterns the justification seems simple: mixed patterns clash. You don’t mix zebra stripes and plaid because you don’t mix visually conflicting prints and patterns, perhaps especially when one is a South African animal print and the other is Scottish tartan. They do not look good together. It seems that the rule couldn’t be easier to justify, since fashion is a practice that cares about aesthetic value. Mixed patterns are visually confusing and displeasing. Fashion is about looking good. No mixed patterns. QED.

That might seem easy, simple, and true. But it is a wildly superficial account of the practice.

[image source]


Aesthetic practices are full of rules that cannot be justified by such simple appeals to aesthetic value. A recent paper by Guy Rohrbaugh discusses the example of Western classical music performance, where score compliance is strictly necessary. The rule, play all and only the notes of the score, is so forceful that it does not matter if playing a different note would have greater aesthetic value. Playing other notes might sound better, be performatively expressive, or shock the audience—none of that matters because one must play all the notes. Why is there such a rule? And why is it so forceful?

Drawing on work by Aron Edidin, Rohrbaugh argues that there are indirect aesthetic reasons for the rule. First, some works have a complexity of thematic repetition, inversion, and sonic play that cannot be realized improvisationally—for their value to be realized they have to be written down and played as written. (This of course is not to say that improvisational works cannot be extremely complex.) Second, writing the music down and playing it as written allows for repetition and dissemination—more people can hear it, and everyone can hear it again. Third, this allows people to develop and deepen their understanding and appreciation of the work. And fourth, performers can bring this understanding and appreciation to bear on their performances in ways that realize different expressive, performative, and appreciative values.

The rule cannot be justified by individual-level evaluative considerations, but it can be justified at the wider practice level. The rule helps realize the goods of collective aesthetic life—sharing complex aesthetic value, deepening our understanding and appreciation of it, and expressing that understanding in our performances, experiences, and interpretations of the work. And since these goods structure aesthetic practices in general, our individual aesthetic actions must bend to them.


There is a kind of uber-value in fashion that mirrors this, generating various individual-level rules. Fashion is an art of self-expression. As such, anything goes, as long as it works, and what works is what captures something about who one is in a way that communicates with one’s audience. In this way, fashion goes hand-in-hand with style, where style is the expression of ideals. But expressing yourself and having style through clothing are not easy. So it helps to have more and less stringent rules addressed to individuals who, by following them, can’t go too far astray—guardrails that keep people on track and the group more or less together.

To use an entirely random example, it might befit a philosophy professor to wear cargo pants, t-shirt, and unbuttoned button-down shirt every day. The rules approve. The professor can make some limited choices within that general look – a colorful or a black tee, maybe with some philosophy reference on it, or a plaid overshirt that’s a bit ‘90s Seattle/lumberjack. The look conveys a lot; people have an easy time putting the professor in the right social group, noting a thing or two about their sensibility. The professor can rest assured that they don’t look too bad, but expression trumps looks: it matters that this look is self-expressive—a beautiful and expensive Italian suit would look great but it wouldn’t work.

To see this even more clearly, enter the power clasher—my favorite example of expression trumping looks in fashion. Power clashing has been around for some time, long enough, at least, for future Jack Donaghy (“Alterna-Jack”) to brag about his mastery of it. Power clashing is about clashing boldly—wearing animal prints with tartan and throwing naval stripes in there for good measure. Clashing patterns are primally visually confusing—hence the rule against—but visual confusion can ground expressive power and expressive power always wins. The power clasher says Yeah, I’m shining a flashlight in your eyes—what are you gonna do about it? The hope of fashion is that you shine back in your own way.

The individual-level rules of fashion are not ultimately justified by appeal to visual appeal. They typically help us look good, true, but more importantly they help us meet minimal conditions of self-expression. Power clashers prove that an apparently ruly practice can be deeply unruly at its heart, because the practice’s heart is the powerful and elusive value of self-expression through dress. We should think of the rules of fashion not as strict rules—ones that obviously change all the time anyway—but as communal notes on how to realize the practice’s governing value.


This makes me wonder about philosophy, which is full of stringent norms and standards, and full of people ready to enforce them—implicit and explicit norms of logic, form, voice, argument, address, and interaction. There are so many that our vigilance in observance of them threatens to make our essays and books oppressive to write and exhausting to read, as if we always had to wear formal dress lest we be regarded as unserious at best, dumb at worst. Is it too easy to forget that philosophy can be done without adhering to such norms? Like poetry, the beauty of philosophy can shine through, and because of, its rules, but poetry has embraced its ‘free verse’. I sometimes wonder if we collectively lost our sense of philosophy’s potential for literary creativity.

The sad truth is that philosophers can be extremely dismissive of those who fall out of line with philosophy’s conservative standards of writing, painting Wittgenstein as a charlatan, Nietzsche as a madman, the novels of Iris Murdoch as irrelevant, the dialogue as a lost genre, or anything outside the standard form of professional publication as lesser. I don’t know whether we can literally power clash in philosophy, but I wouldn’t mind a few more flashlights in my face.

Thursday, April 13, 2023

The Black Hole Objection to Longtermism and Consequentialism

According to consequentialism, we should act to maximize good consequences. According to longtermism, we should act to benefit the long-term future. If either is correct, then it would be morally good to destroy Earth to seed a new, life-supporting universe.

Hypothetically, it might someday become possible to generate whole new universes. Some cosmological theories, for example, hypothesize that black holes seed new universes -- universes causally disconnected from our own universe, each with its own era of Big-Bang-like inflation, resulting in vastly many new galaxies. Maybe our own universe is itself the product of a black hole in a prior universe. If we artificially generate a black hole of the right sort, we might create a whole new universe.

Now let's further suppose that black holes are catastrophically expensive or dangerous: The only way to generate a new black-hole-seeded universe requires sacrificing Earth. Maybe to do it, we need to crash Earth into something else, or maybe the black hole needs to be sufficiently large that it swallows us up rather than harmlessly dissipating.

So there you are, facing a choice: Flip a switch and you create a black hole that destroys Earth and births a whole new universe, or don't flip the switch and let things continue as they are.

Let's make it more concrete: You are one of the world's leading high-energy physicists. You are in charge of a very expensive project that will be shut down tomorrow and likely never repeated. You know that if tonight you launch a certain process, it will irreversibly create a universe-generating black hole that will quickly destroy Earth. The new universe will be at least the size of our own universe, with at least as many galaxies abiding by the same general laws of physics. If you don't launch the process tonight, it's likely that no one in the future ever will. A project with this potential may never be approved again before the extinction of humanity, or if it is, it will likely have safety protocols that prevent black holes.

[Image: Midjourney rendition of a new cosmos exploding out of the back of a black hole]

If you flip the switch, you kill yourself and everyone you know. You break every promise you ever made. You destroy not only all of humanity but every plant and animal on Earth, as well as the planet itself. You destroy the potential of any future biological species or AI that might replace or improve upon us. You become by far the worst mass murderer and genocidaire that history has ever known. But... a whole universe worth of intelligent life will exist that will not exist if you don't flip the switch.

Do you flip the switch?

From a simple consequentialist or longtermist perspective, the answer seems obvious. Flip the switch! Assume you estimate that the future value of all life on, or deriving from, Earth is X. Under even conservative projections about the prevalence of intelligent life in galaxies following laws like our own, the value of a new universe should be at least a billion times X. If we're thinking truly long term, launching the new universe seems to be by far the best choice.

Arguably, even if you think there's only a one in a million chance that a new universe will form, you ought to flip that switch. After all, here's the expected value calculation:

  • Flip switch: 0  + 0.000001*1,000,000,000X = 1000X.
  • Don't flip switch: X + 0 = X.
(In each equation, the first term reflects the expected value of Earth's future given the decision and the second term reflects the expected value generated or not generated in the seeded universe.)
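As a sanity check on the arithmetic, here is a minimal sketch in Python. The one-in-a-million probability and billion-fold payoff are the post's illustrative figures; the variable names are mine, and exact fractions are used to avoid floating-point noise:

```python
from fractions import Fraction

# Toy expected-value comparison, using the post's illustrative numbers.
# Work in units of X, the estimated future value of all Earth-derived life.
X = Fraction(1)
p_seed = Fraction(1, 1_000_000)     # one-in-a-million chance of seeding a universe
universe_value = 1_000_000_000 * X  # "at least a billion times X"

# Flip: Earth's future is destroyed (first term 0), universe maybe gained.
ev_flip = 0 + p_seed * universe_value

# Don't flip: Earth's future kept (X), no seeded universe (second term 0).
ev_dont = X + 0

print(ev_flip)  # 1000 -- i.e., 1000X
print(ev_dont)  # 1   -- i.e., X
```

Even at such long odds, the seeded universe's enormous payoff dominates the comparison by three orders of magnitude, which is exactly what drives the objection.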

Almost certainly, you would simply destroy the whole planet, with no compensating good consequences. But if there's a one in a million chance that by doing so you'd create a whole new universe of massive value, the thinking goes, it's worth it!

Now I'm inclined to think that it wouldn't be morally good to completely destroy Earth to launch a new universe, and I'm even more strongly inclined to think it wouldn't be morally good to completely destroy Earth for a mere one in a million chance of launching a new universe. I suspect many (not all) of you will share these inclinations.

If so, then either the consequentialist and longtermist thinking displayed here must be mistaken, or the consequentialist or longtermist has some means of wiggling out of the black hole conclusion. Call this the Black Hole Objection.

Could the consequentialist or longtermist wiggle out by appealing to some sort of discounting of future or spatiotemporally disconnected people? Maybe. But there would have to be a lot of discounting to shift the balance of considerations, and it's in the spirit of standard consequentialism and longtermism that we shouldn't discount distant people and the future too much. Still, a non-longtermist, highly discounting consequentialist might legitimately go this route.

Could the consequentialist or longtermist wiggle out by appealing to deontological norms -- that is, ethical rules that would be violated by flipping the switch? For example, maybe you promised not to flip the switch. Also, murder is morally forbidden -- especially mass murder, genocide, and the literal destruction of the entire planet. But the core idea of consequentialism is that what justifies such norms is only their consequences. Lying and murder are generally bad because they lead to bad consequences, and when the overall consequences tilt the other direction, one should lie (e.g., to save a friend's life) or murder (e.g., to stop Hitler). So it doesn't seem like the consequentialist can wiggle out in this way. A longtermist needn't be a consequentialist, but almost everyone agrees that consequences matter substantially. If the longtermist is committed to the equal weighting of long-term and short-term goods, this seems to be a case where the long-term goods would massively outweigh the short-term goods.

Could the consequentialist or longtermist wiggle out by appealing to the principle that we owe more to existing people than to future people? As Jan Narveson puts it, "We are in favor of making people happy, but neutral about making happy people" (1973, p. 80). Again, any strong application of this principle seems contrary to the general spirit of consequentialism and longtermism. The longtermist, especially, cares very much about ensuring that the future is full of happy people.

Could they wiggle out by suggesting that intelligent entities, on average, have zero or negative value, so that creating more of them is neutral or even bad? For example, maybe the normal state of things is that negative experiences outweigh positive ones, and most creatures have miserable lives not worth living. This is either a dark view on which we would be better off never having been born, or a view on which humanity somehow luckily has positive value despite the miserable condition of space aliens. The first option seems too dark (though check out Schopenhauer) and the second unjustified.

Could they wiggle out by appealing to infinite expectations? Maybe our actions now have infinite long-term expected value, through their unending echoes through the future universe, so that adding a new positive source of value is as pointless as trying to sum two infinitudes into a larger infinitude. (Infinitudes come in different cardinalities, but one generally doesn't get larger infinitudes by summing two of them.) As I've argued in an earlier post, this is more of a problem for longtermism and consequentialism than a promising solution.

Could they wiggle out by appealing to risk aversion -- that is, the principle of preferring outcomes with low uncertainty? Maybe, but the principle is contentious and difficult to apply. Too strict an application of it is probably inconsistent with longtermist thinking. The long-term future is highly uncertain, and thus risk aversion seemingly justifies its sacrifice for more certain short-term goods. (As with discounting, this escape might be more available to a consequentialist than a longtermist.)

Could they wiggle out by assuming a great future for humanity? Maybe it's possible that humanity populates the universe far beyond Earth. This substantially increases the value of X. Let's generously assume that if we populate the universe far beyond Earth, the value of our descendants' lives equals the value of the whole universe you could create tonight by generating a black hole. Even so, given that there's substantial uncertainty that humanity will have so great a future, you should still flip the switch. Suppose you think there's a 10% chance. The expectations then become .1*X (don't flip the switch) vs. X (flip the switch), where X now stands for the value of that great future. Only if you think it more likely that humanity has that great future than that the black hole generates some other species or set of species whose value is comparable to that hypothetical future would it make sense to refrain from flipping the switch.
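The bookkeeping in this paragraph can be sketched the same way. This is a toy comparison: the 10% figure and the equal-value stipulation come from the post, while the variable names and the assumption that the black hole seeds a universe with certainty are my simplifications:

```python
# Toy model: let V be the value of humanity's hypothesized great future,
# stipulated equal to the value of the seeded universe.
V = 1.0
p_great_future = 0.10  # the post's example: a 10% chance humanity achieves it
p_seed = 1.0           # simplifying assumption: the black hole surely seeds

ev_dont_flip = p_great_future * V  # 0.1 * V
ev_flip = p_seed * V               # V

# Refraining beats flipping only if humanity's great future is more likely
# than the seeded universe producing comparably valuable life.
print(ev_flip > ev_dont_flip)  # True
```

On these numbers, flipping still wins; the choice turns entirely on the relative probabilities, just as the paragraph says.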

If we add the thought that our descendants might generate black holes, which generate new universes which generate new black holes, which generate new universes which generate new black holes, and so on, then we're back into the infinite expectations problem.

Philosophers are creative! I'm sure there are other ways the consequentialist or longtermist could try to wiggle out of the Black Hole Objection. But my inclination is to think the most natural move is for them simply to "bite the bullet": Admit that it would be morally good to destroy Earth to seed a new cosmic inflation, then tolerate the skeptical looks from those of us who would prefer you not to be so ready (hypothetically!) to kill us in an attempt to create something better.

Thursday, April 06, 2023

Sexiness and Love Island

guest post by Nick Riggle

Sexiness can seem straightforward. Everyone knows what it’s like to want and respond to it—in oneself, in others—and I’d venture that most people do indeed want it one way or another. But the ease of feeling might flow in the other direction. Rod Stewart reminds us that sexiness more than flirts with objectification: If you want my body and you think I’m sexy… The desire for sexiness seems to include the desire for bodies as such, and so its aesthetic value seems to flirt too directly with ethical disvalue. It is difficult to know how to feel: aesthetically attracted, ethically repulsed.


One response to the sexiness problem is what I call the Prince Strategy. In his song “Sexy M.F.” Prince solves the problem by transforming sexiness from sexual to mental attractiveness.

We need to talk about things, tell me what cha do
Tell me whatcha eat, I might cook for you
See it really don't matter 'cause it's all about me and you
Ain't no one else around
I'm movin' with the blindfold, gagged and bound
I don't mind, see this ain't about sex
It's all about love being in charge of this life and the next
Why all the cosmic talk?
I just want you smarter than I'll ever be
When we take that walk

You seem perplexed I haven't taken you yet
Can't you see I'm harder than a man can get
I got wet dreams comin' out of my ears
I get hard if the wind blows your cologne near me
But I can take it, 'cause I want the whole nine
This ain't about the body, it's about the mind

Prince emphasizes that true sexiness is ‘about the mind’. This conceptual engineering gives him the best of both worlds. As he sings later in the song, “I'm happy to change my state of mind for this behind.”

[Midjourney rendition of Prince cooking eggs for a sexy woman]

As many philosophers have pointed out, sexiness is bound up with patriarchy and its attendant restrictions on women’s autonomy. The Prince Strategy tries to sidestep the connections between sexiness, women’s bodies, and patriarchy by replacing sexual, bodily attractiveness with mental attractiveness. This is an appealing strategy, for under patriarchy it is not enough to agree with Martha Nussbaum’s point that objectification means many things (7 by her count) and it is not always ethically wrong. Under patriarchy it often is, and that’s one reason why the Prince Strategy can seem attractive in this extremely nonideal world.

The problem with the Prince Strategy is not that minds cannot be sexy—obviously they can, says this philosopher—it’s that it seems to deny that bodies can also be sexy. Having watched more than a few seasons of Love Island UK I can report: obviously they can. Should we simply ignore that fact? I should clarify that watching Love Island UK is not my only evidence.


Sheila Lintott and Sherri Irvin develop another response by intervening in the patriarchal culture of sexiness that encourages women to embrace a notion of sexiness that conforms to the male gaze. On their view, we should retain the link between sexiness and sexual desire but revise the concept of sexiness to construct one that is respectful of all persons: “To find a person sexy in this sense is to see their body as infused with an expression of self and animated by their own sexual identity. … Respecting sexiness involves seeing others not (only) as sex objects but necessarily as sexual subjects: human beings who are in charge of their sexual agency.” (p. 305)

Call this the Embodiment strategy: sexiness is the attractiveness of a person’s embodied sexuality. A sexy person expresses their sexuality in their look, demeanor, composure. To find someone sexy is to be attracted to their embodied sexuality.


Now consider: the Love Island Phenomenon. If you’ve watched as much Love Island UK as I have, then you have witnessed the following phenomenon many times over: You meet someone and find them very sexy. But over time you come to know more about them: their personality, their values and goals (or lack thereof), their interests and style. And, like magic, their sexiness disappears.

Nothing need change about how they embody their sexuality, so what could explain the change in sexiness? And while the Embodiment strategy seems unable to capture the change, the Prince strategy can’t capture the initial attraction, for initially we know almost nothing about their minds and quite a lot about their bodies.

It seems that Love Island UK spells trouble for all.[1]

One response that Lintott and Irvin might offer is to say that the Love Island Phenomenon is best described not as a change in sexiness but as a change in attractiveness. What changes is not the person’s sexiness but your being attracted to it. Their sexiness is neutralized by your awareness of their unattractive traits. But this response makes me wonder why, if sexiness is embodied sexuality, information about the person’s non-sexual character should change how I feel about their sexiness.


Here is another proposal, call it Prince’s Synthesizer: Maybe the sexy truth lies in a synthesis of the Prince and Embodiment strategies: sexiness is more than embodied sexuality—it is embodied mind, where the features of mind that matter are any features we might find hot. Humans are extremely creative in finding hotness. Some of these features can shine as and through embodied sexuality: the embodiment of confidence, quirkiness, sexual poise, self-possession, boldness, and so on. But others shine in other ways: a person’s sexy intelligence, sincerity, drive, resilience, grit, creativity, worldliness, or…the way they eat falafel, or curl their lip, or smell a certain way.

Prince’s Synthesizer makes better sense of the idea that when (or at least often when) we find someone sexy, we are attracted to their style, or the way they embody their personal ideals—the way their dreams and aspirations manifest in their ways of living. Sexual aspiration generates style as much as intellectual or athletic aspiration. Eros wends its way through each. We might catch a glimpse of a person’s style in getting a sense of their sexuality, but that is only part of a bigger stylistic picture, which, when it comes into full view, along with the minds that bodies embody, might reveal something…not hot. Love Island UK Season 9 now streaming!


[1] Big thanks to the students in my Fall 2022 Aesthetics and Ethics class for a great discussion about this.