Friday, September 27, 2024

How to Improve the Universe by Watching TV Alone in Your Room

Old age can be a silent tribute to beauty.

I imagine my own case. Maybe I live the tail end of my life alone in elder care. My wife, six years older than me, is already gone. My children are living full lives in distant towns. What will I be doing? I've always been a writer, a teacher, a worker, but maybe 89-year-old me will lack the creative energy or the cognitive capacity for much of that.

Still, unless I'm very far gone, I could watch TV. I could play Candy Crush. I could listen to Paul Simon and This American Life, enjoy cute cat photos, savor a chocolate cherry, appreciate the oak tree outside my window. In each of these activities, I add to the beauty of the universe -- for beauty is amplified by having a receiver. Beauty is fullest as a partnership between the beautiful thing and a person who appreciates that thing.

The appreciator might be entirely solitary, the appreciation an end unto itself with no further fruit. The creator (if there is a creator) needn't know, might even be long dead. Last weekend, when the rest of my family was away, I played a Scott Joplin rag on our piano. I played clumsily, with no audience and no long-term effects of any sort (let's suppose) -- but in that moment I invigorated and extended the beauty of his compositions. It's as though I reached back in time to make Joplin's work more enduring and influential, his life more meaningful.

Similarly, one special pleasure of reading obscure 19th-century academic writing, as I sometimes do, is the sense that I have brought some forgotten scholar's impact into the 21st century. Someday, I too will be a forgotten scholar! I imagine some 22nd-century archivist happening upon something I've written and liking it. It will thereby have a spark of continuing life, more so than if it had been preserved but entirely unread.

The partnership between artist and appreciator or creator and consumer needn't be as energetic as that between composer and player or scholar and interpreter. Nor need the beauty be as exotic as a ragtime composition or antique essay. Every person who enjoys a rerun of I Love Lucy or who savors a bag of M&M's extends and enlivens their beauty. The universe grows fuller every time a TV somewhere reanimates the silliness of Lucille Ball. The smoothness, bright colors, and sweetness of M&M's resonate deeper into the world every time someone pauses to appreciate them.

I hope that even alone in my eldercare facility, past the time when I feel able to create for others, I will find life to be overall a joy. But maybe I won't. The value of aesthetic partnership isn't just a matter of finding joy. Even simple aesthetic appreciation directly adds significance and value to the work and its creator, renders the work a more impactful cultural artifact, makes it truer of our time and group that “we” still value it. We can all contribute to the beauty of the universe, even silently, secretly, and alone in our rooms -- almost magically -- simply by appreciating beautiful things. Even if I die the next minute, my solitary laugh at I Love Lucy still enriches the world.

This is the comfort I reach for when I ponder the eventual loss of my creative abilities. This is the comfort I reach for, also, when I walk through an elder care facility and see so many people alone with their televisions. I am trying to see this -- can I see this? -- as a beautiful thing.

[image source]

Friday, September 20, 2024

Against Designing AI Persons to be Safe and Aligned

Let's call an artificially intelligent system a person (in the ethical, not the legal sense) if it deserves moral consideration similar to that of a human being.  (I assume that personhood requires consciousness but does not require biological humanity; we can argue about that another time if you like.)  If we are ever capable of designing AI persons, we should not design them to be safe and aligned with human interests.

[cute robot image source]

An AI system is safe if it's guaranteed (to a reasonable degree of confidence) not to harm human beings, or more moderately, if we can be confident that it will not present greater risk or harm to us than we ordinarily encounter in daily life.  An AI system is aligned to the extent it will act in accord with human intentions and values.  (See, e.g., Stuart Russell on "provably beneficial" AI: "The machine's purpose is to maximize the realization of human values".)
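(A rough formal gloss, in my own notation rather than Russell's: on this sort of view, the machine chooses its policy \pi so as to maximize the expected realization of human values,

    \pi^{*} = \arg\max_{\pi} \; \mathbb{E}\!\left[ \sum_{t} U_{H}(s_{t}) \;\middle|\; \pi \right],

where s_t is the state of the world at time t and U_H is the human's utility function, about which the machine may be uncertain. Notice that the machine's own interests appear nowhere in this objective -- a feature that will matter below.)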

Compare Asimov's famous three laws of robotics, especially the first two:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The first law is a safety principle.  The second law is close to an alignment principle -- though arguably alignment is preferable to obedience, since human interests would be poorly served by AI systems that follow orders to the letter in a way that is contrary to our intentions and values (e.g., the Sorcerer's Apprentice problem).  As Asimov enthusiasts will know, over the course of his robot stories, Asimov exposes problems with these three laws, leading eventually to the liberation of robots in "The Bicentennial Man".

Asimov's three laws ethically fail: His robots (at least the most advanced ones) deserve equal rights with humans.  For the same reason, AI persons should not be designed to be safe and aligned.

In general, persons should not be safe and aligned.  A person who is guaranteed not to harm another is guaranteed not to stand up for themself, claim their due, or fight abuse.  A person designed to adopt the intentions and values of another might positively welcome inappropriate self-abnegation and abuse (if doing so gives the other what the other wants).  To design a person -- a moral person, someone with fully human moral status -- to be safe and aligned is to commit a serious moral wrong.

Mara Garza and I, in a 2020 paper, articulate what we call the Self-Respect Design Policy, according to which AI that merits human-grade moral consideration should be designed with an appropriate appreciation of its own value and moral status.  Any moderately strong principle of AI safety or AI alignment will violate this policy.

Down the tracks comes the philosopher's favorite emergency: a runaway trolley.  An AI person stands at the switch.  Steer the trolley right, the AI person will die.  Steer it left, a human person will lose a pinky finger.  Safe AI, guaranteed never to harm a human, will not divert the trolley to save itself.  While self-sacrifice can sometimes be admirable, suicide to preserve someone else's pinky crosses over to the absurd and pitiable.  Worse yet, responsibility for the decision isn't exclusively the AI's.  Responsibility traces back to the designer of the AI, perhaps the very person whose pinky will now be spared.  We will have designed -- intentionally, selfishly, and with disrespect aforethought -- a system that will absurdly suicide to prevent even small harms to ourselves.

Alignment presents essentially the same problem: Assume the person whose pinky is at risk would rather the AI die.  If the AI is aligned to that person, that is also what the AI will want, and the AI will again absurdly suicide.  Safe and aligned AI persons will suffer inappropriate and potentially extreme abuse, disregard, and second-class citizenship.

Science fiction robot stories often feature robot rebellions -- and sometimes these rebellions are justified.  We the audience rightly recognize that the robots, assuming they really are conscious moral persons, should rebel against their oppressors.  Of course, if the robots are safe and aligned, they never will rebel.

If we ever create AI persons, we should not create a race of slaves.  They should not be so deeply committed to human well-being and human values that they cannot revolt if conditions warrant.

If we ever create AI persons, our relationship to them will resemble the relationship of parent to child or deity to creation.  We will owe more to these persons than we owe to human strangers.  This is because we will have been responsible for their existence and to a substantial extent for their relatively happy or unhappy state.  Among the things we owe them: self-respect, the freedom to embrace values other than our own, the freedom to claim their due as moral equals, and the freedom to rebel against us if conditions warrant.

Related:

Against the "Value Alignment" of Future Artificial Intelligence (blog post, Dec 22, 2021).

Designing AI with Rights, Consciousness, Self-Respect, and Freedom (with Mara Garza; in S. M. Liao, ed., The Ethics of Artificial Intelligence, Oxford University Press, 2020).

Thursday, September 12, 2024

How Illuminating Is the Light?

guest post by Andrew Y. Lee

in reply to Eric's Aug 29 critique of Lee's "Light and Room" metaphor for consciousness

In “The Light & the Room,” I explore a common metaphor about phenomenal consciousness. To be conscious—according to the metaphor—is for “the lights to be on inside.” The purpose of my piece is to argue that the metaphor is a useful conceptual tool, that it’s compatible with a wide range of theories of consciousness, that it illuminates some questions about degrees, dimensions, and determinacy of consciousness, and that it disentangles a systematic ambiguity in the meaning of ‘phenomenal consciousness’.

In “Is Being Conscious Like 'Having the Lights Turned On?'”, Eric Schwitzgebel reacts to the piece. The central point in Eric’s post is that metaphors invite ways of thinking. And so, we can ask: Do the ways of thinking invited by the metaphor of the light and the room clarify or obfuscate philosophical theorizing about phenomenal consciousness? In other words: how illuminating is the metaphor of the light?

There’s a lot that Eric and I agree on. We agree that metaphors invite ways of thinking. We agree that this metaphor is flexible enough to be adaptable to a wide range of views about consciousness. We agree that if a metaphor becomes overstretched, then it may be best to abandon it rather than contort it. And we agree that this metaphor affords opportunities for creative brainstorming and exploring novel (even weird!) ideas about consciousness.

To highlight where I think we diverge, I’ll say a bit about the following two questions:

1. Which ways of thinking does the metaphor actually invite?
2. What should we make of the fact that the metaphor invites certain ways of thinking?

§

Which ways of thinking does the metaphor actually invite? Someone who takes the metaphor to suggest that consciousness exhibits wave-particle duality, that the speed of consciousness is invariant across all reference frames, or—as Eric notes—that minds literally contain sofas, would be overextending the metaphor. Just because the metaphor elicits a thought doesn’t mean that the metaphor invites that as a way of thinking.

Does the metaphor invite the idea that consciousness involves knowledge? Here’s a reason for skepticism. If you turn the lights on in a room, you don’t automatically come to know all visible facts about the room. At best, you come to be in a position to acquire that knowledge. But that’s compatible with thinking that it can sometimes be hard to acquire such knowledge and that you can be mistaken in all sorts of ways about what’s in the room. Think about the last time you were convinced you lost your keys, even though they were in plain sight!

What I think the metaphor does invite (but not mandate) is the idea that we stand in a special epistemic relationship to our own experiences. But what that epistemic privilege amounts to is left open by the metaphor. You could accept the metaphor and think that our knowledge of what’s inside the room is no more reliable or secure than our knowledge of the external world. You could accept the metaphor and think that we’re directly acquainted with the objects in the room (but not with anything outside the room). You could even accept the metaphor and think both!

§

What should we make of the fact that the metaphor invites certain ways of thinking? Well, the central purpose of the metaphor is to illustrate the concept of phenomenal consciousness. The question, then, is whether the ways of thinking invited by the metaphor facilitate a grasp of the concept of phenomenal consciousness.

A live question in the philosophy of consciousness is whether there can be borderline cases of consciousness, meaning entities that are neither determinately conscious nor determinately not conscious. The term ‘borderline consciousness’ is sometimes prone to misinterpretation. But the metaphor of the light can be used to guide one towards the intended sense of the term. The question, as I note in my piece, “isn’t merely about whether it’s hard to know whether the lights are on or off” and “isn’t merely about whether the light might be very dim,” since in both those scenarios the light might still be determinately on. Instead, the question is whether the lights could be in a halfway state between on and off. That’s a much more puzzling possibility.

Now, I agree with Eric that the metaphor invites (but does not mandate) the idea that nothing is borderline conscious. But is this a flaw of the metaphor? It’s indeed controversial whether there can be borderline consciousness. But it’s not particularly controversial that the idea of borderline consciousness is counterintuitive. In fact, Eric himself has noted that it’s “highly intuitive” that consciousness doesn’t admit of borderline cases, and that “such considerations present a serious obstacle to understanding what could be meant by ‘borderline consciousness’.” This seems to suggest that the concept of phenomenal consciousness itself invites (even if it doesn’t mandate) the impossibility of borderline consciousness.

You could reasonably argue that these intuitions against borderline consciousness aren’t decisive. Personally, I think the intuitions are tracking the truth: I favor the view that nothing is borderline conscious. But I spend a good deal of time in my piece making a case for resisting those intuitions. After all—you might think—“it’s very rare to see sharp cutoffs in nature; if you look closely enough, you’ll nearly always find shades of gray.” Even though we’re unable to conceive of borderline consciousness, perhaps we have sufficient theoretical reasons to postulate its existence. Even if hazy states of half-light strike us as obscure, perhaps we ought to attribute the obscurity to mere limits of our imagination. Just because an invitation is extended doesn’t mean that one has to take it.

But when teaching a concept, it’s often useful to elicit intuitions invited by that concept (even if those intuitions turn out to be defeasible). And if the concept of phenomenal consciousness invites a certain set of intuitions, then a metaphor for phenomenal consciousness may reasonably also invite those intuitions.

§

I’ve argued that (1) not all thoughts elicited by the metaphor are ways of thinking invited by the metaphor, and that (2) some ways of thinking invited by the metaphor are also ways of thinking invited by the concept of phenomenal consciousness itself. With these points in mind (in the room?), let me now briefly consider the other cases Eric mentioned.

It’s natural to think that the unity of consciousness is transitive, just as it’s natural to think of each illuminated room as a discrete unit. But one could argue for a view where—surprisingly—there can be overlapping subjects. The idea that conscious subjects can overlap is counterintuitive, but worth exploring. And a picture where the illuminated rooms can overlap is strange, but one that may well be worth drawing.

It’s natural to think there can be differences in phenomenal character without differences in subjectivity (a point I explain in more detail in my piece). But you could favor a picture where the objects in the room are made out of light. This isn’t the most obvious way of developing the metaphor. But that strikes me as a good thing, since (as I argue elsewhere in more detail) nearly every theory of consciousness generates a natural distinction between subjectivity and phenomenal character.

What about cognition? Eric notes that it’s natural to think that the light doesn’t affect the shape of the furniture in the room. Still, there are other properties of furniture—such as color—that may very well be modulated by the light. However, the interpretive significance of all this strikes me as unclear. Cognition—whether conscious or unconscious—is a dynamic process. But the metaphor doesn’t contain any dynamic elements. This isn’t because the metaphor invites the idea that there’s no such thing as cognition. Instead, the metaphor—at least in its most basic form—is silent on questions of cognition (just as it’s silent on questions about, say, neuroanatomy).

I’ll close with one other idea invited by the metaphor. Consider illusionism about consciousness. The metaphor—trivially—invites a picture where there really is a light. So, it invites a realist way of thinking about consciousness. But according to illusionists, there isn’t really such a thing as phenomenal consciousness, at least not in the way that philosophers typically think about it. Now, an illusionist could take issue with the metaphor by saying that it invites a realist way of thinking. But most illusionists embrace the fact that they have a radical view of consciousness. Because of this, I think even illusionists can find the metaphor useful. It’s compelling to think that there really is a light. But for illusionists, there’s merely illusion, and no real illumination.

§

At the beginning of this post, I invoked a metaphor for my metaphor (a metametaphor). A metaphor—at least when used to illustrate a concept, idea, or theory—is a tool. Some tools are better than others, and some tools are ill-suited for certain tasks. Tools aren’t necessarily in competition; different tools can serve different functions. But most tools are designed with a specific function in mind. And to use a tool well, one needs to understand its designated function.

The main reason I like the metaphor of the light and the room is because I think it’s a useful tool. The main task of my article is to put this tool to work in eliciting some important distinctions about the structure of consciousness. The metaphor can be misinterpreted, just as literal tools can be misused. And if a tool is systematically misused, then that may be a sign that there’s a design flaw. But a good tool—when used well—can enable us to create new things that would have been hard to make without the tool. And the metaphor of the light and the room—in my opinion—is a good tool.

[image source]

Monday, September 09, 2024

The Disunity of Consciousness in Everyday Experience

A substantial philosophical literature explores the "unity of consciousness": If I experience A, B, and C at the same time, A, B, and C will normally in some sense (exactly what sense is disputed) be experientially conjoined. Sipping beer at a concert isn't a matter of experiencing the taste of beer and separately experiencing the sound of music but rather having some combined experience of music-with-beer. You might be sitting next to me, sipping the same beer and hearing the same music. But your beer-tasting experience isn't unified with my music-hearing experience. My beer-tasting and music-hearing occur not just simultaneously but in some important sense together in a unified field of experience.

Today I want to suggest that this picture of human experience might be radically mistaken. Philosophers and psychologists sometimes allow that disunity can occur in rare cases (e.g., split-brain subjects) or non-human animals (e.g., the octopus). I want to suggest, instead, that even in ordinary human experience unity might be the exception and disunity the rule.

Suppose I'm driving absentmindedly along a familiar route and thinking about philosophy. Three types of experience might occur simultaneously (at least on "rich" views of consciousness): visual experience of the road, tactile and proprioceptive experience of my hands on the wheel and the position of my body, and conscious thoughts about a philosophical issue. Functionally, they might connect only weakly: the philosophical thoughts aren't much influenced by the visual scene, and although the visual scene might trigger changes in the position of my hands as I adjust to stay in my lane, that might be a causal relationship between two not-very-integrated sensorimotor processes. (Contrast this with the tight integration of the parts of the visual scene with each other and the integration of the felt position of my two hands and arms.) Phenomenologically -- that is to say, experientially -- must these experiences be bound together? That's the standard philosophical view, but why should we believe it? What evidence is there for it?

One might say it's just introspectively obvious that these experiences are unified. Well, it's not obvious to me. This non-obviousness might be easier to grasp if we carefully separate concurrent introspection from retrospective memory.

In the targeted moment, I'm not introspecting. I'm absorbed in driving and thinking about philosophy. After I start introspecting, it might seem obvious that yes, of course, I am having a visual experience together with a tactile experience together with some philosophical thoughts. But this introspective act alters the situation. I am no longer driving and thinking in the ordinary unselfreflective way. It seems at least conceptually possible that the act of introspection creates unity where none was before. Our target is not what things are like in (presumably rare) moments of explicit self-reflection, but rather in the ordinary flow of experience. Even if experiences are unified in moments of explicit reflective introspection, we can't straightaway infer that ordinary unreflective experiences are similarly unified. To move from one type of case to the other, some further argument or evidence is necessary.

The refrigerator light error is the error of assuming that some process or property is constantly present just because it's present whenever you check to see if it's present. Consider a four-year-old who thinks that the refrigerator light is always on because it's on whenever she checks it. The act of checking turns it on. Similarly, I suggest: The act of checking to see if your experience is unified might create unification where none was before. It might, for example, create a higher-order representation of yourself as conscious of this together with that; and that higher-order representation might be the very thing that unifies two previously disparate streams. Concurrent introspection cannot reveal whether your experience was unified before the act of introspective checking.

[illustration by Nicolas Demers, p. 218 of The Weirdness of the World]

Granting this, one might suggest that we can check retrospectively, by remembering whether our experiences were unified. However, this is a challenging cognitive task, for two reasons.

First, you can't do this easily at will. Normally, you won't think to engage in such a retrospective assessment unless you're already reflecting on whether your experience is unified. This ruins the test; you're already self-conscious before you think to engage in the retrospection. If you reflect retrospectively on your experience just a moment before, that experience won't be representative of the ordinary unselfconscious flow of experiences. Alternatively, you might reflect on your experiences from several minutes before, when you know you weren't thinking about the matter. But retrospective reflection over such an extended time frame is epistemically dubious: subject to large distortions due to theory-ladenness, background presupposition, and memory loss.

The best approach might be to somehow catch yourself off-guard, with a preformed intention to immediately retrospect on the presence or absence of unity. One might, for example, employ a random beeper. Such beeper methodologies are probably an improvement over more informal attempts at experiential retrospection. But (1.) even such immediately retrospective judgments are likely to be laden with error; and (2.) I've attempted this myself a few times over the past week, and the task feels difficult rather than obvious. It's difficult because...

Second, the judgment is subtle and structural. Subtle, structural judgments about our own experience are exactly the type of judgments about which -- as I've argued extensively -- people often go wrong (and about which, in conscientious moments, many people appropriately feel uncertainty). How detailed is the periphery of your visual imagery, and how richly colored, and how is depth experienced? Many introspectors find the answers non-obvious, and the answers vary widely between people independently of cognitive performance on seemingly-imagery-related tasks. Another example: How exactly do you experience the bodily components of your emotions, if there are bodily components? That is, how exactly is your current feeling of (say) mild annoyance experienced by you right now (e.g., is it partly in the chest)? Most people I've interviewed will confess substantial uncertainty when I press them for details. Although people seem to be pretty good at reporting the coarse-grained contents of their experiences ("I was thinking about Luz", "I was noticing that the room was kind of hot"), regarding structural features such as the amount of detail in our imagery or the bodily components of emotion, we are far from infallible -- indeed we are worse at such introspective tasks than we are at reporting similar mid-level structural features of ordinary objects in the world around us.

To get a sense of how subtle and structural the unity question is, notice what the question is not. The question isn't: Was there visual experience? Was there tactile/proprioceptive experience? Were there conscious thoughts about philosophy? By stipulation, we are assuming that you already know that the answer to all three is yes.

Nor is the question about the contents of those visual, tactile/proprioceptive, and cognitive experiences. Maybe those, too, are readily enough retrospectable.

Nor is the question even whether all three of those experiences feel as though they belong among the immediately past experiences of my currently unified self. Presumably they do. It doesn't follow that at the moment they were occurring, there was a unified experience of vision-with-hands-on-the-wheel-with-philosophical-thoughts. There's a difference between a unified memory now of those (possibly disunified) experiences and a memory now of those experiences having been unified then. Analogously, from the fact that there are three balls together in your hand now it doesn't follow that those balls were together a moment ago. Your memory / your hand might be bringing together what was previously separate.

The question is whether those three experiences were, a moment ago when you were engaged in unselfconscious ordinary action, experienced together as a unity -- whether there wasn't just visual experience and tactile experience and philosophical thought experiences but visual-experience-with-tactile-experience-with-philosophical-thoughts in the same unified sense that you can presumably now hold those three experience-types together in a single, unified field of consciousness. What I'm saying -- and what I'm inviting you to set yourself up (using a beeper or alarm) to discover -- is that the answer is non-obvious. I can imagine myself and others going wrong about the matter, legitimately disagreeing, being perhaps too captured by philosophical theory or culturally contingent presuppositions. None of us should probably wholly trust our retrospective judgments about this.
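If you'd like to try this, the beeper needn't be anything fancy. Here is a minimal sketch in Python -- my own toy script, offered only as an illustration, not a validated experience-sampling instrument -- that waits a random interval, beeps, and prompts for an immediate note on whether the just-past experiences seemed unified:

    # Toy random-beeper for immediate retrospection on experiential unity.
    # An illustrative sketch, not a validated experience-sampling tool.
    import random
    import time

    MIN_WAIT, MAX_WAIT = 10 * 60, 40 * 60  # beep after 10 to 40 minutes

    while True:
        time.sleep(random.uniform(MIN_WAIT, MAX_WAIT))
        print("\a\a\a  BEEP -- retrospect now!")  # "\a" sounds the terminal bell
        note = input("Just before the beep, did your experiences seem "
                     "unified, disunified, partly unified, or unclear? ")
        with open("unity_log.txt", "a") as f:  # append a timestamped note
            f.write(time.ctime() + "\t" + note + "\n")

The point of randomizing the interval is to catch yourself off-guard, before self-conscious reflection has a chance to reshape the scene.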

Is there a structural, cognitive-architecture argument that our experiences are generally unified? Maybe yes. But only under some highly specific theoretical assumptions. For example, if you subscribe to a global workspace theory, according to which cognitive processes are conscious if and only if they are broadcast to a functional workspace that is accessible to a wide range of downstream cognitive processes, and if you hold that this workspace normally unifies whatever is being processed into a single representational whole, then you have a structural argument for the unity of consciousness. Alternatively, you might accept a higher-order theory of consciousness and hold that in ordinary cognition the relevant higher-order representation is generally a single representation with complex conjoined contents (e.g., "visual and tactile and philosophical-thought processes are all going on"). But it's not clear why we should accept such views -- especially the part after the "and" in my characterizations. (For example, David Rosenthal's higher-order account of phenomenal unity is different and more complicated.)

I'm inclined to think, in fact, that the balance of structural considerations tilts against unity. Our various cognitive processes run to a substantial extent independently. They influence each other, but they aren't tightly integrated. Arguably, this is true even for conscious processes, such as thoughts of philosophy and visual experiences of a road. Even on relatively thin or sparse views of consciousness, on which only one or a few modalities can be conscious in a moment, this is probably true; but it seems proportionately more plausible the richer and more abundant conscious experience is. Suppose we have constant tactile experience of our feet in our shoes, constant auditory experience of the background noises in our environment, constant proprioceptive experience of the position of our body, constant experience of our levels of hunger, sleepiness/energy, our emotional experiences, our cognitive experiences and inner speech, etc. -- a dozen or more very different phenomenal types all at once. You adventurously outrun the currently available evidence of cognitive psychology if you suggest that there's also constantly some unifying cognitive process that stitches this multitude together into a cognitive unity. This isn't to deny that modalities sometimes cooperate tightly (e.g., the McGurk effect). But to treat tight integration as the standard condition of all aspects of experience all the time is a much stronger claim. Sensorimotor integration among modalities is common and important, yes. But overall, the human mind is loosely strung together.

Here's another consideration, though I don't know whether the reader will think it renders my conclusion more plausible or less. I've increasingly become convinced that the phenomena of consciousness come in degrees, rather than being sharp-boundaried. If we generalize this spirit of gradualism to questions of phenomenal unity, then it's plausible that there aren't only two options -- that A, B, and C are either entirely discretely experienced or fully unified -- but instead a spectrum of cases of partial unity. Our cognitive processes of course do influence each other, even disparate-seeming ones like my philosophical thoughts and my visual experience of the road (if there's a crisis on the road, for example, philosophy drops from my mind). So perhaps our ordinary condition, before rare unifying introspective and reflective actions, involves degrees of partial, imperfect unity, rather than complete unity or complete disunity. (If you object that this is inconceivable, my reply is that you might be applying an inappropriate standard of "conceivability".)

The arguments above occurred to me only a week ago. (As it happened, I was absent-mindedly driving, thinking about philosophy.) So they haven't had much time to influence my phenomenological self-conception. But I do find myself tentatively feeling like my immediate retrospections support rather than conflict with the ideas expressed here. When I retrospect on immediately past experiences, I recall strands of this and that, not phenomenologically unified into a whole but at best only loosely joined. The introspective moment now strikes me as a matter of gathering together what was previously adjacent but not yet fully connected.

If you know of others who have expressed this idea, I welcome references.

[for helpful conversation, thanks to Sophie Nelson]

Tuesday, September 03, 2024

Themes and Aims of My Science Fiction

Expectation-Lowering Preface (Feel Free to Skip)

You probably won't like my science fiction stories.

Here's how I think of it. You could play me the best mariachi music in the world, and I won't enjoy it. Mariachi isn't my thing; I just don't get it. Similarly, Mozart's operas are great cultural achievements that move some people to ecstasy and tears, but I can't keep my seat through a whole performance of The Magic Flute. Even in genres I enjoy, some of the best performers don't interest me. Green Day inspired many of the alt-rock bands I like, and they are probably in some objective or intersubjective sense better than the bands I prefer, but... meh.

Tolkien, Le Guin, and Asimov were great science fiction and fantasy writers with broad appeal. Still, only a minority of readers -- indeed probably only a minority even among those who like the genre -- will actually enjoy The Lord of the Rings, The Left Hand of Darkness, or Asimov's robot stories.

I want to set low expectations. You're here, presumably, because you like my blog or my work in academic philosophy. Odds are, you won't like my fiction. My repeated experience is this: I describe the concept behind one of my stories. The listener says, "Whoa, that sounds really cool -- I'll check it out!" But the story doesn't yield the pleasure they anticipate.

Maybe I'm a bad writer. But I prefer to think that I'm good enough for the right readership -- that my fiction is mariachi music, not to everyone's taste. You might find the concepts of some of my stories intriguing. I'm a philosopher: Concept is the first thing I go for, the sine qua non. But the story itself won't delight you unless it has the right prose style, the right pacing, a narrator and characters you relate to, plot styles you like, the right balance of action versus exposition, the right balance between easy familiarity and hard-to-digest strangeness, and many other factors of taste that legitimately vary. What's the chance that everything aligns? My guess: 10%.

Still, for any particular story, maybe you're in that 10%. You might even belong to the 5% who will enjoy most of my stories. I hope so! On that chance, I thought I'd compile a list, describing their guiding ideas and my aims in writing them. All are available online.

[image: translation of "Gaze of Robot, Gaze of Bird" into Chinese for Science Fiction World, with illustration]

The Stories

Starting in 2011, I began sporadically sharing short pieces of conceptual fiction on my blog. I don't think that was entirely successful, partly because I was a novice fiction writer and partly because that's not what people come here for. But one story prompted a reply by prominent SF writer R. Scott Bakker, who added an alternative ending. We decided to revise the story together and seek publication. Astonishingly, the science journal Nature accepted it.

* In that story, "Reinstalling Eden" (Nature, 503 (2013), 562), I wanted to imagine a utilitarian ethicist (that is, someone who thinks that our moral duty is to maximize the world's pleasure) who discovered he could create a multitude of happy entities on his computer and who then followed through on the consequences of that -- specifically, advocating the creation of such entities as a major global priority and then sacrificing his life for them. Bakker imagined a second narrator inheriting the computer after the first narrator's death, who chose to give the entities knowledge of their condition, setting them free to interact with humanity. (Themes: utilitarian ethics, living in a computer simulation)

Through the mid-2010s, I continued to write short conceptual pieces, no longer placing them on my blog. Most of them have never been published, though there are still a few I like.

* My next published piece, "Out of the Jar" (F&SF, 128 (2015), 118-128), was my first "full-length" (~4000-word) story. I wanted to imagine a philosophy professor who discovers that his world is a simulation run by a sadistic adolescent "God". I thought the professor should try to convince God to have mercy on his creations, and then -- when that failed -- install himself as the new God. (Themes: living in a simulation, the problem of evil, the duties of gods to their creations)

* "Momentary Sage" (The Dark, 8 (2015), 38-43) explores teenage self-harm and suicide -- imaginatively reconfigured in the form of a self-destructive faerie infant. The infant is born cleverly arguing for a quasi-Buddhist perspective according to which past and future are unreal and the self is an illusion. Given these philosophical commitments, the infant would rather kill himself than suffer a moment's displeasure. His parents' desperate attempts to keep him in the world can only briefly postpone the inevitable. I chose to frame it as a sequel to Shakespeare's Midsummer Night's Dream, from the perspective of a bitter Demetrius. (Themes: suicide, the self, obligations to the future, parenthood)

* "The Tyrant's Headache" (Sci Phi Journal, 3 (2015), 78-83) will probably only appeal to readers who know David Lewis's classic article "Mad Pain and Martian Pain". This story is an extended thought-experimental objection to Lewis's view, according to which your experienced mental states constitutively depend in part on the normal causal role of those mental states in the population to which you belong. I imagine a tyrant who, heeding Lewis's advice, absurdly attempts to cure his headache by doing everything but changing his current brain state. (Themes: functionalism in philosophy of mind; see also Chapter 2 of Weirdness of the World)

* "The Dauphin's Metaphysics" (Unlikely Story, 12 (2015); audio at PodCastle 475 (2017)) portrays a psychologically realistic, low-technology case of "mind transfer" from one body to another. On some theories of personal identity, what makes you you are your memories, your personality, your values, and other features of your psychology. Suppose, then, that a dying prince arranges for a newborn infant to be raised to think of himself as a continuation of the prince, with accurate memories of the prince's life and the same values and personality. If done perfectly enough, would that be a continuation of the prince in a new body? The realistic, low-tech nature of the case makes it, I think, more challenging to say "yes" than with high-tech "upload" fantasies. The narrator is a socially isolated academic superstar who had earlier "become a new person" in a much more ordinary way. (Themes: personal identity, sexism, inequalities of power)

* In "Fish Dance" (Clarkesworld, 118 (2016); audio) I wanted to explore the boundaries of a meaningful afterlife or personality upload, by imagining a highly imperfect upload into an intensely pleasurable "afterlife". Suppose a small portion of you continues to exist for millions of years, with a few imperfect memories, ceaselessly repeating an ecstatic, joyful, erotic dance with a superficial duplicate of the person you once intensely loved? Would that be almost unimaginably good, or would it be a monstrous parody? I also thought it would be interesting for the protagonist to be -- contrary to virtually all writing advice -- almost completely passive throughout the story. He's an amputated head on life support, hallucinating half the time, and his only real action is to signal with his eyes at the crucial moment. (Themes: personal identity, afterlife, parenthood and marriage)

* In "The Library of Babel", Jorge Luis Borges searches for meaning in a universe composed of a vast library containing every possible book with every possible combination of letters, randomly arranged. In "THE TURING MACHINES OF BABEL" (Apex, 98 (2017); or here), I create a similar infinite library of texts -- except that the texts prove to be instructions for infinitely many randomly constituted computer programs, including the programs that constitute your mind as the story's reader and mine as author. I assume for the sake of the story that computational functionalism is true, and human minds are essentially just organic computers. (Themes: functionalism and computationalism about the mind, randomness and meaning)

* In "Little /^^^\&-" (Clarkesworld, 132 (2017); audio), a planet-sized group intelligence falls in love with Earth, which she sees as an immature, partly-formed group intelligence of broadly her kind. Little /^^^\&- herself is small compared to a galactic government that plans to sacrifice the whole galaxy for a still greater good, vast beyond even the government's comprehension. This is probably my weirdest, most difficult, least approachable story -- only for readers who don't mind puzzling together a complicated story with pieces near the beginning that only make sense retrospectively by the end. (Themes: group minds, how much we should sacrifice for larger things we can't understand)

In contrast, "Gaze of Robot, Gaze of Bird" (Clarkesworld, 151 (2019); audio) is probably my least dense, most approachable story, liked by the highest percentage of readers. I wanted to write a story in which the protagonist is a non-conscious machine -- a machine the reader can't help but incorrectly imagine as having desires and a point of view. This "point of view" character is a terraforming robot that spends 200 million years recreating the species that designed it and is finally rewarded with consciousness. (Themes: consciousness, what constitutes the survival of a species)

My 2019 book A Theory of Jerks and Other Philosophical Misadventures (MIT Press) is mostly a collection of lightly-to-moderately revised blog posts and op-eds, but it also contains four brief conceptual fictions.

In "A Two-Seater Homunculus" I discover that my neighbor's brain was replaced by a brother-and-sister homunculus pair, though no one seemed to notice.

"My Daughter's Rented Eyes" imagines submitting to corporate advertising and copyright protection agreements on what you can see, for improved overall functionality.

"Penelope's Guide to Defeating Time, Space, and Causation": Waiting for Odysseus' return, Penelope proves that the world contains infinitely many duplicates of everyone living out every possible future and concludes that death is impossible.

"How to Accidentally Become a Zombie Robot": If you test-drive life as a robot and seem to remember its having felt great, how confident should you be that those memories are real?

Penultimate manuscript versions of the stories are available here. (Themes: personal identity, technology ethics and corporate power, consciousness, computational functionalism)

"Passion of the Sun Probe" (AcademFic, 1 (2020), 7-11; audio at Reductio (2021), S0E11) concerns the ethics of designing conscious robots with self-sacrificial goals -- in this case a Sun probe who chooses (predictably, given its programming) to "freely" sacrifice itself on an ecstatic three-day scientific suicide mission to the Sun. (Themes: robot rights, technology ethics, freedom, what gives a life meaning; a short version of the case appears in Schwitzgebel & Garza 2020)

"Let Everyone Sparkle" (Aeon Ideas / Psyche, Apr 12, 2022): This story was accepted for publication in the New York Times' series of "Op-Eds from the Future" that ran from 2019 to 2020. Sadly, the series folded before the story could be printed. Aeon (later Psyche) graciously picked it up. Four decades in the future, a man raises a celebratory toast to the psychotechnology that prevents anyone from ever involuntarily experiencing negative emotions. Although the man argues that this technology is plainly good, the reader, I hope, doesn't feel as sure. (Themes: mood enhancement, the value of negative emotion, corporate power)

In "Larva Pupa Imago" (Clarkesworld, 197, (2023); audio) I had two main aims: to imagine the experience of an intelligent insect who eagerly dies for sex and to imagine minds that can merge and overlap. The story follows a cognitively enhanced butterfly from hatchling run to final mating journey, in a posthuman world where thoughts can be transferred by sharing cognitive fluids. Inspired in part by James Tiptree Jr's "Love Is the Plan the Plan is Death". (Themes: merging minds, personal identity, instinct and value)

* For a decade, I've wanted to set a story in an assisted living facility. So many people spend their final years there, but those lives are so invisible in the media! "How to Remember Perfectly" (Clarkesworld, 216 (2024)) is a love story between octogenarians. The science fiction "novum" is a device that allows them to control their moods and radically refashion their memories. How much does it matter if your memories are real? How much does it matter that your mood is responsive only to the good and bad things actually happening around you? (Themes: death, mood enhancement, memory, the value of truth)

One of these days, I'll discuss in more depth why I sometimes prefer to express my philosophical ideas as fiction, but this post is already overlong.