Thursday, March 30, 2023

Wearing Band T-Shirts of Bands You Don’t Know

guest post by Nick Riggle

A recent trend has teens and twenty-somethings wearing band t-shirts of bands they don’t know. You can tell that they don’t know the band because you can ask them: Are you a fan of AC/DC, Nirvana, Metallica, Guns N’ Roses, The Doors, Slayer, Led Zeppelin? They will respond with breathtaking nonchalance: Huh? Oh, this shirt?

They wear it for the look.

Could the look be flattering? For Millennials, and most certainly for Gen-X, there might be no clearer example of a negative volitional necessity that generates the unthinkable. Thou shalt not wear a band t-shirt if thou knowest not the band. No thought is so bold, so unafraid of rejection, as to cross the Millennial and Gen-X mind.

But to them it’s edgy. It’s street. It’s rock ’n’ roll. It’s Kendall Jenner.

*

Who knows where it started (perhaps Ghesquière for Balenciaga in 2012), but everyone knows that Jenner slays. Long an influencer in the glammed-up band t-shirt game (along with her sisters), Jenner has been spotted wearing, among others, Metallica, Kiss, AC/DC, ZZ Top, and Led Zeppelin shirts, but when it comes to concerts she favors Harry Styles, Taylor Swift, and Fleetwood Mac. Philosophers have wondered whether you should be consistent in your personal aesthetics. Jenner seems unconcerned. When Gary Holt, guitarist for the metal band Slayer, wore a t-shirt that read “Kill the Kardashians” on the band’s farewell tour in 2019, Jenner responded with classic Kardashian shade: she rocked a Slayer shirt that said “RIP” and (probably) helped bump the legacy band’s merch sales up to 10 million dollars that year.

*

Is it wrong to wear band t-shirts of bands you don’t know? In one sense it is obviously a mistake. I find it useful here to think about the groundbreaking theory of aesthetic value that Dominic McIver Lopes develops in his 2018 book Being for Beauty. Lopes argues that aesthetic value is metaphysically tied to the rules and norms of specific aesthetic practices. Your aesthetic reasons for acting one way rather than another when it comes to e.g. pulling an espresso shot are tied to the specific norms and achievements of the practice of pulling espresso. In the practice of band fandom that produces and distributes band t-shirts, one of the rules is crystal clear: wear a band shirt only if you love the band. If you wear one without knowing the band, then you’re not very good at the practice. You are not responding to the reasons the practice generates.

But Lopes’s theory allows for a single non-aesthetic property (or set of such) to ground different aesthetic properties in different practices. Different practices have different ‘profiles’, mapping different movements, shapes, colors, images, and so on onto different aesthetic values. The same contours and colors that are lively in minimalism are subdued in abstract expressionism; the same sequence of movements that is beautiful in tap dance is awkward in ballet; a few perfect poetry verses make for a few terrible rap verses.

Maybe Kendall Jenner (and whoever else) established a new aesthetic practice of wearing band shirts as a non-fan. Wearing the shirt will potentially have different aesthetic values in each practice. Wearing one as a fan is metal, dark, unruly. As a non-fan it is edgy, indie, street, and…Kardashianesque.

*

There are many aesthetic practices that cast the products of another aesthetic practice in a new role. In bad cases it is ignorant cultural appropriation. In good cases it is brilliant fusion or avant-garde innovation. Even with billionaire influencers involved, wearing band t-shirts of bands you don’t know seems less like Ikea selling “jerk chicken” (bad) or sushirritos (good) and more like Jim Stark, the rebel without a cause, the jobless teen wearing work boots. Some aesthetic practices are freeloaders. Some other practice imbues the band shirt, the boots, the jeans with meaning. Outsiders detect that meaning, are somehow attracted to it, and latch onto and repurpose it without contributing to the original practice. In a sense they have to repurpose it because they don’t really know what it is. But it seems cool or interesting or irresistible, and so whatever they do with it is unlikely to be the aesthetically right thing according to the practice. But being moved and inspired in this way is just part of what it is to aesthetically value something—it is to want to imitate it, incorporate it into your life, make it yours and share it with others. And in a sense they are not wrong: Band t-shirts are obviously cool.

*

If wearing band t-shirts of bands you don’t know is a new aesthetic practice, then there is ignorance on each side. From the fan’s point of view, the Kendall Jenner imitators are brazenly ignorant and disrespectful, flippantly adopting a mere ‘look’ by exploiting something deeply personal. From the Jenner point of view, the fans are at best overly protective, at worst plain pretentious. It’s just a t-shirt, after all, and look how dope it looks with jeans, heels, and the right jewelry!

But these freeloader practices are sure sources of confusion and letdown. Fan shirts are a quick path to aesthetic community. As a fan, you might get excited to see the youth—or perhaps billionaire influencers—in your fan club. You give a knowing nod to the unusually cool-looking person wearing an ’80s Megadeth shirt and they return a cold stare. You say what a proper fan might say: Endgame was surprisingly good, huh? To which they respond: Oh, the shirt? I don’t really know this band, I just like how it looks.

If the aesthetic goods in the non-fan’s practice piggyback on the goods in the fan’s practice, then surely the fan is owed something: Deference? Recognition? Respect? This suggests a rule that structures aesthetic practices in general: aesthetic freeloaders should orbit and defer to their source practice; they should not spin away in an attempt to create their own orbit.

What could justify this rule? It’s not that bad to ignorantly wear band t-shirts—it just kind of sucks—so the idea that it’s morally wrong seems off. And if specific aesthetic practices are practical worlds unto themselves, then it’s not immediately obvious what resources Lopes’s theory has to make good on the wrongness. But here is a thought: the aesthetic value of band t-shirts lies in the way they promote aesthetic community. They anchor individuality, express it to others, and occasion community. The t-shirt freeloader misunderstands the aesthetic value of their clothes. Doing so is apt to produce confusion and missed connection, and so some deference to the aesthetic goods of a source practice is called for.

When the right deference, recognition, or respect is not forthcoming, one might be tempted to respond with contempt at the failure of aesthetic connection, at the disrespect for the norms of a good practice. But just beyond the confusion and letdown is a social opening—an opportunity for these different individuals to bond. The failure of communication is an opportunity for enlightenment: “Well, you should check out Rust in Peace. ‘Tornado of Souls’ is my favorite song.” That might take a little extra swagger on the part of the true fan. All the more reason to do it. Boldly opening up can create a bridge between worlds, and venturing into each other’s different aesthetic worlds can both expand and refine our own.

Bridges go both ways. As it turns out, Gary Holt’s encounter with the Kardashians made him a lot more Kardashianesque. Although he wore the “Kill the Kardashians” shirt because he wanted to “kill their careers”, they seem to have rubbed off on him. He now sells a lot of those t-shirts online and has expanded his collection of merch to include a range of anti-Kardashian graphics and products. A bona fide influencer.

One of my brothers is a deep player in the niche sneaker world. Like, Pizza Hut branded Reebok Pumps deep. And he got me a pair of niche sneakers that I wear now and then. I barely understand what I own. My main relationship with the shoes is that they are by far the most comfortable pair I have ever worn. So I rock them gladly, and almost every time I do some stranger comments: Nice kicks; dope shoes; fire *whatup headnod*. I happily reap the communal benefits of a practice I don’t understand.

The older I get, the more I get used to being ignored in public. Increasingly rare is the lovely random encounter, sparked by whatever, lighting up the day. Maybe I’ll buy half a dozen random band t-shirts, wear them around town, and see who I meet.


Tuesday, March 21, 2023

The Emotional Alignment Design Policy

I've been writing a lot recently about what Mara Garza and I, since 2015, have been calling the Design Policy of the Excluded Middle: Don't create AI systems of disputable moral status. Doing so, one courts the risk of either underattributing or overattributing rights to the systems, and both directions of error are likely to have serious moral costs.

(Violations of the Design Policy of the Excluded Middle are especially troubling when some well-informed experts reasonably hold that the AI systems are far below having humanlike moral standing and other well-informed experts reasonably hold that the AI systems deserve moral consideration similar to that of humans. The policy comes in various strengths in terms of (a.) how wide a range of uncertainty to tolerate, and (b.) how high a bar is required for legitimate disputability. More on this in a future post, I hope.)

Today, I want to highlight another design policy Garza and I advocated in 2015: The Emotional Alignment Design Policy.

Design AI systems so that ordinary users have emotional reactions appropriate to the systems' genuine moral status.

Joanna Bryson articulates one half of this design policy in her well-known (and in my view unfortunately titled) article "Robots Should Be Slaves". According to Bryson, robots -- and AI systems in general -- are disposable tools and should be treated as such. User interfaces that encourage people to think of AI systems as anything more than disposable tools -- for example, as real companions, capable of genuine pleasure or suffering -- should be discouraged. We don't want ordinary people fooled into thinking it would be morally wrong to delete their AI "friend". And we don't want people sacrificing real human interests for what are basically complicated toasters.

Now to be clear, I think tools -- and even rocks -- can and should be valued. There's something a bit gratingly consumerist about the phrase "disposable tools" that I am inclined to use here. But I do want to highlight the difference in the type of moral status possessed, say, by a beautiful automobile versus that possessed by a human, cat, or maybe even a garden snail.

The other half of the Emotional Alignment Design Policy, which goes beyond Bryson, is this: If we do someday create AI entities with real moral considerability similar to non-human animals or similar to humans, we should design them so that ordinary users will emotionally react to them in a way that is appropriate to their moral status. Don't design a human-grade AI capable of real pain and suffering, with human-like goals, rationality, and thoughts of the future, and put it in a bland box that people would be inclined to casually reformat. And if the AI warrants an intermediate level of concern -- similar, say, to a pet cat -- then give it an interface that encourages users to give it that amount of concern and no more.

I have two complementary concerns here.

One -- the nearer-term concern -- is that tech companies will be motivated to create AI systems that users emotionally attach to. Consider, for example, Replika, advertised as "the world's best AI friend". You can design an avatar for the Replika chatbot, give it a name, and buy it clothes. You can continue conversations with it over the course of days, months, even years, and it will remember aspects of your previous interactions. Ordinary users sometimes report falling in love with their Replika. With a paid subscription, you can get Replika to send you "spicy" selfies, and it's not too hard to coax it into erotic chat. (This feature was apparently toned down in February after word got out that children were having "adult" conversations with Replika.)

Now I'm inclined to doubt that ordinary users will fall in love with the current version of Replika in a way that is importantly different from how a child might love a teddy bear or a vintage automobile enthusiast might love their 1920 Model T. We know to leave these things behind in a real emergency. Reformatting or discontinuing Replika might be upsetting to people who are attached, but I don't think ordinary users would regard it as the moral equivalent of murder.

My worry is that it might not take too many more steps of technological improvement before ordinary users can become confused and can come to form emotional connections that are inappropriate to the type of thing that AI currently is. If we put our best chatbot in an attractive, furry pet-like body, give it voice-to-text and text-to-speech interfaces so that you can talk to it orally, give it an emotionally expressive face and tone of voice, give it long-term memory of previous interactions as context for new interactions -- well, then maybe users really do start to fall more seriously in love, or at least treat it as having the moral standing of a pet mammal. This might be so even with technology not much different from what we currently have, about which there is general expert consensus that it lacks meaningful moral standing.

It's easy to imagine how tech companies might be motivated to encourage inflated attachment to AI systems. Attached users will have high product loyalty. They will pay for monthly subscriptions. They will buy enhancements and extras. We already see a version of this with Replika. The Emotional Alignment Design Policy puts a lid on this: It should be clear that this is an interactive teddy bear, nothing more. Buy cute clothes for your teddy bear, sure! But forgo the $4000 cancer treatment you might give to a beloved dog.

The longer-term concern is the converse: that tech companies will be inclined to make AI systems disposable even if those AI systems, eventually, are really conscious or sentient and really deserve rights. This possibility has been imagined over and over in science fiction, from Asimov's robot stories through Star Trek: The Next Generation, Black Mirror, and Westworld.

Now there is, I think, one thing a bit unrealistic about those fictions: The disposable AI systems are designed to look human or humanoid in a way that engages users' sympathy. (Maybe that's a function of the fictional medium: From a fiction-writing perspective, humanlike features help engage readers' and viewers' sympathy.) More realistic, probably, is the idea that if the tech companies want to minimize annoying protests about AI rights, they will give the robots or AI systems bland, not-at-all-humanlike interfaces that minimize sympathetic reactions, such as the shipboard computer in Star Trek or the boxy robots in Interstellar.

[the boxy TARS robot from Interstellar; source]


The fundamental problem in both directions is that companies' profit incentives might misalign with AI systems' moral status. For some uses, companies might be incentivized to trick users into overattributing moral status, to extract additional money from overly attached users. In other cases, companies might be incentivized to downplay the moral status of their creations -- for example, if consciousness/sentience proves to be a useful feature to build into the most sophisticated future AI workers.

The Emotional Alignment Design Policy, if adhered to, will reduce these moral risks.

Thursday, March 16, 2023

Presentations, March 29 - April 6

I have some travel and talks coming up. If you're interested and in the area, and if the hosting institution permits, please come by!

Mar 29: Claremont McKenna College, Athenaeum Lecture: Falling in Love with Machines

Mar 30: University of Washington, Seattle, brown bag discussion: Moral Reflection and Moral Behavior [probably closed to outsiders]

Mar 30: University of Washington, Seattle, brown bag discussion: The Demographics of Philosophy [probably closed to outsiders]

Mar 31: University of Puget Sound, Undergraduate Philosophy Conference keynote: Does Studying Ethics Make People More Ethical?

Apr 2 (remote): Northeastern University, Information Ethics Roundtable: Let's Hope We're Not Living in a Simulation

Apr 3: University of California, Office of the President (Oakland): Principles Governing Online Majors. [This one is definitely not public, but I welcome readers' thoughts about what University of California policy should be regarding the approval of online majors.]

Apr 5: American Philosophical Association, Pacific Division (San Francisco), Society for the Philosophy of Animal Minds: The Mind of a Garden Snail, or What Is It to Have a Mind?

Apr 5: American Philosophical Association, Pacific Division (San Francisco), Science Fiction and Philosophy Society: TBD, either Science Fiction as Philosophy or Science Fiction and Large Language Models.

Apr 6: American Philosophical Association, Pacific Division (San Francisco), Book Symposium on David Chalmers' Reality+: Let's Hope We're Not Living in a Simulation

Yes, that's nine presentations in nine days, on seven different topics. Perhaps I'm spreading myself a little thin!

Tuesday, March 14, 2023

Don't Create AI Systems of Disputable Moral Status (Redux)

[originally published at Daily Nous, Mar. 14, as part of a symposium on large language models, ed. Annette Zimmerman]

Engineers will likely soon be able to create AI systems whose moral status is legitimately disputable. We will then need to decide whether to treat such systems as genuinely deserving of our care and solicitude. Error in either direction could be morally catastrophic. If we underattribute moral standing, we risk unwittingly perpetrating great harms on our creations. If we overattribute moral standing, we risk sacrificing real human interests for AI systems without interests worth the sacrifice.

The solution to this dilemma is to avoid creating AI systems of disputable moral status.

Both engineers and ordinary users have begun to wonder whether the most advanced language models, such as GPT-3, LaMDA, and Bing/Sydney, might be sentient or conscious, and thus deserving of rights or moral consideration. Although few experts think that any currently existing AI systems have a meaningful degree of consciousness, some theories of consciousness imply that we are close to creating conscious AI. Even if you the reader personally suspect AI consciousness won’t soon be achieved, appropriate epistemic humility requires acknowledging doubt. Consciousness science is contentious, with leading experts endorsing a wide range of theories.

Probably, then, it will soon be legitimately disputable whether the most advanced AI systems are conscious. If genuine consciousness is sufficient for moral standing, then the moral standing of those systems will also be legitimately disputable. Different criteria for moral standing might produce somewhat different theories about the boundaries of the moral gray zone, but most reasonable criteria—capacity for suffering, rationality, embeddedness in social relationships—admit of interpretations on which the gray zone is imminent.

We might adopt a conservative policy: Only change our policies and laws once there’s widespread consensus that the AI systems really do warrant care and solicitude. However, this policy is morally risky: If it turns out that AI systems have genuine moral standing before the most conservative theorists would acknowledge that they do, the likely outcome is immense harm—the moral equivalents of slavery and murder, potentially at huge scale—before law and policy catch up.

A liberal policy might therefore seem ethically safer: Change our policies and laws to protect AI systems as soon as it’s reasonable to think they might deserve such protection. But this is also risky. As soon as we grant an entity moral standing, we commit to sacrificing real human interests on its behalf. In general, we want to be able to control our machines. We want to be able to delete, update, or reformat programs, assigning them to whatever tasks best suit our purposes.

If we grant AI systems rights, we constrain our capacity to manipulate and dispose of them. If we go so far as to grant some AI systems equal rights with human beings, presumably we should give them a path to citizenship and the right to vote, with potentially transformative societal effects. If the AI systems genuinely are our moral equals, that might be morally required, even wonderful. But if liberal views of AI moral standing are mistaken, we might end up sacrificing substantial human interests for an illusion.

Intermediate policies are possible. But it would be amazing good luck if we happened upon a policy that gave the whole range of advanced AI systems exactly the moral consideration they deserve, no more and no less. Our moral policies for non-human animals, people with disabilities, and distant strangers are already confused enough, without adding a new potential source of grievous moral error.

We can avoid the underattribution/overattribution dilemma by declining to create AI systems of disputable moral status. Although this might delay our race toward ever fancier technologies, delay is appropriate if the risks of speed are serious.

In the meantime, we should also ensure that ordinary users are not confused about the moral status of their AI systems. Some degree of attachment to AI “friends” is probably fine or even desirable—like a child’s attachment to a teddy bear or a gamer’s attachment to their online characters. But users know the bear and the character aren’t sentient, and they will readily abandon them in an emergency.

But if a user is fooled into thinking that a non-conscious system really is capable of pleasure and pain, they risk being exploited into sacrificing too much on its behalf. Unscrupulous technology companies might even be motivated to foster such illusions, knowing that it will increase customer loyalty, engagement, and willingness to pay monthly fees.

Engineers should either create machines that plainly lack any meaningful degree of consciousness or moral status, making clear in the user interface that this is so, or they should go all the way (if ever it’s possible) to creating machines on whose moral status reasonable people can all agree. We should avoid the moral risks that the confusing middle would force upon us.

----------------------------------------------------------

Notes

For a deeper dive into these issues, see “The Full Rights Dilemma for AI Systems of Debatable Personhood” (in draft) and “Designing AI with Rights, Consciousness, Self-Respect, and Freedom” (with Mara Garza; in Liao, ed., The Ethics of Artificial Intelligence, Oxford: 2020).

See also “Is it time to start considering personhood rights for AI chatbots?” (with Henry Shevlin), in the Los Angeles Times (Mar 5).

[image: Dall-E 2 "robot dying in a fire"]

Thursday, March 09, 2023

New Paper in Draft: Let's Hope We're Not Living in a Simulation

I'll be presenting an abbreviated version of this at the Pacific APA in April, as a commentary on David Chalmers' book Reality+.

According to the simulation hypothesis, we might be artificial intelligences living in a virtual reality.  Advocates of this hypothesis, such as Chalmers, Bostrom, and Steinhart, tend to argue that the skeptical consequences aren’t as severe as they might appear.  In Reality+, Chalmers acknowledges that although he can’t be certain that the simulation we inhabit, if we inhabit a simulation, is larger than city-sized and has a long past, simplicity considerations speak against those possibilities.  I argue, in contrast, that cost considerations might easily outweigh considerations of simplicity, favoring simulations that are catastrophically small or brief – small or brief enough that a substantial proportion of our everyday beliefs would be false or lack reference in virtue of the nonexistence of things or events whose existence we ordinarily take for granted.  More generally, we can’t justifiably have high confidence that if we live in a simulation it’s a large and stable one.  Furthermore, if we live in a simulation, we are likely at the mercy of ethically abhorrent gods, which makes our deaths and suffering morally worse than they would be if there were no such gods.  There are reasons both epistemic and axiological to hope that we aren’t living in a simulation.

Paper here.

As always, comments welcome!

Wednesday, March 01, 2023

God Stumbles Over the Power Cord

Princeton University Press generously hired an illustrator to create some images for my forthcoming book, The Weirdness of the World (January 2024).

Here's an illustration for my chapter "Kant Meets Cyberpunk" -- a revised and expanded version of this article from 2019, concerning the epistemic and metaphysical consequences of living in a computer simulation.