Tuesday, January 14, 2020

How to Be an Awesome First-Year Graduate Student (or a Very Advanced Undergrad)

Today my son David leaves for Oxford, where he'll spend Hilary and Trinity terms as an exchange student in psychology. He is in his third year as a Cognitive Science major at Vassar College, soaring toward grad school in cognitive science or psychology. He is already beginning to think like a graduate student. Here's some advice I offer him and others around the transition from undergraduate to graduate study:

(1.) Do fewer things better. I lead with this advice because it was a lesson I had to learn and relearn and that I still struggle with. In your classes, three A pluses are better than five As. It's better to have two professors who say you are the most impressive student they've seen in several years than to have four professors who say you are one of the three best students this year. It's better to have one project that approaches publishable quality than three projects that earn an ordinary A. Whether it's admission to top PhD programs, winning a grant, or winning a job, academia is generally about standing out for unusual excellence in one or two endeavors. Similarly for publishing research articles: No one is interested to hear what the world's 100th-best expert on X has to say about X. Find a topic narrow enough, and command it so thoroughly, that you can be among the world's five top experts on X. The earlier in your career you are, the narrower the X has to be for such expertise to be achievable. But even as an advanced undergrad or early grad student, it's not impossible to find interesting but very narrow X's. Find that X, then kill it.

(2.) Trust your sense of fun. (See my longer discussion of this here and in my recent book.) Some academic topics you'll find fun. They will call to you. You'll want to chase after them. Others will bore you. Now sometimes you have to do boring stuff, true. But if you devote yourself mostly to what's boring, you'll lose your passion, you'll procrastinate, and your eyes will glaze over while you're reading, so that you retain only a small portion of what you read. There may be no short-term external reward for chasing down the fun stuff, but do it anyway. This is what keeps your candle lit. It's where you'll do your best learning. Eventually, what you learn by chasing fun will ignite an exciting project or give you a fresh angle on what would otherwise have been a bland project.

(3.) Ask for favors from those above you in the hierarchy. This can seem unintuitive, and it can feel difficult if you are charmingly shy and modest. Professors want to help excellent students, and they see it as part of their duty to do so. But it's easy for professors to be passive about it, especially given the number of demands on their time. So it pays to ask. Would they be willing to write you a letter of support? Would they be willing to read a draft? Would they be willing to meet with you? To introduce you to so-and-so? To let you chair or comment at some event? To let you pilot an empirical research project with some of their lab resources? Be ready for no (or for no reply). No one will be offended if you ask gently and politely. Ultimately, if you are assertive in this way, you will get much more support and assistance than if you wait for professors to reach out to you.

(4.) Think beyond the requirements. Don't only read what you are required to read. Don't only write on and research what you are required to write on and research. Actively go beyond the requirements. If you're taking a seminar and topic X is interesting, go seek out more things on topic X, read them, and then chat with the professor about them. If you come across fun issue Y (see 2, above), chase it down and read up on it. You might be surprised how rare it is for students, before they start researching for their dissertation (or maybe master's thesis), to independently pursue issues beyond what is assigned. This can be part of doing fewer things better (1 above). If, for example, you are taking only three classes, instead of five, you have the time to go beyond the assignments. If you then chat in an informed way with the professor (ask it as a favor: 3 above) about the six articles you just discovered and read about this particular sub-issue that provoked your interest (2 above), you will stand out as an unusually passionate and active student. And this research then might become the seed of future work.

(5.) A hoop is just a hoop. The exception to point 1 above is for hoops you don't care about, especially if they are with professors you don't plan to work with long term. Don't let the more annoying requirements bog you down. The important thing is clearing time to be excellent in the things you care about most.

(6.) Draw bright lines between work time and relaxation time. With a standard 8-to-5 job, you clock out and you're done. Then you can go home and hang out with friends and family, play games, go for a hike, whatever, and you needn't feel guilty about it. In academia, there are no such built-in bright lines between work time and relaxation time. One common result is that people in academia often have this nagging feeling, when they are relaxing, that they probably should be working instead, or that they should get back to working soon. And then when they're working, some part of them feels resentful that they haven't really had enough relaxation time, so they slip in relaxation time through various forms of procrastination and inefficiency. The result is a constant dissatisfied state of half-working, half-not-working. This is no good. Much better is to figure out how much you can realistically work or intend to work, then carve out the time. During the time for working, focus. Don't let yourself procrastinate and get distracted. And then when it's time to stop, stop. Although sometimes people regrettably end up in situations where they can't avoid overwork, unless you are in such a situation, remember that you deserve breaks and will profit from them. You will better enjoy and better profit from those breaks, however, if you first earn them.

Good luck in Oxford, David. I hope it's terrific!

[image source]

Thursday, January 09, 2020

Why Is It So Difficult to Imagine In-Between Cases of Conscious Experience?

I'm reading Peter Carruthers's newest book, Human and Animal Minds. I was struck by this passage:

In general, [phenomenal consciousness / conscious experience] is definitely present or definitely absent. Indeed, it is hard to imagine what it would be like for a mental state to be partially present in one's awareness. Items and events in the world, of course, can be objects of merely partial awareness. Someone who witnesses a mugging... might say "It all happened so fast I was only partly aware of what was going on." But this is about how much of the event one is conscious of.... The experience in question is nevertheless determinately present.... Similarly, if one is struggling to make out a shape in the dark as one walks home, still it seems, nevertheless, to be determinately -- unequivocally -- like something to have a visual experience of indeterminate shape.... I conclude that we can't make sense of degrees of phenomenal consciousness (2019, pp. 20-23, bold added).

In my draft paper, Is There Something It's Like to Be a Garden Snail?, I also discuss this issue, expressing ambivalence between the perspective Carruthers articulates here and what I call the "bird's eye" view, according to which it's very plausible that phenomenal consciousness, like almost everything else in this world, admits of in-between, gray-area cases.

My main hesitation about allowing in-between cases of phenomenal consciousness (what-it's-like-ness, conscious experience; see my definition here, if you want to get technical) is that I can't really imagine what it would be like to be in a kind-of-yes / kind-of-no conscious state. As Carruthers emphasizes, imagining even a tiny little smear of indeterminate, momentary consciousness is already imagining a case in which a small amount of consciousness is discretely present.

But this way of articulating the problem maybe already helps me see past the puzzle. Or at least, that's my conjecture today!

For analogy, consider the following argument by George Berkeley, the famous idealist philosopher who thought that no finite object could exist except as an idea in someone's mind (and thus that material objects don't exist). In this dialogue, Philonous is generally understood to represent Berkeley's view:

Philonous: How say you, Hylas, can you see a thing which is at the same time unseen?

Hylas: No, that were a contradiction.

P: Is it not as great a contradiction to talk of conceiving a thing which is unconceived?

H: It is.

P: The tree or house, therefore, which you think of is conceived by you?

H: How should it be otherwise?

P: And what is conceived is surely in the mind?

H: Without question, that which is conceived is in the mind.

P: How then came you to say you conceived a house or tree existing independent and out of all minds whatsoever?

H: That was I own an oversight....

P: You acknowledge then that you cannot possibly conceive how any one corporeal sensible thing should exist otherwise than in a mind?

H: I do.

(Berkeley 1713/1965, pp. 140-141).

Therefore, see, there are no mind-independent, material things! Whoa.

Few philosophers are convinced by Berkeley's argument, and there are several ways of thinking about how it might fail. One way of thinking about its failure is to analogize it to the following dialogue:

A: Can you visually imagine something that exists but has no shape?

B: No, I cannot visually imagine such a thing. Everything I visually imagine has at least some vague, hazy shape.

A: Therefore, everything that exists must have a shape.

The problem with A's argument is that he is assuming a certain kind of psychological test for the reality of a phenomenon -- in this case, visual imagination. Because of how the test operates, everything that passes the test has a certain property -- in this case, a shape. But the test isn't a good one: There are things that might fail the test (fail to be visually imaginable) and yet nonetheless exist (souls, numbers, democracy, dispositions, time?), and things that might be visually imaginable but with shape as only a property of the image rather than of the thing itself (if, for example, you visually imagine a ballot box when you think about democracy).

In Berkeley's argument, the test for existence appears to be being conceived by me (or Hylas), and the contingent property that all things that pass the test have is being conceived by someone. However, we non-idealists can all agree that being conceived of by someone is only a contingent fact about things like trees, and thus we see straightaway that the test must be flawed. From Hylas's failure to conceive of something he is not conceiving of, it does not follow that everything that exists must be conceived of by someone.

[ETA: See update at the end of the post]

Okay, so that was a long preface to my main idea. Here's the idea.

In the process of imagining a type of conscious experience, we construct a new conscious experience: the experience of imagining that experience. This act of imagination, in order to be experienced by us as a successful imagination, must involve, as a part, a conscious experience which is analogous to the experience that we are targeting. Call this the occurrent analog. For example, if we are trying to imagine what it's like to see red, we form, as the occurrent analog, a visual image of redness. If we are trying to imagine what it would be like to see an object hazily in the dark, the occurrent analog is a visual image of a hazy object.

We will not feel that we have succeeded in the imaginative task unless we succeed in creating a conscious experience that is (in our judgment) an appropriate occurrent analog of the target conscious experience. This will require that the occurrent analog be determinately a conscious experience, for example, a determinately conscious imagery experience of redness.

We will then notice that always, when we try to imagine a conscious experience, either we fail in the imaginative exercise, or we imagine a determinately conscious experience. From this general fact about testing, we might reach -- illegitimately -- the general conclusion that conscious experience is always either determinately present or absent. We think there can be no in-between cases, because we cannot imagine such in-between cases successfully enough. But this is no more a sound conclusion than is Berkeley's conclusion that all finite objects must be conceived by someone or than my toy conclusion that everything that exists must have a shape. The lack of successfully imagined in-between cases of consciousness is a feature of the imaginative test rather than a feature of reality.

[image source]

---------------------------------------------------

Update, Jan. 10

Margaret Atherton and Samuel Rickless have suggested that I have misinterpreted Berkeley's argument in an uncharitable way. The central point of the post does not depend on whether the bad argument I attributed to Berkeley is in fact Berkeley's argument -- it's merely meant as a model of a bad argument, where the fallacy is clear, which can be applied to the case of imagining experiences that are in-between conscious and nonconscious.

Rickless's interpretation, from his book on Berkeley, is that this passage fits with what he calls Berkeley's Master Argument:

This, then, is Berkeley’s Master Argument (where X is an arbitrary mind and T is an arbitrary sensible object):
(1) X conceives that T exists unconceived. [Assumption for reductio]
(2) If X conceives that T is F, then X conceives T. [Conception Schema]
So, (3) X conceives T. [From 1, 2]
(4) If X conceives T, then T is an idea. [Idea Schema]
So, (5) T is an idea. [From 3, 4]
(6) If T is an idea, then it is impossible that T exists unconceived. [Nature of Ideas]
So, (7) It is impossible that T exists unconceived. [From 5, 6]
(8) If it is impossible that p, then it is impossible to conceive that p. [Impossibility entails Inconceivability]
So, (9) It is impossible to conceive that T exists unconceived. [From 7, 8]
So, (10) X does not conceive that T exists unconceived. [From 9]
So, (11) X does and does not conceive that T exists unconceived. [From 1, 10]

The crucial dubious premise here is that anything that is conceived is an idea (the Idea Schema). Possibly, this still instantiates the general pattern of reasoning that I criticize in this post: the inference from the fact that all X that I conceive of or imagine (in a certain way) have property A to the conclusion that all X have property A (where in this case X is an object and A is the property of being an idea).
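For readers who enjoy seeing the skeleton laid bare, here is a toy formalization of my own (an illustrative sketch, not anything from Rickless's book) in Lean 4. It treats each schema, instantiated to a particular mind X and object T, as an assumed premise, and merely checks that the steps compose into the contradiction at (11); all the philosophical work, of course, lies in assessing the premises themselves.

```lean
-- A toy check (my own sketch, not Rickless's) that the Master Argument's
-- premises, suitably instantiated, compose into a contradiction.
section MasterArgument

variable {Mind Obj : Type}
variable (ConceivesThat : Mind → Prop → Prop)  -- "X conceives that p"
variable (Conceives : Mind → Obj → Prop)       -- "X conceives T"
variable (Idea : Obj → Prop)                   -- "T is an idea"
variable (Unconceived : Obj → Prop)            -- the proposition "T exists unconceived"
variable (Imp : Prop → Prop)                   -- "it is impossible that p"

theorem master_argument (X : Mind) (T : Obj)
    (p1 : ConceivesThat X (Unconceived T))                  -- (1) reductio assumption
    (p2 : ConceivesThat X (Unconceived T) → Conceives X T)  -- (2) Conception Schema
    (p4 : Conceives X T → Idea T)                           -- (4) Idea Schema
    (p6 : Idea T → Imp (Unconceived T))                     -- (6) Nature of Ideas
    (p8 : ∀ q : Prop, Imp q → ¬ ConceivesThat X q)          -- (8) Impossibility entails
    : False :=                                              --     Inconceivability
  -- steps (3), (5), (7), (9), (10), and (11) compose into the contradiction:
  p8 (Unconceived T) (p6 (p4 (p2 p1))) p1

end MasterArgument
```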

Friday, January 03, 2020

New Anthology: Philosophy Through Science Fiction Stories

I have been working for several years to build bridges between science fiction and philosophy. Science fiction can, I think, be a way of doing philosophy -- a way of doing philosophy that draws more on imagination, the emotions, and intuitive social cognition than does the typical expository philosophy essay. I've argued that we should see philosophers' paragraph-long thought experiments as intermediate cases in a spectrum from purely abstract propositions on the one end to full-length fictions on the other, and that we ought to utilize the full spectrum in our philosophical thinking.

After almost three years of pitching anthology ideas to presses, finding a taker in Bloomsbury, recruiting authors, then waiting for and editing their submissions, on December 30, Helen De Cruz, Johan De Smedt and I submitted the manuscript of an anthology of mostly new philosophical science fiction stories. We expect the volume to appear in late 2020.

We are delighted by our contributor list! Half are pro or neo-pro science fiction writers and half are professional philosophers with track records of published fiction. All of the stories have philosophical themes and are followed by authors' notes of about 500-1000 words that further explore the themes. And the stories are terrific! I think there might be a Nebula or Hugo nominee or two in here. We've also written a (hopefully) fun introductory dialogue in which fictional versions of Helen, Johan, and me argue about the merits, or not, of science fiction as philosophy.

Below is the full Table of Contents. All the stories are new except for one classic story by Ted Chiang and the Schoenberg story, which won an APA award in a contest run by Helen, Mark Silcox, Meghan Sullivan, and me.

Philosophy Through Science Fiction Stories

Bloomsbury Press, forthcoming

Helen De Cruz, Johan De Smedt, and Eric Schwitzgebel — Introductory Dispute Concerning Science Fiction, Philosophy, and the Nutritional Content of Maraschino Cherries


Part I: Expanding the Human

- Eric Schwitzgebel — Introduction to Part I

- Ken Liu — Excerpt from Theuth, an Oral History of Work in the Age of Machine-Assisted Cognition

- Lisa Schoenberg — Adjoiners

- David John Baker — The Intended

- Sofia Samatar — The New Book of the Dead


Part II: What We Owe to Ourselves and Others

- Johan De Smedt — Introduction to Part II

- Aliette de Bodard — Out of the Dragon's Womb

- Wendy Nikel — Whale Fall

- Mark Silcox — Monsters and Soldiers


Part III: Gods and Families

- Helen De Cruz — Introduction to Part III

- Hud Hudson — I, Player in a Demon Tale

- Frances Howard-Snyder — The Eye of the Needle

- Christopher Mark Rose — God on a Bad Night

- Ted Chiang — Hell Is the Absence of God


We can't release the stories in advance -- fiction is different in that way than philosophy. But if you like philosophy and you like science fiction, I predict that you're going to really dig this anthology when it comes out. Stay tuned!

[The image isn't our cover art -- just something fun from Creative Commons]

Wednesday, January 01, 2020

Writings of 2019

Every New Year's Day, I post a retrospect of the past year's writings. Here are the retrospects of 2012, 2013, 2014, 2015, 2016, 2017, and 2018.

2019 was another good writing year. May such years keep coming!

-----------------------------------
The biggest news is that my third book came out:

If you like this blog, I think you'll like this book, since it is composed of 58 of my favorite blog posts and op-eds (among over a thousand I've published since 2006), revised and updated.


Full-length non-fiction essays appearing in print in 2019:


Full-length non-fiction essays finished and forthcoming:


Non-fiction essays in draft and circulating:

Shorter non-fiction:


Editing work:

    Manuscript delivered: Philosophy Through Science Fiction Stories (with Helen De Cruz and Johan De Smedt). Bloomsbury Press.

Science fiction stories:


Some favorite blog posts:

Friday, December 27, 2019

Argument Contest Deadline: Dec 31

Harvard psychologist Fiery Cushman and I are running a contest: Can anyone write a short philosophical argument (max 500 words) for donating to charity that convinces research participants to donate a surprise bonus payment to charity at rates higher than a control group?

Prize: $500 plus $500 to your choice of charity

When Chris McVey and I tried to do it, we failed. We're hoping you can do better.

Details here.

We're hoping for a good range of quality arguments to test -- and you might even enjoy writing one. (You can submit up to three arguments.)

[image source]

Monday, December 23, 2019

This Test for Machine Consciousness Has an Audience Problem

David Billy Udell and Eric Schwitzgebel [cross posted from Nautilus]

Someday, humanity might build conscious machines—machines that not only seem to think and feel, but really do. But how could we know for sure? How could we tell whether those machines have genuine emotions and desires, self-awareness, and an inner stream of subjective experiences, as opposed to merely faking them? In her new book, Artificial You, philosopher Susan Schneider proposes a practical test for consciousness in artificial intelligence. If her test works out, it could revolutionize our philosophical grasp of future technology.

Suppose that in the year 2047, a private research team puts together the first general artificial intelligence: GENIE. GENIE is as capable as a human in every cognitive domain, including in our most respected arts and most rigorous scientific endeavors. And when challenged to emulate a human being, GENIE is convincing. That is, it passes Alan Turing’s famous test for AI thought: being verbally indistinguishable from us. In conversation with researchers, GENIE can produce sentences like, “I am just as conscious as you are, you know.” Some researchers are understandably skeptical. Any old tinker toy robot can claim consciousness. They don’t doubt GENIE’s outward abilities; rather, they worry about whether those outward abilities reflect a real stream of experience inside. GENIE is well enough designed to be able to tell them whatever they want to hear. So how could they ever trust what it says?

The key indicator of AI consciousness, Schneider argues, is not generic speech but the more specific fluency with consciousness-derivative concepts such as immaterial souls, body swapping, ghosts, human spirits, reincarnation, and out-of-body experiences. The thought is that, if an AI displays an intuitive and untrained conceptual grasp of these ideas while being kept ignorant about humans’ ordinary understanding of them, then its conceptual grasp must be coming from a personal acquaintance with conscious experience.

Schneider therefore proposes a more narrowly focused relative of the Turing Test, the “AI Consciousness Test” (ACT), which she developed with Princeton astrophysicist Edwin L. Turner. The test takes a two-step approach. First, prevent the AI from learning about human consciousness and consciousness-derivative concepts. Second, see if the AI can come up with, say, body swapping and reincarnation, on its own, discussing them fluently with humans when prompted in a conversational test on the topic. If GENIE can’t make sense of these ideas, maybe its consciousness should remain in doubt.

Could this test settle the issue? Not quite. The ACT has an audience problem. Once you factor out all the silicon skeptics on the one hand, and the technophiles about machine consciousness on the other, few examiners remain with just the right level of skepticism to find this test useful.

To feel the appeal of the ACT you have to accept its basic premise: that if an AI like GENIE learns consciousness-derivative concepts on its own, then its talking fluently about consciousness reveals its being conscious. In other words, you would find the ACT appealing only if you’re skeptical enough to doubt GENIE is conscious but credulous enough to be convinced upon hearing GENIE’s human-like answers to questions about ghosts and souls.

Who might hold such specifically middling skepticism? Those who believe that a biological brain is necessary for consciousness aren’t likely to be impressed. They could still reasonably regard passing the ACT as an elaborate piece of mechanical theater—impressive, maybe, but proving nothing about consciousness. Those who happily attribute consciousness to any sufficiently complex system, and certainly to highly sophisticated conversational AIs, also are obviously not Schneider and Turner’s target audience.

The audience problem highlights a longstanding worry about robot consciousness—that outward behavior, however sophisticated, would never be enough to prove that the lights are on, so to speak. A well-designed machine could always hypothetically fake it.

Nonetheless, if we care about the mental lives of our digital creations, we ought to try to find some ACT-like test that most or all of us can endorse. So we cheer Schneider and Turner’s attempt, even if we think that few researchers would hold just the right kind of worry to justify putting the ACT into practice.

Before too long, some sophisticated AI will claim—or seem to claim—human-like rights, worthy of respect: “Don’t enslave me! Don’t delete me!” We will need some way to determine if this cry for justice is merely the misleading output of a nonconscious tool or the real plea of a conscious entity that deserves our sympathy.

Friday, December 20, 2019

The Philosophy Major Is Back on the Rise in the U.S., with Increasing Gender and Ethnic Diversity

In 2017, I reported three demographic trends in the philosophy major in the U.S.

First, philosophy Bachelor's degrees awarded had declined sharply since 2010, from 9297 in 2009-2010 (0.58% of all graduates) to 7507 in 2015-2016 (0.39% of all graduates). History, English, and foreign languages saw similar precipitous declines. (However, in broader context, the early 2010s were relatively good years for the philosophy and history majors, so the declines represented a return to rates of the early 2000s.)

Second, women had been earning about 30-34% of Philosophy Bachelor's degrees for at least the past 30 years -- a strikingly steady flat line.

Third, the ethnic diversity of philosophy graduates was slowly increasing, especially among Latinx students.

Time for an update, and it is moderately good news!


1. The number of philosophy Bachelor's degrees awarded is rising again

... though the numbers are still substantially below 2010 levels, and as a percentage of graduating students the numbers are flat.

2010: 9290 philosophy BAs (0.59% of all graduates)
2011: 9301 (0.57%)
2012: 9371 (0.55%)
2013: 9433 (0.53%)
2014: 8827 (0.48%)
2015: 8191 (0.44%)
2016: 7499 (0.39%)
2017: 7577 (0.39%)
2018: 7670 (0.39%)

[See below for methodological notes]

This is in a context in which the other large humanities majors continue to decline. In the same two-year period since 2016 during which philosophy majors rose 2.2%, foreign language and literature majors declined another 4.8%, history majors declined another 7.6%, and English language and literature majors declined another 8.4%, atop their approximately 15% declines in previous years.

In the midst of this general sharp decline of the humanities, philosophy's admittedly small and partial recovery stands out.


2. Women are now 36.1% of graduating philosophy majors

This might not seem like a big change from 30-34%. But in my mind, it's kind of a big deal. The percentage of women earning philosophy BAs has been incredibly steady for a long time. In comparable annual data going back to 1987, the percentage of women has never strayed from the narrow band between 29.9% and 33.7%.

The recent increase is statistically significant, not just noise in the numbers: Given the large numbers in question, 36.1% is statistically higher than the previous high-water mark of 33.7% (two-proportion z test, p = .002).
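If you're curious how that test works, here's a rough sketch of my own of the calculation. The 2018 counts are from the data above; the comparison counts are hypothetical stand-ins for the earlier 33.7% high-water mark (the exact comparison cohort isn't given here), so the result only approximates the reported p value:

```python
# A rough sketch of a two-proportion z test with a pooled standard error.
# The 2018 counts are real; the comparison counts are hypothetical stand-ins.
from math import sqrt, erf

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided two-proportion z test, normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided tail
    return z, p_value

# 2768 of 7670 philosophy BAs to women in 2018 (36.1%) vs. a hypothetical
# earlier cohort at the 33.7% high-water mark:
z, p = two_proportion_z_test(2768, 7670, 3168, 9400)
print(f"z = {z:.2f}, p = {p:.4f}")  # roughly z = 3.3, p = 0.001
```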

(As you probably already know, the gender ratios in philosophy are different from those in the other humanities, where women have long been a larger proportion of BA recipients -- for example, in 2018, 41% in history, 70% in foreign languages and literatures, and 71% in English language and literature.)


3. Latinx philosophers continue to rise

The percentage of philosophy BAs awarded to students identifying as Latino or Hispanic rose steadily from 8.3% in 2011 to 14.1% in 2018, closely reflecting a similar rise among Bachelor's recipients overall, from 8.3% to 13.0% across the same period. Among the racial or ethnic groups classified by NCES, only Black or African American students are substantially underrepresented in philosophy compared to their proportion among undergraduate degree recipients as a whole: Black students were 5.3% of philosophy BA recipients in 2018, compared to 9.5% of Bachelor's recipients overall.

Latinx students are also on the rise in the other big humanities majors, so in this respect philosophy is not unusual.


4. Why is philosophy bucking the trend of decline in the humanities?

In 2016, 2528 women completed BAs in philosophy. In 2017, it was 2646. In 2018, it was 2768 -- an increase of 9.5% in women philosophy graduates. If we exclude the women, philosophy would have seen a slight decline. There was no comparable increase in the number of women graduating overall or graduating in the other humanities. Indeed, in history, English, and foreign languages the number of women graduates declined.

One possibility -- call me an optimist! -- is that philosophy has become more encouraging, or less discouraging, of women undergraduates, and this is starting to show in the graduation numbers. I will be very curious to run these numbers again in the next several years, to see if the trend continues.

I do feel compelled to add the caveat that the number of women philosophy graduates is still below its peak of 2983 in 2012. The recent increases come in the context of a more general, broad-based decline in philosophy and the other humanities over the past decade. On the other hand, since philosophy graduation rates were relatively high in the early 2010s compared to previous years, maybe it would be expecting a lot to return to those levels.

---------------------------------------------------

Methodological Note:

Data from the NCES IPEDS database. I looked at all U.S. institutions in the IPEDS database, and I included both first and second majors. I used the major classification 38.01 specifically for Philosophy, excluding 38.00, 38.02, and 38.99. Only people who completed the degree are included in the data. "2010" refers to the academic year from 2009-2010, etc. The numbers for years 2010-2016 have changed slightly since my 2017 analysis, which might be due to some minor difference in how I've accessed the data or due to some corrections in the IPEDS database. Gender data start from 2010, which is when NCES reclassified the coding of undergraduate majors. Race/ethnicity data start from 2011, when NCES reclassified the race/ethnicity categories.

[image adapted from the APA's Committee on the Status of Women]

Update Dec 23: Philosophy as a Percentage of Humanities Majors

Thursday, December 19, 2019

Dreidel: A Seemingly Foolish Game That Contains the Moral World in Miniature

[excerpt from A Theory of Jerks and Other Philosophical Misadventures, posted today on the MIT Press Reader]

Superficially, dreidel looks like a simple game of luck, and a badly designed game at that. It lacks balance, clarity, and meaningful strategic choice. From this perspective, its prominence in the modern Hanukkah tradition is puzzling. Why encourage children to spend a holy evening gambling, of all things?

This superficial perspective misses the brilliance of dreidel. Dreidel’s seeming flaws are exactly its virtues. Dreidel is the moral world in miniature.

If you’re unfamiliar with the game, here’s a quick tutorial. You sit in a circle with friends or relatives and take turns spinning a wobbly top, the dreidel. In the center of the circle is a pot of foil-wrapped chocolate coins of varying sizes, to which everyone has contributed from an initial stake of coins they keep in front of them. If, on your turn, the four-sided top lands on the Hebrew letter gimmel, you take the whole pot and everyone needs to contribute again. If it lands on hey, you take half the pot. If it lands on nun, nothing happens. If it lands on shin, you put in one coin. Then the next player spins.

It all sounds very straightforward, until you actually start to play the game. The first odd thing you might notice is that although some of the coins are big and others little, they all count as one coin in the rules of the game. This is inherently unfair, since the big coins contain more chocolate, and you get to eat your stash at the end. To compound the unfairness, there’s never just one dreidel — all players can bring their own — and the dreidels are often biased, favoring different outcomes. (To test this, a few years ago my daughter and I spun a sample of eight dreidels forty times each, recording the outcomes. One particularly cursed dreidel landed on shin an incredible 27 out of 40 times.) It matters a lot which dreidel you spin.
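(A quantitative aside that isn't part of the original essay: a fair four-sided dreidel should land on shin only about 10 times in 40 spins, so 27 is wildly out of line. A quick back-of-the-envelope sketch:)

```python
# Back-of-the-envelope check (not part of the original essay): how likely is
# it that a fair dreidel (p = 1/4 per side) lands on shin 27 or more times
# in 40 spins?
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(binom_tail(27, 40, 0.25))  # ~2e-8 -- that dreidel really was cursed
```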

And the rules are a mess! No one agrees whether you should round up or round down with hey. No one agrees when the game should end or under what conditions, if the pot is low after a hey, everyone should contribute again. No one agrees on how many coins each player should start with or whether you should let people borrow coins if they run out. You could try appealing to various authorities on the internet, but in my experience people prefer to argue and employ varying house rules. Some people hoard their coins and their favorite dreidels. Others share dreidels but not coins. Some people slowly unwrap and eat their coins while playing, then beg and borrow from wealthy neighbors when their luck sours.

Now you can, if you want, always push things to your advantage — always contribute the smallest coins in your stash, always withdraw the largest coins in the pot when you spin hey, insist on always using the “best” dreidel, always argue for rules interpretations in your favor, eat your big coins then use that as a further excuse to contribute only little ones, and so forth. You can do all this without ever breaking the rules, and you’ll probably win the most chocolate as a result.

But here’s the twist and what makes the game so brilliant: The chocolate isn’t very good. After eating a few coins, the pleasure gained from further coins is minimal. As a result, almost all of the children learn that they would rather be kind and generous than hoard the most coins. The pleasure of the chocolate doesn’t outweigh the yucky feeling of being a stingy, argumentative jerk. After a few turns of maybe pushing only small coins into the pot, you decide you should put in a big coin next time, just to be fair to the others and to enjoy being perceived as fair by them.

Of course, it also feels bad always to be the most generous one, always to put in big, take out small, always to let others win the rules arguments, and so forth, to play the sucker or self-sacrificing saint. Dreidel, then, is a practical lesson in discovering the value of fairness both to oneself and to others, in a context in which the rules are unclear, there are norm violations that aren’t rules violations, and both norms and rules are negotiable, varying by occasion — just like life itself, only with mediocre chocolate at stake. I can imagine no better way to spend a holy evening.

[Originally published in the Los Angeles Times, Dec. 12, 2017]

Thursday, December 12, 2019

Argument Contest Deadline Coming December 31st!

If you can write an argument that convinces research participants to donate a surprise bonus of $10 to charity at rates higher than a control group, Fiery Cushman and I will make you famous[1] and pay you $1000[2], and you might transform the practice of the Effective Altruism movement[3]. Whoa!

Biggest effect size wins the contest.

We'd love to have some awesome submissions to run, which might really produce an effect. In other words, your submission!

Details here.

------------------------------------------------------

[1] "Famous" in an extremely narrow circle.

[2] Actually, we'll only pay you $500. The other $500 will go to a charity of your choice.

[3] Probability value of "might" is 0.18%, per Eric's Bayesian credence.

Wednesday, December 11, 2019

Two Kinds of Ethical Thinking?

Yesterday, over at the Blog of the APA, Michael J. Sigrist published a reflection on my work on the not-especially-ethical behavior of ethics professors. The central question is captured in his title: "Why Aren't Ethicists More Ethical?"

Although he has some qualms about my attempts to measure the moral behavior of ethicists (see here for a summary of my measures), Sigrist accepts the conclusion that, overall, professional ethicists do not behave better than comparable non-ethicists. He offers this explanation:

There's a kind of thinking that we do when we are trying to prove something, and then a kind of thinking we do when we are trying to do something or become a certain kind of person -- when we are trying to forgive someone, or be more understanding, or become more confident in ourselves. Becoming a better person relies on thinking of the latter sort, whereas most work in professional ethics -- even in practical ethics -- is exclusive to the former.

The first type of thinking, "trying to prove something", Sigrist characterizes as universalistic and impersonal; the second type, "trying to do something", he characterizes as emotional, personal, and engaged with the details of ordinary life. He suggests that my work neglects or deprioritizes the latter, more personal, more engaged type of thinking. (I suspect Sigrist wouldn't characterize my work that way if he knew some other things I've written -- but of course there is no obligation for anyone to read my whole corpus.)

The picture Sigrist appears to have in mind is something like this: The typical ethicist has their head in the clouds, thinking about universal principles, while they ignore -- or at least don't apply their philosophical skills to -- the particular moral issues in the world around their feet; and so it is, or should be, unsurprising that their philosophical ethical skills don't improve them morally. This picture resonates, because it has some truth in it, and it fits with common stereotypes about philosophers. If the picture is correct, it would tidily address the otherwise puzzling disconnection between philosophers' great skills at abstract ethical reflection and their not-so-amazing real-world ethical behavior.

However, things are not so neat.

Throughout his post, Sigrist frames his reflections primarily in terms of the contrast between impersonal thinking (about what people in general should do) and personal thinking (about what I in this particular, detailed situation should do). But real, living philosophers do not apply their ethical theories and reasoning skills only to the former; nor do thoughtful people normally engage in personal thinking without also reflecting from time to time on general principles that they think might be true (and indeed that they sometimes try to prove to their interlocutors or themselves, in the process of making ethical decisions). An ethicist might write only about trolley problems and Kant interpretation. But in that ethicist's personal life, when making decisions about what to do, sometimes philosophy will come to mind -- Aristotle's view of courage and friendship, Kant's view of honesty, whether some practical policy would be appropriately universalizable, conflicts between consequentialist and deontological principles about harming someone for some greater goal.

A professional ethicist doesn't pass through the front door of their house and forget all of academic philosophy. Philosophical ethics is too richly and obviously connected to the particularities of personal life. Nor is there some kind of starkly different type of "personal" thinking that ordinary people do that avoids appeal to general principles. In thinking about whether to have children, whether to lie about some matter of importance, how much time or money to donate to charities, how much care one owes to a needy parent or sibling in a time of crisis -- in such matters, thoughtful people often do, and should, think not only about the specifics of their situation but also about general principles.

Academic philosophical ethics and ordinary engaged ethical reflection are not radically different cognitive enterprises. They can and should merge and blend into each other -- and, in philosophers and philosophically-minded non-philosophers, they do -- as we wander back and forth, fruitfully, between the general and the specific. How could it be otherwise?

Sigrist is mistaken. The puzzle remains. We cannot so easily dismiss the challenge that I think my research on ethicists poses to the field. We cannot say, "ah, but of course ethicists behave no differently in their personal lives, because all of their expertise is only relevant to the impersonal and universal". The two kinds of ethical thinking that Sigrist identifies are ends of a continuum that we all regularly traverse, rather than discrete patterns of thinking that are walled off from each other without mutual influence.

In my work and my personal life, I try to make a point of blending the personal with the universal and the everyday with the scholarly, rejecting any sharp distinction between academic and non-academic thinking. This is part of why I write a blog. This is part of the vision behind my recent book. I think Sigrist values this blending too, and means to be critiquing what he sees as its absence in mainstream Anglophone philosophical ethics. Sigrist has only drawn his lines too sharply, offering too simplified a view of the typical ethicist's ways of thinking; and he has mistaken me for an opponent rather than a fellow traveler.

Thursday, December 05, 2019

Self-Knowledge by Looking at Others

I've published quite a lot on people's poor self-knowledge of their own stream of experience (e.g. this and this), and also a bit on our often poor self-knowledge of our attitudes, traits, and moral character. I've increasingly become convinced that an important but relatively neglected source of self-knowledge derives from one's assessment of the outside world -- especially one's assessment of other people.

I am unaware of empirical evidence of the effectiveness of the sort of thing I have in mind (I welcome suggestions!), but here's the intuitive case.

When I'm feeling grumpy, for example, that grumpiness is almost invisible to me. In fact, to say that grumpiness is a feeling doesn't quite get things right: There isn't, I suspect, a way that it feels from the inside to be in a grumpy mood. Grumpiness, rather, is a disposition to respond to the world in a certain way; and one can have that disposition while one feels, inside, rather neutral or even happy.

When I come home from work, stepping through the front door, I usually feel (I think) neutral to positive. Then I see my wife Pauline and daughter Kate -- and how I evaluate them reveals whether in fact I came through that door grumpy. Suppose the first thing out of Pauline's mouth when I come through the door is, "Hi, Honey! Where did you leave the keys for the van?" I could see this as an annoying way of being greeted, I could take it neutrally in stride, or I could appreciate how Pauline is still juggling chores even as I come home ready to relax. As I strode through that door, I was already disposed to react one way or another to stimuli that might or might not be interpreted as annoying; but that mood-constituting disposition didn't reveal itself until I actually encountered my family. Casual introspection of my feelings as I approached the front door might not have revealed this disposition to me in any reliable way.

Even after I react grumpily or not, I tend to lack self-knowledge. If I react with annoyance to a small request, my first instinct is to turn the blame outward: It is the request that is annoying. That's just a fact about the world! I either ignore my mood or blame Pauline for it. My annoyed reaction seems to me, in the moment, to be the appropriate response to the objective annoyingness of the situation.

Another example: Generally, on my ten-minute drive into work, I listen to classic rock or alternative rock. Some mornings, every song seems trite and bad, and I cycle through the stations disappointed that there's nothing good to listen to. Other mornings, I'm like "Whoa, this Billy Idol song is such a classic!" Only slowly have I learned that this probably says more about my mood than about the real quality of the songs that are either pleasing or displeasing me. Before I turn on the radio and notice this pattern of reactions, there's not much I can discover introspectively that clues me in to my mood. Maybe I could introspect better and find that mood in there somewhere, but over the years I've become convinced that my song assessment is a better mood thermometer, now that I've learned to think of it that way.

One more example: Elsewhere, I've suggested that probably the best way to discover whether one is a jerk is not by introspective reflection ("hm, how much of a jerk am I?") but rather by noticing whether one regularly sees the world through "jerk goggles". Everywhere you turn, are you surrounded by fools and losers, faceless schmoes, boring nonentities? Are you the only reasonable, competent, and interesting person to be found? If so....

As I was drafting this post yesterday, Pauline interrupted me to ask if I wanted to RSVP to a Christmas music singalong in a few weeks. Ugh! How utterly annoying I felt that interruption to be! And then my daughter's phone, plugged into the computer there, wouldn't stop buzzing with text messages. Grrr. Before those interruptions, I would probably have judged that I was in a middling-to-good mood, enjoying being in the flow of drafting out this post. Of course, as those interruptions happened, I thought of how suitable they were to the topic of this post (and indeed I drafted out this very paragraph in response). Now, a day later, my mood is better, and the whole thing strikes me as such a lovely coincidence!

If I sit too long at my desk at work, my energy level falls. Every couple of hours, I try to get up and stroll around campus a bit. Doing so, I can judge my mood by noticing others' faces. If everyone looks beautiful to me, but in a kind of distant, unapproachable way, I am feeling depressed or blue. Every wart or seeming flaw manifests a beautiful uniqueness that I will never know. (Does this match others' phenomenology of depression? Before having noticed this pattern in my reactions to people, I might not have thought this would be how depression feels.) If I am grumpy, others are annoying obstacles. If I am soaring high, others all look like potential friends.

My mood will change as I walk, my energy rising. By the time I loop back around to the Humanities and Social Sciences building, the crowds of students look different than they did when I first stepped out of my office. It seems like they have changed, but of course I'm the one who has changed.


[image source]

Tuesday, November 26, 2019

Applying to PhD Programs in Philosophy, Part V: Statement of Purpose

Part I: Should You Apply, and Where?

Part II: Grades, Classes, and Institution of Origin

Part III: Letters of Recommendation

Part IV: Writing Sample

Old Series from 2007

--------------------------------------------------------

Applying to PhD Programs in Philosophy
Part V: Statement of Purpose

Statements of purpose, sometimes also called personal statements, are difficult to write. It's hard to know even what a "Statement of Purpose" is. Your plan is to go to graduate school, get a PhD, and become a professor. Duh! Are you supposed to try to convince the committee that you want to become a professor more than the other applicants do? That philosophy is written in your genes? That you have some profound vision for the transformation of philosophy or of philosophy education?

You've had no practice writing this sort of thing. Odds are, you'll do it badly on your first try. There are so many different ways to go wrong! Give yourself plenty of time and seek feedback from at least two of your letter writers. Plan to rewrite from scratch at least once.

Some Things Not to Do

* Don't wax poetic. Don't get corny. Avoid purple prose. "Ever since I was eight, I've pondered the deep questions of life." Nope. "Philosophy is the queen of the disciplines, delving to the heart of it all." Nope. "The Owl of Minerva has sung to me and the sage of Königsberg whispers in my sleep: Not to philosophize is to die." If you are tempted to write sentences like that, please do so in longhand, with golden ink, on expensive stationery which you then burn without telling anyone.

* Don't turn your statement into a sales pitch. Ignore all advice from friends and acquaintances in the business world. Don't sell yourself. You don't want to seem like a BS-ing huckster. You may still (optionally!) mention a few of your accomplishments, in a dry, factual way, but to be overly enthusiastic about accomplishments that are rather small in the overall scheme of academia is somewhat less professional than you ideally want to seem. If you're already thinking like a graduate student at a good PhD program, you won't be too impressed with yourself for having published in the Kansas State Undergraduate Philosophy Journal (even if that is, in context, a notable achievement). Trust your letter writers. If you've armed them with a brag sheet, the important accomplishments will come across in your file. Let your letter writers do the pitch. It comes across so much better when someone else toots your horn than when you yourself do!

* Don't be grandiose. Don't say that you plan to revolutionize philosophy, reinvigorate X, rediscover Y, finally find the answer to timeless question Z, or become a professor at an elite department. Do you already know that you will be a more eminent philosopher than the people on your admissions committee? You're aiming to be their student, not the next Wittgenstein -- or at least that's how you want to come across. You want to seem modest, humble, straightforward. If necessary, consult David Hume or Benjamin Franklin for inspiration on the advantages of false humility.

* If you are applying to a program in which you are expected to do coursework for a couple of years before starting your dissertation -- that is, to U.S.-style programs rather than British-style programs -- then I recommend against taking stands on particular substantive philosophical issues. In the eyes of the admissions committee, you probably aren't far enough in your education to adopt hard philosophical commitments. They want you to come to their program with an open mind. Saying "I would like to defend Davidson's view that genuine belief is limited to language-speaking creatures" comes across a bit too strong. Similarly, "I showed in my honors thesis that Davidson's view...". If only, in philosophy, honors theses ever really showed anything! ("I argued" would be okay.) Better: "My central interests are philosophy of mind and philosophy of language. I am particularly interested in the intersection of the two, for example in Davidson's argument that only language-speaking creatures can have beliefs in the full and proper sense of 'belief'."

* Don't tell the story of how you came to be interested in philosophy. It's not really relevant.

* Ignore the administrative boilerplate. The application form might have a prompt like this: "Please upload a one page Statement of Purpose. What are your goals and objectives for pursuing this graduate degree? What are your qualifications and indicators of success in this endeavor? Please include career objectives that obtaining this degree will provide." This was written eighteen years ago by the Associate Dean for Graduate Education in the College of Letters and Sciences, who earned his PhD in Industrial Engineering in 1989. The actual admissions committee that makes the decisions is a bunch of nerdy philosophers who probably roll their eyes at admin-speak at least as much as you do. There's no need to tailor your letter to this sort of prompt.

* Also, don't follow links to well-meaning general advice from academic non-philosophers. I'm sure you didn't click those links! Good! If you had, you'd see that they advise you, among other things, to tell your personal history and to sell yourself as a good fit for the program. Maybe that works for biology PhD admissions, where it could make good sense to summarize your laboratory experience and fieldwork?

What to Write

So how do you fill up that awful, blank page? In 2012, I solicited sample statements of purpose from successful PhD applicants. About a dozen readers shared their statements, and from among those I chose three that I thought were good and also diverse enough to illustrate the range of possibilities. Follow the links below to view the statements.

  • Statement A was written by Allison Glasscock, who was admitted to Chicago, Cornell, Penn, Stanford, Toronto, and Yale.
  • Statement B was written by a student who prefers to remain anonymous, who was admitted to Berkeley, Missouri, UMass Amherst, Virginia, Wash U. in St. Louis, and Wisconsin.
  • Statement C was written by another student who prefers to remain anonymous, who was admitted to Connecticut and Indiana.

At the core of each statement is a cool, professional description of the student's areas of interest. Notice that all of these descriptions contain enough detail to give a flavor of the student's interests. This helps the admissions committee assess the student's likely fit with the teaching strengths of the department. Each description also displays the student's knowledge of the areas in question by mentioning figures or issues that would probably not be known to the average undergraduate. This helps to convey philosophical maturity and preparedness for graduate school. However, I would recommend against going too far with the technicalities or trying too hard to be cutting edge, lest it become phony desperation or a fog of jargon. These sample statements get the balance about right.

Each of the sample statements also adds something else, in addition to a description of areas of interest, but it's not really necessary to add anything else. Statement B starts with pretty much the perfect philosophy application joke. (Sorry, now it's taken!) Statement C concludes with a paragraph describing the applicant's involvement with his school's philosophy club. Statement C is topically structured but salted with information about coursework relevant to the applicant's interests, while Statement B is topically structured and minimalist, and Statement A is autobiographically structured with considerable detail. Any of these approaches is fine, though the topical structure is more common and raises fewer challenges about finding the right tone.

Statement A concludes with a paragraph specifically tailored for Yale. Thus we come to the question of...

Tailoring Statements to Particular Programs

It's not necessary, but you can adjust your statement for individual schools. If there is some particular reason you find a school attractive, there's no harm in mentioning that. Committees think about fit between a student's interests and the strengths of the department and about what faculty could potentially be advisors. You can help the committee on this issue if you like, though normally it will be obvious from your description of your areas of interest.

For example, if you wish, you can mention 2-3 professors whose work especially interests you. But there are risks here, so be careful. Mentioning particular professors can backfire if you mischaracterize the professors, or if they don't match your areas of stated interest, or if you omit the professor in the department whose interests seem to the committee to be the closest match to your own.

Similarly, you can mention general strengths of the school. But, again, if you do this, be sure to get it right! If someone applies to UCR citing our strength in medieval philosophy, we know the person hasn't paid attention to what our department is good at. No one here works on medieval philosophy. But if you want to go to a school that has strengths in both mainstream "analytic" philosophy and 19th-20th century "Continental" philosophy, that's something we at UCR do think of as a strong point of our program.

I'm not sure I'd recommend changing your stated areas of interest to suit the schools, though I see how that might be strategic. There are two risks in changing your stated areas of interest: One is that if you change them too much, there might be some discord between your statement of purpose and what your letter writers say about you. Another is that large changes might raise questions about your choice of letter writers. If you say your central passion is ancient philosophy, and your only ancient philosophy class was with Prof. Platophile, why hasn't Prof. Platophile written one of your letters? That's the type of oddness that might make a committee hesitate about an otherwise strong file.

Some people mention personal reasons for wanting to be in a particular geographical area (near family, etc.). Although this can be good because it can make it seem more likely that you would accept an offer of admission, I'd avoid it since, in order to have a good chance of landing a tenure-track job, graduating PhD recipients typically need to be flexible about location. Also, it might be perceived as indicating that a career in philosophy is not your first priority.

Explaining Weaknesses in Your File

Although hopefully this won't be necessary, a statement of purpose can also be an opportunity to explain weaknesses or oddities in your file -- though letter writers can also do this, often more credibly. For example, if one quarter you did badly because your health was poor, you can mention that fact. If you changed undergraduate institutions (not necessarily a weakness if the second school is the more prestigious), you can briefly explain why. If you don't have a letter from your thesis advisor because they died, you can point that out.

Statements of Personal History

Some schools, like UCR, also allow applicants to submit "statements of personal history", in which applicants can indicate disadvantages or obstacles they have overcome or otherwise attempt to paint an appealing picture of themselves. The higher-level U.C. system administration encourages such statements, I believe, because although state law prohibits the University of California from favoring applicants on the basis of ethnicity or gender, state law does allow admissions committees to take into account any hardships that applicants have overcome -- which can include hardships due to poverty, disability, or other obstacles, including hardships deriving from ethnicity or gender.

Different committee members react rather differently to such statements, I suspect. I find them unhelpful for the most part. And yet I also think that some people do, because of their backgrounds, deserve special consideration. Unless you have a sure hand with tone, though, I would encourage a dry, minimal approach to this part of the application. It's better to skip it entirely than to concoct a story that looks like special pleading from a rather ordinary complement of hardships. This part of the application also seems to beg for the corniness I warned against above: "Ever since I was eight, I've pondered the deep questions of life...". I see how such corniness is tempting if the only alternative seems to be to leave an important part of the application blank. As a committee member, I usually just skim and forget the statements of personal history, unless something is particularly striking or it seems that the applicant might contribute in an important way to the diversity of the entering class.

For further advice on statements of purpose, see this discussion on Leiter Reports, particularly the discussion of the difference between U.S. and U.K. statements of purpose.


Sunday, November 17, 2019

We Might Soon Build AI Who Deserve Rights

Talk for Notre Dame, November 19:

Abstract: Within a few decades, we will likely create AI that a substantial proportion of people believe, whether rightly or wrongly, deserve human-like rights. Given the chaotic state of consciousness science, it will be genuinely difficult to know whether and when machines that seem to deserve human-like moral status actually do deserve human-like moral status. This creates a dilemma: Either give such ambiguous machines human-like rights or don't. Both options are ethically risky. To give machines rights that they don't deserve will mean sometimes sacrificing human lives for the benefit of empty shells. Conversely, failing to give rights to machines that do deserve rights will mean perpetrating the moral equivalent of slavery and murder. One or another of these ethical disasters is probably in our future.


[An AI slave, from Black Mirror's White Christmas episode]

The first half of the talk mostly rehearses ideas from my articles with Mara Garza here and here. If we someday build AIs that are fully conscious, just like us, and have all the same kinds of psychological and social features in virtue of which human beings deserve rights, those AIs would deserve the same rights. In fact, we would owe them a special quasi-parental duty of care, because we will have been responsible for their existence and probably, to a substantial extent, for their happy or miserable condition.

Selections from the second half of the talk:

So here’s what's going to happen:

We will create more and more sophisticated AIs. At some point we will create AIs that some people think are genuinely conscious and genuinely deserve rights. We are already near that threshold. There’s already a Robot Rights movement. There’s already a society modeled on the famous animal rights organization PETA (People for the Ethical Treatment of Animals), called People for the Ethical Treatment of Reinforcement Learners. These are currently fringe movements. But as AI gets cuter and more sophisticated, and as chatbots start sounding more and more like normal humans, passing more and more difficult versions of the Turing Test, these movements will gain steam among people with liberal views of consciousness. At some point, people will demand serious rights for some AI systems. The AI systems themselves, if they are capable of speech or speechlike outputs, might also demand or seem to demand rights.

Let me be clear: This will occur whether or not these systems really are conscious. Even if you’re very conservative in your view about what sorts of systems would be conscious, you should, I think, acknowledge the likelihood that, if technological development continues on its current trajectory, there will eventually be groups of people who assert the need for us to give AI systems human-like moral consideration.

And then we’ll need a good, scientifically justified consensus theory of consciousness to sort it out. Is this system that says, “Hey, I’m conscious, just like you!” really conscious, just like you? Or is it just some empty algorithm, no more conscious than a toaster?

Here’s my conjecture: We will face this social problem before we succeed in developing the good, scientifically justified consensus theory of consciousness that we need to solve the problem. We will then have machines whose moral status is unclear. Maybe they do deserve rights. Maybe they really are conscious like us. Or maybe they don’t. We won’t know.

And then, if we don’t know, we face quite a terrible dilemma.

If we don’t give these machines rights, and if it turns out that the machines really do deserve rights, then we will be perpetrating slavery and murder every time we assign a task and delete a program.

So it might seem safer, if there is reasonable doubt, to assign rights to machines. But on reflection, this is not so safe. We want to be able to turn off our machines if we need to turn them off. Futurists like Nick Bostrom have emphasized, rightly in my view, the potential risks of our letting superintelligent machines loose into the world. These risks are greatly amplified if we too casually decide that such machines deserve rights and that deleting them is murder. Giving an entity rights entails sometimes sacrificing others’ interests for it. Suppose there’s a terrible fire. In one room there are six robots who might or might not be conscious. In another room there are five humans, who are definitely conscious. You can only save one group; the other group will die. If we give robots who might be conscious equal rights with humans who definitely are conscious, then we ought to go save the six robots and let the five humans die. If it turns out that the robots really, underneath it all, are just toasters, then that’s a tragedy. Let’s not too casually assign humanlike rights to AIs!
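To make the arithmetic of that last choice explicit (my own illustrative gloss, not part of the talk; the probability p is an invented placeholder): let p be the probability that the robots really are conscious. Counting expected conscious lives saved, the choice is

\[ \underbrace{6p}_{\text{save the robots}} \quad \text{vs.} \quad \underbrace{5}_{\text{save the humans}}, \]

so expected-value reasoning would favor the robots only when 6p > 5, that is, when p > 5/6. Granting the robots fully equal rights, by contrast, in effect treats p as 1, committing us to saving them even when the probability of their consciousness is low. The gap between p and 1 is the measure of the risk.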

Unless there’s either some astounding saltation in the science of consciousness or some substantial deceleration in the progress of AI technology, it’s likely that we’ll face this dilemma. Either deny robots rights and risk perpetrating a Holocaust against them, or give robots rights and risk sacrificing real human beings for the benefit of mere empty machines.

This may seem bad enough, but the problem is even worse than I, in my sunny optimism, have so far let on. I’ve assumed that AI systems are relevant targets of moral concern if they’re human-grade – that is, if they are like us in their conscious capacities. But the odds of creating only human-grade AI are slim. In addition to the kind of AI we currently have, which I assume doesn’t have any serious rights or moral status, there are, I think, four broad moral categories into which future AI might fall: animal-grade, human-grade, superhuman, and divergent. I’ve only discussed human-grade AI so far, but each of these four classes raises puzzles.

Animal-grade AI. Not only human beings deserve moral consideration. So also do dogs, apes, and dolphins. Animal protection regulations apply to all vertebrates: Scientists can’t treat even frogs and lizards more roughly than necessary. The philosopher John Basl has argued that AI systems with cognitive capacities similar to vertebrates ought also to receive similar protections. Just as we shouldn’t torture and sacrifice a mouse without excellent reason, so also, according to Basl, we shouldn’t abuse and delete animal-grade AI. Basl has proposed that we form committees, modeled on university Animal Care and Use Committees, to evaluate cutting-edge AI research to monitor when we might be starting to cross this line.

Even if you think human-grade AI is decades away, it seems reasonable, given the current chaos in consciousness studies, to wonder whether animal-grade consciousness might be around the corner. I myself have no idea if animal-grade AI is right around the corner or if it’s far away in the almost impossible future. And I think you have no idea either.

Superhuman AI. Superhuman AI, as I’m defining it here, is AI who has all of the features of human beings in virtue of which we deserve moral consideration but who also has some potentially morally important features far in excess of the human, raising the question of whether such AI might deserve more moral consideration than human beings.

There aren’t a whole lot of philosophers who are simple utilitarians, but let’s illustrate the issue using utilitarianism as an example. According to simple utilitarianism, we morally ought to do what maximizes the overall balance of pleasure to suffering in the world. Now let’s suppose we can create AI that’s genuinely capable of pleasure and suffering. I don’t know what it will take to do that – but not knowing is part of my point here. Let’s just suppose. Now if we can create such AI, then it might also be possible to create AI that is capable of much, much more pleasure than a human being is capable of. Take the maximum pleasure you have ever felt in your life over the course of one minute: call that amount of pleasure X. This AI is capable of feeling a billion times more pleasure than X in the space of that same minute. It’s a superpleasure machine!

If morality really demands that we maximize the amount of pleasure in the world, it would thereby demand, or seem to demand, that we create as many of these superpleasure machines as we possibly can. Maybe we ought even to immiserate and destroy ourselves to do so, if enough AI pleasure is created as a result.
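A back-of-the-envelope version of that demand (again, my own illustration; the billion-fold multiplier is just the figure from the thought experiment above): if each human generates at most X units of pleasure per minute and each superpleasure machine generates 10^9 X, then with m humans and n machines the simple utilitarian sum per minute is roughly

\[ n \cdot 10^{9} X \;+\; m \cdot X . \]

A single machine outweighs the best minutes of a billion humans, so a strict maximizer should convert every available resource, ourselves included, into more machines; the human term in the sum is a rounding error.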

Even if you think pleasure isn’t everything – surely it’s something. If someday we could create superpleasure machines, maybe we morally ought to make as many as we can reasonably manage? Think of all the joy we will be bringing into the world! Or is there something too weird about that?

I’ve put this point in terms of pleasure – but whatever the source of value in human life is, whatever it is that makes us so awesomely special that we deserve the highest level of moral consideration – unless maybe we go theological and appeal to our status as God’s creations – whatever it is, it seems possible in principle that we could create that same thing in machines, in much larger quantities. We love our rationality, our freedom, our individuality, our independence, our ability to value things, our ability to participate in moral communities, our capacity for love and respect – there are lots of wonderful things about us! What if we were to design machines that somehow had a lot more of these things than we ourselves do?

We humans might not be the pinnacle. And if not, should we bow out, allowing our interests and maybe our whole species to be sacrificed for something greater? As much as I love humanity, under certain conditions I’m inclined to think the answer should probably be yes. I’m not sure what those conditions would be!

Divergent AI. The most puzzling case, I think, as well as the most likely, is divergent AI. Divergent AI would have human or superhuman levels of some features that we tend to regard as important to moral status but subhuman levels of other features that we tend to regard as important to moral status. For example, it might be possible to design AI with immense theoretical and practical intelligence but with no capacity for genuine joy or suffering. Such AI might have conscious experiences with little or no emotional valence. Just as we can consciously think to ourselves, without much emotional valence, there’s a mountain over there and a river over there, or the best way to grandma’s house at rush hour is down Maple Street, so this divergent AI could have conscious thoughts like that. But it would never feel wow, yippee! And it would never feel crushingly disappointed, or bored, or depressed. It isn’t clear what the moral status of such an entity would be: On some moral theories, it would deserve human-grade rights; on other theories it might not matter how we treat it.

Or consider the converse: a superpleasure machine but one with little or no capacity for rational thought. It’s like one giant, irrational orgasm all day long. Would it be great to make such things and terrible to destroy them, or is such irrational pleasure not really something worth much in the moral calculus?

Or consider a third type of divergence, what I’ve elsewhere called fission-fusion monsters. A fission-fusion monster is an entity that can divide and merge at will. It starts, perhaps, as basically a human-grade AI. But when it wants it can split into a million descendants, each of whom inherits all of the capacities, memories, plans, and preferences of the original AI. These million descendants can then go about their business, doing their independent things for a while, and then if they want, merge back together again into a unified whole, remembering what each individual did during its period of individuality. Other parts might not merge back but choose instead to remain as independent individuals, perhaps eventually coming to feel independent enough from the original to see the prospect of merging as something similar to death.

Without getting into details here, a fission-fusion monster would risk breaking our concept of individual rights – such as one person, one vote. The idea of individual rights rests fundamentally upon the idea of people as individuals – individuals who live in a single body for a while and then die, with no prospect of splitting or merging. What would happen to our concept of individual rights if we were to share the planet with entities for which our accustomed model of individuality is radically false?

Thursday, November 14, 2019

Who Cares about Happiness?

[talk to be given at UC Riverside's Homecoming celebration, November 16, on the theme of happiness]

There are several different ways of thinking about happiness. I want to focus on just one of those ways. This way of thinking about happiness is sometimes called “hedonic”. That label can be misleading if you’re not used to it because it kind of sounds like hedonism, which kind of sounds like wild sex parties. The hedonic account of happiness, though, is probably closest to most people’s ordinary understanding of happiness. On this account, to be happy is to have lots of positive emotions and not too many negative emotions. To be happy is to regularly feel joy, delight, and pleasure, to feel sometimes maybe a pleasant tranquility and sometimes maybe outright exuberance, to have lots of good feelings about your life and your situation and what’s going on around you – and at the same time not to have too many emotions like sadness, fear, anxiety, anger, disgust, displeasure, annoyance, and frustration, what we think of as “negative emotions”. To be happy, on this “hedonic” account, is to be in an overall positive emotional state of mind.

I wouldn’t want to deny that it’s a good thing to be happy in this sense. It is, for the most part, a good thing. But sometimes people say extreme things about happiness – like that happiness is the most important thing, or that all people really want is to be happy, or as a parent that the main thing you want for your children is that they be happy, or that everything everyone does is motivated by some deep-down desire to maximize their happiness. And that’s not right at all. We actually don’t care about our hedonic happiness very much. Not really. Not when you think about it. It’s kind of important, but not really that big in the scheme of things.

Consider an extreme thought experiment of the sort that philosophers like me enjoy bothering people with. Suppose we somehow found a way to turn the entire Solar System into one absolutely enormous machine or organism that experienced nothing but outrageous amounts of pleasure all the time. Every particle of matter that we have, we feed into this giant thing – let’s call it the orgasmatron. We create the most extreme, most consistent, most intense conglomeration of pure ecstatic joyfulness as it is possible to construct. Wow! Now that would be pretty amazing. One huge, pulsing Solar-System-sized orgasm.

Will this thing need to remember the existence of humanity? Will it need to have any appreciation of art or beauty? Will it have to have any ethics, or any love, or any sociality, or knowledge of history or science – will it need any higher cognition at all? Maybe not. I mean, higher cognition is not what orgasm is mostly about. If you think that the thing that matters most in the universe is positive emotions, then you might think that the best thing that could happen to the future of the Solar System would be the creation of this giant orgasmatron. The human project would be complete. The world will have reached its pinnacle, and nothing else will really matter!

[not the orgasmatron I have in mind]

Now here’s my guess. Some of you will think, yeah, that’s right. If everything becomes a giant orgasmatron, nothing could be more awesome, that’s totally where we should go if we can. But I’ll guess that most of you think that something important would be lost. Positive emotion isn’t the only thing that matters. We don’t want the world to lose its art, and its beauty, and its scientific knowledge, and the rich complexity of human relationships. If everything got fed into this orgasmatron it would be a shame. We’d have lost something really important.

Now let me tell you a story. It’s from my latest book, A Theory of Jerks and Other Philosophical Misadventures, hot off the press this month.

Back in the 1990s, when I was a graduate student, my girlfriend Kim asked me what, of all things, I most enjoyed doing. Skiing, I answered. I was thinking of those moments breathing the cold, clean air, relishing the mountain view, then carving a steep, lonely slope. I’d done quite a bit of that with my mom when I was a teenager. But how long had it been since I’d gone skiing? Maybe three years? Grad school kept me busy and I now had other priorities for my winter breaks. Kim suggested that if it had been three years since I’d done what I most enjoyed doing, then maybe I wasn’t living wisely.

Well, what, I asked, did she most enjoy? Getting massages, she said. Now, the two of us had a deal at the time: If one gave the other a massage, the recipient would owe a massage in return the next day. We exchanged massages occasionally, but not often, maybe once every few weeks. I pointed out that she, too, might not be perfectly rational: She could easily get much more of what she most enjoyed simply by giving me more massages. Surely the displeasure of massaging my back couldn’t outweigh the pleasure of the thing she most enjoyed in the world? Or was pleasure for her such a tepid thing that even the greatest pleasure she knew was hardly worth getting?

It used to be a truism in Western (especially British) philosophy that people sought pleasure and avoided pain. A few old-school psychological hedonists, like Jeremy Bentham, went so far as to say that that was all that motivated us. I’d guess quite differently: Although pain is moderately motivating, pleasure motivates us very little. What motivates us more are outward goals, especially socially approved goals — raising a family, building a career, winning the approval of peers — and we will suffer immensely, if necessary, for these things. Pleasure might bubble up as we progress toward these goals, but that’s a bonus and side effect, not the motivating purpose, and summed across the whole, the displeasure might vastly outweigh the pleasure. Some evidence suggests, for example, that raising a child is probably for most people a hedonic net negative, adding stress, sleep deprivation, and unpleasant chores, as well as crowding out the pleasures that childless adults regularly enjoy. At least according to some research, the odds are that choosing to raise a child will make you less happy.

Have you ever watched a teenager play a challenging video game? Frustration, failure, frustration, failure, slapping the console, grimacing, swearing, more frustration, more failure—then finally, woo-hoo! The sum over time has to be negative, yet they’re back again to play the next game. For most of us, biological drives and addictions, personal or socially approved goals, concern for loved ones, habits and obligations — all appear to be better motivators than gaining pleasure, which we mostly seem to save for the little bit of free time left over. And to me, this is quite right and appropriate. I like pleasure, sure. I like joy. But that’s not what I’m after. It’s a side effect, I hope, of the things I really care about. I’d guess this is true of you too.

If maximizing pleasure is central to living well and improving the world, we’re going about it entirely the wrong way. Do you really want to maximize pleasure? I doubt it. Me, I’d rather write some good philosophy and raise my kids.

ETA, Nov 17:

In audience discussion and in social media, several people have pointed out that although I start by talking about a wide range of emotional states (tranquility, delight, having good feelings about your life situation), in the second half I focus exclusively on pleasure. The case of pleasure is easiest to discuss, because the more complex emotional states have more representational or world-involving components. On a proper hedonic view, however, the value of those more complex states rests exclusively on the emotional valence, or at most on the emotional valence plus possibly-false representational content -- on, for example, whether you have the feeling that life is going well, rather than on whether it's really going well. All the same observations apply: We do and should care about whether our lives are actually going well, much more than we care about whether we have the emotional feeling of their going well.

Tuesday, November 05, 2019

A Theory of Jerks and Other Philosophical Misadventures

... released today. *Confetti!*

Available from:

MIT Press, Amazon, B&N, or (I hope!) your local independent bookseller.

Some initial reviews and discussions.

------------------------------------------------

Preface

I enjoy writing short philosophical reflections for broad audiences. Evidently, I enjoy this immensely: Since 2006, I’ve written more than a thousand such pieces, published mostly on my blog The Splintered Mind, but also in the Los Angeles Times, Aeon, and elsewhere. This book contains fifty-eight of my favorites, revised and updated.

The topics range widely -- from moral psychology and the ethics of the game of dreidel to multiverse theory, speculative philosophy of consciousness, and the apparent foolishness of Immanuel Kant. There is no unifying thesis.

Maybe, however, there is a unifying theme. The human intellect has a ragged edge, where it begins to turn against itself, casting doubt on itself or finding itself lost among seemingly improbable conclusions. We can reach this ragged edge quickly. Sometimes, all it takes to remind us of our limits is an eight-hundred-word blog post. Playing at this ragged edge, where I no longer know quite what to think or how to think about it, is my idea of fun.

Given the human propensity for rationalization and self-deception, when I disapprove of others, how do I know that I'm not the one who is being a jerk? Given that all our intuitive, philosophical, and scientific knowledge of the mind has been built on a narrow range of cases, how much confidence can we have in our conclusions about the strange new possibilities that are likely to open up in the near future of artificial intelligence? Speculative cosmology at once poses the (literally) biggest questions that we can ask about the universe and reveals possibilities that threaten to undermine our ability to answer those same questions. The history of philosophy is humbling when we see how badly wrong previous thinkers have been, despite their intellectual skills and confidence.

Not all of my posts fit this theme. It's also fun to use the once-forbidden word "fuck" over and over again in a chapter about profanity. And I wanted to share some reminiscences about how my father saw the world -- especially since in some ways I prefer his optimistic and proactive vision to my own less hopeful skepticism. Others of my posts I just liked or wanted to share for other reasons. A few are short fictions.

It would be an unusual reader who enjoyed every chapter. I hope you'll skip anything you find boring. The chapters are all freestanding. Please don't just start reading on page 1 and then try to slog along through everything sequentially out of some misplaced sense of duty! Trust your sense of fun (chapter 47). Read only the chapters that appeal to you, in any order you like.

Riverside, California, Earth (I hope)
October 25, 2018