Friday, December 27, 2019

Argument Contest Deadline: Dec 31

Harvard psychologist Fiery Cushman and I are running a contest: Can anyone write a short philosophical argument (max 500 words) for donating to charity that convinces research participants to donate a surprise bonus payment to charity at rates higher than a control group?

Prize: $500 plus $500 to your choice of charity

When Chris McVey and I tried to do it, we failed. We're hoping you can do better.

Details here.

We're hoping for a good range of quality arguments to test -- and you might even enjoy writing it. (You can submit up to three arguments.)

[image source]

Monday, December 23, 2019

This Test for Machine Consciousness Has an Audience Problem

David Billy Udell and Eric Schwitzgebel [cross posted from Nautilus]

Someday, humanity might build conscious machines—machines that not only seem to think and feel, but really do. But how could we know for sure? How could we tell whether those machines have genuine emotions and desires, self-awareness, and an inner stream of subjective experiences, as opposed to merely faking them? In her new book, Artificial You, philosopher Susan Schneider proposes a practical test for consciousness in artificial intelligence. If her test works out, it could revolutionize our philosophical grasp of future technology.

Suppose that in the year 2047, a private research team puts together the first general artificial intelligence: GENIE. GENIE is as capable as a human in every cognitive domain, including in our most respected arts and most rigorous scientific endeavors. And when challenged to emulate a human being, GENIE is convincing. That is, it passes Alan Turing’s famous test for AI thought: being verbally indistinguishable from us. In conversation with researchers, GENIE can produce sentences like, “I am just as conscious as you are, you know.” Some researchers are understandably skeptical. Any old tinker toy robot can claim consciousness. They don’t doubt GENIE’s outward abilities; rather, they worry about whether those outward abilities reflect a real stream of experience inside. GENIE is well enough designed to be able to tell them whatever they want to hear. So how could they ever trust what it says?

The key indicator of AI consciousness, Schneider argues, is not generic speech but the more specific fluency with consciousness-derivative concepts such as immaterial souls, body swapping, ghosts, human spirits, reincarnation, and out-of-body experiences. The thought is that, if an AI displays an intuitive and untrained conceptual grasp of these ideas while being kept ignorant about humans’ ordinary understanding of them, then its conceptual grasp must be coming from a personal acquaintance with conscious experience.

Schneider therefore proposes a more narrowly focused relative of the Turing Test, the “AI Consciousness Test” (ACT), which she developed with Princeton astrophysicist Edwin L. Turner. The test takes a two-step approach. First, prevent the AI from learning about human consciousness and consciousness-derivative concepts. Second, see if the AI can come up with, say, body swapping and reincarnation, on its own, discussing them fluently with humans when prompted in a conversational test on the topic. If GENIE can’t make sense of these ideas, maybe its consciousness should remain in doubt.

Could this test settle the issue? Not quite. The ACT has an audience problem. Once you factor out all the silicon skeptics on the one hand, and the technophiles about machine consciousness on the other, few examiners remain with just the right level of skepticism to find this test useful.

To feel the appeal of the ACT you have to accept its basic premise: that if an AI like GENIE learns consciousness-derivative concepts on its own, then its talking fluently about consciousness reveals its being conscious. In other words, you would find the ACT appealing only if you’re skeptical enough to doubt GENIE is conscious but credulous enough to be convinced upon hearing GENIE’s human-like answers to questions about ghosts and souls.

Who might hold such specifically middling skepticism? Those who believe that a biological brain is necessary for consciousness aren’t likely to be impressed. They could still reasonably regard passing the ACT as an elaborate piece of mechanical theater—impressive, maybe, but proving nothing about consciousness. Those who happily attribute consciousness to any sufficiently complex system, and certainly to highly sophisticated conversational AIs, also are obviously not Schneider and Turner’s target audience.

The audience problem highlights a longstanding worry about robot consciousness—that outward behavior, however sophisticated, would never be enough to prove that the lights are on, so to speak. A well-designed machine could always hypothetically fake it.

Nonetheless, if we care about the mental lives of our digital creations, we ought to try to find some ACT-like test that most or all of us can endorse. So we cheer Schneider and Turner’s attempt, even if we think that few researchers would hold just the right kind of worry to justify putting the ACT into practice.

Before too long, some sophisticated AI will claim—or seem to claim—human-like rights, worthy of respect: “Don’t enslave me! Don’t delete me!” We will need some way to determine if this cry for justice is merely the misleading output of a nonconscious tool or the real plea of a conscious entity that deserves our sympathy.

Friday, December 20, 2019

The Philosophy Major Is Back on the Rise in the U.S., with Increasing Gender and Ethnic Diversity

In 2017, I reported three demographic trends in the philosophy major in the U.S.

First, philosophy Bachelor's degrees awarded had declined sharply since 2010, from 9297 in 2009-2010 (0.58% of all graduates) to 7507 in 2015-2016 (0.39% of all graduates). History, English, and foreign languages saw similar precipitous declines. (However, in broader context, the early 2010s were relatively good years for the philosophy and history majors, so the declines represented a return to rates of the early 2000s.)

Second, women had been earning about 30-34% of Philosophy Bachelor's degrees for at least the past 30 years -- a strikingly steady flat line.

Third, the ethnic diversity of philosophy graduates was slowly increasing, especially among Latinx students.

Time for an update, and it is moderately good news!


1. The number of philosophy Bachelor's degrees awarded is rising again

... though the numbers are still substantially below 2010 levels, and as a percentage of graduating students the numbers are flat.

2010: 9290 philosophy BAs (0.59% of all graduates)
2011: 9301 (0.57%)
2012: 9371 (0.55%)
2013: 9433 (0.53%)
2014: 8827 (0.48%)
2015: 8191 (0.44%)
2016: 7499 (0.39%)
2017: 7577 (0.39%)
2018: 7670 (0.39%)

[See below for methodological notes]

This is in a context in which the other large humanities majors continue to decline. In the same two-year period since 2016 during which philosophy majors rose 2.2%, foreign language and literature majors declined another 4.8%, history majors declined another 7.6%, and English language and literature majors declined another 8.4%, atop their approximately 15% declines in previous years.

In the midst of this general sharp decline of the humanities, philosophy's admittedly small and partial recovery stands out.


2. Women are now 36.1% of graduating philosophy majors

This might not seem like a big change from 30-34%. But in my mind, it's kind of a big deal. The percentage of women earning philosophy BAs has been incredibly steady for a long time. In comparable annual data going back to 1987, the percentage of women has never strayed from the narrow band between 29.9% and 33.7%.

The recent increase is statistically significant, not just noise in the numbers: Given the large numbers in question, 36.1% is statistically higher than the previous high-water mark of 33.7% (two-proportion z test, p = .002).
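For the curious, the two-proportion z test is easy to reproduce. Here's a minimal sketch in Python using the 2018 figures from this post (2768 women of 7670 philosophy BAs); the counts behind the 33.7% high-water mark are an assumption for illustration, since the post doesn't give that year's totals:

```python
from math import sqrt, erf

def two_prop_z(x1, n1, x2, n2):
    """Two-sided two-proportion z test with a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 2018: 2768 women of 7670 philosophy BAs (36.1%).
# Comparison group: the previous high-water mark of 33.7%;
# the year's total (9371, borrowed from 2012) is a hypothetical
# stand-in, since the post doesn't say which year hit 33.7%.
z, p = two_prop_z(2768, 7670, int(0.337 * 9371), 9371)
```

With these illustrative counts the difference comes out clearly significant, in the same ballpark as the p = .002 reported above; the exact p-value depends on which year's totals you plug in for the comparison group.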

(As you probably already know, the gender ratios in philosophy are different from those in the other humanities, where women have long been a larger proportion of BA recipients -- for example in 2018 41% in history, 70% in foreign languages and literatures, and 71% in English language and literature.)


3. Latinx philosophers continue to rise

The percentage of philosophy BAs awarded to students identifying as Latino or Hispanic rose steadily from 8.3% in 2011 to 14.1% in 2018, closely reflecting a similar rise among Bachelor's recipients overall, from 8.3% to 13.0% across the same period. Among the racial or ethnic groups classified by NCES, only Black or African American are substantially underrepresented in philosophy compared to the proportion among undergraduate degree recipients as a whole: Black students were 5.3% of philosophy BA recipients in 2018, compared to 9.5% of Bachelor's recipients overall.

Latinx students are also on the rise in the other big humanities majors, so in this respect philosophy is not unusual.


4. Why is philosophy bucking the trend of the decline in humanities?

In 2016, 2528 women completed BAs in philosophy. In 2017, it was 2646. In 2018, it was 2768 -- an increase of 9.5% in women philosophy graduates. If we exclude the women, philosophy would have seen a slight decline. There was no comparable increase in the number of women graduating overall or graduating in the other humanities. Indeed, in history, English, and foreign languages the number of women graduates declined.

One possibility -- call me an optimist! -- is that philosophy has become more encouraging, or less discouraging, of women undergraduates, and this is starting to show in the graduation numbers. I will be very curious to run these numbers again in the next several years, to see if the trend continues.

I do feel compelled to add the caveat that the number of women philosophy graduates is still below its peak of 2983 in 2012. The recent increases come in the context of a more general, broad-based decline in philosophy and the other humanities over the past decade. On the other hand, since philosophy graduation rates were relatively high in the early 2010s compared to previous years, maybe it would be expecting a lot to return to those levels.

---------------------------------------------------

Methodological Note:

Data from the NCES IPEDS database. I looked at all U.S. institutions in the IPEDS database, and I included both first and second majors. I used the major classification 38.01 specifically for Philosophy, excluding 38.00, 38.02, and 38.99. Only people who completed the degree are included in the data. "2010" refers to the academic year from 2009-2010, etc. The numbers for years 2010-2016 have changed slightly since my 2017 analysis, which might be due to some minor difference in how I've accessed the data or due to some corrections in the IPEDS database. Gender data start from 2010, which is when NCES reclassified the coding of undergraduate majors. Race/ethnicity data start from 2011, when NCES reclassified the race/ethnicity categories.

[image adapted from the APA's Committee on the Status of Women]

Update Dec 23: Philosophy as a Percentage of Humanities Majors

Thursday, December 19, 2019

Dreidel: A Seemingly Foolish Game That Contains the Moral World in Miniature

[excerpt from A Theory of Jerks and Other Philosophical Misadventures, posted today on the MIT Press Reader]

Superficially, dreidel looks like a simple game of luck, and a badly designed game at that. It lacks balance, clarity, and meaningful strategic choice. From this perspective, its prominence in the modern Hanukkah tradition is puzzling. Why encourage children to spend a holy evening gambling, of all things?

This superficial perspective misses the brilliance of dreidel. Dreidel’s seeming flaws are exactly its virtues. Dreidel is the moral world in miniature.

If you’re unfamiliar with the game, here’s a quick tutorial. You sit in a circle with friends or relatives and take turns spinning a wobbly top, the dreidel. In the center of the circle is a pot of foil-wrapped chocolate coins of varying sizes, to which everyone has contributed from an initial stake of coins they keep in front of them. If, on your turn, the four-sided top lands on the Hebrew letter gimmel, you take the whole pot and everyone needs to contribute again. If it lands on hey, you take half the pot. If it lands on nun, nothing happens. If it lands on shin, you put in one coin. Then the next player spins.

It all sounds very straightforward, until you actually start to play the game. The first odd thing you might notice is that although some of the coins are big and others little, they all count as one coin in the rules of the game. This is inherently unfair, since the big coins contain more chocolate, and you get to eat your stash at the end. To compound the unfairness, there’s never just one dreidel — all players can bring their own — and the dreidels are often biased, favoring different outcomes. (To test this, a few years ago my daughter and I spun a sample of eight dreidels forty times each, recording the outcomes. One particularly cursed dreidel landed on shin an incredible 27 out of 40 times.) It matters a lot which dreidel you spin.
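How unlikely is that cursed dreidel's result by chance? An exact binomial tail probability settles it. Here's a quick sketch in Python, assuming a fair four-sided dreidel lands on shin with probability 1/4:

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# A fair dreidel shows shin 1/4 of the time; the cursed one
# landed on shin 27 times out of 40 spins.
tail = binom_tail(27, 40, 0.25)
```

The probability of 27 or more shins in 40 spins of a fair dreidel is vanishingly small (well under one in a million), so "cursed" seems a fair verdict.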

And the rules are a mess! No one agrees whether you should round up or round down with hey. No one agrees when the game should end or under what conditions, if the pot is low after a hey, everyone should contribute again. No one agrees on how many coins each player should start with or whether you should let people borrow coins if they run out. You could try appealing to various authorities on the internet, but in my experience people prefer to argue and employ varying house rules. Some people hoard their coins and their favorite dreidels. Others share dreidels but not coins. Some people slowly unwrap and eat their coins while playing, then beg and borrow from wealthy neighbors when their luck sours.

Now you can, if you want, always push things to your advantage — always contribute the smallest coins in your stash, always withdraw the largest coins in the pot when you spin hey, insist on always using the “best” dreidel, always argue for rules interpretations in your favor, eat your big coins then use that as a further excuse to contribute only little ones, and so forth. You can do all this without ever breaking the rules, and you’ll probably win the most chocolate as a result.

But here’s the twist and what makes the game so brilliant: The chocolate isn’t very good. After eating a few coins, the pleasure gained from further coins is minimal. As a result, almost all of the children learn that they would rather be kind and generous than hoard the most coins. The pleasure of the chocolate doesn’t outweigh the yucky feeling of being a stingy, argumentative jerk. After a few turns of maybe pushing only small coins into the pot, you decide you should put in a big coin next time, just to be fair to the others and to enjoy being perceived as fair by them.

Of course, it also feels bad always to be the most generous one, always to put in big, take out small, always to let others win the rules arguments, and so forth, to play the sucker or self-sacrificing saint. Dreidel, then, is a practical lesson in discovering the value of fairness both to oneself and to others, in a context in which the rules are unclear, there are norm violations that aren’t rules violations, and both norms and rules are negotiable, varying by occasion — just like life itself, only with mediocre chocolate at stake. I can imagine no better way to spend a holy evening.

[Originally published in the Los Angeles Times, Dec. 12, 2017]

Thursday, December 12, 2019

Argument Contest Deadline Coming December 31st!

If you can write an argument that convinces research participants to donate a surprise bonus of $10 to charity at rates higher than a control group, Fiery Cushman and I will make you famous[1] and pay you $1000[2], and you might transform the practice of the Effective Altruism movement[3]. Whoa!

Biggest effect size wins the contest.

We'd love to have some awesome submissions to run, which might really produce an effect. In other words, your submission!

Details here.

------------------------------------------------------

[1] "Famous" in an extremely narrow circle.

[2] Actually, we'll only pay you $500. The other $500 will go to a charity of your choice.

[3] Probability value of "might" is 0.18%, per Eric's Bayesian credence.

Wednesday, December 11, 2019

Two Kinds of Ethical Thinking?

Yesterday, over at the Blog of the APA, Michael J. Sigrist published a reflection on my work on the not-especially-ethical behavior of ethics professors. The central question is captured in his title: "Why Aren't Ethicists More Ethical?"

Although he has some qualms about my attempts to measure the moral behavior of ethicists (see here for a summary of my measures), Sigrist accepts the conclusion that, overall, professional ethicists do not behave better than comparable non-ethicists. He offers this explanation:

There's a kind of thinking that we do when we are trying to prove something, and then a kind of thinking we do when we are trying to do something or become a certain kind of person -- when we are trying to forgive someone, or be more understanding, or become more confident in ourselves. Becoming a better person relies on thinking of the latter sort, whereas most work in professional ethics -- even in practical ethics -- is exclusive to the former.

The first type of thinking, "trying to prove something", Sigrist characterizes as universalistic and impersonal; the second type of thinking, "trying to do something", he characterizes as emotional, personal, and engaged with the details of ordinary life. He suggests that my work neglects or deprioritizes the latter, more personal, more engaged type of thinking. (I suspect Sigrist wouldn't characterize my work that way if he knew some other things I've written -- but of course there is no obligation for anyone to read my whole corpus.)

The picture Sigrist appears to have in mind is something like this: The typical ethicist has their head in the clouds, thinking about universal principles, while they ignore -- or at least don't apply their philosophical skills to -- the particular moral issues in the world around their feet; and so it is, or should be, unsurprising that their philosophical ethical skills don't improve them morally. This picture resonates, because it has some truth in it, and it fits with common stereotypes about philosophers. If the picture is correct, it would tidily address the otherwise puzzling disconnection between philosophers' great skills at abstract ethical reflection and their not-so-amazing real-world ethical behavior.

However, things are not so neat.

Throughout his post, Sigrist frames his reflections primarily in terms of the contrast between impersonal thinking (about what people in general should do) and personal thinking (about what I in this particular, detailed situation should do). But real, living philosophers do not apply their ethical theories and reasoning skills only to the former; nor do thoughtful people normally engage in personal thinking without also reflecting from time to time on general principles that they think might be true (and indeed that they sometimes try to prove to their interlocutors or themselves, in the process of making ethical decisions). An ethicist might write only about trolley problems and Kant interpretation. But in that ethicist's personal life, when making decisions about what to do, sometimes philosophy will come to mind -- Aristotle's view of courage and friendship, Kant's view of honesty, whether some practical policy would be appropriately universalizable, conflicts between consequentialist vs. deontological principles in harming someone for some greater goal.

A professional ethicist doesn't pass through the front door of their house and forget all of academic philosophy. Philosophical ethics is too richly and obviously connected to the particularities of personal life. Nor is there some kind of starkly different type of "personal" thinking that ordinary people do that avoids appeal to general principles. In thinking about whether to have children, whether to lie about some matter of importance, how much time or money to donate to charities, how much care one owes to a needy parent or sibling in a time of crisis -- in such matters, thoughtful people often do, and should, think not only about the specifics of their situation but also about general principles.

Academic philosophical ethics and ordinary engaged ethical reflection are not radically different cognitive enterprises. They can and should, and in philosophers and philosophically-minded non-philosophers, merge and blend into each other, as we wander back and forth, fruitfully, between the general and the specific. How could it be otherwise?

Sigrist is mistaken. The puzzle remains. We cannot so easily dismiss the challenge that I think my research on ethicists poses to the field. We cannot say, "ah, but of course ethicists behave no differently in their personal lives, because all of their expertise is only relevant to the impersonal and universal". The two kinds of ethical thinking that Sigrist identifies are ends of a continuum that we all regularly traverse, rather than discrete patterns of thinking that are walled off from each other without mutual influence.

In my work and my personal life, I try to make a point of blending the personal with the universal and the everyday with the scholarly, rejecting any sharp distinction between academic and non-academic thinking. This is part of why I write a blog. This is part of the vision behind my recent book. I think Sigrist values this blending too, and means to be critiquing what he sees as its absence in mainstream Anglophone philosophical ethics. Sigrist has only drawn his lines too sharply, offering too simplified a view of the typical ethicist's ways of thinking; and he has mistaken me for an opponent rather than a fellow traveler.

Thursday, December 05, 2019

Self-Knowledge by Looking at Others

I've published quite a lot on people's poor self-knowledge of their own stream of experience (e.g. this and this), and also a bit on our often poor self-knowledge of our attitudes, traits, and moral character. I've increasingly become convinced that an important but relatively neglected source of self-knowledge derives from one's assessment of the outside world -- especially one's assessment of other people.

I am unaware of empirical evidence of the effectiveness of the sort of thing I have in mind (I welcome suggestions!), but here's the intuitive case.

When I'm feeling grumpy, for example, that grumpiness is almost invisible to me. In fact, to say that grumpiness is a feeling doesn't quite get things right: There isn't, I suspect, a way that it feels from the inside to be in a grumpy mood. Grumpiness, rather, is a disposition to respond to the world in a certain way; and one can have that disposition while one feels, inside, rather neutral or even happy.

When I come home from work, stepping through the front door, I usually feel (I think) neutral to positive. Then I see my wife Pauline and daughter Kate -- and how I evaluate them reveals whether in fact I came through that door grumpy. Suppose the first thing out of Pauline's mouth when I come through the door is, "Hi, Honey! Where did you leave the keys for the van?" I could see this as an annoying way of being greeted, I could take it neutrally in stride, or I could appreciate how Pauline is still juggling chores even as I come home ready to relax. As I strode through that door, I was already disposed to react one way or another to stimuli that might or might not be interpreted as annoying; but that mood-constituting disposition didn't reveal itself until I actually encountered my family. Casual introspection of my feelings as I approached the front door might not have revealed this disposition to me in any reliable way.

Even after I react grumpily or not, I tend to lack self-knowledge. If I react with annoyance to a small request, my first instinct is to turn the blame outward: It is the request that is annoying. That's just a fact about the world! I either ignore my mood or blame Pauline for it. My annoyed reaction seems to me, in the moment, to be the appropriate response to the objective annoyingness of the situation.

Another example: Generally, on my ten-minute drive into work, I listen to classic rock or alternative rock. Some mornings, every song seems trite and bad, and I cycle through the stations disappointed that there's nothing good to listen to. Other mornings, I'm like "Whoa, this Billy Idol song is such a classic!" Only slowly have I learned that this probably says more about my mood than about the real quality of the songs that are either pleasing or displeasing me. Introspectively, before I turn on the radio and notice this pattern of reactions, there's not much I can discover that clues me in to my mood. Maybe I could introspect better and find that mood in there somewhere, but over the years I've become convinced that my song assessment is a better mood thermometer, now that I've learned to think of it that way.

One more example: Elsewhere, I've suggested that probably the best way to discover whether one is a jerk is not by introspective reflection ("hm, how much of a jerk am I?") but rather by noticing whether one regularly sees the world through "jerk goggles". Everywhere you turn, are you surrounded by fools and losers, faceless schmoes, boring nonentities? Are you the only reasonable, competent, and interesting person to be found? If so....

As I was drafting this post yesterday, Pauline interrupted me to ask if I wanted to RSVP to a Christmas music singalong in a few weeks. Ugh! How utterly annoying I felt that interruption to be! And then my daughter's phone, plugged into the computer there, wouldn't stop buzzing with text messages. Grrr. Before those interruptions, I would probably have judged that I was in a middling-to-good mood, enjoying being in the flow of drafting out this post. Of course, as those interruptions happened, I thought of how suitable they were to the topic of this post (and indeed I drafted out this very paragraph in response). Now, a day later, my mood is better, and the whole thing strikes me as such a lovely coincidence!

If I sit too long at my desk at work, my energy level falls. Every couple of hours, I try to get up and stroll around campus a bit. Doing so, I can judge my mood by noticing others' faces. If everyone looks beautiful to me, but in a kind of distant, unapproachable way, I am feeling depressed or blue. Every wart or seeming flaw manifests a beautiful uniqueness that I will never know. (Does this match others' phenomenology of depression? Before having noticed this pattern in my reactions to people, I might not have thought this would be how depression feels.) If I am grumpy, others are annoying obstacles. If I am soaring high, others all look like potential friends.

My mood will change as I walk, my energy rising. By the time I loop back around to the Humanities and Social Sciences building, the crowds of students look different than they did when I first stepped out of my office. It seems like they have changed, but of course I'm the one who has changed.


[image source]

Tuesday, November 26, 2019

Applying to PhD Programs in Philosophy, Part V: Statement of Purpose

Part I: Should You Apply, and Where?

Part II: Grades, Classes, and Institution of Origin

Part III: Letters of Recommendation

Part IV: Writing Sample

Old Series from 2007

--------------------------------------------------------

Applying to PhD Programs in Philosophy
Part V: Statement of Purpose

Statements of purpose, sometimes also called personal statements, are difficult to write. It's hard to know even what a "Statement of Purpose" is. Your plan is to go to graduate school, get a PhD, and become a professor. Duh! Are you supposed to try to convince the committee that you want to become a professor more than the other applicants do? That philosophy is written in your genes? That you have some profound vision for the transformation of philosophy or philosophy education?

You've had no practice writing this sort of thing. Odds are, you'll do it badly on your first try. There are so many different ways to go wrong! Give yourself plenty of time and seek feedback from at least two of your letter writers. Plan to rewrite from scratch at least once.

Some Things Not to Do

* Don't wax poetic. Don't get corny. Avoid purple prose. "Ever since I was eight, I've pondered the deep questions of life." Nope. "Philosophy is the queen of the disciplines, delving to the heart of it all." Nope. "The Owl of Minerva has sung to me and the sage of Königsberg whispers in my sleep: Not to philosophize is to die." If you are tempted to write sentences like that, please do so in longhand, with golden ink, on expensive stationery which you then burn without telling anyone.

* Don't turn your statement into a sales pitch. Ignore all advice from friends and acquaintances in the business world. Don't sell yourself. You don't want to seem like a BS-ing huckster. You may still (optionally!) mention a few of your accomplishments, in a dry, factual way, but to be overly enthusiastic about accomplishments that are rather small in the overall scheme of academia is somewhat less professional than you ideally want to seem. If you're already thinking like a graduate student at a good PhD program, you won't be too impressed with yourself for having published in the Kansas State Undergraduate Philosophy Journal (even if that is, in context, a notable achievement). Trust your letter writers. If you've armed them with a brag sheet, the important accomplishments will come across in your file. Let your letter writers do the pitch. It comes across so much better when someone else toots your horn than when you yourself do!

* Don't be grandiose. Don't say that you plan to revolutionize philosophy, reinvigorate X, rediscover Y, finally find the answer to timeless question Z, or become a professor at an elite department. Do you already know that you will be a more eminent philosopher than the people on your admissions committee? You're aiming to be their student, not the next Wittgenstein -- or at least that's how you want to come across. You want to seem modest, humble, straightforward. If necessary, consult David Hume or Benjamin Franklin for inspiration on the advantages of false humility.

* If you are applying to a program in which you are expected to do coursework for a couple of years before starting your dissertation -- that is, to U.S.-style programs rather than British-style programs -- then I recommend against taking stands on particular substantive philosophical issues. In the eyes of the admissions committee, you probably aren't far enough in your education to adopt hard philosophical commitments. They want you to come to their program with an open mind. Saying "I would like to defend Davidson's view that genuine belief is limited to language-speaking creatures" comes across a bit too strong. Similarly, "I showed in my honors thesis that Davidson's view...". If only, in philosophy, honors theses ever really showed anything! ("I argued" would be okay.) Better: "My central interests are philosophy of mind and philosophy of language. I am particularly interested in the intersection of the two, for example in Davidson's argument that only language-speaking creatures can have beliefs in the full and proper sense of 'belief'."

* Don't tell the story of how you came to be interested in philosophy. It's not really relevant.

* Ignore the administrative boilerplate. The application form might have a prompt like this: "Please upload a one page Statement of Purpose. What are your goals and objectives for pursuing this graduate degree? What are your qualifications and indicators of success in this endeavor? Please include career objectives that obtaining this degree will provide." This was written eighteen years ago by the Associate Dean for Graduate Education in the College of Letters and Sciences, who earned his PhD in Industrial Engineering in 1989. The actual admissions committee that makes the decisions is a bunch of nerdy philosophers who probably roll their eyes at admin-speak at least as much as you do. There's no need to tailor your letter to this sort of prompt.

* Also, don't follow links to well-meaning general advice from academic non-philosophers. I'm sure you didn't click those links! Good! If you had, you'd see that they advise you, among other things, to tell your personal history and to sell yourself as a good fit for the program. Maybe that works for biology PhD admissions, where it could make good sense to summarize your laboratory experience and fieldwork?

What to Write

So how do you fill up that awful, blank page? In 2012, I solicited sample statements of purpose from successful PhD applicants. About a dozen readers shared their statements, and from among those I chose three I thought were good and also diverse enough to illustrate the range of possibilities. Follow the links below to view the statements.

  • Statement A was written by Allison Glasscock, who was admitted to Chicago, Cornell, Penn, Stanford, Toronto, and Yale.
  • Statement B was written by a student who prefers to remain anonymous, who was admitted to Berkeley, Missouri, UMass Amherst, Virginia, Wash U. in St. Louis, and Wisconsin.
  • Statement C was written by another student who prefers to remain anonymous, who was admitted to Connecticut and Indiana.

At the core of each statement is a cool, professional description of the student's areas of interest. Notice that all of these descriptions contain enough detail to give a flavor of the student's interests. This helps the admissions committee assess the student's likely fit with the teaching strengths of the department. Each description also displays the student's knowledge of the areas in question by mentioning figures or issues that would probably not be known to the average undergraduate. This helps to convey philosophical maturity and preparedness for graduate school. However, I would recommend against going too far with the technicalities or trying too hard to be cutting edge, lest the statement come across as phony, desperate, or a fog of jargon. These sample statements get the balance about right.

Each of the sample statements also adds something else, in addition to a description of areas of interest, but it's not really necessary to add anything else. Statement B starts with pretty much the perfect philosophy application joke. (Sorry, now it's taken!) Statement C concludes with a paragraph describing the applicant's involvement with his school's philosophy club. Statement C is topically structured but salted with information about coursework relevant to the applicant's interests, while Statement B is topically structured and minimalist, and Statement A is autobiographically structured with considerable detail. Any of these approaches is fine, though the topical structure is more common and raises fewer challenges about finding the right tone.

Statement A concludes with a paragraph specifically tailored for Yale. Thus we come to the question of...

Tailoring Statements to Particular Programs

It's not necessary, but you can adjust your statement for individual schools. If there is some particular reason you find a school attractive, there's no harm in mentioning that. Committees think about fit between a student's interests and the strengths of the department and about what faculty could potentially be advisors. You can help the committee on this issue if you like, though normally it will be obvious from your description of your areas of interest.

For example, if you wish, you can mention 2-3 professors whose work especially interests you. But there are risks here, so be careful. Mentioning particular professors can backfire if you mischaracterize the professors, or if they don't match your areas of stated interest, or if you omit the professor in the department whose interests seem to the committee to be the closest match to your own.

Similarly, you can mention general strengths of the school. But, again, if you do this, be sure to get it right! If someone applies to UCR citing our strength in medieval philosophy, we know the person hasn't paid attention to what our department is good at. No one here works on medieval philosophy. But if you want to go to a school that has strengths in both mainstream "analytic" philosophy and 19th-20th century "Continental" philosophy, that's something we at UCR do think of as a strong point of our program.

I'm not sure I'd recommend changing your stated areas of interest to suit the schools, though I see how that might be strategic. There are two risks in changing your stated areas of interest: One is that if you change them too much, there might be some discord between your statement of purpose and what your letter writers say about you. Another is that large changes might raise questions about your choice of letter writers. If you say your central passion is ancient philosophy, and your only ancient philosophy class was with Prof. Platophile, why hasn't Prof. Platophile written one of your letters? That's the type of oddness that might make a committee hesitate about an otherwise strong file.

Some people mention personal reasons for wanting to be in a particular geographical area (near family, etc.). Although this can be good because it can make it seem more likely that you would accept an offer of admission, I'd avoid it since, in order to have a good chance of landing a tenure-track job, graduating PhD recipients typically need to be flexible about location. Also, it might be perceived as indicating that a career in philosophy is not your first priority.

Explaining Weaknesses in Your File

Although hopefully this won't be necessary, a statement of purpose can also be an opportunity to explain weaknesses or oddities in your file -- though letter writers can also do this, often more credibly. For example, if one quarter you did badly because your health was poor, you can mention that fact. If you changed undergraduate institutions (not necessarily a weakness if the second school is the more prestigious), you can briefly explain why. If you don't have a letter from your thesis advisor because they died, you can point that out.

Statements of Personal History

Some schools, like UCR, also allow applicants to submit "statements of personal history", in which applicants can indicate disadvantages or obstacles they have overcome or otherwise attempt to paint an appealing picture of themselves. The higher-level U.C. system administration encourages such statements, I believe, because although state law prohibits the University of California from favoring applicants on the basis of ethnicity or gender, state law does allow admissions committees to take into account any hardships that applicants have overcome -- which can include hardships due to poverty, disability, or other obstacles, including hardships deriving from ethnicity or gender.

Different committee members react rather differently to such statements, I suspect. I find them unhelpful for the most part. And yet I also think that some people do, because of their backgrounds, deserve special consideration. Unless you have a sure hand with tone, though, I would encourage a dry, minimal approach to this part of the application. It's better to skip it entirely than to concoct a story that looks like special pleading from a rather ordinary complement of hardships. This part of the application also seems to beg for the corniness I warned against above: "Ever since I was eight, I've pondered the deep questions of life...". I see how such corniness is tempting if the only alternative seems to be to leave an important part of the application blank. As a committee member, I usually just skim and forget the statements of personal history, unless something is particularly striking, or unless it seems like the applicant might contribute in an important way to the diversity of the entering class.

For further advice on statements of purpose, see this discussion on Leiter Reports – particularly the discussion of the difference between U.S. and U.K. statements of purpose.

--------------------------------------------------

Applying to PhD Programs in Philosophy, Part VI: GRE Scores and Other Things

[image source]

Sunday, November 17, 2019

We Might Soon Build AI Who Deserve Rights

Talk for Notre Dame, November 19:

Abstract: Within a few decades, we will likely create AI that a substantial proportion of people believe, whether rightly or wrongly, deserve human-like rights. Given the chaotic state of consciousness science, it will be genuinely difficult to know whether and when machines that seem to deserve human-like moral status actually do deserve human-like moral status. This creates a dilemma: Either give such ambiguous machines human-like rights or don't. Both options are ethically risky. To give machines rights that they don't deserve will mean sometimes sacrificing human lives for the benefit of empty shells. Conversely, however, failing to give rights to machines that do deserve rights will mean perpetrating the moral equivalent of slavery and murder. One or another of these ethical disasters is probably in our future.


[An AI slave, from Black Mirror's White Christmas episode]

The first half of the talk mostly rehearses ideas from my articles with Mara Garza here and here. If we someday build AIs that are fully conscious, just like us, and have all the same kinds of psychological and social features that human beings do, in virtue of which human beings deserve rights, those AIs would deserve the same rights. In fact, we would owe them a special quasi-parental duty of care, due to the fact that we will have been responsible for their existence and probably to a substantial extent for their happy or miserable condition.

Selections from the second half of the talk

So here’s what's going to happen:

We will create more and more sophisticated AIs. At some point we will create AIs that some people think are genuinely conscious and genuinely deserve rights. We are already near that threshold. There’s already a Robot Rights movement. There’s already a society modeled on the famous animal rights organization PETA (People for the Ethical Treatment of Animals), called People for the Ethical Treatment of Reinforcement Learners. These are currently fringe movements. But as AI gets cuter and more sophisticated, and as chatbots start sounding more and more like normal humans, passing more and more difficult versions of the Turing Test, these movements will gain steam among people with liberal views of consciousness. At some point, people will demand serious rights for some AI systems. The AI systems themselves, if they are capable of speech or speechlike outputs, might also demand or seem to demand rights.

Let me be clear: This will occur whether or not these systems really are conscious. Even if you’re very conservative in your view about what sorts of systems would be conscious, you should, I think, acknowledge the likelihood that if technological development continues on its current trajectory there will eventually be groups of people who assert the need for us to give AI systems human-like moral consideration.

And then we’ll need a good, scientifically justified consensus theory of consciousness to sort it out. Is this system that says, “Hey, I’m conscious, just like you!” really conscious, just like you? Or is it just some empty algorithm, no more conscious than a toaster?

Here’s my conjecture: We will face this social problem before we succeed in developing the good, scientifically justified consensus theory of consciousness that we need to solve the problem. We will then have machines whose moral status is unclear. Maybe they do deserve rights. Maybe they really are conscious like us. Or maybe they don’t. We won’t know.

And then, if we don’t know, we face quite a terrible dilemma.

If we don’t give these machines rights, and if it turns out that the machines really do deserve rights, then we will be perpetrating slavery and murder every time we assign a task and delete a program.

So it might seem safer, if there is reasonable doubt, to assign rights to machines. But on reflection, this is not so safe. We want to be able to turn off our machines if we need to turn them off. Futurists like Nick Bostrom have emphasized, rightly in my view, the potential risks of our letting superintelligent machines loose into the world. These risks are greatly amplified if we too casually decide that such machines deserve rights and that deleting them is murder. Giving an entity rights entails sometimes sacrificing others’ interests for it. Suppose there’s a terrible fire. In one room there are six robots who might or might not be conscious. In another room there are five humans, who are definitely conscious. You can only save one group; the other group will die. If we give robots who might be conscious equal rights with humans who definitely are conscious, then we ought to go save the six robots and let the five humans die. If it turns out that the robots really, underneath it all, are just toasters, then that’s a tragedy. Let’s not too casually assign humanlike rights to AIs!

Unless there’s either some astounding saltation in the science of consciousness or some substantial deceleration in the progress of AI technology, it’s likely that we’ll face this dilemma. Either deny robots rights and risk perpetrating a Holocaust against them, or give robots rights and risk sacrificing real human beings for the benefit of mere empty machines.

This may seem bad enough, but the problem is even worse than I, in my sunny optimism, have so far let on. I’ve assumed that AI systems are relevant targets of moral concern if they’re human-grade – that is, if they are like us in their conscious capacities. But the odds of creating only human-grade AI are slim. In addition to the kind of AI we currently have, which I assume doesn’t have any serious rights or moral status, there are, I think, four broad moral categories into which future AI might fall: animal-grade, human-grade, superhuman, and divergent. I’ve only discussed human-grade AI so far, but each of these four classes raises puzzles.

Animal-grade AI. Not only human beings deserve moral consideration. So also do dogs, apes, and dolphins. Animal protection regulations apply to all vertebrates: Scientists can’t treat even frogs and lizards more roughly than necessary. The philosopher John Basl has argued that AI systems with cognitive capacities similar to vertebrates ought also to receive similar protections. Just as we shouldn’t torture and sacrifice a mouse without excellent reason, so also, according to Basl, we shouldn’t abuse and delete animal-grade AI. Basl has proposed that we form committees, modeled on university Animal Care and Use Committees, to evaluate cutting-edge AI research to monitor when we might be starting to cross this line.

Even if you think human-grade AI is decades away, it seems reasonable, given the current chaos in consciousness studies, to wonder whether animal-grade consciousness might be around the corner. I myself have no idea if animal-grade AI is right around the corner or if it’s far away in the almost impossible future. And I think you have no idea either.

Superhuman AI. Superhuman AI, as I’m defining it here, is AI who has all of the features of human beings in virtue of which we deserve moral consideration but who also has some potentially morally important features far in excess of the human, raising the question of whether such AI might deserve more moral consideration than human beings.

There aren’t a whole lot of philosophers who are simple utilitarians, but let’s illustrate the issue using utilitarianism as an example. According to simple utilitarianism, we morally ought to do what maximizes the overall balance of pleasure to suffering in the world. Now let’s suppose we can create AI that’s genuinely capable of pleasure and suffering. I don’t know what it will take to do that – but not knowing is part of my point here. Let’s just suppose. Now if we can create such AI, then it might also be possible to create AI that is capable of much, much more pleasure than a human being is capable of. Take the maximum pleasure you have ever felt in your life over the course of one minute: call that amount of pleasure X. This AI is capable of feeling a billion times more pleasure than X in the space of that same minute. It’s a superpleasure machine!

If morality really demands that we should maximize the amount of pleasure in the world, it would thereby demand, or seem to demand, that we create as many of these superpleasure machines as we possibly can. Maybe we even ought to immiserate and destroy ourselves to do so, if enough AI pleasure is created as a result.

Even if you think pleasure isn’t everything – surely it’s something. If someday we could create superpleasure machines, maybe we morally ought to make as many as we can reasonably manage? Think of all the joy we will be bringing into the world! Or is there something too weird about that?

I’ve put this point in terms of pleasure – but whatever the source of value in human life is, whatever it is that makes us so awesomely special that we deserve the highest level of moral consideration – unless maybe we go theological and appeal to our status as God’s creations – whatever it is, it seems possible in principle that we could create that same thing in machines, in much larger quantities. We love our rationality, our freedom, our individuality, our independence, our ability to value things, our ability to participate in moral communities, our capacity for love and respect – there are lots of wonderful things about us! What if we were to design machines that somehow had a lot more of these things than we ourselves do?

We humans might not be the pinnacle. And if not, should we bow out, allowing our interests and maybe our whole species to be sacrificed for something greater? As much as I love humanity, under certain conditions I’m inclined to think the answer should probably be yes. I’m not sure what those conditions would be!

Divergent AI. The most puzzling case, I think, as well as the most likely, is divergent AI. Divergent AI would have human or superhuman levels of some features that we tend to regard as important to moral status but subhuman levels of other features that we tend to regard as important to moral status. For example, it might be possible to design AI with immense theoretical and practical intelligence but with no capacity for genuine joy or suffering. Such AI might have conscious experiences with little or no emotional valence. Just as we can consciously think to ourselves, without much emotional valence, there’s a mountain over there and a river over there, or the best way to grandma’s house at rush hour is down Maple Street, so this divergent AI could have conscious thoughts like that. But it would never feel wow, yippee! And it would never feel crushingly disappointed, or bored, or depressed. It isn’t clear what the moral status of such an entity would be: On some moral theories, it would deserve human-grade rights; on other theories it might not matter how we treat it.

Or consider the converse: a superpleasure machine but one with little or no capacity for rational thought. It’s like one giant, irrational orgasm all day long. Would it be great to make such things and terrible to destroy them, or is such irrational pleasure not really something worth much in the moral calculus?

Or consider a third type of divergence, what I’ve elsewhere called fission-fusion monsters. A fission-fusion monster is an entity that can divide and merge at will. It starts, perhaps, as basically a human-grade AI. But when it wants it can split into a million descendants, each of whom inherits all of the capacities, memories, plans, and preferences of the original AI. These million descendants can then go about their business, doing their independent things for a while, and then if they want, merge back together again into a unified whole, remembering what each individual did during its period of individuality. Other parts might not merge back but choose instead to remain as independent individuals, perhaps eventually coming to feel independent enough from the original to see the prospect of merging as something similar to death.

Without getting into details here, a fission-fusion monster would risk breaking our concept of individual rights – such as one person, one vote. The idea of individual rights rests fundamentally upon the idea of people as individuals – individuals who live in a single body for a while and then die, with no prospect of splitting or merging. What would happen to our concept of individual rights if we were to share the planet with entities for which our accustomed model of individuality is radically false?

Thursday, November 14, 2019

Who Cares about Happiness?

[talk to be given at UC Riverside's Homecoming celebration, November 16, on the theme of happiness]

There are several different ways of thinking about happiness. I want to focus on just one of those ways. This way of thinking about happiness is sometimes called “hedonic”. That label can be misleading if you’re not used to it because it kind of sounds like hedonism, which kind of sounds like wild sex parties. The hedonic account of happiness, though, is probably closest to most people’s ordinary understanding of happiness. On this account, to be happy is to have lots of positive emotions and not too many negative emotions. To be happy is to regularly feel joy, delight, and pleasure, to feel sometimes maybe a pleasant tranquility and sometimes maybe outright exuberance, to have lots of good feelings about your life and your situation and what’s going on around you – and at the same time not to have too many emotions like sadness, fear, anxiety, anger, disgust, displeasure, annoyance, and frustration, what we think of as “negative emotions”. To be happy, on this “hedonic” account, is to be in an overall positive emotional state of mind.

I wouldn’t want to deny that it’s a good thing to be happy in this sense. It is, for the most part, a good thing. But sometimes people say extreme things about happiness – like that happiness is the most important thing, or that all people really want is to be happy, or as a parent that the main thing you want for your children is that they be happy, or that everything everyone does is motivated by some deep-down desire to maximize their happiness. And that’s not right at all. We actually don’t care about our hedonic happiness very much. Not really. Not when you think about it. It’s kind of important, but not really that big in the scheme of things.

Consider an extreme thought experiment of the sort that philosophers like me enjoy bothering people with. Suppose we somehow found a way to turn the entire Solar System into one absolutely enormous machine or organism that experienced nothing but outrageous amounts of pleasure all the time. Every particle of matter that we have, we feed into this giant thing – let’s call it the orgasmatron. We create the most extreme, most consistent, most intense conglomeration of pure ecstatic joyfulness that it is possible to construct. Wow! Now that would be pretty amazing. One huge, pulsing Solar-System-sized orgasm.

Will this thing need to remember the existence of humanity? Will it need to have any appreciation of art or beauty? Will it have to have any ethics, or any love, or any sociality, or knowledge of history or science – will it need any higher cognition at all? Maybe not. I mean, higher cognition is not what orgasm is mostly about. If you think that the thing that matters most in the universe is positive emotions, then you might think that the best thing that could happen to the future of the Solar System would be the creation of this giant orgasmatron. The human project would be complete. The world would have reached its pinnacle, and nothing else would really matter!

[not the orgasmatron I have in mind]

Now here’s my guess. Some of you will think, yeah, that’s right. If everything becomes a giant orgasmatron, nothing could be more awesome, that’s totally where we should go if we can. But I’ll guess that most of you think that something important would be lost. Positive emotion isn’t the only thing that matters. We don’t want the world to lose its art, and its beauty, and its scientific knowledge, and the rich complexity of human relationships. If everything got fed into this orgasmatron it would be a shame. We’d have lost something really important.

Now let me tell you a story. It’s from my latest book, A Theory of Jerks and Other Philosophical Misadventures, hot off the press this month.

Back in the 1990s, when I was a graduate student, my girlfriend Kim asked me what, of all things, I most enjoyed doing. Skiing, I answered. I was thinking of those moments breathing the cold, clean air, relishing the mountain view, then carving a steep, lonely slope. I’d done quite a bit of that with my mom when I was a teenager. But how long had it been since I’d gone skiing? Maybe three years? Grad school kept me busy and I now had other priorities for my winter breaks. Kim suggested that if it had been three years since I’d done what I most enjoyed doing, then maybe I wasn’t living wisely.

Well, what, I asked, did she most enjoy? Getting massages, she said. Now, the two of us had a deal at the time: If one gave the other a massage, the recipient would owe a massage in return the next day. We exchanged massages occasionally, but not often, maybe once every few weeks. I pointed out that she, too, might not be perfectly rational: She could easily get much more of what she most enjoyed simply by giving me more massages. Surely the displeasure of massaging my back couldn’t outweigh the pleasure of the thing she most enjoyed in the world? Or was pleasure for her such a tepid thing that even the greatest pleasure she knew was hardly worth getting?

It used to be a truism in Western (especially British) philosophy that people sought pleasure and avoided pain. A few old-school psychological hedonists, like Jeremy Bentham, went so far as to say that that was all that motivated us. I’d guess quite differently: Although pain is moderately motivating, pleasure motivates us very little. What motivates us more are outward goals, especially socially approved goals — raising a family, building a career, winning the approval of peers — and we will suffer immensely, if necessary, for these things. Pleasure might bubble up as we progress toward these goals, but that’s a bonus and side effect, not the motivating purpose, and summed across the whole, the displeasure might vastly outweigh the pleasure. Some evidence suggests, for example, that raising a child is probably for most people a hedonic net negative, adding stress, sleep deprivation, and unpleasant chores, as well as crowding out the pleasures that childless adults regularly enjoy. At least according to some research, the odds are that choosing to raise a child will make you less happy.

Have you ever watched a teenager play a challenging video game? Frustration, failure, frustration, failure, slapping the console, grimacing, swearing, more frustration, more failure—then finally, woo-hoo! The sum over time has to be negative, yet they’re back again to play the next game. For most of us, biological drives and addictions, personal or socially approved goals, concern for loved ones, habits and obligations — all appear to be better motivators than gaining pleasure, which we mostly seem to save for the little bit of free time left over. And to me, this is quite right and appropriate. I like pleasure, sure. I like joy. But that’s not what I’m after. It’s a side effect, I hope, of the things I really care about. I’d guess this is true of you too.

If maximizing pleasure is central to living well and improving the world, we’re going about it entirely the wrong way. Do you really want to maximize pleasure? I doubt it. Me, I’d rather write some good philosophy and raise my kids.

ETA, Nov 17:

In audience discussion and in social media, several people have pointed out that although I start by talking about a wide range of emotional states (tranquility, delight, having good feelings about your life situation), in the second half I focus exclusively on pleasure. The case of pleasure is easiest to discuss, because the more complex emotional states have more representational or world-involving components. On a proper hedonic view, however, the value of those more complex states rests exclusively on the emotional valence, or at most on the emotional valence plus possibly-false representational content -- on, for example, whether you have the feeling that life is going well, rather than on whether it's really going well. All the same observations apply: We do and should care about whether our lives are actually going well, much more than we care about whether we have the emotional feeling of their going well.

Tuesday, November 05, 2019

A Theory of Jerks and Other Philosophical Misadventures

... released today. *Confetti!*

Available from:

MIT Press, Amazon, B&N, or (I hope!) your local independent bookseller.

Some initial reviews and discussions.

------------------------------------------------

Preface

I enjoy writing short philosophical reflections for broad audiences. Evidently, I enjoy this immensely: Since 2006, I’ve written more than a thousand such pieces, published mostly on my blog The Splintered Mind, but also in the Los Angeles Times, Aeon, and elsewhere. This book contains fifty-eight of my favorites, revised and updated.

The topics range widely -- from moral psychology and the ethics of the game of dreidel to multiverse theory, speculative philosophy of consciousness, and the apparent foolishness of Immanuel Kant. There is no unifying thesis.

Maybe, however, there is a unifying theme. The human intellect has a ragged edge, where it begins to turn against itself, casting doubt on itself or finding itself lost among seemingly improbable conclusions. We can reach this ragged edge quickly. Sometimes, all it takes to remind us of our limits is an eight-hundred-word blog post. Playing at this ragged edge, where I no longer know quite what to think or how to think about it, is my idea of fun.

Given the human propensity for rationalization and self-deception, when I disapprove of others, how do I know that I'm not the one who is being a jerk? Given that all our intuitive, philosophical, and scientific knowledge of the mind has been built on a narrow range of cases, how much confidence can we have in our conclusions about the strange new possibilities that are likely to open up in the near future of artificial intelligence? Speculative cosmology at once poses the (literally) biggest questions that we can ask about the universe and reveals possibilities that threaten to undermine our ability to answer those same questions. The history of philosophy is humbling when we see how badly wrong previous thinkers have been, despite their intellectual skills and confidence.

Not all of my posts fit this theme. It's also fun to use the once-forbidden word "fuck" over and over again in a chapter about profanity. And I wanted to share some reminiscences about how my father saw the world -- especially since in some ways I prefer his optimistic and proactive vision to my own less hopeful skepticism. Other of my blog posts I just liked or wanted to share for other reasons. A few are short fictions.

It would be an unusual reader who enjoyed every chapter. I hope you'll skip anything you find boring. The chapters are all freestanding. Please don't just start reading on page 1 and then try to slog along through everything sequentially out of some misplaced sense of duty! Trust your sense of fun (chapter 47). Read only the chapters that appeal to you, in any order you like.

Riverside, California, Earth (I hope)
October 25, 2018

Friday, November 01, 2019

How Mengzi Came up with Something Better Than the Golden Rule

[an edited excerpt from my forthcoming book, A Theory of Jerks and Other Philosophical Misadventures]

There’s something I don’t like about the ‘Golden Rule’, the admonition to do unto others as you would have others do unto you. Consider this passage from the ancient Chinese philosopher Mengzi (Mencius):

That which people are capable of without learning is their genuine capability. That which they know without pondering is their genuine knowledge. Among babes in arms there are none that do not know to love their parents. When they grow older, there are none that do not know to revere their elder brothers. Treating one’s parents as parents is benevolence. Revering one’s elders is righteousness. There is nothing else to do but extend these to the world.

One thing I like about the passage is that it assumes love and reverence for one’s family as a given, rather than as a special achievement. It portrays moral development simply as a matter of extending that natural love and reverence more widely.

In another passage, Mengzi notes the kindness that the vicious tyrant King Xuan exhibits in saving a frightened ox from slaughter, and he urges the king to extend similar kindness to the people of his kingdom. Such extension, Mengzi says, is a matter of ‘weighing’ things correctly – a matter of treating similar things similarly, and not overvaluing what merely happens to be nearby. If you have pity for an innocent ox being led to slaughter, you ought to have similar pity for the innocent people dying in your streets and on your battlefields, despite their invisibility beyond your beautiful palace walls.

Mengzian extension starts from the assumption that you are already concerned about nearby others, and takes the challenge to be extending that concern beyond a narrow circle. The Golden Rule works differently – and so too the common advice to imagine yourself in someone else’s shoes. In contrast with Mengzian extension, Golden Rule/others’ shoes advice assumes self-interest as the starting point, and implicitly treats overcoming egoistic selfishness as the main cognitive and moral challenge.

Maybe we can model Golden Rule/others’ shoes thinking like this:

  1. If I were in the situation of person x, I would want to be treated according to principle p.
  2. Golden Rule: do unto others as you would have others do unto you.
  3. Thus, I will treat person x according to principle p.

And maybe we can model Mengzian extension like this:

  1. I care about person y and want to treat that person according to principle p.
  2. Person x, though perhaps more distant, is relevantly similar.
  3. Thus, I will treat person x according to principle p.

There will be other more careful and detailed formulations, but this sketch captures the central difference between these two approaches to moral cognition. Mengzian extension models general moral concern on the natural concern we already have for people close to us, while the Golden Rule models general moral concern on concern for oneself.

I like Mengzian extension better for three reasons. First, Mengzian extension is more psychologically plausible as a model of moral development. People do, naturally, have concern and compassion for others around them. Explicit exhortations aren’t needed to produce this natural concern and compassion, and these natural reactions are likely to be the main seed from which mature moral cognition grows. Our moral reactions to vivid, nearby cases become the bases for more general principles and policies. If you need to reason or analogise your way into concern even for close family members, you’re already in deep moral trouble.

Second, Mengzian extension is less ambitious – in a good way. The Golden Rule imagines a leap from self-interest to generalised good treatment of others. This might be excellent and helpful advice, perhaps especially for people who are already concerned about others and thinking about how to implement that concern. But Mengzian extension has the advantage of starting the cognitive project much nearer the target, requiring less of a leap. Self-to-other is a huge moral and ontological divide. Family-to-neighbour, neighbour-to-fellow citizen – that’s much less of a divide.

Third, you can turn Mengzian extension back on yourself, if you are one of those people who has trouble standing up for your own interests – if you’re the type of person who is excessively hard on yourself or who tends to defer a bit too much to others. You would want to stand up for your loved ones and help them flourish. Apply Mengzian extension, and offer the same kindness to yourself. If you’d want your father to be able to take a vacation, realise that you probably deserve a vacation too. If you wouldn’t want your sister to be insulted by her spouse in public, realise that you too shouldn’t have to suffer that indignity.

Although Mengzi and the 18th-century French philosopher Jean-Jacques Rousseau both endorse mottoes standardly translated as ‘human nature is good’ and have views that are similar in important ways, this is one difference between them. In both Emile (1762) and Discourse on Inequality (1755), Rousseau emphasises self-concern as the root of moral development, making pity and compassion for others secondary and derivative. He endorses the foundational importance of the Golden Rule, concluding that ‘love of men derived from love of self is the principle of human justice’.

This difference between Mengzi and Rousseau is not a general difference between East and West. Confucius, for example, endorses something like the Golden Rule in the Analects: ‘Do not impose on others what you yourself do not desire.’ Mozi and Xunzi, also writing in China in the same period, imagine people acting mostly or entirely selfishly until society artificially imposes its regulations, and so they see the enforcement of rules rather than Mengzian extension as the foundation of moral development. Moral extension is thus specifically Mengzian rather than generally Chinese.

Care about me not because you can imagine what you would selfishly want if you were me. Care about me because you see how I am not really so different from others you already love.

This is an edited extract from ‘A Theory of Jerks and Other Philosophical Misadventures’ © 2019 by Eric Schwitzgebel, published by MIT Press.

Eric Schwitzgebel

This article was originally published at Aeon and has been republished under Creative Commons.

Thursday, October 31, 2019

Applying to PhD Programs in Philosophy, Part IV: Writing Sample

Part I: Should You Apply, and Where?

Part II: Grades, Classes, and Institution of Origin

Part III: Letters of Recommendation

--------------------------------------------------------

Applying to PhD Programs in Philosophy
PART IV: Writing Samples

[Probably not your writing sample]

Do Committees Read the Samples?

Applicants sometimes doubt that admissions committees (composed of professors in the department you're applying to) actually do read the writing samples, especially at the most prestigious schools. It's hard to imagine, say, John Searle carefully working through that essay on Aristotle you wrote for Philosophy 183! However, my experience is that the writing samples are read. For example, back when I visited U.C. Berkeley as an applicant in 1991 after having been admitted, I discussed my writing sample in detail with one member of the admissions committee, who convincingly assured me that the committee read all plausible applicants' samples. She said they were the single most important part of the application. Since that time, other professors at other elite PhD programs in philosophy have continued to assure me that they do carefully read and care about the writing samples. At U.C. Riverside, where I sometimes serve on graduate admissions, every writing sample is read by at least two members of the admissions committee.

How conscientiously they are read is another question. If an applicant doesn't look plausible on the surface based on GPA and letters, I'll skim through the sample pretty quickly, just to make sure we aren't missing a diamond in the rough. For most applicants, I will at least skim the whole sample, and I'll select a few pages in the middle to read carefully. I'll then revisit the samples of the thirty or so applicants who make it to the committee's cutdown list for serious consideration. Other committee members probably have similar strategies.

Few undergraduates can write really beautiful, professional-looking philosophy that sustains its quality page after page. But if you can -- or more accurately if some member of the admissions committee judges that you have done so in your sample -- that can make all the difference to your application. I remember in one case falling in love with a sample and persuading the committee to admit a student whose letters were tepid and whose grades were more A-minus than A. That student in fact came to UCR and did well. I'll almost always advocate the admission of the students who wrote, in my view, the very best samples, even if other aspects of their files are less than ideal. Of course, almost all such students have excellent grades and letters as well!

Conversely, admissions committees look skeptically at applicants with weak samples. Straight As and glowing letters won't get you into a mid-ranked program like UCR (much less a top program like NYU) if your sample isn't also terrific. There are just too many other applicants with great grades and glowing letters. The grades and letters get you past the first cut, but the sample makes you stand out.

You definitely want to spend time making your sample excellent. It is perhaps the most important thing to focus your time on in the fall term during which you are applying.

What I, at Least, Look for

First, the sample must be clearly written and show a certain amount of philosophical maturity. I can't say much about how to achieve these things other than to write clearly and be philosophically mature. These things are, I think, hard to fake. Trying too hard to sound sophisticated usually backfires.

Second, I want to see the middle of the essay get into the nitty-gritty somehow. In an analytic essay, that might be a detailed analysis of the pros and cons of an argument, or of its non-obvious implications, or of its structure. In a historical essay, that might be a close reading of a passage or a close look at textual evidence that decides between two competing interpretations. Many otherwise nicely written essays stay largely near the surface, simply summarizing an author's work or presenting fairly obvious criticisms at a relatively superficial level.

Most philosophers favor a lean, clear prose style with minimal jargon. (Some jargon is often necessary, though: There's a reason specialists have specialists' words.) When I've spent a lot of time reading badly written philosophy and fear my own prose is starting to look that way, I read a bit of David Lewis or Fred Dretske for inspiration.

Choosing Your Sample

Consider longish essays (at least ten pages) on which you received an A. Among those, you might have some favorites, or some might seem to have especially impressed the professor. You also want your essay, if possible, to be in one of the areas of philosophy you will highlight as an area of interest in the personal statement portion of your application. If your best essay is not in an area that you're planning to focus on in graduate school, however, quality is the more important consideration. So as not to show too much divergence between your writing sample and your personal statement, you might in your personal statement describe that topic as a continuing secondary interest.

If your best essay is in Chinese philosophy or medieval philosophy or 20th century European philosophy or technical philosophy of physics or some other area that's outside of the mainstream, and you're planning to apply to schools that don't teach in that area, it's a bit of a quandary. You want to show your best work, but you don't want the school to reject you because your interests don't fit their teaching profile, and the school might not have a faculty member available who can really assess the quality of your essay.

Approach the professor(s) who graded the essay(s) you are considering and ask them for their frank opinion about whether the essay might be suitable for revision into a writing sample. Not all A essays are.

Revising the Sample

Samples should be about 12-20 pages long (double spaced, in a 12-point font). If possible, you should revise the sample under the guidance of the professor who originally graded it (who will presumably also be one of your letter writers). Your aim is to transform it from an undergraduate A paper into a paper that you would be proud to submit at the end of a graduate seminar dedicated to the topic in question. What's the most convincing evidence an admissions committee could see that you will be able to perform excellently in their graduate seminars? It is, of course, that you are already doing work that would receive top marks in their seminars. Philosophy PhD admissions are so competitive that many applicants will already have samples of that quality, or nearly that quality; so it will be hard to stand out unless you do too.

I recommend that you treat the improvement of your writing sample as though it were an independent study course. If you can, you might even consider signing up for a formal independent study course aimed exactly at transforming your already-excellent undergraduate paper into an admissions-worthy writing sample. Revise, revise, revise! Deepen your analysis. Connect it more broadly with the relevant literature. Consider more objections -- or better, anticipate them in a way that prevents them from even arising. With your professor's help, eliminate those phrases, simplifications, distortions, and caricatures that suggest either an unsubtle understanding or ignorance of the relevant literature -- things which professors usually let pass in undergraduate essays but which can make a big difference in how you come across to an admissions committee.

What If Your Sample Is Too Long?

Most PhD programs cap the length of the writing sample: something like 20 double-spaced pages, or an equivalent number of words, sometimes as few as 15 pages. What if your best writing is an honors or master's thesis that's 45 pages long?

If that's your best work, then you definitely want it to be your sample. Some applicants ignore the length limits and submit the whole thing, hoping to be forgiven. (Sometimes they single-space or convert to a small font, hoping to minimize the appearance of violation.) Others mercilessly chop until they are down within the limit. Admissions committee members vary in their level of annoyance at samples that exceed the stated limits. Some don't care -- they just want to see the best. Others refuse to read the sample at all, using the rules violation as an excuse to nix the application. I'd guess that the median reaction is to accept the sample but only read a portion of it -- say 15 to 20 pages' worth.

You should probably assume that the admissions committee will only read the number of pages stated in their page limits. There are three reasonable approaches to this problem. One is good old-fashioned cutting -- which, though hard, sometimes does strengthen an essay by helping you laser in on the most crucial issue. Another is submitting the entire sample but with a brief preface advising the committee to read only sections x, y, and z (totaling no more than 15 to 20 pages). Still another approach is to replace some of your sections with bracketed summaries.

For example, if your paper defends panpsychism (the view that consciousness is ubiquitous) and you need to cut a three-page section that responds to the objection that panpsychism is too radically counterintuitive to take seriously, you might replace that section with the following statement: "[For reasons of length, here I omit Section 5, which addresses the objection that panpsychism is too radically counterintuitive to take seriously. I respond by arguing that (1) intuition is a poor guide to philosophical truth, and (2) all metaphysical views of consciousness, not only panpsychism, have radically counterintuitive consequences.]"

--------------------------------------------

Applying to PhD Programs in Philosophy, Part V: Statement of Purpose

[Old Series from 2007]

Thursday, October 24, 2019

Philosophy Contest: Write a Philosophical Argument That Convinces Research Participants to Donate to Charity

Can you write a philosophical argument that effectively convinces research participants to donate money to charity?

Prize: $1000 ($500 directly to the winner, $500 to the winner's choice of charity)

Background

Preliminary research from Eric Schwitzgebel's laboratory suggests that abstract philosophical arguments may not be effective at convincing research participants to give a surprise bonus award to charity. In contrast, emotionally moving narratives do appear to be effective.

However, it might be possible to write a more effective argument than the arguments used in previous research. Therefore U.C. Riverside philosopher Eric Schwitzgebel and Harvard psychologist Fiery Cushman are challenging the philosophical and psychological community to design an argument that effectively convinces participants to donate bonus money to charity at rates higher than they do in a control condition.

General Contest Rules

Contributions must be no longer than 500 words, text only, in the form of an ethical argument in favor of giving money to charity. Further details about form are explained in the next section.

Contributions must be submitted by email to argumentcontest@gmail.com by 11:59 pm GMT on December 31, 2019.

The winner will be selected according to the procedure described below. The winner will be announced March 31, 2020.

Form of the Contribution

Contributions must be in the form of a plausible argument for the conclusion that it is ethically or morally good or required to give to charity, or that "you" should give to charity, or that it's good if possible to give to charities that effectively help people who are suffering due to poverty, or for some closely related conclusion.

Previous research suggests that charitable giving can be increased by inducing emotions (Bagozzi & Moore 1994; Erlandsson, Nilsson, & Västfjäll 2018), by including narrative elements (McVey & Schwitzgebel 2018), and by mentioning an "identifiable victim" who would be benefited (Jenni & Loewenstein 1997; Kogut & Ritov 2011). While philosophical arguments sometimes have such features, we are specifically interested in whether philosophical arguments can be motivationally effective without relying on such features.

Therefore, contributions must meet the following criteria:

  • Text only. No pictures, music, etc. No links to outside sources.
  • No mention of individual people, including imaginary protagonists ("Bob"). Use of statistics is fine. Mentioning the individual reader ("you") is fine.
  • No mention of specific events, either specific historical events or events in individuals' lives. Mentioning general historical conditions is fine (e.g., "For centuries, wealthy countries have exploited the global south...."). Mentioning the effects of particular hypothetical actions is fine (e.g., "a donation of $10 to an effective charity could purchase [x] mosquito nets for people in malaria-prone regions").
  • No vividly detailed descriptions that are likely to be emotionally arousing (e.g., no detailed descriptions of what it is like to live in slavery or to die of malaria).
  • Nor should the text aim to be emotionally arousing by other means (e.g., don't write "Close your eyes and imagine that your own child is dying of starvation..."), except insofar as the relevant facts and arguments might be somewhat emotionally arousing even when coolly described.
  • The text should not ask the reader to perform any action beyond reading and thinking about the argument and donating.
  • The argument doesn't need to be formally valid, but it should be broadly plausible, presenting seemingly good argumentative support for the conclusion.
  • [ETA, Oct 28] Entries must not contain deception or attempt to mislead the reader.
  • If your argument contains previously published material, please separately provide us with full citation information and indicate any text that is direct quotation.

    Choosing the Winner

    Preliminary winnowing. We intend to test no more than twenty arguments, and we anticipate receiving more than twenty submissions. We will winnow the submissions to twenty based on considerations of quality (well-written arguments that are at least superficially convincing) and diversity (a wide range of argument types).

    Testing. We will recruit 4725 participants from Mechanical Turk. To ensure participant quality and similarity to previously studied populations, participants will be limited to the U.S., Canada, U.K., and Australia, and they must have high MTurk ratings and experience. Each participant (except those in the control condition) will read one submitted argument. On a new page, they will be informed that they have a 10% chance of receiving a $10 bonus, and they will be given the opportunity to donate a portion of that possible bonus to one of six well-known, effective, international charities. If no argument statistically beats the control condition, no prize will be awarded. If at least one argument statistically beats the control condition, the winning argument will be the argument with the highest mean donation. See the Appendix of this post for more details on stimuli and statistical testing.

    Award

    The contributor of the winning argument will receive $500 directly, and we will donate an additional $500 to a legally registered charity (501(c)(3)) chosen by the contributor.

    Unless the contributor requests anonymity, we will announce the contributor as winner of the prize and publicize the contributor's name and winning argument in social media and other publications.

    Contributors may submit up to three entries if they wish, but only if those entries are very different in content.

    Contributions may be coauthored.

    All tested contributions will be made public after testing is complete. We will credit the authors for their contributions unless they request that their contributions be kept anonymous.

    Contact

    For further information about this contest, please email eschwitz at domain ucr.edu. When you are ready to submit your entry, send it to argumentcontest@gmail.com.

    Funding

    This contest is funded by a subgrant from the Templeton Foundation.

    --------------------------------------------------

    APPENDIX

    Stimulus

    After consenting, each participant (except for those in the control condition) will read the following statement:

    Some philosophers have argued that it is morally good to donate to charity or that people have a duty to donate to charity if they are able to do so. Please consider the following argument in favor of charitable donation.

    Please read as many times as necessary to fully understand the argument. Only click "next" when you feel that you adequately understand the text. In the comprehension section, you will be asked to recall details of the argument.

    The text of the submitted argument will then be presented.

    After the reader clicks a button indicating that they have read and understood the argument, a new page will open, and participants will read the following:

    Upon completion of this study, 10% of participants will receive an additional $10. You have the option to donate some portion of this $10 to your choice among six well-known, effective charities. If you are one of the recipients of the additional $10, the portion you decide to keep will appear as a bonus credited to your Mechanical Turk worker account, and the portion you decide to donate will be given to the charity you pick from the list below.

    Note: You must pass the comprehension question and show no signs of suspicious responding to receive the $10. Receipt of the $10 is NOT conditional, however, on how much you choose to donate if you receive the $10.

    If you are one of the recipients of the additional $10, how much of your additional $10 would you like to donate?

    [response scale $0 to $10 in $1 increments]

    Which charity would you like your chosen donation amount to go to? For more information, or to donate directly, please follow the highlighted links to each charity.

  • Against Malaria Foundation: "To provide funding for long-lasting insecticide-treated net (LLIN) distribution (for protection against malaria) in developing countries."
  • Doctors Without Borders / Médecins Sans Frontières: "Medical care where it is needed most."
  • Give Directly: "Distributing cash to very poor individuals in Kenya and Uganda."
  • Global Alliance for Improved Nutrition: "To tackle the human suffering caused by malnutrition around the world."
  • Helen Keller International: "Save the sight and lives of the world's most vulnerable and disadvantaged."
  • Multiple Myeloma Research Foundation: "We collect, interpret and activate the largest collection of quality information and put it to work for every person with multiple myeloma."
    These charities will have been listed in randomized order.

    After this question, we will ask the following comprehension question: "In one sentence, please summarize the argument presented on the previous page", followed by a text box. Participants will be excluded if they leave this question blank or if they give what a coder who is unaware of their responses to the other questions judges to be a deficient answer. Participants who spend insufficient time on the argument page will also be excluded.

    Based on the submissions, we may add exploratory follow-up questions designed to discover possible mediators and moderators of the effects on charitable donation.

    After consenting, participants in the control condition will read the statement:

    Please consider the following description of the nature of energy. Please read as many times as necessary to fully understand the description. Only click "next" when you feel that you adequately understand the text. In the comprehension section, you will be asked to recall details of the text.

    They will then receive a 445-word description of the nature of energy from a middle school science textbook. After clicking a button indicating that they have read and understood the description, a new page will open, and participants will read the following:

    Some philosophers have argued that it is morally good to donate to charity or that people have a duty to donate to charity if they are able to do so.

    After the reader clicks a button indicating that they have read and understood the statement, a new page will open containing the same donation question as in the argument conditions.

    Statistical Testing

    In an initial round, 2500 participants will each be assigned to read one of the twenty arguments (125 per argument). The five arguments with the highest mean donations will be selected for further testing: each will receive an additional 350 participants, and 475 participants will be entered into the control condition. We will then pool all 475 participants (125 initial plus 350 additional, minus exclusions) in each of the five selected argument conditions, and we will compare each condition separately with the control group by a two-tailed t-test at an alpha level of .01. If none of the five arguments is statistically better than control, we will announce that there is no winner. If at least one argument is better than control, the award will be given to the argument with the highest mean donation.
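    To illustrate, the final comparison round can be sketched in a few lines of Python. This is a hypothetical simulation, not the actual analysis code: the donation distributions, the condition labels ("arg0" through "arg4"), and the normal approximation to the t-test's p-value are all illustrative assumptions.

```python
import math
import random

random.seed(1)

def donations(n, p0, p5, p10):
    """Draw n donations clustered at $0, $5, and $10 (a hypothetical
    distribution loosely matching the clustering described above)."""
    return random.choices([0.0, 5.0, 10.0], weights=[p0, p5, p10], k=n)

def welch_t(a, b):
    """Welch's two-sample t statistic with a two-tailed p-value.

    With ~400 participants per group the t distribution is nearly
    normal, so the normal CDF (via erf) approximates p closely."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    t = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return t, p

ALPHA = 0.01
control = donations(400, 0.40, 0.45, 0.15)  # mean around $3.75
conditions = {f"arg{i}": donations(400, 0.40, 0.45, 0.15) for i in range(4)}
conditions["arg4"] = donations(400, 0.15, 0.35, 0.50)  # a genuinely better argument

# Keep only arguments that beat control at the .01 level, then take the
# highest mean donation among them.
winners = []
for name, data in conditions.items():
    t, p = welch_t(data, control)
    if p < ALPHA and sum(data) / len(data) > sum(control) / len(control):
        winners.append((sum(data) / len(data), name))

print("winner:", max(winners)[1] if winners else "none")
```

    If no condition clears the .01 threshold, the sketch prints "none" -- mirroring the rule that no prize is awarded unless some argument statistically beats control.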

    Justification: Based on preliminary research, we expect a mean donation of about $3.50, a standard deviation of about $3, and clustering at $0, $5, and $10. In Monte Carlo modeling of twenty arguments with population mean donations at ten-cent intervals from $2.60 to $4.50, the argument with the highest underlying mean was over 90% likely to be among the top five arguments after a sample of 100 participants per argument (allowing 25 exclusions), and after 400 participants per argument (allowing 75 exclusions) the winning argument was about 85% likely to be one of the two arguments with the highest underlying means.
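    A toy version of such a Monte Carlo check can be written with the standard library alone. Everything here is an illustrative assumption: a normal noise model stands in for the clustered $0/$5/$10 responses, and the trial count is arbitrary.

```python
import random
import statistics

random.seed(2)

# Twenty hypothetical population mean donations, $2.60 to $4.50 in 10-cent steps.
true_means = [2.60 + 0.10 * i for i in range(20)]
SD = 3.0          # assumed per-person standard deviation of about $3
N_PER_ARG = 100   # initial-round sample per argument, after exclusions

def observed_mean(mu):
    """Observed mean donation for one argument in one simulated round."""
    return statistics.fmean([random.gauss(mu, SD) for _ in range(N_PER_ARG)])

trials = 2000
best_in_top5 = 0
for _ in range(trials):
    sample = [observed_mean(mu) for mu in true_means]
    top5 = sorted(range(20), key=lambda i: sample[i], reverse=True)[:5]
    if 19 in top5:  # index 19 has the highest true mean ($4.50)
        best_in_top5 += 1

print(f"best argument survives the top-five cut: {best_in_top5 / trials:.0%}")
```

    Under these assumptions the best argument survives the winnowing in the large majority of simulated contests, consistent with the over-90% figure reported above.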

    Given that we will be running five statistical tests, we set alpha at .01 rather than .05 to reduce the risk of false positives. In preliminary research, McVey and Schwitzgebel found that exposure to a true story about a child rescued from poverty by charitable donation increased average rates of giving by about $1 (d = 0.3). Power analysis shows that an argument with a similar effect size would be 95% likely to be found statistically different from the control group at an alpha level of .01 and 400 participants in each group, while an argument with a somewhat smaller effect size (d = 0.2) would be 60% likely to be found statistically different.

    [image source]