Tuesday, November 26, 2019

Applying to PhD Programs in Philosophy, Part V: Statement of Purpose

Part I: Should You Apply, and Where?

Part II: Grades, Classes, and Institution of Origin

Part III: Letters of Recommendation

Part IV: Writing Sample

Old Series from 2007


Statements of purpose, sometimes also called personal statements, are difficult to write. It's hard to know even what a "Statement of Purpose" is. Your plan is to go to graduate school, get a PhD, and become a professor. Duh! Are you supposed to try to convince the committee that you want to become a professor more than the other applicants do? That philosophy is written in your genes? That you have some profound vision for the transformation of philosophy or of philosophy education?

You've had no practice writing this sort of thing. Odds are, you'll do it badly on your first try. There are so many different ways to go wrong! Give yourself plenty of time and seek feedback from at least two of your letter writers. Plan to rewrite from scratch at least once.

Some Things Not to Do

* Don't wax poetic. Don't get corny. Avoid purple prose. "Ever since I was eight, I've pondered the deep questions of life." Nope. "Philosophy is the queen of the disciplines, delving to the heart of it all." Nope. "The Owl of Minerva has sung to me and the sage of Königsberg whispers in my sleep: Not to philosophize is to die." If you are tempted to write sentences like that, please do so in longhand, with golden ink, on expensive stationery which you then burn without telling anyone.

* Don't turn your statement into a sales pitch. Ignore all advice from friends and acquaintances in the business world. Don't sell yourself. You don't want to seem like a BS-ing huckster. You may still (optionally!) mention a few of your accomplishments, in a dry, factual way, but to be overly enthusiastic about accomplishments that are rather small in the overall scheme of academia is somewhat less professional than you ideally want to seem. If you're already thinking like a graduate student at a good PhD program, you won't be too impressed with yourself for having published in the Kansas State Undergraduate Philosophy Journal (even if that is, in context, a notable achievement). Trust your letter writers. If you've armed them with a brag sheet, the important accomplishments will come across in your file. Let your letter writers do the pitch. It comes across so much better when someone else toots your horn than when you yourself do!

* Don't be grandiose. Don't say that you plan to revolutionize philosophy, reinvigorate X, rediscover Y, finally find the answer to timeless question Z, or become a professor at an elite department. Do you already know that you will be a more eminent philosopher than the people on your admissions committee? You're aiming to be their student, not the next Wittgenstein -- or at least that's how you want to come across. You want to seem modest, humble, straightforward. If necessary, consult David Hume or Benjamin Franklin for inspiration on the advantages of false humility.

* If you are applying to a program in which you are expected to do coursework for a couple of years before starting your dissertation -- that is, to U.S.-style programs rather than British-style programs -- then I recommend against taking stands on particular substantive philosophical issues. In the eyes of the admissions committee, you probably aren't far enough in your education to adopt hard philosophical commitments. They want you to come to their program with an open mind. Saying "I would like to defend Davidson's view that genuine belief is limited to language-speaking creatures" comes across a bit too strong. Similarly, "I showed in my honors thesis that Davidson's view...". If only, in philosophy, honors theses ever really showed anything! ("I argued" would be okay.) Better: "My central interests are philosophy of mind and philosophy of language. I am particularly interested in the intersection of the two, for example in Davidson's argument that only language-speaking creatures can have beliefs in the full and proper sense of 'belief'."

* Don't tell the story of how you came to be interested in philosophy. It's not really relevant.

* Ignore the administrative boilerplate. The application form might have a prompt like this: "Please upload a one page Statement of Purpose. What are your goals and objectives for pursuing this graduate degree? What are your qualifications and indicators of success in this endeavor? Please include career objectives that obtaining this degree will provide." This was written eighteen years ago by the Associate Dean for Graduate Education in the College of Letters and Sciences, who earned his PhD in Industrial Engineering in 1989. The actual admissions committee that makes the decisions is a bunch of nerdy philosophers who probably roll their eyes at admin-speak at least as much as you do. There's no need to tailor your letter to this sort of prompt.

* Also, don't follow links to well-meaning general advice from academic non-philosophers. I'm sure you didn't click those links! Good! If you had, you'd see that they advise you, among other things, to tell your personal history and to sell yourself as a good fit for the program. Maybe that works for biology PhD admissions, where it could make good sense to summarize your laboratory experience and fieldwork?

What to Write

So how do you fill up that awful, blank page? In 2012, I solicited sample statements of purpose from successful PhD applicants. About a dozen readers shared their statements, and from among those I chose three I thought were good and also diverse enough to illustrate the range of possibilities. Follow the links below to view the statements.

  • Statement A was written by Allison Glasscock, who was admitted to Chicago, Cornell, Penn, Stanford, Toronto, and Yale.
  • Statement B was written by a student who prefers to remain anonymous, who was admitted to Berkeley, Missouri, UMass Amherst, Virginia, Wash U. in St. Louis, and Wisconsin.
  • Statement C was written by another student who prefers to remain anonymous, who was admitted to Connecticut and Indiana.

At the core of each statement is a cool, professional description of the student's areas of interest. Notice that all of these descriptions contain enough detail to give a flavor of the student's interests. This helps the admissions committee assess the student's likely fit with the teaching strengths of the department. Each description also displays the student's knowledge of the areas in question by mentioning figures or issues that would probably not be known to the average undergraduate. This helps to convey philosophical maturity and preparedness for graduate school. However, I would recommend against going too far with the technicalities or trying too hard to be cutting edge, lest it become phony desperation or a fog of jargon. These sample statements get the balance about right.

Each of the sample statements also adds something else, in addition to a description of areas of interest, but it's not really necessary to add anything else. Statement B starts with pretty much the perfect philosophy application joke. (Sorry, now it's taken!) Statement C concludes with a paragraph describing the applicant's involvement with his school's philosophy club. Statement C is topically structured but salted with information about coursework relevant to the applicant's interests, while Statement B is topically structured and minimalist, and Statement A is autobiographically structured with considerable detail. Any of these approaches is fine, though the topical structure is more common and raises fewer challenges about finding the right tone.

Statement A concludes with a paragraph specifically tailored for Yale. Thus we come to the question of...

Tailoring Statements to Particular Programs

It's not necessary, but you can adjust your statement for individual schools. If there is some particular reason you find a school attractive, there's no harm in mentioning that. Committees think about fit between a student's interests and the strengths of the department and about what faculty could potentially be advisors. You can help the committee on this issue if you like, though normally it will be obvious from your description of your areas of interest.

For example, if you wish, you can mention 2-3 professors whose work especially interests you. But there are risks here, so be careful. Mentioning particular professors can backfire if you mischaracterize the professors, or if they don't match your areas of stated interest, or if you omit the professor in the department whose interests seem to the committee to be the closest match to your own.

Similarly, you can mention general strengths of the school. But, again, if you do this, be sure to get it right! If someone applies to UCR citing our strength in medieval philosophy, we know the person hasn't paid attention to what our department is good at. No one here works on medieval philosophy. But if you want to go to a school that has strengths in both mainstream "analytic" philosophy and 19th-20th century "Continental" philosophy, that's something we at UCR do think of as a strong point of our program.

I'm not sure I'd recommend changing your stated areas of interest to suit the schools, though I see how that might be strategic. There are two risks in changing your stated areas of interest: One is that if you change them too much, there might be some discord between your statement of purpose and what your letter writers say about you. Another is that large changes might raise questions about your choice of letter writers. If you say your central passion is ancient philosophy, and your only ancient philosophy class was with Prof. Platophile, why hasn't Prof. Platophile written one of your letters? That's the type of oddness that might make a committee hesitate about an otherwise strong file.

Some people mention personal reasons for wanting to be in a particular geographical area (near family, etc.). Although this can be good because it can make it seem more likely that you would accept an offer of admission, I'd avoid it since, in order to have a good chance of landing a tenure-track job, graduating PhD recipients typically need to be flexible about location. Also, it might be perceived as indicating that a career in philosophy is not your first priority.

Explaining Weaknesses in Your File

Although hopefully this won't be necessary, a statement of purpose can also be an opportunity to explain weaknesses or oddities in your file -- though letter writers can also do this, often more credibly. For example, if one quarter you did badly because your health was poor, you can mention that fact. If you changed undergraduate institutions (not necessarily a weakness if the second school is the more prestigious), you can briefly explain why. If you don't have a letter from your thesis advisor because they died, you can point that out.

Statements of Personal History

Some schools, like UCR, also allow applicants to submit "statements of personal history", in which applicants can indicate disadvantages or obstacles they have overcome or otherwise attempt to paint an appealing picture of themselves. The higher-level U.C. system administration encourages such statements, I believe, because although state law prohibits the University of California from favoring applicants on the basis of ethnicity or gender, state law does allow admissions committees to take into account any hardships that applicants have overcome -- which can include hardships due to poverty, disability, or other obstacles, including hardships deriving from ethnicity or gender.

Different committee members react rather differently to such statements, I suspect. I find them unhelpful for the most part. And yet I also think that some people do, because of their backgrounds, deserve special consideration. Unless you have a sure hand with tone, though, I would encourage a dry, minimal approach to this part of the application. It's better to skip it entirely than to concoct a story that looks like special pleading from a rather ordinary complement of hardships. This part of the application also seems to beg for the corniness I warned against above: "Ever since I was eight, I've pondered the deep questions of life...". I see how such corniness is tempting if the only alternative seems to be to leave an important part of the application blank. As a committee member, I usually just skim and forget the statements of personal history, unless something was particularly striking, or unless it seems like the applicant might contribute in an important way to the diversity of the entering class.

For further advice on statements of purpose, see this discussion on Leiter Reports -- particularly the discussion of the difference between U.S. and U.K. statements of purpose.


Applying to PhD Programs in Philosophy, Part VI: GRE Scores and Other Things

[image source]

Sunday, November 17, 2019

We Might Soon Build AI Who Deserve Rights

Talk for Notre Dame, November 19:

Abstract: Within a few decades, we will likely create AI that a substantial proportion of people believe, whether rightly or wrongly, deserve human-like rights. Given the chaotic state of consciousness science, it will be genuinely difficult to know whether and when machines that seem to deserve human-like moral status actually do deserve human-like moral status. This creates a dilemma: Either give such ambiguous machines human-like rights or don't. Both options are ethically risky. To give machines rights that they don't deserve will mean sometimes sacrificing human lives for the benefit of empty shells. Conversely, however, failing to give rights to machines that do deserve rights will mean perpetrating the moral equivalent of slavery and murder. One or another of these ethical disasters is probably in our future.

[An AI slave, from Black Mirror's White Christmas episode]

The first half of the talk mostly rehearses ideas from my articles with Mara Garza here and here. If we someday build AIs that are fully conscious, just like us, and have all the same kinds of psychological and social features that human beings do, in virtue of which human beings deserve rights, those AIs would deserve the same rights. In fact, we would owe them a special quasi-parental duty of care, due to the fact that we will have been responsible for their existence and probably to a substantial extent for their happy or miserable condition.

Selections from the second half of the talk

So here’s what's going to happen:

We will create more and more sophisticated AIs. At some point we will create AIs that some people think are genuinely conscious and genuinely deserve rights. We are already near that threshold. There’s already a Robot Rights movement. There’s already a society modeled on the famous animal rights organization PETA (People for the Ethical Treatment of Animals), called People for the Ethical Treatment of Reinforcement Learners. These are currently fringe movements. But as AI gets cuter and more sophisticated, and as chatbots start sounding more and more like normal humans, passing more and more difficult versions of the Turing Test, these movements will gain steam among people with liberal views of consciousness. At some point, people will demand serious rights for some AI systems. The AI systems themselves, if they are capable of speech or speechlike outputs, might also demand or seem to demand rights.

Let me be clear: This will occur whether or not these systems really are conscious. Even if you’re very conservative in your view about what sorts of systems would be conscious, you should, I think, acknowledge the likelihood that if technological development continues on its current trajectory there will eventually be groups of people who assert the need for us to give AI systems human-like moral consideration.

And then we’ll need a good, scientifically justified consensus theory of consciousness to sort it out. Is this system that says, “Hey, I’m conscious, just like you!” really conscious, just like you? Or is it just some empty algorithm, no more conscious than a toaster?

Here’s my conjecture: We will face this social problem before we succeed in developing the good, scientifically justified consensus theory of consciousness that we need to solve the problem. We will then have machines whose moral status is unclear. Maybe they do deserve rights. Maybe they really are conscious like us. Or maybe they don’t. We won’t know.

And then, if we don’t know, we face quite a terrible dilemma.

If we don’t give these machines rights, and if it turns out that the machines really do deserve rights, then we will be perpetrating slavery and murder every time we assign a task and delete a program.

So it might seem safer, if there is reasonable doubt, to assign rights to machines. But on reflection, this is not so safe. We want to be able to turn off our machines if we need to turn them off. Futurists like Nick Bostrom have emphasized, rightly in my view, the potential risks of our letting superintelligent machines loose into the world. These risks are greatly amplified if we too casually decide that such machines deserve rights and that deleting them is murder. Giving an entity rights entails sometimes sacrificing others’ interests for it. Suppose there’s a terrible fire. In one room there are six robots who might or might not be conscious. In another room there are five humans, who are definitely conscious. You can only save one group; the other group will die. If we give robots who might be conscious equal rights with humans who definitely are conscious, then we ought to go save the six robots and let the five humans die. If it turns out that the robots really, underneath it all, are just toasters, then that’s a tragedy. Let’s not too casually assign humanlike rights to AIs!

Unless there’s either some astounding saltation in the science of consciousness or some substantial deceleration in the progress of AI technology, it’s likely that we’ll face this dilemma. Either deny robots rights and risk perpetrating a Holocaust against them, or give robots rights and risk sacrificing real human beings for the benefit of mere empty machines.

This may seem bad enough, but the problem is even worse than I, in my sunny optimism, have so far let on. I’ve assumed that AI systems are relevant targets of moral concern if they’re human-grade – that is, if they are like us in their conscious capacities. But the odds of creating only human-grade AI are slim. In addition to the kind of AI we currently have, which I assume doesn’t have any serious rights or moral status, there are, I think, four broad moral categories into which future AI might fall: animal-grade, human-grade, superhuman, and divergent. I’ve only discussed human-grade AI so far, but each of these four classes raises puzzles.

Animal-grade AI. Not only human beings deserve moral consideration. So also do dogs, apes, and dolphins. Animal protection regulations apply to all vertebrates: Scientists can’t treat even frogs and lizards more roughly than necessary. The philosopher John Basl has argued that AI systems with cognitive capacities similar to vertebrates ought also to receive similar protections. Just as we shouldn’t torture and sacrifice a mouse without excellent reason, so also, according to Basl, we shouldn’t abuse and delete animal-grade AI. Basl has proposed that we form committees, modeled on university Animal Care and Use Committees, to evaluate cutting-edge AI research to monitor when we might be starting to cross this line.

Even if you think human-grade AI is decades away, it seems reasonable, given the current chaos in consciousness studies, to wonder whether animal-grade consciousness might be around the corner. I myself have no idea if animal-grade AI is right around the corner or if it’s far away in the almost impossible future. And I think you have no idea either.

Superhuman AI. Superhuman AI, as I’m defining it here, is AI who has all of the features of human beings in virtue of which we deserve moral consideration but who also has some potentially morally important features far in excess of the human, raising the question of whether such AI might deserve more moral consideration than human beings.

There aren’t a whole lot of philosophers who are simple utilitarians, but let’s illustrate the issue using utilitarianism as an example. According to simple utilitarianism, we morally ought to do what maximizes the overall balance of pleasure to suffering in the world. Now let’s suppose we can create AI that’s genuinely capable of pleasure and suffering. I don’t know what it will take to do that – but not knowing is part of my point here. Let’s just suppose. Now if we can create such AI, then it might also be possible to create AI that is capable of much, much more pleasure than a human being is capable of. Take the maximum pleasure you have ever felt in your life over the course of one minute: call that amount of pleasure X. This AI is capable of feeling a billion times more pleasure than X in the space of that same minute. It’s a superpleasure machine!

If morality really demands that we should maximize the amount of pleasure in the world, it would thereby demand, or seem to demand, that we create as many of these superpleasure machines as we possibly can. Maybe we even ought to immiserate and destroy ourselves to do so, if enough AI pleasure is created as a result.

Even if you think pleasure isn’t everything – surely it’s something. If someday we could create superpleasure machines, maybe we morally ought to make as many as we can reasonably manage? Think of all the joy we will be bringing into the world! Or is there something too weird about that?

I’ve put this point in terms of pleasure -- but whatever the source of value in human life is, whatever it is that makes us so awesomely special that we deserve the highest level of moral consideration -- unless maybe we go theological and appeal to our status as God’s creations -- whatever it is, it seems possible in principle that we could create that same thing in machines, in much larger quantities. We love our rationality, our freedom, our individuality, our independence, our ability to value things, our ability to participate in moral communities, our capacity for love and respect -- there are lots of wonderful things about us! What if we were to design machines that somehow had a lot more of these things than we ourselves do?

We humans might not be the pinnacle. And if not, should we bow out, allowing our interests and maybe our whole species to be sacrificed for something greater? As much as I love humanity, under certain conditions I’m inclined to think the answer should probably be yes. I’m not sure what those conditions would be!

Divergent AI. The most puzzling case, I think, as well as the most likely, is divergent AI. Divergent AI would have human or superhuman levels of some features that we tend to regard as important to moral status but subhuman levels of other features that we tend to regard as important to moral status. For example, it might be possible to design AI with immense theoretical and practical intelligence but with no capacity for genuine joy or suffering. Such AI might have conscious experiences with little or no emotional valence. Just as we can consciously think to ourselves, without much emotional valence, there’s a mountain over there and a river over there, or the best way to grandma’s house at rush hour is down Maple Street, so this divergent AI could have conscious thoughts like that. But it would never feel wow, yippee! And it would never feel crushingly disappointed, or bored, or depressed. It isn’t clear what the moral status of such an entity would be: On some moral theories, it would deserve human-grade rights; on other theories it might not matter how we treat it.

Or consider the converse: a superpleasure machine but one with little or no capacity for rational thought. It’s like one giant, irrational orgasm all day long. Would it be great to make such things and terrible to destroy them, or is such irrational pleasure not really something worth much in the moral calculus?

Or consider a third type of divergence, what I’ve elsewhere called fission-fusion monsters. A fission-fusion monster is an entity that can divide and merge at will. It starts, perhaps, as basically a human-grade AI. But when it wants it can split into a million descendants, each of whom inherits all of the capacities, memories, plans, and preferences of the original AI. These million descendants can then go about their business, doing their independent things for a while, and then if they want, merge back together again into a unified whole, remembering what each individual did during its period of individuality. Other parts might not merge back but choose instead to remain as independent individuals, perhaps eventually coming to feel independent enough from the original to see the prospect of merging as something similar to death.

Without getting into details here, a fission-fusion monster would risk breaking our concept of individual rights – such as one person, one vote. The idea of individual rights rests fundamentally upon the idea of people as individuals – individuals who live in a single body for a while and then die, with no prospect of splitting or merging. What would happen to our concept of individual rights if we were to share the planet with entities for which our accustomed model of individuality is radically false?

Thursday, November 14, 2019

Who Cares about Happiness?

[talk to be given at UC Riverside's Homecoming celebration, November 16, on the theme of happiness]

There are several different ways of thinking about happiness. I want to focus on just one of those ways. This way of thinking about happiness is sometimes called “hedonic”. That label can be misleading if you’re not used to it because it kind of sounds like hedonism, which kind of sounds like wild sex parties. The hedonic account of happiness, though, is probably closest to most people’s ordinary understanding of happiness. On this account, to be happy is to have lots of positive emotions and not too many negative emotions. To be happy is to regularly feel joy, delight, and pleasure, to feel sometimes maybe a pleasant tranquility and sometimes maybe outright exuberance, to have lots of good feelings about your life and your situation and what’s going on around you – and at the same time not to have too many emotions like sadness, fear, anxiety, anger, disgust, displeasure, annoyance, and frustration, what we think of as “negative emotions”. To be happy, on this “hedonic” account, is to be in an overall positive emotional state of mind.

I wouldn’t want to deny that it’s a good thing to be happy in this sense. It is, for the most part, a good thing. But sometimes people say extreme things about happiness – like that happiness is the most important thing, or that all people really want is to be happy, or as a parent that the main thing you want for your children is that they be happy, or that everything everyone does is motivated by some deep-down desire to maximize their happiness. And that’s not right at all. We actually don’t care about our hedonic happiness very much. Not really. Not when you think about it. It’s kind of important, but not really that big in the scheme of things.

Consider an extreme thought experiment of the sort that philosophers like me enjoy bothering people with. Suppose we somehow found a way to turn the entire Solar System into one absolutely enormous machine or organism that experienced nothing but outrageous amounts of pleasure all the time. Every particle of matter that we have, we feed into this giant thing – let’s call it the orgasmatron. We create the most extreme, most consistent, most intense conglomeration of pure ecstatic joyfulness as it is possible to construct. Wow! Now that would be pretty amazing. One huge, pulsing Solar-System-sized orgasm.

Will this thing need to remember the existence of humanity? Will it need to have any appreciation of art or beauty? Will it have to have any ethics, or any love, or any sociality, or knowledge of history or science – will it need any higher cognition at all? Maybe not. I mean higher cognition is not what orgasm is mostly about. If you think that the thing that matters most in the universe is positive emotions, then you might think that the best thing that could happen to the future of the Solar System would be the creation of this giant orgasmatron. The human project would be complete. The world will have reached its pinnacle and nothing else really matters!

[not the orgasmatron I have in mind]

Now here’s my guess. Some of you will think, yeah, that’s right. If everything becomes a giant orgasmatron, nothing could be more awesome, that’s totally where we should go if we can. But I’ll guess that most of you think that something important would be lost. Positive emotion isn’t the only thing that matters. We don’t want the world to lose its art, and its beauty, and its scientific knowledge, and the rich complexity of human relationships. If everything got fed into this orgasmatron it would be a shame. We’d have lost something really important. Now let me tell you a story. It’s from my latest book, A Theory of Jerks and Other Philosophical Misadventures, hot off the press this month.

Back in the 1990s, when I was a graduate student, my girlfriend Kim asked me what, of all things, I most enjoyed doing. Skiing, I answered. I was thinking of those moments breathing the cold, clean air, relishing the mountain view, then carving a steep, lonely slope. I’d done quite a bit of that with my mom when I was a teenager. But how long had it been since I’d gone skiing? Maybe three years? Grad school kept me busy and I now had other priorities for my winter breaks. Kim suggested that if it had been three years since I’d done what I most enjoyed doing, then maybe I wasn’t living wisely.

Well, what, I asked, did she most enjoy? Getting massages, she said. Now, the two of us had a deal at the time: If one gave the other a massage, the recipient would owe a massage in return the next day. We exchanged massages occasionally, but not often, maybe once every few weeks. I pointed out that she, too, might not be perfectly rational: She could easily get much more of what she most enjoyed simply by giving me more massages. Surely the displeasure of massaging my back couldn’t outweigh the pleasure of the thing she most enjoyed in the world? Or was pleasure for her such a tepid thing that even the greatest pleasure she knew was hardly worth getting?

It used to be a truism in Western (especially British) philosophy that people sought pleasure and avoided pain. A few old-school psychological hedonists, like Jeremy Bentham, went so far as to say that that was all that motivated us. I’d guess quite differently: Although pain is moderately motivating, pleasure motivates us very little. What motivates us more are outward goals, especially socially approved goals — raising a family, building a career, winning the approval of peers — and we will suffer immensely, if necessary, for these things. Pleasure might bubble up as we progress toward these goals, but that’s a bonus and side effect, not the motivating purpose, and summed across the whole, the displeasure might vastly outweigh the pleasure. Some evidence suggests, for example, that raising a child is probably for most people a hedonic net negative, adding stress, sleep deprivation, and unpleasant chores, as well as crowding out the pleasures that childless adults regularly enjoy. At least according to some research, the odds are that choosing to raise a child will make you less happy.

Have you ever watched a teenager play a challenging video game? Frustration, failure, frustration, failure, slapping the console, grimacing, swearing, more frustration, more failure—then finally, woo-hoo! The sum over time has to be negative, yet they’re back again to play the next game. For most of us, biological drives and addictions, personal or socially approved goals, concern for loved ones, habits and obligations — all appear to be better motivators than gaining pleasure, which we mostly seem to save for the little bit of free time left over. And to me, this is quite right and appropriate. I like pleasure, sure. I like joy. But that’s not what I’m after. It’s a side effect, I hope, of the things I really care about. I’d guess this is true of you too.

If maximizing pleasure is central to living well and improving the world, we’re going about it entirely the wrong way. Do you really want to maximize pleasure? I doubt it. Me, I’d rather write some good philosophy and raise my kids.

ETA, Nov 17:

In audience discussion and on social media, several people have pointed out that although I start by talking about a wide range of emotional states (tranquility, delight, having good feelings about your life situation), in the second half I focus exclusively on pleasure. The case of pleasure is easiest to discuss, because the more complex emotional states have more representational or world-involving components. On a proper hedonic view, however, the value of those more complex states rests exclusively on the emotional valence, or at most on the emotional valence plus possibly-false representational content -- on, for example, whether you have the feeling that life is going well, rather than on whether it's really going well. All the same observations apply: We do and should care about whether our lives are actually going well, much more than we care about whether we have the emotional feeling of their going well.

Tuesday, November 05, 2019

A Theory of Jerks and Other Philosophical Misadventures

... released today. *Confetti!*

Available from:

MIT Press, Amazon, B&N, or (I hope!) your local independent bookseller.

Some initial reviews and discussions.



I enjoy writing short philosophical reflections for broad audiences. Evidently, I enjoy this immensely: Since 2006, I’ve written more than a thousand such pieces, published mostly on my blog The Splintered Mind, but also in the Los Angeles Times, Aeon, and elsewhere. This book contains fifty-eight of my favorites, revised and updated.

The topics range widely -- from moral psychology and the ethics of the game of dreidel to multiverse theory, speculative philosophy of consciousness, and the apparent foolishness of Immanuel Kant. There is no unifying thesis.

Maybe, however, there is a unifying theme. The human intellect has a ragged edge, where it begins to turn against itself, casting doubt on itself or finding itself lost among seemingly improbable conclusions. We can reach this ragged edge quickly. Sometimes, all it takes to remind us of our limits is an eight-hundred-word blog post. Playing at this ragged edge, where I no longer know quite what to think or how to think about it, is my idea of fun.

Given the human propensity for rationalization and self-deception, when I disapprove of others, how do I know that I'm not the one who is being a jerk? Given that all our intuitive, philosophical, and scientific knowledge of the mind has been built on a narrow range of cases, how much confidence can we have in our conclusions about the strange new possibilities that are likely to open up in the near future of artificial intelligence? Speculative cosmology at once poses the (literally) biggest questions that we can ask about the universe and reveals possibilities that threaten to undermine our ability to answer those same questions. The history of philosophy is humbling when we see how badly wrong previous thinkers have been, despite their intellectual skills and confidence.

Not all of my posts fit this theme. It's also fun to use the once-forbidden word "fuck" over and over again in a chapter about profanity. And I wanted to share some reminiscences about how my father saw the world -- especially since in some ways I prefer his optimistic and proactive vision to my own less hopeful skepticism. Other posts I included just because I liked them or wanted to share them for other reasons. A few are short fictions.

It would be an unusual reader who enjoyed every chapter. I hope you'll skip anything you find boring. The chapters are all freestanding. Please don't just start reading on page 1 and then try to slog along through everything sequentially out of some misplaced sense of duty! Trust your sense of fun (chapter 47). Read only the chapters that appeal to you, in any order you like.

Riverside, California, Earth (I hope)
October 25, 2018

Friday, November 01, 2019

How Mengzi Came up with Something Better Than the Golden Rule

[an edited excerpt from my forthcoming book, A Theory of Jerks and Other Philosophical Misadventures]

There’s something I don’t like about the ‘Golden Rule’, the admonition to do unto others as you would have others do unto you. Consider this passage from the ancient Chinese philosopher Mengzi (Mencius):

That which people are capable of without learning is their genuine capability. That which they know without pondering is their genuine knowledge. Among babes in arms there are none that do not know to love their parents. When they grow older, there are none that do not know to revere their elder brothers. Treating one’s parents as parents is benevolence. Revering one’s elders is righteousness. There is nothing else to do but extend these to the world.

One thing I like about the passage is that it assumes love and reverence for one’s family as a given, rather than as a special achievement. It portrays moral development simply as a matter of extending that natural love and reverence more widely.

In another passage, Mengzi notes the kindness that the vicious tyrant King Xuan exhibits in saving a frightened ox from slaughter, and he urges the king to extend similar kindness to the people of his kingdom. Such extension, Mengzi says, is a matter of ‘weighing’ things correctly – a matter of treating similar things similarly, and not overvaluing what merely happens to be nearby. If you have pity for an innocent ox being led to slaughter, you ought to have similar pity for the innocent people dying in your streets and on your battlefields, despite their invisibility beyond your beautiful palace walls.

Mengzian extension starts from the assumption that you are already concerned about nearby others, and takes the challenge to be extending that concern beyond a narrow circle. The Golden Rule works differently – and so too the common advice to imagine yourself in someone else’s shoes. In contrast with Mengzian extension, Golden Rule/others’ shoes advice assumes self-interest as the starting point, and implicitly treats overcoming egoistic selfishness as the main cognitive and moral challenge.

Maybe we can model Golden Rule/others’ shoes thinking like this:

  1. If I were in the situation of person x, I would want to be treated according to principle p.
  2. Golden Rule: do unto others as you would have others do unto you.
  3. Thus, I will treat person x according to principle p.

And maybe we can model Mengzian extension like this:

  1. I care about person y and want to treat that person according to principle p.
  2. Person x, though perhaps more distant, is relevantly similar.
  3. Thus, I will treat person x according to principle p.

There will be other more careful and detailed formulations, but this sketch captures the central difference between these two approaches to moral cognition. Mengzian extension models general moral concern on the natural concern we already have for people close to us, while the Golden Rule models general moral concern on concern for oneself.

I like Mengzian extension better for three reasons. First, Mengzian extension is more psychologically plausible as a model of moral development. People do, naturally, have concern and compassion for others around them. Explicit exhortations aren’t needed to produce this natural concern and compassion, and these natural reactions are likely to be the main seed from which mature moral cognition grows. Our moral reactions to vivid, nearby cases become the bases for more general principles and policies. If you need to reason or analogise your way into concern even for close family members, you’re already in deep moral trouble.

Second, Mengzian extension is less ambitious – in a good way. The Golden Rule imagines a leap from self-interest to generalised good treatment of others. This might be excellent and helpful advice, perhaps especially for people who are already concerned about others and thinking about how to implement that concern. But Mengzian extension has the advantage of starting the cognitive project much nearer the target, requiring less of a leap. Self-to-other is a huge moral and ontological divide. Family-to-neighbour, neighbour-to-fellow citizen – that’s much less of a divide.

Third, you can turn Mengzian extension back on yourself, if you are one of those people who has trouble standing up for your own interests – if you’re the type of person who is excessively hard on yourself or who tends to defer a bit too much to others. You would want to stand up for your loved ones and help them flourish. Apply Mengzian extension, and offer the same kindness to yourself. If you’d want your father to be able to take a vacation, realise that you probably deserve a vacation too. If you wouldn’t want your sister to be insulted by her spouse in public, realise that you too shouldn’t have to suffer that indignity.

Although Mengzi and the 18th-century Genevan philosopher Jean-Jacques Rousseau both endorse mottoes standardly translated as ‘human nature is good’ and have views that are similar in important ways, this is one difference between them. In both Emile (1762) and Discourse on Inequality (1755), Rousseau emphasises self-concern as the root of moral development, making pity and compassion for others secondary and derivative. He endorses the foundational importance of the Golden Rule, concluding that ‘love of men derived from love of self is the principle of human justice’.

This difference between Mengzi and Rousseau is not a general difference between East and West. Confucius, for example, endorses something like the Golden Rule in the Analects: ‘Do not impose on others what you yourself do not desire.’ Mozi and Xunzi, also writing in China in the same period, imagine people acting mostly or entirely selfishly until society artificially imposes its regulations, and so they see the enforcement of rules rather than Mengzian extension as the foundation of moral development. Moral extension is thus specifically Mengzian rather than generally Chinese.

Care about me not because you can imagine what you would selfishly want if you were me. Care about me because you see how I am not really so different from others you already love.

This is an edited extract from ‘A Theory of Jerks and Other Philosophical Misadventures’ © 2019 by Eric Schwitzgebel, published by MIT Press.

Eric Schwitzgebel

This article was originally published at Aeon and has been republished under Creative Commons.