Sunday, November 17, 2019

We Might Soon Build AI Who Deserve Rights

Talk for Notre Dame, November 19:

Abstract: Within a few decades, we will likely create AI that a substantial proportion of people believe, whether rightly or wrongly, deserve human-like rights. Given the chaotic state of consciousness science, it will be genuinely difficult to know whether and when machines that seem to deserve human-like moral status actually do deserve human-like moral status. This creates a dilemma: Either give such ambiguous machines human-like rights or don't. Both options are ethically risky. To give machines rights that they don't deserve will mean sometimes sacrificing human lives for the benefit of empty shells. Conversely, however, failing to give rights to machines that do deserve rights will mean perpetrating the moral equivalent of slavery and murder. One or another of these ethical disasters is probably in our future.


[An AI slave, from Black Mirror's White Christmas episode]

The first half of the talk mostly rehearses ideas from my articles with Mara Garza here and here. If we someday build AIs that are fully conscious, just like us, and have all the same kinds of psychological and social features that human beings do, in virtue of which human beings deserve rights, those AIs would deserve the same rights. In fact, we would owe them a special quasi-parental duty of care, due to the fact that we will have been responsible for their existence and probably to a substantial extent for their happy or miserable condition.

Selections from the second half of the talk

So here’s what's going to happen:

We will create more and more sophisticated AIs. At some point we will create AIs that some people think are genuinely conscious and genuinely deserve rights. We are already near that threshold. There’s already a Robot Rights movement. There’s already a society modeled on the famous animal rights organization PETA (People for the Ethical Treatment of Animals), called People for the Ethical Treatment of Reinforcement Learners. These are currently fringe movements. But as AI gets cuter and more sophisticated, and as chatbots start sounding more and more like normal humans, passing more and more difficult versions of the Turing Test, these movements will gain steam among the people with liberal views of consciousness. At some point, people will demand serious rights for some AI systems. The AI systems themselves, if they are capable of speech or speechlike outputs, might also demand or seem to demand rights.

Let me be clear: This will occur whether or not these systems really are conscious. Even if you’re very conservative in your view about what sorts of systems would be conscious, you should, I think, acknowledge the likelihood that if technological development continues on its current trajectory there will eventually be groups of people who assert the need for us to give AI systems human-like moral consideration.

And then we’ll need a good, scientifically justified consensus theory of consciousness to sort it out. Is this system that says, “Hey, I’m conscious, just like you!” really conscious, just like you? Or is it just some empty algorithm, no more conscious than a toaster?

Here’s my conjecture: We will face this social problem before we succeed in developing the good, scientifically justified consensus theory of consciousness that we need to solve the problem. We will then have machines whose moral status is unclear. Maybe they do deserve rights. Maybe they really are conscious like us. Or maybe they don’t. We won’t know.

And then, if we don’t know, we face quite a terrible dilemma.

If we don’t give these machines rights, and if it turns out that the machines really do deserve rights, then we will be perpetrating slavery and murder every time we assign a task and delete a program.

So it might seem safer, if there is reasonable doubt, to assign rights to machines. But on reflection, this is not so safe. We want to be able to turn off our machines if we need to turn them off. Futurists like Nick Bostrom have emphasized, rightly in my view, the potential risks of our letting superintelligent machines loose into the world. These risks are greatly amplified if we too casually decide that such machines deserve rights and that deleting them is murder. Giving an entity rights entails sometimes sacrificing others’ interests for it. Suppose there’s a terrible fire. In one room there are six robots who might or might not be conscious. In another room there are five humans, who are definitely conscious. You can only save one group; the other group will die. If we give robots who might be conscious equal rights with humans who definitely are conscious, then we ought to go save the six robots and let the five humans die. If it turns out that the robots really, underneath it all, are just toasters, then that’s a tragedy. Let’s not too casually assign humanlike rights to AIs!

Unless there’s either some astounding saltation in the science of consciousness or some substantial deceleration in the progress of AI technology, it’s likely that we’ll face this dilemma. Either deny robots rights and risk perpetrating a Holocaust against them, or give robots rights and risk sacrificing real human beings for the benefit of mere empty machines.

This may seem bad enough, but the problem is even worse than I, in my sunny optimism, have so far let on. I’ve assumed that AI systems are relevant targets of moral concern if they’re human-grade – that is, if they are like us in their conscious capacities. But the odds of creating only human-grade AI are slim. In addition to the kind of AI we currently have, which I assume doesn’t have any serious rights or moral status, there are, I think, four broad moral categories into which future AI might fall: animal-grade, human-grade, superhuman, and divergent. I’ve only discussed human-grade AI so far, but each of these four classes raises puzzles.

Animal-grade AI. Not only human beings deserve moral consideration. So also do dogs, apes, and dolphins. Animal protection regulations apply to all vertebrates: Scientists can’t treat even frogs and lizards more roughly than necessary. The philosopher John Basl has argued that AI systems with cognitive capacities similar to vertebrates ought also to receive similar protections. Just as we shouldn’t torture and sacrifice a mouse without excellent reason, so also, according to Basl, we shouldn’t abuse and delete animal-grade AI. Basl has proposed that we form committees, modeled on university Animal Care and Use Committees, to evaluate cutting-edge AI research to monitor when we might be starting to cross this line.

Even if you think human-grade AI is decades away, it seems reasonable, given the current chaos in consciousness studies, to wonder whether animal-grade consciousness might be around the corner. I myself have no idea if animal-grade AI is right around the corner or if it’s far away in the almost impossible future. And I think you have no idea either.

Superhuman AI. Superhuman AI, as I’m defining it here, is AI who has all of the features of human beings in virtue of which we deserve moral consideration but who also has some potentially morally important features far in excess of the human, raising the question of whether such AI might deserve more moral consideration than human beings.

There aren’t a whole lot of philosophers who are simple utilitarians, but let’s illustrate the issue using utilitarianism as an example. According to simple utilitarianism, we morally ought to do what maximizes the overall balance of pleasure to suffering in the world. Now let’s suppose we can create AI that’s genuinely capable of pleasure and suffering. I don’t know what it will take to do that – but not knowing is part of my point here. Let’s just suppose. Now if we can create such AI, then it might also be possible to create AI that is capable of much, much more pleasure than a human being is capable of. Take the maximum pleasure you have ever felt in your life over the course of one minute: call that amount of pleasure X. This AI is capable of feeling a billion times more pleasure than X in the space of that same minute. It’s a superpleasure machine!

If morality really demands that we should maximize the amount of pleasure in the world, it would thereby demand, or seem to demand, that we create as many of these superpleasure machines as we possibly can. Maybe we even ought to immiserate and destroy ourselves to do so, if enough AI pleasure is created as a result.

Even if you think pleasure isn’t everything – surely it’s something. If someday we could create superpleasure machines, maybe we morally ought to make as many as we can reasonably manage? Think of all the joy we will be bringing into the world! Or is there something too weird about that?

I’ve put this point in terms of pleasure – but whatever the source of value in human life is, whatever it is that makes us so awesomely special that we deserve the highest level of moral consideration – unless maybe we go theological and appeal to our status as God’s creations – whatever it is, it seems possible in principle that we could create that same thing in machines, in much larger quantities. We love our rationality, our freedom, our individuality, our independence, our ability to value things, our ability to participate in moral communities, our capacity for love and respect – there are lots of wonderful things about us! What if we were to design machines that somehow had a lot more of these things than we ourselves do?

We humans might not be the pinnacle. And if not, should we bow out, allowing our interests and maybe our whole species to be sacrificed for something greater? As much as I love humanity, under certain conditions I’m inclined to think the answer should probably be yes. I’m not sure what those conditions would be!

Divergent AI. The most puzzling case, I think, as well as the most likely, is divergent AI. Divergent AI would have human or superhuman levels of some features that we tend to regard as important to moral status but subhuman levels of other features that we tend to regard as important to moral status. For example, it might be possible to design AI with immense theoretical and practical intelligence but with no capacity for genuine joy or suffering. Such AI might have conscious experiences with little or no emotional valence. Just as we can consciously think to ourselves, without much emotional valence, there’s a mountain over there and a river over there, or the best way to grandma’s house at rush hour is down Maple Street, so this divergent AI could have conscious thoughts like that. But it would never feel wow, yippee! And it would never feel crushingly disappointed, or bored, or depressed. It isn’t clear what the moral status of such an entity would be: On some moral theories, it would deserve human-grade rights; on other theories it might not matter how we treat it.

Or consider the converse: a superpleasure machine but one with little or no capacity for rational thought. It’s like one giant, irrational orgasm all day long. Would it be great to make such things and terrible to destroy them, or is such irrational pleasure not really something worth much in the moral calculus?

Or consider a third type of divergence, what I’ve elsewhere called fission-fusion monsters. A fission-fusion monster is an entity that can divide and merge at will. It starts, perhaps, as basically a human-grade AI. But when it wants it can split into a million descendants, each of whom inherits all of the capacities, memories, plans, and preferences of the original AI. These million descendants can then go about their business, doing their independent things for a while, and then if they want, merge back together again into a unified whole, remembering what each individual did during its period of individuality. Other parts might not merge back but choose instead to remain as independent individuals, perhaps eventually coming to feel independent enough from the original to see the prospect of merging as something similar to death.

Without getting into details here, a fission-fusion monster would risk breaking our concept of individual rights – such as one person, one vote. The idea of individual rights rests fundamentally upon the idea of people as individuals – individuals who live in a single body for a while and then die, with no prospect of splitting or merging. What would happen to our concept of individual rights if we were to share the planet with entities for which our accustomed model of individuality is radically false?

Thursday, November 14, 2019

Who Cares about Happiness?

[talk to be given at UC Riverside's Homecoming celebration, November 16, on the theme of happiness]

There are several different ways of thinking about happiness. I want to focus on just one of those ways. This way of thinking about happiness is sometimes called “hedonic”. That label can be misleading if you’re not used to it because it kind of sounds like hedonism, which kind of sounds like wild sex parties. The hedonic account of happiness, though, is probably closest to most people’s ordinary understanding of happiness. On this account, to be happy is to have lots of positive emotions and not too many negative emotions. To be happy is to regularly feel joy, delight, and pleasure, to feel sometimes maybe a pleasant tranquility and sometimes maybe outright exuberance, to have lots of good feelings about your life and your situation and what’s going on around you – and at the same time not to have too many emotions like sadness, fear, anxiety, anger, disgust, displeasure, annoyance, and frustration, what we think of as “negative emotions”. To be happy, on this “hedonic” account, is to be in an overall positive emotional state of mind.

I wouldn’t want to deny that it’s a good thing to be happy in this sense. It is, for the most part, a good thing. But sometimes people say extreme things about happiness – like that happiness is the most important thing, or that all people really want is to be happy, or as a parent that the main thing you want for your children is that they be happy, or that everything everyone does is motivated by some deep-down desire to maximize their happiness. And that’s not right at all. We actually don’t care about our hedonic happiness very much. Not really. Not when you think about it. It’s kind of important, but not really that big in the scheme of things.

Consider an extreme thought experiment of the sort that philosophers like me enjoy bothering people with. Suppose we somehow found a way to turn the entire Solar System into one absolutely enormous machine or organism that experienced nothing but outrageous amounts of pleasure all the time. Every particle of matter that we have, we feed into this giant thing – let’s call it the orgasmatron. We create the most extreme, most consistent, most intense conglomeration of pure ecstatic joyfulness as it is possible to construct. Wow! Now that would be pretty amazing. One huge, pulsing Solar-System-sized orgasm.

Will this thing need to remember the existence of humanity? Will it need to have any appreciation of art or beauty? Will it have to have any ethics, or any love, or any sociality, or knowledge of history or science – will it need any higher cognition at all? Maybe not. I mean higher cognition is not what orgasm is mostly about. If you think that the thing that matters most in the universe is positive emotions, then you might think that the best thing that could happen to the future of the Solar System would be the creation of this giant orgasmatron. The human project would be complete. The world will have reached its pinnacle and nothing else really matters!

[not the orgasmatron I have in mind]

Now here’s my guess. Some of you will think, yeah, that’s right. If everything becomes a giant orgasmatron, nothing could be more awesome, that’s totally where we should go if we can. But I’ll guess that most of you think that something important would be lost. Positive emotion isn’t the only thing that matters. We don’t want the world to lose its art, and its beauty, and its scientific knowledge, and the rich complexity of human relationships. If everything got fed into this orgasmatron it would be a shame. We’d have lost something really important.

Now let me tell you a story. It’s from my latest book, A Theory of Jerks and Other Philosophical Misadventures, hot off the press this month.

Back in the 1990s, when I was a graduate student, my girlfriend Kim asked me what, of all things, I most enjoyed doing. Skiing, I answered. I was thinking of those moments breathing the cold, clean air, relishing the mountain view, then carving a steep, lonely slope. I’d done quite a bit of that with my mom when I was a teenager. But how long had it been since I’d gone skiing? Maybe three years? Grad school kept me busy and I now had other priorities for my winter breaks. Kim suggested that if it had been three years since I’d done what I most enjoyed doing, then maybe I wasn’t living wisely.

Well, what, I asked, did she most enjoy? Getting massages, she said. Now, the two of us had a deal at the time: If one gave the other a massage, the recipient would owe a massage in return the next day. We exchanged massages occasionally, but not often, maybe once every few weeks. I pointed out that she, too, might not be perfectly rational: She could easily get much more of what she most enjoyed simply by giving me more massages. Surely the displeasure of massaging my back couldn’t outweigh the pleasure of the thing she most enjoyed in the world? Or was pleasure for her such a tepid thing that even the greatest pleasure she knew was hardly worth getting?

It used to be a truism in Western (especially British) philosophy that people sought pleasure and avoided pain. A few old-school psychological hedonists, like Jeremy Bentham, went so far as to say that that was all that motivated us. I’d guess quite differently: Although pain is moderately motivating, pleasure motivates us very little. What motivates us more are outward goals, especially socially approved goals — raising a family, building a career, winning the approval of peers — and we will suffer immensely, if necessary, for these things. Pleasure might bubble up as we progress toward these goals, but that’s a bonus and side effect, not the motivating purpose, and summed across the whole, the displeasure might vastly outweigh the pleasure. Some evidence suggests, for example, that raising a child is probably for most people a hedonic net negative, adding stress, sleep deprivation, and unpleasant chores, as well as crowding out the pleasures that childless adults regularly enjoy. At least according to some research, the odds are that choosing to raise a child will make you less happy.

Have you ever watched a teenager play a challenging video game? Frustration, failure, frustration, failure, slapping the console, grimacing, swearing, more frustration, more failure—then finally, woo-hoo! The sum over time has to be negative, yet they’re back again to play the next game. For most of us, biological drives and addictions, personal or socially approved goals, concern for loved ones, habits and obligations — all appear to be better motivators than gaining pleasure, which we mostly seem to save for the little bit of free time left over. And to me, this is quite right and appropriate. I like pleasure, sure. I like joy. But that’s not what I’m after. It’s a side effect, I hope, of the things I really care about. I’d guess this is true of you too.

If maximizing pleasure is central to living well and improving the world, we’re going about it entirely the wrong way. Do you really want to maximize pleasure? I doubt it. Me, I’d rather write some good philosophy and raise my kids.

ETA, Nov 17:

In audience discussion and in social media, several people have pointed out that although I start by talking about a wide range of emotional states (tranquility, delight, having good feelings about your life situation), in the second half I focus exclusively on pleasure. The case of pleasure is easiest to discuss, because the more complex emotional states have more representational or world-involving components. On a proper hedonic view, however, the value of those more complex states rests exclusively on the emotional valence, or at most on the emotional valence plus possibly-false representational content -- on, for example, whether you have the feeling that life is going well, rather than on whether it's really going well. All the same observations apply: We do and should care about whether our lives are actually going well, much more than we care about whether we have the emotional feeling of its going well.

Tuesday, November 05, 2019

A Theory of Jerks and Other Philosophical Misadventures

... released today. *Confetti!*

Available from:

MIT Press, Amazon, B&N, or (I hope!) your local independent bookseller.

Some initial reviews and discussions.

------------------------------------------------

Preface

I enjoy writing short philosophical reflections for broad audiences. Evidently, I enjoy this immensely: Since 2006, I’ve written more than a thousand such pieces, published mostly on my blog The Splintered Mind, but also in the Los Angeles Times, Aeon, and elsewhere. This book contains fifty-eight of my favorites, revised and updated.

The topics range widely -- from moral psychology and the ethics of the game of dreidel to multiverse theory, speculative philosophy of consciousness, and the apparent foolishness of Immanuel Kant. There is no unifying thesis.

Maybe, however, there is a unifying theme. The human intellect has a ragged edge, where it begins to turn against itself, casting doubt on itself or finding itself lost among seemingly improbable conclusions. We can reach this ragged edge quickly. Sometimes, all it takes to remind us of our limits is an eight-hundred-word blog post. Playing at this ragged edge, where I no longer know quite what to think or how to think about it, is my idea of fun.

Given the human propensity for rationalization and self-deception, when I disapprove of others, how do I know that I'm not the one who is being a jerk? Given that all our intuitive, philosophical, and scientific knowledge of the mind has been built on a narrow range of cases, how much confidence can we have in our conclusions about the strange new possibilities that are likely to open up in the near future of artificial intelligence? Speculative cosmology at once poses the (literally) biggest questions that we can ask about the universe and reveals possibilities that threaten to undermine our ability to answer those same questions. The history of philosophy is humbling when we see how badly wrong previous thinkers have been, despite their intellectual skills and confidence.

Not all of my posts fit this theme. It's also fun to use the once-forbidden word "fuck" over and over again in a chapter about profanity. And I wanted to share some reminiscences about how my father saw the world -- especially since in some ways I prefer his optimistic and proactive vision to my own less hopeful skepticism. Other of my blog posts I just liked or wanted to share for other reasons. A few are short fictions.

It would be an unusual reader who enjoyed every chapter. I hope you'll skip anything you find boring. The chapters are all freestanding. Please don't just start reading on page 1 and then try to slog along through everything sequentially out of some misplaced sense of duty! Trust your sense of fun (chapter 47). Read only the chapters that appeal to you, in any order you like.

Riverside, California, Earth (I hope)
October 25, 2018

Friday, November 01, 2019

How Mengzi Came up with Something Better Than the Golden Rule

[an edited excerpt from my forthcoming book, A Theory of Jerks and Other Philosophical Misadventures]

There’s something I don’t like about the ‘Golden Rule’, the admonition to do unto others as you would have others do unto you. Consider this passage from the ancient Chinese philosopher Mengzi (Mencius):

That which people are capable of without learning is their genuine capability. That which they know without pondering is their genuine knowledge. Among babes in arms there are none that do not know to love their parents. When they grow older, there are none that do not know to revere their elder brothers. Treating one’s parents as parents is benevolence. Revering one’s elders is righteousness. There is nothing else to do but extend these to the world.

One thing I like about the passage is that it assumes love and reverence for one’s family as a given, rather than as a special achievement. It portrays moral development simply as a matter of extending that natural love and reverence more widely.

In another passage, Mengzi notes the kindness that the vicious tyrant King Xuan exhibits in saving a frightened ox from slaughter, and he urges the king to extend similar kindness to the people of his kingdom. Such extension, Mengzi says, is a matter of ‘weighing’ things correctly – a matter of treating similar things similarly, and not overvaluing what merely happens to be nearby. If you have pity for an innocent ox being led to slaughter, you ought to have similar pity for the innocent people dying in your streets and on your battlefields, despite their invisibility beyond your beautiful palace walls.

Mengzian extension starts from the assumption that you are already concerned about nearby others, and takes the challenge to be extending that concern beyond a narrow circle. The Golden Rule works differently – and so too the common advice to imagine yourself in someone else’s shoes. In contrast with Mengzian extension, Golden Rule/others’ shoes advice assumes self-interest as the starting point, and implicitly treats overcoming egoistic selfishness as the main cognitive and moral challenge.

Maybe we can model Golden Rule/others’ shoes thinking like this:

  1. If I were in the situation of person x, I would want to be treated according to principle p.
  2. Golden Rule: do unto others as you would have others do unto you.
  3. Thus, I will treat person x according to principle p.

And maybe we can model Mengzian extension like this:

  1. I care about person y and want to treat that person according to principle p.
  2. Person x, though perhaps more distant, is relevantly similar.
  3. Thus, I will treat person x according to principle p.

There will be other more careful and detailed formulations, but this sketch captures the central difference between these two approaches to moral cognition. Mengzian extension models general moral concern on the natural concern we already have for people close to us, while the Golden Rule models general moral concern on concern for oneself.

I like Mengzian extension better for three reasons. First, Mengzian extension is more psychologically plausible as a model of moral development. People do, naturally, have concern and compassion for others around them. Explicit exhortations aren’t needed to produce this natural concern and compassion, and these natural reactions are likely to be the main seed from which mature moral cognition grows. Our moral reactions to vivid, nearby cases become the bases for more general principles and policies. If you need to reason or analogise your way into concern even for close family members, you’re already in deep moral trouble.

Second, Mengzian extension is less ambitious – in a good way. The Golden Rule imagines a leap from self-interest to generalised good treatment of others. This might be excellent and helpful advice, perhaps especially for people who are already concerned about others and thinking about how to implement that concern. But Mengzian extension has the advantage of starting the cognitive project much nearer the target, requiring less of a leap. Self-to-other is a huge moral and ontological divide. Family-to-neighbour, neighbour-to-fellow citizen – that’s much less of a divide.

Third, you can turn Mengzian extension back on yourself, if you are one of those people who has trouble standing up for your own interests – if you’re the type of person who is excessively hard on yourself or who tends to defer a bit too much to others. You would want to stand up for your loved ones and help them flourish. Apply Mengzian extension, and offer the same kindness to yourself. If you’d want your father to be able to take a vacation, realise that you probably deserve a vacation too. If you wouldn’t want your sister to be insulted by her spouse in public, realise that you too shouldn’t have to suffer that indignity.

Although Mengzi and the 18th-century Genevan philosopher Jean-Jacques Rousseau both endorse mottoes standardly translated as ‘human nature is good’ and have views that are similar in important ways, this is one difference between them. In both Emile (1762) and Discourse on Inequality (1755), Rousseau emphasises self-concern as the root of moral development, making pity and compassion for others secondary and derivative. He endorses the foundational importance of the Golden Rule, concluding that ‘love of men derived from love of self is the principle of human justice’.

This difference between Mengzi and Rousseau is not a general difference between East and West. Confucius, for example, endorses something like the Golden Rule in the Analects: ‘Do not impose on others what you yourself do not desire.’ Mozi and Xunzi, also writing in China in the same period, imagine people acting mostly or entirely selfishly until society artificially imposes its regulations, and so they see the enforcement of rules rather than Mengzian extension as the foundation of moral development. Moral extension is thus specifically Mengzian rather than generally Chinese.

Care about me not because you can imagine what you would selfishly want if you were me. Care about me because you see how I am not really so different from others you already love.

This is an edited extract from ‘A Theory of Jerks and Other Philosophical Misadventures’ © 2019 by Eric Schwitzgebel, published by MIT Press.

Eric Schwitzgebel

This article was originally published at Aeon and has been republished under Creative Commons.

Thursday, October 31, 2019

Applying to PhD Programs in Philosophy, Part IV: Writing Samples

Part I: Should You Apply, and Where?

Part II: Grades, Classes, and Institution of Origin

Part III: Letters of Recommendation

--------------------------------------------------------

Applying to PhD Programs in Philosophy
PART IV: Writing Samples

[Probably not your writing sample]

Do Committees Read the Samples?

Applicants sometimes doubt that admissions committees (composed of professors in the department you're applying to) actually do read the writing samples, especially at the most prestigious schools. It's hard to imagine, say, John Searle carefully working through that essay on Aristotle you wrote for Philosophy 183! However, my experience is that the writing samples are read. For example, back when I visited U.C. Berkeley as an applicant in 1991 after having been admitted, I discussed my writing sample in detail with one member of the admissions committee, who convincingly assured me that the committee read all plausible applicants' samples. She said they were the single most important part of the application. Since that time, other professors at other elite PhD programs in philosophy have continued to assure me that they do carefully read and care about the writing samples. At U.C. Riverside, where I sometimes serve on graduate admissions, every writing sample is read by at least two members of the admissions committee.

How conscientiously they are read is another question. If an applicant doesn't look plausible on the surface based on GPA and letters, I'll skim through the sample pretty quickly, just to make sure we aren't missing a diamond in the rough. For most applicants, I will at least skim the whole sample, and I'll select a few pages in the middle to read carefully. I'll then revisit the samples of the thirty or so applicants who make it to the committee's cutdown list for serious consideration. Other committee members probably have similar strategies.

Few undergraduates can write really beautiful, professional-looking philosophy that sustains its quality page after page. But if you can -- or more accurately if some member of the admissions committee judges that you have done so in your sample -- that can make all the difference to your application. I remember in one case falling in love with a sample and persuading the committee to admit a student whose letters were tepid and whose grades were more A-minus than A. That student in fact came to UCR and did well. I'll almost always advocate the admission of the students who wrote, in my view, the very best samples, even if other aspects of their files are less than ideal. Of course, almost all such students have excellent grades and letters as well!

Conversely, admissions committees look skeptically at applicants with weak samples. Straight As and glowing letters won't get you into a mid-ranked program like UCR (much less a top program like NYU) if your sample isn't also terrific. There are just too many other applicants with great grades and glowing letters. The grades and letters get you past the first cut, but the sample makes you stand out.

You definitely want to spend time making your sample excellent. It is perhaps the most important thing to focus your time on in the fall term during which you are applying.

What I, at Least, Look for

First, the sample must be clearly written and show a certain amount of philosophical maturity. I can't say much about how to achieve these things other than to write clearly and be philosophically mature. These things are, I think, hard to fake. Trying too hard to sound sophisticated usually backfires.

Second, I want to see the middle of the essay get into the nitty-gritty somehow. In an analytic essay, that might be a detailed analysis of the pros and cons of an argument, or of its non-obvious implications, or of its structure. In a historical essay, that might be a close reading of a passage or a close look at textual evidence that decides between two competing interpretations. Many otherwise nicely written essays stay largely near the surface, simply summarizing an author's work or presenting fairly obvious criticisms at a relatively superficial level.

Most philosophers favor a lean, clear prose style with minimal jargon. (Some jargon is often necessary, though: There's a reason specialists have specialists' words.) When I've spent a lot of time reading badly written philosophy and fear my own prose is starting to look that way, I read a bit of David Lewis or Fred Dretske for inspiration.

Choosing Your Sample

Consider longish essays (at least ten pages) on which you received an A. Among those, you might have some favorites, or some might seem to have especially impressed the professor. You also want your essay, if possible, to be in one of the areas of philosophy you will highlight as an area of interest in the personal statement portion of your application. If your best essay is not in an area that you're planning to focus on in graduate school, however, quality is the more important consideration. So as not to show too much divergence between your writing sample and your personal statement, you might in your personal statement describe that topic as a continuing secondary interest.

If your best essay is in Chinese philosophy or medieval philosophy or 20th century European philosophy or technical philosophy of physics or some other area that's outside of the mainstream, and you're planning to apply to schools that don't teach in that area, it's a bit of a quandary. You want to show your best work, but you don't want the school to reject you because your interests don't fit their teaching profile, and also the school might not have a faculty member available who can really assess the quality of your essay.

Approach the professor(s) who graded the essay(s) you are considering and ask them for their frank opinion about whether the essay might be suitable for revision into a writing sample. Not all A essays are.

Revising the Sample

Samples should be about 12-20 pages long (double spaced, in a 12-point font). If possible, you should revise the sample under the guidance of the professor who originally graded it (who will presumably also be one of your letter writers). Your aim is to transform it from an undergraduate A paper to a paper that you would be proud to submit at the end of a graduate seminar dedicated to the topic in question. What's the most convincing evidence that an admissions committee could see that you will be able to perform excellently in their graduate seminars? It is, of course, that you are already doing work that would receive top marks in their seminars. Philosophy PhD admissions are so competitive that many applicants will already have samples of that quality, or nearly that quality; so it will be hard to stand out unless you do too.

I recommend that you treat the improvement of your writing sample as though it were an independent study course. If you can, you might even consider signing up for a formal independent study course aimed exactly at transforming your already-excellent undergraduate paper into an admissions-worthy writing sample. Revise, revise, revise! Deepen your analysis. Connect it more broadly with the relevant literature. Consider more objections -- or better, anticipate them in a way that prevents them from even arising. With your professor's help, eliminate those phrases, simplifications, distortions, and caricatures that suggest either an unsubtle understanding or ignorance of the relevant literature -- things which professors usually let pass in undergraduate essays but which can make a big difference in how you come across to an admissions committee.

What If Your Sample Is Too Long?

Most PhD programs cap the length of the writing sample: something like 20 double-spaced pages, or an equivalent number of words, sometimes as few as 15 pages. What if your best writing is an honors or master's thesis that's 45 pages long?

If that's your best work, then you definitely want it to be your sample. Some applicants ignore the length limits and submit the whole thing, hoping to be forgiven. (Sometimes they single-space or convert to a small font, hoping to minimize the appearance of violation.) Others mercilessly chop until they are within the limit. Admissions committee members vary in their level of annoyance at samples that exceed the stated limits. Some don't care -- they just want to see the best. Others refuse to read the sample at all, using the rules violation as an excuse to nix the application. I'd guess that the median reaction is to accept the sample but only read a portion of it -- say 15 to 20 pages' worth.

You should probably assume that the admissions committee will only read the number of pages stated in their page limits. There are three reasonable approaches to this problem. One is good old-fashioned cutting -- which, though hard, sometimes does strengthen an essay by helping you laser in on the most crucial issue. Another is submitting the entire sample but with a brief preface advising the committee to read only sections x, y, and z (totaling no more than 15 to 20 pages). Still another approach is to replace some of your sections with bracketed summaries.

For example, if your paper defends panpsychism (the view that consciousness is ubiquitous) and you need to cut a three-page section that responds to the objection that panpsychism is too radically counterintuitive to take seriously, you might replace that section with the following statement: "[For reasons of length, here I omit Section 5, which addresses the objection that panpsychism is too radically counterintuitive to take seriously. I respond by arguing that (1) intuition is a poor guide to philosophical truth, and (2) all metaphysical views of consciousness, not only panpsychism, have radically counterintuitive consequences.]"

[Old Series from 2007]

Thursday, October 24, 2019

Philosophy Contest: Write a Philosophical Argument That Convinces Research Participants to Donate to Charity

Can you write a philosophical argument that effectively convinces research participants to donate money to charity?

Prize: $1000 ($500 directly to the winner, $500 to the winner's choice of charity)

Background

Preliminary research from Eric Schwitzgebel's laboratory suggests that abstract philosophical arguments may not be effective at convincing research participants to give a surprise bonus award to charity. In contrast, emotionally moving narratives do appear to be effective.

However, it might be possible to write a more effective argument than the arguments used in previous research. Therefore U.C. Riverside philosopher Eric Schwitzgebel and Harvard psychologist Fiery Cushman are challenging the philosophical and psychological community to design an argument that effectively convinces participants to donate bonus money to charity at rates higher than they do in a control condition.

General Contest Rules

Contributions must be no longer than 500 words, text only, in the form of an ethical argument in favor of giving money to charities. Further details about form are explained in the next section.

Contributions must be submitted by email to argumentcontest@gmail.com by 11:59 pm GMT on December 31, 2019.

The winner will be selected according to the procedure described below. The winner will be announced by March 31, 2020.

Form of the Contribution

Contributions must be in the form of a plausible argument for the conclusion that it is ethically or morally good or required to give to charity, or that "you" should give to charity, or that it is good, if possible, to give to charities that effectively help people who are suffering due to poverty, or for some closely related conclusion.

Previous research suggests that charitable giving can be increased by inducing emotions (Bagozzi and Moore 1994; Erlandsson, Nilsson, Västfjäll 2018), by including narrative elements (McVey & Schwitzgebel 2018), and by mentioning an "identifiable victim" who would be benefited (Jenni & Loewenstein 1997; Kogut & Rytov 2011). While philosophical arguments sometimes have such features, we are specifically interested in whether philosophical arguments can be motivationally effective without relying on such features.

Therefore, contributions must meet the following criteria:

  • Text only. No pictures, music, etc. No links to outside sources.
  • No mention of individual people, including imaginary protagonists ("Bob"). Use of statistics is fine. Mentioning the individual reader ("you") is fine.
  • No mention of specific events, either specific historical events or events in individuals' lives. Mentioning general historical conditions is fine (e.g., "For centuries, wealthy countries have exploited the global south...."). Mentioning the effects of particular hypothetical actions is fine (e.g., "a donation of $10 to an effective charity could purchase [x] mosquito nets for people in malaria-prone regions").
  • No vividly detailed descriptions that are likely to be emotionally arousing (e.g., no detailed descriptions of what it is like to live in slavery or to die of malaria).
  • Nor should the text aim to be emotionally arousing by other means (e.g., don't write "Close your eyes and imagine that your own child is dying of starvation..."), except insofar as the relevant facts and arguments might be somewhat emotionally arousing even when coolly described.
  • The text should not ask the reader to perform any action beyond reading and thinking about the argument and donating.
  • The argument doesn't need to be formally valid, but it should be broadly plausible, presenting seemingly good argumentative support for the conclusion.
  • [ETA, Oct 28] Entries must not contain deception or attempt to mislead the reader.
  • If your argument contains previously published material, please separately provide us with full citation information and indicate any text that is direct quotation.

    Choosing the Winner

    Preliminary winnowing. We intend to test no more than twenty arguments. We anticipate receiving more than twenty submissions. We will winnow the submissions to twenty based on considerations of quality (well written arguments that are at least superficially convincing) and diversity (a wide range of argument types).

    Testing. We will recruit 4725 participants from Mechanical Turk. To ensure participant quality and similarity to previously studied populations, participants will be limited to the U.S., Canada, U.K., and Australia, and they must have high MTurk ratings and experience. Each participant (except those in the control condition) will read one submitted argument. On a new page, they will be informed that they have a 10% chance of receiving a $10 bonus, and they will be given the opportunity to donate a portion of that possible bonus to one of six well-known, effective, international charities. If no argument statistically beats the control condition, no prize will be awarded. If at least one argument statistically beats the control condition, the winning argument will be the argument with the highest mean donation. See the Appendix of this post for more details on stimuli and statistical testing.
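The recruitment target of 4725 can be checked against the round sizes described in the Appendix below (2500 initial readers across twenty arguments, 350 additional readers for each of five finalists, and 475 controls). A minimal arithmetic sketch:

```python
# Consistency check on the recruitment target of 4725 participants,
# using the round sizes described in the Appendix of this post.
initial_round = 20 * 125   # 2500 initial readers: twenty arguments, 125 each
second_round = 5 * 350     # five finalist arguments, 350 additional readers each
control_group = 475        # control condition
total = initial_round + second_round + control_group
print(total)  # 4725
```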

    Award

    The contributor of the winning argument will receive $500 directly, and we will donate an additional $500 to a legally registered charity (501(c)(3)) chosen by the contributor.

    Unless the contributor requests anonymity, we will announce the contributor as winner of the prize and publicize the contributor's name and winning argument in social media and other publications.

    Contributors may submit up to three entries if they wish, but only if those entries are very different in content.

    Contributions may be coauthored.

    All tested contributions will be made public after testing is complete. We will credit the authors for their contributions unless they request that their contributions be kept anonymous.

    Contact

    For further information about this contest, please email eschwitz at domain ucr.edu. When you are ready to submit your entry, send it to argumentcontest@gmail.com.

    Funding

    This contest is funded by a subgrant from the Templeton Foundation.

    --------------------------------------------------

    APPENDIX

    Stimulus

    After consenting, each participant (except for those in the control condition) will read the following statement:

    Some philosophers have argued that it is morally good to donate to charity or that people have a duty to donate to charity if they are able to do so. Please consider the following argument in favor of charitable donation.

    Please read as many times as necessary to fully understand the argument. Only click "next" when you feel that you adequately understand the text. In the comprehension section, you will be asked to recall details of the argument.

    The text of the submitted argument will then be presented.

    After the reader clicks a button indicating that they have read and understood the argument, a new page will open, and participants will read the following:

    Upon completion of this study, 10% of participants will receive an additional $10. You have the option to donate some portion of this $10 to your choice among six well-known, effective charities. If you are one of the recipients of the additional $10, the portion you decide to keep will appear as a bonus credited to your Mechanical Turk worker account, and the portion you decide to donate will be given to the charity you pick from the list below.

    Note: You must pass the comprehension question and show no signs of suspicious responding to receive the $10. Receipt of the $10 is NOT conditional, however, on how much you choose to donate if you receive the $10.

    If you are one of the recipients of the additional $10, how much of your additional $10 would you like to donate?

    [response scale $0 to $10 in $1 increments]

    Which charity would you like your chosen donation amount to go to? For more information, or to donate directly, please follow the highlighted links to each charity.

  • Against Malaria Foundation: "To provide funding for long-lasting insecticide-treated net (LLIN) distribution (for protection against malaria) in developing countries."
  • Doctors Without Borders / Médecins Sans Frontières: "Medical care where it is needed most."
  • Give Directly: "Distributing cash to very poor individuals in Kenya and Uganda."
  • Global Alliance for Improved Nutrition: "To tackle the human suffering caused by malnutrition around the world."
  • Helen Keller International: "Save the sight and lives of the world's most vulnerable and disadvantaged."
  • Multiple Myeloma Research Foundation: "We collect, interpret and activate the largest collection of quality information and put it to work for every person with multiple myeloma."
    These charities will be listed in randomized order.

    After this question, we will ask the following comprehension question: "In one sentence, please summarize the argument presented on the previous page", followed by a text box. Participants will be excluded if they leave this question blank or if they give what a coder who is unaware of their responses to the other questions judges to be a deficient answer. Participants who spend insufficient time on the argument page will also be excluded.

    Based on the submissions, we may add exploratory follow-up questions designed to discover possible mediators and moderators of the effects on charitable donation.

    After consenting, participants in the control condition will read the statement:

    Please consider the following description of the nature of energy. Please read as many times as necessary to fully understand the description. Only click "next" when you feel that you adequately understand the text. In the comprehension section, you will be asked to recall details of the text.

    They will then receive a 445-word description of the nature of energy from a middle school science textbook. After clicking a button indicating that they have read and understood the description, a new page will open, and participants will read the following:

    Some philosophers have argued that it is morally good to donate to charity or that people have a duty to donate to charity if they are able to do so.

    After the reader clicks a button indicating that they have read and understood the statement, a new page will open containing the same donation question as in the argument conditions.

    Statistical Testing

    In an initial round, 2500 participants will each be assigned to read one of the twenty arguments. The five arguments with the highest mean donation will be selected for further testing. These five arguments will each be given an additional 350 participants, and 475 participants will be entered into the control condition. If none of the five arguments is statistically better than control, then we will announce that there is no winner. We will pool all 475 participants (minus exclusions) in each of the five selected argument conditions, then we will compare each condition separately with the control group by a two-tailed t-test at an alpha level of .01. If at least one argument is better than control, the award will be given to the argument with the highest mean donation.

    Justification: Based on preliminary research, we expect a mean donation of about $3.50, a standard deviation of about $3, and clustering at $0, $5, and $10. In Monte Carlo modeling of twenty arguments with population mean donations in ten cent intervals from $2.60 to $4.50, the argument with the highest underlying mean was over 90% likely to be among the top five arguments after a sample of 100 participants per argument (allowing 25 exclusions), and after 400 participants per argument (allowing 75 exclusions) the winning argument was about 85% likely to be one of the two with the highest underlying means.
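A simplified version of that Monte Carlo model can be reproduced in a few lines. This sketch uses plain normal draws, ignoring the clamping at $0/$10 and the clustering at $0, $5, and $10 mentioned above, so its hit rate is only an approximation of the figures reported here.

```python
import random

def top5_hit_rate(n_per_arm=100, sd=3.0, reps=300, seed=1):
    """Monte Carlo sketch of the winnowing step: twenty arguments with true
    mean donations $2.60, $2.70, ..., $4.50. Estimate how often the argument
    with the highest true mean lands among the top five sample means.
    Simplification: unclamped normal draws (no $0/$5/$10 clustering)."""
    rng = random.Random(seed)
    true_means = [2.6 + 0.1 * i for i in range(20)]   # best arm is index 19
    hits = 0
    for _ in range(reps):
        sample_means = [
            sum(rng.gauss(mu, sd) for _ in range(n_per_arm)) / n_per_arm
            for mu in true_means
        ]
        top5 = sorted(range(20), key=lambda i: sample_means[i], reverse=True)[:5]
        if 19 in top5:
            hits += 1
    return hits / reps
```

Under these simplified assumptions the hit rate comes out well above 90%, consistent with the modeling described above.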

    Given that we will be running five statistical tests, we set alpha at .01 rather than .05 to reduce the risk of false positives. In preliminary research, McVey and Schwitzgebel found that exposure to a true story about a child rescued from poverty by charitable donation increased average rates of giving by about $1 (d = 0.3). Power analysis shows that an argument with a similar effect size would be 95% likely to be found statistically different from the control group at an alpha level of .01 and 400 participants in each group, while an argument with a somewhat smaller effect size (d = 0.2) would be 60% likely to be found statistically different.
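The quoted power figures can be reproduced with a standard normal-approximation power calculation for a two-sample comparison. This is a sketch (the original analysis presumably used standard power software); it ignores the negligible probability of rejecting in the wrong tail.

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z_critical(alpha):
    """Two-tailed critical value, found by bisection on the normal CDF."""
    target = 1.0 - alpha / 2.0
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if normal_cdf(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def power_two_sample(d, n_per_group, alpha=0.01):
    """Approximate power of a two-tailed two-sample test at effect size d,
    using the normal approximation to the test statistic."""
    delta = d * math.sqrt(n_per_group / 2.0)  # noncentrality parameter
    return 1.0 - normal_cdf(z_critical(alpha) - delta)

print(round(power_two_sample(0.3, 400), 2))  # ~0.95
print(round(power_two_sample(0.2, 400), 2))  # ~0.60
```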

    [image source]

    Thursday, October 17, 2019

    I'm Morally Good Enough Already, Thanks!

    In a fascinating new paper (forthcoming in Psychological Science), Jessie Sun and Geoffrey Goodwin asked undergraduate students in psychology to rate themselves on several moral and non-moral dimensions, and they asked those same students to nominate "informants" who knew them well to rate them along the same dimensions. Non-moral traits included, for example, energy level ("being full of energy") and intellectual curiosity ("being curious about many different things"). Moral traits included specific traits such as fairness ("being a fair person") but also included self-ratings of overall morality ("being a person of strong moral character" and "acting morally"). They then asked both the target participants and their informants to express the extent to which they aimed to change these facts about themselves (e.g., "I want to be helpful and unselfish with others..." or "I want [target's name] to be helpful and unselfish with others...") from -2 ("much less than I currently am") to +2 ("much more than I currently am").

    Before I spill the beans, any guesses?

    I've already got some horses in this race. Based partly on Simine Vazire's work, partly on my general life experience, and partly on theoretical reflections about the semi-paradoxical nature of self-evaluations of jerkitude and general moral character, I have speculated that we should see little to no relationship between self-evaluations of general moral character and one's actual moral character. Also, based partly on recent work in social psychology and behavioral economics by Cialdini, Bicchieri, and others, and partly again on general life experience, I have conjectured that most people aim for moral mediocrity.

    You will be unsurprised, I suppose, to hear that I interpret Sun and Goodwin's results as broadly confirmatory of these predictions.

    To me, perhaps their most striking result -- though not Sun's and Goodwin's own point of emphasis -- is the almost non-existent correlation between self-ratings of general morality and informant ratings of general morality. Neither of their two samples (roughly 300-600 participants each) showed a statistically detectable relationship (there was only a weak positive trend: r = .15 and .10, n.s.). Self-ratings of some specific moral traits -- honesty, fairness, and loyalty -- also showed at best weak correlations with spotty statistical significance (r = 0 to .3, none significant in both samples). However, other specific moral traits showed better correlations (purity, compassion, and responsibility, r = .2 to .5 in both samples).

    In other words, Sun and Goodwin find basically no statistically detectable relationship between how morally good you say you are, and how honest and fair and loyal you say you are, and what your closest friends and family say about you.

    Could the informants be wrong and the self-ratings correct? Well, of course! That thing I did that seemed immoral, unfair, and dishonest... of course, it wasn't nearly as bad as it seemed. In fact it was good! But only I fully appreciate that, since only I know the full details of the situation. Informants might underestimate my moral character. (If this sounds like suspicious self-exculpation, well, at least sometimes our moral excuses have merit.)

    Alternatively, close friends and family might overestimate my moral character: The people who know me well who I nominate for a study like this might cut me more slack than I deserve. I might rightly be hard on myself for the dishonest things I've done that they don't know about or know about but forgive; or maybe they don't want to express their true middling opinion of the target participants in a study like this. Likely, something like this is going on in these data: Overall, informants gave higher moral ratings to target participants than the target participants gave to themselves -- practically at ceiling (mean 4.5 and 4.4 on a 1-5 scale, compared to 4.0 in the targets' self-ratings). Maybe this reflects the way the informants were chosen and how they were prompted to respond.

    Without a general moralometer, or even observational data about plausibly moral or immoral behavior, it's hard to know how accurate such self- and other-ratings are. Nonetheless, the discorrelation is striking. While "people who know you well" might easily be wrong about your moral character, you might think that, if anything, participants would tend to nominate informants whose views of them align with their own self-conceptions (their best friends and favorite family members), in which case any error would tend to be on the side of overcorrelation rather than undercorrelation. The lack of correlation suggests an abundance of moral disagreement and error somewhere. My guess would be everywhere, with ample problems on both sides, for multiple reasons. Moral self-assessment is hard, and friend-assessment is at least dicey.

    This isn't a general problem in the Sun and Goodwin data. The self-ratings and informant ratings of non-moral traits generally showed good correlations (mostly r = .5 to .7, p < .001) -- including for seemingly mushy traits like "aesthetic sensitivity" and "trust".

    How about the moral mediocrity thesis? Do people generally express a strong desire to improve morally? Not in Sun and Goodwin's data. Respondents tended to prioritize reducing negative emotionality (e.g., depression, anxiety) and improving achievement (productiveness, creative imagination). Moral improvement appeared near the bottom of their list of goals. Given the opportunity to choose their three top goals among 21 possible general self-improvement goals of this sort, only 3% of target respondents ranked general moral improvement among those three. People who rated themselves comparatively high in moral traits gave even lower priority to moral self-improvement than people who rated themselves comparatively lower, suggesting that they are especially likely to see themselves as already morally "good enough" -- even if, as I'm inclined to think, such self-ratings of morality are almost completely uncorrelated with genuine morality.

    [Detail of Figure 2, from Sun & Goodwin 2019; click to enlarge]

    One thing that Sun and Goodwin did not ask about, which might have been interesting to see, is whether people would express willingness to trade away moral traits for desirable non-moral traits: If they could become more creative and less anxious at the cost of becoming less honest and less morally good overall, would they? I'm not sure I would trust self-reports about this... but I'd at least be curious to ask.

    In their deeds, as revealed by the choices they make and the discussions they choose to have and not have and the goals they choose to pursue, people tend to show little interest in accurate moral self-assessment or in general moral self-improvement above a minimal, mediocre standard. In my experience, if asked explicitly, people won't typically own up to this. But maybe, as suggested by Sun's and Goodwin's data, they will admit it implicitly, or admit to pieces of it explicitly, if asked in the right kind of way.

    Tuesday, October 15, 2019

    New Kickstarter Project: Vital: The Future of Healthcare

    ... here.

    Help fund science fictional speculation on health technology!

    If the project is funded, I will contribute a new story I am writing about the possible future of mood and attitude control in schoolchildren.

    Thursday, October 10, 2019

    Applying to PhD Programs in Philosophy, Part III: Letters of Recommendation

    Part I: Should You Apply, and Where?

    Part II: Grades, Classes, and Institution of Origin

    Good grades alone won't secure admission to a PhD program in philosophy. Writing samples and letters of recommendation are also very important. I believe that writing samples should carry more weight than letters of recommendation (and admission committee members often say they do), but I suspect that in fact letters carry at least as much weight. An applicant needs at least three.

    Who to Ask

    If a professor gave you an A (not an A-minus) in an upper-division philosophy course, consider them a candidate to write a letter. You needn't have any special relationship with the professor, or have visited during office hours, or have taken multiple classes from them -- though all of these things can help. Don't be shy about asking; we're used to it!

    No matter how friendly they seem, you should be cautious about asking for letters from professors who have given you A-minuses or below, since if they have integrity in writing their letters, it will come out that your performance in their class was not quite top notch. If a professor has given you both an A and an A-minus, there might still have to be some restraint in the letter -- though less so if the A is the more recent grade.

    Letters from philosophers are distinctly preferable to letters from non-philosophers. Letters from eminent scholars are distinctly preferable to letters from assistant professors. Of course, these factors need to be weighed against the expected quality of the letter.

    You may submit more than the stated minimum of letters, but be advised that three strong letters looks considerably better in an application than three strong letters and a mediocre one.

    Although it's a delicate matter, you can ask a professor whether they think they can write a strong letter for you. If you feel doubt, and if you have a backup letter writer in mind, tactfully asking is probably a good idea.

    Should You Waive Your Right to See the Letter?

    Most applicants waive the right, and some professors will feel offended or put on the spot if an applicant does not waive the right.

    However, I confess that in my own case, I think I might be slightly less likely to say something negative, and I might think more carefully about how the letter would come across, if I think the applicant might view it. On the other hand, for the few very best of my letters, I might also slightly restrain my transports of enthusiasm. (I suspect professors don't really have good self-knowledge about such matters.)

    Enabling Your Professors to Write the Best Possible Letters

    Think of all those wonderful things you've done that don't show up on your transcript! You went to a bunch of talks at the APA last year when it was in town. You gave free tutoring to high school students. You won the Philosophy Department award for best undergraduate essay. All on your own, you read Kant's Critique of Pure Reason last summer and two commentaries on it. You play piano in nightclubs. You have two thousand Twitter followers. (Be careful, however, what you say on publicly viewable social media, since admissions committees might discover it.) You got a perfect score on the SAT. You work with a local charitable organization. You're captain of the college debate team.

    Your letter writers want to know these things. Such facts come across much better in letters than in your personal statement (where they might seem immodest or irrelevant). In letters, they can be integrated with other facts to draw a picture of you as an interesting, promising student. So give your letter writers a brag sheet and don't be modest! Err on the side of over-including things rather than under-including. Sit there while they read it so they have a chance to ask questions. Explain to them that it's just a brag sheet and that you realize that much or most of it might be irrelevant to their letters. If you're embarrassed, feel free to blame me! ("Well, on Eric Schwitzgebel's blog, he said I should give you a brag sheet with all of this kind of stuff, even though it's kind of embarrassing.")

    Give your professors copies of all of the essays you've written for them, including if possible their comments on those essays. I don't always remember what my students have written about, especially if it has been a year, even if the essays are excellent. With a copy of the essays in hand, I can briefly describe them -- their topics, what seemed especially good about them -- in a way that adds convincing detail to the letter and gives the impression that I really do know and remember the student's work.

    Give your letter writers copies of your personal statement. If a letter writer says "Augustin has a deep passion for epistemology and hopes to continue to study that in graduate school" and your personal statement says nothing about epistemology, it looks a bit odd. You want the portraits drawn by your letter writers and your own self-portrait to match. Also, personal statements are extremely hard to write well (more on that later!) and it's good to have feedback on them from your letter writers.

    Give your letter writers your transcript. They may not know you have excellent grades across the board. Once they know this, they can write a stronger letter and one that more concretely addresses your performance relative to other students at your school. Also, they might be able to comment helpfully to the admissions committee on aberrations in your transcript. ("Prof. Hubelhauser hasn't given a student an A since 2003" or "Although Vania's grades slipped a bit in Fall Quarter 2016, her mother was dying of cancer that term, and her previous and subsequent grades more accurately reflect her abilities". Of course, they can't write the latter unless you tell them.)

    Give your letter writers a list of all the schools you are applying to and their deadlines, ideally with the first deadline highlighted. This serves several functions: It tells them when the letter needs to be completed (the first deadline). It makes it convenient for them to confirm that they have received all of the schools' letter requests and sent out all of their letters. It is an opportunity for them to provide feedback on your choice of schools. (Maybe there's a school that would be a good fit that you are needlessly omitting?) And it gives them an occasion to reflect on whether they might want to customize their letters for some of the schools.

    Maybe I'm a little old fashioned, but I prefer all of this material printed in hard copy. Then I can just staple it together and easily access everything I need. But it probably wouldn't hurt to also send it electronically, for professors who prefer things that way.

    Give your letter writers all of this material at least one month before the first deadline.

    Gentle Reminders

    Professors are flaky and forgetful. They are hardly ever punished for such behavior, so their laxity is unsurprising. Also, it's part of the charm of being absent-minded and absorbed in deeper things like the fundamental structure of reality!

    Consequently, it is advisable to email your letter writers a gentle reminder a week before your first deadline. If you don't receive confirmation from the schools (some will give you confirmation, some won't) or from the letter writer, saying that the letters are sent, send another reminder a week after the deadline.

    Don't panic if the letters are late. Admissions committees are used to it, and they don't blame the applicant. However, if the letter still isn't in the file by the time the committee gets around to reading your application, it will probably never be read. (You may still be admitted if the two letters that did arrive were good ones.)

    If the school doesn't provide electronic confirmation that your application is received and complete, it might be advisable to email the secretarial staff a week or so after the deadline to confirm that your application is all in order.

    Advice to Letter Writers

    When you've read hundreds of letters of recommendation, they become something of a blur. Most letters say "outstanding student" or "I'm delighted to recommend X" or "I'm confident X will succeed in graduate school in philosophy". It would be strange not to say something of this sort, but still -- my eyes start to glaze over. I suspect that trying to detect nuanced differences in such phrases is pointless, since I doubt such nuances closely track applicant quality. More helpful: (1.) Comparative evaluations like: "best philosophy major in this year's graduating class"; or "though only an undergraduate, one of three students, among 9, to earn an 'A' in my graduate seminar"; or "her GPA of 3.87 is second-highest among philosophy majors". (2.) Descriptions of concrete accomplishments: "Won the department's prize in 2018 for best undergraduate essay in philosophy"; or "President of the Philosophy Club". It's also nice to hear a little about the applicant's work and what's distinctive of her as a student and person.

    Regarding those little checkboxes on some schools' cover sheets ("top 5%, top 10%" etc.): My impression is that letter writers vary in their conscientiousness about such numbers and have different comparison groups in mind, so I tend to discount them unless backed up by specific comparison assessments in the letter. However, my experience is that other people on the admissions committee sometimes take the checkboxes more seriously.

    Most letter writers write the same letter for every school rather than addressing the specific paragraph-answer questions that some schools ask. However, if you think an applicant is a particularly good fit for one school, a specifically tailored letter that explains why can be helpful.

    Gifts of Thanks

    The best gift of thanks that you can give to your letter writers is to update them on your admissions and rejections from time to time. Even if it's a complete whiff and you're rejected everywhere, please do tell them. Also, maybe about a year later, after you're in a graduate program, or alternatively after you're out of academia into the world of business or elsewhere, an update on how things are going is lovely to hear!

    Personally, I -- and I suspect most letter writers -- prefer not to receive chocolates or gift cards or such. Of course, we appreciate the thought behind such tokens, and there's nothing wrong with expressing appreciation this way. If you do this, please keep the monetary value low.

    [image source]

    Tuesday, October 08, 2019

    So 2018?

    I'm told that A Theory of Jerks and Other Philosophical Misadventures is now being printed. I haven't yet seen a physical copy, but hopefully soon! If you're thinking of reviewing it or commenting on it, that would be awesome, and I can see if I can talk MIT Press into sending you an advance copy.

    PS: Is the cover already dated? Will vaping hipsters soon seem so 2018, a relic of the past, to whom we owe mainly a wistful nostalgia?

    Friday, October 04, 2019

    What Makes for a Good Philosophical Argument, and The Common Ground Problem for Animal Consciousness

    What is it reasonable to hope for from a philosophical argument?

    Soundness would be nice -- a true conclusion that logically follows from true premises. But soundness isn't enough. Also, in another way, soundness is sometimes too much to demand.

    To see why soundness isn't enough, consider this argument:

    Premise: Snails have conscious sensory experiences, and ants have conscious sensory experiences.

    Conclusion: Therefore, snails have conscious sensory experiences.

    The argument is valid: The conclusion follows from the premises. For purposes of this post, let's assume that premise, about snails and ants, is also true and that the philosopher advancing the argument knows it to be true. If so, then the argument is sound and known to be so by the person advancing it. But it doesn't really work as an argument, since anyone who isn't already inclined to believe the conclusion won't be inclined to believe the premise. This argument isn't going to win anyone over.

    So soundness isn't sufficient for argumentative excellence. Nor is it necessary. An argument can be excellent if the conclusion is strongly suggested by the premises, despite lacking the full force of logical validity. That the Sun has risen many times in a regular way and that its doing so again tomorrow fits with our best scientific models of the Solar System is an excellent argument that it will rise again tomorrow, even though the conclusion isn't a 100% logical certainty given the premises.

    What then, should we want from a philosophical argument?

    First, let me suggest that a good philosophical argument needs a target audience, the expected consumers of the argument. For academic philosophical arguments, the target audience would presumably include other philosophers in one's academic community who specialize in the subarea. It might also include a broader range of academic philosophers or some segment of the general public.

    Second, an excellent philosophical argument should be such that the target audience ought to be moved by the argument. Unpacking "ought to be moved": A good argument ought to incline members of its target audience who begin initially neutral or negative concerning its conclusion to move in the direction of endorsing its conclusion. Also, members of its target audience antecedently inclined in favor of the conclusion ought to feel that the argument provides good support for the conclusion, reinforcing their confidence in the conclusion.

    I intend this standard to be a normative standard, rather than a psychological standard. Consumers of the argument ought to be moved. Whether they are actually moved is another question. People -- even, sad to say, academic philosophers! -- are often stubborn, biased, dense, and careless. They might not actually be moved even if they ought to be moved. The blame for that is on them, not on the argument.

    I intend this standard as an imperfect generalization: It must be the case that generally the target audience ought to be moved. But if some minority of the target audience ought not to be moved, that's consistent with excellence of argument. One case would be an argument that assumes as a premise something widely taken for granted by the target audience (and reasonably so) but which some minority portion of the target audience does not, for their own good reasons, accept.

    I intend this standard to require only movement, not full endorsement: If some audience members initially have a credence of 10% in the conclusion and they are moved to a 35% credence after exposure to the argument, they have been moved. Likewise, someone whose credence is already 60% before reading the argument is moved in the relevant sense if they rationally increase their credence to 90% after exposure to the argument. But "movement" in this sense needn't be understood wholly in terms of credence. Some philosophical conclusions aren't so much true or false as endorseable in some other way -- beautiful, practical, appealing, expressive of a praiseworthy worldview. Movement toward endorsement on those grounds should also count as movement in the relevant sense.

    You might think that this standard -- that the target audience ought to be moved -- is too much to demand from a philosophical argument. Hoping that one's arguments are good enough to change reasonable people's opinions is maybe a lot to hope for. But (perhaps stubbornly?) I do hope for it. A good, or at least an excellent, philosophical argument should move its audience. If you're only preaching to the choir, what's the point?

    In his preface to Consciousness and Experience, William G. Lycan writes

    In 1987... I published a work entitled Consciousness. In it I claimed to have saved the materialist view of human beings from all perils.... But not everyone has been convinced. In most cases this is due to plain pigheadedness. But in others it results from what I now see to have been badly compressed and cryptic exposition, and in still others it is articulately grounded in a peril or two that I inadvertently left unaddressed (1996, p. xii).

    I interpret Lycan's preface as embracing something like my standard -- though with the higher bar of convincing the audience rather than moving the audience. Note also that Lycan's standard appears to be normative. There may be no hope of convincing the pigheaded; the argument need not succeed in that task to be excellent.

    So, when I write about the nature of belief, for example, I hope that reasonable academic philosophers who are not too stubbornly committed to alternative views will find themselves moved toward a dispositional approach (on which belief is at least as much about walking the walk as talking the talk) -- and I hope that other dispositionalists will feel reinforced in their inclinations. The target audience will feel the pull of the arguments. Even if they don't ultimately endorse my approach to belief, they will, I hope, be less averse to it than previously. Similarly, when I defend the view that the United States might literally be conscious, I hope that the target audience of materialistically-inclined philosophers will come to regard the group consciousness of a nation as less absurd than they probably initially thought. That would be movement!

    Recently, I have turned my attention to the consciousness, or not, of garden snails. Do garden snails have a real stream of conscious experience, like we normally assume that dogs and ravens have? Or is there "nothing it's like" to be a garden snail, in the way we normally assume there's nothing it's like to be a pine tree or a toy robot? In thinking about this question, I find myself especially struck by what I'll call The Common Ground Problem.

    The Common Ground Problem is this. To get an argument going, you need some common ground with your intended audience. Ideally, you start with some shared common ground, and then maybe you also introduce factual considerations from science or elsewhere that you expect they will (or ought to) accept, and then you deliver the conclusion that moves them your direction. But on the question of animal consciousness specifically, people start so far apart that finding enough common ground to reach most of the intended audience becomes a substantial problem, maybe even an insurmountable problem.

    I can illustrate the problem by appealing to extreme cases; but I don't think the problem is limited to extreme cases.

    Panpsychists believe that consciousness is ubiquitous. That's an extreme view on one end. Although not every panpsychist would believe that garden snails are conscious (they might think, for example, that subparts of the snail are conscious but not the snail as a whole), let's imagine a panpsychist who acknowledges snail consciousness. On the other end, some philosophers, such as Peter Carruthers, argue that even dogs might not be (determinately) conscious. Now let's assume that you want to construct an argument for (or against) the consciousness of garden snails. If your target audience includes the whole range of philosophers from panpsychists to people with very restrictive views about consciousness like Carruthers, it's very hard to see how you speak to that whole range of readers. What kind of argument could you mount that would reasonably move a target audience with such a wide spread of starting positions?

    Arguments about animal consciousness seem always to start already from a set of assumptions about consciousness (this kind of test would be sufficient, this other kind not; this thing is an essential feature of consciousness, the other thing not). The arguments will generally beg the question against audience members who start out with views too far away from one's own starting points.

    How many issues in philosophy have this kind of problem? Not all, I think! In some subareas, there are excellent arguments that can or should move, even if not fully convince, most of the target audience. Animal consciousness is, I suspect, unusual (but probably not unique) in its degree of intractability, and in the near-impossibility of constructing an argument that is excellent by the standard I have articulated.

    [image source]

    Friday, September 27, 2019

    Age Effects on SEP Citation, Plus the Baby Boom Philosophy Bust and The Winnowing of Greats

    I have a theory about the baby boom and academic philosophy in the major Anglophone countries. To explain and defend it, we'll need to work through some more numbers from my analysis of citations in the Stanford Encyclopedia of Philosophy.

    Recency Bias

    First, let's examine recency bias in the encyclopedia. I've done this by taking David Schwitzgebel's August 2019 scrape of the bibliographic sections of all the main-page SEP entries and searching for the first occurrence of "19", "20", or "forthcoming" on each line, then retrieving the first four characters from that location. Non-numbers (except "fort") and numbers <1900 or >2019 were excluded. Everything else was interpreted as a date. (I did not include dates from before 1900, since those works' citation formats are less systematic, and often a translation date is cited rather than an original publication date.)
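
    The extraction rule just described can be sketched in a few lines of Python. This is a minimal illustration of the procedure, not the actual analysis script, and the sample bibliography lines are invented:

```python
import re
from collections import Counter

def extract_year(line):
    """Pull a publication date from one bibliography line, per the rule
    described above: find the first occurrence of "19", "20", or
    "forthcoming", read four characters from that point, and discard
    anything that isn't "fort" or a number in 1900-2019."""
    match = re.search(r"19|20|fort", line)
    if match is None:
        return None
    token = line[match.start():match.start() + 4]
    if token == "fort":
        return "forthcoming"
    if not token.isdigit():
        return None
    year = int(token)
    return year if 1900 <= year <= 2019 else None

# Tally dates across a toy list of bibliography lines:
lines = [
    "Rawls, J., 1971, A Theory of Justice, Cambridge, MA: Harvard.",
    "Kripke, S., 1980, Naming and Necessity, Cambridge, MA: Harvard.",
    'Doe, J., forthcoming, "Some Paper", Some Journal.',
]
years = [extract_year(l) for l in lines]
counts = Counter(y for y in years if y is not None)
```

    Note that a pre-1900 date like 1739 contains no "19" or "20" substring, so such lines simply yield no date, matching the exclusion described above.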

    The result is a pretty little curve peaking at 2003-2007:

    [click to enlarge]

    In 2014, I'd conducted a similar analysis. In those data, the peak was 1999-2003:

    [click to enlarge]

    And in 2010, I'd also done a similar analysis! The peak year was 2000:

    [click to enlarge]

    Thus, in the Stanford Encyclopedia, the most recent works appear to be somewhat disadvantaged compared to works about ten years old. Back in time from the peak years, there's a steep linear decline to about 1950, before which there are few citations and the citation rate becomes approximately flat. (Probably, serious curve fitting wouldn't show it to be three linear phases; but close enough.) Over the past nine years, the peak appears to have advanced by about five years. Since SEP entries are updated about every five years on average, we might expect some delays for that reason; and if people are a little lazy about updating references when they update their entries, that could explain why the peak isn't advancing as fast as the clock.

    I assume that all these effects are recency effects. Another alternative, of course, is that early 21st century philosophy is vastly better and more citable than earlier philosophy, so that a good 23rd century encyclopedia would show a similar curve, also massively disproportionately citing early 21st century philosophers compared to 20th century philosophers. (If you find that plausible, I have a beautiful little Proof that P to sell you!)

    Based on these results, one might expect that the most-cited philosophers in the 2019 Stanford Encyclopedia would be those whose most influential works appeared around 2003-2007. However, that is not the case.

    I have a twofold explanation why: The Winnowing of Greats and The Baby Boom Philosophy Bust. But it's going to take a bit of data analysis to get there.

    Most Cited Philosophers, Oldest Generation

    For analysis, I have divided my list of the 295 most-cited philosophers into four generations based on age: 1900-1919 (oldest), 1920-1945 (pre-boom), 1946-1964 (boomers), and 1965-present (Generation X). Age was estimated based on birthyear as recorded in Wikipedia (for most authors) or estimated based on date of undergraduate degree (assuming age 22) or in a few cases date of PhD (assuming age 29). A CSV with the data is here. I welcome corrections.
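
    The cohort assignment and birth-year estimation just described can be sketched as follows. These are hypothetical helper functions for illustration, not the code actually used:

```python
def estimate_birthyear(wikipedia_birthyear=None, ba_year=None, phd_year=None):
    """Estimate birth year: prefer Wikipedia's recorded year; else assume
    age 22 at the undergraduate degree; else age 29 at the PhD."""
    if wikipedia_birthyear is not None:
        return wikipedia_birthyear
    if ba_year is not None:
        return ba_year - 22
    if phd_year is not None:
        return phd_year - 29
    return None

def generation(birthyear):
    """Map a birth year to the four cohorts used in this analysis."""
    if birthyear is None:
        return None
    if birthyear < 1920:
        return "oldest"      # 1900-1919
    if birthyear < 1946:
        return "pre-boom"    # 1920-1945
    if birthyear < 1965:
        return "boomer"      # 1946-1964
    return "gen-x"           # 1965-present
```

    So, for example, a philosopher with no Wikipedia birth year who took an undergraduate degree in 1975 is estimated as born in 1953 and classified as a boomer.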

    Looking at the oldest generation (1900-1919), we see some stalwarts near the top of the most-cited list: Quine at #2 and Davidson at #5. Chisholm, Strawson, Popper, Geach, Goodman, Mackie, and Anscombe all appear in the top 50. Interestingly, although 19 philosophers from this generation rank among the top 100, only 13 appear in the remainder of the list of 295.

    I'm inclined to attribute this to a phenomenon I call The Winnowing of Greats. This is the tendency for the difference between the top performers and the nearly-top performers in any group to come to seem larger with historical (and other types of) distance. We're still citing Quine and Davidson, and to some extent Richard Brandt (#129) and Norman Malcolm (#236), but less famous philosophers from that generation are quickly dropping off the radar.

    The intuitive idea of Winnowing of Greats is this: If you're close to a field and you want to list, say, ten leaders in that field in rank order, you might list A, B, C, D, E, F, G, H, I, and J. Another person, also close, might partly agree, maybe listing A, C, B, E, G, D, I, K, L, and M. With more distance, someone might only list or think of five -- likely A, B, C (consensus top) and two of D, E, F, or G, starting to forget about H and higher. Still later, people might only mention A, B, and C. Over time, these will come to seem the consensus "best" and thus the ones who need to be discussed on grounds of historical importance in addition to whatever other reasons there are to discuss them; and others will be relatively less mentioned and mostly forgotten except by specialists, and the gap in apparent importance between the top and the remainder will grow -- eventually becoming the "consensus of history".

    We could interpret such winnowing as a type of recency bias against all but the most famous, flowing from ignorance due to distance; or we could see it as a more legitimate winnowing process.

    Starting somewhere around rank #50, the philosophers from the oldest generation who are still ranked might strike those of us who know the history of 20th century philosophy as ranked rather low relative to their historical importance. I interpret this as recency bias. Quantitative evidence of recency bias is this: Looking at only those philosophers on both the 2014 and 2019 lists, the average loss in rank between the two measures was 11 spots. (Going logarithmic, the average natural log of the rank is 4.11 in 2014 and 4.29 in 2019.)

    (For the curious, Chisholm was a notable decliner, rank 12 to 19, which is proportionally large in just 5 years, while Anscombe bucked the trend, climbing significantly, from 66 to 48.)
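
    The rank-change summary used in this and the following sections can be computed like this. The ranks below are just the three examples mentioned nearby plus made-up numbers, not the full dataset:

```python
import math

def rank_change_summary(ranks_2014, ranks_2019):
    """For philosophers appearing on both lists, return the mean change in
    rank and the mean natural-log rank in each year. A positive mean change
    means the group dropped (larger rank numbers) in 2019."""
    shared = ranks_2014.keys() & ranks_2019.keys()
    n = len(shared)
    mean_change = sum(ranks_2019[p] - ranks_2014[p] for p in shared) / n
    mean_ln_2014 = sum(math.log(ranks_2014[p]) for p in shared) / n
    mean_ln_2019 = sum(math.log(ranks_2019[p]) for p in shared) / n
    return mean_change, mean_ln_2014, mean_ln_2019

# Illustrative ranks only (Quine's is invented for this sketch):
r2014 = {"Chisholm": 12, "Anscombe": 66, "Quine": 2}
r2019 = {"Chisholm": 19, "Anscombe": 48, "Quine": 2}
change, ln14, ln19 = rank_change_summary(r2014, r2019)
```

    Averaging log ranks rather than raw ranks downweights big jumps far down the list, which is why I report both figures.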

    Most Cited Philosophers, Pre-Boom Generation

    The dominant generation is the pre-boom generation (1920-1945). Although this generation includes the largest number of birth years, their dominance of the top of the list is too great to be explainable by that fact alone. This generation gives us six of the top ten (Lewis, Putnam, Rawls, Kripke, Williams, and Nozick) and 33 of the top 50. Most of these authors did their most influential work in the 1960s-1980s. Despite the citation curve peaking for works written in 2003-2007, foundational work by this generation is still being heavily cited. For example, the two most-cited works in the Stanford Encyclopedia are Rawls's A Theory of Justice (1971, cited in 115 entries) and Kripke's Naming and Necessity (1980, cited in 88 entries). (More data on this soon.)

    Time is starting to affect the rankings of this generation, too, with an average decline in rank of 8 (average difference in ln of .05). Notably, however, in the top 50, there is an average increase in rank of 3. (It's 1.4 if we exclude Pettit, whose rank increased markedly due to a methodological change: I now include second authors.) This difference in trajectory between the top and bottom is consistent with the Winnowing of Greats.

    According to a demographic theory that I call The Baby Boom Philosophy Bust, the Baby Boom generation had a substantial demographic disadvantage in academic philosophy in the United States. (This probably generalizes outside of the U.S. and outside of philosophy, but let me stick with what I know.) Undergraduate enrollments in the U.S. jumped from 2,444,900 in 1949-1950 to 3,639,847 in 1959-1960 to 8,004,660 in 1969-1970 to 11,569,899 in 1979-1980. After that, enrollments continued to grow, but at a much slower pace. The latter part of this period was of course when the baby boomers hit college, but the earlier part of the period was important too, in the wake of the G.I. Bill and the fast growth of the national prestige of higher education. This national prestige was, I conjecture, partly due to the prestige of the space race and the power of the atom bomb, and it extended into the humanities and arts partly due to the popularity of the idea of IQ and the emerging notion of "creativity". (I have a colleague here at UCR, Ann Goldberg, who is doing fascinating work on the history of the concept of creativity and its role in educational institutions.)

    Who was hired to teach all of these new undergraduates? It was of course, the pre-boom generation. A flood of pre-boom Assistant Professors hit the universities during this period. Doing their foundational early-career work in the 1960s-1970s, they set the agenda for the philosophy of the period. Then when the boomers got their PhDs and hit the job market in the 1980s, they discovered that pre-boomers were already astride the academy -- mid-career now, at the height of their influence, not yet ready to step aside for their younger generation. The job market was terrible, and those who made it into tenure-track positions found themselves in an academic world already dominated by Rawls, Lewis, Kripke, Fodor, etc., without a lot of new space at the top. My hypothesis is that this fact about academia in the 1980s and early 1990s means that the baby boomers grew philosophically in the shade of the pre-boom generation -- and not to the heights of prestige and influence that they would have grown to, had they not been so overshadowed in their early careers.

    With this hypothesis in mind....

    Most Cited Philosophers, Baby-Boom Generation

    The boomers (born 1946-1964) contribute two philosophers to the top ten: Nussbaum (#9) and Williamson (#10). Another five are among the top fifty: Fine, Sober, Kitcher, Hawthorne, Smith. (Hawthorne, born 1964, is right at the cutoff between Boom and Gen X, if I have his date right.) They are thus vastly underrepresented in the top 50 compared to the pre-boomers (7 vs 33). However, they are more proportionately represented in the list as a whole (113, compared to 129 for the pre-boomers).

    Could the boomers rise in relative prestige, so that if we did a similar analysis in ten or twenty years, we'd find them dominating the top 50 in the way the pre-boomers do now? I see three reasons not to think so.

    First, the boomers have already started declining in citation rate, comparing 2014 and 2019, with an average rank decline of 8 (ln = +.009). Mitigating this, however: if we look at the top 100, there's an overall average rank gain of 11 (ln = -.16) -- consistent with the winnowing hypothesis.

    Second, in other research, I've found that philosophers tend to reach peak influence around ages 55-70. Thus, boomers should be at their peak influence now and we shouldn't expect a lot more climbing overall.

    Third, as noted above, there is a strong recency bias in the Stanford Encyclopedia citations. This should tend to favor philosophers younger than the boomers, and increasingly so over time -- especially since philosophers on average tend to do their most influential work in their late 30s and 40s.

    Most Cited Philosophers, Generation X

    Gen Xers (born 1965-1980) are still too young to be very well represented among the top-cited philosophers in the Stanford Encyclopedia: Only 21 qualify for the list of 295, three in the top 100 (Chalmers, Schaffer, and Sider). In the past five years, the average rank gain in this group is 16 positions (ln = -.15), so, as one would expect, they are still on an upward trajectory. Also as one would expect, many of them are new to the list as of 2019 (11 of the 21), and so not included in these trajectory averages, though headed upward in another sense.

    It is, I think, too early to know if Generation X will ultimately prove also to have grown too much in the shade of the pre-boom generation. I sense that this might be so: Mainstream analytic philosophers still to a large extent live in a philosophical world whose agenda was set by Lewis, Kripke, Rawls, Williams, and Putnam.

    Side note on demographic diversity of most-cited Gen X philosophers: If my gender and race/ethnicity classifications are correct, then (perhaps surprisingly?) the most-cited Gen X philosophers are slightly farther from gender parity and racial diversity than the Boomers, with 3/21 women and no Latinx or non-White philosophers (compared to the Boomers' 17% women, 2% Latinx or non-White). However, since the numbers are small, this might be chance variation.

    Explanation of the Misalignment of Peak Citation Year and the Age of the Most-Cited Philosophers

    To cross my t's and dot my i's: Although the peak citation years are 2003-2007, the pre-boom generation is the most cited because, due to their demographic advantage in academia, they dominated philosophy from the 1970s at least into the 1990s (and maybe they still do, despite death and retirement), shading the boomers and maybe also the Gen-Xers. Although recent work is the most cited in the aggregate, the Winnowing process hasn't yet given us the distance required for consensus on the Greats, so those recent citations remain scattered among many authors.

    This post is already plenty long, so I won't bother crossing my x's and dotting my j's.