Wednesday, April 08, 2020

The Unreliability of Naive Introspection

Wesley Buckwalter has a new podcast, Journal Entries, in which philosophers spend 30-50 minutes walking listeners through the main ideas of one of their papers, sometimes adding subsequent reflections or thoughts about future research in the area.

Today's Journal Entry is my 2008 paper, "The Unreliability of Naive Introspection".

I make the case that Descartes had it backwards when he said that the outside world is known better and more directly than our experiences. We are often radically wrong about even basic features of our currently ongoing experience, even when we reflect attentively upon it with sincere effort in favorable conditions.


Does It Matter If the Passover Story Is Literally True?

[Originally published in the L.A. Times, April 9, 2017; revised version published as Chapter 25 of A Theory of Jerks and Other Philosophical Misadventures]

You probably already know the Passover story: how Moses asked Pharaoh to let his enslaved people leave Egypt, and how Moses’s god punished Pharaoh—killing the Egyptians’ firstborn sons while “passing over” the Jewish households. You might even know the new ancillary tale of the Passover orange. How much truth is there in these stories? At synagogues during Passover holiday, myth collides with fact, tradition with changing values. Negotiating this collision is the puzzle of modern religion.

Passover is a holiday of debate, reflection, and conversation. In 2016, as my family and I and the rest of the congregation waited for the Passover feast at our Reform Jewish temple, our rabbi prompted us: “Does it matter if the story of Passover isn’t literally true?”

Most people seemed to be shaking their heads. No, it doesn’t matter.

I was imagining the Egyptians’ sons. I am an outsider to the temple. My wife and teenage son are Jewish, but I am not. At the time, my nine-year-old daughter, adopted from China at age one, was describing herself as “half Jewish”.

I nodded my head. Yes, it does matter if the Passover story is literally true.

“Okay, Eric, why does it matter?” Rabbi Suzanne Singer handed me the microphone.

I hadn’t planned to speak. “It matters,” I said, “because if the story is literally true, then a god who works miracles really exists. It matters whether there is such a god or not. I don’t think I would like the ethics of that god, who kills innocent Egyptians. I’m glad there is no such god.

“It is odd,” I added, “that we have this holiday that celebrates the death of children, so contrary to our values now.”

The microphone went around, others in the temple responding to me. Values change, they said. Ancient war sadly but inevitably involved the death of children. We’re really celebrating the struggle for freedom for everyone....

Rabbi Singer asked if I had more to say in response. My son leaned toward me. “Dad, you don’t have anything more to say.” I took his cue and shut my mouth.

Then the Seder plates arrived with the oranges on them.

Seder plates have six labeled spots: two bitter herbs, charoset (a mix of fruit and nuts), parsley, a lamb bone, a boiled egg—each with symbolic value. There is no labeled spot for an orange.

The first time I saw an orange on a Seder plate, I was told this story about it: A woman was studying to be a rabbi. An orthodox rabbi told her that a woman belongs on the bimah (pulpit) like an orange belongs on the Seder plate. When she became a rabbi, she put an orange on the plate.

A wonderful story—a modern, liberal story. More comfortable than the original Passover story for a liberal Reform Judaism congregation like ours, proud of our woman rabbi. The orange is an act of defiance, a symbol of a new tradition that celebrates gender equality.

Does it matter if it’s true?

Here’s what actually happened. Dartmouth Jewish Studies Professor Susannah Heschel was speaking to a Jewish group at Oberlin College in Ohio. The students had written a story in which a girl asks a rabbi if there is room for lesbians in Judaism, and the rabbi rises in anger, shouting, “There’s as much room for a lesbian in Judaism as there is for a crust of bread on the Seder plate!” The next Passover, Heschel, inspired by the students but reluctant to put anything as unkosher as bread on the Seder plate, used a tangerine instead.

The orange, then, though still an act of defiance, is also already a compromise and modification. The shouting rabbi is not an actual person but an imagined, simplified foe.

It matters that it’s not true. From the story of the orange, we learn a central lesson of Reform Judaism: that myths are cultural inventions built to suit the values of their day, idealizations and simplifications, changing as our values change—but that only limited change is possible within a tradition-governed institution. An orange, but not a crust of bread.

In a way, my daughter and I are also oranges: a new type of presence in a Jewish congregation, without a marked place, welcomed this year, unsure we belong, at risk of rolling off.

In the car on the way home, my son scolded me: “How could you have said that, Dad? There are people in the congregation who take the Torah literally, very seriously! You should have seen how they were looking at you, with so much anger. If you’d said more, they would practically have been ready to lynch you.”

Due to the seating arrangement, I had been facing away from most of the congregation. I hadn’t seen those faces. Were they really so outraged? Was my son telling me the truth on the way home that night? Or was he creating a simplified myth of me?

In belonging to an old religion, we honor values that are no longer entirely our own. We celebrate events that no longer quite make sense. We can’t change the basic tale of Passover. But we can add liberal commentary to better recognize Egyptian suffering, and we can add a new celebration of equality.

Although the new tradition, the orange, is an unstable thing atop an older structure that resists change, we can work to ensure that it remains. It will remain only if we can speak its story compellingly enough to also give our new values the power of myth.

------------------------------------------------

Related:

Dreidel: A Seemingly Foolish Game That Contains the Moral World in Miniature (Salon.com, Dec. 22, 2019; also LA Times, Dec. 12, 2017 and Chapter 24 of A Theory of Jerks).

Birthday Cake and a Chapel (blog post April 21, 2018 and Chapter 36 of A Theory of Jerks).

[image source]

Friday, April 03, 2020

Wisdom and Chaos

I. The Puzzle: Why Aren't Academic Philosophers Wise?

Etymologically, philosophy is the study of wisdom. In the popular imagination, philosophers sit cross-legged, uttering cryptic profundities through long white beards. Real philosophy professors spend considerable time reading texts from the "wisdom traditions" and reflecting on ethics, the meaning of life, and the fundamental nature of reality. So you might think that the average philosopher would be at least a little bit wiser than the average non-philosopher.

Since the wisdom-o-meter is still in early development, we don't yet have solid scientific evidence on this question. But my impression is that academic philosophers in the United States, as a group, are no wiser than others of their social class -- no wiser on average than chemistry professors, high school teachers, lawyers, or city administrators.

In other words, we don't seem to profit much, in terms of personal wisdom, from our philosophical reading and extended reflection on big-picture questions. Why is this?

One easy answer that will suggest itself to many professional philosophers is this. Most of our reading and reflection doesn't concern the kinds of issues central to wisdom. A philosopher of language might spend much of their professional time reading about the reference of proper names and donkey anaphora. An ethicist might focus on textual puzzles in Kant interpretation. Wisdom might no more tend to follow from those activities than from grading high school history homework or studying sulfates.

However, I think that answer is at best partial. Although one's philosophical research might mostly concern donkey anaphora, most philosophers spend most of their professional time teaching. We teach classes like "introduction to philosophy" and "contemporary moral issues" and "meaning, truth, and value" -- and in prepping and teaching these classes, as well as sometimes apart from class, most of us do engage questions about the meaning of life and what matters most in the big picture. Substantially more than the average chemistry professor, we read and teach classic texts that ordinary people turn to as sources of wisdom. It seems that we ought to learn some wisdom from doing so. The fact that we don't, or don't seem to, thus remains a puzzle.

II. The Nature of Wisdom.

To address the puzzle, we need to think about wisdom and its sources. What is wisdom?

Here's my proposal: Wisdom is the disposition to make decent choices in a wide range of circumstances. If you tend to make poor choices, you're not wise. If you tend to make good choices but only in a narrow range of familiar circumstances and any perturbation would throw you into bewildered disarray, you're also not wise. Wisdom involves stability of good practical judgment even when circumstances turn strange.

A decent choice isn't necessarily the best choice. Only someone with inhumanly heroic insight could be disposed generally to make the best choices in a wide range of circumstances. Decency is more about avoiding blunders -- bad decisions due to panic, short-term thinking, seriously misweighing one's values, overlooking obvious considerations, or grossly misjudging people's character and intentions.

A wide range of circumstances needn't mean all circumstances. How wide a range and what belongs in the range remains an open parameter in this account. If you're a man in a culture where it's not unusual for men to be called to battle, then wisdom probably requires that you be disposed to make decent choices if called to battle. If you're not in such a culture, maybe how you'd react in battle doesn't matter so much. We can also define subclasses of wisdom by considering narrower ranges of circumstances or narrower classes of decisions: wise in matters of child-rearing or in choosing friends or in financial matters.

Being calm and giving good advice, classically associated with wisdom, aren't part of my definition, but they flow naturally from it. Panic leads to bad decisions, so if you're prone to panic, you're probably not wise. Good hypothetical thinking is crucial to good decision making, so the wise will tend to have good judgment about circumstances they aren't in; and since giving bad advice is itself a type of bad decision, it's a failure of wisdom not to know one's limits well enough to stay silent rather than misdirect others.

III. Chaos and Wisdom.

It's an unfortunate feature of the human condition that we rarely learn from other people's mistakes. We insist on making the mistakes ourselves. (This seems to be especially true of teenagers and nations.) So unless you've personally lived through a wide range of circumstances and made a wide range of corresponding mistakes, you're unlikely to have acquired the knowledge necessary to navigate a diversity of situations without blundering. Narrow, stable lives will thus tend to generate less wisdom than chaotic lives with radical changes of circumstance.

This explains your grandmother's wisdom -- grandma who grew up in Hungary, fled the Nazis, built a new life in an unfamiliar country speaking an unfamiliar language, raised five children each with their own chaos, lost her husband, almost died of cancer, knew poverty and comfort, security and uncertainty, love and betrayal.

What is pretty much the least chaotic path through our culture? The academic path. Do what your teachers tell you. Get good grades. Go to graduate school and do it some more. Get a job. Get tenure. It's extremely competitive, but the path is orderly and laid out clearly in advance. Each thing flows neatly into the next. (I set aside the increasingly common chaos of the academic job market.) The set of skills and the range of challenges don't change radically over the course of one's life, and there are normally few disruptive conflicts with authority. You wrote decent essays in high school. You wrote better essays in college, then in grad school, and then as a faculty member. Philosophy, literature, and math are perhaps especially narrow, even among academic disciplines. In the laboratory sciences, one at least needs also to learn to manage people and equipment. [ETA: See Marcus Arvan's interesting comment below about whether this description of the academic career path is dated.]

The solution to the puzzle, then, is this. Although one can of course learn some wisdom from reading great philosophy and thinking about profound questions, that's not a particularly good or efficient way to acquire wisdom, compared to actually living through ups and downs and weirdness and chaos. Academic philosophers with narrow, orderly life paths won't fully catch up with grandma, despite some possible modest benefits from thinking hard about Kant, Buddhism, and Montaigne.

You'll be unsurprised to learn that this post was inspired by noting my own and others' reactions to the chaos of the pandemic.

------------------------------------------------

PS: Maybe reading personal essays like Montaigne's or engaging historical or fictional narratives is more effective in simulating alternative experiences than reading abstract arguments? My student Chris McVey has been working on this issue. See some of his data here. Martha Nussbaum's Poetic Justice is also relevant.

------------------------------------------------

Related:

Cheeseburger ethics (Aeon Magazine, 2015).

The moral behavior of ethicists (in J. Sytsma and W. Buckwalter, eds., A Companion to Experimental Philosophy, 2016).

[image source]

------------------------------------------------

If you enjoy my blog, check out my recent book: A Theory of Jerks and Other Philosophical Misadventures.

Tuesday, March 31, 2020

Charity Argument Contest Update

In October, Fiery Cushman and I announced a contest: Participants were to write a philosophical argument that attempts to convince research participants to donate a surprise bonus to charity. The winner would receive $500, and we would donate an additional $500 to the winner's choice of charity.

We planned to run the experiment in early 2020 and announce the winner by today, March 31. For a variety of reasons, the experiment has been delayed, but the contest is still on and we will announce the winner as soon as we can.

In the meantime, I hope Splintered Mind readers and contest entrants are managing well through the chaos.

Tuesday, March 24, 2020

Snail Weather

Yesterday was a good day to be a snail. Spring rains are rousing them from hibernation, and on my morning stroll I saw hundreds cruising the vacant sidewalks of suburban Riverside.


The streets were so quiet, even this snail appeared to be in no particular danger:

I took dozens of photos in the rain, my usual fifty-five-minute walk stretching to an hour and a quarter. It was peaceful -- just me and the snails. Normally on a Monday morning, cars fill the streets, leaving for work and school. Yesterday I counted only fourteen cars the whole time I was out, less than one car every five minutes. Nor were there other pedestrians.

Alone in the rain with the snails, I thought back to a couple of days earlier, when my wife Pauline, my teenage daughter Kate, and I had been walking our dog in the sun. Pauline had wondered aloud if the world would be better without humans. Think of all the destruction and suffering we cause, she said.

But without humans, I replied, there'd be no science, no philosophy, no art, no heroes -- none of the distinctively human things that make Earth such an amazing planet! Isn't it better that the universe has planets like this, even with all the suffering we inflict on ourselves and other species, than it would be if every planet were just a paradise of cows?

But how much destruction and suffering is worth it, Pauline asked. What if we wiped out every species, including ourselves, and turned the planet into a sterile rock forever? Would our great accomplishments have been worth it?

Probably not, I conceded. But if we wiped out 90% of species and then the world recovered, with new wonderful species emerging later -- then we were worth it.

Kate had been listening in. I asked her opinion.

"The world would be better without humans," Kate said. She loves animals. She was thinking, I'm sure, of all the wild animals that would flourish better without us.

Two antinatalists in my own family!

I'll give Pauline and Kate this much: It's not a bad thing to let the snails enjoy a day without us once in a while. I've noticed more birds and squirrels recently too. We humans can tuck in for a nap and let some other species cruise around for a while, in our suddenly quieter spaces.



The big, beautiful ones are common garden snails, Cornu aspersum. The low, sleek ones are decollates, Rumina decollata, predatory snail-eating snails, sneaking up on the garden snails in their slow-paced hunt.



------------------------------------------

If you enjoy my blog, check out my recent book: A Theory of Jerks and Other Philosophical Misadventures.

Friday, March 20, 2020

On Sharing Umbrellas

Sometimes I love a cloudburst. You're walking downtown. Suddenly the rain starts and you're under some random awning, shoulder to shoulder with strangers, sharing complaints and guesses about the weather. The rain eases a little, and the most hurried or the least concerned dash away, accepting wet faces and shoulders, while the more relaxed wait it out. I'm reminded of G.K. Chesterton's essay "On Running After One's Hat", about the joy of chasing your hat when it's snatched by the wind -- but a cloudburst is a friendlier event, where people are thrown together instead of managing alone.

Last November, after a week of warm sunny weather here in southern California, a surprisingly rainy afternoon jumped on us. The first rain of the season is always fun, and I noticed students sharing umbrellas. Every umbrella, it seemed, had two or three students under it, some in coats, some in shorts -- sandals, boots, skirts, sweatshirts, flannel, summer dresses, all jumbled together, smiling and giggling.

Students and staff members sharing umbrellas seemed much likelier to be smiling than those walking solo. I put on my hat (no umbrella) and took a long stroll around campus in the rain, starting a count: student/non-student, umbrella/no umbrella, sharing/not sharing, smiling/not smiling, with a friend/alone. I developed a method and coding scheme on the fly. I chose observational subjects when I was behind them, not looking at their faces, to minimize experimenter bias; then, somehow without seeming too weird or conspicuous, I had to position myself to register, at a predetermined time, smile vs. no smile. I did a lot of speedwalking, corner cutting, and sprinting through the rain. The fairest comparison, I soon realized, would be groups of friends who all had umbrellas vs. groups of friends sharing umbrellas. After about forty minutes, I was thoroughly soaked (but having a great time), and the rain let up.
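If I ever do assemble a full data set, the comparison itself would be simple enough. Here's a purely illustrative Python sketch of the tally, with invented counts standing in for real observations:

```python
# A playful sketch of comparing the umbrella tallies. Each observation
# codes one group as (sharing_an_umbrella, smiling). The counts below
# are invented for illustration -- not actual field data.
observations = [
    (True, True), (True, True), (True, True), (True, False),
    (False, True), (False, False), (False, False), (False, False),
]

def smiling_rate(obs, sharing):
    """Proportion of smiling observations within one sharing condition."""
    group = [smiling for shared, smiling in obs if shared == sharing]
    return sum(group) / len(group) if group else 0.0

print(f"sharers smiling: {smiling_rate(observations, True):.2f}")
print(f"solo umbrellas smiling: {smiling_rate(observations, False):.2f}")
```

The real comparison, as noted above, would restrict both conditions to groups of friends, so that sociability itself doesn't do all the work.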

I didn't yet have many good data points, since it could take sixty seconds to choose a group and position myself for an observation. Preliminary evidence suggested that my hypothesis would play out. I could see it in their faces. Being thrown together with a friend under an umbrella is one of the lovely little pleasures of life. There's the shoulder-bumping intimacy. How often are we so physically close with our friends? There's the novelty of the change of weather in dry California, which you can now jokingly grouse about together. There's a special pleasure, maybe too, in having something to offer a friend -- room under your umbrella -- which you can share without cost. It's a toy emergency: no real risk of harm, nothing serious at stake, but some of the same cooperative bonding as in real emergencies, some of the same intimacy, uncertainty, newness, lowered barriers.

After the rain stopped, I had only the beginning of a data set. No worries, I thought. I'll collect more data later, during the next unexpected rain. It didn't happen, though, in December, January, or February. It had been a dry winter, and during the few rains, I didn't manage to find the time.

It's rainy again this week, after a warm February and early March. A wee bit of winter (SoCal style) is back. It would be a perfect time to don my hat and gather more data.

But of course, with the epidemic, no one was on campus this week. Wednesday was my last day to retrieve belongings from my office before complete lockdown through April. Campus was already looking a little dilapidated -- the paper and cardboard signs and flyers from a few weeks ago bent and weathered, abandoned, unreplaced. I saw only one person during my visit, someone in a winter jacket, turned away from me, head down, walking swiftly the other direction.

With the California governor's shelter-in-place order last night, sharing umbrellas with acquaintances, anywhere, is on pause -- one more small casualty of the pandemic.

So I'm sitting at home, staring through my window at the overcast sky while my wife and daughter sleep late. My son is self-quarantined on the other side of town after possible exposure during his truncated study abroad.

My umbrella research will wait till next year, I suppose, while I huddle with family, not knowing what kind of protection we might need, waiting out a different kind of storm.

[photo from Alamy, used with license]

Wednesday, March 18, 2020

Feeling Bored? Isolated? Insufficiently Supplied with Philosophical Weirdness? Watch or Listen to a Wide-Ranging Two-Hour Interview about My Work

Released Monday at Eclectic Spacewalk

Topics include:

* juvenile delinquency
* group consciousness and the "Chinese room"
* a cyberpunk spin on Kant's transcendental idealism
* ancient Chinese philosophers Mengzi and Xunzi on whether human nature is good
* science fiction as a form of philosophy
* garden snail sex
* approaching academic life with a childlike sense of fun
* and more!

Eclectic Spacewalk is just getting started, so if you like the interview, please subscribe and support them!

Bonus feature if you watch the YouTube video: You can play a new COVID-19-themed game. Every time you see me touch my face, squirt a bit of sanitizer on your hands!

-------------------------------------------

Timestamps:

  • Eric’s dad was a grad student in the famous Harvard (Timothy Leary & Ram Dass) LSD studies, and invented the ankle monitoring system for arrestees (00:04:28)
  • Eric did his postgraduate work at UC Berkeley under John Searle, of “The Chinese Room” thought experiment fame - a critique of “The Turing Test” (00:11:14)
  • What exactly is consciousness? (00:17:35)
  • Can collectives, societies, companies, ideas, or countries like the United States be conscious? (00:21:00)
  • Eric’s thoughts on Object Oriented Ontology and speculative realism (00:25:52)
  • Kant meets cyberpunk (00:29:38)
  • Unknown unknowns, the quest for consilience, and the Fermi paradox (00:34:31)
  • Part Two:

  • Philosophical outlook on altered states of consciousness (00:43:17)
  • The great debate between Mengzi & Xunzi about whether human nature is good or evil. (00:47:21)
  • Moral psychology, business ethics, and how much can someone gain from thinking philosophically? (00:53:08)
  • Making experiments to test philosophical and moral inquiries (00:58:17)
  • Science fiction as a philosophy & ethics of technology (01:01:37)
  • Upcoming anthology: “Philosophy through science fiction stories” (01:05:44)
  • Discussing films Ex Machina & Arrival (01:10:11)
  • The bizarre, weird, and complex lives of garden snails (01:15:24)
  • The love of writing, running a blog called “The Splintered Mind,” and everyone is really a philosopher and interested in the deepest mysteries of existence (01:22:55)
  • Eric’s new book: “A Theory of Jerks and other Philosophical Misadventures" (01:29:36)
  • The re-connection of psychology and philosophy (01:36:53)
  • Recommending Zhuangzi (Butterfly Dream) and John Stuart Mill (On Liberty) and Montaigne (Personal essays like On Solitude) (01:39:05)
  • How has teaching philosophy changed you? Different teaching methods starting with moral questions first. (01:42:38)
  • How have your influences changed over time? (01:49:01)
  • What can we gain philosophically from the idea of the “The Overview Effect?” (01:54:49)

Friday, March 13, 2020

The Academic Jerk: A Wildlife Guide

[This post originally appeared as "The Jerks of Academe" in the Chronicle of Higher Education, Jan 31, 2020. The awesome art was created by Lars Leetaru for the Chronicle and is used by permission.]

----------------------------------------------------------

This morning you probably didn’t look in the mirror and ask, “Am I a jerk?” And if you did, I wouldn’t believe your answer. Jerks usually don’t know that they are jerks.

Jerks mostly travel in disguise, even from themselves. But the rising tide (or is it just the increasing visibility?) of scandal, grisly politics, bureaucratic obstructionism, and toxic advising in academia reveals the urgent need of a good wildlife guide by which to identify the varieties of academic jerk.

So consider what follows a public service of sorts. I offer it in sad remembrance of the countless careers maimed or slain by the beasts profiled below. I hope you will forgive me if on this occasion I use “he” as a gender-neutral pronoun.

The Big Shot

The Big Shot is the most easily identified of all academic jerks. You can spot him a mile away. His plumage is so grand! (Or so he thinks.) His publications so widely cited! (At least by the right people.) His editorial board memberships so dignified! (Not that anyone else noticed.) You will never fully appreciate the Big Shot’s genius, but if you cite him copiously and always defer to his judgment, he’ll think you have above-average intelligence.

The Creepy Hugger

To those unfamiliar with his ways, the Creepy Hugger appears the opposite of the Big Shot. He will seem kind, modest, and charming, despite his impressive accomplishments. This is his alluring disguise. You will flee to him for comfort and protection after abuse by the other types of academic jerk. The Creepy Hugger with lecherous motivations is one variety, but not the only one, nor the most common. More frequently you’ll encounter the type who takes advantage of his power to extract favors and “friendship” that you would not otherwise give. His arm is around your shoulder while he complains about his colleagues. He invites you for beers that you feel obliged to consume in feigned bonhomie. You meet his family and are expected to be sweet and sociable. Because you are so nice, and because he seems so enamored of you, you proofread his drafts and help organize his office. Soon, he will be distracted by someone better and forget you exist – unless he can gain advantage by presenting you as his protégé.

The Sadistic Bureaucrat

You will recognize the Sadistic Bureaucrat by the little smile he can’t quite suppress as he informs you that your reimbursement application was not completed correctly. Your visa approval process is delayed. The only available time slot for your class is seven in the morning, and your sabbatical request is denied. He is really so sorry. But, he reminds you, the policies are clearly listed in the faculty manual. It would be unfair, don’t you see, to make an exception. Somehow, his friends don’t seem to suffer under the policies in quite the same way. The Sadistic Bureaucrat washes away his moral qualms about granting exceptions to others by relishing his great fairness and rigorous principle when applying the rules to you.

The Embittered Downdragger

You and the Embittered Downdragger agree that the Big Shot is not nearly as brilliant as he imagines – neither, the Downdragger adds, is that other scholar, whose work you rather admire. The Embittered Downdragger is distinctly unimpressed that you finally managed to publish in a so-called “elite” venue. And your great teaching evaluations? They prove only that you cater to student demand for easy A’s. The Embittered Downdragger has only published a few articles. His students complain about him. He serves no important administrative role. This is because he knows that the system is corrupt. He rolls his eyes at the award you just won and the invitation you just received, of which you had, until then, felt rather proud. His “no” vote can be relied on for every policy change, every new initiative, and every tenure case.

This list is neither exhaustive nor exclusive. Jerkitude manifests in wondrous variety and not all the species have yet been cataloged. Hybrids abound – for example, the past-his-prime Big Shot who is becoming an Embittered Downdragger.

If you spot one of these jerks in the wild – at a conference hotel, on the other side of the seminar table, at a campuswide committee meeting – react as if you had spotted a bear. They are dangerous, unpredictable creatures, best avoided if possible. Do not try to cuddle up close, thinking you can befriend them without getting hurt. Do not try to seduce them with treats. Walk as far away as possible. Jerks are best viewed from a distance, with a telescopic lens.

If surprised up close by an angry jerk, stand tall, if you can raise yourself to intimidating height. If it’s a grizzly, though, play dead.

----------------------------------------------------------

But what if you are the jerk?

It’s generally difficult to recognize and acknowledge one’s vices. No one wants to see themselves as flaky or vain. We try to ignore evidence of such character deficiencies in ourselves, and we find rationalizing excuses. But if we look close enough and long enough we can wincingly recognize such shortcomings.

Self-knowledge of jerkitude is more recalcitrant. Big Shots will not see themselves as Big Shots – at least not that kind. Sadistic Bureaucrats and Embittered Downdraggers will rarely recognize the true shape and extent of their awfulness. We can admit, when pressed, that we are flaky and vain, but we can’t admit, not deep down, that we are the Creepy Huggers students whisper about in the halls.

Jerkitude, though it comes in many varieties, has a central defining feature: culpably failing to appreciate the perspectives of the people around you. The Big Shot fails to appreciate the intellectual merits of his colleagues. The Creepy Hugger fails to appreciate how the power dynamics of “friendliness” are experienced by those he wraps his arms around. The Sadistic Bureaucrat fails to appreciate the merit of most other people’s excuses and the difficulty of negotiating complex, unfamiliar rules. The Embittered Downdragger fails to appreciate the value of accomplishments beyond his own.

Illegitimately devaluing others’ goals and ignoring their opinions – this is the essence of being a jerk. It’s a peculiarly epistemic vice, one that works to prevent its own detection by painting the world in seemingly objective self-flattering colors and by thwarting the jerk’s ability to respectfully hear others’ critical feedback. Jerkitude flourishes in ignorance of itself.

But all hope is not lost. Though I doubt that the most horrible jerks among us will ever change their ways, the best chance to attain a glimmer of self-knowledge is to think phenomenologically – that is, to think about how the world in general looks through your eyes, and then to compare that vision with the world as seen by the jerk. Do you see the world through jerk goggles?

You’re important, and you’re surrounded by idiots! You can’t believe they gave that award to that absolute dolt. Her work isn’t nearly as good as yours. And why are you wasting time with this student? Can’t he see you have a ton of important things you need to get done? That new article should have cited your work here and here and here. Is the author ignorant? Is she intentionally downplaying how much she’s borrowing from you? Ugh, your colleague is making a case for Distinguished Professor, but you’re clearly more deserving. No need to read work by scholars you haven’t heard of. It can’t be good if they aren’t well known…. You’re thinking like a Big Shot.

You’re not like those other professors, formal and standoffish and so full of themselves. You’re an egalitarian. Your students are peers, and, well, you guess you’re kind of cool. It’s kind of big of you to step down the social hierarchy like this, relating so well with your inferiors – whoops, you didn’t mean “inferiors”! It’s fun that you can tease her, call her an “asshole” in a joking way, say her thesis work is totally stupid. She knows you’re just razzing her. It sure is nice of her to help you organize your office. You guess you do kind of deserve that, because – whoops! You mean of course you would do the same for her…. You’re thinking like a Creepy Hugger.

Box A correct. Box B correct. Box C, oh, tsk-tsk … no, no, no. This will need to be redone. You can’t approve it this way. They did it wrong, and the policies aren’t really under your control. Option A: If excuse is from a friend. Ah, you see the problem! Of course, we can get this fixed. The rules serve us, not us the rules. Mistakes happen – we’re human, after all. Option B: If excuse is not from a friend. The rules need to be applied consistently. It’s only fair to the others. Clear rules are what make the institutions work, and it’s important to be even-handed and careful. You’re sorry about all the trouble this is causing – though maybe in your secret heart not so sorry. Did you just now feel a little rush of pleasure at the power you exerted over them? No, of course not! Really it’s too bad they’ll have to return to their home country / not get sabbatical / lose the grant money…. You’re thinking like a Sadistic Bureaucrat.

    Box A correct. Box B correct. Box C, oh, tsk-tsk … no, no, no. This will need to be redone. You can’t approve it this way. They did it wrong, and the policies aren’t really under your control. Option A: If excuse is from a friend. Ah, you see the problem! Of course, we can get this fixed. The rules serve us, not us the rules. Mistakes happen – we’re human, after all. Option B: If excuse is not from a friend. The rules need to be applied consistently. It’s only fair to the others. Clear rules are what make the institutions work, and it’s important to be even-handed and careful. You’re sorry about all the trouble this is causing – though maybe in your secret heart not so sorry. Did you just now feel a little rush of pleasure at the power you exerted over them? No, of course not! Really it’s too bad they’ll have to return to their home country / not get sabbatical / lose the grant money…. You’re thinking like a Sadistic Bureaucrat.

    Wow, you find this description of jerks to be so on target! You’re not like any of them! The whole system is rotten. Peer review is basically a scam. And the students – lazy complainers! None of them really deserve As, but with all the grade inflation you’ll have to give out a few good marks. You give sarcastic congratulations to your friends on their great success in the Game!... You’re thinking like an Embittered Downdragger.

    I have drawn these four types as caricatures. We – you and I – know we’re not that awful … right?

    But there’s a reason I find it so easy to imagine the inner life of these jerks. It’s my own inner life, sometimes. I catch myself thinking in these ways, and I worry. That sting of worry is the moral self-knowledge I treasure – the seeing that it is so, which makes it less so.

    ----------------------------------------------------------

    For more:

    A Theory of Jerks (Aeon Magazine, Jun 4, 2014)

    A Theory of Jerks and Other Philosophical Misadventures (MIT Press, 2019).

    How to Get a Big Head in Academia (blog post, Sep 20, 2010)

    Cheeseburger Ethics (Aeon Magazine, Jul 14, 2015)

    Wednesday, March 11, 2020

    Snail and Slug Consciousness and Semi-Unlimited (?) Associative Learning

    I've just finished reading Simona Ginsburg's and Eva Jablonka's tome on consciousness in non-human animals, The Evolution of the Sensitive Soul. It is an impressively wide-ranging work, covering huge swaths of philosophy, biology, and psychology for many different species. (For an article-length version of their view, see here.)

    Ginsburg's and Jablonka's central idea is that consciousness (i.e., phenomenal consciousness, subjective experience, being an entity that there's "something it's like" to be) requires something they call Unlimited Associative Learning. They argue that we see consciousness and Unlimited Associative Learning in vertebrates, in at least some arthropods (especially insects), and in some mollusks (especially cephalopods) but not other mollusks (e.g., sea hares), and not in most other animal phyla (e.g., annelids such as earthworms or cnidarians such as jellyfish). If you wonder -- as I do -- where we should draw the line between animal species with consciousness and those without consciousness, theirs is one of the most interesting and well-defended proposals.

    I'm not convinced for two broad reasons I discuss here and here. I think all general theories of consciousness suffer from at least the following two epistemic shortcomings. First, all such theories beg the question, right from the start, against plausible views endorsed by leading researchers who see consciousness as either much more abundant in the universe or much less abundant in the universe (e.g., panpsychism and Integrated Information Theory on the abundant side, theories that require sophisticated self-representation on the other side). Second, all such theories are ineliminably grounded in human introspection and verbal report, creating too narrow an evidence base for confident extrapolation to very different species.

    But today I don't want to focus on those broad reasons. As regular readers of this blog know, I love snails. So I was interested to note that Ginsburg and Jablonka specifically highlight two genera of terrestrial gastropod (the Limax slug and the Helix snail) as potentially in the "gray area" between the conscious and nonconscious species (p. 395). And I think if you pull a bit on the thread they leave open here, it exposes some troubles that are specific to their theory.

    Ginsburg's and Jablonka's view depends essentially on a distinction between Limited Associative Learning and Unlimited Associative Learning. Associative learning, as you might remember from psychology class, is the usual sort of classical and operant conditioning we see when a dog learns to salivate upon hearing a bell associated with receiving food or when a rat learns to press on a lever for a reward. Unlimited Associative Learning, as Ginsburg and Jablonka define it, "refers to an animal's ability to ascribe motivational value to a compound stimulus or action pattern and to use it as the basis for future learning" (p. 35, italics added). Unlimited Associative Learning allows "open-ended behavioral adjustments" (p. 225) and "has, by definition, enormous generativity. The number of associations among stimuli and the number of possible reinforced actions that can be generated are practically limitless" (p. 347). An animal with Limited Associative Learning, in contrast, can only associate "simple ('elemental') stimuli and stereotypical actions" (p. 225).

    Immediately, one might notice the huge gap between Limited Associative Learning (no learning of compound stimuli, no stringing together of compound actions) and truly open-ended, truly "unlimited" Unlimited Associative Learning with full generativity and "practically limitless" possibilities for learning. Mightn't there be some species in the middle, with some ability to learn compound stimuli, and some ability to string together compound actions, but only a very limited ability to do so, far, far short of full combinatorial generativity? For example... the garden snail?

    Terrestrial snails and slugs are not the geniuses of the animal world. With only about 60,000 neurons in their central nervous system, you wouldn't expect them to be. They don't have the amazing behavioral flexibility and complex learning abilities of monkeys or pigeons. There's not a whole lot they can do. I'd be very surprised, for example, if you could train them to always choose a stimulus of intermediate size between two other stimuli, or if you could train them to engage in long strings of novel behavior. (Certainly, I have heard no reports of this.) But it does seem like they can be trained with some compound stimuli -- not simply "elemental" stimuli. For example, Limax slugs can apparently be trained to avoid the combined scent of A and B, while they remain attracted to A and B separately (Hopfield and Gelperin 1989) -- compound stimulus learning. Terrestrial gastropods also tend to have preferred home locations and home ranges, rather than always moving toward attractive stimuli and away from unattractive stimuli in an unstructured way, and it is likely (but not yet proven) that their homing behavior requires some memory of temporally or spatially compound olfactory and possibly other stimuli (Tomiyama 1992; Stringer et al. 2018).

    Nor is it clear that even rat learning is fully generative and compoundable. As Ginsburg and Jablonka acknowledge (p. 303), in the 1960s John Garcia and Robert A. Koelling famously found that although rats could readily be trained to associate audiovisual stimuli with electric shock and gustatory stimuli with nausea, the reverse associations (audiovisual with nausea and gustatory with shock) are much more difficult to establish.

    Between, on the one hand, "Limited Associative Learning", which is noncompound and stereotyped, and, on the other hand, fully compoundable, fully generative "Unlimited Associative Learning" stands a huge range of potential associative abilities, which, with intentional oxymoron, we might call Semi-Unlimited Associative Learning. Ginsburg's and Jablonka's system leaves no theoretical space for this possibility. Terrestrial gastropods might well fall smack into the middle of this space, thus suggesting (once again!) that they are the coolest of animals if you are interested in messing up philosophers' and psychologists' neat theories of consciousness.

    Go, Slugs!

    [image source Platymma tweediei]

    Friday, March 06, 2020

    Applying to PhD Programs in Philosophy, Part VII: After You Hear Back

    Part I: Should You Apply, and Where?

    Part II: Grades, Classes, and Institution of Origin

    Part III: Letters of Recommendation

    Part IV: Writing Sample

    Part V: Statement of Purpose

    Part VI: GRE Scores and Other Things

    Old Series from 2007

    --------------------------------------------------------

    Applying to PhD Programs in Philosophy
    Part VII: After You Hear Back

    When You'll Hear and When You'll Have to Decide

    There's a general agreement among philosophy PhD programs that applicants have until April 15 to decide whether to accept an offer of admission. This deadline drives the process.

    Some schools have a hard cap on admissions offers: the administration might permit them to admit only eight students, for example, or to offer funding (in the form of TA-ships and fellowships) to only eight. These schools will try to admit those eight students quickly (in February, maybe) and will often pressure them to decide quickly so that, if a student declines, another student further down the list can be admitted or offered funding.

    Other departments will target a certain entering class size and admit approximately twice that many students (more or less, depending on "yield" rates in recent years) with the expectation that about half will decline. In principle, these departments could admit all of those students early in the process, but in fact things often fall behind. Departments might sometimes be conservative in their early admissions to avoid the risk of being committed to too large an entering class. Later, if the number of students accepting offers is falling short of expectations, a few may be admitted late in the process.

    If you're at the top of the department's list, expect (typically, depending on the department's speed) to hear around mid-February to mid-March. Applicants lower on the list might not hear until April -- even April 15 itself! You might not hear good news about funding, in particular, until very near the April 15 deadline, if the department has a hard cap on funding. Be ready on April 15 to make an immediate decision about an offer should one come -- and don't be too far from the phone! It's not unreasonable to ask for an additional day or two to decide, should you hear on April 15th, but the department might or might not comply with such a request.

    It's generally in the interest of the applicants, then, to wait on their decisions until April 15. However, it is in the interest of departments to extract decisions from applicants as early as possible. Unfortunate!

    Occasionally, if an entering class is looking smaller than expected, a department may admit someone after April 15th -- someone who may already have committed to another school. Reneging on the first school needs to be handled delicately, since that school is counting on the student to attend and might have turned away another applicant as a result. My own view is that the interests of the student generally ought to outweigh the interests of the program in such cases. If the new program is much more appealing than the old, I'd recommend reneging with a heartfelt apology!

    Funding Offers

    Most top-50 ranked PhD programs do not expect students to pay their way through graduate school. They'll offer funding (at poverty levels) in the form of TAships and fellowships. When comparing funding offers between schools, don't just look at the raw dollar amounts. Some schools inflate their dollar amounts by adding the cost of tuition to their stated funding totals -- money which of course comes right back to them. Make sure, also, that your funding offer includes student medical insurance.
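    To make the tuition-inflation point concrete, here is a toy comparison with invented numbers (the function name and all dollar amounts are mine, not any real school's figures):

    ```python
    # Compare what actually reaches you: the headline total minus any
    # tuition the school folds into that total and pays back to itself.

    def net_annual_support(headline_total, tuition_included_in_total):
        """Stipend you actually live on, after backing out tuition
        that was counted in the advertised total."""
        return headline_total - tuition_included_in_total

    # Hypothetical School A advertises $45,000/year, but $18,000 of that
    # is tuition remission. Hypothetical School B advertises $32,000/year
    # with tuition covered separately.
    offer_a = net_annual_support(45_000, 18_000)  # 27000
    offer_b = net_annual_support(32_000, 0)       # 32000

    # The smaller headline number turns out to be the better offer.
    assert offer_b > offer_a
    ```

    The same back-of-the-envelope check applies to medical insurance: if one offer includes it and another doesn't, subtract the premium you'd pay out of pocket before comparing.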

    Most departments will guarantee students five years of support in some combination of fellowship and TAship. If you're on fellowship you're paid just for being a student! (Sweet!) A typical offer at a typical department will be for one year of fellowship (your first year, when, in the eyes of many departments, you aren't yet advanced enough to be a T.A. anyway) and four years of TAship. Students especially targeted by the department may receive additional fellowship years. (Outstanding GPA and GRE scores help a lot here, since the high-level administrators who often give out those fellowship packages can evaluate those numbers better than they can evaluate writing samples and letters of recommendation.) Although most PhD programs expect most of their students to pay their way through most of their years by TAing, a few schools -- especially the smaller private schools -- don't expect much TAing from their students and offer comparatively more fellowship support.

    You might also consider how much is expected of a T.A.: Teaching one section of 25 students is much easier than teaching three sections of 25, which in turn is easier (usually) than teaching an entire course on your own. Also consider what happens when your guaranteed years of funding run out, since most students at most schools run out of guaranteed funding before they complete their degrees.

    Don't expect too much wiggle room in negotiations about funding. But if a comparable department is offering you a better package than the school that would otherwise be your first choice, it can't hurt to politely mention that fact to the chair of the admissions committee.

    Financial offers generally don't include summer funding, though often students can apply for a limited number of summer-school teaching positions.

    Letting People Know Where You've Been Admitted

    Let your letter writers know where you've been admitted -- or even if you haven't been admitted anywhere -- and ultimately where you decide to go. It's only polite, since they put in work on your behalf. It helps them have a better sense, too, of what to expect for future students. And besides, they might have some helpful advice.

    Admissions committee chairs also like to know where you've been admitted and where you decide to go (if not to their school) and why. You needn't share this information if you don't want to, but it helps them in thinking about future admissions. For example, if lots of admittees are going to comparably ranked schools because those schools have better funding offers, admissions committees can make a case for more funding to the college administrators. If admittees are declining mostly for much better-ranked schools, then committees know that their low yield rates are due to having a strong batch of applicants. Etc.

    Visiting Departments

    I highly recommend visiting the departments to which you've been admitted -- but only after you've been admitted. Admitted students, whom departments now want and are competing to attract, are treated much differently than students who have merely applied or who are on the "waiting list" (if there is one), who will be seen as petitioners. Unfortunately, then, it won't be possible to properly visit departments that admit you at the last minute.

    Some departments have money to help students fly out to visit, others don't. It doesn't hurt to ask politely. In any case, let the admissions committee chair know you intend to visit. Even if funding isn't available, she can help arrange your stay -- for example by mentioning what times would be good or bad and maybe finding a graduate student willing to let you crash on their couch for a night or two.

    There are two main reasons to visit departments: First and obviously, it can help you decide where to go. But second, and less obviously, it is a valuable educational experience in its own right.

    The second point first: As I mentioned in Part I, students who spend their whole time in one department often have a provincial view of philosophy. Even visiting another department for a few days can crack that provincialism and give an invigorating and liberating, broader perspective on the field. Also, you will never again be treated as well by eminent professors as you will when you are a prospective (admitted!) graduate student. The country's best-known philosophers will take you out to lunch or coffee for an hour and genuinely listen to your views on philosophical topics. They'll be solicitous of you. They'll value your opinion. Graduate students -- who at top schools sometimes soon become influential professors themselves -- will engage you in long discussions about the state of philosophy, and you'll (sometimes) feel a real camaraderie. My own graduate school tour, for which I set aside three full weeks (for six campuses), was one of the highlights of my philosophical education.

    To maximize all this, try to stay at each campus for a few weekdays. Weekends don't really count. If you have to cut classes, cut classes. This is much more important than whether you get an A or a B in Phil 176. Also, I'd recommend emailing in advance the professors you'd like to meet and asking them if they're willing to go out for coffee with you.

    When you visit a school, the department will generally set you up with first- and second-year students to meet. No harm in that, but bear in mind that first- and second-year students are often still in the glow of having been admitted and they haven't yet started the most difficult part of their education, their dissertation. Insist on meeting students in their 5th year and beyond, especially students working with advisors you imagine you might be working with. In my experience, such students will generally be brutally honest. Unlike new graduate students and unlike professors they don't really care whether you come to their school or not, so they have little motive to draw a rosy picture. And often they're just itching to have someone to grouse to.

    Not everyone who read the 2007 version of this post took my advice about talking with advanced graduate students, so let me emphasize it just a bit more. I think this is the single most important thing you can do. I don't have statistics on this, but my impression is that only about half of students finish their PhDs in philosophy, and among those who don't finish the majority peter out during their dissertation phase, after already having given four, five, six, seven, eight years of their life to the program. The reasons for fade-out are complex: lack of funding, perfectionism, procrastination, loss of inspiration, confusion about what to do -- almost never, I think, lack of ability -- and also bad advising or at least lack of encouragement, support, and timely feedback from one's dissertation chair. It is very important to have a realistic sense of this before you enroll in a PhD program (it's bad almost everywhere, but not equally bad), especially if the students of one of your prospective advisors are among those who tend to struggle or fade out.

    Relatedly, don't expect professors' solicitous treatment necessarily to continue after you've enrolled. The advanced students' opinions about the professors are probably a better gauge of how you'll actually be treated. Nonetheless, if you talk substance with professors on philosophical topics you care about, you can get a sense of whether you're likely to see eye-to-eye philosophically.

    Gosh, with this new emphasis, this section is sounding a bit like a downer. I don't really mean it that way. Take a look again at the "Yippee!" button. Yippee! In many respects, graduate school is terrific, and writing a dissertation is an amazing experience unlike anything else in your life in terms of the depth of study and scholarly satisfaction you can experience. But... eyes open about the challenges.

    The Summer Before

    Students often seem to be shy about showing their faces around the department to which they've been admitted until either classes start or there's some formal introductory event. No need for this. Move in early. Meet some professors and ask them for some reading suggestions pertinent to your shared interests or classes you'll be taking with them in the fall. Get a running start. Professors are often quite interested in meeting the new students -- until the inevitable disappointment of discovering that on average they're only average! But if you get a running start, maybe that's a sign that you'll be an unusually good student...?

    ETA (March 8): Wait Lists

    If you've been told you've been "waitlisted", you should probably interpret this as "unlikely to be admitted" unless you have specifically been told that you are high on the wait list and have a decent chance of admission. (On the other hand, if you have been told the latter, believe it.) Normally there isn't a formally ranked wait list, just a sense of who stood out among the dozens of applicants who were considered seriously but not offered admission. If yield is low, some of these students' applications will be revisited, prioritized partly on grounds of balance (e.g., if acceptances are coming from students in Area A but not Area B, Area B students are more likely to be reconsidered).

    ETA (March 10): If You Can't Visit

    I have no especially creative ideas here, but the question is particularly pertinent this year due to the epidemic. I recommend video or phone conversations with prospective advisors and with advanced graduate students.

    If it's possible to get a list of contact information for all graduate students, along with their year and areas of interest, that might be especially helpful, so you can choose students whose experiences might be representative of your own rather than being funneled to a few of the most enthusiastic students whose areas of interest and faculty advisors might be very different from what you expect yours to be.

    [image source]

    Wednesday, February 26, 2020

    Applying to PhD Programs in Philosophy, Part VI: GRE Scores and Other Things

    Part I: Should You Apply, and Where?

    Part II: Grades, Classes, and Institution of Origin

    Part III: Letters of Recommendation

    Part IV: Writing Sample

    Part V: Statement of Purpose

    Old Series from 2007

    --------------------------------------------------------

    Applying to PhD Programs in Philosophy
    Part VI: GRE Scores and Other Things

    GRE Scores

    GRE scores tend to be required for U.S. programs, but they are less important than grades, letters, writing sample, and statement of purpose. Some schools don't even require them. According to this site, that currently includes Cornell, Emory, Illinois-Chicago, Michigan, Penn, and Wisconsin-Madison (among US PhD programs). [ETA: According to the comments below, Boston U. and Northwestern now also don't require the GRE.]

    In my experience, some members of admissions committees take GRE scores seriously, others ignore them entirely, and still others employ minimum scores as a preliminary screen and otherwise disregard them. This will almost certainly vary by committee member, institution, year, and applicant details. (Foreign applicants, for example, might not be expected to have taken or done well on the GRE.)

    Higher-level administrators play a role here: They oversee the admissions process, and in many schools they make the final decisions about fellowship funding. Since these administrators can really only evaluate your GPA, institution of origin, and GRE scores, students who do well on the GRE are more likely to get good funding offers -- for example, more years of fellowship without teaching (being paid simply to be a student!). Also, it looks good for the department if the students they admit have better average grades and GRE scores than the students in psychology, economics, etc. Since philosophy students on average do amazingly well on the GRE, even philosophers who don't think of the GRE as diagnostic can find themselves citing students' GRE scores to make the case for financial support and for the superiority of philosophy over all other disciplines!

    Therefore, I recommend that you practice for the GRE and retake it if your performance is disappointing. However, I don't recommend intensive training for the GRE. Devote that time, instead, to revising your writing sample and doing as well as possible in your classes and/or independent work.

    Although averages will vary by school, my sense is that among students admitted to UC Riverside (currently ranked #32 in the U.S.), a typical GRE score is 160-167 verbal (86th-98th percentile) and 153-165 quantitative (51st-89th percentile), with totals in the 320-330 range. (No one I know takes the Writing score seriously, but 5 is a typical score.) Much lower would potentially be a disadvantage, whereas a nearly perfect score would be an advantage. Let me emphasize, however, that at UCR, and I believe most other places, a low score is not a defeater: Students with weak or (e.g., if foreign) no GREs are regularly admitted if their application is otherwise strong. Conversely, great GREs are at best a small favorable factor, more likely to help with fellowship opportunities than with admission itself.

    There is no GRE subject test in philosophy.

    [One philosopher taking a test]

    Race, Gender, and Disability

    Applications will often have optional tick-boxes in which you can indicate race/ethnicity, gender, veteran status, disability status, or membership in other social categories. Letter writers must also choose pronouns, and they might choose to mention disability or race if they think it's relevant. (Some would never mention such things. Others think they help the applicant by doing so, if the applicant is a member of a historically underrepresented group. If you prefer to keep the information confidential, tell your letter writers in advance.) Committees will often guess gender and ethnicity based on names.

    Women and people of color are notoriously underrepresented in U.S. academic philosophy, compared to most other disciplines (data on other dimensions of diversity are harder to obtain). I believe there are persistent systemic biases. However, I also believe that most admissions committees would like to counter these biases and see a broader diversity in the field. Admissions committees may nonetheless show bias implicitly in how they read a file from "María Gonzalez" compared to a file from "Jake Miller", or in how they read a file from someone with a serious disability. For these reasons and others, it is perfectly reasonable not to want to disclose your race, gender, disability status, etc., to the extent these can be hidden. Don't let yourself be pressured into revealing something you're not comfortable revealing.

    Schools that allow a "personal" statement in addition to a statement of purpose invite applicants to expand on obstacles they have overcome or other ways that they might contribute to the diversity of the graduate program. For discussion, see my advice on Statements of Purpose.

    Presentations, Publications, Life Experience

    If you have published a paper in an undergraduate journal, presented at an undergraduate conference, or earned other achievements of that sort, briefly mention them in your statement of purpose. However, they normally don't count for much.

    If you have life experience relevant to your proposed area of study, also mention this in your statement of purpose -- but only do this if it is genuinely relevant, and err on the side of being brief and factual rather than overplaying it. For example, if you want to study philosophy of law and you have some work experience in law, mention it. If you want to study philosophy of race and you have worked with an organization focused on racial justice, briefly describe your experience and its relevance to your philosophical interests.

    In disciplines other than philosophy, laboratory experience, work experience, and life experience are often an important part of the application. In philosophy, however, unless your situation is unusual, admissions decisions are almost always based on academic performance plus considerations of fit, balance, and diversity in the entering class, with other considerations having little weight.

    Reapplying to Programs You Were Rejected from Last Year

    Yes, this is fine! Likely, the admissions committee's composition will have partly changed, so you might get a fresh set of eyes. Also, hopefully, your application will be somewhat stronger.

    Late Applications

    ... are sometimes accepted. This will vary by school.

    Personal Contacts and Connections

    Such things don't help much, I suspect, unless they bring substantive new information. If a professor at some point had a good, substantive, philosophical conversation with an applicant and mentions that to the committee, that might help a bit. But seeking out professors for such purposes could backfire if it seems like brown-nosing, or if the applicant seems immature, arrogant, or not particularly philosophically astute. Some professors may be very much swayed by personal connections, I suppose. I myself, however, often have a slightly negative feeling that I'm being "played" if someone who is applying to our PhD program contacts me during application season.

    If you seek to build a personal connection with a professor, it's best to do so after application season is over or long before you have begun applications. The best way to build a connection is this: Carefully read something recently written by the professor (within the past four years maybe), then ask an interesting and well-informed question about it. You can send them the question by email or possibly ask them face to face at a conference or a local event. The odds of an email reply are probably below 50% and tend to be lower for the best-known faculty, who are inundated with emails from strangers. The chance of a sustained correspondence is even lower, but it's not unheard of.

    Unless you are genuinely brimming with inspiration and enthusiasm, you probably won't want to attempt to build these kinds of connections as an undergraduate. However, I recommend remembering this advice for later. If and when you are an advanced graduate student, building connections in this way, outside of your home department, can be both intellectually rewarding and good for your career.

    ----------------------------------------------------------

    In the final part of this series I will discuss what to do after you hear back. (Here's the 2007 version.)

    [image source]

    Wednesday, February 19, 2020

    Do Business Ethics Classes Make Students More Ethical? Students and Instructors Agree: They Do!

    I'm inclined to think that university ethics classes typically have little effect on students' real-world moral behavior.

    I base this skepticism partly on Joshua Rust's and my finding, across a wide variety of measures, that ethics professors generally don't behave much differently than other professors -- and if they don't behave differently, why would students? And I base it partly on my (now somewhat dated) review of business ethics and medical ethics instruction specifically, which finds shoddy research methods and inconsistent results suggestive of an underlying non-effect.[1]

    On the other hand, part of the administrative justification of ethics classes -- especially medical ethics and business ethics -- appears to be the hope that students will eventually act more ethically as a result of having taken these courses. Administrators and instructors who aim at this result presumably expect that the classes are at least sometimes effective.

    The issue, perhaps surprisingly, isn't very well studied. I parody only slightly when I say that the typical study on this topic asks students at the end of class "are you more ethical now?", and when they respond "yes" at rates greater than chance, the researcher concludes that the instruction was effective.

    -----------------------------------------------------

    Nina Strohminger and I thought we'd ask instructors and students what they thought about this. We wanted to know two things. First, do instructors and students think that business ethics instruction should aim at improving students morally? Second, do they think that business ethics classes do in fact tend to improve students morally?

    Our respondents were 101 business ethics instructors at the 2018 Society for Business Ethics conference, plus students from three very different universities: 339 students from Penn (an Ivy League university with an elite business school), 173 students from UC Riverside (a large state university), and 81 students from Seattle University (a small-to-medium-sized Jesuit university, where Jessica Imanaka coordinated the distribution). Surveys were anonymous, pen and paper. Students completed their surveys on the spot near the beginning of the first day of instruction in business ethics courses.

    Using a five-point scale from "not at all important" to "extremely important", Question 1 asked respondents to "rate the importance of the following goals that YOU PERSONALLY AIM to get [to have your students get] from your business ethics classes":

  • An intellectual appreciation of fundamental ethical principles
  • An understanding of what specific business practices are considered ethical and unethical, whether or not I [they] choose to comply with those practices
  • Tools for thinking in a more sophisticated way about ethical quandaries
  • Interesting readings and fun puzzle cases that feed my [their] intellectual curiosity
  • Practical knowledge that will help me be a more ethical business leader [them be more ethical business leaders] in the future
  • Satisfying my [their] degree requirements
  • Grades that will look good on my [their] transcripts

    (Brackets indicate changes for the instructors' version.)

    The target prompt was the fifth: Practical knowledge that will help them be more ethical business leaders in the future.

    [students in a business ethics class]

    Responses were near ceiling. 58% of students rated practical knowledge that will help them be more ethical business leaders as "extremely important" to them, the highest possible choice. The mean response was 4.44 on the 1-5 scale. This was the highest mean response among the seven possible goals. 40% of students rated it more highly than they rated "satisfying my degree requirements" and 48% rated it more highly than "grades that will look good on my transcript". Responses were similar for all three schools. If we accept these self-reports, gaining practical knowledge that will help them actually become more ethical is one of students' most important personal aims in taking business ethics classes.

    Instructors' responses were similar: 58% said it was personally "extremely important" to them to have students gain practical knowledge that will help them be more ethical business leaders in the future. The mean response was 4.41 on the 1-5 scale.

    Question 2 asked students and instructors to guess each other's goals (with the same seven possible goals). Students tended to think that professors would also very highly rate (mean 4.71) "practical knowledge that will help students be more ethical business leaders in the future". Professors tended to think that students would regard such knowledge as important (mean 4.09) but not as important as satisfying degree requirements (mean 4.42).

    Question 3 asked respondents how likely they thought it was that "the average student gets the following things from their [your] business ethics classes". The same seven goals were presented, with a 1-5 response scale from "not at all likely" to "extremely likely".

    Overall, both students and instructors expressed optimism: Both groups' mean response to this question was 3.84 on the 1-5 scale.

    Based on this part of the questionnaire, it looks like students and instructors agree: It's important to them that their business ethics classes produce practical knowledge that helps students become more ethical business leaders, and they think that their business ethics classes do tend to have that effect.

    On the second page of the questionnaire, we asked these questions directly.

    Question 4: Do you think that, as a result of having taken [your] business ethics classes, [your] students on average will behave more ethically, less ethically, or about the same as if they had not taken a business ethics course?

    Among instructors, 64% said more ethical, 35% said about the same, and 1% said less ethical. Among students, 54% said more ethical, 45% said about the same, and again only 1% said less ethical.

    Question 5: To what extent do you agree that the central aim of business ethics instruction should be to make students more ethical? [1 - 5 scale from "strongly disagree" to "strongly agree"]

    Among instructors, 63% agreed or strongly agreed and only 19% disagreed or strongly disagreed. Among students, 67% agreed or strongly agreed and only 9% disagreed or strongly disagreed.

    The results of these direct questions thus broadly fit with the results in terms of specific goals. Either way you ask, both business ethics students and business ethics instructors say that business ethics classes should and do make students more ethical.

    -----------------------------------------------------

    Many cautions and caveats apply. The results might be influenced by "socially desirable responding" -- respondents' tendency to express attitudes that they think will be socially approved (maybe especially if they think their instructors might be watching). Also, instructors attending a business ethics conference might not be representative of business ethics instructors as a whole -- maybe more gung-ho. Students and instructors might not know their own goals and values. They might be excessively optimistic about the transformative power of university instruction. Etc. I confess to having some doubts.

    Nonetheless, I was struck by the apparent degree of consensus, among students and instructors, that business ethics classes should lead students to become more ethical, and by the majority opinion that they do indeed have that effect.

    -----------------------------------------------------

    Note:

    [1] However, Peter Singer, Brad Cokelet, and I have also recently conducted a study that suggests that under certain conditions teaching the philosophical material on meat ethics can lead students to purchase less meat at campus dining locations.

    Friday, February 14, 2020

    Thoughts on Conjugal Love

    For Valentine's Day, some thoughts on love.

    In 2003, my Swiss friends Eric and Anne-Françoise Rose asked me to contribute something to their wedding ceremony. Here’s a lightly revised version of what I wrote, concerning conjugal love, the distinctive kind of love between spouses.

    #

    Love is not a feeling. Feelings come and go, while love is steady. Feelings are passions in the classic sense of passion, which shares a root with “passive” – they arrive mostly unbidden, unchosen. Love, in contrast, is something built. The passions felt by teenagers and writers of romantic lyrics, felt so intensely and often so temporarily, are not love – though they might sometimes be the prelude to it.

    Rather than a feeling, love is a way of structuring your values, goals, and reactions. Central to love is valuing the good of the other for their own sake. Of course, we all care about the good of other people we know, for their own sake and not just for other ends. Only if the regard is deep, only if we so highly value the other’s well-being that we are willing to thoroughly restructure our own goals to accommodate it, and only if this restructuring is so rooted that it automatically informs our reactions to the person and to news that could affect them, do we possess real love.

    Conjugal love involves all of this, but it is also more than this. In conjugal love, one commits to seeing one’s life always with the other in view. One commits to pursuing one’s major projects, even when alone, in a kind of implicit conjunction with the other. One’s life becomes a co-authored work.

    Parental love for a young child might be purer and more unconditional than conjugal love. The parent expects nothing back from a young child. The parent needn’t share plans and ideals with an infant. Later, children will grow away into their separate lives, independent of parents’ preferences, while we retain our parental love for them.

    Conjugal love, because it involves the collaborative construction of a joint life, can’t be unconditional in this way. If the partners don’t share values and a vision, they can’t steer a mutual course. If one partner develops too much of a separate vision or doesn’t openly and in good faith work with the other toward their joint goals, conjugal love fails and is, at best, replaced with some more general type of loving concern.

    Nevertheless, to dwell on the conditionality of conjugal love, and to develop a set of contingency plans should it fail, is already to depart from the project of jointly fabricating a life, and to begin to develop individual goals opposing those of the partner. Conjugal love requires an implacable, automatic commitment to responding to all major life events through the mutual lens of marriage. One can’t embody such a commitment while harboring serious back-up plans and persistent thoughts about the contingency of the relationship.

    Is it paradoxical that conjugal love requires lifelong commitment without contingency plans, yet at the same time is contingent in a way that parental love is not? No, there is no paradox. If you believe something is permanent, you can make lifelong promises and commitments contingent upon it, because you believe the thing will never fail you. Lifelong commitments can be built upon bedrock, solid despite their dependency on that rock.

    This, then, is the significance of the marriage ceremony: It is the expression of a mutual unshakeable commitment to build a joint life together, where each partner’s commitment is possible, despite the contingency of conjugal love, because each partner trusts the other partner’s commitment to be unshakeable.

    A deep faith and trust must therefore underlie true conjugal love. That trust is the most sacred and inviolable thing in a marriage, because it is the very foundation of its possibility. Deception and faithlessness destroy conjugal love because, and to the extent that, they undermine that trust. For the same reason, honest and open interchange about long-standing goals and attitudes is at the heart of marriage.

    Passion alone can’t ground conjugal trust. Neither can shared entertainments and the pleasure of each other’s company. Both partners must have matured enough that their core values are stable. They must be unselfish enough to lay everything on the table for compromise, apart from those permanent, shared values. And they must resist the tendency to form secret, selfish goals. Only to the degree they approach these ideals are partners worthy of the trust that makes conjugal love possible.

    [For the final, published version of this essay, please see A Theory of Jerks and Other Philosophical Misadventures.]

    [image source]

    Tuesday, February 11, 2020

    Question: Why Do Great Philosophers Embrace Such Wacky Views? Answer: The World Itself Is Wacky

    Recently, philosopher Michael Huemer seems intent on irritating philosophers of every stripe. (This isn't necessarily a bad thing.) On Saturday, he took aim at philosophical heroes, arguing that "great philosophers are bad philosophers". He notes that great philosophers tend to confidently defend bizarre conclusions, which he suggests reveals their poor judgment; and often they rely, he says, on arguments so terrible that "even an undergrad" can see the fallacies and non sequiturs. As examples, he offers Socrates's bad arguments against Thrasymachus in Book I of the Republic, Hume's "absurdly skeptical" conclusions in the Treatise and Enquiries, and Kant's willingness to take his thinly defended "categorical imperative" to absurd conclusions, such as not telling a lie even to prevent a murder.

    If you don't already know this material, I won't detain you with explanations here -- Huemer's are succinct and readable. I allow that on the face of it, Huemer has a pretty good case. And he's not targeting obscure philosophers or obscure passages. These are some of the most famous parts of some of the most famous works in the Western canon. And the views and arguments are decidedly... well, let's go with wacky. Nor is Huemer especially cherry-picking. There's a lot of wacky-seeming stuff in other canonical philosophers too, for example, Leibniz on monads, Nietzsche on eternal recurrence, Descartes on animal (non-)minds, David Lewis on the real existence of possible worlds....

    Huemer has an explanation. He suggests that what makes a philosopher "great" is that the philosopher advances intriguing ideas that future generations find worth arguing about. Ordinary, bland truths, convincingly defended, don't really heat up a conversation. When faced with a compelling argument for a reasonable conclusion, people might react with something like, "yeah, that sounds right," and just move on. If in contrast you say, "there is no self" or "you shouldn't even lie to a murderer chasing an innocent person" (and for whatever sociological reason people take you seriously), that can really start up a good debate! Maybe a debate that lasts centuries. Possibly, the only people willing to advance such claims are bad philosophers -- philosophers who lack the good judgment to recognize the absurdity of their conclusions and who lack the critical chops to recognize that their supporting arguments are rotten. Hence, great philosophers are bad philosophers. QED!

    Is Huemer's argument a good one? Or is it, perhaps instead, a great one (in the strict Huemerian sense of "great")?

    I am probably a good target audience for Huemer's argument: Regular readers will know that I am quite happy to attribute plain old bad argumentation to some of the great historical philosophers, including Kant and Laozi, in accordance with my rejection of excessive charity in reading history of philosophy. Although I like Hume and Plato and (some parts of) Kant, I'm not bothered by Huemer's suggestion and I rather enjoy the idea that the great philosophers are fallible boneheads just like the rest of us.

    However, I have one observation about a piece of the story that Huemer's hypothesis leaves unexplained, and I have a competing explanation to offer instead.

    Here's what Huemer leaves unexplained: The lack of "good" philosophers in the historical record.

    If Huemer's hypothesis were correct, you'd think that among the contemporaries of Plato, Hume, and Kant there would be good philosophers who defend sensible views on solid grounds. These philosophers might not get as much attention as the provocative philosophers, but it would be odd if historical records of them entirely disappeared. Yet there are no philosophers -- or at least (as I'll explain below) no ambitious metaphysicians -- who appear to meet Huemer's standard of being a "good" philosopher.

    Huemer suggests that Aristotle might be somewhat better than the trio he highlights, even if not entirely good. On Facebook, some others suggested maybe Thomas Reid might be a good philosopher who was a contemporary of Hume and Kant. But I doubt that either Aristotle or Reid is good by Huemer's standards. Some of Aristotle's and Reid's views are quite strange, and their arguments for those strange views aren't reliably sensible. For example, Reid, despite his reputation as a "common sense" philosopher, argues that material objects have no causal power and can't even hold together into consistent shapes, without the constant intervention of immaterial souls (an opinion he acknowledges is contrary to the views of the "vulgar"). I have argued that there are some metaphysical issues -- particularly the issue of the relation between mind and body -- where not a single philosopher in the whole history of Earth has been able to articulate a fleshed-out positive theory that isn't both highly dubious and in some respects radically contrary both to our current common sense and to the common sense of their own historical era. (I am still willing to entertain possible counterexamples, if you have some to suggest.)

    Why is this? Why are philosophical theories about the metaphysics of mind (and, I'd suggest, at least also personal identity, causation, and object individuation) all so bizarre and dubious? Here's my hypothesis: The world is bizarre and (for the foreseeable future) philosophically intractable. This is my competing explanation of the bizarre and dubious claims that Huemer has noted often occupy center stage in the history of philosophy.

    The world is bizarre in the following sense: Some things that are true of it are radically contrary to common sense. In physics, consider quantum mechanics and relativity theory. And in philosophy, the bizarreness is epistemically intractable, for the foreseeable future, for the following pair of reasons: (1.) Our common sense about fundamental issues of metaphysics is probably inconsistent at root, and if so, no self-consistent well-developed metaphysics could possibly adhere to all of it. (This explains the inevitable bizarreness.) And (2.) In the domains under discussion, empirical methods are indecisive, and we need to rely on this flawed, inconsistent common sense to a substantial degree. This generates intractable debates where the violations of common sense of one theory become the commonsensical starting presuppositions of competitor theories, which then bring radical violations of common sense of their own. No theory decisively meets all reasonable criteria of excellence. (This explains the inevitable dubiety.)

    Great philosophers are undaunted! Amid the competing bizarrenesses, they find some to favor. (The epistemic landscape isn't totally flat: There still are considerations pro and con and better and worse ideas.) They defend their favored views as best they can -- of course indecisively, given the bad epistemic situation of ambitious metaphysical philosophy.

    How about arguments we now think of as "good" arguments for sensible conclusions? Either (a.) they are unambitious, rather than going after the really huge, intractable issues (especially in fundamental metaphysics), or (b.) they are flawed for reasons that remain mostly invisible to their proponents (i.e., probably you. Sorry!), or (c.) they are forms of skepticism about the enterprise.

    This metaphilosophy is probably at its most plausible when applied to fundamental issues of metaphysics. The best examples of totally weird views and arguments tend to be in metaphysics. Maybe other subfields work differently? (I do think, however, that ethics might soon face a cognitive and methodological crisis, when confronted with a range of Artificial Intelligence cases for which it is conceptually unprepared.)

    Great philosophers embrace bizarre views because our ordinary commonsense understanding of the world is so radically deficient that no non-bizarre view is defensible or even, once one tries to specify the details, coherently articulatable. Great philosophers confront this bizarreness, defending their best guess with the indecisive argumentative tools they have, pushing us forward into the weird unknown.

    [image source]