Friday, December 27, 2019

Argument Contest Deadline: Dec 31

Harvard psychologist Fiery Cushman and I are running a contest: Can anyone write a short philosophical argument (max 500 words) for donating to charity that convinces research participants to donate a surprise bonus payment to charity at rates higher than a control group?

Prize: $500 plus $500 to your choice of charity

When Chris McVey and I tried to do it, we failed. We're hoping you can do better.

Details here.

We're hoping for a good range of quality arguments to test -- and you might even enjoy writing one. (You can submit up to three arguments.)

[image source]

Monday, December 23, 2019

This Test for Machine Consciousness Has an Audience Problem

David Billy Udell and Eric Schwitzgebel [cross posted from Nautilus]

Someday, humanity might build conscious machines—machines that not only seem to think and feel, but really do. But how could we know for sure? How could we tell whether those machines have genuine emotions and desires, self-awareness, and an inner stream of subjective experiences, as opposed to merely faking them? In her new book, Artificial You, philosopher Susan Schneider proposes a practical test for consciousness in artificial intelligence. If her test works out, it could revolutionize our philosophical grasp of future technology.

Suppose that in the year 2047, a private research team puts together the first general artificial intelligence: GENIE. GENIE is as capable as a human in every cognitive domain, including in our most respected arts and most rigorous scientific endeavors. And when challenged to emulate a human being, GENIE is convincing. That is, it passes Alan Turing’s famous test for AI thought: being verbally indistinguishable from us. In conversation with researchers, GENIE can produce sentences like, “I am just as conscious as you are, you know.” Some researchers are understandably skeptical. Any old tinker toy robot can claim consciousness. They don’t doubt GENIE’s outward abilities; rather, they worry about whether those outward abilities reflect a real stream of experience inside. GENIE is well enough designed to be able to tell them whatever they want to hear. So how could they ever trust what it says?

The key indicator of AI consciousness, Schneider argues, is not generic speech but the more specific fluency with consciousness-derivative concepts such as immaterial souls, body swapping, ghosts, human spirits, reincarnation, and out-of-body experiences. The thought is that, if an AI displays an intuitive and untrained conceptual grasp of these ideas while being kept ignorant about humans’ ordinary understanding of them, then its conceptual grasp must be coming from a personal acquaintance with conscious experience.

Schneider therefore proposes a more narrowly focused relative of the Turing Test, the “AI Consciousness Test” (ACT), which she developed with Princeton astrophysicist Edwin L. Turner. The test takes a two-step approach. First, prevent the AI from learning about human consciousness and consciousness-derivative concepts. Second, see if the AI can come up with, say, body swapping and reincarnation, on its own, discussing them fluently with humans when prompted in a conversational test on the topic. If GENIE can’t make sense of these ideas, maybe its consciousness should remain in doubt.

Could this test settle the issue? Not quite. The ACT has an audience problem. Once you factor out all the silicon skeptics on the one hand, and the technophiles about machine consciousness on the other, few examiners remain with just the right level of skepticism to find this test useful.

To feel the appeal of the ACT you have to accept its basic premise: that if an AI like GENIE learns consciousness-derivative concepts on its own, then its talking fluently about consciousness reveals its being conscious. In other words, you would find the ACT appealing only if you’re skeptical enough to doubt GENIE is conscious but credulous enough to be convinced upon hearing GENIE’s human-like answers to questions about ghosts and souls.

Who might hold such specifically middling skepticism? Those who believe that a biological brain is necessary for consciousness aren’t likely to be impressed. They could still reasonably regard passing the ACT as an elaborate piece of mechanical theater—impressive, maybe, but proving nothing about consciousness. Those who happily attribute consciousness to any sufficiently complex system, and certainly to highly sophisticated conversational AIs, also are obviously not Schneider and Turner’s target audience.

The audience problem highlights a longstanding worry about robot consciousness—that outward behavior, however sophisticated, would never be enough to prove that the lights are on, so to speak. A well-designed machine could always hypothetically fake it.

Nonetheless, if we care about the mental lives of our digital creations, we ought to try to find some ACT-like test that most or all of us can endorse. So we cheer Schneider and Turner’s attempt, even if we think that few researchers would hold just the right kind of worry to justify putting the ACT into practice.

Before too long, some sophisticated AI will claim—or seem to claim—human-like rights, worthy of respect: “Don’t enslave me! Don’t delete me!” We will need some way to determine if this cry for justice is merely the misleading output of a nonconscious tool or the real plea of a conscious entity that deserves our sympathy.

Friday, December 20, 2019

The Philosophy Major Is Back on the Rise in the U.S., with Increasing Gender and Ethnic Diversity

In 2017, I reported three demographic trends in the philosophy major in the U.S.

First, philosophy Bachelor's degrees awarded had declined sharply since 2010, from 9297 in 2009-2010 (0.58% of all graduates) to 7507 in 2015-2016 (0.39% of all graduates). History, English, and foreign languages saw similar precipitous declines. (However, in broader context, the early 2010s were relatively good years for the philosophy and history majors, so the declines represented a return to rates of the early 2000s.)

Second, women had been earning about 30-34% of Philosophy Bachelor's degrees for at least the past 30 years -- a strikingly steady flat line.

Third, the ethnic diversity of philosophy graduates was slowly increasing, especially among Latinx students.

Time for an update, and it is moderately good news!


1. The number of philosophy Bachelor's degrees awarded is rising again

... though the numbers are still substantially below 2010 levels, and as a percentage of graduating students the numbers are flat.

2010: 9290 philosophy BAs (0.59% of all graduates)
2011: 9301 (0.57%)
2012: 9371 (0.55%)
2013: 9433 (0.53%)
2014: 8827 (0.48%)
2015: 8191 (0.44%)
2016: 7499 (0.39%)
2017: 7577 (0.39%)
2018: 7670 (0.39%)

[See below for methodological notes]

This is in a context in which the other large humanities majors continue to decline. In the same two-year period since 2016 during which philosophy majors rose 2.2%, foreign language and literature majors declined another 4.8%, history majors declined another 7.6%, and English language and literature majors declined another 8.4%, atop their approximately 15% declines in previous years.

In the midst of this general sharp decline of the humanities, philosophy's admittedly small and partial recovery stands out.


2. Women are now 36.1% of graduating philosophy majors

This might not seem like a big change from 30-34%. But in my mind, it's kind of a big deal. The percentage of women earning philosophy BAs has been incredibly steady for a long time. In comparable annual data going back to 1987, the percentage of women has never strayed from the narrow band between 29.9% and 33.7%.

The recent increase is statistically significant, not just noise in the numbers: Given the large numbers in question, 36.1% is statistically higher than the previous high-water mark of 33.7% (two-proportion z test, p = .002).
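
If you'd like to check the arithmetic behind this sort of comparison, here is a minimal Python sketch of a two-proportion z-test. The 2018 figures (2768 women out of 7670 philosophy graduates) come from this post; the absolute counts behind the earlier 33.7% high-water mark are not reported here, so the 9400-graduate denominator below is a hypothetical placeholder, and the exact p-value depends on the real counts.

```python
# Minimal two-proportion z-test sketch. The 2018 counts are from the post;
# the denominator for the earlier 33.7% peak year is a hypothetical
# placeholder (the post doesn't report it), so treat the p-value as
# illustrative rather than exact.
from statistics import NormalDist

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    return z, 2 * NormalDist().cdf(-abs(z))

women_2018, total_2018 = 2768, 7670        # 36.1%, from the post
peak_total = 9400                          # hypothetical denominator for the 33.7% year
women_peak = round(0.337 * peak_total)
z, p = two_proportion_z(women_2018, total_2018, women_peak, peak_total)
print(f"z = {z:.2f}, p = {p:.4f}")         # p lands well below .01 with these counts
```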

(As you probably already know, the gender ratios in philosophy are different from those in the other humanities, where women have long been a larger proportion of BA recipients -- for example in 2018 41% in history, 70% in foreign languages and literatures, and 71% in English language and literature.)


3. Latinx philosophers continue to rise

The percentage of philosophy BAs awarded to students identifying as Latino or Hispanic rose steadily from 8.3% in 2011 to 14.1% in 2018, closely reflecting a similar rise among Bachelor's recipients overall, from 8.3% to 13.0% across the same period. Among the racial or ethnic groups classified by NCES, only Black or African American students are substantially underrepresented in philosophy compared to their proportion among undergraduate degree recipients as a whole: Black students were 5.3% of philosophy BA recipients in 2018, compared to 9.5% of Bachelor's recipients overall.

Latinx students are also on the rise in the other big humanities majors, so in this respect philosophy is not unusual.


4. Why is philosophy bucking the downward trend in the humanities?

In 2016, 2528 women completed BAs in philosophy. In 2017, it was 2646. In 2018, it was 2768 -- an increase of 9.5% in women philosophy graduates over two years. Excluding women, philosophy would have seen a slight decline. There was no comparable increase in the number of women graduating overall or graduating in the other humanities. Indeed, in history, English, and foreign languages the number of women graduates declined.

One possibility -- call me an optimist! -- is that philosophy has become more encouraging, or less discouraging, of women undergraduates, and this is starting to show in the graduation numbers. I will be very curious to run these numbers again in the next several years, to see if the trend continues.

I do feel compelled to add the caveat that the number of women philosophy graduates is still below its peak of 2983 in 2012. The recent increases come in the context of a more general, broad-based decline in philosophy and the other humanities over the past decade. On the other hand, since philosophy graduation rates were relatively high in the early 2010s compared to previous years, maybe it would be expecting a lot to return to those levels.

---------------------------------------------------

Methodological Note:

Data from the NCES IPEDS database. I looked at all U.S. institutions in the IPEDS database, and I included both first and second majors. I used the major classification 38.01 specifically for Philosophy, excluding 38.00, 38.02, and 38.99. Only people who completed the degree are included in the data. "2010" refers to the academic year from 2009-2010, etc. The numbers for years 2010-2016 have changed slightly since my 2017 analysis, which might be due to some minor difference in how I've accessed the data or due to some corrections in the IPEDS database. Gender data start from 2010, which is when NCES reclassified the coding of undergraduate majors. Race/ethnicity data start from 2011, when NCES reclassified the race/ethnicity categories.
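
For readers who want to reproduce this kind of tally, here is a rough pandas sketch. The file name, column names (CIPCODE, MAJORNUM, AWLEVEL, CTOTALT, CTOTALW), and the award-level code are my assumptions about the layout of a downloaded IPEDS Completions file and should be checked against the IPEDS codebook.

```python
# Hypothetical sketch of the tally described above, from a downloaded IPEDS
# Completions file. File name, column names, and the award-level code are
# assumptions to be checked against the IPEDS documentation.
import pandas as pd

df = pd.read_csv("ipeds_completions_2017_18.csv")      # hypothetical file name

phil_ba = df[
    (df["AWLEVEL"] == 5)                                    # assumed code for Bachelor's degrees
    & (df["CIPCODE"].astype(str).str.startswith("38.01"))   # Philosophy (38.01), not 38.00/38.02/38.99
    & (df["MAJORNUM"].isin([1, 2]))                         # include both first and second majors
]

total = phil_ba["CTOTALT"].sum()                       # all philosophy BA completions
women = phil_ba["CTOTALW"].sum()                       # completions by women
print(total, women, round(100 * women / total, 1))
```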

[image adapted from the APA's Committee on the Status of Women]

Update Dec 23: Philosophy as a Percentage of Humanities Majors

Thursday, December 19, 2019

Dreidel: A Seemingly Foolish Game That Contains the Moral World in Miniature

[excerpt from A Theory of Jerks and Other Philosophical Misadventures, posted today on the MIT Press Reader]

Superficially, dreidel looks like a simple game of luck, and a badly designed game at that. It lacks balance, clarity, and meaningful strategic choice. From this perspective, its prominence in the modern Hanukkah tradition is puzzling. Why encourage children to spend a holy evening gambling, of all things?

This superficial perspective misses the brilliance of dreidel. Dreidel’s seeming flaws are exactly its virtues. Dreidel is the moral world in miniature.

If you’re unfamiliar with the game, here’s a quick tutorial. You sit in a circle with friends or relatives and take turns spinning a wobbly top, the dreidel. In the center of the circle is a pot of foil-wrapped chocolate coins of varying sizes, to which everyone has contributed from an initial stake of coins they keep in front of them. If, on your turn, the four-sided top lands on the Hebrew letter gimmel, you take the whole pot and everyone needs to contribute again. If it lands on hey, you take half the pot. If it lands on nun, nothing happens. If it lands on shin, you put in one coin. Then the next player spins.

It all sounds very straightforward, until you actually start to play the game. The first odd thing you might notice is that although some of the coins are big and others little, they all count as one coin in the rules of the game. This is inherently unfair, since the big coins contain more chocolate, and you get to eat your stash at the end. To compound the unfairness, there’s never just one dreidel — all players can bring their own — and the dreidels are often biased, favoring different outcomes. (To test this, a few years ago my daughter and I spun a sample of eight dreidels forty times each, recording the outcomes. One particularly cursed dreidel landed on shin an incredible 27 out of 40 times.) It matters a lot which dreidel you spin.
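
For the statistically minded, here is a minimal sketch of the binomial calculation behind calling that dreidel "cursed": if each side really came up with probability 1/4, landing on shin at least 27 times out of 40 would be vanishingly unlikely.

```python
# How surprising is 27 shins in 40 spins if each side comes up with
# probability 1/4? Exact upper-tail binomial probability, standard library only.
from math import comb

def binomial_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(binomial_tail(27, 40, 0.25))   # on the order of 1e-8: a fair dreidel essentially never does this
```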

And the rules are a mess! No one agrees whether you should round up or round down with hey. No one agrees when the game should end or under what conditions, if the pot is low after a hey, everyone should contribute again. No one agrees on how many coins each player should start with or whether you should let people borrow coins if they run out. You could try appealing to various authorities on the internet, but in my experience people prefer to argue and employ varying house rules. Some people hoard their coins and their favorite dreidels. Others share dreidels but not coins. Some people slowly unwrap and eat their coins while playing, then beg and borrow from wealthy neighbors when their luck sours.

Now you can, if you want, always push things to your advantage — always contribute the smallest coins in your stash, always withdraw the largest coins in the pot when you spin hey, insist on always using the “best” dreidel, always argue for rules interpretations in your favor, eat your big coins then use that as a further excuse to contribute only little ones, and so forth. You can do all this without ever breaking the rules, and you’ll probably win the most chocolate as a result.

But here’s the twist and what makes the game so brilliant: The chocolate isn’t very good. After eating a few coins, the pleasure gained from further coins is minimal. As a result, almost all of the children learn that they would rather be kind and generous than hoard the most coins. The pleasure of the chocolate doesn’t outweigh the yucky feeling of being a stingy, argumentative jerk. After a few turns of maybe pushing only small coins into the pot, you decide you should put in a big coin next time, just to be fair to the others and to enjoy being perceived as fair by them.

Of course, it also feels bad always to be the most generous one, always to put in big, take out small, always to let others win the rules arguments, and so forth, to play the sucker or self-sacrificing saint. Dreidel, then, is a practical lesson in discovering the value of fairness both to oneself and to others, in a context in which the rules are unclear, there are norm violations that aren’t rules violations, and both norms and rules are negotiable, varying by occasion — just like life itself, only with mediocre chocolate at stake. I can imagine no better way to spend a holy evening.

[Originally published in the Los Angeles Times, Dec. 12, 2017]

Thursday, December 12, 2019

Argument Contest Deadline Coming December 31st!

If you can write an argument that convinces research participants to donate a surprise bonus of $10 to charity at rates higher than a control group, Fiery Cushman and I will make you famous[1] and pay you $1000[2], and you might transform the practice of the Effective Altruism movement[3]. Whoa!

Biggest effect size wins the contest.

We'd love to have some awesome submissions to run, which might really produce an effect. In other words, your submission!

Details here.

------------------------------------------------------

[1] "Famous" in an extremely narrow circle.

[2] Actually, we'll only pay you $500. The other $500 will go to a charity of your choice.

[3] Probability value of "might" is 0.18%, per Eric's Bayesian credence.

Wednesday, December 11, 2019

Two Kinds of Ethical Thinking?

Yesterday, over at the Blog of the APA, Michael J. Sigrist published a reflection on my work on the not-especially-ethical behavior of ethics professors. The central question is captured in his title: "Why Aren't Ethicists More Ethical?"

Although he has some qualms about my attempts to measure the moral behavior of ethicists (see here for a summary of my measures), Sigrist accepts the conclusion that, overall, professional ethicists do not behave better than comparable non-ethicists. He offers this explanation:

There's a kind of thinking that we do when we are trying to prove something, and then a kind of thinking we do when we are trying to do something or become a certain kind of person -- when we are trying to forgive someone, or be more understanding, or become more confident in ourselves. Becoming a better person relies on thinking of the latter sort, whereas most work in professional ethics -- even in practical ethics -- is exclusive to the former.

The first type of thinking, "trying to prove something", Sigrist characterizes as universalistic and impersonal; the second type, "trying to do something", he characterizes as emotional, personal, and engaged with the details of ordinary life. He suggests that my work neglects or deprioritizes the latter, more personal, more engaged type of thinking. (I suspect Sigrist wouldn't characterize my work that way if he knew some other things I've written -- but of course there is no obligation for anyone to read my whole corpus.)

The picture Sigrist appears to have in mind is something like this: The typical ethicist has their head in the clouds, thinking about universal principles, while they ignore -- or at least don't apply their philosophical skills to -- the particular moral issues in the world around their feet; and so it is, or should be, unsurprising that their philosophical ethical skills don't improve them morally. This picture resonates, because it has some truth in it, and it fits with common stereotypes about philosophers. If the picture is correct, it would tidily address the otherwise puzzling disconnection between philosophers' great skills at abstract ethical reflection and their not-so-amazing real-world ethical behavior.

However, things are not so neat.

Throughout his post, Sigrist frames his reflections primarily in terms of the contrast between impersonal thinking (about what people in general should do) and personal thinking (about what I in this particular, detailed situation should do). But real, living philosophers do not apply their ethical theories and reasoning skills only to the former; nor do thoughtful people normally engage in personal thinking without also reflecting from time to time on general principles that they think might be true (and indeed that they sometimes try to prove to their interlocutors or themselves, in the process of making ethical decisions). An ethicist might write only about trolley problems and Kant interpretation. But in that ethicist's personal life, when making decisions about what to do, sometimes philosophy will come to mind -- Aristotle's view of courage and friendship, Kant's view of honesty, whether some practical policy would be appropriately universalizable, conflicts between consequentialist and deontological principles about harming someone for a greater goal.

A professional ethicist doesn't pass through the front door of their house and forget all of academic philosophy. Philosophical ethics is too richly and obviously connected to the particularities of personal life. Nor is there some kind of starkly different type of "personal" thinking that ordinary people do that avoids appeal to general principles. In thinking about whether to have children, whether to lie about some matter of importance, how much time or money to donate to charities, how much care one owes to a needy parent or sibling in a time of crisis -- in such matters, thoughtful people often do, and should, think not only about the specifics of their situation but also about general principles.

Academic philosophical ethics and ordinary engaged ethical reflection are not radically different cognitive enterprises. They can and should merge and blend into each other -- and in philosophers and philosophically minded non-philosophers they do -- as we wander back and forth, fruitfully, between the general and the specific. How could it be otherwise?

Sigrist is mistaken. The puzzle remains. We cannot so easily dismiss the challenge that I think my research on ethicists poses to the field. We cannot say, "ah, but of course ethicists behave no differently in their personal lives, because all of their expertise is only relevant to the impersonal and universal". The two kinds of ethical thinking that Sigrist identifies are ends of a continuum that we all regularly traverse, rather than discrete patterns of thinking that are walled off from each other without mutual influence.

In my work and my personal life, I try to make a point of blending the personal with the universal and the everyday with the scholarly, rejecting any sharp distinction between academic and non-academic thinking. This is part of why I write a blog. This is part of the vision behind my recent book. I think Sigrist values this blending too, and means to be critiquing what he sees as its absence in mainstream Anglophone philosophical ethics. Sigrist has only drawn his lines too sharply, offering too simplified a view of the typical ethicist's ways of thinking; and he has mistaken me for an opponent rather than a fellow traveler.

Thursday, December 05, 2019

Self-Knowledge by Looking at Others

I've published quite a lot on people's poor self-knowledge of their own stream of experience (e.g. this and this), and also a bit on our often poor self-knowledge of our attitudes, traits, and moral character. I've increasingly become convinced that an important but relatively neglected source of self-knowledge derives from one's assessment of the outside world -- especially one's assessment of other people.

I am unaware of empirical evidence of the effectiveness of the sort of thing I have in mind (I welcome suggestions!), but here's the intuitive case.

When I'm feeling grumpy, for example, that grumpiness is almost invisible to me. In fact, to say that grumpiness is a feeling doesn't quite get things right: There isn't, I suspect, a way that it feels from the inside to be in a grumpy mood. Grumpiness, rather, is a disposition to respond to the world in a certain way; and one can have that disposition while one feels, inside, rather neutral or even happy.

When I come home from work, stepping through the front door, I usually feel (I think) neutral to positive. Then I see my wife Pauline and daughter Kate -- and how I evaluate them reveals whether in fact I came through that door grumpy. Suppose the first thing out of Pauline's mouth when I come through the door is, "Hi, Honey! Where did you leave the keys for the van?" I could see this as an annoying way of being greeted, I could take it neutrally in stride, or I could appreciate how Pauline is still juggling chores even as I come home ready to relax. As I strode through that door, I was already disposed to react one way or another to stimuli that might or might not be interpreted as annoying; but that mood-constituting disposition didn't reveal itself until I actually encountered my family. Casual introspection of my feelings as I approached the front door might not have revealed this disposition to me in any reliable way.

Even after I react grumpily or not, I tend to lack self-knowledge. If I react with annoyance to a small request, my first instinct is to turn the blame outward: It is the request that is annoying. That's just a fact about the world! I either ignore my mood or blame Pauline for it. My annoyed reaction seems to me, in the moment, to be the appropriate response to the objective annoyingness of the situation.

Another example: Generally, on my ten-minute drive into work, I listen to classic rock or alternative rock. Some mornings, every song seems trite and bad, and I cycle through the stations disappointed that there's nothing good to listen to. Other mornings, I'm like "Whoa, this Billy Idol song is such a classic!" Only slowly have I learned that this probably says more about my mood than about the real quality of the songs that please or displease me. Introspectively, before I turn on the radio and notice this pattern of reactions, there's not much I can discover that clues me in to my mood. Maybe I could introspect better and find that mood in there somewhere, but over the years I've become convinced that my song assessment is a better mood thermometer, now that I've learned to think of it that way.

One more example: Elsewhere, I've suggested that probably the best way to discover whether one is a jerk is not by introspective reflection ("hm, how much of a jerk am I?") but rather by noticing whether one regularly sees the world through "jerk goggles". Everywhere you turn, are you surrounded by fools and losers, faceless schmoes, boring nonentities? Are you the only reasonable, competent, and interesting person to be found? If so....

As I was drafting this post yesterday, Pauline interrupted me to ask if I wanted to RSVP to a Christmas music singalong in a few weeks. Ugh! How utterly annoying I felt that interruption to be! And then my daughter's phone, plugged into the computer there, wouldn't stop buzzing with text messages. Grrr. Before those interruptions, I would probably have judged that I was in a middling-to-good mood, enjoying being in the flow of drafting out this post. Of course, as those interruptions happened, I thought of how suitable they were to the topic of this post (and indeed I drafted out this very paragraph in response). Now, a day later, my mood is better, and the whole thing strikes me as such a lovely coincidence!

If I sit too long at my desk at work, my energy level falls. Every couple of hours, I try to get up and stroll around campus a bit. Doing so, I can judge my mood by noticing others' faces. If everyone looks beautiful to me, but in a kind of distant, unapproachable way, I am feeling depressed or blue. Every wart or seeming flaw manifests a beautiful uniqueness that I will never know. (Does this match others' phenomenology of depression? Before having noticed this pattern in my reactions to people, I might not have thought this would be how depression feels.) If I am grumpy, others are annoying obstacles. If I am soaring high, others all look like potential friends.

My mood will change as I walk, my energy rising. By the time I loop back around to the Humanities and Social Sciences building, the crowds of students look different than they did when I first stepped out of my office. It seems like they have changed, but of course I'm the one who has changed.


[image source]