Wednesday, October 21, 2020

Black Students Are Increasingly Interested in Philosophy but Still Underrepresented among Graduating Majors (and Other Data on Race and the Philosophy Major)

by Eric Schwitzgebel, Morgan Thompson, and Eric Winsberg

Earlier this year, with the help of a grant from the APA, we obtained data from the Higher Education Research Institute (HERI) on intention to major in philosophy among first-year students in the U.S. In earlier posts, we analyzed these data by gender and sexual orientation. See those earlier posts for more details about the dataset and some methodological concerns.

The data on race are more complicated, partly because race categories change over time and partly because of sampling bias and non-response bias. However, after exploring the data from several different angles, we believe the data support the conclusion that since the year 2000, Black students have become increasingly interested in majoring in philosophy.

Across the U.S., Black people constitute about 13% of the general population. Black people are, however, underrepresented among students completing Bachelor's degrees (9% in the 2018-2019 academic year). In a previous analysis of completion rates using data from the National Center for Education Statistics (NCES), Eric S. found that from 1995-2016, the percentage of Philosophy Bachelor's degree recipients who are Black was about half of the percentage of Bachelor's degree recipients overall who are Black: In 1994-1995, 3% of Philosophy BA recipients were Black, compared to 7% of Bachelor's recipients overall; by 2015-2016, the percentages had risen to 5% in Philosophy and 10% overall.

The HERI database allows us to compare these graduation data with first-year intention to major. Also, since the HERI data run through Fall 2016 (average expected completion 2022), we can look at a slightly younger cohort. Furthermore, since NCES has recently released data for the 2018-2019 academic year, we can also update those data.

HERI First-Year Intention to Major by Race

From Fall 2000 through Fall 2016, HERI collected first-year intention to major from 4.9 million students. Race/ethnicity questions varied somewhat over time, but students were always able to respond with more than one race. HERI currently aggregates all past and present race/ethnicity questions into seven categories: American Indian, Asian, Black, Hispanic, White, Other, and Two or More Races/Ethnicities.

Overall, 0.35% of students expressed an intention to major in Philosophy. Although this is small, it is similar to the approximately 0.4-0.6% of students who completed majors in Philosophy during the period.

As usual, American Indian respondents were a tiny percentage of first-year students: 0.2-0.3% overall, and 0.0-0.4% of those intending to major in Philosophy.

Also in keeping with general trends, the percentage of students in the "Other" category was low and steady, both overall (1-2%) and in Philosophy (1-4%), and the percentage of White students steadily declined, both overall (from 74% in 2000 to 52% in 2016) and in Philosophy (from 76% to 48%). Notably, by 2016, White students might be slightly underrepresented among first-year students intending to major in Philosophy, in sharp contrast with the overrepresentation of White people among philosophy faculty.

The following charts show the percentage of students in the Asian, Black, Hispanic, and Two or More categories by year, both in Philosophy and overall. (Click the images to enlarge and clarify.)

As you can see from the images, the percentage of students entering college intending to major in Philosophy who identify as Asian, Hispanic, or multiracial approximately matches the overall racial percentages across all majors. Until recently, Asian students appear to have been a little underrepresented and multiracial students a little overrepresented in Philosophy, but both groups are now near proportionality.

The data for Black students are strikingly different. In the early 2000s the pattern is similar to the pattern in Bachelor's degree completions: Black students were about 7% of students overall but only about 3% in philosophy. By the end of the HERI data, however, Black students are approximately proportionately represented in philosophy: 8-10% of students overall and 8-12% in Philosophy. (The sharp spike in 2016, however, might be noise: With 494 total respondents, the estimate is accurate only to within about +/- 2%.)

The numbers aren't large, but they are large enough to rule out random variation as the primary explanation. The most recent four years of data, for example, contain data from 201 Black students intending to major in Philosophy among 2176 Philosophy majors overall, and the early years have even larger overall sample sizes.
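The sampling caveats above reflect the usual margin of error for an estimated proportion. Here's a minimal back-of-the-envelope sketch, assuming a share of roughly 10% (an illustrative figure, not HERI's exact one):

```python
import math

def moe_95(p: float, n: int) -> float:
    """95% margin of error for a sample proportion (normal approximation)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# The Fall 2016 HERI philosophy sample had 494 respondents. Assuming the
# Black share is around 10% (illustrative), the margin of error is roughly
# 2-3 percentage points -- large enough that a one-year spike could be noise.
print(round(moe_95(0.10, 494), 3))
```

With the larger pooled samples (e.g., 2176 intended Philosophy majors over four years), the margin shrinks enough to rule out chance as the main explanation for the trend.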

Unfortunately, there are reasons to be concerned about the representativeness of these data. Let's not too quickly uncork the champagne.

Although the HERI surveys drew huge numbers of respondents in the early 2000s (376,777 in the year 2000 alone), by 2016, the number of respondents had declined by more than half, to 121,297. Furthermore, as HERI clarifies in its methods sections, school participation rates vary substantially by school type. For example, high-status private undergraduate universities are more likely to participate in the HERI data collection than are lower-status state universities. The possible unrepresentativeness of institutions included in the HERI dataset is a serious potential issue.

We have two approaches to address this methodological concern.

First, in recognition of these sampling issues, HERI supplies researchers with a variable called "student weight" which functions to overweight students from groups they anticipate to be undersampled and to underweight students from oversampled groups. Although we are somewhat hesitant about the use and interpretability of this variable, it represents HERI's best attempt to compensate for sampling and nonresponse issues, so we reanalyzed the data using the student weight correction. The results were not materially different. For example, over the past four years (2013-2016), with the weighting correction, Black students were 10% of first-year students intending to major in Philosophy as well as 10% of first-year students overall, compared to 5% Philosophy and 11% overall in the first four years of the dataset (2000-2003). The correction thus supports the general finding that over the period Black students went from being seriously underrepresented among first-year students intending to major in philosophy to being about proportionately represented.

Second, we compared the HERI data with NCES data on Bachelor's degree completions. The NCES IPEDS dataset is reported by administrators at accredited schools, constituting an approximately complete record of Bachelor's degree recipients in the U.S. and largely avoiding systematic sampling and nonresponse distortions.

NCES Completed Bachelor's Degrees by Race

We looked at NCES data on Bachelor's degree completions from all U.S. institutions from 2011 (representing the 2010-2011 academic year) to 2019 (the 2018-2019 academic year). Since average time to degree is about five to six years, this corresponds to HERI entering classes from 2005 through 2013. (We start in 2010-2011, since NCES changed its racial classification categories in 2010.)

NCES uses the following race/ethnicity categories: American Indian or Alaska Native, Asian, Black or African American, Hispanic or Latino, Native Hawaiian or Other Pacific Islander, White, two or more races, race/ethnicity unknown, and nonresident alien. Unknown and nonresident were 2-8% of students throughout the period, both in Philosophy and overall. American Indian or Alaska Native were about 0.5% throughout the period, both in Philosophy and overall. Native Hawaiian or Other Pacific Islander were about 0.2%, both in Philosophy and overall.

As has been generally observed, and in keeping with the HERI data, the percentage of graduates identifying as White fell considerably over the period: from 70% to 59% in Philosophy and from 65% to 57% overall. White students are still slightly overrepresented in Philosophy (59% vs. 57% overall, two-proportion z = 3.0, p = .003).

As with the HERI data, we start with graphs for Asian, Hispanic, and Multiracial:

These trends fit fairly well with the HERI data: Hispanic students are proportionately represented among Philosophy majors throughout the period. Asian students are perhaps a bit underrepresented. Multiracial students are somewhat overrepresented, as indeed they are in years 2005-2013 in HERI data (bearing in mind the six-year offset).

What about Black students?

Hm! Still very much underrepresented in Philosophy!

The contrast with HERI is substantial. In the HERI database, by fall 2013, Black students were already proportionately represented among entering students intending to major in Philosophy. Those students should be graduating around 2019, when NCES finds Black students still substantially underrepresented: 9.4% of the graduating student body vs. only 5.4% of graduating Philosophy majors.

Could the Black students' underrepresentation among Philosophy graduates in 2019 be statistical chance? Nope, not with numbers of this magnitude (435/8075 vs 189519/2014860, z = 15.9, p < .001).
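For readers who want to check the arithmetic, the z statistic above can be reproduced with a standard two-proportion test using an unpooled standard error:

```python
import math

def two_prop_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-proportion z statistic with unpooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p2 - p1) / se

# 2019 NCES figures from the post: 435 of 8075 Philosophy graduates
# vs. 189,519 of 2,014,860 graduates overall identified as Black.
z = two_prop_z(435, 8075, 189519, 2014860)
print(round(z, 1))  # → 15.9
```

A z of this size corresponds to a p-value far below .001, so chance is not a plausible explanation.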

Could it be HERI's unbalanced selection of participating schools? We don't think this explanation quite works. Here's what we tried. We looked at HERI's participating schools from two sample years, 2004 and 2016, and matched those schools with schools in the NCES database. This allowed us to look at the NCES racial data from just the HERI-participating schools. We could thus assess whether there's some unusual pattern in the HERI schools. We found some distortion, but no pattern large enough to explain the difference between the HERI and NCES data. Black students were maybe not quite as underrepresented among Philosophy graduates in HERI-participating schools as in other schools, but they were still substantially underrepresented. [note 1]

Nevertheless it's worth noting and perhaps celebrating one point of consensus between the NCES and HERI data: Over the past several years, Black students have become increasingly interested in Philosophy, both upon entering their first year of undergraduate study and upon completing the major. This conclusion is supported by both the HERI and NCES data. It might not look like much in the NCES graph, but the increase from 4.0% to 5.4% over nine years is arguably a meaningful move in the direction of proportionality (and yes, statistically significant at p < .001).

It remains to be seen if the very recent spike in first-year Black students' intention to major in Philosophy, which seems so encouraging in HERI, shows itself in a continued rise in completions of the degree in the 2020s. If not, at least two possible explanations suggest themselves: (1.) Some undetected sampling bias is messing up the HERI data, or more worryingly (2.) Black students are disproportionately more likely to leave or less likely to enter the Philosophy major in the period between first-year intention to major and actual completion of the degree.


Note 1: Among schools participating in HERI in 2004, Black students rose from 4.1% of graduating Philosophy majors in 2011 to 5.4% in 2019, compared to 7.8% to 8.0% overall. Among HERI 2016 schools, they rose from 4.5% to 6.7% in Philosophy, compared to declining from 8.9% to 8.4% overall. Among schools included in both the 2004 and 2016 HERI sample, Black students rose from 4.7% to 6.8% in Philosophy, compared to declining from 8.8% to 8.4% overall.

Friday, October 16, 2020

Best Philosophical Science Fiction in the History of All Earth

I'm putting together a new anthology for MIT Press, with the working title Philosophy and Sci-Fi, with co-editors Rich Horton and Helen De Cruz. Our original title idea was Best Philosophical Science Fiction in the History of All Earth, and that title, though a mouthful, captures our ambitions.

We would love suggestions of your favorite philosophically themed science fiction stories.

We hope that the anthology will be attractive to SF fans who love idea-based stories like Ursula K. Le Guin's "The Ones Who Walk Away from Omelas" and Daniel Keyes's "Flowers for Algernon".

We also hope that people who teach classes with titles like Philosophy and Science Fiction will want to use it in their teaching. Send us your syllabi! We're curious what stories philosophers have been successfully using in their teaching, for possible inclusion.

We want the anthology to contain great classics by authors like Le Guin, Asimov, and Chiang -- but we also hope to reach deeper into history and reach outside the English-language tradition to an extent that other anthologies typically do not. So if you know of relevant older science fiction and science fiction from outside the dominant US-UK tradition, we'd especially love to hear your suggestions.

We'll also of course write a cool intro on the relationship of science fiction and philosophy, and we'll introduce every story with a brief discussion of its place in the history of SF and a little relevant philosophical background.

I'm really looking forward to putting this antho together. Yay!

[image source]

Wednesday, October 14, 2020

Ethics Without the Costs: Two Tropes in Fiction

I've been binge-watching Doctor Who, and two days ago I finished Susanna Clarke's new novel Piranesi. I love them both! Doctor Who is among my favorite TV series ever, and the images of Piranesi will probably linger with me for the rest of my life. But. But! I'm a philosopher and a critic and I'm never satisfied and I've spent 52 years cultivating a fussy intellect. What good is a fussy intellect if not to find fault in everything?

Clarke is wonderful in part because she bends and defies genre expectations and fiction-writing expectations (also in Jonathan Strange and Mr Norrell). She writes slowly and atmospherically -- with prose so beautiful that you don't mind that nothing is happening. After a while the almost-nothingness becomes its own kind of plot and tension -- surely something will happen soon! Ah... ah... ah... as if you're on the edge of a sneeze. The tiniest thing becomes major plot news. On page 52, Piranesi is given a pair of shoes!

With as unconventional a writer as Clarke, you might expect the climax of Piranesi to be... SPOILER ALERT.

Spoiler alert. Stop reading.

[image: The Round Tower by Giovanni Piranesi]

You haven't stopped. Last chance! I'll wait if you want to get the book and come back in a week or two. I'm patient. I'm not even a real person, just some html code on a Google server somewhere. Time is meaningless.


The climax of Piranesi is surprisingly ordinary. It fits perfectly into an overused and predictable trope. I'll call it Hero Tries to Save Bad Guy's Life but Fails Because Bad Guy Is Just Too Vicious. (Suggestions for a shorter name welcomed.) Behind the trope is a sugary ethical fantasy that we probably shouldn't indulge too regularly, lest we mistake it for the world.

In Clarke's version, Bad Guy is caught in a flood, trying to shoot Hero who is taking cover with Helper behind some statues high up above water level. Near Bad Guy is an empty inflatable boat. If Bad Guy gets into the boat, he will live. If not, he risks drowning. Hero tries to save Bad Guy. Hero shouts "Get in the boat! Get in the boat before it's too late!" Bad Guy ignores the shouts (or maybe doesn't hear amid the noise) and shoots again at Hero. Hero shouts again to get in the boat, the tide is almost here! Again, Bad Guy ignores or fails to hear, instead leaving the boat behind to aim more shots at innocent Hero. The flood arrives, killing Bad Guy.

So familiar! Bad Guy is trying to kill Hero. In the attempt, Bad Guy falls into danger. Hero admirably finds the compassion and courage to try to rescue Bad Guy. In some variants, Bad Guy is temporarily rescued. Regardless, Bad Guy viciously continues the unjust attack on Hero, confirming Bad Guy's deep wickedness. This final, especially unjust attack precipitates Bad Guy's death despite Hero's rescue efforts, delivering a happy ending of moral clarity in which Bad Guy's evil directly causes Bad Guy's death while Hero needn't do anything as unseemly as intentionally permit that death.

Examples of this trope abound. Consider the end of Disney's version of Beauty and the Beast. Gaston (Bad Guy) is chasing Beast (Hero) across a roof. Gaston slips. Beast reaches down, grabs Gaston, and helps him up, momentarily saving his enemy's life. Instead of abandoning the quarrel in gratitude, Gaston stabs again at Beast, loses his balance, and falls to his death.

The appeal of this trope is part of the broader appeal, I think -- which we see throughout literature and film -- of ethics (or seeming ethics) without costs. Hero gets to do the (seemingly) ethical thing of forgiving and helping even the Bad Guy, revealing Hero's amazing courage, compassion, and strength of character. But Hero pays no price for this choice. Bad Guy dies anyway, in a final act that confirms his irredeemable viciousness, and the world becomes safe. It's win-win. Hero gets to embody a certain version of deontological ethics or virtue ethics and then also gets the good consequences too.

Yes, sometimes things do work out that way! I confess to being tired of how often, in fiction and movies, they work out that way. Good luck engineers a happy ending with no real costs or tragic losses (for Hero at least; we can feel faint sadness at bad outcomes for some minor characters). Escapist fantasy, I suppose. Which is a fine thing. We all need it sometimes. But from the scintillatingly unconventional Clarke I guess I'd been hoping for something a little less middle-of-the-trope.


Doctor Who makes no pretense to be anything other than escapist fantasy, right in the middle of every trope. Sometimes it's so absurdly on trope as to verge on parody. At the end of every timer is the destruction of (at least) the entire Earth, and salvation never arrives until the last hair of a fraction of a second. Every whiff of trouble, with utter predictability, means real trouble, with some preposterously destructive Bad Guy behind it.

Last night, we watched "Kill the Moon" (2nd era, s8:e7). (Spoilers coming.) The Moon, it turns out, is an egg about to hatch a giant bird! The heroes face a moral choice: Kill the innocent bird embryo an hour (a minute, a split second...) before it hatches, to prevent the risk to the Earth that would presumably be entailed by having an unpredictable Moon-sized hatchling nearby, or let the innocent thing hatch and accept the risk to Earth.

Thus looms another familiar trope of escapist ethics. Let's call this one Save One Innocent Life at Risk to the World but Whew the World Is Fine Anyway (SOILARWWWIFA for short). Of course the heroes choose not to kill the innocent creature. And of course it works out fine. In the denouement, the minor character whose unfortunate plot role was voicing the consequentialist argument to nuke the egg is forced to humbly thank the wiser others for staying her hand. It's foolish to favor killing one innocent life, even to protect the world! The dour consequentialist is shamed, and the moral order of the universe is affirmed. Emotionally, we learn that there's never any real conflict between saving the innocent creature before you and protecting the world.

I loved it. Of course I loved it. If the egg had birthed a monster that intentionally or accidentally destroyed humanity, or if they'd nuked the egg and the beautiful corpse had drifted sadly but safely away -- well, my family and I wouldn't have been reassured with the comfortable thought that ethical criteria never seriously conflict. You can have your compassion, your respect for every individual's life, your softness and humaneness, and all of your good consequences too, in one yummy package. We can retire for the night remembering the beautiful Moon-bird that, in our courageous and compassionate wisdom, we chose to let live.

ETA 09:11

The acronyms are intentionally absurd, but here's pronunciation advice anyway. For "HTSBGLBFBBGIJTV", just say "hits biggle" then give up. For "SOILARWWWIFA", "solar wife" might serve.

Thursday, October 08, 2020

The Philosophy Major Is Back on the Rise (Kind of)

Back in 2017, I noticed that the number of students completing the Philosophy major in the U.S. had plummeted from a high of 9431 in 2013 (0.54% of all Bachelor's recipients) to 7305 in 2016 (0.39% of Bachelor's recipients) -- a shocking 23% decline in just four years, despite Bachelor's degree completions across all majors rising overall. This appeared to be part of a general decline in the humanities. English, History, and foreign languages showed similar declines in the same period. This week I've been rummaging through three years' more data, and the Philosophy major is back on the rise -- kind of!

All data are from the National Center for Education Statistics' excellent IPEDS database, confined to "U.S. only", Philosophy major category 38.01, and combining first and second majors.

Here's the breakdown year by year for philosophy since 2011 (i.e., the 2010-2011 academic year):

2011: 9301 Philosophy Bachelor's recipients (0.57% of Bachelor's recipients overall)
2012: 9371 (0.55%)
2013: 9431 (0.54%)
2014: 8826 (0.48%)
2015: 8190 (0.44%)
2016: 7498 (0.39%)
2017: 7577 (0.39%)
2018: 7670 (0.39%)
2019: 8075 (0.40%)

Given the large numbers involved, the recent recovery cannot be due to statistical chance.

Of course, the absolute numbers look better than the percentages, but the percentages have at least been stable for four consecutive years now.

Meanwhile, the other big humanities majors continue to decline, as shown in this graph:

[click to enlarge and clarify]

For a longer-term perspective we can look back to the 2000-2001 academic year (the earliest year in which information for second majors is available). The percentage of Bachelor's degree recipients completing a major in Philosophy fell from 0.48% in 2001 to 0.40% in 2019. The percentage completing in English fell from 4.5% in 2001 to 2.1% in 2019; in History, from 2.2% to 1.3%; and in foreign languages, from 2.2% to 1.1%.

Wednesday, September 30, 2020

Some Good News, Some Bad News in the APA’s State of the Profession Report

by Carolyn Dicey Jennings and Eric Schwitzgebel

[cross-posted at The Blog of the APA and Daily Nous]

We were recently provided with a report from the APA that recounts work by Debra Nails and John Davenport to collect, organize, and analyze available data on the discipline over the past 50 years, including data from the Philosophy Documentation Center, the National Center for Education Statistics, and the APA itself. We are grateful for the efforts of Nails and Davenport in creating this important report on the state of the profession. As colleagues on the Data Task Force, we have some insider knowledge of how challenging this task was, and how much time it required between 2016 and now. In reviewing the report, a few threads stood out: good news, bad news, and supporting news. Let’s start with the good.

Contingent Faculty

While some use the language of “adjunct” or “part-time” faculty, we follow the report in using “contingent,” since it is possible for adjunct and part-time positions to be permanent ones. The national issue of increasing contingent labor in academia has come up many times at the APA Blog and Daily Nous, and there was recently an APA session dedicated to the topic. As one report puts the problem:

“For many part-time faculty, contingent employment goes hand-in-hand with being marginalized within the faculty. It is not uncommon for part-time faculty to learn which, if any, classes they are teaching just weeks or days before a semester begins. Their access to orientation, professional development, administrative and technology support, office space, and accommodations for meeting with students typically is limited, unclear, or inconsistent. Moreover, part-time faculty have infrequent opportunities to interact with peers about teaching and learning. Perhaps most concerning, they rarely are included in important campus discussions about the kinds of change needed to improve student learning, academic progress, and college completion. Thus, institutions’ interactions with part-time faculty result in a profound incongruity: Colleges depend on part-time faculty to educate more than half of their students, yet they do not fully embrace these faculty members. Because of this disconnect, contingency can have consequences that negatively affect student engagement and learning.”

So far this sounds like bad news, but we want to be sure that we do not overlook the real issues contingent faculty face in communicating the good. Namely, that the percentage of contingent faculty in philosophy is low and stable. As Nails and Davenport explain, around 73% of all faculty nationwide are in contingent or “unranked” positions, whereas only 22% of philosophy faculty had contingent positions in 2017. Moreover, there is a lower percentage of contingent philosophy faculty now than there was in the 1960s. While the APA membership numbers have suggested this for some time (with around 20% of its members reporting contingent status), a reasonable concern about that estimate was that contingent faculty might be underrepresented among APA members. This new report suggests that the numbers just are low in our discipline. That’s good news, under the plausible assumption that it is best for the discipline if a large majority of faculty are tenured or tenure track.

[click to enlarge and clarify]

While we are heartened by these findings, we see a couple of reasons to stay vigilant about the status of contingent faculty.

First, we don’t know the reason that there are fewer contingent faculty in philosophy. It may be, for example, that philosophers are less likely to teach the types of courses that are typically offered to contingent faculty, such as general education and writing courses. In that case, this wouldn’t be a reason to celebrate philosophy’s success on the issue.

Second, the report raises the possibility that contingent positions have recently been replacing assistant professor positions: “Since 1987, there has been a steady increase in the number of faculty hired outside the tenure system compared to entry-level positions inside it.” We note only that the numbers show that the ratio of assistant professor positions to contingent positions has shifted from about 1:1 in 1987 to about 4:3 in 2017, representing a gain of about 500 contingent positions while the number of assistant professor positions remains about the same (see “a” in Figure 5).

[click to enlarge and clarify]

It is unclear what we should conclude from this, since the overall ratio of contingent to non-contingent faculty hasn't really changed: there are also nearly 500 extra professor and associate professor positions since 1987 ("b" in Figure 5). This could be due to faculty staying in the profession for longer than in past decades, but it could be for some other reason. Zooming in on the difference between 2007 and 2017, Nails and Davenport note that "the drop in assistant professors and rise in associate professors may indicate a decline in entry-level hires since 2007. Universities that hired new faculty into contingent positions in the wake of the Great Recession have not yet made tenure lines available to those who, under normal circumstances, would have been hired as assistant professors." But here, too, the additional numbers of those in associate and professor positions could explain the difference ("c" in Figure 5). It may be, for example, that those in more recent years are achieving promotion faster than in past years, leaving fewer people at the assistant rank relative to ranked positions, overall. So it is unclear what to take from these data, but we may want to be cautious, given the possibility Nails and Davenport raise.

Alright, how about some bad news?


The bad news is that philosophy is represented at about 100 fewer institutions in 2017 than in 1967 (1669 colleges and universities in 1967 and 1552 in 2017). This appears to represent a decline of the discipline in academia that has been the subject of numerous blog posts.

Surprisingly, the report particularly notes a decline in philosophy at (non-Catholic) religious institutions, both at the undergraduate and graduate level. Whereas around 16% of all public institutions offer no philosophy degree, this is true of 27% of non-Catholic religious institutions (but only 11% of Catholic colleges and universities). We don’t know the root of this decline of philosophy in religious institutions. It might be due to the especially atheistic culture of philosophy and its writings, or due to such institutions having comparatively stronger religious studies or theology programs competing for majors, or due to the relatively left-wing politics of many academic philosophers.

In addition, a striking 78% of historically Black colleges and universities (HBCU) offer no philosophy degree. Given the historical racism in philosophy, it seems likely that this is also connected to cultural issues. It would be in the interest of philosophy to further explore the matter. (Interested readers might start with this interview with Brandon Hogan at Howard University, an HBCU.)

We did note one reason for optimism with the overall numbers: while the number of institutions offering a degree in philosophy has declined, the number of faculty at those institutions has increased, from around 6k in 1967 to around 9k in 2017. One can see how this played out at most institutions through the median number of faculty: whereas the median number of faculty for programs offering a PhD in philosophy was around 13 in 1967, it was around 19 in 2017. Similarly, those offering a Master's went from 6 to 11, those offering Bachelor's went from 3 to 5, and those offering courses only went from 1 to 2 (Figures 6a-b and 7a-b).

[click to enlarge and clarify]

The report also provided some numbers that support other findings, which we called “supporting news” above. We focus here on the supporting news regarding gender diversity. (The authors of the report were unable to explore race/ethnicity, disability, LBGTQ status, or other aspects of diversity.)

Gender Diversity

The APA now collects some demographic data from its members, including gender, race/ethnicity, LGBT status, and disability status. Among the 1874 APA members who reported gender, 505 (27%) answered “female”, 1363 (73%) answered “male”, and 6 (<1%) answered “something else.” Other recent research has suggested that women constitute about 30% of recent philosophy PhDs and new assistant professors in the U.S., about 20% of full professors, and about 25% of philosophy faculty overall (plus or minus a few percentage points). However, most of this previous research is either a decade out of date or is limited to possibly unrepresentative samples, such as APA member respondents, faculty at PhD-granting programs, or recent PhDs.

The current report finds generally similar numbers, in a larger and more representative sample (all faculty in the Directory of American Philosophers from 2017). Overall, 26% of philosophy faculty were women, including 34% of assistant professors and 21% of full professors. (Associate professors and contingent faculty are intermediate at 28% and 26%, respectively.)

We note that the authors of the report relied on the DAP’s binary gender classifications of faculty, which were generally reported by department heads or other department staff. And where faculty gender was not specified, the report’s authors searched websites and CVs for gender designations. Thus, the data do not include non-binary gender and some classification errors are possible.

The tendency for women to be a smaller percentage of full professors than assistant professors could reflect either a cohort effect, a tendency for women to advance more slowly up the ranks than men, or a tendency for women to exit the profession at higher rates than men. On the possibility of a cohort effect, since professors often teach into their 70s, the lower percentage of women among full professors might to some extent reflect the fact that in the 1970s and 1980s, only 17% and 22%, respectively, of philosophy PhDs in the U.S. were awarded to women. By the 1990s, it was 27%, which is closer to the numbers for recent graduates. However, cohort effects might not be a complete explanation, since 21% might be a bit on the low side for a group that should reflect a mix of people who earned their PhDs from approximately the 1970s through the early 2000s. In the NSF data, women received 23% of all philosophy PhDs from the year 1973 through 2003—the approximate pool for full professors in 2017.

The report also explores gender by institution type, highest degree offered, and region. One notable result is that philosophy departments offering at least Bachelor’s degrees had on average higher percentages of women than departments not offering Bachelor’s degrees. Faculty were 27% women in departments offering the PhD, 28% in departments offering a Master’s but no PhD, 27% in departments offering a Bachelor’s degree, 22% in departments offering a minor but no Bachelor’s, 23% in departments offering an Associate degree but no minor, and 20% in departments offering philosophy courses but no degrees. (A chi-square test shows that this is unlikely to be due to chance: 2x6 chi-square = 24.3, p < .001, lowest expected cell count = 104.) We are unsure what would explain this phenomenon.
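For readers curious how such a 2x6 test runs, here is a minimal sketch in Python. The cell counts below are hypothetical, chosen only to roughly match the reported percentages and the roughly 9,000-faculty total; the report's actual counts aren't reproduced here, so the resulting statistic (about 27) only approximates the reported 24.3.

```python
# Hypothetical counts of women and men faculty by department type:
# PhD, MA-only, BA-only, minor-only, Associate-only, courses-only.
women = [648, 308, 810, 154, 345, 100]
men = [1752, 792, 2190, 546, 1155, 400]

total = sum(women) + sum(men)
p_women = sum(women) / total  # overall proportion of women

# Pearson chi-square statistic: sum of (observed - expected)^2 / expected.
chi2 = 0.0
for w, m in zip(women, men):
    col = w + m
    exp_w = col * p_women  # expected women under independence
    exp_m = col - exp_w    # expected men
    chi2 += (w - exp_w) ** 2 / exp_w + (m - exp_m) ** 2 / exp_m

# The critical value for p = .001 with df = (2-1)*(6-1) = 5 is about 20.5,
# so a statistic in the mid-20s is indeed very unlikely by chance.
print(round(chi2, 1))
```

With these made-up counts the statistic comfortably exceeds the p = .001 critical value, matching the report's qualitative conclusion.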

Thursday, September 24, 2020

The Copernican Principle of Consciousness

According to the Copernican Principle in cosmology, we should assume that we do not occupy a special or privileged place in the cosmos, such as its exact center. According to the Anthropic Principle, we should be unsurprised to discover that we occupy a cosmological position consistent with the existence of intelligent life. The Anthropic Principle is a partial exception to the Copernican Principle: Even if cosmic locations capable of supporting intelligent life are extremely rare, and thus in a sense special, we shouldn't be surprised to discover that we are in such a location.

Now let's consider the following question: Is it surprising that Homo sapiens is a conscious species? On certain views of consciousness it would be surprising, and this surprisingness constitutes evidence against those views.

The views I have in mind are views on which conscious experience is radically separable from intelligent-seeming outward behavior. Views of this sort are associated with Ned Block and John Searle and more recently Susan Schneider -- though none of them commit to exactly the view I'll criticize today.

Let's stipulate the following: In our wide, maybe infinite, cosmos, living systems have evolved in a wide variety of different ways, with very different biological substrates. Maybe some life is carbon based and other life is not carbon based, and presumably carbon-based entities could take a variety of forms, some very unlike us. Let's stipulate also that some become sophisticated enough to form technological societies.

For concreteness, suppose that a thousand galaxies each host technological life for a thousand years. One hosts a technological society of thousand-tentacled supersquids whose cognitive processing proceeds by inference patterns of light in fiber-optic nerves. Another hosts a technological society of woolly-mammoth-like creatures whose cognition is implemented in humps containing a billion squirming ants. (For more detailed descriptions, see Section 1 of this paper.) Another hosts a technological society of high-pressure creatures who use liquid ammonia like blood. Etc.

Since these are technological societies, they engage in the types of complicated social coordination required to, say, land explorers on a moon. This will require structured communication: language, including, presumably, self-reports interpretable as reports of informational or representational states: "I remember that yesterday Xilzifa told me that the rocket crashed" or "I don't want to stop working yet, since I'm almost done figuring out this equation." (If advanced technology can arise without such communications, exclude such species from my postulated thousand.)

So then, we have a thousand societies like this, scattered across the universe. Now let's ask: Are the creatures conscious? Do they have streams of experience, like we do? Is there "something it's like" to be them?

Most science fiction stories seem to assume yes. I think that is also the answer we find intuitive. And yet, on certain philosophical views that I will call neurochauvinist, we should very much doubt that creatures so different from us are conscious. According to neurochauvinism, what's really special about us, which gives rise to conscious experience, is not our functional sophistication and complex patterns of environmental responsiveness but rather something about having brains like ours, with blood, and carbon, and neurons, and sodium channels, and acetylcholine, and all that.

Neurochauvinism can seem attractive when confronted with examples like Searle's Chinese Room or Block's China Brain -- complex systems designed to look from the outside like they are conscious and sophisticated (and which maybe implement computer-like instructions), but which are in fact basically just tricks. Part of the point of these examples is to challenge the common assumption that programmed robots, if they could someday be designed to behave like us, would be conscious. Consciousness, Block and Searle say, is not just a matter of having the right patterns of outward behavior, or even the right kinds of programmed internal, functional state transitions. Consciousness requires the specific biology of neurons -- or at least something in that direction. As Searle suggests, no arrangement of beer cans and wire, powered by windmills, could ever really have conscious experiences -- no matter how cleverly designed, no matter how sophisticated its behavior might seem when viewed from a distance. It's just not made of the right kind of stuff.

The neurochauvinist position as I am imagining it says this: We know that we are conscious. But those other aliens, made out of such different kinds of stuff, they're out of luck! Human biological neurons are what's special, and they don't have them. Although aliens of this sort might seem to be reporting on their mental states (remembering and wanting, in my example), really there is no more conscious experience there than there is behind the computer "memory" in your laptop or behind a non-player character in a computer game who begs you to save him from a dragon.

Now I don't think that Block or Searle are committed to such a strong view. Both allow that some hypothetical systems very different from us might be conscious, if they have the right kind of lower-level structures -- but they don't specify what exactly those structures must be or whether we should expect them to be rare or common in naturally-evolved aliens capable of sophisticated outward behavior.

So the neurochauvinist view is a somewhat extreme and unintuitive view. And yet philosophers and others do sometimes seem to say things close to it when they say that human beings are conscious not in virtue of their sophisticated behavior and environmental responsiveness but rather in virtue of the specifics of their underlying biological structures.


Back to the Copernican Principle. If we alone have real conscious experiences and the 999 other technologically sophisticated alien species do not, then we do occupy a special region in the universe: the only region with conscious experiences. We are, in a sense, super lucky! Of all the wild ways in which technological, linguistic, self-reporting creatures could evolve, we alone lucked into the neural basis of consciousness. Too bad for the others! Unlike us, they are as experientially blank as a computer or a stone.

If your theory of consciousness implies that Homo sapiens lucked into consciousness while all those other technological species missed out, it's got the same kind of weakness as does a theory that says, "yes, we're at the center of the universe, it just happened that way for no good reason, how strangely lucky for us!"

Now you could try to wiggle out of this by invoking the Anthropic Principle. You could say that we should be unsurprised to discover that we are in a region of the universe that supports consciousness, just like we should be unsurprised to discover that we aren't in any of the vast quantities of vacuum between the stars. The Anthropic Principle is sometimes framed in terms of "observers": We should expect to be in a region that can host observers. If only conscious entities count as observers, then it's unsurprising that we're conscious.

Now I think that the best understanding of "observer" for these purposes would be a functional or behavioral understanding that would include all technological alien species, but that seems like an argumentative quagmire, so let me respond to this concern in a different way.

Suppose that instead of a thousand technological species, there are a thousand and one: we who are conscious, 999 alien species without consciousness, and one other alien species with consciousness (they also lucked into neurons) who has secretly endured unobserved while observing all the other species from a distance. I will call this alien species the Unbiased Observers. They gaze with equanimity at the thousand others, evaluating them.

When this species casts its eye on Earth, will it see anything special? Anything that calls out, "Whoa, this planet is radically unlike the others!" As it looks at our language and our technology, will anything jump out that says here be consciousness while all the other linguistic and technological societies lack it? I see no reason to think so if we abide by the starting assumptions of neurochauvinism, that is, if we think that nonconscious entities could easily have sophisticated outward behavior and information processing similar to ours, and that what's really necessary for consciousness is not that but rather the low-level biological magic of neurons.

The Copernican Principle is then violated as follows: The Unbiased Observers should, if they understand the basis of consciousness, regard us as the one-in-a-thousand lucky species that chanced into consciousness. Even if the Unbiased Observers don't understand the basis of consciousness, it is still true that we are special relative to them -- sharing consciousness with them, alone among all the species in the universe that outwardly seem just as sophisticated and linguistic.

The Copernican Principle of Consciousness: Assume that there is no unexplained lucky relationship between our cognitive sophistication and our consciousness. Among all the actual or hypothetical species capable of sophisticated cognitive and linguistic behavior, it's not the case that we are among a small portion who also have conscious experiences.

[image source]

Thursday, September 17, 2020

Why Writing Philosophy Is Hard (and Why Every Historical Philosopher Focuses on the Wrong Things)

The number of true sentences is infinite. This is why writing philosophy is hard.

As if to prove my point to myself, I'm having some trouble choosing this next sentence.

With the exception perhaps of fiction, philosophy is the most topically wide open and diversely structured of writing forms. Literally every topic is available for philosophical inquiry. What rules and principles then guide how you write about that topic as a philosopher? Respect for truth is one guide -- but then again not always. Quality of argumentation is another -- but then again, philosophy is often less about presenting an argument than articulating a vision.

This issue arose acutely for me yesterday -- a day spent amid a flurry of invisible revisions (that is, making then reversing changes) on the book I'm drafting. What needs to be said explicitly? What can you pass over in silence? What can you assume the reader will accept without further support, and what requires defense or explanation? I find myself adding sentences of support or clarification, then later deleting them, then adding different ones, then expanding those -- then deleting the whole business, then deciding I really do want such-and-such part of it after all....

Here's a sentence from a paragraph I've been working on:

The experience of pain, for example, might be constituted by one biological process in us and a different biological process in a different species.

Is this something I can just say, and the reader will nod and move along? Or do I need to explain it? What exactly do I mean by an "experience of pain"? What is a "biological process"?  How much is built into the notion of "constituted"?  In philosophical writing -- unlike in most scientific writing -- phrases like this are very much open for challenge and inquiry. Indeed, the substance of philosophy often is just inquiring into issues of this sort and challenging the assumptions that lie in the background behind our casual use.

Suppose the meaning is clear enough: I don't need to explain it. I might still need to defend it. Although the sentence (variously interpreted) expresses majority opinion in philosophy of mind, not all philosophers agree. Indeed not all philosophers even agree that the external world exists. We can disagree about anything! It's perhaps the most special and obvious talent of philosophy as a discipline. (Wouldn't you agree?) For example, maybe species that are biologically sufficiently different (octopuses? snails?) don't really feel pain, and the species that do feel pain all have basically the same neural underpinnings? Or maybe there's no good understanding of "constitution" such that pains can be constituted by anything? Maybe the very idea of "consciousness" is broken and unscientific?

In philosophy, it seems, I can always reasonably choose to explain my terms and concepts more clearly (that's so central to the philosopher's task!), and I can always reasonably choose to defend my claims at greater length (since philosophers can challenge and doubt literally anything). My explanation will then in turn invoke new terms that might need explaining and my defense will rely on further claims that might need further defense. An infinite regress threatens -- not just an ordinary infinite regress, but a many-branching regress in which, I suspect, every true sentence could eventually become relevant in some way somewhere.

For this reason good philosophical writing requires careful attunement to your audience. When every term potentially requires clarification and every claim potentially requires defense, you need to make constant judgment calls about how much clarification and how much defense, in what dimensions and directions. To do this well, you need a good sense of your readers: what will make them prickle and what they'll be happy enough, in context, to let pass.

Students and outsiders to the discipline will rarely have a good sense of this. How could they? This is not because they are bad philosophers (though of course they might be) but because philosophical thought and writing is so open-textured.

Let me try to express this with an illustration.

Suppose, to simplify, that every idea has four (imagine only four!) respects in which it could reasonably be clarified or defended, and that each clarification or defense in turn admits four further clarifications and defenses. The structure of all possible ways to articulate your idea then looks like this:

[click to enlarge and clarify]

Of course you can't write that! So here's what you write:

[click to enlarge and clarify]

You go deep into clarification/defense 1b, skip 2 altogether, add a superficial remark on 3, deeply illuminate two aspects of 4a and a bit of 4c.

Unfortunately, the reader wanted a deep dive into two aspects of 2c and a little bit on 4:

[click to enlarge and clarify]

The reader finds your treatment of 1b and 4 tedious. Why are you spending so much time on that, when the issue that's really on their mind, what's really bugging them, is 2, especially 2c, especially these sub-ideas within 2c? 2c is the obvious objection! It's the heart of the matter, of course of course!

If you come from the same philosophical subculture as the reader -- if you're soaking in the same subliteratures, admiring the same great thinkers, feeling pulled by the same sets of issues -- then the shape of what you include and omit is much likelier to match the shape of what the reader feels you need to include (to have a good treatment) and omit (since they're not going to read the booksworths of material that could be written as subsections of basically any philosophy article).

This is the art of writing philosophy. It's a culturally specific knack, acquired mainly by immersion. It is so hard to do well! It's part of what makes philosophical work from other times and places often seem so wide of the mark, difficult to understand, and poorly argued.

Okay, I know what you're going to object now. (I think I know.) If all the above is true, how is it that we can appreciate philosophers as culturally distant as Plato and Zhuangzi? They certainly didn't write with us in mind!

Here are my two answers.

First, at least some historical figures played a role in shaping our sense of what needs and does not need clarification and defense, or (the more minor figures) were shaped by others in their era who also shaped us.

Second, and I think my stronger answer: This is why history of philosophy is creative and reconstructive. We reach toward them rather than the other way around. We allow ourselves to sink into their worldview where issue 2 is just taken for granted and where 4a is what really requires long, detailed development. And if 2 seems to us to require serious attention, we develop a speculative treatment of 2 on their behalf, piecing together charitably (maybe too charitably) what we think they would or must have thought about it.


If you enjoy my blog, check out my recent book: A Theory of Jerks and Other Philosophical Misadventures.

Thursday, September 10, 2020

Believing in Monsters: David Livingstone Smith on the Subhuman

The Nazis called Jews rats and lice.  White plantation owners called their Black slaves soulless animals.  Pundits in Myanmar call Rohingya Muslims beasts, dogs, and maggots.  Dehumanizing talk abounds in racist rhetoric worldwide.

What do people believe, typically, when they speak this way?

The easiest answers are wrong.  Literal interpretation is out: Nazis didn't believe that Jews literally fit biologically into the taxonomy of rodents.  For one thing, they treated rodents better.  For another, even the most racist Nazi taxonomy acknowledged Jews as some sort of lesser near-relative of the privileged race.  But neither is such talk just ordinary metaphor: It typically isn't merely a colorful way of saying Jews are dirty and bad and should be gotten rid of.  Beneath the talk is something more ontological -- a picture of the racialized group as fundamentally lesser.

David Livingstone Smith offers a fascinating account in his recent book On Inhumanity.  I like his account so much that I wish its central idea didn't conflict with pretty much everything that I've written about the nature of belief over the past 25 years.

Smith on Conflicting Beliefs and Seeing People as Monsters

According to Smith, the typical advocate of dehumanizing rhetoric has two contradictory beliefs.  They believe that the target group is fully human and simultaneously they believe that the target group is fully subhuman.

What is it to be human?  It is not, Smith argues, just to be a member of a scientifically defined species.  The "human" can be conceptualized more broadly than that (maybe including other members of the genus Homo) or more narrowly.  It is, Smith argues, a folk concept, combining politics with essentialist folk biology.  Other "humans" are those who share the ineradicable, fundamental essence of being "our kind" (p. 113).

To the Nazi, the Jew is literally subhuman in this sense.  The Jew lacks the fundamental essence that Nazi racial theorists believed they shared with others of their kind.  This is a theoretical belief, believed with the same passion and conviction as other politically charged theoretical beliefs.

At the same time, emotionally, perceptually, and pre-theoretically, Smith argues, the Nazi can't help but think of Jews as humans like them.  Moreover, their language shows it: In the next sentence, a Nazi might call Jews terrible people or a lesser type of human and might hold them morally responsible for their actions as though they are ordinary members of the moral community.  On Smith's view, Nazis also believe, in a less theoretical way, that Jews are human.

Suppose you're a Nazi looking at a Jew.  On the outside, the Jew looks human.  But on the inside, according to your theory, the Jew isn't really a human.  Let's assume that you also believe that Jews are malevolent and opposed to you.  Compare our conception of werewolves, vampires, and zombies.  Threateningly close to being human.  Malevolently defying the boundary between "us" and "them".  To the Nazified mind, Smith argues, the Jew is experienced as a monster no less than a werewolf is a monster -- a creature infiltrating our society, tricking the unwary, beneath the surface corrupt, and "metaphysically threatening" because it provokes contradictory beliefs in its humanity and nonhumanity.  Like a werewolf, vampire, or zombie, there might also be superficial differences on the outside that reinforce the creepy almost-humanness of the creature (compare the uncanny valley in robotics).

So far, that's Smith.  I hope I've been fair.  I find it an extremely interesting account.

On My View of Belief, Baldly Contradictory Beliefs Are Impossible

Here's my sticking point: What is it to believe something?  On my view, you don't really believe something unless you "walk the walk".  To believe some proposition P is to be disposed in general to act and react as if P is true.  Having a belief, on my view, is like having a personality trait: It's a pattern in your cognitive life or a matter of typically having a certain sort of posture toward the world.

What is it to believe, for example, that Black people and White people are equally moral and equally intelligent?  It is to generally be disposed to act and react to the world as if that is so.  It is partly to feel sincere when you say it is so.  But it's also not to be biased against Black applicants when hiring for a job that requires intelligence and not to expect the White person in a mixed-race group to be kinder and more trustworthy.  Unless this is your dispositional profile in general, you don't really and fully believe in the intellectual and moral equality of the races -- at best you are in what I call an "in-between" state, neither quite accurately describable as believing, nor quite accurately describable as failing to believe.

On this approach to belief, contradictory belief is impossible.  You cannot be simultaneously disposed in general to act as if P is the case and in general to act as if not-P is the case.  This makes as little sense as being simultaneously an extreme extravert and an extreme introvert.  The dispositions constitutive of the one (e.g., enjoying meeting new people at raucous parties) are exactly the opposite of the dispositions constitutive of the other (e.g., not enjoying meeting new people at raucous parties).  Of course, you can be extremely extraverted in some respects, or in some contexts, and extremely introverted in other respects or contexts.  That makes you a mixed case, not neatly classifiable as either overall.

The same is true, on my view, with racist and egalitarian beliefs.  You cannot simultaneously have an across-the-board egalitarian posture toward the world and an across-the-board racist posture.  You cannot fully believe both that all the races are equal and that your favorite race is superior.  Furthermore, in the same way that few people are fully 100% extravert or fully 100% introvert, few of us are 100% egalitarian in our posture toward the world or 100% bigoted.  We're all somewhere in the middle.

Conflicting Representations Are More Readily Acknowledged Than Contradictory Beliefs

As I was reading On Inhumanity, I was wondering how much Smith's commitment to contradictory beliefs matters.  Maybe Smith and I needn't disagree on substance.  Maybe Smith and I could agree that in some thin sense of believing, the Nazi has baldly contradictory "beliefs".

Here's something nearby that I can agree to: The Nazi has conflicting representations of Jews.  There's a theoretical and ideological representation of Jews as subhuman, and there are conflicting emotional, perceptual, and less-ideological representations of Jews as human.  This conflict of representations could be enough to generate the metaphysical threat and the anti-monster emotional reaction, regardless of what we say about "belief".

Smith is keen to convince people to recognize their own potential to fall into dehumanizing patterns of thought.  Me too.  In this matter, I suspect that my demanding view of belief will serve us better.  That would be one pragmatic reason to resolve the dispute about belief, if it's really just a terminological dispute, in my favor.

Here's my thought: It is, I think, much easier to see one's potential to host conflicting representations, on which one might act in inconsistent ways, than it is to see one's potential to host baldly contradictory beliefs -- especially if one of the two beliefs is one you are currently deeply committed to denying the truth of.

Smith's sympathetic, anti-racist readers might strain to imagine a future in which they fully believe that some disfavored race is literally subhuman.  That might seem like a truly radical change of view -- something only distantly imaginable after thorough indoctrination.  It is much easier, I suspect, to imagine that our minds could slowly fill with dehumanizing representations of another group, especially if we are repeatedly bombarded with such representations.  And maybe then, too, we can imagine our behavior becoming inconsistent -- sometimes driven by one type of representation, sometimes by another.

Full belief, I want to suggest, needn't be at the core of dehumanization, and an account of dehumanization needn't commit on how demanding "belief" is or whether baldly contradictory belief is possible.  Instead, all that's necessary might be confusion and conflict among one's representations or thoughts about a group, regardless of whether those representations rise to full belief.

Suppose then that your world fills you, over and over, with conflicting representations of another group, some humane and egalitarian, others monstrous and terrible.  Once the dehumanizing ones are in, they start to color your thoughts automatically, even without your explicit endorsement.  As they gain a foothold, you begin to wonder if there is some truth in them.  You become confused, wary, uncertain what to believe or how to act.  Your group comes into conflict with the other group.  You feel endangered -- maybe by famine or war.  Resisting evil is difficult when you're confused: Passive obedience is the more common reaction to doubt and conflicting thoughts.

Beneath your confusion, doubt, and fear lie two conflicting potentials.  If the situation turns one way -- a neighbor who did you some kindness knocks on your door asking for a night of shelter -- maybe you start down the path toward great humanity and courage.  If the situation turns another way, you might find yourself passive in the face of great evil, unsure what to make of it.  Maybe even, if the threat seems terrible enough and the situation pulls you along, drawing the worst from you, you might find yourself a perpetrator.  Acting on a dehumanizing ideology does not require fully believing that ideology.


On September 29, I'll be chatting (remotely) with David Livingstone Smith at Warwick's bookstore in San Diego.  I think the public is welcome.  I'll share a link when one is available.

Friday, September 04, 2020

Randomization and Causal Sparseness

Suppose I'm running a randomized study: Treatment group A gets the medicine; control group B gets a placebo; later, I test both groups for disease X.  I've randomized perfectly, it's double blind, there's perfect compliance, my disease measure is flawless, and no one drops out.  After the intervention, 40% of the treatment group have disease X and 80% of the control group do.  Statistics confirm that the difference is very unlikely to be chance (p < .001).  Yay!  Time for FDA approval!

There's an assumption behind the optimistic inference that I want to highlight.  I will call it the Causal Sparseness assumption.  This assumption is required for us to be justified in concluding that randomization has achieved what we want randomization to achieve.

So, what is randomization supposed to achieve?

Dice roll, please....

Randomization is supposed to achieve this: a balancing of other causal influences that might bear on the outcome.  Suppose that the treatment works only for women, but we the researchers don't know that.  Randomization helps ensure that approximately as many women are in treatment as in control.  Suppose that the treatment works twice as well for participants with genetic type ABCD.  Randomization should also balance that difference (even if we the researchers do no genetic testing and are completely oblivious to this influence).  Maybe the treatment works better if the medicine is taken after a meal.  Randomization (and blinding) should balance that too.

But here's the thing: Randomization only balances such influences in expectation.  Of course, it could end up, randomly, that substantially more women are in treatment than control.  It's just unlikely if the number of participants N is large enough.  If we had an N of 200 in each group, the odds are excellent that the number of women will be similar between the groups, though of course there remains a minuscule chance (6 x 10^-61 assuming 50% women) that 200 women are randomly assigned to treatment and none to control.

And here's the other thing: People (or any other experimental unit) have infinitely many properties.  For example: hair length (cf. Rubin 1974), dryness of skin, last name of their kindergarten teacher, days since they've eaten a burrito, nearness of Mars on their 4th birthday....

Combine these two things and this follows: For any finite N, there will be infinitely many properties that are not balanced between the groups after randomization -- just by chance.  If any of these properties are properties that need to be balanced for us to be warranted in concluding that the treatment had an effect, then we cannot be warranted in concluding that the treatment had an effect.

Let me restate in a less infinitary way: In order for randomization to warrant the conclusion that the intervention had an effect, N must be large enough to ensure balance of all other non-ignorable causes or moderators that might have a non-trivial influence on the outcome.  If there are 200 possible causes or moderators to be balanced, for example, then we need sufficient N to balance all 200.

Treating all other possible and actual causes as "noise" is one way to deal with this.  This is just to take everything that's unmeasured and make one giant variable out of it.  Suppose that there are 200 unmeasured causal influences that actually do have an effect.  Unless N is huge, some will be unbalanced after randomization.  But it might not matter, since we ought to expect them to be unbalanced in a balanced way!  A, B, and C are unbalanced in a way that favors a larger effect in the treatment condition; D, E, and F are unbalanced in a way that favors a larger effect in the control condition.  Overall it just becomes approximately balanced noise.  It would be unusual if all of the unbalanced factors A-F happened to favor a larger effect in the treatment condition.

That helps the situation, for sure.  But it doesn't eliminate the problem.  To see why, consider an outcome with many plausible causes, a treatment that's unlikely to actually have an effect, and a low-N study that barely passes the significance threshold.

Here's my study: I'm interested in whether silently thinking "vote" while reading through a list of registered voters increases the likelihood that the targets will vote.  It's easy to randomize!  One hundred get the think-vote treatment and another one hundred are in a control condition in which I instead silently think "float".  I preregister the study as a one-tailed two-proportion test in which that's the only hypothesis: no p-hacking, no multiple comparisons.  Come election day, in the think-vote condition 60 people vote and in the control condition only 48 vote (p = .04)!  That's a pretty sizable effect for such a small intervention.  Let's hire a bunch of volunteers?
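For what it's worth, the arithmetic here checks out; a minimal sketch of the one-tailed two-proportion z-test (normal approximation, pooled variance, no continuity correction):

```python
from math import sqrt, erf

def one_tailed_two_prop_p(x1, n1, x2, n2):
    """One-tailed p-value for H1: proportion 1 > proportion 2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # standard normal survival function via the error function
    return 0.5 * (1 - erf(z / sqrt(2)))

# 60/100 voters in the think-vote group vs 48/100 in control
print(round(one_tailed_two_prop_p(60, 100, 48, 100), 3))  # ≈ 0.044
```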

Suppose also that there are at least 40 variables that plausibly influence voting rate: age, gender, income, political party, past voting history....  The odds are good that at least one of these variables will be unequally distributed after randomization in a way that favors higher voting rates in the treatment condition.  And -- as the example is designed to suggest -- it's surely more plausible, despite the preregistration, to think that that unequally distributed factor better explains the different voting rates between the groups than the treatment does.  (This point obviously lends itself to Bayesian analysis.)
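The "at least one unbalanced covariate" point yields to a back-of-the-envelope calculation (the per-covariate probability of 0.025 is my illustrative choice, roughly a one-sided imbalance beyond the 2-sigma mark, not a figure from the study above): with 40 independent covariates, the chance that at least one is noticeably unbalanced in the treatment-favoring direction is substantial.

```python
def p_at_least_one(n_covariates=40, p_each=0.025):
    """Chance that at least one of n independent covariates shows a
    noticeable imbalance favoring treatment, if each does so with
    probability p_each."""
    return 1 - (1 - p_each) ** n_covariates

print(round(p_at_least_one(), 2))  # ≈ 0.64
```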

We can now generalize back, if we like, to the infinite case: If there are infinitely many possible causal factors that we ought to be confident are balanced before accepting the experimental conclusion, then no finite N will suffice.  No finite N can ensure that they are all balanced after randomization.

We need an assumption here, which I'm calling Causal Sparseness.  (Others might have given this assumption a different name.  I welcome pointers.)  It can be thought of as either a knowability assumption or a simplicity assumption: We can know, before running our study, that there are few enough potentially unbalanced causes of the outcome that, if our treatment gives a significant result, the effectiveness of the treatment is a better explanation than one of those unbalanced causes.  The world is not dense with plausible alternative causes.

As the think-vote example shows, the plausibility of the Causal Sparseness assumption varies with the plausibility of the treatment and the plausibility that there are many other important causal factors that might be unbalanced.  Assessing this plausibility is a matter of theoretical argument and verbal justification.  

Making the Causal Sparseness assumption more plausible is one important reason we normally try to make the treatment and control conditions as similar as possible.  (Otherwise, why not just trust randomness and leave the rest to a single representation of "noise"?)  The plausibility of Causal Sparseness cannot be assessed purely mechanically through formal methods.  It requires a theory-grounded assessment in every randomized experiment.

[image source]

Thursday, August 27, 2020

What is "Validity" in Social Science? Validity As a Property of Inferences vs of Claims

If you want to annoy your psychology and social science friends, I have just the trick!  Gather four of them together and ask them to explain exactly what validity is.  Then step back and watch them descend into confusion and contradiction.  Bring snacks.

We use the term all the time, with a truly bewildering array of modifiers: internal validity, construct validity, content validity, external validity, logical validity, statistical conclusion validity, discriminant validity, convergent validity, face validity, criterion validity....  Is there one thing, validity in general, which undergirds all of these uses?  And if so, what does it amount to?  Or is "validity" more of a family resemblance concept?  Are all true statements in some sense valid?  Or is validity more specific than that -- perhaps a matter of appropriate application of method?  Can a study or a conclusion or a method or an instrument be valid even if it's entirely mistaken, as long as proper techniques have been employed?  Oh, and wait, is validity really a property of studies and conclusions and methods and instruments?  They seem so different and to have such different criteria of success!

[image: A Defence of the Validity of the English Ordinations]

I've found surprisingly few general treatments of validity in the social sciences which articulate the concept with the kind of rigor and consistency that would satisfy an analytic philosopher.  One of the best and most influential recent attempts is Shadish, Cook, and Campbell 2002.  I'm going to poke at their treatment with one question in mind: What is validity a property of?

Shadish, Cook, and Campbell begin with a seemingly clear commitment: validity is a property of inferences:

We use the term validity to refer to the approximate truth of an inference.[1]  When we say something is valid, we make a judgment about the extent to which relevant evidence supports the inference as being true or correct (p. 34).

In the next paragraph, they emphasize again that validity is specifically a property of inferences:

Validity is a property of inferences.  It is not a property of designs or methods, for the same design may contribute to more or less valid inferences under different circumstances....  So it is wrong to say that a randomized experiment is internally valid or has internal validity -- although we may occasionally speak that way for convenience (p. 34).

Characterizing validity as a property of inferences resonates with the use of "validity" in formal logic, where it is also generally treated as a property of deductive inferences (well, more accurately, a property of deductive arguments -- but close enough, if we treat inferences as psychological instantiations of arguments).  In formal logic, an inference or argument is deductively valid if and only if, in virtue of its form, it's impossible for the conclusion of the inference to be false if the premises of the inference are true.  [Okay, fine, maybe it's not that simple, but let's not go there today.]

Consider, for example, modus ponens, the inference form in which "P" and "If P, then Q" serve as premises, and "Q" serves as the conclusion.  (P and Q are propositions.)  Modus ponens is normally viewed as a valid form of inference because under the assumption that the two premises are true, the conclusion must be true.  If it's true that Socrates is a man and also true that If Socrates is a man, then Socrates is mortal, then it must also be true that Socrates is mortal.

Logicians normally distinguish validity from soundness: An inference is sound if and only if the inference is valid and the premises are true.  An inference can of course be valid without being sound, for example: (P1.) I am wearing three hats.  (P2.) If I am wearing three hats, I am a famous actor.  (C.) Therefore, I am a famous actor.  That's a perfectly valid inference to a perfectly false conclusion (thanks to at least one false premise).

Inferences are not true or false.  They are valid or invalid.  What is true or false are propositions: the premises and the conclusion.  Got it?  Good!  Lovely!  Now let's go back for a closer look at Shadish et al.  This time let's not forget footnote 1.

We use the term validity to refer to the approximate truth of an inference.[1]  When we say something is valid, we make a judgment about the extent to which relevant evidence supports the inference as being true or correct.

[1] We might use the terms knowledge claim or proposition in place of inference here, the former being observable embodiments of inferences.  There are differences implied by each of these terms, but we treat them interchangeably.

Okay, now wait.  Is validity a property of an inference or is it a property of a claim or proposition?  An inference is one thing and a claim is another!  Shadish et al., despite emphasizing that validity is a property of inferences, confusingly add that they will treat "inference" and "knowledge claim" interchangeably.  But an inference is not a knowledge claim.  An inference is a process of moving from the hypothesized truth of one or more claims to a conclusion which, if all goes well, is true if the claims are true.

Could we maybe just say that validity is a property of an inference that has a true conclusion at the end, as a result of employing good methods?  (This would make "validity" in Shadish et al.’s sense closer to "soundness" in the logician’s sense.)  Or, differently but relatedly, could we say that validity is a property that a claim has when it is both true and the result of methodologically good inference (and where the truth and inference quality are non-accidentally related)?  Or is validity about justification rather than truth -- "the extent to which relevant evidence supports the inference as being true or correct" (italics added)?  Justification can of course diverge from the truth, since sometimes evidence strongly supports a proposition that turns out to be false in the end.  Or should we go back to process here, as suggested by the term "correct", since presumably an inference can be correct, in the sense that it is the right inference to make given the evidence, without its conclusion being true?

Oy vey.  I wish I could say that Shadish et al. clarify this all later and use their terms consistently throughout their influential book, but that's not so -- as indeed they hint in their remark, quoted above, about sometimes speaking loosely as though experiments (and not just inferences or claims) can be valid.  Their book is a lovely guide to empirical methods, but by the standards of analytic philosophy their definition of validity is a mess.

But this post isn't just about Shadish et al. (despite their 47,473 citations as of today).  It's about the treatment of validity in psychology and the social sciences in general.  Shadish et al. exemplify a conceptual looseness I see almost everywhere.

As a first-pass corrective on this looseness let me propose the following:

Psychologists' and social scientists' claims about validity, in my judgment, make the most sense on the whole and are simplest to interpret if we treat validity as fundamentally a property of claims or propositions rather than as a property of inferences (or methods or instruments or experiments).  A causal generalization, for example, of the form that events of type A cause events of type B in conditions C is "valid" if and only if events of type A do cause events of type B in conditions C.  To say that a psychological instrument (such as an IQ test) is "valid" is fundamentally a matter of saying that the instrument measures what it claims to measure: Validity is a matter of the truth of that claim.  A study is valid if the claims of which it is composed are true (both its claims about its conclusions and its claims about the manner in which its conclusions are supported).  A measure has "face validity" if superficially it looks like the claims that result from applying that measure will be true claims.  Two measures have "discriminant validity" if the following claim is true: They in fact measure different underlying phenomena.

Validity, in the psychologists' and social scientists' sense, is best conceptualized as a property that belongs to claims: the property those claims have when they are true.  Attributions of validity to ontological entities other than claims, such as measures and studies, can all be reinterpreted as commitments to the truth of certain types of claims that are implicitly or explicitly embodied in the application of measures, the publication of studies, the making of inferences, etc.  (That good method has been used to arrive at the claims, I regard as a cancelable implicature.)

Why go this direction?  If we treat "validity" as a matter of the quality of the inference or the degree of justification of the conclusion regardless of whether the conclusion is in fact true, then we will have a plethora of valid inferences and valid conclusions, and by extension valid measures, valid instruments, and valid causal models that are completely mistaken, because science is hard and what you're justified in concluding is often not so.  But that's not how social scientists generally talk: A valid measure is one that is right, one that works, one that measures what it's supposed to measure, not one that we are (perhaps falsely) justified in thinking is right.

I diagnose the confusion as arising from three sources: First, widespread sloppy conceptual practice that uses "valid" loosely as a general term of praise.  Second, a tendency among those who do want to rigorize to notice that the philosophers' logical notion of validity applies to arguments or inferences, and consequently some corresponding pressure to think of it that way in the social sciences too, despite the dominant grain of social science usage running a different direction.  Third, a confusing liberality both about the types of validity and the ontological objects that can be said to have validity, which makes it hard to see the simple core underlying idea behind it all: that validity is nothing but a fancy word for truth.

Thursday, August 20, 2020

Philosophy That Closes vs. Philosophy That Opens

Topic X, you might think, admits three viable philosophical positions, A, B, and C.  Since this is philosophy, though, probably you're wrong!  You could be wrong in two different ways: A, B, and C might not all be viable.  Alternatively, some position other than A, B, and C might be viable.  Either way, the claim "The viable options are A, B, and C" is false.

Philosophy that closes aims to avoid the first type of error.  It torpedoes bad positions to better converge on the one correct view of Topic X.  Philosophy that opens aims to avoid the second type of error.  It enlivens previously neglected or underappreciated positions, expanding rather than contracting our sense of the possibilities.

Both types of philosophy are valuable, but philosophy that opens can seem dialectically weaker.  "This is true and that is false!" rings in the mind, in books, and in journal articles much better than "Hey, consider this neglected possibility that might be true."

What do I mean by "viable"?  Something like this: A philosophical position is viable if a typical good reasoner in our philosophical community, informed of the relevant arguments, ought to conclude that it might well be correct.  A remote chance of correctness isn't enough (maybe there's a remote chance that I'm a brain in a vat).  But a viable position needn't be the likeliest one: Several positions might be viable, some more plausible than others.

The viable is of course vague-boundaried and disputable.  The disputability of viability is, in fact, central to how philosophy works.  Philosophers constantly negotiate the boundaries of the viable by aiming to open up or close off various possibilities.

Consider the metaphysics of consciousness.  Most 21st century Anglophone philosophers regard physicalism as a viable option: Consciousness is ultimately a matter of how we are physically configured.  Within physicalism, most or many would probably regard both functionalism (which focuses on abstract organizational structure) and biological accounts (which focus on the specific makeup of the organism and maybe its evolutionary history) as viable.  Maybe you have a preferred position; but you can see how a reasonable interlocutor might arrive at a different conclusion.

But is substance dualism viable -- the idea that we have immaterial souls, irreducible to anything purely physical?  Some philosophers (a distinct minority) favor substance dualism.  Of course, those philosophers find it viable.  Others might disfavor substance dualism while regarding it as still a viable possibility.  Still others think we're warranted in dismissing it entirely.

What about idealism -- the idea that only minds exist, and everything that we think of as material is in fact somehow a configuration of our (and/or God's) minds?  Or panpsychism, the view that consciousness is ubiquitous in the universe, even in simple entities like electrons?  Or consciousness eliminativism, the view that there really are no conscious experiences of any sort at all?

It's easy to read people as closers.  Arguing in favor of Position A seems to implicitly signal that you regard Positions B and C as demonstrably wrong, unless you wave your arms around canceling that implicature.  Even then, readers will often forget your caveats and interpret you as convinced that only A could be true.

I do think that people arguing in defense of commonly accepted positions are often aiming to close off other options.  Dialectically, this makes sense.  There's not much need for the community to hear that Popular Position A is viable.  More interesting and informative would be to learn that Popular Position A is in fact the one correct view that we ought finally to settle on.

However, philosophers arguing for unpopular positions might set their sights lower: not to convince others that substance dualism, or panpsychism, or idealism (or group consciousness, or that we have ethical obligations to plants, or that non-existence is better than existence) is in fact the one correct position that we ought to settle on, but only that the position is viable, possessing important but neglected philosophical virtues (and its competitors perhaps possessing troubling vices), and that we ought to treat it as a live option.  You can argue for this even if you think the underappreciated option is probably not true.  This is the philosophy of opening.

It is rather rare for philosophers to argue that a possibility is viable and ought not be dismissed while explicitly acknowledging that they regard other possibilities as more likely.  But why is it rare?  Why shouldn't we expect that we are at least as likely to make philosophical errors of omission and close-mindedness as to make philosophical errors of over-inclusion and excessive open-mindedness?  Why shouldn't we focus at least as much on exploring the philosophical possibilities we could be wrongly neglecting as we focus on narrowing down to the one correct view?

What do you love about philosophy?  Some people love the feeling that they have arrived at the one correct view on a topic of profound importance.  Others love the beauty of grand systems.  Still others love the clever back-and-forth of philosophical combat.  But what I love most about philosophy is none of these.  I love philosophy best when it opens my mind – when it reveals ways the world could be, possible approaches to life, lenses through which I might see and value the world, which I might not otherwise have considered.

For me, the greatest philosophical thrill is realizing that something I’d long taken for granted might not be true, that some “obvious” apparent truth is in fact doubtable – not just abstractly and hypothetically doubtable, but really, seriously, in-my-gut doubtable.  The ground shifts beneath me.  Where I’d thought there would be floor, there is instead open space I hadn’t previously seen.  My mind spins in new, unfamiliar directions.  I wonder, and wondrousness seems to coat the world itself.  The world expands, bigger with possibility, more complex, more unfathomable, and weird.



Disjunctive Metaphysics (May 27, 2011)

The Crazyist Metaphysics of Mind (Australasian Journal of Philosophy, 2014)

The Philosophical Overton Window (Jan 20, 2018)

[image source]