Wednesday, September 30, 2020

Some Good News, Some Bad News in the APA’s State of the Profession Report

by Carolyn Dicey Jennings and Eric Schwitzgebel

[cross-posted at The Blog of the APA and Daily Nous]

We were recently provided with a report from the APA that recounts work by Debra Nails and John Davenport to collect, organize, and analyze available data on the discipline over the past 50 years, including data from the Philosophy Documentation Center, the National Center for Education Statistics, and the APA itself. We are grateful for the efforts of Nails and Davenport in creating this important report on the state of the profession. As colleagues on the Data Task Force, we have some insider knowledge of how challenging this task was, and how much time it required between 2016 and now. In reviewing the report, a few threads stood out: good news, bad news, and supporting news. Let’s start with the good.

Contingent Faculty

While some use the language of “adjunct” or “part-time” faculty, we follow the report in using “contingent,” since it is possible for adjunct and part-time positions to be permanent ones. The national issue of increasing contingent labor in academia has come up many times at the APA Blog and Daily Nous, and there was recently an APA session dedicated to the topic. As one report puts the problem:

“For many part-time faculty, contingent employment goes hand-in-hand with being marginalized within the faculty. It is not uncommon for part-time faculty to learn which, if any, classes they are teaching just weeks or days before a semester begins. Their access to orientation, professional development, administrative and technology support, office space, and accommodations for meeting with students typically is limited, unclear, or inconsistent. Moreover, part-time faculty have infrequent opportunities to interact with peers about teaching and learning. Perhaps most concerning, they rarely are included in important campus discussions about the kinds of change needed to improve student learning, academic progress, and college completion. Thus, institutions’ interactions with part-time faculty result in a profound incongruity: Colleges depend on part-time faculty to educate more than half of their students, yet they do not fully embrace these faculty members. Because of this disconnect, contingency can have consequences that negatively affect student engagement and learning.”

So far this sounds like bad news, but we want to be sure that, in communicating the good news, we do not overlook the real issues contingent faculty face. The good news is that the percentage of contingent faculty in philosophy is low and stable. As Nails and Davenport explain, around 73% of all faculty nationwide are in contingent or “unranked” positions, whereas only 22% of philosophy faculty held contingent positions in 2017. Moreover, there is a lower percentage of contingent philosophy faculty now than there was in the 1960s. While the APA membership numbers have suggested this for some time (with around 20% of its members reporting contingent status), a reasonable concern about that estimate was that contingent faculty might be underrepresented among APA members. This new report suggests that the numbers just are low in our discipline. That’s good news, under the plausible assumption that it is best for the discipline if a large majority of faculty are tenured or tenure track.

[click to enlarge and clarify]

While we are heartened by these findings, we see a couple of reasons to stay vigilant about the status of contingent faculty.

First, we don’t know the reason that there are fewer contingent faculty in philosophy. It may be, for example, that philosophers are less likely to teach the types of courses that are typically offered to contingent faculty, such as general education and writing courses. In that case, this wouldn’t be a reason to celebrate philosophy’s success on the issue.

Second, the report raises the possibility that contingent positions have recently been replacing assistant professor positions: “Since 1987, there has been a steady increase in the number of faculty hired outside the tenure system compared to entry-level positions inside it.” We note only that the numbers show that the ratio of contingent positions to assistant professor positions has shifted from about 1:1 in 1987 to about 4:3 in 2017, representing a gain of about 500 contingent positions while the number of assistant professor positions has remained about the same (see “a” in Figure 5).
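For readers who like to see the arithmetic, here is a quick sanity check in Python. The counts are round hypothetical figures, not the report’s actual data; they are chosen only to match the pattern described above, with assistant-professor numbers holding steady while contingent positions gain about 500:

```python
from fractions import Fraction

# Hypothetical round counts, NOT the report's actual figures:
# assistant-professor positions hold steady while contingent
# positions gain about 500 between 1987 and 2017.
assistant_1987, contingent_1987 = 1500, 1500
assistant_2017, contingent_2017 = 1500, 2000

print(Fraction(contingent_1987, assistant_1987))  # 1   (i.e., 1:1)
print(Fraction(contingent_2017, assistant_2017))  # 4/3 (i.e., 4:3)
```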

[click to enlarge and clarify]

It is unclear what we should conclude from this, since the overall ratio of contingent to non-contingent faculty hasn’t really changed: there are also nearly 500 extra professor and associate professor positions since 1987 (“b” in Figure 5). This could be due to faculty staying in the profession longer than in past decades, but it could be for some other reason. Zooming in on the difference between 2007 and 2017, Nails and Davenport note that “the drop in assistant professors and rise in associate professors may indicate a decline in entry-level hires since 2007. Universities that hired new faculty into contingent positions in the wake of the Great Recession have not yet made tenure lines available to those who, under normal circumstances, would have been hired as assistant professors.” But here, too, the additional numbers of those in associate and professor positions could explain the difference (“c” in Figure 5). It may be, for example, that faculty in more recent years are achieving promotion faster than in past years, leaving fewer people at the assistant rank relative to ranked positions overall. So it is unclear what to take from these data, but we may want to be cautious, given the possibility Nails and Davenport raise.

Alright, how about some bad news?


The bad news is that philosophy is represented at about 100 fewer institutions in 2017 than in 1967 (1669 colleges and universities in 1967 and 1552 in 2017). This appears to represent a decline of the discipline in academia that has been the subject of numerous blog posts.

Surprisingly, the report particularly notes a decline in philosophy at (non-Catholic) religious institutions, both at the undergraduate and graduate level. Whereas around 16% of all public institutions offer no philosophy degree, this is true of 27% of non-Catholic religious institutions (but only 11% of Catholic colleges and universities). We don’t know the root of this decline of philosophy in religious institutions. It might be due to the especially atheistic culture of philosophy and its writings, or due to such institutions having comparatively stronger religious studies or theology programs competing for majors, or due to the relatively left-wing politics of many academic philosophers.

In addition, a striking 78% of historically Black colleges and universities (HBCUs) offer no philosophy degree. Given the historical racism in philosophy, it seems likely that this is also connected to cultural issues. It would be in the interest of philosophy to further explore the matter. (Interested readers might start with this interview with Brandon Hogan at Howard University, an HBCU.)

We did note one reason for optimism with the overall numbers: while the number of institutions with a degree in philosophy has declined, the number of faculty at these institutions has increased, from around 6k in 1967 to around 9k in 2017. One can see how this played out at most institutions through the median number of faculty: whereas the median number of faculty for programs offering a PhD in philosophy was around 13 in 1967, it was around 19 in 2017. Similarly, those offering a Master’s went from 6 to 11, those offering a Bachelor’s went from 3 to 5, and those offering courses only went from 1 to 2 (Figures 6a-b and 7a-b).

[click to enlarge and clarify]

The report also provided some numbers that support other findings, which we called “supporting news” above. We focus here on the supporting news regarding gender diversity. (The authors of the report were unable to explore race/ethnicity, disability, LGBTQ status, or other aspects of diversity.)

Gender Diversity

The APA now collects some demographic data from its members, including gender, race/ethnicity, LGBT status, and disability status. Among the 1874 APA members who reported gender, 505 (27%) answered “female”, 1363 (73%) answered “male”, and 6 (<1%) answered “something else.” Other recent research has suggested that women constitute about 30% of recent philosophy PhDs and new assistant professors in the U.S., about 20% of full professors, and about 25% of philosophy faculty overall (plus or minus a few percentage points). However, most of this previous research is either a decade out of date or is limited to possibly unrepresentative samples, such as APA member respondents, faculty at PhD-granting programs, or recent PhDs.

The current report finds generally similar numbers, in a larger and more representative sample (all faculty in the Directory of American Philosophers from 2017). Overall, 26% of philosophy faculty were women, including 34% of assistant professors and 21% of full professors. (Associate professors and contingent faculty are intermediate at 28% and 26%, respectively.)

We note that the authors of the report relied on the DAP’s binary gender classifications of faculty, which were generally reported by department heads or other department staff. And where faculty gender was not specified, the report’s authors searched websites and CVs for gender designations. Thus, the data do not include non-binary gender and some classification errors are possible.

The tendency for women to be a smaller percentage of full professors than assistant professors could reflect either a cohort effect, a tendency for women to advance more slowly up the ranks than men, or a tendency for women to exit the profession at higher rates than men. On the possibility of a cohort effect, since professors often teach into their 70s, the lower percentage of women among full professors might to some extent reflect the fact that in the 1970s and 1980s, 17% and 22%, respectively, of philosophy PhDs in the U.S. were awarded to women. By the 1990s, it was 27%, which is closer to the numbers for recent graduates. However, cohort effects might not be a complete explanation, since 21% might be a bit on the low side for a group that should reflect a mix of people who earned their PhDs from approximately the 1970s through the early 2000s. In the NSF data, women received 23% of all philosophy PhDs from 1973 through 2003—the approximate pool for full professors in 2017.

The report also explores gender by institution type, highest degree offered, and region. One notable result is that philosophy departments offering at least Bachelor’s degrees had on average higher percentages of women than departments not offering Bachelor’s degrees. Faculty were 27% women in departments offering the PhD, 28% in departments offering a Master’s but no PhD, 27% in departments offering a Bachelor’s degree, 22% in departments offering a minor but no Bachelor’s, 23% in departments offering an Associate degree but no minor, and 20% in departments offering philosophy courses but no degrees. (A chi-square test shows that this is unlikely to be due to chance: 2x6 chi-square = 24.3, p < .001, lowest expected cell count = 104.) We are unsure what would explain this phenomenon.
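For the statistically curious, the shape of that chi-square test can be illustrated with a short Python sketch. The counts below are hypothetical stand-ins, not the report’s data; only the form of the computation (a contingency table of gender by six department categories, compared against the counts expected if gender were independent of category) matches the analysis described above:

```python
# Pearson chi-square statistic for a contingency table, given as a
# list of rows -- here, (women, men) counts for each of six
# department categories.
def chi_square(table):
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of row and column.
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical (women, men) counts -- NOT the report's actual data,
# just an illustration of the method on a 6x2 table.
table = [(270, 730), (140, 360), (135, 365),
         (110, 390), (115, 385), (100, 400)]
print(round(chi_square(table), 1))
```

A statistic this size would then be compared against the chi-square distribution with (rows − 1) × (columns − 1) = 5 degrees of freedom to obtain the p-value.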

Thursday, September 24, 2020

The Copernican Principle of Consciousness

According to the Copernican Principle in cosmology, we should assume that we do not occupy a special or privileged place in the cosmos, such as its exact center. According to the Anthropic Principle, we should be unsurprised to discover that we occupy a cosmological position consistent with the existence of intelligent life. The Anthropic Principle is a partial exception to the Copernican Principle: Even if cosmic locations capable of supporting intelligent life are extremely rare, and thus in a sense special, we shouldn't be surprised to discover that we are in such a location.

Now let's consider the following question: Is it surprising that Homo sapiens is a conscious species? On certain views of consciousness it would be surprising, and this surprisingness constitutes evidence against those views.

The views I have in mind are views on which conscious experience is radically separable from intelligent-seeming outward behavior. Views of this sort are associated with Ned Block and John Searle, and more recently Susan Schneider -- though none of them commit to exactly the view I'll criticize today.

Let's stipulate the following: In our wide, maybe infinite, cosmos, living systems have evolved in a wide variety of different ways, with very different biological substrates. Maybe some life is carbon based and other life is not carbon based, and presumably carbon-based entities could take a variety of forms, some very unlike us. Let's stipulate also that some become sophisticated enough to form technological societies.

For concreteness, suppose that a thousand galaxies each host technological life for a thousand years. One hosts a technological society of thousand-tentacled supersquids whose cognitive processing proceeds by inference patterns of light in fiber-optic nerves. Another hosts a technological society of woolly-mammoth-like creatures whose cognition is implemented in humps containing a billion squirming ants. (For more detailed descriptions, see Section 1 of this paper.) Another hosts a technological society of high-pressure creatures who use liquid ammonia like blood. Etc.

Since these are technological societies, they engage in the types of complicated social coordination required to, say, land explorers on a moon. This will require structured communication: language, including, presumably, self-reports interpretable as reports of informational or representational states: "I remember that yesterday Xilzifa told me that the rocket crashed" or "I don't want to stop working yet, since I'm almost done figuring out this equation." (If advanced technology can arise without such communications, exclude such species from my postulated thousand.)

So then, we have a thousand societies like this, scattered across the universe. Now let's ask: Are the creatures conscious? Do they have streams of experience, like we do? Is there "something it's like" to be them?

Most science fiction stories seem to assume yes. I think that is also the answer we find intuitive. And yet, on certain philosophical views that I will call neurochauvinist, we should very much doubt that creatures so different from us are conscious. According to neurochauvinism, what's really special about us, which gives rise to conscious experience, is not our functional sophistication and complex patterns of environmental responsiveness but rather something about having brains like ours, with blood, and carbon, and neurons, and sodium channels, and acetylcholine, and all that.

Neurochauvinism can seem attractive when confronted with examples like Searle's Chinese Room or Block's China Brain -- complex systems designed to look from the outside like they are conscious and sophisticated (and which maybe implement computer-like instructions), but which are in fact basically just tricks. Part of the point of these examples is to challenge the common assumption that programmed robots, if they could someday be designed to behave like us, would be conscious. Consciousness, Block and Searle say, is not just a matter of having the right patterns of outward behavior, or even the right kinds of programmed internal, functional state transitions. Consciousness requires the specific biology of neurons -- or at least something in that direction. As Searle suggests, no arrangement of beer cans and wire, powered by windmills, could ever really have conscious experiences -- no matter how cleverly designed, no matter how sophisticated its behavior might seem when viewed from a distance. It's just not made of the right kind of stuff.

The neurochauvinist position as I am imagining it says this: We know that we are conscious. But those other aliens, made out of such different kinds of stuff, they're out of luck! Human biological neurons are what's special, and they don't have them. Although aliens of this sort might seem to be reporting on their mental states (remembering and wanting, in my example), really there is no more conscious experience there than there is behind the computer "memory" in your laptop or behind a non-player character in a computer game who begs you to save him from a dragon.

Now I don't think that Block or Searle are committed to such a strong view. Both allow that some hypothetical systems very different from us might be conscious, if they have the right kind of lower-level structures -- but they don't specify what exactly those structures must be or whether we should expect them to be rare or common in naturally-evolved aliens capable of sophisticated outward behavior.

So the neurochauvinist view is a somewhat extreme and unintuitive view. And yet philosophers and others do sometimes seem to say things close to it when they say that human beings are conscious not in virtue of their sophisticated behavior and environmental responsiveness but rather in virtue of the specifics of their underlying biological structures.


Back to the Copernican Principle. If we alone have real conscious experiences and the 999 other technologically sophisticated alien species do not, then we do occupy a special region in the universe: the only region with conscious experiences. We are, in a sense, super lucky! Of all the wild ways in which technological, linguistic, self-reporting creatures could evolve, we alone lucked into the neural basis of consciousness. Too bad for the others! Unlike us, they are as experientially blank as a computer or a stone.

If your theory of consciousness implies that Homo sapiens lucked into consciousness while all those other technological species missed out, it's got the same kind of weakness as does a theory that says, "yes, we're at the center of the universe, it just happened that way for no good reason, how strangely lucky for us!"

Now you could try to wiggle out of this by invoking the Anthropic Principle. You could say that we should be unsurprised to discover that we are in a region of the universe that supports consciousness, just like we should be unsurprised to discover that we aren't in any of the vast quantities of vacuum between the stars. The Anthropic Principle is sometimes framed in terms of "observers": We should expect to be in a region that can host observers. If only conscious entities count as observers, then it's unsurprising that we're conscious.

Now I think that the best understanding of "observer" for these purposes would be a functional or behavioral understanding that would include all technological alien species, but that seems like an argumentative quagmire, so let me respond to this concern in a different way.

Suppose that instead of a thousand technological species, there are a thousand and one: we who are conscious, 999 alien species without consciousness, and one other alien species with consciousness (they also lucked into neurons) which has secretly endured, unobserved, while observing all the other species from a distance. I will call this alien species the Unbiased Observers. They gaze with equanimity at the thousand others, evaluating them.

When this species casts its eye on Earth, will it see anything special? Anything that calls out, "Whoa, this planet is radically unlike the others!" As it looks at our language and our technology, will anything jump out that says "here be consciousness" while all the other linguistic and technological societies lack it? I see no reason to think so if we abide by the starting assumptions of neurochauvinism, that is, if we think that nonconscious entities could easily have sophisticated outward behavior and information processing similar to ours, and that what's really necessary for consciousness is not that but rather the low-level biological magic of neurons.

The Copernican Principle is then violated as follows: The Unbiased Observers should, if they understand the basis of consciousness, regard us as the one-in-a-thousand lucky species that chanced into consciousness. Even if the Unbiased Observers don't understand the basis of consciousness, it is still true that we are special relative to them -- sharing consciousness with them, alone among all the species in the universe that outwardly seem just as sophisticated and linguistic.

The Copernican Principle of Consciousness: Assume that there is no unexplained lucky relationship between our cognitive sophistication and our consciousness. Among all the actual or hypothetical species capable of sophisticated cognitive and linguistic behavior, it's not the case that we are among a small portion who also have conscious experiences.

[image source]

Thursday, September 17, 2020

Why Writing Philosophy Is Hard (and Why Every Historical Philosopher Focuses on the Wrong Things)

The number of true sentences is infinite. This is why writing philosophy is hard.

As if to prove my point to myself, I'm having some trouble choosing this next sentence.

With the exception perhaps of fiction, philosophy is the most topically wide open and diversely structured of writing forms. Literally every topic is available for philosophical inquiry. What rules and principles then guide how you write about that topic as a philosopher? Respect for truth is one guide -- but then again not always. Quality of argumentation is another -- but then again, philosophy is often less about presenting an argument than articulating a vision.

This issue arose acutely for me yesterday -- a day spent amid a flurry of invisible revisions (that is, making then reversing changes) on the book I'm drafting. What needs to be said explicitly? What can you pass over in silence? What can you assume the reader will accept without further support, and what requires defense or explanation? I find myself adding sentences of support or clarification, then later deleting them, then adding different ones, then expanding those -- then deleting the whole business, then deciding I really do want such-and-such part of it after all....

Here's a sentence from a paragraph I've been working on:

The experience of pain, for example, might be constituted by one biological process in us and a different biological process in a different species.

Is this something I can just say, and the reader will nod and move along? Or do I need to explain it? What exactly do I mean by an "experience of pain"? What is a "biological process"?  How much is built into the notion of "constituted"?  In philosophical writing -- unlike in most scientific writing -- phrases like this are very much open for challenge and inquiry. Indeed, the substance of philosophy often is just inquiring into issues of this sort and challenging the assumptions that lie in the background behind our casual use.

Suppose the meaning is clear enough: I don't need to explain it. I might still need to defend it. Although the sentence (variously interpreted) expresses majority opinion in philosophy of mind, not all philosophers agree. Indeed not all philosophers even agree that the external world exists. We can disagree about anything! It's perhaps the most special and obvious talent of philosophy as a discipline. (Wouldn't you agree?) For example, maybe species that are biologically sufficiently different (octopuses? snails?) don't really feel pain, and the species that do feel pain all have basically the same neural underpinnings? Or maybe there's no good understanding of "constitution" such that pains can be constituted by anything? Maybe the very idea of "consciousness" is broken and unscientific?

In philosophy, it seems, I can always reasonably choose to explain my terms and concepts more clearly (that's so central to the philosopher's task!), and I can always reasonably choose to defend my claims at greater length (since philosophers can challenge and doubt literally anything). My explanation will then in turn invoke new terms that might need explaining and my defense will rely on further claims that might need further defense. An infinite regress threatens -- not just an ordinary infinite regress, but a many-branching regress in which, I suspect, every true sentence could eventually become relevant in some way somewhere.

For this reason good philosophical writing requires careful attunement to your audience. When every term potentially requires clarification and every claim potentially requires defense, you need to make constant judgment calls about how much clarification and how much defense, in what dimensions and directions. To do this well, you need a good sense of your readers: what will make them prickle and what they'll be happy enough, in context, to let pass.

Students and outsiders to the discipline will rarely have a good sense of this. How could they? This is not because they are bad philosophers (though of course they might be) but because philosophical thought and writing is so open-textured.

Let me try to express this with an illustration.

Suppose, to simplify, that every idea has four (imagine only four!) respects in which it could reasonably be clarified or defended, and that each clarification or defense in turn admits four further clarifications and defenses. The structure of all possible ways to articulate your idea then looks like this:

[click to enlarge and clarify]
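If you like, the size of that tree can be made explicit with a tiny bit of arithmetic: with a branching factor of four, just four levels of clarification and defense already yield hundreds of nodes.

```python
# Count the clarification/defense nodes in a tree where every idea
# branches into four further clarifications or defenses, down to a
# given depth (the root idea itself is not counted).
def clarification_nodes(branching=4, depth=4):
    return sum(branching ** level for level in range(1, depth + 1))

print(clarification_nodes())  # 4 + 16 + 64 + 256 = 340
```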

Of course you can't write that! So here's what you write:

[click to enlarge and clarify]

You go deep into clarification/defense 1b, skip 2 altogether, add a superficial remark on 3, deeply illuminate two aspects of 4a and a bit of 4c.

Unfortunately, the reader wanted a deep dive into two aspects of 2c and a little bit on 4:

[click to enlarge and clarify]

The reader finds your treatment of 1b and 4 tedious. Why are you spending so much time on that, when the issue that's really on their mind, what's really bugging them, is 2, especially 2c, especially these sub-ideas within 2c? 2c is the obvious objection! It's the heart of the matter, of course of course!

If you come from the same philosophical subculture as the reader -- if you're soaking in the same subliteratures, admiring the same great thinkers, feeling pulled by the same sets of issues -- then the shape of what you include and omit is much likelier to match the shape of what the reader feels you need to include (to have a good treatment) and omit (since they're not going to read the booksworths of material that could be written as subsections of basically any philosophy article).

This is the art of writing philosophy. It's a culturally specific knack, acquired mainly by immersion. It is so hard to do well! It's part of what makes philosophical work from other times and places often seem so wide of the mark, difficult to understand, and poorly argued.

Okay, I know what you're going to object now. (I think I know.) If all the above is true, how is it that we can appreciate philosophers as culturally distant as Plato and Zhuangzi? They certainly didn't write with us in mind!

Here are my two answers.

First, at least some historical figures played a role in shaping our sense of what needs and does not need clarification and defense, or (the more minor figures) were shaped by others in their era who also shaped us.

Second, and I think my stronger answer: This is why history of philosophy is creative and reconstructive. We reach toward them rather than the other way around. We allow ourselves to sink into their worldview where issue 2 is just taken for granted and where 4a is what really requires long, detailed development. And if 2 seems to us to require serious attention, we develop a speculative treatment of 2 on their behalf, piecing together charitably (maybe too charitably) what we think they would or must have thought about it.


If you enjoy my blog, check out my recent book: A Theory of Jerks and Other Philosophical Misadventures.

Thursday, September 10, 2020

Believing in Monsters: David Livingstone Smith on the Subhuman

The Nazis called Jews rats and lice.  White plantation owners called their Black slaves soulless animals.  Pundits in Myanmar call Rohingya Muslims beasts, dogs, and maggots.  Dehumanizing talk abounds in racist rhetoric worldwide.

What do people believe, typically, when they speak this way?

The easiest answers are wrong.  Literal interpretation is out: Nazis didn't believe that Jews literally fit biologically into the taxonomy of rodents.  For one thing, they treated rodents better.  For another, even the most racist Nazi taxonomy acknowledged Jews as some sort of lesser near-relative of the privileged race.  But neither is such talk just ordinary metaphor: It typically isn't merely a colorful way of saying Jews are dirty and bad and should be gotten rid of.  Beneath the talk is something more ontological -- a picture of the racialized group as fundamentally lesser.

David Livingstone Smith offers a fascinating account in his recent book On Inhumanity. I like his account so much that I wish its central idea didn't conflict with pretty much everything that I've written about the nature of belief over the past 25 years.

Smith on Conflicting Beliefs and Seeing People as Monsters

According to Smith, the typical advocate of dehumanizing rhetoric has two contradictory beliefs.  They believe that the target group is fully human and simultaneously they believe that the target group is fully subhuman.

What is it to be human?  It is not, Smith argues, just to be a member of a scientifically defined species.  The "human" can be conceptualized more broadly than that (maybe including other members of the genus Homo) or more narrowly.  It is, Smith argues, a folk concept, combining politics with essentialist folk biology.  Other "humans" are those who share the ineradicable, fundamental essence of being "our kind" (p. 113).

To the Nazi, the Jew is literally subhuman in this sense.  The Jew lacks the fundamental essence that Nazi racial theorists believed they shared with others of their kind.  This is a theoretical belief, believed with the same passion and conviction as other politically charged theoretical beliefs.

At the same time, emotionally, perceptually, and pre-theoretically, Smith argues, the Nazi can't help but think of Jews as humans like them.  Moreover, their language shows it: In the next sentence, a Nazi might call Jews terrible people or a lesser type of human and might hold them morally responsible for their actions as though they are ordinary members of the moral community.  On Smith's view, Nazis also believe, in a less theoretical way, that Jews are human.

Suppose you're a Nazi looking at a Jew.  On the outside, the Jew looks human.  But on the inside, according to your theory, the Jew isn't really a human.  Let's assume that you also believe that Jews are malevolent and opposed to you.  Compare our conception of werewolves, vampires, and zombies.  Threateningly close to being human.  Malevolently defying the boundary between "us" and "them".  To the Nazified mind, Smith argues, the Jew is experienced as a monster no less than a werewolf is a monster -- a creature infiltrating our society, tricking the unwary, beneath the surface corrupt, and "metaphysically threatening" because it provokes contradictory beliefs in its humanity and nonhumanity.  Like a werewolf, vampire, or zombie, there might also be superficial differences on the outside that reinforce the creepy almost-humanness of the creature (compare the uncanny valley in robotics).

So far, that's Smith.  I hope I've been fair.  I find it an extremely interesting account.

On My View of Belief, Baldly Contradictory Beliefs Are Impossible

Here's my sticking point: What is it to believe something?  On my view, you don't really believe something unless you "walk the walk".  To believe some proposition P is to be disposed in general to act and react as if P is true.  Having a belief, on my view, is like having a personality trait: It's a pattern in your cognitive life or a matter of typically having a certain sort of posture toward the world.

What is it to believe, for example, that Black people and White people are equally moral and equally intelligent?  It is to generally be disposed to act and react to the world as if that is so.  It is partly to feel sincere when you say it is so.  But it's also not to be biased against Black applicants when hiring for a job that requires intelligence and not to expect the White person in a mixed-race group to be kinder and more trustworthy.  Unless this is your dispositional profile in general, you don't really and fully believe in the intellectual and moral equality of the races -- at best you are in what I call an "in-between" state, neither quite accurately describable as believing, nor quite accurately describable as failing to believe.

On this approach to belief, contradictory belief is impossible.  You cannot be simultaneously disposed in general to act as if P is the case and in general to act as if not-P is the case.  This makes as little sense as being simultaneously an extreme extravert and an extreme introvert.  The dispositions constitutive of the one (e.g., enjoying meeting new people at raucous parties) are exactly the opposite of the dispositions constitutive of the other (e.g., not enjoying meeting new people at raucous parties).  Of course, you can be extremely extraverted in some respects, or in some contexts, and extremely introverted in other respects or contexts.  That makes you a mixed case, not neatly classifiable as either overall.

The same is true, on my view, with racist and egalitarian beliefs.  You cannot simultaneously have an across-the-board egalitarian posture toward the world and an across-the-board racist posture.  You cannot fully believe both that all the races are equal and that your favorite race is superior.  Furthermore, in the same way that few people are fully 100% extravert or fully 100% introvert, few of us are 100% egalitarian in our posture toward the world or 100% bigoted.  We're all somewhere in the middle.

Conflicting Representations Are More Readily Acknowledged Than Contradictory Beliefs

As I was reading On Inhumanity, I was wondering how much Smith's commitment to contradictory beliefs matters.  Maybe Smith and I needn't disagree on substance.  Maybe Smith and I could agree that in some thin sense of believing, the Nazi has baldly contradictory "beliefs".

Here's something nearby that I can agree to: The Nazi has conflicting representations of Jews.  There's a theoretical and ideological representation of Jews as subhuman, and there are conflicting emotional, perceptual, and less-ideological representations of Jews as human.  This conflict of representations could be enough to generate the metaphysical threat and the anti-monster emotional reaction, regardless of what we say about "belief".

Smith is keen to convince people to recognize their own potential to fall into dehumanizing patterns of thought.  Me too.  In this matter, I suspect that my demanding view of belief will serve us better.  That would be one pragmatic reason to resolve the dispute about belief, if it's really just a terminological dispute, in my favor.

Here's my thought: It is, I think, much easier to see one's potential to host conflicting representations, on which one might act in inconsistent ways, than it is to see one's potential to host baldly contradictory beliefs -- especially if one of the two beliefs is one you are currently deeply committed to denying the truth of.

Smith's sympathetic, anti-racist readers might strain to imagine a future in which they fully believe that some disfavored race is literally subhuman.  That might seem like a truly radical change of view -- something only distantly imaginable after thorough indoctrination.  It is much easier, I suspect, to imagine that our minds could slowly fill with dehumanizing representations of another group, especially if we are repeatedly bombarded with such representations.  And maybe then, too, we can imagine our behavior becoming inconsistent -- sometimes driven by one type of representation, sometimes by another.

Full belief, I want to suggest, needn't be at the core of dehumanization, and an account of dehumanization needn't commit on how demanding "belief" is or whether baldly contradictory belief is possible.  Instead, all that's necessary might be confusion and conflict among one's representations or thoughts about a group, regardless of whether those representations rise to full belief.

Suppose then that your world fills you, over and over, with conflicting representations of another group, some humane and egalitarian, others monstrous and terrible.  Once the dehumanizing ones are in, they start to color your thoughts automatically, even without your explicit endorsement.  As they gain a foothold, you begin to wonder if there is some truth in them.  You become confused, wary, uncertain what to believe or how to act.  Your group comes into conflict with the other group.  You feel endangered -- maybe by famine or war.  Resisting evil is difficult when you're confused: Passive obedience is the more common reaction to doubt and conflicting thoughts.

Beneath your confusion, doubt, and fear lie two conflicting potentials.  If the situation turns one way -- a neighbor who did you some kindness knocks on your door asking for a night of shelter -- maybe you start down the path toward great humanity and courage.  If the situation turns another way, you might find yourself passive in the face of great evil, unsure what to make of it.  Maybe even, if the threat seems terrible enough and the situation pulls you along, drawing the worst from you, you might find yourself a perpetrator.  Acting on a dehumanizing ideology does not require fully believing that ideology.


On September 29, I'll be chatting (remotely) with David Livingstone Smith at Warwick's bookstore in San Diego.  I think the public is welcome.  I'll share a link when one is available.

Friday, September 04, 2020

Randomization and Causal Sparseness

Suppose I'm running a randomized study: Treatment group A gets the medicine; control group B gets a placebo; later, I test both groups for disease X.  I've randomized perfectly, it's double blind, there's perfect compliance, my disease measure is flawless, and no one drops out.  After the intervention, 40% of the treatment group have disease X and 80% of the control group do.  Statistics confirm that the difference is very unlikely to be chance (p < .001).  Yay!  Time for FDA approval!

There's an assumption behind the optimistic inference that I want to highlight.  I will call it the Causal Sparseness assumption.  This assumption is required for us to be justified in concluding that randomization has achieved what we want randomization to achieve.

So, what is randomization supposed to achieve?

Dice roll, please....

Randomization is supposed to achieve this: a balancing of other causal influences that might bear on the outcome.  Suppose that the treatment works only for women, but we the researchers don't know that.  Randomization helps ensure that approximately as many women are in treatment as in control.  Suppose that the treatment works twice as well for participants with genetic type ABCD.  Randomization should also balance that difference (even if we the researchers do no genetic testing and are completely oblivious to this influence).  Maybe the treatment works better if the medicine is taken after a meal.  Randomization (and blinding) should balance that too.

But here's the thing: Randomization only balances such influences in expectation.  Of course, it could end up, randomly, that substantially more women are in treatment than control.  It's just unlikely if the number of participants N is large enough.  If we had an N of 200 in each group, the odds are excellent that the number of women will be similar between the groups, though of course there remains a minuscule chance (6 x 10^-61 assuming 50% women) that 200 women are randomly assigned to treatment and none to control.

And here's the other thing: People (or any other experimental unit) have infinitely many properties.  For example: hair length (cf. Rubin 1974), dryness of skin, last name of their kindergarten teacher, days since they've eaten a burrito, nearness of Mars on their 4th birthday....

Combine these two things and this follows: For any finite N, there will be infinitely many properties that are not balanced between the groups after randomization -- just by chance.  If any of these properties are properties that need to be balanced for us to be warranted in concluding that the treatment had an effect, then we cannot be warranted in concluding that the treatment had an effect.

Let me restate in a less infinitary way: In order for randomization to warrant the conclusion that the intervention had an effect, N must be large enough to ensure balance of all other non-ignorable causes or moderators that might have a non-trivial influence on the outcome.  If there are 200 possible causes or moderators to be balanced, for example, then we need sufficient N to balance all 200.

Treating all other possible and actual causes as "noise" is one way to deal with this.  This is just to take everything that's unmeasured and make one giant variable out of it.  Suppose that there are 200 unmeasured causal influences that actually do have an effect.  Unless N is huge, some will be unbalanced after randomization.  But it might not matter, since we ought to expect them to be unbalanced in a balanced way!  A, B, and C are unbalanced in a way that favors a larger effect in the treatment condition; D, E, and F are unbalanced in a way that favors a larger effect in the control condition.  Overall it just becomes approximately balanced noise.  It would be unusual if all of the unbalanced factors A-F happened to favor a larger effect in the treatment condition.

That helps the situation, for sure.  But it doesn't eliminate the problem.  To see why, consider an outcome with many plausible causes, a treatment that's unlikely to actually have an effect, and a low-N study that barely passes the significance threshold.

Here's my study: I'm interested in whether silently thinking "vote" while reading through a list of registered voters increases the likelihood that the targets will vote.  It's easy to randomize!  One hundred get the think-vote treatment and another one hundred are in a control condition in which I instead silently think "float".  I preregister the study as a one-tailed two-proportion test in which that's the only hypothesis: no p-hacking, no multiple comparisons.  Come election day, in the think-vote condition 60 people vote and in the control condition only 48 vote (p = .04)!  That's a pretty sizable effect for such a small intervention.  Let's hire a bunch of volunteers?

Suppose also that there are at least 40 variables that plausibly influence voting rate: age, gender, income, political party, past voting history....  The odds are good that at least one of these variables will be unequally distributed after randomization in a way that favors higher voting rates in the treatment condition.  And -- as the example is designed to suggest -- it's surely more plausible, despite the preregistration, to think that that unequally distributed factor better explains the different voting rates between the groups than the treatment does.  (This point obviously lends itself to Bayesian analysis.)
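The claim about the 40 covariates can be checked by simulation.  Here's a sketch under simplifying assumptions of my own (40 independent binary traits, each present in half the population, with a 15-person gap between groups counting as a meaningful imbalance):

```python
import random

random.seed(0)

def one_study(n_per_group=100, n_covariates=40, threshold=15):
    """Randomize one study; return True if some covariate ends up
    unbalanced in the direction favoring the treatment group by at
    least `threshold` people."""
    for _ in range(n_covariates):
        treat = sum(random.random() < 0.5 for _ in range(n_per_group))
        control = sum(random.random() < 0.5 for _ in range(n_per_group))
        if treat - control >= threshold:
            return True
    return False

trials = 2000
frac = sum(one_study() for _ in range(trials)) / trials
print(frac)  # typically over half of simulated studies
```

Even with that fairly demanding 15-percentage-point threshold, a majority of simulated studies contain at least one covariate unbalanced in the treatment-favoring direction -- and that's with only 40 candidate causes.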

We can now generalize back, if we like, to the infinite case: If there are infinitely many possible causal factors that we ought to be confident are balanced before accepting the experimental conclusion, then no finite N will suffice.  No finite N can ensure that they are all balanced after randomization.

We need an assumption here, which I'm calling Causal Sparseness.  (Others might have given this assumption a different name.  I welcome pointers.)  It can be thought of as either a knowability assumption or a simplicity assumption: We can know, before running our study, that there are few enough potentially unbalanced causes of the outcome that, if our treatment gives a significant result, the effectiveness of the treatment is a better explanation than one of those unbalanced causes.  The world is not dense with plausible alternative causes.

As the think-vote example shows, the plausibility of the Causal Sparseness assumption varies with the plausibility of the treatment and the plausibility that there are many other important causal factors that might be unbalanced.  Assessing this plausibility is a matter of theoretical argument and verbal justification.  

Making the Causal Sparseness assumption more plausible is one important reason we normally try to make the treatment and control conditions as similar as possible.  (Otherwise, why not just trust randomness and leave the rest to a single representation of "noise"?)  The plausibility of Causal Sparseness cannot be assessed purely mechanically through formal methods.  It requires a theory-grounded assessment in every randomized experiment.
