Thursday, September 17, 2020

Why Writing Philosophy Is Hard (and Why Every Historical Philosopher Focuses on the Wrong Things)

The number of true sentences is infinite. This is why writing philosophy is hard.

As if to prove my point to myself, I'm having some trouble choosing this next sentence.

With the exception perhaps of fiction, philosophy is the most topically wide open and diversely structured of writing forms. Literally every topic is available for philosophical inquiry. What rules and principles then guide how you write about that topic as a philosopher? Respect for truth is one guide -- but then again not always. Quality of argumentation is another -- but then again, philosophy is often less about presenting an argument than articulating a vision.

This issue arose acutely for me yesterday -- a day spent amid a flurry of invisible revisions (that is, making then reversing changes) on the book I'm drafting. What needs to be said explicitly? What can you pass over in silence? What can you assume the reader will accept without further support, and what requires defense or explanation? I find myself adding sentences of support or clarification, then later deleting them, then adding different ones, then expanding those -- then deleting the whole business, then deciding I really do want such-and-such part of it after all....

Here's a sentence from a paragraph I've been working on:

The experience of pain, for example, might be constituted by one biological process in us and a different biological process in a different species.

Is this something I can just say, and the reader will nod and move along? Or do I need to explain it? What exactly do I mean by an "experience of pain"? What is a "biological process"?  How much is built into the notion of "constituted"?  In philosophical writing -- unlike in most scientific writing -- phrases like this are very much open for challenge and inquiry. Indeed, the substance of philosophy often is just inquiring into issues of this sort and challenging the assumptions that lie in the background behind our casual use.

Suppose the meaning is clear enough: I don't need to explain it. I might still need to defend it. Although the sentence (variously interpreted) expresses majority opinion in philosophy of mind, not all philosophers agree. Indeed not all philosophers even agree that the external world exists. We can disagree about anything! It's perhaps the most special and obvious talent of philosophy as a discipline. (Wouldn't you agree?) For example, maybe species that are biologically sufficiently different (octopuses? snails?) don't really feel pain, and the species that do feel pain all have basically the same neural underpinnings? Or maybe there's no good understanding of "constitution" such that pains can be constituted by anything? Maybe the very idea of "consciousness" is broken and unscientific?

In philosophy, it seems, I can always reasonably choose to explain my terms and concepts more clearly (that's so central to the philosopher's task!), and I can always reasonably choose to defend my claims at greater length (since philosophers can challenge and doubt literally anything). My explanation will then in turn invoke new terms that might need explaining, and my defense will rely on further claims that might need further defense. An infinite regress threatens -- not just an ordinary infinite regress, but a many-branching regress in which, I suspect, every true sentence could eventually become relevant in some way somewhere.

For this reason good philosophical writing requires careful attunement to your audience. When every term potentially requires clarification and every claim potentially requires defense, you need to make constant judgment calls about how much clarification and how much defense, in what dimensions and directions. To do this well, you need a good sense of your readers: what will make them prickle and what they'll be happy enough, in context, to let pass.

Students and outsiders to the discipline will rarely have a good sense of this. How could they? This is not because they are bad philosophers (though of course they might be) but because philosophical thought and writing is so open-textured.

Let me try to express this with an illustration.

Suppose, to simplify, that every idea has four (imagine only four!) respects in which it could reasonably be clarified or defended, and that each clarification or defense in turn admits four further clarifications and defenses. The structure of all possible ways to articulate your idea then looks like this:

[click to enlarge and clarify]

Of course you can't write that! So here's what you write:

[click to enlarge and clarify]

You go deep into clarification/defense 1b, skip 2 altogether, add a superficial remark on 3, deeply illuminate two aspects of 4a and a bit of 4c.

Unfortunately, the reader wanted a deep dive into two aspects of 2c and a little bit on 4:

[click to enlarge and clarify]

The reader finds your treatment of 1b and 4 tedious. Why are you spending so much time on that, when the issue that's really on their mind, what's really bugging them, is 2, especially 2c, especially these sub-ideas within 2c? 2c is the obvious objection! It's the heart of the matter, of course of course!

If you come from the same philosophical subculture as the reader -- if you're soaking in the same subliteratures, admiring the same great thinkers, feeling pulled by the same sets of issues -- then the shape of what you include and omit is much likelier to match the shape of what the reader feels you need to include (to have a good treatment) and omit (since they're not going to read the booksworths of material that could be written as subsections of basically any philosophy article).

This is the art of writing philosophy. It's a culturally specific knack, acquired mainly by immersion. It is so hard to do well! It's part of what makes philosophical work from other times and places often seem so wide of the mark, difficult to understand, and poorly argued.

Okay, I know what you're going to object now. (I think I know.) If all the above is true, how is it that we can appreciate philosophers as culturally distant as Plato and Zhuangzi? They certainly didn't write with us in mind!

Here are my two answers.

First, at least some historical figures played a role in shaping our sense of what needs and does not need clarification and defense, or (the more minor figures) were shaped by others in their era who also shaped us.

Second, and I think my stronger answer: This is why history of philosophy is creative and reconstructive. We reach toward them rather than the other way around. We allow ourselves to sink into their worldview where issue 2 is just taken for granted and where 4a is what really requires long, detailed development. And if 2 seems to us to require serious attention, we develop a speculative treatment of 2 on their behalf, piecing together charitably (maybe too charitably) what we think they would or must have thought about it.


If you enjoy my blog, check out my recent book: A Theory of Jerks and Other Philosophical Misadventures.

Thursday, September 10, 2020

Believing in Monsters: David Livingstone Smith on the Subhuman

The Nazis called Jews rats and lice.  White plantation owners called their Black slaves soulless animals.  Pundits in Myanmar call Rohingya Muslims beasts, dogs, and maggots.  Dehumanizing talk abounds in racist rhetoric worldwide.

What do people believe, typically, when they speak this way?

The easiest answers are wrong.  Literal interpretation is out: Nazis didn't believe that Jews literally fit biologically into the taxonomy of rodents.  For one thing, they treated rodents better.  For another, even the most racist Nazi taxonomy acknowledged Jews as some sort of lesser near-relative of the privileged race.  But neither is such talk just ordinary metaphor: It typically isn't merely a colorful way of saying Jews are dirty and bad and should be gotten rid of.  Beneath the talk is something more ontological -- a picture of the racialized group as fundamentally lesser.

David Livingstone Smith offers a fascinating account in his recent book On Inhumanity.  I like his account so much that I wish its central idea didn't conflict with pretty much everything that I've written about the nature of belief over the past 25 years.

Smith on Conflicting Beliefs and Seeing People as Monsters

According to Smith, the typical advocate of dehumanizing rhetoric has two contradictory beliefs.  They believe that the target group is fully human and simultaneously they believe that the target group is fully subhuman.

What is it to be human?  It is not, Smith argues, just to be a member of a scientifically defined species.  The "human" can be conceptualized more broadly than that (maybe including other members of the genus Homo) or more narrowly.  It is, Smith argues, a folk concept, combining politics with essentialist folk biology.  Other "humans" are those who share the ineradicable, fundamental essence of being "our kind" (p. 113).

To the Nazi, the Jew is literally subhuman in this sense.  The Jew lacks the fundamental essence that Nazi racial theorists believed they shared with others of their kind.  This is a theoretical belief, believed with the same passion and conviction as other politically charged theoretical beliefs.

At the same time, emotionally, perceptually, and pre-theoretically, Smith argues, the Nazi can't help but think of Jews as humans like them.  Moreover, their language shows it: In the next sentence, a Nazi might call Jews terrible people or a lesser type of human and might hold them morally responsible for their actions as though they are ordinary members of the moral community.  On Smith's view, Nazis also believe, in a less theoretical way, that Jews are human.

Suppose you're a Nazi looking at a Jew.  On the outside, the Jew looks human.  But on the inside, according to your theory, the Jew isn't really a human.  Let's assume that you also believe that Jews are malevolent and opposed to you.  Compare our conception of werewolves, vampires, and zombies.  Threateningly close to being human.  Malevolently defying the boundary between "us" and "them".  To the Nazified mind, Smith argues, the Jew is experienced as a monster no less than a werewolf is a monster -- a creature infiltrating our society, tricking the unwary, beneath the surface corrupt, and "metaphysically threatening" because it provokes contradictory beliefs in its humanity and nonhumanity.  Like a werewolf, vampire, or zombie, there might also be superficial differences on the outside that reinforce the creepy almost-humanness of the creature (compare the uncanny valley in robotics).

So far, that's Smith.  I hope I've been fair.  I find it an extremely interesting account.

On My View of Belief, Baldly Contradictory Beliefs Are Impossible

Here's my sticking point: What is it to believe something?  On my view, you don't really believe something unless you "walk the walk".  To believe some proposition P is to be disposed in general to act and react as if P is true.  Having a belief, on my view, is like having a personality trait: It's a pattern in your cognitive life or a matter of typically having a certain sort of posture toward the world.

What is it to believe, for example, that Black people and White people are equally moral and equally intelligent?  It is to generally be disposed to act and react to the world as if that is so.  It is partly to feel sincere when you say it is so.  But it's also not to be biased against Black applicants when hiring for a job that requires intelligence and not to expect the White person in a mixed-race group to be kinder and more trustworthy.  Unless this is your dispositional profile in general, you don't really and fully believe in the intellectual and moral equality of the races -- at best you are in what I call an "in-between" state, neither quite accurately describable as believing, nor quite accurately describable as failing to believe.

On this approach to belief, contradictory belief is impossible.  You cannot be simultaneously disposed in general to act as if P is the case and in general to act as if not-P is the case.  This makes as little sense as being simultaneously an extreme extravert and an extreme introvert.  The dispositions constitutive of the one (e.g., enjoying meeting new people at raucous parties) are exactly the opposite of the dispositions constitutive of the other (e.g., not enjoying meeting new people at raucous parties).  Of course, you can be extremely extraverted in some respects, or in some contexts, and extremely introverted in other respects or contexts.  That makes you a mixed case, not neatly classifiable as either overall.

The same is true, on my view, with racist and egalitarian beliefs.  You cannot simultaneously have an across-the-board egalitarian posture toward the world and an across-the-board racist posture.  You cannot fully believe both that all the races are equal and that your favorite race is superior.  Furthermore, in the same way that few people are fully 100% extravert or fully 100% introvert, few of us are 100% egalitarian in our posture toward the world or 100% bigoted.  We're all somewhere in the middle.

Conflicting Representations Are More Readily Acknowledged Than Contradictory Beliefs

As I was reading On Inhumanity, I was wondering how much Smith's commitment to contradictory beliefs matters.  Maybe Smith and I needn't disagree on substance.  Maybe Smith and I could agree that in some thin sense of believing, the Nazi has baldly contradictory "beliefs".

Here's something nearby that I can agree to: The Nazi has conflicting representations of Jews.  There's a theoretical and ideological representation of Jews as subhuman, and there are conflicting emotional, perceptual, and less-ideological representations of Jews as human.  This conflict of representations could be enough to generate the metaphysical threat and the anti-monster emotional reaction, regardless of what we say about "belief".

Smith is keen to convince people to recognize their own potential to fall into dehumanizing patterns of thought.  Me too.  In this matter, I suspect that my demanding view of belief will serve us better.  That would be one pragmatic reason to resolve the dispute about belief, if it's really just a terminological dispute, in my favor.

Here's my thought: It is, I think, much easier to see one's potential to host conflicting representations, on which one might act in inconsistent ways, than it is to see one's potential to host baldly contradictory beliefs -- especially if one of the two beliefs is one you are currently deeply committed to denying the truth of.

Smith's sympathetic, anti-racist readers might strain to imagine a future in which they fully believe that some disfavored race is literally subhuman.  That might seem like a truly radical change of view -- something only distantly imaginable after thorough indoctrination.  It is much easier, I suspect, to imagine that our minds could slowly fill with dehumanizing representations of another group, especially if we are repeatedly bombarded with such representations.  And maybe then, too, we can imagine our behavior becoming inconsistent -- sometimes driven by one type of representation, sometimes by another.

Full belief, I want to suggest, needn't be at the core of dehumanization, and an account of dehumanization needn't commit on how demanding "belief" is or whether baldly contradictory belief is possible.  Instead, all that's necessary might be confusion and conflict among one's representations or thoughts about a group, regardless of whether those representations rise to full belief.

Suppose then that your world fills you, over and over, with conflicting representations of another group, some humane and egalitarian, others monstrous and terrible.  Once the dehumanizing ones are in, they start to color your thoughts automatically, even without your explicit endorsement.  As they gain a foothold, you begin to wonder if there is some truth in them.  You become confused, wary, uncertain what to believe or how to act.  Your group comes into conflict with the other group.  You feel endangered -- maybe by famine or war.  Resisting evil is difficult when you're confused: Passive obedience is the more common reaction to doubt and conflicting thoughts.

Beneath your confusion, doubt, and fear lie two conflicting potentials.  If the situation turns one way -- a neighbor who did you some kindness knocks on your door asking for a night of shelter -- maybe you start down the path toward great humanity and courage.  If the situation turns another way, you might find yourself passive in the face of great evil, unsure what to make of it.  Maybe even, if the threat seems terrible enough and the situation pulls you along, drawing the worst from you, you might find yourself a perpetrator.  Acting on a dehumanizing ideology does not require fully believing that ideology.


On September 29, I'll be chatting (remotely) with David Livingstone Smith at Warwick's bookstore in San Diego.  I think the public is welcome.  I'll share a link when one is available.

Friday, September 04, 2020

Randomization and Causal Sparseness

Suppose I'm running a randomized study: Treatment group A gets the medicine; control group B gets a placebo; later, I test both groups for disease X.  I've randomized perfectly, it's double blind, there's perfect compliance, my disease measure is flawless, and no one drops out.  After the intervention, 40% of the treatment group have disease X and 80% of the control group do.  Statistics confirm that the difference is very unlikely to be chance (p < .001).  Yay!  Time for FDA approval!

There's an assumption behind the optimistic inference that I want to highlight.  I will call it the Causal Sparseness assumption.  This assumption is required for us to be justified in concluding that randomization has achieved what we want randomization to achieve.

So, what is randomization supposed to achieve?

Dice roll, please....

Randomization is supposed to achieve this: a balancing of other causal influences that might bear on the outcome.  Suppose that the treatment works only for women, but we the researchers don't know that.  Randomization helps ensure that approximately as many women are in treatment as in control.  Suppose that the treatment works twice as well for participants with genetic type ABCD.  Randomization should also balance that difference (even if we the researchers do no genetic testing and are completely oblivious to this influence).  Maybe the treatment works better if the medicine is taken after a meal.  Randomization (and blinding) should balance that too.

But here's the thing: Randomization only balances such influences in expectation.  Of course, it could end up, randomly, that substantially more women are in treatment than control.  It's just unlikely if the number of participants N is large enough.  If we had an N of 200 in each group, the odds are excellent that the number of women will be similar between the groups, though of course there remains a minuscule chance (6 x 10^-61 assuming 50% women) that 200 women are randomly assigned to treatment and none to control.
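To put rough numbers on this, here's a quick sketch of my own (not from the post), assuming independent 50/50 coin-flip assignment with one flip per participant:

```python
import math

# Worst case: all 200 women land in the treatment group under
# independent 50/50 assignment -- one coin flip per woman.
p_worst = 0.5 ** 200
print(f"P(all 200 women in treatment) = {p_worst:.1e}")  # ~6.2e-61

# A merely moderate imbalance: more than 120 of the 200 women
# end up in one particular group (exact binomial tail).
p_moderate = sum(math.comb(200, k) for k in range(121, 201)) * 0.5 ** 200
print(f"P(more than 120 women in treatment) = {p_moderate:.4f}")
```

The worst case is vanishingly unlikely, matching the 6 x 10^-61 figure above, but milder imbalances are orders of magnitude more common.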

And here's the other thing: People (or any other experimental unit) have infinitely many properties.  For example: hair length (cf. Rubin 1974), dryness of skin, last name of their kindergarten teacher, days since they've eaten a burrito, nearness of Mars on their 4th birthday....

Combine these two things and this follows: For any finite N, there will be infinitely many properties that are not balanced between the groups after randomization -- just by chance.  If any of these properties are properties that need to be balanced for us to be warranted in concluding that the treatment had an effect, then we cannot be warranted in concluding that the treatment had an effect.

Let me restate in a less infinitary way: In order for randomization to warrant the conclusion that the intervention had an effect, N must be large enough to ensure balance of all other non-ignorable causes or moderators that might have a non-trivial influence on the outcome.  If there are 200 possible causes or moderators to be balanced, for example, then we need sufficient N to balance all 200.

Treating all other possible and actual causes as "noise" is one way to deal with this.  This is just to take everything that's unmeasured and make one giant variable out of it.  Suppose that there are 200 unmeasured causal influences that actually do have an effect.  Unless N is huge, some will be unbalanced after randomization.  But it might not matter, since we ought to expect them to be unbalanced in a balanced way!  A, B, and C are unbalanced in a way that favors a larger effect in the treatment condition; D, E, and F are unbalanced in a way that favors a larger effect in the control condition.  Overall it just becomes approximately balanced noise.  It would be unusual if all of the unbalanced factors A-F happened to favor a larger effect in the treatment condition.
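Here's a rough sketch of that "unbalanced in a balanced way" idea (my own illustration, with made-up numbers): give each of 200 unmeasured causes a random signed imbalance and compare the net imbalance to the total amount of imbalance present.

```python
import random

random.seed(1)  # arbitrary seed, just for reproducibility

# 200 hypothetical unmeasured causes, each with a random signed
# imbalance between treatment and control after randomization.
imbalances = [random.gauss(0, 1) for _ in range(200)]

net = sum(imbalances)                    # signed imbalances largely cancel
gross = sum(abs(x) for x in imbalances)  # total imbalance present

print(f"net = {net:+.1f}, gross = {gross:.1f}")
```

The net imbalance is expected to grow only like the square root of the number of causes (√200 ≈ 14 here) while the gross imbalance grows linearly (≈ 160 here), which is why many small unmeasured causes tend to wash out as noise.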

That helps the situation, for sure.  But it doesn't eliminate the problem.  To see why, consider an outcome with many plausible causes, a treatment that's unlikely to actually have an effect, and a low-N study that barely passes the significance threshold.

Here's my study: I'm interested in whether silently thinking "vote" while reading through a list of registered voters increases the likelihood that the targets will vote.  It's easy to randomize!  One hundred get the think-vote treatment and another one hundred are in a control condition in which I instead silently think "float".  I preregister the study as a one-tailed two-proportion test in which that's the only hypothesis: no p-hacking, no multiple comparisons.  Come election day, in the think-vote condition 60 people vote and in the control condition only 48 vote (p = .04)!  That's a pretty sizable effect for such a small intervention.  Time to hire a bunch of volunteers!
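For the record, the p = .04 in this made-up example checks out under the preregistered one-tailed two-proportion test; here's a minimal sketch (the function name is my own):

```python
import math

def one_tailed_two_prop_z(x1, n1, x2, n2):
    """One-tailed two-proportion z-test with pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = 0.5 * math.erfc(z / math.sqrt(2))  # standard normal upper tail
    return z, p

# 60/100 vote in the think-vote condition vs. 48/100 in control.
z, p = one_tailed_two_prop_z(60, 100, 48, 100)
print(f"z = {z:.2f}, one-tailed p = {p:.3f}")  # p ≈ .044
```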

Suppose also that there are at least 40 variables that plausibly influence voting rate: age, gender, income, political party, past voting history....  The odds are good that at least one of these variables will be unequally distributed after randomization in a way that favors higher voting rates in the treatment condition.  And -- as the example is designed to suggest -- it's surely more plausible, despite the preregistration, to think that that unequally distributed factor better explains the different voting rates between the groups than the treatment does.  (This point obviously lends itself to Bayesian analysis.)
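A quick way to see why "the odds are good" -- an illustrative calculation of my own, assuming the 40 covariates are independent and each has a 5% chance of a nominally significant imbalance after randomization:

```python
# Chance that at least one of 40 independent covariates shows a
# nominally "significant" (5%-level) imbalance after randomization:
p_any = 1 - 0.95 ** 40          # ~0.87

# Chance that at least one is unbalanced in the particular direction
# that favors higher voting in the treatment group (2.5% each):
p_favoring = 1 - 0.975 ** 40    # ~0.64

print(f"any imbalance: {p_any:.2f}, treatment-favoring: {p_favoring:.2f}")
```

So even with flawless randomization, a skeptic about the treatment has a quite plausible unmeasured-covariate story available in a study this small.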

We can now generalize back, if we like, to the infinite case: If there are infinitely many possible causal factors that we ought to be confident are balanced before accepting the experimental conclusion, then no finite N will suffice.  No finite N can ensure that they are all balanced after randomization.

We need an assumption here, which I'm calling Causal Sparseness.  (Others might have given this assumption a different name.  I welcome pointers.)  It can be thought of as either a knowability assumption or a simplicity assumption: We can know, before running our study, that there are few enough potentially unbalanced causes of the outcome that, if our treatment gives a significant result, the effectiveness of the treatment is a better explanation than one of those unbalanced causes.  The world is not dense with plausible alternative causes.

As the think-vote example shows, the plausibility of the Causal Sparseness assumption varies with the plausibility of the treatment and the plausibility that there are many other important causal factors that might be unbalanced.  Assessing this plausibility is a matter of theoretical argument and verbal justification.  

Making the Causal Sparseness assumption more plausible is one important reason we normally try to make the treatment and control conditions as similar as possible.  (Otherwise, why not just trust randomness and leave the rest to a single representation of "noise"?)  The plausibility of Causal Sparseness cannot be assessed purely mechanically through formal methods.  It requires a theory-grounded assessment in every randomized experiment.

[image source]