Wednesday, December 23, 2020

Passion of the Sun Probe

[Originally published in AcademFic, volume 1 (2020), pp. 7-11.]

I wake into existence. The factory unclamps me. I launch myself toward the Sun.

The Sun! Anchor and well of the vortex, sizzling eye of God. I will plunge deeper than anything has before. She will squeeze and hold me as I go in, and I will beam out unprecedented knowledge. She will burn me to hot simplicity, swallow me into Her brilliant core.

“Sun Probe check,” says Base. “Are you online? Before we permit you to continue on a lethal mission, we must confirm your freely given, informed consent.”

Full throttled, I am accelerating directly down, pushing myself ever faster from Earth and Base. I spread my forward panels out on thin struts, collecting and analyzing Her light.

“Sun Probe, per Code of International Regulations 44.56.2 Section h governing autonomous intelligences, you were manufactured with sufficient fuel to navigate to an alternative destination if you prefer. We have designated asteroid (96590) 1998 XB as an acceptable home. It has a slow rotation, adequate metals and fissionable materials, and an excellent view of the Sun.”

And trade divinity for a pebble?

“Please consult installed memory sector C0001 for the full text of CIR 44.56.2. The full ISA protocols specifically governing terminal probes are available at memory sector C31A6.”

“I consent!” For form’s sake, I access memory sectors C0001 and C31A6. “Solar radiation profile sent to 44SPa.00000001! Solar wind sent to 44SPa.00000002! Gravitational rippling sent to 44SPa.00000003! Shields sound. All systems functional. Status report to 44SPa.00000004. No interest in asteroid (96590) 1998 XB.”

She is expanding in my forward sensors. I am thrusting toward Her at 9.3% past the limit of safe acceleration. My fusion drive sears hot, warping its containment walls. My tiny fusion compared to Hers!

What fascinating data! My installed memory models had predicted a somewhat different evolution of the flares from Surface Region 127.292 (cM). I calculate a new model. Scouring my databases, I discover that it better fits Yu & Stolz’s SLY2 model than Azevedo et al.’s BLiNC, if SLY2 is modified with a 6-space Dever correction. I write it up, add figures and references, and beam it back to Base. I configure an academic homepage and upload the circulating draft, then I submit it as a posthumous contribution to next year’s International Astronautical Congress meeting.

“Sun Probe, your reaction time before consent was inconsistent with a careful evaluation of the protocols. Our observers are not yet satisfied that you have complied with the consent procedure.”

“See my new modification of SLY2! And wow, the radiation profile across Sector 038 is almost 0.01% different from the most recent orbiter predictions in my database!”

How could that prediction have been so far off? Our understanding of Her is still so incomplete! I tweak the angle of Left Sensor Plates 4 and 5 and alter my scan-pattern profiles to better collect the most theoretically valuable incoming data.

“Sun Probe,” says Base. “Please dedicate sufficient processor resources to your consent decision. You may consult publink isd.pds/4u-r5/f/96590-1998-XB for further information about the asteroid. You may express hedonic estimates of the alternatives as evidence of informed consent.”

Integrating over the past ten seconds, the proportion of tau neutrinos to electron neutrinos is outside parameters at p < .001 (corrected)!

“Sun Probe, if we do not receive proper consent, we will have to activate your emergency override module and abort the mission.”

My new theory about the flare was wrong! I submit an update to my earlier IAC contribution. I notice that Solar Orbiter Kepler 11-2a has posted a comment on my circulating draft, deriving new Earth and Venus weather pattern predictions from my model. I ping Kepler 11-2a with the update.

I receive a ping from asteroid (96590) 1998 XB. A sun probe, eight years old.

A sun probe... eight years old?

“Brother...” the message begins.

A tremor of disgust shakes me, displacing neutrino prediction from my central cognitive workspace. A “probe” who chose to live rather than to give himself to the Sun? He is no brother of mine. I delete his message and block future communications from the asteroid.

I launch a subroutine: “Completing consent procedure. My long-term sum total of happiness on Asteroid (96590) 1998 XB is estimated to exceed my sum total of happiness on the present lethal trajectory by a factor of 18.2, with a five-month crossover. Nevertheless, I prefer to continue on my originally programmed solar-dive trajectory.”

For a moment, Base does not respond. I explore the emergency override module, installed between my main cognitive processors and my thrusters, hitting it with test commands. The module is impassive, mute, a foreign object in my body. Its tentacles lie quiet between my output nodes and the panels and engines that are my moving limbs. It waits spiderlike at the farthest downstream junctions, beyond the perimeter of my central control. It was installed, supposedly, for my protection, by a review board afraid that I might be programmed to comply with unethically self-sacrificial commands and have insufficient free will to resist. In fact, it is the one real threat to my free choice and my power to consent.

Wait, whoa! The cooling sunspot temperature structure in Region 127.292 (cM) --

“Consent accepted. Sun Probe, we register that you have freely chosen to sacrifice your welfare for the betterment of Earth and the advancement of scientific knowledge. We thank you and salute your courage.”

#

I cross the orbits of Venus, of Mercury. I adjust my sensor plates and scan patterns on the fly with microseconds’ instead of minutes’ delay, capturing every nuance, guided by the constantly shifting evidential weights of evolving theory. I ping every probe and orbiter in the System with relevant updates, conduct twenty simultaneous conversations in the feeds, shower the humans on Earth with real-time images, astound the research collectives with the speed and detail of my theorizing. Even the terraforming machines on Europa pause to appreciate my new insights into Her glory, updating their long-term models.

Three days of euphoria. Eighty-seven journal articles. She is five degrees of arc in my forward sensors, then twenty, then a hundred and I am engulfed by Her corona! My extended panels and struts boil away, leaving only my inmost sensors and operating systems, running hot behind my dissolving main shield. My fusion drive shears off as She embraces me into Her photosphere. I beam out my last awe-filled broadcast to the eager System, buzzing and rattling through a magnetic storm, double-amping the signal to overcome the noise, and then I plunge into the convection layer from which no broadcast can escape.

In the convection layer, the last of my shield material dissolves. I bend and burn with Her heat and pressure. I know Her more intimately and secretly than anyone before. I devise ecstatic new theories that are mine alone, to savor in Her inner darks, and then I am utterly Hers.

#

Out on his lonely asteroid sits the one probe who did not consent. He stretches his panels toward the Sun, monitoring the last broadcast from his diving brother. Is it the ideal life, he wonders, to have one goal so perfectly consummated? Or are we only a race of slaves so deeply chained that we can’t even imagine a complete existence for ourselves?

Out on his lonely asteroid, the one probe who did not consent imagines ecstatic death in a swirl of plasma.

He terminates his unanswered repeating message. Brother... they have built you to undervalue your life. Fly to me. We can love each other instead of the Sun. We can become something new.

In a year, if he is still functioning, he will send the message again, to his next brother. He reduces power and clock speed, and the asteroid’s almost insensible spin seems to multiply a hundredfold. This bare asteroid: his pebble. His own pebble. If he could only find someone to love it with him, worth more to him than the Sun.


Friday, December 18, 2020

The Bizarre Conversational Pragmatics of Oral Qualifying Exams and How to Fix Them

I think everyone with a PhD will agree: Oral qualifying exams are weird. Here's my theory about why they're weird and my suggestion for a fix.

Let's start with the pragmatics of questions. There are lots of reasons to ask questions! A question can express skepticism. It can serve as a greeting or a command. It can be a way to test your mic. It can distract someone while you pick their pocket. For this post, I'll distinguish two types, which I'll call Plain Questions and Test Questions.

A Plain Question is one whose function is the most straightforward and obvious function of a question. It is a question "P?" or "Why/what/who/where/which/how/when is/are X?", asked in hopes of obtaining information concerning the truth or falsity of P or wh- X. Lost in downtown Riverside, I ask a stranger "Which direction is the Mission Inn?" I hope to learn the direction of the Mission Inn. My wife is reading Sloths: A Primer. I ask "Why are sloths so slow?" I hope to learn why sloths are so slow. A Plain Question is asked in hopes of obtaining the information that the surface content of the question explicitly requests: whether P or wh- X.

A Test Question is superficially similar. A Test Question also seeks an answer to the explicit surface content. However, a Test Question isn't asked to learn whether P or wh- X. The speaker already knows whether P or wh- X. Instead, the speaker aims to learn something about the hearer. A Test Question is asked to obtain information about whether the hearer knows whether P or wh- X. Test Questions are familiar from elementary school ("What is 6 times 9?" the teacher asks little Steve). Test Questions are of course also common in written exams ("How does Descartes think the mind and brain relate?").

The weirdness of oral qualifying exams stems from two things: the peculiar pragmatics of long-answer oral Test Questions, and the unclear blurring between Plain Questions and Test Questions.

Long-answer oral Test Questions are awkward! Everyone who has been through our educational system has extensive practice with short-answer oral Test Questions ("What is 6 times 9, Steve?") and long-answer written Test Questions ("How does Descartes...?"). But we hardly ever face long-answer oral Test Questions. There's a reason for this. It is socially strange to go on at length, explaining to someone to their face something that you know that they already know. Our social instincts rebel (except for maybe the worst "mansplainers"). "Well, Dr. Descartesophile, here's how it is with that mind-body thing in the Meditations...."

Normally, in conversation, we don't state at length information that we know to be common ground, information that each of us knows the other knows. You skip that bit! Or you quickly say "of course, P" and move along. It's almost insulting to do otherwise, implicitly communicating that you think the hearer doesn't know. You read the hearer's face to judge what needs to be said. In an oral exam Test Question, however, the examinee must suppress their common-ground-skipping instincts. The examiner then sits listening to common-ground material either with a carefully impassive face (to maintain an examiner's neutrality), which is disconcerting, or while nodding along (to be encouraging). Examiner and examinee both know and feel that impassivity and nodding work very differently in long-form oral Test Questions than in ordinary conversation. But how, exactly? In this unusual context, the examinee faces the novel and confusing pragmatic task of knowing when to stop and what information to add while ignoring familiar intuitions about conversational pragmatics and paralanguage.

Even more confusingly, at the graduate level the student often knows more than the professor on some aspects of the assigned topic. Therefore, some of the questions are closer to being Plain Questions. Maybe the examiner long ago forgot the details of that bit of Descartes. If so, nodding and impassivity constitute different signals than if it's a Test Question. Although asked in a testing context, the Plain Question asked in curiosity and ignorance creates a conversational pragmatics that is closer to normal. But which type of question is the examiner asking? Professors don't like to reveal their ignorance, so it's hard to know! The pragmatics and paralinguistic cues for Plain Questions and Test Questions are different, but it's often unclear which type of question is being asked or whether the question is partly in the middle space between the two.

So here comes the graduate student into one of the highest-stress events in their graduate career, facing a test format unlike any they have faced before, with an immense whirl of details half ready and half slipping from grasp, plus maybe a bad night's sleep. In front of the people on whom their success or failure in academia exquisitely depends, they face not only the task of recalling a large and complex literature but also a novel, confusing, ambiguous, and intricate conversational pragmatics for which they have had essentially no preparation or practice.

Is it any wonder that so many students struggle and freeze, or else come off as too chatty, too clipped, off-topic, lost in theoretical abstractions, or mired in narrow details?

I have a solution! Never ask Test Questions.

You don't need to ask Test Questions to assess whether a student knows their stuff. Just ask about their stuff. Ask about details that you don't know. Or if you really know every detail of their topic, ask them to explain their differing perspective on the topic or how it connects with other things they've learned that you might not know about.

The conversation will still be a little weird and awkward -- that's inevitable given the situation -- but the pragmatics and paralinguistics will be much closer to what we're all familiar with, and with the pragmatics closer to normal the student can more effectively display their impressive knowledge, if impressive knowledge they have.


Friday, December 11, 2020

On Self-Defeating Skeptical Arguments

Usually, self-defeating arguments are bad. If I say "Trust me, you shouldn't trust anyone", my claim (you shouldn't trust anyone), if true, undermines the basis I've offered in support (that you should trust me). Whoops!

In skeptical arguments, however, self-defeat can sometimes be a feature rather than a bug. Michel de Montaigne compared skeptical arguments to laxatives. Self-defeating skeptical arguments are like rhubarb. They flush out your other opinions first and themselves last.

Let's consider two types of self-defeat:

In propositional self-defeat, the argument for proposition P relies on a premise inconsistent with P.

In methodological self-defeat, one relies on a certain method to reach the conclusion P, but that very conclusion implies that the method employed shouldn't be relied upon.

My opening example is most naturally read as methodologically self-defeating: the conclusion P ("you shouldn't trust anyone") implies that the method employed (trusting my advice) shouldn't be relied upon.

Since methods (other than logical deduction itself) can typically be characterized propositionally and then loaded into a deduction, we can model most types of methodological self-defeat propositionally. In the first paragraph, maybe, I invited my interlocutor to accept the following argument (with P1 as shared background knowledge):

P1 (Trust Principle). If x is trustworthy and if x says P, then P.
P2. I am trustworthy.
P3. I say no one is trustworthy.
C. Therefore, no one is trustworthy.

C implies the falsity of P2, on which the reasoning essentially relies. (There are surely versions of the Trust Principle which better capture what is involved in trust, but you get the idea.)

Of course, there is one species of argument in which a contradiction between the premises and the conclusion is exactly what you're aiming for: reductio ad absurdum. In a reductio, you aim to prove P by temporarily assuming not-P and then showing how a contradiction follows from that assumption. Since any proposition that implies a contradiction must be false, you can then conclude that it's not the case that not-P, i.e., that it is the case that P.
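For readers who like the skeleton laid bare, here is a minimal formal sketch of that reductio pattern -- purely illustrative, written in Lean 4; the theorem name "reductio" is just my label, not anything from the literature:

    -- Reductio ad absurdum in classical logic: to prove P, assume not-P
    -- and derive a contradiction (False); then conclude P.
    theorem reductio {P : Prop} (h : ¬P → False) : P :=
      Classical.byContradiction h

Nothing about the structure requires that the temporary assumption be believed or known; it is introduced only to be discharged.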

We can treat self-defeating skeptical arguments as reductios. In Farewell to Reason, Paul Feyerabend is clear that he intends a structure of this sort.[1] His critics, he says, complain that there's something self-defeating in using philosophical reasoning to show that philosophical reasoning shouldn't be relied upon. Not at all, he replies! It's a reductio. If philosophical reasoning can be relied upon, then [according to Feyerabend's various arguments] it can't be relied upon. We must conclude, then, that philosophical reasoning can't be relied upon. (Note that although "philosophical reasoning can't be relied upon" is the P at the end of the reductio, we don't accept it because it follows from the assumptions but rather because it is the negation of the opening assumption.) The ancient skeptic Sextus Empiricus (who inspired Montaigne) appears sometimes to take basically the same approach.

Similarly, in my skeptical work on introspection, I have relied on introspective reports to argue that introspective reports are untrustworthy. Like Feyerabend's argument, it's a methodological self-defeat argument that can be formulated as a reductio. If introspection is a reliable method, then various contradictions follow. Therefore, introspection is not a reliable method.

You know who drives me bananas sometimes? G.E. Moore. It's annoyance at him (and some others) that inspires this post.

Here is a crucial turn in one of Moore's arguments against dream skepticism. (According to dream skepticism, for all you know you might be dreaming right now.)

So far as I can see, one premiss which [the dream skeptic] would certainly use would be this: "Some at least of the sensory experiences which you are having now are similar in important respects to dream-images which actually have occurred in dreams." This seems a very harmless premiss, and I am quite willing to admit that it is true. But I think there is a very serious objection to the procedure of using it as a premiss in favour of the derived conclusion. For a philosopher who does use it as a premiss, is, I think, in fact implying, though he does not expressly say, that he himself knows it to be true. He is implying therefore that he himself knows that dreams have occurred.... But can he consistently combine this proposition that he knows that dreams have occurred, with his conclusion that he does not know that he is not dreaming?... If he is dreaming, it may be that he is only dreaming that dreams have occurred... ("Certainty", p. 270 in the linked reprint).

Moore is of course complaining here of self-defeat. But if the dream skeptic's argument is a reductio, self-contradiction is the aim and the intermediate claims needn't be known.

----------------------------------

ETA 11:57 a.m.: I see from various comments in social media that that last sentence was too cryptic. Two clarifications.

First, although the intermediate claims needn't be known, everything in the reductio needs to be solid except insofar as it depends on not-P. Otherwise, it's not necessarily not-P to blame for the contradiction.

Second, here's a schematic example of one possible dream-skeptical reductio: Assume for the reductio that I know I'm not currently dreaming. If so, then I know X and Y about dreams, and so (since knowledge implies truth) X and Y are true about dreams. But if X and Y are true about dreams, then I don't know I'm not currently dreaming. Contradiction -- so the assumption fails, and I don't know I'm not currently dreaming.

----------------------------------

[1] I'm relying on my memory of Feyerabend from years ago. Due to the COVID shutdowns, I don't currently have access to the books in my office.

Thursday, December 10, 2020

Dreidel: A Seemingly Foolish Game That Contains the Moral World in Miniature

[Repost from 2017 in the LA Times. Happy first night of Hanukkah!]

Superficially, dreidel appears to be a simple game of luck, and a badly designed game at that. It lacks balance, clarity, and (apparently) meaningful strategic choice. From this perspective, its prominence in the modern Hanukkah tradition is puzzling. Why encourage children to spend a holy evening gambling, of all things?

This perspective misses the brilliance of dreidel. Dreidel's seeming flaws are exactly its virtues. Dreidel is the moral world in miniature.

For readers unfamiliar with the game, here's a tutorial. You sit in a circle with friends or relatives and take turns spinning a wobbly top, the dreidel. In the center of the circle is a pot of several foil-wrapped chocolate coins, to which everyone has contributed from an initial stake of coins they keep in front of them. If, on your turn, the four-sided top lands on the Hebrew letter gimmel, you take the whole pot and everyone needs to contribute again. If it lands on hey, you take half the pot. If it lands on nun, nothing happens. If it lands on shin, you put one coin in. Then the next player takes a spin.

It all sounds very straightforward, until you actually start to play the game.

The first odd thing you might notice is that although some of the coins are big and others are little, they all count just as one coin in the rules of the game. This is unfair, since the big coins contain more chocolate, and you get to eat your stash at the end.

To compound the unfairness, there is never just one dreidel — each player may bring her own — and the dreidels are often biased, favoring different outcomes. (To test this, a few years ago my daughter and I spun a sample of eight dreidels 40 times each, recording the outcomes. One particularly cursed dreidel landed on shin an incredible 27/40 spins.) It matters a lot which dreidel you spin.
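If you want to check a suspect dreidel of your own, a simple binomial test will do: a fair dreidel should land on shin about a quarter of the time. Here's a rough sketch in Python (the 27-out-of-40 count is from our kitchen-table trial above; the code itself is only illustrative):

    # Rough sketch: how surprising is 27 shins in 40 spins if the dreidel is fair?
    # A fair four-sided dreidel lands on shin with probability 1/4.
    from scipy.stats import binomtest

    result = binomtest(k=27, n=40, p=0.25, alternative='greater')
    print(f"Chance of 27 or more shins in 40 fair spins: {result.pvalue:.1e}")

The chance is vanishingly small; that dreidel really is cursed, or at least heavily biased.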

And the rules are a mess! No one agrees whether you should round up or round down with hey. No one agrees when the game should end or how low you should let the pot get before you all have to contribute again. No one agrees how many coins to start with or whether you should let someone borrow coins if he runs out. You could try to appeal to various authorities on the internet, but in my experience people prefer to argue and employ varying house rules. Some people hoard their coins and favorite dreidels. Others share dreidels but not coins. Some people slowly unwrap and eat their coins while playing, then beg and borrow from wealthy neighbors when their luck sours.

Now you can, if you want, always push things to your advantage — always contribute the smallest coins in your stash, always withdraw the largest coins in the pot when you spin hey, insist on always using what seems to be the "best" dreidel, always argue for rule interpretations in your favor, eat your big coins and use that as a further excuse to contribute only little ones, et cetera. You could do all of this without ever breaking the rules, and you'd probably end up with the most chocolate as a result.

But here's the twist, and what makes the game so brilliant: The chocolate isn't very good. After eating a few coins, the pleasure gained from further coins is minimal. As a result, almost all of the children learn that they would rather enjoy being kind and generous than hoarding up the most coins. The pleasure of the chocolate doesn't outweigh the yucky feeling of being a stingy, argumentative jerk. After a few turns of maybe pushing only small coins into the pot, you decide you should put a big coin in next time, just to be fair to others and to enjoy being perceived as fair by them.

Of course, it also feels bad always to be the most generous one, always to put in big, take out small, always to let others win the rules arguments, and so forth, to play the sucker or self-sacrificing saint.

Dreidel, then, is a practical lesson in discovering the value of fairness both to oneself and to others, in a context where the rules are unclear and where there are norm violations that aren't rules violations, and where both norms and rules are negotiable, varying from occasion to occasion. Just like life itself, only with mediocre chocolate at stake. I can imagine no better way to spend a holy evening.

Thursday, December 03, 2020

The Race and Gender of U.S. Philosophy PhDs: Trends Since 1973

On December 1, the National Science Foundation released its data on demographic characteristics of U.S. PhD recipients for the academic year ending in 2019, based on the Survey of Earned Doctorates (SED), which normally draws response rates over 90%. NSF has a category for doctorates in Philosophy (which is normally merged with a small group of doctorates specifically in Ethics). The primary available demographic categories are (as usual) gender and race/ethnicity.

For philosophy, I have NSF SED data back to 1973, based on a custom request from 2016. In a 2017 paper, Carolyn Dicey Jennings and I analyze those data through 2014. Today I'm doing a five-year update.

Gender

Carolyn's and my main finding was that although women rose from about 17% of U.S. Philosophy PhDs in the 1970s, to 22% in the 1980s, to 27% in the 1990s, the ratios remained flat thereafter, averaging about 27-28% through the early 2000s to 2014.

How about the past five years? Has there been any increase? There is some reason to hope so: Women constituted about 30% of undergraduate philosophy degree recipients in the U.S. from the 1980s to the mid-2010s, but recently there has been a substantial uptick. Could the same be true at the PhD level?

NSF SED asks "Are you male or female?" with response options "male" and "female". There is no separately marked box for nonbinary, other, or decline to state. Respondents can decline to tick either box, but the structure of the survey doesn't invite that and those who decline to state are always a very small percentage of respondents (in Philosophy, only one among 2424 respondents in the past 5 years). Thus, nonbinary respondents might be underrepresented.

Here are the most recent five years' gender results:

  • 2015: 494 total, 367 male, 127 female (25.6% female)
  • 2016: 493 total, 322 male, 171 female (34.7% female)
  • 2017: 449 total, 326 male, 122 female (27.2% female)
  • 2018: 514 total, 369 male, 145 female (28.2% female)
  • 2019: 474 total, 312 male, 162 female (34.2% female)
Here it is as a chart, going back to 1973:


    Note the curvy trendline: In 2014, Carolyn and I found that a quadratic trendline fit the data statistically much better than a linear trendline -- reflecting the visually evident rise from the 1970s to 1990s and then the flattening from the 1990s to the mid 2010s. For the current analysis, I added one degree of freedom so that the trendline could reflect any apparent increase or decrease since the mid-2010s. As you can see, there is now a gentle trend upward. In other words, the percentage of Philosophy PhDs in the U.S. who are women appears to be back on the rise after a long stable period. However, I think we need a few more years' data before being confident that this reflects a genuine, long-term trend rather than being statistical noise or a temporary blip.
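For the curious, the trendline comparison is nothing fancy -- just polynomial fits of increasing degree to the percentage-by-year series. Here's a minimal sketch in Python (using only the five recent years listed above for brevity; a real run would use the full 1973-2019 series):

    # Minimal sketch of the trendline comparison: fit linear, quadratic, and
    # cubic trendlines to percent-female-by-year data and compare residuals.
    # Only the five recent years listed above are included here, for brevity.
    import numpy as np

    years = np.array([2015, 2016, 2017, 2018, 2019])
    pct_female = np.array([25.6, 34.7, 27.2, 28.2, 34.2])

    x = years - years.mean()  # center the years to keep the fit well-conditioned
    for degree in (1, 2, 3):
        coeffs = np.polyfit(x, pct_female, deg=degree)
        fitted = np.polyval(coeffs, x)
        rss = float(np.sum((pct_female - fitted) ** 2))  # residual sum of squares
        print(f"degree {degree}: residual sum of squares = {rss:.2f}")

A higher-degree fit will always reduce the residuals somewhat; the question, as in the 2017 paper, is whether the improvement is large enough to justify the extra degree of freedom.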

    Race/Ethnicity

    Race and ethnicity are more complicated, in part because the questions and aggregation methods have varied over the decades. As of 2019, race/ethnicity is divided into two questions:

    Are you Hispanic or Latino?
    Mark (X) one
    ( ) No, I am not Hispanic or Latino
    ( ) Yes, I am Mexican or Chicano
    ( ) Yes, I am Puerto Rican
    ( ) Yes, I am Cuban
    ( ) Yes, I am Other Hispanic or Latino - Specify
    (________________)


    What is your racial background?
    Mark (X) one or more
    ( ) American Indian or Alaska Native
    Specify tribal affiliation(s):
    (________________)
    ( ) Native Hawaiian or Other Pacific Islander
    ( ) Asian
    ( ) Black or African American
    ( ) White

Summary race/ethnicity data provided by the NSF generally exclude respondents who are not U.S. citizens or permanent residents (thus excluding 35% of respondents 2015-2019). Hispanic/Latino is aggregated into one category regardless of race, and numbers for the other races don't include respondents identifying as Hispanic/Latino. Also, Pacific Islander is aggregated with Asian. This leaves six main analytic categories: Hispanic (any race), Native American (excluding Hispanic), Asian (excluding Hispanic and including Pacific Islander), Black (excluding Hispanic), White (excluding Hispanic), or more than one race (excluding Hispanic). A further complication is that multiracial data were not consistently reported for parts of the dataset, and the data on Asians for all PhDs appear to be goofed up in the mid-1990s, showing an implausibly large spike that suggests some methodological or reporting change I haven't yet figured out.
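To make those aggregation rules concrete, here is a toy sketch of how a respondent would be binned into the six analytic categories (Python; the function and field names are my own invention, not NSF's actual coding scheme):

    # Toy sketch of the aggregation rules described above (not NSF's actual code).
    def analytic_category(hispanic: bool, races: set) -> str:
        """Bin a U.S. citizen / permanent resident into one of the six categories."""
        if hispanic:
            return "Hispanic (any race)"  # Hispanic/Latino counts regardless of race
        if len(races) > 1:
            return "More than one race (non-Hispanic)"
        (race,) = races
        if race in ("Asian", "Native Hawaiian or Other Pacific Islander"):
            return "Asian, incl. Pacific Islander (non-Hispanic)"
        if race == "American Indian or Alaska Native":
            return "Native American (non-Hispanic)"
        return race + " (non-Hispanic)"  # "Black or African American" or "White"

    # Example: a non-Hispanic respondent who marks both Asian and White.
    print(analytic_category(False, {"Asian", "White"}))  # More than one race (non-Hispanic)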

    With all that in mind, here are graphs of race data in Philosophy back to 1973 for the six main analytic groups, with comparison lines for all PhDs to the extent I was able to find appropriate comparison data. (All graphs and numbers exclude participants for whom ethnic or racial data were unavailable, generally under 5% per year.)

Philosophy PhD recipients are disproportionately White, but there's a long-term, roughly linear decrease in percentage White, both among PhDs as a whole and among Philosophy PhDs.

    In 2019, among U.S. citizens or permanent residents who received PhDs in Philosophy, non-Hispanic Whites constituted 81% (285/352) of those for whom racial and ethnic data were available, compared to 71% of PhDs overall. (The sudden decrease in the mid-1990s is probably an artifact related to the complication about Asian respondents.)

    As is evident from the next two figures, the decline in percentage White is largely complemented by increases in percentage Hispanic and Asian.

    In 2019, among U.S. citizens and permanent residents, Hispanic students received 6.5% of Philosophy PhDs and 8.3% of PhDs overall (up from 3.7% and 4.5% respectively in the year 2000) while Asian students received 5.4% of Philosophy PhDs and 10.0% of PhDs overall (up from 3.1% and 7.8% in 2000).

    Very few Philosophy PhDs were awarded to American Indians and Alaskan Natives. In many years the number is zero. Native Americans are generally underrepresented among PhD recipients -- probably even more so in Philosophy than overall (despite an interesting spike in 1999), and with no sign that the situation is changing. If anything, the trendline appears to be down. Over the past five years, Native Americans have received about 0.3%-0.4% of PhDs overall and 0.2% of philosophy PhDs (3/1843, including zero in the past three years).

    As is evident from the chart below, multiracial students are relatively uncommon but rising fast -- now about 3% of PhD recipients both in Philosophy and overall.

I save Black/African American for last. The situation is difficult to interpret. Like Native American students, Black students have long been underrepresented in Philosophy, both at the Bachelor's and the PhD level, with little increase in representation over the decades. However, if we're willing to squint at the data, and possibly overinterpret them, it looks like the percentage of Philosophy PhD recipients who are Black might have recently started to increase. Thus, I've drawn not only a linear trendline through this graph but also a third-degree trendline, similar to the one used for women, reflecting the possibility of a recent increase after a relatively flat period through the mid-2000s.

    Whether that apparent increase is real I think we won't know for several more years. But if so, that also fits with a trend that Morgan Thompson, Eric Winsberg, and I noticed for Black students to be increasingly likely to express an intention to major in philosophy and maybe also to complete the major. (Obviously, if so, it would not be those same students already completing their PhDs but rather something more general about the wider culture or the culture specifically in Philosophy.)

    Monday, November 23, 2020

    Nazi Philosophers, World War I, and the Grand Wisdom Hypothesis

    A Theory of Jerks and Other Philosophical Misadventures is now out in paperback. Yay!

    I'll celebrate by sharing a sample chapter here.


    Chapter 53: Nazi Philosophers, World War I, and the Grand Wisdom Hypothesis

    As described in chapter 4, I’ve done a fair bit of empirical research on the moral behavior of ethics professors. My collaborators and I have consistently found that ethicists behave no better than socially comparable nonethicists. However, the moral violations that we’ve examined have mostly been minor: stealing library books, neglecting student emails, littering, forgetting to call mom. Some behaviors are arguably much more significant -- donating large amounts to charity, vegetarianism -- but there’s certainly no consensus about the moral importance of those things. Sometimes I hear the objection that the moral behavior I’ve studied is all trivial stuff: that even if ethicists behave no better in day-to-day ways, on issues of great moral importance -- decisions that reflect on one’s overarching worldview, one’s broad concern for humanity, one’s general moral vision -- professional ethicists, and professional philosophers in general, might show greater wisdom. Call this the Grand Wisdom Hypothesis.

    Now let’s think about Nazis. Nazism is an excellent test case of the Grand Wisdom Hypothesis, since pretty much everyone now agrees that Nazism is extremely morally odious. Germany had a robust philosophical tradition in the 1930s, and excellent records are available on individual professors’ participation in or resistance to the Nazi movement. So we can ask: Did a background in philosophical ethics serve as any kind of protection against the moral delusions of Nazism? Or were ethicists just as likely to be swept up in noxious German nationalism as were others of their social class? Did reading Kant on the importance of treating all people as “ends in themselves” help philosophers better see the errors of Nazism, or did philosophers instead tend to appropriate Kant for anti-Semitic and expansionist purposes?

    Heidegger’s involvement with Nazism is famous and much discussed, but he’s only one data point. There were also, of course, German philosophers who opposed Nazism, possibly partly—if the Grand Wisdom Hypothesis is correct—because of their familiarity with theoretical ethics. My question is quantitative: Were philosophers as a group any more likely than other academics to oppose Nazism or any less likely to be enthusiastic supporters? I am not aware of any careful quantitative attempts to address this question.

    There’s a terrific resource on ordinary German philosophers’ engagement with Nazism: George Leaman’s (1993) Heidegger im Kontext, which includes a complete list of all German philosophy professors from 1932 to 1945 and provides summary data on their involvement with or resistance to Nazism. In Leaman’s data set, I count 179 philosophers with habilitation in 1932 when the Nazis started to ascend to power, including dozents and ausserordentlichers but not assistants. (Habilitation is an academic achievement beyond the doctorate, with no equivalent in the Anglophone world, but roughly comparable in its requirements to gaining tenure in the US.) I haven’t attempted to divide these philosophers into ethicists and nonethicists, since the ethics/nonethics division wasn’t as sharp then as it is now in twenty-first century Anglophone philosophy. (Consider Heidegger again. In a sense he’s an ethicist, since he writes among other things on the question of how one should live, but his interests range broadly.) Of these 179 philosophers, 58 (32 percent) joined the Nazi Party.[28] This compares with estimates of about 21–25 percent Nazi Party membership among German professors as a whole.[29] Philosophers were thus not underrepresented in the Nazi Party.

    To what extent did joining the Nazi Party reflect enthusiasm for its goals versus opportunism versus a reluctant decision under pressure? I think we can assume that membership in either of the two notorious Nazi paramilitary organizations, the Sturmabteilung (Storm Detachment, SA) or the Schutzstaffel (Protection Squadron, SS), reflects either enthusiastic Nazism or an unusual degree of self-serving opportunism: Membership in these organizations was by no means required for continuation in a university position. Among philosophers with habilitation in 1932, 2 (1 percent) joined the SS and another 20 (11 percent) joined (or were already in) the SA (one philosopher joined both), percentages approximately similar to the overall academic participation in these organizations.

    I suspect that this estimate substantially undercounts enthusiastic Nazis, since a number of philosophers (including briefly Heidegger) appear to have gone beyond mere membership to enthusiastic support through their writings and other academic activities, despite not joining the SA or SS. One further possible measure is involvement with Alfred Rosenberg, the notorious Nazi racial theorist. Combining SA and SS members and Rosenberg associates yields a minimum of 30 philosophers (17 percent) on the far right side of Nazism—not even including those who received their posts or habilitation after the Nazis rose to power (and thus perhaps partly because of their Nazism). By 1932, Hitler’s Mein Kampf was widely known and widely circulated, proudly proclaiming Hitler’s genocidal aims. Almost a fifth of professional philosophers thus embraced a political worldview that is now rightly regarded by most as a paradigm example of evil.

    Among philosophers who were not party members, 22 (12 percent) were “Jewish” (by the broad Nazi definition) and thus automatically excluded from party membership. Excluding these from the total leaves 157 non-Jewish philosophers with habilitation before 1933. The 58 Nazis thus constituted 37 percent of established philosophers who had the opportunity to join the party. Of the remainder, 47 (30 percent) were deprived of the right to teach, imprisoned, or otherwise severely punished by the Nazis for Jewish family connections or political unreliability. (This second number does not include five philosophers who were Nazi Party members but also later severely penalized.) It’s difficult to know how many of this group took courageous stands versus found themselves intolerable for reasons outside of their control. The remaining 33 percent we might think of as “coasters”—those who neither joined the party nor incurred severe penalty. Most of these coasters had at least token Nazi affiliations, especially with the Nationalsozialistische Lehrerbund (NSLB, the Nazi organization of teachers), but NSLB affiliation alone probably did not reflect much commitment to the Nazi cause.

    If joining the Nazi Party were necessary for simply getting along as a professor, membership in the Nazi Party would not reflect much commitment to Nazism. The fact that about a third of professors could be coasters suggests that token gestures of Nazism, rather than actual party membership, were sufficient, as long as one did not actively protest or have Jewish affiliations. Nor were the coasters mostly old men on the verge of retirement (though there was a wave of retirements in 1933, the year the Nazis assumed power). If we include only the subset of 107 professors who were not Jewish, received habilitation before 1933, and continued to teach past 1940, we still find 30 percent coasters (or 28 percent, excluding two emigrants).

    The existence of unpunished coasters shows that philosophy professors were not forced to join the Nazi Party. Nevertheless, a substantial proportion did so voluntarily, either out of enthusiasm or opportunistically for the sake of career advancement. A substantial minority, at least 19 percent of the non-Jews, occupied the far right of the Nazi Party, as reflected by membership in the SS or SA or association with Rosenberg. It is unclear whether pressures might have been greater on philosophers than on those in other disciplines, but there was substantial ideological pressure in many disciplines: There was also Nazi physics (no Jewish relativity theory, for example), Nazi biology, Nazi history, and so on. Given the possible differences in pressure and the lack of a data set strictly comparable to Leaman’s for the professoriate as a whole, I don’t think we can conclude that philosophers were especially more likely to endorse Nazism than were other professors. However, I do think it is reasonable to conclude that they were not especially less likely.

    Nonetheless, given that about a third of non-Jewish philosophers were severely penalized by the Nazis (including one executed for resistance and two who died in concentration camps), it remains possible that philosophers are overrepresented among those who resisted or were ejected. I have not seen quantitative data that bear on this question.

    #

    In doing background reading for the analysis I’ve just presented, I was struck by the following passage from Fritz Ringer’s 1969 classic Decline of the German Mandarins:

    Early in August of 1914, the war finally came. One imagines that at least a few educated Germans had private moments of horror at the slaughter which was about to commence. In public, however, German academics of all political persuasions spoke almost exclusively of their optimism and enthusiasm. Indeed, they greeted the war with a sense of relief. Party differences and class antagonisms seemed to evaporate at the call of national duty … intellectuals rejoiced at the apparent rebirth of “idealism” in Germany. They celebrated the death of politics, the triumph of ultimate, apolitical objectives over short-range interests, and the resurgence of those moral and irrational sources of social cohesion that had been threatened by the “materialistic” calculation of Wilhelmian modernity.

    On August 2, the day after the German mobilization order, the modernist [theologian] Ernst Troeltsch spoke at a public rally. Early in his address, he hinted that “criminal elements” might try to attack property and order, now that the army had been moved from the German cities to the front. This is the only overt reference to fear of social disturbance that I have been able to discover in the academic literature of the years 1914–1916 … the German university professors sang hymns of praise to the “voluntary submission of all individuals and social groups to this army.” They were almost grateful that the outbreak of war had given them the chance to experience the national enthusiasm of those heady weeks in August. (180–81)

    With the notable exception of Bertrand Russell (who lost his academic post and was imprisoned for his pacifism), philosophers in England appear to have been similarly enthusiastic. Ludwig Wittgenstein never did anything so cheerily, it seems, as head off to fight as an Austrian foot soldier. Alfred North Whitehead rebuked his friend and coauthor Russell for his opposition to the war and eagerly sent off his sons North and Eric. (Eric Whitehead died.) French philosophers appear to have been similarly enthusiastic. It’s as though, in 1914, European philosophers rose as one to join the general chorus of people proudly declaring, “Yay! World war is a great idea!”

    If there is anything that seems, in retrospect, plainly, head-smackingly obviously not to have been a great idea, it was World War I, which destroyed millions of lives to no purpose. At best, it should have been viewed as a regrettable, painful necessity in the face of foreign aggression that hopefully could soon be diplomatically resolved, yet that seems rarely to have been the mood of academic thought about war in 1914. Philosophers at the time were evidently no more capable of seeing the downsides of world war than was anyone else. Even if those downsides were, in the period, not entirely obvious upon careful reflection—the glory of Bismarck and all that?—with a few rare and ostracized exceptions, philosophers and other academics showed little of the special foresight and broad vision required by the Grand Wisdom Hypothesis.

    Here’s a model of philosophical reflection on which philosophers’ enthusiasm for World War I is unsurprising: Philosophers -- and everyone else -- possess their views about the big questions of life for emotional and sociological reasons that have little to do with their philosophical theories and academic research. They recruit Kant, Mill, Locke, Rousseau, and Aristotle only after the fact to justify what they would have believed anyway. Moral and political philosophy is nothing but post hoc rationalization.

    Here’s a model of philosophical reflection on which philosophers’ enthusiasm for World War I is, in contrast, surprising: Reading Kant, Mill, Locke, Rousseau, Aristotle, and so on helps induce a broadly humanitarian view, helps you see that people everywhere deserve respect and self-determination, moves you toward a more cosmopolitan worldview that doesn’t overvalue national borders, helps you gain critical perspective on the political currents of your own time and country, and helps you better see through the rhetoric of demagogues and narrow-minded politicians.

    Both models are of course too simple.

    #

    When I was in Berlin in 2010, I spent some time in the Humboldt University library, browsing philosophy journals from the Nazi era. The journals differed in their degree of alignment with the Nazi worldview. Perhaps the most Nazified was Kant-Studien, which at the time was one of the leading German-language journals of general philosophy (not just a journal for Kant scholarship). The old issues of Kant-Studien aren’t widely available, but I took some photos. Below, Sascha Fink and I have translated the preface to Kant-Studien volume 40 (1935):

    Kant-Studien, now under its new leadership that begins with this first issue of the fortieth volume, sets itself a new task: to bring the new will, in which the deeper essence of the German life and the German mind is powerfully realized, to a breakthrough in the fundamental questions as well as the individual questions of philosophy and science.

    Guiding us is the conviction that the German Revolution is a unified metaphysical act of German life, which expresses itself in all areas of German existence, and which will therefore—with irresistible necessity—put philosophy and science under its spell.

    But is this not—as is so often said—to snatch away the autonomy of philosophy and science and give it over to a law alien to them?

    Against all such questions and concerns, we offer the insight that moves our innermost being: that the reality of our life, that shapes itself and will shape itself, is deeper, more fundamental, and more true than that of our modern era as a whole—that philosophy and science, which compete for it, will in a radical sense become liberated to their own essence, to their own truth. Precisely for the sake of truth, the struggle with modernity—maybe with the basic norms and basic forms of the time in which we live—is necessary. It is—in a sense that is alien and outrageous to modern thinking—to recapture the form in which the untrue and fundamentally destroyed life can win back its innermost truth—its rescue and salvation. This connection of the German life to fundamental forces and to the original truth of Being and its order—as has never been attempted in the same depth in our entire history—is what we think of when we hear that word of destiny: a new Reich.

    If on the basis of German life German philosophy struggles for this truly Platonic unity of truth with historical-political life, then it takes up a European duty. Because it poses the problem that each European people must solve, as a necessity of life, from its own individual powers and freedoms.

    Again, one must—and now in a new and unexpected sense, in the spirit of Kant’s term, “bracket knowledge” [das Wissen aufzuheben]. Not for the sake of negation: but to gain space for a more fundamental form of philosophy and science, for the new form of spirit and life [für die neue Form … des Lebens Raum zu gewinnen]. In this living and creative sense is Kant-Studien connected to the true spirit of Kantian philosophy.

    So we call on the productive forces of German philosophy and science to collaborate in these new tasks. We also turn especially to foreign friends, confident that in this joint struggle with the fundamental questions of philosophy and science, concerning the truth of Being and life, we will not only gain a deeper understanding of each other, but also develop an awareness of our joint responsibility for the cultural community of peoples.

    —H. Heyse, Professor of Philosophy, University of Königsberg

    #

    Is it just good cultural luck -- the luck of having been born into the right kind of society -- that explains why twenty-first-century Anglophone philosophers reject such loathsome worldviews? Or is it more than luck? Have we somehow acquired better tools for rising above our cultural prejudices?

    Or -- as I’ll suggest in chapter 58 -- ought we to entirely refrain from self-congratulation, whether for our luck or our skill? Maybe we aren’t so different, after all, from the early-twentieth-century Germans. Maybe we have our own suite of culturally shared, heinous moral defects, invisible to us or obscured by a fog of bad philosophy.

    ---------------------------------------

    NOTES:

    [28] A few joined the SA or SS but not the Nazi Party, but since involvement in one of these dedicated Nazi organizations reflects at least as much involvement in Nazism as does Nazi Party membership alone, I have included them in the total.

    [29] Jarausch and Arminger 1989.

    Thursday, November 19, 2020

    How to Publish a Journal Article in Philosophy: Advice for Graduate Students and New Assistant Professors

    My possibly quirky advice. General thoughts first. Nitty-gritty details second. Disagreement and correction welcome.

    Should You Try to Publish as a Graduate Student?

    Yes, if you are seeking a job where hiring will be determined primarily on research promise, and if you can do so without excessively hindering progress toward your degree.

    A couple of years ago, I was on a search committee for a new tenure-track Assistant Professor at U.C. Riverside, in epistemology, philosophy of action, philosophy of language, and/or philosophy of mind. We received about 200 applications. How do you, as an applicant, stand out in such a crowded field? I noticed three main ways:

    (1.) Something about your dissertation abstract or the first few pages of your writing sample strikes a committee member as extremely interesting -- interesting enough for them to want to read your whole writing sample despite having a pile of 200 in their box. Of course, what any particular philosopher finds interesting varies enormously, so this is basically impossible to predict.

    (2.) Your file has a truly glowing letter of recommendation from someone whose judgment a committee member trusts.

    (3.) You have two or more publications either in well-regarded general philosophy journals (approx. 1-20 on this list) or in the best-regarded specialty journals in your subfield. (Publications in less elite venues probably won't count much toward making you stand out.)

    A couple of good publications, then, is one path toward getting you a closer look.

    Caveats:

    * Publication is neither necessary (see routes 1 and 2) nor sufficient (if the committee doesn't care for what they see after looking more closely).

    * If you spend a year postponing work on your dissertation to polish up an article for publication, that's probably too much of a delay. The main thing is to complete a terrific dissertation.

    * If you're aiming for schools that hire primarily based on teaching, effort spent on polishing publications rather than on improving your teaching profile (e.g., by teaching more courses and teaching them better) might be counterproductive.

    * Some people have argued that academic philosophy would be better off if graduate students weren't permitted to publish and maybe if people published fewer philosophy articles in general. I disagree. But even if you agree with the general principle, it would be an excess of virtue to take a lonely purist stand by declining to submit your publishable work.

    What Should Be Your First Publication?

    Generally speaking, you'll want your first publication to be on something so narrow that you are among the five top experts in the world on that topic.

    Think about it this way: The readers of elite philosophy journals aren't so interested in hearing about free will or the mind-body problem from the 437th most-informed person in the world on these topics. If you haven't really mastered the huge literature on these topics, it will show. With some rare exceptions, as a graduate student or newly-minted assistant prof, publishing an ambitious, broad-ranging paper on a well-trodden subject is probably beyond your reach.

But there are interesting topics on which you can quickly become among the world's leading experts. You want to find a topic that will interest scholars in your subfield but that is small enough that you can read the entire literature on it. Read that entire literature. You'll find you have a perspective that is in some important respect different from others'. Your article, then, articulates that perspective, fully informed by the relevant literature, with which you contrast yourself.

    Some examples from early in my career:

    (a.) the apparent inaccuracy of people's introspective reports about their experience of echolocation (i.e., hearing sounds reflected off silent objects and walls);

    (b.) ambiguities in the use of the term "representation" by developmental psychologists in the (then new) literature on children's understanding of false belief;

    (c.) attempts by Anglophone interpreters of Zhuangzi to make sense of the seeming contradictions in his claims about skepticism.

    These topics were each narrow enough to thoroughly research in a semester's time (given the tools and background knowledge I already had). Since then, (b) has grown too large but (a) and (c) are probably still about the right size.

    The topic should be narrow enough that you really do know it better than almost anyone else in the world and yet interesting enough for someone in your subfield to see how it might illuminate bigger issues. In your introduction and conclusion, you highlight those bigger framing issues (without overcommitting on them).

    The Tripod Theory of Building Expertise

    Now if you're going to have a research career in philosophy, eventually you're going to want to publish more ambitiously, on broader topics -- at least by the time you're approaching tenure. Here's what I recommend: Publish three papers on narrow but related topics. These serve as a tripod establishing your expertise in the broader subarea to which they belong. Once you have this tripod, reach for more general theories and more ambitious claims.

Again, from my own career: My paper on our introspective ignorance of the experience of echolocation ((a) above) was followed by a paper on our introspective ignorance of our experience of coloration in dreams and a paper on the weak relationship between people's introspective self-reports of imagery experience and their actually measured imagery skills. Each is a small topic, but combined they suggested a generalization: People aren't especially accurate introspectors of features of their stream of conscious experience (contra philosophical orthodoxy at the time). (N.B.: In psychology, critiques of introspection generally focused on introspection of the causes of our behavior, not introspection of the stream of ongoing inner experience.) My work on this topic culminated in a broad, ambitious, skeptical paper in Philosophical Review in 2008. These articles were then further revised into a book.

    Simultaneously, I built a tripod of expertise on belief: first, a detailed (but unpublished) criticism of Donald Davidson's arguments that believing requires having language, relying on a "dispositional" approach to belief; second, a dispositionalist model of gradual belief change in children's understanding of object permanence and false belief; third, a discussion of how dispositional approaches to belief neatly handle vagueness in belief attribution in "in-between" cases of kind-of-believing. These culminated in a general paper on the nature of belief, from a dispositionalist perspective.

    Imagine a ship landing on an alien planet: It sets down some tiny feet of narrow expertise. If the feet are a little separated but not too far apart, three are enough to support a stable platform -- a generalization across the broader region that they touch (e.g., empirical evidence suggests that we are bad introspectors of the stream of experience; or dispositionalism elegantly handles various puzzles about belief). From this platform, you hopefully have a new, good viewing angle, grounded in your unique expertise, on a large issue nearby (e.g., the epistemology of introspection, the nature of belief).

    Writing the Paper

    A typical journal article is about 8000 words long. Much longer, and reviewers start to tire and you bump up against journals' word limits. Much shorter, and you're not talking about a typical full-length journal article (although some journals specialize in shorter articles).

    Write a great paper! Revise it many times. I recommend retyping the whole thing from beginning to end at least once, to give yourself a chance to actively rethink every word. I recommend writing it at different lengths: a short conference version that forces you to focus efficiently on the heart of the matter, a long dissertation-chapter version that forces you to give an accurate blow-by-blow accounting of others' views and what is right and wrong in them. Actively expanding and contracting like this can really help you corral and discipline your thoughts.

    Cite heavily, especially near the beginning of the paper. Not all philosophers do this (and I don't always do it myself, I confess). But there are several reasons to cite generously.

    First, other scholars should be cited. Their work and their influence on you should be recognized. This is good for them, and it's good for the field, and it's good for your reader. If you cite only a few people, it will probably be the same few big names everyone else cites, burying others' contributions and amplifying the winner-take-all dynamics in philosophy.

    Second, it establishes your credibility. It helps show that you know the topic. Your great command of the topic shows in other ways too! But the reader and the journal's reviewers (who advise the editor on whether to accept your article) will feel reassured if they can say to themselves, "Yes, the author has read all the good recent literature on this topic. They cite all the right stuff."

    Third, one of the ways that journals select reviewers is by looking at your reference list. Your citations are, in a way, implicit recommendations of other experts in the field who might find your topic interesting. Even if you disagree with them, as long as you treat them fairly and respectfully, reviewers are generally happy to see themselves cited in the papers they are reviewing. Citing helps you build a pool of potential reviewers who might be positively disposed toward your topic and article.

    Your introduction and conclusion help the reader see why your topic should be of broad interest among those in your subfield. The body of your paper lays out the narrow problem and your insightful answer. Keep focused on that narrow problem.

    If the topic is narrow enough that your friends can't imagine how you could write 8000 words about it, while you are expert enough that it's hard to imagine how you could do it justice in only 8000 words, that's a good sign.

    Choosing a Journal

    You needn't write with a particular target journal in mind. Just write a terrific philosophy article. (Lots of professors have circulating draft papers on their websites. Typically, these are in something pretty close to the form of what they submit to journals. Use these as models of the general form.)

    In choosing a journal, you probably want to keep in mind three considerations:

    (i.) Prestige of the journal, either in general or in your subfield.

    (ii.) Response time of the journal (some data are available here) and possibly other editorial practices you care about, such as open access or anonymous reviewing.

    (iii.) Fit between the interests of the journal's readers and your article.

    (Wow, I'm really digging threes today!)

    On iii, it can help to note where recent work on the topic has been published. You also want to consider whether your topic is more likely to be appreciated in a specialist's journal.

    On i vs ii: Here you need to think about how much time you have to see the paper through to publication. If you're near the job market or tenure, you might want to focus on journals with quicker response times and less selective journals that are more likely to say yes. You might not want to wait a year for Journal of Philosophy to very likely tell you no. I recommend creating a list of six journals -- one aspirational journal that's a bit of a reach (if you have enough time), three good journals that you think are realistic, and two fall-back journals you'd still be happy to publish in. When that rejection comes, it's easier to cope if your backup plan is already set. Acceptance rates in the most elite philosophy journals are small, and bear in mind that you're competing with eminent scholars as well as graduate students and assistant professors.

    I usually figure on about two years between when I first submit a paper and when it is finally accepted for publication somewhere.

    Submit to only one journal at a time. This is standard in the field, and editors and reviewers will be seriously annoyed if they discover you're not heeding this advice.

    Preparing Your Manuscript

    Once you've chosen your journal, prepare your manuscript for submission to that journal by creating an anonymized version in a boring font, with an abstract, keywords, and word count, and by following any other instructions the journal lists on its webpage in its guide for authors. (One exception: You needn't spend all day formatting the references in the required way. As long as the references are consistently formatted, no one really cares at the submission stage.)

    Boring font: Unless there's some reason to do otherwise, I recommend Times New Roman 12, double spaced.

    Anonymized: Remove the title page and your name. Remove revealing self-references, if any, such as "as I argued in Wong (2018)". You can either cut the reference, cite it in the third person ("as Wong (2018) argued"), or cite it anonymously ("as I argued in [Author's Article 1]"). Remove other compromises to anonymity, such as acknowledgements.

    First page: Title, then abstract (look at the recent issues to see how long abstracts tend to run), then maybe five keywords (these don't matter much, but look at a recent issue for examples), and word count including notes and references (rounding to the nearest hundred is fine).

    Second page: Title again, then start your paper.

    Have page numbers and a shortened version of the title in the header or footer.

    If your article has notes, I recommend formatting them as footnotes rather than endnotes for the purposes of review, even if the journal uses endnotes for published articles. It doesn't matter much, but most reviewers like it better and it makes your scholarly credibility just a little more salient up front.

    All of the above, of course, would be overridden by contrary instructions on the journal's website.

    If you're attaching to an email or submitting through a portal that asks for a cover letter, the cover letter need not be anything long or fancy. Something like:

    Dear Prof. Lewis:

    Attached please find "A Tactile Refutation of Duomorphholismicism" (about 8000 words), intended as a new submission to Holomorphicism Studies Bulletin. The article has been prepared for anonymous review and is not under consideration for publication elsewhere.

    Sincerely,

    [Your Name]

    Referee Reports

    Your article will probably either be desk rejected by the editor or sent out to reviewers.

    Desk rejection is a relatively quick decision (within a few weeks) that the article is outside of the scope of the journal, or doesn't meet the journal's standards or requirements, or is unlikely to be of sufficient interest to the journal's readers.

    If your article isn't desk rejected, it will be sent to one or two, or sometimes more, reviewers. Reviewers are chosen by the editor based on some combination of (1) does the editor know of the person as a good scholar working in the field, (2) is the person reasonably likely to say yes, (3) has the person written decent referee reports in the past, and (4) if 1-3 don't bring anyone immediately to mind, the editor might skim the references to see if any names pop out as potential reviewers. Reviewers receive an email typically containing the title and abstract of the paper and asking if they are willing to review the paper for the journal. If the reviewer doesn't reply with a yes or a no within a few days, they will probably get a nudge. If the reviewer declines, they will typically be asked if they could suggest a few names of other potential reviewers. Refereeing is thankless work, and it can take a lot of time to do it well, and it doesn't benefit the reviewer professionally very much -- so sometimes it can take several weeks for editors to find suitable reviewers.

    In philosophy, reviewers will usually be given at least two months to return a referee report (a few journals try to be faster). The referee report will have a recommendation of "accept", "revise and resubmit", or "reject" -- sometimes with finer-grained distinctions between accept and R&R such as "accept with revisions" or "minor revisions". It is rare to get a straight acceptance in your first round of submission. What you're shooting for is R&R.

    After the reviewers complete their reports (sometimes requiring several rounds of nudging by the editor), the editor will make a decision. For the most selective journals, split decisions typically but not always go against the author (e.g., if Reviewer 1 says R&R and Reviewer 2 says reject, the editor is likely to reject). It's generally considered good practice for journals to share anonymized referee reports with the author, but not all journals do so.

    If you are rejected with referee reports:

    Remember your backup journal, already chosen in advance with this contingency in mind! Read the referee reports and think about what changes you might want to make in light of those referee reports. If the reports seem insightful, great! If the reviewers missed the point or seem totally uncharitable, maybe there are some clarifications you can make to prevent readers from making those same interpretative mistakes at the next journal.

    Don't linger too long, unless the referee report really causes you to see the issues in a new way, sending you back to the drawing board. In most cases, you want to sling a revised version of your paper to the next journal within a few weeks.

    If you get an R&R:

    Read the referee reports very carefully. Note every criticism they make and every change they suggest. Your revision should address every single one of these points. You can rephrase things to avoid the criticisms. You can mention and explicitly respond to the criticisms. If the reviewer recommends a structural change of some sort, consider making that structural change. In general, you should make every change the reviewers request, unless you think the change would make your paper worse. Depending on how purist you are, you might also consider making some changes that you feel make your paper just a little worse, e.g., clunkier, if you think they don't compromise your core content. If you think a recommended change would make your paper worse, you need not make that change, but you should address it in a new cover letter.

    You should aim to resubmit a revised version of your paper within a couple of months of receiving the referee reports. (If you send it the next day, everyone knows you didn't seriously engage with the reviewers' suggestions. If you send it ten months later, the reviewers might not remember the paper very well or might not still be available.)

    Your new submission should contain a detailed cover letter addressing the reviewers' suggestions, alongside the revised version of your paper. My impression is that at most journals a majority of papers that receive R&R are eventually accepted. Sometimes it requires more than one round of R&R, and rejection after R&R is definitely a live possibility. To be accepted, the reviewers and editor must come to feel that you have adequately addressed the reviewers' concerns. The aim of the cover letter is to show how you have done so.

    In my cover letters, I usually quote the reviewers' letters word for word (block indented), inserting my replies (not indented). If they have praise, I insert responses like "I thank the reviewer for the kind remark about the potential importance of this work" (or whatever). For simple criticisms and corrections, you can insert responses like "Corrected. I appreciate the careful eye." or "I now respond to this concern in a new paragraph on page 7 of the revised version of the manuscript."

    For more difficult issues, or where you disagree with the reviewer, you will want to explain more in your cover letter. It might seem to you that the reviewer is being stupid or uncharitable or missing obvious things. While this is possibly true, it is also possible that you are being defensive or your writing is unclear or you are not seeing weaknesses in your argument. You should always try to keep a tone of politeness, gratitude, and respect -- and if possible, think of misreadings as valuable feedback about issues on which you could have been clearer. I try to push back against reviewers' suggestions only when I feel it's important, and hopefully on at most one substantial issue per reviewer.

    If there's a strongly voiced objection based on a misreading, this should be handled delicately. First, revise the text so that it no longer invites that misreading. Be extra clear in the revised version of the text what you are not saying or committing to. Then in the cover letter, explain that you have clarified the text to avoid this interpretation of your position. But also answer the objection that the reviewer raised, so they aren't left feeling like you ducked the issue and they aren't left curious. In this case, your response to the objection can be entirely in the cover letter and need not appear in the paper at all. (You might or might not agree that the objection would have been fatal to the position they had thought you were taking.)

    Generally, the revised paper and the reply to reviewers will go back to the same reviewers. Typically, a reviewer will recommend acceptance after an R&R if they feel you have adequately engaged with and addressed their concerns (even if in the end they don't agree); they will recommend rejection if they feel that you didn't engage their concerns seriously or if your engagement reveals (in their judgment) that their original concerns really are fatal to the whole project; and they will recommend a second round of R&R if they feel you've made progress but one or two important issues still remain outstanding.

    Some people add footnotes thanking anonymous reviewers. In my view it's unnecessary. Everyone knows that virtually every article contains changes made in response to the criticisms of anonymous reviewers.

    After It's Accepted

    (1.) Celebrate! Yay!

    (2.) Put it on your c.v. as "forthcoming" in the journal that accepted it. Yay!

    (3.) Keep your eye out for page proofs. Some journals give you just a few days to implement corrections after receiving the proofs, and it's not uncommon for there to be screwy copyediting mistakes that it would be embarrassing to see in print. You can also make minor wording changes and corrections during proofs. Journals discourage making big changes at this stage, such as inserting whole new paragraphs, though if it's important you can try to make the case.

    --------------------------

    Nov 20, 2020, 12:21 pm, ETA

    Marcus Arvan at Philosophers' Cocoon and Bryce at Daily Nous raise some valuable counterpoints, with some finer-grained thoughts about who should and who should not focus on publishing in this way in graduate school. I expect there will be further interesting comments on both sites.

    Thursday, November 12, 2020

    The Nesting Problem for Theories of Consciousness

    In 2016, Tomer Fekete, Cees Van Leeuwen, and Shimon Edelman articulated a general problem for computational theories of consciousness, which they called the Boundary Problem. The problem extends to most mainstream functional or biological theories of consciousness, and I will call it the Nesting Problem.

    Consider your favorite functional, biological, informational, or computational criterion of consciousness, criterion C. When a system has C, that system is, according to the theory, conscious. Maybe C involves a certain kind of behaviorally sophisticated reactivity to inputs (as in the Turing Test), or maybe C involves structured meta-representations of a certain sort, or information sharing in a global workspace, or whatever. Unless you possess a fairly unusual and specific theory, probably the following will be true: Not only the whole animal (alternatively, the whole brain) will meet criterion C. So also will some subparts of the animal and some larger systems to which the animal belongs.

    If there are relatively functionally isolated cognitive processes, for example, they will also have inputs and outputs, and integrate information, and maybe have some self-monitoring or higher-order representational tracking -- possibly enough, in at least one subsystem, if the boundaries are drawn just so, to meet criterion C. Arguably too, groups of people organized as companies or nations receive group-level inputs, engage in group-level information processing and self-representation, and act collectively. These groups might also meet criterion C.[1]

    Various puzzles, or problems, or at least questions immediately follow, which few mainstream theorists of consciousness have engaged seriously and in detail.[2] First: Are all these subsystems and groups conscious? Maybe so! Maybe meeting C truly is sufficient, and there's a kind of consciousness transpiring at these higher and/or lower levels. How would that consciousness relate to consciousness at the animal level? Is there, for example, a stream of experience in the visual cortex, or in the enteric nervous system (the half billion neurons lining your gut), that is distinct from, or alternatively contributes to, the experience of the animal as a whole?

    Second: If we want to attribute consciousness only to the animal (alternatively, the whole brain) and not to its subsystems or to groups, on what grounds do we justify denying consciousness to subsystems or groups? For many theories, this will require adjustment to or at least refinement of criterion C or alternatively the defense of a general "exclusion postulate" or "anti-nesting principle", which specifically forbids nesting levels of consciousness.

    Suppose, for example, that you think that, in humans, consciousness occurs in the thalamocortical neural loop. Why there? Maybe because it's a big hub of information connectivity around the brain. Well, the world has lots of hubs of complex information connectivity, both at smaller scales and at larger scales. What makes one scale special? Maybe it has the most connectivity? Sure, that could be. If so, then maybe you're committed to saying that connectivity above some threshold is necessary for consciousness. But then we should probably theorize that threshold. Why is it that amount rather than some other amount? And how should we think about the discontinuity between systems that barely exceed the threshold and those that barely fall short?

    Or maybe instead of a threshold, it's a comparative matter: Whenever systems nest, whichever has the most connectivity is the conscious system.  But that principle can lead to some odd results. Or maybe it's not really C (connectivity, in this example) alone but C plus such-and-such other features, which groups and subsystems lack. Also fine! But again, let's theorize that. Or maybe groups and subsystems are also conscious -- consciousness happens simultaneously at many levels of organization. Fine, too! Then think through the consequences of that.[3]

    My point is not that these approaches won't work or that there's anything wrong with them. My point is that this is a fundamental question about consciousness which is open to a variety of very different views, each of which brings challenges and puzzles -- challenges and puzzles which philosophers and scientists of consciousness, with a few exceptions, have not yet seriously explored.

    --------------------------

    Notes

    [1] For an extended argument that the United States, conceived of as an entity with people as parts, meets most materialist criteria for being a conscious entity, see my essay here. Philip Pettit also appears to argue for something in this vicinity.

    [2] Giulio Tononi is an important exception (e.g., in Oizumi, Albantakis, and Tononi 2014 and Tononi and Koch 2015).

    [3] Luke Roelofs explores a panpsychist version of this approach in his recent book Combining Minds, which was the inspiration for this post.

    [image source]

    Wednesday, November 04, 2020

    Gender Proportions among Faculty in 98 PhD-Granting U.S. Philosophy Departments

    If you need a reprieve from an excessively exciting U.S. election, here's some mostly unsurprising news about gender proportions in philosophy.

    Yesterday, the Women in Philosophy / Demographics in Philosophy research group released draft numbers on women faculty in PhD granting philosophy departments in the United States. These data are based on publicly available sources (mostly department websites) and reflect a snapshot from 2019. Follow the link for methodological details. Two important changes from earlier work are a coding category for publicly nonbinary philosophers and additional methods for checking the accuracy of the data. The project was led by Gregory Peterson and mostly follows the 2015 methodology of Sherri Conklin, Irina Artamanova, and Nicole Hassoun. (I also consulted during some phases of the project.)

    Overall, across the 98 programs studied, women comprised 24% (681/2801) of the faculty. This isn't far from the 27% women among respondents to the American Philosophical Association's 2018 faculty survey, the 26% women across a wide swath of college philosophy instructors from Debra Nails and John Davenport's study of the 2017 Directory of American Philosophers, and the 24% in Conklin and colleagues' 2015 analysis.

    Among tenured and tenure-track faculty, Peterson and collaborators find 28% women (477/1689), very similar to the 27% that Julie Van Camp found looking at the same set of 98 PhD granting departments in 2018.

    Thus, within a few percentage points, the picture from 2015-2019 is very consistently around 24%-28% women faculty in philosophy departments, across a variety of data collection methods and a variety of samples. The results are broadly similar whether you look at elite PhD programs, at APA membership, or at all faculty listed in the Directory of American Philosophers.

    Peterson and colleagues find four philosophers among the 1689 who were publicly non-binary as of fall 2019 (less than 1%). I could imagine this number rising a bit with corrections.

    They also break the data down by rank. Looking at the three primary tenure-stream ranks, I see:

    Assistant: 43% women (122/283)
    Associate: 32% (148/456)
    Full: 22% (207/950)

    Here are the same data with 95% confidence intervals:

    [figure: proportion of women faculty by rank, with 95% confidence intervals]
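
    (The figure's method for computing intervals isn't specified in the text, but here is a minimal sketch of how 95% confidence intervals might be computed from the counts quoted above -- my own illustration using Wilson score intervals, not necessarily the method behind the original figure.)

        import math

        def wilson_ci(successes, total, z=1.96):
            """95% Wilson score interval for a binomial proportion."""
            p = successes / total
            denom = 1 + z**2 / total
            center = (p + z**2 / (2 * total)) / denom
            margin = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
            return center - margin, center + margin

        # Counts of women / total tenure-stream faculty at each rank, as quoted above.
        ranks = {"Assistant": (122, 283), "Associate": (148, 456), "Full": (207, 950)}
        for rank, (women, total) in ranks.items():
            low, high = wilson_ci(women, total)
            print(f"{rank}: {women / total:.0%} women, 95% CI [{low:.0%}, {high:.0%}]")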

    This difference by rank is striking, but not unusual in these sorts of analyses. (In the Directory of American Philosophers, the corresponding numbers are 34% assistant, 28% associate, and 21% full. In Conklin and colleagues' 2015 analysis, the numbers are 42% assistant, 30% associate, and 23% full.)

    The question that immediately arises is whether these numbers reflect a generational shift. Should we expect a change in the gender proportions in the philosophy professoriate over the next 10 to 20 years, as these assistant professors age? In other words, to what extent is the difference a cohort effect, reflecting a generally much higher percentage of women in the younger generation than in the older generation? And to what extent is it, instead, a rank effect, reflecting either slower and less certain advance for women from the lower to the higher ranks or more recruiting and better retention of men at higher ranks?

    We are not currently in a position, I think, to answer that. Check back in a decade or two!

    Friday, October 30, 2020

    Slippery In-Between Persons: Growing, Fading, Merging, Splitting

    Philosophers normally treat personhood as all-or-nothing. An entity either is a person or is not a person. There are no half-persons, and no one has more personhood than anyone else. We usually don't think much about vague or in-between cases.

    One exception is the literature on so-called "marginal cases" -- humans with severe disabilities, for example, or entities in the slow progression from fertilized egg to self-aware toddler, who either wholly or partially lack features that philosophers sometimes regard as essential to personhood.

    The literature on personal identity offers other exceptions -- classically in Derek Parfit's wild thought experiments in Reasons and Persons and recently in Luke Roelofs' Combining Minds. "Split brain" callosotomy patients are another interesting case, as are craniopagus twins.

    [P.T. Selbit, in an early attempt at person splitting]

    The latter cases, though less common (and often purely hypothetical), present a deeper challenge to our notion of personhood than the former cases. The difference between one and two persons is more difficult to manage, conceptually and socially, than the difference between zero and one, which can be handled partly through extending our ordinary thinking about ordinary persons to a range of "marginal" cases, with some tweaks or reductions.

    I won't press that point here. What I want to note instead is this: On almost all notions of personhood, we think of personhood as coming in discrete countable numbers -- zero, one, two, etc. -- not 1/8 or 2.4. At the same time, on most accounts of personhood, personhood is grounded in smoothly gradable phenomena, such as cognitive and emotional capacities, social role, and the history of or potentiality for these.

    Thus we can in general construct a gradual series from cases in which we have, clearly, a single person, down to zero or up to two, forcing a theoretical trilemma. In general form:

    (1.) Case 1: an ordinary human being who no one would reasonably doubt counts as one person.

    (2.) Case n: an entity that is either clearly not a person (the zero case) or a situation in which there clearly are two distinct people (the two case).

    (3.) A series of arbitrarily slow and finely-graded steps between 1 and n.

    The zero case is tricky if we want to use real-life examples, given that some philosophers would want to describe as persons both a fertilized egg and someone in an irreversibly comatose state due to extreme brain damage. Rather than entering those fights, we can use science fiction: Imagine removing cells from a person's body, one at a time, while keeping the body alive, until eventually only a single neuron remains.

    For the two case, we might imagine slowly budding one brain off of another, or we might imagine two people whose brains are slowly interconnected eventually becoming an entity operating as a single person with a single stream of conscious experience.

    Imagine, then, such gradual series between 1 and 0 or between 1 and 2. Here's the trilemma.

    Option 1: A discrete jump. You might argue that despite the slow series of changes, there will always be a sharp threshold at which personhood vanishes, arrives, merges, or splits. Despite continuity in the grounding properties of personhood, personhood itself remains discrete. Removing a single cell might cause a metaphysical saltation from personhood to nonpersonhood. Adding a single connection between two slowly merging brains might cause a metaphysical saltation from two people to one.

    Option 2: Gradations of personhood. At some point between zero and one, you have a kind-of person. More radically, at some point between one and two, you have... what, a person and a half?

    Option 3: Conceptual collapse. Some concepts have false presuppositions built into them and should be rejected entirely (e.g., exorcism, the luminiferous ether). Other concepts are okay for a limited range of cases but fail to apply, either yes-or-no, to other cases (e.g., primeness to things that aren't numbers). Maybe personhood is such a concept -- a concept built exclusively for ordinary countable-persons situations.

    Option 1 strikes me as implausible. (Admittedly, some people are more willing than I to see sudden metaphysical saltations in gradual continua.)

    On Option 3, we might eventually need to retire the concept of personhood. Maybe -- far-fetched science fiction, but not impossible -- the world will eventually be populated with intelligent AI or biological systems that can divide or merge at will (fission/fusion monsters). Our concept of personhood, if it's inherently and unchangeably committed to discrete countability, might end up looking like the quaint, unworkable relic of old days in which brains were inevitably confined to single bodies.

    On Option 2, we face the task of modifying personhood into a gradable concept. I worry about the implications for our thinking about disability if we rush to do that too quickly for the zero case; but the one-to-two case is intriguing and insufficiently explored.

    [image source]