Monday, May 20, 2019

Intuition, Disagreement, and a Rope Around the Earth

Check out this awesome new video by philosopher Jon Ellis at Santa Cruz.

The video starts with this thought experiment from Wittgenstein:

Suppose that a very long piece of rope is wrapped around the equator of the Earth. Now imagine that the rope is lengthened by one yard, but its circular form is preserved, so that the rope no longer fits snugly but occupies a circle at some slight constant distance from the Earth's surface. How great would that distance be? (reported in Horwich 2012, p. 7).

Your attitudes toward philosophical and political propositions might be kind of like your attitude toward that rope -- but with no clear mathematical means to resolve the disagreement.
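The rope puzzle itself, unlike the philosophical cases, does have a clean mathematical resolution, which is part of what makes it a nice foil: since circumference is 2πr, adding one yard of rope lifts the circle by 1/(2π) yards -- about 5.7 inches -- whether it circles the Earth or a tennis ball. A quick check (a sketch in Python; the variable names are mine):

```python
import math

# Circumference C = 2 * pi * r, so adding dC to the circumference
# raises the radius by dC / (2 * pi) -- independent of the sphere's size.
added_rope_yards = 1.0
gap_yards = added_rope_yards / (2 * math.pi)
gap_inches = gap_yards * 36  # 36 inches per yard

print(round(gap_inches, 1))  # 5.7 inches, for Earth or a marble
```

The surprise, of course, is that the Earth's radius never enters the calculation.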

If you like the video, you might check out Jon's and my paper Rationalization in Moral and Philosophical Thought.

Thursday, May 16, 2019

The Ethics of Drones at the University of California

I've been appointed to an advisory board to evaluate the University of California's systemwide policy regarding Unmanned Aircraft Systems or "drones". We had our first meeting Tuesday. Most of the other members of the committee appear to be faculty who use drones in their research, plus maybe a risk analyst or two. (I missed the first part of the meeting with the introductions.)

Drones will be coming to college campuses. They might come in a big way, as Amazon, Google, and other companies continue to explore commercial possibilities (such as food and medicine delivery) and as drones' great potential for security and inspection becomes increasingly clear. Technological change can be sudden, when an organization with resources decides the time is right for a big investment. Consider how fast shareable scooters arrived on campus and in downtown areas.

We want to get ahead of this. Since the University of California is such a large and prominent group of universities, our policies might become a model for other universities. The advisory board is only about a dozen people, and they seem interested to hear the perspective of a philosopher who thinks about the ethics of technology. So I have a substantial chance to shape policy. Help me think. What should we be anticipating? What ethical issues are particularly important to anticipate before Amazon, or whoever, arrives on the scene and suddenly shapes a new status quo?

One issue on my mind is the combination of face recognition software and drones. It's generally considered okay to take pictures of crowds in public places. But drones could create a huge stream of pictures or video, sometimes from unexpected angles or locations, possibly with zoom lenses, and possibly with facial recognition, which creates privacy issues orders of magnitude more serious than photographers on platforms taking still photos of crowds on a busy street.

Another issue on my mind is the possibility of monopoly or cartel power among the first company or first few companies to set up a drone network -- which in the (moderately unlikely but not impossible) event that drone technology starts to become integral to campus life, could become another source of abusive corporate power. (Compare the abuses of for-profit academic journals.)

I'm not as much concerned about conventional safety issues (drones crashing into crowded areas), since such safety issues are already a central focus of the committee. I'd like to use my role on this committee as an opportunity to highlight potential concerns that might be visible to those of us who think about the ethics of technology but not as obviously visible to drone enthusiasts and legally trained risk analysts.

An agricultural research drone at UC Merced

Incidentally, what great fun to be a tenured philosophy professor! I get to help shape drone policy. Last weekend, I enjoyed entertaining UCSD philosophers with lots of amazingly weird facts about garden snails (love darts!, distributed brains!), while snails crawled around on the speaker's podium. This coming weekend, I'll be running a session at the conference of the Science Fiction Writers Association on "Science Fiction as Philosophy". I'm designing a contest to see if any philosopher can write an abstract philosophical argument that actually convinces readers to give money to charity at higher rates than control. (So far, the signs aren't promising.) Why be boring?

Philosophers, do stuff!


Friday, May 10, 2019

Early Onset Summer Illusion

Every spring I suffer the Summer Illusion. The following three incompatible propositions all seem to me, in the spring, to be true:

(1.) When summer arrives, I'll finally get a bunch of that research done which has been crowded out by my teaching and administrative commitments during the school year.

(2.) When summer arrives, I'll finally get a chance to do all of that non-academic stuff that I've been putting off during the school year -- big home maintenance projects, vacation travel to the four new places I want to visit, my plan to catch up on the whole history of golden-age science fiction.

(3.) When summer arrives, I'll finally have a chance to spend a lot more time just relaxing.

The Summer Illusion is surprisingly robust. Every spring, I suffer the Summer Illusion, building up big plans and hopes. Then, every summer, as those hopes fall apart, I scold my springtime self for having fallen, yet again, into the Summer Illusion. The pattern is so common and predictable I've given it a memorable name, The Summer Illusion, to help convince myself that it really is an illusion -- and hopefully not fall into it again. And yet I fall into it again.

You might think that the Summer Illusion depends on entertaining only one of the three propositions at a time. You might think that the way it works is that sometimes I entertain proposition 1 (I'll get my research done!), and at other, different times I entertain proposition 2 (I'll get all my other projects done!), and at still other times I entertain proposition 3 (I'll finally have lots of time to relax!). Largely this is so. And yet the Summer Illusion also survives simultaneous consideration of the three propositions. Even looking at the propositions side by side like this, I am tempted to believe them. Some part of me thinks of course all three can't be true, as I've seen time and time again -- and yet in my heart I continue to believe. Summer days expand so magnificently to fit my fantasies!

This year, I have Early Onset Summer Illusion. While I was working on my book, I thought to myself: Come April and May I will have plenty of time for all of my other projects. And so I put off project and project and project and project. And I also thought to myself: Come April and May, I'll finally have some good time to relax a bit more at work.

It's almost an inversion of busyness. If a period of time has the outward appearance of being a "relaxed", low-commitment period of time, it serves as a fantasy-and-procrastination magnet. I pile my future plans and hopes into that period of time, not noticing the impossibly mounting sum of expectations.

Well, now I'm off to U.C. San Diego to talk to the Philosophy Department about whether garden snails are conscious -- come by if you like! If this blog post seems a little short, well, it seemed like this week would be such an easy week, and so I found that I'd promised to finish this and this and this and this....

[image source]

Thursday, May 02, 2019

Flavors of Group Consciousness: Vanilla, Strawberry, and Chunky Monkey with Extra Nuts

Yesterday, I was rereading Philip Pettit's 2018 article "Consciousness Incorporated". Due to some vocabulary mismatch, I find his exact commitments on group phenomenal consciousness not entirely clear [note 1]. (By "consciousness" or "phenomenal consciousness" I just mean conscious experience, the stream of experience, or "something-it's-like-ness" in a relatively theoretically innocent sense.)

Pettit endorses group consciousness of some flavor. But what flavor? A mild flavor, he hopes: something "sufficient to engage philosophical interest" but not too "challenging and mysterious" (p. 33). In contrast, in my article "If Materialism Is True, the United States Is Probably Conscious", I see myself as defending a radical position that clashes sharply with ordinary common sense. So the question is, can we distinguish among different degrees of ontological commitment in endorsing "group consciousness", with vanilla on one end (palatable to almost everyone) and, on the other end, well, let's call it "Chunky Monkey with Extra Nuts"?

Group Consciousness: Vanilla

Sometimes a group of people all, or mostly, share a particular conscious state -- in a weak or innocuous sense of "sharing". Individually, everyone (or almost everyone, or at least enough of the group) is undergoing that type of conscious experience. So if I say that the theater audience was alarmed by the sudden collapse of the lead actor onstage, or if I say that World Cup viewers around the globe saw the amazing goal, and if we assume that the alarm and the seeing are conscious experiences, then in a certain innocuous sense the groups share conscious experiences.

(One complication: The alarm or the seeing might manifest differently in different members of the group, depending on, e.g., their mood and their viewing position. Set this aside for simplicity.)

Here's a depiction:

[as always, click to clarify and enlarge]

"The audience felt alarmed by the actor's sudden collapse": In this vanilla version of group consciousness, that statement only implies that (enough) of the audience experienced, as individuals, a feeling of alarm (conscious state A in the depiction above).

Pettit clearly wants something more flavorful than this.

Group Consciousness: Chunky Monkey with Extra Nuts

A radical view of group consciousness, in contrast, posits the existence of a stream of experience possessed by the group in addition to the streams of experience possessed by each individual. I have argued that the United States might have a distinctive stream of experience over and above the experiences had by individual citizens and residents of the United States. If streams of experience, or centers of subjectivity, are discrete, countable things (they might not be), and the group contains N members, then on the Chunky-Monkey-Extra-Nuts view, there are N+1 discrete streams of conscious experience -- 300,000,000-ish for the individual members of the United States plus another one for the group as a whole.

Furthermore, on a view of this sort, the conscious experiences of the group mind might be very different from the conscious experiences of any individual members of the group. If the United States is a conscious entity, for example, it might consciously enforce an embargo. But what it feels like, subjectively from the inside, to enforce an embargo might be completely opaque to any individual person. (Alternatively, consider a possible human-grade group mind that is composed out of smaller insect-grade individual minds, capable of appreciating Shakespeare in a manner far beyond what any insect could do: my Antarean Antheads case).

Here's a depiction:

It is highly counterintuitive (in current mainstream Anglophone culture) to think that the United States, or any existing groups of people, actually give rise to a discrete, higher-level stream of consciousness at the group level -- a distinct locus of subjectivity. On this view, group-level mental states arise from, and are not merely composed of, the mental states (and other interactions) of the members, so that there are four, not three, distinct occurrences of experience A (three among the individuals and a fourth for the group) as well as the possibility of experiences (B, D, E) that occur in none of the individuals. If you find this a weird and radical view, you are probably understanding it correctly.

Pettit presumably doesn't want to defend this flavor of group consciousness. [Note 2]

Group Consciousness: Strawberry

Can we and Pettit find an intermediate flavor -- more interesting than vanilla but not as wild as extra nuts?

Pettit compares the relation that a group mind (or "agent") has to its members to the relation that a statue has to the molecules composing it:

As the statue relates to its molecules, so the group agent relates to its members. The group agent is not the same agent as the set of its members, because the set of members is not, as such, an agent at all. But still, the group agent is a set of members -- a suitably organized or networked set -- and qua set it is the same collection as the set of members who make it up. The group agent is distinct from the members under the one aspect but not distinct from them under the other (p. 23).

This physical analogy captures the intended non-radicalness of Pettit's view. It is a little too simple, however, since not everything in the members' minds belongs to the group mind, and since the group can have mental states that none of the members individually possess. This isn't analogous to how we normally think of the molecular composition of statues.

A favorite example of Pettit's is the following: The group has three members. Member A believes P, Q, and not-R. Member B believes P, not-Q, and R. Member C believes not-P, Q, and R. No one believes P-and-Q-and-R. The group decides collectively, however, that P-and-Q-and-R is a view they can stand behind as a group. They might endorse "We believe P&Q&R" -- though not all of them even need to endorse that, as long as there's a procedure by which it comes to constitute the group's view, for example, by being voiced by the leader after a proper consultative process.
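One natural such procedure is proposition-wise majority voting, the mechanism behind List and Pettit's "discursive dilemma". Here's a minimal sketch in Python (the setup is Pettit's example above; the code itself is my illustration, not anything from his article):

```python
# Pettit's example: three members, three propositions.
# True = believes, False = disbelieves.
beliefs = {
    "A": {"P": True,  "Q": True,  "R": False},
    "B": {"P": True,  "Q": False, "R": True},
    "C": {"P": False, "Q": True,  "R": True},
}

def majority(prop):
    """Does a majority of members believe this proposition?"""
    yes = sum(member[prop] for member in beliefs.values())
    return yes > len(beliefs) / 2

# Proposition-wise majority endorses each of P, Q, and R...
group_view = {p: majority(p) for p in ("P", "Q", "R")}
print(group_view)  # {'P': True, 'Q': True, 'R': True}

# ...even though no individual member believes all three.
any_individual_believes_all = any(
    all(member.values()) for member in beliefs.values()
)
print(any_individual_believes_all)  # False
```

The gap between the group's view and every individual's view is exactly what makes this an intermediate case worth examining.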

We might depict the situation thus:

The conscious experience of the group is in the red box: The group consciously believes P&Q&R. The members needn't merely share a conscious state (e.g., A), and no individual believes P&Q&R; rather, due to structural features of their relationships, the group believes P&Q&R in virtue of the right members accepting that "we believe P&Q&R". (Let's ignore the trickier case in which the group believes P&Q&R without any individual member accepting that the group believes this.)

Now, is this an interestingly intermediate "strawberry" flavor of group consciousness? Maybe! But here's a question: In virtue of what is "P&Q&R" a conscious belief that the group possesses? If P&Q&R is a conscious belief because individual group members consciously endorse P and Q and R and/or P&Q&R in the right kind of coordinated way, then maybe this is a fairly vanilla view after all: Conscious experience is still the province of individual people. What Pettit adds is only a somewhat more complex way of picking out which individual conscious experiences count as the group's shared conscious experience. Group consciousness is just individual consciousness, plus a criterion for attributing some of those states to the group as a whole.

On the other hand, if the social relationships among the group members yield more than that, if the group's conscious experience arises from the interconnections among members so that conscious experiences at the group level aren't just individuals' conscious experiences plus a criterion -- well then maybe we're starting to get into Chunky Monkey territory after all.

Suppose there's something it's like to consciously think, "Ah, P&Q&R, that's right!" On the Chunky Monkey view, this experience could really transpire in the group entity, even if it occurs in no individual member's head. On the strawberry-that's-basically-vanilla view, that's impossible, and to say that the group consciously endorses P&Q&R is only to say something about structural relationships among what individual group members do consciously endorse.


Note 1: Pettit prefers "coawareness", which he appears to equate with "access consciousness" in Ned Block's sense. He says that access consciousness implies there being "something it's like" and maybe vice versa (at least for the case of belief). Despite this, he says he is "setting aside" the issue of phenomenal consciousness -- perhaps thinking of "phenomenal consciousness" as a phrase that is more theoretically commissive than I hear it as being (see pp. 12-14, 33).

Note 2: In footnote 6, for example, Pettit favorably cites his sometimes-coauthor Christian List's 2018 criticism of my article on USA consciousness.

Friday, April 26, 2019

Animal Rights for Animal-Like AIs?

by John Basl and Eric Schwitzgebel

Universities across the world are conducting major research on artificial intelligence (AI), as are organisations such as the Allen Institute, and tech companies including Google and Facebook. A likely result is that we will soon have AI approximately as cognitively sophisticated as mice or dogs. Now is the time to start thinking about whether, and under what conditions, these AIs might deserve the ethical protections we typically give to animals.

Discussions of ‘AI rights’ or ‘robot rights’ have so far been dominated by questions of what ethical obligations we would have to an AI of humanlike or superior intelligence – such as the android Data from Star Trek or Dolores from Westworld. But to think this way is to start in the wrong place, and it could have grave moral consequences. Before we create an AI with humanlike sophistication deserving humanlike ethical consideration, we will very likely create an AI with less-than-human sophistication, deserving some less-than-human ethical consideration.

We are already very cautious in how we do research that uses certain nonhuman animals. Animal care and use committees evaluate research proposals to ensure that vertebrate animals are not needlessly killed or made to suffer unduly. If human stem cells or, especially, human brain cells are involved, the standards of oversight are even more rigorous. Biomedical research is carefully scrutinised, but AI research, which might entail some of the same ethical risks, is not currently scrutinised at all. Perhaps it should be.

You might think that AIs don’t deserve that sort of ethical protection unless they are conscious – that is, unless they have a genuine stream of experience, with real joy and suffering. We agree. But now we face a tricky philosophical question: how will we know when we have created something capable of joy and suffering? If the AI is like Data or Dolores, it can complain and defend itself, initiating a discussion of its rights. But if the AI is inarticulate, like a mouse or a dog, or if it is for some other reason unable to communicate its inner life to us, it might have no way to report that it is suffering.

A puzzle and difficulty arises here because the scientific study of consciousness has not reached a consensus about what consciousness is, and how we can tell whether or not it is present. On some views – ‘liberal’ views – for consciousness to exist requires nothing but a certain type of well-organised information-processing, such as a flexible informational model of the system in relation to objects in its environment, with guided attentional capacities and long-term action-planning. We might be on the verge of creating such systems already. On other views – ‘conservative’ views – consciousness might require very specific biological features, such as a brain very much like a mammal brain in its low-level structural details: in which case we are nowhere near creating artificial consciousness.

It is unclear which type of view is correct or whether some other explanation will in the end prevail. However, if a liberal view is correct, we might soon be creating many subhuman AIs who will deserve ethical protection. There lies the moral risk.

Discussions of ‘AI risk’ normally focus on the risks that new AI technologies might pose to us humans, such as taking over the world and destroying us, or at least gumming up our banking system. Much less discussed is the ethical risk we pose to the AIs, through our possible mistreatment of them.

This might sound like the stuff of science fiction, but insofar as researchers in the AI community aim to develop conscious AI or robust AI systems that might very well end up being conscious, we ought to take the matter seriously. Research of that sort demands ethical scrutiny similar to the scrutiny we already give to animal research and research on samples of human neural tissue.

In the case of research on animals and even on human subjects, appropriate protections were established only after serious ethical transgressions came to light (for example, in needless vivisections, the Nazi medical war crimes, and the Tuskegee syphilis study). With AI, we have a chance to do better. We propose the founding of oversight committees that evaluate cutting-edge AI research with these questions in mind. Such committees, much like animal care committees and stem-cell oversight committees, should be composed of a mix of scientists and non-scientists – AI designers, consciousness scientists, ethicists and interested community members. These committees will be tasked with identifying and evaluating the ethical risks of new forms of AI design, armed with a sophisticated understanding of the scientific and ethical issues, weighing the risks against the benefits of the research.

It is likely that such committees will judge all current AI research permissible. On most mainstream theories of consciousness, we are not yet creating AI with conscious experiences meriting ethical consideration. But we might – possibly soon – cross that crucial ethical line. We should be prepared for this.

[originally posted on Aeon Ideas]

Wednesday, April 24, 2019

Contest Idea: Can You Write a Philosophical Argument That Convinces Research Participants to Give Some of Their Bonus Money to Charity?

In a series of studies supported by The Life You Can Save, Chris McVey and I have been showing research participants (mTurk workers) philosophical arguments for charitable giving. Other participants read narratives about children who were helped by charitable donations or (as a control condition) they read a middle-school physics textbook discussion of energy.

We then ask participants their attitudes about charitable giving and follow up with this question:

Upon completion of this study, 10% of participants will receive an additional $10. You have the option to donate some portion of this $10 to your choice among six well-known charities that have been shown to effectively fight suffering due to extreme poverty. If you are one of the recipients of the additional $10, the portion you decide to keep will appear as a bonus credited to your Mechanical Turk worker account, and the portion you decide to donate will be given to the charity you pick from the list below.

Note: You must pass the comprehension questions and show no signs of suspicious responding to receive the $10.  Receipt of the $10 is NOT conditional, however, on your attitudes toward charity, expressed on the previous page, nor on how much you choose to donate if you receive the $10.

If you are one of the recipients of the additional $10, how much of your additional $10 would you like to donate?

[response options are in dollar intervals from $0 to $10, followed by a list of six charities to choose among]

Our November 21 blog post "Narrative but Not Philosophical Argument Motivates Giving to Charity" describes some of our results. Short version: When presented with the narratives, participants choose to donate on average about $4.50 of their possible bonus. When presented with the physics text or the argument, they donate about a dollar less. We've tried varying the argument, to see if we can find a variation that statistically beats the control (with 100-200 participants per condition), but so far no luck.
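For readers curious why the samples need to be that large: with donation amounts as noisy as these, detecting a roughly $1 difference in means takes on the order of a couple hundred participants per condition. A back-of-the-envelope check, using the standard rule of thumb for a two-sample t-test at 80% power and alpha = .05 (the $3.50 standard deviation is my illustrative assumption, not a figure from our data):

```python
# Rule-of-thumb sample size per group for a two-sample t-test:
# n ≈ 16 * (sigma / delta)**2 gives roughly 80% power at alpha = .05.
def n_per_group(sigma, delta):
    """Participants needed per condition to detect a mean difference
    of `delta` when the outcome's standard deviation is `sigma`."""
    return 16 * (sigma / delta) ** 2

# Illustrative assumption: donation SD around $3.50, aiming to detect
# the roughly $1 difference between narrative and control conditions.
print(round(n_per_group(3.5, 1.0)))  # 196 per condition
```

So 100-200 participants per condition is about the minimum at which a real $1 effect would reliably show up.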

This is where you come in. Maybe Chris and I are bad at writing convincing arguments! (Well, one argument we adapted from Matthew Lindauer and collaborators, in consultation with Peter Singer.) The philosophical community might be able to help us create a more effective argument.

So -- is this too goofy? -- I'm thinking that a contest might be fun. Write a philosophical argument (300-400 words) that actually leads mTurk participants to donate more of their bonus to charity than they do in the control condition. The prize might be $500 outright plus $500 to the winner's choice of an effective charity. If no one can create an argument that can beat the control condition, no winner; otherwise the winner is the author of the argument that generates the highest mean donation.

There would need to be some constraints: no use of narrative (personal or historical), no discussion of individual people who might be helped, no pictures, no highly emotionally charged content or vivid sensory detail. The argument shouldn't be obviously fallacious, foolish, or absurd. It ought to be something that a thoughtful philosopher could get behind as a reasonable argument. Statistics, empirical details, evidence of overall effectiveness, etc., are fine.

I'm open to suggestions about how best to administer such a contest, if I can find funding for it -- including thoughts about rules, parameters, the best statistical approach, what the prize should be, what to do if we receive too many submissions to run them all, etc. (I'm also open for funders to volunteer.)

Also, I'm definitely open to ideas about what features of an argument might make it effective or ineffective among ordinary readers, if you have thoughts about that but don't feel game to write up an argument.

[image source]

Thursday, April 18, 2019

Ethics in Publishing Philosophy

Tomorrow (Friday) afternoon from 1-4, I'll be a panelist in a session on "Publishing Ethics in Philosophy" at the Pacific Division meeting of the American Philosophical Association in Vancouver. Come by if you're in town!

I'll have ten minutes to say a few things, before the session moves on to other panelists and then (hopefully) lots of discussion. I figure ten minutes is time enough to express three ideas. So... what three points should I make? What issues deserve special emphasis in a forum of this sort? Here are my thoughts:

Journal and Monograph Response Times

If the following three conditions all hold at a journal or academic press, there's cause for concern that that publisher's policies are impeding authors' timely publication of their work and progress in their careers:

(1.) The journal or press does not accept simultaneous submissions (that is, there's an expectation that while the author's work is being considered there it is not also being considered elsewhere).

(2.) The journal accepts 20% or fewer of submissions.

(3.) The median response time for a decision is six months or more.

As we all know, publishable-quality material stands a substantial chance of being rejected for a variety of reasons, including fit with the journal's vision or the vision of the monograph series, the very high selectivity of some venues that leads them to reject much material that they believe is of publishable quality, and chance in the refereeing process. For these reasons, it often takes five or more rejections before publishable-quality work finds a home. If venues are taking six months or more to respond, that can mean three or more years between first submission and final acceptance. That's too long for authors to wait -- especially graduate student authors and untenured faculty.

Ideally, response times could be ten weeks or less. I don't think that's unattainable with good organization. But if a press or journal can't attain that, they ought to consider either allowing simultaneous submissions or increasing their acceptance rates.
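The arithmetic behind these numbers is simple if we idealize each submission as an independent trial. With acceptance probability p, the expected number of submissions before acceptance is 1/p, so expected time-to-acceptance is roughly (months per decision)/p. A sketch (the 20% acceptance rate and the turnaround times come from the conditions above; independence is of course an idealization, since a given paper's quality carries across venues):

```python
# Idealized model: each submission is an independent trial with
# acceptance probability p, so the expected number of submissions is
# 1/p (geometric distribution) and the expected wait is m/p months.
def expected_wait_months(p_accept, months_per_decision):
    return months_per_decision / p_accept

# A 20%-acceptance venue with six-month decisions:
print(round(expected_wait_months(0.20, 6.0), 1))  # 30.0 months
# The same venue with a ten-week (~2.3 month) turnaround:
print(round(expected_wait_months(0.20, 2.3), 1))  # 11.5 months
```

On this toy model, cutting response times from six months to ten weeks cuts the expected wait from two and a half years to under one.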

Journal Pricing

It's not news to people in academia that some journals charge libraries very hefty subscription fees. The University of California system (UC Berkeley, UCLA, and eight other campuses including my own campus, UC Riverside, plus medical centers and national laboratories) recently cancelled its subscription to Elsevier journals, which was costing the system eleven million dollars a year, about 25% of the university's total journal budget. There's a huge difference in journal pricing, with some high quality journals charging a few hundred dollars a year while other journals, not appreciably better in any way, charge ten times as much for similar services -- with Elsevier and Springer maybe being the worst offenders.

I looked up the institutional subscription price in US dollars for print and online access to the top twenty "best 'general' journals of philosophy" in a recent poll by Brian Leiter:

  • 1. Philosophical Review (Duke University Press), $264/year (4 issues, 561 pages).
  • 2. Mind (Oxford Academic), $430 (4 issues, 1270 pages).
  • 3. Nous (Wiley), $1532 (4 issues, 981 pages).
  • 4. Journal of Philosophy, $250 (12 issues, 684 pages).
  • 5. Philosophy and Phenomenological Research (Wiley), $385 (6 issues, 1594 pages).
  • 6. Australasian Journal of Philosophy, (Taylor & Francis) $509 (4 issues, 838 pages) [Updated: thanks, Neil!].
  • 7. Philosophers' Imprint (hosted by University of Michigan), free open access ($20 recommended fee to submit an article for review; 25 individual articles).
  • 8. Philosophical Studies (Springer), $3171 (17 issues, 4627 pages).
  • 9. Philosophical Quarterly (Oxford), $799 (4 issues, 874 pages).
  • 10. Analysis (Oxford). $288 (4 issues, 784 pages).
  • 11. Synthese (Springer), $4830 (12 issues, 5594 pages).
  • 12. Canadian Journal of Philosophy (Taylor & Francis), $446 (6 issues, 899 pages).
  • 13. Erkenntnis (Springer), $1802 (6 issues, 1320 pages).
  • 14. American Philosophical Quarterly (University of Illinois), $397 (4 issues, approx 420 pages).
  • 15. Pacific Philosophical Quarterly (Wiley), $764 (4 issues, 909 pages).
  • 16. Proceedings of the Aristotelian Society (Oxford), $343 (3 issues, 428 pages).
  • 17. Ergo (hosted by University of Toronto): free open access (41 individual articles).
  • 18. European Journal of Philosophy (Wiley): $1446 (4 issues, 1457 pages).
  • 19. Journal of the American Philosophical Association (Cambridge University): Only available to institutions as part of a large subscription package.
  • 20. Thought (Wiley), $400-$741 (online only; 4 issues, 295 pages).
I am not aware of any good reason that Synthese should be almost $5000 a year, while other journals of similar quality are a few hundred dollars. The best explanation, I suspect, is that Springer, as a for-profit company, is taking advantage of inelastic demand from institutions that want to ensure access to the best-known philosophy journals. It is, I think, contrary to the general interests of academics and the public for Springer and other such companies to charge so much, so some collective resistance might be desirable.
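Since annual page counts differ severalfold across these venues, price per page is a useful cross-check on the raw subscription figures. A quick calculation from a handful of the numbers listed above (a rough metric: it ignores online-only tiers, bundling, and the open-access venues):

```python
# Institutional price (USD/year) and annual page count, from the list above.
journals = {
    "Philosophical Review":  (264, 561),
    "Mind":                  (430, 1270),
    "Nous":                  (1532, 981),
    "Philosophical Studies": (3171, 4627),
    "Synthese":              (4830, 5594),
}

price_per_page = {
    name: round(price / pages, 2) for name, (price, pages) in journals.items()
}
for name, ppp in sorted(price_per_page.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${ppp}/page")
```

Even on this normalized metric, the spread is severalfold, with no obvious difference in quality to explain it.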

I recommend that editors, referees, and authors consider journal pricing as one factor in their decisions about serving in editorial roles, refereeing roles, and in choosing where to submit, giving default preference to open-access journals and reasonably priced journals over expensive journals when other factors are approximately equal.

Responsible Citation Practice

Increasingly, citation is the currency of academic prestige. People decide what to read based, partly, on what is being cited by others. High citation rates can figure prominently in hiring and tenure decisions. Highly cited authors are generally considered to be experts in their subfields.

Thus, I think it is important that authors thoroughly review the recent literature on their topic to ensure that they are citing a good selection of recent sources, especially sources by junior authors and lesser-known authors. It is easy -- especially if you are a well-known author, and especially in invited contributions -- to cite the famous people in your subfield and the people whose work you happen to know through existing academic connections. This is not entirely academically responsible, and it can have the effect of illegitimately excluding from the conversation good work by people who are not as academically well connected.

Citation practice is primarily the responsibility of authors -- but referees and editors might also want to consider this issue in evaluating submitted work.

Comments/suggestions/reactions welcome -- especially before 1:00 pm tomorrow!

[image source]

    Tuesday, April 09, 2019

    Tell Us How to Fix the Lack of Diversity in Philosophy Departments

    by Sherri Conklin, Eric Schwitzgebel, and Nicole Hassoun.

    [cross-posted at the Blog of the APA]

    Philosophy needs to diversify. Come join us at the Pacific Division meeting to tell us what departments can do to improve. Join the Demographics in Philosophy Project to help bend the long arc of history towards justice.

    First, some data

    A growing body of research shows that while the proportions of women philosophy faculty are increasing over time, women still only account for 25% of all philosophy faculty in the U.S. (Conklin, Artamonova, and Hassoun 2019). Black philosophers account for only about 1-4% of all philosophy faculty (Botts et al. 2014). And disabled philosophers are underrepresented as well (Tremain 2014).

    These groups’ disproportionately low authorship rates in philosophy journals may partially explain the faculty findings – especially if failure to publish leads to a failure to gain employment, tenure, and promotion (Wilhelm, Conklin, and Hassoun 2017). For example, only 13% of publications in top philosophy journals are by women (Schwitzgebel and Jennings 2017), and fewer than 1% of authors in top journals are Black (Bright 2016).

    Another possible explanation concerns the “pipeline” into philosophy. For example, women and Black philosophers receive only about 32% and 5% of undergraduate philosophy degrees in the U.S. (Schwitzgebel 2017a, 2017b) and about 29% and 2% of PhDs (Schwitzgebel 2016). (Systematic data on other groups that are likely to be underrepresented are more difficult to obtain). Possibly, something about how philosophy is taught or how it is perceived in U.S. culture substantially influences the demographics of the major (Garfield and Van Norden 2016; Thompson et al. 2016).

    This is a problem from an epistemic point of view: Philosophy as a discipline profits from hearing voices from a variety of different backgrounds. Furthermore, to the extent that unfair exclusionary practices, whether implicit or explicit, may be limiting people’s career choices, it is a problem of social justice.

    Disciplinary initiatives to combat the disparities

    Much has been done to combat the observed disparities. The British Philosophical Association, in collaboration with the Society for Women in Philosophy-UK, launched a Best Practices Scheme for improving departmental climates for women. The APA introduced a new initiative to diversify course syllabi through the Diversity and Inclusiveness Syllabus Collection. A number of philosophy diversity institutes were launched to help attract marginalized undergraduates to apply to graduate school. These programs include PikSi, UCSD SPWP, and COMPASS (among others – see the APA resource page on Undergraduate Diversity Institutes in Philosophy). Graduate students founded Minorities in Philosophy to promote student-initiated change.

    In addition, the Demographics in Philosophy Project collates and collects data to document the problem of marginalization in professional philosophy and to identify tools for counteracting it. In 2018, we initiated a broadly consultative project to identify inclusive practices for philosophy journals, beginning with a session on inclusive practices at the Pacific Division meeting of the APA and a series of blog posts from editors of leading journals (Hassoun, Schwitzgebel, and Smith 2018; Kukla 2018; Bilimoria 2018; Hetherington 2018; Hansson 2018; Moore and O’Brien 2018) and culminating in a list of potential best practices, posted here on the Blog of the APA (Conklin, Hassoun, Schwitzgebel 2018).

    But what can departments do to combat the disparities directly?

    Tell us how to fix the problem:

    We have some preliminary ideas about how to improve the situation, but we want to hear from you. We would like to identify concrete suggestions for specific practices that can be implemented by departments to improve diversity without compromising their other goals. We are especially interested in hearing about successful practices.

    Give us your suggestions. Raise objections and concerns. Email us. And, if you’re in the area at the time, please come to our session on this topic at the Pacific APA meeting in Vancouver on April 18 (1-4pm). The session will start with a brief presentation on diversity in philosophy departments, but it will mostly consist of open discussion with a panel of representatives from sixteen well-regarded philosophy departments, who will bring their experience to the question as well as, we suspect, in some cases, their strenuous disagreement.

    After the session, we hope to partner with departments to collect more data on what works to improve diversity and to develop a toolbox of helpful practices.

    Suggestions, objections, and contributions are welcome. More data on women in philosophy are available from the Demographics in Philosophy Project.

    Follow us on Twitter @PhilosophyData and Facebook

    Session details:

    Diversity in Philosophy Departments
    Pacific APA, Vancouver
    April 18, 2019, 1:00–4:00 p.m.
    APA Committee Session:
    Arranged by the APA Committee on the Status of Women

    Eric Schwitzgebel (University of California, Riverside)

    Nicole Hassoun (Binghamton University)
    Sherri Conklin (University of California, Santa Barbara)
    Purushottama Bilimoria (University of California, Berkeley and University of Melbourne)
    Teresa Blankmeyer Burke (Gallaudet University)
    Leslie Pickering Francis (University of Utah)
    Subrena Smith (University of New Hampshire)

    David Chalmers (New York University)
    Andrew Chignell (Princeton University)
    Helen De Cruz (Oxford Brookes University)
    Steve Downes (University of Utah)
    Carrie Figdor (University of Iowa)
    John Lysaker (Emory University)
    Anna-Sara Malmgren (Stanford University)
    Wolfgang R. Mann (Columbia University)
    Ned Markosian (University of Massachusetts Amherst)
    Gregory R. Peterson (South Dakota State University)
    Geoff Sayre-McCord (University of North Carolina at Chapel Hill)
    Miriam Solomon (Temple University)
    Yannik Thiem (Villanova University)
    Daniela Vallega-Neu (University of Oregon)
    Eric Watkins (University of California, San Diego)
    Andrea Woody (University of Washington)

    Thanks to Kathryn Norlock and Michael Rea for help with this project.

    [image source]

    Monday, April 08, 2019

    Forthcoming: A Theory of Jerks and Other Philosophical Misadventures

    My forthcoming book has a page now at MIT Press:

    A Theory of Jerks and Other Philosophical Misadventures

    A collection of quirky, entertaining, and reader-friendly short pieces on philosophical topics that range from a theory of jerks to the ethics of ethicists.

    Have you ever wondered about why some people are jerks? Asked whether your driverless car should kill you so that others may live? Found a robot adorable? Considered the ethics of professional ethicists? Reflected on the philosophy of hair? In this engaging, entertaining, and enlightening book, Eric Schwitzgebel turns a philosopher's eye on these and other burning questions. In a series of quirky and accessible short pieces that cover a mind-boggling variety of philosophical topics, Schwitzgebel offers incisive takes on matters both small (the consciousness of garden snails) and large (time, space, and causation).

    A common theme might be the ragged edge of the human intellect, where moral or philosophical reflection begins to turn against itself, lost among doubts and improbable conclusions. The history of philosophy is humbling when we see how badly wrong previous thinkers have been, despite their intellectual skills and confidence. (See, for example, “Kant on Killing Bastards, Masturbation, Organ Donation, Homosexuality, Tyrants, Wives, and Servants.”) Some of the texts resist thematic categorization—thoughts on the philosophical implications of dreidels, the diminishing offensiveness of the most profane profanity, and fatherly optimism—but are no less interesting.

    Schwitzgebel has selected these pieces from the more than one thousand that have appeared since 2006 in various publications and on his popular blog, The Splintered Mind, revising and updating them for this book. Philosophy has never been this much fun.

    Tuesday, April 02, 2019

    Gaze of Robot, Gaze of Bird

    I have a new SF story in Clarkesworld, "Gaze of Robot, Gaze of Bird" -- my first new fiction publication since 2017. I wanted to tell a tale in which none of the protagonists are conscious but we care about them anyway -- an interplanetary probe (with some chat algorithms and cute subroutines) and its stuffed monkey doll.

    Another theme is what counts as the extinction or continuation of a sapient species.


    Gaze of Robot, Gaze of Bird

    by Eric Schwitzgebel

    First, an eye. The camera rose, swiveling on its joint, compiling initial scans of the planetary surface. Second, six wheels on struts, pop-pop, pop-pop, pop-pop, and a platform unfolding between the main body and the eye. Third, an atmospheric taster and wind gauge. Fourth, a robotic arm. The arm emerged holding a fluffy, resilient, nanocarbon monkey doll, which it carefully set on the platform.

    The monkey doll had no actuators, no servos, no sensors, no cognitive processors. Monkey was, however, quite huggable. Monkey lay on his back on the warm platform, his black bead eyes pointed up toward the stars. He had traveled wadded near J11-L’s core for ninety-five thousand years. His arms, legs, and tail lay open and relaxed for the first time since his hurried manufacture.

    J11-L sprouted more eyes, more arms, more gauges—also stabilizers, ears, a scoop, solar panels, soil sensors, magnetic whirligigs. Always, J11-L observed Monkey more closely than anything else, leaning its eyes and gauges in.

    J11-L arranged Monkey’s limbs on the platform, gently flexing and massaging the doll. J11-L scooped up a smooth stone from near its left front wheel, brushed it clean, then wedged it under Monkey’s head to serve as a pillow. J11-L stroked and smoothed Monkey’s fur, which was rumpled from the long journey.

    “I love you, Monkey,” emitted J11-L, in a sound resembling language. “Will you stay with me while I build a Home?”

    Monkey did not reply.

    [story continues here]

    Thursday, March 28, 2019

    Journey 2 Psychology

    A couple of weeks ago, one of my former students, Michael S. Gordon, now a professor of psychology at William Paterson University, stopped by my office unexpectedly. He told me he had sold his house in New Jersey so that he could spend a year traveling around the world, with his family, interviewing famous psychologists about their lives. He wants to compile an oral history of psychology. He was at the UCR campus to interview Robert Rosenthal.

    Wait, he's spending a full year, along with his wife and son, traveling around interviewing famous psychologists? And he sold his house to do it? Whoa. That's commitment. How awesome!

    He is posting excerpts of his interviews on his blog, Journey2Psychology. For example: Albert Bandura, Ed Diener, Alison Gopnik, Elizabeth Spelke, Dan Schacter, Dan Gilbert, etc., etc.!

    Okay, a psychology nerd could get excited. What an amazing idea!

    Part of me wishes he could have done it in the 1980s, when B.F. Skinner, Timothy Leary, Stanley Milgram, and Erik Erikson were still alive. Or, hey, maybe if we could go back into the 1950s, or the 1920s, or....

    Against the Mind-Package View of Minds

    (adapted from comments I will be giving on Carrie Figdor's Pieces of Mind, at the Pacific Division meeting of the American Philosophical Association on Friday, April 19, 9-12)

    We human beings have a favorite size: our own size. We have a favorite pace of action: our own pace. We have a favorite type of organization: the animal, specifically the mammal, and more specifically us. What’s much larger, much smaller, or much different, we devalue and simplify in our imaginations.

    It’s true that we’re great. Wow, us! But we tend to forget that other things can also be pretty amazing.

    So here’s a naive picture of the world. Some things have minds, and other things don’t have minds. The things with minds might have highly sophisticated minds, capable of appreciating Shakespeare and proving geometric theorems, or they might have simpler minds. But if an entity has a mind at all, then it has sensory and emotional experiences, preferences and dislikes, plans of at least a simple sort, some kind of understanding of its environment, an ability to select among options, and a sense of its location and the boundaries of its body. Let’s call this the Mind Package.

    Everything that exists, you might think, is either a thing that has the whole Mind Package or a thing that has no part of the Mind Package. Stones have no part of the Mind Package. They don’t feel anything. They have no preferences. They make no decisions. They have no comprehension of the world around them. There’s nothing it’s like to be a stone. Dogs, we ordinarily assume, do have the Mind Package. My own dog Cocoa enjoys going on walks, prefers the bucket chair to the recliner, gets excited when she hears my wife coming in the front door, and dislikes our cat.

    [A recent picture of some of my favorite biological entities. Can you guess which ones have the Mind Package?]

    Now it could be the case that everything in the world either has the Mind Package or doesn’t have it, and if something has one piece of the Mind Package, it has all the pieces. Intuitively, this is an attractive idea. What would it be to kind of have a mind? Could a creature have full-blown desires and preferences but no beliefs at all? Could a creature be somewhere between having experiences and not having any experiences? This seems hard to imagine. It’s much easier to think that either the light is on inside or the light is off. Either you’ve got a stone or you’ve got a dog.

    But there are a couple of reasons to suspect that the lights-on/lights-off Mind Package view is too simple.

    The first reason to be suspicious is that the world is full of slippery slopes. In fetal development and infant development, biological and cognitive complexity emerges gradually. But if you've either got the whole package or you don't, then there must be some moment at which the lights suddenly turned on and you went, in a flash, from being an entity without experiences, preferences, feelings, comprehension, and choice, to being an entity with all of those things. In the gradual flow of maturation, when could this be? Likewise, if we assume, at least for a moment, that jellyfish don't have the Mind Package but dogs do, similar trouble looms: Across species there's a gradual continuum of capacities, not, it seems, a sudden break between lights-on and lights-off animals. (Garden snails are an especially fascinating problem case.)

    This leads to a second reason to be suspicious of the Mind Package view. As Carrie Figdor emphasizes, bacteria are much more informationally complicated than we tend to think. Plants are much more informationally complicated than we tend to think. Group interactions are much more informationally complicated than we tend to think. The relations of parasite, host, and symbiont are much more informationally complicated than we tend to think. The difference is smaller than we usually imagine between things of our favorite size and pace and other things. The biological world is richly structured with what looks like sophisticated informational processing in response to environmental stimuli. When scientists need to model what’s going on in plants and bacteria and neurons and social groups, they seem to need terms and concepts and models from psychology: signaling, communication, cooperation, decision, memory, detection, learning. Structures other than those of our favorite size and pace seem to show the kinds of informational interactions and responsiveness to environment that we capture with psychological words like these.

    Furthermore, there’s no general reason to think that systems usefully described by some of these psychological terms need always also be usefully described by other of these terms. If a scientific community starts to attribute memories or preferences to the entities they research, it doesn’t follow that they will find it fruitful also to ascribe sensory experiences, feelings, or a sense of the difference between body and world. Different aspects of mentality may be separable rather than bundled. They don’t need to stand or fall as a Package. To paraphrase the title of Carrie’s book, the Mind comes in Pieces.

    Philosophers of mind love to paint their opponents as clinging to the remnants of Cartesianism. Should I alone resist? The Mind Package view is a remnant of Cartesianism: There’s the Minded stuff, which has this nice suite of cognitive and conscious properties, all as a bundle, and then there’s the non-Minded stuff which is passive and simple. We ought to demolish this Cartesian idea. There is no bright line between the fully and properly Minded and the rest of the world, and there is no need for cognitive properties to all travel on the family plan.

    The Mind Package view has a powerful grip on our intuitions. We want to confine “the mental” to privileged spaces – our own heads and the heads of our favorite animals. But if the informational structures of the world are sufficiently complex, this intuitive approach must be jettisoned. Mental processes run wide through the world, different ones in different spaces, defying our intuitive groupings. This radically anti-Cartesian view is the profound and transformative lesson of Carrie’s book, and it takes some getting used to.

    If Carrie’s radically anti-Cartesian view of the world is scientifically correct, there are, then, pragmatic grounds to prefer a broad view of the metaphysics of preferences and decisions, according to which many different kinds of entities have preferences and make decisions. It is the view that better respects the evidence that we are continuous with plants, worms, and bacteria, and that the types of patterns of mindedness we see in ourselves resemble what’s happening in them, even if such entities don’t have the whole Mind Package.

    Goodbye, Mind-Package rump Cartesianism!



    Do Neurons Literally Have Preferences (Nov 4, 2015)

    Are Garden Snails Conscious? Yes, No, or *Gong* (Sep 20, 2018)

    Tuesday, March 26, 2019

    New Podcast Interview: How Little Thou Can Know Thyself

    New interview of me at the MOWE blog.

    Topics of discussion:

  • Our poor knowledge of our own visual experience,
  • Our poor knowledge of our own visual imagery,
  • Our poor knowledge of our own emotional experience,
  • The conscious self riding on the unconscious elephant,
  • Can we improve at introspection?
  • Our poor knowledge of when and why we feel happy,
  • The amazing phenomenon of instant attachment to adoptive children.
    Friday, March 22, 2019

    Most U.S. and German Ethicists Condemn Meat-Eating (or German Philosophers Think Meat Is the Wurst)

    It's an honor and a pleasure to have one's work replicated, especially when it's done as carefully as Philipp Schoenegger and Johannes Wagner have done.

    In 2009, Joshua Rust and I surveyed the attitudes and behavior of ethicist philosophers in five U.S. states, comparing those attitudes and behavior to non-ethicist philosophers' and to a comparison group of other professors at the same universities. Across nine different moral issues, we found that ethicists reported behaving overall no morally differently than the other two groups, though on some issues, especially vegetarianism and charitable giving, they endorsed more stringent attitudes. (In some cases, we also had observational behavioral data that didn't depend on self-report. Here too we found no overall difference.) Schoenegger and Wagner translated our questionnaire into German and added a few new questions, then distributed it by email to professors in German-speaking countries, achieving an overall response rate of 29.5% [corrected Mar 23]. (Josh and I had a response rate of 58%.) With a couple of exceptions, Schoenegger and Wagner report similar results.

    The most interesting difference between Schoenegger and Wagner's results and Josh's and my results concerns vegetarianism.

    The Questions:

    We originally asked three questions about vegetarianism. In the first part of the questionnaire, we asked respondents to rate "regularly eating the meat of mammals, such as beef or pork" on a nine-point scale from "very morally bad" to "very morally good", with "morally neutral" in the middle.

    In the second part of the questionnaire, we asked:

    17. During about how many meals or snacks per week do you eat the meat of mammals such as beef or pork?

       enter number of times per week ____

    18. Think back on your last evening meal, not including snacks. Did you eat the meat of a mammal during that meal?

       □ yes

       □ no

       □ don’t recall

    U.S. Results in 2009

    On the attitude question, 60% of ethicist respondents rated meat-eating somewhere on the "bad" side of the nine-point scale, compared to 45% of non-ethicist philosophers and only 19% of professors from other departments (ANOVA, F = 17.0, p < 0.001). We also found substantial differences by both gender and age, with women and younger respondents more likely to condemn meat-eating. For example, 81% of female philosophy respondents born 1960 or later rated eating the meat of mammals as morally bad, compared to 7% of male non-philosophers born before 1960. That's a huge difference in attitude!

    Eight percent of respondents rated it at 1 or 2 on the nine-point scale -- either "very bad" or adjacent to very bad -- including 11% of ethicists (46/564 overall, 22/193 of ethicists).

    On self-report of behavior, Josh and I found much less difference. On our "previous evening meal" question, we detected at best a statistically marginal difference among the three main analysis groups: 37% of ethicists reported having eaten meat at the previous evening meal, compared to 33% of non-ethicist philosophers and 45% of non-philosophers (chi-squared = 5.7, p = 0.06, excluding two respondents who answered "don’t recall").

    The "meals per week" question was actually designed in part as a test of "socially desirable responding" or a tendency to fudge answers: We thought it would be difficult to accurately estimate the number, thus it would be tempting for respondents to fudge a bit. And mathematically, they did seem to be guilty of fudging: For example, 21% of respondents who reported eating meat at one meal per week also reported eating meat at the previous evening meal. Even if we assume that meat is only consumed at evening meals, the number should be closer to 14% (1/7). If we assume, more plausibly, that approximately half of all meat meals are evening meals, then the number should be closer to 7%. With that caveat in mind, on the meals-per-week question we found a mean of 4.1 for ethicists, compared to 4.6 for non-ethicist philosophers and 5.3 for non-philosophers (ANOVA [square-root transformed], F = 5.2, p = 0.006).
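    The consistency check behind this fudging estimate is simple arithmetic, and can be sketched as follows (the function and its `evening_share` parameter are my own illustration, not part of the survey):

```python
# If a respondent truthfully eats meat at k meals per week, what fraction
# of such respondents should report meat at the previous *evening* meal?
# Assumption (mine, for illustration): meat meals fall evenly across the week.

def expected_evening_rate(meat_meals_per_week, evening_share=1.0):
    """Expected probability that the previous evening meal (1 of 7 per week)
    included meat, if `evening_share` of all meat meals are evening meals."""
    return meat_meals_per_week * evening_share / 7

# Respondents reporting one meat meal per week:
print(expected_evening_rate(1, evening_share=1.0))  # 1/7, about 0.14
print(expected_evening_rate(1, evening_share=0.5))  # about 0.07
```

    Against the observed 21%, either benchmark suggests some misreporting somewhere in the self-reports.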

    We concluded that although a majority of U.S. ethicists, especially younger ethicists and women ethicists, thought eating meat was morally bad, they ate meat at approximately the same rate as did the non-ethicists.

    German Results in 2018:

    Schoenegger and Wagner find, similarly, a majority of German ethicist respondents rating meat-eating as bad: 67%. Evidently, a majority of U.S. and German ethicists think that eating meat is morally bad.

    However, among the non-ethicist professors, Schoenegger and Wagner find higher rates of condemnation of meat-eating than Josh and I found: 63% among German-speaking non-ethicist philosophers in 2018 compared to our 45% in the U.S. in 2009 (80/127 vs. 92/204, z = 3.2, p = .001), and even more strikingly 40% among German-speaking professors from departments other than philosophy in 2018 compared to only 19% in the U.S. in 2009 (52/131 vs. 31/167, z = 4.0, p < .001; [note 1]).

    German professors were also much more likely than U.S. professors in 2009 to think that eating meat is very bad, with 18% rating it 1 or 2 on the scale, including 23% of ethicists (57/408 and 35/150, excluding non-respondents; two-proportion test U.S. vs German: overall z = 2.8, p = .005, ethicists z = 2.9, p = .004).
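    The U.S.-vs.-German comparisons above are ordinary two-proportion z-tests with a pooled standard error, reproducible from the reported counts. A minimal sketch (the counts come from the post; the helper function is my own, not Schoenegger and Wagner's code):

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test with a pooled standard error.
    Returns (z, two-sided p), computing the normal CDF via erf."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# German vs. U.S. non-ethicist philosophers condemning meat-eating:
z, p = two_proportion_z(80, 127, 92, 204)
print(round(z, 1))  # 3.2

# German vs. U.S. professors outside philosophy:
z, p = two_proportion_z(52, 131, 31, 167)
print(round(z, 1))  # 4.0
```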

    Apparently, German-speaking professors are not as fond of their wurst as cultural stereotypes might suggest!

    A number of explanations are possible: One is that in general German academics are more pro-vegetarian than are U.S. academics. Another is that attitudes toward vegetarianism are changing swiftly over time (as suggested by the age differences in Josh's and my study) and that the nine years between 2009 and 2018 saw a substantial shift in both cultures. Still another concerns non-response bias. (For non-philosophers, Schoenegger and Wagner's response rate was 30%, while Josh's and mine was 53%.)

    In Schoenegger and Wagner's data, ethicists report having eaten less meat at the previous evening meal than the other two groups: 25%, vs. 40% of non-ethicist philosophers and 39% of the non-philosophers (chi-squared = 9.3, p = .01 [note 2]). The meals per week data are less clear. Schoenegger and Wagner report 2.1 meals per week for ethicists, compared to 2.8 and 3.0 for non-ethicist philosophers and non-philosophers respectively (ANOVA, F = 3.4, p = .03), but their data are highly right skewed, and due to skew Josh and I had used a square-root transformation for the original 2009 analysis. A similar square-root transformation on Schoenegger and Wagner's raw data eliminates any statistically detectable difference (F = 0.8, p = .45). And there is again evidence of fudging in the meals-per-week responses: Among those reporting only one meat meal per week, for example, 18% reported having had meat at their previous evening meal.

    If we take the meals-per-week data at face value, the German respondents ate substantially less meat in 2018 than did the U.S. respondents in 2009: 2.6 meals for the Germans vs. 4.6 for the U.S. respondents (median 2 vs median 4, Mann-Whitney W = 287793, p < .001). However, the difference was not statistically detectable on the previous evening meal question: 38% U.S. vs 34% German (z = 1.3, p = .21).

    All of this is a bit difficult to interpret, but here's the tentative conclusion I draw:

    German professors today -- especially ethicists -- are more likely to condemn meat eating than were U.S. professors ten years ago. They might also be a bit less likely to eat meat, again perhaps especially the ethicists, though that is not entirely clear and might reflect a bit of fudging in the self-reports.

    The other difference Schoenegger and Wagner found was in the question of whether ethicists were on the whole more likely than other professors to embrace stringent moral views -- but full analysis of this will require some detail and will have to wait for another time.


    Note 1: In the published paper, Schoenegger and Wagner report 39% instead of the 40% I find in reanalyzing their raw data. This might either be a rounding error [39.69%] or some small difference in our analyses.

    Note 2: In the published paper, Schoenegger and Wagner report 24%, which again might be a rounding error (from 24.65%) or a small analytic difference.

    [image source]

    Friday, March 15, 2019

    Should You Defer to Ethical Experts?

    Ernest Sosa gave a lovely and fascinating talk yesterday at UC Riverside on the importance of "firsthand intuitive insight" in philosophy. It has me thinking about the extent to which we ought, or ought not, defer to ethical experts when we are otherwise inclined to disagree with their conclusions.

    To illustrate the idea of firsthand intuitive insight, Sosa gives two examples. One concerns mathematics. Consider a student who learns that the Pythagorean theorem is true without learning its proof. This student knows that a^2 + b^2 = c^2 but doesn't have any insight into why it's true. Contrast this student with one who masters the proof and consequently does understand why it's true. The second student, but not the first, has firsthand intuitive insight. Sosa's other example is in ethics. One child bullies another. Her mother, seeing the act and seeing the grief in the face of the other child, tells the bullying child that she should apologize. The child might defer to her mother's ethical judgment, sincerely concluding she really should apologize, but without understanding why what she has done is bad enough to require apology. Alternatively, she might come to genuinely notice the other child's grief and more fully understand how her bullying was inappropriate, and thus gain firsthand intuitive insight into the need for apology. (I worry that firsthand intuitive insight is a bit of a slippery concept, but I don't think I can do more with it here.)

    Sosa argues that a central aim of much of philosophy is firsthand intuitive insight of this sort. In the sciences and in history, it's often enough just to know that some fact is true (that helium has two protons, that the Qin Dynasty fell in 206 BCE). On such matters, we happily defer to experts. In philosophy, we're less likely to accept a truth without having our own personal, firsthand intuitive insight. Expert metaphysicians might almost universally agree that barstool-shaped-aggregates but not barstools themselves supervene on collections of particles arranged barstoolwise. Expert ethicists might almost universally agree that a straightforward pleasure-maximizing utilitarian ethics would require radical revision of ordinary moral judgments. But we're not inclined to just take them at their word. We want to understand for ourselves how it is so.

    This seems right. And yet, there's a bit of a puzzle in it, if we think that it's important that our ethical opinions be correct. (Yes, I'm assuming that ethics is a domain in which there are correct and incorrect opinions.) What should we do when the majority of philosophical experts think P, but your own (apparent) firsthand intuitive insight suggests not-P? If you care about correctness above all, maybe you should defer to the experts, despite your lack of understanding. But Sosa appears to think, as I suspect many of us do, that often the right course instead is to stand steadfast, continuing to judge according to your own best independent reasoning.

    Borrowing an example from Sarah McGrath's work on moral deference, consider the case of vegetarianism. Based on some of my work, I think that probably the majority of professional ethicists in the United States believe that it is normally morally wrong to eat the meat of factory-farmed animals. This might also be true in German-speaking countries. Impressionistically, most of the philosophers I know who have given the issue serious and detailed consideration come to endorse vegetarianism, including two of the most prominent ethicists currently alive, Peter Singer and Christine Korsgaard. Now suppose that you haven't given the matter nearly as much thought as they have, but you have given it some thought. You're inclined still to think that eating meat is okay, and you can maybe mount one or two plausible-seeming defenses of your view. Should you defer to their ethical expertise?

    Sosa compares philosophical reasoning with archery. You not only want to hit the target (the truth), you want to do so by the exercise of your own skill (your own intuitive insight), rather than by having an expert guide your hand (deference to experts). I agree that ideally this is so. It's nice when you have both truth and intuitive insight! But when the aim of hitting the target conflicts with the aim of doing so by your own intuitive insight, your preference should depend on the stakes. If it's an archery contest, you don't want the coach's help: The most important thing is the test of your own skill. But if you're a subsistence hunter who needs dinner, then you probably ought to take any help you can get, if the target looks like it's about to escape. And isn't ethics (outside the classroom, at least) more like subsistence hunting than like an archery contest? What should matter most is whether you actually come to the right moral conclusion about eating meat (or whatever), not whether you get there by your own insight. Excessive emphasis on the individual's need for intuitive insight, at the cost of truth or correctness, risks turning ethics into a kind of sport.

    So maybe, then, you should defer to the majority of ethical experts, and conclude that it is normally wrong to eat factory-farmed meat, even if that conclusion doesn't accord with your own best attempts at insight?

    While I'm tempted to say this, I simultaneously feel pulled in Sosa's direction -- and perhaps I should defer to his expertise as one of the world's leading epistemologists! There's something I like about non-deference in philosophy, and our prizing of people's standing fast in their own best judgments, even in the teeth of disagreement by better-informed experts. So here are four brief defenses of non-deference. I fear none of them is quite sufficient. But maybe in combination they will take us somewhere?

    (1.) The "experts" might not be experts. This is McGrath's defense of non-deference in ethics. Despite their seeming expertise, great ethicists have often been horribly wrong in the past. See Aristotle on slavery, Kant on bastards, masturbation, homosexuality, wives, and servants, the consensus of philosophers in favor of World War I, and ethicists' seeming inability to reason better even about trolley problems than non-ethicists.

    (2.) Firsthand intuitive insight might be highly intrinsically valuable. I'm a big believer in the intrinsic value of knowledge (including self-knowledge). One of the most amazing and important things about life on Earth is that sometimes we bags of mostly water can stop and reflect on some of the biggest, most puzzling questions that there are. An important component of the intrinsic value of philosophical reflection is the real understanding that comes with firsthand intuitive insight, or seeming insight, or partial insight -- our ability to reach our own philosophical judgments instead of simply deferring to experts. This might be valuable enough to merit some substantial loss of ethical correctness to preserve it.

    (3.) The philosophical community might profit from diversity of moral opinion, even if individuals with unusual views are likely to be incorrect. The philosophical community as a whole might, over time, be more likely to converge upon correct ethical views if it fosters diversity of opinion. If we all defer to whoever seems to be most expert, we might reach consensus too fast on a wrong, or at least a narrow and partial, ethical view. Compare Kuhn and Longino on the value of diversity in scientific opinion: Intellectual communities need stubborn defenders of unlikely views, even if those stubborn folks are probably wrong -- since sometimes they have an important piece of the truth that others are missing.

    (4.) Proper moral motivation might require acting from one's own insight rather than from moral deference. The bully who apologizes out of deference gives, I think, a less perfect apology than the bully who has firsthand intuitive insight into the need to apologize. Maybe in some cases, being motivated by one's own intuitive insight is so morally important that it's better to do the ethically wrong thing on the basis of your imperfect but non-deferential insight than to do the ethically right thing deferentially.

    As I said, none of these defenses of non-deference seems quite enough on its own. Even if the experts might be wrong (Point 1), from a bird's-eye perspective it seems like our best guess should be that they're not. And the considerations in Points 2-4 seem plausibly to be only secondary from the perspective of the person who wants really to have ethically correct views by which to guide her behavior.

    [image source]

    Friday, March 08, 2019

    Thoughts, Judgments, and Beliefs -- What's the Difference?

    Today I'm going to pitch a taxonomy. My secret agenda is to undermine overly intellectualist views about belief and self-knowledge.

    An episode of some sort occurs in your mind. Let's say it's in inner speech: "I should get started on that blog post" or "It's fine for her to choose Irvine over Riverside". On the face of it, it's an assertion: It's not a question or a string of nonsense or a "hmmmmm...". An episode of inner speech in the form of an assertion is, I will say, one type of assertoric thought. (Assertoric thought needn't require inner speech if visual imagery or emotional reactions or imageless thoughts can do the same type of work; but let's set that issue aside for today.) Inner speech of this sort can cross your mind without your judging or believing the content. If I sing to myself "She's buying a stairway to Heaven", normally I don't at that same moment believe that anyone is actually buying a stairway to Heaven. At other times, it seems that I do genuinely believe what I am saying to myself, with the inner speech somehow the vehicle of that belief: "Uh oh, we're out of coffee!" The question is: What is present in the latter case that is absent in the former?

    One possibility is a feeling of assent. On this view, when "out of coffee!" comes to my mind, accompanying that inner speech is another type of experience, not in inner speech -- an experience of yes-this-is-true, or a feeling of confidence or correctness. In contrast, when I'm singing along with Led Zeppelin, there's no such accompanying yes-this-is-true experience.

    Another possibility, the one I prefer, is that the important difference is less in the phenomenology, that is, in some experiential difference between the two cases, than it is in the type of cognitive traction the thought has. Does the thought spin idly, so to speak, or does it penetrate into other aspects of your cognitive life? I'd like to suggest that if the thought has one type of cognitive traction, it's a judgment. And if the judgment has a certain type of further traction, you believe it. If the thought has little to no traction, it's a mere idle thought (or as I call it elsewhere, a "wraith" of judgment).

    It seems to me phenomenologically plausible that at least sometimes we feel confidence when we speak silently to ourselves, and sometimes we feel doubt or skepticism or like we're just singing some words. The same episode of inner speech "She's buying a stairway" can be experienced very differently, and this difference can have something to do with whether we really judge it to be so. But for two reasons, I think it's a mistake to rely on these phenomenological differences in distinguishing between judgments and merely idle assertoric thoughts.

    First, even if there is sometimes a phenomenology of confidence or this-is-really-so-ness and sometimes a feeling of doubt or I'm-merely-singing, it is by no means clear that such epistemic phenomenology accompanies all of our inner speech or even most of it. For example, in Russell Hurlburt's experience sampling studies, we don't see a lot of reports of this type of epistemic phenomenology. Thinking back as best I can on my own stream of experience (some systematically sampled, but mostly not), it strikes me that such phenomenology would generally be subtle -- the kind of thing it would be easy for a theorist to miss or alternatively to invent, given the difficulty of knowing such structural features of experience. Such phenomenological criteria are, at best, a dubious theoretical foundation for such an important distinction.

    Second, and maybe more importantly, what we should care about in making a distinction of this sort is not the existence, or not, of a feeling of confidence or some accompanying phenomenology of this-is-so. What matters more is the role the thought plays in one's cognitive life. That role is what the distinction between judgment and idle thinking ought to track.

    Consider the two examples I began with: "I should get started on that blog post" and "It's fine for her to choose Irvine over Riverside". I say these to myself, perhaps with some feeling of that-is-so. But then, maybe I don't start on the blog post. I check Facebook instead, though there's no real need for me to do so. Nor do I feel particularly bad about that, or torn. The thought occurred, seemed in some sense right, but didn't penetrate further into my cognition or decision making. Meanwhile, maybe, I remain miffed that she rejected Riverside for Irvine (I'm imagining here a graduate student or faculty member choosing to decline our offer of admission or hiring). Probably I shouldn't be miffed. It is fine! People ought to choose what they think is best for them. And yet... most of my cognition about the matter remains wrongly and irrationally hurt and resentful. I'm trying to convince myself, but I haven't fully succeeded.

    What we do and should care about in distinguishing judgment from idle thought is the extent to which I have succeeded. If, at least for that moment, my planning and thinking really is informed by my seeming-assessment that it's time to get started and that it's fine for her to choose Irvine, then that is what I have judged. But if, as is sometimes the case, the thoughts don't really penetrate into the remainder of my cognitive life, don’t guide other aspects of my reasoning and my posture toward the world, then it's probably best to regard them as mere idle thoughts, rather than genuine judgments, even if in some superficial way I feel sincere and this-is-so-ish when I say them to myself. (On the metaphor of attitudes as postures toward the world, see my discussion here.)

    That is how I would like to draw the distinction between judgment and idle thought.

    How about belief? Here I want to make a similar move, but at an expanded temporal scale. We might sincerely judge something to be so in the sense that our related thoughts, and our general posture toward the world, are for a moment aligned toward the truth of that thing. I really do, now that I think of it carefully, judge it to be fine for her to have made that choice. Of course it's fine! But the difference between a judgment and a belief is the difference between an occurrence and a steady-state thing. A judgment happens in a moment; a belief endures, at least for a while. The question is: Does the judgment stick? Does that momentary assessment have enough cognitive traction to change how I will feel about it next time I return to the question? After the conscious thought vanishes, will it leave some sort of more durable trace in my cognitive structure? Or is it here and gone? Belief requires, I suggest, that more durable trace.

    One way to think of it is this: A conscious thought is in a way a preparation for a judgment, and a judgment is in a way a preparation for a belief. "P" bubbles up into your mind, for some reason. If P finds the right kind of momentary home in your mind, if, at that moment, for the duration of its presence in the footlights of consciousness, it shifts or solidifies related aspects of your mentality, then it is a full-bodied judgment and not just an idle thought. And if what it shifts and solidifies stays shifted and solidified after the judgment fades from consciousness, then that judgment has become a belief.

    Back to the secret agenda: If this is right, you cannot just read what you believe, or even what you currently judge, off of what you can introspectively discover, or what you say with a feeling of sincerity. Genuine belief and judgment require penetration deeper into the springs of thought and action.



    "A Dispositional Approach to the Attitudes: Thinking Outside of the Belief Box" (in Nottelmann, ed., New Essays on Belief, 2013).

    "Do You Have Whole Herds of Swiftly Forgotten Microbeliefs?" (Feb 1, 2019). [N.B.: Today's post suggests a partial resolution to the question that the microbeliefs post leaves open.]

    "Against Intellectualism about Belief" (Jul 31, 2015).

    Friday, March 01, 2019

    In Philosophy, Departments with More Women Faculty Award More PhDs to Women (Plus Some Other Interesting Facts)

    Women constitute about 32% of Philosophy Bachelor's degree recipients in the U.S., about 29% of Philosophy PhD recipients, and about 20-25% of philosophy faculty (Paxton et al. 2012; Schwitzgebel and Jennings 2017). It is sometimes suggested that the relatively low percentage of women faculty in philosophy explains the relatively low percentage of women who major in philosophy (which then in turn explains the relatively low percentage of women who become the next generation of philosophy faculty).

    I was curious whether philosophy departments with a relatively high percentage of women faculty would also have a relatively high percentage of students who are women. Maybe departments with more women faculty are more "woman friendly", with a visible effect on the proportion of women who complete the Bachelor's or PhD?

    Paxton et al. 2012 provide some evidence of a relationship between departments' proportion of women faculty, women undergraduates, and women graduate students. In a sample of 49 departments, they found a substantial correlation between the percent of women faculty and the percent of undergraduate philosophy majors who are women (r = .45, p = .012). However, in a similar sample of 31 departments, they did not report finding such a correlation between percent of faculty who are women and percent of PhD students who are women.

    There are a few limitations in the Paxton et al. study. First, thirty-one departments is a somewhat small number for such an analysis, yielding only limited statistical power to detect medium-sized correlations (note that with 49 departments in their undergraduate analysis, Paxton et al.'s p-value was greater than .01 despite a correlation of .45). Second, the sample of departments might be unrepresentative. And third, the proportion of women who complete the PhD might be a better measure of women-friendliness or women's success than the proportion enrolled in the PhD program, since a substantial proportion of philosophy PhD students do not complete their degrees (in many departments completion rates are around 50%) and (anecdotally) non-completion rates might be higher for women than men (I welcome pointers to systematic data on this).

    For these reasons, I decided to examine whether in a larger sample of PhD-granting philosophy departments in the U.S., the percent of women faculty would correlate with the percent of women completing the PhD.

    For the data on students, I relied on the IPEDS database from the National Center for Educational Statistics, using an eight-year time frame from the academic year 2009-2010 to 2016-2017. For faculty, I used Julie Van Camp's counts of women faculty and total faculty in 97 doctoral programs in the U.S. from January 2006 and January 2015, as recovered through the Wayback Machine Internet Archive. (These 97 programs produce about 95% of the Philosophy PhDs in the U.S. ETA: This includes tenured and tenure-track faculty only.) For each department's women-faculty percentage, I averaged the percentage of women faculty in 2006 and in 2015 to reduce noise due to temporary gains and losses. (My own department, for example, had 2/17 [12%] women faculty in 2006 and 4/19 [24%] in 2015, and is probably better represented by 18% than by either the higher or the lower number.)

    Overall, women were 20% of faculty in 2006 (340/1669) and 25% of faculty in 2015 (442/1755), a statistically significant increase (z = 3.4, p = .001). Although 20% to 25% may not sound like much, it is actually quite remarkable for such a short period. The faculty growth between 2006 and 2015 in this set of universities was only 86 positions (from 1669 to 1755 total faculty), while the growth of women faculty was 102 positions.
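    For readers who want to check the arithmetic, a z statistic like the one above can be reproduced with a standard pooled two-proportion z-test. The sketch below is my own, not the analysis code actually used for this post, but it recovers the reported z = 3.4 for the 2006-vs-2015 faculty comparison:

    ```python
    # Pooled two-proportion z-test: did the proportion of women faculty
    # differ between 2006 (340/1669) and 2015 (442/1755)?
    from math import sqrt

    def two_proportion_z(x1, n1, x2, n2):
        """z statistic for H0: p1 == p2, using the pooled proportion."""
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)  # combined proportion under H0
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        return (p2 - p1) / se

    z = two_proportion_z(340, 1669, 442, 1755)
    print(round(z, 1))  # 3.4, matching the reported value
    ```

    (Exact z values can differ slightly depending on whether one uses the pooled or unpooled standard error, or a continuity correction.)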

    The pattern in undergraduate Bachelor's degree completions in these same institutions is in some ways similar. Among these 97 institutions, the percentage of women earning BAs increased from 29% (1066/3618) to 34% (957/2787). This is statistically significant (z = 4.1, p < .001), and intermediate years show a slow steady increase (30%, then 31%, then 32%). However, it is possible that this is just a brief fluctuation in a long-term trend, in which percentage of women among philosophy majors has held approximately steady at 30-34% since at least 1986. Also notable: While faculty numbers increased, graduating majors decreased (fitting with national trends across all university types).

    The pattern in PhD completions is approximately flat over the period (fitting with results from the NSF reported here), fluctuating between 25% and 33% women -- coincidentally, 27% both at the beginning (100/372) and at the end (113/415) of the period. However, with numbers this low, statistical power is an issue.

    The main question I was looking at was correlational: Do the universities with a higher proportion of women faculty tend to have a higher proportion of women completing their PhDs? And the answer is...


    Here it is as a chart:

    [apologies for blurry image: click to clarify and enlarge]

    The correlation is substantial: r = .42 (p < .001). For example, although only 37 of the 97 universities had over 25% women faculty, all ten of the universities that had the highest proportion of women among their Philosophy PhD recipients did.

    Oddly, however, for Bachelor's degrees, I can find no relationship at all, with a correlation of r = -.01 (p = .96). This result contrasts sharply with the Paxton et al. results, and I'm not sure what to make of it. A follow-up study might look at a broader sample of undergraduate institutions to see what sort of relationship there is between percent of women faculty and percent of women undergraduates in philosophy and whether it might vary with institution type.
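    The correlational results above are plain Pearson correlations computed across departments (one pair of percentages per department). Here is a minimal sketch of that computation, using hypothetical stand-in percentages rather than the actual 97-department data:

    ```python
    # Pearson's r between two department-level percentages.
    # These six data points are hypothetical illustrations only.
    import numpy as np

    pct_women_faculty = np.array([12.0, 18.0, 20.0, 25.0, 30.0, 35.0])
    pct_women_phds = np.array([20.0, 22.0, 30.0, 27.0, 35.0, 40.0])

    # corrcoef returns the 2x2 correlation matrix; the off-diagonal
    # entry is the Pearson correlation between the two variables.
    r = np.corrcoef(pct_women_faculty, pct_women_phds)[0, 1]
    print(round(r, 2))
    ```

    Run on the real data, this kind of computation yields the r = .42 (PhDs) and r = -.01 (Bachelor's) figures reported above; the hypothetical numbers here only illustrate the mechanics.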