Friday, June 29, 2018

Sorry about the Lingering Unapproved Comments!

... It looks like Blogger has stopped giving me notifications of comments to approve. I found a huge backlog, most of which I have approved. Let me see if I can turn notifications back on.

It will take me until next week to catch up on responding to a selection of the comments that have been lingering. Sorry to have neglected you!

Will Future Generations Find Us Especially Morally Loathsome?

Ethical norms change. Although reading Confucius doesn't feel like encountering some wholly bizarre, alien moral system, some ethical ideas do differ dramatically over time and between cultures. Genocide and civilian-slaughtering aggressive warfare are now widely considered to be among the most evil things people can do, yet they appear to be celebrated in the Bible (especially Deuteronomy and Joshua), and we still name children after Alexander "the Great". Many seemingly careful thinkers, including notoriously Aristotle and Locke, wrote justifications of slavery. Much of the world has only recently opened its eyes to the historically common oppression of women, homosexuals, low-status workers, people with disabilities, and ethnic minorities.

We probably haven't reached the end of moral change. In a few centuries, people might look back on our current norms with the same mix of appreciation and condemnation that we now look back on ethical norms common in Warring States China and Early Modern Europe.

Indeed, future generations might find our generation to be especially vividly loathsome, since we are the first generation creating an extensive video record of our day-to-day activities.

It’s one thing to know, in the abstract, that Rousseau fathered five children with a lover he regarded as too dull-witted to be worth attempting to formally educate, and that he demanded against her protests that their children be sent to (possibly very high mortality) orphanages [see esp. Confessions, Book VII]. It would be quite another if we had baby pictures and video of Rousseau's interactions with Thérèse. It's one thing to know, in the abstract, that Aristotle had a wife and a life of privilege. It would be quite another to watch video of him proudly enacting sexist and classist values we now find vile. Future generations that detest our sexual practices, or our consumerism, or our casual destruction of the environment, or our neglect of the sick and elderly, might be especially horrified to view these practices in vivid detail.

By "we" and "our" practices and values, I mean the typical practices and values of highly educated readers from early 21st-century democracies -- the notional readership of this blog. Maybe climate change proves to be catastrophic: Crops fail, low-lying cities are flooded, a billion desperate people are displaced or malnourished and tossed into war. Looking back on video of a philosopher of our era proudly stepping out of his shiny, privately-owned minivan, across his beautiful irrigated lawn in the summer heat, into his large chilly air-conditioned house, maybe wearing a leather hat, maybe sharing McDonald's ice-cream cones with his kids -- looking back, that is, on what I (of course this is me) think of as a lovely family moment -- might this seem to some future Bangladeshi philosopher as vividly disgusting as I suspect I would find Aristotle's treatment of Greek slaves?

#

If we are currently at the moral pinnacle, any change in future values will be a change for the worse. Future generations might condemn our mixing of the races, for example. They might be disgusted to see pictures of interracial couples walking together in public and raising their mixed-race children. Or they might condemn us for clothing customs that they come to view as obscene. However, I feel comfortable saying that they'd be wrong to condemn us, if those were the reasons why.

But it seems unlikely that we are at the pinnacle; and thus it seems likely that future generations might have some excellent moral reason to condemn us. More likely than our being at the moral pinnacle, it seems to me, is that either (a.) there has been a slow trajectory toward better values over the centuries (as argued by Steven Pinker) and that the trajectory will continue, or alternatively that (b.) shifts in value are more or less a random walk up, down, and sideways, in which case it would be unlikely chance if we happened to be at the peak right now. I am assuming here the same kind of non-relativism that most people assume in condemning Nazism and in thinking that it constitutes genuine moral progress to recognize the equal moral status of women and men.

(To someone who endorses most of the widely-shared values of their group, it is almost analytically the case that they will see their group's values as the peak. Suppose you endorse the mainstream values in your group -- values A, B, C, D, E, and F. Elsewhere, the mainstream values might instead be A, not-B, D, E, F, and G, or A, C, not-D, not-E, H, and I. Of course it will seem to you that yours is the group that got it right -- exactly A, B, C, D, E, and F! It will seem to you that changes from past values have been good, and that the likely future rejection of your values will be mistaken. This is basically the old man's "kids these days!" complaint, writ large.)

I worry then, that we might be in a situation similar to Aristotle's: horribly wrong (most of us) on some really important moral issues, though it doesn't feel like we're wrong, and although we think we are applying our excellent minds excellently to the matter, with wisdom and good sense. I worry that we, or I, might be using philosophy to justify the 21st-century college-educated North American's moral equivalent of keeping slaves, oppressing women, and launching genocidal war.

Is there some way of gaining insight into this possibility? Some way to get a temperature reading, so to speak, on our unrecognized evil?

Here's one thing I don't think will work: Rely on the ethical reasoning of the highest status philosophers in our society. If you've read any of my work on Kant's applied ethics, German philosophers' failure to reject Nazism, and the morality of ethics professors, you'll know why I say this.

#

I'd suggest, or at least I'd hope, that if future generations rightly condemn us, it won't be for something we'd find incomprehensible. It won't be because we sometimes chose blue shirts over red ones or because we like to smile at children. It will be for things that we already have an inkling might be wrong, and which some people do already condemn as wrong. As Michele Moody-Adams emphasizes in her discussion of slavery and cultural relativism (Moody-Adams 1997, ch. 2), in every slave culture there were always some voices condemning the injustice of slavery -- among them, typically, the slaves themselves -- and it required a kind of affected ignorance to disregard those voices. As a clue to our own evil, we might look to minority moral opinions in our own culture.

I tend to disagree with those minority opinions. I tend to think that the behavior of my social group is more or less fine, or at least forgivably mediocre. If someone advances a minority ethical view I disagree with, I'm philosopher enough to concoct some superficially plausible defenses. What I worry is that a properly situated observer might recognize those defenses to be no better than Hans Heyse's defense of Nazism or Kant's critique of masturbation.

Moody-Adams suggests that we can begin to transcend our cultural and historical moral boundaries through moral reflection and moral imagination. In the epilogue of her 1997 book, she finds hope in the kind of moral reflection that involves self-scrutiny, vivid imagination, wide-ranging contact with other disciplines and traditions, a recognition of minority voices, and serious engagement with the concrete details of everyday moral inquiry.

Hey, that sounds pretty good! I'll put, or try to put, my hopes there too.

Wednesday, June 20, 2018

The Perceived Importance of Kant, as Measured by Advertisements for Specialists in His Work

I'm revising a couple of my old posts on Kant for my next book, and I wanted some quantitative data on the importance of Kant in Anglophone philosophy departments.

There's a Leiter poll, where Kant ranks as the third "most important" philosopher of all time, after Plato and Aristotle. That's pretty high! But a couple of measures suggest he might be even more important than number three. In terms of appearances in philosophy abstracts, he might be number one. Kant* appears 4370 times since 2010 in Philosopher's Index abstracts, compared to 2756 for Plato*, 3349 for Aristot*, 1096 for Hume*, 1545 for Nietzsch*, and 1110 for Marx*. I've tried a bunch of names and found no one higher.

But maybe the most striking measure of a philosopher's perceived importance is when philosophy departments advertise for specialists specifically in that person's work. By this measure, Kant is the winner, hands-down. Not even close!

Here's what I did: I searched PhilJobs -- currently the main resource for philosophy jobs in the Anglophone world -- for permanent or tenure-track positions posted from June 1, 2015 to June 18, 2018. "Kant*" yields 30 ads (of 910 in the database), among which 17 contained "Kant" or "Kantian" in the line for "Area of Specialization". One said "excluding Kant", so let's toss that one out, leaving 29 and 16. Four were specifically asking for "post-Kantian" philosophy (which presumably excludes Kant, though it's a testament to his influence that a historical period is referred to in this way), but most were advertising either for a Kant specialist (e.g., UNC Chapel Hill searched in AOS "Kant's theoretical philosophy") or for Kant among other things (e.g., Notre Dame "Kant and/or early modern"). Where "Kant" was not in the AOS line, his name was either in the Area of Competence line or somewhere in the body of the ad [note 1].

In sum, the method above yields:
Kant: 29 total PhilJobs hits, 16 in AOS (12 if you exclude "post-Kantian").

Here are some others:

Plato*: 3, 0.
Aristot*: 2, 0.
Hume*: 1, 0.
Confuc*: 1, 0.
Aquin*: 3, 1 (all Catholic universities).
Nietzsch*: 0, 0.
Marx*: 5, 1 (4 of the 5 from Chinese universities).
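For concreteness, the tallies above can be collected in a short script. The counts are just the ones reported in this post; the dictionary layout and the share calculation are my own way of summarizing them.

```python
# Tally of the PhilJobs search results reported above (June 2015 - June 2018).
# Each entry maps a search string to (total ads, ads with the name in the AOS line).
philjobs_hits = {
    "Kant*": (29, 16),      # 12 in AOS if the four "post-Kantian" ads are excluded
    "Plato*": (3, 0),
    "Aristot*": (2, 0),
    "Hume*": (1, 0),
    "Confuc*": (1, 0),
    "Aquin*": (3, 1),
    "Nietzsch*": (0, 0),
    "Marx*": (5, 1),
}

# Kant's share of AOS-line hits among the figures searched here
total_aos = sum(aos for _, aos in philjobs_hits.values())
kant_share = philjobs_hits["Kant*"][1] / total_aos
print(f"AOS hits: {total_aos}; Kant's share: {kant_share:.0%}")
```

By this rough summary, Kant accounts for the large majority of the specialist-AOS ads among all the figures I searched, which is the "hands down" point in numerical form.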

As I said, hands down. Kant runs away with the title, Plato and Confucius shading their eyes in awe as they watch him zoom toward the horizon.

Note 1: If "Kant" was in the body of the ad, it was sometimes because the university was mentioning their department's strength in Kant rather than searching for someone in Kant, but for my purposes if a department is self-describing its strengths in that way, that's also a good signal of Kant's perceived importance, so I haven't excluded those cases.

[image source]

Thursday, June 14, 2018

Slippery Slope Arguments and Discretely Countable Subjects of Experience

I've become increasingly worried about slippery slope arguments concerning the presence or absence of (phenomenal) consciousness. Partly this is in response to Peter Carruthers' new draft article on animal consciousness, partly it's because I'm revisiting some of my thought experiments about group minds, and partly it's just something I've been worrying about for a while.

To build a slippery slope argument concerning the presence of consciousness, do this:

* First, take some obviously conscious [or non-conscious] system as an anchor point -- such as an ordinary adult human being (clearly conscious) or an ordinary proton (obviously(?) non-conscious).

* Second, imagine a series of small changes at the far end of which is a case that some people might view as a case of the opposite sort. For example, subtract one molecule at a time from the human until you have only one proton left. (Note: This is a toy example; for more attractive versions of the argument, see below.)

* Third, highlight the implausibility of the idea that consciousness suddenly winks out [winks in] at any one of these little steps.

* Finally, conclude that the disputable system at the end of the series is also conscious [non-conscious].

Now, slippery slope arguments are generally misleading for vague predicates like "red". Even if we can't identify an exact point of transition from red to non-red in a series of shades from red to blue, it doesn't follow that blue is red. Red is a vague predicate, so it ought to admit of vague, in-betweenish cases. (There are some fun logical puzzles about vague predicates, of course, but I trust that our community of capable logicians will eventually sort that stuff out.)

However, unlike redness, the presence or absence of consciousness seems to be a discrete all-or-nothing affair, which makes slippery-slope arguments more tempting. As John Searle says somewhere (hm... where?), having consciousness is like having money: You can have a little of it or a lot of it -- a penny or a million bucks -- but there's a discrete difference between having only a little and having not a single cent's worth. Consider sensory experience, for example. You can have a richly detailed visual field, or you can have an impoverished visual field, but there is, or at least seems to be, a discrete difference between having a tiny wisp of sensory experience (e.g., a brief gray dot, the sensory equivalent of a penny) and having no sensory experience at all. We normally think of subjects of experience as discrete, countable entities. Except as a joke, most of us wouldn't say that there are two-and-a-half conscious entities in the room or that an entity has 3/8 of a stream of experience. An entity either is a subject of conscious experience (however limited their experience is) or has no conscious experience at all.

Consider these three familiar slippery slopes.

(1.) Across the animal kingdom. We normally assume that humans, dogs, and apes are genuinely, richly phenomenally conscious. We can imagine a series of less and less sophisticated animals all the way down to the simplest animals or even down into unicellular life. It doesn't seem that there's a plausible place to draw a bright line, on one side of which the animals are conscious and on the other side of which they are not. (I did once hear an ethologist suggest that the line was exactly between toads (conscious) and frogs (non-conscious); but even if you accept that, we can construct a fine-grained toad-frog series.)

(2.) Across human development. The fertilized egg is presumably not conscious; the cute baby presumably is conscious. The moment of birth is important -- but it's not clear that it's so neurologically important that it is the bright line between an entirely non-conscious fetus and a conscious baby. Nor does there seem to be any other obvious sharp transition point.

(3.) Neural replacement. Tom Cuda and David Chalmers imagine replacing someone's biological neurons one by one with functionally equivalent artificial neurons. A sudden wink-out between N and N+1 replaced neurons doesn't seem intuitively plausible. (Nor does it seem intuitively plausible that there's a gradual fading away of consciousness while outward behavior, such as verbal reports, stays the same.) Cuda and Chalmers conclude that swapping out biological neurons for functionally similar artificial neurons would preserve consciousness.

Less familiar, but potentially just as troubling, are group consciousness cases. I've argued, for example, that Giulio Tononi's influential Integrated Information Theory of consciousness runs into trouble in employing a threshold across a slippery slope (e.g. here and Section 2 here). Here the slippery slope isn't between zero and one conscious subjects, but rather between one and N subjects (N > 1).

(4.) Group consciousness. At one end, anchor with N discretely distinct conscious entities and presumably no additional stream of consciousness at the group level. At the other end, anchor with a single conscious entity with parts none of which, presumably, is an individual subject of experience. Any particular way of making this more concrete will have some tricky assumptions, but we might suppose an Ann Leckie "ancillary" case with a hundred humanoid AIs in contact with a central computer on a ship. As the "distinct entities" anchor, imagine that the AIs are as independent as ordinary human beings are, and the central computer is just a communications relay. Intermediate steps involve more and more information transfer and central influence or control. The anchor case on the other end is one in which the humanoid AIs are just individually nonconscious limbs of a single fully integrated system (though spatially discontinuous). Alternatively, if you like your thought experiments brainy, anchor on one end with normally brained humans, then construct a series in which these brains are slowly neurally wired together and perhaps shrunk, until there's a single integrated brain again as the anchor on the other end.

Although the group consciousness cases are pretty high-flying as thought experiments, they render the countability issue wonderfully stark. If streams of consciousness really are countably discrete, then either you must:

(a.) Deny one of the anchors. There was group consciousness all along, perhaps!

(b.) Affirm that there's a sharp transition point at which adding just a single bit's worth of integration suddenly shifts the whole system from N distinct conscious entities to only one conscious entity, despite the seemingly very minor structural difference (as on Tononi's view).

(c.) Try to wiggle out of the sharp transition with some intermediate number between N and 1. Maybe this humanoid winks out first while this other virtually identical humanoid still has a stream of consciousness -- though that's also rather strange and doesn't fully escape the problem.

(d.) Deny that conscious subjects, or streams of conscious experience, really must come in discretely countable packages.

I'm increasingly drawn to (d), though I'm not sure I can quite wrap my head around that possibility yet or fully appreciate its consequences.

[image adapted from Pixabay]

Wednesday, June 06, 2018

Research Funding: The Pretty-Proposal Approach vs the Recent-Past-Results Approach

Say you have some money and you want to fund some research. You're an institution of some sort: NSF, Templeton, MacArthur, a university's Committee on Research. How do you decide who gets your money?

Here are two broad approaches:

The Pretty Proposal Approach. Send out a call for applications. Give the money to the researchers who make the best case that they have an awesome research plan.

The Recent-Past-Results Approach. Figure out who in the field has recently been doing the best research of the sort you want to fund. Give them money for more such research.

[ETA for clarity, 09:46] The ideal form of the Recent-Past-Results Approach is one in which the researcher does not even have to write a proposal!

Of course both models have advantages and disadvantages. But on the whole, I'd suggest, too much funding is distributed based on the Pretty Proposal model and too little based on the Recent-Past-Results model.


I see three main advantages to the Pretty Proposal Approach:

First, and very importantly in my mind, the PPA is egalitarian. It doesn't matter what you've done in the past. If you have a great proposal, you deserve funding!

Second, two researchers with equally good track records might have differently promising future plans, and this approach (if it goes well) will reward the researcher with the more promising plans.

Third, the institution can more precisely control exactly what research projects are funded (possibly an advantage from the perspective of the institution).


But the Pretty Proposal Approach has some big downsides compared to the Recent-Past-Results Approach:

First, in my experience, researchers spend a huge amount of time writing pretty proposals, and the amount of time has been increasing sharply. This is time they don't spend on research itself. In the aggregate, this is a huge loss to academic research productivity (e.g., see here and here). The Recent-Past-Results approach, in contrast, needn't involve any active asking by the researcher (if the granting agency does the work of finding promising recipients), or submission only of a cv and recent publications. This would allow academics to deploy more of their skills and time on the research itself, rather than on constructing beautiful requests for money.

Second, past research performance probably better predicts future research performance than do promises of future research performance. I am unaware of data specifically on this question, but in general I find it better policy to anticipate what people will do based on what they've done in the past than based on the handsome promises they make when asking for money. If this is correct, then better research is likely to be funded on a Recent-Past-Results approach. (Caveat: Most grant proposals already require some evidence of your expertise and past work, which can help mitigate this disadvantage.)

Third, the best researchers are often opportunistic and move fast. They will do better research if they can pursue emerging opportunities and inspirations than if they are tied to a proposal written a year or more before.

In my view, the downsides of the dominant Pretty Proposal Approach are sufficiently large that we should shift a substantial proportion (not all) of our research funding toward the Recent-Past-Results Approach.


What about the three advantages of the Pretty Proposal Approach?

The third advantage of the PPA -- increased institutional power -- is not clearly an all-things-considered advantage. Researchers who have recently done good work in the eyes of grant evaluators might be better at deciding the specific best uses of future research resources than are those grant evaluators themselves. Institutions understandably want some control; but they can exert this control by conditional granting: "We offer you this money to spend on research on Topic X (meeting further Criteria Y and Z), if you wish to do more such research."

The second advantage of the PPA -- more funding for similar researchers with differently promising plans -- can be partly accommodated by retaining the Pretty Proposal Approach as a substantial component of research funding. I certainly wouldn't want to see all funding to be based on Recent Past Results!

The first advantage of the PPA -- egalitarianism -- is the most concerning to me. I don't think we want to see elite professors and friends of the granting committees getting ever more of the grant money in a self-reinforcing cycle. A Recent-Past-Results Approach should implement stringent measures to reduce the risk of this outcome. Here are a few possibilities:

Prioritize researchers with less institutional support. If two researchers have similarly excellent past results but one has achieved those results with less institutional support -- a higher teaching load, less previous grant funding -- then prioritize the one with less support. Especially prioritize funding research by people with decent track records and very little institutional support, perhaps even over those with very good track records and loads of institutional support. This helps level the playing field, and it also might produce better results overall, since those with the least existing institutional support might be the ones who would most benefit from an increase in support.

Low-threshold equal funding. Create some low bar, then fund everyone at the same small level once they cross that bar. This might be good practice for universities funding small grants for faculty conference travel, for example (compared to faculty having to write detailed justifications for conference travel).

Term limits. Require a five-year hiatus, for example, after five years of funding so that other researchers have a chance at showing what they can do when they receive funding.

[ETA 10:37] In favoring more emphasis on the Recent-Past-Results Approach, I am not suggesting that everyone write Pretty Proposals with cvs attached and then the funding is decided mostly based on cv. That would combine the time disadvantage of writing Pretty Proposals with the inegalitarian disadvantage of the Recent-Past-Results Approach, and it would add misdirection since people would be invited to think that writing a good proposal is important. (Any resemblance of the real grants process to this dystopian worst-of-all-worlds approach is purely coincidental.) I am proposing either no submission at all by the grant recipient (models include MacArthur "genius" grants and automatic faculty start-up funds) or a very minimal description of topic, with no discussion of methods, impact, previous literature, etc.

[Still another ETA, 11:03] I hadn't considered random funding! See here and here (HT Daniel Brunson). An intriguing idea, perhaps in combination with a low threshold of some sort.

Related Posts:

How to Give $1 Million a Year to Philosophers (Mar 18, 2013).

Against Increasing the Power of Grant Agencies in Philosophy (Dec 23, 2011).

Friday, June 01, 2018

Does It Harm Philosophy as a Discipline to Discuss the Apparently Meager Practical Effects of Studying Ethics?

I've done a lot of empirical work on the apparently meager practical effects of studying philosophical ethics. Although most philosophers seem to view my work either neutrally or positively, or have concerns about the empirical details of this or that study, others react quite negatively to the whole project, more or less in principle.

About a month ago on Facebook, Samuel Rickless did such a nice job articulating some general concerns (see his comment on this public post) that I thought I'd quote his comments here and share some of my reactions.

First, My Research:

* In a series of studies published from 2009 to 2014, mostly in collaboration with Joshua Rust (and summarized here), I've empirically explored the moral behavior of ethics professors. As far as I know, no one else had ever systematically examined this question. Across 17 measures of (arguably) moral behavior, ranging from rates of charitable donation to staying in contact with one's mother to vegetarianism to littering to responding to student emails to peer ratings of overall moral behavior, I have found not a single main measure on which ethicists appeared to act morally better than comparison groups of other professors; nor do they appear to behave better overall when the data are merged meta-analytically. (Caveat: on some secondary measures we found ethicists to behave better. However, on other measures we found them to behave worse, with no clearly interpretable overall pattern.)

* In a pair of studies with Fiery Cushman, published in 2012 and 2015, I've found that philosophers, including professional ethicists, seem to be no less susceptible than non-philosophers to apparently irrational order effects and framing effects in their evaluation of moral dilemmas.

* More recently, I've turned my attention to philosophical pedagogy. In an unpublished critical review from 2013, I found little good empirical evidence that business ethics or medical ethics instruction has any practical effect on student behavior. I have been following up with some empirical research of my own with several different collaborators. None of it is complete yet, but preliminary results tend to confirm the lack of practical effect, except perhaps when there's the right kind of narrative or emotional engagement. On grounds of armchair plausibility, I tend to favor multi-causal, canceling explanations over the view that philosophical reflection is simply inert (contra Jon Haidt); thus I'm inclined to explore how backfire effects might on average tend to cancel positive effects. It was a post on the possible backfire effects of teaching ethics that prompted Rickless's comment.

Rickless's Objection:
(shared with permission, adding lineation and emphasis for clarity)

Rickless: And I’ll be honest, Eric, all this stuff about how unethical ethicists are, and how counterproductive their courses might be, really bothers me. It’s not that I think that ethics courses can’t be improved or that all ethicists are wonderful people. But please understand that the takeaway from this kind of research and speculation, as it will likely be processed by journalists and others who may well pick up and run with it, will be that philosophers are shits whose courses turn their students into shits. And this may lead to the defunding of philosophy, the removal of ethics courses from business school, and, to my mind, a host of other consequences that are almost certainly far worse than the ills that you are looking to prevent.

Schwitzgebel: Samuel, I understand that concern. You might be right about the effects. However, I also think that if it is correct that ethics classes as standardly taught have little of the positive effect that some administrators and students hope for from them, we as a society should know that. It should be explored in a rigorous way. On the possibly bright side, a new dimension of my research is starting to examine conditions under which teaching does have a positive measurable effect on real-world behavior. I am hopeful that understanding that better will lead us to teach better.

Rickless: In theory, what you say about knowing that courses have little or no positive effect makes sense. But in practice, I have the following concerns.

First, no set of studies could possibly measure all the positive and negative effects of teaching ethics this way or that way. You just can’t control all the potentially relevant variables, in part because you don’t know what all the potentially relevant variables are, in part because you can’t fix all the parameters with only one parameter allowed to vary.

Second, you need to be thinking very seriously about whether your own motives (particularly motives related to bursting bubbles and countering conventional wisdom) are playing a role in your research, because those motives can have unseen effects on the way that research is conducted, as well as the conclusions drawn from it. I am not imputing bad motives to you. Far from it, and quite the opposite. But I think that all researchers, myself included, want their research to be striking and interesting, sometimes surprising.

Third, the tendency of researchers is to draw conclusions that go beyond the actual evidence.

Fourth, the combination of all these factors leads to conclusions that have a significant likelihood of being mistaken.

Fifth, those conclusions will likely be taken much more seriously by the powers-that-be than by the researchers themselves. All the qualifiers inserted by researchers are usually removed by journalists and administrators.

Sixth, the consequences on the profession if negative results are taken seriously by persons in positions of power will be dire.

Under the circumstances, it seems to me that research that is designed to reveal negative facts about the way things are taught had better be airtight before being publicized. The problem is that there is no such research. This doesn’t mean that there is no answer to problems of ineffective teaching. But that is an issue for another day.

My Reply:

On the issue of motives: Of course it is fun to have striking research! Given my general skepticism about self-knowledge, including of motives, I won't attempt self-diagnosis. However, I will say that except for recent studies that are not yet complete, I have published every empirical study I've done on this topic, with no file-drawered results. I am not selecting only the striking material for publication. Also, in my recent pedagogy research I am collaborating with other researchers who very much hope for positive results.

On the likelihood of being mistaken: I acknowledge that any one study is likely to be mistaken. However, my results are pretty consistent across a wide variety of methods and behavior types, including some issues specifically chosen with the thought that they might show ethicists in a good light (the charity and vegetarianism measures in Schwitzgebel and Rust 2014). I think this adds to credibility, though it would be better if other researchers with different methods and theoretical perspectives attempted to confirm or disconfirm our findings. There is currently one replication attempt ongoing among German-language philosophers, so we will see how that plays out!

On whether the powers-that-be will take the conclusions more seriously than the researchers: I interpret Rickless here as meaning that they will tend to remove the caveats and go for the sexy headline. I do think that is possible. One potentially alarming fact from this point of view is that my most-cited and seemingly best-known study is the only study in which I found ethicists seeming to behave worse than the comparison groups: the study of missing library books. However, it was also my first published study on the topic, so I don't know to what extent the extra attention reflects a primacy effect.

On possibly dire consequences: The most likely path for dire consequences seems to me to be this: Part of the administrative justification for requiring ethics classes might be the implicit expectation that university-level ethics instruction positively influences moral behavior. If this expectation is removed, so too is part of the administrative justification for ethics instruction.

Rickless's conclusion appears to be that no empirical research on this topic, with negative or null results, should be published unless it is "airtight", and that it is practically impossible for such research to be airtight. From this I infer that Rickless thinks either that (a.) only positive results should be published, while negative or null results remain unpublished because inevitably not airtight, or that (b.) no studies of this sort should be published at all, whether positive, negative, or null.

Rickless's argument has merit, and I see the path to this conclusion. Certainly there is a risk to the discipline in publishing negative or null results, and one ought to be careful.

However, both (a) and (b) seem to be bad policy.

On (a): To think that only positive results should be published (or, more moderately, that we should have a much higher bar for negative or null results than for positive ones) runs contrary to the standards of open science that have recently received so much attention in the social psychology replication crisis. In the long run it is probably contrary to the interests of science, philosophy, and society as a whole for us to pursue a policy that will create an illusory preponderance of positive results.

That said, there is a much more moderate strand of (a) that I could endorse: Being cautious and sober about one's research, rather than yielding to the temptation to inflate dubious, sexy results for the sake of publicity. I hope that in my own work I generally meet this standard, and I would recommend that same standard for both positive and negative or null research.

On (b): It seems at least as undesirable to discourage all empirical research on these topics. Don't we want to know the relationship between philosophical moral reflection and real-world moral behavior? Even if you think that studying the behavior of professional ethicists in particular is unilluminating, surely studying the effects of philosophical pedagogy is worthwhile. We should want to know what sorts of effects our courses have on the students who take them and under what conditions -- especially if part of the administrative justification for requiring ethics courses is the assumption that they do have a practical effect. To reject the whole enterprise of empirically researching the effects of studying philosophy because there's a risk that some studies will show that studying philosophy has little practical impact on real-world choices -- that seems radically antiscientific.

Rickless raises legitimate worries. I think the best practical response is more research, by more research groups, with open sharing of results, and open discussions of the issue by people working from a wide variety of perspectives. In the long run, I hope that some of my null results can lay the groundwork for a fuller understanding of the moral psychology of philosophy. Understanding the range of conditions under which philosophical moral reflection does and does not have practical effects on real-world behavior should ultimately empower rather than disempower philosophy as a discipline.
