Wednesday, June 20, 2018

The Perceived Importance of Kant, as Measured by Advertisements for Specialists in His Work

I'm revising a couple of my old posts on Kant for my next book, and I wanted some quantitative data on the importance of Kant in Anglophone philosophy departments.

There's a Leiter poll, where Kant ranks as the third "most important" philosopher of all time after Plato and Aristotle. That's pretty high! But a couple of measures suggest he might be even more important than number three. In terms of appearances in philosophy abstracts, he might be number one. Kant* appears 4370 times since 2010 in Philosopher's Index abstracts, compared to 2756 for Plato*, 3349 for Aristot*, 1096 for Hume*, 1545 for Nietzsch*, and 1110 for Marx*. I've tried a bunch of names and found no one higher.

But maybe the most striking measure of a philosopher's perceived importance is when philosophy departments advertise for specialists specifically in that person's work. By this measure, Kant is the winner, hands down. Not even close!

Here's what I did: I searched PhilJobs -- currently the main resource for philosophy jobs in the Anglophone world -- for permanent or tenure-track positions posted from June 1, 2015 to June 18, 2018. "Kant*" yields 30 ads (of 910 in the database), among which 17 contained "Kant" or "Kantian" in the line for "Area of Specialization". One said "excluding Kant", so let's toss that one out, leaving 29 and 16. Four were specifically asking for "post-Kantian" philosophy (which presumably excludes Kant, though it's a testament to his influence that a historical period is referred to in this way), but most were advertising either for a Kant specialist (e.g., UNC Chapel Hill searched in AOS "Kant's theoretical philosophy") or for Kant among other things (e.g., Notre Dame "Kant and/or early modern"). Where "Kant" was not in the AOS line, his name was either in the Area of Competence line or somewhere in the body of the ad [note 1].

In sum, the method above yields:
Kant: 29 total PhilJobs hits, 16 in AOS (12 if you exclude "post-Kantian").

Here are some others:

Plato*: 3, 0.
Aristot*: 2, 0.
Hume*: 1, 0.
Confuc*: 1, 0.
Aquin*: 3, 1 (all Catholic universities).
Nietzsch*: 0, 0.
Marx*: 5, 1 (4 of the 5 at Chinese universities).
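For readers who want to double-check the arithmetic behind the Kant figures, here's a minimal sketch in Python. The raw counts are hard-coded from the searches described above; the dictionary structure and variable names are just my own illustration, not anything from PhilJobs itself.

```python
# Hypothetical tally of the PhilJobs search results described above.
# Each entry maps a search term to (total ads, ads with the name in the AOS line).
raw_hits = {
    "Kant*":     (30, 17),
    "Plato*":    (3, 0),
    "Aristot*":  (2, 0),
    "Hume*":     (1, 0),
    "Confuc*":   (1, 0),
    "Aquin*":    (3, 1),
    "Nietzsch*": (0, 0),
    "Marx*":     (5, 1),
}

kant_total, kant_aos = raw_hits["Kant*"]
kant_total -= 1                  # toss the one ad that said "excluding Kant"
kant_aos -= 1
kant_aos_strict = kant_aos - 4   # optionally also drop the four "post-Kantian" ads

print(kant_total, kant_aos, kant_aos_strict)  # 29 16 12
```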

As I said, hands down. Kant runs away with the title, Plato and Confucius shading their eyes in awe as they watch him zoom toward the horizon.

Note 1: If "Kant" was in the body of the ad, it was sometimes because the university was mentioning their department's strength in Kant rather than searching for someone in Kant, but for my purposes if a department is self-describing its strengths in that way, that's also a good signal of Kant's perceived importance, so I haven't excluded those cases.

[image source]

Thursday, June 14, 2018

Slippery Slope Arguments and Discretely Countable Subjects of Experience

I've become increasingly worried about slippery slope arguments concerning the presence or absence of (phenomenal) consciousness. Partly this is in response to Peter Carruthers' new draft article on animal consciousness, partly it's because I'm revisiting some of my thought experiments about group minds, and partly it's just something I've been worrying about for a while.

To build a slippery slope argument concerning the presence of consciousness, do this:

* First, take some obviously conscious [or non-conscious] system as an anchor point -- such as an ordinary adult human being (clearly conscious) or an ordinary proton (obviously(?) non-conscious).

* Second, imagine a series of small changes at the far end of which is a case that some people might view as a case of the opposite sort. For example, subtract one molecule at a time from the human until you have only one proton left. (Note: This is a toy example; for more attractive versions of the argument, see below.)

* Third, highlight the implausibility of the idea that consciousness suddenly winks out [winks in] at any one of these little steps.

* Finally, conclude that the disputable system at the end of the series is also conscious [non-conscious].

Now, slippery slope arguments are generally misleading for vague predicates like "red". Even if we can't pinpoint an exact point of transition from red to non-red in a series of shades from red to blue, it doesn't follow that blue is red. Red is a vague predicate, so it ought to admit of vague, in-betweenish cases. (There are some fun logical puzzles about vague predicates, of course, but I trust that our community of capable logicians will eventually sort that stuff out.)

However, unlike redness, the presence or absence of consciousness seems to be a discrete all-or-nothing affair, which makes slippery-slope arguments more tempting. As John Searle says somewhere (hm... where?), having consciousness is like having money: You can have a little of it or a lot of it -- a penny or a million bucks -- but there's a discrete difference between having only a little and having not a single cent's worth. Consider sensory experience, for example. You can have a richly detailed visual field, or you can have an impoverished visual field, but there is, or at least seems to be, a discrete difference between having a tiny wisp of sensory experience (e.g., a brief gray dot, the sensory equivalent of a penny) and having no sensory experience at all. We normally think of subjects of experience as discrete, countable entities. Except as a joke, most of us wouldn't say that there are two-and-a-half conscious entities in the room or that an entity has 3/8 of a stream of experience. An entity either is a subject of conscious experience (however limited their experience is) or has no conscious experience at all.

Consider these three familiar slippery slopes.

(1.) Across the animal kingdom. We normally assume that humans, dogs, and apes are genuinely, richly phenomenally conscious. We can imagine a series of less and less sophisticated animals all the way down to the simplest animals or even down into unicellular life. It doesn't seem that there's a plausible place to draw a bright line, on one side of which the animals are conscious and on the other side of which they are not. (I did once hear an ethologist suggest that the line was exactly between toads (conscious) and frogs (non-conscious); but even if you accept that, we can construct a fine-grained toad-frog series.)

(2.) Across human development. The fertilized egg is presumably not conscious; the cute baby presumably is conscious. The moment of birth is important -- but it's not clear that it's so neurologically important that it is the bright line between an entirely non-conscious fetus and a conscious baby. Nor does there seem to be any other obvious sharp transition point.

(3.) Neural replacement. Tom Cuda and David Chalmers imagine replacing someone's biological neurons one by one with functionally equivalent artificial neurons. A sudden wink-out between N and N+1 replaced neurons doesn't seem intuitively plausible. (Nor does it seem intuitively plausible that there's a gradual fading away of consciousness while outward behavior, such as verbal reports, stays the same.) Cuda and Chalmers conclude that swapping out biological neurons for functionally similar artificial neurons would preserve consciousness.

Less familiar, but potentially just as troubling, are group consciousness cases. I've argued, for example, that Giulio Tononi's influential Integrated Information Theory of consciousness runs into trouble in employing a threshold across a slippery slope (e.g. here and Section 2 here). Here the slippery slope isn't between zero and one conscious subjects, but rather between one and N subjects (N > 1).

(4.) Group consciousness. At one end, anchor with N discretely distinct conscious entities and presumably no additional stream of consciousness at the group level. At the other end, anchor with a single conscious entity with parts none of which, presumably, is an individual subject of experience. Any particular way of making this more concrete will have some tricky assumptions, but we might suppose an Ann Leckie "ancillary" case with a hundred humanoid AIs in contact with a central computer on a ship. As the "distinct entities" anchor, imagine that the AIs are as independent as ordinary human beings are, and the central computer is just a communications relay. Intermediate steps involve more and more information transfer and central influence or control. The anchor case on the other end is one in which the humanoid AIs are just individually non-conscious limbs of a single fully integrated system (though spatially discontinuous). Alternatively, if you like your thought experiments brainy, anchor on one end with normally brained humans, then construct a series in which these brains are slowly neurally wired together and perhaps shrunk, until there's a single integrated brain again as the anchor on the other end.

Although the group consciousness cases are pretty high-flying as thought experiments, they render the countability issue wonderfully stark. If streams of consciousness really are countably discrete, then you must do one of the following:

(a.) Deny one of the anchors. There was group consciousness all along, perhaps!

(b.) Affirm that there's a sharp transition point at which adding just a single bit's worth of integration suddenly shifts the whole system from N distinct conscious entities to only one conscious entity, despite the seemingly very minor structural difference (as on Tononi's view).

(c.) Try to wiggle out of the sharp transition with some intermediate number between N and 1. Maybe this humanoid winks out first while this other virtually identical humanoid still has a stream of consciousness -- though that's also rather strange and doesn't fully escape the problem.

(d.) Deny that conscious subjects, or streams of conscious experience, really must come in discretely countable packages.

I'm increasingly drawn to (d), though I'm not sure I can quite wrap my head around that possibility yet or fully appreciate its consequences.
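To make the strangeness of option (b) vivid, here's a toy numerical sketch. The "integration" scale, the threshold value, and N are all made-up illustrative numbers of my own, not anything from Tononi's actual theory; the point is only that a hard threshold flips the subject count from N straight to 1 at a single step.

```python
# Toy sketch of option (b): a hard threshold on "integration" (made-up units)
# collapses the count of conscious subjects from N straight to 1 at one step.

N = 100            # humanoid AIs at the "distinct entities" anchor
THRESHOLD = 50     # hypothetical integration level at which they merge

def subject_count(integration: int) -> int:
    """Number of discrete conscious subjects, on a threshold view."""
    return 1 if integration >= THRESHOLD else N

# One increment of integration, and 100 subjects become 1:
print(subject_count(THRESHOLD - 1))  # 100
print(subject_count(THRESHOLD))      # 1
```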

[image adapted from Pixabay]

Wednesday, June 06, 2018

Research Funding: The Pretty-Proposal Approach vs the Recent-Past-Results Approach

Say you have some money and you want to fund some research. You're an institution of some sort: NSF, Templeton, MacArthur, a university's Committee on Research. How do you decide who gets your money?

Here are two broad approaches:

The Pretty Proposal Approach. Send out a call for applications. Give the money to the researchers who make the best case that they have an awesome research plan.

The Recent-Past-Results Approach. Figure out who in the field has recently been doing the best research of the sort you want to fund. Give them money for more such research.

[ETA for clarity, 09:46] The ideal form of the Recent-Past-Results Approach is one in which the researcher does not even have to write a proposal!

Of course both models have advantages and disadvantages. But on the whole, I'd suggest, too much funding is distributed based on the pretty-proposal model and too little based on the recent-past-results model.

I see three main advantages to the Pretty Proposal Approach:

First, and very importantly in my mind, the PPA is egalitarian. It doesn't matter what you've done in the past. If you have a great proposal, you deserve funding!

Second, two researchers with equally good track records might have differently promising future plans, and this approach (if it goes well) will reward the researcher with the more promising plans.

Third, the institution can more precisely control exactly what research projects are funded (possibly an advantage from the perspective of the institution).

But the Pretty Proposal Approach has some big downsides compared to the Recent-Past-Results Approach:

First, in my experience, researchers spend a huge amount of time writing pretty proposals, and the amount of time has been increasing sharply. This is time they don't spend on the research itself. In the aggregate, this is a huge loss to academic research productivity (e.g., see here and here). The Recent-Past-Results Approach, in contrast, needn't involve any active asking by the researcher (if the granting agency does the work of finding promising recipients), or might require only the submission of a CV and recent publications. This would allow academics to deploy more of their skills and time on the research itself, rather than on constructing beautiful requests for money.

Second, past research performance probably better predicts future research performance than do promises of future research performance. I am unaware of data specifically on this question, but in general I find it better policy to anticipate what people will do based on what they've done in the past than based on the handsome promises they make when asking for money. If this is correct, then better research is likely to be funded on a Recent-Past-Results approach. (Caveat: Most grant proposals already require some evidence of your expertise and past work, which can help mitigate this disadvantage.)

Third, the best researchers are often opportunistic and move fast. They will do better research if they can pursue emerging opportunities and inspirations than if they are tied to a proposal written a year or more before.

In my view, the downsides of the dominant Pretty Proposal Approach are sufficiently large that we should shift a substantial proportion (not all) of our research funding toward the Recent-Past-Results Approach.

What about the three advantages of the Pretty Proposal Approach?

The third advantage of the PPA -- increased institutional power -- is not clearly an all-things-considered advantage. Researchers who have recently done good work in the eyes of grant evaluators might be better at deciding the specific best uses of future research resources than are those grant evaluators themselves. Institutions understandably want some control; but they can exert this control by conditional granting: "We offer you this money to spend on research on Topic X (meeting further Criteria Y and Z), if you wish to do more such research."

The second advantage of the PPA -- more funding for similar researchers with differently promising plans -- can be partly accommodated by retaining the Pretty Proposal Approach as a substantial component of research funding. I certainly wouldn't want to see all funding to be based on Recent Past Results!

The first advantage of the PPA -- egalitarianism -- is the most concerning to me. I don't think we want to see elite professors and friends of the granting committees getting ever more of the grant money in a self-reinforcing cycle. A Recent-Past-Results Approach should implement stringent measures to reduce the risk of this outcome. Here are a few possibilities:

Prioritize researchers with less institutional support. If two researchers have similarly excellent past results but one has achieved those results with less institutional support -- a higher teaching load, less previous grant funding -- then prioritize the one with less support. Especially prioritize funding research by people with decent track records and very little institutional support, perhaps even over those with very good track records and loads of institutional support. This helps level the playing field, and it also might produce better results overall, since those with the least existing institutional support might be the ones who would most benefit from an increase in support.

Low-threshold equal funding. Create some low bar, then fund everyone at the same small level once they cross that bar. This might be good practice for universities funding small grants for faculty conference travel, for example (compared to faculty having to write detailed justifications for conference travel).

Term limits. Require a five-year hiatus, for example, after five years of funding so that other researchers have a chance at showing what they can do when they receive funding.

[ETA 10:37] In favoring more emphasis on the Recent-Past-Results Approach, I am not suggesting that everyone write Pretty Proposals with CVs attached and then the funding is decided mostly based on the CV. That would combine the time disadvantage of writing Pretty Proposals with the inegalitarian disadvantage of the Recent-Past-Results Approach, and it would add misdirection, since people would be invited to think that writing a good proposal is important. (Any resemblance of the real grants process to this dystopian worst-of-all-worlds approach is purely coincidental.) I am proposing either no submission at all by the grant recipient (models include MacArthur "genius" grants and automatic faculty start-up funds) or a very minimal description of topic, with no discussion of methods, impact, previous literature, etc.

[Still another ETA, 11:03] I hadn't considered random funding! See here and here (HT Daniel Brunson). An intriguing idea, perhaps in combination with a low threshold of some sort.

Related Posts:

How to Give $1 Million a Year to Philosophers (Mar 18, 2013).

Against Increasing the Power of Grant Agencies in Philosophy (Dec 23, 2011).

Friday, June 01, 2018

Does It Harm Philosophy as a Discipline to Discuss the Apparently Meager Practical Effects of Studying Ethics?

I've done a lot of empirical work on the apparently meager practical effects of studying philosophical ethics. Although most philosophers seem to view my work either neutrally or positively, or have concerns about the empirical details of this or that study, others react quite negatively to the whole project, more or less in principle.

About a month ago on Facebook, Samuel Rickless did such a nice job articulating some general concerns (see his comment on this public post) that I thought I'd quote his comments here and share some of my reactions.

First, My Research:

* In a series of studies published from 2009 to 2014, mostly in collaboration with Joshua Rust (and summarized here), I've empirically explored the moral behavior of ethics professors. As far as I know, no one else had ever systematically examined this question. Across 17 measures of (arguably) moral behavior, ranging from rates of charitable donation to staying in contact with one's mother to vegetarianism to littering to responding to student emails to peer ratings of overall moral behavior, I have found not a single main measure on which ethicists appeared to act morally better than comparison groups of other professors; nor do they appear to behave better overall when the data are merged meta-analytically. (Caveat: on some secondary measures we found ethicists to behave better. However, on other measures we found them to behave worse, with no clearly interpretable overall pattern.)

* In a pair of studies with Fiery Cushman, published in 2012 and 2015, I've found that philosophers, including professional ethicists, seem to be no less susceptible than non-philosophers to apparently irrational order effects and framing effects in their evaluation of moral dilemmas.

* More recently, I've turned my attention to philosophical pedagogy. In an unpublished critical review from 2013, I found little good empirical evidence that business ethics or medical ethics instruction has any practical effect on student behavior. I have been following up with some empirical research of my own with several different collaborators. None of it is complete yet, but preliminary results tend to confirm the lack of practical effect, except perhaps when there's the right kind of narrative or emotional engagement. On grounds of armchair plausibility, I tend to favor multi-causal, canceling explanations over the view that philosophical reflection is simply inert (contra Jon Haidt); thus I'm inclined to explore how backfire effects might on average tend to cancel positive effects. It was a post on the possible backfire effects of teaching ethics that prompted Rickless's comment.

Rickless's Objection:
(shared with permission, adding lineation and emphasis for clarity)

Rickless: And I’ll be honest, Eric, all this stuff about how unethical ethicists are, and how counterproductive their courses might be, really bothers me. It’s not that I think that ethics courses can’t be improved or that all ethicists are wonderful people. But please understand that the takeaway from this kind of research and speculation, as it will likely be processed by journalists and others who may well pick up and run with it, will be that philosophers are shits whose courses turn their students into shits. And this may lead to the defunding of philosophy, the removal of ethics courses from business school, and, to my mind, a host of other consequences that are almost certainly far worse than the ills that you are looking to prevent.

Schwitzgebel: Samuel, I understand that concern. You might be right about the effects. However, I also think that if it is correct that ethics classes as standardly taught have little of the positive effect that some administrators and students hope for from them, we as a society should know that. It should be explored in a rigorous way. On the possibly bright side, a new dimension of my research is starting to examine conditions under which teaching does have a positive measurable effect on real-world behavior. I am hopeful that understanding that better will lead us to teach better.

Rickless: In theory, what you say about knowing that courses have little or no positive effect makes sense. But in practice, I have the following concerns.

First, no set of studies could possibly measure all the positive and negative effects of teaching ethics this way or that way. You just can’t control all the potentially relevant variables, in part because you don’t know what all the potentially relevant variables are, in part because you can’t fix all the parameters with only one parameter allowed to vary.

Second, you need to be thinking very seriously about whether your own motives (particularly motives related to bursting bubbles and countering conventional wisdom) are playing a role in your research, because those motives can have unseen effects on the way that research is conducted, as well as the conclusions drawn from it. I am not imputing bad motives to you. Far from it, and quite the opposite. But I think that all researchers, myself included, want their research to be striking and interesting, sometimes surprising.

Third, the tendency of researchers is to draw conclusions that go beyond the actual evidence.

Fourth, the combination of all these factors leads to conclusions that have a significant likelihood of being mistaken.

Fifth, those conclusions will likely be taken much more seriously by the powers-that-be than by the researchers themselves. All the qualifiers inserted by researchers are usually removed by journalists and administrators.

Sixth, the consequences on the profession if negative results are taken seriously by persons in positions of power will be dire.

Under the circumstances, it seems to me that research that is designed to reveal negative facts about the way things are taught had better be airtight before being publicized. The problem is that there is no such research. This doesn’t mean that there is no answer to problems of ineffective teaching. But that is an issue for another day.

My Reply:

On the issue of motives: Of course it is fun to have striking research! Given my general skepticism about self-knowledge, including of motives, I won't attempt self-diagnosis. However, I will say that except for recent studies that are not yet complete, I have published every empirical study I've done on this topic, with no file-drawered results. I am not selecting only the striking material for publication. Also, in my recent pedagogy research I am collaborating with other researchers who very much hope for positive results.

On the likelihood of being mistaken: I acknowledge that any one study is likely to be mistaken. However, my results are pretty consistent across a wide variety of methods and behavior types, including some issues specifically chosen with the thought that they might show ethicists in a good light (the charity and vegetarianism measures in Schwitzgebel and Rust 2014). I think this adds to credibility, though it would be better if other researchers with different methods and theoretical perspectives attempted to confirm or disconfirm our findings. There is currently one replication attempt ongoing among German-language philosophers, so we will see how that plays out!

On whether the powers-that-be will take the conclusions more seriously than the researchers: I interpret Rickless here as meaning that they will tend to remove the caveats and go for the sexy headline. I do think that is possible. One potentially alarming fact from this point of view is that my most-cited and seemingly best-known study is the only study where I found ethicists seeming to behave worse than the comparison groups: the study of missing library books. However, it was also my first published study on the topic, so I don't know to what extent the extra attention is a primacy effect.

On possibly dire consequences: The most likely path for dire consequences seems to me to be this: Part of the administrative justification for requiring ethics classes might be the implicit expectation that university-level ethics instruction positively influences moral behavior. If this expectation is removed, so too is part of the administrative justification for ethics instruction.

Rickless's conclusion appears to be that no empirical research on this topic, with negative or null results, should be published unless it is "airtight", and that it is practically impossible for such research to be airtight. From this I infer that Rickless thinks either that (a.) only positive results should be published, while negative or null results remain unpublished because inevitably not airtight, or that (b.) no studies of this sort should be published at all, whether positive, negative, or null.

Rickless's argument has merit, and I see the path to this conclusion. Certainly there is a risk to the discipline in publishing negative or null results, and one ought to be careful.

However, both (a) and (b) seem to be bad policy.

On (a): To think that only positive results should be published (or more moderately that we should have a much higher bar for negative or null results than for positive ones) runs contrary to the standards of open science that have recently received so much attention in the social psychology replication crisis. In the long run it is probably contrary to the interests of science, philosophy, and society as a whole for us to pursue a policy that will create an illusory disproportion of positive research.

That said, there is a much more moderate strand of (a) that I could endorse: Being cautious and sober about one's research, rather than yielding to the temptation to inflate dubious, sexy results for the sake of publicity. I hope that in my own work I generally meet this standard, and I would recommend that same standard for both positive and negative or null research.

On (b): It seems at least as undesirable to discourage all empirical research on these topics. Don't we want to know the relationship between philosophical moral reflection and real-world moral behavior? Even if you think that studying the behavior of professional ethicists in particular is unilluminating, surely studying the effects of philosophical pedagogy is worthwhile. We should want to know what sorts of effects our courses have on the students who take them and under what conditions -- especially if part of the administrative justification for requiring ethics courses is the assumption that they do have a practical effect. To reject the whole enterprise of empirically researching the effects of studying philosophy because there's a risk that some studies will show that studying philosophy has little practical impact on real-world choices -- that seems radically antiscientific.

Rickless raises legitimate worries. I think the best practical response is more research, by more research groups, with open sharing of results, and open discussions of the issue by people working from a wide variety of perspectives. In the long run, I hope that some of my null results can lay the groundwork for a fuller understanding of the moral psychology of philosophy. Understanding the range of conditions under which philosophical moral reflection does and does not have practical effects on real-world behavior should ultimately empower rather than disempower philosophy as a discipline.

[image source]

Thursday, May 24, 2018

An Argument Against Every General Theory of Consciousness

As a philosophical expert on theories of consciousness, I try to keep abreast of the most promising recent theories. I also sometimes receive unsolicited emails from scholars who have developed a theory that they believe deserves attention. It's fun to see the latest cleverness, and it's my job to do so, but I always know in advance that I won't be convinced.

I'd like to hope that it's not just that I'm a dogmatic skeptic about general theories of consciousness. In "The Crazyist Metaphysics of Mind", I argue that our epistemic tools for evaluating general theories of consciousness are, for the foreseeable future, too flimsy for the task, since all evaluations of such theories must be grounded in some combination of dubious (typically question-begging) scientific theory, dubious commonsense judgment (shaped by our limited social and evolutionary history), and broad criteria of general theoretical virtue like simplicity or elegance (typically indecisive among theories that are live competitors).

Today, let me try another angle. Ultimately, it's a version of my question-beggingness complaint, but more specific.

Premise 1: There is no currently available decisive argument against panpsychism, the view that everything is conscious, even very simple things, like solitary hydrogen ions in deep space. Panpsychism is, of course, bizarrely contrary to common sense, but (as I also argue in The Crazyist Metaphysics of Mind) all well-developed general theories of consciousness will have some features that are bizarrely contrary to common sense, so although violation of common sense is a cost that creates an explanatory burden, it is not an insurmountable theory-defeater. Among prominent researchers who defend panpsychism or at least treat seriously a view in the neighborhood of panpsychism are Giulio Tononi, David Chalmers, Galen Strawson, and Philip Goff.

There are at least three reasons to take panpsychism seriously. (1.) If, as some have argued, consciousness is a fundamental feature of the world, or a property not reducible to other properties, it would be unsurprising if such a feature were approximately as widespread as other fundamental features such as mass and charge. (2.) Considering the complexity of our experience (e.g., our visual experience) and the plausibly similar complexity of the experience of other organisms with sophisticated sensory systems, one might find oneself on a slippery slope toward thinking that the least complex experience would be possessed by very simple entities indeed (see Chalmers 1996, pp. 293-297, for a nice exposition of this argument). (3.) Despite my qualms about Integrated Information Theory, there's an attractive theoretical elegance to the idea that consciousness arises from the integration of information, and thus that very simple systems that integrate just a tiny bit of information will correspondingly have just a tiny bit of consciousness.

Premise 2: There is no currently available decisive argument against theories of consciousness that require sophisticated self-representation of the sort that is likely to be absent from entities that lack theories of mind. On extreme versions of this view, even dogs and infants might not have conscious experience. (Again, highly contrary to common sense, but!) Among prominent researchers who have taken such a view seriously are Daniel Dennett and Peter Carruthers (though recently Carruthers has suggested that there might be no fact of the matter about whether non-human animals are phenomenally conscious).

There are at least three reasons to take seriously such a restrictive view of consciousness: (1.) If one wants to exit the slippery slope to panpsychism, one possibly attractive place to do so is at the gap between creatures who are capable of explicitly representing their own mental states and those that cannot do so. (2.) Consciousness, as was noted by Franz Brentano (and recently emphasized by David Rosenthal, Uriah Kriegel, and others), might plausibly always involve some sort of self-awareness of the fact that one is conscious -- apparently a moderately sophisticated self-representational capacity of some sort. (3.) There's a theoretical elegance to self-representational theories of consciousness. If consciousness doesn't just always arise when information is integrated in a system, an attractive explanation of what else is needed is some sort of sophisticated ability of a system to represent its own representational states.

Now you might understandably think that either panpsychism or a human-only view of consciousness is so extreme that we can be epistemically justified in confidently rejecting one or the other. If so, we can run the argument with weaker versions of Premise 1 and/or Premise 2:

Premise 1a (weaker): There is no currently available decisive argument against theories of consciousness that treat consciousness as very widespread, including perhaps in organisms with fairly small and simple brains, or in some near-future AI systems.

Premise 2a (weaker): There is no currently available decisive argument against theories of consciousness that treat consciousness as narrowly restricted to a class of fairly sophisticated entities, perhaps only mammals and birds and similar organisms capable of complex, flexible learning, and no AI systems in the foreseeable future.

Premise 3: All general theories of consciousness commit to the falsity of either Premise 1, Premise 2, or both (alternatively Premise 1a, Premise 2a, or both). If they do not so commit, then they aren't general theories of consciousness, though they may of course be perfectly fine narrow theories of consciousness, e.g., theories of consciousness as it happens to arise in human beings. (I've got a parallel argument against general theories of consciousness even as they apply just to human beings, based on considerations from Schwitzgebel 2011, ch. 6, but not today.)

Therefore, all general theories of consciousness commit to the falsity of some view against which there is no currently available decisive argument. They thereby commit beyond the evidence. They must either assume, or accept on only indecisive evidence, either the falsity of panpsychism, or the falsity of sophisticated self-representational views of consciousness, or both. In other words, they inevitably beg the question against, or at best indecisively argue against, some views we cannot yet justifiably reject.

Still, go ahead and build your theory of consciousness. You might even succeed in building the true theory of consciousness, if it isn't yet out there! Science and philosophy need bold theoretical adventurers. But if a skeptic on the sidelines remains unconvinced, thinking that you have not convincingly dispatched some possible alternative approaches, the skeptic will probably be right.

ETA: In order to constitute an argument against a candidate theory, as opposed to merely an objection to such theories, perhaps I need to put some weight on the positive arguments in favor of views of consciousness that conflict with the theory being defended. Thanks to David Chalmers and Francois Kammerer on Facebook for pushing me on this point.

[image source]

Tuesday, May 15, 2018

"What Is It Like" Interview Now Freely Available

... here.

It's a fairly long read (about 8000 words), but I gave a lot of thought to Cliff's questions, and I hope the result is both interesting and revealing.

Read it, and learn about:

* my unconventional parents, including how we celebrated Christmas and my part-time work as a chambermaid;

* the peculiar story of how I once found my lost wallet;

* sneaking into György Gergely's cognitive development class at Berkeley, and how I think "Stanford school" philosophy of science should inform philosophy of mind;

* what four-year-olds and philosophers have in common;

* why blogging is the ideal form for philosophy;

* and much more!

Also please consider supporting Cliff Sosis's "What Is It Like to Be a Philosopher?" interviews by funding him on PayPal or Patreon.

Friday, May 11, 2018

Is C-3PO Alive?

by Will Swanson and Eric Schwitzgebel

Droids—especially R2-D2, C-3PO, and BB-8—propel the plot of the Star Wars movies. A chance encounter between R2-D2 and Luke Skywalker in “Episode IV: A New Hope” starts Luke on his fateful path to joining the rebel forces, becoming a Jedi, and meeting his father. More recently, BB-8 plays a similar role for Rey in “Episode VII: The Force Awakens”. But the droids are more than convenient plot devices. They are full-blooded characters, with their own quirks, goals, preferences, and vulnerabilities. The droids face the same existential threats as anyone else; and most of us still squirm in our theater seats on their behalf when danger looms.

Our response to the Star Wars droids relies on the tacit assumption that they are living lives -- lives that can be improved or worsened, sustained or lost. This raises the question: Is C-3PO alive? Or more precisely, if we someday built a robot like C-3PO, would it be alive?

In a way, it seems obvious that, yes, C-3PO is alive. Vaporizing him would be murder! One could have a funeral over his remains, reminiscing about all the good things he did in his lifetime.

But of course, the experts on life are biologists. And if you look at standard biology textbook descriptions of the characteristics of life (e.g., here), it looks like robots wouldn’t qualify. They don’t grow or reproduce, or share common descent with other living organisms. They don’t contain organic molecules like nucleic acid. They don’t have biological cells. They don’t seem to have arisen from a Darwinian evolutionary process. Few people (and probably fewer professional biologists) would say that a Roomba vacuum cleaner is alive, except in some kind of metaphorical sense; and in these respects, C-3PO is similar, despite being more complicated – just as we are similar to but more complicated than microscopic worms. The science covering C-3PO is not biology, but robotics.

Despite what looks like bad news for C-3PO from biology textbook definitions of life, on closer consideration we should reject the biology-textbook-list approach to robot cases. Our attitude toward these lists should probably be closer to the capacious attitude typical of astrobiologists (e.g., Benner 2010). If we’re considering what “life” is, really, in the broad, philosophical way that we do when considering the possibility of alien life, the standard lists start to look very Earth-bound and chauvinistic.

  • Common descent. Unless we wish to exclude the possibility of life originating independently on other planets, we should not treat common descent as a necessary condition for life.
  • Organic molecules. If we allow for life to arise independently on other planets, we should also be wary of expecting the resultant life to closely resemble biological life on Earth. We should not require the presence of organic molecules like nucleic acids.
  • Reproduction. While it is true that biological living things tend to reproduce when given the opportunity, reproduction is far from necessary for life. Consider the mule, the sterile worker ant, and the deliberately childless human. Nor should we require that life forms originate from reproductive processes. If life began without reproduction once, it can begin again, perhaps many times over!
  • Participation in Darwinian processes. Explanations invoking evolution by natural selection have revealed many of nature’s secrets. Nevertheless, evolution is not a locally defined property of living individuals. It refers to the processes that shape individuals over generations. It’s unclear why belonging to a group that has undergone Darwinian selection in the past should matter to whether an individual, considered now, is alive.
  • Growth. Depending on the sense of growth in question, robots may or may not grow. If growth means nothing more than change over time in accordance with an internal protocol, then at least some robots, learning ones, are able to grow. If growth means simply getting radically bigger (or developing from a small seed or embryo into a large adult), then requiring growth risks excluding or marginalizing many organisms that are uncontroversially alive, such as bacteria that reproduce by fission.
  • Other list members -- self-maintenance (if a robot can charge its own battery...), having heterogeneous organized parts, and responding to stimuli -- pose no challenge to the idea that robots are alive.

    What about metabolism? Perhaps this is an essential feature of life. Do robots do this?

    Tweaking a suggestion from Peter Godfrey-Smith (2013), a first pass on a definition for metabolism is the cooperation of diverse parts within an organism (implicitly, a thing that meets other criteria for life) to use energy and other resources to maintain the structure of the organism. If “maintaining structure” amounts to maintaining operational readiness, then this definition provides no reason to deny metabolism to robots, especially robots that do things like auto-update, repel virus programs, and draw from external energy resources as needed. If “maintaining structure” refers specifically to the upkeep required to keep a physical body from degrading, then most simple robots would be excluded, but C-3PO would still qualify, if he can polish his head and order a replacement arm.

    Even so, this second approach might define metabolism too narrowly. Defining metabolism in terms of maintenance in a narrow sense, after all, cannot accommodate the other ends to which organisms put energy in coordinated, non-accidental ways. Consider growth and development. The caterpillar’s metamorphosis hardly counts as maintenance of structure in any straightforward sense, yet we should count the energy transformations needed to effect that change as part of the caterpillar’s metabolism. More strikingly, living things often use energy to undermine their structure: think of cells undergoing programmed cell death or humans committing suicide.

    We can accommodate these cases by broadening the definition of metabolism to encompass any coordinated use of energy within a living thing to achieve the ends of that living thing. On this broader definition, there is no reason to deny metabolism to robots.

    With all of this in mind, we think it’s not unreasonable to stick with our gut intuition that C-3PO is alive. What is essential to being a living thing is not so much one’s biological history or composition by organic molecules, but rather the use of internal or environmental resources to accomplish the functional aims of the system.

    How sophisticated does a system need to be to qualify as living, by these standards? Should we maybe say that even a Roomba is alive, after all? In a series of entertaining experiments, Kate Darling has shown that ordinary people are often quite reluctant to smash up cute robots. Despite Darling’s own expressed view that such robots aren’t alive, maybe part of what is holding people back is some of the same thing that holds some of us back from wanting to crush spiders -- a kind of emotional reverence for forms of life.

    A darling robot:

    Philosophers have grown used to functionalism about mind -- that is, they seem generally willing to accept or at least take seriously the possibility that consciousness might be realized in non-biological substrates. Nevertheless, functionalism about life is less readily accepted. Perhaps philosophical reflection about the possibility of robotic life can help us recognize that our concern over the lives and deaths of our favorite robot characters may be perfectly justified.

    [C-3PO image source] [Pleo image source]

    Wednesday, May 09, 2018

    What Is It Like Interview

    Cliff Sosis has interviewed me for his wonderful What Is It Like to Be a Philosopher series. For the first week, it is only available to Silver level patrons on Patreon. Next week, he'll release it more broadly.

    Please consider supporting Cliff's work. Cliff's interviews knit together questions about philosophy with questions about childhood, family, hobbies, passions, formative experiences -- giving the reader a fuller sense of the whole person than one generally sees. Check out his interviews with Josh Knobe, Kwame Appiah, David Chalmers, Sally Haslanger, Peter Singer, and Kate Manne, for example.


    In this interview Eric Schwitzgebel, professor of philosophy at University of California Riverside, and I discuss his father’s collaboration with Timothy Leary, his uncle who invented ankle monitors, pretending to be a mannequin for his father’s class, Christmas with an electric blue Buddha, his mother’s anti-theist views, being a chambermaid and skiing, writing plays, Rosencrantz and Guildenstern are Dead, T.S. Eliot, Stephen Jay Gould, Stanford and Kuhn, Feyerabend and Zhuangzi, disliking analytic philosophy, moving from Deleuze and Derrida to Hume and Dretske, living in a hippy co-op, wearing a dress as a political statement, memorizing Sylvia Plath, Dretske and Dupré, the Gourmet Report, working with Elisabeth Lloyd and John Searle, the allegations against Searle, the grad culture at Berkeley, love and death, the Bay Area Philosophy of Science reading group, Peter Godfrey-Smith, Stanford School philosophy of science, Bayes or Bust?, sneaking into György Gergely’s class, Alison Gopnik’s generosity, meeting his wife via a classified ad, the job market in 97, landing a job at U.C. Riverside where he is to this day, how the department has changed and he has changed as a teacher, his class on Evil, his work on the moral behavior of ethics professors, The Splintered Mind and philosophy on the internet, his theory of jerks, Brian Weatherson, experimental philosophy, our philosophical blind spots, his writing routine and process, work-life balance, My Dinner with Andre, election night 2008 versus election night 2016, and his last meal…

    Friday, May 04, 2018

    The Rise and Fall of Philosophical Jargon

    In 2010, I defined a discussion arc, in philosophy, as a curve displaying the relative frequency at which a term or phrase appears among the abstracts of philosophical articles. Since then, I've done a few analyses using discussion arcs, mostly searching for philosophers' names (here, here, here, here, here).

    Today I thought it would be fun to look at philosophical jargon -- its rise and fall, how much it varies over time, and as a measure of what is hot. Maybe you'll find it fun too!

    I rely on abstract searches in The Philosophers Index. NGram is nifty, but it doesn't do well with trends specifically in academic philosophy (see here). JStor is interesting too, but it's a limited range of journals, especially for articles less than six years old.

    First, I constructed a representative universe of articles (limiting the search to journal articles only): In this case, I searched for "the" in the abstracts, in five year intervals from 1940-present, except merging 1940-1949 for smoothness and a large enough sample. Then I searched for target terms in abstracts in the same five-year intervals. I constructed the curves by dividing the number of occurrences of the target term by the number of articles in the representative universe in each period.
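In code, the arc construction amounts to simple per-period division. Here's a minimal sketch (the function name and the sample counts are hypothetical illustrations, not the actual data):

```python
def discussion_arc(term_counts, universe_counts):
    """Relative frequency of a term per period: the number of abstracts
    containing the term divided by the size of the representative
    universe (abstracts containing "the") in the same five-year period."""
    return [t / u for t, u in zip(term_counts, universe_counts)]

# Hypothetical counts over three five-year periods:
arc = discussion_arc([12, 30, 18], [4000, 5000, 6000])
# arc == [0.003, 0.006, 0.003], i.e., 0.3%, 0.6%, 0.3% of abstracts
```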

    Some topics and terms are perennial, rising and falling a little, but not in any dramatic way. Others increase or decrease fairly steadily without a clear peak (allowing for some noisiness especially in the early data). For example, here are the arcs for happiness, Kant*, and skepticism or scepticism (all three fairly steady), evolution* and democra* (both rising), and induction (falling):

    (The asterisk is a truncation symbol; the y-axis is percentage of all abstracts containing the word.)

    [apologies for blurry image; click to enlarge and clarify]

    (You thought happiness was more important to philosophers than Kant? Wrong!)

    One way to measure how trendy or steady a topic is: the ratio of its percentage of discussion in its peak period to its average discussion across all periods. Exactly equal discussion over the whole period would yield a ratio of 1:1. Fairly steady discussion with some noise would fall between 1 and 1.5. Topics that rise and fall more substantially would be proportionally higher. Call this the Max/Average Ratio. For the six topics above, the Max/Average Ratios are:

  • Kant*: 1.3
  • happiness: 1.4
  • skepticism or scepticism: 1.4
  • evolution*: 1.5
  • democra*: 2.0
  • induction: 3.0
    Evolution*, though it approximately triples over the period, has a Max/Average Ratio not too far from one. Democra* rises from a substantially lower level of discussion than does evolution* and has a higher Max/Average Ratio. Induction crashes down to about a sixth of its initial level of discussion (0.174% in the first four periods to 0.028% in the final two) -- hence its moderately large ratio.
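The Max/Average Ratio itself is a one-liner; here's a quick sketch (function name mine; the arc values are made up for illustration and given in percent for readability):

```python
def max_average_ratio(arc):
    """Peak-period discussion divided by mean discussion across all
    periods: 1.0 for perfectly even discussion, higher for trendier
    topics."""
    return max(arc) / (sum(arc) / len(arc))

max_average_ratio([2.0, 2.0, 2.0])  # 1.0: perfectly steady discussion
max_average_ratio([1.0, 4.0, 1.0])  # 2.0: one period spikes above the mean
```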

    Now let's consider some jargon terms that more clearly reflect hot topics.

    Since the scale is logarithmic, periods of zero incidence are not charted. Also remember that logarithmic scales visually compress peaks relative to linear scales. For example, though the decline of "language of thought" is not so visually striking, usage was in fact about seven times as much at peak as it is now.

    "Grue" was introduced by Nelson Goodman to describe a puzzle about how we know the future in his "New Riddle of Induction" in 1955. As you can see from the chart, discussion peaked 10-15 years later, in the late 1960s. "The original position" was introduced by John Rawls in his 1971 A Theory of Justice, describing part of an idealized decision-making process, and discussion of it peaked in the late 1980s, about 15 years later. "Supervenience" has a murkier origin, but was popularized in philosophy by R.M. Hare in 1952, to talk about how one set of properties might covary with another (for example the moral and the physical). Discussion peaked about 40 years later in the early 1990s. Hilary Putnam introduced "Twin Earth" (a planet with XYZ instead of water) in a thought experiment in 1975, and discussion peaked 15-20 years later in the early 1990s. "Radical interpretation" was introduced by Donald Davidson in the early 1970s, peaking 15 years later in the late 1980s. Finally, the "language of thought" was introduced by Jerry Fodor in his 1975 book of the same title, peaking 15-20 years later in the early 1990s.

    With the exception of supervenience -- maybe partly because the concept took some time to transition from ethics to the metaphysics of mind -- the pattern is remarkably consistent, with peaks about 15-20 years after a famous introduction event. This pacing reminds me of my earlier research suggesting that individual philosophers tend to receive peak discussion around ages 55-70, despite producing their most influential work on average about 20 years earlier (NB: the two data sets are only partly comparable, but I'm pretty sure the generalization is roughly true anyway). This is the pace of philosophy.

    For these terms and phrases, the Max/Average Ratios are a bit higher than for the rising and falling topics sampled above:

  • superven*: 2.7
  • "radical interpretation": 3.4
  • "Twin Earth": 3.9
  • "the original position": 4.2
  • "language of thought": 4.6
  • grue: 5.3
    The Max/Average Ratio, of course, doesn't really capture rising and falling; and the ratio will be inflated for more recently introduced terms, assuming virtually zero incidence before introduction.

    For a better measure of peakiness, we can examine the ratio of the maximum discussion to the average discussion in the first two and final two time periods. To avoid infinite peakiness and overstating the peakiness of rare terms, I'll assume a floor level of discussion of .01% in any period. Call this Peakiness with a capital P. Five of the six topics in the first group have a Peakiness between 1.3 and 2.0, and evolution* has 2.9.
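Sketched in code, Peakiness looks like this (the function name, the per-period handling of the floor, and the sample values are my own assumptions about the exact procedure):

```python
FLOOR = 0.01  # assumed floor level of discussion, in percent (.01%)

def peakiness(arc):
    """Max discussion divided by average discussion in the first two and
    final two periods; each endpoint period is floored at FLOOR so that
    rare terms don't yield infinite or wildly inflated ratios."""
    ends = [max(v, FLOOR) for v in arc[:2] + arc[-2:]]
    return max(arc) / (sum(ends) / len(ends))

# A term spiking mid-series from a steady baseline (values in percent):
peakiness([1.0, 1.0, 5.0, 1.0, 1.0])  # 5.0
# A grue-like term: near zero at the ends, so the floor kicks in:
peakiness([0.0, 0.0, 0.5, 0.0, 0.0])  # about 50
```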

    In contrast:

  • superven*: 3.8
  • "Twin Earth": 4.7
  • "radical interpretation": 6.3
  • "the original position": 8.4
  • "language of thought": 8.8
  • grue: 50.6
    Grue was hot! Although its peak discussion was about the same as superven*, it has crashed far worse -- or at least it has so far. If we had a longer time period to play with, we could try to make the analyses more temporally stable by sampling a window of 50 years before and after peak, thus giving superven* more of a fair chance to finish its crash, as it were.

    Okay, how about newer jargon? Let's try a few. I guess first I should say that jargon is wonderful and valuable, and I actually love the grue and Twin Earth thought experiments. Also some jargon becomes a permanent part of the discipline -- such as "dualism" and "secondary qualities". Maybe grue and Twin Earth will also prove in the long run to be permanent valuable contributions to the philosopher's toolbox, just in a lower-key way than back when they were hot topics. I don't really mean "jargon" as a term of abuse.

    In 1974, Robert Kirk introduced "zombies" in the philosophical sense (entities physically indistinguishable from people but with no conscious experience), but usage didn't really take off until they got discussion in several articles in Journal of Consciousness Studies in 1995 and in David Chalmers' influential The Conscious Mind in 1996. Contrary to popular rumor, the zombie doesn't appear to be dead quite yet. "Epistemic injustice" was introduced by Miranda Fricker in the late 1990s and early 2000s. "Virtue epistemology" was introduced by Ernest Sosa in the early 1990s. Fictionalism has a longer and more complicated history in logic and philosophy of math.

    The "explanatory gap" between physical or functional explanations and subjective conscious experience was introduced by Joseph Levine in 1983, but doesn't hit the abstracts until some papers of his in the early 1990s. "Experimental philosophy" in its earlier uses refers to the early modern scientific work of Newton, Boyle, and others. Its recent usage, referring to experimental work on folk intuitions about philosophical scenarios, hits the abstracts all at once with five papers in 2007. Consistent with my twenty-year hypothesis, of these, "explanatory gap" is the only one that shows signs of being past its peak (despite hopes expressed by some of my Facebook friends). Maybe fictionalis* is running longer.

    Okay, I can't resist trying a few more discussion arcs, just to see how they play out.

    "Possible worlds" goes back at least to Leibniz, but its first appearance in the abstracts is in a 1959 article by Saul Kripke. It peaks at 2.0% in the late 1960s, but has enduring popularity (currently 0.4%). "Sense data" as the objects of perception was introduced in the early 20th century by G.E. Moore and Bertrand Russell and has lots of discussion in the beginning of this dataset (1.7%), crashing down to levels appropriate to a historical relic (0.02%). "Qualia" has a couple of occurrences early in the abstracts and traces back to C.S. Peirce in the 19th century, then hits the abstracts again with an article by Sydney Shoemaker in 1975, peaking in the late 1990s.

    Supererogation (morally good acts beyond what is morally required) entered modern discussion in the late 1950s and early 1960s (first hitting the abstracts with Joel Feinberg in 1961), then peaked in the late 1980s -- but it looks like it might be staging a comeback? Wittgenstein introduced the idea of the "language game" in his posthumous Philosophical Investigations (1953), with discussion peaking in the late 1970s. Thomas Nagel introduced "moral luck" in a classic 1979 article, and although it peaked in the late 2000s, it hasn't yet declined much from that peak.

    "Possible worlds" has the highest Peakiness of the lot -- though nothing like grue -- at 8.4. "Language game" is next at 5.1. The rest aren't very Peaky, ranging from 2.1 to 2.9.

    I've spent more hours this week doing this than I probably should have, given all my other commitments -- but there's something almost meditative about data-gathering, and the arcs yield a perspective I appreciate on the historical brevity of philosophical trends.

    Tuesday, May 01, 2018

    Please Rate My Blog Posts for Inclusion in My Next Book

    I'm working away on selecting and revising blog posts and op-eds for my next book. Readers' feedback has been very helpful in narrowing down to just the most memorable and interesting posts! My final poll, 22 selected posts on philosophical method and the sociology of philosophy, is live today.

    As with all the other polls, this poll contains links to the original posts so that you can refresh your memory if you want. But there's no need to rate all of the listed posts! Even if you just remember one or two that you like, it would be useful for me to know that.

    Below are all seven polls.

    Polls 3, 5, and 6 have low response rates. It would be terrific if you could click through and rate a few posts that you like or remember, from one or more of those polls.


  • 1. moral psychology
  • 2. technology
  • 3. belief, desire, and self-knowledge
  • 4. culture and humor
  • 5. cosmology, skepticism, and weird minds
  • 6. consciousness
  • 7. philosophical method and the sociology of philosophy (new as of today)

    Thursday, April 26, 2018

    Three Ways Your Ethics Class Might Backfire

    ... if your aim is to encourage students to actually act better. (Of course, this might not be among your aims.)

    (This post was inspired by Janet Stemwedel's Facebook/Twitter post about students cheating in her ethics class, and subsequent discussion.)

    1. Creating the appearance that every pro has an equally good con. Annette Baier is among those who have emphasized this risk. A typical philosophical teaching style is to present both sides of every major topic discussed, in a more-or-less even-handed way. We tell students that they are welcome to defend either the pro or the con, and we encourage contrarian students who challenge the reasoning and conclusions of the assigned authors. This even-handed debate-like format might lead some students to think that, in ethical reasoning, there are no right or wrong answers to be found, just interminable back and forth. Probably this attitude will have little effect on students' practical choices outside of the classroom; but if it does have an effect, it might be to weaken their sense that ethical principles that they might otherwise have acted on are as sound and indisputable as they would previously have thought.

    2. Improving one's skill at moral rationalization. Suppose you want to do X -- steal a library book, for example. Of course, you wouldn't do that. It's wrong! But wait. Remember that ethics class you took? Maybe you can construct a utilitarian defense of stealing the book. No one would miss it that much, and you'd benefit greatly from keeping it. The institution has much more money than you do, and can easily replace it. Stealing the book would maximize human happiness! (Especially yours.) Such reasoning is rationalization, in the pejorative sense of that term, if your reasoning is basically just a biased search for reasons in support of the self-serving conclusion you'd like to reach. If you're tempted to do something morally wrong, skill at philosophical reasoning, and knowledge of a diverse range of possibly relevant moral principles, might enable you to better construct superficially attractive arguments that free you to feel okay doing the bad thing that you might otherwise have unreflectively avoided.

    3. Giving the sense that unethical behavior is pervasive. I've argued that people mostly aim for moral mediocrity. They aim, that is, not to be morally good by absolute standards but rather to be about as morally good as their peers, not especially better, not especially worse. If so, then changes in your perception of what is typical behavior among your peers can cause you to calibrate your own behavior up or down. If most people litter, or cheat, or selfishly screw over their co-workers, then it doesn't seem so bad if you do too, at least a little. (Conversely, if you learn that almost everyone around you is honorable and true, that can inspire you not to want to be the one schmuck.) Some ethics classes, perhaps especially business ethics classes, focus on case studies of grossly unethical behavior. This company did this bad thing, this other company did this other bad thing, still another company did this other horrible thing.... Without a complementary range of inspirational examples of morally laudable behavior by other companies, students might get the sense that the world is even fuller of malfeasance than they had previously thought, leading them to calibrate their sense of mediocrity down. If so many other people do so many bad things, then it hardly matters (perhaps it's even only fair) if I fudge a bit on my expense report.

    Do ethics classes actually have any of these backfire effects? I think we really have no idea. The issue remains almost entirely unstudied in any rigorous, empirical way.

    [image source]

    Wednesday, April 25, 2018

    Help Me Choose Posts for My Next Book: Consciousness

    As I've mentioned several times, my next book will consist of selected revised blog posts and op-eds. I've narrowed it down to about 150 posts, in seven broad categories. I'd really appreciate your help in narrowing it down more!


  • moral psychology
  • technology
  • belief, desire, and self-knowledge
  • culture and humor
  • cosmology, skepticism, and weird minds
  • consciousness (live as of today)
  • metaphilosophy and sociology of philosophy
    Every week or so I'm posting a poll with about twenty posts or op-eds to rate, on one of those seven themes. I've found the polls helpful in thinking about what has resonated with readers or been memorable for them. Many thanks to those of you who have responded!

    Each poll will also contain links to the original posts and op-eds so you can refresh your memory if you want. But there's no need to rate all of the listed posts! Even if you just remember one or two that you like, it would be useful for me to know that.

    Today's poll, 19 selected posts on consciousness.

    [image source]

    Saturday, April 21, 2018

    Birthday Cake and a Chapel

    Last weekend, at my 50th birthday party, one guest asked, "Now that you're fifty, what wisdom do you have to share?" I answered, "Eat more birthday cake!"

    He seemed disappointed with my reply. I'm a philosopher; don't I have something better to say than "eat more cake"? Well, partly my reply was more serious than he may have realized; and partly I wanted to dodge the expectation that I have any special wisdom to share because of my age or profession. Still, I could have given a better answer.

    So earlier this week, I drafted a post on love, meaningful work, joy, and kindness. Some kind of attempt at wisdom. Then I thought, too, of course one also needs health and security. Well, it's an ordinary list; I wouldn't pretend otherwise. Maybe my best attempt at wisdom reveals my lack of any special wisdom. Better to just stick with "eat more birthday cake"? I couldn't quite click the orange "publish" button.

    Thursday, a horrible thing happened to someone I love. I won't share the details, for the person's privacy. But that evening I found myself in a side room of the Samuelson Chapel at California Lutheran University. The chapel made me think of my father, who had been a long-time psychology professor at CLU. (I've posted a reminiscence of him here.)

    In the 1980s, CLU was planning to build a new chapel at the heart of campus, and my father was on the committee overseeing the architectural plans. As I recall, he came home one evening and said that the architect had submitted plans for a boring, rectangular chapel. Most of the committee had been ready to approve the plans, but he had objected.

    "Why build a boring, blocky chapel?" he said. "Why not build something glorious and beautiful? It will be more expensive, yes. But I think if we can show people something gorgeous and ambitious, we will find the money. Alumni will be happier to contribute, the campus will be inspired by it, and it will be a landmark for decades to come." Of course, I'm not sure of his exact words, but something like that.

    So on my father's advice the committee sent the plans back to be entirely rethought.

    Samuelson Chapel today:

    Not ostentatious, not grandiose, but not just a boring box either. A bit of modest beauty on campus.

    As I sat alone in a side room of Samuelson Chapel on that horrible evening, I heard muffled music through the wall -- someone rehearsing on the chapel piano. The pianist was un-self-conscious in his pauses and explorations, experimenting, not knowing he had an audience. I sensed him appreciating his music's expansive sound in the high-ceilinged, empty sanctuary. I could hear the skill in his fingers, and his gentle, emotional touch.

    In my draft post on wisdom, I'd emphasized setting aside time to relish small pleasures -- small pleasures like second helpings of birthday cake. But more cake isn't really the heart of it.

    In Samuelson Chapel, on a horrible night, I marveled at the beauty of the music through the wall. How many events, mostly invisible to us, have converged to allow that moment? The pianist, I'm sure, knew nothing of my father and his role in making the chapel what it is. There is something stunning, awesome, almost incomprehensible about our societies and relations and dependencies, about the layers and layers of work and passion by which we construct possibilities for future action, about our intricate biologies unreflectively maintained, about the evolutionary history that lays the ground of all of this, about the deepness of time.

    As I drove home the next morning, I found myself still stunned with awe. I can drive 75 miles an hour in a soft seat on a ten-lane freeway through Pasadena -- a freeway roaring with thousands of other cars, somehow none of us crashing, and all of it so taken for granted that we can focus mostly on sounds from our radios. One tiny part of the groundwork is the man who fixed the wheel of the tractor of the farmer who grew the wheat that became part of the bread of the sandwich of a construction worker who, sixty years ago, helped lay the first cement for this particular smooth patch of freeway. Hi, fella!

    The second helping of birthday cake, last weekend, which I jokingly offered to my guest as my best wisdom -- it was made from a box mix by my eleven-year-old daughter and hand-decorated by her. How many streams of chance and planning must intermix to give our guests that mouthful of sweetness? Why not take a second helping, after all?

    I think maybe this is what we owe back to the universe, in exchange for our existence -- some moments of awe-filled wonder at how it all has come together to shape us.

    Wednesday, April 18, 2018

    Help Me Choose Posts for My Next Book: Cosmology, Skepticism, and Weird Minds

    As I've mentioned several times, my next book will consist of selected revised blog posts and op-eds. I've narrowed it down to about 150 posts, in seven broad categories. I'd really appreciate your help in narrowing it down more!


  • moral psychology
  • technology
  • belief, desire, and self-knowledge
  • culture and humor
  • cosmology, skepticism, and weird minds (live as of today)
  • consciousness
  • metaphilosophy and sociology of philosophy
    Every week or so I'm posting a poll with about twenty posts or op-eds to rate, on one of those seven themes. I've found the polls helpful in thinking about what has resonated with readers or been memorable for them. Many thanks to those of you who have responded!

    Each poll will also contain links to the original posts and op-eds so you can refresh your memory if you want. But there's no need to rate all of the listed posts! Even if you just remember one or two that you like, it would be useful for me to know that.

    Today's poll, 23 selected posts on cosmology, skepticism, and weird minds.

    [image source]

    Friday, April 13, 2018

    Why the Epistemology of Conscious Perception Needs a Theory of Consciousness

    On a certain type of classical "foundationalist" view in epistemology, knowledge of your sensory experience grounds knowledge of the outside world: Your knowledge that you're seeing a tree, for example, is based on or derived from your knowledge that you're having sensory experiences of greens and browns in a certain configuration in a certain part of your visual field. In earlier work, I've argued that this can't be right because our knowledge of external things (like trees) is much more certain and secure than our knowledge of our sensory experiences.

    Today I want to suggest that foundationalist or anti-foundationalist claims are difficult to evaluate without at least an implicit background theory of consciousness. Consider for example these three simple models of the relation between sensory experience, knowledge of sensory experience, and knowledge of external objects. The arrows below are intended to be simultaneously causal and epistemic, with the items on the left both causing and epistemically grounding the items on the right. (I've added small arrows to reflect that there are always also other causal processes that contribute to each phase.)

    [apologies for blurry type: click to enlarge and clarify]

    Model A is a type of classical foundationalist picture. In Model B, knowledge of external objects arises early in cognitive processing and informs our sensory experiences. In Model C, sensory experience and knowledge of external objects arise in parallel.

    Of course these models are far too simple! Possibly, the process looks more like this:

    How do we know which of the three models is closest to correct? This is, I think, very difficult to assess without a general theory of consciousness. We know that there's sensory experience, and we know that there's knowledge of sensory experience, and we know that there's knowledge of external objects, and that all of these things happen at around the same time in our minds; but what exactly is the causal relation among them? Which happens first, which second, which third, and to what extent do they rely on each other? These fine-grained questions about temporal ordering and causal influence are, I think, difficult to discern from introspection and thought experiments.

    Even if we allow that knowledge of external things informs our sense experience of those things, that can easily be incorporated in a version of the classical foundationalist model A, by allowing that the process is iterative: At time 1, input causes experience which causes knowledge of experience which causes knowledge of external things; then again at time 2; then again at time 3.... The outputs of earlier iterations could then be among the small-arrow inputs of later iterations, explaining whatever influence knowledge of outward things has on sensory experiences within a foundationalist picture.

    On some theories, consciousness arises relatively early in sensory processing -- for example, in theories where sensory experiences are conscious by virtue of their information's being available for processing by downstream cognitive systems (even if that availability isn't much taken advantage of). On other theories, sensory consciousness arises much later in cognition, only after substantial downstream processing (as in some versions of Global Workspace theory and Higher-Order theories). Although the relationship needn't be strict, it's easy to see how views according to which consciousness arises relatively early fit more naturally with foundationalist models than views according to which consciousness arises much later.

    The following magnificent work of art depicts me viewing a tree:

    [as always, click to enlarge and clarify]

    Light from the sun reflects off the tree, into my eye, back to primary visual cortex, then forward into associative cortex where it mixes with associative processes and other sensory processes. In my thought bubble you see my conscious experience of the tree. The question is, where in this process does this experience arise?

    Here are three possibilities:

    Until we know which of these approaches is closest to the truth, it's hard to see how we could be in a good position to settle questions about foundationalism or anti-foundationalism in the epistemology of conscious perception.

    (Yes, I know I've ignored embodied cognition in this post. Of course, throwing that into the mix makes matters even more complicated!)

    Tuesday, April 10, 2018

    Help Me Choose Posts for My Next Book: Culture and Humor

    As I've mentioned, my next book will consist of selected revised blog posts and op-eds. I've narrowed it down to about 150 posts, in seven broad categories. I'd really appreciate your help in narrowing it down more.

  • moral psychology
  • technology
  • belief, desire, and self-knowledge
  • culture and humor (live as of today)
  • cosmology, skepticism, and weird minds
  • consciousness
  • metaphilosophy and sociology of philosophy
    Every week or so I'll post a poll with about twenty posts or op-eds to rate, on one of those seven themes. So far, I have found the polls helpful in thinking about what has resonated with readers or been memorable for them. Many thanks to those of you who have responded!

    Each poll will also contain links to the original posts and op-eds so you can refresh your memory if you want. But there's no need to rate all of the listed posts! Even if you just remember one or two that you like, it would be useful for me to know that.

    Today's poll, 25 selected posts on culture, including some attempts at humor.

    [image source]

    Thursday, April 05, 2018

    The Experience of Reading: Empirical Evidence

    What do you experience while reading? Do you experience inner speech, as though you or the author are saying the words aloud? Do you experience visual imagery? Do you experience the black marks on the white page? All of these at once? Different ones at different times, depending on how you're engaging with the text?

    Although educators, cognitive psychologists, and literary critics often make claims about readers' typical experience, few researchers have bothered to ask readers, in any systematic way, these basic questions about their experience. [Note 1] So Alan Tonnies Moore and I decided to try doing that. Alan's work on this topic became his 2016 dissertation, and we now have a paper forthcoming in Consciousness and Cognition (final submitted manuscript available here).

    In each of three experiments, we presented readers with several hundred words of text. In two of these experiments, a beep interrupted participants' reading. Immediately after the beep, readers were to report what was in their experience in the final split second before the beep. We collected both general free-response descriptions of their experience and yes/no/maybe reports about whether they were experiencing visual imagery, inner speech, and visual experience of the words on the page (all phrases defined beforehand). In all three experiments, we also collected readers' retrospective assessments of how frequently they experienced visual imagery, inner speech, and the words on the page while reading the passage we had presented.

    At the end of each experiment, participants answered several questions about the text they had just finished reading. Some questions we thought might relate to visual imagery (such as memory for visual detail), other questions we thought might relate to inner speech (such as memory for rhyme), and still other questions we thought might relate to visual experience of words on the page (such as memory of the font). We were curious whether performance on those questions would correlate with reported experience. Do visual imagers, for example, remember more visual detail?

    Here are the main things we found:

    (1.) People differ immensely in what types of experiences they report while reading. Some people report visual imagery all the time; others report it rarely or never; and still others (the majority) report visual imagery fairly often but not all of the time. Similarly for inner speech and words on the page.

    To see this, here are a couple of histograms [click to enlarge and clarify].

    Readers' retrospective reports in Experiment 2 (Experiments 1 and 3 are similar):

    Readers' yes/no/maybe reports immediately after the beep, also in Experiment 2:

    (2.) Inner speech is less commonly reported than many researchers suppose. This has also been emphasized in Russ Hurlburt's related work on the topic. Although some researchers claim or implicitly assume that inner speech is normally present while reading, we found it in a little more than half of the samples (see the histograms above). Visual imagery was more commonly reported than inner speech.

    (3.) Reported experience varies with passage type, but not by a lot. In Experiment 2, we presented readers with richly visually descriptive prose passages, rhyming poetry, and dramatic dialogue, thinking that readers might experience these types of passages differently. Differences were in the predicted directions, but weren't large. For example, visual imagery was reported in 78% of the beeped moments during richly descriptive prose passages vs 66% of the poetry passages and 69% of the dramatic dialogue (chi-square = 14.4, p = .006). Inner speech was reported in 65% of the beeped moments during dramatic dialogue passages vs 59% of the poetry passages and 53% of the descriptive prose (chi-square = 19.1, p = .001).
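    For readers curious about the test behind those numbers: a chi-square statistic compares the observed counts of yes/no reports across passage types with the counts you would expect if report rates were identical in every condition. Here is a minimal sketch in Python using made-up yes/no counts for illustration (the actual per-condition sample sizes are in the paper, not reproduced here):

    ```python
    def chi_square(table):
        """Pearson chi-square statistic for a contingency table
        (rows = passage types, columns = yes/no report counts)."""
        row_totals = [sum(row) for row in table]
        col_totals = [sum(col) for col in zip(*table)]
        grand_total = sum(row_totals)
        stat = 0.0
        for i, row in enumerate(table):
            for j, observed in enumerate(row):
                # Expected count if reports were independent of passage type
                expected = row_totals[i] * col_totals[j] / grand_total
                stat += (observed - expected) ** 2 / expected
        return stat

    # Hypothetical yes/no imagery reports for three passage types
    # (illustrative numbers only, not the study's data):
    table = [
        [78, 22],  # richly descriptive prose
        [66, 34],  # rhyming poetry
        [69, 31],  # dramatic dialogue
    ]
    print(chi_square(table))
    ```

    The resulting statistic is then compared against the chi-square distribution with (rows - 1) x (columns - 1) degrees of freedom to get the p-value.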

    (4.) There was little or no relationship between reported experience and seemingly related comprehension or skill tasks. For example, people who reported seeing the words of text on the page were not detectably more likely to remember the font used. People who reported visual imagery were not detectably more likely to remember the color of objects described in passages. People who reported inner speech were not more likely to disambiguate difficult-to-pronounce words by reference to the rhyme scheme. Although all of this is possibly disappointing, it fits with some of my previous work on the poor relationship between self-reported experience and performance at behavioral tasks.

    Alan and I believe that sampling studies will soon become an important tool in empirical aesthetics, and we hope that this study helps to lay some of the groundwork for that.

    Full version of our paper available here.


    Note 1: One important exception to this generalization is Russ Hurlburt in his 2016 book with Marco Caracciolo and in his paper with several collaborators forthcoming in Journal of Consciousness Studies.


    Related Posts:

    What Do You Think About While Watching The Nutcracker? (Dec 17, 2007)

    The Experience of Reading (Nov 25, 2009)

    The Experience of Reading: Imagery, Inner Speech, and Seeing the Words on the Page (Aug 28, 2013)

    Waves of Mind Wandering During Live Performances (Jan 15, 2014)