Friday, June 14, 2019

Will Philosophy Ever Come to an End?

A couple of weeks ago, I was chatting with the prominent Chinese science fiction writer Xia Jia, who is visiting UC Riverside for a year. She asked whether I thought that, if we were to create, or become, post- or transhuman superintellects, all important philosophical questions would be answered.

You might think so. If fundamental philosophical questions about knowledge, value, meaning, and mentality aren't entirely hopeless -- if we can make some imperfect progress on them, even with our frail human intellects, so badly designed for abstract philosophical theorizing -- then presumably entities who are vastly intellectually superior to us in the right dimensions could make more, maybe vastly more, philosophical progress. Maybe they could resolve deep philosophical questions as easily as we humans can solve two-digit multiplication problems.

Or here's another thought: If all the facts of the universe are ultimately facts about microphysics and larger-level patterns among entities constituted microphysically, then the main block to philosophical understanding might be only the limits of our observational methods and our computational power. Although no superintelligence in the universe could realistically calculate every motion of every particle over all of time, maybe all of the "big picture" general issues at the core of philosophy would prove tractable with much better observational and calculational tools.

And yet...

I want to say no. Philosophy never could be fully "solved", even by a superintelligence. (It might end, of course, in some other way than being fully solved, but that's not the kind of end Xia or I had in mind.)

First reason: Any intelligent system will be unable to fully predict itself. It will thus always remain partly unknown to itself. This lack of self-knowledge will remain an ineradicable seed for new philosophy.

To predict its own behavior a system will require a subsystem or subprocess dedicated to the task of prediction. That subsystem or subprocess could potentially model all of the entity's other subsystems and subprocesses to an arbitrarily high degree of detail. But the subsystem could not model itself in perfect detail without creating a perfect model of its modeling procedures. But then, to fully predict itself, it would need a perfect model of its perfect model of its modeling procedures, and so on, off into a vicious infinite regress.

Furthermore, some calculations are sufficiently complicated that the only way to predict their outcome is to actually do them. For any complex cognitive task, there will (plausibly) be a minimum amount of time required to physically construct and run the process by which it is done. If there is no limit to the complexity of some problems, there will also (plausibly) be no limit to the minimum amount of time even an ideal process would require to perform the cognitive task, even if the cognitive task is completable in principle. Therefore, given any finite amount of time to construct the prediction there will always be some outcomes that a superintelligence will be unable to foresee.
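This kind of computational irreducibility can be made concrete with a toy example of my own choosing (not from the post): Wolfram's Rule 30 cellular automaton. So far as anyone knows, its center column admits no predictive shortcut -- the only known way to learn its millionth value is to run a million steps.

```python
def rule30_center(steps):
    """Run Wolfram's Rule 30 from a single live cell and
    return the center-column values, one per step."""
    cells = {0}   # positions of live cells
    center = []
    for _ in range(steps):
        center.append(1 if 0 in cells else 0)
        lo, hi = min(cells) - 1, max(cells) + 1
        # Rule 30: a cell is live next step iff
        # left XOR (center OR right) in the current step.
        cells = {i for i in range(lo, hi + 1)
                 if (i - 1 in cells) ^ ((i in cells) or (i + 1 in cells))}
    return center

print(rule30_center(6))  # → [1, 1, 0, 1, 1, 1]
```

No closed-form formula for this sequence is known: prediction here just is simulation, which is the point of the paragraph above.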

Now even if you grant that no superintelligent system could fully predict itself, it doesn't straightaway follow that philosophical questions will remain. Maybe the only sorts of questions that escape the superintelligence's predictive powers are details insufficiently grand to qualify as philosophical -- like the 10^10^100th digit of pi?

No, realistically, the actual self-predictive power of any practical superintelligence will always fall far, far short of that. As long as it has some challenging tasks and interests, it won't be able to predict exactly how it will cope with them until it actually copes with them. It won't know the outcome of its mega-intelligent processes until it runs them. So it will always remain partly a mystery to itself. It will be left to wonder uncertainly about what to value and prioritize in light of its ignorance about its own future values and priorities. I'd call that philosophy enough.

Second reason: No amount of superintelligence can, I suspect, entirely answer the question of fundamental values. I don't intend this in any especially mysterious way. I'm not appealing to spooky values that somehow escape all empirical inquiry. But it does seem to me that a general-capacity superintelligence ought always be able to question what it cares about most. A superintelligence might calculate with high certainty that the best thing to do next, all things considered, would be A. But it could reopen the question of the value weightings that it brings to that calculation.

Again, we face a kind of regress. Given values A, B, and C, weighted thus-and-so relative to each other, A might be clearly the best choice. But why value A more than C? Well, the intelligence could do further inquiry into the value of A -- but that inquiry too will be based on a set of values that it is at least momentarily holding fixed. It could challenge those values using still other values....

The alternative seems to be the view that there's only one possible overall value system that a superintelligence could arrive at, and that once it has arrived there it need never reflect on its values again. This strikes me as implausible, when I think about the diversity of things that people value and about how expanding capacities and experience increase that diversity rather than shrink it. As new situations, opportunities, and vistas open up for any being, no matter how intelligent, it will have new occasions to reflect on changes to its value system. Maybe it invents whole new forms of math, or art, or pleasure -- novel enough that big questions arise about how to weigh the pursuit of these new endeavors against other pursuits it values, and unpredictable enough in long term outcome that some creativity will be needed to manage the uncertain comparison.

No superintelligence could ever become so intelligent as to put all philosophical questions permanently to rest.


Related: Possible Psychology of a Matrioshka Brain (Oct 9, 2014)

[image source]

Friday, June 07, 2019

Why Academic Philosophy Ought to Be One of the Most Demographically Diverse Disciplines, Instead of One of the Least

Academic philosophy in the U.S. remains largely male. In 2017 (the most recent data available), only 27% of PhDs in Philosophy were granted to women, according to the National Science Foundation's Survey of Earned Doctorates. This percentage hasn't budged for decades: In the 1990s, 27% of Philosophy PhDs were women. In the 2000s, also 27%. (See here.) Among major field and subfield categories with at least 200 PhDs awarded in 2017, only the major fields of Engineering and Mathematics had a smaller proportion of women (25%), along with some subfields within Engineering, Mathematics, and the Physical Sciences. (Physical Sciences overall had 33% women PhD recipients.) In the humanities, arts, and social sciences, only Economics (34%) and Religious Studies (35%) awarded fewer than 40% of their PhDs to women.

Philosophy in the U.S. remains largely non-Hispanic white: 86% in the NSF SED data from 2017. Among the 92 major fields and subfields awarding doctorates to at least 200 U.S. citizens or permanent residents who reported their ethnicity, only Ecology was more white (89%). In 2017, only 20/340 (6%) of Philosophy PhD recipients reported being Hispanic or Latino, 11 reported being Asian (3%), 4 reported being Black or African American (1%), 0 (0%) reported being American Indian or Alaska Native, and 8 (2%) reported being mixed race or other. The percentage of Hispanic and Asian Philosophy PhD recipients has very slowly increased over time, but the percentage of Black and Native American Philosophy PhD recipients has remained essentially flat at 1%-2% and 0%-1% respectively since the beginning of the recorded data in the 1970s (see here).

Although systematic demographic data on disability, LGBTQ status, economic disadvantage, and other types of demographic diversity are not as readily available, academic philosophy in the U.S. might not be especially diverse in these respects either. (See, for example, this recent testimonial by a transgender graduate student.)

It is sometimes suggested that even in a fully egalitarian society, equally welcoming of people from all backgrounds, we should not expect an exactly proportional representation of women and of the races in academic philosophy. Some academic fields might be naturally more attractive to men and white folks, others to women and black folks, and with equal opportunity, people in these different demographic categories might sort themselves disproportionately. Little boys disproportionately like monster trucks and little girls disproportionately like cute ponies, even if their parents (supposedly) don't force such preferences upon them. A career in academic philosophy might be like that. Philosophy might be the monster truck of academic disciplines.

While such reasoning might or might not apply to Ecology and Mechanical Engineering, such claims cannot, I think, be true of academic philosophy as properly practiced. Academic philosophy should in fact skew the opposite direction, with unusual demographic backgrounds disproportionately over-represented.

Academic philosophy is not about one thing. It's about everything. It concerns the entire universe and the whole human condition. One gender or one ethnicity may care especially much about monster trucks or black holes, but one gender or one ethnicity should not similarly tend to care more than another about the human condition in general. We all do, or should, care about philosophy. You may have no theory of black holes, but for sure you have philosophical views, at least implicitly -- background ethical positions, background assumptions about the general nature of things, a background sense of the sources of knowledge, opinions about death and the possibility or not of an afterlife, aesthetic opinions, political values. Academic philosophy is, or should be, just the most general academic treatment of issues such as these. It is unlikely that in an egalitarian society, women and non-whites would be less interested in exploring fundamental questions about the world and the human condition than are white men, or less inclined to pursue them given the opportunity.

Maybe something about the highly abstract nature of academic philosophy, or its combativeness, or its roots in the European tradition tends to draw white men and repel others? I am not sure that's right, but even if so, these are accidental features of the field, which we ought to consider reforming. Philosophy can work as well by science fictional narrative [1] or engaged dialogue as by highly abstract argumentation. It needn't, and I think shouldn't, be as combative as it often is. And the ignorance and disrespect U.S. philosophers often display toward non-European traditions is a flaw we should repair, rather than a feature to be taken for granted.

There is one structural feature of academic philosophy that I do think ought to influence its demographic proportions: Its celebration of the presentation of novel views and arguments, minority positions, and challenges to what people ordinarily take for granted. Philosophy is, and should be, to a substantial extent, about considering new ideas, rethinking convention, exploring radical and strange-seeming possibilities. For these reasons, outsiders to the cultural mainstream and people who have lived with the disadvantages of existing cultural structures and worldviews, ought to be especially valued and welcomed in the discipline -- overrepresented rather than underrepresented.


Note 1: If you think that science fiction is mostly white male, you need to update to the 21st century, friend. For example, check out last year's Nebula Award nominees.

[image source]

Friday, May 31, 2019

The Dualist's Quadrilemma -- and All of Ours?

Old-school substance dualists hold that people not only have material or physical bodies but also immaterial souls, and that the soul rather than the body is the locus and origin of conscious experience (qualia, "something-it's-like"-ness, phenomenality). Two great advantages of this view are (1.) that it avoids the puzzle of having to explain how dumb, bumping matter can give rise to something as seemingly ontologically radically different as consciousness (at least many people find this puzzling), and (2.) that it promises hope of an afterlife, since possibly the soul could continue to exist after the body has died.

However, substance dualism faces a scope-of-ensoulment problem, or what I'll call the dualist's quadrilemma. The more I think about this quadrilemma, the more I suspect some version of it might trouble virtually all theories of consciousness.

Who has a soul? I see four possible answers, each of which is problematic:

(1.) Only human beings have souls. (At least on Earth. Let's bracket Martians and angels.) At the moment of conception or at the moment of birth, God or nature gives us a soul, but no dog or chimpanzee or raven has a soul. From this it follows, since souls are the locus of conscious experience, that dogs and chimpanzees and ravens have no conscious experiences -- no emotional experiences, no experiences of pain or hunger, no visual or olfactory experiences. There is nothing it's like to be a dog or chimpanzee or raven -- they are, so to speak, entirely experientially blank, as blank as we normally assume a toy robot to be. They emit behavior similar to the behavior we emit when we experience pain or hunger, and they have nervous systems that closely resemble ours, but that is misleading. They lack the soul-stuff that turns on the lights.

This view is difficult to accept, both on commonsensical and on scientific grounds. Ordinarily, we think that dogs, chimpanzees, and ravens do have experiences, even if their experiences are not as cognitively complex as ours. And scientifically, this view seems to overestimate the gulf between us and our nearest biological relatives -- and furthermore seems to require that there was some discrete moment in our evolutionary history when we changed from unensouled to ensouled creatures (Australopithecus anamensis? Homo habilis?), despite, presumably, no radical saltation in our physiology.

(2.) Everything has a soul! Maybe we all are subparts of a single, grand, universe-sized soul; or maybe there are many, many, tiny souls for tiny objects such as electrons.

Although panpsychist views of this sort have received increasing attention in the philosophy and psychology of consciousness recently, most people in our culture appear to find panpsychism too bizarre to accept. My own view is that one of the main pressures in favor of panpsychism is the seeming unpalatability of the other three horns of this quadrilemma.

(3.) There's a line in the sand. Somewhere between electrons and humans, there's a sharp line between the ensouled and the unensouled creatures. Maybe mammals have souls but no other animal does. Or maybe toads have souls but (cognitively simpler) pond frogs don't.

The problem with this view is that physiology and cognitive sophistication come in degrees, with no sharp dividing line among the species. If having a soul matters, then there ought to be some radical difference between the ensouled and unensouled creatures -- at least in their cognition, and probably also in their physiology. But the only place it seems at all plausible to draw a sharp line is between human beings and all the rest -- which puts us back on Horn 1.

[illustration of one possible theory of ensoulment; image source]

(4.) Having a soul isn't a yes/no thing but rather a matter of degree. Some creatures are half-ensouled or 6% ensouled.

The problem with this view is that it requires an entirely novel metaphysics that is difficult to envision. What would it be to be kind of ensouled? Some properties lend themselves to in-between, indeterminate cases: a color might be on the vague boundary between blue and not-quite-blue, a person might be in the vague region between being an extravert and being not quite an extravert. We can imagine how such cases go and build a metaphysics of colors and personality traits to accommodate in-between cases and matters of degree. But souls seem like the kinds of things that one either has or doesn't have, with no in-between cases. I am not aware of any philosopher who has attempted to construct a metaphysics of half-souls, and it's hard to see how this would go.

Now you might say so much the worse for substance dualism! But I think non-dualists face the same quadrilemma, even if not quite as vividly.

What kinds of creatures are conscious? If we don't want to say "only humans" and we don't want to say "everything", then we need either a bright line somewhere or we need a concept of in-between consciousness. A bright line seems implausible given the continuity of cognitive capacities and physiology across living species, in the course of fetal development, and in the course of evolution. So are we (even non-dualists) then pushed into conceptualizing consciousness as the kind of thing that one can kind of have, or half have? I, at least, find this difficult to conceive, and I know of no good attempts to make theoretical sense of the idea. On the face of it, a stream of conscious experience appears to be something you either have (however small or snail-like) or fail to have -- like a soul. There's either a center of subjectivity where experiences arise, or there isn't.

So maybe I need to reconcile myself to one of the other three horns? But they're all so unattractive!

Thursday, May 23, 2019

Science Fiction as Philosophy

Ploddingly detailed expository arguments deserve a central role in academic philosophy. Yay for boring stuff![*] But emotionally engaging fiction can be philosophy too. And science fiction or "speculative fiction" has a special philosophical value that is insufficiently appreciated by mainstream philosophers.

I am inspired to write this after having organized and chaired a session on Science Fiction as Philosophy at the SFWA Nebula conference last weekend. (SFWA is the Science Fiction & Fantasy Writers of America, the main professional organization of SF writers in the U.S.)

Canonically recognized Western philosophers have often worked through fiction: Sartre's plays, Camus's stories, Nietzsche's Zarathustra, Rousseau's Emile and Heloise, to some extent Plato's dialogues, and (a personal favorite) Voltaire's Candide. My favorite non-Western philosopher, Zhuangzi, often uses brief parables or goofy stories (such as his famous butterfly dream). And I would argue that great works of fiction are often philosophical in the sense that they inspire, or become the medium of, potentially transformative reflection on the human condition -- even if those works aren't normally treated as "works of philosophy". In the Western literary tradition, for example: Shakespeare, George Eliot, Dickens, Dostoyevsky, Proust, Faulkner.

What is philosophy? I reject the idea that philosophy is argument. If philosophy is argument, then Confucius's Analects is not philosophy, and the pre-Socratics' fragments are not philosophy, and the aphorisms of Nietzsche and Wittgenstein are not philosophy. I say, instead: If an essay, or a parable, or a dialogue, or an aphorism, or a movie engages the reader toward new reflections on fundamental questions about meaning, value, the human condition, the nature of knowledge or art or morality or love or mentality, pushing us out of our settled and conventional ways of thinking, challenging us to explore and reconsider -- that's philosophy. Most real philosophy, as experienced by most people, takes the form of fiction.

Although expository essays have many virtues, they also have limitations. Compared to fictions, expository essays tend to lack imaginative specificity and emotional power. Philosophy looks different through the lens of imagination and emotion. It's one thing to consider, wholly abstractly, some principle like "in an emergency, you should act to maximize the expected number of lives saved". Maybe it sounds pretty good in the abstract (perhaps with some modifications to consider quality of life or expected remaining life years). But it's hard really to evaluate an abstract claim without trying some thought experiments. For example, if the only way to save five innocent people in a hideout would be to kill a noisily crying baby, ought you do it, as the abstract principle says you should?

Our philosophical evaluations are dry and empty if we don't challenge ourselves to emotionally engage with imaginatively vivid scenarios and consequences. We needn't always judge that overall the best thing to do is the thing that's most emotionally attractive when vividly imagined, but we should at least think through how it might really feel to live one way or another. Philosophers' paragraph-long thought experiments start us down the path. But more vivid, richly imagined fictions take us farther. Fiction and abstract expository argument have complementary roles to play in philosophy. Each needs the other.

Science fiction or speculative fiction deserves a special role. "Literary fiction" imagines scenarios that are broadly within the normal run of human experience. Speculative fiction, as I define it, imagines scenarios beyond the normal run of human experience. Speculative fiction can pull apart things that normally go together, can highlight and exaggerate one aspect of life so that we can see it better, can imagine possible transformations of our world and society. Speculative fiction, when written with philosophical purpose, is philosophical thought experiment with blood and bones.

Consider George Orwell's 1984 and Animal Farm, Ursula K. Le Guin's "The Ones Who Walk Away from Omelas", great philosophical SF movies like The Matrix and Her, great philosophical TV shows like Star Trek: The Next Generation or Black Mirror. All of them imagine a way the world could be, or a helpfully simplified and cartooned world with certain aspects exaggerated, and they challenge us to think better about fundamental questions of human value and the human condition -- and they do so in a way that no abstract essay could.

Today in my upper-division class Philosophy of Mind I will teach Star Trek: The Next Generation's episode on whether the robot Data deserves human rights, alongside expository prose by John Searle and Daniel Dennett's famous philosophical story "Where Am I?". Terrific philosophy, all -- one no less than the others.

[image source]


[*] Though see also Trusting Your Sense of Fun (Jan 2, 2013).

Monday, May 20, 2019

Intuition, Disagreement, and a Rope Around the Earth

Check out this awesome new video by philosopher Jon Ellis at UC Santa Cruz.

The video starts with this thought experiment from Wittgenstein:

Suppose that a very long piece of rope is wrapped around the equator of the Earth. Now imagine that the rope is lengthened by one yard, but its circular form is preserved, so that the rope no longer fits snugly but occupies a circle at some slight constant distance from the Earth's surface. How great would that distance be? (reported in Horwich 2012, p. 7).
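For what it's worth, the arithmetic behind the puzzle is simple (a quick sketch of the standard solution, not part of Wittgenstein's point):

```python
import math

# Circumference C = 2 * pi * r, so r = C / (2 * pi).
# Lengthening the rope by delta_C therefore lifts it uniformly by
# delta_C / (2 * pi) -- no matter how big the sphere underneath is.
delta_C = 1.0                      # yards added to the rope
delta_r = delta_C / (2 * math.pi)  # uniform clearance, in yards
print(delta_r)       # about 0.159 yards
print(delta_r * 36)  # about 5.7 inches
```

The answer -- a bit under six inches of clearance, and the same whether the rope circles the Earth or a tennis ball -- is what makes the case such a vivid example of intuition gone wrong.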

Your attitudes toward philosophical and political propositions might be kind of like your attitude toward that rope -- but with no clear mathematical means to resolve the disagreement.

If you like the video, you might check out Jon's discussion of my paper Rationalization in Moral and Philosophical Thought.

Thursday, May 16, 2019

The Ethics of Drones at the University of California

I've been appointed to an advisory board to evaluate the University of California's systemwide policy regarding Unmanned Aircraft Systems or "drones". We had our first meeting Tuesday. Most of the other members of the committee appear to be faculty who use drones in their research, plus maybe a risk analyst or two. (I missed the first part of the meeting with the introductions.)

Drones will be coming to college campuses. They might come in a big way, as Amazon, Google, and other companies continue to explore commercial possibilities (such as food and medicine delivery) and as drones' great potential for security and inspection becomes increasingly clear. Technological change can be sudden, when an organization with resources decides the time is right for a big investment. Consider how fast shareable scooters arrived on campus and in downtown areas.

We want to get ahead of this. Since the University of California is such a large and prominent group of universities, our policies might become a model for other universities. The advisory board is only about a dozen people, and they seem keen to hear the perspective of a philosopher interested in the ethics of technology. So I have a substantial chance to shape policy. Help me think. What should we be anticipating? What ethical issues are particularly important to anticipate before Amazon, or whoever, arrives on the scene and suddenly shapes a new status quo?

One issue on my mind is the combination of face recognition software and drones. It's generally considered okay to take pictures of crowds in public places. But drones could create a huge stream of pictures or video, sometimes from unexpected angles or locations, possibly with zoom lenses, and possibly with facial recognition -- raising privacy issues orders of magnitude more serious than those raised by photographers on platforms taking still photos of crowds on a busy street.

Another issue on my mind is the possibility of monopoly or cartel power among the first company or first few companies to set up a drone network -- which in the (moderately unlikely but not impossible) event that drone technology starts to become integral to campus life, could become another source of abusive corporate power. (Compare the abuses of for-profit academic journals.)

I'm not as much concerned about conventional safety issues (drones crashing into crowded areas), since such safety issues are already a central focus of the committee. I'd like to use my role on this committee as an opportunity to highlight potential concerns that might be visible to those of us who think about the ethics of technology but not as obviously visible to drone enthusiasts and legally trained risk analysts.

An agricultural research drone at UC Merced

Incidentally, what great fun to be a tenured philosophy professor! I get to help shape drone policy. Last weekend, I enjoyed entertaining UCSD philosophers with lots of amazingly weird facts about garden snails (love darts!, distributed brains!), while snails crawled around on the speaker's podium. This coming weekend, I'll be running a session at the conference of the Science Fiction & Fantasy Writers of America on "Science Fiction as Philosophy". I'm designing a contest to see if any philosopher can write an abstract philosophical argument that actually convinces readers to give money to charity at higher rates than control. (So far, the signs aren't promising.) Why be boring?

Philosophers, do stuff!


Friday, May 10, 2019

Early Onset Summer Illusion

Every spring I suffer the Summer Illusion. The following three incompatible propositions all seem to me, in the spring, to be true:

(1.) When summer arrives, I'll finally get a bunch of that research done which has been crowded out by my teaching and administrative commitments during the school year.

(2.) When summer arrives, I'll finally get a chance to do all of that non-academic stuff that I've been putting off during the school year -- big home maintenance projects, vacation travel to the four new places I want to visit, my plan to catch up on the whole history of golden-age science fiction.

(3.) When summer arrives, I'll finally have a chance to spend a lot more time just relaxing.

The Summer Illusion is surprisingly robust. Every spring, I suffer the Summer Illusion, building up big plans and hopes. Then, every summer, as those hopes fall apart, I scold my springtime self for having fallen, yet again, into the Summer Illusion. The pattern is so common and predictable I've given it a memorable name, The Summer Illusion, to help convince myself that it really is an illusion -- and hopefully not fall into it again. And yet I fall into it again.

You might think that the Summer Illusion depends on entertaining only one of the three propositions at a time. You might think that the way it works is that sometimes I entertain proposition 1 (I'll get my research done!), and at other, different times I entertain proposition 2 (I'll get all my other projects done!), and at still other times I entertain proposition 3 (I'll finally have lots of time to relax!). Largely this is so. And yet the Summer Illusion also survives simultaneous consideration of the three propositions. Even looking at the propositions side by side like this, I am tempted to believe them. Some part of me thinks of course all three can't be true, as I've seen time and time again -- and yet in my heart I continue to believe. Summer days expand so magnificently to fit my fantasies!

This year, I have Early Onset Summer Illusion. While I was working on my book, I thought to myself: Come April and May I will have plenty of time for all of my other projects. And so I put off project and project and project and project. And I also thought to myself: Come April and May, I'll finally have some good time to relax a bit more at work.

It's almost an inversion of busyness. If a period of time has the outward appearance of being a "relaxed", low-commitment period of time, it serves as a fantasy-and-procrastination magnet. I pile my future plans and hopes into that period of time, not noticing the impossibly mounting sum of expectations.

Well, now I'm off to U.C. San Diego to talk to the Philosophy Department about whether garden snails are conscious -- come by if you like! If this blog post seems a little short, well, it seemed like this week would be such an easy week, and so I found that I'd promised to finish this and this and this and this....

[image source]

Thursday, May 02, 2019

Flavors of Group Consciousness: Vanilla, Strawberry, and Chunky Monkey with Extra Nuts

Yesterday, I was rereading Philip Pettit's 2018 article "Consciousness Incorporated". Due to some vocabulary mismatch, I find his exact commitments on group phenomenal consciousness not entirely clear [note 1]. (By "consciousness" or "phenomenal consciousness" I just mean conscious experience, the stream of experience, or "something-it's-like-ness" in a relatively theoretically innocent sense.)

Pettit endorses group consciousness of some flavor. But what flavor? A mild flavor, he hopes: something "sufficient to engage philosophical interest" but not too "challenging and mysterious" (p. 33). In contrast, in my article "If Materialism Is True, the United States Is Probably Conscious", I see myself as defending a radical position that clashes sharply with ordinary common sense. So the question is: can we distinguish among different degrees of ontological commitment in endorsing "group consciousness", with vanilla on one end (palatable to almost everyone) and, on the other end, well, let's call it "Chunky Monkey with Extra Nuts"?

Group Consciousness: Vanilla

Sometimes a group of people all, or mostly, share a particular conscious state -- in a weak or innocuous sense of "sharing". Individually, everyone (or almost everyone, or at least enough of the group) is undergoing that type of conscious experience. So if I say that the theater audience was alarmed by the sudden collapse of the lead actor onstage, or if I say that World Cup viewers around the globe saw the amazing goal, and if we assume that the alarm and the seeing are conscious experiences, then in a certain innocuous sense the groups share conscious experiences.

(One complication: The alarm or the seeing might manifest differently in different members of the group, depending on, e.g., their mood and their viewing position. Set this aside for simplicity.)

Here's a depiction:

[as always, click to clarify and enlarge]

"The audience felt alarmed by the actor's sudden collapse": In this vanilla version of group consciousness, that statement only implies that (enough) of the audience experienced, as individuals, a feeling of alarm (conscious state A in the depiction above).

Pettit clearly wants something more flavorful than this.

Group Consciousness: Chunky Monkey with Extra Nuts

A radical view of group consciousness, in contrast, posits the existence of a stream of experience possessed by the group in addition to the streams of experience possessed by each individual. I have argued that the United States might have a distinctive stream of experience over and above the experiences had by individual citizens and residents of the United States. If streams of experience, or centers of subjectivity, are discrete, countable things (they might not be), and the group contains N members, then on the Chunky-Monkey-Extra-Nuts view, there are N+1 discrete streams of conscious experience -- 300,000,000-ish for the individual members of the United States plus another one for the group as a whole.

Furthermore, on a view of this sort, the conscious experiences of the group mind might be very different from the conscious experiences of any individual members of the group. If the United States is a conscious entity, for example, it might consciously enforce an embargo. But what it feels like, subjectively from the inside, to enforce an embargo might be completely opaque to any individual person. (Alternatively, consider a possible human-grade group mind that is composed out of smaller insect-grade individual minds, capable of appreciating Shakespeare in a manner far beyond what any insect could do: my Antarean Antheads case).

Here's a depiction:

It is highly counterintuitive (in current mainstream Anglophone culture) to think that the United States, or any existing group of people, actually gives rise to a discrete, higher-level stream of consciousness at the group level -- a distinct locus of subjectivity. On this view, group-level mental states arise from, and are not merely composed of, the mental states (and other interactions) of the members, so that there are four, not three, distinct occurrences of experience A (three among the individuals and a fourth for the group) as well as the possibility of experiences (B, D, E) that occur in none of the individuals. If you find this a weird and radical view, you are probably understanding it correctly.

Pettit presumably doesn't want to defend this flavor of group consciousness. [Note 2]

Group Consciousness: Strawberry

Can we and Pettit find an intermediate flavor -- more interesting than vanilla but not as wild as extra nuts?

Pettit compares the relation that a group mind (or "agent") has to its members to the relation that a statue has to the molecules composing it:

As the statue relates to its molecules, so the group agent relates to its members. The group agent is not the same agent as the set of its members, because the set of members is not, as such, an agent at all. But still, the group agent is a set of members -- a suitably organized or networked set -- and qua set it is the same collection as the set of members who make it up. The group agent is distinct from the members under the one aspect but not distinct from them under the other (p. 23).

This physical analogy captures the intended non-radicalness of Pettit's view. It is a little too simple, however, since not everything in the members' minds belongs to the group mind, and since the group can have mental states that none of the members individually possess. This isn't analogous to how we normally think of the molecular composition of statues.

A favorite example of Pettit's is the following: The group has three members. Member A believes P, Q, and not-R. Member B believes P, not-Q, and R. Member C believes not-P, Q, and R. No one believes P-and-Q-and-R. The group decides collectively, however, that P-and-Q-and-R is a view they can stand behind as a group. They might endorse "We believe P&Q&R" -- though not all of them even need to endorse that, as long as there's a procedure by which it comes to constitute the group's view, for example, by being voiced by the leader after a proper consultative process.
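The structure of this example can be captured in a few lines of code. Here's a minimal sketch (the member names and the proposition-wise majority rule are my illustrative reconstruction, not Pettit's own formalism), showing how the group comes to hold a view that no individual member holds:

```python
# Pettit-style discursive dilemma: three members vote on three propositions.
members = {
    "A": {"P": True,  "Q": True,  "R": False},
    "B": {"P": True,  "Q": False, "R": True},
    "C": {"P": False, "Q": True,  "R": True},
}

def group_belief(members, proposition):
    """The group believes a proposition iff a majority of members do."""
    votes = [beliefs[proposition] for beliefs in members.values()]
    return sum(votes) > len(votes) / 2

# Proposition-wise majority voting endorses each of P, Q, and R...
group_view = {p: group_belief(members, p) for p in ("P", "Q", "R")}
print(group_view)  # {'P': True, 'Q': True, 'R': True}

# ...even though no individual member believes all three.
anyone_believes_all = any(all(beliefs.values()) for beliefs in members.values())
print(anyone_believes_all)  # False
```

The mismatch between the group view and every individual view is exactly what makes the "strawberry" question pressing: in virtue of what is that group-level belief conscious?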

We might depict the situation thus:

The conscious experience of the group is in the red box: The group consciously believes P&Q&R. It's not enough for the members to share a conscious state (e.g., A), and no individual believes P&Q&R, but due to structural features of their relationships, the group believes P&Q&R in virtue of the right members accepting that "we believe P&Q&R". (Let's ignore the trickier case in which the group believes P&Q&R without any individual member accepting that the group believes this.)

Now, is this an interestingly intermediate "strawberry" flavor of group consciousness? Maybe! But here's a question: In virtue of what is "P&Q&R" a conscious belief that the group possesses? If P&Q&R is a conscious belief because individual group members consciously endorse P and Q and R and/or P&Q&R in the right kind of coordinated way, then maybe this is a fairly vanilla view after all: Conscious experience is still the province of individual people. What Pettit adds is only a somewhat more complex way of picking out which individual conscious experiences count as the group's shared conscious experience. Group consciousness is just individual consciousness, plus a criterion for attributing some of those states to the group as a whole.

On the other hand, if the social relationships among the group members yield more than that, if the group's conscious experience arises from the interconnections among members so that conscious experiences at the group level aren't just individuals' conscious experiences plus a criterion -- well then maybe we're starting to get into Chunky Monkey territory after all.

Suppose there's something it's like to consciously think, "Ah, P&Q&R, that's right!" On the Chunky Monkey view, this experience could really transpire in the group entity, even if it occurs in no individual member's head. On the strawberry-that's-basically-vanilla view, that's impossible, and to say that the group consciously endorses P&Q&R is only to say something about structural relationships among what individual group members do consciously endorse.


Note 1: Pettit prefers "coawareness", which he appears to equate with "access consciousness" in Ned Block's sense. He says that access consciousness implies there being "something it's like" and maybe vice versa (at least for the case of belief). Despite this, he says he is "setting aside" the issue of phenomenal consciousness -- perhaps thinking of "phenomenal consciousness" as a phrase that is more theoretically commissive than I hear it as being (see p. 12-14, 33).

Note 2: In footnote 6, for example, Pettit favorably cites his sometimes-coauthor Christian List's 2018 criticism of my article on USA consciousness.

Friday, April 26, 2019

Animal Rights for Animal-Like AIs?

by John Basl and Eric Schwitzgebel

Universities across the world are conducting major research on artificial intelligence (AI), as are organisations such as the Allen Institute, and tech companies including Google and Facebook. A likely result is that we will soon have AI approximately as cognitively sophisticated as mice or dogs. Now is the time to start thinking about whether, and under what conditions, these AIs might deserve the ethical protections we typically give to animals.

Discussions of ‘AI rights’ or ‘robot rights’ have so far been dominated by questions of what ethical obligations we would have to an AI of humanlike or superior intelligence – such as the android Data from Star Trek or Dolores from Westworld. But to think this way is to start in the wrong place, and it could have grave moral consequences. Before we create an AI with humanlike sophistication deserving humanlike ethical consideration, we will very likely create an AI with less-than-human sophistication, deserving some less-than-human ethical consideration.

We are already very cautious in how we do research that uses certain nonhuman animals. Animal care and use committees evaluate research proposals to ensure that vertebrate animals are not needlessly killed or made to suffer unduly. If human stem cells or, especially, human brain cells are involved, the standards of oversight are even more rigorous. Biomedical research is carefully scrutinised, but AI research, which might entail some of the same ethical risks, is not currently scrutinised at all. Perhaps it should be.

You might think that AIs don’t deserve that sort of ethical protection unless they are conscious – that is, unless they have a genuine stream of experience, with real joy and suffering. We agree. But now we face a tricky philosophical question: how will we know when we have created something capable of joy and suffering? If the AI is like Data or Dolores, it can complain and defend itself, initiating a discussion of its rights. But if the AI is inarticulate, like a mouse or a dog, or if it is for some other reason unable to communicate its inner life to us, it might have no way to report that it is suffering.

A puzzle and difficulty arises here because the scientific study of consciousness has not reached a consensus about what consciousness is, and how we can tell whether or not it is present. On some views – ‘liberal’ views – consciousness requires nothing but a certain type of well-organised information-processing, such as a flexible informational model of the system in relation to objects in its environment, with guided attentional capacities and long-term action-planning. We might be on the verge of creating such systems already. On other views – ‘conservative’ views – consciousness might require very specific biological features, such as a brain very much like a mammal brain in its low-level structural details: in which case we are nowhere near creating artificial consciousness.

It is unclear which type of view is correct or whether some other explanation will in the end prevail. However, if a liberal view is correct, we might soon be creating many subhuman AIs who will deserve ethical protection. There lies the moral risk.

Discussions of ‘AI risk’ normally focus on the risks that new AI technologies might pose to us humans, such as taking over the world and destroying us, or at least gumming up our banking system. Much less discussed is the ethical risk we pose to the AIs, through our possible mistreatment of them.

This might sound like the stuff of science fiction, but insofar as researchers in the AI community aim to develop conscious AI or robust AI systems that might very well end up being conscious, we ought to take the matter seriously. Research of that sort demands ethical scrutiny similar to the scrutiny we already give to animal research and research on samples of human neural tissue.

In the case of research on animals and even on human subjects, appropriate protections were established only after serious ethical transgressions came to light (for example, in needless vivisections, the Nazi medical war crimes, and the Tuskegee syphilis study). With AI, we have a chance to do better. We propose the founding of oversight committees that evaluate cutting-edge AI research with these questions in mind. Such committees, much like animal care committees and stem-cell oversight committees, should be composed of a mix of scientists and non-scientists – AI designers, consciousness scientists, ethicists and interested community members. These committees will be tasked with identifying and evaluating the ethical risks of new forms of AI design, armed with a sophisticated understanding of the scientific and ethical issues, weighing the risks against the benefits of the research.

It is likely that such committees will judge all current AI research permissible. On most mainstream theories of consciousness, we are not yet creating AI with conscious experiences meriting ethical consideration. But we might – possibly soon – cross that crucial ethical line. We should be prepared for this.

[originally posted on Aeon Ideas]

Wednesday, April 24, 2019

Contest Idea: Can You Write a Philosophical Argument That Convinces Research Participants to Give Some of Their Bonus Money to Charity?

In a series of studies supported by The Life You Can Save, Chris McVey and I have been showing research participants (mTurk workers) philosophical arguments for charitable giving. Other participants read narratives about children who were helped by charitable donations or (as a control condition) they read a middle-school physics textbook discussion of energy.

We then ask participants their attitudes about charitable giving and follow up with this question:

Upon completion of this study, 10% of participants will receive an additional $10. You have the option to donate some portion of this $10 to your choice among six well-known charities that have been shown to effectively fight suffering due to extreme poverty. If you are one of the recipients of the additional $10, the portion you decide to keep will appear as a bonus credited to your Mechanical Turk worker account, and the portion you decide to donate will be given to the charity you pick from the list below.

Note: You must pass the comprehension questions and show no signs of suspicious responding to receive the $10.  Receipt of the $10 is NOT conditional, however, on your attitudes toward charity, expressed on the previous page, nor on how much you choose to donate if you receive the $10.

If you are one of the recipients of the additional $10, how much of your additional $10 would you like to donate?

[response options are in dollar intervals from $0 to $10, followed by a list of six charities to choose among]

Our November 21 blog post "Narrative but Not Philosophical Argument Motivates Giving to Charity" describes some of our results. Short version: When presented with the narratives, participants choose to donate on average about $4.50 of their possible bonus. When presented with the physics text or the argument, they donate about a dollar less. We've tried varying the argument, to see if we can find a variation that statistically beats the control (with 100-200 participants per condition), but so far no luck.
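For readers curious what "statistically beats the control" involves: the natural test is a comparison of mean donations across conditions, for instance with Welch's two-sample t-test. Here's a minimal sketch using made-up donation amounts -- the numbers below are purely illustrative and are not our actual data:

```python
import statistics

# Hypothetical donation amounts in dollars -- NOT the actual study data.
argument_condition = [0, 2, 5, 3, 0, 10, 4, 1, 3, 6]
control_condition  = [0, 1, 4, 2, 0, 8, 3, 1, 2, 5]

def welch_t(a, b):
    """Welch's t-statistic for two samples with possibly unequal variances."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / (var_a / len(a) + var_b / len(b)) ** 0.5

print(statistics.mean(argument_condition))  # 3.4
print(welch_t(argument_condition, control_condition))  # positive if argument > control
```

With real data one would also want a p-value (e.g., from scipy.stats.ttest_ind with equal_var=False) and, given 100-200 participants per condition, a power analysis to see how large an effect the design could reliably detect.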

This is where you come in. Maybe Chris and I are bad at writing convincing arguments! (Well, one argument we adapted from Matthew Lindauer and collaborators, in consultation with Peter Singer.) The philosophical community might be able to help us create a more effective argument.

So -- is this too goofy? -- I'm thinking that a contest might be fun. Write a philosophical argument (300-400 words) that actually leads mTurk participants to donate more of their bonus to charity than they do in the control condition. The prize might be $500 outright plus $500 to the winner's choice of an effective charity. If no one can create an argument that can beat the control condition, no winner; otherwise the winner is the author of the argument that generates the highest mean donation.

There would need to be some constraints: no use of narrative (personal or historical), no discussion of individual people who might be helped, no pictures, no highly emotionally charged content or vivid sensory detail. The argument shouldn't be obviously fallacious, foolish, or absurd. It ought to be something that a thoughtful philosopher could get behind as a reasonable argument. Statistics, empirical details, evidence of overall effectiveness, etc., are fine.

I'm open to suggestions about how best to administer such a contest, if I can find funding for it -- including thoughts about rules, parameters, the best statistical approach, what the prize should be, what to do if we receive too many submissions to run them all, etc. (I'm also open for funders to volunteer.)

Also, I'm definitely open to ideas about what features of an argument might make it effective or ineffective among ordinary readers, if you have thoughts about that but don't feel game to write up an argument.

[image source]

Thursday, April 18, 2019

Ethics in Publishing Philosophy

Tomorrow (Friday) afternoon from 1-4, I'll be a panelist in a session on "Publishing Ethics in Philosophy" at the Pacific Division meeting of the American Philosophical Association in Vancouver. Come by if you're in town!

I'll have ten minutes to say a few things, before the session moves on to other panelists and then (hopefully) lots of discussion. I figure ten minutes is time enough to express three ideas. So... what three points should I make? What issues deserve special emphasis in a forum of this sort? Here are my thoughts:

Journal and Monograph Response Times

If the following three conditions all hold at a journal or academic press, there's cause for concern that that publisher's policies are impeding authors' timely publication of their work and progress in their careers:

(1.) The journal or press does not accept simultaneous submissions (that is, there's an expectation that while the author's work is being considered there it is not also being considered elsewhere).

(2.) The journal accepts 20% or fewer of submissions.

(3.) The median response time for a decision is six months or more.

As we all know, publishable-quality material stands a substantial chance of being rejected for a variety of reasons, including fit with the journal's vision or the vision of the monograph series, the very high selectivity of some venues that leads them to reject much material that they believe is of publishable quality, and chance in the refereeing process. For these reasons, it often takes five or more rejections before publishable-quality work finds a home. If venues are taking six months or more to respond, that can mean three or more years between first submission and final acceptance. That's too long for authors to wait -- especially graduate student authors and untenured faculty.
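The arithmetic behind that estimate can be made explicit. If we model each submission as an independent trial (a simplification, of course -- real refereeing outcomes are correlated with manuscript quality), the expected number of submissions before acceptance is one divided by the acceptance rate:

```python
# Expected years from first submission to final acceptance, treating each
# submission as an independent trial (a simplifying assumption).
def expected_years(acceptance_rate, months_per_decision):
    expected_submissions = 1 / acceptance_rate  # geometric distribution
    return expected_submissions * months_per_decision / 12

print(expected_years(0.20, 6))    # 2.5 -- years at six-month turnarounds
print(expected_years(0.20, 2.3))  # just under a year at ten-week turnarounds
```

At a 20% acceptance rate and six-month decisions, the expected wait is two and a half years, with many authors waiting longer; cutting decisions to ten weeks brings the expectation under a year.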

Ideally, response times could be ten weeks or less. I don't think that's unattainable with good organization. But if a press or journal can't attain that, they ought to consider either allowing simultaneous submissions or increasing their acceptance rates.

Journal Pricing

It's not news to people in academia that some journals charge libraries very hefty subscription fees. The University of California system (UC Berkeley, UCLA, and eight other campuses including my own campus, UC Riverside, plus medical centers and national laboratories) recently cancelled its subscription to Elsevier journals, which was costing the system eleven million dollars a year, about 25% of the university's total journal budget. There's a huge difference in journal pricing, with some high quality journals charging a few hundred dollars a year while other journals, not appreciably better in any way, charge ten times as much for similar services -- with Elsevier and Springer maybe being the worst offenders.

I looked up the institutional subscription price in US dollars for print and online access to the top twenty "best 'general' journals of philosophy" in a recent poll by Brian Leiter:

  • 1. Philosophical Review (Duke University Press), $264/year (4 issues, 561 pages).
  • 2. Mind (Oxford Academic), $430 (4 issues, 1270 pages).
  • 3. Nous (Wiley), $1532 (4 issues, 981 pages).
  • 4. Journal of Philosophy, $250 (12 issues, 684 pages).
  • 5. Philosophy and Phenomenological Research (Wiley), $385 (6 issues, 1594 pages).
  • 6. Australasian Journal of Philosophy, (Taylor & Francis) $509 (4 issues, 838 pages) [Updated: thanks, Neil!].
  • 7. Philosophers' Imprint (hosted by University of Michigan), free open access ($20 recommended fee to submit an article for review; 25 individual articles).
  • 8. Philosophical Studies (Springer), $3171 (17 issues, 4627 pages).
  • 9. Philosophical Quarterly (Oxford), $799 (4 issues, 874 pages).
  • 10. Analysis (Oxford). $288 (4 issues, 784 pages).
  • 11. Synthese (Springer), $4830 (12 issues, 5594 pages).
  • 12. Canadian Journal of Philosophy (Taylor & Francis), $446 (6 issues, 899 pages).
  • 13. Erkenntnis (Springer), $1802 (6 issues, 1320 pages).
  • 14. American Philosophical Quarterly (University of Illinois), $397 (4 issues, approx 420 pages).
  • 15. Pacific Philosophical Quarterly (Wiley), $764 (4 issues, 909 pages).
  • 16. Proceedings of the Aristotelian Society (Oxford), $343 (3 issues, 428 pages).
  • 17. Ergo (hosted by University of Toronto): free open access (41 individual articles).
  • 18. European Journal of Philosophy (Wiley): $1446 (4 issues, 1457 pages).
  • 19. Journal of the American Philosophical Association (Cambridge University Press): Only available to institutions as part of a large subscription package.
  • 20. Thought (Wiley), $400-$741 (online only; 4 issues, 295 pages).
I am not aware of any good reason that Synthese should be almost $5000 a year, while other journals of similar quality are a few hundred dollars. The best explanation, I suspect, is that Springer, as a for-profit company, is taking advantage of inelastic institutional demand for the journal by institutions that want to ensure that they have access to the best-known philosophy journals. It is, I think, contrary to the general interests of academics and the public for Springer and other such companies to charge so much, so some collective resistance might be desirable.
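One way to sharpen the comparison is to normalize by page count. A quick sketch using the figures from the list above (per-page costs are approximate, since page counts vary year to year):

```python
# Annual institutional price (USD) and yearly page count, from the list above.
journals = {
    "Philosophical Review (Duke)": (264, 561),
    "Journal of Philosophy": (250, 684),
    "Nous (Wiley)": (1532, 981),
    "Erkenntnis (Springer)": (1802, 1320),
    "Synthese (Springer)": (4830, 5594),
}

# Sort from cheapest to most expensive per page.
for name, (price, pages) in sorted(
    journals.items(), key=lambda item: item[1][0] / item[1][1]
):
    print(f"{name}: ${price / pages:.2f} per page")
```

Even on a per-page basis, the nonprofit university-press journals come in well under the Wiley and Springer titles, at roughly $0.40-$0.50 per page versus $0.85-$1.60.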

    I recommend that editors, referees, and authors consider journal pricing as one factor in their decisions about serving in editorial roles, refereeing roles, and in choosing where to submit, giving default preference to open-access journals and reasonably priced journals over expensive journals when other factors are approximately equal.

    Responsible Citation Practice

    Increasingly, citation is the currency of academic prestige. People decide what to read based, partly, on what is being cited by others. High citation rates can figure prominently in hiring and tenure decisions. Highly cited authors are generally considered to be experts in their subfields.

    Thus, I think it is important that authors thoroughly review the recent literature on their topic to ensure that they are citing a good selection of recent sources, especially sources by junior authors and lesser-known authors. It is easy -- especially if you are a well-known author, and especially in invited contributions -- to cite the famous people in your subfield and the people whose work you happen to know through existing academic connections. This is not entirely academically responsible, and it can have the effect of illegitimately excluding from the conversation good work by people who are not as academically well connected.

    Citation practice is primarily the responsibility of authors -- but referees and editors might also want to consider this issue in evaluating submitted work.

    Comments/suggestions/reactions welcome -- especially before 1:00 pm tomorrow!

    [image source]

    Tuesday, April 09, 2019

    Tell Us How to Fix the Lack of Diversity in Philosophy Departments

    by Sherri Conklin, Eric Schwitzgebel, and Nicole Hassoun.

    [cross-posted at the Blog of the APA]

    Philosophy needs to diversify. Come join us at the Pacific Division meeting to tell us what departments can do to improve. Join the Demographics in Philosophy Project to help bend the long arc of history towards justice.

    First, some data

    A growing body of research shows that while the proportions of women philosophy faculty are increasing over time, women still account for only 25% of all philosophy faculty in the U.S. (Conklin, Artamonova, and Hassoun 2019). Black philosophers account for only about 1-4% of all philosophy faculty (Botts et al. 2014), and disabled philosophers are underrepresented as well (Tremain 2014).

    These groups’ disproportionately low authorship rates in philosophy journals may partially explain the faculty findings – especially if failure to publish leads to a failure to gain employment, tenure, and promotion (Wilhelm, Conklin, and Hassoun 2017). For example, only 13% of publications in top philosophy journals are by women (Schwitzgebel and Jennings 2017), and fewer than 1% of authors in top journals are Black (Bright 2016).

    Another possible explanation concerns the “pipeline” into philosophy. For example, women and Black philosophers receive only about 32% and 5% of undergraduate philosophy degrees in the U.S. (Schwitzgebel 2017a, 2017b) and about 29% and 2% of PhDs (Schwitzgebel 2016). (Systematic data on other groups that are likely to be underrepresented are more difficult to obtain). Possibly, something about how philosophy is taught or how it is perceived in U.S. culture substantially influences the demographics of the major (Garfield and Van Norden 2016; Thompson et al. 2016).

    This is a problem from an epistemic point of view: Philosophy as a discipline profits from hearing voices from a variety of different backgrounds. Furthermore, to the extent that unfair exclusionary practices, whether implicit or explicit, may be limiting people’s career choices, it is a problem of social justice.

    Disciplinary initiatives to combat the disparities

    Much has been done to combat the observed disparities. The British Philosophical Association, in collaboration with the Society for Women in Philosophy-UK, launched a Best Practices Scheme for improving departmental climates for women. The APA introduced a new initiative to diversify course syllabi through the Diversity and Inclusiveness Syllabus Collection. A number of philosophy diversity institutes were launched to help attract marginalized undergraduates to apply to graduate school. These programs include PikSi, UCSD SPWP, and COMPASS (among others – see the APA resource page on Undergraduate Diversity Institutes in Philosophy). Graduate students founded Minorities in Philosophy to promote student-initiated change.

    In addition, the Demographics in Philosophy Project collates and collects data to document the problem of marginalization in professional philosophy and to identify tools for counteracting it. In 2018, we initiated a broadly consultative project to identify inclusive practices for philosophy journals, beginning with a session on inclusive practices at the Pacific Division meeting of the APA and a series of blog posts from editors of leading journals (Hassoun, Schwitzgebel, and Smith 2018; Kukla 2018; Bilimoria 2018; Hetherington 2018; Hansson 2018; Moore and O’Brien 2018) and culminating in a list of potential best practices, posted here on the Blog of the APA (Conklin, Hassoun, Schwitzgebel 2018).

    But what can departments do to combat the disparities directly?

    Tell us how to fix the problem:

    We have some preliminary ideas about how to improve the situation, but we want to hear from you. We would like to identify concrete suggestions for specific practices that can be implemented by departments to improve diversity without compromising their other goals. We are especially interested in hearing about successful practices.

    Give us your suggestions. Raise objections and concerns. Email us. And, if you’re in the area at the time, please come to our session on this topic at the Pacific APA meeting in Vancouver on April 18 (1-4pm). The session will start with a brief presentation on diversity in philosophy departments, but it will mostly consist of open discussion with a panel of representatives from sixteen well-regarded philosophy departments, who will bring their experience to the question as well as, we suspect, in some cases, their strenuous disagreement.

    After the session, we hope to partner with departments to collect more data on what works to improve diversity and to develop a toolbox of helpful practices.

    Suggestions, objections, and contributions welcome. More data on women in philosophy are available here.

    Follow us on Twitter @PhilosophyData and Facebook

    Session details:

    Diversity in Philosophy Departments
    Pacific APA, Vancouver
    April 18, 2019, 1:00–4:00 p.m.
    APA Committee Session:
    Arranged by the APA Committee on the Status of Women

    Eric Schwitzgebel (University of California, Riverside)

    Nicole Hassoun (Binghamton University)
    Sherri Conklin (University of California, Santa Barbara)
    Purushottama Bilimoria (University of California, Berkeley and University of Melbourne)
    Teresa Blankmeyer Burke (Gallaudet University)
    Leslie Pickering Francis (University of Utah)
    Subrena Smith (University of New Hampshire)

    David Chalmers (New York University)
    Andrew Chignell (Princeton University)
    Helen De Cruz (Oxford Brookes University)
    Steve Downes (University of Utah)
    Carrie Figdor (University of Iowa)
    John Lysaker (Emory University)
    Anna-Sara Malmgren (Stanford University)
    Wolfgang R. Mann (Columbia University)
    Ned Markosian (University of Massachusetts Amherst)
    Gregory R. Peterson (South Dakota State University)
    Geoff Sayre-McCord (University of North Carolina at Chapel Hill)
    Miriam Solomon (Temple University)
    Yannik Thiem (Villanova University)
    Daniela Vallega-Neu (University of Oregon)
    Eric Watkins (University of California, San Diego)
    Andrea Woody (University of Washington)

    Thanks to Kathryn Norlock and Michael Rea for help with this project.

    [image source]

    Monday, April 08, 2019

    Forthcoming: A Theory of Jerks and Other Philosophical Misadventures

    My forthcoming book has a page now at MIT Press:

    A Theory of Jerks and Other Philosophical Misadventures

    A collection of quirky, entertaining, and reader-friendly short pieces on philosophical topics that range from a theory of jerks to the ethics of ethicists.

    Have you ever wondered about why some people are jerks? Asked whether your driverless car should kill you so that others may live? Found a robot adorable? Considered the ethics of professional ethicists? Reflected on the philosophy of hair? In this engaging, entertaining, and enlightening book, Eric Schwitzgebel turns a philosopher's eye on these and other burning questions. In a series of quirky and accessible short pieces that cover a mind-boggling variety of philosophical topics, Schwitzgebel offers incisive takes on matters both small (the consciousness of garden snails) and large (time, space, and causation).

    A common theme might be the ragged edge of the human intellect, where moral or philosophical reflection begins to turn against itself, lost among doubts and improbable conclusions. The history of philosophy is humbling when we see how badly wrong previous thinkers have been, despite their intellectual skills and confidence. (See, for example, “Kant on Killing Bastards, Masturbation, Organ Donation, Homosexuality, Tyrants, Wives, and Servants.”) Some of the texts resist thematic categorization—thoughts on the philosophical implications of dreidels, the diminishing offensiveness of the most profane profanity, and fatherly optimism—but are no less interesting.

    Schwitzgebel has selected these pieces from the more than one thousand that have appeared since 2006 in various publications and on his popular blog, The Splintered Mind, revising and updating them for this book. Philosophy has never been this much fun.

    Tuesday, April 02, 2019

    Gaze of Robot, Gaze of Bird

    I have a new SF story in Clarkesworld, "Gaze of Robot, Gaze of Bird" -- my first new fiction publication since 2017. I wanted to tell a tale in which none of the protagonists are conscious but we care about them anyway -- an interplanetary probe (with some chat algorithms and cute subroutines) and its stuffed monkey doll.

    Another theme is what counts as the extinction or continuation of a sapient species.


    Gaze of Robot, Gaze of Bird

    by Eric Schwitzgebel

    First, an eye. The camera rose, swiveling on its joint, compiling initial scans of the planetary surface. Second, six wheels on struts, pop-pop, pop-pop, pop-pop, and a platform unfolding between the main body and the eye. Third, an atmospheric taster and wind gauge. Fourth, a robotic arm. The arm emerged holding a fluffy, resilient, nanocarbon monkey doll, which it carefully set on the platform.

    The monkey doll had no actuators, no servos, no sensors, no cognitive processors. Monkey was, however, quite huggable. Monkey lay on his back on the warm platform, his black bead eyes pointed up toward the stars. He had traveled wadded near J11-L’s core for ninety-five thousand years. His arms, legs, and tail lay open and relaxed for the first time since his hurried manufacture.

    J11-L sprouted more eyes, more arms, more gauges—also stabilizers, ears, a scoop, solar panels, soil sensors, magnetic whirligigs. Always, J11-L observed Monkey more closely than anything else, leaning its eyes and gauges in.

    J11-L arranged Monkey’s limbs on the platform, gently flexing and massaging the doll. J11-L scooped up a smooth stone from near its left front wheel, brushed it clean, then wedged it under Monkey’s head to serve as a pillow. J11-L stroked and smoothed Monkey’s fur, which was rumpled from the long journey.

    “I love you, Monkey,” emitted J11-L, in a sound resembling language. “Will you stay with me while I build a Home?”

    Monkey did not reply.

    [story continues here]

    Thursday, March 28, 2019

    Journey 2 Psychology

    A couple of weeks ago, one of my former students, Michael S. Gordon, now a professor of psychology at William Paterson University, stopped by my office unexpectedly. He told me he had sold his house in New Jersey so that he could spend a year traveling around the world, with his family, interviewing famous psychologists about their lives. He wants to compile an oral history of psychology. He was at the UCR campus to interview Robert Rosenthal.

    Wait, he's spending a full year, along with his wife and son, traveling around interviewing famous psychologists? And he sold his house to do it? Whoa. That's commitment. How awesome!

    He is posting excerpts of his interviews on his blog, Journey2Psychology. For example: Albert Bandura, Ed Diener, Alison Gopnik, Elizabeth Spelke, Dan Schacter, Dan Gilbert, etc., etc.!

    Okay, a psychology nerd could get excited. What an amazing idea!

    Part of me wishes he could have done it in the 1980s, when B. F. Skinner, Timothy Leary, Stanley Milgram, and Erik Erikson were still alive. Or, hey, maybe if we could go back to the 1950s, or the 1920s, or....

    Against the Mind-Package View of Minds

    (adapted from comments I will be giving on Carrie Figdor's Pieces of Mind, at the Pacific Division meeting of the American Philosophical Association on Friday, April 19, 9-12)

    We human beings have a favorite size: our own size. We have a favorite pace of action: our own pace. We have a favorite type of organization: the animal, specifically the mammal, and more specifically us. What’s much larger, much smaller, or much different, we devalue and simplify in our imaginations.

    It’s true that we’re great. Wow, us! But we tend to forget that other things can also be pretty amazing.

    So here’s a naive picture of the world. Some things have minds, and other things don’t have minds. The things with minds might have highly sophisticated minds, capable of appreciating Shakespeare and proving geometric theorems, or they might have simpler minds. But if an entity has a mind at all, then it has sensory and emotional experiences, preferences and dislikes, plans of at least a simple sort, some kind of understanding of its environment, an ability to select among options, and a sense of its location and the boundaries of its body. Let’s call this the Mind Package.

    Everything that exists, you might think, is either a thing that has the whole Mind Package or a thing that has no part of the Mind Package. Stones have no part of the Mind Package. They don’t feel anything. They have no preferences. They make no decisions. They have no comprehension of the world around them. There’s nothing it’s like to be a stone. Dogs, we ordinarily assume, do have the Mind Package. My own dog Cocoa enjoys going on walks, prefers the bucket chair to the recliner, gets excited when she hears my wife coming in the front door, and dislikes our cat.

    [A recent picture of some of my favorite biological entities. Can you guess which ones have the Mind Package?]

    Now it could be the case that everything in the world either has the Mind Package or doesn’t have it, and if something has one piece of the Mind Package, it has all the pieces. Intuitively, this is an attractive idea. What would it be to kind of have a mind? Could a creature have full-blown desires and preferences but no beliefs at all? Could a creature be somewhere between having experiences and not having any experiences? This seems hard to imagine. It’s much easier to think that either the light is on inside or the light is off. Either you’ve got a stone or you’ve got a dog.

    But there are a couple of reasons to suspect that the lights-on/lights-off Mind Package view is too simple.

    The first reason to be suspicious is that the world is full of slippery slopes. In fetal development and infant development, biological and cognitive complexity emerges gradually. But if you’ve either got the whole package or you don’t, then there must be some moment at which the lights suddenly turned on and you went, in a flash, from being an entity without experiences, preferences, feelings, comprehension, and choice, to being an entity with all of those things. In the gradual flow of maturation, when could this be? Likewise, if we assume, at least for a moment, that jellyfish don’t have the Mind Package but dogs do, similar trouble looms: Across species there’s a gradual continuum of capacities, not, it seems, a sudden break between lights-on and lights-off animals. (Garden snails are an especially fascinating problem case.)

    This leads to a second reason to be suspicious of the Mind Package view. As Carrie Figdor emphasizes, bacteria are much more informationally complicated than we tend to think. Plants are much more informationally complicated than we tend to think. Group interactions are much more informationally complicated than we tend to think. The relations of parasite, host, and symbiont are much more informationally complicated than we tend to think. The difference is smaller than we usually imagine between things of our favorite size and pace and other things. The biological world is richly structured with what looks like sophisticated informational processing in response to environmental stimuli. When scientists need to model what’s going on in plants and bacteria and neurons and social groups, they seem to need terms and concepts and models from psychology: signaling, communication, cooperation, decision, memory, detection, learning. Structures other than those of our favorite size and pace seem to show the kinds of informational interactions and responsiveness to environment that we capture with psychological words like these.

    Furthermore, there’s no general reason to think that systems usefully described by some of these psychological terms need always also be usefully described by others of these terms. If a scientific community starts to attribute memories or preferences to the entities they research, it doesn’t follow that they will find it fruitful also to ascribe sensory experiences, feelings, or a sense of the difference between body and world. Different aspects of mentality may be separable rather than bundled. They don’t need to stand or fall as a Package. To paraphrase the title of Carrie’s book, the Mind comes in Pieces.

    Philosophers of mind love to paint their opponents as clinging to the remnants of Cartesianism. Should I alone resist? The Mind Package view is a remnant of Cartesianism: There’s the Minded stuff, which has this nice suite of cognitive and conscious properties, all as a bundle, and then there’s the non-Minded stuff which is passive and simple. We ought to demolish this Cartesian idea. There is no bright line between the fully and properly Minded and the rest of the world, and there is no need for cognitive properties to all travel on the family plan.

    The Mind Package view has a powerful grip on our intuitions. We want to confine “the mental” to privileged spaces – our own heads and the heads of our favorite animals. But if the informational structures of the world are sufficiently complex, this intuitive approach must be jettisoned. Mental processes run wide through the world, different ones in different spaces, defying our intuitive groupings. This radically anti-Cartesian view is the profound and transformative lesson of Carrie’s book, and it takes some getting used to.

    If Carrie’s radically anti-Cartesian view of the world is scientifically correct, there are, then, pragmatic grounds to prefer a broad view of the metaphysics of preferences and decisions, according to which many different kinds of entities have preferences and make decisions. It is the view that better respects the evidence that we are continuous with plants, worms, and bacteria, and that the types of patterns of mindedness we see in ourselves resemble what’s happening in them, even if such entities don’t have the whole Mind Package.

    Goodbye, Mind-Package rump Cartesianism!



    Do Neurons Literally Have Preferences (Nov 4, 2015)

    Are Garden Snails Conscious? Yes, No, or *Gong* (Sep 20, 2018)

    Tuesday, March 26, 2019

    New Podcast Interview: How Little Thou Can Know Thyself

    New interview of me at the MOWE blog.

    Topics of discussion:

  • Our poor knowledge of our own visual experience,
  • Our poor knowledge of our own visual imagery,
  • Our poor knowledge of our own emotional experience,
  • The conscious self riding on the unconscious elephant,
  • Can we improve at introspection?
  • Our poor knowledge of when and why we feel happy,
  • The amazing phenomenon of instant attachment to adoptive children.

    Friday, March 22, 2019

    Most U.S. and German Ethicists Condemn Meat-Eating (or German Philosophers Think Meat Is the Wurst)

    It's an honor and a pleasure to have one's work replicated, especially when it's done as carefully as Philipp Schoenegger and Johannes Wagner have done.

    In 2009, Joshua Rust and I surveyed the attitudes and behavior of ethicist philosophers in five U.S. states, comparing those attitudes and behavior to non-ethicist philosophers' and to a comparison group of other professors at the same universities. Across nine different moral issues, we found that ethicists reported behaving, overall, no differently morally than the other two groups, though on some issues, especially vegetarianism and charitable giving, they endorsed more stringent attitudes. (In some cases, we also had observational behavioral data that didn't depend on self-report. Here too we found no overall difference.) Schoenegger and Wagner translated our questionnaire into German and added a few new questions, then distributed it by email to professors in German-speaking countries, achieving an overall response rate of 29.5% [corrected Mar 23]. (Josh and I had a response rate of 58%.) With a couple of exceptions, Schoenegger and Wagner report similar results.

    The most interesting difference between Schoenegger and Wagner's results and Josh's and my results concerns vegetarianism.

    The Questions:

    We originally asked three questions about vegetarianism. In the first part of the questionnaire, we asked respondents to rate "regularly eating the meat of mammals, such as beef or pork" on a nine-point scale from "very morally bad" to "very morally good", with "morally neutral" in the middle.

    In the second part of the questionnaire, we asked:

    17. During about how many meals or snacks per week do you eat the meat of mammals such as beef or pork?

       enter number of times per week ____

    18. Think back on your last evening meal, not including snacks. Did you eat the meat of a mammal during that meal?

       □ yes

       □ no

       □ don’t recall

    U.S. Results in 2009

    On the attitude question, 60% of ethicist respondents rated meat-eating somewhere on the "bad" side of the nine-point scale, compared to 45% of non-ethicist philosophers and only 19% of professors from other departments (ANOVA, F = 17.0, p < 0.001). We also found substantial differences by both gender and age, with women and younger respondents more likely to condemn meat-eating. For example, 81% of female philosophy respondents born 1960 or later rated eating the meat of mammals as morally bad, compared to 7% of male non-philosophers born before 1960. That's a huge difference in attitude!

    Eight percent of respondents rated it at 1 or 2 on the nine-point scale -- either "very bad" or adjacent to very bad -- including 11% of ethicists (46/564 overall, 22/193 of ethicists).

    On self-report of behavior, Josh and I found much less difference. On our "previous evening meal" question, we detected at best a statistically marginal difference among the three main analysis groups: 37% of ethicists reported having eaten meat at the previous evening meal, compared to 33% of non-ethicist philosophers and 45% of non-philosophers (chi-squared = 5.7, p = 0.06, excluding two respondents who answered "don’t recall").

    The "meals per week" question was actually designed in part as a test of "socially desirable responding" or a tendency to fudge answers: We thought it would be difficult to accurately estimate the number, thus it would be tempting for respondents to fudge a bit. And mathematically, they did seem to be guilty of fudging: For example, 21% of respondents who reported eating meat at one meal per week also reported eating meat at the previous evening meal. Even if we assume that meat is only consumed at evening meals, the number should be closer to 14% (1/7). If we assume, more plausibly, that approximately half of all meat meals are evening meals, then the number should be closer to 7%. With that caveat in mind, on the meals-per-week question we found a mean of 4.1 for ethicists, compared to 4.6 for non-ethicist philosophers and 5.3 for non-philosophers (ANOVA [square-root transformed], F = 5.2, p = 0.006).
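The fudging arithmetic above is easy to check. Here's a minimal sketch (the helper function and the uniform-spread assumption are mine, not from the original studies): if someone truthfully eats meat at k meals per week, and a fraction f of those meat meals are evening meals, the chance that last night's dinner included meat is roughly (k × f)/7.

```python
# Back-of-envelope check of the "fudging" arithmetic.
# Hypothetical helper; assumes meat meals are spread evenly across the week.

def expected_evening_meat_rate(meat_meals_per_week, evening_fraction):
    """Probability that the previous evening meal included meat, given a
    weekly meat-meal count and the fraction of those eaten in the evening."""
    evening_meat_meals = meat_meals_per_week * evening_fraction
    return min(evening_meat_meals / 7, 1.0)

# If ALL meat meals are evening meals, one meat meal per week gives:
print(expected_evening_meat_rate(1, 1.0))  # 1/7, about 14%
# If only about HALF of meat meals are evening meals:
print(expected_evening_meat_rate(1, 0.5))  # 1/14, about 7%
```

Both figures fall well short of the 21% observed, which is what motivates the suspicion of socially desirable responding.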

    We concluded that although a majority of U.S. ethicists, especially younger ethicists and women ethicists, thought eating meat was morally bad, they ate meat at approximately the same rate as did the non-ethicists.

    German Results in 2018:

    Schoenegger and Wagner find, similarly, a majority of German ethicist respondents rating meat-eating as bad: 67%. Evidently, a majority of U.S. and German ethicists think that eating meat is morally bad.

    However, among the non-ethicist professors, Schoenegger and Wagner find higher rates of condemnation of meat-eating than Josh and I found: 63% among German-speaking non-ethicist philosophers in 2018 compared to our 45% in the U.S. in 2009 (80/127 vs. 92/204, z = 3.2, p = .001), and even more strikingly 40% among German-speaking professors from departments other than philosophy in 2018 compared to only 19% in the U.S. in 2009 (52/131 vs. 31/167, z = 4.0, p < .001; [note 1]).
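The cross-sample comparisons here are standard pooled two-proportion z-tests. A minimal sketch of that calculation (the counts are from the post; the helper function is my own, not from the published analyses):

```python
import math

def two_proportion_z(count1, n1, count2, n2):
    """Pooled two-proportion z statistic for comparing count1/n1 vs count2/n2."""
    p1, p2 = count1 / n1, count2 / n2
    p_pool = (count1 + count2) / (n1 + n2)  # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Non-ethicist philosophers: German 2018 (80/127) vs. U.S. 2009 (92/204)
print(round(two_proportion_z(80, 127, 92, 204), 1))  # 3.2
# Other departments: German 2018 (52/131) vs. U.S. 2009 (31/167)
print(round(two_proportion_z(52, 131, 31, 167), 1))  # 4.0
```

Running this on the reported counts reproduces the z values given above.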

    German professors were also much more likely than U.S. professors in 2009 to think that eating meat is very bad, with 18% rating it 1 or 2 on the scale, including 23% of ethicists (57/408 and 35/150, excluding non-respondents; two-proportion test U.S. vs German: overall z = 2.8, p = .005, ethicists z = 2.9, p = .004).

    Apparently, German-speaking professors are not as fond of their wurst as cultural stereotypes might suggest!

    A number of explanations are possible: One is that in general German academics are more pro-vegetarian than are U.S. academics. Another is that attitudes toward vegetarianism are changing swiftly over time (as suggested by the age differences in Josh's and my study) and that the nine years between 2009 and 2018 saw a substantial shift in both cultures. Still another concerns non-response bias. (For non-philosophers, Schoenegger and Wagner's response rate was 30%, while Josh's and mine was 53%.)

    In Schoenegger and Wagner's data, ethicists report having eaten less meat at the previous evening meal than the other two groups: 25%, vs. 40% of non-ethicist philosophers and 39% of the non-philosophers (chi-squared = 9.3, p = .01 [note 2]). The meals per week data are less clear. Schoenegger and Wagner report 2.1 meals per week for ethicists, compared to 2.8 and 3.0 for non-ethicist philosophers and non-philosophers respectively (ANOVA, F = 3.4, p = .03), but their data are highly right skewed, and due to skew Josh and I had used a square-root transformation for our original 2009 analysis. A similar square-root transformation on Schoenegger and Wagner's raw data eliminates any statistically detectable difference (F = 0.8, p = .45). And there is again evidence of fudging in the meals-per-week responses: Among those reporting only one meat meal per week, for example, 18% reported having had meat at their previous evening meal.

    If we take the meals-per-week data at face value, the German respondents ate substantially less meat in 2018 than did the U.S. respondents in 2009: 2.6 meals for the Germans vs. 4.6 for the U.S. respondents (median 2 vs median 4, Mann-Whitney W = 287793, p < .001). However, the difference was not statistically detectable on the previous evening meal question: 38% U.S. vs 34% German (z = 1.3, p = .21).

    All of this is a bit difficult to interpret, but here's the tentative conclusion I draw:

    German professors today -- especially ethicists -- are more likely to condemn meat eating than were U.S. professors ten years ago. They might also be a bit less likely to eat meat, again perhaps especially the ethicists, though that is not entirely clear and might reflect a bit of fudging in the self-reports.

    The other difference Schoenegger and Wagner found was in the question of whether ethicists were on the whole more likely than other professors to embrace stringent moral views -- but full analysis of this will require some detail and will have to wait for another time.


    Note 1: In the published paper, Schoenegger and Wagner report 39% instead of the 40% I find in reanalyzing their raw data. This might either be a rounding error [39.69%] or some small difference in our analyses.

    Note 2: In the published paper, Schoenegger and Wagner report 24%, which again might be a rounding error (from 24.65%) or a small analytic difference.

    [image source]