Thursday, July 12, 2018

Two Ways of Being a Group Mind: Synchronic vs. Diachronic

Based on last week's post, I am now seeing ads for sunglasses everywhere, as if to say "Welcome, Eric, to the internet hypermind! Did you say SUNGLASSES?!"

Speaking of hyperminds....

I want to distinguish two ways of being a group mind, since I know you care immensely about the cognitive architecture of group minds. (Or maybe that's just me; I'm a dork.) My thought is that the philosophical issues of group consciousness and personal identity play out differently in the two types of case.

Synchronic:

Examples: Star Trek's Borg (mostly), Ann Leckie's ancillaries, highly interconnected nations (according to me), Vernor Vinge's tines.

Synchronic group minds are probably the default conception. A bunch of independent, or semi-independent, or independent-enough entities remain in constant or constant-enough communication with each other. Their communication is sufficiently rich or sufficiently well structured that it gives rise to group-level mental states in the whole. In the most interesting case, the individual entities are distinct enough to have rich mental lives but also the group is well enough coordinated that it also, simultaneously, has a distinctive, rich mental life above and beyond that of its members.

Diachronic:

Examples: David Brin's Kiln People, Linda Nagata's ghosts, Benjamin Kinney's forks.

In a diachronic group mind, communication is relatively rare but high bandwidth and transformative. Post-communication, the individuals inherit mental states from the others. In the most interesting case, the inheritance process is very different from just listening credulously to someone's testimony; rather it's more like a direct transfer of memories, plans, and opinions, maybe even values or personality. Imagine "forking" into three versions of yourself, going your separate ways for the day, and then at the end of the day merging back into a single individual, integrating the memories and plans of each, and averaging any general changes in value, opinion, or personality. Tomorrow and every day thereafter, you will fork and merge in the same way.

[Cerberus might not be integrated enough to be a good example of a group mind, but I didn't want to attach another darn picture of the Borg.]

Tradeoffs Between Group-Level and Individual-Level Personhood and Autonomy:

As I have described it here, delay between information transfer episodes is the fundamental difference between these types of group minds: whether the minds are in constant or constant-enough communication, or whether instead they communicate only at long intervals. Obviously, temporal distance admits of degree, but this difference in degree creates structural pressures. If communication is infrequent, its effects have to be radical if it is to give rise to an entity sufficiently integrated to be worth calling a "group mind". If a group of friends meets every day to exchange information and plan the next day's activities, in the ordinary way people sometimes do, I suppose that in some weak sense they have formed a group mind. But they haven't done so in the radical science-fictional sense I'm imagining. For example, if there were five friends who did this, there would still be exactly five persons -- entities with serious rights whose destruction would be worth calling murder. For the emergence of something more metaphysically and morally interesting, the exchange has to be radical enough to challenge the boundaries of personal identity.

Conversely, if communication is constant and its effects are radical, it's not clear that we have a group of individuals in any interesting sense: We might just have a single non-group entity that happens to be spatially scattered (as in my Martian Smartspiders).

In other words, for there to be a philosophically interesting group entity, there must be some sort of interestingly autonomous mentality both at the individual level and at the group level. Massive transformative communication (as in diachronic merging of memories and values) radically reduces autonomy: If communication is both massively transformative and very frequent, there's no chance for interesting person-like autonomy at the individual level. If communication is neither massively transformative nor very frequent, there's no chance for interesting person-like autonomy at the group level.

Consciousness:

Our intuitive judgments about group-level consciousness are probably pretty crappy (as I've argued here and here). But our general theories about consciousness as they apply to the group level are probably even crappier (as I've argued here and here). At the same time, whether the group as a whole has a stream of conscious experience over and above the consciousness of its individual members seems like a very important question if we're interested in its mentality and whether it deserves moral status as a person. So we're kind of stuck. We'll have to guess.

Plausibly, in the diachronic case there is no stream of consciousness beyond that of the merging individuals. When there's one body at night, there's one stream of consciousness (at most, if it's dreaming). When there are three bodies off doing their thing, there are three streams of consciousness. We might be able to create some problematic boundary cases during the merge, but maybe that's marginal enough to dismiss with a bit of hand waving.

The synchronic case is, I think, more conceptually challenging with respect to consciousness. If we allow that minimally interactive groups do not give rise to group level consciousness and we also allow that a fully informationally integrated but spatially distributed entity does give rise to consciousness, it seems that we can create a slippery slope from one case to the other by adding more integration and communication (for example here). At some point, if there is enough coherent behavior, self-representation, and information exchange at the group level, most standard functionalist views of consciousness (unless they accept an anti-nesting principle) should allow that each individual member of the group would have a stream of experience and also that there would be a further, different stream of experience at the group level. But it's a tricky question how much integration and information exchange, and with what kind of structural properties, is necessary for group-level consciousness to arise.

Personhood:

One interesting issue that arises is the extent to which an individual's beliefs about what counts as "self-interest" and "death" define the boundaries of their personhood. Consider a diachronic case: You are walking back home after your day out and about town, with a wallet full of money and interesting new information about a job opportunity tomorrow, and you are about to merge back together with the two other entities you forked off from this morning. Is this death? Are "you" going to be gone after the merge, your memories absorbed into some entity who is not you (but who you might care about even more than you care about yourself)? In walking back, are you magnanimously sacrificing your life to give your money and information to the entity who will exist tomorrow? Would it be more in your self-interest to run away and blow your wad on something fun for this current body? Or, instead, will it still be "you" tomorrow, post-merge, with that information and that money? To some extent, in unclear cases of this sort, I think it might depend on how you think and feel about it: It's to some extent up to you whether to conceptualize the merging together as death or not.

A parallel issue might arise with synchronic groups, though my hunch is that it would play out differently. Synchronic groups, as I'm imagining them, don't have identity-threatening splits and merges. The individual members of synchronic groups would seem to have the same types of rights that otherwise similar individuals who aren't members of synchronic group minds would have -- rights depending on (for example, but it's not this simple) their capacity to suffer and think and choose as individuals. They might choose, as individuals, to view the group welfare as much more important than their own welfare (as a soldier might choose to die for the sake of country); but unless there's some real loss of autonomy or consciousness, this doesn't threaten their status as persons or redefine the boundaries of what counts as death.

Related:

Possible Architectures of Group Minds: Perception (May 4, 2016)

Possible Architectures of Group Minds: Memory (Jun 14, 2016)

Group Minds on Ringworld (Oct 24, 2012)

If Materialism Is True, the United States Is Probably Conscious (academic essay in Philosophical Studies, 2015)

Our Moral Duties to Monsters (Mar 8, 2014)

Choosing to Be That Fellow Back Then: Voluntarism about Personal Identity (Aug 20, 2016).

[image source]

Friday, July 06, 2018

How to Create Immensely Valuable New Worlds by Donning Your Sunglasses

[A satisfied REI customer, creating whole gobs of new worlds.]

One world is good, so two is better, right?

(If you think one world is bad and two are worse, just invert the reflections below.)

Here's one way to look at it: Run a universe, or a world, from beginning to end, sum up all the good stuff, subtract all the bad stuff, and note the (hopefully positive) total. Now consider, from your end-of-the-universe God's-eye point of view: Should you launch another world similar to the previous one? Well, of course you should! There would be even more good stuff, and a higher positive total, after that second world has been run. Similar considerations suggest that two good worlds running in parallel would also be better than a single good world.

(To avoid problems with summing infinitudes, let's assume finite worlds with finite value. For some complexities regarding the value or disvalue of repetition specifically see this earlier post.)

Now on some interpretations of quantum mechanics, every time there's a quantum event with different possible outcomes, all of the outcomes occur, each in a different world. Such "many-worlds" interpretations often describe the world as "splitting" into two worlds -- one world in which Outcome A occurs and one in which Outcome B occurs. You too, the observer, will split: One copy of you goes into World A, observing Outcome A, and the other goes into World B, observing Outcome B.

In a classic article on the many-worlds interpretation of quantum mechanics, Bryce S. DeWitt writes:

This universe is constantly splitting into a stupendous number of branches, all resulting from the measurementlike interactions between its myriads of components. Moreover, every quantum transition taking place on every star, in every galaxy, in every remote corner of the universe is splitting our local world on earth into myriads of copies of itself.... I still recall vividly the shock I experienced on first encountering this multiworld concept. The idea of 10^100+ slightly imperfect copies of oneself all constantly splitting into further copies, which ultimately become unrecognizable.... (DeWitt 1970, p. 33).

Notice that on DeWitt's portrayal, there are a finite but large number of worlds: 10^100+. Now here's the normative question I want to consider: Is there positive value -- would it be ethically or prudentially or aesthetically good -- to increase the amount of splitting, so that there are more worlds rather than fewer?

On the face of it, it seems that, yes, it would be good if there were more worlds. If each world independently considered is good, then plausibly more worlds is better. Suppose World W has positive value V. Suppose now that it splits into two worlds that are very similar but not identical: W1 with value V1 and W2 with value V2. If we assume V1 ~= V2 ~= V and that the value of worlds is additive, then after the split the whole W1+W2 is approximately twice the value of W. We have doubled the amount of value that the cosmos as a whole contains!
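To make the additivity assumption concrete, here is a minimal toy sketch (the starting value of 100 and the 1% divergence figure are illustrative assumptions, not anything fixed by the argument): if V1 ~= V2 ~= V and value is additive, the post-split total comes out at roughly 2V.

```python
# Toy model of world-splitting under the additivity assumption.
# The starting value and the divergence figure are illustrative only.
import random

random.seed(0)

def split(value, divergence=0.01):
    """Split a world of value V into two worlds with V1 ~= V2 ~= V."""
    v1 = value * (1 + random.uniform(-divergence, divergence))
    v2 = value * (1 + random.uniform(-divergence, divergence))
    return v1, v2

V = 100.0                 # value of world W before the split
V1, V2 = split(V)         # values of W1 and W2 after the split
print(V1 + V2)            # roughly 200, i.e. about 2V, if value is additive
```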

Now you might object in one of three ways:

(1.) You might reject the whole splitting-worlds interpretation. Fair enough! But then you're not really playing the game. I'm interested in thinking about the normative consequences assuming that the world does split as DeWitt describes.

(2.) You might reject the assumption that V1 ~= V. For example, you might think that after splitting, each world has only approximately half the value that the world before the split had, so that V1 + V2 ~= V. Then no value would be gained by splitting. A tempting thought, perhaps. But why would splitting make a world half as valuable? It's not clear. One consequence of this view is that our world, constantly splitting, would be constantly halving in value, so that its value is plummeting by orders of magnitude every second. Hmmm, that seems no less bizarre than the idea that splitting doubles the value of the cosmos. Now if you thought that the other worlds already existed before the split, then you might reject V1 ~= V, since the subset of worlds in W1 would be about half (or whatever) of the total number of worlds in W. But the whole idea of splitting worlds is not that the worlds already existed. It's that they are created. So we're talking about whether creating a new valuable world adds value to the cosmos as a whole. Pending a good argument otherwise, it seems like it should.

(3.) You might reject additivity. You might say that although V1 ~= V2 ~= V, you can't simply sum V1 and V2 to get ~= 2V. I can feel the pull of this idea. If the worlds were strictly identical, you might say "well, there's no point in duplicating everything again"! But (a.) The worlds are not strictly identical, and in fact (given chaos theory) they might diverge increasingly over time. And (b.) Duplication plausibly adds value when worlds are temporally separated (at the end of the universe, it seems like not re-running the thing again would involve omitting possible future pleasures that future inhabitants of the world would have), and side-by-side splitting wouldn't seem to be very normatively different in principle.

Okay, let's pretend you're convinced. When new worlds arise through quantum mechanical splitting, that's terrific. Whole realms of value are added! The value of the cosmos as a whole approximately doubles. Now, accepting this, is there anything you should do differently?

Consider the following possible principle:

Conservation of Splitting. Over any time interval t, the world will split N times, no matter what you do.

As far as I can see, there's no reason to accept Conservation of Splitting. It seems like you can create situations in which there would be relatively more splitting or relatively less splitting. You can run more quantum mechanical experiments or fewer. You can make more quantum mechanical measurements or fewer. And if you run more experiments and make more measurements, there will be more splitting, and thus more worlds.

For example, that pair of polarized sunglasses you have, sitting in that dark sunglass case in that dark drawer? When a photon passes or fails to pass through a polarized lens, that's a quantum mechanical event -- a chance event, which, on this interpretation, results in a splitting of worlds. In some worlds the photon goes through; in others it does not. You could take those sunglasses out of the drawer. You could go to the beach and wear them in the sun. Many more photons will pass through those lenses! Maybe about 10^18 more photons per second. If each of those photons has an independent quantum chance of passing or not passing through those lenses, splitting the world, then you're creating 2^(10^18) new worlds per second just by sitting on the beach -- worlds that would not have been created had the sunglasses remained cased in the drawer. Think of all the value you're adding to the cosmos!
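For a rough sense of the magnitudes, here is a back-of-the-envelope sketch (it takes the ~10^18 photons-per-second figure above at face value and treats each photon as an independent binary branching event): 2^(10^18) is far too large to represent directly, so the sketch works with its base-10 logarithm.

```python
# Back-of-the-envelope arithmetic for the sunglasses example.
# Assumes ~10^18 photons per second, each an independent binary quantum event.
from math import log10

photons_per_second = 10**18

# The number of new branches per second is 2**photons_per_second, which
# cannot be represented directly, so compute its base-10 logarithm instead.
log10_branches_per_second = photons_per_second * log10(2)

print(f"~10^{log10_branches_per_second:.3g} new worlds per second")  # ~10^(3.01e17)
```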

Effective altruism is a movement that recommends using reason and evidence to do the most good you can do. Normally, effective altruists recommend doing things like donating money to charities that effectively help to alleviate suffering due to poverty. But the value of saving one life has to be tiny compared to the value of creating 2^(10^18) new universes per second! So instead of staying indoors in the shade, writing a check to the Against Malaria Foundation, maybe you'd do better to spend the day at the beach.

Even if you have only a 0.001% credence that the splitting worlds interpretation of quantum mechanics is true, the expected utility of sitting on the beach might far exceed that of donating to poverty relief.
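Here is a toy version of that expected-utility comparison (the value-per-world and value-of-a-donation figures are made-up placeholders; only the structure of the comparison comes from the argument above): even after discounting by a 0.001% credence, the number of branches swamps any ordinary-sized payoff.

```python
# Toy expected-utility comparison, working in log10 space because the
# numbers are astronomically large. All specific values are placeholders.
from math import log10

credence_many_worlds = 1e-5                  # a 0.001% credence in splitting
log10_worlds_per_second = 10**18 * log10(2)  # from the sunglasses arithmetic
log10_value_per_world = 0.0                  # suppose each new world is worth ~1 unit

# Expected value of a second at the beach: credence * worlds/sec * value/world.
log10_ev_beach = (log10(credence_many_worlds)
                  + log10_worlds_per_second
                  + log10_value_per_world)

log10_ev_donation = 6.0                      # suppose a donation is worth ~10^6 units

print(log10_ev_beach > log10_ev_donation)    # True
print(f"Beach beats donation by a factor of ~10^{log10_ev_beach - log10_ev_donation:.3g}")
```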

I know that it doesn't seem intuitively very plausible that going to the beach is far morally better than donating money to effective, life-saving charities, but consequentialist philosophers are often willing to admit that what's ethically best might not match our intuitions about what's ethically best.

PS: I feel so bad about making this argument that I just donated to Oxfam.

--------------------------------------------------

Related Posts:

Duplicating the Universe (Apr 29, 2015)

Goldfish Pool Immortality (May 30, 2014)

How Robots and Monsters Might Break Human Moral Systems (Feb. 3, 2015)

[image source]

Friday, June 29, 2018

Sorry about the Lingering Unapproved Comments!

... It looks like Blogger has stopped giving me notifications of comments to approve. I found a huge backlog, most of which I have approved. Let me see if I can turn notifications back on.

It will take me until next week to catch up on responding to a selection of the comments that have been lingering. Sorry to have neglected you!

Will Future Generations Find Us Especially Morally Loathsome?

Ethical norms change. Although reading Confucius doesn't feel like encountering some wholly bizarre, alien moral system, some ethical ideas do differ dramatically over time and between cultures. Genocide and civilian-slaughtering aggressive warfare are now widely considered to be among the evilest things people can do, yet they appear to be celebrated in the Bible (especially Deuteronomy and Joshua) and we still name children after Alexander "the Great". Many seemingly careful thinkers, including notoriously Aristotle and Locke, wrote justifications of slavery. Much of the world has only recently opened its eyes to the historically common oppression of women, homosexuals, low-status workers, people with disabilities, and ethnic minorities.

We probably haven't reached the end of moral change. In a few centuries, people might look back on our current norms with the same mix of appreciation and condemnation that we now look back on ethical norms common in Warring States China and Early Modern Europe.

Indeed, future generations might find our generation to be especially vividly loathsome, since we are the first generation creating an extensive video record of our day-to-day activities.

It’s one thing to know, in the abstract, that Rousseau fathered five children with a lover he regarded as too dull-witted to be worth attempting to formally educate, and that he demanded against her protests that their children be sent to (possibly very high mortality) orphanages [see esp. Confessions, Book VII]. It would be quite another if we had baby pictures and video of Rousseau's interactions with Thérèse. It's one thing to know, in the abstract, that Aristotle had a wife and a life of privilege. It would be quite another to watch video of him proudly enacting sexist and classist values we now find vile. Future generations that detest our sexual practices, or our consumerism, or our casual destruction of the environment, or our neglect of the sick and elderly, might be especially horrified to view these practices in vivid detail.

By "we" and "our" practices and values, I mean the typical practices and values of highly educated readers from early 21st-century democracies -- the notional readership of this blog. Maybe climate change proves to be catastrophic: Crops fail, low-lying cities are flooded, a billion desperate people are displaced or malnourished and tossed into war. Looking back on video of a philosopher of our era proudly stepping out of his shiny, privately-owned minivan, across his beautiful irrigated lawn in the summer heat, into his large chilly air-conditioned house, maybe wearing a leather hat, maybe sharing McDonald's ice-cream cones with his kids -- looking back, that is, on what I (of course this is me) think of as a lovely family moment -- might this seem to some future Bangladeshi philosopher as vividly disgusting as I suspect I would find Aristotle's treatment of Greek slaves?

#

If we are currently at the moral pinnacle, any change in future values will be a change for the worse. Future generations might condemn our mixing of the races, for example. They might be disgusted to see pictures of interracial couples walking together in public and raising their mixed-race children. Or they might condemn us for clothing customs that they come to view as obscene. However, I feel comfortable saying that they'd be wrong to condemn us, if those were the reasons why.

But it seems unlikely that we are at the pinnacle; and thus it seems likely that future generations might have some excellent moral reason to condemn us. More likely than our being at the moral pinnacle, it seems to me, is that either (a.) there has been a slow trajectory toward better values over the centuries (as argued by Steven Pinker) and the trajectory will continue, or (b.) shifts in value are more or less a random walk up, down, and sideways, in which case it would be an unlikely chance if we happened to be at the peak right now. I am assuming here the same kind of non-relativism that most people assume in condemning Nazism and in thinking that it constitutes genuine moral progress to recognize the equal moral status of women and men.

(To someone who endorses most of the widely shared values of their group, it is almost just analytically the case that they will see their group's values as the peak. Suppose you endorse the mainstream values in your group -- values A, B, C, D, E, and F. Elsewhere, the mainstream values might instead be A, not-B, D, E, F and G, or A, C, not-D, not-E, H and I. Of course it will seem to you that you're the group that got it right -- exactly A, B, C, D, E, and F! It will seem to you that changes from past values have been good, and the likely future rejection of your values will be mistaken. This is basically the old man's "kids these days!" complaint, writ large.)

I worry, then, that we might be in a situation similar to Aristotle's: horribly wrong (most of us) on some really important moral issues, though it doesn't feel like we're wrong, and although we think we are applying our excellent minds excellently to the matter, with wisdom and good sense. I worry that we, or I, might be using philosophy to justify the 21st-century college-educated North American's moral equivalent of keeping slaves, oppressing women, and launching genocidal war.

Is there some way of gaining insight into this possibility? Some way to get a temperature reading, so to speak, on our unrecognized evil?

Here's one thing I don't think will work: Rely on the ethical reasoning of the highest status philosophers in our society. If you've read any of my work on Kant's applied ethics, German philosophers' failure to reject Nazism, and the morality of ethics professors, you'll know why I say this.

#

I'd suggest, or at least I'd hope, that if future generations rightly condemn us, it won't be for something we'd find incomprehensible. It won't be because we sometimes chose blue shirts over red ones or because we like to smile at children. It will be for things that we already have an inkling might be wrong, and which some people do already condemn as wrong. As Michele Moody-Adams emphasizes in her discussion of slavery and cultural relativism (Moody-Adams 1997, ch. 2), in every slave culture there were always some voices condemning the injustice of slavery -- among them, typically, the slaves themselves -- and it required a kind of affected ignorance to disregard those voices. As a clue to our own evil, we might look to minority moral opinions in our own culture.

I tend to disagree with those minority opinions. I tend to think that the behavior of my social group is more or less fine, or at least forgivably mediocre. If someone advances a minority ethical view I disagree with, I'm philosopher enough to concoct some superficially plausible defenses. What I worry is that a properly situated observer might recognize those defenses to be no better than Hans Heyse's defense of Nazism or Kant's critique of masturbation.

Moody-Adams suggests that we can begin to transcend our cultural and historical moral boundaries through moral reflection and moral imagination. In the epilogue of her 1997 book, she finds hope in the kind of moral reflection that involves self-scrutiny, vivid imagination, a wide-ranging contact with other disciplines and traditions, a recognition of minority voices, and serious engagement with the concrete details of everyday moral inquiry.

Hey, that sounds pretty good! I'll put, or try to put, my hopes there too.

Wednesday, June 20, 2018

The Perceived Importance of Kant, as Measured by Advertisements for Specialists in His Work

I'm revising a couple of my old posts on Kant for my next book, and I wanted some quantitative data on the importance of Kant in Anglophone philosophy departments.

There's a Leiter poll, where Kant ranks as the third "most important" philosopher of all time after Plato and Aristotle. That's pretty high! But a couple of measures suggest he might be even more important than number three. In terms of appearance in philosophy abstracts, he might be number one. Kant* appears 4370 times since 2010 in Philosophers Index abstracts, compared to 2756 for Plato*, 3349 for Aristot*, 1096 for Hume*, 1545 for Nietzsch*, and 1110 for Marx*. I've tried a bunch of names and found no one higher.

But maybe the most striking measure of a philosopher's perceived importance is when philosophy departments advertise for specialists specifically in that person's work. By this measure, Kant is the winner, hands-down. Not even close!

Here's what I did: I searched PhilJobs -- currently the main resource for philosophy jobs in the Anglophone world -- for permanent or tenure-track positions posted from June 1, 2015 to June 18, 2018. "Kant*" yields 30 ads (of 910 in the database), among which 17 contained "Kant" or "Kantian" in the line for "Area of Specialization". One said "excluding Kant", so let's toss that one out, leaving 29 and 16. Four were specifically asking for "post-Kantian" philosophy (which presumably excludes Kant, but it's testament to his influence that a historical period is referred to in this way), but most were advertising either for a Kant specialist (e.g., UNC Chapel Hill searched in AOS "Kant's theoretical philosophy") or Kant among other things (e.g., Notre Dame "Kant and/or early modern"). Where "Kant" was not in the AOS line, his name was either in the Area Of Competence line or somewhere in the body of the ad [note 1].

In sum, the method above yields:
Kant: 29 total PhilJobs hits, 16 in AOS (12 if you exclude "post-Kantian").

Here are some others:

Plato*: 3, 0.
Aristot*: 2, 0.
Hume*: 1, 0.
Confuc*: 1, 0.
Aquin*: 3, 1 (all Catholic universities).
Nietzsch*: 0, 0.
Marx*: 5, 1. (4/5 Chinese universities).

As I said, hands down. Kant runs away with the title, Plato and Confucius shading their eyes in awe as they watch him zoom toward the horizon.

Note 1: If "Kant" was in the body of the ad, it was sometimes because the university was mentioning their department's strength in Kant rather than searching for someone in Kant, but for my purposes if a department is self-describing its strengths in that way, that's also a good signal of Kant's perceived importance, so I haven't excluded those cases.

[image source]

Thursday, June 14, 2018

Slippery Slope Arguments and Discretely Countable Subjects of Experience

I've become increasingly worried about slippery slope arguments concerning the presence or absence of (phenomenal) consciousness. Partly this is in response to Peter Carruthers' new draft article on animal consciousness, partly it's because I'm revisiting some of my thought experiments about group minds, and partly it's just something I've been worrying about for a while.

To build a slippery slope argument concerning the presence of consciousness, do this:

* First, take some obviously conscious [or non-conscious] system as an anchor point -- such as an ordinary adult human being (clearly conscious) or an ordinary proton (obviously(?) non-conscious).

* Second, imagine a series of small changes at the far end of which is a case that some people might view as a case of the opposite sort. For example, subtract one molecule at a time from the human until you have only one proton left. (Note: This is a toy example; for more attractive versions of the argument, see below.)

* Third, highlight the implausibility of the idea that consciousness suddenly winks out [winks in] at any one of these little steps.

* Finally, conclude that the disputable system at the end of the series is also conscious [non-conscious].

Now slippery slope arguments are generally misleading for vague predicates like "red". Even if we can't finger an exact point of transition from red to non-red in a series of shades from red to blue, it doesn't follow that blue is red. Red is a vague predicate, so it ought to admit of vague, in-betweenish cases. (There are some fun logical puzzles about vague predicates, of course, but I trust that our community of capable logicians will eventually sort that stuff out.)

However, unlike redness, the presence or absence of consciousness seems to be a discrete all-or-nothing affair, which makes slippery-slope arguments more tempting. As John Searle says somewhere (hm... where?), having consciousness is like having money: You can have a little of it or a lot of it -- a penny or a million bucks -- but there's a discrete difference between having only a little and having not a single cent's worth. Consider sensory experience, for example. You can have a richly detailed visual field, or you can have an impoverished visual field, but there is, or at least seems to be, a discrete difference between having a tiny wisp of sensory experience (e.g., a brief gray dot, the sensory equivalent of a penny) and having no sensory experience at all. We normally think of subjects of experience as discrete, countable entities. Except as a joke, most of us wouldn't say that there are two-and-a-half conscious entities in the room or that an entity has 3/8 of a stream of experience. An entity either is a subject of conscious experience (however limited their experience is) or has no conscious experience at all.

Consider these three familiar slippery slopes.

(1.) Across the animal kingdom. We normally assume that humans, dogs, and apes are genuinely, richly phenomenally conscious. We can imagine a series of less and less sophisticated animals all the way down to the simplest animals or even down into unicellular life. It doesn't seem that there's a plausible place to draw a bright line, on one side of which the animals are conscious and on the other side of which they are not. (I did once hear an ethologist suggest that the line was exactly between toads (conscious) and frogs (non-conscious); but even if you accept that, we can construct a fine-grained toad-frog series.)

(2.) Across human development. The fertilized egg is presumably not conscious; the cute baby presumably is conscious. The moment of birth is important -- but it's not clear that it's so neurologically important that it is the bright line between an entirely non-conscious fetus and a conscious baby. Nor does there seem to be any other obvious sharp transition point.

(3.) Neural replacement. Tom Cuda and David Chalmers imagine replacing someone's biological neurons one by one with functionally equivalent artificial neurons. A sudden wink-out between N and N+1 replaced neurons doesn't seem intuitively plausible. (Nor does it seem intuitively plausible that there's a gradual fading away of consciousness while outward behavior, such as verbal reports, stays the same.) Cuda and Chalmers conclude that swapping out biological neurons for functionally similar artificial neurons would preserve consciousness.

Less familiar, but potentially just as troubling, are group consciousness cases. I've argued, for example, that Giulio Tononi's influential Integrated Information Theory of consciousness runs into trouble in employing a threshold across a slippery slope (e.g. here and Section 2 here). Here the slippery slope isn't between zero and one conscious subjects, but rather between one and N subjects (N > 1).

(4.) Group consciousness. At one end, anchor with N discretely distinct conscious entities and presumably no additional stream of consciousness at the group level. At the other end, anchor with a single conscious entity with parts none of which, presumably, is an individual subject of experience. Any particular way of making this more concrete will have some tricky assumptions, but we might suppose an Ann Leckie "ancillary" case with a hundred humanoid AIs in contact with a central computer on a ship. As the "distinct entities" anchor, imagine that the AIs are as independent as ordinary human beings are, and the central computer is just a communications relay. Intermediate steps involve more and more information transfer and central influence or control. The anchor case on the other end is one in which the humanoid AIs are just individually nonconscious limbs of a single fully integrated system (though spatially discontinuous). Alternatively, if you like your thought experiments brainy, anchor on one end with normally brained humans, then construct a series in which these brains are slowly neurally wired together and perhaps shrunk, until there's a single integrated brain again as the anchor on the other end.

Although the group consciousness cases are pretty high-flying as thought experiments, they render the countability issue wonderfully stark. If streams of consciousness really are countably discrete, then either you must:

(a.) Deny one of the anchors. There was group consciousness all along, perhaps!

(b.) Affirm that there's a sharp transition point at which adding just a single bit's worth of integration suddenly shifts the whole system from N distinct conscious entities to only one conscious entity, despite the seemingly very minor structural difference (as on Tononi's view).

(c.) Try to wiggle out of the sharp transition with some intermediate number between N and 1. Maybe this humanoid winks out first while this other virtually identical humanoid still has a stream of consciousness -- though that's also rather strange and doesn't fully escape the problem.

(d.) Deny that conscious subjects, or streams of conscious experience, really must come in discretely countable packages.

I'm increasingly drawn to (d), though I'm not sure I can quite wrap my head around that possibility yet or fully appreciate its consequences.

[image adapted from Pixabay]

Wednesday, June 06, 2018

Research Funding: The Pretty-Proposal Approach vs the Recent-Past-Results Approach

Say you have some money and you want to fund some research. You're an institution of some sort: NSF, Templeton, MacArthur, a university's Committee on Research. How do you decide who gets your money?

Here are two broad approaches:

The Pretty Proposal Approach. Send out a call for applications. Give the money to the researchers who make the best case that they have an awesome research plan.

The Recent-Past-Results Approach. Figure out who in the field has recently been doing the best research of the sort you want to fund. Give them money for more such research.

[ETA for clarity, 09:46] The ideal form of the Recent-Past-Results Approach is one in which the researcher does not even have to write a proposal!

Of course both models have advantages and disadvantages. But on the whole, I'd suggest, too much funding is distributed based on the pretty proposal model and too little based on the recent-past-results model.


I see three main advantages to the Pretty Proposal Approach:

First, and very importantly in my mind, the PPA is egalitarian. It doesn't matter what you've done in the past. If you have a great proposal, you deserve funding!

Second, two researchers with equally good track records might have differently promising future plans, and this approach (if it goes well) will reward the researcher with the more promising plans.

Third, the institution can more precisely control exactly what research projects are funded (possibly an advantage from the perspective of the institution).


But the Pretty Proposal Approach has some big downsides compared to the Recent-Past-Results Approach:

First, in my experience, researchers spend a huge amount of time writing pretty proposals, and the amount of time has been increasing sharply. This is time they don't spend on research itself. In the aggregate, this is a huge loss to academic research productivity (e.g., see here and here). The Recent-Past-Results approach, in contrast, needn't involve any active asking by the researcher (if the granting agency does the work of finding promising recipients), or might require only the submission of a cv and recent publications. This would allow academics to deploy more of their skills and time on the research itself, rather than on constructing beautiful requests for money.

Second, past research performance probably better predicts future research performance than do promises of future research performance. I am unaware of data specifically on this question, but in general I find it better policy to anticipate what people will do based on what they've done in the past than based on the handsome promises they make when asking for money. If this is correct, then better research is likely to be funded on a Recent-Past-Results approach. (Caveat: Most grant proposals already require some evidence of your expertise and past work, which can help mitigate this disadvantage.)

Third, the best researchers are often opportunistic and move fast. They will do better research if they can pursue emerging opportunities and inspirations than if they are tied to a proposal written a year or more before.

In my view, the downsides of the dominant Pretty Proposal Approach are sufficiently large that we should shift a substantial proportion (not all) of our research funding toward the Recent-Past-Results Approach.


What about the three advantages of the Pretty Proposal Approach?

The third advantage of the PPA -- increased institutional power -- is not clearly an all-things-considered advantage. Researchers who have recently done good work in the eyes of grant evaluators might be better at deciding the specific best uses of future research resources than are those grant evaluators themselves. Institutions understandably want some control; but they can exert this control by conditional granting: "We offer you this money to spend on research on Topic X (meeting further Criteria Y and Z), if you wish to do more such research."

The second advantage of the PPA -- more funding for similar researchers with differently promising plans -- can be partly accommodated by retaining the Pretty Proposal Approach as a substantial component of research funding. I certainly wouldn't want to see all funding to be based on Recent Past Results!

The first advantage of the PPA -- egalitarianism -- is the most concerning to me. I don't think we want to see elite professors and friends of the granting committees getting ever more of the grant money in a self-reinforcing cycle. A Recent-Past-Results Approach should implement stringent measures to reduce the risk of this outcome. Here are a few possibilities:

Prioritize researchers with less institutional support. If two researchers have similarly excellent past results but one has achieved those results with less institutional support -- a higher teaching load, less previous grant funding -- then prioritize the one with less support. Especially prioritize funding research by people with decent track records and very little institutional support, perhaps even over those with very good track records and loads of institutional support. This helps level the playing field, and it also might produce better results overall, since those with the least existing institutional support might be the ones who would most benefit from an increase in support.

Low-threshold equal funding. Create some low bar, then fund everyone at the same small level once they cross that bar. This might be good practice for universities funding small grants for faculty conference travel, for example (compared to faculty having to write detailed justifications for conference travel).

Term limits. Require a five-year hiatus, for example, after five years of funding so that other researchers have a chance at showing what they can do when they receive funding.

[ETA 10:37] In favoring more emphasis on the Recent-Past-Results Approach, I am not suggesting that everyone write Pretty Proposals with cvs attached and then the funding is decided mostly based on cv. That would combine the time disadvantage of writing Pretty Proposals with the inegalitarian disadvantage of the Recent-Past-Results Approach, and it would add misdirection since people would be invited to think that writing a good proposal is important. (Any resemblance of the real grants process to this dystopian worst-of-all-worlds approach is purely coincidental.) I am proposing either no submission at all by the grant recipient (models include MacArthur "genius" grants and automatic faculty start-up funds) or a very minimal description of topic, with no discussion of methods, impact, previous literature, etc.

[Still another ETA, 11:03] I hadn't considered random funding! See here and here (HT Daniel Brunson). An intriguing idea, perhaps in combination with a low threshold of some sort.

Related Posts:

How to Give $1 Million a Year to Philosophers (Mar 18, 2013).

Against Increasing the Power of Grant Agencies in Philosophy (Dec 23, 2011).

Friday, June 01, 2018

Does It Harm Philosophy as a Discipline to Discuss the Apparently Meager Practical Effects of Studying Ethics?

I've done a lot of empirical work on the apparently meager practical effects of studying philosophical ethics. Although most philosophers seem to view my work either neutrally or positively, or have concerns about the empirical details of this or that study, others react quite negatively to the whole project, more or less in principle.

About a month ago on Facebook, Samuel Rickless did such a nice job articulating some general concerns (see his comment on this public post) that I thought I'd quote his comments here and share some of my reactions.

First, My Research:

* In a series of studies published from 2009 to 2014, mostly in collaboration with Joshua Rust (and summarized here), I've empirically explored the moral behavior of ethics professors. As far as I know, no one else had ever systematically examined this question. Across 17 measures of (arguably) moral behavior, ranging from rates of charitable donation to staying in contact with one's mother to vegetarianism to littering to responding to student emails to peer ratings of overall moral behavior, I have found not a single main measure on which ethicists appeared to act morally better than comparison groups of other professors; nor do they appear to behave better overall when the data are merged meta-analytically. (Caveat: on some secondary measures we found ethicists to behave better. However, on other measures we found them to behave worse, with no clearly interpretable overall pattern.)

* In a pair of studies with Fiery Cushman, published in 2012 and 2015, I've found that philosophers, including professional ethicists, seem to be no less susceptible than non-philosophers to apparently irrational order effects and framing effects in their evaluation of moral dilemmas.

* More recently, I've turned my attention to philosophical pedagogy. In an unpublished critical review from 2013, I found little good empirical evidence that business ethics or medical ethics instruction has any practical effect on student behavior. I have been following up with some empirical research of my own with several different collaborators. None of it is complete yet, but preliminary results tend to confirm the lack of practical effect, except perhaps when there's the right kind of narrative or emotional engagement. On grounds of armchair plausibility, I tend to favor multi-causal, canceling explanations over the view that philosophical reflection is simply inert (contra Jon Haidt); thus I'm inclined to explore how backfire effects might on average tend to cancel positive effects. It was a post on the possible backfire effects of teaching ethics that prompted Rickless's comment.

Rickless's Objection:
(shared with permission, adding lineation and emphasis for clarity)

Rickless: And I’ll be honest, Eric, all this stuff about how unethical ethicists are, and how counterproductive their courses might be, really bothers me. It’s not that I think that ethics courses can’t be improved or that all ethicists are wonderful people. But please understand that the takeaway from this kind of research and speculation, as it will likely be processed by journalists and others who may well pick up and run with it, will be that philosophers are shits whose courses turn their students into shits. And this may lead to the defunding of philosophy, the removal of ethics courses from business school, and, to my mind, a host of other consequences that are almost certainly far worse than the ills that you are looking to prevent.

Schwitzgebel: Samuel, I understand that concern. You might be right about the effects. However, I also think that if it is correct that ethics classes as standardly taught have little of the positive effect that some administrators and students hope for from them, we as a society should know that. It should be explored in a rigorous way. On the possibly bright side, a new dimension of my research is starting to examine conditions under which teaching does have a positive measurable effect on real-world behavior. I am hopeful that understanding that better will lead us to teach better.

Rickless: In theory, what you say about knowing that courses have little or no positive effect makes sense. But in practice, I have the following concerns.

First, no set of studies could possibly measure all the positive and negative effects of teaching ethics this way or that way. You just can’t control all the potentially relevant variables, in part because you don’t know what all the potentially relevant variables are, in part because you can’t fix all the parameters with only one parameter allowed to vary.

Second, you need to be thinking very seriously about whether your own motives (particularly motives related to bursting bubbles and countering conventional wisdom) are playing a role in your research, because those motives can have unseen effects on the way that research is conducted, as well as the conclusions drawn from it. I am not imputing bad motives to you. Far from it, and quite the opposite. But I think that all researchers, myself included, want their research to be striking and interesting, sometimes surprising.

Third, the tendency of researchers is to draw conclusions that go beyond the actual evidence.

Fourth, the combination of all these factors leads to conclusions that have a significant likelihood of being mistaken.

Fifth, those conclusions will likely be taken much more seriously by the powers-that-be than by the researchers themselves. All the qualifiers inserted by researchers are usually removed by journalists and administrators.

Sixth, the consequences on the profession if negative results are taken seriously by persons in positions of power will be dire.

Under the circumstances, it seems to me that research that is designed to reveal negative facts about the way things are taught had better be airtight before being publicized. The problem is that there is no such research. This doesn’t mean that there is no answer to problems of ineffective teaching. But that is an issue for another day.

My Reply:

On the issue of motives: Of course it is fun to have striking research! Given my general skepticism about self-knowledge, including of motives, I won't attempt self-diagnosis. However, I will say that except for recent studies that are not yet complete, I have published every empirical study I've done on this topic, with no file-drawered results. I am not selecting only the striking material for publication. Also, in my recent pedagogy research I am collaborating with other researchers who very much hope for positive results.

On the likelihood of being mistaken: I acknowledge that any one study is likely to be mistaken. However, my results are pretty consistent across a wide variety of methods and behavior types, including some issues specifically chosen with the thought that they might show ethicists in a good light (the charity and vegetarianism measures in Schwitzgebel and Rust 2014). I think this adds to credibility, though it would be better if other researchers with different methods and theoretical perspectives attempted to confirm or disconfirm our findings. There is currently one replication attempt ongoing among German-language philosophers, so we will see how that plays out!

On whether the powers-that-be will take the conclusions more seriously than the researchers: I interpret Rickless here as meaning that they will tend to remove the caveats and go for the sexy headline. I do think that is possible. One potentially alarming fact from this point of view is that my most-cited and seemingly best-known study is the only study where I found ethicists seeming to behave worse than the comparison groups: the study of missing library books. However, it was also my first published study on the topic, so I don't know to what extent the extra attention is a primacy effect.

On possibly dire consequences: The most likely path for dire consequences seems to me to be this: Part of the administrative justification for requiring ethics classes might be the implicit expectation that university-level ethics instruction positively influences moral behavior. If this expectation is removed, so too is part of the administrative justification for ethics instruction.

Rickless's conclusion appears to be that no empirical research on this topic, with negative or null results, should be published unless it is "airtight", and that it is practically impossible for such research to be airtight. From this I infer that Rickless thinks either that (a.) only positive results should be published, while negative or null results remain unpublished because inevitably not airtight, or that (b.) no studies of this sort should be published at all, whether positive, negative, or null.

Rickless's argument has merit, and I see the path to this conclusion. Certainly there is a risk to the discipline in publishing negative or null results, and one ought to be careful.

However, both (a) and (b) seem to be bad policy.

On (a): To think that only positive results should be published (or more moderately that we should have a much higher bar for negative or null results than for positive ones) runs contrary to the standards of open science that have recently received so much attention in the social psychology replication crisis. In the long run it is probably contrary to the interests of science, philosophy, and society as a whole for us to pursue a policy that will create an illusory disproportion of positive research.

That said, there is a much more moderate strand of (a) that I could endorse: Being cautious and sober about one's research, rather than yielding to the temptation to inflate dubious, sexy results for the sake of publicity. I hope that in my own work I generally meet this standard, and I would recommend that same standard for both positive and negative or null research.

On (b): It seems at least as undesirable to discourage all empirical research on these topics. Don't we want to know the relationship between philosophical moral reflection and real-world moral behavior? Even if you think that studying the behavior of professional ethicists in particular is unilluminating, surely studying the effects of philosophical pedagogy is worthwhile. We should want to know what sorts of effects our courses have on the students who take them and under what conditions -- especially if part of the administrative justification for requiring ethics courses is the assumption that they do have a practical effect. To reject the whole enterprise of empirically researching the effects of studying philosophy because there's a risk that some studies will show that studying philosophy has little practical impact on real-world choices -- that seems radically antiscientific.

Rickless raises legitimate worries. I think the best practical response is more research, by more research groups, with open sharing of results, and open discussions of the issue by people working from a wide variety of perspectives. In the long run, I hope that some of my null results can lay the groundwork for a fuller understanding of the moral psychology of philosophy. Understanding the range of conditions under which philosophical moral reflection does and does not have practical effects on real-world behavior should ultimately empower rather than disempower philosophy as a discipline.

[image source]

Thursday, May 24, 2018

An Argument Against Every General Theory of Consciousness

As a philosophical expert on theories of consciousness, I try to keep abreast of the most promising recent theories. I also sometimes receive unsolicited emails from scholars who have developed a theory that they believe deserves attention. It's fun to see the latest cleverness, and it's my job to do so, but I always know in advance that I won't be convinced.

I'd like to hope that it's not just that I'm a dogmatic skeptic about general theories of consciousness. In "The Crazyist Metaphysics of Mind", I argue that our epistemic tools for evaluating general theories of consciousness are, for the foreseeable future, too flimsy for the task, since all evaluations of such theories must be grounded in some combination of dubious (typically question-begging) scientific theory, dubious commonsense judgment (shaped by our limited social and evolutionary history), and broad criteria of general theoretical virtue like simplicity or elegance (typically indecisive among theories that are live competitors).

Today, let me try another angle. Ultimately, it's a version of my question-beggingness complaint, but more specific.

Premise 1: There is no currently available decisive argument against panpsychism, the view that everything is conscious, even very simple things, like solitary hydrogen ions in deep space. Panpsychism is, of course, bizarrely contrary to common sense, but (as I also argue in The Crazyist Metaphysics of Mind) all well-developed general theories of consciousness will have some features that are bizarrely contrary to common sense, so although violation of common sense is a cost that creates an explanatory burden, it is not an insurmountable theory-defeater. Among prominent researchers who defend panpsychism or at least treat seriously a view in the neighborhood of panpsychism are Giulio Tononi, David Chalmers, Galen Strawson, and Philip Goff.

There are at least three reasons to take panpsychism seriously. (1.) If, as some have argued, consciousness is a fundamental feature of the world, or a property not reducible to other properties, it would be unsurprising if such a feature were approximately as widespread as other fundamental features such as mass and charge. (2.) Considering the complexity of our experience (e.g., our visual experience) and the plausibly similar complexity of the experience of other organisms with sophisticated sensory systems, one might find oneself on a slippery slope toward thinking that the least complex experience would be possessed by very simple entities indeed (see Chalmers 1996, pp. 293-297, for a nice exposition of this argument). (3.) Despite my qualms about Integrated Information Theory, there's an attractive theoretical elegance to the idea that consciousness arises from the integration of information, and thus that very simple systems that integrate just a tiny bit of information will correspondingly have just a tiny bit of consciousness.

Premise 2: There is no currently available decisive argument against theories of consciousness that require sophisticated self-representation of the sort that is likely to be absent from entities that lack theories of mind. On extreme versions of this view, even dogs and infants might not have conscious experience. (Again, highly contrary to common sense, but!) Among prominent researchers who have taken such a view seriously are Daniel Dennett and Peter Carruthers (though recently Carruthers has suggested that there might be no fact of the matter about whether non-human animals are phenomenally conscious).

There are at least three reasons to take seriously such a restrictive view of consciousness: (1.) If one wants to exit the slippery slope to panpsychism, one possibly attractive place to do so is at the gap between creatures that can explicitly represent their own mental states and those that cannot. (2.) Consciousness, as was noted by Franz Brentano (and recently emphasized by David Rosenthal, Uriah Kriegel, and others), might plausibly always involve some sort of self-awareness of the fact that one is conscious -- apparently a moderately sophisticated self-representational capacity of some sort. (3.) There's a theoretical elegance to self-representational theories of consciousness. If consciousness doesn't just always arise when information is integrated in a system, an attractive explanation of what else is needed is some sort of sophisticated ability of a system to represent its own representational states.

Now you might understandably think that either panpsychism or a human-only view of consciousness is so extreme that we can be epistemically justified in confidently rejecting one or the other. If so, we can run the argument with weaker versions of Premise 1 and/or Premise 2:

Premise 1a (weaker): There is no currently available decisive argument against theories of consciousness that treat consciousness as very widespread, including perhaps in organisms with fairly small and simple brains, or in some near-future AI systems.

Premise 2a (weaker): There is no currently available decisive argument against theories of consciousness that treat consciousness as narrowly restricted to a class of fairly sophisticated entities, perhaps only mammals and birds and similar organisms capable of complex, flexible learning, and no AI systems in the foreseeable future.

Premise 3: All general theories of consciousness commit to the falsity of either Premise 1, Premise 2, or both (alternatively Premise 1a, Premise 2a, or both). If they do not so commit, then they aren't general theories of consciousness, though they may of course be perfectly fine narrow theories of consciousness, e.g., theories of consciousness as it happens to arise in human beings. (I've got a parallel argument against general theories of consciousness even as they apply just to human beings, based on considerations from Schwitzgebel 2011, ch. 6, but not today.)

Therefore, all general theories of consciousness commit to the falsity of some view against which there is no currently available decisive argument. They thereby commit beyond the evidence. They must either assume, or accept on only indecisive evidence, either the falsity of panpsychism, or the falsity of sophisticated self-representational views of consciousness, or both. In other words, they inevitably beg the question against, or at best indecisively argue against, some views we cannot yet justifiably reject.

Still, go ahead and build your theory of consciousness. You might even succeed in building the true theory of consciousness, if it isn't yet out there! Science and philosophy need bold theoretical adventurers. But if a skeptic on the sidelines remains unconvinced, thinking that you have not convincingly dispatched some possible alternative approaches, the skeptic will probably be right.

ETA: In order to constitute an argument against a candidate theory, as opposed to merely an objection to such theories, perhaps I need to put some weight on the positive arguments in favor of views of consciousness that conflict with the theory being defended. Thanks to David Chalmers and Francois Kammerer on Facebook for pushing me on this point.

[image source]

Tuesday, May 15, 2018

"What Is It Like" Interview Now Freely Available

... here.

It's a fairly long read (about 8000 words), but I gave a lot of thought to Cliff's questions, and I hope the result is both interesting and revealing.

Read it, and learn about:

* my unconventional parents, including how we celebrated Christmas and my part-time work as a chambermaid;

* the peculiar story of how I once found my lost wallet;

* sneaking into György Gergely's cognitive development class at Berkeley, and how I think "Stanford school" philosophy of science should inform philosophy of mind;

* what four-year-olds and philosophers have in common;

* why blogging is the ideal form for philosophy;

* and much more!

Also please consider supporting Cliff Sosis's "What Is It Like to Be a Philosopher?" interviews by funding him on PayPal or Patreon.

Friday, May 11, 2018

Is C-3PO Alive?

by Will Swanson and Eric Schwitzgebel

Droids—especially R2-D2, C-3PO, and BB-8—propel the plot of the Star Wars movies. A chance encounter between R2-D2 and Luke Skywalker in “Episode IV: A New Hope” starts Luke on his fateful path to joining the rebel forces, becoming a Jedi, and meeting his father. More recently, BB-8 plays a similar role for Rey in “Episode VII: The Force Awakens”. But the droids are more than convenient plot devices. They are full-blooded characters, with their own quirks, goals, preferences, and vulnerabilities. The droids face the same existential threats as anyone else; and most of us still squirm in our theater seats on their behalf when danger looms.

Our response to the Star Wars droids relies on the tacit assumption that they are living lives -- lives that can be improved or worsened, sustained or lost. This raises the question: Is C-3PO alive? Or more precisely, if we someday built a robot like C-3PO, would it be alive?

In a way, it seems obvious that, yes, C-3PO is alive. Vaporizing him would be murder! One could have a funeral over his remains, reminiscing about all the good things he did in his lifetime.

But of course, the experts on life are biologists. And if you look at standard biology textbook descriptions of the characteristics of life (e.g., here), it looks like robots wouldn’t qualify. They don’t grow or reproduce, or share common descent with other living organisms. They don’t contain organic molecules like nucleic acid. They don’t have biological cells. They don’t seem to have arisen from a Darwinian evolutionary process. Few people (and probably fewer professional biologists) would say that a Roomba vacuum cleaner is alive, except in some kind of metaphorical sense; and in these respects, C-3PO is similar, despite being more complicated – just as we are similar to but more complicated than microscopic worms. The science covering C-3PO is not biology, but robotics.

Despite what looks like bad news for C-3PO from biology textbook definitions of life, on closer consideration we should reject the biology-textbook-list approach to robot cases. Our attitude toward these lists should probably be closer to the capacious attitude typical of astrobiologists (e.g., Benner 2010). If we’re considering what “life” is, really, in the broad, philosophical way that we do when considering the possibility of alien life, the standard lists start to look very Earth-bound and chauvinistic.

  • Common descent. Unless we wish to exclude the possibility of life originating independently on other planets, we should not treat common descent as a necessary condition for life.
  • Organic molecules. If we allow for life to arise independently on other planets, we should also be wary of expecting the resultant life to closely resemble biological life on Earth. We should not require the presence of organic molecules like nucleic acids.
  • Reproduction. While it is true that biological living things tend to reproduce when given the opportunity, reproduction is far from necessary for life. Consider the mule, the sterile worker ant, and the deliberately childless human. Nor should we require that life forms originate from reproductive processes. If life began without reproduction once, it can begin again, perhaps many times over!
  • Participation in Darwinian processes. Explanations invoking evolution by natural selection have revealed many of nature’s secrets. Nevertheless, evolution is not a locally defined property of living individuals. It refers to the processes that shape individuals over generations. It’s unclear why belonging to a group that has undergone Darwinian selection in the past should matter to whether an individual, considered now, is alive.
  • Growth. Depending on the sense of growth in question, robots may or may not grow. If growth means nothing more than change over time in accordance with an internal protocol, then at least some robots, learning ones, are able to grow. If growth means simply getting radically bigger (or developing from a small seed or embryo into a large adult), then requiring growth risks excluding or marginalizing many organisms that are uncontroversially alive, such as bacteria that reproduce by fission.
  • Other list members -- self-maintenance (if a robot can charge its own battery...), having heterogeneous organized parts, and responding to stimuli -- pose no challenge to the idea that robots are alive.

    What about metabolism? Perhaps this is an essential feature of life. Do robots do this?

    Tweaking a suggestion from Peter Godfrey-Smith (2013), a first pass on a definition for metabolism is the cooperation of diverse parts within an organism (implicitly, a thing that meets other criteria for life) to use energy and other resources to maintain the structure of the organism. If “maintaining structure” amounts to maintaining operational readiness, then this definition provides no reason to deny metabolism to robots, especially robots that do things like auto-update, repel virus programs, and draw from external energy resources as needed. If “maintaining structure” refers specifically to the upkeep required to keep a physical body from degrading, then most simple robots would be excluded, but C-3PO would still qualify, if he can polish his head and order a replacement arm.

    Even so, this second approach might define metabolism too narrowly. Defining metabolism in terms of maintenance in a narrow sense, after all, cannot accommodate the other ends to which organisms put energy in coordinated, non-accidental ways. Consider growth and development. The caterpillar’s metamorphosis hardly counts as maintenance of structure in any straightforward sense, yet we should count the energy transformations needed to effect that change as part of the caterpillar’s metabolism. More strikingly, living things often use energy to undermine their structure: think of cells undergoing programmed cell death or humans committing suicide.

    We can accommodate these cases by broadening the definition of metabolism to encompass any coordinated use of energy within a living thing to achieve the ends of that living thing. On this broader definition, there is no reason to deny metabolism to robots.

    With all of this in mind, we think it’s not unreasonable to stick with our gut intuition that C-3PO is alive. What is essential to being a living thing is not so much one’s biological history or composition by organic molecules, but rather the use of internal or environmental resources to accomplish the functional aims of the system.

    How sophisticated does a system need to be to qualify as living, by these standards? Should we maybe say that even a Roomba is alive, after all? In a series of entertaining experiments, Kate Darling has shown that ordinary people are often quite reluctant to smash up cute robots. Despite Darling’s own expressed view that such robots aren’t alive, maybe part of what holds people back is the same thing that keeps some of us from wanting to crush spiders -- a kind of emotional reverence for forms of life.

    A darling robot:

    Philosophers have grown used to functionalism about mind -- that is, they seem generally willing to accept or at least take seriously the possibility that consciousness might be realized in non-biological substrates. Nevertheless, functionalism about life is less readily accepted. Perhaps philosophical reflection about the possibility of robotic life can help us recognize that our concern over the lives and deaths of our favorite robot characters may be perfectly justified.

    [C-3PO image source] [Pleo image source]

    Wednesday, May 09, 2018

    What Is It Like Interview

    Cliff Sosis has interviewed me for his wonderful What Is It Like to Be a Philosopher series. For the first week, it is only available to Silver level patrons on Patreon. Next week, he'll release it more broadly.

    Please consider supporting Cliff's work. Cliff's interviews knit together questions about philosophy with questions about childhood, family, hobbies, passions, formative experiences -- giving the reader a fuller sense of the whole person than one generally sees. Check out his interviews with Josh Knobe, Kwame Appiah, David Chalmers, Sally Haslanger, Peter Singer, and Kate Manne, for example.

    Teaser:

    In this interview Eric Schwitzgebel, professor of philosophy at University of California Riverside, and I discuss his father’s collaboration with Timothy Leary, his uncle who invented ankle monitors, pretending to be a mannequin for his father’s class, Christmas with an electric blue Buddha, his mother’s anti-theist views, being a chambermaid and skiing, writing plays, Rosencrantz and Guildenstern are Dead, T.S. Eliot, Stephen Jay Gould, Stanford and Kuhn, Feyerabend and Zhuangzi, disliking analytic philosophy, moving from Deleuze and Derrida to Hume and Dretske, living in a hippy co-op, wearing a dress as a political statement, memorizing Sylvia Plath, Dretske and Dupré, the Gourmet Report, working with Elisabeth Lloyd and John Searle, the allegations against Searle, the grad culture at Berkeley, love and death, the Bay Area Philosophy of Science reading group, Peter Godfrey-Smith, Stanford School philosophy of science, Bayes or Bust?, sneaking into György Gergely’s class, Alison Gopnik’s generosity, meeting his wife via a classified ad, the job market in 97, landing a job at U.C. Riverside where he is to this day, how the department has changed and he has changed as a teacher, his class on Evil, his work on the moral behavior of ethics professors, The Splintered Mind and philosophy on the internet, his theory of jerks, Brian Weatherson, experimental philosophy, our philosophical blind spots, his writing routine and process, work-life balance, My Dinner with Andre, election night 2008 versus election night 2016, and his last meal…

    Friday, May 04, 2018

    The Rise and Fall of Philosophical Jargon

    In 2010, I defined a discussion arc, in philosophy, as a curve displaying the relative frequency at which a term or phrase appears among the abstracts of philosophical articles. Since then, I've done a few analyses using discussion arcs, mostly searching for philosophers' names (here, here, here, here, here).

    Today I thought it would be fun to look at philosophical jargon -- its rise and fall, how much it varies over time, and as a measure of what is hot. Maybe you'll find it fun too!

    I rely on abstract searches in The Philosophers Index. NGram is nifty, but it doesn't do well with trends specifically in academic philosophy (see here). JStor is interesting too, but it's a limited range of journals, especially for articles less than six years old.

    First, I constructed a representative universe of articles (limiting the search to journal articles only): In this case, I searched for "the" in the abstracts, in five-year intervals from 1940 to the present, except merging 1940-1949 for smoothness and a large enough sample. Then I searched for target terms in abstracts in the same five-year intervals. I constructed the curves by dividing the number of occurrences of the target term by the number of articles in the representative universe in each period.
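    In case the recipe is easier to follow in code, here's a minimal Python sketch of the arc construction. The period labels and counts below are made-up placeholders for illustration, not actual Philosophers Index results:

    # Hypothetical counts standing in for Philosophers Index abstract searches.
    periods = ["1940-49", "1950-54", "1955-59", "1960-64"]  # ...and so on to the present
    universe_counts = [2100, 1800, 2300, 2900]   # abstracts containing "the" (the representative universe)
    term_counts = [4, 3, 5, 4]                   # abstracts containing the target term

    # Discussion arc: percentage of all abstracts containing the term, per period.
    arc = [100.0 * t / u for t, u in zip(term_counts, universe_counts)]
    for period, pct in zip(periods, arc):
        print(f"{period}: {pct:.3f}%")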

    Some topics and terms are perennial, rising and falling a little, but not in any dramatic way. Others increase or decrease fairly steadily without a clear peak (allowing for some noisiness, especially in the early data). For example, here are the arcs for happiness, Kant*, and skepticism or scepticism (all three fairly steady), evolution* and democra* (both rising), and induction (falling):

    (The asterisk is a truncation symbol; the y-axis is percentage of all abstracts containing the word.)

    [apologies for blurry image; click to enlarge and clarify]

    (You thought happiness was more important to philosophers than Kant? Wrong!)

    One way to measure how trendy or steady a topic is: take the ratio of its discussion in its peak period to its average discussion across the whole span. Exactly equal discussion over the whole period would yield a ratio of 1:1. Fairly steady discussion with some noise would fall between about 1 and 1.5. Topics that rise and fall more substantially would be proportionally higher. Call this the Max/Average Ratio. For the six topics above, the Max/Average Ratios are:

  • Kant*: 1.3
  • happiness: 1.4
  • skepticism or scepticism: 1.4
  • evolution*: 1.5
  • democra*: 2.0
  • induction: 3.0
    Evolution*, though it approximately triples over the period, has a Max/Average Ratio not too far from one. Democra* rises from a substantially lower level of discussion than does evolution* and has a higher Max/Average Ratio. Induction crashes down to about a sixth of its initial level of discussion (0.174% in the first four periods to 0.028% in the final two) -- hence its moderately large ratio.
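    The Max/Average Ratio itself is simple to compute. Here's a quick Python sketch, using invented arc values rather than the real data:

    def max_average_ratio(arc):
        """Peak-period discussion divided by average discussion across all periods."""
        return max(arc) / (sum(arc) / len(arc))

    print(round(max_average_ratio([0.10, 0.11, 0.09, 0.10]), 2))  # 1.1  -- fairly steady topic
    print(round(max_average_ratio([0.01, 0.20, 0.05, 0.02]), 2))  # 2.86 -- trendier topic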

    Now let's consider some jargon terms that more clearly reflect hot topics.

    Since the scale is logarithmic, periods of zero incidence are not charted. Also remember that logarithmic scales visually compress peaks relative to linear scales. For example, though the decline of "language of thought" is not so visually striking, usage was in fact about seven times as much at peak as it is now.

    "Grue" was introduced by Nelson Goodman to describe a puzzle about how we know the future in his "New Riddle of Induction" in 1955. As you can see from the chart, discussion peaked 10-15 years later, in the late 1960s. "The original position" was introduced by John Rawls in his 1971 A Theory of Justice, describing part of an idealized decision-making process, and discussion of it peaked in the late 1980s, about 15 years later. "Supervenience" has a murkier origin, but was popularized in philosophy by R.M. Hare in 1952, to talk about how one set of properties might covary with another (for example the moral and the physical). Discussion peaked about 40 years later in the early 1990s. Hilary Putnam introduced "Twin Earth" (a planet with XYZ instead of water) in a thought experiment in 1975, and discussion peaked 15-20 years later in the early 1990s. "Radical interpretation" was introduced by Donald Davidson in the early 1970s, peaking 15 years later in the late 1980s. Finally, the "language of thought" was introduced by Jerry Fodor in his 1975 book of the same title, peaking 15-20 years later in the early 1990s.

    With the exception of supervenience -- maybe partly because the concept took some time to transition from ethics to the metaphysics of mind -- the pattern is remarkably consistent, with peaks about 15-20 years after a famous introduction event. This pacing reminds me of my earlier research suggesting that individual philosophers tend to receive peak discussion around ages 55-70, despite producing their most influential work on average about 20 years earlier (NB: the two data sets are only partly comparable, but I'm pretty sure the generalization is roughly true anyway). This is the pace of philosophy.

    For these terms and phrases, the Max/Average Ratios are a bit higher than for the rising and falling topics sampled above:

  • superven*: 2.7
  • "radical interpretation": 3.4
  • "Twin Earth: 3.9
  • "the original position": 4.2
  • "language of thought": 4.6
  • grue: 5.3
    The Max/Average Ratio, of course, doesn't really capture rising and falling; and the ratio will be inflated for more recently introduced terms, assuming virtually zero incidence before introduction.

    For a better measure of peakiness, we can examine the ratio of the maximum discussion to the average discussion in the first two and final two time periods. To avoid infinite peakiness and overstating the peakiness of rare terms, I'll assume a floor level of discussion of .01% in any period. Call this Peakiness with a capital P. Five of the six topics in the first group have a Peakiness between 1.3 and 2.0, and evolution* has 2.9.

    In contrast:

  • superven*: 3.8
  • "Twin Earth": 4.7
  • "radical interpretation": 6.3
  • "the original position": 8.4
  • "language of thought": 8.8
  • grue: 50.6
    Grue was hot! Although its peak discussion was about the same as superven*, it has crashed far worse -- or at least it has, so far. If we had a longer time period to play with, we could try to make the analyses more temporally stable by sampling a window of 50 years before and after peak, thus giving superven* more of a fair chance to finish its crash, as it were.
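    Here's one way to render the Peakiness measure defined a few paragraphs up in Python -- a sketch on my reading of the definition (maximum discussion divided by the average of the first two and final two periods, with the .01% floor applied throughout); the example arc is invented:

    FLOOR = 0.01  # assumed floor level of discussion, in percent

    def peakiness(arc):
        """Maximum discussion divided by the average of the first two and
        final two periods, with the floor applied to every period."""
        floored = [max(p, FLOOR) for p in arc]
        endpoints = floored[:2] + floored[-2:]
        return max(floored) / (sum(endpoints) / len(endpoints))

    # A term that starts near zero, spikes, and crashes back down comes out very Peaky:
    print(round(peakiness([0.00, 0.01, 0.25, 0.05, 0.01, 0.00]), 1))  # 25.0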

    Okay, how about newer jargon? Let's try a few. I guess first I should say that jargon is wonderful and valuable, and I actually love the grue and Twin Earth thought experiments. Also some jargon becomes a permanent part of the discipline -- such as "dualism" and "secondary qualities". Maybe grue and Twin Earth will also prove in the long run to be permanent valuable contributions to the philosopher's toolbox, just in a lower-key way than back when they were hot topics. I don't really mean "jargon" as a term of abuse.

    In 1974, Robert Kirk introduced "zombies" in the philosophical sense (entities physically indistinguishable from people but with no conscious experience), but usage didn't really take off until they got discussion in several articles in Journal of Consciousness Studies in 1995 and in David Chalmers' influential The Conscious Mind in 1996. Contrary to popular rumor, the zombie doesn't appear to be dead quite yet. "Epistemic injustice" was introduced by Miranda Fricker in the late 1990s and early 2000s. "Virtue epistemology" was introduced by Ernest Sosa in the early 1990s. Fictionalism has a longer and more complicated history in logic and philosophy of math.

    The "explanatory gap" between physical or functional explanations and subjective conscious experience was introduced by Joseph Levine in 1983, but doesn't hit the abstracts until some papers of his in the early 1990s. "Experimental philosophy" in its earlier uses refers to the early modern scientific work of Newton, Boyle, and others. It's recent usage to refer to experimental work on folk intuitions about philosophical scenarios hits the abstracts all at once with five papers in 2007. Consistently with my twenty-year hypothesis, of these, "explanatory gap" is the only one that shows signs of being past its peak (despite hopes expressed by some of my Facebook friends). Maybe fictionalis* is running longer.

    Okay, I can't resist trying a few more discussion arcs, just to see how they play out.

    "Possible worlds" goes back at least to Leibniz, but its first appearance in the abstracts was by Saul Kripke in a 1959 article. It peaks at 2.0% in the late 1960s, but has enduring popularity (currently 0.4%). "Sense data" as the objects of perception was introduced in the early 20th century by G.E. Moore and Bertrand Russell and has lots of discussion in the beginning of this dataset (1.7%), crashing down to levels appropriate to a historical relic (0.02%). "Qualia" has a couple occurrences early in the abstracts and traces back to C.S. Peirce in the 19th century, then hits the abstracts again with an article by Sydney Shoemaker in 1975, peaking in the late 1990s.

    Supererogation (morally good acts beyond what is morally required) entered modern discussion in the late 1950s and early 1960s (first hitting the abstracts with Joel Feinberg in 1961), then peaked in the late 1980s -- but it looks like it might be staging a comeback? Wittgenstein introduced the idea of the "language game" in his posthumous Philosophical Investigations (1953), with discussion peaking in the late 1970s. Thomas Nagel introduced "moral luck" in a classic 1979 article, and although it peaked in the late 2000s, it hasn't yet declined much from that peak.

    Possible worlds has the highest Peakiness of the lot -- though nothing like grue -- at 8.4. "Language game" is next at 5.1. The rest aren't very Peaky, ranging from 2.1 to 2.9.

    I've spent more hours this week doing this than I probably should have, given all my other commitments -- but there's something almost meditative about data-gathering, and the arcs yield a perspective I appreciate on the historical brevity of philosophical trends.