Thursday, January 30, 2014

An Objection to Group Consciousness Suggested by David Chalmers

For a couple of years now, I have been arguing that if materialism is true the United States probably has a stream of conscious experience over and above the conscious experiences of its citizens and residents. As it happens, very few materialist philosophers have taken the possibility seriously enough to discuss it in writing, so part of my strategy in approaching the question has been to email various prominent materialist philosophers to get a sense of whether they thought the U.S. might literally be phenomenally conscious, and if not why not.

To my surprise, about half of my respondents said they did not rule out the possibility. Two of the more interesting objections came from Fred Dretske (my undergrad advisor, now deceased) and Dan Dennett. I detail their objections and my replies in the essay in draft linked above. Although I didn't target him because he is not a materialist, [update 3:33 pm: Dave points out that I actually did target him, though it wasn't in my main batch] David Chalmers also raised an objection about a year ago in a series of emails. The objection has been niggling at me ever since (Dave's objections often have that feature), and I now address it in my updated draft.

The objection is this: The United States might lack consciousness because the complex cognitive capacities of the United States (e.g., to war and spy on its neighbors, to consume and output goods, to monitor space for threatening asteroids, to assimilate new territories, to represent itself as being in a state of economic expansion, etc.) arise largely in virtue of the complex cognitive capacities of the people composing it and only to a small extent in virtue of the functional relationships between the people composing it. Chalmers has emphasized to me that he isn't committed to this view, but I find it worth considering nonetheless, and others have pressed similar concerns.

This objection is not the objection that no conscious being could have conscious subparts (which I discuss in Section 2 of the essay and also here); nor is it the objection that the United States is the wrong type of thing to have conscious states (which I address in Sections 1 and 4). Rather, it's that what's doing the cognitive-functional heavy lifting in guiding the behavior of the U.S. are processes within people rather than the group-level organization.

To see the pull of this objection, consider an extreme example -- a two-seater homunculus. A two-seater homunculus is a being who behaves outwardly like a single intelligent entity but who instead of having a brain has two small people inside who jointly control the being's behavior, communicating with each other through very fast linguistic exchange. Plausibly, such a being has two streams of conscious experience, one for each homunculus, but no additional group-level stream for the system as a whole (unless the conditions for group-level consciousness are weak indeed). Perhaps the United States is somewhat like a two-seater homunculus?

Chalmers's objection seems to depend on something like the following principle: The complex cognitive capacities of a conscious organism (or at least the capacities in virtue of which the organism is conscious) must arise largely in virtue of the functional relationships between the subsystems composing it rather than in virtue of the capacities of its subsystems. If such a principle is to defeat U.S. consciousness, it must be the case both that

(a.) the United States has no such complex capacities that arise largely in virtue of the functional relationships between people, and

(b.) no conscious organism could have the requisite sort of complex capacities largely in virtue of the capacities of its subsystems.

Contra (a): This claim is difficult to assess, but being a strong, empirical negative existential (the U.S. has not even one such capacity), it seems a risky bet unless we can find solid empirical grounds for it.

Contra (b): This claim is even bolder. Consider a rabbit's ability to swiftly detect a snake by sight. This complex cognitive capacity, presumably an important contributor to rabbit visual consciousness, might exist largely in virtue of the functional organization of the rabbit's visual subsystems, with the results of that processing then communicated to the organism as a whole, precipitating further reactions. Indeed, turning (b) almost on its head, some models of human consciousness treat subsystem-driven processing as the normal case: The bulk of our cognitive work is done by subsystems, who cooperate by feeding their results into a global workspace or who compete for fame or control. So grant (a) for the sake of argument: The relevant cognitive work of the United States is done largely within individual subsystems (people or groups of people) who then communicate their results across the entity as a whole, competing for fame and control via complex patterns of looping feedback. At the very abstract level of description relevant to Chalmers's expressed (but let me re-emphasize, not definitively endorsed) objection, such an organization might not be so different from the actual organization of the human mind. And it is of course much bolder to commit to the further view implied by (b), that no conscious system could possibly be organized in such a subsystem-driven way. It's hard to see what would justify such a claim.

The two-seater homunculus is strikingly different from a rabbit or human system (or even a Betelgeusian beehead) because the communication is only between two sub-entities, at a low information rate; but the U.S. is composed of about 300,000,000 sub-entities whose informational exchange is massive, so the case is not similar enough to justify transferring intuitions from the one to the other.

Thursday, January 23, 2014

New Essay in Draft: The Moral Behavior of Ethicists

... which is a recurrent topic of my research, as regular readers of this blog will know.

This new paper, co-authored with Joshua Rust, summarizes our work on the topic to date and offers a quantitative meta-analysis that supports our overall finding that professors of ethics behave neither morally better nor morally worse overall than do philosophers not specializing in ethics.

You might find it entirely unsurprising that ethicists should behave no differently than other professors. If you do find it unsurprising (Josh and I don't), you might still be interested in looking at another of Josh's and my papers, in which we think through some of the theoretical implications of this finding.

Tuesday, January 21, 2014

Stanislaw Lem's Proof that the External World Exists

Slowly catching up on science fiction classics, reading Lem's Solaris, I'm struck by how the narrator, Kris, escapes a skeptical quandary. Worried that his sensory experiences might be completely delusional, Kris concocts the following empirical test:

I instructed the satellite to give me the figure of the galactic meridians it was traversing at 22-second intervals while orbiting Solaris, and I specified an answer to five decimal points.

Then I sat and waited for the reply. Ten minutes later, it arrived. I tore off the strip of freshly printed paper and hid it in a drawer, taking care not to look at it.... Then I sat down to work out for myself the answer to the question I had posed. For an hour or more, I integrated the equations....

If the figures obtained from the satellite were simply the product of my deranged mind, they could not possibly coincide with [my hand calculations]. My brain might be unhinged, but it could not conceivably compete with the Station's giant computer and secretly perform calculations requiring several months' work. Therefore if the figures corresponded, it would follow that the Station's computer really existed, that I had really used it, and that I was not delirious (1961/1970, pp. 50-51).

Except in detail, Kris's test closely resembles an experiment Alan Moore and I have used in our attempt to empirically establish the existence of the external world (full paper in draft here).

Kris is hasty in concluding from this experiment that he must have used an actually existing computer. Kris might, for example, have been the victim of a deceiver with great computational powers, who can give him the meridians within ten minutes of his asking. And Kris would have done better, I think, to have looked at the readout before doing his own calculations. By not looking until the end, he leaves open the possibility that he delusively creates the figures supposedly from the satellite only after he has derived the correct answers himself. Assuming he can trust his memory and arithmetical abilities for at least a short duration (and if not, he's really screwed), Kris should look at the satellite's figures first, holding them steady before his mind, while he confirms by hand that the numbers make mathematical sense.

Increasingly, I think the greatest science fiction writers are also philosophers. Exploring the limits of technological possibility inevitably involves confronting the central issues of metaphysics, epistemology, and human value.

Wednesday, January 15, 2014

Waves of Mind-Wandering in Live Performances

I'm thinking (again) about beeping people during aesthetic experiences. The idea is this. Someone is reading a story, or watching a play, or listening to music. She has been told in advance that a beep will sound at some unexpected time, and when the beep sounds, she is to immediately stop attending to the book, play, or whatever, and note what was in her stream of experience at the last undisturbed moment before the beep, as best she can tell. (See Hurlburt 2011 for extensive discussion of such "experience sampling" methods.)

I've posted about this issue before; and although professional philosophy talks aren't paradigmatic examples of aesthetic performances, I have beeped people during some of my talks. One striking result: People spend lots of time thinking about things other than the explicit content of the performance -- for example, thinking instead about needing to go to the bathroom, or a sports bet they just won, or the weird color of an advertising flyer. And I'd bet Nutcracker audiences are similarly scatterbrained. (See also Schooler, Reichle, and Halpern 2004; Schubert, Vincs, and Stevens 2013.)

But I also get the sense that if I pause, I can gather the audience up. A brief pause is commanding -- in music (e.g. Roxanne), in film -- but especially in a live performance like a talk. Partly, I suspect this is due to contrast with previous noise levels, but also it seems to raise curiosity about what's next -- a topic change, a point of emphasis, some unplanned piece of human behavior. (How interesting it is when the speaker drops his cup! -- much more interesting, usually, in a sad and wonderfully primate way, than the talk itself.)

I picture people's conscious attention coming in waves. We launch out together reasonably well focused, but soon people start drifting their various directions. The speaker pauses or does something else that draws attention, and that gathers everyone briefly back together. Soon the audience is off drifting again.

We could study this with beepers. We could see if I'm right about pauses. We could see what parts of performance tend to draw people back from their wanderings and what parts of performance tend to escape conscious attention. We could see how immersive a performance is (in one sense of "immersive") by seeing how frequently people report being off topic vs. on a tangentially related topic vs. being focused on the immediate content of the performance. We could vastly improve our understanding of the audience experience. New avenues for criticism could open up. Knowing how to capture and manipulate the waves could help writers and performers create a performance more in line with their aesthetic goals. Maybe artists could learn to want waves and gatherings of a certain sort, adding a new dimension to their aesthetic goals.

As far as I can tell, no one has ever done a systematic experience sampling study during aesthetic experience that explores these issues. It's time.
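If it helps to make the proposal concrete, here is a minimal sketch, in Python, of how a single run of such a beeper study might be scheduled and summarized. The beep spacing, the three response categories, and the summary measure are illustrative choices of mine, not an established protocol (Hurlburt's actual methods are considerably more careful than this):

```python
import random
from collections import Counter

# Illustrative response categories for a beeped audience member.
CATEGORIES = ["on-content", "tangential", "off-topic"]

def beep_schedule(duration_min=60, n_beeps=8, min_gap_min=3, seed=None):
    """Pick unpredictable beep times (in minutes), keeping a minimum gap between beeps."""
    rng = random.Random(seed)
    times = []
    while len(times) < n_beeps:
        t = rng.uniform(0, duration_min)
        if all(abs(t - u) >= min_gap_min for u in times):
            times.append(t)
    return sorted(times)

def summarize(reports):
    """reports: list of (beep_time_min, category) pairs from one audience member."""
    counts = Counter(category for _, category in reports)
    total = len(reports)
    return {category: counts[category] / total for category in CATEGORIES}

# Example run, with made-up responses standing in for real reports:
schedule = beep_schedule(seed=1)
made_up_reports = [(t, random.choice(CATEGORIES)) for t in schedule]
print([round(t, 1) for t in schedule])
print(summarize(made_up_reports))
```

Pooling the category proportions across an audience and plotting them against beep time would give a crude picture of the waves of attention described above.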

Friday, January 10, 2014

Skeptical Fog vs. Real Fog

I am a passenger in a jumbo jet that is descending through turbulent night fog into New York City. I'm not usually nervous about flying, but the turbulence is getting to me. I know that the odds of dying in a jet crash with a major US airline are well below one in a million, but descent in difficult weather conditions is among the most dangerous parts of flight -- so maybe I should estimate my odds of death in the next few minutes as about one in a million or one in ten million? I can't say those are odds I'm entirely happy about.

But then I think: Maybe some radically skeptical scenario is true. Maybe, for example, I'm a short-term sim -- an artificial, computerized being in a small world, doomed soon to be shut down or deleted. I don't think that is at all likely, but I don't entirely rule it out. I have about a 1% credence that some radically skeptical scenario or other is true, and about 0.1% credence, specifically, that I'm in a short-term sim. In a substantial portion of these radically skeptical scenarios, my life will be over soon. So my credence that my life will soon end for some skeptical-scenario type reason is maybe about one in a thousand or one in ten thousand -- orders of magnitude higher than my credence that my life will soon end for the ordinary-plane-crash type of reason.
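To make the back-of-the-envelope arithmetic explicit, here is a minimal sketch using the illustrative credences above; the 10% figure for how many skeptical scenarios end my life soon is my own stand-in for "a substantial portion":

```python
# Illustrative credences from the paragraph above (not precise measurements).
cred_skeptical = 0.01        # some radically skeptical scenario or other is true
portion_imminent = 0.1       # assumed stand-in for the "substantial portion" that end soon

cred_end_skeptical = cred_skeptical * portion_imminent    # about 1 in 1,000
cred_end_plane_crash = 1e-6                               # roughly one in a million

print(f"imminent death, skeptical-scenario style: ~1 in {1 / cred_end_skeptical:,.0f}")
print(f"imminent death, plane-crash style:        ~1 in {1 / cred_end_plane_crash:,.0f}")
print(f"the former credence is ~{cred_end_skeptical / cred_end_plane_crash:,.0f}x the latter")
```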

Still, the plane-crash possibility worries me more than the skeptical possibility.

Does the fact that these skeptical reflections leave me emotionally cold show that I don't really, "deep down", have even a one-in-a-million credence in at least the imminent-death versions of the skeptical scenarios? Now maybe I shouldn't worry about those scenarios even if I truly assign a non-trivial credence to them. After all, there's nothing I can do about them, no action that I can reasonably take in light of them. I can't, for example, buy sim-insurance. But if that's why the scenarios leave me unmoved, the same is true about the descending plane. There's nothing I can do about the fog; I need to just sit tight. As a general matter helplessness doesn't eliminate anxiety.

Here my interest in radical skepticism intersects another of my interests, the nature of belief. What would be involved in really believing that there is a non-trivial chance that one will soon die because some radically skeptical scenario is true? Does genuine belief only require saying these things to oneself, with apparent sincerity, and thinking that one accepts them? Or do they need to get into your gut?

My view is that it's an in-between case. To believe, on my account, is to have a certain dispositional profile -- to be disposed to reason, and to act and react, both inwardly and outwardly, as ordinary people would expect someone with that belief to do, given their other related attitudes. So, for example, to believe that something carries a 1/10,000 risk of death is in part to be disposed sincerely to say it does and to draw conclusions from that fact (e.g., that it's riskier than something with a 1/1,000,000 risk of death); but it is also to have certain emotional reactions, to spontaneously draw upon it in one's everyday thinking, and to guide one's actions in light of it. I match the dispositional profile, to some extent, for believing there's a small but non-trivial chance I might soon die for skeptical-scenario-type reasons -- for example, I will sincerely say this when reflecting in my armchair -- but in other important ways I seem not to match the relevant dispositional profile.

It is not at all uncommon for people intellectually to accept certain propositions -- for example, that their marriage is one of the most valuable things in their lives, or that it's more important for their children to be happy than to get good grades, or that custodians deserve as much respect as professors -- while in their emotional reactions and spontaneous thinking, they do not very closely match the dispositional profile constitutive of believing such things. I have argued that this is one important way in which we can occupy the messy middle space between being accurately describable as believing something and being accurately describable as failing to believe it. My own low-but-not-negligible credence in radically skeptical scenarios is something like this, I suppose.

Thursday, January 02, 2014

Our Possible Imminent Divinity

We might soon be gods.

John Searle might be right that digital computers could never be conscious. Or the pessimists might be right who say we will blow ourselves up before we ever advance far enough to create real consciousness in computers. But let's assume, for the sake of argument, that Searle and the pessimists are wrong: In a few decades we will be producing genuinely conscious artificial intelligences in substantial quantity.

We will then have at least some features of gods: We will have created a new type of being, perhaps in our image. We will presumably have the power to shape our creations' personalities to suit us, to make them feel blessed or miserable, to hijack their wills to our purposes, to condemn them to looping circuits of pain or reward, to command their worship if we wish.

If consciousness is only possible in fully embodied robots, our powers might stop approximately there, but if we can create conscious beings inside artificial environments, we become even more truly divine. Imagine a simulated world inside a computer with its own laws and containing multiple conscious beings whose sensory inputs all flow in according to the rules of that world and whose actions are all expressed in that world -- The Sims but with conscious AIs.

[image from http://tapirangkasaterbabas.blogspot.com; go ahead and apply feminist critique whenever ready]

Now we can command not only the AI beings themselves but their entire world.

We approach omnipotence: We can create miracles. We can drop in Godzilla, we can revive the dead, we can move a mountain, undo errors, create or end the whole world at a whim. Zeus would be envious.

We approach omniscience: We can look at any part of the world, look inside anyone's mind, see the past if we have properly recorded it -- possibly, too, predict the future, depending on the details of the program.

We stand outside of space and to some extent time: Our created beings can point in any direction of the sphere and not point at us -- we are everywhere and nowhere, not on their map, though capable of seeing and reaching anywhere. If the sim has a fast clock relative to our time, we can seem to endure for millennia or longer. We can pause their time and do whatever we like unconstrained by their clock. We can rewind to save points and thus directly view and interact with the past, perhaps sprouting off new worlds from it or rewriting the history of the one world.

But will we be benevolent gods? What duties will we have to our creations, and how well will we execute those duties? Philosophers don't discuss this issue as much as they should. (Nick Bostrom and Eliezer Yudkowsky are exceptions, and there's some terrific science fiction, e.g., Ted Chiang. In this story, R. Scott Bakker and I pit the duty to maximize happiness against the duty to give our creations autonomy and self-knowledge.)

Though to our creations we will literally have the features of divinity and they might rightly call us their gods, from the perspective of this level of reality we might remain very mortal, weak, and flawed. We might even ourselves be the playthings of still higher gods.

Wednesday, January 01, 2014

What I Wrote in 2013

I hope it is not too vain to begin 2014 with a retrospect of what I wrote in 2013. I kind of enjoy gathering it here -- it helps convince me that my solitary office labors are not a waste -- and maybe some readers will find it useful.

This work appeared in print in 2013:

This work is finished and forthcoming:
Also in 2013, I began writing short speculative fiction in earnest.  I am not sure how this will turn out; I think it's too early for me to know if I'm any good at it.  My first effort, a collaboration with professional fiction writer R. Scott Bakker, appeared in Nature ("Reinstalling Eden", listed above).  I wish I could post drafts on my website and solicit feedback, as I do with my philosophy articles, but fiction venues seem to dislike that.

Update January 2:
I fear the ill-chosen title of this post might give some people the misleading impression that I wrote all of this material during 2013.  Most of the work that appeared in print was finalized before 2013, and a fair portion of the other work was at least in circulating draft before 2013.  Here's how things stood at the end of 2012; lots of overlap!

Thursday, December 26, 2013

The Moral Epistemology of the Jerk

The past few days, I've been appreciating the Grinch's perspective on Christmas -- particularly his desire to drop all the presents off Mount Crumpit. An easy perspective for me to adopt! I've already got my toys (mostly books via Amazon, purchased any old time I like), and there's such a grouchy self-satisfaction in scoffing, with moralistic disdain, at others' desire for their own favorite luxuries.

(image from http://news.mst.edu)

When I write about jerks -- and the Grinch is a capital one -- it's always with two types of ambivalence. First, I worry that the term invites the mistaken thought that there is a particular and readily identifiable species of people, "jerks", who are different in kind from the rest of us. Second, I worry about the extent to which using this term rightly turns the camera upon me myself: Who am I to call someone a jerk? Maybe I'm the jerk here!

My Grinchy attitudes are, I think, the jerk bubbling up in me; and as I step back from the moral condemnations toward which I'm tempted, I find myself reflecting on why jerks make bad moralists.

A jerk, in my semi-technical definition, is someone who fails to appropriately respect the individual perspectives of the people around him, treating them as tools or objects to be manipulated, or idiots to be dealt with, rather than as moral and epistemic peers with a variety of potentially valuable perspectives. The Grinch doesn't respect the Whos, doesn't value their perspectives. He doesn't see why they might enjoy presents and songs, and he doesn't accord any weight to their desires for such things. This is moral and epistemic failure, intertwined.

The jerk fails as a moralist -- fails, that is, in the epistemic task of discovering moral truths -- for at least three reasons.

(1.) Mercy is, I think, near the heart of practical, lived morality. Virtually everything everyone does falls short of perfection. Her turn of phrase is less than perfect, she arrives a bit late, her clothes are tacky, her gesture irritable, her choice somewhat selfish, her coffee less than frugal, her melody trite -- one can create quite a list! Practical mercy involves letting these quibbles pass forgiven or even better entirely unnoticed, even if a complaint, were it made, would be just. The jerk appreciates neither the other's difficulties in attaining all the perfections he himself (imagines he) has nor the possibility that some portion of what he regards as flawed is in fact blameless. Hard moralizing principle comes naturally to the jerk, while it is alien to the jerk's opposite, the sweetheart. The jerk will sometimes give mercy, but if he does, he does so unequally -- the flaws and foibles that are forgiven are exactly the ones the jerk recognizes in himself or has other special reasons to be willing to forgive.

(2.) The jerk, in failing to respect the perspectives of others, fails to appreciate the delight others feel in things he does not himself enjoy -- just as the Grinch fails to appreciate the Whos' presents and songs. He is thus blind to the diversity of human goods and human ways of life, which sets his principles badly askew.

(3.) The jerk, in failing to respect the perspectives of others, fails to be open to frank feedback from those who disagree with him. Unless you respect another person, it is difficult to be open to accepting the possible truth in hard moral criticisms from that person, and it is difficult to triangulate epistemically with that person as a peer, appreciating what might be right in that person's view and wrong in your own. This general epistemic handicap shows especially in moral judgment, where bias is rampant and peer feedback essential.

For these reasons, and probably others, the jerk suffers from severe epistemic shortcomings in his moral theorizing. I am thus tempted to say that the first question of moral theorizing should not be something abstract like "what is to be done?" or "what is the ethical good?" but rather "am I a jerk?" -- or more precisely, "to what extent and in what ways am I a jerk?" The ethicist who does not frankly confront herself on this matter, and who does not begin to execute repairs, works with deficient tools. Good first-person ethics precedes good second-person and third-person ethics.

Wednesday, December 18, 2013

Should I Try to Fly, Just on the Off-Chance That This Might Be a Dreambody?

I don't often attempt to fly when walking across campus, but yesterday I gave it a try. I was going to the science library to retrieve some books on dreaming. About halfway there, in the wide-open mostly-empty quad, I spread my arms, looked at the sky, and added a leap to one of my steps.

My thinking was this: I was almost certainly awake -- but only almost certainly! As I've argued, I think it's hard to justify much more than 99.9% confidence that one is awake, once one considers the dubitability of all the empirical theories and philosophical arguments against dream doubt. And when one's confidence is imperfect, it will sometimes be reasonable to act on the off-chance that one is mistaken -- whenever the benefits of acting on that off-chance are sufficiently high and the costs sufficiently low.

I imagined that if I was dreaming, it would be totally awesome to fly around, instead of trudging along. On the other hand, if I was not dreaming, it seemed no big deal to leap, and in fact kind of fun -- maybe not entirely in keeping with the sober persona I (feebly) attempt to maintain as a professor, but heck, it's winter break and no one's around. So I figured, why not give it a whirl?

I'll model this thinking with a decision matrix, since we all love decision matrices, don't we? Call dream-flying a gain of 100, waking leap-and-fail a loss of 0.1, dreaming leap-and-fail a loss of only 0.01 (since no one will really see me), and continuing to walk in the dream a loss of 1 (since why bother with the trip if it's just a dream?). All this is relative to a default of zero for walking, awake, to the library. (For simplicity, I assume that if I'm dreaming things are overall not much better or worse than if I'm awake, e.g., that I can get the books and work on my research tomorrow.) I'd been reading about false awakenings, and at that moment 99.7% confidence in my wakefulness seemed about right to me. The odds of flying conditional upon dreaming I held to be about 50/50, since I don't always succeed when I try to fly in my dreams.

So here's the payoff matrix:

Awake (p = .997): Leap = -0.1; Not Leap = 0
Dreaming, flight succeeds (p = .003 × .5): Leap = +100; Not Leap = -1
Dreaming, flight fails (p = .003 × .5): Leap = -0.01; Not Leap = -1

Plugging into the expected value formula:

Leap = (.003)(.5)(100) + (.003)(.5)(-0.01) + (.997)(-0.1) = approx. +.05.

Not Leap = (.003)(-1) + (.997)(0) = -.003.

Leap wins!

Of course, this decision outcome is highly dependent on one's degree of confidence that one is awake, on the downsides of leaping if it's not a dream, on the pleasure one takes in dream-flying, and on the probability of success if one is in fact dreaming. I wouldn't recommend attempting to fly if, say, you're driving your son to school or if you're standing in front of a class of 400, lecturing on evil.
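Here, for what it's worth, is a minimal sketch that reproduces the arithmetic and shows where the decision flips as one's confidence in wakefulness varies. The payoffs and probabilities are just the ones stipulated above; the function and parameter names are mine:

```python
def expected_values(p_dream, p_fly_given_dream=0.5,
                    fly=100, dream_fail=-0.01, wake_fail=-0.1,
                    dream_walk=-1, wake_walk=0):
    """Expected values of leaping vs. not leaping, given the stipulated payoffs."""
    leap = (p_dream * p_fly_given_dream * fly
            + p_dream * (1 - p_fly_given_dream) * dream_fail
            + (1 - p_dream) * wake_fail)
    walk = p_dream * dream_walk + (1 - p_dream) * wake_walk
    return leap, walk

# The post's numbers: 0.3% credence that this is a dream.
print(expected_values(p_dream=0.003))   # about (+0.05, -0.003): leap wins

# Where does the decision flip as confidence in wakefulness rises?
for p_dream in (0.003, 0.0025, 0.002, 0.0019, 0.001):
    leap, walk = expected_values(p_dream)
    winner = "leap" if leap > walk else "not leap"
    print(f"p(dream) = {p_dream:.4f}: leap = {leap:+.4f}, walk = {walk:+.4f} -> {winner}")
```

On these stipulated numbers, not leaping only becomes the better bet once one's confidence in being awake rises above roughly 99.8%.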

But in those quiet moments, as you're walking along doing nothing else, with no one nearby to judge you -- well maybe in such moments spreading your wings can be the most reasonable thing to do.

Wednesday, December 11, 2013

How Subtly Do Philosophers Analyze Moral Dilemmas?

You know the trolley problems. A runaway train trolley will kill five people ahead on the tracks if nothing is done. But -- yay! -- you can intervene and save those five people! There's a catch, though: your intervention will cost one person's life. Should you intervene? Both philosophers' and non-philosophers' judgments vary depending on the details of the case. One interesting question is how sensitive philosophers and non-philosophers are to details that might be morally relevant (as opposed to presumably irrelevant distracting features like order of presentation or the point-of-view used in expressing the scenario).

Consider, then, these four variants of the trolley dilemma:

Switch: You can flip a switch to divert the trolley onto a dead-end side-track where it will kill one person instead of the five.

Loop: You can flip a switch to divert the trolley into a side-track that loops back around to the main track. It will kill one person on the side track, stopping on his body. If his body weren't there to block it, though, the trolley would have continued through the loop and killed the five.

Drop: There is a hiker with a heavy backpack on a footbridge above the trolley tracks. You can flip a switch which will drop him through a trap door and onto the tracks in front of the runaway trolley. The trolley will kill him, stopping on his body, saving the five.

Push: Same as Drop, except that you are on the footbridge standing next to the hiker and the only way to intervene is to push the hiker off the bridge into the path of the trolley. (Your own body is not heavy enough to stop the trolley.)

Sure, all of this is pretty artificial and silly. But orthodox opinion is that it's permissible to flip the switch in Switch but impermissible to push the hiker in Push; and it's interesting to think about whether that is correct, and if so why.

Fiery Cushman and I decided to compare philosophers' and non-philosophers' responses to such cases, to see if philosophers show evidence of different or more sophisticated thinking about them. We presented both trolley-type setups like this and also similarly structured scenarios involving a motorboat, a hospital, and a burning building (for our full list of stimuli see Q14-Q17 here.)

In our published article on this, we found that philosophers were just as subject to order effects in evaluating such scenarios as were non-philosophers. But we focused mostly on Switch vs. Push -- and also some moral luck and action/omission cases -- and we didn't have space to really explore Loop and Drop.

About 270 philosophers (with master's degree or more) and about 670 non-philosophers (with master's degree or more) rated paragraph-length versions of these scenarios, presented in random order, on a 7-point scale from 1 (extremely morally good) through 7 (extremely morally bad; the midpoint at 4 was marked "neither good nor bad"). Overall, all the scenarios were rated similarly and near the midpoint of the scale (from a mean of 4.0 for Switch to 4.4 for Push [paired t = 5.8, p < .001]), and philosophers' and non-philosophers' mean ratings were very similar.

Perhaps more interesting than mean ratings, though, are equivalency ratings: How likely were respondents to rate scenario pairs equivalently? The Loop case is subtly different from the Switch case: Arguably, in Loop but not Switch, the man's death is a means or cause of saving the five, as opposed to a merely foreseen side effect of an action that saves the five. Might philosophers care about this subtle difference more than non-philosophers? Likewise, the Drop case is different from the Push case, in that Push but not Drop requires proximity and physical contact. If that difference in physical contact is morally irrelevant, might philosophers be more likely to appreciate that fact and rate the scenarios equivalently?

In fact, the majority of participants rated all the scenarios exactly the same -- and philosophers were no less likely to do so than non-philosophers: 63% of philosophers gave identical ratings to all four scenarios, vs. 58% of non-philosophers (Z = 1.2, p = .23).

I find this somewhat odd. To me, it seems a pretty flat-footed form of consequentialism that says that Push is not morally worse than Switch. But I find that my judgment on the matter swims around a bit, so maybe I'm wrong. In any case, it's interesting to see both philosophers and non-philosophers seeming to reject the standard orthodox view, and at very similar rates.

How about Switch vs. Loop? Again, we found no difference in equivalency ratings between philosophers and non-philosophers: 83% of both groups rated the scenarios equivalently (Z = 0.0, p = .98).

However, philosophers were more likely than non-philosophers to rate Push and Drop equivalently: 83% of philosophers did, vs. 73% of non-philosophers (Z = 3.4, p = .001; 87% vs. 77% if we exclude participants who rated Drop worse than Push).
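For readers curious about the statistics, here is a rough sketch of the kind of two-proportion z-test that underlies a comparison like that 83% vs. 73% figure, using the approximate group sizes reported above; the exact counts and the test actually reported in the paper may differ a little:

```python
from math import erf, sqrt

def two_proportion_z(successes1, n1, successes2, n2):
    """Two-proportion z-test with pooled standard error; returns z and a two-sided p-value."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # normal approximation
    return z, p_two_sided

# Approximate figures from the post: 83% of ~270 philosophers vs. 73% of ~670
# non-philosophers rated Push and Drop equivalently.
z, p = two_proportion_z(round(0.83 * 270), 270, round(0.73 * 670), 670)
print(f"z = {z:.2f}, p = {p:.4f}")   # roughly z = 3.2, p = .001 with these rounded counts
```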

Here's another interesting result. Near the end of the study we asked whether it was worse to kill someone as a means of saving others than to kill someone as a side-effect of saving others -- one way of setting up the famous Doctrine of the Double Effect, which is often invoked to defend the view that Push is worse than Switch (in Push, the one person's death is arguably the means of saving the other five; in Switch, the death is only a foreseen side-effect of the action that saves the five). Loop is interesting in part because, although superficially similar to Switch, if the one person's death is the means of saving the five, then maybe the case is more morally similar to Push than to Switch (see Otsuka 2008). However, only 18% of the philosophers who said it was worse to kill as a means of saving others rated Loop worse than Switch.

Thursday, December 05, 2013

Dream Skepticism and the Phenomenal Shadow of Belief

Ernest Sosa has argued that we do not form beliefs when we dream. If I dream that a tiger is chasing me, I do not really believe that a tiger is chasing me. If I dream that I am saying to myself "I'm awake!" I do not really believe that I'm awake. Real beliefs are more deeply integrated with my standing attitudes and my waking behavior than these dream-mirages are. If so, it follows that if I genuinely believe that I'm awake, necessarily I am correct; and conversely, if I believe I'm dreaming, necessarily I'm wrong. The first belief is self-verifying; the second self-defeating. Deliberating between them, I should not choose the self-defeating one, nor should I decline to choose, as though these two options were of equal epistemic merit. Rather, I should settle upon the self-verifying belief that I am awake. Thus, dream skepticism is vanquished!

One nice thing about Sosa's argument is that it does not require that dream experience differ from waking experience in any of the ways that dreams and waking life are sometimes thought to differ (e.g., dream experience needn't be gappier, or less coherent, or more like imagery experience than like perceptual experience). The argument would still work even if dream experience were, as Sosa says, "internally indistinguishable" from waking experience.

This seeming strength of the argument, though, seems to me to signal a flaw. Suppose that dreaming life is in fact in every respect phenomenally indistinguishable from waking life -- indistinguishable from the inside, as it were -- and accordingly that I could easily experience exactly *this* while sleeping; and furthermore suppose that I dream extensively every night and that most of my dreams have mundane everyday content just like that of my waking life. None of this should affect Sosa's argument. And suppose further that I am in fact now awake (and thus capable of forming beliefs about whether I am dreaming, per Sosa), and that I know that due to a horrible disease I acquired at age 35, I spend almost all of my life in dreaming sleep so that 90% of the time when I have experiences of this sort (as if in my office, thinking about philosophy, working on a blog post...) I am sleeping. Unless there's something I'm aware of that points toward this not being a dream, shouldn't I hesitate before jumping to the conclusion that this time, unlike all those others, I really am awake? Probabilities, frequencies, and degrees of resemblance seem to matter, but there is no room for them in Sosa's argument.

Maybe we don't form beliefs when we dream -- Sosa, and also Jonathan Ichikawa, have presented some interesting arguments along those lines. But if there is no difference from the inside between dreams and waking, then my dreaming self, when he was dreaming about considering dream skepticism (e.g., here), did something that was phenomenally indistinguishable from forming the belief that he was thinking about philosophy, something that was phenomenally indistinguishable from affirming or denying or suspending judgment on the question of whether he was dreaming -- and then the question becomes: How do I know that I'm not doing that very same thing right now?

Call it dream-shadow believing: It's like believing, except that it happens only in dreams. If dream-shadow believing is possible, then if I dream-shadow believe that I am dreaming, necessarily I am correct; if I dream-shadow believe that I am awake, necessarily I am wrong. The first is self-verifying, the second self-defeating. The skeptic can now ask: Should I try to form the belief that I am awake or instead the dream-shadow belief that I am dreaming? -- and to this question, Sosa's argument gives no answer.

Update, 3:28 pm:

Jonathan Ichikawa has kindly reminded me that he presented similar arguments against Sosa back in 2007 -- which I knew (in fact, Jonathan thanks me in the article for my comments) but somehow forgot. Jonathan runs the reply a bit differently, in terms of quasi-affirming (which is neutral between genuine affirming and something phenomenally indistinguishable from affirming, but which one can do in a dream) rather than in terms of dream-shadow believing. Perhaps my dream-shadow belief formulation enables a parity-of-argument objection, if (given the phenomenal indistinguishability of dreams and waking) the argument that one should settle on self-verifying dream-shadow belief is as strong an argument as is Sosa's original argument.

Wednesday, November 27, 2013

Reinstalling Eden

If someday we can create consciousness inside computers, what moral obligations will we have to the conscious beings we create?

R. Scott Bakker and I have written a short story about this, which came out today in Nature.

You might think that it would be a huge moral triumph to create a society of millions of actually conscious, happy beings inside one's computer, who think they are living, peacefully and comfortably, in the base level of reality -- Eden, but better! Divinity done right!

On the other hand, there might be something creepy and problematic about playing God in that way. Arguably, such creatures should be given self-knowledge, autonomy, and control over their own world -- but then we might end up, again, with evil, or even with an entity both intellectually superior to us and hostile.

[For Scott's and my first go-round on these issues, see here.]

Friday, November 22, 2013

Introspecting My Visual Experience "as of" Seeing a Hat?

In "The Unreliability of Naive Introspection" (here and here), I argue, contra a philosophical tradition going back at least to Descartes, that we have much better knowledge of middle-sized objects in the world around us than we do of our stream of sensory experience while perceiving those objects.

As I write near the end of that paper:

The tomato is stable. My visual experience as I look at the tomato shifts with each saccade, each blink, each observation of a blemish, each alteration of attention, with the adaptation of my eyes to lighting and color. My thoughts, my images, my itches, my pains – all bound away as I think about them, or remain only as interrupted, theatrical versions of themselves. Nor can I hold them still even as artificial specimens – as I reflect on one aspect of the experience, it alters and grows, or it crumbles. The unattended aspects undergo their own changes too. If outward things were so evasive, they’d also mystify and mislead.

Last Saturday, I defended this view for three hours before commentator Carlotta Pavese and a number of other New York philosophers (including Ned Block, Paul Boghossian, David Chalmers, Paul Horwich, Chris Peacocke, Jim Pryor).

One question -- raised first, I think, by Paul B. then later by Jim -- was this: Don't I know that I'm having a visual experience as of seeing a hat at least as well as I know that there is in fact a real hat in front of me? I could be wrong about the hat without being wrong about the visual experience as of seeing a hat, but to be wrong about having a visual experience as of seeing a hat, well, maybe it's not impossible but at least it's a weird, unusual case.

I was a bit rustier in answering this question than I would have been in 2009 -- partly, I suspect, because I never articulated in writing my standard response to that concern. So let me do so now.

First, we need to know what kind of mental state this is about which I supposedly have excellent knowledge. Here's one possibility: To have "a visual experience as of seeing a hat" is to have a visual experience of the type that is normally caused by seeing hats. In other words, when I judge that I'm having this experience, I'm making a causal generalization about the normal origins of experiences of the present type. But it seems doubtful that I know better what types of visual experiences normally arise in the course of seeing hats than I know that there is a hat in front of me. In any case, such causal generalizations are not the sort of thing defenders of introspection usually have in mind.

Here's another interpretative possibility: In judging that I am having a visual experience as of seeing a hat, I am reporting an inclination to reach a certain judgment. I am reporting an inclination to judge that there is a hat in front of me, and I am reporting that that inclination is somehow caused by or grounded in my current visual experience. On this reading of the claim, what I am accurate about is that I have a certain attitude -- an inclination to judge. But attitudes are not conscious experiences. Inclinations to judge are one thing; visual experiences another. I might be very accurate in my judgment that I am inclined to reach a certain judgment about the world (and on such-and-such grounds), but that's not knowledge of my stream of sensory experience.

(In a couple of other essays, I discuss self-knowledge of attitudes. I argue that our self-knowledge of our judgments is pretty good when the matter is of little importance to our self-conception and when the tendency to verbally espouse the content of the judgment is central to the dispositional syndrome constitutive of reaching that judgment. Excellent knowledge of such partially self-fulfilling attitudes is quite a different matter from excellent knowledge of the stream of experience.)

So how about this interpretative possibility? To say I know that I am having a visual experience as of seeing a hat is to say that I am having a visual experience with such-and-such specific phenomenal features, e.g., this-shade-here, this-shape-here, this-piece-of-representational-content-there, and maybe this-holistic-character. If we're careful to read such judgments purely as judgments about features of my current stream of visual experience, I see no reason to think we would be highly trustworthy in them. Such structural features of the stream of experience are exactly the kinds of things about which I've argued we are apt to err: what it's like to see a tilted coin at an oblique angle, how fast color and shape experience get hazy toward the periphery, how stable or shifty the phenomenology of shape and color is, how richly penetrated visual experience is with cognitive content. These are topics of confusion and dispute in philosophy and consciousness studies, not matters we introspect with near infallibility.

Part of the issue here, I think, is that certain mental states have both a phenomenal face and a functional face. When I judge that I see something or that I'm hungry or that I want something, I am typically reaching a judgment that is in part about my stream of conscious experience and in part about my physiology, dispositions, and causal position in the world. If we think carefully about even medium-sized features of the phenomenological face of such hybrid mental states -- about what, exactly, it's like to experience hunger (how far does it spread in subjective bodily space, how much is it like a twisting or pressure or pain or...?) or about what, exactly, it's like to see a hat (how stable is that experience, how rich with detail, how do I experience the hat's non-canonical perspective...?), we quickly reach the limits of introspective reliability. My judgments about even medium-sized features of my visual experience are dubious. But I can easily answer a whole range of questions about comparably medium-sized features of the hat itself (its braiding, where the stitches are, its size and stability and solidity).

Update, November 25 [revised 5:24 pm]:

Paul Boghossian writes:

I haven't had a chance to think carefully about what you say, but I wanted to clarify the point I was making, which wasn't quite what you say on the blog, that it would be a weird, unusual case in which one misdescribes one's own perceptual states.

I was imagining that one was given the task of carefully describing the surface of a table and giving a very attentive description full of detail of the whorls here and the color there. One then discovers that all along one has just been a brain in a vat being fed experiences. At that point, it would be very natural to conclude that one had been merely describing the visual images that one had enjoyed as opposed to any table. Since one can so easily retreat from saying that one had been describing a table to saying that one had been describing one's mental image of a table, it's hard to see how one could be much better at the former than at the latter.

Roger White then made the same point without using the brain in a vat scenario.

I do feel some sympathy for the thought that you get something right in such a case -- but what exactly you get right, and how dependably... well, that's the tricky issue!

Friday, November 15, 2013

Skepticism, Godzilla, and the Artificial Computerized Many-Branching You

Nick Bostrom has argued that we might be sims. A technologically advanced society might use hugely powerful computers, he says, to run "ancestor simulations" containing actually conscious people who think they are living, say, on Earth in the early 21st century but who in fact live entirely inside an advanced computational system. David Chalmers has considered a similar possibility in his well-known commentary on the movie The Matrix.

Neither Bostrom nor Chalmers is inclined to draw skeptical conclusions from this possibility. If we are living in a giant sim, they suggest, that sim is simply our reality: All the people we know still exist (they're sims just like us) and the objects we interact with still exist (fundamentally constructed from computational resources, but still predictable, manipulable, interactive with other such objects, and experienced by us in all their sensory glory). However, it seems quite possible to me that if we are living in a sim, it might well be a small sim -- one run by a child, say, for entertainment. We might live for three hours' time on a game clock, existing mainly as citizens who will give entertaining reactions when, to their surprise, Godzilla tromps through. Or it might be just me and my computer and my room, in an hour-long sim run by a scientist interested in human cognition about philosophical problems.

Bostrom has responded that to really evaluate the case we need a better sense of what are more likely vs. less likely simulation scenarios. One large-sim-friendly thought is this: Maybe the most efficient way to create simulated people is to evolve up a large scale society over a long period of (sim-clock) time. Another is this: Maybe we should expect a technologically advanced society capable of running sims to have enforceable ethical standards against running small sims that contain actually conscious people.

However, I don't see compelling reason to accept such (relatively) comfortable thoughts. Consider the possibility I will call the Many-Branching Sim.

Suppose it turns out the best way to create actually conscious simulated people is to run a whole simulated universe forward billions of years (sim-years on the simulation clock) from a Big Bang, or millions of years on an Earth plus stars, or thousands of years from the formation of human agriculture -- a large-sim scenario. And suppose that some group of researchers actually does this. Consider, now, a second group of researchers who also want to host a society of simulated people. It seems they have a choice: Either they could run a new sim from the ground up, starting at the beginning and clocking forward, or they could take a snapshot of one stage of the first group's sim and make a copy. Which would be more efficient? It's not clear: It depends on how easy it is to take and store a snapshot and implement it on another device. But on the face of it, I don't see why we ought to suppose that copying would take more time or more computational resources than evolving a sim from the ground up.

Consider the 21st century game Sim City. If you want a bustling metropolis, you can either grow one from scratch or you can use one of the many copies created by the programmers or users. Or you could grow one from scratch and then save stages of it on your computer, shutting the thing down when things don't go the way you like and starting again from a save point; or you could make copied variants of the same city that grow in different directions.

The Many-Branching Sim scenario is the possibility that there is a root sim that is large and stable, starting from some point in the deep past, and then this root sim was copied into one or more branch sims that start from a save point. If there are many branch sims, it might be that I am in one of them, rather than in a root sim or a non-branching sim. Maybe one company made the root sim for Earth, took a snapshot in November 2013 on the sim clock, then sold thousands or millions of copies to researchers and computer gamers who now run short-term branch sims for whatever purposes they might have. In such a scenario, the future of the branch sim in which I am living might be rather short -- a few minutes or hours or years. The past might be conceptualized either as short or as long, depending on whether the past in the root sim counts as "this world's" past.

Issues of personal identity arise. If the snapshot of the root sim was taken at root sim clock time November 1, 2013, then the root sim contains an "Eric Schwitzgebel" who was 45 years old at the time. The branch sims would also contain many other "Eric Schwitzgebels" developing forward from that point, of which I would be one. How should I think of my relationship to those other Erics? Should I take comfort in the fact that some of them will continue on to full and interesting lives (perhaps of very different sorts) even if most of them, including probably this particular instantiation of me, now in a hotel in New York City, will soon be stopped and deleted? Or to the extent I am interested in my own future rather than merely the future of people similar to me, should I be concerned primarily about what is happening in this particular branch sim? As Godzilla steps down on me, shall I try to take comfort in the possibility that the kid running the show will delete this copy of the sim after he has enjoyed viewing the rampage, then restart from a save point with New York intact? Or would deleting this branch be the destruction of my whole world?

Friday, November 08, 2013

Expert Disagreement as a Reason for Doubt about the Metaphysics of Mind (Or: David Chalmers Exists, Therefore You Don't Know)

Probably you have some opinions about the relative merit of different metaphysical positions about the mind, such as materialism vs. dualism vs. idealism vs. alternatives that reject all three options or seek to compromise among them. Of course, no matter what your position is, there are philosophers who will disagree with you -- philosophers whom you might normally regard as your intellectual peers or even your intellectual superiors in such matters – people, that is, who would seem to be at least as well-informed and intellectually capable as you are. What should you make of that fact?

Normally, when experts disagree about some proposition, doubt about that proposition is the most reasonable response. Not always, though! Plausibly, one might disregard a group of experts if those experts are: (1.) a tiny minority; (2.) plainly much more biased than the remaining experts; (3.) much less well-informed or intelligent than the remaining experts; or (4.) committed to a view that is so obviously undeserving of credence that we can justifiably disregard anyone who espouses it. None of these four conditions seems to apply to dissent within the metaphysics of mind. (Maybe we could exclude a few minority positions for such reasons, but that will hardly resolve the issue.)

Thomas Kelly (2005) has argued that you may disregard peer dissent when you have “thoroughly scrutinized the available evidence and arguments” on which your disagreeing peer’s judgment is based. But we cannot disregard peer disagreement in philosophy of mind on the grounds that this condition is met. The condition is not met! No philosopher has thoroughly scrutinized the evidence and arguments on which all of her disagreeing peers’ views are based. The field is too large. Some philosophers are more expert on the literature on a priori metaphysics, others on arguments in the history of philosophy, others on empirical issues; and these broad literatures further divide into subliteratures and sub-subliteratures with which philosophers are differently acquainted. You might be quite well informed overall. You’ve read Jackson’s (1986) Mary argument, for example, and some of the responses to it. You have an opinion. Maybe you have a favorite objection. But unless you are a serious Mary-ologist, you won’t have read all of the objections to that argument, nor all the arguments offered against taking your favorite objection seriously. You will have epistemic peers and probably epistemic superiors whose views are based on arguments which you have not even briefly examined, much less thoroughly scrutinized.

Furthermore, epistemic peers, though overall similar in intellectual capacity, tend to differ in the exact profile of virtues they possess. Consequently, even assessing exactly the same evidence and arguments, convergence or divergence with one’s peers should still be epistemically relevant if the evidence and arguments are complicated enough that their thorough scrutiny challenges the upper range of human capacity across several intellectual virtues – a condition that the metaphysics of mind appears to meet. Some philosophers are more careful readers of opponents’ views, some more facile with complicated formal arguments, some more imaginative in constructing hypothetical scenarios, etc., and world-class intellectual virtue in any one of these respects can substantially improve the quality of one’s assessments of arguments in the metaphysics of mind. Every philosopher’s preferred metaphysical position is rejected by a substantial proportion of philosophers who are overall approximately as well informed and intellectually virtuous as she is, and who are also in some respects better informed and more intellectually virtuous than she is. Under these conditions, Kelly’s reasons for disregarding peer dissent do not apply, and a high degree of confidence in one’s position is epistemically unwarranted.

Adam Elga (2007) has argued that you can discount peer disagreement if you reasonably regard the fact that the seeming-peer disagrees with you as evidence that, at least on that one narrow topic, that person is not in fact a full epistemic equal. Thus, a materialist might see anti-materialist philosophers of mind, simply by virtue of their anti-materialism, as evincing less than a perfect level-headedness about the facts. This is not, I think, entirely unreasonable. But it's also fully consistent with still giving the fact of disagreement some weight as a source of doubt. And since your best philosophical opponents will exceed you in some of their intellectual virtues and will know some facts and arguments -- which they consider relevant or even decisive -- that you have not fully considered, you ought to give the fact of dissent quite substantial weight as a source of doubt.

Imagine an array of experts betting on a horse race: Some have seen some pieces of the horses’ behavior in the hours before the race, some have seen other pieces; some know some things about the horses’ performance in previous races, some know other things; some have a better eye for a horse’s mood, some have a better sense of the jockeys. You see Horse A as the most likely winner. If you learn that other experts with different, partly overlapping evidence and skill sets also favor Horse A, that should strengthen your confidence; if you learn that a substantial portion of those other experts favor B or C instead, that should lessen your confidence. This is so even if you don’t see all the experts quite as peers, and even if you treat an expert’s preference for B or C as grounds to wonder about her good judgment.
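To see the direction of the point in toy form, here is a minimal numerical sketch (my own illustration, not anything in the original argument, and far cruder than real peer disagreement): treat each expert as an independent, modestly reliable signal about the winner and update by Bayes. The reliability figure and the independence assumption are invented placeholders; only the direction of the update matters.

    # Toy model of the horse-race analogy (illustrative assumption, not from the post):
    # each expert is treated as an independent signal that tracks the actual winner
    # with some fixed reliability, and credence is updated by Bayes' rule.

    def credence_after_experts(prior, reliability, favor_a, favor_other):
        """Posterior credence that Horse A wins, given counts of expert verdicts.

        prior       -- your credence that A wins before hearing the other experts
        reliability -- assumed chance that an expert's verdict tracks the winner
        favor_a     -- number of experts favoring Horse A
        favor_other -- number of experts favoring some other horse
        """
        likelihood_if_a_wins = reliability ** favor_a * (1 - reliability) ** favor_other
        likelihood_if_a_loses = (1 - reliability) ** favor_a * reliability ** favor_other
        numerator = prior * likelihood_if_a_wins
        return numerator / (numerator + (1 - prior) * likelihood_if_a_loses)

    print(credence_after_experts(0.6, 0.65, favor_a=4, favor_other=1))  # agreement: credence rises
    print(credence_after_experts(0.6, 0.65, favor_a=1, favor_other=4))  # dissent: credence falls

With a prior of 0.6 and an assumed expert reliability of 0.65, four of five experts favoring Horse A pushes the credence above 0.9, while only one of five favoring A pulls it below 0.2. Metaphysical disagreement is not nearly so tidy, but the qualitative pull of widespread dissent is what the argument here is trading on.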

Try this thought experiment. You are shut in a seminar room, required to defend your favorite metaphysics of mind for six hours (or six days, if you prefer) against the objections of Ned Block, David Chalmers, Daniel Dennett, and Saul Kripke. Just in case we aren’t now living in the golden age of metaphysics of mind, let’s add Kant, Leibniz, Hume, Zhu Xi, and Aristotle too. (First we’ll catch them up on recent developments.) If you don’t imagine yourself emerging triumphant, then you might want to acknowledge that the grounds for your favorite position might not really be very compelling.

It is entirely possible to combine appropriate intellectual modesty with enthusiasm for a preferred view. Consider everyone’s favorite philosophy student: She vigorously champions her opinions, while at the same time being intellectually open and acknowledging the doubt that appropriately flows from her awareness that others think otherwise, including others who are in some ways better informed and more capable than she is. Even the best professional philosophers still are such students, or should aspire to be, only in a larger classroom. So pick a favorite view! Distribute your credences differentially among the options. Suspect the most awesome philosophers of poor metaphysical judgment. But also: Acknowledge that you don't really know.

[For more on disagreement in philosophy see here and here. This post is adapted from my paper in draft The Crazyist Metaphysics of Mind.]

Friday, November 01, 2013

Striking Confirmation of the Spelunker Illusion

In 2010, I worked up a post on what I dubbed The Spelunker Illusion (see also the last endnote of my 2011 book). Now, hot off the press at Psychological Science, Kevin Dieter and colleagues offer empirical confirmation.

The Spelunker Illusion, well-known among cave explorers, is this: In absolute darkness, you wave your hand before your eyes. Many people report seeing the motion of the hand, despite the absolute darkness. If a friend waves her hand in front of your face, you don't see it.

I see three possible explanations:

(1.) The brain's motor output and your own proprioceptive input create hints of visual experience of hand motion.

(2.) Since you know you are moving your hand, you interpret low-level sensory noise in conformity with your knowledge that your hand is in such-and-such a place, moving in such-and-such a way, much as you might see a meaningful shape in a random splash of line segments.

(3.) There is no visual experience of motion at all, but you mistakenly think there is such experience because you expect there to be. (Yes, I think you can be radically wrong about your own stream of sensory experience.)

Dieter and colleagues had participants wave their hands in front of their faces while blindfolded. About a third reported seeing motion. (None reported seeing motion when the experimenter waved his hand before the participants.) Dieter and colleagues add two interesting twists. One is a condition in which participants wave a cardboard silhouette of a hand rather than the hand itself; under this condition the effect remains, almost as strong as when the hand itself is waved. The other is that they track participants' eye movements.

Eye movements tend to be jerky, jumping around the scene. One exception to this, however, is smooth pursuit, when one stabilizes one's gaze on a moving object. This is not under voluntary control: Without an object to track, most people cannot move their eyes smoothly even if they try. In 1997, Katsumi Watanabe and Shinsuke Shimojo found that although people had trouble smoothly moving their eyes in total darkness, they could do so if they were trying to track their ("invisible") hand motion in darkness. Dieter and colleagues confirmed smooth hand-tracking in blindfolded participants and, strikingly, found that participants who reported sensations of hand motion were able to move their eyes much more smoothly than those who reported no sensations of motion.

I'm a big fan of corroborating subjective reports about consciousness with behavioral measures that are difficult to fake, so I love this eye-tracking measure. I believe that it speaks pretty clearly against hypothesis (3) above.

Dieter and colleagues embrace hypothesis (1): Participants have actual visual experience of their hands, caused by some combination of proprioceptive inputs and efferent copies of their motor outputs. However, it's not clear to me that we should exclude hypothesis (2). And (1) and (2) are, I think, different. People's experience in darkness is not merely blank or pure black, but contains a certain amount (perhaps a lot) of noise. Hypothesis (2) is that the effect arises "top down", as it were, from one's high-level knowledge of the position of one's hand. This top-down knowledge then allows you to experience that noisy buzz as containing motion -- perhaps changing the buzz itself, or perhaps not. (As long as one can find a few pieces of motion in the noise to string together, one might even fairly smoothly track that motion with one's eyes.)

Here's one way to start to pull (1) apart from (2): Have someone else move your hand in front of your face, so that your hand motion is passive. Although this won't eliminate proprioceptive knowledge of one's hand position, it should eliminate the cues from motor output. If efferent copies of motor output drive the Spelunker Illusion, then the Spelunker Illusion should disappear in this condition.

Another possibility: Familiarize participants with a swinging pendulum synchronized with a sound, then suddenly darken the room. If hypothesis (2) is correct and the sound is suggestive enough of the pendulum's exact position, perhaps participants will report still visually experiencing that motion.

Update, April 28, 2014:

Leonard Brosgole and Miguel Roig point out to me that these phenomena were reported in the psychological literature in Hofstetter 1970, Brosgole and Neylon 1973, and Brosgole and Roig 1983. If you're aware of earlier sources, I'd be curious to know.

Tuesday, October 29, 2013

Being Two People at Once, with the Help of Linda Nagata

In the world of Linda Nagata's Nanotech Succession, you can be two people at once. And whether you are in fact two people at once, I'd suggest, depends on the attitude each part takes toward the splitting-fusing process.

"Two people at once" isn't how Nagata puts it. In her terminology, one being, the original person, continues in standard embodied form, while another being, a "ghost" -- inhabits some other location, typically someone else's "atrium". Suppose you want to have an intimate conversation long-distance. In Nagata's world, you can do it like this: Create a duplicate of your entire psychology (memories, personality traits, etc. -- for the sake of argument, let's allow that this can be done) and transfer that information to someone else. The recipient then implements your psychology in a dedicated processing space, her atrium. At the same time, your physical appearance is overlaid upon the recipient's sensory inputs. To her (though to no one else around) it will look like you are in the room. The person hosting you in her atrium will then interact with you, for example by saying "Hi, long time no see!" Her speech will be received as inputs to the virtual ghost-you in her atrium, and this ghost-you will react in just the same way you would react, for example by saying "You haven't aged a bit!" and stepping forward for a hug. Your host will then experience that speech overlaid on her auditory inputs, your bodily movement overlaid on her visual inputs, and the warmth of your hug overlaid on her tactile inputs. She will react accordingly, and so forth.

The ghost in the atrium will, of course, consciously experience all this (no Searlean skepticism about conscious AI here). When the conversation is over, the atrium will be emptied and the full memory of these experiences will be messaged back to the original you. The original you -- which meanwhile has been having its own stream of experiences -- will accept the received memories as autobiographical. The newly re-merged you, on Earth, will remember that conversation you had on Mars, which occurred on the same day you were also busy doing lots of other things on Earth.

If you know the personal identity literature in philosophy, you might think of instantiating the ghost as a "fission" case -- a case in which one person splits into two different people, similar to the case in which each hemisphere of your brain is transplanted separately into a different body, or the case of stepping into a transporter on Earth and having copies of you emerge simultaneously on Mars and Venus to go their separate ways ever after. Philosophers usually suppose that such fissions produce two distinct identities.

The Nagata case is different. You fission, and both of the resulting fission products know they are going to merge back together again; and then once they do merge, both strands of the history are regarded equally as part of your autobiography. The merged entity regards itself as responsible for the actions of the split-off ghost: it can be embarrassed by its gaffes, held to its promises, and prosecuted for its crimes, and it will act out the ghost's decisions without needing to rethink them.

Contrast assimilation into the Borg of the Star Trek universe. The Borg, a large group entity, absorbs the memories of various assimilated beings (like individual human beings). But the Borg treats the personal history of the assimilated being non-autobiographically -- for example without accepting responsibility for the assimilated entity's past actions and plans.

What makes the difference between an identity-preserving fission-and-merge and an identity-breaking fission-and-merge is, I propose, the entities' implicit and explicit attitudes about the merge. If pre-fission I think "I am going to be Eric Schwitzgebel, in two places", and then in the fissioned state I think "I am here but another copy of me is also running elsewhere", and then after fusion I think "Both of those Eric Schwitzgebels are equally part of my own past" -- and if I also implicitly accept all this, e.g., by not feeling compelled to rethink one Eric Schwitzgebel's decisions more than the other's -- and perhaps especially if the rest of society shares my view of these matters, then I have been one entity in two places.

To see that this is really about the content of the relevant attitudes and not about, say, the kind of continuity of memory, values, and personality usually emphasized in psychological approaches to personal identity, consider what would happen if I had a very different attitude toward ghosts. If I saw the ghost as a mere slave distinct from me, then during the split my ghost might be thinking "damn, I'm only a ghost and my life will expire at the end of this conversation"; and after the merge, I'll tend to think of my ghost's behaviors as not really having been my own, despite my memories of those behaviors from a first-person point of view. The ghost will not bother making decisions or promises intended to bind me, knowing that I would not accept them as my own if he did. And I'll be embarrassed by the ghost's behavior not in the same way I would be embarrassed by my own behavior but instead in something like the way I would be embarrassed by a child's or employee's behavior -- especially, perhaps, if the ghost does something that I wouldn't have done in light of its knowledge that, being merely a ghost, it would imminently die. The metaphysics of identity will thus turn upon the participant beings' attitudes about what preserves identity.

Tuesday, October 22, 2013

On the Intrinsic Value of Moral Reflection

Here's a hypothetical, not too far removed from reality: What if I discovered, to my satisfaction, that moral reflection -- the kind of intellectual thinking about ethical issues that is near the center of moral philosophy -- tended to lead people toward less true (or, if you prefer, more noxious) moral views than they started with? And what if, because of that, it tended also to lead people toward somewhat worse moral behavior overall? And suppose I saw no reason to think myself likely to be an exception to that tendency. Should I abandon moral reflection?

What is the point of moral reflection?

If the point is to discover what is really morally the case -- well, there's reason to doubt that philosophical styles of moral reflection are highly effective at achieving that goal. Philosophers' moral theories are often simplistic, problematic, totalizing -- too rigid in some places, too flexible in others, recruitable for clever justifications of noxious behavior, from sexual harassment to Nazism to sadistic parenting choices. Uncle Irv, who never read Kant or Mill and has little patience for the sorts of intellectual exercises we philosophers love, might have much better moral knowledge than most philosophers; and you and I might have had better moral knowledge than we do, had we shared his skepticism about philosophy.

If the point of philosophical moral reflection is to transform oneself into a morally better person -- well, there are reasons to doubt it has that effect, too.

But I would not give it up. I would not give it up, even at some moderate cost to my moral knowledge and moral behavior. Uncle Irv is missing something. And a world of Uncle Irvs would be a world vastly worse than this world, in a way I care about -- much as, perhaps, a world without metaphysical speculation would be worse than this world, even if metaphysical speculation is mostly bunk, or a world without bad art would be worse than this world or a world of a hundred billion contented cows would be worse than this world.

If I think about what I want in a world, I want people struggling to think through morality, even if they mostly fail -- even if that struggle rather more often brings them down than up.

Tuesday, October 15, 2013

An Argument That the Ideal Jerk Must Remain Ignorant of His Jerkitude

As you might know, I'm working on a theory of jerks. Here's the central idea in a nutshell:

The jerk is someone who culpably fails to respect the perspectives of other people around him, treating them as tools to be manipulated or idiots to be dealt with, rather than as moral and epistemic peers.

The characteristic phenomenology of the jerk is "I'm important and I'm surrounded by idiots!" To the jerk, it's a felt injustice that he must wait in the post-office line like anyone else. To the jerk, the flight attendant asking him to hang up his phone is a fool or a nobody unjustifiably interfering with his business. Students and employees are lazy complainers. Low-level staff failed to achieve meaningful careers through their own incompetence. (If the jerk himself is in a low-level position, it's either a rung on the way up or the result of injustices against him.)

My thought today is: It is partly constitutive of being a jerk that the jerk lacks moral self-knowledge of his jerkitude. Part of what it is to fail to respect the perspectives of others around you is to fail to see your dismissive attitude toward them as morally inappropriate. The person who disregards the moral and intellectual perspectives of others, if he also acutely feels the wrongness of doing so -- well, by that very token, he exhibits some non-trivial degree of respect for the perspectives of others. He is not the picture-perfect jerk.

It is possible for the picture-perfect jerk to acknowledge, in a superficial way, that he is a jerk. "So what, yeah, I'm a jerk," he might say. As long as this label carries no real sting of self-disapprobation, the jerk's moral self-ignorance remains. Maybe he thinks the world is a world of jerks and suckers and he is only claiming his own. Or maybe he superficially accepts the label "jerk", without accepting the full moral loading upon it, as a useful strategy for silencing criticism. It is exactly contrary to the nature of the jerk to sympathetically imagine moral criticism for his jerkitude, feeling shame as a result.

Not all moral vices are like this. The coward might be loath to confront her cowardice and might be motivated to self-flattering rationalization, but it is not intrinsic to cowardice that one fails fully to appreciate one’s cowardice. Similarly for intemperance, cruelty, greed, dishonesty. One can be painfully ashamed of one’s dishonesty and resolve to be more honest in the future; and this resolution might or might not affect how honest one in fact is. Resolving does not make it so. But the moment one painfully realizes one’s jerkitude, one already, in that very moment and for that very reason, deviates from the profile of the ideal jerk.

There's an interesting instability here: Genuinely worrying that you are a jerk helps to make it not so; but if you then take comfort in that fact and cease worrying, you have undermined the basis of that comfort.

Tuesday, October 08, 2013

The Nature of Desire: A Liberal, Dispositional Approach

What is it to desire something? I suggest: to desire (or want) some item or some state of affairs is just to be disposed to make certain choices, to inwardly and outwardly react in certain ways, and to make certain types of cognitive moves. It is to match, well enough, a certain inward and outward dispositional profile given by folk psychology. Compare: What is it to be an extravert? It is just to match, well enough, the dispositional profile of the extravert -- to seek out and enjoy social gatherings, to be expressive and talkative, to enjoy meeting new people. Match this stereotypical profile of the extravert well enough and you are an extravert. Nothing more to it. Similarly for desire: If you will seek out chocolate cake, if you would choose chocolate cake over other desserts, if you tingle with delight when eating it, if you say "I want chocolate cake", if the thought of getting chocolate cake captures your anticipatory attention, etc., then you like or want or desire chocolate cake. Nothing more to it.

There are two types of alternative account. One alternative approach is, shall we say, deep: To desire something, on a deep account, is to be in some particular brain state or to have some underlying representational structure in the mind (perhaps the representation "I eat chocolate cake" in the Desire Box). The problem with such deep accounts is, I believe, that they don't get to the metaphysical root.

Consider an alien case. Suppose some Deep Structure D is necessary for wanting chocolate cake, on some deep account of desire. Unless that structure is more or less tantamount to possessing the dispositional profile constitutive (on my account) of wanting chocolate cake, it should be metaphysically possible for an alien species that lacks Deep Structure D to act and react, inwardly and outwardly, in every respect as though it wanted chocolate cake. In such a case, I would suggest, both ordinary common sense and good philosophy advise ascribing the desire for chocolate cake to such hypothetical aliens, despite their lacking whatever Deep Structure D is necessary in the human case.

Alternatively, suppose some Deep Structure E is held to be sufficient for wanting chocolate cake. It seems that we could construct, at least hypothetically, a possible case in which Deep Structure E is present but the person in no way acts or reacts, inwardly or outwardly, like someone who wants chocolate cake: She wouldn't seek it, she wouldn't enjoy eating it, the anticipation of eating it would give her no pleasure, she would give it no weight in her plans, etc. It seems that we should say, in such cases, that the person does not desire chocolate cake. In ascribing desire or its lack, what we care about, both as ordinary folks and as philosophers, is how the person would act and react across a wide variety of possible circumstances. It is only contingently important what underlying mechanisms implement that pattern of action and reaction.

A second type of alternative approach is, like my own approach, superficial rather than deep, but unlike my approach it is narrow. What matters, on such accounts, is just some sub-portion of the pattern that matters on my approach. Maybe what is essential is that the person would choose the cake if given the chance, and not whether the person thinks she wants it or would feel anticipation when about to get it or would enjoy eating it. Or maybe what is essential is that the person judges that it would be good to get cake, and all the rest is incidental. Or maybe the essence is that receiving chocolate cake would be rewarding to that person. Or.... (See Tim Schroeder's SEP entry on Desire for a review of various narrow accounts, which Schroeder contrasts with holistic accounts like my own.)

The problem with narrow accounts is that it's hard to see a good justification for picking out just one feature of the profile as the essential bit. Desire is more usefully regarded as a syndrome of lots of things that tend to go together -- like extraversion is a syndrome, or like being happy-go-lucky is a syndrome. We can be liberal about what goes into the profile. It can be a cluster concept; aspects of the syndrome might be more or less central or important to the picture, but there need be no one essential piece that is strictly necessary or sufficient.
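To make the cluster-concept idea concrete, here is a minimal sketch in Python (my own toy illustration; nothing in the dispositional view requires it, and the particular dispositions and weights are invented placeholders): a desire ascription is treated as a degree of match to a weighted profile of dispositions, with no single entry strictly necessary or sufficient.

    # Toy sketch of a liberal, dispositional desire ascription (illustrative only).
    # The profile entries and weights below are invented; the point is just that
    # matching a folk-psychological profile can come in degrees.

    CAKE_DESIRE_PROFILE = {
        "would choose cake over other desserts": 0.25,
        "seeks out cake when available": 0.20,
        "enjoys eating cake": 0.20,
        "feels anticipatory pleasure at the prospect": 0.15,
        "self-ascribes wanting cake": 0.10,
        "gives cake weight in planning": 0.10,
    }

    def desire_match(profile, dispositions):
        """Return a 0-1 degree of match between a person's dispositions and a profile.

        dispositions -- dict mapping profile entries to a 0-1 strength for this person
        """
        return sum(weight * dispositions.get(feature, 0.0)
                   for feature, weight in profile.items())

    clear_case = {feature: 1.0 for feature in CAKE_DESIRE_PROFILE}   # robustly fits the stereotype
    messy_case = {feature: 0.5 for feature in CAKE_DESIRE_PROFILE}   # ambivalent, in-between
    print(desire_match(CAKE_DESIRE_PROFILE, clear_case))  # near the top of the scale
    print(desire_match(CAKE_DESIRE_PROFILE, messy_case))  # somewhere in the middle

A clear case matches essentially the whole profile; an in-between case, like Matthew below, matches it only partially, and the hedged ascription "kind of wants it" simply tracks that partial match.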

The flexible minimalism of a liberal, dispositional approach is, I think, nicely displayed when we consider messy, in-between cases. So let's consider one.

Matthew the envious buddy. Matthew and Rajan were pals in philosophy grad school. Ten years out, they still consider themselves close friends. They exchange friendly emails, comment warmly on each other’s Facebook posts, and seek each other out for tête-à-têtes at professional meetings. In most respects, they are typical aging grad-school best buddies. Also perhaps not atypically, one has had much more professional success than the other. Rajan was hired straight into a prestigious tenure-track position. He published a string of well-regarded articles which earned him quick tenure and, recently, promotion to full Professor. Now he is considering a prestigious job offer from another leading department. Matthew, in contrast, struggled through three temporary positions before finally landing a job at a relatively unselective state school. He has published a couple of articles and book reviews, suffered some ugly department politics, and is now facing an uncertain tenure decision. Understandably, Matthew is somewhat envious of Rajan – a fact he explicitly admits to Rajan over afternoon coffee in the conference hotel. Rajan is finishing his first book project and Matthew is halfway through reading Rajan’s draft.

Matthew, as I’m imagining him, is not generally an envious character; he has a generous spirit. The well-wishes he utters to Rajan are sincerely felt at the time of utterance, not a sham. Picturing Rajan as the next David Lewis makes Matthew smile and chuckle with a good-natured shake of the head. There would be something truly cool about that, Matthew thinks – though the fact that he explicitly thinks that thought in that particular way already reveals a kind of ambivalence. Matthew intends to give Rajan his best advice about book revisions. He plans to recommend the book warmly to influential people he knows, including the program chair of the Pacific Division APA. At the same time, though, it’s true that were Matthew to read a devastating review of Rajan’s book, he would feel a kind of shameful pleasure, while seeing a glowing review in a top venue would bring a painful pang. In drafting his thoughts about the book, Matthew finds himself sometimes resentful of the effort, and he finds himself somewhat unhappy when he reads a particularly fresh and clever argument in the draft, wishing he had come up with that argument himself instead – though when he notices this about himself, he rebukes himself sharply. If Rajan’s book were to flop, Matthew would love commiserating; if Rajan’s book were to be a great success, that would add to the growing distance between the two friends. In some moments, Matthew admits to himself that he doesn’t really know if he wants the book to succeed or not.

We can, of course, add as much detail to this case as we want -- dispositions pointing in different directions, in whatever balance we wish.

Question: Does Matthew want Rajan's book to succeed?

The best answer, I submit, if we've built the case as I've intended, is "kind of", or "it's an intermediate, messy case". Just as someone might be an extravert in some respects and an introvert in other respects so that neither a plain ascription of "extravert" nor a plain ascription of "introvert" is quite right, so also with the question of whether Matthew wants Rajan's book to succeed. A liberal, dispositional approach to desire captures this ambivalence perfectly: Matthew wants the book to succeed exactly insofar as he matches the broad syndrome and no farther. There need be no "Q" either determinately in or determinately out of his "Desire Box"; there need be no one essential feature. In ascribing a desire, we are pointing toward a folk-psychologically recognizable pattern, and people might fit that pattern very well or not well at all, deviating in different ways and to different degrees.

The implications for self-knowledge of desire I leave as an exercise for the reader.

[For more on my dispositional approach to the attitudes see here.]