Thursday, February 27, 2014

I'm Glad I'm Not Real

(by Amie L. Thomasson)

There once was a fictional character. And her name was May. May was four years old; or, at least, that's what the story said. In fact, she had just been written quite recently, "and so in that sense, I am a newborn fictional character," she liked to say. Especially when she was looking for a good excuse to curl up on the rug like a teeny tiny baby, suck her thumb, and bat at things in a homemade play gym.

Well, May was a lovely little fictional character: bouncy, fun, very clever, and with an uncanny ability to eat olives. Put as many as you like in front of her, and they'd always be gone by the next page.

But still May was not happy, for although she appreciated the olives, she was lonely. "Why don't I have a mommy or daddy to take care of me?" she asked.

"But you do have a mother. I created you from words and pictures" her author said.

"That's NOT what I mean" the little girl sulked, and she slumped down right in the corner of the page, folding one edge over to hide herself.

"Oh alright then," the author said. And she made her a mother. A mother who loved her more than anything in the world, who taught her to paint and to laugh at herself, who sat on the floor for hours making zoos and block houses and earthquakes to destroy them. And she made her a father. A father with constant love and gentle patience, who taught her to bake banana bread and to play piano and to name every bird in the garden. And they were happy together.

They came to live in a book. A real hardcover book, with full color pictures and shiny pages. And the book came to be on the shelves of a little girl—a four year old girl, as it happens—by the name of Natalie. The various dragons and bears who lived in tatty second hand paperbacks on lower shelves really quite envied them.

Till one day, just before bedtime, Natalie spotted the book, sticking out slightly between a board book about ducklings and something involving a circus. "What's this book?" she asked. She had never seen it before. "I want to read it now! Can't we read it pleeeaaase?" she asked. "Well, it's a bit late," her mommy said, "but I guess we could read just this one." And they all plumped down on Natalie's fluffy red comforter, and her daddy began to read.

As they closed the book for the night, Natalie's mommy said, "Well, I'm glad little May got some parents and isn't lonely anymore." "But mooommmy," Natalie protested, "she's not REAL!" "Oh, yeah," admitted mommy, closing the book gently and turning out the light.

"I'm glad I'm real and not just in a book." said Natalie quietly as she curled up with her blanket.

"So am I, sweetheart," her mommy agreed as she kissed her soft cheek goodnight.

Well, once the book was closed, little May began to cry. "What does she MEAN I'm not real?" asked May, who, like most children, had forgotten those muddled early days after she was first made, those days when she was lonely. Well, her mother explained, we are just characters in a book. We do what our author writes, there's no more to us than she's given us, and we stay in the world of these pages.

But I want to get out of here! May protested. I want to be really REAL. I want to have toes (for these had never been seen in the pages). I want to know what happened when I was just two (for this had never been spoken of). And I want to go wherever I want to go, not where some author puts me! She railed. And she wept and she struggled and she stewed. Her mother cried a bit too, to see her daughter realizing these sad truths, but her daddy just held her hand.

You know, he said, since we're not real, we'll never get sick (see: no sickness is ever mentioned). We'll never bump too hard off a slide. Or get bitten by mosquitoes.

And will no one ever steal the olives out of my lunch box? May wanted to know.

Nope, no one will ever steal the olives out of your lunchbox. Or your vanilla cookies either. And best of all, none of us will ever die—we can stay here together for always, loving each other in this book.

I'm glad we're not real, May decided. And she curled up in a corner of the page, sucking her thumb quietly, and went to sleep.

-----------------------------------------

Extract from "I'm glad I'm not real" by Amie L. Thomasson, from The Philosophy Shop: Ideas, activities and questions to get people, young and old, thinking philosophically. Edited by Peter Worley (c)Peter Worley 2012. ISBN 9781781350492

Wednesday, February 19, 2014

A Case for Pessimism about General Theories of Consciousness

Recently, I've been arguing (e.g., here and here) that we should be skeptical about any general theory of consciousness, whether philosophical or scientific. Here's one way of putting the case.

When we advance a general theory of consciousness, we must do so on some combination of empirical grounds (especially the study of actual conscious systems here on Earth) and armchair reflection (especially thought experiments).

Our empirical grounds are very limited: We have only seen conscious systems as they exist on Earth. To draw, on these grounds, universal conclusions about how any conscious system must be structured would be a reckless leap, if our theories are really supposed to be driven by empirical tests on conscious animals -- tests that could have come out either way. Who knows how those crucial theory-driving experiments would have come out on very differently constructed beings from the Andromeda Galaxy?

A truly universal theory of consciousness seems more likely to succeed if it draws broadly from a range of hypothetical cases, abstracting away from empirical details of implementation on Earth. So we must sit in our armchairs. However, armchair reflection about the consciousness of hypothetical beings has two huge shortcomings:

  1. Experts in the field reach very different conclusions when asked to reflect on what sorts of hypothetical beings would be conscious (all the way from panpsychism to views that require highly sophisticated cognitive abilities).
  2. Our judgments about such cases must be grounded in some sort of prior knowledge, such as our experience of beings here on Earth and our developmentally and socially and evolutionarily favored beliefs. And there seems little reason to trust such judgments outside the run of normal cases, for example, about the consciousness or not of large group entities under various conditions.

If you are moved by these concerns, you might think that the appropriate response is to restrict our theory to consciousness as it appears on Earth. But even just thinking about consciousness on Earth drops us into a huge methodological dilemma. If we treat introspective reportability as something close to a necessary condition for consciousness, then we end up with a very sparse view of the distribution of consciousness on Earth. And maybe that's right! But it also seems reasonable to think that consciousness might be possible without introspective reportability, e.g., in dogs and babies. And then it becomes extremely unclear how we determine whether it is present without begging big theoretical questions. How could we possibly determine whether an ant is conscious without begging the question against people with very different views than our own?

Could we forget about non-human animals and babies and restrict our (increasingly less general) theory of consciousness just to adult humans? Even here, I incline toward pessimism, at least for the medium-term future.

One reason is this: I see no near-term way to resolve the question of whether consciousness abundantly outruns attention. I think I can imagine two very different possibilities here. One possibility is that I have constant tactile experience of my feet in my shoes, constant auditory experience of the hum of the refrigerator, etc., but when I'm not attending to such matters, that experience drops out of memory so quickly and is so lightly processed that it is unreportable. Another possibility is that I usually have no tactile experience whatsoever of my feet in my shoes or auditory experience of the hum of the fridge unless these things capture my attention for some reason. These possibilities seem substantively distinct, and it's easy to see how a proponent of one can create a methodological error theory to explain away the judgments of a proponent of the other.

Now maybe there's a way around these problems. Scientists have often found ingenious ways to embarrass earlier naysayers! But still, there's such a huge spread between the best neuroscientific approaches (e.g., Tononi and Dehaene) and such a huge spread between the best philosophical approaches (e.g., Chalmers and Dennett), that it's hard for me to envision a well-justified consensus emerging in my philosophical lifetime.

[HT Scott Bakker, who has been pushing me on these issues.]
[image source]

Friday, February 14, 2014

Might I Be a Cosmic Freak?

A "freak observer" or "Boltzmann brain" is a conscious being who did not arise in the normal way on a large, stable planet, but who instead congealed by freak chance out of chaos, due to a low-probability quantum or thermodynamic fluctuation -- a conscious being with rich seemingly sensory experience, rich seeming-memories, and capable of sophisticated thoughts or seeming-thoughts about itself and its position in the universe. By hypothesis, such a being is massively deluded about its past. And since random fluctuations are much likelier to create a relatively small system than a relatively large system, and since a relatively small system (such as a bare brain) amid chaos is doomed to a short existence, most freak observers will swiftly perish.

If certain cosmological theories are true, then almost all conscious systems are freak observers of this sort. Here's one such theory: There is exactly one universe which began with a unique Bang, which contains a finite number of ordinary non-freak observers, and which will eventually become thin chaos, enduring infinitely thereafter in a disorganized state. In any spacetime region there is a minuscule but finite chance of the spontaneous freak formation of any finite organized system, with smaller and less organized systems vastly more likely than larger and more organized systems. Given infinite time, the number of spontaneously formed freak observers will eventually vastly outnumber the normal observers. Whatever specific experiences and evidence I take myself now to have, according to this theory, to any finite degree of precision, there will be an infinite number of randomly generated Eric Schwitzgebel clones who have the same experiences and apparent evidence.

Can I prove that I am not a freak observer by counting "1, 2, 3, still here"? Seemingly no, for two reasons: (1.) By the time I reach "still here" I am relying on my memory of the "1, 2, 3", and the theory says that there will be an infinite number of freak observers with exactly that false memory. (2.) Even if I assume knowledge of my continued existence for three seconds, there will be an infinite number of somewhat larger freak observers who congealed simultaneously with a large enough hunk of environment to exist for three seconds, doing that apparent count. If I am such a one, I will very likely perish soon, but it is not guaranteed that I will perish, and if I don't perish and thus conclude that I am not a freak, I have ignored the overwhelming base rate of freaks to normal observers.
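To see how the base-rate worry in (2) plays out, here's a toy Bayesian calculation in Python -- all the numbers are invented for illustration, since the theory at issue really involves infinite ratios:

```python
# Toy Bayes update for the "1, 2, 3, still here" test. All numbers invented;
# the real theory involves infinitely many freaks, which only sharpens the point.
prior_freak = 0.999            # assumed: freaks vastly outnumber normal observers
p_survive_given_freak = 0.001  # assumed: only rare, larger freaks last three seconds
p_survive_given_normal = 1.0   # normal observers survive the count

posterior_freak = (prior_freak * p_survive_given_freak) / (
    prior_freak * p_survive_given_freak
    + (1 - prior_freak) * p_survive_given_normal
)
print(f"P(freak | still here) = {posterior_freak:.2f}")  # about 0.50
```

Even after surviving the count, the posterior remains substantial; and the more lopsided the assumed ratio of freaks to normal observers, the closer it stays to 1.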

Suppose that given the physical evidence such a cosmology seems plausible, or some other cosmology in which freak observers vastly outnumber normal observers. Should I conclude I am probably a freak observer? It would be a strange conclusion to draw!

One interesting argument against this conclusion is the cognitive instability argument (Carroll 2010; Davenport & Olum 2010; Crawford 2013): Suppose that my grounds for believing that I am a freak observer are Physical Theory X, which I accept only conditionally upon believing that I have good empirical evidence for Physical Theory X. If I am a freak observer, then, contrary to the initial assumption, I do not have good empirical evidence for Physical Theory X. I have not, for example, despite my contrary impression, actually read any articles about X. If I seem to have good empirical evidence for Physical Theory X, I know already that that evidence is almost certainly misleading or wrongly interpreted -- either I do have the properly-caused body of evidence that I think I have, that is, I am not a freak, and that evidence is misleadingly pointing me to the wrong conclusion about my situation; or I am a freak and I don't have such a body of properly-caused evidence at all.

For this reason, I think it would be irrational to accept a cosmological theory that implies that almost all observers are freak observers and then conclude that therefore I am also a freak observer.

But a lower-confidence conclusion seems to be more cognitively stable. Suppose our best cosmological theory implies that 1% of observers are freaks. I might then accept that there is a non-trivial chance that I am one of the freaks. After all, my best understanding of the universe implies that there are such freaks, and I see no compelling reason to suppose that I couldn't be one of them.

Alternatively, maybe my best evidence should leave me undecided among lots of cosmologies, in some of which I'm a freak and in others of which I'm not. The possibility that I'm a freak undercuts my confidence in the evidence I seem to have for any specific cosmology, but that only adds to my indecision among the possibilities; it doesn't seem to compel elimination of the possibility that I am a freak.

Here's another way to think about it: As I sit here in my office, or seem to, and think about the scope of the cosmos, I find myself inclined to ascribe a non-trivial credence to some sort of very large or infinite cosmology, and also a non-trivial credence to the hypothesis that given enough time freak observers will spontaneously form, and also a non-trivial credence to the possibility that the freaks aren't vastly outnumbered by the normal observers. If I accept this conjunction of views, then it seems to me that I should also assign a bit of credence to the possibility that I am one of the freaks. To do otherwise would seem to commit me to near certainty on some proposition, such as about the relative nucleation rates of freaks vs. environments containing normal observers, that I wouldn't normally think of as something I know with near certainty.

Or maybe I should just take it as an absolutely certain "framework" assumption that I do have the kind of past I think I have, regardless of how many Eric-Schwitzgebelesque freaks the cosmos may contain? I can see how that might be a reasonable stance. But that approach has a dogmatic air that I find foreign.

If I allow that I'm not absolutely 100.0000000000000000000000000000% certain that I'm not a spontaneously formed freak observer, what sort of credence should I assign to the possibility that I am a freak? One in a million? One in ten trillion? One in 10^100? I would like to go low! But I'm not sure that it's reasonable for me to go so low, once the possibility occurs to me and I start to consider my reasons pro and con. I'm inclined to think it is vastly less likely that I am a freak observer than that this ticket will win the one-in-ten-million Lotto jackpot -- but given the dubiety of cosmological theories and my inability to really assess them, should I perhaps be considerably less confident than that about my non-freakish position in the cosmos?

Thursday, February 06, 2014

Knowledge Without Belief and the Knowledge Norm of Assertion

Philosophers sometimes say that knowledge is a norm of assertion -- that a person should assert only what she knows. Since knowing some proposition P is usually taken to imply believing that same proposition P, commitment to a knowledge norm of assertion is generally thought to imply commitment to a belief norm of assertion: A person should assert only what she believes.

What happens, however, if one accepts, as I do, that knowledge that P does not imply belief that P? Can the belief norm be violated as long as the knowledge norm is satisfied? Bracketing, if we can, pragmatic and contextualist concerns (which I normally take quite seriously), is it acceptable to assert something one knows but does not believe?

I'm inclined to think it is.

Consider my favorite case of knowledge without belief (or at least without full, determinate belief), the prejudiced professor case:

Juliet, let’s suppose, is a philosophy professor who racially identifies as white. She has critically examined the literature on racial differences in intelligence, and she finds the case for racial equality compelling. She is prepared to argue coherently, sincerely, and vehemently for equality of intelligence and has argued the point repeatedly in the past. And yet Juliet is systematically racist in most of her spontaneous reactions, her unguarded behavior, and her judgments about particular cases. When she gazes out on class the first day of each term, she can’t help but think that some students look brighter than others – and to her, the black students never look bright. When a black student makes an insightful comment or submits an excellent essay, she feels more surprise than she would were a white or Asian student to do so, even though her black students make insightful comments and submit excellent essays at the same rate as do the others. And so on.
I am inclined to say, in such a case (assuming the details are fleshed out in plausible ways), that Juliet knows that all the races are equally intelligent, but her belief state is muddy and in-betweenish; it's not determinately correct to say that she believes it. Such in-between cases of belief require a more nuanced treatment than is permitted by straightforward ascription or denial of the belief. (I often analogize here to in-betweenish cases of having a personality trait like courage or extraversion.) Juliet determinately knows but does not determinately believe.

You might not accept this description of the case. My view about it is distinctly in the philosophical minority. However, suppose you grant my description. Is Juliet justified in asserting that all the races are equally intelligent, despite her not determinately believing that to be the case?

I'm inclined to think so. She has the evidence, she's taken her stand, she does not err when she asserts the proposition in debate, even if she cannot bring herself quite to live in a way consistent with determinately believing it to be so. However she is inclined spontaneously to respond to the world, the egalitarian proposition reflects her best, most informed judgment. Assertability in this sense tracks knowledge better than it tracks belief. She can properly assert despite not determinately believing.


***************
Objection: "Moore's paradox" is the strangeness of saying things like "It's raining but I don't believe that it's raining". One might object to the above that I now seem to be committed to the assertability of Moore-paradoxical sentences like

(1.) All the races are intellectually equal but I don't (determinately) believe that they are.
Reply: I grant that (1) is not properly assertable in most contexts. Rather, what is properly assertable on my view is something like:
(2.) All the races are intellectually equal, but I accept Schwitzgebel's dispositional approach to belief and it is true in the terms of that theory that I do not determinately believe that all the races are intellectually equal.
The non-assertability of (1) flows from the fact that my dispositional approach to belief is not the standard conception of belief. If my view about belief were to become the standard view, then (1) would become assertable.

Tuesday, February 04, 2014

MIT Press BITS

One reason I'm a fan of MIT Press (the publisher of both of my books) is that for an academic press their prices are very low (my 2011 book is currently $14.21 at Amazon), which means that a broader range of people can afford the book than if it were published by another press. Another reason I'm a fan is that MIT has tended to be a leader in exploring new electronic media.

So it's very cool that they've chosen a chapter of my Perplexities of Consciousness for their BITS project, a new enterprise which allows people to electronically buy a portion of an MIT Press book for a low price ($2.99 in this case) and then later, if the reader wants, the whole book for 40% off list price. The chapter they've chosen is "When Your Eyes Are Closed, What Do You See?", which although it is the eighth and final chapter of my book, does not require that the reader know material from the previous chapters -- thus, a reasonable choice for a BIT.

What I'd really love to see down the road is a model where you can buy any selection of pages from a book for a nickel per page.

Thursday, January 30, 2014

An Objection to Group Consciousness Suggested by David Chalmers

For a couple of years now, I have been arguing that if materialism is true the United States probably has a stream of conscious experience over and above the conscious experiences of its citizens and residents. As it happens, very few materialist philosophers have taken the possibility seriously enough to discuss it in writing, so part of my strategy in approaching the question has been to email various prominent materialist philosophers to get a sense of whether they thought the U.S. might literally be phenomenally conscious, and if not why not.

To my surprise, about half of my respondents said they did not rule out the possibility. Two of the more interesting objections came from Fred Dretske (my undergrad advisor, now deceased) and Dan Dennett. I detail their objections and my replies in the essay in draft linked above. Although I didn't target him because he is not a materialist, [update 3:33 pm: Dave points out that I actually did target him, though it wasn't in my main batch] David Chalmers also raised an objection about a year ago in a series of emails. The objection has been niggling at me ever since (Dave's objections often have that feature), and I now address it in my updated draft.

The objection is this: The United States might lack consciousness because the complex cognitive capacities of the United States (e.g., to war and spy on its neighbors, to consume and output goods, to monitor space for threatening asteroids, to assimilate new territories, to represent itself as being in a state of economic expansion, etc.) arise largely in virtue of the complex cognitive capacities of the people composing it and only to a small extent in virtue of the functional relationships between the people composing it. Chalmers has emphasized to me that he isn't committed to this view, but I find it worth considering nonetheless, and others have pressed similar concerns.

This objection is not the objection that no conscious being could have conscious subparts (which I discuss in Section 2 of the essay and also here); nor is it the objection that the United States is the wrong type of thing to have conscious states (which I address in Sections 1 and 4). Rather, it's that what's doing the cognitive-functional heavy lifting in guiding the behavior of the U.S. are processes within people rather than the group-level organization.

To see the pull of this objection, consider an extreme example -- a two-seater homunculus. A two-seater homunculus is a being who behaves outwardly like a single intelligent entity but who instead of having a brain has two small people inside who jointly control the being's behavior, communicating with each other through very fast linguistic exchange. Plausibly, such a being has two streams of conscious experience, one for each homunculus, but no additional group-level stream for the system as a whole (unless the conditions for group-level consciousness are weak indeed). Perhaps the United States is somewhat like a two-seater homunculus?

Chalmers's objection seems to depend on something like the following principle: The complex cognitive capacities of a conscious organism (or at least the capacities in virtue of which the organism is conscious) must arise largely in virtue of the functional relationships between the subsystems composing it rather than in virtue of the capacities of its subsystems. If such a principle is to defeat U.S. consciousness, it must be the case both that

(a.) the United States has no such complex capacities that arise largely in virtue of the functional relationships between people, and

(b.) no conscious organism could have the requisite sort of complex capacities largely in virtue of the capacities of its subsystems.

Contra (a): This claim is difficult to assess, but being a strong, empirical negative existential (the U.S. has not even one such capacity), it seems a risky bet unless we can find solid empirical grounds for it.

Contra (b): This claim is even bolder. Consider a rabbit's ability to swiftly visually detect a snake. This complex cognitive capacity, presumably an important contributor to rabbit visual consciousness, might exist largely in virtue of the functional organization of the rabbit's visual subsystems, with the results of that processing then communicated to the organism as a whole, precipitating further reactions. Indeed turning (b) almost on its head, some models of human consciousness treat subsystem-driven processing as the normal case: The bulk of our cognitive work is done by subsystems, who cooperate by feeding their results into a global workspace or who compete for fame or control. So grant (a) for sake of argument: The relevant cognitive work of the United States is done largely within individual subsystems (people or groups of people) who then communicate their results across the entity as a whole, competing for fame and control via complex patterns of looping feedback. At the very abstract level of description relevant to Chalmers's expressed (but let me re-emphasize, not definitively endorsed) objection, such an organization might not be so different from the actual organization of the human mind. And it is of course much bolder to commit to the further view implied by (b), that no conscious system could possibly be organized in such a subsystem-driven way. It's hard to see what would justify such a claim.

The two-seater homunculus is strikingly different from a rabbit or human system (or even a Betelgeusian beehead) because the communication is only between two sub-entities, at a low information rate; but the U.S. is composed of about 300,000,000 sub-entities whose informational exchange is massive, so the case is not similar enough to justify transferring intuitions from the one to the other.

Thursday, January 23, 2014

New Essay in Draft: The Moral Behavior of Ethicists

... which is a recurrent topic of my research, as regular readers of this blog will know.

This new paper, co-authored with Joshua Rust, summarizes our work on the topic to date and offers a quantitative meta-analysis that supports our overall finding that professors of ethics behave neither morally better nor morally worse overall than do philosophers not specializing in ethics.

You might find it entirely unsurprising that ethicists should behave no differently than other professors. If you do find it unsurprising (Josh and I don't), you might still be interested in looking at another of Josh's and my papers, in which we think through some of the theoretical implications of this finding.

Tuesday, January 21, 2014

Stanislaw Lem's Proof that the External World Exists

Slowly catching up on science fiction classics, reading Lem's Solaris, I'm struck by how the narrator, Kris, escapes a skeptical quandary. Worried that his sensory experiences might be completely delusional, Kris concocts the following empirical test:

I instructed the satellite to give me the figure of the galactic meridians it was traversing at 22-second intervals while orbiting Solaris, and I specified an answer to five decimal points.

Then I sat and waited for the reply. Ten minutes later, it arrived. I tore off the strip of freshly printed paper and hid it in a drawer, taking care not to look at it.... Then I sat down to work out for myself the answer to the question I had posed. For an hour or more, I integrated the equations....

If the figures obtained from the satellite were simply the product of my deranged mind, they could not possibly coincide with [my hand calculations]. My brain might be unhinged, but it could not conceivably compete with the Station's giant computer and secretly perform calculations requiring several months' work. Therefore if the figures corresponded, it would follow that the Station's computer really existed, that I had really used it, and that I was not delirious (1961/1970, pp. 50-51).

Except in detail, Kris's test closely resembles an experiment Alan Moore and I have used in our attempt to empirically establish the existence of the external world (full paper in draft here).

Kris is hasty in concluding from this experiment that he must have used an actually existing computer. Kris might, for example, have been the victim of a deceiver with great computational powers, who can give him the meridians within ten minutes of his asking. And Kris would have done better, I think, to have looked at the readout before having done his own calculations. By not looking until the end, he leaves open the possibility that he delusively creates the figures supposedly from the satellite only after he has derived the correct answers himself. Assuming he can trust his memory and arithmetical abilities for at least a short duration (and if not, he's really screwed), Kris should look at the satellite's figures first, holding them steady before his mind, while he confirms by hand that the numbers make mathematical sense.
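For what it's worth, here is a toy Python version of the corrected procedure. The "satellite" is just a hypothetical stand-in function (which of course assumes the very external world at issue), so this only illustrates the structure of the test:

```python
# Toy structure of Kris's test, with the order corrected: record the external
# answer first, then verify it independently while holding it in mind.
# satellite_answer is a hypothetical stand-in for the Station's computer.

def satellite_answer(n: int) -> int:
    # Brute-force sum of squares, standing in for months of computation.
    return sum(k * k for k in range(1, n + 1))

def hand_calculation(n: int) -> int:
    # Independent check via the closed form n(n+1)(2n+1)/6.
    return n * (n + 1) * (2 * n + 1) // 6

n = 123_456
external = satellite_answer(n)   # look at the readout first
check = hand_calculation(n)      # then confirm the figures make sense
print("figures correspond:", external == check)
```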

Increasingly, I think the greatest science fiction writers are also philosophers. Exploring the limits of technological possibility inevitably involves confronting the central issues of metaphysics, epistemology, and human value.

Wednesday, January 15, 2014

Waves of Mind-Wandering in Live Performances

I'm thinking (again) about beeping people during aesthetic experiences. The idea is this. Someone is reading a story, or watching a play, or listening to music. She has been told in advance that a beep will sound at some unexpected time, and when the beep sounds, she is to immediately stop attending to the book, play, or whatever, and note what was in her stream of experience at the last undisturbed moment before the beep, as best she can tell. (See Hurlburt 2011 for extensive discussion of such "experience sampling" methods.)
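If you wanted to run such a study, generating the beep schedule is simple. Here's a minimal sketch; the session length and number of beeps are arbitrary choices of mine, not parameters from Hurlburt:

```python
import random

# Minimal beep schedule for one experience-sampling session: a few beeps at
# unpredictable moments, so listeners can't anticipate and prepare for them.
session_minutes = 45   # arbitrary session length
n_beeps = 3            # arbitrary number of probes

beep_times = sorted(random.uniform(0, session_minutes) for _ in range(n_beeps))
for t in beep_times:
    minutes, seconds = int(t), int((t % 1) * 60)
    print(f"beep at {minutes}:{seconds:02d} into the performance")
```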

I've posted about this issue before; and although professional philosophy talks aren't paradigmatic examples of aesthetic performances, I have beeped people during some of my talks. One striking result: People spend lots of time thinking about things other than the explicit content of the performance -- for example, thinking instead about needing to go to the bathroom, or a sports bet they just won, or the weird color of an advertising flyer. And I'd bet Nutcracker audiences are similarly scatterbrained. (See also Schooler, Reichle, and Halpern 2004; Schubert, Vincs, and Stevens 2013.)


But I also get the sense that if I pause, I can gather the audience up. A brief pause is commanding -- in music (e.g. Roxanne), in film -- but especially in a live performance like a talk. Partly, I suspect this is due to contrast with previous noise levels, but also it seems to raise curiosity about what's next -- a topic change, a point of emphasis, some unplanned piece of human behavior. (How interesting it is when the speaker drops his cup! -- much more interesting, usually, in a sad and wonderfully primate way, than the talk itself.)

I picture people's conscious attention coming in waves. We launch out together reasonably well focused, but soon people start drifting their various directions. The speaker pauses or does something else that draws attention, and that gathers everyone briefly back together. Soon the audience is off drifting again.

We could study this with beepers. We could see if I'm right about pauses. We could see what parts of performance tend to draw people back from their wanderings and what parts of performance tend to escape conscious attention. We could see how immersive a performance is (in one sense of "immersive") by seeing how frequently people report being off topic vs. on a tangentially related topic vs. being focused on the immediate content of the performance. We could vastly improve our understanding of the audience experience. New avenues for criticism could open up. Knowing how to capture and manipulate the waves could help writers and performers create a performance more in line with their aesthetic goals. Maybe artists could learn to want waves and gatherings of a certain sort, adding a new dimension to their aesthetic goals.

As far as I can tell, no one has ever done a systematic experience sampling study during aesthetic experience that explores these issues. It's time.

Friday, January 10, 2014

Skeptical Fog vs. Real Fog

I am a passenger in a jumbo jet that is descending through turbulent night fog into New York City. I'm not usually nervous about flying, but the turbulence is getting to me. I know that the odds of dying in a jet crash with a major US airline are well below one in a million, but descent in difficult weather conditions is among the most dangerous parts of flight -- so maybe I should estimate my odds of death in the next few minutes as about one in a million or one in ten million? I can't say those are odds I'm entirely happy about.

But then I think: Maybe some radically skeptical scenario is true. Maybe, for example, I'm a short-term sim -- an artificial, computerized being in a small world, doomed soon to be shut down or deleted. I don't think that is at all likely, but I don't entirely rule it out. I have about a 1% credence that some radically skeptical scenario or other is true, and about 0.1% credence, specifically, that I'm in a short-term sim. In a substantial portion of these radically skeptical scenarios, my life will be over soon. So my credence that my life will soon end for some skeptical-scenario type reason is maybe about one in a thousand or one in ten thousand -- orders of magnitude higher than my credence that my life will soon end for the ordinary-plane-crash type of reason.
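Spelling out the arithmetic behind that comparison (the conditional probability of imminent death given a skeptical scenario is my own rough guess at what "a substantial portion" comes to):

```python
# Credence bookkeeping from the two paragraphs above.
p_radical = 0.01             # some radically skeptical scenario is true
p_death_given_radical = 0.1  # assumed stand-in for "a substantial portion"
p_plane_crash = 1e-7         # roughly one in ten million, per above

p_skeptical_death = p_radical * p_death_given_radical   # 0.001, one in a thousand
print(f"ratio: {p_skeptical_death / p_plane_crash:,.0f}x")  # 10,000x the crash risk
```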

Still, the plane-crash possibility worries me more than the skeptical possibility.

Does the fact that these skeptical reflections leave me emotionally cold show that I don't really, "deep down", have even a one-in-a-million credence in at least the imminent-death versions of the skeptical scenarios? Now maybe I shouldn't worry about those scenarios even if I truly assign a non-trivial credence to them. After all, there's nothing I can do about them, no action that I can reasonably take in light of them. I can't, for example, buy sim-insurance. But if that's why the scenarios leave me unmoved, the same is true about the descending plane. There's nothing I can do about the fog; I need to just sit tight. As a general matter, helplessness doesn't eliminate anxiety.

Here my interest in radical skepticism intersects another of my interests, the nature of belief. What would be involved in really believing that there is a non-trivial chance that one will soon die because some radically skeptical scenario is true? Does genuine belief only require saying these things to oneself, with apparent sincerity, and thinking that one accepts them? Or do they need to get into your gut?

My view is that it's an in-between case. To believe, on my account, is to have a certain dispositional profile -- to be disposed to reason, and to act and react, both inwardly and outwardly, as ordinary people would expect someone with that belief to do, given their other related attitudes. So, for example, to believe that something carries a 1/10,000 risk of death is in part to be disposed sincerely to say it does and to draw conclusions from that fact (e.g., that it's riskier than something with a 1/1,000,000 risk of death); but it is also to have certain emotional reactions, to spontaneously draw upon it in one's everyday thinking, and to guide one's actions in light of it. I match the dispositional profile, to some extent, for believing there's a small but non-trivial chance I might soon die for skeptical-scenario-type reasons -- for example, I will sincerely say this when reflecting in my armchair -- but in other important ways I seem not to match the relevant dispositional profile.

It is not at all uncommon for people intellectually to accept certain propositions -- for example, that their marriage is one of the most valuable things in their lives, or that it's more important for their children to be happy than to get good grades, or that custodians deserve as much respect as professors -- while in their emotional reactions and spontaneous thinking, they do not very closely match the dispositional profile constitutive of believing such things. I have argued that this is one important way in which we can occupy the messy middle space between being accurately describable as believing something and being accurately describable as failing to believe it. My own low-but-not-negligible credence in radically skeptical scenarios is something like this, I suppose.

Thursday, January 02, 2014

Our Possible Imminent Divinity

We might soon be gods.

John Searle might be right that digital computers could never be conscious. Or the pessimists might be right who say we will blow ourselves up before we ever advance far enough to create real consciousness in computers. But let's assume, for the sake of argument, that Searle and the pessimists are wrong: In a few decades we will be producing genuinely conscious artificial intelligences in substantial quantity.

We will then have at least some features of gods: We will have created a new type of being, perhaps in our image. We will presumably have the power to shape our creations' personalities to suit us, to make them feel blessed or miserable, to hijack their wills to our purposes, to condemn them to looping circuits of pain or reward, to command their worship if we wish.

If consciousness is only possible in fully embodied robots, our powers might stop approximately there, but if we can create conscious beings inside artificial environments, we become even more truly divine. Imagine a simulated world inside a computer with its own laws and containing multiple conscious beings whose sensory inputs all flow in according to the rules of that world and whose actions are all expressed in that world -- The Sims but with conscious AIs.

[image from http://tapirangkasaterbabas.blogspot.com; go ahead and apply feminist critique whenever ready]

Now we can command not only the AI beings themselves but their entire world.

We approach omnipotence: We can create miracles. We can drop in Godzilla, we can revive the dead, we can move a mountain, undo errors, create or end the whole world at a whim. Zeus would be envious.

We approach omniscience: We can look at any part of the world, look inside anyone's mind, see the past if we have properly recorded it -- possibly, too, predict the future, depending on the details of the program.

We stand outside of space and to some extent time: Our created beings can point in any direction of their sphere and not point at us -- we are everywhere and nowhere, not on their map, though capable of seeing and reaching anywhere. If the sim has a fast clock relative to our time, we can seem to endure for millennia or longer. We can pause their time and do whatever we like unconstrained by their clock. We can rewind to save points and thus directly view and interact with the past, perhaps sprouting off new worlds from it or rewriting the history of the one world.

But will we be benevolent gods? What duties will we have to our creations, and how well will we execute those duties? Philosophers don't discuss this issue as much as they should. (Nick Bostrom and Eliezer Yudkowsky are exceptions, and there's some terrific science fiction, e.g., Ted Chiang. In this story, R. Scott Bakker and I pit the duty to maximize happiness against the duty to give our creations autonomy and self-knowledge.)

Though to our creations we will literally have the features of divinity and they might rightly call us their gods, from the perspective of this level of reality we might remain very mortal, weak, and flawed. We might even ourselves be the playthings of still higher gods.

Wednesday, January 01, 2014

What I Wrote in 2013

I hope it is not too vain to begin 2014 with a retrospective of what I wrote in 2013. I kind of enjoy gathering it here -- it helps convince me that my solitary office labors are not a waste -- and maybe some readers will find it useful.

This work appeared in print in 2013:

This work is finished and forthcoming:
Also in 2013, I began writing short speculative fiction in earnest.  I am not sure how this will turn out; I think it's too early for me to know if I'm any good at it.  My first effort, a collaboration with professional fiction writer R. Scott Bakker, appeared in Nature ("Reinstalling Eden", listed above).  I wish I could post drafts on my website and solicit feedback, as I do with my philosophy articles, but fiction venues seem to dislike that.

Update January 2:
I fear the ill-chosen title of this post might give some people the misleading impression that I wrote all of this material during 2013.  Most of the work that appeared in print was finalized before 2013, and a fair portion of the other work was at least in circulating draft before 2013.  Here's how things stood at the end of 2012; lots of overlap!

Thursday, December 26, 2013

The Moral Epistemology of the Jerk

The past few days, I've been appreciating the Grinch's perspective on Christmas -- particularly his desire to drop all the presents off Mount Crumpit. An easy perspective for me to adopt! I've already got my toys (mostly books via Amazon, purchased any old time I like), and there's such a grouchy self-satisfaction in scoffing, with moralistic disdain, at others' desire for their own favorite luxuries.

(image from http://news.mst.edu)

When I write about jerks -- and the Grinch is a capital one -- it's always with two types of ambivalence. First, I worry that the term invites the mistaken thought that there is a particular and readily identifiable species of people, "jerks", who are different in kind from the rest of us. Second, I worry about the extent to which using this term rightly turns the camera upon me myself: Who am I to call someone a jerk? Maybe I'm the jerk here!

My Grinchy attitudes are, I think, the jerk bubbling up in me; and as I step back from the moral condemnations toward which I'm tempted, I find myself reflecting on why jerks make bad moralists.

A jerk, in my semi-technical definition, is someone who fails to appropriately respect the individual perspectives of the people around him, treating them as tools or objects to be manipulated, or idiots to be dealt with, rather than as moral and epistemic peers with a variety of potentially valuable perspectives. The Grinch doesn't respect the Whos, doesn't value their perspectives. He doesn't see why they might enjoy presents and songs, and he doesn't accord any weight to their desires for such things. This is moral and epistemic failure, intertwined.

The jerk fails as a moralist -- fails, that is, in the epistemic task of discovering moral truths -- for at least three reasons.

(1.) Mercy is, I think, near the heart of practical, lived morality. Virtually everything everyone does falls short of perfection. Her turn of phrase is less than perfect, she arrives a bit late, her clothes are tacky, her gesture irritable, her choice somewhat selfish, her coffee less than frugal, her melody trite -- one can create quite a list! Practical mercy involves letting these quibbles pass forgiven or, even better, entirely unnoticed, even if a complaint, were it made, would be just. The jerk appreciates neither the other's difficulties in attaining all the perfections he himself (imagines he) has nor the possibility that some portion of what he regards as flawed is in fact blameless. Hard moralizing principle comes naturally to the jerk, while it is alien to the jerk's opposite, the sweetheart. The jerk will sometimes give mercy, but if he does, he does so unequally -- the flaws and foibles that are forgiven are exactly the ones the jerk recognizes in himself or has other special reasons to be willing to forgive.

(2.) The jerk, in failing to respect the perspectives of others, fails to appreciate the delight others feel in things he does not himself enjoy -- just as the Grinch fails to appreciate the Whos' presents and songs. He is thus blind to the diversity of human goods and human ways of life, which sets his principles badly askew.

(3.) The jerk, in failing to respect the perspectives of others, fails to be open to frank feedback from those who disagree with him. Unless you respect another person, it is difficult to be open to accepting the possible truth in hard moral criticisms from that person, and it is difficult to triangulate epistemically with that person as a peer, appreciating what might be right in that person's view and wrong in your own. This general epistemic handicap shows especially in moral judgment, where bias is rampant and peer feedback essential.

For these reasons, and probably others, the jerk suffers from severe epistemic shortcomings in his moral theorizing. I am thus tempted to say that the first question of moral theorizing should not be something abstract like "what is to be done?" or "what is the ethical good?" but rather "am I a jerk?" -- or more precisely, "to what extent and in what ways am I a jerk?" The ethicist who does not frankly confront herself on this matter, and who does not begin to execute repairs, works with deficient tools. Good first-person ethics precedes good second-person and third-person ethics.

Wednesday, December 18, 2013

Should I Try to Fly, Just on the Off-Chance That This Might Be a Dreambody?

I don't often attempt to fly when walking across campus, but yesterday I gave it a try. I was going to the science library to retrieve some books on dreaming. About halfway there, in the wide-open mostly-empty quad, I spread my arms, looked at the sky, and added a leap to one of my steps.

My thinking was this: I was almost certainly awake -- but only almost certainly! As I've argued, I think it's hard to justify much more than 99.9% confidence that one is awake, once one considers the dubitability of all the empirical theories and philosophical arguments against dream doubt. And when one's confidence is imperfect, it will sometimes be reasonable to act on the off-chance that one is mistaken -- whenever the benefits of acting on that off-chance are sufficiently high and the costs sufficiently low.

I imagined that if I was dreaming, it would be totally awesome to fly around, instead of trudging along. On the other hand, if I was not dreaming, it seemed no big deal to leap, and in fact kind of fun -- maybe not entirely in keeping with the sober persona I (feebly) attempt to maintain as a professor, but heck, it's winter break and no one's around. So I figured, why not give it a whirl?

I'll model this thinking with a decision matrix, since we all love decision matrices, don't we? Call dream-flying a gain of 100, waking leap-and-fail a loss of 0.1, dreaming leap-and-fail a loss of only 0.01 (since no one will really see me), and continuing to walk in the dream a loss of 1 (since why bother with the trip if it's just a dream?). All this is relative to a default of zero for walking, awake, to the library. (For simplicity, I assume that if I'm dreaming things are overall not much better or worse than if I'm awake, e.g., that I can get the books and work on my research tomorrow.) I'd been reading about false awakenings, and at that moment 99.7% confidence in my wakefulness seemed about right to me. The odds of flying conditional upon dreaming I held to be about 50/50, since I don't always succeed when I try to fly in my dreams.

So here's the payoff matrix:

Leap -- awake (p = .997): -0.1; dreaming and the flying works (p = .003 × .5): +100; dreaming and the flying fails (p = .003 × .5): -0.01
Not leap -- awake (p = .997): 0; dreaming (p = .003): -1

Plugging into the expected value formula:

Leap = (.003)(.5)(100) + (.003)(.5)(-0.01) + (.997)(-0.1) = approx. +.05.

Not Leap = (.003)(-1) + (.997)(0) = -.003.

Leap wins!
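In case you'd like to check the arithmetic or plug in your own numbers, here's the same computation as a few lines of Python, using exactly the probabilities and payoffs stipulated above:

```python
# Expected values for the leap decision, using the stipulated numbers.
p_dream = 0.003           # credence that this is a dream
p_fly = 0.5               # chance the flying attempt works, given dreaming
p_awake = 1 - p_dream     # 0.997

ev_leap = (p_dream * p_fly * 100            # dream-flying: +100
           + p_dream * (1 - p_fly) * -0.01  # dream leap-and-fail: -0.01
           + p_awake * -0.1)                # waking leap-and-fail: -0.1
ev_walk = p_dream * -1 + p_awake * 0        # keep walking: -1 if dreaming

print(f"Leap: {ev_leap:+.4f}, Not leap: {ev_walk:+.4f}")
# Leap: +0.0503, Not leap: -0.0030
```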

Of course, this decision outcome is highly dependent on one's degree of confidence that one is awake, on the downsides of leaping if it's not a dream, on the pleasure one takes in dream-flying, and on the probability of success if one is in fact dreaming. I wouldn't recommend attempting to fly if, say, you're driving your son to school or if you're standing in front of a class of 400, lecturing on evil.

But in those quiet moments, as you're walking along doing nothing else, with no one nearby to judge you -- well, maybe in such moments spreading your wings can be the most reasonable thing to do.

Wednesday, December 11, 2013

How Subtly Do Philosophers Analyze Moral Dilemmas?

You know the trolley problems. A runaway train trolley will kill five people ahead on the tracks if nothing is done. But -- yay! -- you can intervene and save those five people! There's a catch, though: your intervention will cost one person's life. Should you intervene? Both philosophers' and non-philosophers' judgments vary depending on the details of the case. One interesting question is how sensitive philosophers and non-philosophers are to details that might be morally relevant (as opposed to presumably irrelevant distracting features like order of presentation or the point-of-view used in expressing the scenario).

Consider, then, these four variants of the trolley dilemma:

Switch: You can flip a switch to divert the trolley onto a dead-end side-track where it will kill one person instead of the five.

Loop: You can flip a switch to divert the trolley into a side-track that loops back around to the main track. It will kill one person on the side track, stopping on his body. If his body weren't there to block it, though, the trolley would have continued through the loop and killed the five.

Drop: There is a hiker with a heavy backpack on a footbridge above the trolley tracks. You can flip a switch which will drop him through a trap door and onto the tracks in front of the runaway trolley. The trolley will kill him, stopping on his body, saving the five.

Push: Same as Drop, except that you are on the footbridge standing next to the hiker and the only way to intervene is to push the hiker off the bridge into the path of the trolley. (Your own body is not heavy enough to stop the trolley.)

Sure, all of this is pretty artificial and silly. But orthodox opinion is that it's permissible to flip the switch in Switch but impermissible to push the hiker in Push; and it's interesting to think about whether that is correct, and if so why.

Fiery Cushman and I decided to compare philosophers' and non-philosophers' responses to such cases, to see if philosophers show evidence of different or more sophisticated thinking about them. We presented both trolley-type setups like this and also similarly structured scenarios involving a motorboat, a hospital, and a burning building (for our full list of stimuli, see Q14-Q17 here).

In our published article on this, we found that philosophers were just as subject to order effects in evaluating such scenarios as were non-philosophers. But we focused mostly on Switch vs. Push -- and also some moral luck and action/omission cases -- and we didn't have space to really explore Loop and Drop.

About 270 philosophers (with master's degree or more) and about 670 non-philosophers (with master's degree or more) rated paragraph-length versions of these scenarios, presented in random order, on a 7-point scale from 1 (extremely morally good) through 7 (extremely morally bad); the midpoint at 4 was marked "neither good nor bad". Overall, all the scenarios were rated similarly and near the midpoint of the scale (from a mean of 4.0 for Switch to 4.4 for Push [paired t = 5.8, p < .001]), and philosophers' and non-philosophers' mean ratings were very similar.

Perhaps more interesting than mean ratings, though, are equivalency ratings: How likely were respondents to rate scenario pairs equivalently? The Loop case is subtly different from the Switch case: Arguably, in Loop but not Switch, the man's death is a means or cause of saving the five, as opposed to a merely foreseen side effect of an action that saves the five. Might philosophers care about this subtle difference more than non-philosophers? Likewise, the Drop case is different from the Push case, in that Push but not Drop requires proximity and physical contact. If that difference in physical contact is morally irrelevant, might philosophers be more likely to appreciate that fact and rate the scenarios equivalently?

In fact, the majority of participants rated all the scenarios exactly the same -- and philosophers were no less likely to do so than non-philosophers: 63% of philosophers gave identical ratings to all four scenarios, vs. 58% of non-philosophers (Z = 1.2, p = .23).
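For readers curious how such comparisons work, here is a minimal two-proportion z-test sketch. Since the counts and percentages reported above are rounded, it only approximately reproduces the published Z and p values:

```python
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    # Pooled two-proportion z-test for a difference between rates.
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_tailed = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_tailed

# Rounded inputs: 63% of ~270 philosophers vs. 58% of ~670 non-philosophers.
z, p = two_proportion_z(0.63, 270, 0.58, 670)
print(f"Z = {z:.2f}, p = {p:.2f}")  # close to, not exactly, the reported Z = 1.2
```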

I find this somewhat odd. To me, it seems a pretty flat-footed form of consequentialism that says that Push is not morally worse than Switch. But I find that my judgment on the matter swims around a bit, so maybe I'm wrong. In any case, it's interesting to see both philosophers and non-philosophers seeming to reject the standard orthodox view, and at very similar rates.

How about Switch vs. Loop? Again, we found no difference in equivalency ratings between philosophers and non-philosophers: 83% of both groups rated the scenarios equivalently (Z = 0.0, p = .98).

However, philosophers were more likely than non-philosophers to rate Push and Drop equivalently: 83% of philosophers did, vs. 73% of non-philosophers (Z = 3.4, p = .001; 87% vs. 77% if we exclude participants who rated Drop worse than Push).

Here's another interesting result. Near the end of the study we asked whether it was worse to kill someone as a means of saving others than to kill someone as a side-effect of saving others -- one way of setting up the famous Doctrine of Double Effect, which is often invoked to defend the view that Push is worse than Switch (in Push, the one person's death is arguably the means of saving the other five; in Switch, the death is only a foreseen side-effect of the action that saves the five). Loop is interesting in part because, although superficially similar to Switch, if the one person's death is the means of saving the five, then maybe the case is more morally similar to Push than to Switch (see Otsuka 2008). However, only 18% of the philosophers who said it was worse to kill as a means of saving others rated Loop worse than Switch.

Thursday, December 05, 2013

Dream Skepticism and the Phenomenal Shadow of Belief

Ernest Sosa has argued that we do not form beliefs when we dream. If I dream that a tiger is chasing me, I do not really believe that a tiger is chasing me. If I dream that I am saying to myself "I'm awake!" I do not really believe that I'm awake. Real beliefs are more deeply integrated with my standing attitudes and my waking behavior than these dream-mirages are. If so, it follows that if I genuinely believe that I'm awake, necessarily I am correct; and conversely if I believe I'm dreaming, necessarily I'm wrong. The first belief is self-verifying; the second self-defeating. Deliberating between them, I should not choose the self-defeating one, nor should I decline to choose, as though these two options were of equal epistemic merit. Rather, I should settle upon the self-verifying belief that I am awake. Thus, dream skepticism is vanquished!

One nice thing about Sosa's argument is that it does not require that dream experience differ from waking experience in any of the ways that dreams and waking life are sometimes thought to differ (e.g., dream experience needn't be gappier, or less coherent, or more like imagery experience than like perceptual experience). The argument would still work even if dream experience were, as Sosa says, "internally indistinguishable" from waking experience.

This seeming strength of the argument, though, seems to me to signal a flaw. Suppose that dreaming life is in fact in every respect phenomenally indistinguishable from waking life -- indistinguishable from the inside, as it were -- and accordingly that I could easily experience exactly *this* while sleeping; and furthermore suppose that I dream extensively every night and that most of my dreams have mundane everyday content just like that of my waking life. None of this should affect Sosa's argument. And suppose further that I am in fact now awake (and thus capable of forming beliefs about whether I am dreaming, per Sosa), and that I know that due to a horrible disease I acquired at age 35, I spend almost all of my life in dreaming sleep so that 90% of the time when I have experiences of this sort (as if in my office, thinking about philosophy, working on a blog post...) I am sleeping. Unless there's something I'm aware of that points toward this not being a dream, shouldn't I hesitate before jumping to the conclusion that this time, unlike all those others, I really am awake? Probabilities, frequencies, and degrees of resemblance seem to matter, but there is no room for them in Sosa's argument.

Maybe we don't form beliefs when we dream -- Sosa, and also Jonathan Ichikawa, have presented some interesting arguments along those lines. But if there is no difference from the inside between dreams and waking, then my dreaming self, when he dreamed about considering dream skepticism (e.g., here), did something phenomenally indistinguishable from forming the belief that he was thinking about philosophy, and something phenomenally indistinguishable from affirming or denying or suspending judgment on the question of whether he was dreaming -- and then the question becomes: How do I know that I'm not doing that very same thing right now?

Call it dream-shadow believing: It's like believing, except that it happens only in dreams. If dream-shadow believing is possible, then if I dream-shadow believe that I am dreaming, necessarily I am correct; if I dream-shadow believe that I am awake, necessarily I am wrong. The first is self-verifying, the second self-defeating. The skeptic can now ask: Should I try to form the belief that I am awake or instead the dream-shadow belief that I am dreaming? -- and to this question, Sosa's argument gives no answer.

Update, 3:28 pm:

Jonathan Ichikawa has kindly reminded me that he presented similar arguments against Sosa back in 2007 -- which I knew (in fact, Jonathan thanks me in the article for my comments) but somehow forgot. Jonathan runs the reply a bit differently, in terms of quasi-affirming (which is neutral between genuine affirming and something merely phenomenally indistinguishable from affirming, and which one can do in a dream) rather than in terms of dream-shadow believing. Perhaps my dream-shadow-belief formulation enables a parity-of-argument objection: given the phenomenal indistinguishability of dreams and waking, the argument that one should settle on the self-verifying dream-shadow belief that one is dreaming seems as strong as Sosa's original argument.

Wednesday, November 27, 2013

Reinstalling Eden

If someday we can create consciousness inside computers, what moral obligations will we have to the conscious beings we create?

R. Scott Bakker and I have written a short story about this, which came out today in Nature.

You might think that it would be a huge moral triumph to create a society of millions of actually conscious, happy beings inside one's computer, who think they are living, peacefully and comfortably, in the base level of reality -- Eden, but better! Divinity done right!

On the other hand, there might be something creepy and problematic about playing God in that way. Arguably, such creatures should be given self-knowledge, autonomy, and control over their own world -- but then we might end up, again, with evil, or even with an entity both intellectually superior to us and hostile.

[For Scott's and my first go-round on these issues, see here.]

Friday, November 22, 2013

Introspecting My Visual Experience "as of" Seeing a Hat?

In "The Unreliability of Naive Introspection" (here and here), I argue, contra a philosophical tradition going back at least to Descartes, that we have much better knowledge of middle-sized objects in the world around us than we do of our stream of sensory experience while perceiving those objects.

As I write near the end of that paper:

The tomato is stable. My visual experience as I look at the tomato shifts with each saccade, each blink, each observation of a blemish, each alteration of attention, with the adaptation of my eyes to lighting and color. My thoughts, my images, my itches, my pains – all bound away as I think about them, or remain only as interrupted, theatrical versions of themselves. Nor can I hold them still even as artificial specimens – as I reflect on one aspect of the experience, it alters and grows, or it crumbles. The unattended aspects undergo their own changes too. If outward things were so evasive, they’d also mystify and mislead.
Last Saturday, I defended this view for three hours before commentator Carlotta Pavese and a number of other New York philosophers (including Ned Block, Paul Boghossian, David Chalmers, Paul Horwich, Chris Peacocke, Jim Pryor).

One question -- raised first, I think, by Paul B. then later by Jim -- was this: Don't I know that I'm having a visual experience as of seeing a hat at least as well as I know that there is in fact a real hat in front of me? I could be wrong about the hat without being wrong about the visual experience as of seeing a hat, but to be wrong about having a visual experience as of seeing a hat, well, maybe it's not impossible but at least it's a weird, unusual case.

I was a bit rustier in answering this question than I would have been in 2009 -- partly, I suspect, because I never articulated in writing my standard response to that concern. So let me do so now.

First, we need to know what kind of mental state this is about which I supposedly have excellent knowledge. Here's one possibility: To have "a visual experience as of seeing a hat" is to have a visual experience of the type that is normally caused by seeing hats. In other words, when I judge that I'm having this experience, I'm making a causal generalization about the normal origins of experiences of the present type. But it seems doubtful that I know better what types of visual experiences normally arise in the course of seeing hats than I know that there is a hat in front of me. In any case, such causal generalizations are not the sort of thing defenders of introspection usually have in mind.

Here's another interpretative possibility: In judging that I am having a visual experience as of seeing a hat, I am reporting an inclination to reach a certain judgment. I am reporting an inclination to judge that there is a hat in front of me, and I am reporting that that inclination is somehow caused by or grounded in my current visual experience. On this reading of the claim, what I am accurate about is that I have a certain attitude -- an inclination to judge. But attitudes are not conscious experiences. Inclinations to judge are one thing; visual experiences another. I might be very accurate in my judgment that I am inclined to reach a certain judgment about the world (and on such-and-such grounds), but that's not knowledge of my stream of sensory experience.

(In a couple of other essays, I discuss self-knowledge of attitudes. I argue that our self-knowledge of our judgments is pretty good when the matter is of little importance to our self-conception and when the tendency to verbally espouse the content of the judgment is central to the dispositional syndrome constitutive of reaching that judgment. Excellent knowledge of such partially self-fulfilling attitudes is quite a different matter from excellent knowledge of the stream of experience.)

So how about this interpretative possibility? To say I know that I am having a visual experience as of seeing a hat is to say that I am having a visual experience with such-and-such specific phenomenal features, e.g., this-shade-here, this-shape-here, this-piece-of-representational-content-there, and maybe this-holistic-character. If we're careful to read such judgments purely as judgments about features of my current stream of visual experience, I see no reason to think we would be highly trustworthy in them. Such structural features of the stream of experience are exactly the kinds of things about which I've argued we are apt to err: what it's like to see a tilted coin at an oblique angle, how fast color and shape experience get hazy toward the periphery, how stable or shifty the phenomenology of shape and color is, how richly penetrated visual experience is with cognitive content. These are topics of confusion and dispute in philosophy and consciousness studies, not matters we introspect with near infallibility.

Part of the issue here, I think, is that certain mental states have both a phenomenal face and a functional face. When I judge that I see something or that I'm hungry or that I want something, I am typically reaching a judgment that is in part about my stream of conscious experience and in part about my physiology, dispositions, and causal position in the world. If we think carefully about even medium-sized features of the phenomenological face of such hybrid mental states -- about what, exactly, it's like to experience hunger (how far does it spread in subjective bodily space, how much is it like a twisting or pressure or pain or...?) or about what, exactly, it's like to see a hat (how stable is that experience, how rich with detail, how do I experience the hat's non-canonical perspective...?), we quickly reach the limits of introspective reliability. My judgments about even medium-sized features of my visual experience are dubious. But I can easily answer a whole range of questions about comparably medium-sized features of the hat itself (its braiding, where the stitches are, its size and stability and solidity).

Update, November 25 [revised 5:24 pm]:

Paul Boghossian writes:

I haven't had a chance to think carefully about what you say, but I wanted to clarify the point I was making, which wasn't quite what you say on the blog, that it would be a weird, unusual case in which one misdescribes one's own perceptual states.

I was imagining that one was given the task of carefully describing the surface of a table and giving a very attentive description full of detail of the whorls here and the color there. One then discovers that all along one has just been a brain in a vat being fed experiences. At that point, it would be very natural to conclude that one had been merely describing the visual images that one had enjoyed as opposed to any table. Since one can so easily retreat from saying that one had been describing a table to saying that one had been describing one's mental image of a table, it's hard to see how one could be much better at the former than at the latter.

Roger White then made the same point without using the brain in a vat scenario.

I do feel some sympathy for the thought that you get something right in such a case -- but what exactly you get right, and how dependably... well, that's the tricky issue!

Friday, November 15, 2013

Skepticism, Godzilla, and the Artificial Computerized Many-Branching You

Nick Bostrom has argued that we might be sims. A technologically advanced society might use hugely powerful computers, he says, to run "ancestor simulations" containing actually conscious people who think they are living, say, on Earth in the early 21st century but who in fact live entirely inside an advanced computational system. David Chalmers has considered a similar possibility in his well-known commentary on the movie The Matrix.

Neither Bostrom nor Chalmers is inclined to draw skeptical conclusions from this possibility. If we are living in a giant sim, they suggest, that sim is simply our reality: All the people we know still exist (they're sims just like us) and the objects we interact with still exist (fundamentally constructed from computational resources, but still predictable, manipulable, interactive with other such objects, and experienced by us in all their sensory glory). However, it seems quite possible to me that if we are living in a sim, it might well be a small sim -- one run by a child, say, for entertainment. We might live for three hours' time on a game clock, existing mainly as citizens who will give entertaining reactions when, to their surprise, Godzilla tromps through. Or it might be just me and my computer and my room, in an hour-long sim run by a scientist interested in human cognition about philosophical problems.

Bostrom has responded that to really evaluate the case we need a better sense of which simulation scenarios are more and less likely. One large-sim-friendly thought is this: Maybe the most efficient way to create simulated people is to evolve up a large-scale society over a long period of (sim-clock) time. Another is this: Maybe we should expect a technologically advanced society capable of running sims to have enforceable ethical standards against running small sims that contain actually conscious people.

However, I don't see compelling reason to accept such (relatively) comfortable thoughts. Consider the possibility I will call the Many-Branching Sim.

Suppose it turns out that the best way to create actually conscious simulated people is to run a whole simulated universe forward billions of years (sim-years on the simulation clock) from a Big Bang, or millions of years on an Earth plus stars, or thousands of years from the formation of human agriculture -- a large-sim scenario. And suppose that some group of researchers actually does this. Consider, now, a second group of researchers who also want to host a society of simulated people. It seems they have a choice: Either they could run a new sim from the ground up, starting at the beginning and clocking forward, or they could take a snapshot of one stage of the first group's sim and make a copy. Which would be more efficient? It's not clear: It depends on how easy it is to take and store a snapshot and implement it on another device. But on the face of it, I don't see why we ought to suppose that copying would take more time or more computational resources than evolving a sim up from scratch.

Consider the 21st century game Sim City. If you want a bustling metropolis, you can either grow one from scratch or you can use one of the many copies created by the programmers or users. Or you could grow one from scratch and then save stages of it on your computer, shutting the thing down when things don't go the way you like and starting again from a save point; or you could make copied variants of the same city that grow in different directions.

The Many-Branching Sim scenario is the possibility that there is a root sim that is large and stable, starting from some point in the deep past, and then this root sim was copied into one or more branch sims that start from a save point. If there are many branch sims, it might be that I am in one of them, rather than in a root sim or a non-branching sim. Maybe one company made the root sim for Earth, took a snapshot in November 2013 on the sim clock, then sold thousands or millions of copies to researchers and computer gamers who now run short-term branch sims for whatever purposes they might have. In such a scenario, the future of the branch sim in which I am living might be rather short -- a few minutes or hours or years. The past might be conceptualized either as short or as long, depending on whether the past in the root sim counts as "this world's" past.
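To see why copying might be the cheap option (a toy sketch of my own, not anything from Bostrom or Chalmers): evolving a sim costs compute proportional to its entire history, while branching from a snapshot costs only something proportional to the size of the stored world-state.

    import copy

    class ToySim:
        """Toy stand-in for a simulated world: a clock plus some state."""
        def __init__(self):
            self.year = 0
            self.state = {"cities": [], "people": []}

        def run(self, years):
            for _ in range(years):
                self.year += 1
                # ...expensive world-update step would go here...

    root = ToySim()
    root.run(2013)  # evolving the root sim: cost scales with history length

    # Branching is just a snapshot copy: cost scales with state size,
    # however long the root took to evolve.
    branches = [copy.deepcopy(root) for _ in range(1000)]

If the economics work out that way, branch sims could vastly outnumber root sims -- which is just what the skeptical worry needs.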

Issues of personal identity arise. If the snapshot of the root sim was taken at root sim clock time November 1, 2013, then the root sim contains an "Eric Schwitzgebel" who was 45 years old at the time. The branch sims would also contain many other "Eric Schwitzgebels" developing forward from that point, of which I would be one. How should I think of my relationship to those other Erics? Should I take comfort in the fact that some of them will continue on to full and interesting lives (perhaps of very different sorts) even if most of them, including probably this particular instantiation of me, now in a hotel in New York City, will soon be stopped and deleted? Or to the extent I am interested in my own future rather than merely the future of people similar to me, should I be concerned primarily about what is happening in this particular branch sim? As Godzilla steps down on me, shall I try to take comfort in the possibility that the kid running the show will delete this copy of the sim after he has enjoyed viewing the rampage, then restart from a save point with New York intact? Or would deleting this branch be the destruction of my whole world?

Friday, November 08, 2013

Expert Disagreement as a Reason for Doubt about the Metaphysics of Mind (Or: David Chalmers Exists, Therefore You Don't Know)

Probably you have some opinions about the relative merit of different metaphysical positions about the mind, such as materialism vs. dualism vs. idealism vs. alternatives that reject all three options or seek to compromise among them. Of course, no matter what your position is, there are philosophers who will disagree with you -- philosophers whom you might normally regard as your intellectual peers or even your intellectual superiors in such matters – people, that is, who would seem to be at least as well-informed and intellectually capable as you are. What should you make of that fact?

Normally, when experts disagree about some proposition, doubt about that proposition is the most reasonable response. Not always, though! Plausibly, one might disregard a group of experts if those experts are: (1.) a tiny minority; (2.) plainly much more biased than the remaining experts; (3.) much less well-informed or intelligent than the remaining experts; or (4.) committed to a view that is so obviously undeserving of credence that we can justifiably disregard anyone who espouses it. None of these four conditions seems to apply to dissent within the metaphysics of mind. (Maybe we could exclude a few minority positions for such reasons, but that will hardly resolve the issue.)

Thomas Kelly (2005) has argued that you may disregard peer dissent when you have “thoroughly scrutinized the available evidence and arguments” on which your disagreeing peer’s judgment is based. But we cannot disregard peer disagreement in philosophy of mind on the grounds that this condition is met. The condition is not met! No philosopher has thoroughly scrutinized the evidence and arguments on which all of her disagreeing peers’ views are based. The field is too large. Some philosophers are more expert on the literature on a priori metaphysics, others on arguments in the history of philosophy, others on empirical issues; and these broad literatures further divide into subliteratures and sub-subliteratures with which philosophers are differently acquainted. You might be quite well informed overall. You’ve read Jackson’s (1986) Mary argument, for example, and some of the responses to it. You have an opinion. Maybe you have a favorite objection. But unless you are a serious Mary-ologist, you won’t have read all of the objections to that argument, nor all the arguments offered against taking your favorite objection seriously. You will have epistemic peers and probably epistemic superiors whose views are based on arguments which you have not even briefly examined, much less thoroughly scrutinized.

Furthermore, epistemic peers, though overall similar in intellectual capacity, tend to differ in the exact profile of virtues they possess. Consequently, even assessing exactly the same evidence and arguments, convergence or divergence with one’s peers should still be epistemically relevant if the evidence and arguments are complicated enough that their thorough scrutiny challenges the upper range of human capacity across several intellectual virtues – a condition that the metaphysics of mind appears to meet. Some philosophers are more careful readers of opponents’ views, some more facile with complicated formal arguments, some more imaginative in constructing hypothetical scenarios, etc., and world-class intellectual virtue in any one of these respects can substantially improve the quality of one’s assessments of arguments in the metaphysics of mind. Every philosopher’s preferred metaphysical position is rejected by a substantial proportion of philosophers who are overall approximately as well informed and intellectually virtuous as she is, and who are also in some respects better informed and more intellectually virtuous than she is. Under these conditions, Kelly’s reasons for disregarding peer dissent do not apply, and a high degree of confidence in one’s position is epistemically unwarranted.

Adam Elga (2007) has argued that you can discount peer disagreement if you reasonably regard the fact that the seeming-peer disagrees with you as evidence that, at least on that one narrow topic, that person is not in fact a full epistemic equal. Thus, a materialist might see anti-materialist philosophers of mind, simply by virtue of their anti-materialism, as evincing less than perfect level-headedness about the facts. This is not, I think, entirely unreasonable. But it is also fully consistent with still giving the fact of disagreement some weight as a source of doubt. And since your best philosophical opponents will exceed you in some of their intellectual virtues and will know facts and arguments -- ones they consider relevant or even decisive -- that you have not fully considered, you ought to give the fact of dissent quite substantial weight as a source of doubt.

Imagine an array of experts betting on a horse race: Some have seen some pieces of the horses’ behavior in the hours before the race, some have seen other pieces; some know some things about the horses’ performance in previous races, some know other things; some have a better eye for a horse’s mood, some have a better sense of the jockeys. You see Horse A as the most likely winner. If you learn that other experts with different, partly overlapping evidence and skill sets also favor Horse A, that should strengthen your confidence; if you learn that a substantial portion of those other experts favor B or C instead, that should lessen your confidence. This is so even if you don’t see all the experts quite as peers, and even if you treat an expert’s preference for B or C as grounds to wonder about her good judgment.
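Here is a toy model of the betting analogy (my own illustrative sketch; it idealizes the experts as fully independent, whereas in the example their evidence partly overlaps, so real-world updates should be more modest):

    from math import comb

    def p_horse_a_wins(k, n, acc=0.6, prior=1/3):
        """Posterior that Horse A wins, given that k of n independent
        experts favor A. Each expert names the true winner with
        probability `acc`, else picks one of the other two at random."""
        like_if_a = comb(n, k) * acc**k * (1 - acc)**(n - k)
        q = (1 - acc) / 2  # chance an expert favors A when A won't win
        like_if_not_a = comb(n, k) * q**k * (1 - q)**(n - k)
        numerator = like_if_a * prior
        return numerator / (numerator + like_if_not_a * (1 - prior))

    print(p_horse_a_wins(8, 10))  # broad agreement: ~0.999
    print(p_horse_a_wins(3, 10))  # most favor B or C: ~0.10

Even modestly reliable experts, when most of them disagree with you, should drag your confidence down sharply.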

Try this thought experiment. You are shut in a seminar room, required to defend your favorite metaphysics of mind for six hours (or six days, if you prefer) against the objections of Ned Block, David Chalmers, Daniel Dennett, and Saul Kripke. Just in case we aren’t now living in the golden age of metaphysics of mind, let’s add Kant, Leibniz, Hume, Zhu Xi, and Aristotle too. (First we’ll catch them up on recent developments.) If you don’t imagine yourself emerging triumphant, then you might want to acknowledge that the grounds for your favorite position might not really be very compelling.

It is entirely possible to combine appropriate intellectual modesty with enthusiasm for a preferred view. Consider everyone's favorite philosophy student: She vigorously champions her opinions while remaining intellectually open, acknowledging the doubt that appropriately flows from her awareness that others think otherwise, despite those others being in some ways better informed and more capable than she is. Even the best professional philosophers still are such students, or should aspire to be, only in a larger classroom. So pick a favorite view! Distribute your credences differentially among the options. Suspect the most awesome philosophers of poor metaphysical judgment. But also: Acknowledge that you don't really know.

[For more on disagreement in philosophy see here and here. This post is adapted from my paper in draft The Crazyist Metaphysics of Mind.]

Friday, November 01, 2013

Striking Confirmation of the Spelunker Illusion

In 2010, I worked up a post on what I dubbed The Spelunker Illusion (see also the last endnote of my 2011 book). Now, hot off the press at Psychological Science, Kevin Dieter and colleagues offer empirical confirmation.

The Spelunker Illusion, well-known among cave explorers, is this: In absolute darkness, you wave your hand before your eyes. Many people report seeing the motion of the hand, despite the absolute darkness. If a friend waves her hand in front of your face, you don't see it.

I see three possible explanations:

(1.) The brain's motor output and your own proprioceptive input create hints of visual experience of hand motion.

(2.) Since you know you are moving your hand, you interpret low-level sensory noise in conformity with your knowledge that your hand is in such-and-such a place, moving in such-and-such a way, much as you might see a meaningful shape in a random splash of line segments.

(3.) There is no visual experience of motion at all, but you mistakenly think there is such experience because you expect there to be. (Yes, I think you can be radically wrong about your own stream of sensory experience.)

Dieter and colleagues had participants wave their hands in front of their faces while blindfolded. About a third reported seeing motion. (None reported seeing motion when the experimenter waved his hand before the participants.) Dieter and colleagues add two interesting twists. One is a condition in which participants wave a cardboard silhouette of a hand rather than the hand itself; the effect remains, almost as strong as when the hand itself is waved. The other is that they track participants' eye movements.

Eye movements tend to be jerky, jumping around the scene. One exception, however, is smooth pursuit, when one stabilizes one's gaze on a moving object. This is not under voluntary control: Without an object to track, most people cannot move their eyes smoothly even if they try. In 1997, Katsumi Watanabe and Shinsuke Shimojo found that although people had trouble smoothly moving their eyes in total darkness, they could do so if they were trying to track their ("invisible") hand motion in darkness. Dieter and colleagues confirmed smooth hand-tracking in blindfolded participants and, strikingly, found that participants who reported sensations of hand motion moved their eyes much more smoothly than those who reported no sensations of motion.

I'm a big fan of corroborating subjective reports about consciousness with behavioral measures that are difficult to fake, so I love this eye-tracking measure. I believe that it speaks pretty clearly against hypothesis (3) above.

Dieter and colleagues embrace hypothesis (1): Participants have actual visual experience of their hands, caused by some combination of proprioceptive inputs and efferent copies of their motor outputs. However, it's not clear to me that we should exclude hypothesis (2). And (1) and (2) are, I think, different. People's experience in darkness is not merely blank or pure black, but contains a certain amount (perhaps a lot) of noise. Hypothesis (2) is that the effect arises "top down", as it were, from one's high-level knowledge of the position of one's hand. This top-down knowledge then allows you to experience that noisy buzz as containing motion -- perhaps changing the buzz itself, or perhaps not. (As long as one can find a few pieces of motion in the noise to string together, one might even fairly smoothly track that motion with one's eyes.)

Here's one way to start to pull (1) apart from (2): Have someone else move your hand in front of your face, so that your hand motion is passive. Although this won't eliminate proprioceptive knowledge of one's hand position, it should eliminate the cues from motor output. If efferent copies of motor output drive the Spelunker Illusion, then the Spelunker Illusion should disappear in this condition.

Another possibility: Familiarize participants with a swinging pendulum synchronized with a sound, then suddenly darken the room. If hypothesis (2) is correct and the sound is suggestive enough of the pendulum's exact position, perhaps participants will report still visually experiencing that motion.

Update, April 28, 2014:

Leonard Brosgole and Miguel Roig point out to me that these phenomena were reported in the psychological literature in Hofstetter 1970, Brosgole and Neylon 1973, and Brosgole and Roig 1983. If you're aware of earlier sources, I'd be curious to know.