Monday, April 29, 2013

Waterfall Skepticism

Yesterday morning around dawn I sat hypnotized by “Paradise Falls”. I had hiked there from my parents’ house while my family slept, as I often do when we visit my parents.

Although at first it didn’t feel that way, I wonder if I have been harmed by philosophy. I gazed at the waterfall, thinking about Boltzmann brains – thinking, that is, about the possibility that I had no past, or a past radically unlike what I usually suppose it to be, instead having just then randomly congealed by freak chance from disorganized matter. On some ways of thinking about cosmology, there are many more randomly congealed brains, or randomly congealed brains-plus-local-pieces-of-environment, than there are intelligent beings who have arisen in what we think of as the normal way, from billions of years of evolution in a large, stable environment. If such a cosmology is true, then it might be much more likely that I have randomly congealed than that I have lived a full forty-five years in human form. The thought troubled me, but also the spark of doubt felt comfortable in a way. I am accustomed to skeptical roads.

(Paradise Falls, 6:25 a.m., April 28, 2013)

Of course, most cosmologists flee from the Boltzmann brain hypothesis. If a cosmology implies that you are very likely a Boltzmann brain, that’s normally taken to be a reductio ad absurdum of that cosmology. But as I sat there thinking, I wondered if such dismissals arose more from fear of skepticism than from sound reasoning. I am no expert cosmologist, with a view very likely to be true about the origin and nature of the universe or multiverse and thus about the number of Boltzmann brains vs. evolved consciousnesses in existence – but neither are any professional cosmologists sufficiently expert to claim secure knowledge of these matters, so uncertain and fast-changing is the field. As I gazed around Paradise Falls, the Boltzmann brain hypothesis started to seem impossible to assess. This seemed especially so to me given the limited tools at hand – not even an internet connection! – though I wondered whether having such tools would really help after all. Still, the world did not dissolve around me, as I suppose it must around most spontaneously congealed brains. So as I endured, I came to feel more justified in my preferred opinion that I am not a Boltzmann brain. However, I also had to admit the possibility that my seeming to have endured over the sixty seconds of contemplating these issues was itself the false memory of a just-congealed Boltzmann brain. My skepticism renewed itself, though somehow this second time only as a shadow, without the force of genuine doubt.

I considered the possibility that I was a computer program in a simulated environment. If consciousness can arise in programmed silicon chips, then presumably there’s something it’s like to be such a computerized consciousness. Maybe such computer consciousnesses sometimes seem to dwell in natural environments, fed with simulated visual inputs (for example of waterfalls), simulated tactile inputs (for example of sitting on a stone), and false memories (for example of having hiked to the waterfall that morning). If Nick Bostrom is right, there might be many more such simulated beings than naturally evolved human beings.

I considered Dan Dennett’s argument against skepticism: Throw a skeptic a coin, Dennett says, and “in a second or two of hefting, scratching, ringing, tasting, and just plain looking at how the sun glints on its surface, the skeptic will consume more information than a Cray supercomputer can organize in a year” (1991, p. 6). Our experience, he says, has an informational wealth that cannot realistically be achieved by computational imitation. In graduate school, I had found this argument tempting. But it seemed to me yesterday that my vision of the waterfall was not as high fidelity as that, and could easily be reproduced on a computer. I fingered the mud at my feet. The complexity of tactile sensation did not seem to me the sort of thing beyond the capacity of a computer artificially to supply, if we suppose a future of computers advanced enough to host consciousness. We are so eager to reject skepticism that we satisfy ourselves too quickly with weak arguments against it.

Now maybe John Searle is right, and no computer could ever host consciousness. Or maybe, though computer consciousness is possible, it is never actually achieved, or achieved only so rarely that the vast majority of conscious beings are organically evolved beings of the sort I usually consider myself to be. But I hardly felt sure of these possibilities.

The philosophers who most prominently acknowledge the possibility that they are simulated beings instantiated by computer programs don’t seem very worried by it. They don’t push it in skeptical directions. Nick Bostrom seems to think it likely that if we are in a simulation, it is a large stable one. David Chalmers emphasizes that if we are in a simulation scenario like that depicted in the movie The Matrix, skepticism needn’t follow. And maybe it is the case that the easiest and most common way to create an artificial consciousness is to evolve it up through a billion or a million years in a stable environment; and maybe the easiest, cheapest way to create seeming conversation partners is to give those seeming conversation partners real consciousness themselves, rather than making them Eliza-like shells of simple response patterns. But on the other hand, if I take the simulation possibility seriously, then I feel compelled to take seriously also the possibility that my memories are mostly false, that I am instantiated within a smallish environment of short duration, perhaps inside a child’s game. I am the citizen to be surprised when Godzilla comes through; I am the victim to be rescued by the child’s swashbuckling hero; I am the hero himself, born new and not yet apprised of my magic. Nor did I have, at that moment, a clever conversation partner to convince me of her existence. I might be Adam entirely alone.

Fred Dretske and Alvin Goldman say that as long as my beliefs have been reliably enough caused by my environment, by virtue of well-functioning perceptual and memory systems, then I know that there’s a real waterfall there, I know that I have hiked the two kilometers from my parents’ house. But this seems to be a merely conditional comfort. If my beliefs have been reliably enough caused…. But have they? And I was no longer sure I believed, in any case. What is it, to believe? I still would have bet on the existence of my parents’ house – what else could I do, since skepticism offers no advice? – but my feeling of doubtless confidence had evaporated. Had everything dissolved around me at that moment, though I would have been surprised, I would not have felt utter shock. I was not seamlessly sure that the world as I knew it existed beyond the ridge.

I turned to hike back, and as I began to mount the slope, I considered Descartes’s madman. In his Meditations on First Philosophy, Descartes seems to say that it would be madness seriously to consider the possibility that one is merely a madman, like those who believe they are kings when they are paupers or who believe their heads are made of glass. But why is it madness to consider this? Or maybe it is madness, but then, since I am now in fact considering it, should that count as evidence that I am mad? Am I a philosopher who works at U.C. Riverside, whom some readers take seriously, or am I indeed just a madman lost in weird skepticism, with merely confused and stupid thoughts? Somehow, this skepticism felt less pleasantly meditative than my earlier reflections.

I returned home. That afternoon, in philosophical conversation I told my father that I thought he did probably exist other than as a figment of my mind. It seemed the wrong thing to say. I wanted to jettison my remnants of skepticism and fully join the human community. I felt isolated and ridiculous. Fortunately, my wife then called me in for a round of living-room theater, and playing the fox to my daughter's gingerbread girl cured me of my mood.

I thought about writing up this confession of my thoughts. I thought about whether readers would relate to it or see me only as possessed for a day by foolish, laughable doubts. Sextus Empiricus was wrong; I have not found that skepticism leads to equanimity.

Wednesday, April 24, 2013

John Searle's Homunculus Announces Phased Retirement

Details here.

The truth is, there are actually two homunculi in there, they've been squabbling, and this is part of a divorce settlement.

Monday, April 22, 2013

A Somewhat Impractical Plan for Immortality

... and arguably evil, too, though let's set that aside.

My plan requires the truth of a psychological theory of personal identity, a "vehicle externalist" account of memory, and some radical social changes. But it requires no magic or computer technology, and arguably we could actually implement it.

Psychological theory of personal identity. Most philosophers think that personal identity over time is grounded by something psychological. Twenty-year-old you and forty-year-old you are (or will be) the same person because of some psychological linkage over time -- maybe continuity of memory, maybe some other sort of counterfactual-supporting causal connectedness between psychological states over time. Maybe traits, values, plans, and projects come into the picture, too. In practice, people don't have the right kind of psychological connectedness without having biological bodily continuity. But that, perhaps, is merely a contingent fact about us.

Vehicle externalism about memory. What is memory? If a madman thinks he is Napoleon remembering Waterloo, he does not remember Waterloo, even if by chance he happens upon exactly the same memory images as Napoleon himself had later in life. Memory requires, it seems, the right kind of causal connectedness to the original event. But need the relevant causal connectedness be entirely interior to the skull? Vehicle externalists about memory say no, there is nothing sacred about the brain-environment boundary. External objects can hold, or partly hold, our memories, if they are hooked up to us with the right kind of reliable causal chains. Consider Andy Clark's and David Chalmers's delightful short paper on Otto, whose ever-available notebook serves as part of his mind; or consider a science-fiction case in which part of one's memory is temporarily transferred onto a computer chip and then later recovered.

Implementation. Could one's temporary memory reservoir be another person? I don't see why not, on a vehicle externalist account. And could the memories -- and the values and projects and whatever else is essential to personal identity -- then be transferred into another human body, for example, over the course of a decade or two into the body of a newborn baby as she grows up? I don't see why not, if we accept at least somewhat liberal versions of all the premises so far, and if we assume the most excellent possible shaping of that child.

By formatting a new child with your memories, your personality, your values, your projects, your loves, your hopes and regrets, you could thus continue into a new body. Presumably, you could continue this process indefinitely, taking a new body every fifty years or so.

As I said, a madman's dream of being Napoleon is no continuation of Napoleon. But the situation would be very different from that. There would be no madness. The memories would have well-preserved causal traces back to the original events; the crucial functional role of memory, to save those traces, would be preserved; everything would be held steadily in place by the person or people implementing this plan on your behalf, as a stable network of correctly functioning cognitive processes. And the result would be not just something on paper or in a memory chip but a consciously experienced memory image, felt by its owner to be a real authentic memory of the original event.

This could, it seems, be done with existing technology, using clever mnemonic and psychological techniques. One would need mnemonists who knew everything possible about you, who observed the same events and shared your memories, and who were exceptionally skilled at preserving this information and transferring it to the child. The question then would be whether it would be true that the child, when she grew up, would really be you, with your authentic memories, instead of a mad Napoleon. And the answer to that question depends on whether certain theories of personal identity and memory are true. If the right theories are indeed true, then immortality -- or rather, longevity potentially in perpetuity -- would be possible for sufficiently wealthy and powerful people now, if they only chose to implement it.

I have written a story about this: The Mnemonists.

The Mnemonists

[This is a draft of a short story. See here for explicit discussion of the philosophical idea behind the story.]

[Revised April 23]

When he was four years old, my Oligarch wandered away from his caretakers to gaze into an oval fountain. At sixteen, he blushingly refused the kiss he had so desperately longed for. A week before his death, he made plans (which must now be postponed) to visit an old friend in Lak-Blilin. I, his mnemonist, have internalized all this. I remember it just as he does, see the same images, feel the same emotions as he does in remembering those things. I have adopted his attitudes, absorbed his personality. My whole life is arranged to know him as perfectly as one person can know another. My first twenty years I learned the required arts. Since then, I have concentrated on nothing but the Oligarch.

My Oligarch knows that to hide from me is to obliterate part of himself. He whispers to me his most shameful thoughts. I memorize the strain on his face as he defecates; I lay my hands on his tensing stomach. When my Oligarch forces himself on his friend’s daughter, I press against him in the dark. I feel the girl’s breasts as he does. I forget my sex and hallucinate his ejaculation.

At my fiftieth birthday, my Oligarch toasts me, raising and then drinking down his fine crystal cup of hemlock. As he dies, I study his face. I mimic his last breath. A newborn baby boy is brought and my second task begins.

By age three, the boy has absorbed enough of the Oligarch’s identity to know that he is the Oligarch now again, in a new body. A new apprentice mnemonist joins us now, waiting in the shadows. At age four, the Oligarch finally visits his friend in Lak-Blilin, apologizing for the long delay. He begins to berate his advisors as he always had, at first clumsily, in a young child’s vocal register. He comes to take the same political stands, comes to dispense the same advice. I am ever at his side helping in all this, the apprentice mnemonist behind me; his trust in us is instant and absolute. At age eight, the Oligarch understands enough to try to apologize to his friend’s daughter – though he also notices her hair again in the same way, so good am I.

My Oligarch boy does not intentionally memorize his old life. He recalls it with assistance. Just as I might suggest to you a memory image, wholly fake, of a certain view of the sea with ragged mountains and gulls, which you then later mistake for a real memory image from your own direct experience, so also are my suggestions adopted by the Oligarch, but uncritically and with absolutely rigorous sincerity on both sides. The most crucial memory images I paint and voice and verbally elaborate. Sometimes I brush my fingers or body against him to better convey the feel, or flex his limbs, or ride atop him, narrating. I give him again the oval fountain. I give him again the refused kiss.

A madman’s dream of being Napoleon is no continuation of Napoleon. But here there is no madness. My Oligarch’s memories have continuous properly-caused traces back to the original events, his whole psychology continued by a stable network of processes, as he well knows. His plans and passions, commitments and obligations, legal contracts, attitudes and resolutions, vengeances, thank-yous and regrets – all are continued without fail, if temporarily set aside through infancy as though through sleep.

The boy, now eleven, is only middling bold, though in previous form, my Oligarch had been among the boldest in the land. I renew my stories of bold heroes, remind him of his long habit of boldness, subtly condition and reinforce him. I push the boundaries of acceptable technique. Though I feel the dissonance sharply, the boy does not. He knows who he is. He feels he has only changed his mind.

[continued here]

Thursday, April 18, 2013

Does Tylenol Ease Existential Angst?

Intriguing evidence here. Also my vote for best use of a David Lynch video so far this year.

I'm dubious about the model and mechanism and curious about whether it will prove replicable by researchers with different theoretical perspectives. But still. How cool is that study?

Wednesday, April 17, 2013

The Jerk-Sweetie Spectrum

A central question of moral epistemology is, or should be: Am I a jerk? Until you figure that one out, you probably ought to be cautious in morally assessing others.

But how to know if you're a jerk? It's not obvious. Some jerks seem aware of their jerkitude, but most seem to lack self-knowledge. So can you rule out the possibility that you're one of those self-ignorant jerks? Maybe a general theory of jerks will help!

I'm inclined to think of the jerk as someone who fails to appropriately respect the individual perspectives of the people around him, treating them as tools or objects to be manipulated, or idiots to be dealt with, rather than as moral and epistemic peers with a variety of potentially valuable perspectives. The characteristic phenomenology of the jerk is "I'm important, and I'm surrounded by idiots!" However, the jerk needn't explicitly think that way, as long as his behavior and reactions fit the mold. Also, the jerk might regard other high-status people as important and regard people with manifestly superior knowledge as non-idiots.

To the jerk, the line of people in the post office is a mass of unimportant fools; it's a felt injustice that he must wait while they bumble around with their requests. To the jerk, the flight attendant is not an individual doing her best in a difficult job, but the most available face of the corporation he berates for trying to force him to hang up his phone. To the jerk, the people waiting to board the train are not a latticework of equals with interesting lives and valuable projects but rather stupid schmoes to be nudged and edged out and cut off. Students and employees are lazy complainers. Low-level staff are people who failed to achieve meaningful careers through their own incompetence and who ought to take the scut work and clean up the messes. (If he is in a low-level position, it's just a rung on the way up or a result of crimes against him.)

Inconveniencing others tends not to register in the jerk's mind. Some academic examples drawn from some of my friends' reports: a professor who schedules his office hours at 7 pm Friday evenings to ensure that students won't come (and who then doesn't always show up himself); a TA who tried to reschedule his section times (after all the undergrads had already signed up and presumably arranged their own schedules accordingly) because they interfered with his napping schedule, and who then, when the staffperson refused to implement this change, met with the department chair to have the staffer reprimanded (fortunately, the chair would have none of it); the professor who harshly penalizes students for typos in their essays but whose syllabus is full of typos.

These examples suggest two derivative features of the jerk: a tendency to exhibit jerkish behavior mostly down the social hierarchy and a lack of self-knowledge of how one will be perceived by others. The first feature follows from the tendency to treat people as objects to be manipulated. Manipulating those with power requires at least a surface-level respect. Since jerkitude is most often displayed down the social ladder, people of high social status often have no idea who the jerks are. It's the secretaries, the students, the waitresses who know, not the CEO. The second feature follows from the limited perspective-taking: If one does not value others' perspectives, there's not likely to be much inclination to climb into their minds to imagine how one will be perceived by them.

In considering whether you yourself are a jerk, you might take comfort in the fact that you have never scheduled your office hours for Friday night or asked 70 people to rearrange their schedules for your nap. But it would be a mistake to comfort yourself so easily. There are many manifestations of jerkitude, and even hard-core jerks are only going to exhibit a sample. The most sophisticated, self-delusional jerks also employ the following clever trick: Find one domain in which one's behavior is exemplary and dwell upon that as proof of one's rectitude. Often, too, the jerk emits an aura of self-serving moral indignation -- partly, perhaps, as an anticipated defense against the potential criticisms of others, and partly due to his failure to think about how others' seemingly immoral actions might be justified from their own point of view.

The opposite of the jerk is the sweetheart or the sweetie. The sweetie is vividly aware of the perspectives of others around him -- seeing them as individual people who merit concern as equals, whose desires and interests and opinions and goals warrant attention and respect. The sweetie offers his place in line to the hurried shopper, spends extra time helping the student in need, calls up an acquaintance with an embarrassed apology after having been unintentionally rude.

Being reluctant to think of other people as jerks is one indicator of being a sweetie: The sweetie charitably sees things from the jerk's point of view! In contrast, the jerk will err toward seeing others as jerks.

We are all of us, no doubt, part jerk and part sweetie. The perfect jerk is a cardboard fiction. We occupy different points in the middle of the jerk-sweetie spectrum, and different contexts will call out the jerk and the sweetie in different people. No way do I think there's going to be a clean sorting.

-------------------------------------------------------
I'm accumulating examples of jerkish behavior here. Please add your own! I'm interested both in cases that conform to the theory above and in cases that don't seem to.

Compare also Aaron James's theory of assholes, which I discuss here.

Monday, April 15, 2013

Wanted: Examples of Jerkish Behavior

I'm working on a theory of jerks and I need data. In the comments section, I'm hoping some of you (ideally, lots of you) will describe examples of what you think of as typical "jerkish" behavior.

Here's why: I'm working on a theory of jerks. This theory is aimed largely at the question of how you can know if you are, in fact, a jerk. (Do you know?) Toward this end, I've worked a bit on the phenomenology of being a jerk and on the "jerk-sucker ratio". Soon, I plan to propose a "jerk-sweetie spectrum". But before I get too deep into this, I'd appreciate some thoughts from people not much influenced by my theorizing. I want to check my theory against proposed cases. Also, I'd like to draw a "portrait of a jerk", and I need things to include in the portrait.

Favorite examples I will pull up into the body of this post as updates. (And I'll keep my ear out for examples via comments on this post as long as I actively maintain this blog, since comments filter into my email.) Also, readers who provide any examples that I incorporate in my portrait of a jerk will receive due name credit in the final published version of my planned paper on this topic.

But please: no names of individuals. And nothing that will clearly single out a particular individual. And if you sign your true name, please be careful to be sufficiently vague that you risk no reprisal from the perpetrator!

The anti-hero of my portrait will probably be an academic jerk, so academic examples are especially welcome. However, this jerk lives outside of academia too, and my theory of jerks is meant to apply broadly, so I need a good range of non-academic examples, too.

I've Googled "What a Jerk" as a source of examples to kick the thing off. Below are a few. No obligation to read them before diving in with your own.

From Alan Lurie at Huffington Post:

I turned to see a tall bald man looking down at me as the train pulled in to the platform. I let two people in before me, and that's when I felt the push. As we turned toward the seats I felt another push on my back, and again looked at the man, who now released an annoyed huff of breath. What a jerk! I thought. Does he think that he's the only one who deserves a seat? Then I felt a poke on my shoulder, and in a loud angry voice the tall bald man said, "What are you looking at? You got a problem, buddy?"

From Sarah Cliff (2001):

My AA, Maureen, flubbed a meeting time - scheduled over something else - and he really lit into her. Not the end of the world - she had made a mistake, and he had to rearrange an appointment - but he could have gotten the point across more tactfully. And she is *my* AA. (And I am *his* boss, and he did it in front of me.)

From Richard Norquist (1961):

I know a college president who can be described only as a jerk. He is not an unintelligent man, nor unlearned, nor even unschooled in the social amenities. Yet he is a jerk cum laude, because of a fatal flaw in his nature--he is totally incapable of looking into the mirror of his soul and shuddering at what he sees there. A jerk, then, is a man (or woman) who is utterly unable to see himself as he appears to others. He has no grace, he is tactless without meaning to be, he is a bore even to his best friends, he is an egotist without charm.

From Florian Meuck:

He is such an unlikeable character. You never invited him; he sat down on your sofa and hasn’t left since. He never stops talking, which is quite annoying. But it’s getting worse: he doesn’t like to talk about energetic, positive, uplifting stuff. No – it’s the opposite! He’s a total bummer! He cheats, he betrays, he deceives, he fakes, he misleads, he tricks, and he swindles. He is negative, sometimes even malicious. He’s a black hole! He promotes fear – not joy. He persuades you to think small – not big. He convinces you to incarcerate your potential – not to unlock it.
Update, 4:43 p.m.:

Good comments so far! I'm finding this helpful. Thanks! I'm going to start pulling up some favorites into the body of the post, but that doesn't mean the others aren't helpful and interesting too.

* At the gym a few weeks ago. A man there (working out) had probably 10 weights of various sizes strewn in a wide radius around him, blocking other people's potential work-out space. I asked him if the weights were his, and he said "no - the person before me left them here, and I DON'T PICK UP OTHER PEOPLE'S WEIGHTS." [from anon 02:55 pm]

* the professor who has hard deadlines for their students, but then doesn't respond or reply promptly themselves, or expects perfection in writing but then has a syllabus and other written materials full of typos. [from Theresa]

* Anyone who blames low-level folk for problems that are obviously originating many levels higher up (or to the side). For example, berating a clerk for the store's return policy, the stewardess for the airline's cell phone rules, the waiter for the steak's doneness, etc. [from Jesse, 01:50 pm]

* If I'm descending the stairs towards the eastbound subway platform and I hear an approaching train, then I'll generally speed up if I see that the train is eastbound and I'll slow down if it's the westbound train. If there's no one in front of me on the stairs but there are several people following me, they'll use my change of pace as a signal re. whether the approaching train is eastbound or westbound. No one agreed on this tendency or explicitly recommended it. It's just a behaviour that arose spontaneously and became standard. So, if, on seeing that the train is indeed eastbound I deviate from the norm and slow my pace, thereby leading others behind me to slow down and miss the train, I'd say I've engaged in jerkish behaviour [from praymont, Apr 17]

Thursday, April 11, 2013

Adam and Eve in the Happiness Pod

The Institute for Evil Neuroscience has finally done it: human consciousness -- or quasi-human -- on a computer. By special courier, I receive the 2 exabyte memory stick. I plug it into my computer's port and install the proprietary software. A conscious being! By default, she has an IQ of 130, the ordinary range of human cognitive skills and emotional reactions, and sensory experiences as though she has just awakened on an uninhabited tropical island. I set my monitors to see through her eyes, my speakers to play her inner speech. She wonders where she is and how she got there. She admires the beauty of the island. She cracks a coconut, drinks its juice, and tastes its flesh. She thinks about where she will sleep when the sun sets.

With a few mouse clicks, I give her a mate -- a man who has woken on a nearby part of the island. The two meet. I have set the island for abundance and comfort: no predators, no extreme temperatures, a ready supply of seeming fruit that will meet all their biological (apparently biological) needs. The man and the woman talk -- Adam and Eve, their default names. They seem to remember no past, but they find themselves with island-appropriate skills and knowledge. They make plans to explore the island, which I can arbitrarily enlarge and populate.

Since Adam and Eve really are, by design, rational and conscious and capable of the full human range of feeling, the decision I just made to install them on my computer was as morally important as was my decision fifteen years ago to have children. Wasn't it? And arguably my moral obligations to Adam and Eve are no less important than my moral obligations to my children. It would be cruel -- not just pretend-cruel, like when I release Godzilla in SimCity or let a tamagotchi starve -- but really, genuinely cruel, were I to make them suffer. Their environment might not seem real to me, but their pains and pleasures are as real as my own. I should want them happy. I should seek, maybe, to maximize their happiness. Deleting their files would be murder.

They want children. They want the stimulation of social life. My computer has lots of spare capacity. Why not give them all that? I could create an archipelago of 100,000 happy people. If it's good to bring two happy children into the world, isn't it 50,000 times better to bring 100,000 happy island citizens into the world -- especially if they are no particular drain upon the world's (the "real world's") resources? It seems that bringing my Archipelago to life is by far the most significant thing I will ever do -- a momentous moral accomplishment, if also, in a way, a rather mundane and easy accomplishment. Click, click, click, an hour and it's done. A hundred thousand lives, brimming with joy and fulfillment, in a fist-sized pod! The coconuts might not be real (or are they? -- what is a "coconut", to them?), but their conversations and plans and loves have authentic Socratic depth.

By disposition, my people are friendly. There are no wars here. They will reproduce to the limit of my computer's resources, then they will find themselves infertile -- which they experience as somewhat frustrating but only one small disappointment in their enviably excellent lives.

If I was willing to spend thousands on fertility treatments to bring one child into the world, shouldn't I also be willing to spend thousands to bring a hundred thousand more Archipelagists (as I now call them) into the world? I buy a new computer and connect it to my old one. My archipelago is doubled. What a wealth of happiness and fulfillment I have just enabled! Shouldn't I do even more, then? I have tens of thousands of dollars saved up in my children's college funds. Surely a million lives brimming with joy and fulfillment are worth more than my two children's college education? I spend the money.

I devote my entire existence to maximizing the happiness, the fulfillment, the moral good character, and the triumphant achievements of as many of these people as I can make. This is no pretense. This is, for them, reality, and I treat it as earnestly as they do. I become a public speaker. I argue that there is nothing more important that Earthly society could do than to bring into existence a superabundance of maximally excellent Archipelagists. And as a society, we could easily create trillions of them -- trillions of trillions if we truly dedicated our energies to it -- many more Archipelagists than ordinary Earthlings.

Could there be any greater achievement? In comparison, the moon shot was nothing. The plays of Shakespeare, nothing. The Archipelagists might have a hundred trillion Shakespeares, if we do it right.

We face decisions: How much Earthling suffering is worth trading off for Archipelagist suffering? (One to one?) Is it better to give Archipelagists constant feelings of absolutely maximal bliss, even if doing so means reducing their intelligence and behavior to cow-like levels, or is it better to give them a broader range of emotions and behaviors? Should the Archipelagists know conflict, deprivation, and suffering or always only joy, abundance, and harmony? Should there be death and replacement or perpetual life as long as computer resources exist to sustain it? Is it better to build the Archipelagists so that they always by nature choose the moral good, or should they be morally more complex? Are there aesthetic values we should aim to achieve in their world and not just morality-and-happiness maximizing values? Should we let them know that they are "merely" sims? Should we allow them to rise to superintelligence, if that becomes possible? And if so, what should our subsequent relationship with them be? Might we ourselves be Archipelagists, without knowing it, in some morally dubious god's vision of a world it would be cool to create?

A virus invades my computer. It's a brutal one. I should have known; I should have protected my computer better with so much depending on it. I fight the virus with passion and urgency. I must spend the last of my money, the money I had set aside for my kidney treatments. I thus die to save the lives of my Archipelagists. You will, I know, carry on my work.

Wednesday, April 10, 2013

Philosophers' Carnival Sesquicentmensial!

Sesquicentmensial? Okay, I admit, I made the word up. I was going to say "sesquicentennial", but it's been 150 months, not 150 years, so I swapped in "mensis" ("month" in Latin) for "annus" ("year"). I think you'll agree that the result is semi-pulchritudinous!

The Philosophers' Carnival, as you probably know, posts links to selected posts from around the blogosphere, chosen and hosted by a different blogger every month. Since philosophers are just a bunch of silly children in grown-up bodies, I use a playground theme.

The Metaphysical Whirligig:
All the kids on the playground know who Thomas Nagel is. He's the one riding the Whirligig saying he has no idea what it's like to be a bat! Recently, he's been saying something about evolution and teleology that sounds suspiciously anti-Darwinian. But maybe most of us are too busy with our own toys to read it? Peter at Conscious Entities has a lucid and insightful review (part one and part two). Meanwhile, Michael McKenna at Flickers of Freedom is telling us that "free will" is just a term of art and so we can safely ignore, for example, what those experimental philosophy kids are doing, polling the other kids in the sandbox. Whoa, I'm getting dizzy!

The Philosophy of Mind Sandpit:
Some of the kids here are paralyzed on one side of their body, and they don't even know it. How sad! They grab their toys only from one side and the toys tumble out of their hands. Glenn Carruthers at Brains muses about what these anosognosics' lack of self-knowledge really amounts to. I like the nuance of his description, compared with the black-or-white portrayals of anosognosias some of the philosophy kids offer.

The Curving Tunnel of Philosophy of Language:
Wolfgang Schwarz is looking down the tunnel at a single red dot, viewed through two tubes, one for each eye -- but he doesn't know it's only one dot! What he really sees, Wo says, is just another Frege case, nothing requiring centered worlds, contra David Chalmers. In the comments, Chalmers responds. Meanwhile Rebecca Kukla and Cassie Herbert are listening to what the philosophy kids are whispering to each other on the side in "peripheral" forums, like blogs. Why are the boys getting all the attention?

The Epistemic Slide:
Some children stand at the top of the slide, afraid to go down and ruining the fun for everyone. Me for instance! I remain unconvinced that Hans Reichenbach or Elliott Sober have satisfactorily proven that the external world exists.

The Moral Teeter-Totter:
At Philosophy, Et Cetera, Richard Chappell is scolding those fun-loving up-and-down moral antirealists: Though they might think they can accept all the same first-order norms as do moral realists, they can't. Concern for others is, for antirealists, just too optional. Antirealists thus fail to see people as really mattering "in themselves". Are the antirealist children hearing this? No! They plug their ears, sing, and keep on endorsing whatever norms they feel like! At the Living Archives on Eugenics Blog, Moyralang discusses a fascinating case of parents trying to force a surrogate mother to abort her disabled baby. Some children just can't play nice with the special needs kids.

The Philosophy of Science Picnic Table:
See that kid sitting at the table with a winning lottery ticket? Why is she crying? At Mind Hacks, Tom Stafford gives a primer on research suggesting that money won't buy happiness. Meanwhile, the kids at Machines Like Us are gossiping about a new study suggesting that a large proportion of neuroscience research is unreliable due to small sample sizes. And that girl at the picnic table with the iPad? She's just seeing what Google autocompletes when you enter "women can't", "women should", "women need", vs. "men can't", "men should", "men need", etc. Nothing interesting there for us boys, of course!

The Historical Jungle Gym:
Steve Angle, at Warp, Weft, and Way -- what have you just put in your mouth?! Steve argues against PJ Ivanhoe's interpretation of the Confucian tradition as treating the moral skill as a kind of connoisseurship, like cultivation of taste in wine (or bugs). After all, even the poorly educated know that bugs taste bad!

The Metaphilosophy Spiderclimb:
What have all these children learned, really? Not much, maybe! Empty boasting might be the order of the day. Joshua Knobe at Experimental Philosophy pulls together the existing empirical evidence on philosophical expertise.

The Issues in the Profession Nanny:
Why aren't there more children on the equipment, you might wonder? So do I! It turns out they're wasting all their time applying for grants! Shame on them, says playground watcher Helen De Cruz at NewAPPS -- or rather, shame on the system. Children should be playing and jumping and throwing sand at each other, not forced to spend all their time hunting around in the grass for nickels. Meanwhile, Janet Stemwedel at Adventures in Ethics & Science tells a nice anecdote about the philosophy boys' cluelessness about the prevalence of sexual harassment -- still! But I know you philosophy blog readers won't be so ignorant, since you've been keeping up with the steady wave of shockers over at What Is It Like to Be a Woman in Philosophy?.

The next Philosophers' Carnival will be hosted by Camels with Hammers.

Monday, April 08, 2013

The Humor of Zhuangzi; the Self-Seriousness of Laozi

Why do I love Zhuangzi (aka Chuang Tzu) so much, when I so loathe Laozi (aka Lao Tzu)? Aren't they both "Daoists"?

It has something to do with Zhuangzi's humor and Laozi's self-seriousness, when they say strange things.

Compare them on death. First Zhuangzi:

When Chuang-tzu was dying, his disciples wanted to give him a lavish funeral. Said Chuang-tzu:

'I have heaven and earth for my outer and inner coffin, the sun and moon for my pair of jade discs, the stars for my pearls, the myriad creatures for my farewell presents. Is anything missing from my funeral paraphernalia? What will you add to these?'

'Master, we are afraid that the crows and the kites will eat you.'

'Above ground, I'll be eaten by the crows and kites; below ground, I'll be eaten by the ants and molecrickets. You rob the one of them to give to the other; how come you like them so much better?' (Graham, trans.)

Of course Zhuangzi's disciples will bury him. They're not going to throw his corpse under a tree! He's razzing them, using the occasion of his death to make a joke -- a joke with a point, of course. In fact, the joke has at least three points: the surface point of challenging the burial traditions taken so seriously by most of his contemporaries, but also points conveyed by his mood and tone -- rejecting solemnity and negativity about death, and undercutting his disciples' attempts to revere him. Challenging tradition, refusing to be unsettled by death, and undermining his own authority are all central themes in Zhuangzi. They come together so nicely here in a crisp joke! Although this fragment isn't from the Inner Chapters, it's a perfect slice of Zhuangzi.

Now Laozi on death:

To be courageous in daring leads to death;
To be courageous in not daring leads to life.
These two bring benefit to some and loss to others.
Who knows why Heaven dislikes what it does?
Even sages regard this as a difficult question.
The Way does not contend but is good at victory;
Does not speak but is good at responding;
Does not call but things come of their own accord;
Is not anxious but is good at laying plans.
Heaven's net is vast;
Its mesh is loose but misses nothing. (Ivanhoe, trans.)

No jokes here! (Or anywhere in the Daodejing.) Laozi is dispensing some serious advice: To be courageous in daring leads to death but to be courageous in not daring leads to life. Wait, "to be courageous in not daring"? What does that mean? Hm, maybe Laozi is advising us to avoid battle even if it means facing scorn? Or at least he's saying that doing so is likelier to preserve your life? Well, no surprise there! Naw, the passage can't be that vapid, can it?

The text continues: "These two bring benefit to some and loss to others." Okay, and...? For a moment, there seems to be a bit of self-doubt. He can't say why. I can almost hear the relief in his voice, though, when he says that even the sages find these questions difficult; there's no real threat to his self-esteem, even if he can't figure it out! And within two lines all is better, with Laozi back to the usual profound paradoxicalizing he seems to find so comfortable: "The Way does not contend but is good at victory", etc.

Laozi sounds so deep! But it is exactly this seeming-profundity I mistrust. It's easy to invent profound-seeming inversions. "The voice that speaks loudest is the one that is most quiet. The Way is largest in its being tiny. The basketball that misses the hoop is the one that truly goes in." Try ten more as an exercise at home. See, anyone can do it! Almost reflexively, the reader responds with attempts to see the deep sense in such remarks: Is Schwitzgebel saying that one gains more from failure in basketball than from success? Or is he saying that, in life, we should most admire the nothing-but-net swish shot that doesn't even touch the hoop? Or...? Wow, it's so multi-dimensionally profound it can't fully be articulated!

So we come up against the limits of language, or at least seem to. Let's compare Laozi and Zhuangzi on that issue. Here's the famous opening passage of Laozi's Daodejing:

A way that can be followed is not a constant Way
A name that can be named is not a constant name.
Nameless, it is the beginning of Heaven and Earth;
Named, it is the mother of the myriad creatures.
And so,
Always eliminate desires in order to observe its mysteries;
Always have desires in order to observe its manifestations.
These two come forth in unity but diverge in name.
Their unity is known as an enigma.
Within this enigma is yet a deeper enigma.
The gate of all mysteries! (Ivanhoe, trans.)

Just in case you couldn't tell from the profound-seeming reversals, you are also told explicitly: This is enigmatic! In fact, it's an enigma within an enigma! And this book is your gate to all that.

I admit it, Laozi makes me crabby. Probably I'm too uncharitable in reading him, but I think he's a poser. "Here, I've got secrets. Secrets within secrets, even! Too profound for words! If you're really in tune with the Dao, though, reader, you can start to fathom my depths. If anything I say seems silly or wrong, it's either your fault or the inherent limitations of language."

Contrast Zhuangzi on the limits of language:

Now I am going to make a statement here. I don't know if it fits into the category of other people's statements or not. But whether it fits into their category or whether it doesn't, it obviously fits into some category. So in that respect it is no different from their statements. However let me try making my statement.

There is a beginning. There is a not yet beginning to be a beginning. There is a not yet beginning to be a not yet beginning to be a beginning. There is being. There is nonbeing. There is a not yet beginning to be nonbeing. Suddenly there is being and nonbeing. But between this being and nonbeing, I don't really know which is being and which is nonbeing. Now I have just said something. But I don't know whether what I have said has really said something or whether it hasn't said something. (Watson, trans.)

Now, some interpreters (prominently, A.C. Graham) seem to think Zhuangzi is offering here a serious theory of not-yet-beginning-to-be-nonbeing. Really? It seems so clearly to me to be a parody! It wouldn't be the only parody in the Inner Chapters -- not by a long shot. Zhuangzi is gently mocking his friend the paradoxical logician Huizi and other philosophers advancing abstract general theories -- including maybe the folks who were putting together the Daodejing. But I don't see the humor here as mean-spirited or superior in tone; he brings his own language and theories within the umbrella of his mockery. He too finds his words collapsing around him, suspects his criticism applies to himself as much as it does to others. Once again, Zhuangzi undercuts himself where Laozi hypes himself. At least that's how I read it.

And to me, that difference in tone is all the difference in the world. A philosopher who says weird paradoxical things while undercutting those very things with humor and self-criticism is a very different philosopher from one who might say some superficially very similar-sounding weird paradoxical things while loudly insisting upon his own unfathomable profundity.

(For more hatin' on Laozi, see also this post. For more on Zhuangzi's self-undercutting uses of language see my essay here.)

Tuesday, April 02, 2013

The Splintered Mind Liability Release

Notice: By reading The Splintered Mind you agree to release, indemnify and hold forever harmless, the Splintered Mind, its owners, agents, officers, affiliates, volunteers, participants, employees, nominees, heirs, referees, commenters, commentators, publishers, guests, and all the great and minor and intermediate figures of the history of philosophy and psychology and in contemporary philosophy and psychology that might or might not be mentioned directly or indirectly herein (hereinafter "SM"), on behalf of yourself, your spouse, children, heirs, representatives, assigns, agents, estate, publishers, co-authors, academic or personal associates, and governmental and corporate and group-mind bodies, spirits, or other metaphysical vehicles over which you might or might not have full or partial control, from all responsibilities, liability, actions, demands, claims, losses, or costs of any sort whatsoever arising in any manner whatsoever from your use, non-use, or accidental or intentional discovery of this site, its archives, its links, the material in its links and to which it has failed to link and any behavior whatsoever on or off the internet, connected or unconnected with this site or the material within or not within, whether due to oversight, neglect, abuse, well-intentioned ineptitude, deliberate criminal malevolence aforethought, or arising from any cause or coincidence or lack of coincidence whatsoever. Should SM incur any legal costs directly or indirectly related to your action or lack of action, you agree to pay all such legal costs on behalf of SM plus an inconvenience fee of $1000 per hour (CPI adjusted to 2013 dollars) and free massages, even in the metaphysically impossible event that SM is found civilly or criminally responsible.

You hereby also further agree that reading The Splintered Mind is a risky activity that might result in false beliefs, dangerous lemmas, despair, loss of religion, adoption of a false religion, injury, death, insanity, in-between attitudes, ill-advised pragmatism, philosophical error, the sudden kindling of prurient desires, bizarre and uncontrollable thoughts, hatred of small cuddly kittens, up to and including condemnation to eternal torment; and that there are further risks, some known but intentionally held secret from you and some neither specified nor known. By reading this far (but even before reading this far), you have waived all of your rights in every jurisdiction, not only in the actual world but also in all possible and impossible worlds, whether distant, proximate, or entirely absurd, to any sort of action whatsoever and leave it entirely to the discretion of SM to treat you in any way they deem fit or unfit, without necessity of justification or defense.

If any portion of this contract is found void or unenforceable, the remaining portions shall remain in full force and effect.

This contract shall be binding upon all persons, non-human animals, aliens, group minds, and other entities and processes that have any causal contact or quantum-mechanical entanglement with SM, forward, backward, or sideways in time, mediated or unmediated, of any form whatsoever, whether they read this statement or not.

(Inspired by SkyZone and Hangar 18.)

Monday, April 01, 2013

A Two-Seater Homunculus

My neighbor Bill seemed like an ordinary fellow until the skiing accident. He hit a tree, his head split open, and out jumped not one but two homunculi. I persuaded them not to flee and sat them down for an interview.

The homunculi reproduce as follows. At night, while a person is sleeping, a female homunculus lays one egg in each of the victim's tear ducts. The eggs hatch and minute worms wiggle into the victim's brain. As the worms grow, they consume the victim's neurons and draw resources from the victim's bloodstream. Although there are some outward changes in the victim's behavior and physiological regulation, these changes are not sufficiently serious to engender suspicion in the victim's friends; the homunculi are careful to support the victim's remaining neural structure, especially by sending out from themselves neural signals similar to what the person would have received had their brain tissue not been consumed. The victim reports no discomfort and suspects nothing amiss.

Each growing homunculus consumes one hemisphere of the brain. Shared neural structures they divide equally between themselves. They communicate by whispering in a language much like English, but twenty times as fast -- which allows much less inter-hemispheric information transfer than in the normal human brain, of course; but as commissurotomy cases show, massive information transfer between the hemispheres is not essential to most normal human behavior. Any apparent deficits are masked by a quick stream of speech between the homunculi, and unlike hemispheric specialization in the human brain, both homunculi receive all inputs and have joint control over all outputs.

Two months later, the person is a two-seater vehicle for brother and sister homunculi. An internal screen of sorts displays the victim's visual input to both of the homunculi; through miniature speakers the homunculi hear the auditory input; tactile input is fed to them by dedicated sensors on their own limbs, etc. They control the victim's limbs and mouth by joint steering mechanisms. Each homunculus is as intelligent as a normal human being, though operating an order of magnitude more quickly due to their more efficient brains (carbon based, like ours, but operating on much different internal principles). When the homunculi disagree about what to do, they quickly negotiate compromises and deferences. When fast reactions are needed, behavior defaults to pre-negotiated compromises and deferences.

But what is it like for Bill? There is no Bill anymore, maybe, though he didn't notice his gradual disappearance. Or maybe there still is Bill, despite the radical change in his neurophysiology? How many streams of experience are there? Two? One for each homunculus but none for Bill? Or one stream only, for the two homunculi, if they are well enough integrated and fast-enough communicating? (How well and quickly integrated would they have to be to share a single stream?) Three streams? One for each homunculus plus one, still, for Bill? Two and a half? One for Bill and one and a half for the two partly-integrated homunculi? Or when counting streams of experience, must we always confine ourselves to whole numbers?

Bill always liked sushi. He never lost that preference, I think. Neither of the homunculi would want to put sushi in their own mouths, though. Bill loved his wife, the Lakers, and finding clever ways to save money on taxes. Bill can still recite Rime of the Ancient Mariner by heart, though neither of the homunculi could do it on their own. When he sees the swing in his backyard, Bill sometimes calls up a fond and vivid memory image of pushing his son on the swing, years ago. When the homunculi consumed his brain, they preserved the information in this memory image between them -- and in many others like it -- and they would draw it up on their visual imagery screen when appropriate. Characteristic remarks would then emerge from Bill, like "Maggie, do you remember how much Ethan loved to ride high on this swing? I can still picture his laughing face!" So spontaneous and natural does it seem to the well-entrenched homunculi to make such remarks, they told me, that they would lose all sense that they are acting. Maybe, indeed, they were no longer acting but really (jointly) became Bill.

I don't know why the homunculi thought I would not be alarmed upon learning all this. Maybe they thought that, as a crazy philosopher who takes literal group consciousness seriously, I'd think of the two-seater homunculus as merely an interesting implementation of good old Bill. But if so, their trust was misplaced. I snatched the homunculi, knocked them unconscious, and shoved them back inside Bill's head. I glued and stapled Bill's skull closed and consulted David Chalmers, my go-to source for bizarre scenarios involving consciousness.

None of what I said was news to Dave. He had been well aware of the homunculus infestation for some time. It had been a closely-held secret, to prevent general panic. But with the help of Christof Koch, a partial cure had just been devised, and news of the infestation was being disseminated where necessary to implement the cure.

The cure works by dissolving homunculi's own skulls and slowly fusing their brains together. Their motor outputs controlling the victim's behavior are slowly replaced by efferent neurons. Simultaneously, the remains of the homuncular bodies are slowly reabsorbed by the victim. At the end of the process, the victim's physiology is very different from what it had been before, but it is a single stream in a single brain, with no homuncular viewing screens or output controls and more or less the victim's original preferences, memories, skills, and behavior patterns.

All of this happened two years ago. Bill never knew the difference and remains happily married, though I had to tell a few white lies about his skiing accident to explain the scars.

Monday, March 25, 2013

Problematizing the B Condition on Knowledge

Central to contemporary epistemology is the question of what it is to know something. And the orthodox starting point in such discussions is the "JTB account" of knowledge: Knowledge is justified (J) true (T) belief (B), plus maybe some fourth condition to take care of weird cases where justified beliefs are true merely by accident (so-called Gettier cases). Discussion tends to focus on how to understand the J condition and whether some further fourth condition is necessary. The truth and belief conditions are typically regarded as unproblematic.

In a forthcoming paper, Blake Myers-Schulz and I pick up a mostly-cold torch from Colin Radford (whose seminal work on this topic was in the 1960s) and challenge the belief condition. Can one know that something is the case even if one doesn't believe that it's the case? We offer five plausible cases (one adapted from Radford) along with empirical evidence that our intuitions [note 1] about these cases are not idiosyncratic.

This paper has already drawn several follow-up studies, some critical and some supportive -- but interestingly, even the critical studies can be read as contributing to an emerging consensus that problematizes the belief condition. (I don't predict the consensus will last. They never do in philosophy. But still!)

First, to give you a feel for it, our cases:

1. An unconfident examinee who feels like she is guessing the answer but non-accidentally gets it right;

2. An absent-minded driver who momentarily forgets that a bridge he normally takes to work is closed and continues en route toward that bridge;

3. A prejudiced professor, who intellectually appreciates that her athletic students are just as capable as her non-athletic students but who nonetheless is persistently biased in her reactions to student-athletes;

4. A freaked-out movie-watcher who seems to have the momentary impression that the scenario depicted in a horror film is real;

5. A self-deceived husband who has lots of evidence that his wife is cheating and some emotional responses that seem to reveal that he knows this, but who refuses to admit the truth to himself.

Now maybe not all the cases work, but we think in each case there's at least some plausibility to the thought that the person in question knows (that Queen Elizabeth died in 1603, that the bridge is closed, that athletic students are just as capable, that aliens won't come out of her faucet, that his wife is cheating) but does not believe -- at least not as fully and determinately as she knows. And lots of undergraduates seem to agree with us! So we think the B condition on knowledge should at least be open for discussion. It should not simply be taken for granted as unproblematic.

Follow-up studies (e.g., here, here, here, and here) have added some new plausible cases. Our favorite of these is:

6. A religious fundamentalist geocentrist who aces her astronomy class -- seeming to know that Earth revolves around the sun but not to believe it.

Although some of these follow-up studies are pitched as in agreement with us and others as critique, we think there's actually a pretty clear thread of consensus through it all, from a bird's-eye view:

Knowledge requires some sort of psychological connection to the justified, true proposition -- something broadly like a belief condition; but it doesn't seem to require full-on act-it-and-live-it-and-breathe-it belief. However reasonable it might be to think the Earth goes round the sun, that fact has to register with me cognitively in some way if I am to qualify as knowing it; but the fact needn't play the full functional role of belief as envisioned in behaviorally-rich accounts of belief like my own. But how exactly should we conceptualize this somewhat weak but broadly beliefish psychological-connectedness condition? At this point, that's wide open.

Blake and I suggest that one must have the capacity to act on the stored information that P; Rose and Schaffer seem to suggest that what's crucial is that the information be "available to the mind"; Buckwalter and colleagues suggest that one must believe, but only in some "thin" sense of belief; Murray and colleagues suggest that one needs to be disposed to "assent" to the content. None of these approaches is well specified (and I've simplified them somewhat; apologies). Figuring out what's going on with the B condition thus seems like a potentially fruitful task that brings together core issues in epistemology and philosophy of mind.

-------------------------------------------------

[note 1] Yes, I use the word "intuition". Herman Cappelen has me worried about the term. But I stand firm!

Tuesday, March 19, 2013

Hans Reichenbach's Cubical World and Elliott Sober's Beach

In his 1938 book, Hans Reichenbach imagines a "cubical world" whose inhabitants are prevented from approaching the sides. Outside the world, birds fly, and their silhouettes show on the translucent ceiling of the world. A "friendly ghost" has arranged lights and mirrors so that identical silhouettes also appear on one wall of the world: any time a silhouette on the ceiling flaps its wings, a corresponding silhouette on the wall flaps its wings in perfect correspondence, etc.

Here's Reichenbach's diagram:

[Reichenbach's diagram of the cubical world]

The inhabitants of this world, Reichenbach says, will eventually come to infer that something exists beyond the cubical boundary that causes the shadows on the ceiling and wall. So likewise, he says, can we infer, from the patterns of relationship among our experiences, that something exists beyond those experiences, causing them.

It is crucial to Reichenbach's argument that the inhabitants of this world ("cubists", let's call them) infer the existence of something beyond the walls that is the common cause of the pairs of corresponding silhouettes. If the cubists could reasonably believe that only the shadows existed, with laws of relation among them, no external world would follow; and so correspondingly in the experiential case there might only be laws of relationship among our experiences with no external common cause beyond.

Unfortunately, it's obscure why Reichenbach thinks the cubists couldn't instead reach the conclusion that the shadows on the ceiling directly affect the shadows on the wall or vice versa, e.g., by the transmission of invisible and unblockable waves through the interior of the cube or simply by action at a distance. (In Reichenbach's mirror set-up, height has no influence on the bird's ceiling position but it does influence position on the wall; and the reverse holds for horizontal position; but direct-causers can posit hidden-variable explanations or similar.) Reichenbach addresses this worry with a single sentence: Within the confines of cubical world, he says, the cubists will have found that "Whenever there were corresponding shadow-figures like spots on the screen, there was in addition, a third body with independent existence", so they'll reasonably regard it as likely that the same is true on their walls (p. 123).

There are two serious problems with this response. First, it cannot be straightforwardly adapted to the sensory-experience/external-world case, which is of course the real aim of Reichenbach's argument. Second, it is false anyway: We can readily construct cases where one spot on a screen causes another on a separate screen without a common cause behind them, e.g. by using a mirror to reflect light from one screen onto another or by making the first screen sufficiently translucent and staging the second screen directly behind it; this is no less natural than the friendly ghost's arrangement.

In a 2011 article, Elliott Sober (the Reichenbach professor at UW-Madison!) notices the weaknesses in Reichenbach's argument and offers a new approach in its place. Call it Sober's beach.

Sober imagines sitting on the beach, noticing the correlation between visual experiences of waves breaking on the beach and auditory experiences of crashing waves. The two types of experience cannot be related as cause and effect because he can stop one while the other continues: When he closes his eyes he still hears the crashing; when he stops his ears he still sees the breakers. Presumably, then, there's a common cause of both.

So far, so good. But to establish an external world beyond the realm of experience, we must establish that this common cause is something outside the realm of experience. Sober responds to this concern by considering one solipsistic alternative: the intention to go to the beach. He then argues that this intention cannot serve as an adequate common cause because the visual and auditory experiences are correlated beyond what would follow simply from taking the intention into account. So he challenges the solipsist to produce a more adequate common cause. He suggests that this challenge cannot be met.

But it can be met! Or so I think. The common cause could be my first beach-like experience. This experience, whether auditory or visual or both, then causes subsequent beach-like experiences. That takes care of the correlation. If I have an experience as of closing my eyes, the auditory experience at time 1 causes the auditory experience at time 2, and also the visual experience at time 2 conditionally upon my having an experience as of opening my eyes; analogously if I stop my ears. The solipsist can either play this out with the first experience causing all the subsequent ones until conditions change, or she can have each experience cause the next in a chain. On the chaining version, if I have my eyes and ears simultaneously closed, my opinion that I will soon have beach-like experiences then does the causal work. (There are imperfections in these regularities, of course, e.g., I might seem to myself to have booted up an audio recording of waves, but to take advantage of those imperfections is contrary to the spirit of the toy example and would cause trouble for Sober's model too.)
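To make the structure of the solipsist's model vivid, here is a minimal toy simulation -- purely my own illustration, not Sober's; the state names and transition rules are invented for the sketch. Each moment's beach-like experiences are caused only by the previous moment's experiences plus experiences as of opening or closing the eyes and ears; no variable standing for an external world appears anywhere, yet the visual and auditory experiences stay correlated whenever both channels seem open.

    # Toy sketch of the solipsist's "chained" model (my own illustration, not
    # Sober's): each experience at time t is caused only by the experiences at
    # time t-1 plus experiences as of opening/closing eyes and ears. No variable
    # for an external world appears anywhere in the model.

    def next_experience(prev, eyes_seem_open, ears_seem_open):
        """Experiences at time t, given only the experiences at time t-1."""
        beachlike = prev["sees_waves"] or prev["hears_waves"] or prev["expects_beach"]
        return {
            "sees_waves": beachlike and eyes_seem_open,
            "hears_waves": beachlike and ears_seem_open,
            "expects_beach": beachlike,  # the opinion that beach-like experiences will continue
        }

    # Start from a single beach-like experience and let the chain run.
    stream = [{"sees_waves": True, "hears_waves": True, "expects_beach": True}]
    for eyes, ears in [(True, True), (False, True), (False, False), (True, True)]:
        stream.append(next_experience(stream[-1], eyes, ears))

    for t, experience in enumerate(stream):
        print(t, experience)
    # Whenever eyes and ears both seem open, wave-sights and wave-sounds co-occur,
    # though nothing outside the stream of experience drives either of them.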

I agree in spirit with what Reichenbach and Sober are trying to do -- and Bertrand Russell, and Jonathan Vogel. The most reasonable explanation of the patterns in my experience is that there is an external world behind those experiences. But the argument isn't quite as easy as it looks! That's why you need to read Alan Moore's and my paper on the topic. (Or for short blog-post versions of our arguments, see here, here, and here.)

Revised 4:42 pm.

Monday, March 18, 2013

How to Give $1 Million a Year to Philosophers

I hate grants.

I got into philosophy to think and to read and to teach -- to write articles and blog posts, to meet with grad students, to put together engaging classes for undergrads, to argue with colleagues. That's what I want to do with my time. What I don't want to do is spend lots of time applying for grant money.

And society should feel the same way. I'm an employee of the University of California, my salary funded by taxpayers. Taxpayers want me to teach. Taxpayers should want me to do research too -- if for no other reason than to make me the kind of leading scholar who can teach cutting-edge classes. But taxpayers should not be paying me to spend large amounts of time writing down stuff to convince some committee that I deserve money more than Professors X, Y, and Z deserve money. The amount of time academics spend applying for grants is a giant, loathsome waste of the energy of some of the most capable minds in the world.

Plus, why should we want to tie researchers down to what they thought they wanted to do two years ago, when they applied? Times change, ideas mature, opportunities arise!

But society has to fund research, right? So there need to be grants out there to support researchers.

The solution is simple: Give money to researchers for their research without their having to apply, and let them spend it on any reasonable research expenses. That should be the dominant model of grant funding. The committees that award grant money should spend their time finding out who, based on recent performance, is likely to put research money to best use, and they should simply hand those people the money. This will free up the time of those people to do more of their interesting research. Think MacArthur "genius" grants on a small scale.

A few strings should be attached, so that recipients don't just pocket the money as salary. Suppose you're a foundation with $1 million a year to fund philosophical research on Topic X. Here's how you might do it. Form a committee of leading scholars on Topic X. Have them find 40 people who are actively doing excellent work on Topic X -- from post-docs through distinguished professors, some at every level. And then send each of the potential recipients a letter offering them $25,000 over the course of five years, with the following two conditions: (1.) The money be spent only on documented research expenses (provide a list of allowable expenses), and (2.) In the last year they receive money, they come to your annual conference to present some of their research on Topic X. (Right, you now have to host an annual conference on Topic X, where your brilliant researchers can argue with each other. That seems like a good idea anyway, doesn't it?)

(Or make it $10,000 to 100, or $100,000 to 10, or whatever -- depending on the committee's vision.)
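For what it's worth, the arithmetic works out cleanly on one natural reading -- my own assumption, not spelled out above -- on which a fresh cohort of recipients is selected each year and each award is charged in full against that year's budget:

    # Back-of-the-envelope check, on my assumed reading: a fresh cohort is chosen
    # each year, and each recipient's full award counts against that year's budget
    # (the recipient then spends it on research over up to five years).

    annual_budget = 1_000_000

    for recipients, award in [(40, 25_000), (100, 10_000), (10, 100_000)]:
        total = recipients * award
        fits = "fits" if total <= annual_budget else "exceeds"
        print(f"{recipients} recipients x ${award:,} = ${total:,} ({fits} the ${annual_budget:,} budget)")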

A good committee (maybe eventually composed of past recipients) can easily identify a good pool of leading active researchers on Topic X. Look at who is publishing; look at who is presenting at conferences; etc. Those are the people to fund. They want to travel; they want to buy sabbatical time to be able to focus better on their research; they want to spend money on books and equipment and graduate student research assistants. I predict that the committee would end up funding better research overall, with less waste -- at least in philosophy -- than if it waited for applicants to come to it, funding on the basis of shiny, tight-looking proposals.

If the committee is especially interested in encouraging certain sorts of activities, the committee could also offer some funding contingent on executing those activities: $15,000 of research money if the recipient is willing to use it to organize a mini-conference, $4,000 bonus research money if the recipient speaks at four universities in continental Europe, whatever.

To counteract some of the elitism inherent in the proposal, the committee should be especially directed to look for researchers with large teaching loads and non-elite appointments who are able to be research-productive nonetheless. Such people especially deserve funding, and they might be especially likely to put their funding to good use.

I don't think this is the only way research should be funded. The standard model can still play a role: People with no brilliant track record, or who have been overlooked by the committee, should be funded if they can put together excellent proposals. Some people might have especially ambitious visions requiring sums of money larger than the usual amounts. For these cases -- but, I would argue, only for these cases -- the standard granting model makes sense.

(Templeton, are you listening?)

Update March 19:

In the comments, Neil reminded me about people whose salary is paid for by grants. This can easily be added to my system. One way is contingent directed funding: Offer some recipients $N of funding for a post-doc if they are willing to do the search and supervision. Another way is to have people apply for either renewable or non-renewable salaried grants, much as they might apply for a job. For the renewable ones, continuation would be based on actual performance during the lifetime of the grant.

Update April 9:

See Helen De Cruz's excellent post on this at NewAPPS, too.

Tuesday, March 12, 2013

The Coolest of All Possible Worlds (a Theodicy for the 21st Century)

If I knew that there were a planet with life on the far side of the galaxy, with no hope of contact with us, what would I wish for it? Not that it be merely bacteria, nor that it be merely happy cows, but rather that it soar with the heights of civilization, science, the arts, philosophy -- right? Wouldn't that be better, cooler? Let's see, what else....

"Really cool wars?" suggests one of my TAs (Will Swanson), when I run the idea by him.

Yesterday, another one of my TAs (Meredith McFadden) was guest lecturing to my course, Philosophy 5 ("Evil"), on "the problem of evil": If God is omnipotent, omniscient, and benevolent, this should be the best of all possible worlds, shouldn't it? But it doesn't look like the best of all possible worlds. There are of course some traditional theological responses to this problem, though they all face considerable obstacles. However, with the help of Will's suggestion about wars, we might construct a somewhat different theodicy. In this theodicy, God is not omnipotent, omniscient, and benevolent but rather:
(1.) omnipotent,
(2.) omniscient, and
(3.) super-cool.

Thus, instead of creating the best of all possible worlds, God creates the coolest of all possible worlds. The question then arises: Do we in fact live in the coolest of all possible worlds?

My first inclination is to say no. But the super-cool theologian can respond to some of the obvious objections. The issue isn't entirely straightforward.

Objection 1: The world would be cooler if magic were real.
Reply 1: No, if magic were real, we would just call it physics. Maybe, in this sense, magnetism is magic. It's much cooler for magic to be imaginary. (I owe this point also to Will.)

Objection 2: The world would be cooler if aliens were real.
Reply 2: The universe is large enough that aliens probably are real. We're not in contact with them (yet?), but it's not obviously cooler to have a universe in which every intelligent species is in contact with other intelligent species than to have a universe in which some intelligent species are isolated.

Objection 3: The world would be cooler if dorky person X didn't exist.
Reply 3: Although joy seems to be possible without suffering (a problem for traditional theodicies, especially those with a Heaven), coolness is arguably impossible without uncoolness as a contrast. For example, The Rolling Stones wouldn't have been nearly so cool if there weren't also dorky Beatles-imitators to contrast with.

Objection 4: The Holocaust was seriously uncool, and in a way that cannot be fully counterbalanced by any contrast effect.

Now, before I reply, let me say that I don't think the reply I'm about to give ultimately works, and I am reluctant to say anything good about the Holocaust. But theologians who think that this is the best of all possible worlds are in an even worse position, because all the super-cool theologian needs to say is that the world is cooler for having had the Holocaust than for not having had it -- not that the world is better all things considered for having had the Holocaust.

Reply 4: Let's suppose that the super-cool theologian does in fact buy into the idea of cool wars -- buys into the idea that violence, disaster, and tragedy can make for a cooler world than a world in which people are always placid and happy. Maybe The Lord of the Rings can be a model here. If Tolkien's world is cool, well, Sauron perpetrates some serious death and horror, and that's essential to the coolness of the world. If we think about Tolkien's world or a world on the other side of the galaxy, maybe we can warm up to the idea that huge amounts of horrible tragedy and undeserved suffering can belong in a maximally cool world, if there's also enough triumph at the end. Will likes Nietzsche, and maybe this attitude fits with Nietzschean yes-saying to even the most horrible aspects of the world.

Alternatively, maybe we can do some natural theology here: We can try to infer the attributes of God by looking out at the world God chose to create. If God was going for coolness, God must have thought a world with the Holocaust would be cooler than one without. Maybe this says something about God's moral character. Maybe we're like soldier ants God finds it cool to shake up and watch fight? A God with little sympathy for us but an interest in "cool wars" might think Nazis are the coolest bad guys, in part because of the irredeemably evil awfulness of the Holocaust. I can't say that I would be fond of such a God, but if "coolness" isn't sharply separable from benevolence, super-cool theology is no real alternative to orthodoxy.

A Euthyphro question arises. Is something cool because it is seen as cool by the super-cool God, or is God super-cool because God loves things that are cool -- things that would be cool regardless of God's preferences? Although there are surely limits -- an uncool dork God seems possible -- to some extent it seems God could make things cool by finding them cool. For example, if God started wearing hightop sneakers, that might make hightop sneakers cooler than they would be if God weren't wearing them. I doubt this works for the Holocaust though.

The picture, then, would be of an unbenevolent God who is entirely willing to inflict vast undeserved suffering in the interests of a "cool" historical arc, with maybe some triumphs and awesomeness down the road that we can't yet anticipate, and for whom uncoolness is justified mainly to make current and future coolness pop out ever more coolly. I can't say we have great evidence for this view. But a sufficiently motivated theist might find it avoids some of the problems that flow from assuming divine benevolence.

[Revised March 13]

Thursday, March 07, 2013

Against the One True Kant

I start with two premises:
Premise 1: All human beings are bad at philosophy.
Premise 2: Kant was a human being.
Therefore, um, uh, let's see....

It is sufficient for a person's being "bad at philosophy" in the relevant sense that when that person tries to build an ambitious, elaborate philosophical system that addresses the great, enduring questions of metaphysics and epistemology, there will be some serious errors in the system, as a result of the person's cognitive shortcomings (e.g., invisible presuppositions, equivocal arguments). It is very easy to be bad at philosophy in this sense, and we have excellent empirical evidence for Premise 1. Premise 2 also seems well attested. Further supporting evidence for the conclusion comes from the boneheaded things Kant sometimes says when he is speaking clearly and concretely rather than in a difficult-to-evaluate haze of abstracta.

Here's a vision, then, of Kant:

Kant has a brilliant sense of what it would be very cool to be able to prove -- or at least a brilliant sense of what lots of philosophers think it would be very cool to be able to prove. For example, it would be very cool to (non-question-beggingly) prove that the external world exists. It would be very cool to prove that immorality is irrational. Kant also has some clever and creative pieces of argumentation that seem like promising elements in potential proofs of this sort. And finally, Kant has an intimidating aura of authority. He creates a fog of jargon through which the pieces of argument appealingly glint, in their coolness and cleverosity. And, voila, he asserts success. If you fail to understand, the fault seems to be yours.

Maybe this sounds bad. But the thing is: There really are interesting pieces of argument in there! It's just that they don't all fit together. There are gaps in the arguments, and seeming inconsistencies, and different possibilities for the meaning of the jargon. Because these gaps, seeming inconsistencies, and possibilities might be variously resolved, there need be no one right interpretation of Kant. We can be Kant interpretation pluralists. Although there are clearly bad ways of reading Kant (e.g., as an unreconstructed Lockean), there might be no determinately best way, but rather a variety of attractive ways with competing costs and benefits.

Interpret the terms this way and fill in the gaps that way, and find a Kant who thinks that there's stuff out there independent of our minds that causes our sensory experiences. Interpret the terms this other way and fill in the gaps that other way, and find a Kant who regards such stuff as merely an invention of our minds. Yet another Kant holds that there might be such stuff, but we can't prove that there is. Call these Kant Model 1, Kant Model 2, and Kant Model 3. There will also be Kant Model 4, Kant Model 1a, Kant Model 5f, etc. Similarly across the range of Kantian issues.

But surely only one of these things is what Kant really thought? No, I wouldn't be sure of that at all! When our terms admit multiple interpretations, when our arguments are gappy and our dispositions unstable, the contents of both our occurrent thoughts and our dispositional opinions can be muddy. When I say, "the only really important thing is to be happy" or "all men are created equal", what exactly do I mean? There might be no exactness about it! (See my dispositional approach to attitudes.) This is as true of philosophers as of anyone else -- and, I would argue, as true of the mortal Kant as of any other philosopher.

But even if Kant did have absolutely specific private opinions on all the topics of his writings, it doesn't matter. The philosophy of Kant is not that. Maybe in the secret grotto of his soul he was an orthodox Thomist and he invented the critical philosophy only as a joke to amuse his manservant Martin Lampe. This would not render the Critique of Pure Reason a defense of Thomism. Kant's philosophy is embodied in the words he left behind, not in his private opinions about those words. And those words might not, very likely do not, determinately resolve into one single self-consistent philosophical system.

Historians of philosophy can and should fight about whether to treat Kant Model 2b, Kant Model 5f, or instead some other Kant, as the canonical Kant. But those of us who don't make Kant interpretation our profession should have some liberty to choose among the Kants, as best suits our philosophical purposes -- as long as we bear in mind that Kant Model 2b is no more the One Kant than Hamlet Interpretation 2b is the One Hamlet.

Monday, March 04, 2013

The Spatial Location of Inner Speech

Last night, my six-year-old daughter Kate told me she had a song "in her head". I asked her if it was really inside her head, and she said yes it was. I asked her how big it was. At first she said she didn't know, but when pressed she agreed that it was larger than a pea but smaller than a dog, and she spread her fingers a few centimeters apart.

Most of the people I've interviewed are willing to attribute a spatial location to their experience of inner speech and imagined tunes -- and that location is virtually always inside their heads, not in their tummies or their toes or out in the environment, unless it's a hallucination or a case in which they're not sure whether the origin is some subtle environmental sound. Why, I wonder, this uniformity of report?

You might say -- as my 13-year-old son Davy said later last night, when I interviewed him -- that it's experienced as in the head because its origin is in your brain, and your brain is in the head. But that argument can't work without some supplementation. Phantom-limb pain, for example, is experienced as spatially located outside the head, even if its origin is in the head (or in peripheral nerves closer to the center of the body). Visual experience is a product of the brain but not normally described as located in the head. Visual imagery, too, although often described as "in the head", is sometimes experienced as out in the environment. For example, I might imagine a demon crouching in the corner of my office as I now look into that very corner. Also -- somewhat surprisingly to me! -- when I interview people about their visual imagery experiences, about 25% describe their visual imagery as spatially located a few inches in front of the forehead. In contrast, I have never heard anyone describe their inner speech as transpiring a few inches in front of their forehead!

You might say that it's because the origin of our outwardly verbalized speech is our head, so we're used to locating our speech inside our heads. But that doesn't quite work either. When I speak, the spatial origin of the sound, it seems to me, is my mouth. Although that's part of my head, most people, when they locate their inner speech, locate it not in their mouths but in the interior of their cranium.

You might think that it makes sense that we would imagine music as transpiring in our cranium, since that's where it seems to be when we're listening with headphones. But that doesn't quite work either, I think, since people with limited exposure to headphones (like Kate), who hear most of their music from exterior sources, still report tunes as spatially interior. (I'd wager one finds this "inside the head" phenomenological positioning, too, if one looks at phenomenological reports in Anglophone culture pre-stereophonics, but I haven't done the search on that (yet).)

A more interesting possibility is this: Sometimes imagery is experienced as environmentally positioned -- like that demon in the corner of my office. We might imagine a representation like this: {demon with properties a,b,c; egocentric location x,y,z}. But most of the time we don't visually imagine things as environmentally located, so the representation is just {demon with properties a,b,c}. Without an environmental position explicitly represented, we might default to representation at our subjective center -- either actually experiencing it as there or erroneously thinking we experience it as there. And maybe our subjective center is inside our cranium. But even if so, the view has problems accounting for visual imagery reported as in front of the forehead and for reports of inner speech as moving around inside one's head (as some of Russ Hurlburt's interviewees report).
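Here's a toy formalization of that defaulting idea -- my own illustrative sketch, with invented field names, not anyone's actual theory of imagery:

    # Toy formalization of the defaulting suggestion above (field names and the
    # default rule are my own inventions for illustration). An imagery episode may
    # or may not carry an explicit egocentric location; when it doesn't, the report
    # defaults to the subjective center -- somewhere inside the cranium.

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class ImageryEpisode:
        content: str
        egocentric_location: Optional[Tuple[float, float, float]] = None  # x, y, z

    def reported_location(episode: ImageryEpisode) -> str:
        if episode.egocentric_location is not None:
            return f"out in the environment, at {episode.egocentric_location}"
        return "inside my head (no explicit location, so defaulted to the subjective center)"

    print(reported_location(ImageryEpisode("demon crouching in the corner", (2.0, 1.5, 0.0))))
    print(reported_location(ImageryEpisode("a song running along")))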

So I'm left still wondering....

Monday, February 25, 2013

An Objection to Some Accounts of Self-Knowledge of Attitudes

You believe some proposition P. You believe that Armadillo is the capital of Texas, say.[footnote 1] Someone asks you what you think the capital of Texas is. You say, "In my opinion, the capital of Texas is Armadillo." How do you know that that is what you believe?

Here's one account (e.g., in Nichols and Stich 2003): You have in your mind a dedicated "Monitoring Mechanism". The job of this Monitoring Mechanism is to scan the contents of your Belief Box, finding tokens such as P ("Armadillo is the capital of Texas") or Q ("There's an orangutan in the fridge"), and producing, in consequence, new beliefs of the form "I believe P" or "I believe Q". Similarly, it or a related mechanism can scan your Desire Box, producing new beliefs of the form "I desire R" or "I desire S". Call this the Dedicated Mechanism Account.
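A minimal toy rendering of this architecture may help fix ideas -- my own sketch, with invented names; nothing here is Nichols and Stich's actual implementation:

    # Toy illustration of the Dedicated Mechanism Account (my own sketch): a
    # dedicated monitor scans the Belief Box and Desire Box and deposits new
    # self-ascriptive beliefs ("I believe that P", "I desire that R") back into
    # the Belief Box.

    class Mind:
        def __init__(self):
            self.belief_box = {"Armadillo is the capital of Texas",
                               "There's an orangutan in the fridge"}
            self.desire_box = {"there is sushi for dinner"}

        def monitoring_mechanism(self):
            """Scan the boxes; produce beliefs of the form 'I believe P' / 'I desire R'."""
            new_beliefs = {f"I believe that {p}" for p in self.belief_box}
            new_beliefs |= {f"I desire that {r}" for r in self.desire_box}
            self.belief_box |= new_beliefs  # the self-ascriptions are themselves stored beliefs

    mind = Mind()
    mind.monitoring_mechanism()
    for belief in sorted(mind.belief_box):
        print(belief)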

One alternative account is Peter Carruthers's. Carruthers argues that there is no such dedicated mechanism. Instead, we theorize on the basis of sensory evidence and our own imagery. For example, I hear myself saying -- either aloud or in inner speech (which is a form of imagery) -- "The capital of Texas is Armadillo", and I think something like, "Well, I wouldn't say that unless I thought it was true!", and so I conclude that I believe that Armadillo is the capital. This theoretical, interpretative reasoning about myself is usually nonconscious, but in the bowels of my cognitive architecture, that's what I'm doing. And there's no more direct route to self-knowledge, according to Carruthers. We have to interpret ourselves given the evidence of our behavior, our environmental context, and our stream of imagery and inner speech.

Here's an argument against both accounts.

First, assume that to believe that P is to have a representation with the content P stored in a Belief Box (or memory store), i.e., ready to be accessed for theoretical inference and practical decision making. (I'm not keen on Belief Boxes myself, but I'll get to that later.) A typical deployment of P might be as follows: When Bluebeard says to me, "I'm heading off to the capital of Texas!", I call up P from my Belief Box and conclude that Bluebeard is heading off to Armadillo. I might similarly ascribe a belief to Bluebeard on that basis. Unless I have reason to think Bluebeard ignorant about the capital of Texas or (by my lights) mistaken about it, I can reasonably conclude that Bluebeard believes that he is heading to Armadillo. All parties agree that I need not introspect to attribute this belief to Bluebeard, nor call upon any specially dedicated self-scanning mechanism (other than whatever allows ordinary memory retrieval), nor interpret my own behavior and imagery. I can just pull up P to join it with other beliefs, and conclude that Q. Nothing special here or self-interpretive. Just ordinary cognition.

Now suppose the conclusion of interest -- the "Q" in this case -- is just "I believe that P". What other beliefs does P need to be hooked up with to license this conclusion? None, it seems! I can go straightaway, in normal cases, from pulling up P to the conclusion "I believe that P". If that's how it works, no dedicated self-scanning mechanism or self-interpretation is required, but only ordinary belief-retrieval for cognition, contra both Carruthers's view and Dedicated Mechanism Accounts.

That will have seemed a bit fast, perhaps. So let's consider some comparison cases. Suppose Sally is the school registrar. I assume she has true beliefs about the main events on the academic calendar. I believe that final exams end on June 8. If someone asks me when Sally believes final exams will end, I can call up P1 ("exams end June 8") and P2 ("Sally has true beliefs about the main events on the academic calendar") to conclude Q ("Sally believes exams end June 8"). Self-ascription would be like that, but without P2 required. Or suppose I believe in divine omniscience. From P1 plus divine omniscience, I can conclude that God believes P1. Or suppose that I've heard that there's this guy, Eric Schwitzgebel, who believes all the same things I believe about politics. If P1 concerns politics, I can conclude from P1 and this knowledge about Eric Schwitzgebel that this Eric Schwitzgebel guy believes P1. Later I might find out that Eric Schwitzgebel is me.

Do I need to self-ascribe the belief that P1 before reaching that conclusion about the Eric Schwitzgebel guy? I don't see why I must. I know that moving from "P1 is true and concerns politics" to "that Eric Schwitzgebel guy believes P1" will get me true conclusions. I can rely on it. It might be cognitively efficient for me to develop a habit of thought by which I leap straight from one to the other.

Alternatively: Everyone thinks that I can at least sometimes ascribe myself beliefs as a result of inference. I subscribe to a general theory, say, on which if P1 and P2 are true of Person S and if P3 and P4 are true in general about the world, then I can conclude that S believes Q. Now suppose S is me. And suppose Q is "I believe P" and suppose P3 is P. And then jettison the rest of P1, P2, and P4. Voila![footnote 2]

If there is a Desire Box, it might work much the same way. If I can call up the desire R to join with some other beliefs and desires to form a plan, in just the ordinary cognitive way that desires are called up, so also it seems I should be able to do for the purposes of self-ascription. It would be odd if we could call up beliefs and desires for all the wide variety of cognitive purposes that we ordinarily call them up for but not for the purposes of self-ascriptive judgment. What would explain that strange incapacity?

What if there isn't a Belief Box, a Desire Box, or a representational storage bin? The idea remains basically the same: Whatever mechanisms allow me to reach conclusions and act based on my beliefs and desires should also allow me to reach conclusions about my beliefs and desires -- at least once I am cognitively sophisticated enough to have adult-strength concepts of belief and desire.

This doesn't mean I never go wrong and don't self-interpret at all. We are inconsistent and unstable in our belief- and desire-involving behavioral patterns; the opinions we tend to act on in some circumstances (e.g., when self-ascription or verbal avowal is our task) might very often differ from those we tend to act on in other circumstances; and it's a convenient shorthand -- too convenient, sometimes -- to assume that what we say, when we're not just singing to ourselves and not intending to lie, reflects our opinions. Nor does it imply that there aren't also dedicated mechanisms of a certain sort. My own view of self-knowledge is, in fact, pluralist. But among the many paths, I think, is the path above.

(Fans of Alex Byrne's approach to self-knowledge will notice substantial similarities between the above and his views, to which I owe a considerable debt.)

Update, February 27

Peter Carruthers replies as follows:

Eric says: “I can just pull up P to join it with other beliefs, and conclude that Q. Nothing special here or self-interpretive. Just ordinary cognition.” This embodies a false assumption (albeit one that is widely shared among philosophers; and note that essentially the same response to that below can be made to Alex Byrne). This is that there is a central propositional workspace of the mind where beliefs and desires can be activated and interact with one another directly in unconstrained ways to issue in new beliefs or decisions. In fact there is no such amodal workspace. The only central workspace that the mind contains is the working memory system, which has been heavily studied by psychologists for the last half-century. The emerging consensus from this work (especially over the last 15 years or so) is that working memory is sensory based. It depends upon attention directed toward mid-level sensory areas of the brain, resulting in globally broadcast sensory representations in visual or motor imagery, inner speech, and so on. While these representations can have conceptual information bound into them, it is impossible for such information to enter the central workspace alone, not integrated into a sensory-based representation of some sort.

Unless P is an episodic memory, then (which is likely to have a significant sensory component), or unless it is a semantic memory stored, at least in part, in sensory format (e.g. a visual image of a map of Texas), then the only way for P to “join with other beliefs, and conclude that Q” is for it to be converted into (say) an episode of inner speech, which will then require interpretation.

This is not to deny that some systems in the mind can access beliefs and draw inferences without those beliefs needing to be activated in the global workspace (that is, in working memory). In particular, goal states can initiate searches for information to enable the construction of plans in an “automatic”, unconscious manner. But this doesn’t mean that the mindreading system can do the same. Indeed, a second error made by Eric in his post is a failure to note that the mindreading system bifurcates into two (or more) distinct components: a domain-specific system that attributes mental states to others (and to oneself), and a set of domain-general planning systems that can be used to simulate the reasoning of another in order to generate predictions about that person’s other beliefs or likely behavior. On this Nichols & Stich and I agree, and it provides the former the wherewithal to reply to Eric’s critique also. For the “pulling up of beliefs” to draw inferences about another’s beliefs takes place (unconsciously) in the planning systems, and isn’t directly available to the domain-specific system responsible for attributing beliefs to others or to oneself.

Peter says: "Unless P is an episodic memory... or unless it is a semantic memory stored, at least in part, in sensory format (e.g. a visual image of a map of Texas), then the only way for P to “join with other beliefs, and conclude that Q” is for it to be converted into (say) an episode of inner speech, which will then require interpretation." I don't accept that theory of how the mind works, but even if I did accept that theory, it seems now like Peter is allowing that if P is a "semantic memory" stored in partly "sensory format" it can join with other beliefs to drive the conclusion Q without an intermediate self-interpretative episode. Or am I misunderstanding the import of his sentence? If I'm not misunderstanding, then hasn't just given me all I need for this step of my argument? Let's imagine that "Armadillo is the capital of Texas" is stored in partly sensory format (as a visual map of Texas with the word "Armadillo" and a star). Now Peter seems to be allowing that it can drive inferences without requiring an intermediate act of self-interpretation. So then why not allow it to drive also the conclusion that I believe that Armadillo is the capital? We're back to the main question of this post, right?

Peter continues: "This is not to deny that some systems in the mind can access beliefs and draw inferences without those beliefs needing to be activated in the global workspace (that is, in working memory). In particular, goal states can initiate searches for information to enable the construction of plans in an “automatic”, unconscious manner. But this doesn’t mean that the mindreading system can do the same." First, let me note that I agree that the fact that some systems can access stored representations without activating those representations in the global workspace doesn't strictly imply that the mindreading system (if there is a dedicated system, which is part of the issue in dispute) can also do so. But I do think that if, for a broad range of purposes, we can access these stored beliefs, it would be odd if we couldn't do so for the purpose of reaching conclusions about our own minds. We'd then need a pretty good theory of why we have this special disability with respect to mindreading. I don't think Peter really offers us as much as we should want to explain this disability.

... which brings me to my second reaction to this quote. What Peter seems to be presenting as a secondary feature of the mind -- "the construction of plans in an 'automatic', unconscious manner" -- is, in my view, the very heart of mentality. For example, to create inner speech itself, we need to bring together a huge variety of knowledge and skills about language, about the social environment, and about the topic of discourse. The motor plan or speech plan constructed in this way cannot mostly be driven by considerations that are pulled explicitly into the narrow theater of the "global workspace" (which is widely held to host only a small amount of material at a time, consciously experienced). Our most sophisticated cognition tends to be what happens before things hit the global workspace, or even entirely independent of it. If Peter allows, as I think he must, that that pre-workspace cognition can access beliefs like P, what then remains to be shown to complete my argument is just that these highly sophisticated P-accessing processes can drive the judgment or the representation or the conclusion that I believe that P, just as they can drive many other judgments, representations, or conclusions. Again, I think the burden of proof should be squarely on Peter to show why this wouldn't be possible.

Update, February 28

Peter responds:

Eric writes: “it seems now like Peter is allowing that if P is a "semantic memory" stored in partly "sensory format" it can join with other beliefs to drive the conclusion Q without an intermediate self-interpretative episode.”

I allow that the content of sensory-based memory can enter working memory, and so can join with other beliefs to drive a conclusion. But that the content in question is the content of a memory rather than a fantasy or supposition requires interpretation. There is nothing about the content of an image as such that identifies it as a memory, and memory images don’t come with tags attached signifying that they are memories. (There is a pretty large body of empirical work supporting this claim, I should say. It isn’t just an implication of the ISA theory.)

Eric writes: “But I do think that if, for a broad range of purposes, we can access these stored beliefs, it would be odd if we couldn't do so for the purpose of reaching conclusions about our own minds. We'd then need a pretty good theory of why we have this special disability with respect to mindreading.”

Well, I and others (especially Nichols & Stich in their mindreading book) had provided that theory. The separation between thought-attribution and behavioral prediction is now widely accepted in the literature, with the latter utilizing the subject’s own planning systems, which can in turn access the subject’s beliefs. There is also an increasing body of work suggesting that on-line, unreflective, forms of mental-state attribution are encapsulated from background beliefs. (I make this point at various places in The Opacity of Mind. But more recently, see Ian Apperly’s book Mindreaders, and my own “Mindreading in Infancy”, shortly to appear in Mind & Language.) The claim also makes good theoretical sense seen in evolutionary and functional terms, if the mindreading system evolved to track the mental states of others and generate predictions therefrom. From this perspective one might predict that thought-attribution could access a domain-specific database of acquired information (e.g. “person files” containing previously acquired information about the mental states of others), without being able to conduct free-wheeling searches of memory more generally.

Eric writes: “these highly sophisticated P-accessing processes can drive the judgment or the representation or the conclusion that I believe that P, just as they can drive many other judgments, representations, or conclusions. Again, I think the burden of proof should be squarely on Peter to show why this wouldn't be possible.”

First, I concede that it is possible. I merely claim that it isn’t actual. As for the evidence that supports such a claim, there are multiple strands. The most important is evidence that people confabulate about their beliefs and other mental states in just the sorts of circumstances that the ISA theory predicts that they would. (Big chunks of The Opacity of Mind are devoted to substantiating this claim.) Now, Eric can claim that he, too, can allow for confabulation, since he holds a pluralist account of self-knowledge. But this theory is too underspecified to be capable of explaining the data. Saying “sometimes we have direct access to our beliefs and sometimes we self-interpret” issues in no predictions about when we will self-interpret. In contrast, other mixed-method theorists such as Nichols & Stich and Alvin Goldman have attempted to specify when one or another method will be employed. But none of these accounts is consistent with the totality of the evidence. The only theory currently on the market that does explain the data is the ISA theory. And this entails that the only access that we have to our own beliefs is sensory-based and interpretive.

I agree that people can certainly make "source monitoring" and related errors in which genuine memories of external events are confused with merely imagined events. But it sounds to me like Peter is saying that a stored belief, in order to fulfill its function as a memory rather than a fantasy or supposition, must be "interpreted" -- and, given his earlier remarks, presumably interpreted in a way that requires activation of that content in the "global workspace". (Otherwise, his main argument doesn't seem to go through.) I feel like I must be missing something. I don't see how spontaneous, skillful action that draws together many influences -- for example, in conversational wit -- could realistically be construed as working this way. Lots of pieces of background knowledge flow together in guiding such responsiveness; they can't all be mediated in the "global workspace", which is normally thought to have a very limited capacity. (See also Terry Horgan's recent work on jokes.)

Whether we are looking at visual judgments, memory judgments, social judgments about other people, or judgments about ourselves, the general rule seems to be that the sources are manifold and the mechanisms complex. "P, therefore I believe that P" is far too simple to be the whole story; but so also I think is any single-mechanism story, including Peter's.

I guess Peter and I will have a chance to hammer this out a bit more in person during our SSPP session tomorrow!

_______________________________________________

[note 1]: Usually philosophers believe that it's raining. Failing that, they believe that snow is white. I just wanted a change, okay?

[note 2]: Is it really "inference" if the solidity of the conclusion doesn't require the solidity of the premises? I don't see why that should be an essential feature of inferences. But if you instead want to call it (following Byrne) just an "epistemic rule" that you follow, that's okay by me.