Wednesday, June 13, 2007
Introspection and Expression
I've been working (for years, I'm afraid) on an essay called The Unreliability of Naive Introspection. A common reaction to the essay's title is this: What, we don't know what we believe and how we're feeling? That's nuts!
I might be nuts, but I'm not that nuts. I do think that we're fairly good judges of what we think and how we feel. I can still hold the view that our introspective judgments are generally unreliable because I don't think such judgments are grounded in introspection. Instead, I'd call them expressive.
Here's the idea. When someone asks you "What do you think about X?" you don't cast your eye (metaphorically) inward. You don't attend to your experience or think about your mind. Instead, you express what's on your mind. You reflect on X, perhaps, and allow yourself to render aloud your judgment on the matter. This is a very different process from thinking about what your visual experience is right now (e.g., whether it's fuzzy 15 degrees from the center of fixation) or from trying to decide whether your present thought is happening in inner speech and, if so, whether that inner speech involves auditory imagery, motor imagery, and/or something else. In the latter case, you are attempting to discern something about your ongoing stream of experience. In the former, you're not. My beef is only with the latter sort of judgment.
Wittgenstein famously characterized sentences like "I'm in pain" or "that hurts" as just a complex way of saying "ow!" or grimacing -- in other words, as an expression in the strict sense in which a facial expression is an expression -- a more or less spontaneous manifestation of one's mental state. But even pain we can reflect on introspectively. If the doctor asks exactly what the pain in my finger is like, I can attend to my experience and say "well, it's kind of a dull throbbing in the middle of the knuckle". The difference between introspective judgment and expressive self-ascription is the difference between such reflective descriptions and a spontaneous "that hurts!"
But maybe it's not fair to compare the accuracy of a very general self-ascription ("that hurts") with a rather specific introspection ("shooting pain from here to here"). In the case of pain, I suspect, very general introspections ("there is pain") will tend to be fairly accurate.
However, self-ascriptive expressions of belief, unlike pain, can be pretty specific: "I think that fly will be landing on the ice cream shortly" -- similarly with desire, intention, and many other propositional attitudes (for a definition of "propositional attitudes", see the second paragraph here). I doubt that I am similarly accurate in my self-reflective introspections about what exactly my stream of experience is like as I think about that fly.
Emotion commonly lends itself both to spontaneous self-ascription and to reflective introspection. When someone says "I'm depressed" or "I'm angry", it's often hard to know how much this is expression vs. introspection. But in adding detail, people tend either to go expressive, treating the emotion as a propositional attitude ("I'm angry that such-and-such"), or to go more strictly introspective ("I'm experiencing my anger as a certain kind of tenseness in the middle of my chest"). It's only the last sort of judgment that I would argue to be unreliable.
Posted by Eric Schwitzgebel at 8:05 AM 19 comments
Labels: introspection, self-knowledge
Monday, June 11, 2007
Should Ethicists Behave Better? Should Epistemologists Think More Rationally?
Thursday, I'll be presenting some of my work on the moral behavior of ethics professors at the meeting of the Society for Philosophy and Psychology. In his comments, Jonathan Weinberg tells me he'll ask this: Why should we think ethicists will be morally better behaved, any more than we would think epistemologists would be better thinkers (or have more knowledge, or better justified beliefs)?
My argument that ethicists will behave better is this:
(1.) Philosophical ethics improves (or selects for) moral reasoning.
(2.) Improved (or professional habits of) moral reasoning tends to lead either to (a.) better moral knowledge, or (at least) (b.) more frequent moral reflection.
(3.) (a) and (b) tend to cause better moral behavior.
Therefore, ethicists will behave better than non-ethicists.
The problem, as I see it, is that ethicists don't behave better. So we need to jettison (1), (2), or (3). But the premises are all empirically plausible, unless one has a cynical view of moral reasoning; and I myself don't find a cynical view very attractive.
But maybe there's a flaw in the argument that can be revealed by the comparison to epistemologists. Consider the parallel:
(1'.) Philosophical epistemology improves (or selects for) rationality.
(2'.) Improved (or professional habits of) rationality tends to lead to more knowledge and better justified beliefs.
Therefore, epistemologists will be more rational and have more knowledge and better justified beliefs than non-epistemologists.
The argument is shorter, since no behavioral predictions are involved. (We could generate some -- e.g., they will act in ways that better satisfy their goals? -- but then the conclusion would be even more of a reach.)
Why does it seem reasonable -- to me, and to many undergraduates -- to think ethicists would behave better, while we're not so sure about the additional rationality of epistemologists? (I do think undergraduates tend to expect more from ethicists. Though it seems strange to me now, I recall being disappointed as a sophomore when I discovered that my philosophy professor didn't live a life of sagelike austerity!)
Here's my thought, then: Ethics (except maybe metaethics) is more directly practical than epistemology. We wouldn't often expect to profit from considering the nature of knowledge or of justification, or the other sorts of things epistemologists tend to worry about, in forming our opinions about everyday matters. On the other hand, it does seem -- barring cynical views! -- that reflection on honesty, justice, maximizing happiness, acting on universalizable maxims, and the kinds of things ethicists tend to worry about should improve our everyday moral decisions.
Furthermore, when epistemology is directly practical, I would expect epistemologists to think more rationally. For example, I'd expect experts on Bayesian decision theory to do a better job of maximizing their money in situations that can helpfully be modeled as gambling scenarios. I'd expect experts on fallacies in human reasoning to be better than others in seeing quickly through bad arguments on talk shows, if the errors are subtle enough to slip by many of us yet fall into patterns that someone attuned to fallacies will have labels for.
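The gambling case can be made concrete with a toy expected-value calculation (my illustration, not from the post; the gambles and payoffs are invented for the example). A decision theorist confronted with a choice among gambles would compute each gamble's expected payoff and take the maximum:

```python
def expected_value(gamble):
    """Expected payoff of a gamble given as (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in gamble)

# Hypothetical gambles, for illustration only.
# Gamble A: 50% chance of $10, otherwise nothing.
gamble_a = [(0.5, 10.0), (0.5, 0.0)]
# Gamble B: 10% chance of $40, otherwise nothing.
gamble_b = [(0.1, 40.0), (0.9, 0.0)]

best = max([gamble_a, gamble_b], key=expected_value)

print(expected_value(gamble_a))  # 5.0
print(expected_value(gamble_b))  # 4.0
# B's big prize is tempting, but A has the higher expected payoff.
```

Someone fluent in this style of reasoning should, over many such choices, end up with more money than someone swayed by the salience of the big prize; that's the modest, directly practical sense in which expertise would pay off.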
I remain perplexed. I continue to believe that those of us who value moral reasoning should be troubled by the apparent failure of professional ethicists to behave any better than those of similar socio-economic background.
Posted by Eric Schwitzgebel at 10:46 AM 11 comments
Labels: ethics professors
Friday, June 08, 2007
Why Are People So Confident About Their Stream of Experience?
One theme in my work is this: I don't think people are generally accurate in their reports about their stream of experience, even their concurrently ongoing conscious experience. But if people are so wrong about their phenomenology -- their imagery, their dreams, their inner speech, their visual experience, their cognitive experience -- why are they nonetheless so confident?
My suspicion is this. When we're asked questions about our "inner lives" ("a penny for your thoughts") or when we report on our dreams, our imagery, etc., we almost never get corrective feedback. On the contrary, we get an interested audience who assumes that what we're saying is true. No one ever scolds us for getting it wrong about our experience. This makes us cavalier and encourages a hypertrophy of confidence. Who doesn't enjoy being the sole expert in the room whose word has unchallengeable weight? In such situations, we take up the mantle of authority, exude a blustery confidence -- and feel that confidence sincerely, until we imagine possibly being shown wrong by another authority or by the unfolding of future events. (Professors may be especially liable to this.) About our own stream of experience, however, there appears to be no such humbling danger.
Suppose you're an ordinary undergraduate, and your job is to tutor a low-performing high school student. You are given some difficult poetry to interpret, and the student nods his head and passively receives your interpretation, whatever it happens to be. Then you do it again, the next week, with a different poem. Then again, then again. Pretty soon -- though you'll have received no significant feedback and probably not have improved much in your skills at poetry interpretation -- I'll wager you'll start to feel pretty good about your skills as an interpreter of poetry. You've said some things; they seemed plausible to you; the audience was receptive; no one slapped you down; you run no risk of being slapped down in the future. Your confidence will grow. (So I conjecture. I don't know of any psychological experiments directly on this sort of thing. Although the eyewitness testimony literature shows people's confidence will increase as they repeat the same testimony over and over, that's not quite the same phenomenon.)
Here's another case: Those of us who referee journal articles don't really receive any serious feedback about the quality of our referee reports -- just appreciative remarks from the editors and occasionally (not often, in my experience) very polite letters from the authors explaining how a new revision addresses all our "very useful" criticisms. Yet I'd wager that our confidence in the quality of our referee reports goes up over time; and I'd also wager that the quality of the reports themselves does not go up. Rather, whatever gains we might have in our actual refereeing skills are counterbalanced, or more than counterbalanced, by an increasingly rushed and cavalier attitude toward refereeing as our experience and status increase.
That feeling of being taken seriously, and of saying things that seem plausible to you, without any actual feedback about the quality of your performance -- that is, I think, essentially the situation people are in when reporting on their stream of conscious experience (at least until they meet me!). If I'm right that those are excellent conditions for confidence inflation, that might partly explain our feeling of infallibility.
(I had a nice chat about this yesterday with UCR psychologist Steven Clark.)
Posted by Eric Schwitzgebel at 12:26 PM 9 comments
Labels: introspection, stream of experience
Wednesday, June 06, 2007
Remembering from the Third-Person Perspective?
A few days ago, I heard a National Public Radio interview on the topic of autobiographical memory. One thing the interviewee said stuck in my mind: People who remember past events in the "third person" (i.e., as though viewing themselves from the outside) differ from those who tend to remember past events in the "first person" (i.e., as though looking at them through their own eyes again). Among other things, this researcher claimed that third-person memory was better associated with accepting one's past mistakes and growing in response to them.
Several things in those remarks set off my skeptical alarms, but let me focus on one: Do people really remember events in the third or first person? I have no doubt that if you ask people to say whether a memory was first- or third-person, they'll be kind enough to give you a confident-seeming answer. But do autobiographical memories of particular past episodes have to have a visual perspective of this sort?
Some behaviorally quite normal people claim never to experience visual imagery. Let's suppose they're right about this. Of course they nonetheless have autobiographical episodic memories. How would such memories have a first- or third-person perspective, if there's no visual imagery involved? Would they have a first- or third-person auditory perspective? (Well sure, why not? But is this what the researchers have in mind?)
Maybe memories can be episodic and not visual at all; or visual yet not perspectival. The great writer Jorge Luis Borges and the eminent 19th-century psychologist Francis Galton describe cases of visual imagery from visually impossible circular or all-embracing perspectives or non-perspectives (e.g., the front and back of a coin visualized simultaneously).
In the 1950s people said they dreamed in black and white. Now they say they dream in color. People seem to assimilate their dreams to movies -- so much so that they erroneously attribute incidental features of movies, like black and whiteness (and maybe also like coloration) to their dreams. Similarly, it seems that people in cultural groups that analogize waking visual experience to flat media like pictures and paintings are more likely to attribute some sort of flatness to their visual experience than those who use other sorts of analogies for visual experience.
So I wonder: Do we imagine that we're remembering things "from a third-person perspective" in part because we assimilate autobiographical memory to television and movie narratives? Maybe, because of our immersion in film media, we (now) really do remember our past lives as though we were the protagonist of a movie? Or maybe we don't really tend to do that, but rather report our autobiographical memories as being like that (when pressed by a psychologist or by someone else or even just by ourselves) because the analogy between movies and memorial flashbacks is so tempting?
Would people in cultures without movies have comparably high rates of reporting autobiographical memory as though from a third-person perspective? Probably this has never been studied....
Posted by Eric Schwitzgebel at 9:07 AM 62 comments
Labels: stream of experience
Monday, June 04, 2007
Can we Have Moral Standards without Moral Beliefs? (by guest blogger Justin Tiwald)
Let's say you have a student in your introductory philosophy class who claims he doesn't "have a morality" (there's always someone!). He explains his claim in various ways. Most often he says he doesn't have a morality because his every decision is based on egoistic calculations; other times it's simply because he does as he pleases. But whatever the explanation, it's clear that he takes some pride in it: other people live by moral standards, but he has risen above that.
I think my response is similar to that of just about everyone else in this situation: I don't believe it. What gets me out of sorts isn't the thought that he's a moral monster (he usually isn't), it's that he really does have a morality but won't admit it. How does one convince him that he has a morality in spite of himself?
"Having a morality" can mean many things, but what the class amoralist seems to have in mind is this: you have a morality when you hold yourself to moral standards as such. At minimum, you believe that living according to a standard is morally good, and this belief enters as a non-instrumental reason to adhere to it. These moral reasons needn't be decisive, and they don't always need to motivate you to do the right thing (you can have a morality even if you fail to live up to it). But your belief in the standard's moral goodness is essential, and this is where the self-proclaimed amoralist thinks he parts ways with the moralist. While the amoralist has standards that he holds himself to, it's obvious to him that they're not moral ones. He doesn't ultimately care whether his behavior is right, considerate, charitable, fair, respectful, etc. He only cares whether it will get him richer, make him more loved, or allow him to have more fun.
Put this way, so much of the amoralist's smugness depends on his not believing his standards to be moral ones. But does this really matter? In my view it matters much more that he treat his standards as moral than that he believe them to be moral. The characteristic ways of treating standards as moral include taking seemingly moral pride in meeting them, and feeling seemingly moral guilt or shame for falling short of them. It also includes behaving as though the standards are imposed from the outside. Subjectively speaking, the standards aren't "up to us," nor are they fixed by our wants and needs. However we understand our own relationship to these norms, we invariably think and behave as though we're stuck with them, even when we'd prefer others.
I tend to think that most psychologically healthy human beings cannot but have standards that they treat in these ways (with all of the usual caveats for sociopaths and victims of bizarre head injuries). Generally speaking we're stuck with our consciences, and our consciences will treat various standards as moral ones whether we like it or not. Sometimes they'll hold us to moral norms that we do not consciously uphold, as when someone explicitly disavows charity but feels guilty for leaving her brother homeless. But our consciences will even treat many of our non-moral norms as though they were moral norms. Many people pursue wealth with a moral zeal, and if the amoralist is serious about his amoralism he'll invariably take a kind of torturous, guilt-ridden "moral" pride in having risen above moralism (call this "Raskolnikov Syndrome"). Whatever the amoralist may believe, then, it would be far-fetched to say that he "has no morality" at all.
------
I'd like to thank Eric for letting me borrow his soapbox these last few weeks. I've truly benefited from the comments and emails that I've gotten in response. Having seen this from the other side, I can say with even more certainty that he has a great thing going here!
Posted by Eric Schwitzgebel at 11:17 AM 11 comments
Labels: justin tiwald, moral psychology
Friday, June 01, 2007
The Clarity, or Not, of Visual Experience
Most people (not everyone!) will say there is some experiential difference between the center of their visual field and the periphery. The center is clear, precise, sharply detailed -- something like that -- and the periphery is hazy, imprecise, lacking detail.
If you agree with this (and if you don't, I'd be interested to hear), I want you to think about the following question: How large is that center of clarity? If you're comfortable with degrees of arc, you might think of it in those terms. Otherwise, think about, say, how much of your desktop you can see in precise detail in a single moment. Consider also how stable the region of clarity is -- approximately how much shifting there is of things from the clear center to the unclear periphery and vice versa. Is it a constant flux, say, or pretty stable over stretches of several seconds?
Humor me, if you will, and formulate in your mind an explicit answer to these questions before reading on.
Dan Dennett suggests the following experiment. Randomly take a card from a deck of playing cards and hold it at arm's length off to one side, just beyond your field of view. Holding your gaze fixed on a single point in front of you, slowly rotate the card toward the center of your field of view (keeping it at arm's length). How close to the center do you have to bring the card before you can determine its suit, its color, its value?
Most people are surprised at the results of this little experiment (so Dennett reports, and so I've found, too). You have to bring it really close! Go try it! If a playing card isn't handy, try a book cover with a picture on it. I've also posted a playing card here, if that might help (image from here).
In doing this exercise, you're doing something pretty unusual (unless you've been a subject in a lot of vision science experiments!) -- you've been attending to, or thinking about, your experience of parts of your visual field not quite at the center of fixation. It's a little tricky, but you can try doing this as your eyes move around more naturally. For example, you might decide to attend to your visual experience of one particular object (maybe the top left part of the banner at the top of your screen, or the Jack to the right), allowing your eyes to move around so that you're looking all around it but never directly at it. How well do you see it?
So here's the question: Has your opinion about your visual experience changed as a result of this little exercise? And if so, how?
I have a little bit of a wager, you might say, with a colleague of mine about this.
Posted by Eric Schwitzgebel at 5:09 PM 11 comments
Wednesday, May 30, 2007
Appearances, Beliefs, and the Moral Emotions (by guest blogger Justin Tiwald)
Part of what it means to have an emotion is to regard an object in a certain way. To fear an activity, for example, is to regard it as a threat to your interests. To be indignant about someone's behavior is to regard her as committing an injustice. But this usage of "regard" is ambiguous. If I regard something as dangerous, it might mean that I believe it to be dangerous. But it could also mean that it just seems dangerous to me. Often what seems to be the case is also what we believe to be the case, but sometimes these two things come apart. A popular example is the perceptual illusion created when the moon is low on the horizon. It seems bigger than usual, but most of us don't believe it's bigger.
These days most philosophers working on the emotions prefer the believing version of regarding. One thing going against it, though, is that we have emotional responses that don't match up with our beliefs. A good example (which I steal shamelessly from Michael Stocker) is the fear of flying: we don't really believe flying is dangerous. In fact most of us know that it's safer than driving. But we fear it all the same, and that's probably because it seems dangerous to us, despite our acceptance of the fact that it's safe.
Now let me apply this to an issue in historical moral psychology. I spend a lot of time reading the Neo-Confucian philosophers, who wholeheartedly embrace an account of the emotions as constituted by thoughts and judgments. For a long time I (like most scholars in my line of work) assumed they were cognitivists in the more familiar "believing" sense. Recently I've come to realize that they also make room for cognitivism in the "seeming" sense. In fact, the purpose of moral education as they understood it was to make us more reliant on emotional appearances (seemings) than on emotional beliefs. The beliefs just "second" the emotional appearances. Here's why.
When we think about harmless perceptual illusions like the appearance of the moon on the horizon, it's evident that beliefs tend to be more reliable than appearances. But in matters of moral significance the situation is often reversed. Moral beliefs tend to be more susceptible to rationalization and self-deception than moral appearances. Admittedly moral appearances also get things wrong -- visceral disgust often plays a crucial role in the moral condemnation of entire classes of people (think of the initial disgust elicited by foreign eating habits or different sexual practices). But while both beliefs and appearances are unreliable, one of these problems is more intractable than the other. It doesn't take much exposure to overcome our visceral disgust at unfamiliar things. But the tendency to rationalize self-serving ends is a permanent feature of the human condition. When given the chance, successful revolutionaries usually turn into unapologetic dictators.
On my reading the Neo-Confucians thought that emotions were constituted by both appearances and beliefs. But unlike many moral sense theorists they thought we were better off relying on the former. The latter will never go away, but they can be shut out by acting on our more spontaneous feelings (unlike most classical Greek and Chinese virtue ethicists, the Neo-Confucians were more attracted to accounts of moral selves as permanently divided between their good and bad parts). I think there is some truth to this, even if I'm not willing to give up entirely on belief-based emotional responses.
Posted by Eric Schwitzgebel at 2:12 PM 13 comments
Labels: chinese philosophy, justin tiwald, moral psychology
Monday, May 28, 2007
Happy Memorial Day...
My wife is trying to convince me to take holidays off. Imagine that!
Posted by Eric Schwitzgebel at 9:07 AM 0 comments
Friday, May 25, 2007
On Old Friends Looking Young
What do I see when I look at a friend's face?
When I was a graduate student at U.C. Berkeley in the 1990s, two friends (call them Jack and Rob) came up unexpectedly for a visit. But only one, Jack, showed up at my door. He invited me out for coffee, and we immediately walked downtown. As always, a number of people were loitering around outside the Berkeley subway station. Jack pointed at one and said, "Doesn't that guy look a bit like Rob to you?" I didn't think so: That guy looked shabby, fat, and old. Rob, of course, was none of these things. The catch, of course, was that it was Rob. (Those pranksters!)
As I recall it now, it seemed to me that a few seconds later, when suddenly I recognized Rob for who he was, his appearance literally changed before my eyes. He actually looked thinner, neater, younger, more handsome. (Philosophers of language may be reminded of John Perry's famous example of seeing himself in the mirror of a bus and wondering about that shabby pedagogue.)
When I see a friend for the first time in ten years, it's rare for that friend not to look disappointingly old, but the friends I see every day seem to retain their youthful faces. It's an unsurprising psychological fact that we like people we find handsome and find handsome the people we like. But is this just having a positive attitude about a face we see in all its flabby wrinkles every day? Or do we actually smooth over those wrinkles, as it were, in our visual experience? We don't need to see the details of our friends' faces much, I suppose -- at least those details irrelevant to emotion and expression -- so maybe we just "fill in", as it were, with something smooth and regular, or something idealized?
Though Dennett might disagree, it seems to me there's a difference between simply ignoring warts, moles, and wrinkles that are part of our visual experience of our friends' faces and not visually experiencing those warts, moles, and wrinkles at all. Do I have an accurate picture of my friend before my mind's eye, as it were, or is the picture touched up? Though I can't put much weight on a distant memory of a subjective experience, my encounter with Rob inclines me toward the latter. In that moment of recognition, my visual experience was transformed.
Update, 9:26 PM. This picture, grabbed off The Situationist, seems pertinent!
Posted by Eric Schwitzgebel at 5:44 PM 2 comments
Labels: sense experience
Wednesday, May 23, 2007
Finding the Blame in Survivor Guilt (by guest blogger Justin Tiwald)
On many standard accounts of the moral feelings, guilt is distinguished from other kinds of self-inflicted anguish by a belief in one's own culpability. I feel guilty when I believe I played a self-preventable causal role in bringing about a wrong state of affairs. Without this belief, my feeling might be better described as regret, shame, or embarrassment.
The phenomenon of survivor guilt throws something of a monkey wrench into this view. Survivors of fatal plane crashes often report feeling guilty about their survival, especially when it seems like a matter of luck that they lived and others didn't. Many survivors admit that they couldn't have done anything to prevent the crash, but they describe their feelings as guilt all the same. Are they right to do so?
One response is to say that such survivors are mistaken, or that the guilt they feel is of a different kind than the thief's or the murderer's. Another response is to attribute their guilt to a hidden or unconscious belief in their own culpability. I'm not satisfied with either of these answers. The lesson I take from survivor guilt is that we should characterize the cognitive elements of guilt in a broader and more textured way. What makes self-imposed anguish an instance of guilt isn't the belief that one is culpably wrong, but rather the entire family of reactions associated with that belief. And here's the key move: we can have these characteristic reactions to the belief without the belief itself.
When we think ourselves responsible for something we judge to be wrong, we typically respond in a number of ways. If I steal the laptop computer that I've always wanted, the laptop will quickly lose its luster for me. Even if I resolve not to atone for my wrong, I will nevertheless find myself imagining various ways of atoning. Similarly, survivors report that living feels like a shameful burden rather than a stroke of good fortune. They often think that they owe something to the dead or their families, and even report that they feel like better people once they find ways of making amends.
We don't always need the belief in our own culpable wrongness in order to motivate these stereotypical reactions. In fact we have a readily identifiable set of psychological mechanisms--conscience--that routinely replicates such reactions without that underlying belief. For example, conscience is often more responsive to the brute fact of human or animal suffering than to careful considerations of moral principles or calculations of consequences. We'd hope, of course, that our consciences would be more responsive to our considered judgments than this, but this isn't usually the case. I think the same tendency to bypass judgments of culpable wrongness is at work when someone feels guilty about being the sole survivor of a fatal accident. Faced with a traumatic event, judgments of responsibility go out the window.
So guilt is better distinguished by the thoughts and tendencies characteristic of someone who believes herself culpably wrong, and not helpfully distinguished by the belief in culpable wrongness itself. Our mistake is in thinking we can't have the latter without the former, which is a mistake that the psychology of conscience can easily correct. I wouldn't be surprised if we could tell a similar story about indignation, shame, or any of a number of other moral feelings.
Posted by Eric Schwitzgebel at 1:20 PM 0 comments
Labels: justin tiwald, moral psychology
Monday, May 21, 2007
Should Philosophers Belong to the APA?
Philosophers active in the profession -- if they are employed full-time in a U.S. university, especially a high-profile university with a Ph.D. program in philosophy -- have, I think, some obligation to the philosophical community for the support and nurturance of the profession. One may fulfill this obligation in part by doing things like refereeing essays, chairing sessions, serving on committees, and paying dues to professional organizations, including minimally the American Philosophical Association. The APA in particular forms committees and publishes newsletters and proceedings pertinent to issues in the profession, organizes three annual conferences, supports and publicizes awards, and provides an admirably well-organized and equitable structure for advertising positions in philosophy (including keeping a list of censured institutions).
Some active U.S. philosophers, I'm sure, have good reasons not to belong. But I'm inclined to think that at least for leading members of the profession, it's a small, defeasible wrong not to belong -- a bit of freeloading or a small lapse of generosity, perhaps. (Of course, we all have our lapses in one arena or another!)
Even if there is no obligation -- not even a weak and defeasible one -- to be a member, it still seems reasonable to suppose that ordinarily, and all else being equal, supporting the organization financially by one's dues and membership is a good thing, if you are an active, prominent, full-time member of the profession.
Or are decisions about membership simply decisions of prudence, so that if you personally derive no benefits from being a member (you can do without the newsletters and proceedings, you are willing to pay the slightly higher registration fees when you go to meetings, you don't care to serve on any committees) paying membership fees is only foolish, like buying a pair of shoes you'll never use?
(Let's say these reflections are apropos of the following statement in the On-Line Philosophy Conference introduction:
Finally, we are very pleased to announce that Professors McMahan and Sosa have generously offered to donate their keynote honorariums to charity. This year the charities selected by the OPC keynote speakers are Amnesty International, Oxfam, and The American Philosophical Association. Please follow their generous lead and donate what you can. If nothing else, treat it as an inexpensive conference registration fee! We have provided links in the sidebar to this year's official charities. We hope that with your assistance we can start a charitable tradition here at the OPC, and we thank both Professors McMahan and Sosa for laying the groundwork!)
Posted by Eric Schwitzgebel at 1:15 PM 3 comments
Friday, May 18, 2007
Sympathy and Self-Love (by guest blogger Justin Tiwald)
The philosopher I work on, Dai Zhen, maintains that sympathy by its very nature requires a strong interest in one's own good. Here is my short synopsis of his argument, pieced together from comments and character glosses. See what you think.
To sympathize with someone, I must care about her for her sake. But this "for her sake" is a tricky concept. It doesn't really count if my concern for someone is grounded in any of her particular properties. Let's say that I care about someone named Mary who is also a great sociologist and an amazing Frisbee golfer. Although this combination of traits may be unique, in principle some other Frisbee golfing sociologist could fit the bill just as well. We wouldn't want a form of care in which one object of concern could be substituted for another. Therefore, we want a form of care that is largely independent of her particular virtues or appealing characteristics.
If concern for Mary's sake must be independent of these properties, then in some sense it has to be unconditional. Or at least it has to have a great deal of counterfactual resiliency. I should care about Mary even if she were a very different kind of person.
Achieving concern of this more unconditional variety is much more difficult than it appears, especially when the object of concern is someone with whom we are barely acquainted. To be able to sympathize with just anyone we have to be capable of appreciating her relevant feelings and desires even if we find them appalling or strange, and we have to consider her suffering regrettable even if we think it deserved or necessary. The best way to do this is to imagine ourselves in the stranger's place, wanting her good as we want our own. There is nothing that comes more naturally than caring about our own well-being as such, and if we imagine ourselves as the stranger, we'll be much more likely to recapture the depth and unconditionality of our own self-concern.
Now to an interesting historical point. In form, this argument runs along some of the same lines as a familiar Confucian argument for preferential love. Very roughly, the Confucian argument is that we show the requisite concern for strangers by building on the natural concern we already have for our parents and siblings. We then model that more natural concern in our interactions with outsiders. However, familial love by its very nature requires that we play favorites--if I cared about strangers as much as I do my brother then my care for my brother wouldn't be familial love. Thus, for the sake of having the right kind of concern for strangers, I must care even more about my family. This might seem unfair to the strangers, but it's just the price we must pay so that I can have the right kind of concern at all.
Dai Zhen takes it one step further. The real foundation for other-directed concern is not familial love but self-love. Self-love is unconditional in a way that natural familial love is not. Children will cease to love their parents if the parents are complete monsters, but we care about ourselves no matter what (even if we think ourselves unworthy). Whenever we are sympathetically concerned for others we are emulating and building on the concern we have for ourselves. Thus (in some very qualified sense) the self comes first. This might seem unfair to everyone else, but this is just the price we must pay in order to have the right kind of concern at all.
[Update: ES, May 22, 8:53 a.m.: Somehow the comments on this post were disabled. I have re-enabled them.]
Posted by Eric Schwitzgebel at 7:34 AM 1 comments
Labels: chinese philosophy, justin tiwald, moral psychology
Wednesday, May 16, 2007
The Two Envelope Paradox
In 1993, when I was a graduate student, fellow student Josh Dever introduced me to a simple puzzle in decision theory called the "exchange problem" or the "two envelope paradox". It got under my skin.
You are presented with the choice between two envelopes, Envelope A and Envelope B. You know that one envelope has half as much money as the other, but you don't know which has more. Arbitrarily, you choose Envelope A. Then you think to yourself, should I switch to Envelope B instead? There's a 50-50 chance it has twice as much money and a 50-50 chance it has half as much. And since double or nothing is a fair bet, double or half should be more than fair! Using the tools of formal decision theory, you might call "X" the amount of money in Envelope A and then calculate the expectation of switching as (.5)(.5X) + (.5)(2X) = 1.25X, or 5/4 X. So you switch.
Of course that's an absurd result. You have no reason to expect more from Envelope B. Parity of reasoning -- calling "Y" the amount in Envelope B -- would yield the result that you should expect more from Envelope A. Something has gone wrong.
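The symmetry is easy to check numerically. Here's a quick Monte Carlo sketch (my own illustration, not from the original post; the uniform distribution of amounts is an arbitrary assumption) comparing the policy of keeping Envelope A with the policy of always switching:

```python
import random

def simulate(trials=100_000, seed=0):
    """Simulate the two-envelope game: one envelope holds x, the other 2x,
    assigned at random. Return the average payoff of keeping Envelope A
    and the average payoff of always switching to Envelope B."""
    rng = random.Random(seed)
    keep_total = 0.0
    switch_total = 0.0
    for _ in range(trials):
        x = rng.uniform(1, 100)       # the smaller amount (arbitrary choice)
        envelopes = [x, 2 * x]
        rng.shuffle(envelopes)        # which envelope has more is 50-50
        a, b = envelopes
        keep_total += a               # payoff if you keep Envelope A
        switch_total += b             # payoff if you switch to Envelope B
    return keep_total / trials, switch_total / trials

keep, switch = simulate()
```

Both averages come out the same, confirming that switching gains nothing. Of course, the simulation only confirms the symmetry; it doesn't by itself diagnose where the 5/4 X reasoning goes wrong.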
But what exactly has gone wrong? I've never seen a satisfying answer to this question. Various authors, like Frank Jackson and Richard Jeffrey, have proposed constraints on the use of variables in the expectation formula, constraints that would prevent the fallacious reasoning above. However, such constraints are impractically strong, since they would also forbid intuitively valid forms of reasoning such as: If I have to choose between (i.) a gift from Person A and (ii.) a coinflip determining whether I get a gift from Person B or Person C, and I believe that Person A would, on average, give me about twice as much money as Person B and half as much as Person C, I should take option (ii).
Terry Horgan and Charles Chihara have proposed less formal constraints on the use of variables in such cases, constraints that I find difficult to interpret and which I'm not sure would consistently forbid fallacious calculations (for example in non-linear cases).
Many mathematicians and decision theorists have written interestingly on what happens after you open the envelope and see an amount. For example, could there be a probability distribution according to which no matter what amount you see, you should switch? That's a fun question, but I'm interested in the closed-envelope case, in diagnosing what is wrong in the simple reasoning above. No one, I think, has got the diagnosis right.
For Josh Dever's and my stab at a solution, see here (simplified version) or here (more detailed version).
For a list of on-line essays on this topic, see this Wikipedia entry. (This entry gives Josh and me credit for "the most common solution" -- does this mean that our unorthodoxy has become the new orthodoxy? -- and then shifts focus to the open envelope version.)
Posted by Eric Schwitzgebel at 11:45 AM 28 comments
Monday, May 14, 2007
The Second On-Line Philosophy Conference...
is here. Check it out!
(I have commented on Shaun Nichols's essay on the motivations of compatibilism about freedom and determinism, developing some of my ideas from last Friday's post on the psychology of philosophy.)
Posted by Eric Schwitzgebel at 11:11 AM 0 comments
How Selfless Can We Be (and Still Care about Others)? (by guest blogger Justin Tiwald)
Let's say that I am a person who cares very little about his own well-being. I am content with my humble job and my austere apartment. But let's say I also aspire to be the sort of person who sympathizes with a friend when she loses her good job and her family home. Is it possible to be both of these things at once?
I take sympathy to require, among other things, an ability to simulate the significant thoughts and feelings that the friend would have in her particular circumstances. This generally requires the possession of relevantly similar desires (even if not desires for exactly the same types of objects or states of affairs). And this poses a problem for the person who wants very little for himself, especially if good jobs and homes are among those things that he doesn't want.
This sort of worry emerges from time to time in the literature on Zhu Xi (1130-1200). Like many strong proponents of selfless character, Zhu Xi wants to have his cake and eat it too: he'd like his ideal moral agents to have very little interest in their own well-being and yet be capable of great compassion. Zhu's defenders usually respond by saying that he isn't so strong a proponent of asceticism as one might think. They point to many overlooked passages in which he explicitly countenances desires for basic human goods like food and family. Put these together, they conclude, and we could well have desires for a reasonably good life.
I've never been satisfied with this move. Defenders of Zhu Xi are right to point out that his ideal moral agent desires things like food and family, but they don't pay sufficient attention to why she desires them. It's one thing if she desires them because they make her life better, but it's another thing entirely if she desires them independently of their contribution to her life. In the first case she desires things that benefit her under that description. In the second case she desires things that happen to benefit her. Zhu Xi permits us to desire things that happen to be good for us, but, he warns, we better not want them because they are good for us.
This strikes me as omitting the largest share of the human good. When my friend loses her home and career, surely a substantial part of her anguish depends upon the thought that her life has taken a turn for the worse. In general, most people want their lives to go well. Knowing that one's life is on an upward trajectory is itself a source of great satisfaction, and knowing that it is not is itself a source of despair. If I am so selfless as to be entirely without desires that my life go well, I'm not going to be particularly good at feeling the pain of those who do have such desires.
Many proponents of moral selflessness turn out to be ascetics of the more subtle kind that I find in Zhu Xi. While they might appear to condemn all desires for outcomes that are self-serving, on closer examination they turn out to condemn primarily those desires that are conscientiously self-serving. This characterization of the good moral agent strikes me as much more realistic, but it still falls well short of what is required for robust sympathetic concern.
Posted by Eric Schwitzgebel at 10:52 AM 7 comments
Labels: justin tiwald, moral psychology
Friday, May 11, 2007
The Psychology of Philosophy
As an undergrad, my favorite philosophers were Nietzsche, Zhuangzi, and Paul Feyerabend -- all critics of High Reason, elegant rhetoricians against seeing philosophy (and science) as dispassionate intellection of the one truth. Why was I drawn to them -- to each of them from the first page, almost? Was it a nuanced appreciation of the arguments and counterarguments? Of course not. Rather, it was a psychological urge: Something in me rebelled against tyrant reason. I wanted to see it get its comeuppance. (Was this partly because I was so attracted to reason, almost painfully intellectual as a kid?)
My 1997 dissertation, I realized only in retrospect, had four parts, each of which was a rebellion, too: The first part (on infant and animal belief) attacked the views of Donald Davidson, probably the most eminent philosopher at my graduate institution. Each of the other three parts assaulted the views of one of my dissertation advisors (against Gopnik's treatment of representation, against Lloyd's treatment of theories, against Searle's treatment of belief). Coincidence? Fortunately, they were a tolerant lot!
My interests in philosophy have traced a crooked course, from Nietzsche and Unamuno to skepticism and philosophy of science, to developmental psychology, belief, consciousness, self-knowledge, and moral psychology. However, as I now realize looking back, a central theme in most of these has been an interest in, not just the philosophy of psychology, but the psychology of philosophy. What psychological factors drive philosophers toward certain views and away from others?
There is no broadly recognized subfield called "psychology of philosophy", though much of Nietzsche's best work fits aptly under this heading. Historians of philosophy -- especially those whose home departments are outside of philosophy -- generally recognize the importance of historical and social factors in shaping philosophical views, but few push for a deeper understanding of the psychological factors. Yet surely we could look more closely at such factors. We could use the tools of contemporary psychology (tools unknown to Nietzsche) to help improve our understanding of the field. Why not? Philosophy leaves such a wide latitude for disagreement, and our philosophical impulses -- our attractions to certain types of view and distaste for other views -- play a role so early in our exposure to philosophy, before we can really fairly assess the arguments, that it seems almost undeniable that contingent features of individual psychology must play a major role in our philosophical lives. (This needn't always be a matter of psychodynamic "depth psychology." One theme I find recurring in my work is the role played by unwitting metaphor. For example here and here and here and here.)
I was brought to these reflections reading Shaun Nichols's forthcoming contribution to next week's On-Line Philosophy Conference. Nichols's piece is exactly what I've just endorsed: a piece of psychology of philosophy, using the techniques of empirical psychology to cast light on philosophical motivations.
I'll post a link to that essay and to my commentary (which will contain more discussion of the psychology of philosophy) next Monday, when they are posted.
Posted by Eric Schwitzgebel at 9:05 AM 17 comments
Labels: psychology of philosophy
Wednesday, May 09, 2007
What Does It Mean to Have a Desire? (by guest blogger Justin Tiwald)
We don't normally speak as though having a desire for something implies that we presently feel some inclination to acquire it. It makes sense to say that I have a desire for Thai curry even if I'm currently taking a driving test and not thinking about Thai curry at all. Therefore it's tempting to say that "having a desire" can be cashed out in terms of a fairly straightforward counterfactual. I would have a desire for Thai curry just in case the following is true:
(1.) If I were sufficiently deprived of Thai curry and entertaining the possibility of acquiring it, I would feel an inclination to acquire it.
I don't think (1) does justice to the nuances of desire-possession. Consider another account offered by Xunzi (Hsün Tzu, 3rd Century B.C.E.). On the one hand Xunzi holds that our natural desires are susceptible of being utterly transformed. On the other hand Xunzi also claims that certain inclinations are permanent, such as the eyes' lust for beautiful things. The eyes will always lust for such things when allowed to dwell on them, but it's also true that the eyes' lust can be refashioned into a desire for sights that are consistent with virtue. How is this possible? Consider the following passage:
[The gentleman] makes his eyes not want to see what is not right, makes his ears not want to hear what is not right, [etc.]...He comes to the point where he loves [learning the Way], and his eyes love it more than the five colors, his ears love it more than the five tones, [etc.]...For this reason, power and profit cannot sway him. ("An Exhortation to Learning," Ivanhoe and Van Norden, pp. 260-61.)
A strong claim about the possibility of radical self-transformation, to be sure. But notice that Xunzi isn't suggesting that we can entirely eliminate the disposition to lust for beautiful things when allowed to dwell on them. Rather, the eyes develop a preemptive desire to avoid dwelling on the wrong things in the first place--a power of selective perception. With sufficient reinforcement it no longer makes sense to say that we have a desire for beautiful things as such, even though our eyes would lust for them if our thoughts were allowed to linger on them. This gives us a slightly more nuanced account of having a desire for beautiful things:
(2.) If I were sufficiently deprived of beautiful things and presented with an opportunity to entertain the thought of acquiring them, I would feel an inclination to acquire them.
Of course, I can be presented with an opportunity to entertain the thought of acquiring something without actually entertaining that thought. So on this account I could have a desire for beautiful things in sense (1) without having it in sense (2).
I think (2) sits closer to our usual way of understanding desire-possession. If I allowed myself to dwell on the thought of taking someone's fancy new laptop, I would probably feel an inclination to do so. But it's highly unusual for me to contemplate such a thing. I can sit in a classroom for hours without noticing open bags and backpacks that might have laptops inside. Often students will use their laptops in class and it won't even register. In contrast, a kleptomaniac would be well aware of those open bags, and would need to remind herself that it would be wrong to steal them.
So I have the desire in sense (1), because I would be tempted to acquire the laptop if I thought about it. But I don't have the desire in sense (2), because I don't in fact think about it (unlike the kleptomaniac). For purposes of evaluating moral character, (2) strikes me as the more decisive sense of having a desire.
Posted by Eric Schwitzgebel at 12:02 PM 10 comments
Labels: chinese philosophy, justin tiwald, moral psychology
Monday, May 07, 2007
Attention, Objects, and Aims
We normally think of attention as a relationship between a person and an object. If you are attending, you're attending to something, that is, to some thing -- a noise, a conversation, an apple.
First problem case: Macbeth hallucinates a dagger. I see a mirage. There is, of course, no dagger and no pool of water. So what thing, what object, do I stand in relation to, as the target of my attention? Some non-existent thing? Some mental thing (an idea, an experience)? If the latter, does it follow that I can't always tell whether my attention is directed outward to the world or inward, as it were, to my own mind? That would be strange.
Not a fatal objection, surely, to an "objectual model" (let's call it) of attention. Defenders of that view will have their resources. But why not, instead, jettison the objectual model and regard attention as the dedication of a certain kind of resource (what we might call "central cognitive resources") to a particular aim or goal? The aim of visual attention is the same in both the mirage case and the case of seeing an ordinary pool of water. The aim is to (for example) determine whether there's water over there, or whether this is really a mirage, or to estimate how long before the car hits the puddle. The mirage case and the visual case can be treated in the same way, without the aid of some ghostly, invented object for me to stand in an attentional relation to.
Consider also other sorts of attention-consuming tasks. Research psychologists have fixated on visual attention (and to some extent auditory attention) almost exclusively in recent decades, but in the early days of introspective psychology people spoke also of "intellectual attention". When you're thinking hard about a math puzzle or when you're contemplating the best route to grandma's house in rush hour, there's a perfectly legitimate sense in which you are devoting (non-sensory) attention to these tasks. Both kinds of tasks consume central cognitive resources. You can't do either very well while also quickly adding a column of numbers or while focusing on a difficult visual task.
But what are the objects I stand in relation to in intellectual attention? The route to grandma's house? Numbers? (What are numbers, anyway?) What if I'm thinking about unicorns? Better to say that I'm trying to do things. Attention is devoted to tasks, not objects. Or consider heavy exercise, holding one's eyes still, and other acts of self control. These tasks, too, consume attentional resources; yet it's not always clear that I am attending to objects (my own body, maybe?) in doing them.
So why do I care about this? Mainly because I think introspection is a species of attention, and that philosophers and psychologists often get introspection wrong because they work with too objectual a model of attention. But more on that in a future post....
(Thanks to Justin Fisher, by the way, for conversation on this point last Friday.)
Posted by Eric Schwitzgebel at 11:16 AM 19 comments
Friday, May 04, 2007
With Your Eyes Closed, Can You See Your Hand in Front of Your Face?
Puzzlement and confusion:
I close my eyes. I wave my hand in front of my face. It seems as though I can see the motion of my hand. Most people I've asked report the same.
It's possible that I do detect that motion. A certain amount of light penetrates the closed eyelids. I could be detecting differences in lighting as my hand passes before my eyes.
But on the other hand, most people, deep in a cave where there isn't a single photon to pierce the darkness, will report being able to see their hands moving in front of their faces. That this isn't a matter of picking up on visual stimulus is made clearer by our inability in such situations to detect another person's hand waved before our faces. It seems that our knowledge of the movement of our hand is somehow affecting our visual experience, or at least our judgments about our visual experience, without actually causing any visual input.
So: When my eyes are closed and I seem to detect my hand, am I actually visually detecting its motion? Or is what's going on more like what happens in a cave?
Let's do some science. Consciousness studies, in such matters, is pretty uncut. Maybe there's something out there on this, but I bet you'd have to dig pretty deep; and then you'd get a few weird articles from 1932 or something, or from a minor Japanese journal in 2001 -- articles that have never been cited, and that have strange, contradictory results. (I don't know this for sure, I'm just conjecturing based on past experience with similar questions.) If so, you can do novel experiments right there in your armchair.
Try facing different directions (toward a light source, away from a light source). Try closing your eyes more tightly, or occluding them with your other hand, or interposing an object between your eyes and your hand. That's what I did at least. I found myself sufficiently puzzled that I dashed downstairs and found a group of loitering undergraduates and had them all do it too! (This probably enhanced my reputation as a kooky professor.)
The results were complete uninterpretable chaos. For example, for myself: I seem to see it more strongly when I face a light source than when I face away. When I close my eyes tightly or put my other hand completely over them, I find myself uncertain about whether I have visual experience conditioned on the motion of my hand. If so, it is less. But when I put an occluding object between my eyes and my moving hand, say six inches in front of my face, I do think I still experience the motion of my hand, despite the fact that it can't be affecting me through that occluding object -- or at least that's how it seemed to me before I ran downstairs. I seem to be able to reproduce that effect only inconsistently. Others had different patterns of results.
If you're game to try, I'd be interested to hear your thoughts and experiences. Maybe I'll even work some of them into a presentation I'm hoping to give at the Association for the Scientific Study of Consciousness next month....
Posted by Eric Schwitzgebel at 7:27 AM 34 comments
Labels: eyes closed, sense experience
Wednesday, May 02, 2007
Virginia Tech: A Thought about the Media Coverage
I'm teaching a class this term on the moral psychology of evil. So far, I've managed not to say a word about the Virginia Tech shootings (yes, there's already a very good Wikipedia entry, with 119 references). I believe that the massive attention given to such events has negative consequences.
There's the obvious negative consequence (mentioned often, hand-wringingly and half self-condemningly, in the press coverage of such events) that excessive attention to these events catapults their perpetrators to a fame they don't deserve. The perpetrator becomes a model; his way of behaving gains salience as a possible way of behaving to others of unbalanced mind; and the promise of comparable notoriety may be appealing to some.
But what I find more troubling is this: Focus on events of this sort encourages an inaccurate and falsely comforting model of evil. By ignoring (or burying on page 12) the hundreds of thousands, maybe millions, killed every year by vile governmental, military, and corporate policies, and by individual, private acts of evil -- by focusing on massacres and suicide bombers instead, we ground our conception of evil in a narrow band of strange cases. In particular, we may be tempted to think of evil as something done by unusual, deranged people (like Cho) or indoctrinated, almost brainwashed, followers of radical religious movements (as most Americans conceptualize suicide bombers).
As Hannah Arendt, Ervin Staub, and many others have made clear, though, most of the evil in the world is not done by such people. Instead, it is done by ordinary folks, like you and me. The assumption that it is not -- that it is done instead by monsters and maniacs -- is comforting because it allows us to hide from recognizing the potential for evil in ourselves.
And for exactly that same reason, that assumption is extremely dangerous.
Posted by Eric Schwitzgebel at 2:23 PM 6 comments
Labels: moral psychology
Monday, April 30, 2007
The Trembling Stoic
On a dispositionalist view of belief (which I elaborate in this essay and this encyclopedia entry), to believe some proposition P is just to act and react, both in one's outward behavior and one's inward feelings, as though P were the case. One difficulty for this sort of view is what to do when someone seems to sincerely, wholeheartedly endorse some proposition -- and hence, we might say, believe it -- and yet does not pervasively act and react as though that proposition were true.
Excluding cases of deception (self or other) or unusual cognitive background (such as strange accompanying beliefs and desires), such cases seem to come mainly in two varieties:
(1.) Trembling Stoic cases. The Stoic sincerely judges, both alone in his study and out in the world in discussions with others, unhesitatingly and unreservedly, that death is not bad. Yet he trembles before the sword, and he fears what the doctor will say. Or: A liberal professor sincerely professes that all the races are intellectually equal (and has, let's suppose, the scientific evidence to prove it), yet reveals implicitly in her behavior and reactions a subtle but persistent racism when it comes to matters of intelligence. Or: Someone comes to believe that God and Heaven exist, but does not transform her behavior accordingly.
These cases needn't involve self-deception or insincerity. One may perfectly well realize the need to reform. Nor are such cases necessarily matters of "weakness of will" in the face of acute, impulsive desires: Our liberal professor, for example, need have no particular desire to treat other races as intellectually inferior.
(2.) Momentary forgetfulness cases. I get an email saying a bridge I normally take to work is closed. Yet the next day, I find myself headed toward the bridge, rather than my intended alternate route, until at some moment (maybe only after seeing the bridge) I recall the previous day's email. Or: The trashcan used always to be under the sink, now it's by the fridge. I still reach under the sink half the time, though, when I go to throw something away.
Aaron Zimmerman, Tori McGeer, and Ted Preston have emphasized the importance of such cases to me in evaluating dispositionalism about belief (Tori and Ted being sympathetic to dispositionalism, Aaron less so).
My response to such cases is to distinguish broadly dispositional belief from momentary occurrences of sincere judgment. Then I'll "bite the bullet" on belief: The Stoic and the absent-minded driver do not fully and completely believe, respectively, that death is not bad and that the bridge is closed (neither do they fully and completely believe the opposite). Although there are moments when they make sincere judgments to that effect, it takes a certain amount of work and self-regulation to allow such judgments fully to inform one's habitual everyday behavior. Until that work is done, they don't fully believe.
(I recognize that it may grate a bit to say that I don't believe that the bridge is closed, as I'm driving toward it, especially since it seems right for me to say, in retrospect, "I knew the bridge was closed!" Here's a brief discussion of that particular issue.)
Posted by Eric Schwitzgebel at 6:45 AM 18 comments
Labels: belief
Friday, April 27, 2007
A Flurry of Essays
In the last few weeks I've had four essays come out -- a flood, really. (I published six essays total in the four years from 2003-2006.) These essays had all been cooking for at least two years. Philosophical publishing is molasses-slow! (A warning to new assistant professors.) I started working on the HPQ essay in 1998. One essay I've been working on (sporadically) since 1993 still isn't out. (Well, it is on decision theory, which is pretty low in my research priorities these days.)
For a general list of on-line essays in philosophy, not confined to Schwitzgeblia, I recommend Jonathan Ichikawa's conscientiously-maintained (but oxymoronically titled) Online Papers in Philosophy. (Don't see the oxymoron? Look at the second word. I'm trying to jettison the habit of referring to essays as "papers", for pointlessly priggish etymological reasons I suppose -- the same sorts of reasons that make me resist pronouncing "processes" with a long final "e" [as though the singular were "processis"], the same sorts of reasons that make me wince when people speak of "steep learning curves" without realizing that steepness in traditional behaviorist learning curves indicates learning quickly.)
Okay, enough free association. Here are the essays. Their topics should be clear enough from their titles. They develop ideas about conscious experience, self-knowledge, and moral development that are among the frequent themes of this blog.
Human Nature and Moral Education in Mencius, Xunzi, Hobbes, and Rousseau. History of Philosophy Quarterly, 24 (2007), 147-168.
Do Things Look Flat? Philosophy & Phenomenological Research, 72 (2006 - yes, they're a bit behind!), 589-599.
Do You Have Constant Tactile Experience of Your Feet in Your Shoes? Or Is Experience Limited to What's in Attention? Journal of Consciousness Studies, 14 (2007), no. 3, 5-35.
No Unchallengeable Epistemic Authority, of Any Sort, Regarding Our Own Conscious Experience -- Contra Dennett? Phenomenology & the Cognitive Sciences, 6 (2007), 107-112.
Posted by Eric Schwitzgebel at 8:17 AM 8 comments
Wednesday, April 25, 2007
Happy First Birthday, The Splintered Mind
Today, The Splintered Mind is one year old. (Let's hope it doesn't take 'til age 38 to peak!) A few reflections on my first year of blogging:
* It's a lot of work, but I like the discipline of posting on a MWF schedule. Every philosopher should have three thoughts a week good enough to be worth sharing in a casual way, with a forgiving audience.
* I am delighted with the readership of the blog. Comments are substantive and thoughtful (and polite!); and they often lead me to (or force me to) refine my thinking, or better understand potential objections, or see precedents. It's like constantly being in a (slow-paced) philosophical conversation. And I, as a philosopher, tend to learn more from and be more motivated by conversation than anything else.
* Blogging helps teach humility. Working on my own, it often seems to me that various things are obvious or obviously false, brilliant or ridiculous. But if I put it that way in the blog, people will call me out. I'm too often wrong to get away with being arrogant!
Since its launch one year ago, this blog has had a bit over 40,000 "unique visitors" (each new day someone visits, that person counts as a new unique visitor). I'd estimate that about half of the visitors were looking for this blog or following a link and about half happened upon it as the result of a topical search.
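The counting rule can be sketched in a few lines (my own toy illustration, not part of the original post; the visitor names and log format are invented):

```python
# Each distinct (visitor, day) pair counts once under this rule: the same
# person visiting on three different days counts as three "unique visitors".
visits = [
    ("alice", "2006-11-01"),
    ("alice", "2006-11-01"),  # second hit on the same day: not counted again
    ("alice", "2006-11-02"),
    ("bob",   "2006-11-01"),
]
unique_visitors = len(set(visits))
print(unique_visitors)  # -> 3
```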
The monthly stats:
Apr 2006: 38 visitors
May 2006: 511 visitors
Jun 2006: 445 visitors
Jul 2006: 1642 visitors
Aug 2006: 1945 visitors
Sep 2006: 1922 visitors
Oct 2006: 2230 visitors
Nov 2006: 8320 visitors
Dec 2006: 2952 visitors
Jan 2007: 6707 visitors
Feb 2007: 4025 visitors
Mar 2007: 4340 visitors
Apr 2007: 5226 visitors (so far)
Although these numbers don't reach anything like the lofty heights of Leiter's blog (for understandable reasons!), I'm quite happy with them. Surely, these are many more people than read my articles; and my impression is that the readership is largely advanced students and youngish professors of philosophy. And if too many people read my blog, I wouldn't have enough time to respond individually to nearly every comment, which is my current practice!
The peaks in Nov. 2006 and Jan. 2007 are due entirely to my two most popular, most linked blog posts:
Most-Cited Ethicists in the Stanford Encyclopedia (and its companion Most-Cited Philosophers of Mind and Language in the Stanford Encyclopedia).
and
Still More Data on the Theft of Ethics Books.
Thanks for visiting!
Posted by Eric Schwitzgebel at 8:08 AM 2 comments
Tuesday, April 24, 2007
The New Philosophers' Carnival is...
here. (Thanks to Avery Archer.)
Posted by Eric Schwitzgebel at 7:34 AM 0 comments
Monday, April 23, 2007
Peer Opinion of the Behavior of Ethicists: Results by Academic Rank
As regular visitors to this blog will know, survey data Josh Rust and I collected at the Pacific and Eastern meetings of the American Philosophical Association suggest that philosophers don't think ethicists behave much differently than philosophers who aren't ethicists.
Results from the Eastern division meeting suggested the possibility of a U-shaped curve based on rank. Students and full professors seemed to think better of ethicists than did professors around tenure time. Was this trend borne out by the Pacific Division data?
Not at first glance. Here are the Pacific results broken down by rank. Q1 is whether ethicists on average behave better than non-ethicists in philosophy. Q3 is whether specialists in metaphysics and epistemology (including philosophy of mind) behave better than philosophers who are not M&E specialists. The scale ranges from 1 (substantially morally better) through 4 (about the same) to 7 (substantially morally worse).
Q1 (ethicists vs. other philosophers):
Undergraduates: 3.7
Graduate students: 3.5
Adjuncts: 3.9
Assistant Profs (tenure track): 3.8
Associate Profs: 3.8
Full Profs: 3.7
Distinguished Profs: 3.9
Q3 (M&E specialists vs. other philosophers):
Undergraduates: 3.5
Graduates: 4.0
Adjuncts: 4.2
Assistant Profs: 4.3
Associate Profs: 4.1
Full Profs: 4.3
Distinguished Profs: 4.1
No striking trends here. Of course, if one has a generally low opinion of philosophers, that might not show up in these questions, which ask only for a comparison of philosophers of one group with philosophers as a whole. More telling, perhaps, are Q2 and Q4, which are identical to Q1 and Q3 except that the comparison group is "non-academics of similar social background".
Q2 (ethicists vs. non-academics):
Undergraduates: 3.6
Graduate students: 3.5
Adjuncts: 4.0
Assistant Profs (tenure track): 3.6
Associate Profs: 3.5
Full Profs: 3.5
Distinguished Profs: 3.2
Q4 (M&E specialists vs. non-academics):
Undergraduates: 2.9
Graduates: 3.9
Adjuncts: 3.8
Assistant Profs: 3.9
Associate Profs: 3.8
Full Profs: 3.8
Distinguished Profs: 3.4
There is a weak but not statistically significant trend here toward the students and full professors thinking better of philosophers than do adjuncts, assistants and associates (mean 3.6 vs. 3.8, p = .16).
As these numbers also suggest, there was a general tendency for Q2 & Q4 to have lower numbers than Q1 & Q3 -- implicitly suggesting that philosophers think philosophers behave morally better than non-academics (mean 3.9 on the within-philosophy comparisons vs. 3.6 on the comparisons with non-academics, p = .003). That trend is largely, but not entirely, driven by the tendency of ethicists to rate ethicists as morally better than non-philosophers (mean 3.1).
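For readers curious how a comparison of this kind is computed, here is a sketch of a one-sample t-test against the scale midpoint of 4 ("about the same"). The responses below are invented for illustration; they are not the actual survey data.

```python
import math
import statistics

# Hypothetical responses on the 1-7 scale (1 = substantially morally better,
# 4 = about the same, 7 = substantially morally worse). Invented data.
responses = [3, 4, 3, 5, 3, 4, 2, 4, 3, 3, 4, 3]
midpoint = 4

n = len(responses)
mean = statistics.mean(responses)
sd = statistics.stdev(responses)  # sample standard deviation
t_stat = (mean - midpoint) / (sd / math.sqrt(n))
print(round(mean, 2), round(t_stat, 2))  # -> 3.42 -2.55
```

A p-value would then be read from a t distribution with n - 1 degrees of freedom; with real data one would normally let a statistics package do this rather than computing by hand.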
When asked to compare the behavior of a particular ethicist or M&E specialist in the department to that of other members of the department and to non-academics of similar social background (Version 2 of the questionnaire), the breakdown by ranks looks rather different, with a general tendency for higher rank to correlate with a lower opinion of one's colleagues and a lower opinion of ethicists in particular.
For example, when asked to compare the moral behavior of the ethicist in your department whose name comes next in alphabetical order after yours (looping around from Z to A if necessary) to others in your department and to non-academics of similar social background, here's the breakdown by rank:
That particular ethicist in your department vs. others in your department:
Undergrad: 2.9
Grad: 3.5
Adjunct: 3.6
Assistant: 3.0
Associate: 3.8
Full: 4.1
Distinguished: 4.1
(ANOVA, p = .03)
That particular ethicist in your department vs. non-academics:
Undergrad: 2.6
Grad: 3.2
Adjunct: 3.5
Assistant: 3.0
Associate: 3.4
Full: 3.9
Distinguished: 4.2
(ANOVA, p = .07)
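The ANOVA figures reported above can be illustrated with a toy computation of the one-way F statistic by hand (the groups and ratings below are invented for illustration, not the actual data):

```python
import statistics

# Hypothetical ratings on the 1-7 scale, grouped by academic rank. Invented data.
groups = {
    "grad":      [3, 4, 3, 3],
    "assistant": [3, 3, 4, 2],
    "full":      [4, 5, 4, 4],
}

all_vals = [x for g in groups.values() for x in g]
grand_mean = statistics.mean(all_vals)
k = len(groups)    # number of groups
n = len(all_vals)  # total observations

# Between-group sum of squares: how far each group mean sits from the grand mean
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                 for g in groups.values())
# Within-group sum of squares: spread of ratings around their own group mean
ss_within = sum((x - statistics.mean(g)) ** 2
                for g in groups.values() for x in g)

f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(round(f_stat, 2))  # -> 4.5
```

The p-value is then read from an F distribution with (k - 1, n - k) degrees of freedom; a large F means the rank groups differ more between themselves than the noise within each group would predict.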
Posted by Eric Schwitzgebel at 11:10 AM 0 comments
Labels: ethics professors, Joshua Rust, psychology of philosophy
Friday, April 20, 2007
The Moral Behavior of Ethicists: Opinions Revealed in Conversation and on Questionnaires
Questionnaire respondents at the Pacific Division meeting of the American Philosophical Association a few weeks ago and at the Eastern Division meeting a few months ago said that ethicists behaved about as well, on average, as non-ethicists. At least, that's what the average non-ethicist said. Ethicists thought ethicists behaved a little bit better.
These results surprised me, not because I think ethicists actually behave particularly better or worse than non-ethicists, but because of what I've heard in conversation. I have probably spoken to about 200 philosophers about the moral behavior of ethicists. I'd say about 55-60% say ethicists behave about the same as non-ethicists, about 35-40% say they behave worse, and only about 5-10% say they behave better. On the questionnaire, on the other hand, responses were closer to 1/3 - 1/3 - 1/3.
Why this difference between my questionnaire results and what people say in conversation?
Some possibilities:
(1.) In conversation, ethicists will be shy about saying ethicists behave better, since that might seem insulting to non-ethicists. (Of course, this wouldn't explain the fact that many non-ethicists said on the questionnaire that ethicists behave better.)
(2.) Saying ethicists behave worse, or about the same, makes for more entertaining conversation, but it may not reflect the speaker's true opinion. (It seems a little strange to me to suppose that non-ethicists especially would deliberately hide their opinions about this, but maybe it's plausible as a covert conversational pressure operating non-consciously.)
(3.) People might be reluctant to say ethicists behave better because it risks seeming naive -- and more so in face-to-face conversation than in an anonymous questionnaire. Conversely, saying that ethicists behave the same or worse might seem worldly and sophisticated.
(4.) My inclination to think that ethicists don't behave better, or much better, may come across very early in the conversation. (I hope not, but I don't really know. Why my bias should have an asymmetric effect -- causing those with higher opinions of ethicists to adjust their statements but not those with lower opinions -- would need to be explained.)
(5.) Some respondents might worry about showing the profession in a bad light and so present themselves, in a formal survey, as a bit more sanguine about the behavior of ethicists than they really are. In conversation they might be more frank.
(6.) The more cynical philosophers, with darker views in general and darker views of ethicists in particular, may be less likely to volunteer to complete a questionnaire than the average philosopher, causing sanguine philosophers to be overrepresented in the respondent pool. This might have been especially true at the Eastern meeting, where many people seemed to assume my co-investigator, Josh Rust, was selling or advertising something. (At the Pacific, the sign clarified that it was a "philosophical/scientific" questionnaire; and I -- who knew probably 100 people at the meeting -- personally was at the table half the time.)
(1)-(4), if true, suggest that the questionnaire is the more reliable instrument; (5)-(6) that informal conversation is more telling.
Any thoughts?
Posted by Eric Schwitzgebel at 11:29 AM 0 comments
Labels: ethics professors, Joshua Rust
Tuesday, April 17, 2007
Judgment, Attunement, and Introspection
I've argued that we have only very poor knowledge of our own stream of conscious experience. When asked to form judgments about our visual experience, our auditory experience, our inner speech and imagery, we're prone to gross mistakes. But don't we have some sort of accurate responsiveness to our stream of experience (and not just the grossest features of it)? I'm not sure I want entirely to let go of that idea.
To the rescue (maybe!): the concept of attunement.
I catch a baseball in the net of my mitt. I don't see it go in. I don't feel the baseball directly, or even through a thin layer of leather. But I know it's in there. How do I know? My judgment about the baseball's presence in the net is in some way based on knowledge of what is going on with my mitt.
But is "knowledge" the right word here? If someone were to ask what about my mitt permits me to know so confidently that I caught the ball, I might easily stumble. Is it that the mitt tugged against my hand in a certain way? Is it that it has a kind of weighty inertia to it? I might not only fail to express it in words; I might have no real idea at all. And yet, some epistemic relationship I stand in with respect to my mitt seems to serve as the basis of my knowledge that I caught the ball.
Let's say that I am "attuned" to certain things that happen to my mitt, and this attunement grounds my knowledge that I caught the ball.
Or: I can see from my wife's face that she's feeling a bit edgy and tired. But what exactly in her face reveals this to me I don't know. Or: I know that someone is standing behind me. But whether I hear his breathing, or detect a sound-occluding object through echolocation, or have tuned into a difference in lighting, or have paranormal powers, I don't know. Let's say it's by echolocation. I'm attuned to the sound-occluding properties of his body, but have no idea that this is the case.
Might we, then, be attuned to our stream of experience, in this sense, without being able to make accurate judgments about it, as I don't make accurate judgments about the mitt or my wife's face?
(I do worry, though, that if so, this might lead too quickly to something like an "indirect perception" theory of perception, according to which my knowledge of external objects is grounded in knowledge of -- in this case, attunement to -- the sensory experiences those objects produce in me. I'm not sure I want to go there....)
Posted by Eric Schwitzgebel at 1:33 PM 20 comments
Monday, April 16, 2007
Are 38-Year-Olds the Best Philosophers?
I'm sitting here looking at the covers of Donald Davidson's recently re-issued anthologies. Boy does he look old. Is that what philosophers look like?
Davidson was born in 1917. The pictures must have been taken near the time of his death (I'd almost say after) in 2003, when he was 86. But his most famous, best regarded essays were written long before, in the 1960s and 1970s, when he was in his 40s and 50s. Why not grace the cover with a picture of him from that era? Non-philosophers often think of philosophers as old. My mother advised me, when I was an undergraduate, to do science first and philosophy when I'm old, since the best scientists are young and the best philosophers are old!
Davidson actually started a bit late. In my philosophy of mind class, I present to students the dates of the philosophers we read and the dates of publication of the assigned essays -- some of the most important essays in historical and 20th-century philosophy of mind. To calculate age, I subtract one from the difference of the years (if you're born in the middle of 1900, in 1950 you're 49 for half the year, and there's always a delay before publication). Here's the list from the main part of the course:
Rene Descartes (1596-1650). Meditations on First Philosophy: 1641 (age 44).
John Locke (1632-1704). Essay Concerning Human Understanding: 1689 (age 56).
George Berkeley (1685-1753). A Treatise Concerning the Principles of Human Knowledge: 1710 (age 24). Three Dialogues Between Hylas and Philonous: 1713 (age 27).
Julien Offray de la Mettrie (1709-1751). Man a Machine: 1748 (age 38).
J. J. C. Smart (1920- ). "Sensations and Brain Processes": 1959 (age 38).
Hilary Putnam (1926- ). "The Nature of Mental States": 1967 (age 40).
David Lewis (1941-2001). "Psychophysical and Theoretical Identifications": 1972 (age 30). "Mad Pain and Martian Pain": 1980 (age 38).
Ned Block (1942- ). "Troubles with Functionalism": 1978 (age 35).
Frank Jackson (1943- ). "What Mary Didn’t Know": 1986 (age 42).
Paul M. Churchland (1942- ). "Eliminative Materialism and the Propositional Attitudes": 1981 (age 38).
Colin McGinn (1950- ). "Can we solve the mind-body problem?": 1989 (age 38).
David Chalmers (1966- ). The Conscious Mind: 1996 (age 29).
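The age rule described above can be written out as a tiny function (my own sketch of the stated rule, not part of the original post; the function name is invented):

```python
def age_at_publication(birth_year, pub_year):
    """Subtract one from the difference of the years: on average you've had
    your birthday for only part of the year, and publication lags writing."""
    return pub_year - birth_year - 1

# A few entries from the list above:
print(age_at_publication(1596, 1641))  # Descartes, Meditations -> 44
print(age_at_publication(1920, 1959))  # Smart, "Sensations and Brain Processes" -> 38
print(age_at_publication(1966, 1996))  # Chalmers, The Conscious Mind -> 29
```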
Supposing these data are representative, here are two theories:
(1.) Philosophers tend to peak around age 40; or
(2.) Philosophers who haven't done anything very influential by age 40 tend to withdraw from publishing philosophy, or not aim very high, for the rest of their careers; and those who do achieve eminence by age 40 tend to regress toward the mean for the rest of their careers. (How many great ideas, or bursts of genius, can you expect one person to have?) This second theory would explain the overrepresentation of great work by 40-year-old philosophers without committing to the thesis that 40 years of age is the best time to do philosophy.
(I just left 38 behind me last weekend. What will I think of this issue, I wonder, when I'm 60?)
Posted by Eric Schwitzgebel at 7:23 AM 25 comments
Friday, April 13, 2007
Obedience and Evil in McDonald's
In Milgram's famous experiments on the moral psychology of evil, he finds that obedience to commands to deliver extreme electric shocks to another person decreases as the person issuing the commands gets farther away from the subject and as the victim of the shocks gets closer. On the basis of his research, one might expect very low rates of obedience when the victim is in close physical proximity and the authority is issuing commands over the telephone.
This video is therefore doubly interesting. A man calls a McDonald's in Kentucky, purporting to be a police officer, and orders the assistant manager to strip search a teenage girl. Eventually, the assistant manager's fiance is brought in to replace the assistant manager, and at the command of the man on the phone he performs corporal punishment on the naked girl and has her perform sexual acts. This goes on for several hours. I recommend watching the video (which consists of security camera clips and interviews with the victim and assistant manager) to get a vivid sense of the events.
Several features of the situation may be working to increase compliance, despite the proximity of the victim and distance of the authority: the slow, stepwise progression (for the assistant manager), the existence of an apparently more knowledgeable person whose interpretation of the situation would frame his own (for the fiance), the obedience generally accorded police, and maybe (esp. for the fiance) an appealingly erotic aspect of the activity.
Only when a sufficiently skeptical outsider was brought in, with a different perspective on the situation, was the assistant manager able to reframe the situation and consider the possibility that the commands of the "authority" on the phone were not legitimate.
We shouldn't be too quick, I think, to assume that we would have seen through the ploy and resisted the man's commands....
Posted by Eric Schwitzgebel at 5:53 AM 24 comments
Labels: moral psychology
Wednesday, April 11, 2007
The Moral Behavior of Ethics Professors: Peer Opinion
At the Pacific Division meeting of the APA last week, Josh Rust and I offered passersby chocolate for completing questionnaires on the moral behavior of ethics professors.
We did a preliminary study of this at the Eastern APA. It also connects to a general interest I have in the relationship between moral reflection and moral behavior.
The survey came in two versions. The key questions in Version 1 were:
1. Take a moment to consider the various ethics professors you have known, both as colleagues and in the student-mentor relationship. As best you can determine from your own experience, do professors specializing in ethics tend, on average, to behave morally better, worse, or about the same as philosophers not specializing in ethics? (Please circle one number below.)
[The numbers then ran from 1 ("substantially morally better") to 4 ("about the same") to 7 ("substantially morally worse").]
and
2. [same question, but with the comparison group being "non-academics of similar social background"].
For comparison, identical questions were asked about "specialists in metaphysics and epistemology (including philosophy of mind)".
Version 2 was similar, but it asked the respondent to think about the particular ethicist in your department whose name comes next in alphabetical order after yours (looping around from Z to A if necessary). [Thanks to Jonathan Ichikawa for formulating this question, in a comment on this blog!] Again, for comparison, identical questions were asked about the metaphysics and epistemology specialist in your department. In both versions we collected demographic information about area of specialization, rank, type of institution, and graduate school.
277 people completed questionnaires out of about 1300 registered for the meeting. There were some cute stories, too. Among them: An eminent ethicist who shall remain nameless grabbed a chocolate from our table without completing the survey, then dashed off, saying "I'm being evil!" I don't think she realized that her behavior was actually pertinent to the content of the questionnaire!
Preliminary results: Ethicists think ethicists behave slightly better than philosophers specializing in other areas. Non-ethicists think they behave the same.
On Version 1, ethicists' mean response for the question comparing ethicists' behavior to the behavior of non-ethicists was 3.44 (where 4 is "about the same") (t-test vs. 4, p = .01). M&E specialists got a mean of 4.26. On Version 2, they rated an arbitrarily chosen ethicist in their department better, compared to others in their department, than an arbitrarily chosen M&E specialist (3.38 vs. 3.98; p = .05).
On Version 1, non-ethicists rated ethicists at an even 4.00 vs. other philosophers. (About 1/3 said they behaved better, about 1/3 said they behaved worse.) On Version 2, non-ethicists rated both the ethicists and the M&E specialists at about 3.5 compared to others in their departments (showing a slight bias toward favoring individuals over groups, but no better opinion of ethicists overall).
More thoughts (including analysis by rank and institution type) to come soon!
Posted by Eric Schwitzgebel at 4:33 PM 11 comments
Labels: ethics professors, Joshua Rust, moral psychology
Friday, April 06, 2007
At the APA
I'm up in San Francisco for the Pacific Division meeting of the American Philosophical Association. The APA has generously permitted me to set up a table outside the book display, where I'm offering people chocolate in exchange for filling out a questionnaire.
The 5-minute questionnaire solicits opinions about the moral behavior of ethics professors. I'll post preliminary results here at The Splintered Mind within the next couple of weeks.
Respondents and passersby have largely been neutral or kind, but -- to me a bit surprisingly -- very few eminent professors I know (even those who know me fairly well) have stopped to complete the questionnaire. In striking contrast, nearly everyone I know personally who is a peer or lower in professional status has completed a questionnaire or promised to.
Now maybe the eminent professors are just busier and more besieged by people competing for their attention than are the others, or maybe their high salaries give them less incentive to earn chocolate, but also it seems to me that several were somewhat uncomfortable seeing me at a table distributing questionnaires. (This is without even knowing the content of the questionnaire.) One shook his head and smiled disapprovingly. I said, "It's even worse than you think." When he turned to walk away, I asked him if I didn't at least pique his curiosity. He said I only tweaked his aversion.
What am I doing to their APA...?
Update, 11:08 p.m.:
I should partly take these observations back. A couple very eminent professors completed the questionnaire today; and I'm feeling a bit more sympathetic right now with the extent to which the ones who did not may generally feel pressed from many directions for their time and attention at meetings of this sort.
Update, 4:23 p.m., April 11:
Maybe I should largely take back the post above. I seemed to get better reactions from eminent philosophers as the meeting went on -- whether by chance or for some other reason, I don't know. In the end, 16 of the 277 respondents indicated (hopefully honestly!) that their highest level of academic achievement was "distinguished professor" (which is the highest academic rank and generally indicates eminence in the field). And 1:16 sounds like a roughly plausible ratio of distinguished professors to others of lower rank among meeting attendees.
Posted by Eric Schwitzgebel at 6:22 AM 1 comments
Wednesday, April 04, 2007
Happy Lynchers (repost)
I'm headed off to the Pacific Division meeting of the American Philosophical Association (where, hopefully, I'll get some data of interest to readers of this blog); and I just taught a class on the psychology of lynching, so I hope readers will forgive me for my first-ever repost. (This post harks back to the early weeks of this blog -- May 2006. I only got 511 visitors in May. Now I get that many in about 3 days!)
To render the photos below less viscerally disturbing, I've blanked out sections. They remain, I think, ethically quite disturbing.
[Photographs omitted.]
The blanked out parts of the pictures are, of course, the victims of lynchings (all African-American) in early 20th-century United States. I won't risk the sensibilities of readers any more than I already have by describing the details of the corpses, but to put it blandly, in the first and third pictures especially, they are grotesquely mutilated.
I post these pictures not (I hope) from any motive of voyeurism, but to share with you my sense that they powerfully raise one of the most important issues in moral psychology: the emotions of perpetrators of evil. Though it's a bit hard to see in these small pictures (the maximum size Blogger allows), I hope it's nonetheless evident that most of the lynchers look relaxed and happy -- though they're only feet from a freshly murdered corpse. It was not uncommon to bring small children along to lynchings, to collect souvenirs, to take photos and sell them as postcards. (These pictures are from a collection of just such postcards: James Allen's Without Sanctuary.)
Although I'm attracted to a roughly Mencian view of human nature, according to which something in us is deeply revolted by evil, when that evil is nearby and "in one's face" as it were, I find pictures like this somewhat difficult to reconcile with that view. Are these people inwardly revolted, under their smiles?
(The old comments and replies are here.)
Posted by Eric Schwitzgebel at 11:48 AM 7 comments
Monday, April 02, 2007
Against "Appearances"
Chisholm (1957) and Jackson (1977) distinguish an epistemic from a phenomenal sense of the word "appears".
The epistemic sense: If I say "It appears the Democrats are headed for defeat", normally I'm expressing a kind of hedged judgment. I'm expressing the view that the Democrats are headed for defeat, qualified with a recognition that my judgment may be wrong. I'm not saying anything in particular about my "phenomenology" or stream of experience. I'm not claiming for example, to be entertaining a mental image of defeated Democrats or to hear the word "defeat" ringing in my mind in inner speech.
The phenomenal sense: If I say, looking at a well-known visual illusion, "The top line appears longer", I'm not expressing a judgment about the line. I know the lines are the same length. Instead, I'm making a claim about my phenomenology, my visual experience.
These senses sometimes come together in perception: If I say, looking at two peaks in the distance, "The one on the left appears higher", I might be saying something about the peaks, or I might be saying something about my experience, or (more interestingly) I might be saying something about the peaks by way of saying something about my experience. I might be saying: "My visual experience is a left-looking-higher kind of visual experience; based on that I tentatively conclude that the left peak is higher."
Now how often do we actually do that last type of thing? If not in conversation with others, in our own cognition? How often, that is, do we reach judgments about outward objects on the basis of our knowledge of our own experience?
Traditional "veil of perception" views of perception (e.g., Descartes and Locke, on standard interpretations) suggest that that is what we do all the time in perception. We know outward things only by knowing our own minds first (and better), in particular by knowing our inner experiences of those things. What we know directly is our own sensory experience; our knowledge of the outside world is derivative of that. (Thus, it is as though our sensory experience stands like a veil between us and the world, preventing direct contact with things as they are in themselves.)
I worry that the word "appearance", as philosophers of perception typically use it, invites something like this view, by blurring together the phenomenal and the epistemic senses of "appears". I worry that it invites the view that our judgments about the things we see -- the real, physical objects around us -- are grounded in facts about how those objects are experienced phenomenally. I worry it invites the view that when I say "It (visually) appears that there is a coffee mug on the table" I mean both that I am inclined to judge, visually, that there is a coffee mug on the table and that I am having a visual experience of a certain sort -- a coffee-muggish experience; and that these two events are integrated in a certain way, as different aspects of the "appearance" perhaps. It's because I have the visual experience that I reach the visual judgment.
But here's the question: Do we reach judgments about the properties of objects based on the sensory experiences they produce in us? Or is the visual experience we have of an object the product of, or something created in parallel with, our judgments about the object? If I'm right that talk of "visual appearances" tends to invite the former view, that's reason to be wary of it, if the latter view has merit.
Posted by Eric Schwitzgebel at 3:49 PM 14 comments