here. Insights on the nature and advantages of the medium. Much, but not all, applies to academic blogs.
From my first post in April 2006 through our adoption of Kate in March 2008, I posted relentlessly Mon-Wed-Fri. Now it's more like once a week. I suspect that not only the one-year-old child but also the new iPod have cut into my blogging: Many blogging ideas used to come during morning walks, which are now sometimes filled with Frank Sinatra, Al Stewart, or This American Life instead. I haven't decided whether this is a good thing or a bad one.
Oh, and Happy (recent or continuing) Whatever! (Global Orgasm Day, for example.)
Saturday, December 27, 2008
Posted by Eric Schwitzgebel at 8:28 PM
Friday, December 19, 2008
[Cross-posted at Manyul Im's Chinese Philosophy Blog]
Okay, I've written about this before; but, to my enduring amazement, not everyone agrees with me. The orthodox interpretation of Zhuangzi (Chuang Tzu) puts skillful activity near the center of Zhuangzi's value system. (The orthodoxy here includes Graham, Ivanhoe, Roth, and many others, including Velleman in a recent article I objected to in another connection.)
Here is one reason to be suspicious of this orthodoxy: Examples of skillful activity are rare in the Inner Chapters, the authentic core of Zhuangzi's book. And the one place in the Inner Chapters where Zhuangzi does indisputably praise skillful activity is in an oddly truncated chapter, with a title and message ("caring for life") suggestive of the early, immature Zhuangzi (if one follows Graham in seeing Zhuangzi as originally a Yangist). Even the term "wu wei", often stressed in skill-based interpretations as indicating a kind of spontaneous responsiveness, appears only three times in the Inner Chapters, and never in a way that indisputably means anything other than literally "doing nothing".
Maybe you've never seen a wildcat or a weasel. It crouches down and hides, watching for something to come along. It leaps and races east and west, not hesitating to go high or low -- until it falls into the trap and dies in the net. Then again there's the yak, big as a cloud covering the sky. It certainly knows how to be big, though it doesn't know how to catch rats (Watson trans., Complete, p. 35).

On the one hand, we have the skill of the weasel, which Zhuangzi does not seem to be urging us to imitate; and on the other hand we have the yak, who knows how to... how to do what? How to be big! It has no useful skills -- it cannot carve oxen, guide a boat, or chisel a wheel -- and in this respect, Zhuangzi says, it is like the "big and useless" trees that repeatedly occur in the text, earning Zhuangzi's praise. Zhuangzi continues:
Now you have this big tree and you're distressed because it's useless. Why don't you plant it in Not-Even-Anything Village, or the field of Broad-and-Boundless, relax and do nothing by its side, or lie down for a free and easy sleep under it? (ibid.)

That is the core of Zhuangzi, I submit -- not the skillful activity of craftsmen, but lazy, lounging bigness!
Where else does Zhuangzi talk about skill in the Inner Chapters? He describes the skill of a famous lute player, a music master, and Huizi the logician as "close to perfection", yet he calls the lute-playing "injury" and he says these three "ended in the foolishness of 'hard' and 'white' [i.e., meaningless logical distinctions]" (p. 41-42). Also: "When men get together to pit their strength in games of skill, they start off in a light and friendly mood, but usually end up in a dark and angry one, and if they go on too long they start resorting to various underhanded tricks" (p. 60-61). He repeatedly praises amputees and "cripples" who appear to have no special skills. Although he praises abilities such as floating on the wind (p. 32) and entering water without getting wet (p. 77), these appear to be magical powers rather than perfections of skill, along the lines of having "skin like ice or snow" and being impervious to heat (p. 33); and it's unclear to what extent he seriously believes in such abilities.
How did the orthodox view arise, then? I suspect it's mostly due to overemphasizing the dubious Outer and Mixed Chapters and conflating Zhuangzi's view with that of the more famous "Daoist" Laozi (Lao Tzu). Since this happened early in the interpretive tradition, it has the additional force of inertia.
Friday, December 12, 2008
Thursday, December 11, 2008
[Cross posted at Manyul Im's Chinese Philosophy Blog]
In my 2006 essay "Do Things Look Flat?", I examine some of the cultural history of the opinion that visual appearances involve what I call "projective distortions" -- the opinion, that is, that tilted coins look elliptical, rows of streetlights look like they shrink into the distance, etc. I conjecture that our inclination to say such things is due to overanalogizing visual experience to flat, projective media like paintings and photographs. In support of this conjecture, I contrast the contemporary and early modern periods (in the West) with ancient Greece and introspective psychology circa 1900. In the first two cultures, one finds both a tendency to compare visual experience to pictures and a tendency to describe visual experience as projectively distorted. In the latter two cultures, one finds little of either, despite plenty of talk about visual appearances in general.
I didn't do a systematic search of classical Chinese philosophy, which I love but which has less epistemology of perception, but I did find one relevant passage:
If you look down on a herd of cows from the top of a hill, they will look no bigger than sheep, and yet no one hoping to find sheep is likely to run down the hill after them. It is simply that the distance obscures their actual size. If you look up at a forest from the foot of a hill, the biggest trees appear no taller than chopsticks, and yet no one hoping to find chopsticks is likely to go picking among them. It is simply that the height obscures their actual dimensions (Xunzi ch. 21; Basic Writings, Watson trans., p. 134)

Though I can recall no ancient Chinese comparisons of visual experience and painting, both Xunzi and Zhuangzi compare the mind to a pan of water which can reflect things accurately or inaccurately, an analogy that seems related (Xunzi ibid. p. 131, ch. 25, Knoblock trans. 1999, p. 799; Zhuangzi, Watson trans., Complete Works, p. 97). In medieval China, which I know much less about, I noticed Wang Yangming saying such a comparison was commonplace (Instructions for Practical Living, Chan trans., p. 45).
So my question is, for those of you who know more Chinese philosophy than I, are there other passages I should be looking at -- either on perspectival shape or size distortion or on analogies for visual experience? I'm revising the essay for a book chapter and I'd like to expand my discussion to China if I can find enough material. Any help would be much appreciated!
(I also wouldn't mind more help on Greek passages, too, if anyone has the inclination. Some of the more obvious passages are Plato's discussion of painters in the Republic and Sophist, Aristotle's discussion of sensory experience as like impressions in wax, Sextus's lists of sensory distortions in experience and his discussions of wax impressions, Epicurus's discussions of the transmission of images, discussions of the sun as looking "one foot wide", and Euclid's and Ptolemy's optics.)
Friday, December 05, 2008
Here's a passage from David Velleman's recent essay, "The Way of the Wanton" that caught my attention (earning a rare four hm's in the margin, plus a question mark and exclamation point):
Attentively reflecting on one's thirst entails standing back from it, for several reasons. First, the content of one's reflective thoughts is not especially expressive of the motive on which one is reflecting: "I am thirsty" is not an especially thirsty thought, not necessarily the thought of someone thinking thirstily. Second, attentive reflection is itself an activity -- a mental activity -- and, as such, it requires a motive, which, of course, is not thirst. Reflecting on one's thirst is, therefore, a distraction from acting on one's thirst, and in that respect is even a distraction from being thirsty. Most importantly, though, consciousness just seems to open a gulf between subject and object, even when its object is the subject himself. Consciousness seems to have the structure of vision, requiring its object to stand across from the viewer -- to occupy the position of the Gegenstand (p. 181, emphasis in original).

Let's go one point at a time.
Does reflecting on thirst entail "standing back" from it? It's not clear what this metaphor means, though Velleman's subsequent three reasons help clarify. But before we get to those reasons, let's just wallow in the metaphor a bit: Standing back from one's thirst. I don't want to be too unsympathetic here. The metaphor is inviting in a way. But I at least don't feel I have the kind of rigorous understanding I'd want of this idea, as a philosopher.
On to the reasons:
(1.) Per Velleman: "I am thirsty" is not an especially thirsty thought, not necessarily the thought of someone thinking thirstily.
Walking across campus, I see a water fountain. The sentence "Damn, I'm thirsty!" springs to mind as I head for a drink. Is this not a thirsty thought? It seems reflective of thirst; it probably reinforces the thirst and helps push along the thirst-quenching behavior -- so it's thirsty enough, I'd say. Is it not a thought, then -- or at least not a thought in the self-reflective sense Velleman evidently has in mind here? Maybe, for example, it's simply expressive and not introspective, an outburst like "ow!" when you stub your toe, but as it were an inner outburst? (Is that too oxymoronic?)
So let's try it more introspectively. As it happens, I've been introspecting my thirst quite a bit in writing this post, and despite having had a drink just a few minutes ago I find myself almost desperately thirsty....
Okay, I'm back. (Yes, I dashed off to the fountain.)
All right, I just don't get this point. Or I do get it and it just seems plain wrong.
(2.) Per Velleman: Attentive reflection is a mental activity that requires a motive, which is of course not thirst. It's a distraction both from acting on one's thirst and from being thirsty.
Does mental activity require a motive? If an image of Jim wearing a duck-hat comes to mind unbidden as I talk to Jim, need there be a motive? (Or is that not "mental activity"?) And even if there is a motive for reflecting on one's thirst, why can't that motive sometimes be thirst itself? For example, reflecting on my thirst might be a means to achieving drink -- it might help ensure that I order something to drink at the restaurant. And as such, it needn't be a distraction from acting on one's thirst; it might be part of so acting. And finally, is it a distraction from being thirsty? Well, not in my experience! Darn, I'm getting thirsty again! I can imagine a kind of contemplative attention to one's thirst (as to one's pain) that in a certain way renders that thirst (or pain) less compelling. Maybe something like that is achieved in certain sorts of meditation. But that doesn't seem to me the standard case.
(3.) Per Velleman: Consciousness opens a gulf between subject and object, requiring its object to stand across from the viewer.
Huh? There's nothing wrong with metaphors per se, but they're hard to work with when you don't see eye to eye. Velleman develops the metaphor a bit in the next paragraph: As a subject of thirst, thirst is not in one's "field of view" -- rather, things like water fountains are. In self-reflection, one's thirst is in the field of view. Now this seems to me mainly a way of saying that one is not thinking about one's thirst in the first case and one is thinking about it in the second. (Is there more to it than that? If so, tell me.) But then that brings us back to the issue in (2): Is there a competition, as Velleman seems to believe, between feeling thirst and acting thirsty, on the one hand, and thinking about one's thirst, on the other? Or do the two normally complement and cooperate?
Can we venture an empirical prediction here? If I suggest to subjects that they think about whether they are thirsty, then set them free, will they be more or less likely to stop by the fountain on their way out than subjects I invite to think about something else? I'm pretty sure which way this one will turn out. Now I suspect this test wouldn't be fair to Velleman for some reason. (Maybe the suggestion will also affect thirst itself and not just reflection on it?) So if one of you is sympathetic to him, maybe you can help me out....
By the way, did I mention that this is a delightful and engaging article?
Wednesday, November 26, 2008
In the mid-20th century, people generally thought most of their dreams were black and white; no longer. The key appears to be different levels of group exposure to black and white media. Two key questions are:
(1.) Does black and white media exposure lead people to really dream in black and white, or does it lead people to erroneously report that they do?

(2.) Do people who report dreaming in color really dream in color, or are the colors of most of the objects in the dreamworld unspecified? (If you have trouble conceiving of the latter, think about novels, which leave the colors of most of their objects unspecified.)

Two recent studies (Schredl et al. 2008; Murzyn 2008) cast a bit more light on these questions. Both researchers asked general questions about people's dreams and also had people answer questions about their dreams in "dream diaries" immediately upon waking in the morning.
First, both studies confirm that college-age respondents these days rarely report black-and-white dreams, either when asked about their dreams in general or when completing dream diaries. Murzyn finds that older respondents (aged about 55-75 years) more commonly report black and white dreams, but even in this group the rates of reported black and white dreams (22%) don't approach the levels of 50 or 60 years ago.
On issue 1: Both Schredl and Murzyn find that people with better overall dream recall report more colored and less black and white dreaming. Schredl also finds that people with better recall of color in (waking) visual displays report more color in dreams. On the face of it, this might suggest that reports of black and white dreams come from less credible reporters; but it could just be that the kind of people who dream in black and white are the kind of people who dream less often and less vividly and are less interested in color memory tasks; or black and white dreams may generally be less detailed. Also, it's possible that the experimenters' different measures corrupt each other: People who describe themselves as having frequent colored dreams may find themselves more motivated to report richly detailed colored dreams and to try harder on color recall tasks (as if to conform to their earlier self-portrayal) than do those reporting black and white dreaming.
On issue 2: Both studies find that respondents generally claim to dream in color or a mix of color and black and white, rather than claiming to dream neither in color nor in black and white. In Murzyn's questionnaire, only one of sixty respondents claimed to dream neither in color nor black and white (which matches my own findings in 2003). In their dream diaries, Murzyn's participants described only 2% of their dreams as neither colored nor black and white. In Schredl's dream diaries, participants listed objects central to their dreams and stated if those objects were colored. By this measure, 83% of dream objects were colored (vs. 3% black and white, 15% don't recall). Therefore, if it's true that most dream objects are neither colored nor black and white, respondents themselves must not realize this, even about their own immediately past dreams. This may seem unlikely, but given the apparent inaccuracy of introspection even about current conscious experience I consider it a definite possibility.
Thursday, November 20, 2008
Psychologists -- and some philosophers -- spend a huge amount of time seeking grant money; I'm sure the same is true in many of the other sciences. I've become increasingly convinced that this is not the best way for leading researchers to be employing their time and talents. What if granting agencies simply selected (through a rotating committee of experts) a large number of established researchers and gave them research money without their having to ask, tracking only that it has been used for legitimate research purposes? There would still have to be ample room, of course, for unselected researchers to submit applications to obtain research funds and for researchers (selected or not) to submit applications for unusually large disbursements for especially worthy and expensive projects.
Wouldn't that give a lot of people more time simply to do their work?
Update, Nov. 21:
Driving home after posting this yesterday, I found myself anticipating comments asserting that such a policy would increase the gap between the academic haves and have-nots. I think that's a legitimate concern, but one that could be addressed by having the granting committee be especially energetic about looking for merit in junior researchers and outside the top schools.
Posted by Eric Schwitzgebel at 3:18 PM
Friday, November 14, 2008
Do we have conscious sensory experience of objects we don't attend to? On a rich view, we have virtually constant sensory experience in every sensory modality (for example, all day long, a peripheral experience of the feeling of our feet in our shoes). On a thin view, conscious experience is limited to one or a few things or regions in attention at any given time. This rich-thin dispute is substantive, not merely terminological, and ordinary folks (as well as psychologists and philosophers) seem to split about 50-50 on it (with some moderates). I worry that the issue may be scientifically irresolvable.
However, some leading researchers in consciousness studies (Block 2007; and [more qualifiedly] Koch & Tsuchiya 2007) have recently put forward an argument against the thin view that runs as follows. When one's visual attention is consumed with a demanding task and stimuli are presented in the periphery, one can still report some features of those stimuli, such as their "gist" (e.g., animal vs. vehicle). Similarly, when one is presented with a Sperling-like display -- a very brief presentation of three rows of alphanumeric characters -- one has a sense of visually experiencing the whole display despite the fact that one can only attend to (and report) some incomplete portion of it. Therefore, conscious experience outruns attention.
I believe this argument fails. In both cases, it's plausible to suppose that there may be diffuse attention to the entire display, the entire (say) computer screen, albeit with focal attention on only one part of it. Such examples may establish that consciousness outruns focal attention narrowly defined, but they do not establish that consciousness outruns some broader span of diffuse attention. When attending to a visual display on a computer screen one may not even diffusely attend to the picture on the wall behind the computer or the pressure of the seat against one's back. The question is, are these consciously experienced when absorbed in the experimental tasks? The Block/Koch argument shines no light whatsoever on that issue.
[Update November 19: Ned Block emailed me to say that he thinks I'm oversimplifying his view. I did simplify the argument somewhat, and for brevity and convenience I used the Sperling example, which he mainly deploys for another (closely related) purpose, rather than using his own preferred example. But whether the above is an objectionable oversimplification is a further question. I emailed Ned back hoping for clarification on some key points but have not yet received a reply.]
[Update November 20: Ned Block has emailed me with a fuller reply (full text, with his permission, here). He explains that in his view consciousness only probably outruns attention and that "the evidence points toward" that fact; and he thinks this is better suggested by "inattentional blindness" type cases (where people don't report seeing even fairly large objects or property changes when their attention is primarily occupied in some distractor task) than in what I called "Sperling-like" displays (by which I meant complex displays shown too briefly to allow full report but with report enabled by a cue to some portion of the display either during the display or very shortly thereafter). He also points out that Koch & Tsuchiya, like he, say that "it is difficult to make absolutely sure that there is no attention devoted to a certain stimulus". Finally, he says that to the extent he makes a case that consciousness outruns attention, it is a "holistic" case based on a variety of evidence and theoretical considerations, not a single type of experiment.
When I originally wrote the post, I was less interested in the details of interpreting Block than in a certain form of argument which I have heard several times orally (including during a well-attended talk by a very eminent researcher), the argument taking the form described in the post; and Block and Koch & Tsuchiya are the most eminent people I've found recently saying things along those lines in print; but it's true that I should have more carefully stated their qualifications and hesitations.]
Wednesday, November 05, 2008
David Lewis famously endorsed the possibility of "mad pain" in his article "Mad Pain and Martian Pain":
There might be a strange man who sometimes feels pain, just as we do, but whose pain differs greatly from ours in its causes and effects. Our pain is typically caused by cuts, burns, pressure, and the like; his is caused by moderate exercise on an empty stomach. Our pain is generally distracting; his turns his mind to mathematics, facilitating concentration on that but distracting him from anything else. Intense pain has no tendency whatever to cause him to groan or writhe, but does cause him to cross his legs and snap his fingers. He is not in the least motivated to prevent pain or get rid of it. In short, he feels pain but his pain does not at all occupy the typical causal role of pain.

Mad pain in this sense seems to me conceivable. My question is: Could there be a parallel case for belief? Let's try to imagine such a case.
Daiyu, say, is a woman who believes that most pearls are white. However, this belief was not caused in the normal way. It was not caused by having seen white pearls, nor by hearing testimony to the effect that most pearls are white, nor by inferring that pearls are white from some other facts about pearls or whiteness. It was caused, say, by having looked for 4 seconds at the sun setting over the Pacific Ocean. And, for her, this is just the kind of event that would cause that belief: It's not the case that she would ever form that belief by any of the normal means such as those described above; rather, the kinds of things that cause that belief in her in all "nearby possible worlds", or across the relevant range of counterfactual circumstances, are perceptions of setting-sun events of a certain sort, and maybe also eating a certain sort of salad. Furthermore, Daiyu's belief that most pearls are white has atypical effects. It does not cause her to say anything like "most pearls are white" (which she'd deny; she is actually disposed to say "most pearls are black") or to think to herself in inner speech "most pearls are white". She would feel surprise were she to see a white pearl. If a friend were to say she was looking for white jewelry to go with a dress, Daiyu would not be at all inclined to recommend a pearl necklace. She is not at all disposed to infer from her belief that most pearls are white that there is a type of precious object used in jewelry that is both round and white.
Now I'm inclined to think that this case is incoherent. If Daiyu in fact has that sort of causal/functional structure, it's not correct to say that she really does believe that most pearls are white. In this respect, belief is different from pain. If you agree with me about this, that would seem to rule out a certain class of views about belief, namely, those views that characterize belief in terms of a mental state (maybe a brain state) of the sort that, in humans, typically has certain sorts of causes and typically has certain sorts of effects but which may, in some particular individuals, be not at all apt to have been brought about by those causes and not at all apt to have those effects. It's hard to know exactly how to read "representationalists" about belief (like Fodor, Dretske, Millikan) on this point, but a certain way of reading the representationalist view would imply no incoherence in the idea of mad belief: If an individual possesses an internal representation of the right sort, held in such a way that if everything were functioning normally it would have the normal effects, that person believes -- even if everything is not functioning normally.
Compare: having a heart. Hearts might be defined in terms of their normal functional role (to pump blood), but a being can still have a heart even if that heart fails utterly to fill that normal functional role (in which case the being will presumably either not be viable or have its life sustained somehow without a functioning heart). I'm a type functionalist about hearts: To have a heart is to have the type of organ that normally fills the causal role of hearts even if in one's own case the organ does not fill that causal role. Lewis is a type functionalist about pain. But if the Daiyu case is incoherent, we should not be type functionalists about belief. Closer to the truth, I suspect, would be token functionalism: To believe is to be in a state that actually, for you, plays the functional/causal role characteristic of belief. I'm not sure how readily representationalists about belief, especially those who think of mental representations as biological types or real in-the-head entities, can take token functionalism on board. Perhaps they are committed to the possibility of mad belief.
Friday, October 31, 2008
It seems a little strange to think so, and the philosophers I've asked about this in the last few days tend to say no. But here are three possible examples:
Consider the following rule: If P is true, then conclude that I believe that P is true. Of course, it's not generally true that for all P I believe that P. (Sadly, I'm not omniscient. Or happily?) However, if I apply this rule in my thinking, I will almost always be right, since employing the rule will require judging, in fact, that P is true. And if I judge that P is true, normally it is also true that I believe that P is true. So if by employing the rule I generate the belief or judgment that I believe that P is true, that belief or judgment is generally correct. The rule is, in a way, self-fulfilling. (Gareth Evans, Fred Dretske, Richard Moran, and Alex Byrne have all advocated rules something like this.)
And of course the conclusion "I believe that P is true" (the conclusion I now believe, having applied the rule) will itself generally be true even if P is false. I'm inclined to think it's usually knowledge.
One question is: Is this really inference? Well, it looks a bit like inference. It seems to play a psychological role like that of inference. What else would it be?
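The self-fulfilling character of the rule can be illustrated with a toy simulation. This is my own sketch, not anything from Evans, Dretske, Moran, or Byrne, and the "Agent" model is a deliberately crude assumption: the agent believes exactly what it has judged true.

```python
class Agent:
    """Crude model: the agent believes exactly what it has judged true."""

    def __init__(self):
        self.judgments = set()

    def judge(self, p):
        self.judgments.add(p)

    def believes(self, p):
        return p in self.judgments

    def self_ascribe(self, p):
        # The rule: if P is true, conclude "I believe that P".
        # Applying the rule requires judging P itself, so by the time
        # the conclusion is drawn, the agent really does believe P.
        self.judge(p)
        return f"I believe that {p}"

agent = Agent()
conclusion = agent.self_ascribe("most pearls are white")
# The self-ascription comes out true whether or not P itself is true:
assert agent.believes("most pearls are white")
```

On this toy model, the conclusion "I believe that P" is guaranteed correct whenever the rule is applied, which is just the self-fulfilling feature the post describes.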
Instrumentalism in Science:
It's a common view in science and in philosophy of science that some scientific theories may not be strictly speaking true (or even approximately true) and yet can be used as "calculating devices" or the like to arrive at truths. For example, on Bas van Fraassen's view, we shouldn't believe that unobservably small entities like atoms exist, and yet we can use the equations and models of atomic physics to predict events that happen among the things we can observe (such as tracks in a cloud chamber or clicks in a Geiger counter). Let's further suppose that atoms do not in fact exist. Would this be a case of scientific inference in which false premises (about atoms) generate conclusions (about Geiger counters) that count as knowledge?
Perhaps the relevant premise is not "atoms behave [suchly]" but "a model in which atoms are posited as fictions that behave [suchly] generates true claims about observables". But this seems to me needlessly complex and perhaps not accurate to psychological reality for all the scientists whom I'd be inclined to say derive knowledge about observables using atomic models, even if some of the crucial statements in those models are false.
In standard two-valued logic, I can derive "P or Q" from P. What if Q, in some particular case, is just "not P"? Perhaps, then, I can derive (and know) "P or not P" from P, even if P is false?
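The logical point can be checked mechanically. The following sketch (my own illustration, not from the original post) verifies both that disjunction introduction is truth-preserving and that the conclusion with Q = not-P is a tautology, true even on the valuation where the premise P is false:

```python
from itertools import product

# Disjunction introduction: from P, infer "P or Q".
# Truth-preserving: on every valuation where the premise P is true,
# the conclusion "P or Q" is true as well, whatever Q is.
for p, q in product([True, False], repeat=2):
    if p:
        assert p or q

# Taking Q = "not P" gives the conclusion "P or not P", which is true
# on every valuation -- including the one where the premise P is false.
for p in (True, False):
    assert p or not p
```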
What's the problem here? Why do philosophers seem to be reluctant to say we can sometimes gain knowledge through inference from false premises?
Tuesday, October 21, 2008
My nominee for best use of fart spray in 2008:
Simone Schnall and co-authors (including the always interesting Jonathan Haidt) set up a table on the Stanford campus, asking passing Stanford students to complete a questionnaire on the immorality or not of marrying one's first cousin, having consensual sex with a first cousin, driving rather than walking 1 1/2 miles to work, and releasing a documentary over the objections of immigrants who didn't realize they were being interviewed on film. All respondents completed the questionnaire while standing near a trash bucket. For one group, the bucket was clean and empty; for another it was lightly doused with fart spray so that a mild odor emanated from it; for a third group, the bucket was liberally sprayed and emitted a strong stench. Participants in the odiferous conditions rated all four actions morally worse than did those in the fart-absent condition.
In other research, Haidt has found that people hypnotically induced to experience disgust are also more inclined to reach negative moral judgments than when they're not experiencing hypnotically-induced disgust; Schnall et al. found that people were more morally condemnatory when completing questionnaires in a disgustingly dirty office than in a clean one, after vividly recalling a disgusting event than after not being instructed to do so, and after watching a disgusting movie scene as opposed to a neutral or sad scene. In the last three of these experiments, they found the difference in moral judgment only among people who, in a post-test, described themselves as being highly aware of bodily states such as hunger and bodily tension. (As an aside, I'm generally mistrustful of the accuracy of people's reports about their typical daily stream of conscious experience, and I wonder if responses on the post-test might be influenced by the strength of either their reaction to the previously presented moral scenarios or their reaction to the disgusting stimulus.)
Moral condemnation and visceral disgust may be more closely related, then, than you think -- or at least than most philosophers seem inclined to think. Whether this is a good thing or a bad thing is open to dispute. In these scenarios, it seems like a bad thing, since people are being swayed in their judgments by irrelevant factors. Whether it's generally a bad thing, I suppose, will depend on whether there's generally a good relationship between the things that evoke visceral disgust and those worth morally condemning. (Unusual sexual practices? Poor hygiene? Illness? Reflecting on these sorts of cases leads me to suspect that the connection between visceral and moral disgust is overall more misleading than helpful.)
There's a practical moral to all this, too: When you're trying to get people to judge you lightly for all the crap you've done, don't fart!
Monday, October 13, 2008
Fiery Cushman at Harvard and I are running a new version of the "Moral Sense Test", which asks respondents to make moral judgments about hypothetical scenarios. We're especially hoping to recruit people with philosophy degrees for this test so that we can compare philosophers' and non-philosophers' responses. So while I would encourage all readers of this blog to take the test (your answers, though completely anonymous, will be treasured!), I would especially appreciate it if people with graduate degrees in philosophy would take the time to complete it.
The test should take about 15-20 minutes, and people who have taken earlier versions of the Moral Sense Test have often reported it interesting to think about the kinds of moral dilemmas posed in the test.
Here's the link to the test.
(By the way, I'm off to Australia on Wednesday, and I doubt I'll have time to post to the blog between now and when I recover from my jet lag. But if you notice any problems with the test, do please email me so I can correct it immediately!)
[Update, October 14: Discussion of the test is warmly welcomed either by email or in the comments section of this post. However, if you are planning to take the test, please do so before reading the comments on this post.]
[Update, October 15: By the way, people should feel free to retake the test if they want. Just make sure you answer "yes" to the question of whether you've taken the Moral Sense Test before!]
Posted by Eric Schwitzgebel at 11:12 AM
Wednesday, October 08, 2008
The American Philosophical Association's Newsletter on Asian and Asian-American Philosophers and Philosophies has recently posted a discussion of the crisis in Chinese philosophy -- the perceived crisis being the fact that no highly-ranked North American philosophy department has a specialist in Chinese philosophy. I recommend the entire newsletter to those interested in the state of graduate education in Chinese philosophy -- perhaps starting with Bryan Van Norden's article.
My own take is that the situation is very serious for those hoping to receive graduate training in the area in the near future, but that the crisis is likely to be temporary, given what seems to me the generally increasing quality of work in that field, combined with the gradually increasing ethnic integration of North America.
Update, 5:05 p.m.: Manyul Im has opened a thread on the topic on his Chinese philosophy blog. No comments there yet, but I expect that's where the most informed discussion will be.
Tuesday, October 07, 2008
Yes, it's that time! Last year, I wrote a series of long posts on applying to Ph.D. programs in philosophy, based on my experience on admission committees at U.C. Riverside (and also to a lesser extent on my experience as an applicant and graduate student in the 1990s). Since people appear to have found it useful, I uploaded the whole series to the Underblog. There are also links to the original posts, where comments are welcome.
I have received a number of emails from people asking about their particular situations, and while I like to be helpful and I try to respond to all emails, I would encourage potential emailers to read through the posts and the comments to see if I've already addressed your type of situation. If you are in a type of situation that I have not addressed, though, I'd be happy to receive an email -- or even better hear about it in a comment, where my reply might also be useful to others.
I reiterate that these posts represent my own perspective only. Some of the things I say may be inaccurate or unrepresentative of general opinion. (And if so, I'd appreciate hearing from others who have served on admissions committees or who have recent relevant admissions experiences.) What I say is certainly not UCR policy. I won't even be on the admissions committee this year.
A few notes:
(1.) I know very little about M.A. programs, including admissions criteria, graduation rates, placement success, expectations within the programs, etc. I suspect that there's enormous diversity in these dimensions among programs.
(2.) Many students have emailed me or posted comments on applying to grad schools one, a few, or many years after graduation. I advise students to read through the comments section of Part II. There's also some further discussion in Part IV.
(3.) Another big issue is the student with the imperfect GPA or unusual institutional background. There's more discussion of this in the comments sections of several of the Parts.
(4. [update, 2:07 p.m.]) You might also want to check out the comments section on Brian Leiter's blog on the difference between U.S. and U.K. statements of purpose.
Thursday, October 02, 2008
Wednesday, October 01, 2008
How is philosophy different from the other academic disciplines? What makes it worth funding as an academic department? Here, I'll check my email for a minute while you think about your answer....
We could go sociological: We could say that philosophy is whatever it is that people who call themselves "philosophers" do. Or we could say that it is whatever it is that fits best into an integrated tradition arising from the canonical works of canonical figures like Plato and Kant. While neat in a way, this sort of sociological definition seems to me at best a fallback, if no more substantive definition succeeds -- and it strains to accommodate ancient Chinese and Indian philosophers and the possibility of philosophy on other planets or in the distant future when all memory of us has been lost.
Method or content seems the better hope. But is there a distinctively philosophical method or a set of distinctively philosophical topics?
Philosophy cannot, I think, be defined methodologically as an a priori discipline distinguished from the sciences by its focus on truths discoverable from an armchair and immune to empirical refutation. There are, in my view, no such truths. (I know that's contentious.) Speaking more moderately, it doesn't seem that philosophy is limited to such truths. Philosophers of science take stands on the nature of spacetime and natural selection, stands presumably empirically grounded and open to empirical refutation. Atheists and religious philosophers appeal to the appearance, or not, of benevolent design. Philosophers of mind connect their views with those in empirical psychology. Is there then some other method constitutive of philosophy? What could it be? Philosophy seems, if anything, methodologically pluralistic (especially with the rise of experimental philosophy).
A topical characterization of philosophy is more inviting: Philosophers consider such questions as the fundamental nature of reality, the nature of mind and knowledge and reason, general questions about moral right and wrong. But physicists and psychologists and religious leaders also consider these questions. Are they being philosophers when they do so? And what about the possibility of new philosophical questions? Also, a laundry list of questions is not very theoretically appealing. What we want to know is what those sorts of questions have in common that makes them philosophical.
Here's my view: To practice philosophy is to articulate argumentatively broad features of one's worldview, or -- derivatively -- to reflect on subsidiary points crucial to disputes about worldview, with an eye to how they feed into those disputes.
On this view, the empirical is no threat to philosophy. In fact, it would be nuts to develop a broad worldview without one's eyes open to the world. And although the empirical is deeply relevant to philosophy, no set of experiments could ever replace philosophy because no set of experiments could ever settle the most general questions of worldview (including, for example, the extent to which we should allow our beliefs to be governed by the results of scientific experiments). No science or set of sciences could aim at the broad vision of philosophy without thereby becoming philosophy -- becoming either bad philosophy (simplistic naturalized epistemology or cosmology, with substantial philosophical commitments simply assumed without argument and masked behind a web of scientific technicalities) or good, subtle, empirically-informed philosophy, philosophy recognizable to philosophers as philosophy.
This view of philosophy also, I think, properly highlights its importance and its centrality in academia.
I have been accused of aiming to destroy philosophy -- especially metaphysics and ethics -- replacing it with something empirical. However, philosophy is indestructible. People will always argumentatively articulate broad features of worldview. And I myself, even in my most empirical inquiries, aim to do nothing else.
Update, Oct. 2: Joachim Horvath points out in the comments section that important aspects of our worldview include the evolution of human beings from earlier primates and the falsity of geocentrism. But should exploring such questions count as philosophy? My own view is that their being empirical questions doesn't make them unphilosophical, and I would count Darwin and Huxley, Copernicus and Galileo, as doing philosophy when they put forward and defend such broad theses about the position of human beings in the universe. Likewise now, when we know so little about consciousness, the empirical study of basic facts about consciousness -- facts basic enough to count as central to a broad worldview -- should count as philosophy. Of course, later biologists, astronomers, and maybe consciousness scientists who get into narrower questions not involving broad features of our worldview are no longer doing philosophy on my conception. Thus, on my view, doing biology, or astronomy, or psychology can be a way of doing philosophy. Perhaps in this respect my view of philosophy diverges more from the mainstream than may be evident on its face from the original post.
Friday, September 26, 2008
Well, see, I was working up this neat little post on the subjective location of visual imagery. Do some people experience visual imagery as located inside their heads, while others experience it as located in front of their foreheads and still others experience it as in no location at all? But it turns out I've already written that post. Maybe this time I'd have found a bit more to say, but I'm afraid I stole my own thunder....
Still, something light and quick would be nice before I head over to Talking Points Memo and National Review to resolve my low blood-pressure problem. So how about the following question: Is everything that breaks breakable?
Strangely, this question has been bothering me recently. (See, I really am an analytic philosopher after all!) Now, if "breakable" just means, "under some conditions it would break" then everything that breaks is breakable. But then everything solid is breakable (and maybe some things that aren't solid, too, such as machines made entirely of liquid). That seems to rob the word of its use. So maybe "breakable" means something weaker, like "under less-than-highly-unusual conditions it would break". Of course, then when those highly unusual conditions occur (someone takes a chainsaw to my garbage cans, an earthquake rends the giant granite rock) something that wasn't breakable broke. Hm!
Why do I care? Well, other than the fact that I haven't entirely shucked my inner nerdy metaphysician, the following arguably parallel case lies near my interests: If believing that P is being disposed to judge that P, does an actual occurrence of judging P imply belief that P? (Readers who've visited recently will note the connection between this question and Tuesday's post.)
If this seems to be just a matter of deciding how to use words, well that's what all metaphysics is (I contentiously aver), so this fits right in!
Tuesday, September 23, 2008
Many Caucasians in academia sincerely profess that all races are of equal intelligence. Juliet, let's suppose, is one such person, a Jewish-American philosophy professor. She has, perhaps, studied the matter more than most: She has critically examined the literature on racial differences in intelligence, and she finds the case for racial equality compelling. She is prepared to argue coherently, authentically, and vehemently for equality of intelligence, and she has argued the point repeatedly in the past. Her egalitarianism in this matter coheres with her overarching "liberal" stance, according to which the sexes too possess equal intelligence, and racial and sexual discrimination are odious.
And yet -- I'm sure you see this coming -- Juliet is systematically racist in her spontaneous reactions, judgments, and unguarded behavior. When she gazes out on class the first day of each term, she can't help but think that some students look brighter than others -- and to her, the black students never look bright. When a black student makes an insightful comment or submits an excellent essay, she feels more surprise than she would were a white or Asian student to do so, even though her black students make insightful comments and submit excellent essays at the same rate as the others; and, worse, her bias affects her grading and the way she guides class discussion. When Juliet is on the hiring committee for a new office manager, it won't seem to her that the black applicants are the most intellectually capable, even if they are; or if she does become convinced, it will have taken more evidence than if the applicant had been white. When she converses with a janitor or cashier, she expects less wit if the person is black. Juliet may be perfectly aware of these facts about herself; she may aspire to reform; she may not be self-deceived in any way.
So here's the question: Does Juliet believe that the races are intellectually equal? Considering similar cases, Aaron Zimmerman and Tamar Gendler have said yes: Our beliefs are what in us is responsive to evidence, what is under rational control; and for Juliet, that's her egalitarian avowals. Her other responses are merely habitual or uncontrolled responses. But this seems to me to draw too sharp a line between the rational and the irrational or merely habitual. Our habits and gut responses are inextricably intertwined with our reason, perhaps often themselves a form of reasoning. Furthermore, imagine two black students in Juliet's class. One says to the other, in full knowledge of Juliet's sincere statements about equality, "and yet, she doesn't really believe that the black people are as smart as white people". Is that student so far wrong? Aren't our beliefs as much about how we live as about what we say?
Does Juliet have contradictory beliefs on the issue (as Brie Gertler seems to suggest about a similar case)? Does she believe both that the races are intellectually equal and that they're not? It's hard for me to know what to make of such an attribution. We might say that part of her believes one thing and part believes another, but there are serious problems with taking such a division literally. (Like: How do the different parts communicate? How much duplication is there in the attitudes held by the different parts and in the neural systems underlying those attitudes?)
Should we simply say that Juliet does not believe that the races are intellectually equal? That doesn't seem to do justice to her sincerity in arguing otherwise.
I recommend we treat this as an "in-between" case of believing. It's not quite right to say that Juliet believes that the races are intellectually equal; but neither is it quite right to deny her that belief. Her dispositions are splintered -- she has, we might say, a splintered mind -- and our attribution must be nuanced. Just as it's not quite right either to say that someone is or is not courageous simpliciter if he's courageous on Wednesdays and not on Tuesdays, it's not quite right to simply ascribe or deny belief when someone's actions and reactions are divided as Juliet's are.
Zimmerman tells me I'm whimsically throwing overboard classical two-valued logic -- the view, standard in logic, that all meaningful propositions are either true or false -- but I say if two-valued logic cannot handle vague cases (and I think it can't, not really), so much the worse for two-valued logic!
Thursday, September 18, 2008
Philosophers often provide accounts of self-knowledge as though we knew our own minds either entirely or predominantly in just one way (Jesse Prinz is a good exception to the rule, though). But let me count the ways (saving the fun ones for the end).
(1.) Self-observation. Here's an old joke: Q.: How do behaviorists greet each other? A.: "You're fine. How am I?" Writers in the behaviorist tradition stressed the importance of observing your behavior to assess your own mental condition. Gilbert Ryle reportedly said, how do I know what I think until I hear what I have to say? Or: I find myself asking the server for the apple pie; I conclude that I must prefer the apple to the cherry. Every philosopher thinks we can do this, of course. Psychologists such as Bem and Nisbett have stressed (counterintuitively) the centrality of this mode of self-knowledge. I think they're right that it's too easily underrated. [Update: Let me fold all "third-person" methods, such as being told about yourself by someone else or applying a textbook theory, into this category, even though, as Pete Mandik points out in the comments, they're not strictly "self-observation".]
(2.) Introspection. We cast our mental eye inward, as it were, and discern our mental goings-on. This "inner eye" metaphor has been much maligned, and obviously there are differences between external perception and introspection -- but at least for some mental states (I think, especially, conscious states), some quasi-sensory introspective mechanism plausibly plays a role, if we treat the analogy to perception with a light touch: We detect in ourselves, relatively directly and non-inferentially, some pre-existing mental state.
(3.) Self-expression. Wittgenstein suggested that we might learn to say "I'm in pain!" as a replacement for crying -- and just as we don't need to do any introspection or behavioral observation to wince or cry when we burn ourselves in the kitchen, so also we might ejaculate "that hurts!" without any prior introspection or behavioral observation. Or a young child might demand: "I want ice cream!" She's not introspecting first, presumably, and detecting a pre-existing desire for ice cream. Nor is she watching her behavior. Yet she is right: She does want ice cream.
(4.) Self-fulfillment. The thought that I am thinking is necessarily true whenever it occurs. So is the thought that I am thinking of a pink elephant, assuming that attributing myself that thought is enough for me to qualify as thinking of a pink elephant. Again, no detection or observation required. I can make this stuff up on the fly and still be right about it. For some reason, Descartes thought this was important. My own view is that it's about as important as the fact that whenever you say, in semaphore, that you are holding two flags, you're right. (Well, okay, let me moderate that a little: For some mental states it may be an interesting fact that the self-ascription implies the existence of the state -- for example if it's not possible to believe that you believe that P without also thereby at least half-believing that P.)
(5.) Simple derivation. I adopt the following inference rule: If P is true, conclude that I believe that P. This rule works surprisingly well (as Alex Byrne has pointed out). Dodgier but still useable as a rule of thumb: From X would be good, conclude that I want X all else being equal.
(6.) Self-shaping. A romantic novice is out on his first date. He blurts out, simply because it sounds good, "I'm the type of guy who buys women flowers". At the same time, perhaps just as he is finishing up that claim, he successfully resolves from then on to be the kind of guy who buys women flowers. No detection, no observation, no self-expression of a previously existing state, no simple self-fulfillment, no derivational rule. His self-ascription is true because he makes it true as he says it. Victoria McGeer and Richard Moran have developed versions of this view, but neither quite as starkly as this example suggests. But I suspect that much of what we say about ourselves (both publicly and in private) is bluster that we find ourselves committed to living up to. This self-shaping model works pretty well for imagery reports too.
Six ways. That's it. Am I missing any? (By the way, so-called "transparency" views, the object of much recent philosophical attention, can be wedded to any combination of 3-6, and are not by themselves positive accounts of self-knowledge.)
Thursday, September 11, 2008
Philosophy is built upon intuitions. (Maybe all knowledge is, at root.) Arguments must start somewhere, with something that seems obvious, with something we're willing to take for granted. In the 20th century, philosophers became methodologically explicit about this. Ethicists explicitly appeal to the intuition that it's not right to secretly kill and dissect one healthy person in order to save five needing organ transplants. Metaphysicians appeal to the intuition that if your molecules were scanned and taken apart and that information used to create a person elsewhere who was molecule-for-molecule identical to you, that person would be you. Philosophical debate often consists of noting the apparent clash between one set of intuitions and a theory grounded in a different set of intuitions. For example, if you won't dissect the one to save the five, does that imply that if a runaway trolley is heading toward five immobilized people, you shouldn't divert it to a side-track containing only one? There are ways to say no to the one and yes to the other, of course, but only by means of principles that conflict with still other intuitions... and we're off into the sort of save-the-intuitions game that analytic philosophers (and I too) enjoy!
Until recently, such intuition-saving disputes have been conducted without any careful empirical reflection on the source and trustworthiness of those intuitions. We have the sense that it would be wrong to dissect the one or that the recreated individual would be you, but where does that intuition come from? Do such intuitions somehow track a set of facts, independent of the individual philosopher's mind, about what is really right, or about what personal identity really consists in? A story needs to be told.
That story will necessarily be an empirical story, a story about the psychology of intuition -- and maybe, too, the sociology and anthropology and history and linguistics of intuition. For example, suppose it turns out that only highly educated English speakers share some particular intuition that is widely cited in analytic philosophy. That should cast some doubt -- doubt that can perhaps be overcome with a further story -- on the merit of that intuition. Or suppose that a certain intuition was to be found only among people for whom having that intuition would excuse them from serious moral culpability for actions performed earlier in their lives. That should cast defeasible doubt on the intuition.
With the maturing of empirical sciences that can cast light on the sources of our intuitions, we philosophers can no longer justifiably ignore such genetic considerations in evaluating our arguments. We can no longer innocently take our intuitions about philosophical cases as simply given. We must recognize that psychology, sociology, anthropology, history, and linguistics can cast important light on the merits and especially demerits of particular philosophical arguments.
Of course most philosophers know virtually nothing about psychology, sociology, anthropology, history, and linguistics; and most psychologists, sociologists, anthropologists, historians, and linguists are insufficiently enmeshed in philosophical debates to bring their resources to bear. A huge cross-disciplinary terrain remains almost unspoiled. To me, nothing could be more exciting! (Well, nothing in academia.)
A few have made starts: Paul Bloom, Tony Jack, and Philip Robbins have been discussing the roots of the intuition that mind and body are distinct. Fiery Cushman, Marc Hauser, Joshua Greene, and John Mikhail have been discussing the psychological roots of the moral intuitions in runaway trolley type cases. Reading "intuition" widely to include any views that people find attractive without compelling argument, Shaun Nichols has explored the roots of the intuition that there is no incompatibility between free will and causal determinism. I have examined the culturally-local metaphors behind the sense philosophical phenomenologists and others have that coins look elliptical when seen from an angle. These are barely beginnings.
Monday, September 08, 2008
Most of us (certainly I!) can read something we've written a dozen times without noticing a typo. What, I wonder, is the phenomenology of that?
Suppose I've written a sentence with "that" where "than" should be. Maybe I've been reading Nichols on disgust, and I write "Drinking five glasses of saliva is worse that drinking one." What is it like for me to see that sentence as I read it? Do I see it at all (Hurlburt thinks maybe not)? Supposing I do visually experience the sentence, do I see the "that" as a "than", so that my visual experience, in the appropriate place, is actually "n"-ish rather than "t"-ish? Or is my visual experience really "t"-ish in that spot, though I fail to notice the error? Or is my visual experience somehow indefinite between "n" and "t" (and maybe some other shapes), even though I may be foveating (looking directly) on the "t"?
Suppose I'm also saying the sentence to myself in inner speech as I read it. Parallel questions arise. Do I utter to myself "than", "that", or some more indefinite thing?
Now my own hunch is that I see the "t" (or maybe something more indefinite) but utter the "n". At least it's hard to imagine that I would utter the "that" aloud without noticing the typo. But this is only a hunch, and I'm not a great believer in introspective hunches. Maybe in some future neuroscience, if we can narrow down more precisely the correlations between brain states and conscious experience, we could scan the visual system for a "t"-ish or "n"-ish representation in the right part of the visual system -- but that's a long way off, if ever we'll get there. Could more careful introspection get us the right answer? That's tricky, too -- not noticing something is necessarily an elusive sort of experience. It's hard to believe, though, that something as mundane and nearby as this would be beyond our ken....
Friday, September 05, 2008
A refreshingly harsh and unsympathetic criticism of my recently published paper, "The Unreliability of Naive Introspection," has been posted over at Brain Pains. It's good once in a while (not too often!) to hear frank opinions from people who think you're full of crap. (You all here at The Splintered Mind are so polite and restrained in your comments -- not that I'm complaining!) I've responded at some length. My hope is that the comments are completely off target, but I confess I'm a biased judge!
Josh Rust and I have also just completed a draft of a new essay, "Do Ethicists and Political Philosophers Vote More Often Than Other Professors?". Regular readers of The Splintered Mind will recognize the theme from these earlier posts.
Wednesday, September 03, 2008
In 2003, two friends asked me to contribute something to their wedding ceremony. Since I’m a philosophy professor, I thought I would take the occasion to reflect a bit on the nature of conjugal love, the distinctive kind of love between a husband and wife. I never pursued these thoughts or sought publication for them, since the philosophy of love is not a research specialty of mine, but recently ceramic artist Jun Kaneko asked to publish them in a forthcoming anthology on Beethoven's Fidelio. Objections and corrections appreciated!
The common view that love is a feeling is, I think, quite misguided. Feelings come and go, while love is steady. Feelings are “passions” in the classic sense of ‘passion’ which shares a root with ‘passive’. They strike us largely unbidden. Love, in contrast, is something actively built. The passions suffered by teenagers and writers of romantic lyrics, felt so painfully, and often so temporarily, are not love – though in some cases they may be a prelude to it.
Rather than a feeling, love is a way of structuring one’s values, goals, and reactions. One characteristic of it is a deep commitment to the good of the other for his or her own sake. (This characterization of love owes quite a bit to Harry Frankfurt.) We all care about the good of other people we meet and know, for their own sake and not just for utilitarian ends, to some extent. Only if the regard is deep, though, only if we so highly value the other’s well-being that we are willing to thoroughly restructure and revise our own goals to accommodate it, and only if this restructuring is so well-rooted that it instantly and automatically informs our reactions to the person and to news that could affect him or her, do we possess real love.
Conjugal love involves all this, certainly. But it is also more than this. In conjugal love, one commits oneself to seeing one’s life always with the other in view. One commits to pursuing one’s major projects, even when alone, always in a kind of implicit conjunction with the other. One’s life becomes a co-authored work.
The love one feels for a young child may in some ways be purer and more unconditional than conjugal love. One expects nothing back from a young child. One needn’t share ideals to enjoy parental love. The child will grow away into his or her own separate life, independent of the parents’ preferences.
Conjugal love, because it involves the collaborative construction of a joint life, can’t be unconditional in that way. If the partners don’t share values and a vision, they can’t steer a mutual course. If one partner develops a separate vision or does not openly and in good faith work with the other toward their joint goals, conjugal love is impossible and is, at best, replaced with some more general type of loving concern.
Nonetheless, to dwell on the conditionality of conjugal love, and to develop a set of contingency plans should it fail, is already to depart from the project of jointly fabricating a life and to begin to develop a set of individual goals and values opposing those of the partner. Conjugal love requires an implacable, automatic commitment to responding to all major life events through the mutual lens of marriage. One cannot embody such a commitment if one harbors persistent thoughts about the contingency of the relationship and serious back-up plans.
There may be an appearance of paradox in the idea that conjugal love requires a lifelong commitment without contingency plans, yet at the same time is conditional in a way parental love is not. But there is no paradox. If one believes that something is permanent, one can make lifelong promises and commitments contingent upon it, because one believes the contingency will never come to pass. This then, is the significance of the marriage ceremony: It is the expression of a mutual unshakeable commitment to build a joint life together, where each partner’s commitment is possible, despite the contingency of conjugal love, because each partner trusts the other’s commitment to be unshakeable.
A deep faith and trust must therefore underlie true conjugal love. That trust is the most sacred and inviolable thing in a marriage, because it is the very foundation of its possibility. Deception and faithlessness destroy conjugal love because, and exactly to the extent that, they undermine the grounds of that trust. For the same reason, honest and open interchange about long-standing goals and attitudes stands at the heart of marriage.
Passion alone can’t ground conjugal trust. Neither can shared entertainments and the pleasure of each other’s company. Both partners must have matured enough that their core values are stable. They must be unselfish enough to lay everything on the table for compromise, apart from those permanent, shared core values. And they must be shorn of the tendency to form secret, individual goals. Only to the degree they approach these ideals are they worthy of the trust that makes conjugal love possible.
Monday, September 01, 2008
The philosophical distinction between sense and reference, which seems straightforward with a small handful of examples like "Cicero" and "Tully" or "the Morning Star" and "the Evening Star", is what underlies the idea that two domains of discourse can describe exactly the same entity. But too much focus on those examples has made it misleadingly easy to assume that the same individual described under two different descriptions would always have identical causal powers. From this it has been easy to conclude that describing a brain as a group of molecules would take nothing away from its causal powers, and that describing a mind as a brain would take nothing away from its causal powers. When we move away from the paradigm cases of Cicero and Venus, however, this is no longer obvious. Consider the following dialogue:
A: Is it true that Socrates' death was caused by drinking a cup of hemlock?
B: It is true.
A: Is it true that Xantippe's becoming a widow was also caused by his drinking the hemlock?
B: That is also true.
A: Why were both of these events caused by drinking the hemlock?
B: Because they were the same event, so talking about "both events" is not really correct.
A: Consider a different example: the cup's being empty and Socrates' death were both caused by his drinking the hemlock, yet those are two separate events. That is a very different case from Socrates' death and Xantippe's becoming a widow, is it not?
B: Without a doubt. Socrates' death and Xantippe's becoming a widow are really two different descriptions of the same event, not two different events.
A: Excellent. We must therefore conclude that we could have saved Socrates' life by having him divorce Xantippe.
Here's one way of describing what's wrong with this conclusion. Each of these different descriptions presupposes a different causal nexus that makes it happen. Consequently, certain attributes will be genuinely causal under one description of an event, and only epiphenomenal under another description. Epiphenomenal properties are just "along for the ride" and have no causal powers of their own. Under the description "Socrates' death", the fact that Socrates is married to Xantippe is epiphenomenal. Under the description "Xantippe's becoming a widow", the fact that Socrates is married to Xantippe is causal. Thus under the first description, we can have a causal impact on the event only by saving Socrates' life. Under the second description we can have a causal impact on the event by either saving his life or having him divorce Xantippe.
Physical descriptions and mental descriptions outline different nexuses of responsibility, and therefore we can never completely substitute one for the other, even when they both refer to the same events. Physical causes are not the only "real" causes, and mental causes are not dismissible as mere epiphenomena. Under physical descriptions, physical attributes are genuinely causal, and mental attributes are epiphenomenal. But under mental descriptions, physical attributes are epiphenomenal and mental attributes are genuinely causal.
Let us assume that P is a neurological event taking place in a brain. Let us replace P with another physical event Q that takes place in a silicon module newly installed in Jones' brain, and which now performs the exact same functional role as did the neurological event P. Because the silicon event Q is functionally identical to the neurological event P, we still get mental state M resulting from Q just as we got it from P. This means that with respect to M's coming into being, the difference between P and Q is epiphenomenal, because it is only physical, and that physical difference has no causal effect on whether M occurs or not. Similarly, if Socrates had taken the hemlock in the public square, rather than in the prison, he would still have died. In the same way, when we change neurological state P to silicon state Q we still get M, and therefore the physical characteristics that differentiate P from Q are epiphenomenal with respect to the mental processes.
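The functional step of this argument can be pictured with a toy program. (This is my own illustrative sketch, not part of the original thought experiment; the class and function names are made up.) Two physically different "events" that play the same input-output role yield exactly the same downstream state M, so the difference in substrate is causally idle with respect to M:

```python
class NeuralEventP:
    """Stands in for the neurological event P."""
    substrate = "neurons"            # the physical detail that differs

    def fire(self, stimulus):
        return stimulus * 2          # the shared functional role

class SiliconEventQ:
    """Stands in for the silicon event Q, functionally identical to P."""
    substrate = "silicon"            # the physical detail that differs

    def fire(self, stimulus):
        return stimulus * 2          # same input-output mapping as P

def mental_state_M(event, stimulus):
    # M depends only on the event's functional role, not its substrate
    return event.fire(stimulus)

# The substrates differ, yet M is unchanged: with respect to M,
# the physical difference between P and Q makes no difference.
assert NeuralEventP.substrate != SiliconEventQ.substrate
assert mental_state_M(NeuralEventP(), 3) == mental_state_M(SiliconEventQ(), 3)
```

Of course, nothing philosophical hangs on the code itself; it merely makes vivid what "functionally identical" means in the argument above.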
Friday, August 29, 2008
The great philosophical "dialogues" are, of course, hardly dialogues. One voice is that of the philosopher, the others are foils of varying degrees of density and compliance. Large stretches of Plato's dialogues are merely expository essays with "It is so, Socrates" (or similar) regularly tossed in. In his Dialogues Concerning Natural Religion, Hume gives his foil Cleanthes a bit more philosophy than is usual, but the crucial final two parts, XI and XII, are Philo's almost alone.
In my view, this is merely an accident of the mundane realities of writing and publishing. Nothing prevents the compelling presentation of more than one side of an issue in a dialogic format. Normally, though, this will require authors with divergent views and an ability to work co-operatively with each other. My recent experience writing in this way with Russ Hurlburt (in our recent book) has convinced me that this can be a very useful method both for the authors and for readers. There's nothing like genuinely engaging with an opponent.
The dialogue is very different from the pro and con essay with replies. The dialogue has many conversational turns; the essay and replies no more than four. The dialogue invites the reader to a vision of philosophy as collaborative and progressive, with the alternative views building on each other; the pro and con essay invites a combative vision. The dialogue is written and re-written as a whole to cast each view in its best light given what emerges at the end of the dialogue, eliminating mere confusions and accommodating changes of view.
David Lewis published a couple of genuine dialogues on holes. John Perry has published delightful introductory dialogues on personal identity and on good and evil (though Perry is summarizing existing arguments rather than developing new ones). Surely there are other good exceptions, but they are rare.
I wonder how different philosophy would be -- and better -- if the standard method were to meet one's opponents and hash out a dialogue rather than to write a standard expository essay....
Tuesday, August 19, 2008
I've a couple more thoughts to share from Josh Rust's and my study of the voting rates of ethicists and political philosophers vs. other professors. (Our general finding is that ethicists and political philosophers vote no more often than other professors, though political scientists do vote more often.)
(1.) Take a guess: Do you think extreme views about the importance or pointlessness of voting will be overrepresented, underrepresented, or proportionately represented among political scientists and political philosophers compared to professors more generally? My own guess would be overrepresented: I'd expect both more maniacs about the importance of voting and more cynics about it among those who study democratic institutions than among your average run of professors.
However, the data don't support that idea. The variance in the voting rates of political scientists and political philosophers in our study is almost spot-on identical to the variance in the voting rates of professors generally. Either political scientists and political philosophers are no more prone to extreme views than are other professors, or those extreme views have no influence on their actual voting behavior.
(2.) California professors are incredibly conscientious about voting in statewide elections. Half of our sample is from California, where we only have data for statewide elections. Among California professors whose first recorded vote is in 2003 or earlier, a majority (52%) voted in every single one of the six statewide elections from 2003-2006. 72% voted in at least five of the six elections. This compares with a statewide voting rate, for the June 2006 primary election alone, of only 33.6% of registered voters. (For other states, we have local election data too. There's no such ceiling effect once you include every single local ballot initiative, city council runoff election, etc.; professors aren't quite that conscientious!)
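The ceiling-effect arithmetic above is easy to reproduce. Here is a small sketch (the per-professor counts below are invented purely for illustration, not our actual data) showing how the "all six" and "five of six" proportions, and the variance, would be computed from a list of how many of the six elections each professor voted in:

```python
import statistics

# Hypothetical counts of elections voted in (out of 6), one per professor.
# These numbers are made up for illustration only.
votes = [6, 6, 6, 5, 6, 5, 4, 6, 5, 6, 3, 6]

all_six = sum(1 for v in votes if v == 6) / len(votes)
five_up = sum(1 for v in votes if v >= 5) / len(votes)

print(f"voted in all six elections: {all_six:.0%}")
print(f"voted in at least five:     {five_up:.0%}")
print(f"variance in counts:         {statistics.pvariance(votes):.2f}")
```

With real data, comparing such variances across groups is how one checks point (1) above: similar variance means extreme voters and non-voters are no more common in one group than the other.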
Saturday, August 16, 2008
A friend asked me today about philosophical humor. Of course there are philosophical jokes that play on our jargon (e.g., a "goy" is a girl if observed before time t and a boy if observed after; compare "grue"), but are there philosophical jokes with a deeper point? For some reason the two examples that leapt to mind were both from the Daoist tradition. Their similarity is, I'm sure, not at all accidental.
When Chuang-tzu [a.k.a. Zhuangzi, 4th c. B.C.E.] was dying, his disciples wanted to give him a lavish funeral. Said Chuang-tzu:
"I have heaven and earth for my outer and inner coffin, the sun and the moon for my pair of jade disks, the stars for my pearls, the myriad creatures for my farewell presents. Is anything missing in my funeral paraphernalia? What will you add to these?"
"Master, we are afraid that the crows and kites will eat you."
"Above ground, I'll be eaten by the crows and kites; below ground, I'll be eaten by the ants and molecrickets. You rob one of them to give to the other; how come you like them so much better?" (Graham 1981 trans., p. 125)
And:
Liu Ling [3rd c. C.E.] always indulged in wine and let himself be uninhibited. Sometimes he would take his clothes off and stay in his house stark naked. When people saw this, they criticized him. Ling said: "I take Heaven and Earth as my pillars and roof, and the rooms of my house as my trousers. Gentlemen, what are you doing by entering my trousers?" (Goldin 2001, p. 117)
I suppose it's natural for the anticonventional Daoists to use humor to help knock loose people's presuppositions. Interestingly, though, the best-known Daoist, Laozi, doesn't employ much humor. In this respect, as in many others, the tone and spirit of Laozi and Zhuangzi differ immensely, despite the superficial similarity of their views.
[Note: Revised and updated Aug. 17.]
Monday, August 11, 2008
The following thought experiment ended up in my online paper “The Hard Problem Is Dead, Long Live the Hard Problem”.
It was first sent out to the Cognitive Questions mailing list, and received the following replies from a variety of interesting people.
Let us suppose that the laboratories of Marvin Minsky and Rodney Brooks get funded well into the middle of the next century. Each succeeds spectacularly at its stated goal, and completely stays off the other's turf.
The Minskians invent a device that can pass every possible variation on the Turing test.
It has no sense organs and no motor control, however. It sits stolidly in a room, aware only of what has been typed into its keyboard. Nevertheless, anyone who encountered it in an internet chatroom would never doubt that they were communicating with a perceptive, intelligent being. It knows history, science, and literature, and can make perceptive judgments about all of those topics. It can write poetry, solve mathematical word problems, and make intelligent predictions about politics and the stock market. It can read another person's emotions from their typed input well enough to figure out which topics are emotionally sensitive, and it artfully changes the subject when that would be best for all concerned. It makes jokes when fed straight lines, and can recognize a joke when it hears one. And it plays chess brilliantly.
Meanwhile, Rodney Brooks' lab has developed a mute robot that can do anything a human artist or athlete can do.
It has no language, neither a spoken language nor an internal language-of-thought, but it uses vector transformations and other principles of dynamic systems to master the uniquely human non-verbal abilities. It can paint and make sculptures in a distinctive artistic style. It can learn complicated dance steps, and after it has learned them it can choreograph steps of its own that extrapolate creatively from them. It can sword-fight against master fencers and often beat them, and if it doesn't beat them it learns their strategies so it can beat them in the future. It can read a person's emotions from her body language, and change its own behavior in response to those emotions in ways that are best for all concerned. And, to make things even more confusing, it plays chess brilliantly.
The problem that this thought experiment seems to raise is that we have two very different sets of functions that are unique and essential to human beings, and there seems to be evidence from Artificial Intelligence that these different functions may require radically different mechanisms. And because both of these functions are uniquely present in humans, there seems to be no principled reason to choose one over the other as the embodiment of consciousness. This seems to make the hard problem not only hard, but important. If it is a brute fact that X embodies consciousness, this could be something that we could learn to live with. But if we have to make a choice between two viable candidates X and Y, what possible criteria can we use to make the choice?
For me, at least, any attempt to decide between these two possibilities seems to rub our noses in the brute arbitrariness of the connection between experience and any sort of structure or function. So does any attempt to prove that consciousness needs both of these kinds of structures. (Yes, I know I'm beginning to sound like Chalmers. Somebody please call the Deprogrammers!) This question seems to be in principle unfalsifiable, and yet genuinely meaningful. And answering a question of this sort seems to be an inevitable hurdle if we are to have a scientific explanation of consciousness.
Friday, August 08, 2008
In a forthcoming article, Piercarlo Valdesolo and David DeSteno tried the following experiment: In part one, participants were faced with the possibility of doing one of two tasks -- one a brief and easy survey, the other a difficult and tedious series of mathematics and mental rotation problems -- and they were given the choice between two decision procedures. Either they could choose the task they preferred, in which case (they were led to believe) the next participant would receive the other task, or they could allow the computer to randomly assign them to one of the two tasks, since "some people feel that giving both individuals an equal chance is the fairest way to assign the tasks". Perhaps unsurprisingly, 93% of participants chose simply to give themselves the easy task.
In part two, participants were asked to express opinions about various aspects of the experiment, including rating how fairly they acted (on a 7-point scale from "extremely fairly" to "extremely unfairly"). Some participants completed these questions under normal conditions; others completed the questions under "cognitive load" -- that is, while simultaneously being asked to remember strings of seven digits. A third group did not complete part one, but watched a confederate of the experimenter complete it, rating the confederate's fairness.
Again unsurprisingly, people rated the choice of the easy task as more unfair when they saw someone else make that choice than when they made that choice themselves. But here's the interesting part: They did not do so when they had to make the judgment under the "cognitive load" of memorizing numbers.
Consider two possible models of rationalization. On the first model, we automatically see whatever we do as okay (or at least more okay than it would be if others did it) and the work of rationalization comes after this immediate self-exculpatory tendency. On the second model, our first impulse is to see our action in the same light we would see the same action done by others, and we have to do some rationalizing work to undercut this first impulse and see ourselves as (relatively) innocent. The current experiment appears to support the second model.
I suspect that moral reflection is bivalent -- that sometimes it helps drive moral behavior but sometimes it serves merely to dig us deeper into our rationalizations and is actually morally debilitating. It is by no means clear to me now which tendency dominates. (I originally inclined to think that moral reflection was overall morally improving, but my continued reflections on the moral behavior of ethics professors are leading me to doubt this.) Valdesolo and DeSteno's experiment and the second model of rationalization fit nicely with the negative side of the bivalent view: The more we devote our cognitive resources to reflecting on the moral character of our past behavior, the more we tend to make false angels of ourselves.
Tuesday, August 05, 2008
The most decisive criticism of the very idea behind "Intelligent Design" theory (ID) is that it is a "Science Stopper". There is no such thing as evidence either for or against ID, because "God did it" is not an explanation. It is simply a way of filling a gap in our knowledge with an empty rhetorical flourish. If there is a God, he created everything in the Universe, and thus to use this claim as an explanation for a particular occurrence is either trivially false or trivially true.
There are, however, two other science stoppers which are not acknowledged as such.
1) The concept of "direct awareness", so beloved by positivists and other empiricists. To say something is directly given implies that we have no explanation for how we are aware of it. I believe Hume refused to define experience, because its meaning was allegedly obvious. Ned Block made a similar claim for "phenomenal consciousness". In this context, the word "obvious" basically means "any prejudice so widely accepted that no one feels a need to justify it." The prejudice here is that because we are all familiar with experience, it does not require an explanation. However, once scientific explanations became available for how our experience arises, the idea of direct awareness was rejected. The essential point here is that this would have happened regardless of what explanation was discovered: once there is a mechanical cause-and-effect explanation for our experience, it is by definition no longer direct. In much the same way, "God created Life" seemed plausible until Darwin showed us how Life was created; but the God-referring explanation would have been rendered inadequate by any possible causal explanation. Because of the principle of sufficient reason, explanations like "we are directly aware of X" or "God created X" are both science stoppers: placeholders waiting to be replaced by causal explanations. Chalmers apparently rejects the principle of sufficient reason as a metaphysical truth, for he believes that "there is nothing metaphysically impossible about unexplained physical events". If so, what criteria does he suggest we use for dismissing Intelligent Design?
2) The concept of “Intrinsic Causal Powers.” This concept stops us a little further down the road, but stops us nevertheless. Causal explanations always need to see certain properties as intrinsic, so they can map and describe the relations between those intrinsic properties. If causal explanations didn’t stop somewhere and start talking about the relationships between something and something else, they’d never get off the ground. But this pragmatic fact about scientific practice does not justify the metaphysical claim that there are certain causal powers which are “intrinsically intrinsic.” Paradoxically, intrinsicality is itself a relational property. A property which is intrinsic in one science (say chemistry or biology) must be analyzable into a set of relationships in some other science (for example, physics). To deny this is to limit us to descriptions, and stop us from finding explanations.
One brand of physicalism claims that only very tiny particles possess intrinsic powers, but this contradicts another "physicalist" claim that brains have intrinsic powers. Those who believe in the mind/brain identity theory claim that environmental factors may cause experiences, but brain states embody those experiences. This is a fancy way of saying that brains have the intrinsic causal power to produce mental states, just as knives are intrinsically sharp. But saying that knives are sharp is just shorthand for saying that they can participate in bread-cutting events, cloth-cutting events, etc. Talk of sharpness intrinsically inhering in knives, or mental states inhering in brains, is accurate enough for many purposes, but it mires us in an Aristotelian world of dispositional objects which limits scientific progress.
Wednesday, July 30, 2008
In the early to mid-20th century, anglophone philosophy famously took a "linguistic turn". Debates about the mind were recast as debates about the terms or concepts we use to describe the mind; debates about free will were recast as debates about the term or concept "freedom"; debates about knowledge were recast as debates about how we do or should use the word "knowledge"; etc. Wittgenstein famously suggested that most philosophical debates were not substantive but rather confusions that arise "when language goes on holiday" (Philosophical Investigations 38); diagnose the linguistic confusion, dissipate the feeling of a problem. J.L. Austin endorsed a vision of philosophy as often largely a matter of extracting the wisdom inherent in the subtleties of ordinary language usage. Bertrand Russell and Rudolf Carnap set philosophers the task of providing logical analyses of ordinary and scientific language and concepts. By no means, of course, did all philosophers go over to an entirely linguistic view of philosophy (not even Austin, Russell, or Carnap), but the shift was substantial and profound. In broad-sweeping histories, the linguistic turn is often characterized as the single most important characteristic or contribution of 20th century philosophy.
I view the linguistic turn partly sociologically -- as driven, in part, by philosophy's need to distinguish itself from rising empirical disciplines, especially psychology, as departmental and disciplinary boundaries grew sharper in anglophone universities. The linguistic turn worked to insulate philosophical discussion from empirical science: Psychologists study the mind, we philosophers study in contrast the concept of the mind. Physicists study matter, we study in contrast the concept of the material. "Analytic" philosophers could thus justify their ignorance of and disconnection from empirical work.
This move, however, could only work when psychology was in its youth and dominated by a behaviorist focus on simple behaviors and reinforcement mechanisms (and, even earlier, introspective psychophysics). As psychology has matured, it has become quite evident -- as indeed it should have been evident all along -- that questions about our words and concepts are themselves also empirical questions and so subject to psychological (and linguistic) study. This has become especially clear recently, I think, with the rise of "experimental philosophers" who empirically test, using the methods of psychology, philosophers' and ordinary folks' intuitions about the application of philosophical terms. (In fact, Austin himself was fairly empirical in his study of ordinary language, reading through dictionaries, informally surveying students and colleagues.)
A priori, armchair philosophy is thus in an awkward position. The 20th-century justification for analytic philosophy as a distinct a priori discipline appears to be collapsing. I don't think a prioristic philosophers will want to hold on much longer to the view that philosophy is really about linguistic and conceptual analysis. It's clear that psychology and linguistics will soon be able to (perhaps already can) analyze our philosophical concepts and language better than armchair philosophers. Philosophers who want to reserve a space for substantive a priori knowledge through "philosophical intuition", then, have a tough metaphilosophical task cut out for them. George Bealer, Laurence BonJour, and others have been working at it, but I can't say that I myself find their results so far very satisfying.