Part of what it means to have an emotion is to regard an object in a certain way. To fear an activity, for example, is to regard it as a threat to your interests. To be indignant about someone's behavior is to regard her as committing an injustice. But this usage of "regard" is ambiguous. If I regard something as dangerous, it might mean that I believe it to be dangerous. But it could also mean that it just seems dangerous to me. Often what seems to be the case is also what we believe to be the case, but sometimes these two things come apart. A popular example is the perceptual illusion created when the moon is low on the horizon. It seems bigger than usual, but most of us don't believe it's bigger.
These days most philosophers working on the emotions prefer the believing version of regarding. One thing going against it, though, is that we have emotional responses that don't match up with our beliefs. A good example (which I steal shamelessly from Michael Stocker) is the fear of flying: we don't really believe flying is dangerous. In fact most of us know that it's safer than driving. But we fear it all the same, and that's probably because it seems dangerous to us, despite our acceptance of the fact that it's safe.
Now let me apply this to an issue in historical moral psychology. I spend a lot of time reading the Neo-Confucian philosophers, who wholeheartedly embrace an account of the emotions as constituted by thoughts and judgments. For a long time I (like most scholars in my line of work) assumed they were cognitivists in the more familiar "believing" sense. Recently I've come to realize that they also make room for cognitivism in the "seeming" sense. In fact, the purpose of moral education as they understood it was to make us more reliant on emotional appearances (seemings) than on emotional beliefs. The beliefs just "second" the emotional appearances. Here's why.
When we think about harmless perceptual illusions like the appearance of the moon on the horizon, it's evident that beliefs tend to be more reliable than appearances. But in matters of moral significance the situation is often reversed. Moral beliefs tend to be more susceptible to rationalization and self-deception than moral appearances. Admittedly moral appearances also get things wrong--visceral disgust often plays a crucial role in the moral condemnation of entire classes of people (think of the initial disgust elicited by foreign eating habits or different sexual practices). But while both beliefs and appearances are unreliable, one of these problems is more intractable than the other. It doesn't take much exposure to overcome our visceral disgust at unfamiliar things. But the tendency to rationalize self-serving ends is a permanent feature of the human condition. When given the chance, successful revolutionaries usually turn into unapologetic dictators.
On my reading the Neo-Confucians thought that emotions were constituted by both appearances and beliefs. But unlike many moral sense theorists they thought we were better off relying on the former. The latter will never go away, but they can be shut out by acting on our more spontaneous feelings (unlike most classical Greek and Chinese virtue ethicists, the Neo-Confucians were attracted to accounts of the moral self as permanently divided between its good and bad parts). I think there is some truth to this, even if I'm not willing to give up entirely on belief-based emotional responses.
Wednesday, May 30, 2007
Monday, May 28, 2007
Friday, May 25, 2007
On Old Friends Looking Young
What do I see when I look at a friend's face?
When I was a graduate student at U.C. Berkeley in the 1990s, two friends (call them Jack and Rob) came up unexpectedly for a visit. But only one, Jack, showed up at my door. He invited me out for coffee, and we immediately walked downtown. As always, a number of people were loitering around outside the Berkeley subway station. Jack pointed at one and said, "Doesn't that guy look a bit like Rob to you?" I didn't think so: That guy looked shabby, fat, and old. Rob, of course, was none of these things. The catch, of course, was that it was Rob. (Those pranksters!)
As I recall it now, it seemed to me that a few seconds later, when suddenly I recognized Rob for who he was, his appearance literally changed before my eyes. He actually looked thinner, neater, younger, more handsome. (Philosophers of language may be reminded of Ernst Mach's famous example, much discussed by John Perry, of seeing himself in the mirror of a bus and wondering about that shabby pedagogue.)
When I see a friend for the first time in ten years, it's rare for that friend not to look disappointingly old, but the friends I see every day seem to retain their youthful faces. It's an unsurprising psychological fact that we like people we find handsome and find handsome the people we like. But is this just having a positive attitude about a face we see in all its flabby wrinkles every day? Or do we actually smooth over those wrinkles, as it were, in our visual experience? We don't need to see the details of our friends' faces much, I suppose -- at least those details irrelevant to emotion and expression -- so maybe we just "fill in", as it were, with something smooth and regular, or something idealized?
Though Dennett might disagree, it seems to me there's a difference between simply ignoring warts, moles, and wrinkles that are part of our visual experience of our friends' faces and not visually experiencing those warts, moles, and wrinkles at all. Do I have an accurate picture of my friend before my mind's eye, as it were, or is the picture touched up? Though I can't put much weight on a distant memory of a subjective experience, my encounter with Rob inclines me toward the latter. In that moment of recognition, my visual experience was transformed.
Update, 9:26 PM. This picture, grabbed off The Situationist, seems pertinent!
Wednesday, May 23, 2007
Finding the Blame in Survivor Guilt (by guest blogger Justin Tiwald)
On many standard accounts of the moral feelings, guilt is distinguished from other kinds of self-inflicted anguish by a belief in one's own culpability. I feel guilty when I believe I played a causal role -- one I could have prevented -- in bringing about a wrong state of affairs. Without this belief, my feeling might be better described as regret, shame, or embarrassment.
The phenomenon of survivor guilt throws something of a monkey wrench into this view. Survivors of fatal plane crashes often report feeling guilty about their survival, especially when it seems like a matter of luck that they lived and others didn't. Many survivors admit that they couldn't have done anything to prevent the crash, but they describe their feelings as guilt all the same. Are they right to do so?
One response is to say that such survivors are mistaken, or that the guilt they feel is of a different kind than the thief's or the murderer's. Another response is to attribute their guilt to a hidden or unconscious belief in their own culpability. I'm not satisfied with either of these answers. The lesson I take from survivor guilt is that we should characterize the cognitive elements of guilt in a broader and more textured way. What makes self-imposed anguish an instance of guilt isn't the belief that one is culpably wrong, but rather the entire family of reactions associated with that belief. And here's the key move: we can have these characteristic reactions to the belief without the belief itself.
When we think ourselves responsible for something we judge to be wrong, we typically respond in a number of ways. If I steal the laptop computer that I've always wanted, the laptop will quickly lose its luster for me. Even if I resolve not to atone for my wrong, I will nevertheless find myself imagining various ways of atoning. Similarly, survivors report that living feels like a shameful burden rather than a stroke of good fortune. They often think that they owe something to the dead or their families, and even report that they feel like better people once they find ways of making amends.
We don't always need the belief in our own culpable wrongness in order to motivate these stereotypical reactions. In fact we have a readily identifiable set of psychological mechanisms--conscience--that routinely replicates such reactions without that underlying belief. For example, conscience is often more responsive to the brute fact of human or animal suffering than to careful considerations of moral principles or calculations of consequences. We'd hope, of course, that our consciences would be more responsive to our considered judgments than this, but this isn't usually the case. I think the same tendency to bypass judgments of culpable wrongness is at work when someone feels guilty about being the sole survivor of a fatal accident. Faced with a traumatic event, judgments of responsibility go out the window.
So guilt is better distinguished by the thoughts and tendencies characteristic of someone who believes herself culpably wrong, and not helpfully distinguished by the belief in culpable wrongness itself. Our mistake is in thinking we can't have the former without the latter, a mistake that the psychology of conscience can easily correct. I wouldn't be surprised if we could tell a similar story about indignation, shame, or any of a number of other moral feelings.
Monday, May 21, 2007
Should Philosophers Belong to the APA?
Philosophers active in the profession -- if they are employed full-time in a U.S. university, especially a high-profile university with a Ph.D. program in philosophy -- have, I think, some obligation to the philosophical community to support and nurture the profession. One may fulfill this obligation in part by doing things like refereeing essays, chairing sessions, serving on committees, and paying dues to professional organizations, including minimally the American Philosophical Association. The APA in particular forms committees and publishes newsletters and proceedings pertinent to issues in the profession, organizes three annual conferences, supports and publicizes awards, and provides an admirably well-organized and equitable structure for advertising positions in philosophy (including keeping a list of censured institutions).
Some active U.S. philosophers, I'm sure, have good reasons not to belong. But I'm inclined to think that at least for leading members of the profession, it's a small, defeasible wrong not to belong -- a bit of freeloading or a small lapse of generosity, perhaps. (Of course, we all have our lapses in one arena or another!)
Even if there is no obligation -- not even a weak and defeasible one -- to be a member, it still seems reasonable to suppose that ordinarily, and all else being equal, supporting the organization financially through one's dues and membership is a good thing, if you are an active, prominent, full-time member of the profession.
Or are decisions about membership simply decisions of prudence, so that if you personally derive no benefits from being a member (you can do without the newsletters and proceedings, you are willing to pay the slightly higher registration fees when you go to meetings, you don't care to serve on any committees) paying membership fees is only foolish, like buying a pair of shoes you'll never use?
(Let's say these reflections are apropos of the following statement in the On-Line Philosophy Conference introduction:
Finally, we are very pleased to announce that Professors McMahan and Sosa have generously offered to donate their keynote honorariums to charity. This year the charities selected by the OPC keynote speakers are Amnesty International, Oxfam, and The American Philosophical Association. Please follow their generous lead and donate what you can. If nothing else, treat it as an inexpensive conference registration fee! We have provided links in the sidebar to this year's official charities. We hope that with your assistance we can start a charitable tradition here at the OPC, and we thank both Professors McMahan and Sosa for laying the groundwork!)
Friday, May 18, 2007
Sympathy and Self-Love (by guest blogger Justin Tiwald)
The philosopher I work on, Dai Zhen, maintains that sympathy by its very nature requires a strong interest in one's own good. Here is my short synopsis of his argument, pieced together from comments and character glosses. See what you think.
To sympathize with someone, I must care about her for her sake. But this "for her sake" is a tricky concept. It doesn't really count if my concern for someone is grounded in any of her particular properties. Let's say that I care about someone named Mary who is also a great sociologist and an amazing Frisbee golfer. Although this combination of traits may be unique, in principle some other Frisbee golfing sociologist could fit the bill just as well. We wouldn't want a form of care in which one object of concern could be substituted for another. Therefore, we want a form of care that is largely independent of her particular virtues or appealing characteristics.
If concern for Mary's sake must be independent of these properties, then in some sense it has to be unconditional. Or at least it has to have a great deal of counterfactual resiliency. I should care about Mary even if she were a very different kind of person.
Achieving concern of this more unconditional variety is much more difficult than it appears, especially when the object of concern is someone with whom we are barely acquainted. To be able to sympathize with just anyone we have to be capable of appreciating her relevant feelings and desires even if we find them appalling or strange, and we have to consider her suffering regrettable even if we think it deserved or necessary. The best way to do this is to imagine ourselves in the stranger's place, wanting her good as we want our own. There is nothing that comes more naturally than caring about our own well-being as such, and if we imagine ourselves as the stranger, we'll be much more likely to recapture the depth and unconditionality of our own self-concern.
Now to an interesting historical point. In form, this argument runs along some of the same lines as a familiar Confucian argument for preferential love. Very roughly, the Confucian argument is that we show the requisite concern for strangers by building on the natural concern we already have for our parents and siblings. We then model that more natural concern in our interactions with outsiders. However, familial love by its very nature requires that we play favorites--if I cared about strangers as much as I do my brother then my care for my brother wouldn't be familial love. Thus, for the sake of having the right kind of concern for strangers, I must care even more about my family. This might seem unfair to the strangers, but it's just the price we must pay so that I can have the right kind of concern at all.
Dai Zhen takes it one step further. The real foundation for other-directed concern is not familial love but self-love. Self-love is unconditional in a way that natural familial love is not. Children will cease to love their parents if the parents are complete monsters, but we care about ourselves no matter what (even if we think ourselves unworthy). Whenever we are sympathetically concerned for others we are emulating and building on the concern we have for ourselves. Thus (in some very qualified sense) the self comes first. This might seem unfair to everyone else, but this is just the price we must pay in order to have the right kind of concern at all.
[Update: ES, May 22, 8:53 a.m.: Somehow the comments on this post were disabled. I have re-enabled them.]
Wednesday, May 16, 2007
The Two Envelope Paradox
In 1993, when I was a graduate student, fellow student Josh Dever introduced me to a simple puzzle in decision theory called the "exchange problem" or the "two envelope paradox". It got under my skin.
You are presented with the choice between two envelopes, Envelope A and Envelope B. You know that one envelope has half as much money as the other, but you don't know which has more. Arbitrarily, you choose Envelope A. Then you think to yourself, should I switch to Envelope B instead? There's a 50-50 chance it has twice as much money and a 50-50 chance it has half as much money. And since double or nothing is a fair bet, double or half should be more than fair! Using the tools of formal decision theory, you might call "X" the amount of money in Envelope A and then calculate the expectation of switching as (0.5)(0.5X) + (0.5)(2X) = (5/4)X. So you switch.
Of course that's an absurd result. You have no reason to expect more from Envelope B. Parity of reasoning -- calling "Y" the amount in Envelope B -- would yield the result that you should expect more from Envelope A. Something has gone wrong.
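To make the symmetry vivid, here is a small sketch of my own (not from the original discussion) that simply enumerates the equally likely cases instead of plugging a single "X" into the expectation formula. For each base amount x, the envelope pair is (x, 2x), and Envelope A is equally likely to hold either member of the pair:

```python
# Enumerate the equally likely assignments rather than reasoning
# with a single variable "X" for the contents of Envelope A.
def expected_values(base_amounts):
    keep_total = switch_total = 0.0
    cases = 0
    for x in base_amounts:
        # Envelope A holds the smaller or the larger amount,
        # each with probability 1/2.
        for in_a, in_b in [(x, 2 * x), (2 * x, x)]:
            keep_total += in_a      # stick with Envelope A
            switch_total += in_b    # switch to Envelope B
            cases += 1
    return keep_total / cases, switch_total / cases

keep, switch = expected_values([10, 20, 40])
print(keep, switch)  # 35.0 35.0 -- switching gains nothing
```

One way to see where the informal argument goes astray: "X" silently names two different amounts in the two branches of the calculation (the larger amount in the case where B is smaller, the smaller amount in the case where B is larger), so the formula (0.5)(0.5X) + (0.5)(2X) never describes any single gamble you actually face.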
But what exactly has gone wrong? I've never seen a satisfying answer to this question. Various authors, like Frank Jackson and Richard Jeffrey, have proposed constraints on the use of variables in the expectation formula, constraints that would prevent the fallacious reasoning above. However such constraints are impractically strong, since they would also forbid intuitively valid forms of reasoning such as: If I have to choose between (i.) a gift from Person A and (ii.) a coinflip determining whether I get a gift from Person B or Person C, and I believe that Person A would, on average, give me about twice as much money as Person B and half as much as Person C, I should take option (ii).
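The gift case above can be checked with simple arithmetic, and the check shows why that reasoning is legitimate where the envelope reasoning is not: each symbol names one fixed expected value. The dollar figures below are my own illustrative assumptions:

```python
# Illustrative averages: Person A gives, on average, twice what
# Person B gives and half what Person C gives.
mean_b = 10.0
mean_a = 2 * mean_b                        # 20.0
mean_c = 2 * mean_a                        # 40.0

option_i = mean_a                          # take A's gift outright
option_ii = 0.5 * mean_b + 0.5 * mean_c    # fair coin picks B or C
print(option_i, option_ii)  # 20.0 25.0 -- option (ii) wins
```

Unlike the envelope argument, each of mean_a, mean_b, and mean_c refers to a single well-defined quantity throughout, so a blanket ban on variables in the expectation formula would wrongly rule this calculation out too.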
Terry Horgan and Charles Chihara have proposed less formal constraints on the use of variables in such cases, constraints that I find difficult to interpret and which I'm not sure would consistently forbid fallacious calculations (for example in non-linear cases).
Many mathematicians and decision theorists have written interestingly on what happens after you open the envelope and see an amount. For example, could there be a probability distribution according to which no matter what amount you see, you should switch? That's a fun question, but I'm interested in the closed-envelope case, in diagnosing what is wrong in the simple reasoning above. No one, I think, has got the diagnosis right.
For Josh Dever's and my stab at a solution, see here (simplified version) or here (more detailed version).
For a list of on-line essays on this topic, see this Wikipedia entry. (This entry gives Josh and me credit for "the most common solution" -- does this mean that our unorthodoxy has become the new orthodoxy? -- and then shifts focus to the open envelope version.)
Monday, May 14, 2007
The Second On-Line Philosophy Conference...
is here. Check it out!
(I have commented on Shaun Nichols's essay on the motivations of compatibilism about freedom and determinism, developing some of my ideas from last Friday's post on the psychology of philosophy.)
How Selfless Can We Be (and Still Care about Others)? (by guest blogger Justin Tiwald)
Let's say that I am a person who cares very little about his own well-being. I am content with my humble job and my austere apartment. But let's say I also aspire to be the sort of person who sympathizes with a friend when she loses her good job and her family home. Is it possible to be both of these things at once?
I take sympathy to require, among other things, an ability to simulate the significant thoughts and feelings that the friend would have in her particular circumstances. This generally requires the possession of relevantly similar desires (even if not desires for exactly the same types of objects or states of affairs). And this poses a problem for the person who wants very little for himself, especially if good jobs and homes are among those things that he doesn't want.
This sort of worry emerges from time to time in the literature on Zhu Xi (1130-1200). Like many strong proponents of selfless character, Zhu Xi wants to have his cake and eat it too: he'd like his ideal moral agents to have very little interest in their own well-being and yet be capable of great compassion. Zhu's defenders usually respond by saying that he isn't so strong a proponent of asceticism as one might think. They point to many overlooked passages in which he explicitly countenances desires for basic human goods like food and family. Put these together, they conclude, and we could well have desires for a reasonably good life.
I've never been satisfied with this move. Defenders of Zhu Xi are right to point out that his ideal moral agent desires things like food and family, but they don't pay sufficient attention to why she desires them. It's one thing if she desires them because they make her life better, but it's another thing entirely if she desires them independently of their contribution to her life. In the first case she desires things that benefit her under that description. In the second case she desires things that happen to benefit her. Zhu Xi permits us to desire things that happen to be good for us, but, he warns, we better not want them because they are good for us.
This strikes me as omitting the largest share of the human good. When my friend loses her home and career, surely a substantial part of her anguish depends upon the thought that her life has taken a turn for the worse. In general, most people want their lives to go well. Knowing that one's life is on an upward trajectory is itself a source of great satisfaction, and knowing that it is not is itself a source of despair. If I am so selfless as to be entirely without desires that my life go well, I'm not going to be particularly good at feeling the pain of those who do.
Many proponents of moral selflessness turn out to be ascetics of the more subtle kind that I find in Zhu Xi. While they might appear to condemn all desires for outcomes that are self-serving, on closer examination they turn out to condemn primarily those desires that are conscientiously self-serving. This characterization of the good moral agent strikes me as much more realistic, but it still falls well short of what is required for robust sympathetic concern.
Friday, May 11, 2007
The Psychology of Philosophy
As an undergrad, my favorite philosophers were Nietzsche, Zhuangzi, and Paul Feyerabend -- all critics of High Reason, elegant rhetoricians against seeing philosophy (and science) as dispassionate intellection of the one truth. Why was I drawn to them -- to each of them from the first page, almost? Was it a nuanced appreciation of the arguments and counterarguments? Of course not. Rather, it was a psychological urge: Something in me rebelled against tyrant reason. I wanted to see it get its comeuppance. (Was this partly because I was so attracted to reason, almost painfully intellectual as a kid?)
My 1997 dissertation, I realized only in retrospect, had four parts, each of which was a rebellion, too: The first part (on infant and animal belief) attacked the views of Donald Davidson, probably the most eminent philosopher at my graduate institution. Each of the other three parts assaulted the views of one of my dissertation advisors (against Gopnik's treatment of representation, against Lloyd's treatment of theories, against Searle's treatment of belief). Coincidence? Fortunately, they were a tolerant lot!
My interests in philosophy have traced a crooked course, from Nietzsche and Unamuno to skepticism and philosophy of science, to developmental psychology, belief, consciousness, self-knowledge, and moral psychology. However, as I now realize looking back, a central theme in most of these has been an interest in, not just the philosophy of psychology, but the psychology of philosophy. What psychological factors drive philosophers toward certain views and away from others?
There is no broadly recognized subfield called "psychology of philosophy", though much of Nietzsche's best work fits aptly under this heading. Historians of philosophy -- especially those whose home departments are outside of philosophy -- generally recognize the importance of historical and social factors in shaping philosophical views, but few push for a deeper understanding of the psychological factors. Yet surely we could look more closely at such factors. We could use the tools of contemporary psychology (tools unknown to Nietzsche) to help improve our understanding of the field. Why not? Philosophy leaves such a wide latitude for disagreement, and our philosophical impulses -- our attractions to certain types of view and distaste for other views -- play a role so early in our exposure to philosophy, before we can really fairly assess the arguments, that it seems almost undeniable that contingent features of individual psychology must play a major role in our philosophical lives. (This needn't always be a matter of psychodynamic "depth psychology." One theme I find recurring in my work is the role played by unwitting metaphor. For example here and here and here and here.)
I was brought to these reflections reading Shaun Nichols's forthcoming contribution to next week's On-Line Philosophy Conference. Nichols's piece is exactly what I've just endorsed: a piece of psychology of philosophy, using the techniques of empirical psychology to cast light on philosophical motivations.
I'll post a link to that essay and to my commentary (which will contain more discussion of the psychology of philosophy) next Monday, when they are posted.
Wednesday, May 09, 2007
What Does It Mean to Have a Desire? (by guest blogger Justin Tiwald)
We don't normally speak as though having a desire for something implies that we presently feel some inclination to acquire it. It makes sense to say that I have a desire for Thai curry even if I'm currently taking a driving test and not thinking about Thai curry at all. Therefore it's tempting to say that "having a desire" can be cashed out in terms of a fairly straightforward counterfactual. I would have a desire for Thai curry just in case the following is true:
(1.) If I were sufficiently deprived of Thai curry and entertaining the possibility of acquiring it, I would feel an inclination to acquire it.
I don't think (1) does justice to the nuances of desire-possession. Consider another account offered by Xunzi (Hsün Tzu, 3rd Century B.C.E.). On the one hand Xunzi holds that our natural desires are susceptible of being utterly transformed. On the other hand Xunzi also claims that certain inclinations are permanent, such as the eyes' lust for beautiful things. The eyes will always lust for such things when allowed to dwell on them, but it's also true that the eyes' lust can be refashioned into a desire for sights that are consistent with virtue. How is this possible? Consider the following passage:
[The gentleman] makes his eyes not want to see what is not right, makes his ears not want to hear what is not right, [etc.]...He comes to the point where he loves [learning the Way], and his eyes love it more than the five colors, his ears love it more than the five tones, [etc.]...For this reason, power and profit cannot sway him. ("An Exhortation to Learning," Ivanhoe and Van Norden, pp. 260-61.)
A strong claim about the possibility of radical self-transformation, to be sure. But notice that Xunzi isn't suggesting that we can entirely eliminate the disposition to lust for beautiful things when allowed to dwell on them. Rather, the eyes develop a preemptive desire to avoid dwelling on the wrong things in the first place--a power of selective perception. With sufficient reinforcement it no longer makes sense to say that we have a desire for beautiful things as such, even though our eyes would lust for them if our thoughts were allowed to linger on them. This gives us a slightly more nuanced account of having a desire for beautiful things:
(2.) If I were sufficiently deprived of beautiful things and presented with an opportunity to entertain the thought of acquiring them, I would feel an inclination to acquire them.
Of course, I can be presented with an opportunity to entertain the thought of acquiring something without actually entertaining that thought. So on this account I could have a desire for beautiful things in sense (1) without having it in sense (2).
I think (2) sits closer to our usual way of understanding desire-possession. If I allowed myself to dwell on the thought of taking someone's fancy new laptop, I would probably feel an inclination to do so. But it's highly unusual for me to contemplate such a thing. I can sit in a classroom for hours without noticing open bags and backpacks that might have laptops inside. Often students will use their laptops in class and it won't even register. In contrast, a kleptomaniac would be well aware of those open bags, and would need to remind herself that it would be wrong to steal them.
So I have the desire in sense (1), because I would be tempted to acquire the laptop if I thought about it. But I don't have the desire in sense (2), because I don't in fact think about it (unlike the kleptomaniac). For purposes of evaluating moral character, (2) strikes me as the more decisive sense of having a desire.
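The difference between the two senses can be put crisply in a toy model (my own illustration, not anything Xunzi offers): sense (1) requires only that dwelling on the object would produce an inclination; sense (2) requires in addition that, given the opportunity, the agent would in fact dwell on it.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    # Objects the agent would actually entertain thoughts of, given the chance
    would_dwell: set
    # Objects that would tempt the agent if dwelt upon
    inclined_if_dwelling: set

def has_desire_sense1(agent, obj):
    # (1): if the agent entertained acquiring obj, an inclination would follow.
    return obj in agent.inclined_if_dwelling

def has_desire_sense2(agent, obj):
    # (2): given the opportunity, the agent would in fact entertain the
    # thought, and the inclination would follow.
    return obj in agent.would_dwell and obj in agent.inclined_if_dwelling

# The professor never contemplates taking the laptop; the kleptomaniac does.
professor = Agent(would_dwell=set(), inclined_if_dwelling={"laptop"})
kleptomaniac = Agent(would_dwell={"laptop"}, inclined_if_dwelling={"laptop"})
```

On this sketch the professor has the desire for the laptop in sense (1) but not sense (2), while the kleptomaniac has it in both -- which is the distinction the kleptomaniac example turns on.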
Monday, May 07, 2007
Attention, Objects, and Aims
We normally think of attention as a relationship between a person and an object. If you are attending, you're attending to something, that is, to some thing -- a noise, a conversation, an apple.
First problem case: Macbeth hallucinates a dagger. I see a mirage. There is, of course, no dagger and no pool of water. So what thing, what object, do I stand in relation to, as the target of my attention? Some non-existent thing? Some mental thing (an idea, an experience)? If the latter, does it follow that I can't always tell whether my attention is directed outward to the world or inward, as it were, to my own mind? That would be strange.
Not a fatal objection, surely, to an "objectual model" (let's call it) of attention. Defenders of that view will have their resources. But why not, instead, jettison the objectual model and regard attention as the dedication of a certain kind of resource (what we might call "central cognitive resources") to a particular aim or goal? The aim of visual attention is the same in both the mirage case and the case of seeing an ordinary pool of water. The aim is to (for example) determine whether there's water over there, or whether this is really a mirage, or to estimate how long before the car hits the puddle. The mirage case and the visual case can be treated in the same way, without the aid of some ghostly, invented object for me to stand in an attentional relation to.
Consider also other sorts of attention-consuming tasks. Research psychologists have fixated on visual attention (and to some extent auditory attention) almost exclusively in recent decades, but in the early days of introspective psychology people spoke also of "intellectual attention". When you're thinking hard about a math puzzle or when you're contemplating the best route to grandma's house in rush hour, there's a perfectly legitimate sense in which you are devoting (non-sensory) attention to these tasks. Both kinds of tasks consume central cognitive resources. You can't do either very well while also quickly adding a column of numbers or while focusing on a difficult visual task.
But what are the objects I stand in relation to in intellectual attention? The route to grandma's house? Numbers? (What are numbers, anyway?) What if I'm thinking about unicorns? Better to say that I'm trying to do things. Attention is devoted to tasks, not objects. Or consider heavy exercise, holding one's eyes still, and other acts of self control. These tasks, too, consume attentional resources; yet it's not always clear that I am attending to objects (my own body, maybe?) in doing them.
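To make the resource picture a bit more concrete -- this is a toy sketch of my own, not a serious cognitive model -- think of attention as a limited pool of central resources allocated to task descriptions. A task like "determine whether there's water over there" is perfectly well-defined whether or not any water exists, and intellectual tasks fit the same scheme without any object at all:

```python
class Attention:
    """Toy model: attention as a limited pool of central cognitive
    resources devoted to tasks (aims), not objects."""

    def __init__(self, capacity=1.0):
        self.capacity = capacity
        self.allocations = {}  # task description -> share of resources

    def devote(self, task, share):
        # Devoting resources to a task fails once the pool is exhausted,
        # mirroring interference between attention-demanding tasks.
        if sum(self.allocations.values()) + share > self.capacity:
            raise RuntimeError("attentional resources exhausted")
        self.allocations[task] = self.allocations.get(task, 0) + share

mind = Attention()
# The same task exists in the mirage case and the veridical case:
mind.devote("determine whether there's water over there", 0.7)
# Intellectual attention, no object required:
mind.devote("plan the best route to grandma's house", 0.3)
# A further demanding task would now fail for lack of resources:
# mind.devote("quickly add a column of numbers", 0.5)  # would raise
```

Nothing in the model posits a relation to a dagger, a pool of water, or a number; interference falls out of the shared resource pool alone.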
So why do I care about this? Mainly because I think introspection is a species of attention, and that philosophers and psychologists often get introspection wrong because they work with too objectual a model of attention. But more on that in a future post....
(Thanks to Justin Fisher, by the way, for conversation on this point last Friday.)
Friday, May 04, 2007
With Your Eyes Closed, Can You See Your Hand in Front of Your Face?
Puzzlement and confusion:
I close my eyes. I wave my hand in front of my face. It seems as though I can see the motion of my hand. Most people I've asked report the same.
It's possible that I do detect that motion. A certain amount of light penetrates the closed eyelids. I could be detecting differences in lighting as my hand passes before my eyes.
But on the other hand, most people, deep in a cave where there isn't a single photon to pierce the darkness, will report being able to see their hands moving in front of their faces. That this isn't a matter of picking up on visual stimulus is made clearer by our inability in such situations to detect another person's hand waved before our faces. It seems that our knowledge of the movement of our hand is somehow affecting our visual experience, or at least our judgments about our visual experience, without actually causing any visual input.
So: When my eyes are closed and I seem to detect my hand, am I actually visually detecting its motion? Or is what's going on more like what happens in a cave?
Let's do some science. Consciousness studies, in such matters, is still pretty much uncharted territory. Maybe there's something out there on this, but I bet you'd have to dig pretty deep; and then you'd get a few weird articles from 1932 or something, or from a minor Japanese journal in 2001 -- articles that have never been cited, and that have strange, contradictory results. (I don't know this for sure, I'm just conjecturing based on past experience with similar questions.) If so, you can do novel experiments right there in your armchair.
Try facing different directions (toward a light source, away from a light source). Try closing your eyes more tightly, or occluding them with your other hand, or interposing an object between your eyes and your hand. That's what I did at least. I found myself sufficiently puzzled that I dashed downstairs and found a group of loitering undergraduates and had them all do it too! (This probably enhanced my reputation as a kooky professor.)
The results were completely uninterpretable chaos. For example, for myself: I seem to see it more strongly when I face a light source than when I face away. When I close my eyes tightly or put my other hand completely over them, I find myself uncertain about whether I have visual experience conditioned on the motion of my hand. If so, it is fainter. But when I put an occluding object between my eyes and my moving hand, say six inches in front of my face, I do think I still experience the motion of my hand, despite the fact that it can't be affecting me through that occluding object -- or at least that's how it seemed to me before I ran downstairs. I seem to be able to reproduce that effect only inconsistently. Others had different patterns of results.
If you're game to try, I'd be interested to hear your thoughts and experiences. Maybe I'll even work some of them into a presentation I'm hoping to give at the Association for the Scientific Study of Consciousness next month....
Wednesday, May 02, 2007
Virginia Tech: A Thought about the Media Coverage
I'm teaching a class this term on the moral psychology of evil. So far, I've managed not to say a word about the Virginia Tech shootings (yes, there's already a very good Wikipedia entry, with 119 references). I believe that the massive attention given to such events has negative consequences.
There's the obvious negative consequence (mentioned often, hand-wringingly and half self-condemningly, in the press coverage of such events) that excessive attention to these events catapults their perpetrators to a fame they don't deserve. The perpetrator becomes a model; his way of behaving gains salience as a possible way of behaving to others of unbalanced mind; and the promise of comparable notoriety may be appealing to some.
But what I find more troubling is this: Focus on events of this sort encourages an inaccurate and falsely comforting model of evil. By ignoring (or burying on page 12) the hundreds of thousands, maybe millions, killed every year by vile governmental, military, and corporate policies, and by individual, private acts of evil -- by focusing on massacres and suicide bombers instead, we ground our conception of evil in a narrow band of strange cases. In particular, we may be tempted to think of evil as something done by unusual, deranged people (like Cho) or indoctrinated, almost brainwashed, followers of radical religious movements (as most Americans conceptualize suicide bombers).
As Hannah Arendt, Ervin Staub, and many others have made clear, though, most of the evil in the world is not done by such people. Instead, it is done by ordinary folks, like you and me. The assumption that it is not -- that it is done instead by monsters and maniacs -- is comforting because it allows us to hide from recognizing the potential for evil in ourselves.
And for exactly that same reason, that assumption is extremely dangerous.