9 am: Jack enters his office and flips the light switch. Call this event A. It is plausible to think that there's an intentional explanation for A: Jack wants light and believes that flipping the switch will produce it. But light doesn’t come. The bulb goes pop, and Jack sets off to the store cupboard to get a replacement.
9.05 am: Bulb in hand, Jack re-enters his office, and again flips the switch -- then curses his stupidity. Call the second switch-flipping event B. Now what is the explanation for B? More specifically, is the explanation the same as for A, and is it an intentional one?
There are four options, and each has its problems:
1) The explanation is the same and it is intentional: Jack wants light and believes that flipping the switch will produce it. Problem: In the run-up to event B Jack surely doesn't believe that flipping the switch will produce light. After all, he knows that the bulb is blown and that blown bulbs don’t produce light, and he is minimally rational.
2) The explanation is the same and it is not intentional -- perhaps the movement is a reflex one. Problem: Flipping a light switch is just one of a vast array of routine unreflective behaviours for which we find it perfectly natural to give intentional explanations. If these actions are not intentional, then the realm of folk-psychological explanation will be massively reduced, vindicating at least a partial form of eliminativism.
3) The explanation is different and it is not intentional. Problem: It's implausible to think that A and B have different explanations. In a real-life version, I'd be willing to bet that the neurological processes involved in the two cases were of the same type.
4) The explanation is different and it is intentional. Problem: As for (3), plus it's hard to see what alternative beliefs and desires might have motivated B.
This puzzle about belief seems to me an important one, though it has received relatively little attention -- which is why I thought I’d give it an airing here. (One of the few extended discussions I know of is by Christopher Maloney in a 1990 Mind and Language paper titled 'It's hard to believe'. Eric also discusses cases of this sort in his draft paper 'Acting contrary to our professed beliefs'.)
My own view is that the plausibility of the options corresponds to the order in which I have stated them, with (1) being the most plausible. That is, I would deny that at the time of event B Jack doesn't believe that flipping the switch will produce light. The problem then, of course, is to explain how he can believe that the switch will work while at the same time believing that the bulb is blown and that blown bulbs don’t produce light. The only plausible way of doing this, I think, is to distinguish types, or levels, of belief which are relatively insulated from each other, and to claim that Jack's belief about the effect of flipping the switch is of one type and his belief about the condition of the bulb is of the other. (Maloney takes broadly the same line, though he works out the details in a different way from me.) I happen to think that this view is independently plausible, so the puzzle is actually grist to my mill, though distinguishing types of beliefs has its own problems. I'd be interested to know how others react to the puzzle.
Tuesday, July 31, 2007
Monday, July 30, 2007
Religion and Crime
I've been reading the literature on the relationship between religious conviction and crime, as part of my thinking about the relationship between philosophical moral reflection and actual moral behavior. The literature is pretty weak. Much of it seems church-inspired and probably deserves about the same level of credence as drug-company-funded research showing that the company's blockbuster drugs are wonderful. Much of it is in weird journals.
I found a 2001 "meta-analysis" (Baier & Wright) of the literature that shows all the usual blindnesses of meta-analyses. Oh, you don't know what a meta-analysis is? As usually practiced, it's a way of doing math instead of thinking. First, you find all the published experiments pertinent to Hypothesis X (e.g., "religious people commit fewer crimes"). Then you combine the data using (depending on your taste) either simplistic or suspiciously fancy (and hidden-assumption-ridden) statistical tools. Finally -- voila! -- you announce the real size of the effect. So, for example, Baier and Wright find that the "median effect size" of religion on criminality is r = -.11!
What does this mean? Does being religious make you less likely to engage in criminal activity? Despite the a priori plausibility of that idea, I draw a negative conclusion.
First: A "median effect size" of religion on criminality of r = -.11 means that half the published studies found a correlation close to zero.
Second: And that's half the published studies. It's generally acknowledged in psychology that most studies that find no effect -- especially smaller studies -- languish in file drawers without ever getting published. Robert Rosenthal, the dean of meta-analysis, suggests assuming, for every published study, at least five unpublished studies averaging a null result (see the toy illustration after these points).
Third: As Baier & Wright note (without sufficient suspicion), the studies finding large effects tend to be in the smaller studies and the studies co-ordinated through religious organizations. Hm!
Fourth: The studies are correlational, not causal. Even if there is some weak relationship between religiosity and lack of criminality, some common-cause explanation (e.g., a tendency toward social conformity) can't be ruled out. Interestingly, two recent studies that tried to get at the causal structure through temporal analyses didn't confirm the religion-prevents-criminality hypothesis. Heaton (2006) found no decrease in crime after the Easter holiday. And Eshuys & Smallbone (2006) found, to their surprise, that sex offenders who were religious in their youth had more and younger victims than those who were comparatively less religious.
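Rosenthal's file-drawer point above is easy to make concrete with a toy calculation. This is purely my own illustration, not anything from Baier & Wright: the "published" correlations below are invented so that their median comes out near -.11, and the file-drawer studies are simply assumed to be null, five per published study.

```python
# Toy illustration of the file-drawer problem (all numbers hypothetical).
import statistics

# Invented "published" correlations, chosen so their median is about -.11:
published = [-0.35, -0.25, -0.18, -0.13, -0.11, -0.10, -0.06, -0.03, 0.00, 0.02]

# Rosenthal-style assumption: five unpublished null-result studies (r = 0)
# sitting in file drawers for every published study.
unpublished = [0.0] * (5 * len(published))

print(statistics.median(published))                # -0.105
print(statistics.median(published + unpublished))  # 0.0
```

A real meta-analysis would weight studies by sample size rather than take a raw median, but the direction of the worry is clear enough.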
Does this suggest that religion is morally inert? Well, another possibility is that religion has effects that go in both directions -- some people using it as a vehicle for love and good, others as a vehicle for hate and evil. (Much like secular ethics, now that I think of it!)
I found a 2001 "meta-analysis" (Baier & Wright) of the literature that shows all the usual blindnesses of meta-analyses. Oh, you don't know what a meta-analysis is? As usually practiced, it's a way of doing math instead of thinking. First, you find all the published experiments pertinent to Hypothesis X (e.g., "religious people commit fewer crimes"). Then you combine the data using (depending on your taste) either simplistic or suspiciously fancy (and hidden-assumption-ridden) statistical tools. Finally -- voila! -- you announce the real size of the effect. So, for example, Baier and Wright find that the "median effect size" of religion on criminality is r = -.11!
What does this mean? Does being religious make you less likely to engage in criminal activity? Despite the a priori plausibility of that idea, I draw a negative conclusion.
First: A "median effect size" of religion on criminality of r = -.11 means that half the published studies found a correlation close to zero.
Second: And that's half the published studies. It's generally acknowledged in psychology that most studies that find no effect -- especially smaller studies -- languish in file drawers without ever getting published. Robert Rosenthal, the dean of meta-analysis, suggests assuming for every published study at least five unpublished studies averaging a null result.
Third: As Baier & Wright note (without sufficient suspicion), the studies finding large effects tend to be in the smaller studies and the studies co-ordinated through religious organizations. Hm!
Fourth: The studies are correlational, not causal. Even if there is some weak relationship between religiosity and lack of criminality, some common-cause explanation (e.g., a tendency toward social conformity) can't be ruled out. Interestingly, two recent studies that tried to get at the causal structure through temporal analyses didn't confirm the religion-prevents-criminality hypothesis. Heaton (2006) found no decrease in crime after the Easter holiday. And Eshuys & Smallbone (2006) found, to their surprise, that sex offenders who were religious in their youth had more and younger victims than those who were comparatively less religious.
Does this suggest that religion is morally inert? Well, another possibility is that religion has effects that go in both directions -- some people using it as a vehicle for love and good, others as a vehicle for hate and evil. (Much like secular ethics, now that I think of it!)
Friday, July 27, 2007
Qualia: The real thing (by guest blogger Keith Frankish)
What is the explanandum for a theory of consciousness? The traditional view is that it is the qualia of experience, conceived of as ineffable, intrinsic, and essentially private properties -- classic qualia, we might say. Now classic qualia don't look likely to yield to explanation in physical terms, and physicalists typically propose that we start with a more neutral conception of the explanandum. They say that we shouldn't build ineffability, intrinsicality, and privacy into our conception of qualia, and that what needs explaining is simply the subjective feel of experience -- the 'what-it-is-likeness' -- where this may turn out to be effable (yes, there is such a word), relational, and public. Call this watered-down conception diet qualia. Though rejecting classic qualia, physicalists tend to assume that it's undeniable that diet qualia exist, and go on to offer reductive accounts of them -- suggesting, for example, that experiences come to have diet qualia in virtue of having a certain kind of representational content or of being the object of some kind of higher-order awareness.
Drawing a distinction between classic qualia and diet qualia (though not under those terms) is a common move in the literature, but I'm suspicious of it. I'm just not convinced that there is any distinctive content to the notion of diet qualia. To make the point, let me introduce a third concept, which I shall call zero qualia. Zero qualia are those properties of an experience that lead its possessor to judge that the experience has classic qualia and to make certain judgements about the character of those qualia. Now I assume that diet qualia are supposed to be different from zero qualia: an experience could have properties that dispose one to judge that it has classic qualia without it actually being like anything to undergo it. But what exactly would be missing? Well, a subjective feel. But what is that supposed to be, if not something intrinsic, ineffable, and private? I can see how the properties that dispose us to judge that our experiences have subjective feels might not be intrinsic, ineffable, and private, but I find it much harder to understand how subjective feels themselves might not be.
It may be replied that diet qualia are properties that seem to be intrinsic, ineffable, and private, but may not really be so. But if the suggestion is that they dispose us to judge that they are intrinsic, ineffable, and private, then I do not see how they differ from zero qualia. They are properties which dispose us to judge that the experiences that possess them have classic qualia -- in this case by disposing us to judge they themselves are classic qualia. If, on the other hand, the suggestion is that diet qualia involve some further dimension of seeming beyond this disposition to judge, then I return to my original question: what is this extra dimension, if not the one distinctive of classic qualia?
In short, I understand what classic qualia are, and I understand what zero qualia are, but I don't understand what diet qualia are; I suspect the concept has no distinctive content. If that's right, then the fundamental dispute between physicalists and anti-physicalists should be over the nature of the explanandum -- classic qualia or zero qualia -- not the explanans. The concept of diet qualia confuses the issue by leading us to think that both sides can agree about what needs to be explained, when in fact they do not.
Footnote: This shows just how easy it is to be confused about qualia, even when it comes to the real thing.
Wednesday, July 25, 2007
Subjective Life Span
When I was 7 years old, a year seemed a very long time. And indeed it was -- it was 1/7 of my life. Now that I'm 39, a year seems much shorter. But of course now a year is only 1/39 of my life. When I was 7, 30 minutes seemed a long time; now it doesn't seem nearly so long.
Let's suppose that the subjective duration of a moment is inversely proportional to one's age at that moment. The subjective duration of any period is then the integral of 1/x over that period, which is to say the difference between the natural logs of one's age at the end and at the beginning of the period.
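Here's a minimal sketch of that arithmetic in Python -- just my own illustration of the model, nothing from the psychological literature:

```python
import math

def subjective_duration(age_start, age_end):
    """Subjective length of the period from age_start to age_end, on the
    model that a moment's subjective duration is proportional to 1/x,
    where x is one's age at that moment: the integral of 1/x is ln(x)."""
    return math.log(age_end) - math.log(age_start)

# A year at age 7 versus a year at age 39:
print(subjective_duration(7, 8) / subjective_duration(39, 40))  # about 5.3
```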
(Most recent psychological work about "subjective time" tends to be about subjective estimations of clock time, or about comparisons of periods close together in time as seeming to go relatively more quickly or more slowly. These are completely different issues than the one I'm contemplating here. They don't get at the fundamental question of whether the clock itself seems to speed up over the life span -- though see Wittmann & Lehnhoff 2005.)
On this model, since the integral of 1/x diverges as its lower limit approaches 0 (from the positive direction), it follows that our subjective life span is infinite. We seem, to ourselves, subjectively, to have been alive forever. (Of course, I know I was born in 1968, but that's merely objective time.)
There's something that seems right about that result; but an alternative way of evaluating subjective life span might be to exclude the earliest years -- years we don't remember -- starting the subjective life span at, say, age 4.
Adopting that second method, we can calculate percentages of subjective life span. Suppose I live to age 80. At age 39, I've lived less than half my objective life span, but I've already lived 76% of my subjective life span ([ln(39) - ln(4)]/[ln(80) - ln(4)] = 0.76). At what age was my subjective life half over? 18. Whoa! I feel positively geriatric! (And these reflections about philosophers peaking at age 38 don't help either.)
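A quick check of those figures, using the same model and taking the subjective life span to run from age 4 to age 80:

```python
import math

def subj(a, b):
    # Subjective duration of the period from age a to age b.
    return math.log(b) - math.log(a)

start, now, end = 4, 39, 80

# Fraction of subjective life already lived at age 39:
print(subj(start, now) / subj(start, end))       # about 0.76

# Age at which half the subjective life span has passed:
# solve ln(x/4) = 0.5 * ln(80/4)
print(start * math.exp(0.5 * subj(start, end)))  # about 17.9, i.e. roughly 18
```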
Regardless of whether the subjective life span begins at age 0 or age 4, we can compare the subjective lengths of various periods. For example, the four years of high school (age 14-18) are subjectively 25% longer than the four years of college (age 18-22). Doesn't that seem about right? Similarly, it wasn't until I had been teaching for 9 years (age 29-38) that I had been a teacher as long, subjectively, as I had been a high-school student; and it will take 'til age 60 for my subjective years of teaching to exceed my subjective years of high school, college, and grad school combined.
If we throw in elementary school, objective and subjective time get even more out of synch. Those 7 years (age 5-12) will be subjectively equivalent to the 42 years from age 30-72! And unless I teach until I'm 168 years old, I'll always have had more subjective time as a student than as a teacher. Is that too extreme? Maybe so. But I don't really know what it's like to be 168 years old; and I'm not sure how stable and trustworthy my judgments now could be about how long 3rd grade seemed to take. Is 6 times as long as a middle-aged year so unreasonable?
Monday, July 23, 2007
If you want my opinion … (by guest blogger Keith Frankish)
I'd like to thank Eric for inviting me to guest on The Splintered Mind. Blogging gives one a chance to express one's opinions, so I thought I’d begin by saying something about opinions.
When we talk of opinions I think we often have in mind states of the kind to which Daniel Dennett applies the term. An opinion in this sense is a reflective personal commitment to the truth of a sentence (see especially ch.16 of Brainstorms). Dennett suggests that we can actively form opinions and that we are often prompted to do so by social pressures. The need to give an opinion frequently forces us to create one – to foreclose on deliberation, find linguistic expression for an inchoate thought, and make a clear-cut doxastic commitment. This, Dennett suggests, is what we call making up our minds.
But what is the point of having opinions? Non-human animals get on well enough without them, and much of our behaviour seems to be guided without the involvement of these reflective, language-involving states. Dennett himself makes a sharp distinction between opinion and belief, and maintains that it is our beliefs and desires that directly predict our nonverbal actions, whereas our opinions manifest themselves only in what we say.
I disagree with Dennett here. I think that opinions can play a central role in conscious reasoning and decision-making. They can do so, I have argued, in virtue of our (usually non-conscious) higher-order attitudes towards them (see here for an early stab at the argument and here for the developed version). However, it’s undeniable that many of our opinions do not have much effect on how we conduct our daily lives. Many simply aren’t relevant. Few of us are deeply enough involved in politics for our political opinions to have a significant impact on our nonverbal behaviour. Moreover, opinions have drawbacks. They are hard to form. It’s not easy to arrive at a coherent set of opinions which one is prepared to commit to and defend in argument. They can be dangerously imprecise. People are all too ready to endorse blanket generalizations and sweeping moral prescriptions. And they can be inflexible. We sometimes hang on to our opinions beyond the point where a wiser person would revise or abandon them, and end up falling into dogmatism or self-delusion. (Someone once said of the British politician Enoch Powell that he had the finest mind in Parliament until he made it up.)
The wise course, it seems, would be to keep an open mind as far as possible, and then commit oneself only to qualified views, which one is always ready to reconsider. Why, then, are people so keen to form strong opinions and to broadcast them to others (a keenness very evident in the blogosphere)? The question is one for social psychologists, but I'll speculate a bit. One factor is probably security. It's a complicated world and doubt is unsettling, so it's comforting to have clear, well-entrenched opinions. A unified package of opinions can also serve as a badge of tribal loyalty, identifying one as a member of a particular party or sect and so fostering a sense of comradeship and belonging. Another factor, I suspect, is prestige: a set of clear, firmly held opinions is impressive, suggesting that one is knowledgeable, tough-minded, and decisive.
These benefits aren't negligible, but I doubt they outweigh the risks, and it might be better if we were all more cautious in our opinions. I'm not recommending quietism; it's often important to take a stand. But I think we should resist the pressures to form quick and easy opinions, and, in particular, that we should resist the pressure to choose them from the predefined packages offered to us by professional politicians and 'opinion formers'. Referring to opinion polls, Spike Milligan once said that one day the 'Don't knows' would get in, and then where would we be? Well, perhaps we'd be a bit better off, actually.
Friday, July 20, 2007
Making Sense of Dennett's Views on Introspection
Dan Dennett and I have something in common: We both say that people often go grossly wrong about even their own ongoing conscious experience (for my view, see here). Of course Dennett is one of the world's most eminent philosophers and I'm, well, not. But another difference is this: Dennett also often says (as I don't) that subjects can no more go wrong about their experience than a fiction writer can go wrong about his fictions (e.g., 1991, p. 81, 94) and that their reports about their experience are "incorrigible" in the sense that no one could ever be justified in believing them mistaken (e.g., 2002, p. 13-14).
But how can it be the case both that we often go grossly wrong in reporting our own experience and that we have nearly infallible authority about it? I recently published an essay articulating my puzzlement over this point (see also this earlier post) to which Dennett graciously replied (see pp. 253ff and 263ff here). Dennett's reply continued to puzzle me -- it didn't seem to me to address the basic inconsistency between saying that we are often wrong about our experience and saying that we are rarely wrong about it -- so I had a good long chat with him about it at the ASSC meeting in June.
I think I've finally settled on a view that makes sense of much (I don't think quite all) of what Dennett says on the topic, and which also is a view I can agree with. So I emailed him to see what he thought, and he endorsed my interpretation. (However, I don't really want to hold him to that, since he might change his mind with further reflection!)
The key idea is that there are two sorts of "seemings" in introspective reports about experience, which Dennett doesn't clearly distinguish in his work. The first sense corresponds to our judgments about our experience, and the second to what's in the stream of experience behind those judgments. Over the first sort of "seeming" we have almost unchallengeable authority; over the second sort of seeming we have no special authority at all. Interpretations of Dennett that ascribe to him the view that there are no facts about experience beyond what we're inclined to judge about our experience emphasize the first sense and disregard the second. Interpretations that treat Dennett as a simple skeptic about introspective reports emphasize the second sense and ignore the first. Both miss something important in his view.
Let me clarify this two-layer view with an example. People will often say about their visual experience that everything near the center has clearly defined shape, at any particular instant, and the periphery, where clarity starts to fade, begins fairly far out from the center -- say about 30 degrees. Both the falsity of this view and people's implicit commitment to it can be revealed by a simple experiment suggested by Dennett: Take a playing card from a deck of cards and hold it at arm's length off to the side. Keeping your eyes focused straight ahead, slowly rotate the card toward the center of your visual field, noting how close you need to bring it to determine its suit, color, and value. Most people are amazed at how close they have to bring it before they can see it clearly! (If a card is not handy, you can get similar results with a book cover.) Although this isn't the place for the full story, I believe the evidence suggests that visual experience is not, as most people seem to think, a fairly stable field flush with detail, hazy only at the periphery, but rather a fairly fuzzy field with a rapidly moving and very narrow focal center. We don't notice this fact because our attention is almost always at the focal center. (See section vi of this essay.)
Now when people say, "Everything is simultaneously clear and precisely defined in my visual field, except at the far periphery" there's a sense in which they are accurately expressing how things seem to them -- a sense in which, if they are sincere, they are inevitably right about their experience of things -- that's how things seem to be, to them! -- and also a sense in which they are quite wrong about their visual experience. When Dennett attributes subjects authority and incorrigibility about their experience, we should interpret him as meaning that they have authority and incorrigibility over how things seem to them in that first sense. When he says that people often get it wrong about their experience, we should interpret him as saying that they often err about their stream of experience in the second sense.
Dennett's view on these matters is complicated somewhat by his discussion of metaphor in his response to me, because metaphor itself seems to straddle the authoritative (it's my metaphor, so it means just what I intend it to mean) and the fallible (metaphors can be objectively more or less apt), but this post is already overlong....
Update, February 28, 2012:
As time passes, I find myself less convinced that Dennett should endorse this interpretation of his view. Unfortunately, however, I can't yet swap in a better interpretation.
Thursday, July 19, 2007
The Generosity of Philosophy Students
At the University of Zurich, when students register for classes, they have the option of donating to charities supporting needy students and foreign students. Bruno Frey and Stephan Meier found, in 2005, that economics students were a bit less likely to donate to the charities than other students (62% of economics students vs. 69% of others gave to at least one charity). However, the effect seemed to be more a matter of selection than training: Economics majors were less charitable than their peers from the very beginning of their freshman year. Thus, they were not made less charitable, Frey and Meier argue, by their training in economic theory.
How about philosophy students? Could the ethical component of philosophical education have any effect on rates of charitable giving? This relates to my general interest in whether ethicists behave any better, morally, than non-ethicists.
Frey and Meier kindly sent me their raw data, expanded with several new semesters not reported in the 2005 essay. Here are some preliminary analyses. I looked only at undergraduates no more than 30 years old. In total, there were 164,550 registered student semesters over the course of 6 years of data.
In any given semester, 72.0% of students gave to at least one charity. Majors with particularly high or low rates of giving and at least 1000 registered semesters were:
Below 65%
Teacher training in math & natural sciences: 54.8%
Business economics: 58.7%
Italian studies: 61.4%
Teacher training in humanities & social sciences: 62.9%
Over 80%
Sociology: 81.3%
Ethnology: 82.7%
Philosophy: 83.6%
Among large majors, philosophy students were the most generous! Does this bode well for the morally salutary effects of studying philosophy?
Unfortunately, as in the original Frey & Meier study, a look at the time-course of the charitable giving undermines the impression of an indoctrination or training effect.
Percentage of Philosophy majors giving to at least one charity, by year:
1st year of study: 85.4% (of 411 student semesters)
2nd year: 86.9% (of 289)
3rd year: 85.2% (of 250)
4th year: 85.2% (of 236)
5th year: 82.5% (of 171)
6th year: 83.1% (of 136)
7th year: 81.3% (of 107)
8th year or more: 73.2% (of 183)
It seems that studying philosophy is not making students more charitable. If anything, there is a decrease in contributions over time.
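For what it's worth, a crude significance check on that decline is possible from the philosophy numbers above -- a simple two-proportion comparison of the first-year rate (85.4% of 411 semesters) with the rate for year 8 and beyond (73.2% of 183 semesters). This is my own back-of-the-envelope calculation, and it treats each student semester as an independent trial, which it isn't, so it overstates the strength of the evidence.

```python
# Rough two-proportion z-test on the philosophy giving rates reported above.
# Caveat: student semesters are not independent trials, so this overstates
# the strength of the evidence.
import math

n1, p1 = 411, 0.854   # 1st-year philosophy student semesters
n2, p2 = 183, 0.732   # 8th-year-or-more philosophy student semesters

pooled = (n1 * p1 + n2 * p2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
print(z)  # roughly 3.5, nominally significant -- but see the caveat above
```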
There is also a decrease among non-philosophers, from 75.4% in Year 1 to 66.0% in Year 7 and 61.0% in Year 8+. This looks like a sharper rate of decrease, but the difference in the decrease may not be statistically significant, given the small numbers of advanced philosophy students and the non-independence of the trials. Looking at individual students (under age 40) for whom there are at least 7 semesters of data, philosophy majors are just as likely to increase (37.4%) or decrease (27.2%) their rates of giving as are an age-matched sample of non-philosophy majors (42.4% up, 30.2% down).
(Oddly, although overall rates of giving are lower among more advanced students, more students increase their rates of giving over time than decrease their rates of giving. These facts can be (depressingly) reconciled if students who donate to charity are less likely than students who don't donate to continue in their studies.)
Why are Zurich philosophy students more likely to donate to these charities than students of other majors? Does philosophy attract charitable people? I'm not ready yet to draw that conclusion: It could be something as simple as higher socio-economic status among philosophy majors. They might simply have more money to give. (Impressionistically, in the U.S., philosophy seems to draw wealthier students; students from lower income families tend, on average, to be drawn to more "practical" majors.)
Monday, July 16, 2007
Feeling bias in the measurement of happiness (by guest blogger Dan Haybron)
For starters, I want to thank Eric for letting me guest on his blog. This has been a lot of fun, with great comments, and definitely converted me to the value of blogging! Thanks to all. Now...
Suppose you think of happiness as a matter of a person’s emotional condition, or something along those lines. If you don’t like to think of happiness that way, then imagine you want to assess the emotional aspects of well-being: how well people are doing in terms of their emotional states. What, exactly, would you look to measure?
An obvious thought is feelings of joy and sadness, but of course there’s more to it than that: cheerfulness, anger, fear, and worry also come to mind, as well as feelings of being stressed out or anxious. So if you’re developing a self-report-based instrument, say, you’ll want to ask people about feelings like these, and doubtless others.
Here’s what Kahneman et al. (2004) use in one of the better measures, the Day Reconstruction Method (DRM): “Positive affect is the average of happy, warm/friendly, enjoying myself. Negative affect is the average of frustrated/annoyed, depressed/blue, hassled/pushed around, angry/hostile, worried/anxious, criticized/put down.” Also measured, but not placed under the positive/negative affect heading, were feelings of impatience, tiredness, and competence. (I’d be inclined to put the former two under negative—detracting from happiness, and the latter under positive—adding to happiness.) Another question asks, “Thinking only about yesterday, what percentage of the time were you: in a bad mood, a little low or irritable, in a mildly pleasant mood, in a very good mood.”
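Just to make the scoring rule concrete, here is a minimal sketch of DRM-style affect scoring for a single reported episode. The item groupings come from the passage just quoted; the episode ratings (on a hypothetical 0-to-6 scale) are invented purely for illustration.

```python
# Minimal sketch of DRM-style affect scoring for one reported episode.
# Item groupings follow the passage quoted above; the ratings are invented.
POSITIVE_ITEMS = ["happy", "warm/friendly", "enjoying myself"]
NEGATIVE_ITEMS = ["frustrated/annoyed", "depressed/blue", "hassled/pushed around",
                  "angry/hostile", "worried/anxious", "criticized/put down"]

episode_ratings = {
    "happy": 4, "warm/friendly": 5, "enjoying myself": 4,
    "frustrated/annoyed": 1, "depressed/blue": 0, "hassled/pushed around": 2,
    "angry/hostile": 0, "worried/anxious": 3, "criticized/put down": 0,
}

# Positive and negative affect are simple unweighted averages of the items.
positive_affect = sum(episode_ratings[i] for i in POSITIVE_ITEMS) / len(POSITIVE_ITEMS)
negative_affect = sum(episode_ratings[i] for i in NEGATIVE_ITEMS) / len(NEGATIVE_ITEMS)
print(positive_affect, negative_affect)  # about 4.33 and 1.0
```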
I think these are reasonable questions, but doubtless they can be improved. First, are these the right feelings to ask about? Second, should each of these feelings get the same weight, as the averaging method assumes? But third, should we only be looking at feelings?
Exercise: think about the most clearly, indisputably happy people you know. (Hopefully someone comes to mind!) Good measures of happiness should pick those individuals out, and for the right reasons. So what are the most salient facts about their emotional conditions? How do you know they are happy? Did you guess the integral of the feelings listed above over time? I doubt it! In my case, the first thing that comes to mind is not feelings at all, but a palpable confidence, centeredness, or settledness of stance. (BTW, the most blatantly cheerful people I know don’t strike me as very happy at all; their good cheer seems a way of compensating for a basically unsettled psyche.) For the people I’m thinking of, I’m guessing they’re happy because of what seems to me to be their basic psychic orientation, disposition, or stance. They are utterly at home in their skin, and their lives.
If this is even part of the story, then affect measures like those above appear to exhibit a “feeling bias,” putting too much weight on feeling episodes rather than matters of basic psychic orientation. How to fix? I don’t know, but one possibility is to use “mood induction” techniques, e.g. subjecting people to computer crashes and seeing how they respond. A happy person shouldn’t easily fly into a rage. But this won’t work well for some large surveys. And what about occurrent states like a constant, low-level stress that doesn’t quite amount to a “feeling” of being stressed, or at least not enough to turn up in reports of feeling episodes, yet which may have a large impact on well-being? And how do you tell if someone is truly centered emotionally?
I believe that in the psychoanalytic tradition little stock is put in sums of occurrent feelings, much less reports of those feelings, since so much of unhappiness (and by extension happiness) in their view is a matter of the unconscious -- deep-down stuff that comes through only indirectly, in dreams, reactions to situations, etc. I think this is roughly right. But how do we measure that?
Basically, my question is, how should the sorts of affect measures used in the DRM be changed or supplemented to better assess happiness, or the quality of people’s emotional conditions?
Friday, July 13, 2007
Checkerboards and Honeycombs in the Sun
In 1819, the eminent physiologist Johann Purkinje drew the following picture of what he saw when he closed his eyes and faced toward the sun:
Purkinje said that most individuals with whom he tried this experiment report seeing such figures, especially the little squares. (For a fuller translation of this and surrounding passages, see here.)
When I face the sun with eyes closed it doesn't seem to me that I see checkerboard or honeycomb shapes. Rather, I'd say, my visual field is broadly and diffusely orange or light gray (slowly shifting between these two colors) -- and brighter, generally, in the direction of the sun. Sometimes it briefly becomes a vivid scarlet. Others I've asked to close their eyes and look at the sun also generally don't report Purkinje-like experiences (although one person on one occasion -- out of several occasions -- reported something like a honeycomb latticework).
So I'm curious: Was Purkinje simply mistaken? Did he have unusual experiences, accurately reported for his own part, and then subtly pressure his subjects into erroneously reporting similar things? Could this be the kind of experience that varies culturally? I'd be interested to hear if any of you experience checkerboards or latticeworks.
Thursday, July 12, 2007
The Social Biophilia Hypothesis (by guest blogger Dan Haybron)
Two posts back I suggested that people may have evolved with psychological needs for which they lack corresponding desires, or at least strong enough desires given the significance of the needs. For certain needs may have been met automatically in the environment in which we evolved, so that there wouldn’t be any point in having desires for them. Today I want to suggest a possible example of this: a need for close engagement with the natural environment.
Biologist E.O. Wilson and others have defended the “biophilia” hypothesis, according to which human beings evolved with an innate affinity for nature. They have noted a variety of results pointing to the measurable benefits of exposure to natural scenes, wilderness, etc. (E.g., hospital patients with a view of trees and the like tend to have better outcomes.) To be honest I have not read this literature extensively, but the root idea strikes me as very plausible.
Indeed, I suspect that human beings have a basic psychological need for engagement with natural environments, so that their well-being (in particular, their happiness) is substantially diminished insofar as they are removed from such environments. And yet we don’t perceive an overwhelming desire for it, because the need was automatically fulfilled for our ancestors.
I can’t offer much argument here, but one reason to believe all this is that dealing with wilderness places intense cognitive demands on us, presenting us with an extremely rich perceptual environment that requires a high degree of attentiveness and discernment. (I don’t mean enjoying a hike in the woods, perceived as a pleasant but indiscriminate blur of greens, browns, and grays—I mean *knowing* the woods intimately, because the success of your daily activities depends on it.) The selection pressures on our hunter-gatherer ancestors to excel in meeting these demands must have been intense, and I think this is one of the things we are indeed really good at. Moreover, it is plausible that we really enjoy exercising these capacities (recall Rawls’ “Aristotelian Principle”). Insofar as we fail to exercise these capacities, we may be deprived of one of the chief sources of human happiness (see Michael Pollan’s excellent “The Modern Hunter-Gatherer”). I suspect that most artificial environments (think suburbia) are too simple and predictable, leaving these capacities mostly idled, and us bored. (Perhaps many people love cities precisely because they come closer to simulating the richness of nature.)
At the same time, we are obviously social creatures, most of whom have a deep need to live in community with others. Living alone in the forest is not a good plan for most of us. Distinguish two types of community: “land communities,” where daily life typically involves a close engagement with the natural environment; and “pavement communities,” where it does not. Virtually all of us now live in pavement communities.
Here’s a wild conjecture: human flourishing is best served in the context of a land community. Indeed, only in such a community can our basic psychological needs be met. Call this the “social biophilia hypothesis.” Plausible?
I suppose this will seem crazy to most readers, and maybe it is. For one thing, there is a conspicuous paucity of discussion of such ideas in the psychological literature. Why isn’t there more evidence for this hypothesis in the literature? I would suggest there are two reasons. First, current measures of happiness may be inadequate, e.g. focusing too little on stress and other states where we would expect to find the biggest differential. Second, psychologists basically don’t *study* people in land communities. Almost all the big studies of subjective well-being, the heritability studies, etc., focus on populations living in pavement communities. And there is virtually no work comparing the well-being of people closely engaged with nature and those who are not (but see Biswas-Diener et al. 2005). If the social biophilia hypothesis is true, then this would be a bit like studying human well-being using only hermits as subjects. (“Zounds, they’re all the same! Happiness must be mainly in the genes.”) The question is, how can we study the effects on well-being of living close to nature while controlling for other differences between people who do so and people living in pavement communities?
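To make that last methodological question a bit more concrete, here is a minimal, purely illustrative sketch in Python of the standard regression-adjustment move: compare the well-being of people in land and pavement communities while controlling for measured confounds such as income, age, and relationship quality. Every variable, number, and effect size below is hypothetical and simulated -- this is not drawn from any actual study, just a picture of what the analysis would look like.

```python
# A minimal sketch (not a real study design) of regression adjustment for the question above.
# All variable names, values, and effects here are hypothetical/simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# Simulated covariates (stand-ins for survey measures)
income = rng.normal(50, 15, n)        # household income, in thousands
age = rng.uniform(18, 80, n)
social_ties = rng.normal(0, 1, n)     # standardized measure of relationship quality

# In this toy world, lower-income people are more likely to live in land communities,
# so a raw comparison of the two groups would be confounded by income.
p_land = 1 / (1 + np.exp(0.05 * (income - 50)))
land = rng.binomial(1, p_land)

# Simulated well-being: depends on income, social ties, and (per the hypothesis) land living
wellbeing = 0.02 * income + 0.5 * social_ties + 0.4 * land + rng.normal(0, 1, n)

# Regression adjustment: the coefficient on `land` estimates the association
# net of the measured confounds (unmeasured confounds, of course, remain a worry).
X = sm.add_constant(np.column_stack([land, income, age, social_ties]))
model = sm.OLS(wellbeing, X).fit()
print(model.summary(xname=["const", "land", "income", "age", "social_ties"]))
```

Of course, adjustment of this kind only handles the differences you actually measure; self-selection on unmeasured differences -- who chooses to live close to nature in the first place? -- is precisely the hard part of the question.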
Monday, July 09, 2007
Big Things and Small Things in Morality
Hegel wrote that a great man's butler never thinks him great -- not, Hegel says, because the great man isn't great, but because the butler is a butler.
I don't really want to venture into the dark waters of Hegel interpretation, but the remark (besides being insulting to butlers and perhaps convenient for Hegel's self-image) suggests to me the following thought: Being good in small ways or accomplished in petty things -- in the kind of things a butler sees -- is unrelated, or maybe even negatively related, to being truly great. Einstein might not seem a genius to the man who handles his dry cleaning.
Does this apply to moral goodness or greatness? Is being good in small things -- civility with the cashier, not leaving one's coffee cup behind in the lecture hall -- much related to the big moral things, such as caring properly for one's children or doing good rather than harm to the world in one's chosen profession? Is it related to moral greatness of the sort seen in heroic rescuers of Jews during the Holocaust, such as Raoul Wallenberg, or moral visionaries such as Gandhi or Martin Luther King?
As far as I know, the question has not been systematically studied (although situationists might predict weak relationships among moral traits in general). Indeed, it's a somewhat daunting prospect, empirically. Although measuring small things like the return of library books is easy, it's hard to get an accurate measure of broader moral life. People may have views about the daily character of King and Gandhi, but such views are almost inevitably distorted by politics, or by idolatry, or by the pleasure of bringing down a hero, so that it's hard to know what to make of them.
The issue troubles me particularly because of my interest in the moral behavior of ethics professors. Suppose I find (as it's generally looking so far) that on a number of small measures -- the failure to return library books, contribution to charities supporting needy students, etc. -- ethicists look no better than the rest of us. How much can I draw from that? Are such little things simply too little to indicate anything of moral importance? (Or maybe, I wonder, is the moral life mostly composed of an accumulation of such little things...?)
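For what it's worth, there is a familiar psychometric reason not to dismiss the little things too quickly: even if each small behavior reflects an underlying moral disposition only weakly (as situationists emphasize), an aggregate of many such behaviors can still track the disposition reasonably well. The following toy simulation -- Python, with made-up numbers; it illustrates the arithmetic only, not any actual data on ethicists -- shows how weak pairwise correlations between single behaviors are compatible with an informative battery of measures.

```python
# A toy illustration (hypothetical numbers, not data): if each "small" behavior reflects an
# underlying disposition only weakly, any two single behaviors correlate poorly with each
# other, yet an aggregate of many such behaviors can still track the disposition fairly well.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_behaviors = 1000, 20
trait_loading = 0.3   # assumed (weak) influence of the disposition on each behavior

disposition = rng.normal(0, 1, n_people)
noise = rng.normal(0, 1, (n_people, n_behaviors))
behaviors = trait_loading * disposition[:, None] + np.sqrt(1 - trait_loading**2) * noise

# Average correlation between pairs of single behaviors (around 0.09 in this toy setup)
corr = np.corrcoef(behaviors, rowvar=False)
pairwise = corr[np.triu_indices(n_behaviors, k=1)].mean()

# Correlation between the 20-behavior aggregate and the disposition (around 0.8 here)
aggregate = behaviors.mean(axis=1)
agg_vs_trait = np.corrcoef(aggregate, disposition)[0, 1]

print(f"mean pairwise correlation between single behaviors: {pairwise:.2f}")
print(f"correlation of aggregate with disposition:          {agg_vs_trait:.2f}")
```

On this picture, the worry about any one measure -- library books, say -- is real, but a battery of small measures taken together may not be so little after all.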
Friday, July 06, 2007
Indiscernible misery? (by guest blogger Dan Haybron)
I'm on the road at the moment, so here's a quick traveler's post. A couple years back I had the pleasure of flying to California over the holidays with a family suffering from stomach flu. In my case the worst had seemingly passed, yet I was still definitely not feeling well. In fact the flight became excruciatingly unpleasant--one of those times where you keep changing positions and never manage to relieve the feeling for more than a few moments. I wanted to run screaming from the plane.
The thing is: even at times of peak discomfort, when I wanted to jump out of my skin, I could not discern anything in my experience to account for it. When I paused to introspect what I was feeling, I couldn't make out anything unpleasant--no discernible nausea, nothing. As if I felt fine. Except I didn't--I felt horrible--even, I think, at those moments. At least, that's what I recall, and I also recall at least getting some distraction thinking about these things at the time.
Has anyone experienced anything like this? Am I just confused? I don't think the overall unpleasantness of the experience was simply a matter of my intense desire to be rid of it--rather, it seemed the desire was a result of the unpleasantness...
Wednesday, July 04, 2007
Seeing Through Your Eyelids -- Spreading Motion
When I close my eyes and wave my hand before my face, I seem to see motion. I think this isn't just the caver illusion (the sense people sometimes have, in complete darkness, that they can see their hands move), because the effect seems much stronger when I face toward a light source, and I can see a friend's hand in the same way. In some sense, I am seeing through my eyelids. This shouldn't be too surprising: Most people report being able to see the sun through their eyelids. Such a thin band of flesh is easily penetrated by light. I discussed this stuff a bit in a May post.
Although I was pretty confused in my May post, I'm finding more consistency now with directional and occlusion effects. If I move my hand slowly from one side to the other, I can locate the position of the movement as to the right or the left. If I face a bright light source and move my head, I can track the rough direction of the source. If I raise an occluding object between my face and my moving hand -- a newspaper, say, held eight inches before my face -- the impression of movement is much lessened. (Any sense of motion that remains might really be just the caver illusion.)
The oddest effect is when I slightly lower the occluding object, so that the tips of my fingers are not occluded, but the rest of my hand and arm is. Once again I have a vivid experience of motion -- but not as though located just at the top of the visual field. The motion seems to spread down the field, almost to the bottom, as though the newspaper were entirely removed, but somewhat less vivid. In fact, it seems to me that the primary effect of moving the newspaper up and down is increasing and decreasing the vividness of the sense of motion. The change in the visual extent of the motion experience appears relatively minor.
As far as I'm aware, this spreading of perceived motion when the eyes are closed has never been remarked on in the perception and consciousness literature. I wonder if others experience the same thing...?
Tuesday, July 03, 2007
Germs, dirt, and relationships: why people may not want what they need (by guest blogger Dan Haybron)
It is widely thought that happiness depends on getting what you want. Indeed, the switch in economics from happiness to preference satisfaction as the standard of utility was originally based on the idea that the latter is a good proxy for the former: happiness is a function of the extent to which you get what you want. Even if you don't believe that, you might accept this weaker claim: basic human needs will normally be accompanied by desires for goods that tend to satisfy those needs; and the strength of those desires will reflect the importance of the needs. Thus human psychological needs will be reflected in people's motives. Call this the Needs-Motivation Congruency Thesis (NMCT). Hunger would be a typical example: we strongly desire food because we strongly need food (not just for happiness, of course).
I see no reason to believe that this is true. Among other things, there's an in-principle reason we should not expect the NMCT to hold: common human motivational tendencies will largely reflect the needs of our evolutionary ancestors. We want food because such a desire contributed to inclusive fitness: if you didn't have that desire, your genes didn't go very far. But here's another physiological need humans apparently have: we seem to need early exposure to germs and dirt. Without it, we develop various allergies and immune deficiencies. Yet most people don't have a particular attraction to germs and dirt (as such!). If anything, it's the reverse. Why? Because such a desire would have done nothing for inclusive fitness when humans evolved: you couldn't avoid encounters with lots of germs and dirt. If anything, it would have been adaptive to limit exposure to such things. So we need a dirty childhood, but don't want one; kids are happy to sit in an antiseptic environment playing video games all day, puffing on albuterol inhalers.
The same thing may happen with happiness: we may need certain things for happiness but either have no particular desire for them, or our desire for them is weak compared to the need. Relationships may be an example. Good relationships are the strongest known source of happiness, and are clearly a deep psychological need for human beings. Now normal people do, clearly, desire social relationships. Yet many if not most of us choose to live in ways that compromise our relationships, often to the net detriment of our happiness. E.g., people often choose lucrative jobs at the expense of time with friends and family. It is easy to see how a strong desire for wealth and status might have been adaptive for early humans, whereas we probably didn't need proportionately strong desires for friendship and family: you got those automatically. So our desire for wealth and status trumps our weaker desire for the thing we need more: good relationships.
Next up: biophilia as another possible counterexample to the NMCT.