I'm packing up now for a vacation until the 17th (to see relatives and friends in Maryland and Florida). I'll try to keep posting (and Dan Haybron is still mid-run as a guest blogger), but since I don't have access to my books and articles, things will be a little less formal.
Informally, then: If ever I receive a major award, then maybe I'll know what's on people's minds when they call such awards "humbling" (Google humbling award for some examples). On the face of it, receiving awards seems generally to be the opposite of humbling. Nobel Prize and Academy Award recipients aren't, as a class, the humblest of folks. Nor do winners of lesser awards (various academic prizes, for example) seem generally to be made more humble by the experience. (A friend of mine went on a blind date with a winner of a MacArthur Fellowship. He handed her his business card, with "certified genius" embossed on it! Unfortunately, she forgot to ask for his autograph.)
Let's assume, then, that -- unlike truly humbling experiences -- winning awards doesn't make one humble. Yet the phrase is so common, I suspect there's something to it. Momentarily, at least, one can feel humbled by an award.
Here's my thought: If I receive an award that puts me in elevated company or that represents a very high appraisal of me by a group I respect, there may be a mismatch between my self-conception and the conception that others seem to have of me. My sense that I don't quite deserve to belong may be experienced as something like humility. However, ordinarily that feeling will pass. I'll adjust my self-conception upward; I'll slowly start to think of myself in terms of the award (since I'm so impressed by it!); it will be hard for me not to think less of those who haven't reached such heights.
Perversely, then, it may be exactly those people who are inclined to think of an award as "humbling" who are made less humble by attaining it. Those who were already arrogant will be unchanged -- they knew they deserved the award all along, and it's about time! And the type of person who is deeply, intrinsically humble (if there are any such people) may not be sufficiently inclined to see the award as a legitimate mark of comparison between oneself and other folks to have any striking experience of humility -- any "wow, me?!" -- in the face of it.
Friday, June 29, 2007
Wednesday, June 27, 2007
Why life satisfaction is (and isn’t) worth measuring (by guest blogger Dan Haybron)
A lot of people think of happiness in terms of life satisfaction, and take life satisfaction measures to tell us about how happy people are. There is something to this. But no one ever said “I just want my kids to be satisfied with their lives,” and for good reason: life satisfaction is very easy to come by. To be satisfied with your life, you don’t even have to see it as a good life: it just has to be good enough, and what counts as good enough can be pretty modest. If you assess life satisfaction in Tiny Timsylvania, where everyone is crippled and mildly depressed but likes to count their blessings, you may find very high levels of life satisfaction. This may even be reasonable on their part: your life may stink, but so does everyone’s, so be grateful for what you’ve got. Things could be a lot worse.
Many people would find it odd to call the folks of Tiny Timsylvania happy. At least, you would be surprised to pick up the paper and read about a study claiming that the depressed residents of that world are happy. If that’s happiness, who needs it? For this and other reasons, I think that life satisfaction does not have the sort of value we normally think happiness has, and that researchers should avoid couching life satisfaction studies as findings on “happiness.” To do so is misleading about their significance.
So are life satisfaction measures pointless? No: we might still regard them as useful measures of how well people’s lives are going relative to their priorities. Even if they don’t tell you whether people’s lives are going well, for reasons just noted, they might still tell you who’s doing better and worse on this count: namely, if people whose lives are going better by their standards tend to report higher life satisfaction than those whose lives are going worse. This might well be the case, even in Tiny Timsylvania. (Though caution may be in order when comparing life satisfaction between that nation and Archie Bunkerton, where people like to kvetch no matter how well things are going.)
This kind of measure may be important, either because we think well-being includes success relative to your priorities, or because respect for persons requires considering their opinions about their lives when making decisions on their behalf. The government of Wittgensteinia, populated entirely by dysthymic philosophers who don’t mind being melancholy as long as they get to do philosophy, should take into account the fact that its citizens are satisfied with their lives, even if they aren’t happy.
Note that the present rationale for life satisfaction as a social indicator takes it to be potentially important, but not as a mental state good. Rather, it matters as an indicator of conditions in people’s lives. Concern for life satisfaction is not, primarily, concern about people’s mental states. So rejecting mentalistic views of well-being is no reason for skepticism about life satisfaction.
Monday, June 25, 2007
Are Babies More Conscious than Adults?
Philosophers and doctors used to dispute (sometimes still do dispute) whether babies are conscious or merely (as Alison Gopnik puts it in her criticism of the view) "crying carrots". This view was taken so far that doctors often thought it unnecessary to give anaesthesia to infants. Infants are still, I think, not as conscientiously anaesthetized as adults.
Gopnik argues that babies are not only conscious, they are more conscious than adults. Her argument for this view begins with the idea that people in general -- adults, that is -- have more conscious experience of what they attend to than of what they disregard. We have either no experience, or limited experience, of the hum of the refrigerator in the background or the feeling of the shoes on our feet, until we stop to think about it. In contrast, when we expertly and automatically do something routine (such as driving to work on the usual route) we are often barely conscious at all, it seems. (I think the issue is complex, though.)
When we attend to something, the brain regions involved exhibit more cholinergic activity, become more plastic and open to new information. We learn more and lay down new memories. What we don't attend to, we often hardly learn about at all.
Baby brains, Gopnik says, exhibit a much broader plasticity than adults' and have a general neurochemistry similar to the neurochemistry involved in adult attention. Babies learn more quickly than we do, and about more things, and pick up more incidental knowledge outside a narrow band of attention. Gopnik suggests that we think of attention, in adults, as something like a mechanism that turns part of our mature and slow-changing brains, for a brief period, flexible, quick-learning, and plastic -- baby-like -- while suppressing change in the rest of the brain.
So what is it like to be a baby? According to Gopnik, it's something like attending to everything at once: There's much less of the reflexive and ignored, the non-conscious, the automatic and expert. She suggests that the closest approximation adults typically get to baby-like experience is when they are in completely novel environments, such as very different cultures, where everything is new. In four days in New Guinea we might have more consciousness and lay down more memories than in four months at home. Also, she suggests, it may be something like certain forms of meditation -- those that involve dissolving one's attentional focus and becoming aware of everything at once. In such states, consciousness becomes not like a spotlight focused on one or a few objects of attention, with all else dark, but more like a lantern, shining its light on many things at once.
Now isn't that a nifty little thought?
Friday, June 22, 2007
How Happy Is Happy? (by guest blogger Dan Haybron)
The popular media often highlight studies purporting to show that some (usually very large) proportion of people are happy, whereas some other (usually very small) proportion are unhappy (e.g., here and here). I suspect that not many people really believe such assertions, and that they are a major source of skepticism about the science of happiness.
Such claims seem mainly to rest on three kinds of studies: self-reports of happiness (which I critiqued in my last post), life satisfaction surveys, and measures of affect such as the balance of positive versus negative affect. Today I want to focus on this last source of evidence, assuming for the sake of argument a hedonistic or emotional state theory of happiness. Affect balance measures can only support claims that people are “happy” given some view about the threshold for being happy: what balance of positive to negative is needed to count as happy?
Traditionally, the answer has been this: a bare majority of positive affect suffices for being happy. Greater than 50% PA, you’re happy; less and you’re unhappy. (Note that you must have precisely 50% PA vs. NA to fall in the category you might have thought many of us fit: neither happy nor unhappy.) If you are enraged for several hours a day, or cry yourself to sleep every night, you may still be happy if your negative affect doesn’t hit 50%. Perhaps you could be depressed and count as happy on this sort of view.
This seems deeply implausible to me: life probably has to be pretty awful for negative affect to literally be in the majority. When a family member dies, we usually aren’t happy, yet the laughter sometimes outweighs the tears (the unhappiness perhaps revealed more by the ease with which the tears come rather than by their frequency). In informal surveys of students in my classes, a majority refused to ascribe happiness in cases where the percent of NA was significantly less than 50%. Recent work by Fredrickson et al. claims that a 3:1 ratio of PA to NA is needed for people to “flourish”; much less and they “languish.”
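Just to make vivid how much hangs on the cutoff, here is a toy sketch (in Python; the hours and thresholds are numbers I made up for illustration, not anything drawn from the studies mentioned) in which the very same affect profile counts as happy on the bare-majority rule and unhappy on a 3:1 standard:

```python
# Illustrative sketch only: how the "happy/unhappy" verdict for the same
# affect data flips with the chosen threshold. All numbers are hypothetical.

def percent_positive(positive_hours, negative_hours):
    """Share of affectively toned time that is positive."""
    total = positive_hours + negative_hours
    return positive_hours / total if total else 0.0

def is_happy(positive_hours, negative_hours, threshold=0.5):
    """Count someone as 'happy' if positive affect exceeds the threshold share."""
    return percent_positive(positive_hours, negative_hours) > threshold

# Someone with five hours of positive affect and two of negative per day:
pa, na = 5.0, 2.0
print(is_happy(pa, na, threshold=0.5))   # True  -- bare-majority (50%) rule
print(is_happy(pa, na, threshold=0.75))  # False -- Fredrickson-style 3:1 ratio
```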
I think the correct standard for happiness, even on hedonistic or emotional state views, is less than obvious. Researchers should not just assume the 50% threshold; it needs some defense. (How can we determine the right threshold?) But if the threshold for hedonistic/emotional state happiness is an open question, then we have no basis for saying whether people are happy given such views. This seems to me correct: the science of happiness has taught us a lot, but we really have no idea, except for our hunches and obvious cases like depression, whether people are happy or not. (My hunch: people probably aren’t as miserable as intellectuals tend to think, but most people still probably aren’t happy.)
Perhaps researchers should drop absolute claims about whether people are happy, and focus on relative levels of happiness: who is happier and why? Isn’t this what we mainly care about?
Wednesday, June 20, 2007
Condoms and Alcohol Containers, the MLA and the APA
I've been talking a lot recently about the moral behavior of ethicists. One thought I sometimes hear is this: Philosophers behave more morally than non-philosophers, but ethicists don't behave any better than other philosophers.
So: Can we test this?
One project I have cooking is on rates of charitable giving among philosophy students at the University of Zurich. Hopefully, I'll have some data to post on that soon! (Any guesses?) But here's a story I heard at the Pacific APA in April that might shine some amusing light on the issue.
The Modern Language Association is the leading professional association of literature professors. The American Philosophical Association is the leading professional association of philosophers. Both hold their largest annual meetings in late December. One year the MLA and the APA met in the same hotel, I was told, one right after the other. Any good empirical philosopher would thus wonder whether the housekeeping staff might have insights into the difference between philosophers and lit professors. Evidently, word came back that at the MLA meeting there were lots of condoms and dirty sheets. At the APA meeting, there wasn't much of that but quite a few more alcohol bottles.
(Shoot, now as I'm telling this story, I'm finding myself worried that it's apocryphal. If anyone knows confirming or disconfirming details, let me know, and I'll post it as an update here. Minimally, the following is true: Literature professors dress way better than philosophy professors [except Alva Noe]!)
I'll assume that most people don't go to the MLA with their spouses; and I'll assume that sexual infidelity is a greater wrong than excessive alcohol consumption. So it looks like there is more sin at MLA than APA meetings.
Of course, this could be simply differences in opportunity -- only 20% of philosophers are women. At last weekend's SPP, one woman told me she heard about a man there who was hoping "to get laid". She unthinkingly responded, "oh, I didn't know he was gay!" I doubt the maids found many used condoms at that meeting!
The Moral Behavior of Ethicists
For anyone who's interested, I've posted the PowerPoint slides of my SPP presentation "The Moral Behavior of Ethicists" here.
The presentation summarizes data on philosophers' (and a few non-philosophers') opinions about the moral behavior of ethicists, and on the rate at which ethics books are missing from academic libraries compared to non-ethics books in philosophy.
Monday, June 18, 2007
Taking ‘Happiness’ out of the Science of Happiness, Part One
(by guest blogger Dan Haybron)
For my first blog post anywhere, I should probably begin by thanking Eric for inviting me to guest on his blog. I’ve enjoyed his work a lot and think it’s about time his excellent “Unreliability of Naive Introspection” paper was published! So. . .
A fascinating conference on happiness and the law at the University of Chicago a couple of weeks ago made it clearer than ever that there’s lots of interesting philosophical work to be done on policy issues relating to the science of happiness. An obvious worry is that a lot of people still don’t take this research very seriously, making it harder to bring it into the policy arena. Here I want to consider one source of such doubts: studies asking people to report how “happy” they are. These kinds of studies do provide useful information, but they have problems that, given the publicity they receive, can undercut the credibility of the whole enterprise. Perhaps happiness researchers should largely discontinue the practice of asking people how happy they are (as many investigators have already done). Let me note three problems here.
First, we can’t assess the significance of such studies unless we know what people are referring to when they say they are “happy.” Is it life satisfaction, a positive emotional condition . . .? We can be certain that people vary in how they interpret the question, particularly across languages and cultures. Such studies don’t tell us (directly) how happy people are; they tell us, rather, how people think they measure up to their folk theories of “happiness.” This can be useful to know, but it isn’t that useful. For the most part, researchers should probably decide what they want to measure—life satisfaction, affect, etc.—and then measure that.
Second, asking people if they are happy or unhappy is a bit like asking them if they are ugly or stupid, or if their lives are a failure. The question is so emotionally loaded that we should not expect people to think very clearly about it. Americans, e.g., apparently think you’re more likely to go to heaven if you’re happy. Even ascribing (un)happiness to other people is a loaded matter, and can seem judgmental. Less emotionally laden questions should be used where possible.
Most importantly, it is only on certain very controversial life satisfaction views of happiness that we should expect people to judge reliably how happy they are. If happiness is a matter of hedonic or emotional state, then people have to aggregate and sum across many states over long periods of time. Then they have to know what balance of affect is required to be (very) happy or unhappy. There is plenty of reason to doubt that people will perform this difficult task with great accuracy. (As readers of this blog may well know, research on “duration neglect,” e.g., indicates that people basically ignore the duration of experiences when recalling how pleasant they were.) Indeed, I think people are probably dubious judges of how they feel even at the present moment.
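To see why the aggregation task is so hard, here is a toy sketch (Python; all the numbers are invented, and the duration-neglecting "recall" rule is just a crude peak-and-end stand-in of my own, loosely inspired by the duration-neglect findings rather than taken from any particular study): a day that is mildly unpleasant on a duration-weighted accounting can be recalled as quite good.

```python
# Illustrative sketch only (made-up numbers): duration-weighted hedonic total
# vs. a retrospective summary that ignores duration.

episodes = [
    # (intensity on a -10..10 scale, duration in hours) -- one hypothetical day
    (6, 0.5),    # a short, intense pleasure
    (-2, 8.0),   # a long stretch of mild unpleasantness
    (3, 1.0),    # a pleasant evening hour
]

# What careful aggregation would require: weight each state by how long it lasted.
duration_weighted = sum(intensity * hours for intensity, hours in episodes)

# A duration-neglecting recollection: roughly the average of the most intense
# moment and the final moment, ignoring how long anything lasted.
peak = max(intensity for intensity, _ in episodes)
end = episodes[-1][0]
retrospective = (peak + end) / 2

print(duration_weighted)  # -10.0: on balance, a mildly unpleasant day
print(retrospective)      # 4.5: yet it may be remembered as quite good
```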
If you think of happiness in terms of hedonic or emotional state, as I do, then you can assess it without asking people how happy they are. While there are lots of problems with measures of affect, there seems to be much less skepticism about them. Depression measures, e.g., have their problems, but people don’t generally regard them as meaningless, and most doubts concern the criteria for calling someone depressed rather than the accuracy of the mood measures. In my next post, I’ll suggest that the science of happiness faces a similar problem about the criteria for calling someone happy, versus unhappy. Researchers may be well advised to stop making claims about whether people are happy or not, period.
Friday, June 15, 2007
Gilbert Ryle's Secret Grotto
Gilbert Ryle, in his justly famous 1949 book The Concept of Mind, downplayed the importance of inner events to our mental lives, emphasizing instead patterns of behavior. (See, for example, the IEP entry on Behaviorism.)
Yet Ryle does not (despite his reputation) deny the existence of an inner mental life altogether. He admits, for example, the existence of "silent monologues" (p. 184), silent tunes in one's head (p. 269), "thrills and twinges" of emotion (p. 86 and many other places), "private" visual images (p. 34), and the like. Daydreams and silent soliloquies, he says, belong among things we can catch ourselves engaged in, much as we can catch ourselves scratching or speaking aloud to ourselves. Such events, he continues, "can be private or silent items" of our autobiography (pp. 166-167) to which only we have access, and he sometimes explicitly characterizes them as "in our heads".
Such remarks may surprise some who are used to hearing Ryle characterized (or caricatured) as a radical behaviorist who denies that the terms of ordinary English can ever refer to private episodes. Although Ryle derides the general importance of inner, ghostly "shadow actions" (p. 25) in a "secret grotto" (p. 119), and repeatedly denies the necessity of their occurrence prior to outward actions, he plainly does allow the existence, and even the occasional importance, of private mental events.
This seems to me exactly the right view. We do have a sort of "secret grotto": We can (in some sense) witness our own silent utterances, visual imagery, daydreams, and twinges of emotion, in a way others cannot. Yet it's not clear that our noticing such events in the "stream of experience" gives us any generally privileged self-knowledge beyond the kind of privilege that anyone might have who can witness some things that others cannot (say, what takes place when one is alone in a room); and indeed we may not actually be very accurate witnesses. Nor is it clear that inner speech is any more important, or necessary to thought, than outer speech; or that the twinges we feel are the most important or central fact about suffering an emotion; or that it's in any way important to most of our goals to be in touch with the happenings in this inner grotto.
And that's why it's fine, on my view, that we are so little in touch with them, and so badly.
Part of me is attracted to an externalist, embodied, view of the mind. There's something I'm suspicious of in talking about our experience as "inner". And yet I'm not sure the metaphor of spatial interiority (and maybe a metaphor is all it is) is so bad, if it's just meant to capture the kind of privacy that even Ryle seems to allow.
Wednesday, June 13, 2007
Introspection and Expression
I've been working (for years, I'm afraid) on an essay called The Unreliability of Naive Introspection. A common reaction to the essay's title is this: What, we don't know what we believe and how we're feeling? That's nuts!
I might be nuts, but I'm not that nuts. I do think that we're fairly good judges of what we think and how we feel. I can still hold the view that our introspective judgments are generally unreliable because I don't think such judgments are grounded in introspection. Instead, I'd call them expressive.
Here's the idea. When someone asks you "What do you think about X?" you don't cast your eye (metaphorically) inward. You don't attend to your experience or think about your mind. Instead, you express what's on your mind. You reflect on X, perhaps, and allow yourself to render aloud your judgment on the matter. This is a very different process from thinking about what your visual experience is right now (e.g., whether it's fuzzy 15 degrees from the center of fixation) or from trying to decide whether your present thought is happening in inner speech and if so whether that inner speech involves auditory imagery, motor imagery, and/or something else. In the latter case, you are attempting to discern something about your ongoing stream of experience. In the former, you're not. My beef is only with the latter sort of judgment.
Wittgenstein famously characterized sentences like "I'm in pain" or "that hurts" as just a complex way of saying "ow!" or grimacing -- in other words, as an expression in the strict sense in which a facial expression is an expression -- a more or less spontaneous manifestation of one's mental state. But even pain we can reflect on introspectively. If the doctor asks exactly what the pain in my finger is like, I can attend to my experience and say "well, it's kind of a dull throbbing in the middle of the knuckle". The difference between introspective judgment and expressive self-ascription is the difference between such reflective descriptions and a spontaneous "that hurts!"
But maybe it's not fair to compare the accuracy of a very general self-ascription ("that hurts") with a rather specific introspection ("shooting pain from here to here"). In the case of pain, I suspect, very general introspections ("there is pain") will tend to be fairly accurate.
However, self-ascriptive expressions of belief, unlike pain, can be pretty specific: "I think that fly will be landing on the ice-cream shortly" -- similarly with desire, intention, and many other propositional attitudes (for a definition of "propositional attitudes", see the second paragraph here). I doubt that I am similarly specifically accurate in my self-reflective introspections about what exactly my stream of experience is as I think about that fly.
Emotion commonly lends itself both to spontaneous self-ascription and to reflective introspection. When someone says "I'm depressed" or "I'm angry", it's often hard to know how much this is expression vs. introspection. But in adding detail, people tend either to go expressive, treating the emotion as a propositional attitude ("I'm angry that such-and-such") or to go more strictly introspective ("I'm experiencing my anger as a certain kind of tenseness in the middle of my chest"). It's only the last sort of judgment I would argue to be unreliable.
Monday, June 11, 2007
Should Ethicists Behave Better? Should Epistemologists Think More Rationally?
Thursday, I'll be presenting some of my work on the moral behavior of ethics professors at the meeting of the Society for Philosophy and Psychology. In his comments, Jonathan Weinberg tells me he'll ask this: Why should we think ethicists will be morally better behaved, any more than we would think epistemologists would be better thinkers (or have more knowledge, or better justified beliefs)?
My argument that ethicists will behave better is this:
(1.) Philosophical ethics improves (or selects for) moral reasoning.
(2.) Improved (or professional habits of) moral reasoning tends to lead either to (a.) better moral knowledge, or (at least) (b.) more frequent moral reflection.
(3.) (a) and (b) tend to cause better moral behavior.
Therefore, ethicists will behave better than non-ethicists.
The problem, as I see it, is that ethicists don't behave better. So we need to jettison (1), (2), or (3). But the premises are all empirically plausible, unless one has a cynical view of moral reasoning; and I myself don't find a cynical view very attractive.
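For those who like the skeleton of that modus tollens laid out explicitly, here is a minimal formal sketch in Lean 4 (the proposition names are my own placeholders, and I compress (2)'s disjunction into a single step): if the premises hold, the conclusion follows; so an ethicist who doesn't behave better refutes at least one premise.

```lean
-- Placeholder propositions (my own labels, standing in for premises (1)-(3)).
variable (StudiesEthics ImprovedReasoning KnowledgeOrReflection BetterBehavior : Prop)

-- If (1)-(3) hold, the conclusion follows for any ethicist:
example (hE : StudiesEthics)
    (p1 : StudiesEthics → ImprovedReasoning)
    (p2 : ImprovedReasoning → KnowledgeOrReflection)
    (p3 : KnowledgeOrReflection → BetterBehavior) :
    BetterBehavior :=
  p3 (p2 (p1 hE))

-- So an ethicist who does not behave better shows that (1), (2), and (3)
-- cannot all be true (modus tollens on the conjunction of the premises):
example (hE : StudiesEthics) (hNot : ¬ BetterBehavior) :
    ¬ ((StudiesEthics → ImprovedReasoning) ∧
       (ImprovedReasoning → KnowledgeOrReflection) ∧
       (KnowledgeOrReflection → BetterBehavior)) :=
  fun h => hNot (h.2.2 (h.2.1 (h.1 hE)))
```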
But maybe there's a flaw in the argument that can be revealed by the comparison to epistemologists. Consider the parallel:
(1'.) Philosophical epistemology improves (or selects for) rationality.
(2'.) Improved (or professional habits of) rationality tends to lead to more knowledge and better justified beliefs.
Therefore, epistemologists will be more rational and have more knowledge and better justified beliefs than non-epistemologists.
The argument is shorter, since no behavioral predictions are involved. (We could generate some -- e.g., they will act in ways that better satisfy their goals? -- but then the conclusion would be even more of a reach.)
Why does it seem reasonable -- to me, and to many undergraduates -- to think ethicists would behave better, while we're not so sure about the additional rationality of epistemologists? (I do think undergraduates tend to expect more from ethicists. Though it seems strange to me now, I recall being disappointed as a sophomore when I discovered that my philosophy professor didn't live a life of sagelike austerity!)
Here's my thought, then: Ethics (except maybe metaethics) is more directly practical than epistemology. We wouldn't often expect to profit from considering the nature of knowledge or of justification, or the other sorts of things epistemologists tend to worry about, in forming our opinions about everyday matters. On the other hand, it does seem -- barring cynical views! -- that reflection on honesty, justice, maximizing happiness, acting on universalizable maxims, and the kinds of things ethicists tend to worry about should improve our everyday moral decisions.
Furthermore, when epistemology is directly practical, I would expect epistemologists to think more rationally. For example, I'd expect experts on Bayesian decision theory to do a better job of maximizing their money in situations that can helpfully be modeled as gambling scenarios. I'd expect experts on fallacies in human reasoning to be better than others in seeing quickly through bad arguments on talk shows, if the errors are subtle enough to slip by many of us yet fall into patterns that someone attuned to fallacies will have labels for.
I remain perplexed. I continue to believe that those of us who value moral reasoning should be troubled by the apparent failure of professional ethicists to behave any better than those of similar socio-economic background.
Friday, June 08, 2007
Why Are People So Confident About Their Stream of Experience?
One theme in my work is this: I don't think people are generally accurate in their reports about their stream of experience, even their concurrently ongoing conscious experience. But if people are so wrong about their phenomenology -- their imagery, their dreams, their inner speech, their visual experience, their cognitive experience -- why are they nonetheless so confident?
My suspicion is this. When we're asked questions about our "inner lives" ("a penny for your thoughts") or when we report on our dreams, our imagery, etc., we almost never get corrective feedback. On the contrary, we get an interested audience who assumes that what we're saying is true. No one ever scolds us for getting it wrong about our experience. This makes us cavalier and encourages a hypertrophy of confidence. Who doesn't enjoy being the sole expert in the room whose word has unchallengeable weight? In such situations, we take up the mantle of authority, exude a blustery confidence -- and feel that confidence sincerely, until we imagine possibly being shown wrong by another authority or by the unfolding of future events. (Professors may be especially liable to this.) About our own stream of experience, however, there appears to be no such humbling danger.
Suppose you're an ordinary undergraduate, and your job is to tutor a low-performing high school student. You are given some difficult poetry to interpret, and the student nods his head and passively receives your interpretation, whatever it happens to be. Then you do it again, the next week, with a different poem. Then again, then again. Pretty soon -- though you'll have received no significant feedback and probably not have improved much in your skills at poetry interpretation -- I'll wager you'll start to feel pretty good about your skills as an interpreter of poetry. You've said some things; they seemed plausible to you; the audience was receptive; no one slapped you down; you run no risk of being slapped down in the future. Your confidence will grow. (So I conjecture. I don't know of any psychological experiments directly on this sort of thing. Although the eyewitness testimony literature shows people's confidence will increase as they repeat the same testimony over and over, that's not quite the same phenomenon.)
Here's another case: Those of us who referee journal articles don't really receive any serious feedback about the quality of our referee reports -- just appreciative remarks from the editors and occasionally (not often, in my experience) very polite letters from the authors explaining how a new revision addresses all our "very useful" criticisms. Yet I'd wager that our confidence in the quality of our referee reports goes up over time; and I'd also wager that the quality of the reports themselves does not go up. Rather, whatever gains we might have in our actual refereeing skills are counterbalanced, or more than counterbalanced, by an increasingly rushed and cavalier attitude toward refereeing as our experience and status increase.
That feeling of being taken seriously, and of saying things that seem plausible to you, without any actual feedback about the quality of your performance -- that is, I think, essentially the situation people are in when reporting on their stream of conscious experience (at least until they meet me!). If I'm right that those are excellent conditions for confidence inflation, that might partly explain our feeling of infallibility.
(I had a nice chat about this yesterday with UCR psychologist Steven Clark.)
Wednesday, June 06, 2007
Remembering from the Third-Person Perspective?
A few days ago, I heard a National Public Radio interview on the topic of autobiographical memory. One thing the interviewee said stuck in my mind: People who remember past events in the "third person" (i.e., as though viewing themselves from the outside) differ from those who tend to remember past events in the "first person" (i.e., as though looking at it through their own eyes again). Among other things, this researcher claimed that third-person memory was better associated with accepting one's past mistakes and growing in response to them.
Several things in those remarks set off my skeptical alarms, but let me focus on one: Do people really remember events in the third or first person? I have no doubt that if you ask people to say whether a memory was first- or third-person, they'll be kind enough to give you a confident-seeming answer. But do autobiographical memories of particular past episodes have to have a visual perspective of this sort?
Some behaviorally quite normal people claim never to experience visual imagery. Let's suppose they're right about this. Of course they nonetheless have autobiographical episodic memories. How would such memories have a first- or third-person perspective, if there's no visual imagery involved? Would they have a first- or third-person auditory perspective? (Well sure, why not? But is this what the researchers have in mind?)
Maybe memories can be episodic and not visual at all; or visual yet not perspectival. The great writer Jorge Luis Borges and the eminent 19th-century psychologist Francis Galton describe cases of visual imagery from visually-impossible circular or all-embracing perspectives or non-perspectives (e.g., the front and back of a coin visualized simultaneously).
In the 1950s people said they dreamed in black and white. Now they say they dream in color. People seem to assimilate their dreams to movies -- so much so that they erroneously attribute incidental features of movies, like black and whiteness (and maybe also like coloration) to their dreams. Similarly, it seems that people in cultural groups that analogize waking visual experience to flat media like pictures and paintings are more likely to attribute some sort of flatness to their visual experience than those who use other sorts of analogies for visual experience.
So I wonder: Do we imagine that we're remembering things "from a third-person perspective" in part because we assimilate autobiographical memory to television and movie narratives? Maybe, because of our immersion in film media, we (now) really do remember our past lives as though we were the protagonist of a movie? Or maybe we don't really tend to do that, but rather report our autobiographical memories as being like that (when pressed by a psychologist or by someone else or even just by ourselves) because the analogy between movies and memorial flashbacks is so tempting?
Would people in cultures without movies have comparably high rates of reporting autobiographical memory as though from a third-person perspective? Probably this has never been studied....
Monday, June 04, 2007
Can we Have Moral Standards without Moral Beliefs? (by guest blogger Justin Tiwald)
Let's say you have a student in your introductory philosophy class who claims he doesn't "have a morality" (there's always someone!). He explains his claim in various ways. Most often he says he doesn't have a morality because his every decision is based on egoistic calculations; other times it's simply because he does as he pleases. But whatever the explanation, it's clear that he takes some pride in it: other people live by moral standards, but he has risen above that.
I think my response is similar to that of just about everyone else in this situation: I don't believe it. What gets me out of sorts isn't the thought that he's a moral monster (he usually isn't), it's that he really does have a morality but won't admit it. How does one convince him that he has a morality in spite of himself?
"Having a morality" can mean many things, but what the class amoralist seems to have in mind is this: you have a morality when you hold yourself to moral standards as such. At minimum, you believe that living according to a standard is morally good, and this belief enters as a non-instrumental reason to adhere to it. These moral reasons needn't be decisive, and they don't always need to motivate you to do the right thing (you can have a morality even if you fail to live up to it). But your belief in the standard's moral goodness is essential, and this is where the self-proclaimed amoralist thinks he parts ways with the moralist. While the amoralist has standards that he holds himself to, it's obvious to him that they're not moral ones. He doesn't ultimately care whether his behavior is right, considerate, charitable, fair, respectful, etc. He only cares whether it will get him richer, make him more loved, or allow him to have more fun.
Put this way, so much of the amoralist's smugness depends on his not believing his standards to be moral ones. But does this really matter? In my view it matters much more that he treat his standards as moral, not that he believe them to be moral. The characteristic ways of treating a standard as moral include taking seemingly moral pride in meeting it, and feeling seemingly moral guilt or shame for falling short of it. It also includes behaving as though the standards are imposed from the outside. Subjectively speaking, the standards aren't "up to us," nor are they fixed by our wants and needs. However we understand our own relationship to these norms, we invariably think and behave as though we're stuck with them, even when we'd prefer others.
I tend to think that most psychologically healthy human beings cannot but have standards that they treat in these ways (with all of the usual caveats for sociopaths and victims of bizarre head injuries). Generally speaking we're stuck with our consciences, and our consciences will treat various standards as moral ones whether we like it or not. Sometimes they'll hold us to moral norms that we do not consciously uphold, as when someone explicitly disavows charity but feels guilty for leaving her brother homeless. But our consciences will even treat many of our non-moral norms as though they were moral norms. Many people pursue wealth with a moral zeal, and if the amoralist is serious about his amoralism he'll invariably take a kind of torturous, guilt-ridden "moral" pride in having risen above moralism (call this "Raskolnikov Syndrome"). Whatever the amoralist may believe, then, it would be far-fetched to say that he "has no morality" at all.
------
I'd like to thank Eric for letting me borrow his soapbox these last few weeks. I've truly benefited from the comments and emails that I've gotten in response. Having seen this from the other side, I can say with even more certainty that he has a great thing going here!
Friday, June 01, 2007
The Clarity, or Not, of Visual Experience
Most people (not everyone!) will say there is some experiential difference between the center of their visual field and the periphery. The center is clear, precise, sharply detailed -- something like that -- and the periphery is hazy, imprecise, lacking detail.
If you agree with this (and if you don't, I'd be interested to hear), I want you to think about the following question: How large is that center of clarity? If you're comfortable with degrees of arc, you might think of it in those terms. Otherwise, think about, say, how much of your desktop you can see in precise detail in a single moment. Consider also how stable the region of clarity is -- approximately how much shifting there is of things from the clear center to the unclear periphery and vice versa. Is it a constant flux, say, or pretty stable over stretches of several seconds?
Humor me, if you will, and formulate in your mind an explicit answer to these questions before reading on.
Dan Dennett suggests the following experiment. Randomly take a card from a deck of playing cards and hold it at arm's length off to one side, just beyond your field of view. Holding your gaze fixed on a single point in front of you, slowly rotate the card toward the center of your field of view (keeping it at arm's length). How close to the center do you have to bring the card before you can determine its suit, its color, its value?
Most people are surprised at the results of this little experiment (so Dennett reports, and so I've found, too). You have to bring it really close! Go try it! If a playing card isn't handy, try a book cover with a picture on it. I've also posted a playing card here, if that might help (image from here).
In doing this exercise, you're doing something pretty unusual (unless you've been a subject in a lot of vision science experiments!) -- you've been attending to, or thinking about, your experience of parts of your visual field not quite at the center of fixation. It's a little tricky, but you can try doing this as your eyes move around more naturally. For example, you might decide to attend to your visual experience of one particular object (maybe the top left part of the banner at the top of your screen, or the Jack to the right), allowing your eyes to move around so that you're looking all around it but never directly at it. How well do you see it?
So here's the question: Has your opinion about your visual experience changed as a result of this little exercise? And if so, how?
I have a little bit of a wager, you might say, with a colleague of mine about this.