Does it seem to you that your thinking is located, phenomenologically, inside your head? Many people report this, both in casual reflection and when their experience is sampled with a beeper.
But I bet Aristotle -- who famously claimed that the brain was merely an organ to cool the blood -- would not have said that. Nor, I suspect, would ancient Chinese philosophers such as Mencius and Xunzi, who characterized thought as occurring in the xin, the heart. Of course, we don't have strictly phenomenological reports about this, but it seems a strain to suppose they thought thinking occurred in the heart yet was referred in experience to another location.
Here's one possibility: Where thought is phenomenologically located varies with one's cultural context -- perhaps specifically with one's culture's views about the organ of thought. How strange and interesting, if true! What governs the felt location of cognition? Why do some feel it here and others feel it there? Why wouldn't this be physiologically determined, at some fairly low and immutable level? Could I feel my cognition as occurring in my feet or shoulders, or on the other side of the room, or back in my kitchen on the other side of town, if the cultural conditions and my background beliefs were right? (Not the last, surely, you say -- but why not?)
If you know me, you'll know that I think we should also consider a more skeptical possibility: Our opinions about our phenomenology are heavily influenced by background theories, metaphors, and cultural constructs, and so may vary with culture even if the phenomenology itself does not. (Regarding cultural variation in the reported experience of dreaming, see here and here; regarding echolocation, see here; regarding visual perspective, see here and here.) We're no great shakes at reporting even the most basic, most "obvious" features of our phenomenology -- such as, I'd suggest, where (if anywhere) our conscious thoughts are phenomenologically located.
Now, what evidence can be brought to bear in favor of one of these possibilities over the other?
Friday, June 30, 2006
Wednesday, June 28, 2006
Inner Speech vs. Inner Hearing? Inner Sketching vs. Inner Seeing?
Okay, I'm on an inner speech kick. I admit it! I also keep talking about Russ Hurlburt. I'm writing a book with him, so his stuff is on my mind.
Russ draws a distinction between what he calls "inner speech", which is experienced as in some way produced by the thinker, and "inner hearing", which (though obviously in some sense produced by the thinker) is experienced more passively, as "coming at you as a recording would". Many of the people he interviews about their experience seem to find such a distinction intuitive and classify the sentences running through their minds as either one or the other (or both, or sometimes one, sometimes another).
So, first, is there a legitimate distinction to be drawn here? What do you think? If so, it's one that most philosophers who've discussed "inner speech" have been insensitive to.
And, second, if we grant that there's a distinction between inner speech and inner hearing, is it a fairly epiphenomenal distinction, with no real functional significance? Or is there something more passive, or alienated, or something, about those thoughts of ours that come in inner hearing as opposed to inner speech? Or...?
Although Russ doesn't draw a corresponding distinction with respect to visual imagery (nor does anyone else who comes to mind), I'm inclined to wonder whether such a distinction could be drawn -- a distinction between "inner sketching", felt as actively produced by the visualizer, and "inner seeing", felt as passively received.
I cast my gaze inward, I reflect on my own experience. If Descartes were right, I should know nothing with more certainty. Yet I am perplexed....
Monday, June 26, 2006
Knowledge Without Belief?
Suppose that yesterday you read an email about a bridge closure. You'll need to commute on an alternate route for a month. Yet here you are today, governed by habit, driving straight toward the closed bridge. In a moment, you will remember that the bridge is closed, but you haven't yet.
Now normally, I think we'd say the following things about you: You've forgotten that the bridge is closed. And you know the bridge is closed. Consider what you'd say to a passenger, for example, the moment after you remember: "Whoops! I forgot the bridge was closed"; "Oh, that was dumb of me; I knew the bridge was closed".
But do you believe the bridge is closed, as you drive blithely toward it? I don't feel much of an ordinary-language pull one way or another on this. But most contemporary philosophers of mind regard "think" (in the simple present, not the present progressive) as a fairly straightforward ordinary-language substitute for "believe" in many contexts. Do you think the bridge is closed, in those moments of forgetfulness? This sounds strange to my ear.
So maybe we can say you know the bridge is closed, but you don't think or believe that it is? Or (to change the example), as you stand there stammering with Larry's name momentarily escaping you, maybe we can say you do know that his name is Larry, after all, though you don't think or believe that it is, right now? Hmm... that seems strange, too!
We could test ordinary folks' intuitions on this. Provide a scenario of the sort above, then ask one group of subjects whether the person "knows" the fact in question, another whether she "believes" it, and a third whether she "thinks" it. I'd predict considerably higher attribution of "knows" than "thinks".
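If such a study were actually run, the headline comparison could be as simple as a chi-square test on yes/no attributions across the three groups. Here is a minimal sketch in Python; the response format and all the counts are invented purely for illustration.

```python
# Sketch of an analysis for the proposed "knows"/"thinks"/"believes" vignette study.
# All counts are invented for illustration; nothing here is real data.
from scipy.stats import chi2_contingency

# Rows: "yes, she knows/thinks/believes it" vs. "no"; columns: knows / thinks / believes.
observed = [
    [78, 52, 61],   # "yes" responses
    [22, 48, 39],   # "no" responses
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# The specific prediction: a higher "yes" rate for "knows" than for "thinks".
knows_rate = observed[0][0] / (observed[0][0] + observed[1][0])
thinks_rate = observed[0][1] / (observed[0][1] + observed[1][1])
print(f"knows: {knows_rate:.0%} yes; thinks: {thinks_rate:.0%} yes")
```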
Is there trouble here for the standard view in contemporary epistemology that (propositional) knowledge is a species of belief? Or are ordinary intuitions a poor guide? Or am I wrong about how the intuitions will fall out?
Maybe part of what's going on is that "think" is a bit more temporally narrow in its reference than "know"?
Friday, June 23, 2006
The Pace of Inner Speech
More thoughts on inner speech. Really, how can anything as pervasive and interesting in consciousness as inner speech -- our silent talking to ourselves -- be so unstudied?
Here's a question: Does inner speech generally transpire at the same speed as outer speech? It's natural, perhaps, to think so. Russ Hurlburt tells me that's what his subjects usually report, when he gives them a beeper to wear during normal daily activity and asks about their sampled experiences (at "the last undisturbed moment prior to the beep").
Yet when Russ and I sampled a subject together, she said that her inner speech seemed to go very fast, "speeded", yet "without being rushed"; and there's something I found appealing in that description. I've begun to wonder whether most people's reports of the pace of inner speech are based more on an unexamined assumption about its pace than on careful observation.
Try this experiment: Say the sentence "I wonder if inner speech is faster or slower than outer speech", first in inner speech, then in outer speech (or the other way around). Did one seem faster than the other?
My impression: The inner speech was faster. I could make the outer speech seem to take roughly the same time, but only by really rushing it in a way that the inner speech did not feel rushed. But I don't know. There's something awfully artificial about this exercise, and inner speech as it occurs spontaneously in ordinary life might behave very differently.
Another reflection: Consider a complicated thought that seems to be in inner speech -- for example, "If a complicated thought, in a long sentence of inner speech, would take ten seconds to say out loud, does it take ten seconds to think it in inner speech too?" I'm guessing not. But then maybe we're too quick to assume that our thoughts are in inner speech. This tangles up with issues about the sense of what one's about to say that I remarked on Monday.
I'd like some more rigorous way to study this. Any thoughts?
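One crude first step, for anyone who wants to play along: time repeated inner and outer runs of the same sentence, marking the start and end of each with a keypress. This is a sketch only, with obvious confounds (keypress latency, practice effects, the artificiality noted above), and the trial count is arbitrary.

```python
# Crude self-timing sketch for comparing inner vs. outer speech durations.
# Illustrative only: keypress latency and practice effects are uncontrolled.
import statistics
import time

SENTENCE = "I wonder if inner speech is faster or slower than outer speech"

def timed_trial(mode):
    # The first Enter starts the clock; the second Enter stops it.
    input(f"[{mode} speech] Press Enter to start, then Enter again when done.")
    start = time.perf_counter()
    input()
    return time.perf_counter() - start

def run(trials=5):
    results = {"inner": [], "outer": []}
    for i in range(trials):
        # Alternate which mode comes first, to blunt practice effects.
        order = ("inner", "outer") if i % 2 == 0 else ("outer", "inner")
        for mode in order:
            results[mode].append(timed_trial(mode))
    for mode, times in results.items():
        print(f"{mode}: mean {statistics.mean(times):.2f}s over {len(times)} trials")

if __name__ == "__main__":
    print(f'Sentence: "{SENTENCE}"')
    run()
```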
Tuesday, June 20, 2006
History of Philosophy as a Source of Data for Psychology
I've recently become intrigued by the idea of the history of philosophy as a source of data for psychology. What variety of opinion, across cultures and centuries! Surely this says something about the human mind.
In Do Things Look Flat? I examined the course of philosophical variation in the opinion that visual appearances show the kind of shape and size distortions one sees in photographs (e.g., obliquely-viewed coins looking elliptical; distant things looking very much smaller than nearby ones). In Why Did We Think We Dreamed in Black and White? I looked (a little bit) at the history of philosophical opinion about the coloration of dreams. In both essays, I suggested that what people found to be "common sense", and what was endorsed by the most reflective philosophers and psychologists (not different people, generally, before the late 19th century), reflected culturally variable metaphors for aspects of the mind -- metaphors for visual experience and dream experience, respectively.
This variability, I think, to some extent undermines the credibility of what ordinary folk and philosophers now say about visual experience and dreams, since it raises the suspicion that this, too, is grounded in culturally contingent metaphors. We should be very wary of testimonial evidence by consciousness researchers or their subjects regarding the nature of their visual experience or dream experience.
Cultural variation in philosophy gives us a window into the variability of "common sense" and rational opinion -- somewhat like the variability cultural anthropology provides, but with a different and more public set of data, focused more deeply and exclusively, and often more carefully and with greater nuance, on ethics, metaphysics, mind, and other topics of philosophical interest. To the extent psychology analyzes the variability and sources of common sense intuitions (as in developmental psychology) or relies upon the intuitive judgments of researchers and subjects (as in consciousness studies), a sense of the relative cultural stability of those intuitions may be illuminating.
It's too easy to suppose that what we find intuitive is universally so -- or, conversely, that intuitive judgments (e.g., about ethics) vary so radically between cultures that we can find nothing in common between them. The history of philosophy provides crucial data for assessing such suppositions.
Of key importance in such an enterprise are philosophical traditions -- Asian traditions especially -- with a robust written philosophical literature and minimal Western influence.
Monday, June 19, 2006
Is Inner Speech an Action?
Speaking aloud is, normally, a form of intentional action (involuntary ejaculations excepted). Private thoughts are not, seemingly, intentional in the same way (deliberately planned cogitations -- "let me think about that..." -- excepted).
What about inner speech? Lev Vygotsky, David Velleman, Dorit Bar-On, and others plausibly regard inner speech as an internalized form of external speech. Vygotsky argues that it occurs developmentally only after outer speech, as a kind of suppressed form of it. Velleman suggests that it requires some kind of restraint (restraint we often don't feel when we're alone in the car) to hold speech impulses in, rather than giving vent to them outwardly. If Vygotsky and Velleman are right, it seems natural to suppose that inner speech, like outer speech, is a form of intentional action (maybe even more robustly intentional for requiring an additional act of suppression?). Bar-On quite explicitly endorses the idea of inner speech as a type of action near the end of her 2005 book.
But now if inner speech is a type of thought -- perhaps even the most pervasive form of conscious thought, as Peter Carruthers and William S. Robinson come near to suggesting -- it starts to look like thinking might be intentional action after all. Not just deliberately planned thought, but also all those spontaneous subvocalizations that spring to mind unbidden.
Does this seem as strange to you as it does to me? But where to put on the brakes?
Friday, June 16, 2006
Do Three-Year-Olds Dream?
Three- and four-year-old children have REM sleep, the stage of sleep most associated, in adults, with vivid, narrative dreams. It's natural to suppose they also dream. But do they?
My son Davy, at four, only very rarely claimed to dream -- despite my wife's repeatedly asking him about his dreams (she was trained as a psychotherapist!) -- and when he did confess to a dream, his reports were suspicious for a number of reasons: vague, short, and often just a repetition of something he had claimed to dream before. I'd say about half Davy's dream reports were simply this: "the house was full of popcorn", with no further elaboration to be coaxed from him.
Usually Davy claimed not to have slept at all. He generally seemed to have no awareness of the night's passing while he slept -- a fact that continually surprised my wife and me, given how sophisticated he was about many other things. No reference to clocks, the sun, to our seeing him lying still for hours, etc., could persuade him otherwise.
David Foulkes (e.g., in his 1999 book) systematically woke three- to five-year-old children during REM sleep and found that they generally denied dreaming. If they did give dream reports, those reports were generally short and suspect in a variety of ways. He argues that dreaming is a skill that develops, much as visual imagery is a skill that develops. (There is at least a little evidence that young children are pretty bad at visual imagery.) This would make sense if dreaming is just a form of (visually and otherwise) imagining (see Jonathan Ichikawa's interesting discussion of this).
Now I don't know. Children not dreaming? That's kind of hard to swallow. But I've begun to doubt.
Wednesday, June 14, 2006
Do We Think in Inner Speech?
We often "think" things silently to ourselves -- have the conscious experience of having a certain thought. We also silently "say" things to ourselves in inner speech. Here's the question: Is the latter a species of the former? Does conscious thinking sometimes take place IN inner speech? Or is inner speech more of an epiphenomenon, something that transpires more as a consequence of the thought than as the medium of thought itself? Peter Carruthers has recently argued the former.
Although there's plenty that's appealing in Carruthers' view, one type of case gives me (as it were) second thoughts. Russ Hurlburt and I were running an "experience sampling" experiment with a subject named Melanie. (I've mentioned this experiment in other posts.) We gave Melanie a random beeper. When the beeper went off, she was to note her "inner experience", as best she could ascertain it, at the last undisturbed moment prior to the beep.
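(For readers unfamiliar with the method: stripped of the follow-up interview, which is where the real work happens, the sampling procedure amounts to something like the sketch below. The interval bounds and prompt wording are my own inventions for illustration, not Hurlburt's actual protocol or equipment.)

```python
# Bare-bones random-interval beeper in the spirit of experience sampling.
# Purely illustrative; not Hurlburt's actual procedure.
import random
import time
from datetime import datetime

def sample(n_beeps=6, min_gap_min=15, max_gap_min=60):
    notes = []
    for _ in range(n_beeps):
        time.sleep(random.uniform(min_gap_min, max_gap_min) * 60)
        print("\a BEEP!")  # terminal bell; a wearable beeper is harder to ignore
        note = input("Describe your inner experience at the last undisturbed "
                     "moment before the beep: ")
        notes.append((datetime.now().isoformat(timespec="seconds"), note))
    return notes

if __name__ == "__main__":
    for stamp, note in sample():
        print(stamp, note)
```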
In the sample I have in mind, Melanie was backing her car out of the driveway, saying to herself, silently in inner speech, "Why can't I..." when the beeper interrupted her. When we interviewed her about this experience shortly thereafter, she reported having a sense, at the time immediately prior to the beep, that the full content of her thought was this: "Why can't I remember about the parking brake?", and that thought was already completely there in her experience at the time of the beep, though not completely expressed in inner speech.
Let's set aside concerns about the accuracy of self-reports in such conditions (concerns I take quite seriously), and just consider her report on its (plausible) face. It seems, indeed, that we often have an unarticulated sense of what we're about to say -- in either inner or outer speech -- before we say it. No?
Sometimes this sense is only very rough and inchoate; but in other cases -- as perhaps in Melanie's case here -- it's fairly specific and developed. In the latter sort of case, it seems, then, plausible to say that the thought is complete before the speech is complete -- that there's a kind of thoughtless inertia sometimes in speech, inner or outer. But if the thought is complete before the inner speech is complete, then the inner speech can't be the medium of the thought, can it?
Monday, June 12, 2006
On the Epidemiology of Sexual Norms
Shaun Nichols has suggested that social norms with "affective resonance" -- norms, that is, forbidding actions that are emotionally upsetting -- are more likely to persist than "affect neutral" norms. In particular, Nichols proposes that norms against harming people and norms pertaining to "core disgust" (especially involving bodily fluids) are more likely to endure the centuries than norms related to non-harmful, non-bodily-fluid-involving behavior, such as norms governing posture and the position of silverware. We'll let go of norms about elbows on the table; norms against slander and urinating in public we're more attached to.
(Studying, in this way, the "epidemiology" of norms can give us insight into the basis of our moral judgments. And that, of course, is near the core of ethics and moral psychology. Nichols' idea of getting at such questions by looking at the history of etiquette manuals stands among the most intriguingly creative uses of empirical evidence I've ever seen by a philosopher.)
Now, it does seem very plausible on its face that affect-backed norms would survive better than affect-neutral ones. Yet a student of mine, Beth Silverstein, has persuaded me that Nichols hasn't struck to the core of it. Silverstein suggests that norms that have a functional basis are the ones most likely to survive. Thus, affect-backed norms tend to survive because they typically have a functional basis. Disgust-backed norms are functionally grounded in health concerns; harm norms are functionally grounded in the conditions for the smooth operation of society. Non-affect-backed norms that have a functional grounding, such as avoiding pork (which used to be more likely to carry disease) or putting the fork on the left, knife and spoon on the right (in accordance with the traditional left-handed use of the fork), do tend to survive.
I doubt Nichols would want to disagree with the claim that functional norms are more likely to survive. The question is how much of a role is left over for affect as the basis of norms, once those driven by function are taken account of. Perhaps not much? Looking at the history of norms against flatulence and body odor might be relevant here. But the example that I find most interesting is the evolution of sexual norms.
We live in a society of relatively low sexual disgust (at least by recent European standards). Nichols points out our increasing sensitivity over time to hygiene, and our increasingly refined sense of disgust in that domain; but the opposite seems to be occurring with respect to sexual norms: What is counted as "pornography", what bodily exposures are seen as "disgusting", and how disgusting they seem -- in such matters we have become more lax, rather than more restrictive, especially over the last several decades.
I hypothesize that this has to do with the change in the functionality of sexual restrictions: With the advent of birth control and of sexual disease prevention and cure, sexuality became functionally less dangerous, and our norms responded. In the '80s there was something of a reversal with the advent of AIDS, but as AIDS is coming more under control (in the middle-class U.S.), norms are liberalizing again. In large, traditional civilizations, sexual norms had to be restrictive because of the threat of the spread of disease; less so, perhaps, in small, traditional tribes. (I don't claim novelty for this hypothesis, though I can't now recall a source.)
So in the sexual domain at least, I'd suggest that Silverstein fares better than Nichols: The evolution of norms is driven more by function than by affective disgust. The sense of disgust comes after, shifting to match our function-driven norms.
More Technical Difficulties
Ugh, Blogger seems to be on the fritz again. My apologies if it dumped you while trying to make a comment (as it did me, trying to reply to Brad C's last comment). Hopefully they'll straighten everything out soon!
Friday, June 09, 2006
The Morality of Ethics Professors: Survey
In an earlier post, The Problem of the Ethics Professors, I asked why ethics professors often behave so badly. What does this suggest about the connection between ethical reflection and moral behavior?
Of course, implicit in this question is the presumably empirically testable assumption that ethics professors do behave (at least as) badly as the rest of us. But how to test that assumption?
I can think of no better way than to ask people who know ethics professors. While asking people about their impressions invites, of course, a variety of problems, the alternatives -- actually trying to run a controlled study or a direct observational study (or looking at criminal records!) -- seem patently infeasible. And perhaps if people are asked in the right way, the answers they give will deserve some credit.
So I've designed a questionnaire. My thought is to set up a table at an APA meeting or two (if the APA will let me!) with a sign saying something like "take this brief questionnaire, get a brownie!"
Q's 1 and 2 of the questionnaire would be:
1. As best you can determine from your own experience, do professors specializing in ethics tend, on average, to behave morally better, worse, or about the same as philosophers not specializing in ethics? (Please circle one number below.)
[Here there'd be a Likert scale of 1-7 from "substantially morally better" (1) through "about the same" (4) to "substantially morally worse" (7). I can't reproduce the actual look of the scale here due to formatting constraints.]
2. As best you can determine from your own experience, do professors specializing in ethics tend, on average, to behave morally better, worse, or about the same as non-academics of similar social background? (Please circle one number below.)
[Here would be the same Likert scale as above.]
I would then ask questions about academic rank, area of teaching/research focus, and whether they knew in advance the topic of the questionnaire or had discussed it with anyone.
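Analyzing the results could be as simple as asking whether responses differ reliably from the neutral midpoint of 4. Here is a sketch of what that might look like; the responses are invented for illustration.

```python
# Hypothetical analysis of the proposed questionnaire; all responses invented.
# Scale: 1 = substantially morally better, 4 = about the same, 7 = substantially morally worse.
from scipy.stats import wilcoxon

q1 = [4, 5, 4, 3, 5, 4, 4, 6, 5, 4, 4, 5]  # ethicists vs. other philosophers
q2 = [4, 4, 5, 4, 3, 4, 5, 5, 4, 4, 6, 4]  # ethicists vs. comparable non-academics

for label, responses in (("Q1", q1), ("Q2", q2)):
    diffs = [r - 4 for r in responses]   # deviation from the "about the same" midpoint
    stat, p = wilcoxon(diffs)            # signed-rank test against the midpoint
    mean = sum(responses) / len(responses)
    print(f"{label}: mean = {mean:.2f} (4 = about the same), Wilcoxon p = {p:.3f}")
```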
Thoughts, reactions, and suggestions welcome! Is this lame and pointlessly irritating? What would you predict for the results?
Technical Difficulties
Blogger has been in and out of commission over the last few days. Apologies to those of you who tried to visit and found it down or impossibly slow! For much of the past few days, it has been showing my text but not accepting comments.
Hopefully, it's fixed now.
Wednesday, June 07, 2006
The Flight of Colors
If you glance briefly at a high-wattage light bulb or at the sun, then close your eyes, you will experience an enduring afterimage that slowly changes colors. This is called the "flight of colors".
The flight of colors should be relatively easy to study: Just close your eyes and report the colors! The course and variability of the flight of colors might reveal something potentially valuable about the visual system. Yet oddly -- despite the thousands of articles that have been written about other aspects of vision -- almost nothing is known about it.
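For anyone tempted to try it, the data collection needn't be anything fancier than a timestamped log of color reports. A minimal sketch (illustrative only; speaking the colors aloud to a note-taker would beat typing with your eyes closed):

```python
# Minimal self-report logger for the flight of colors: after the flash, enter
# each color as it appears; an empty line ends the trial. Illustrative only.
import time

def log_flight():
    print("After the flash, type each color as it appears and press Enter.")
    print("Press Enter on an empty line when the afterimage fades.")
    start = time.perf_counter()
    reports = []
    while True:
        color = input("> ").strip()
        if not color:
            break
        reports.append((round(time.perf_counter() - start, 1), color))
    return reports

if __name__ == "__main__":
    for seconds, color in log_flight():
        print(f"{seconds:6.1f}s  {color}")
```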
E.B. Titchener (1901-1905) claims that after staring with dark-adapted eyes at a uniformly colored sky, the flight of colors is blue-green-yellow-red-blue-green -- though it may take a number of trials, and some introspective training, to reliably report that. William Berry (1922 and 1927) argues that there is no consistent pattern in the flight of colors, either between or within stimulus situations. J.L. Brown in his influential 1965 review of the literature on afterimages seems at one point to agree roughly with Titchener's description, then elsewhere, apparently inconsistently, to endorse Berry's claim that the flight of colors varies greatly from person to person. There has been very little subsequent literature on the issue. (For a review see my essay Introspective Training Apprehensively Defended.)
So: Will two normal people in the same circumstances experience roughly the same flight of colors? How much does the flight of colors vary with differences in circumstance? What conditions govern the progression of colors? How can we not know any of this?! Really, it's amazing how fresh and uncut much of psychology still is.
Monday, June 05, 2006
Reporting What We Think, Want, and Fear
Gareth Evans, Robert Gordon, Richard Moran, and others have noted that one way to answer questions about what you believe involves reflecting not on your beliefs but rather on the things your beliefs are about. If someone asks me whether I think Schwarzenegger will be re-elected, I can answer that question simply by reflecting on Schwarzenegger and his chances, expressing the results of that reflection with the self-attributional sentence: "I think he'll win".
I could even go so far as to say the "I think..." swiftly, before I've done any reflection at all. I can then contemplate Schwarzenegger at leisure, as though the question were just about his chances (and not at all about my beliefs about his chances), saying "... he'll win" in whatever way and by whatever mechanism I normally, and non-self-attributively, express my opinions about the world. The idea here is that whatever ordinary means I have of expressing views like "the office closes at 5:00" or "the 49ers are doomed" -- things I can surely say without introspective self-examination -- can also be employed to say "I think the office closes at 5:00" or "I know the 49ers are doomed". The difference between the former expressions and the latter is a difference in tone and confidence, not a difference in the presence or absence of introspection. But, of course, technically the latter sentences are in some sense about my mental states in a way the former are not.
Moran and Gordon hope that an account of this sort can work generally to explain self-attributions of attitudes. Others, like Shaun Nichols and Alvin Goldman, who see self-attributions of attitudes as involving something more closely akin to self-scanning, suggest that accounts of this sort work best (if at all) for belief, and fail completely for desire, fear, etc.
Part of the problem here is that Moran and Gordon have not been as clear and explicit as they might be about how such an account would work with attitudes such as desire and fear.
For desire, do I look at the world and assess whether (for example) ice cream is desirable? For fear, do I look at the world and assess whether that big dog is something I should be afraid of? Perhaps. But it seems that what I capture most accurately by such techniques is not the desire for ice cream but rather the belief that ice-cream is desirable, not an absence of dog fear but rather the belief that that dog doesn't warrant fear. And of course the beliefs here can come apart from the other attitudes, as in the case of avowedly irrational fear.
So maybe the Evans/Gordon/Moran account only works for belief after all?
No, I think it's still open to them to give an account of the following sort: To answer the question about whether I'm afraid of the dog, I can look at the dog itself, prepared in advance to express any fear I might have with a self-attributional sentence like "I'm afraid of that dog". This expression may be the result not of scanning my own mind but rather a direct response to the world -- as an expression like "that hurts!" or "I'm so happy to see you!" is plausibly a direct response to the world rather than a result of self-scanning, not psychologically very different from non-self-attributive expressions like "ow!" or "you look great!"
(Technical note: Thus, it seems to me, there can be a happy marriage between accounts like the Evans/Gordon/Moran account and "expressivist" accounts like that of Dorit Bar-On -- despite the sharp line Bar-On draws between those accounts and hers [see also Brie Gertler's Stanford Encyclopedia entry on Self-Knowledge].)
But let me note in conclusion (lest it seem that I've forgotten everything I've written about belief in earlier blogs) that the connection between a sentence like "I think Schwarzenegger will win" (or "I think God exists" or "I believe all men were created equal") and one's actual beliefs is in certain cases rather dubious. It better expresses one's (potentially very superficial) conscious judgments than one's deep, implicitly accepted, action-animating beliefs.
Friday, June 02, 2006
Turning Back Your Eyes
I'm in the Washington U. library in St. Louis, at the Society for Philosophy & Psychology meeting (great program so far!), and I don't have time to work my thoughts into the shape I'd like for a blog entry, but perhaps the following will entertain you, or even precipitate a thought:
About a week ago, my six-year-old son Davy rotated his eyes high in their sockets so that I could barely see the pupils. So, naturally, I rotated mine as high as they could go, while pulling down on my cheeks so that (I hoped) only the whites of my eyes were visible. (If you don't know what I'm talking about, you were never a 12-year-old boy.)
Davy said: That's what I do to see what I'm thinking. I turn my eyes around until they can look at my brain.
I said: Isn't it dark in there?
Davy said: No, my ideas light it up.
So there you have it: A six-year-old's account of introspection! No better or worse, I suspect, than many of the theories already out there!