Most philosophers of mind think that believing requires having mental representations. Jerry Fodor and many others suggest that these representations have a language-like structure. Frank Jackson and a few others suggest that their structure is more map-like.
Consider this example. Sam believes that the mountain peak is 10 km north of the river and 10 km south of the coast and that there's an oasis 5 km due east of it. What happens when he learns that the mountain peak is actually 15 km north of the river and 5 km south of the coast?
If Sam's representations are map-like, then changes to any part will ramify automatically throughout the system: Moving the mountain peak farther north on the map automatically changes its position relative to the oasis (which is now southeast, not due east, and a bit farther away), automatically changes the distance between it and the road south of the river, the length of the northbound trail, etc. A single change on the map draws with it a vast number of changes in the relationships of the elements. If his representations are language-like, no automatic ramification ensues: One can change the sentences about the distance of the mountain from the river and the coast without altering anything else.
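The contrast can be sketched in a few lines of code. The place names and coordinates below are invented for illustration: a "map" stores locations and computes relations on demand, so one change ramifies; a "sentence" store records each relation independently, so nothing does.

```python
import math

# Map-like: store coordinates; relations are derived, so one change ramifies.
places = {"river": (0, 0), "peak": (0, 10), "coast": (0, 20), "oasis": (5, 10)}

def bearing_and_distance(a, b):
    """Compass bearing (degrees clockwise from north) and distance from a to b."""
    (ax, ay), (bx, by) = places[a], places[b]
    dx, dy = bx - ax, by - ay
    return math.degrees(math.atan2(dx, dy)), math.hypot(dx, dy)

# Before the update: the oasis is due east of the peak, 5 km away.
print(bearing_and_distance("peak", "oasis"))  # (90.0, 5.0)

# Sam learns the peak is 15 km north of the river...
places["peak"] = (0, 15)

# ...and the peak-oasis relation has changed with no further bookkeeping:
print(bearing_and_distance("peak", "oasis"))  # roughly 135 degrees (southeast), 7.07 km

# Sentence-like: each relation is a separate entry; updating one leaves
# the others untouched, including the now-stale oasis "sentence".
sentences = {"peak km north of river": 10,
             "peak km south of coast": 10,
             "oasis km east of peak": 5}
sentences["peak km north of river"] = 15
```

The design point is just that in the map-like structure the oasis relation was never stored at all, only derivable, so it cannot fall out of date.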
Whether the advantage here goes to the maps view or the language view is unclear: On the one hand, the maps view nicely captures the fact that when one belief changes, we seem to update our connected beliefs effortlessly and automatically. (Consider, analogously, where a geographical map isn't at issue, the case of learning that Gary has replaced Georgia as chair; I now know that Gary will be running the meetings, that he'll be signing the forms, that Georgia will be back in her regular office, etc., for quite a large number of propositions, with no noticeable effort on my part.) On the other hand, the sentences view seems more naturally to account for those cases in which we fail to ramify -- if for example, Sam fails to take account of the fact that the oasis will no longer be due east of the mountain, or if I go to visit Georgia in the chair's office after Gary has moved in.
Relatedly, the sentences view seems more naturally and easily to account for inconsistency and indeterminacy in one's belief system. Can a map even be logically inconsistent? The maps view can perhaps avoid this problem by postulating multiple maps, each inconsistent with the others, but that drags in its train a lot of questions and apparatus pertaining to issues like reduplication and the relationship of the maps to each other.
The maps view seems to avoid the problems of implicit belief and overcrowding of the "belief box" in cases like that of the number of planets (less than 100, less than 101, less than 102, etc.: many sentence-tokens, but a small map; see this post). On the other hand, we probably don't want to say that you believe every consequence of what appears on your map, and it's not clear where to draw the line.
Is there a fact of the matter here (or a third model that avoids all these problems?) -- or just competing models with different strengths and weaknesses?
Wednesday, August 30, 2006
Tuesday, August 29, 2006
Is it Irrational to Wish to be Human? (by Guest Blogger Brad Cokelet)
First off, I want to thank Eric for inviting me to blog on The Splintered Mind. I hope my posts, like Eric’s, help some fellow procrastinators fill their time in a less regrettable fashion than they would otherwise. But today’s topic is wishing, not hoping.
In the Groundwork, Kant makes a striking, negative claim about what it is rational to wish for: “But inclinations themselves, as sources of needs, are so far from having absolute value to make them desirable for their own sake that it must rather be the universal wish of every rational being to be wholly free of them” [4:428]. Is Kant right? Is it irrational to wish to have human inclinations for creature comforts like food and sex?
It will help to consider an argument for a similar conclusion that Graham Oddie discusses in his recent book. It focuses (no surprise) on desires for things that are indeed good; desires for the bad are presumably not something a rational agent would wish to have. Here is a version of that argument adapted to present purposes.
First, note that if we desire something then we do not have it. Thus, given that the object of a desire is something good, our desiring entails that we lack something good. But that entails that if you wish to have a desire for something good, then you wish to be without something good. And given that it is irrational to wish to be without something good instead of wishing to have that good, it follows that it is irrational to wish to have desires for the good. So we can conclude that it is irrational to wish to have desires for the good.
How should we respond to this argument and Kant’s claim?
I think the most promising possibility is to argue that some desires are plausibly seen as necessary parts of or necessary means to a valuable experience. For example, we might argue that you have to have sexual desires in order to have valuable sexual experiences. We could then argue in one of two ways:
(1) That these desires are instrumentally valuable and that we cannot conceive of the experiences without them.
(2) That the desires are intrinsically valuable because we can undergo the relevant valuable experience if and only if we delay gratification and enjoy a desire, so to speak.
Questions abound here: Are there other types of experiences that we need desires in order to have? Is there a better way to respond to this argument? Or is Kant right that, if we are rational, we will wish to leave our humanity behind?
Monday, August 28, 2006
Implicit Belief and Tokens in the "Belief Box"
You believe the number of planets is less than -- well, let's be safe, 100! You also believe that the number of planets is less than 101, and that the number of planets is less than 102, etc. Or are you only disposed to believe such things, when prompted to reflect on them, while prior to reflection you neither believe nor disbelieve them?
Most philosophers of mind who've discussed the issue (e.g., Fodor, Field, Lycan, Dennett) are inclined to think everyone does believe that the number of planets is less than 157, though they may never have explicitly entertained that idea.
Now some of these same authors also think that, paradigmatically, believing something requires having a sentence "tokened" somewhere in the mind (metaphorically, in the "belief box"). So, then, do I have an indefinitely large number of tokens in the belief box pertaining to the number of planets? That's a rather uncomfortable view, I think, if we take the idea of "tokening" relatively realistically (as Fodor and others would like us to do). Consequently, the idea of "implicit belief" might be appealing. As Fodor and Dennett describe it, only explicit beliefs require actual tokenings in the belief box. We believe something "implicitly" if its content is swiftly derivable from something we explicitly believe.
Here are some concerns about that idea:
(1.) It draws a sharp distinction -- between the real cognitive structure of explicit and implicit beliefs -- where folk-psychologically and (so far) empirically, none (or only a gradation of degree) is discernible. Five minutes ago, did you believe that Quebec was north of South Carolina (or New York), that the freeways are slow at 5:17 p.m., that Clint Eastwood is a millionaire, that most philosophers aren't rich, etc.? Some of these contents I have probably previously entertained, others not; I don't know which, and it doesn't seem to matter. I note no sharp distinction between the implicit and explicit among them. (On the other hand, there is a sharp distinction, I think, between my belief that the number of my street address is less than 10,000 [it's 6855] and my just-now-generated belief that it's divisible by three, which took some figuring [using the rule that a number is divisible by three if and only if the sum of its digits is divisible by three, and 6+8+5+5=24].)
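The digit-sum shortcut in that parenthetical works in both directions, which is what licenses the inference from 24 to 6855. A quick check, purely for illustration:

```python
# The digit-sum rule: n is divisible by 3 if and only if the sum of
# n's decimal digits is divisible by 3.
def digit_sum(n):
    return sum(int(d) for d in str(n))

assert digit_sum(6855) == 24      # 6 + 8 + 5 + 5
assert 6855 % 3 == 0              # and 24 is divisible by 3, so 6855 is too

# The biconditional holds across a range of numbers:
assert all((digit_sum(n) % 3 == 0) == (n % 3 == 0) for n in range(1, 10000))
```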
UPDATE (2:39 p.m.) on (2) and (3):
(2.) Dennett imagines a chess-playing computer that, as a result of its programmed strategies, always tries to get its queen out early -- though there's nothing explicitly represented in the programming from which "get the queen out early" is swiftly derivable. Although most people wouldn't ascribe a chess-playing computer literal beliefs, one can imagine a similar structural situation arising in a person, if one accepts the rules-and-representations approach to cognition Fodor and others are offering. In such a case, we might still want to say, as Fodor grants, that the machine or person (implicitly) believes one should get the queen out early. But now we need a different account of implicit belief -- and there's a threat that one might go over to a view according to which to have a belief is to exhibit a certain pattern of behavior (and experience?) as Dennett, Ryle, and others have suggested; and that would be a change of position for most representationalists about belief.
(3.) Briefly: If beliefs commonly arise and (especially) are forgotten gradually, this puts strain on the central idea in the belief box model of the explicit-implicit distinction, which seems to assume that there's generally a distinct and discrete fact of the matter about what is "tokened" in the belief box. (At least I've never seen a convincing account of belief boxes that makes any useful sense of in-between cases and gradual change.)
(Thanks to Trent Dougherty for discussion of this issue [at This Is the Name of This Blog] in connection with my Stanford Encyclopedia entry on belief.)
Friday, August 25, 2006
Goldhagen's Challenge
Daniel Goldhagen's provocative (and controversial) book, Hitler's Willing Executioners, despite -- no, because of -- its simplifications, powerfully raises a question that every moral psychologist should consider: When one's culture, or subculture, embraces a noxious set of values, what resources do ordinary individuals have to discover the immorality of those values?
In the early 1940s, Reserve Police Battalion 101, composed of a fairly arbitrary slice of 300 or so ordinary men from northern Germany -- men with no particular commitment to Nazism and little ideological training -- was sent to Poland to kill Jews. They killed thousands of men, women, and children. The two most prominent histories of this event -- Goldhagen's book and Christopher Browning's Ordinary Men -- concur on the basic facts of the case: That these ordinary men committed this slaughter willingly, without threat of severe punishment, and largely without objection. Browning thinks that at least some of the men felt pangs of conscience and remorse; but he concludes this (as Goldhagen points out) largely based on self-exculpatory claims ("well, I didn't want to do it") that these men gave at trial. If we dismiss such self-exculpatory claims, and look at the evidence of the time and the claims made by these men about the feelings of other men, Goldhagen argues, it is very difficult to find signs of genuine remorse or moral disapproval. The men posed for pictures of themselves tormenting Jews; almost none applied for transfer (one who did -- the one clear objector who consistently refused to participate in the genocide -- was actually transferred back to Germany and promoted!); there were plenty of volunteers for "Jew hunts"; etc.
Goldhagen points out that these men were given plenty of time to reflect: They had considerable free time between their genocidal activities. They had furloughs during which they could go home and gain perspective. And given the evident significance of what they were doing, reflection would certainly be natural. Based on their behavior, this reflection seems largely to have confirmed the permissibility, perhaps even praiseworthiness, of the genocide.
I would like to think that reflection tends to lead to moral improvement, to the discovery of right and wrong -- and that it has the power to do so, at least to some degree, even in an environment of noxious values. I'd like to think that an ordinary man, anti-Semitic but not brainwashed, asked to walk into the forest side by side with an innocent Jewish girl then shoot her in the head, could, by reflection, see that what he has been asked to do is gravely morally wrong.
But maybe not. (After all, ethical reflection doesn't seem to help philosophy professors much.)
Wednesday, August 23, 2006
Nasal Phosphenes and the Extent of the Visual Field
Of course you know this: If you press one corner of your eye, a bit of a spot will appear at the opposite end of your visual field, due to mechanical stimulation of the retina. (Or didn't you know that?) I find the effect clearest when I close one eye and press near the midline on the outside edge of the other eye. A phosphene -- seemingly a dark spot encircled by a bright ring -- appears right near the nasal edge of the visual field.
Now here's a challenge: See how far into the periphery you can get that nasal-side phosphene. Keep the other eye closed, half close the lid of the eye you're pressing, and rotate the pressure a bit toward the center of the eye. Can you see the phosphene go deeper in? At some point I find the phosphene disappears, but only well into the dark field -- not in the shade created by my nose but in what would, if the phosphene were located in space, be the interior region of my face. Who'd've thought we'd have retinal receptors that could refer stimuli there?!
Now if I open the other eye, I see that the phosphene is actually (barely) within the visual field created by both my eyes together. Well, that makes sense! Otherwise, there'd be some explaining to do about how the experienced visual field for phosphenes can extend beyond that for outward objects!
My question is this. When I close one eye, normally, does my visual field contract? My first impulse is to say yes. But now I wonder whether instead, my visual field might actually contain some of that dark region I've recently been exploring with the phosphene, the region inside my face -- a region I usually totally ignore, of course! If I turn my one open eye as far inward as it can go, I see my nose, and blackness farther in. But blackness, of course, isn't the end of the visual field. It's part of the visual field -- a part experienced as black. To see the contrast, rotate your eye as far outward as it can go. There's no blackness at the edge (or maybe just a little in the bottom half?) -- the visual field just stops.
So, when I stare straight ahead with one eye closed, does my visual field stop somewhere near my nose, like it stops at the outer edge, or does it include inward blackness, stretching over the entirety of the visual space that would normally be filled in by peripheral input from the other eye?
Monday, August 21, 2006
On (Not) Washing Your Car
I often walk in the early mornings, around sunrise. Sometimes I see the following remarkable phenomenon: a man out washing his car. (Yes, always a man.)
What's so remarkable about this? For one thing, the car is always clean before it is washed, already the cleanest in the neighborhood. No doubt a certain type of eye could have found an imperfection of dirt in it somewhere, though who but this man himself would inspect his car so closely? Maybe washing it again gives it a special sheen? Well, maybe so! -- but the difference is at most incremental and brief, thin pay for his labor.
Does he enjoy washing his car? Probably he does; and it gets him out in the morning air. But why should one enjoy washing a car more than washing dishes, or going for a walk, or lingering over the newspaper, or gardening, or any of the many other things a man could do in the morning for profit, pleasure, kindness, or self-maintenance? Surely he's not simply bored?
No, he loves his car. In washing, he caresses it. He spins the shammy cloth with a flourish. He takes comfort in the ritual. He tells himself it's only a chore, that he doesn't want to do it but he should -- that those of us who wash our cars at more moderate intervals are slovenly. Is his car a production of his hands to be proud of, a rebuilt 1932 Ford, maybe? No, it's an ordinary 2005 Lexus -- a "prestige" car. Is he proud of that? He is, of course, in his secret heart (and why not?), but also -- perhaps more -- he delights in the car itself, in its shine, its smooth surfaces, its power.
My father drove his cars into the ground. He never washed them. Following my father, I used to be proud, too, in my own perverse way -- proud that I drove a dirty 1985 Corolla rather than a two-year-old Lexus, proud precisely because it was an old and dirty car, and so -- I thought -- it said something about where my values, time, and money were: somewhere other than my car!
That pride was misplaced, though. I would never say the same about having an unwashed body and clothes. I would never sport them proudly as evidence that I have better things to do with my time and resources than take a shower and use a washing-machine! The Corolla is gone; I wash my two-year-old Honda, though not as often as I should.
My father always lived next-door to car-washers. He was always friendly with them, and them with him; but they could never entirely understand each other.
(Revised August 30.)
Friday, August 18, 2006
The New Philosophers' Carnival is...
here. (It took me a couple days to spot it; I was out of town, playing in the La Jolla surf rather than surfing the blogs!) Gracias, Marcos!
When Our Eyes Are Closed, What Do We See?
Here are some possibilities:
(1.) blackness or a grey haze,
(2.) nothing at all (not even blackness),
(3.) a field of shifting colors,
(4.) afterimages on a background field of black or gray,
(5.) patterns of color, not obviously related to external stimulus, on a field of black or gray,
(6.) visual imagery of objects, maybe like a dream or a daydream or a faint perception or a hallucination?
What do you think?
Last week I asked visitors here and a number of people around UCR. The responses were wonderfully various, and some of them quite detailed, but if I had to summarize or force them into boxes I'd say I got these results:
(1) black/grey (possibly shifting or undulating): 4 respondents;
(2) usually nothing, not even blackness: 1 respondent;
(3) colors: 4 or 5 respondents;
(4) afterimages: 1 or 2 respondents;
(5) patterns of color against a black or grey background: 5 respondents;
(6) visual imagery of objects or scenes: 1 respondent, but 3 others mentioned that they sometimes have this experience or they have it after a while.
Now what's interesting to me in this is the variability in the responses. Do people really experience such radically different things when they close their eyes? I don't think we can dismiss these differences as differences merely in the language used to convey basically the same idea (except maybe between 3 and 5, though even there only in some cases). Could some people simply be wrong about such an apparently easily observable matter?
To bolster some of the less popular responses:
On (2): Many researchers think we don't have conscious experience at all without attention -- for example, when we're not attending to the refrigerator hum in the background, we don't have any auditory experience of it at all. Others disagree. (I've written about this here and posted on it here.) If we go with those who deny experience outside of attention, and if we think that when our eyes are closed we normally aren't visually attending to anything, that lends some credence to (2). Of course, on this view, when I ask you what your visual experience is, and then you close your eyes, your attentional state is atypical and you'll probably experience something. But this may be quite unrepresentative of our visual experience when we're listening to music with eyes closed or when we're trying to fall asleep, etc.
On (1): This response can be buttressed by a related argument. Maybe paying attention to visual experience with one's eyes closed does something to bring out or enliven that experience, with afterimages and shifting colors, but normally, when we're not really attending, it's more like a plain black or grey field.
On (4): Early introspective psychologists like Purkinje, Helmholtz, and Titchener seemed to think afterimages constituted a major part of one's eyes-closed visual experience. They studied this stuff intensively, so I don't think we should simply dismiss their reports.
Okay, I've managed to completely bewilder myself! Any thoughts about how (and whether) we can make progress on this question? Maybe it's not, strictly speaking, an especially important question. But it's a basic question -- one that's surprisingly hard to find good discussions of, this late in our study of the mind and visual experience.
My Long Encyclopedia Entry on "Belief"...
... is finally up at the Stanford Encyclopedia of Philosophy, after my having worked on it off and on for four years. Bad timing on the "number of planets" example, though!
Wednesday, August 16, 2006
Does Saying "I'm Thinking of a Pink Elephant" Make It True?
Suppose I say "I'm thinking of a pink elephant". I'm sincere and there's no linguistic mistake. Does merely thinking such a thought or reaching such a judgment, silently or aloud, make it true? Tyler Burge and Jaakko Hintikka (among many others) have endorsed this idea; and it has often been thought key to understanding introspective self-knowledge.
I'll grant this: Certain things plausibly follow from the very having of a thought: that I'm thinking, that my thought has the content it has. Any thought that manages to assert the conditions or consequences of its existence will necessarily be true whenever it occurs.
But, indeed, anything that's evaluable as true or false, if it asserts the conditions or consequences of its existence, or has the right self-referential structure, will necessarily be true whenever it occurs: the spoken utterance "I'm speaking" or "I'm saying 'blu-bob'"; any English occurrence of "this sentence has five words"; any semaphore utterance of "I have two flags". This is simply the phenomenon of self-fulfillment. This kind of infallibility is cheap.
If I utter an infallibly self-fulfilling sentence, or if I have an infallibly self-fulfilling thought, it will be true regardless of what caused that utterance or thought -- whether introspection, fallacious reasoning, evil neurosurgery, quantum accident, stroke, indigestion, divine intervention, or sheer frolicsome confabulation. If "I'm thinking of a pink elephant" is of this species, then despite its infallibility, no particular introspective capacity, no remarkable self-detection, is required. And very little follows in general about our self-knowledge.
But I'm not sure that it is really necessary to think of a pink elephant to utter sincerely and comprehendingly, "I'm thinking of a pink elephant". Surely I needn't have a visual image of a pink elephant. Nor need I have, it seems, a sentence in inner speech to that effect (especially if the thought is uttered aloud). What is it to "think of" something? Is it merely to refer to it? To include it in a silent or spoken judgment? That seems a rather thin notion of "thinking"; and if we do adopt that notion, the vacuity of the infallibility claim becomes even more obvious. It becomes tantamount to "this thought makes reference to the following object: a pink elephant". Then it really is structurally no different from the utterance "I'm saying 'blu-bob'".
So, despite some philosophers' quest for and emphasis on the infallible in self-knowledge of the mind, to me the matter seems rather trivial, unimportant, and in fact utterly unconnected to the issue of the trustworthiness of introspection. (Take that, Descartes!) Or am I missing something? Maybe a fan of the importance of self-verifying thoughts can help me out?
Monday, August 14, 2006
Are Salty Experiences Salty? Are Square Experiences Square?
It has often been noticed -- I won't try to track down the history of this observation, but Ned Block and David Chalmers come to mind, and Ignacio Prado states it nicely in a comment on my Aug. 7 post about Dennett on fictions -- that the language in which we describe sensory experience is often derivative of the language we use to describe the objects sensed. Asked to describe the taste experience I have biting into a potato chip, I might say "salty". Now probably I don't (or shouldn't?) mean that my experience itself is salty. Rather, it's the potato chip that's salty. My experience is only "salty" in the sense that it's the kind of experience typically caused by salty things -- it's an experience of the saltiness of something else. Likewise, I might say that my tactile experience of the carpet is "rough" and my olfactory experience of my wife's cooking is "oniony".
Asked to describe such experiences without recourse to the language of external objects, I'm stumped. I could try analogy: "It tastes like a rushing elephant!" -- but the metaphor is weak, and we're still in the language of external objects, anyway. I could say a few things using concepts that apply literally both to experiences and to external events: The taste experience had a sharp onset, then gradually faded. But this won't get us far. Likewise, I can say things like that it was pleasurable and that it made me want to eat more, but such remarks, again, don't have much specificity. In attempting to describe sensory experience accurately and in detail, we depend essentially on our language for describing sensed objects.
I've used tactile, gustatory, and olfactory examples so far. What about vision and hearing? It's clear -- I think it's clear! -- that taste experiences aren't literally salty, etc., in the same way external objects can be salty. The parallel observation is not as clear for auditory and visual experience. Are experiences of square things literally square? Are experiences of loud things literally loud? By analogy, it seems we should say no; but on the other hand I, at least, feel some temptation to say that there is something literally square in my visual experience of a square object -- a squarish sense-datum? And can auditory experiences literally differ in volume in the way external sound events can differ in volume (louder and quieter sense data)? My taste experiences plainly don't literally contain salt, but maybe my visual experiences of squares do literally contain four equally sized edges set at right angles to each other?
Aren't the senses all ontologically parallel? Why, then, is it so much more tempting to say that square experiences are square than that salty experiences are salty? (Color introduces a separate batch of primary-quality/secondary-quality confusions, so I'm avoiding it as an example.)
The description of emotional experience, I might add, seems to work a little differently, though it presents its own problems. We can try to locate it in space and associate it with bodily conditions (I felt a twisting in my gut); we can fit it into an emotional category (it was a feeling of sadness); but most people I've interviewed think such descriptions don't do full justice to the phenomena and feel frustrated when asked to describe their emotional phenomenology very precisely and accurately.
Friday, August 11, 2006
When Your Eyes Are Closed...
... (but you are awake) do you generally have visual experience, or not? And if you do, what is that experience like?
I've been polling my acquaintances. Post your reflections in a reply (ornery responses welcomed). I'd encourage you to settle on your own response before reading the responses of others.
Next week, I'll work up a little post on the results and why I care.
Thanks for being my victims, I mean, um, participant-collaborators!
Wednesday, August 09, 2006
Degrees of Conscious Judging? Degrees, but Not of Confidence?
Keith Frankish, in Mind and Supermind, argues (among many other things) that although non-conscious dispositional beliefs come in degrees of confidence, conscious beliefs (he calls them "superbeliefs") are always flat-out yes-or-no, are always simply either accepted or not accepted. I'd like to disagree with that.
(By the way, I'd recommend Dominic Murphy's illuminating, if somewhat critical, review of Mind and Supermind.)
It won't do to object that obviously we sometimes consciously think that some proposition is only somewhat likely to be true. Frankish can handle such cases as defenders of flat-out belief have always done: He can say such judgments are flat-out judgments about likelihoods. (Jargonistically, one might say it's bel[flat-out](pr(P) = .9) rather than bel[.9](P).) If the debate comes down to attempting to distinguish between believing flat-out that something has a certain likelihood of being true and believing that thing simpliciter with some degree of confidence, it's going to take some subtle argument to straighten it out; and indeed, I suspect, it will ultimately be a pragmatic decision about how to regiment the word "belief" for scholarly purposes.
So let's not touch probability and degrees of confidence. It has always bothered me, anyway, philosophers' talk about degrees of confidence as "degrees of belief" -- as though having intermediate confidence were the only way to be between fully believing something and entirely lacking the belief!
Consider superstition. At the craps table, the dice come around again to a man who has been a "cold" shooter the last three times in a row; I wager against him, on the "don't pass". In a sense, I know the dice are fair and he's no more likely to crap out than the hot shooter; but still I think to myself "he's gonna flail", and I'm much more comfortable with the don't pass than the pass bet. It doesn't seem quite right to say that I consciously judge that he's going to flail or quite right to say that I consciously judge that he's as likely as the next guy to have a hot run. Maybe in some cases, I flip between two contradictory judgments, each a full and genuine judgment; but can't I also, instead, simply have a single superstitious thought that I recognize as superstitious and yet half judge to be true? Such cases are in-between, I'd suggest, in a way not neatly captured by positing intermediate degrees of confidence.
Consider half-endorsed thoughts. Someone is speaking. You're not sweating it too much, but going with the flow. She hasn't said anything startling or questionable enough for alarms to go off, so you're nodding -- not entirely absently, but not entirely conscientiously either. Are you judging or believing what she says? Here, too, I think there is a spectrum from idly letting words wash over you to fully endorsing and accepting them, a spectrum not best characterized in terms of degrees of confidence. Similar phenomena occur with slogans that come unbidden to mind, or remembered strands of prose, or memorized lecture notes.
Consider indeterminate content. Mary thinks consciously to herself that the purse is on the bed. The thought has half-formed associations: that it's not in the kitchen, that she'd better go to the bedroom, that she'll be needing it soon. Must all these thoughts or associations be either fully formed or entirely absent from consciousness? Must Mary's thought have a single, precise English content (as perhaps an instance of inner speech does?) that includes its being on the bed, say, but not its being in the bedroom?
In my published discussions of in-between belief (e.g. here and here), I've emphasized dispositional cases, where someone is simultaneously disposed to act one way in some situations and another way in other situations. I have not, as here, focused on cases in which a single, individual thought may fall between being a genuine judgment and not being one.
The Splintered Mind Will Be Hosting...
... the Philosophers' Carnival on Nov. 6!
Veteran bloggers will know about the carnival, but for those who don't, it's an annotated collection of links to selected philosophical blog posts. It's easy to nominate a post to be included in a carnival: Just click the link above and follow the directions.
Monday, August 07, 2006
Dennett on Fictions about Consciousness
Dan Dennett, in his seminal work, Consciousness Explained, says some confusing things about the ontological status of claims about consciousness. Sometimes, he seems to say there are facts about consciousness that we can get right or wrong; at other times he compares claims about consciousness to the claims of fiction writers about their fictional worlds -- claims that simply can't be wrong, any more than Doyle could be wrong about the color of Holmes's easy chair (CE, p. 81). The first strand tends to be emphasized by those who find Dennett appallingly (or appealingly) committed to the possibility of pervasive and radical mistakes about consciousness (Alva Noe calls Dennett the "eminence grise" of the new skepticism about consciousness), the second strand by people attracted (or repulsed) by the promise of an end to questions about what our "real" conscious experience is, underneath our reports.
Both strands of Consciousness Explained have their appeal, but I can't seem to reconcile them. One can't get it wrong in one's reports about one's consciousness, it seems to me, if there are no facts about consciousness underneath one's reports. Fiction writers can't make errors of fact about their fictional worlds.
I've written a paper outlining my confusion more fully, forthcoming in a special issue of Phenomenology and the Cognitive Sciences. Dennett has written a very gracious reply to my criticisms in sections 2-3 of "Heterophenomenology Reconsidered". Among other things, he suggests a helpful analogy. Imagine cavemen brought into the 21st century; to describe what they see, they'll be forced to use metaphorical language (a pencil might be a slender woody plant with a black center that marks square white leaves, etc.). I'm amenable to this way of thinking about our phenomenological reports: Despite the apparent nearness and familiarity of experience, our tools for thinking about it and our conceptualizations of it are very weak and primitive. But nothing Dennett says in his response seems to me to remove the tension between the we-often-get-it-wrong strand of his work and the we're-fiction-writers strand. The cavemen, of course, aren't fiction writers. They can't, like Doyle, make their claims true simply by uttering them. Of course they use metaphor, as Dennett emphasizes; but metaphor and fiction are two entirely different beasts.
When I pressed Dennett about this in an email, he responded with the interesting suggestion that I was thinking too narrowly about fiction. Not all fiction is novels; there are "theoretical fictions" like quarks (maybe) or functionalist homunculi. And of course the rules governing the use of theoretical fictions in science are quite different from those governing novels.
If Dennett really endorses this (and I don't want necessarily to hold him to a quick remark in an email), it seems to me to represent a shift of position, given his earlier talk about Doyle and Holmes's easy chair. But I don't know if it entirely resolves the tension, or how appealing it is as a view. Claims involving theoretical fictions, for instance, probably should not be evaluated as true or false. Rather, they are helpful or unhelpful, provide an elegant model of the observable phenomena or don't. Is this really how we want to think about our phenomenological claims?
Rather than novelists or positers of theoretical fictions, I'd rather see the person reporting her phenomenology as like a witness on the stand. She aims (if sincere) to be speaking the literal truth; and her claims can come close to it or can miss the mark entirely. Perhaps, in some ways, she will be like a caveman asked to report a drive-by shooting, stuck with inadequate conceptions and vocabulary, forced to (witting or unwitting) metaphor; but there's still a realm of facts that render her claims, independently of her or our judgment, true or false or somewhere in between.
Addendum, August 10: Pete Mandik reminds me in a comment that Dennett has spoken at length about "theoretical fictions" in earlier writings on belief and desire attribution -- for example in his magnificent essay "Real Patterns" (1991) and in The Intentional Stance (1987). There, Dennett's examples of "theoretical fictions" are things like centers of gravity and equators, not quarks and homunculi. (Dennett cited no particular examples in his email to me.)
Now, I'm more inclined to think that claims about centers of gravity are literally true than claims about homunculi. So if that's the kind of thing Dennett has in mind, my last remark above may be off target. On the other hand, the rules governing "theoretical fictions" of that sort match very nearly those governing literal language. This brings us even farther from the Doyle-style, saying-it-makes-it-true model in Consciousness Explained.
Both strands of Consciousness Explained have their appeal, but I can't seem to reconcile them. One can't get it wrong in one's reports about one's consciousness, it seems to me, if there are no facts about consciousness underneath one's reports. Fiction writers can't make errors of fact about their fictional worlds.
I've written a paper outlining my confusion more fully, forthcoming in a special issue of Phenomenology and the Cognitive Sciences. Dennett has written a very gracious reply to my criticisms in sections 2-3 of "Heterophenomenology Reconsidered". Among other things, he suggests a helpful analogy. Imagine cavemen brought into the 21st century; to describe what they see, they'll be forced to use metaphorical language (a pencil might be a slender woody plant with a black center that marks square white leaves, etc.). I'm amenable to this way of thinking about our phenomenological reports: Despite the apparent nearness and familiarity of experience, our tools for thinking about it and our conceptualizations of it are very weak and primitive. But nothing Dennett says in his response seems to me to remove the tension between the we-often-get-it-wrong strand of his work with the we're-fiction-writers strand. The cavemen, of course, aren't fiction writers. They can't, like Doyle, make their claims true simply by uttering them. Of course they use metaphor, as Dennett emphasizes; but metaphor and fiction are two entirely different beasts.
When I pressed Dennett about this in an email, he responded with the interesting suggestion that I was thinking too narrowly about fiction. Not all fiction is novels; there are "theoretical fictions" like quarks (maybe) or functionalist homunculi. And of course the rules governing the use of theoretical fictions in science are quite different from those governing novels.
If Dennett really endorses this (and I don't want necessarily to hold him to a quick remark in an email), it seems to me represent a shift of position, given his earlier talk about Doyle and Holmes's easy chair. But I don't know if it entirely resolves the tension, or how appealing it is as a view. Claims involving theoretical fictions, for instance, probably should not be evaluated as true or false. Rather, they are helpful or unhelpful, provide an elegant model of the observable phenomena or don't. Is this really how we want to think about our phenomenological claims?
Rather than novelists or positers of theoretical fictions, I'd rather see the person reporting her phenomenology as like a witness on the stand. She aims (if sincere) to be speaking the literal truth; and her claims can come close to it or can miss the mark entirely. Perhaps, in some ways, she will be like a caveman asked to report a drive-by shooting, stuck with inadequate conceptions and vocabulary, forced to (witting or unwitting) metaphor; but there's still a realm of facts that render her claims, independently of her or our judgment, true or false or somewhere in between.
Addendum, August 10: Pete Mandik reminds me in a comment that Dennett has spoken at length about "theoretical fictions" in earlier writings on belief and desire attribution -- for example in his magnificent essay "Real Patterns" (1991) and in The Intentional Stance (1987). There, Dennett's examples of "theoretical fictions" are things like centers of gravity and equators, not quarks and homunculi. (Dennett cited no particular examples in his email to me.)
Now, I'm more inclined to think that claims about centers of gravity are literally true than that claims about homunculi are. So if that's the kind of thing Dennett has in mind, my last remark above may be off target. On the other hand, the rules governing "theoretical fictions" of that sort match very nearly those governing literal language. This brings us even farther from the Doyle-style, saying-it-makes-it-true model in Consciousness Explained.
Friday, August 04, 2006
The Golden Rule vs. Mencius's "Extension"
One kind of emotionally engaged ethical reflection involves putting oneself in another's shoes, as it were -- imagining what things would be like from another's perspective. This kind of empathetic or sympathetic reasoning has received considerable attention in moral psychology, from the ancient Christian "Golden Rule" ("do unto others...") to contemporary moral psychologists such as (to mention just a couple) William Damon and Patricia Greenspan.
In ancient China, Confucius also employs a version of the Golden Rule ("Do not impose on others what you yourself do not desire"; Analects 15.25, Lau trans.; cf. 5.12). However, the next great Confucian, Mencius, offers a subtly and interestingly different view. His focus is rather on "extending" one's concern or love or respect from those close to you (or those you can see) to others farther away.
Here's the key difference, it seems to me: Whereas Golden Rule or empathy accounts start from presumed concern for oneself first, and then transfer that concern to others (perhaps by an imaginative act), Mencian extension starts from presumed concern for others nearby and then transfers that concern to others farther away (by noting that those farther away merit similar consideration).
(For example, Mencius says, "Among babes in arms there is none that does not know to love its parents. When they grow older, there is none that does not know to respect its elder brother. Treating one's parents as parents is benevolence. Respecting one's elders is righteousness. There is nothing else to do but extend these to the world" [7A15, Van Norden trans.].)
We can, of course, allow for both means of coming emotionally to take others into account in one's ethical reasoning. Both self-concern and familial concern are deep-seated. There's something especially appealing, though, about the Mencian process. It starts less egoistically and closer to the target, as it were; and it might be easier, both logically and emotionally, to justify the shift from concern for someone nearby to concern for someone far away than to justify the shift from self-concern to concern for others.
Wednesday, August 02, 2006
The Prevalence of Afterimages?
In autumn 2003, vacationing in Berkeley with wife and son, I was up early and late, in secret nooks of our hotel, reading E.B. Titchener's great 1600-page laboratory training manual of introspective psychology. (Security rousted me out of one corner where a woman must have thought I was a sexual predator lying in wait. Doubtless I was unshaven and uncombed.) The result was a more bleary-eyed interaction with my Bay Area in-laws, a vast string of confusions about conscious experience, and one essay that vanished into the void (as most essays do). Let me share one confusion.
Titchener seems to have assumed we have nearly constant afterimages. This follows naturally from his view that the eyes are constantly adapting to their surroundings, that they're constantly in motion, and that adaptation and de-adaptation are experienced in part as afterimages. Close your eyes and note that you do not experience perfect blackness but an array of afterimages. Now consider this question: Do you always experience afterimages like this when your eyes are closed -- for example as you lie down to sleep? The two most obvious possibilities: Yes, but you simply ignore and forget them most of the time, or no, but you can evoke them by thinking about them. For that matter, do you always experience blackness when your eyes are closed or do you sometimes (usually?) have no visual experience at all?
People will say different things. But how the heck can we figure out who's right?
Of course, your eyes needn't be closed to experience afterimages, so the question generalizes. You've been looking at your computer for a bit, I assume; now turn toward a blank wall. You can experience some (probably subtle) afterimage effects, if you attend properly. Anywhere you look, your eyes will start to adapt; then when they move somewhere else, their adaptation will start to change. Is there then a constant flux of subtle afterimages, generally lost and unattended against the much more vivid visual array? Or is visual experience relatively stable and simple? Though subtle, this question is fundamental to a full understanding of visual phenomenology. Yet it's not obvious how best to explore it.
Our visual experience is with us constantly (or at least frequently). Yet retrospective reflection on it is tainted by our poor memory for what's unattended, and concurrent introspection risks inventing the very objects we seek. So it remains unknown, despite its proximity.
Titchener, by the way, claims there are also afterimages of pressure, temperature, and movement. Do you ever experience those? Do you always experience them?