Friday, November 17, 2006

"Habituation" and Moral Development

On Wednesday, Gavin Lawrence spoke here at UC Riverside on Aristotle's conception of moral development. Lawrence argued that Aristotle placed something like "habituation" -- the acquisition of habits -- near the center of moral development, especially early moral development. Unfortunately, Lawrence left the concept of "habituation" relatively undeveloped.

One issue I find intriguing is the proper method of encouraging the acquisition of moral habits early in development.

Consider first a non-moral example: If you want a child to learn to like broccoli, is it better to force him to eat it again and again, expecting he'll learn to tolerate it more and eventually develop a taste for it? Or is forcing it counterproductive, leading the child increasingly to dislike it?

Correspondingly, if you want a child to learn to share, is it better to force her to share, or does forcing moral actions, contrary to the inclinations of the child's heart, only poison morality for her and impair her moral development?

In asking this question, I had the ancient Chinese philosophers Xunzi and Mencius in mind. Xunzi seems to adopt the first perspective as a general policy, and Mencius may endorse the latter (if P.J. Ivanhoe's interpretation is right, though I worry that Ivanhoe depends too much on a dubious interpretation of Mengzi 2A2). Ivanhoe's Mencius suggests that the best spur to moral development is encouraging people to reflect and discover their joy at behaving morally in certain situations; and as you act morally and reflect on this joy, the moral inclinations grow in breadth and strength.

I posed this question to Lawrence in the discussion period after his talk. He suggested that he doubted Aristotle would have thought there was one universally best way to encourage moral development. Maybe sometimes it's better to force, at other times to lay off. Surely that must be right (if a bit cagey). I wonder if we can't lay moral educators on a spectrum depending on the extent to which they see habituation by coercion as an important and non-damaging tool in moral education, and then dispute about how far to one end or the other of this spectrum it's best to go.

I posed the same question to my seven-year-old son Davy that evening, and here's what he said:
If you want a child to learn to like broccoli, put tasty sauce on it. If you want a child to learn to like to share, start with his sharing something he doesn't like anyway and make sure the other child has something really cool to share back.

Ah, the wisdom of those on whom moral education (and broccoli) is inflicted!

Wednesday, November 15, 2006

Can You Introspect Your Judgments?

Here's an issue I find weirdly difficult: Can you introspect your judgments -- that is, your "occurrent", happening-now assessments of (for example) the truth or falsity of some proposition? (I distinguish such judgments from standing, dispositional beliefs.)

Surely we can, often, know what our judgments are. I'm thinking about whether there will be a department meeting next week. I reach the judgment that there won't be, and I can accurately tell myself and others that this is my judgment. But is such knowledge of our judgments generally derived through introspection, exactly?

Well, what is introspection? Here's a narrow definition I find attractive: Introspection is a species of attention to ongoing (or maybe very recently past) conscious experience. If, then, there is a conscious experience of judging that there won't be a department meeting next week, and if I get to know that that's my judgment by attending, in some way, to that conscious experience, then I've learned about my judgment through introspection. But does that happen? Can that happen? If it can happen, is it ordinary or exceptional? (Alvin Goldman and David Pitt seem to think it's ordinary, and indeed the rule in self-knowledge of attitudes.)

A number of philosophers, including Gareth Evans, Robert Gordon, Richard Moran, and Dorit Bar-On, have given non-introspective accounts of self-knowledge in such cases. Roughly speaking, on such views, we think about or attend to the world -- not our own minds -- and self-ascriptive statements like "I think there won't be a department meeting" are simply expressions of such external, world-oriented judgments, but in self-ascriptive language. We do not cast our eyes introspectively inward, as it were, every time we say that we think such-and-such is the case.

It's quite plausible that at least some of our self-ascriptive statements are non-introspective in (roughly) this way -- but are they all? Must they be?

Suppose, turning my mind to the question of whether there will be a department meeting next week, I find myself uttering, silently to myself in inner speech: "No, no department meeting". It seems I can discover this inner-speechy fact about myself through introspection, no? But introspecting inner speech isn't the same as introspecting judgment, is it? For example, if I'm reciting lines from a play silently in my head, or an advertising jingle, I may have inner speech without the corresponding judgment. It also seems that judgment often precedes inner speech.

Similar considerations apply to the visual imagery that may accompany (partly constitute?) a thought.

So is there some distinctive phenomenology specifically of judgment that we often are, or sometimes are, or at least in principle can be, introspectively attuned to, that serves or can serve as a basis for our knowledge about our judgments? I find it slipping my grasp....

Tuesday, November 14, 2006

Those Annoying Security Warnings

So I updated my blog to the new Blogger Beta. (I didn't realize it was still in beta when I did so, though!) There are a few cool new features, like categories (finally!), but now Internet Explorer (not Firefox) gives all kinds of annoying pop-up "Security Information" windows on the comments pages.

Please be patient with that. Hopefully Blogger will resolve the issue. In the meantime, as far as I can tell the only thing you miss out on if you decline to show the "nonsecure items" is the beautiful photos of the comments' authors.

I seem to be having to log in more, also, rather than having the website recognize me.

Let me know if there are any other annoyances or inconveniences, and I can think about possible work-arounds.

Monday, November 13, 2006

The Best-Guess Phenomenon and Degree of Belief

You're a lost tourist. You hit up a local for directions. With apparent confidence and fluency, the local sends you in utterly the wrong direction. I experience this so regularly in travelling that I no longer rely on just one local for directions. About half the time, the second gives a completely different version from the first. Occasionally, someone will express a degree of uncertainty. Almost always, if she's at all uncertain, she's completely wrong; and many who seem perfectly confident are equally wrong.

I've been on the other end of this, too, at least twice, realizing that my directions were seriously mistaken only after having given them. In one case, I sent a Mexican family so far astray that I fear it would take them at least an hour to recover, were they unwise enough to trust me!

Similarly, but more seriously: The doctor tells you you have disease X, with evident assurance, no visible uncertainty. Another tells you you have disease Y. Or even: The doctor starts out seemingly uncertain, undecided, then settles on something as -- you might think -- her best guess; then, as she spells out the guess to you, she becomes seemingly confident about it, confident enough, apparently, to stake your health on it. But she's wrong, and in fact changes her opinion easily when the next doctor you see calls up the first and describes his different, better diagnosis.

Let's call this the "best guess phenomenon". In certain situations, when Person A is presumably an expert and Person B has no resources to challenge Person A's opinion, Person A will give her best guess, conveying it with authority and confidence regardless of how well-founded the opinion is. No malice is intended, nor any disguise. It's not that Person A knows she's uncertain and aims to conceal that fact. Rather, the situation invites Person A to take on the mantle of expertise, with very little sensitivity to the proper degree of confidence.

One model I think won't suffice for such cases: Conventional philosophical/economic treatments in terms of "degree of belief" on a scale from 0 to 1. Best-guess phenomena are not, I think, best described as cases in which Person A has an irrationally high degree of confidence. If asked to make a serious wager -- say, if the local wanted to get there herself, or if the doctor's own health were at stake -- she'd balk, admit uncertainty, consult elsewhere. Rather, it's more like degree of confidence doesn't arise as an issue: Person A is neither certain nor uncertain, really. She's just talking, playing authority as part of a social role, without much thought about how much certainty is justified.

Friday, November 10, 2006

Moral Philosophy as Pathology?

Well, that title is a bit strong! But here's my thought (developed, in part, in conversation with someone at the Philosophy of Science Association meeting last week; I'll protect his privacy, though, unless he tells me he wants acknowledgement).

In psychology, there's a joke, which seems to have some truth in it (how much, exactly, is an interesting empirical question!), that the clinical psychologists are all crazy, the social psychologists are all socially awkward, the developmental psychologists act like children, etc. People are sometimes, it seems, drawn to fields that reflect something of their personal habits of thinking or problem areas in their lives. What about ethicists?

One way of developing the idea is this: Many philosophical ethicists like to approach ethics through explicit reasoning. That may reflect a durable habit or character trait that predates their choice of ethics as a field of study. And I'd wager, also, that there's a weakly negative correlation between being prone to reason in a cool, academic way about ethical matters and having strong gut reactions about such matters. Maybe if your gut doesn't tell you what to do morally, you're more prone to look toward explicit reasoning for moral guidance.

This raises the possibility -- I don't put this forward as a hypothesis, but merely as a possibility -- that a certain portion of those drawn to ethics as a discipline are so drawn because they're attracted to philosophical reasoning about ethical matters as compensation for weaker-than-average moral gut reactions. For them, moral philosophy is, perhaps, a sort of accommodation or crutch. We might then expect them to do morally well when faced with moral decisions of the sort fairly tractable to explicit reasoning and less well with other sorts of moral decisions.

I'd be interested to hear what you think, and also whether you think there's any way to cast empirical light on the matter.

Monday, November 06, 2006

Philosophers' Carnival #38

Welcome to The Splintered Mind's hosting of the Philosophers’ Carnival! My seven-year-old son, Davy, has been asking me what philosophers do, so I thought I’d cast things in playground terms.

The Metaphysical Whirligig:
Whoa, I’m dizzy! Justin Kahn poses some cute and interesting problem cases for the principle of Ockham’s Razor. What kind of sick person would put razors on a children’s whirligig! Wa Salaam gives us mystical reflections on the moon. Phil for Humanity tells all the little children on the gig that Santa Claus – wait, no, I mean God – doesn’t exist, making them cry. How cruel! Ken Taylor claims that smart children can believe in Santa Claus -- or is that God? -- but the question is, how, given that the bad children seem to get just as many toys! Kenny Pearce isn’t so sure that it follows from the fact that he has hands that the physical world exists. The Boundaries of Language reflects on the scope of Dummett’s anti-realism; this is for serious and scholarly children. Heterodoxia warns us that some insights are just too cold for our little hands to handle. We are fearful, hubristic, romantic. The children boo! But Heterodoxia knew they would! Glittering Muse ponders on “nothing” in a way that reminds my son of Heidegger’s remark that “nothing noths” – uh-oh, now there’s this little kid calling himself Carnap who wants to push him off the gig. No, no – Carnap wants to explode the whole gig. Bail out! I here at The Splintered Mind simply wish I knew what metaphysics was.

The Philosophy of Mind Sandpit:
Brain Hammer manages to get the children arguing loudly about whether there’s a difference between the sand’s seeming wet and our simply having the disposition to judge that it’s wet. (But why is it wet, I worry?) Neil Levy reflects on whether the mind stays within the skull or drips and fiddles all over the place. Maybe mine is buried even, right here in this sand! Whoops, not here -- David Chalmers, who is known sometimes to play in the external mind-sands, discovers instead an old, almost entirely ignored article by Fred Dretske spoofing sense-data theories. The Philosopher’s Playground, unearthing a copy of Alice in Wonderland, invites us to reconsider the question of whether we can intentionally believe that the Queen is 101 years old, five months, and a day. Aaron Cotnoir wonders if Alice shouldn’t be blamed if she can’t, or should be if she can. Salamander Candy reflects on what raccoons think of all us children, and how we know whether there’s anything we do look like to raccoons, and how we know there’s anything, indeed, we look like to ourselves. And, if I remember correctly, Philosophy of Memory [sorry for the unspecific link; there seem to be server problems] invites us to think about individual differences in memory.

Philosophy of Language’s Curving Tunnel:
John Greco starts the children off with a serious lesson on contextualism vs. interest-dependent invariantism. Good stuff, but most of the children can’t even pronounce the title. Into the darkness you go! Lemmings asks whether Leibniz would tell all the children that if this piece of play-doh is cruddy but the sculpture they make with it is not cruddy, then the sculpture is not identical to this piece of play-doh. My, my – but I can barely even see it at all, Brit! Dinner Table Don’ts asks about the transitivity of subjunctive conditionals. If only he didn’t talk about this, we’d finally start having fun. If there was a nuclear holocaust, he wouldn’t talk about this. Therefore, if there was a nuclear holocaust, we’d finally start having fun. Wait, that doesn’t seem quite right.... Is this why Gregory Wheeler says conditionals are bad for your health? Benjamin Nelson curves into the deepest darkness of what he calls “Pattern-Oriented Relational Grammar”; too much for a simple seven-year-old like me!

The Epistemic Slide:
Fred Vaughan gives us The Given, but the other children aren’t sure we shouldn’t start our epistemic slide with stainless steel instead. Do you want to stand atop an idea, looking down?

The Moral Teeter-Totter:
Daylight Atheism sets the moral teeter-totter rocking: What we approve of and condemn is historically contingent! Suggested are some issues that future generations might see in a different light. Carnival maven Richard Chappell warns against vigilantism against the Atheistic kid, even if he seems unreasonable – for which the Atheist had better be thankful, I say! Joseph Orosco, however, asks whether “choosing torture might be a democratic prerogative”; playground bullies everywhere agree! Andy Egan wonders if it's fair to ask, though, what the idealized bully self would do. Hell’s Handmaiden argues against the electoral college. College?! Heck, we’re barely in elementary school, shout the kids. Funkified rides the teeter-totter without thinking, purely spontaneously, which he thinks [?] is best! Hueina Su reminds all children that they must love themselves. Westminster Wisdom inquires into the principles of judicial independence; the children make it plain that they like clear rules and consistency, except when the contrary is to their advantage! Moralheath claims that moral objectivity has “fallen upon hard times”, explaining our bad behavior. As though to prove him right, Francois Tremblay jumps entirely off the teeter-totter arguing that all morality is just a “religo-political smokescreen”! (But he falls into Pea Soup, where the distinction between moral realism and anti-realism is viscous and murky.)

Philosophy of Science Picnic Table:
The Voltage Gate tells all little children about the politics in the history of the science of human racial diversity. Children of all colors gather round, but they can’t tell each other apart! They may lie about their data, though. Janet Stemwedel wants to know why. Well, of course it's that their parents didn't raise them right! Humbug Online warns all children about too easily dismissing induction. Hasn’t the Humbug learned that each time someone raises this problem it only causes piss and consternation among the boys and girls at the table? Or will it be different, finally, this time?

The Historical Jungle Gym:
A Brood Comb invites us to think about Hegel’s dialectics with the example of left-right. Oh no! Soon the other children are upside-down and reaching the wrong direction, falling off the jungle gym into the deep sand of dialectic! Rethink rails on poor Cleitophon from Plato’s Republic, for caring only about his reputation and nothing for the common good, a malady he finds all too common. Frankly, I’m surprised to see someone so long dead still taking insults on the jungle gym. But actually, it seems that Cleitophon hasn’t moved much recently – in the last, say, 2000 years at least. Phluaria takes on Socrates himself, asking if he was schizoid! And somewhere in the sand under the Jungle Gym, The Skwib found some of Henri Bergson’s lost PowerPoint slides!

A number of children submitted political diatribes of various sorts. That's not the sandbox I remember! With apologies to them, and in keeping with The Splintered Mind's largely apolitical spirit, I have chosen not to include in this week’s carnival anything that seemed to me more politics than political philosophy.

A Plea for Chaperones:
There is no volunteer for the next carnival. Bloggers: If your blog is listed here, and you have never hosted, I herewith assert that you are morally obligated to host. No, no, children, don’t run away! It’s not as bad as having to set the dinner table, I promise. Go here and sign up! (Or else the Carnival will crumble, and that's the end of all your juicy links....)

Thursday, November 02, 2006

PMS-WIPS: The Unreliability of Naive Introspection

Tomorrow I'll be at the Philosophy of Science Association meeting in Vancouver. In lieu of my usual Friday post, I offer a link instead to PMS-WIPS over at The Brain Hammer, where the paper under consideration is my The Unreliability of Naive Introspection (which, by the way, is also what I'll be presenting at the PSA).

Wednesday, November 01, 2006

Weirdism

In virtue of what am I conscious, while chicken soup is not? Most philosophers regard this as a metaphysical question. It is widely held that something about my internal structure – the organization of my material parts – makes me conscious. Perhaps there is something special about neurons, or perhaps the relevant feature is the abstract functional relationships between my internal states, and between those states and my environment and behavior. Or maybe internal structure is irrelevant: If I possess the right causal relationships to my environment and/or the right behavioral dispositions, I am conscious, regardless of my internal organization.

Or one might reject materialism. Perhaps an immaterial soul is necessary, or the possession of immaterial properties. But in virtue of what do I have an immaterial soul or immaterial properties, while chicken soup does not? One might invoke a powerful, soul-imbuing deity, or one might relate the immaterial somehow to internal, material structure or to material causes and behavior patterns. Or perhaps everything is conscious, including the chicken soup; or material things do not exist at all. Or maybe there's some flaw in the very idea of "material".

If this doesn’t exhaust the alternatives, at least it comes close. Unfortunately, every one of these alternatives has seriously counterintuitive consequences. Most people find it intuitively plausible that alien or artificial beings, entirely lacking neural structures like our own, could at least in principle be conscious. Holding neurons to be uniquely capable of grounding consciousness contradicts that intuition. On the other hand, Block and Searle have shown that it is counterintuitive to regard as conscious everything with the right functional organization, or the right causal relationships to the environment and behavioral dispositions, regardless of composition – for example if the structure is implemented by a vast population of people communicating by radio, or by beer cans and wire in outer space. One might suggest that although neurons aren’t strictly necessary, something resembling neurons in some important way is necessary, but it is doubtful that one can escape the dilemma by that maneuver. Any organization functionally similar to human neural structure could probably be implemented in a system to which it would be counterintuitive to ascribe consciousness. More biochemical measures of similarity seem bound to exclude conceivably conscious aliens of some stripe. Insistence that the system be naturally evolved rules out some of the weirdest systems, but it also rules out the intuitively appealing possibility of conscious robots or conscious brains grown in vats.

Non-materialist views suffer similar difficulties. Naturalistic dualism faces the problems described in the previous paragraph with respect to classifying the kinds of systems that have immaterial souls or immaterial properties. Supernatural dualism faces issues of how immaterial substances could have physical effects and of the apparent smooth gradation from beings without consciousness to beings with consciousness in both phylogeny and development, as well as general arguments against the existence of supernatural entities. Panpsychism and idealism are counterintuitive from the outset. And, finally, it's hard to see how some weak notion of "material" could be fundamentally and ineliminably flawed or what it would buy us if it were.

None of this should be news to anyone who has taught a survey course in philosophy of mind. Every metaphysician of mind has to “bite the bullet” on some issue or other – that is, accept certain counterintuitive consequences of his or her position. But how to know which bullet is best to bite? We could try somehow to compare the relative unintuitiveness of various positions - but even if we could do that in some plausible way, using it as our metaphysical method presupposes that our everyday and philosophical armchair intuitions are a good guide to the nature of consciousness, including in strange cases involving aliens, etc. - and that seems to me a rather doubtful position (see my post "Metaphysics, What?"). But, on the other hand, it doesn't seem that there's any straightforward empirical, scientific way to determine whether a silicon-based alien that behaved much like us (for example) has genuine conscious phenomenology (as opposed to merely behaving as though he does), without begging the metaphysical question at the outset. So I'm at a loss.

Let me dub the view that something weird must be true about the mind, but who knows what weird thing is true, weirdism.

Monday, October 30, 2006

Can You See the Insides of Your Eyelids?

Visitors to The Splintered Mind in August and September will know I have this weird fascination with the question of what we see with our eyes closed. Admittedly, maybe the issue is not quite as important as the nature and pursuit of happiness, which I wrote about Friday.

Setting aside issues about afterimages, "light chaos", visual imagery, etc., here's one possibility: we see the insides of our eyelids. What do you think?

If I close one eye and hold one hand about a foot before the other eye, it's clear that I see my hand, right? Now I bring it slowly closer to the open eye until it eventually covers it completely, blocking out all light (though my eye remains open). Is there a point at which I go from seeing the hand to not seeing the hand? Or, as I sit here hand over eye, am I still seeing the hand, though no light whatsoever is reflecting off it or coming into my eye?

It seems to me slightly more natural to say that I see "nothing" than to say that I still see my hand. Maybe, then, we can say that when the hand stops reflecting light into my eye I stop seeing it? But reflecting light into the eye is a strange criterion for seeing, since it implies that I could never see anything that was absolutely black. And we don't want to say that: A good enough coat of black paint doesn't make things invisible -- just very black!

Maybe we can say that I see things as long as they would reflect light shining on them, into my eye, if they were not completely light-absorbent? No, that doesn't work either: Translucent things are visible, so reflecting light into my eye can't be a condition of seeing. And, indeed, my hand is partly translucent, as can be seen if I shine a flashlight through it, while sitting in the dark.

So say I do sit in the dark with a hand completely over an open eye and shine a light through that hand into the eye. Now am I seeing the hand? -- the redness of its blood, say? Or am I just seeing the light? Or both? And if I do see the hand in this case, do I also see it in the case when there is no detectable light coming through? Maybe we should say this, at least: I can see that something (mostly) opaque is covering my eye, even if I can't see the object itself?

All the same questions arise, of course, in the more normal case where one's eyelids are doing the occluding rather than one's hand.

What a magnificent tangle!

Friday, October 27, 2006

The Pursuit (or Not) of Happiness

The Founding Fathers of this country famously ranked "pursuit of happiness" right up there with life and liberty, among our unalienable rights. Psychological hedonism, historically a very important doctrine in philosophy, holds that we in fact pursue nothing but our own happiness. We (or at least Americans) tend to say that "happiness" is among the most important goals in life. But I wonder whether we pursue it very much at all.

Let's assume that happiness is some kind of durable positive mood or emotion or disposition toward positive moods and emotions. (Happiness has of course been defined numerous ways. It's too good a word not to be fought over for its positive resonances.) Something in that ballpark, anyway, seems to be what many Americans have in mind by "happiness".

Now consider this: How does sleep affect your moods and emotions? Surely, it has some important effects. Have you studied them? I seem to have the impression from some of my reading (though I won't look it up now) that mild, short-term sleep deprivation has a slight mood-elevating effect while longer-term sleep deprivation worsens mood. But I don't really know; and neither do you. (Confess!) But if one of your most important goals in life is your own happiness, shouldn't you try to gain some understanding of this? Most Americans, I think, are mildly sleep-deprived. Is it better for your happiness to stay up that extra half-hour watching TV or reading the newspaper or whatever, or to go to bed more directly?

You say you want happiness over all things, yet you let yourself be sleep-deprived and crabby all day?

Given a choice between going to a restaurant with my family and weeding or doing the dishes, I'd choose going to the restaurant every time. I'm even willing to pay for it -- and if I had more money I'd pay someone else to weed and clean. But I wonder, if I stepped back, whether I'd find myself happier in the restaurant or out in the yard.

I've played a few computer games in my day. Now I can watch my son doing it. What do I see? Often this: Frustration, frustration, frustration, relief. Is the pleasure of relief enough to compensate hedonically for the hours of frustration? Wouldn't I, and wouldn't my son, have been happier enjoying the sunset?

Why aren't we all happiness experts, and remarkable for our hedonic self-care?

Can you say, then, that we really are pursuing happiness, but only doing so with remarkable stupidity? No, no -- better and more natural to say that, despite the lip service, happiness is not very high among most people's favored pursuits.

(I'm trying to convince Dan Haybron to guest blog here next term. Go check out his website, in the meantime, if you want to learn more about happiness. And shouldn't you want to?)

Wednesday, October 25, 2006

Attensity

In his 1913 essay "Psychology as the Behaviorist Views It", which is widely credited with (or blamed for!) launching the behaviorist revolt against early introspective psychology, John B. Watson complains

Take the case of sensation. A sensation is defined [by introspective psychologists] in terms of its attributes. One psychologist will state with readiness that the attributes of a visual sensation are quality, extension, duration, and intensity. Another will add clearness. Still another that of order. I doubt if any one psychologist can draw up a set of statements describing what he means by sensation which will be agreed to by three other psychologists of different training.... I firmly believe that two hundred years from now, unless the introspective method is discarded, psychology will still be divided on the question as to whether auditory sensations have the quality of 'extension,' whether intensity is an attribute which can be applied to color, whether there is a difference in 'texture' between image and sensation and upon many hundreds of others of like character (p. 164).

Of course, consciousness studies and introspective psychology are back; and anyone who has delved into the details of scientific or quasi-scientific introspective reports will see the considerable merit in Watson's complaint. And yet it does not follow that there are no facts of the matter to be explored here; maybe it's just hard.

Take the attribute of "clearness". Watson surely has E.B. Titchener in mind here. Titchener characterizes clearness thus:

Clearness... is the attribute which distinguishes the 'focal' from the 'marginal' sensation; it is the attribute whose variation reflects the 'distribution of attention' (1908, p. 26).

Since the term "clearness" has a number of resonances and senses in ordinary language that muddy the issue, Titchener and his students later came to substitute a neologism for it: "attensity". Now, here is the question we must assess to determine if attensity is an attribute of visual sensation: Can two visual experiences be alike in intensity of color, in shape, in resolution of detail, etc., yet differ experientially only in respect of how closely one is attending to them or to their objects -- i.e., in their "attensity"? There are two ways to say no: One might say that degree of attention does not affect visual experience at all, but only later processing, so that my visual experience of this hat before me is exactly the same when I'm attending to it and when I'm not attending to it (assuming all else, such as lighting, angle of eyes, etc., is held constant). Or one might say that degree of attention does affect visual experience but only by means of changing something else, such as the vividness of color or resolution of detail (which of course is another, non-Titchenerian, meaning of "clearness").

Now is this the kind of question we can ever expect an introspective science to answer authoritatively? Or should we join Watson and declare it hopeless? I confess that I myself am torn. I see no reason in principle that we couldn't resolve such matters. Yet the historical divisions of opinion and the muddiness of the answers I expect I would get if I polled people on the matter, and the feeling of lack of progress and the irresolvability of debates between entrenched opponents -- all that gives cause for pessimism....

Wednesday, October 18, 2006

Brief Hiatus

Regular visitors will know I usually post on a MWF schedule. I'll be out of town for a long weekend, Friday through Monday, and I won't be able to post again until next Wednesday. (I'll be away for my annual "Geekend": Some old friends and I rent a cabin in the mountains near Palm Springs and do old-fashioned pencil-and-paper roleplaying games morning to midnight for three days straight, fueled mainly by coffee and Doritos. Yes, yes, I know. That's why we call it Geekend.)

A Plea for Stories about Virtue and Wickedness in Ethicists

I beg a favor. Tell me stories about the ethics professors you've known -- stories of their virtue or malfeasance, the more detail the better. Post them as comments on this post, or email me at eschwitz at domain: ucr.edu.

I ask you this not out of pure gossip-love, but to good philosophical ends -- in connection, that is, with my reflections on the relationship between moral reflection and moral behavior.

I'm interested in anecdotes here, not generalizations (to get generalizations, I will be conducting a survey in December), with enough detail to give a real flavor of the incident.

(If you write a long comment, I recommend that you do so first in your word processing program, then paste it into the comments section. Occasionally Blogger crashes posting a comment, and it can be frustrating when that comment is long!)

Philosophers' Carnival #37...

... is at Hell's Handmaiden. (Thanks to the Handmaiden!)

The next carnival, Nov. 6, will be right here at The Splintered Mind. I see that no one has signed up yet to host the Nov. 27 carnival. So if you have a blog of your own, think about volunteering!

Monday, October 16, 2006

Metaphysics, What?

Philosophers, I suppose, sometimes do metaphysics. No, let me put it more cautiously. Philosophers engage in certain practices, which they sometimes call "metaphysics". I can tell fairly well what sorts of practices will be labeled in this way -- e.g., much of David Lewis's work and the ensuing discussions, analytic philosophy of mind as driven by thought experiments, discussions of "personal identity". But is this really metaphysics? What the heck is metaphysics, anyway?

Here's one view. Let's call it the "mystical view" -- because really it is rather mystical, though many hard-nosed, atheistic philosophers seem implicitly (or even explicitly) to accept it. Metaphysics is the discovery, by a priori armchair reflection without depending upon anything empirical, of necessary truths of the universe -- truths such as that causes must precede effects, and that a functional duplicate of me must necessarily have (or will not necessarily have) conscious experience. Such facts are supposed to hold true regardless of our concepts, to be independent of our (contingent) ways of thinking about things. We tap into them not by looking at the world but rather by... well, that's the mystical part. How, exactly, do we learn about the outside universe (not just our own minds) without looking at it? Those philosophers who have gamely tried to explain the process in question -- George Bealer and Laurence BonJour, for example -- have tied themselves in such knots, been forced to wave their hands at such absolutely crucial junctures, and if I may be frank have failed so utterly as to make the hopelessness of their project even more evident after having read them than one might have thought beforehand.

Here's another view of what's going on. Call this the "no metaphysics" view. What philosophers learn from their armchairs, without looking at the world, are facts not about remote possible worlds accessible in no other way, or facts about the deep metaphysical structure of the universe, but rather facts about their own minds -- facts, especially, about their concepts. What else would one learn about, sitting in one's armchair? We learn that our concept of "cause" is a concept involving the temporal priority of the cause to the effect, our concept of a person is thus-and-such, etc.

But of course learning about our concepts is learning not metaphysical truths in the sense that philosophers ordinarily mean the phrase but rather learning contingent empirical facts about how we think. The concepts so delivered may be revisable in the face of empirical evidence (see Friday's post). And furthermore, they are empirically, psychologically explorable: There's more than one way to learn about "our" concepts. Philosophers in the armchair might not be getting the story right, or they may be an unrepresentative sample.

The philosophical practices labeled "metaphysics", then, have two uses, as I see it, neither of which is the discovering of metaphysical truths: (1.) They provide a kind of evidence about how people (a certain type of people, with certain habits of reflection and standards of inquiry) happen to conceptualize things; and (2.) (more interestingly, to me) they provide recommendations about how we should conceptualize things. If construed in this way, such recommendations should be evaluated pragmatically, in terms of their usefulness in organizing our way of thinking about matters of concern to us.

Getting clear about the pragmatic standard of evaluation can, I think, help us sort through and evaluate competing "metaphysical" claims about personal identity, causation, and the like. So, for example, in my work on belief, which could easily be misconstrued as metaphysics, I advocate a broad dispositional approach as giving us the best tool for talking about and characterizing the kinds of case that interest me most in believing -- what I call the "in-between" cases of gradual learning and forgetting, self-deception, confusion, ambivalence, irrationality, and failure to think things through.

Friday, October 13, 2006

Intuitions in the Sandbox

A graduate student recently reminded me of an essay I'd written in 1998 with Alison Gopnik, a developmental psychologist at U.C. Berkeley. He appears to be just about the only person who liked it. But maybe I'm wrong about that -- maybe he was merely being polite!

The essay begins with a dialogue pertinent to the relationship between empirical psychology and philosophical intuition, which is an increasingly hot topic these days. The dialogue is, I think, amusing, provocative, and self-standing (it was entirely written by Alison), and for some reason I feel like inflicting it on readers of this blog.

To understand the dialogue it's necessary to know that developmental psychologists now think that children progress from, at age 3, not realizing that beliefs can be false to knowing, by age 4, that beliefs can be false. (Surprising as this conclusion may be, it is now orthodoxy in developmental psychology and is supported by hundreds of studies.)

Here's the dialogue, conceived of as between two three-year-olds in a sandbox, Phil and Psyche.

Psyche: You know, Phil, something’s been bothering me. You know how beliefs are always true? Well, an odd thing happened the other day. My big brother saw my mom put a piece of chocolate in the cupboard and then left to play Nintendo, and while he was away my mom took the chocolate out of the cupboard and put it in the drawer. When my brother came back, he went straight to the cupboard and said loudly, several times, that he was sure the chocolate was in there. But of course, it was really in the drawer. So I have this idea: Could it be that he had a belief that was just like ordinary beliefs, except false?

Phil: My dear Psyche, as I have so often pointed out to you before, your confusion is due to a category mistake. You are treating the truth of beliefs as if it were an empirical matter. Actually, it is simply a conceptual fact about beliefs that they are always true. Indeed, we might say that it is criterial for a belief to be a belief that it be true. Look, consult your intuitions, consult the intuitions of anyone else in the sandbox. All of us agree, immediately, intuitively, without inference or theory, that all beliefs are true. Ask yourself what a belief is. What else could it be but a true representation of events?

Psyche: But couldn’t we all be wrong? Couldn’t there be an alternative way of conceiving of belief that none of us happen to subscribe to now?

Phil: Another category mistake. When I say that beliefs are necessarily true, this isn’t a mere contingent psychological fact about the concepts of all us three-year-olds. It’s an eternal, platonic, philosophical fact about the nature of belief and truth.

Psyche: Well, what about my brother?

Phil: He is probably participating in an alternative form of life. I always thought he was kind of weird.

Psyche: But you see, it isn’t just him. It even seems to be me. Since the chocolate incident, wherever I look, I see evidence that beliefs may be false. Why just yesterday, a woman came into the daycare center with a candy box and I said “Candy!” and then she opened the box and there were pencils inside. I know intuitively that I must have thought there were pencils in the box all along, and of course that’s what I told her when she asked me. But then why did I say “Candy!”? Am I turning into a madwoman?

Phil: (gravely) I fear you may have a worse affliction. I fear you are turning into a cognitive psychologist. As I was saying just the other day, “It would be dangerous to deny from a philosophical armchair that cognitive psychology is an intellectually respectable discipline, provided, of course, it stays within proper bounds.” [Apparently a copy of John McDowell's 1994 book, Mind and World found its way onto the picturebook shelf.] This is what happens when those bounds are breached.

Psyche: But surely there must be some explanation?

Phil: Philosophy does not provide explanations, only diagnoses. (Intones) Of that we cannot speak, thereof we must be silent....

The remainder of the essay was conceived of simply as the exposition of the main idea of this dialogue: that philosophers who think that their intuitions reveal necessary metaphysical truths about the world are as confused as Phil. Our intuitions derive from empirical sources (or else, no better, were written into us innately by natural selection), and we should hold them up to revision as new empirical evidence comes in. I don't think Quine or Carnap would have disagreed....

You can find the entire essay here.

Wednesday, October 11, 2006

The Nisbett-Wilson Myth

It seems like every time I present my work on our poor knowledge of our own conscious experience (e.g., here, here, and here) before a large group, someone says, "But didn't Nisbett and Wilson show that back in the '70s?"

Richard Nisbett's and Timothy Wilson's 1977 essay, "Telling More Than We Can Know" is one of the most-cited papers in the history of psychology. Looking at cases in which, for example, people seem to show amazing ignorance of the bases of their preference for a particular pair of socks, Nisbett and Wilson conclude that "people may have little ability to report accurately on their cognitive processes" (p. 246). In the psychological and philosophical lore, this conclusion has been amplified into a general repudiation of our knowledge of our own minds.

Yet Nisbett and Wilson themselves are quite clear that they do not intend their thesis that way. In a section titled "Confusion Between Content and Process" they draw a sharp distinction between "cognitive processes" (roughly, the causal process underlying and driving our judgments, decisions, emotions, and sensations) and mental "content" including those judgments, decisions, emotions, and sensations themselves. They explicitly limit their skepticism to the former. Regarding the latter they say that such "private facts... can be known with near certainty" (p. 255). In other words, despite the mythology, Nisbett and Wilson are not skeptics about introspective report of conscious experiences. They are skeptics about introspective knowledge of the causes of those experiences. They are skeptical about our knowledge of why we selected a particular brand of socks, not about the fact that we do judge them to be superior or about our sensory experience as we select them.

Wilson continues to be explicit about this. In his recent (2002) book Strangers to Ourselves, he argues that we have poor knowledge of "the adaptive unconscious". He distinguishes this from consciousness and restricts his skepticism to the former (e.g., p. 17-18).

So enough sloppy, second-hand references to Nisbett and Wilson! If you want to cite psychologists who truly argue for the view that we often go wrong in describing our stream of conscious experience, look neither to them, nor indeed to the behaviorists (who were often suspicious of the very idea that the phrase "stream of conscious experience" referred to anything worth exploring at all), but rather to early 20th-century introspective psychologists like E.B. Titchener and G.E. Mueller!

Monday, October 09, 2006

Ephemeral Belief?

I've often defended the view that we should think of beliefs (as opposed to temporary judgments) as involving a broad array of stable dispositions -- dispositions to act, react, think, and feel in ways appropriate to the belief, across a spectrum of situations. But here's an example that troubles me.

I'm at a party. Someone introduces himself -- "Jerry". I shake his hand and say "Hi, Jerry!" Five seconds later I cannot tell you his name. (Admit it, this happens to you too!)

Now in this case, it seems both that I believe (however temporarily) that his name is Jerry and that I don't form a broad array of stable dispositions pertinent to that belief. If so, of course, believing can't be a matter of having a broad array of stable dispositions -- contra me!

I see two responses. The first is to reject the intuition that I believe his name is Jerry (for those five seconds). (This is what Krista Lawlor said when I pressed her on the issue during her visit last week.) Maybe it's a weird, marginal case of the sort our intuitions really weren't meant to handle. We can, of course, (as philosophers) define the technical term "belief" however we want; we needn't hew to intuition in every case; and there's something valuable in reserving the term "belief" only for states in which one has a broad array of stable dispositions.

The second response is to reject stability: Maybe only breadth is necessary. For five seconds, my dispositions are all right, perhaps, across the board -- I would say "Jerry" to myself when thinking of him, I'd assume someone who said that name was talking about him, I'd greet him with that name, I'd feel surprised if someone called him "Larry", etc. -- and that's enough for belief. The broad array of dispositions changes quickly enough; it just won't stay put.

I'm not entirely happy with either answer.

Friday, October 06, 2006

Unqualified Judgment Without Belief?

Krista Lawlor gave a very interesting talk here at UC Riverside Wednesday, which has me thinking again about belief. (Admittedly, getting me thinking about belief isn't a very hard thing to do!)

It seemed implicit in her paper, and it came out more explicitly in discussion afterward, that Lawlor regards believing as a matter of having a broad, stable array of dispositions -- i.e., having general patterns of thought, reaction, planning, implicit assumption, etc., in conformity with the content of the belief -- as opposed to belief being merely a matter of having some thought or judgment or opinion occurring to one in a moment; and indeed the two phenomena often come apart. (For my endorsement of this view, see this post and this essay and this essay too.)

To use one of Lawlor's examples, someone raised in a family committed to the reality of homeopathy might as a result of taking a chemistry class become convinced that homeopathy doesn't work, in the sense of reaching a sincere judgment like this: "Something so diluted that not even a single molecule of the supposedly curative substance remains must be inert!" And yet that person might not yet be ready to throw his homeopathic remedies in the trash, might feel uncomfortable not taking those remedies in certain cases, might in unguarded moments find himself thinking "so-and-so needs such-and-such a remedy", etc. There's a certain amount of cognitive inertia between what we sincerely judge in the moment and what we enduringly, dispositionally believe.

Or here's an example from my essay linked to above: Someone might sincerely and unhesitatingly and unqualifiedly endorse the proposition that all the races are intellectually equal, yet be so biased in her implicit reactions and background assumptions about people that we wouldn't want to say that she really should be described as fully, dispositionally believing that.

No one is more on board with Lawlor on such matters than I, yet my colleagues were not all entirely convinced!

Here's the most common objection I heard, in the comments and in discussion with Lawlor before and afterward: If your dispositions don't fall entirely into line with your judgment, then either your judgment must not be wholly unqualified, or you must be the victim of some sort of weird irrationality.

Now I'm not sure exactly what we ought to call "rational", but in some cases at least I think it makes considerable sense to have a sort of dispositional inertia. We don't want to cast aside long-held beliefs that ramify through our lives with the advent of a single unqualified judgment. Suppose the homeopathy case were, instead, a case of someone being converted to libertarianism by Ayn Rand. Fortunately, such conversions often fade quickly, fail to ramify, are conversions only of temporary judgment, not of the broad array of one's dispositions. (Apologies to libertarians!) So I hesitate to think of the divergence between unqualified judgment and broad, dispositional belief as simply irrational.

No?

Wednesday, October 04, 2006

Do Ethicists Steal More Books?

When I was young, my father and I used to joke about stealing Bibles, or breaking into a Christian store and making off with a load of crucifixes. The irony appealed to us, on the assumption that an important part of wanting a Bible or a crucifix is endorsing a set of values that includes the repudiation of theft. There's something likewise ironic, it seems, in stealing an ethics text (or should I say deliciously wicked?).

One might expect Bibles and books extolling the life of virtue to be relatively less stolen than similarly popular books with no moral message. On the other hand, given my sense that ethicists, on the whole, behave no better than the rest of us, maybe we shouldn't expect a difference. In casual conversation, I've sometimes heard it remarked that ethics books seem, indeed, more likely to be missing from libraries than books in other areas of philosophy -- which would comport nicely with the sense some people have of the particular viciousness of ethics professors. However, the impression that ethics books are more likely to be stolen might derive from their simply being more popular, or it might be a saliency effect -- perhaps we're more likely to be struck by and remember a theft of Kant's Groundwork of the Metaphysics of Morals than a theft of Kripke's Naming and Necessity.

Here at the University of California, we have access to a system called Melvyl, which gives circulation information on all the books in the University of California system. The main campus libraries at Berkeley, Irvine, Los Angeles, Riverside, San Diego, and Santa Cruz also give due date information, including for overdue books. So we can inquire: Are ethics books more or less likely to be overdue or missing from these UC campuses than other philosophy books?

I looked at the book reviews in Philosophical Review from 1994-2001. I included in my survey books that were clearly in ethics (excluding philosophy of action, political philosophy on proper governance [rather than private virtue], and other borderline cases). As a comparison class, I also looked at books that were clearly outside of ethics if the review started on a page number divisible by four. This gave me 76 ethics books and 67 non-ethics books. Almost all these texts were held by at least 5 of the 6 campuses; some texts had multiple copies at a single campus.

The ethics books were listed as off the shelf (checked out or missing) in 73 cases (between the 6 campuses) out of 452 held copies, for an off-shelf rate of 16.1%. Of these, 8 were overdue or missing (5 missing or lost; 1 more than 1 year overdue; 2 less than one year overdue), for a 1.8% delinquency rate per copy. 11.0% of the off-shelf books were delinquent.

The non-ethics books were listed as off the shelf in 66 cases out of 379 held copies, for an off-shelf rate of 17.4%. Of these, 7 were overdue or missing (actually, all 7 were simply missing, none overdue), for a 1.8% delinquency rate per copy. 10.6% of the off-shelf books were delinquent.

These numbers are too small to draw any definite conclusions, but they do seem to suggest that, among philosophical books prominent enough to be reviewed in Philosophical Review, ethics books are checked out and stolen at very nearly the same rate as non-ethics books -- neither more nor less.
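For anyone who wants to check my arithmetic, the reported rates follow directly from the counts above; here is a quick sketch of the calculation:

```python
# Off-shelf and delinquency rates from the library survey counts above.

def rates(off_shelf, held, delinquent):
    """Return (off-shelf rate, delinquency rate per held copy,
    delinquent share of off-shelf books), all as percentages."""
    return (100 * off_shelf / held,
            100 * delinquent / held,
            100 * delinquent / off_shelf)

ethics = rates(off_shelf=73, held=452, delinquent=8)
non_ethics = rates(off_shelf=66, held=379, delinquent=7)

print("ethics:     %.1f%% off shelf, %.1f%% delinquent/copy, %.1f%% of off-shelf" % ethics)
print("non-ethics: %.1f%% off shelf, %.1f%% delinquent/copy, %.1f%% of off-shelf" % non_ethics)
```

(Depending on rounding, the ethics off-shelf rate comes out at 16.1-16.2%; the two classes of books are in any case nearly indistinguishable.)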

The University of California has a pretty good system for tracking down overdue books. I wonder to what extent the low delinquency rates are due to good enforcement rather than the conscientiousness of the patrons. In this connection, it would be interesting to do a study of libraries that depend primarily on the honor of the patrons. The UCR Philosophy Department Library is an example of the latter (as opposed to the main library, Rivera, whose holdings are included in Melvyl, described above); but unfortunately there's no systematic record of its holdings.

If any readers of this blog have access to the circulation records of a consortium of libraries, or have access to information from which they could infer delinquency rates in libraries that depend mostly on the honor of the patrons, and are interested in exploring this issue further, I'd love to hear from you!

Monday, October 02, 2006

Paranormal Phenomena and Substance Dualism

If you're going to be a dualist -- that is, if you differentiate the mental from the physical -- I think you ought to be a good old-fashioned substance dualist. You ought, in other words, to embrace the idea that there are distinct mental and material substances. The more fashionable form of dualism in analytic philosophy these days, "property dualism", which distinguishes mental from physical properties, as conceptually distinct, while denying that there is any distinctly mental substance, seems to me too far removed from the questions that we should care about in the dualism-materialism debate -- questions such as whether we have immaterial souls that could persist into an afterlife (property dualism, like materialism, says no), and whether our thoughts depend solely upon physical goings-on (property dualism, like materialism, says yes, for all practical purposes). I've not yet been convinced that I should care much about what would be the case in "logically possible worlds" where the laws of physics and psychology are suspended -- the sort of thing property dualists such as Chalmers want us to think about. (But if you are going to think about such things, Chalmers is a model of clarity and intelligence.)

The truth of substance dualism is empirically explorable, as the debate between materialism and property dualism (with its focus on the merely logically or "metaphysically" or "conceptually" possible) appears not to be. Of central relevance to the question, of course, is the dependency of our mental processes on how things stand in the material world -- on our brains in particular. The more it seems that mental life depends on and covaries with brain activity, the worse for substance dualism. With the advance of neuroscience, substance dualism isn't looking so good, I'd say.

However, there is one class of evidence that philosophers rarely explore and which, if it were to pan out, would spell serious trouble for materialism; that is "paranormal" or "psi" phenomena -- especially direct mind-to-mind communication (without a physical medium) and out-of-body experiences.

The evidence for paranormal phenomena is mixed. It is not as decisively negative as most contemporary academics tend to assume. The work of Daryl Bem (on direct mind-to-mind communication) and Pim Van Lommel (on out-of-body experiences in near-death situations) especially comes to mind.

Bem's classic "Ganzfeld" experiments (e.g. Bem & Honorton 1994 in Psych Bulletin) require a "sender" and a "receiver" to be sequestered in separate compartments; the "sender" is given a randomly selected image to concentrate on and to try to send; the "receiver" is to describe her thoughts and images aloud for 30 minutes. Finally the receiver is presented with four pictures (one the target) and asked to rate the similarity of each to her mentation during the 30 minute period. Results are generally above chance.
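With four pictures and one target, chance is 25%, so "above chance" can be given a precise sense: how improbable a given hit count would be if receivers were merely guessing. A minimal sketch of such a check (this is my own illustration, not Bem's statistical analysis; the hit and session counts are meant only to be roughly in the range reported in the Ganzfeld literature):

```python
from math import comb

def binomial_tail(hits, trials, p=0.25):
    """One-sided probability of at least `hits` successes in `trials`
    independent trials, each with success probability p (exact binomial)."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(hits, trials + 1))

# Illustrative figures: 106 hits in 329 sessions (about 32%) vs. 25% chance.
print("one-sided p =", binomial_tail(106, 329))
```

A hit rate of about 32% over a few hundred sessions yields a p-value on the order of a few thousandths, which is why these results cannot simply be waved away as sampling noise.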

Bem, an eminent Cornell psychologist, knows how to design a study. Reading through his work, I generally think to myself, "If this were about anything else, I'd say this was a perfectly designed and utterly convincing study. He has controlled for everything." Carl Sagan was surely right in saying extraordinary claims require extraordinary evidence; but how extraordinary, exactly, is sufficient? Do we need to consider, for example, that Bem might simply be lying, or have been systematically deceived by unscrupulous collaborators and subjects?

Pim Van Lommel, similarly, has published work in the Lancet and elsewhere, work done in accord with typical scientific standards and suggestive of the reality and frequency of near-death experiences. Van Lommel has evidence that patients during cardiac arrest, with eyes closed and severely compromised brain function, were in some cases able to acquire otherwise unavailable information about happenings in the outside world (e.g., detailed descriptions of what the doctor did with the patient's dentures) reported by the patient as having been seen from above. Van Lommel (personal communication) has even tried prospective studies of this latter sort of phenomenon, posting notes high in rooms where patients near cardiac death are being treated, notes facing the ceiling; but unfortunately, he reports, patients reporting near-death out-of-body experiences seem to be much more focused on their bodies and their religious experiences than on the contents of such notes!

I'm not saying we should accept Bem and Van Lommel; but I do think we should take them seriously. This is where I'd like to see the action in debates about dualism, rather than on questions such as the conceivability (or not) of various possibilities (e.g., "zombies"), if one suspends the laws of physics!

Wednesday, September 27, 2006

Brief Hiatus

Regular visitors to The Splintered Mind will know I usually post on a MWF schedule. No post this Friday, though: I'm heading up to Oregon tomorrow for my sister's wedding! I should be back in the saddle on Monday.

Purkinje on Visual Experience with One's Eyes Closed

Johann Purkinje [Jan Purkyne] was a leading figure in early 19th century physiology, and his descriptions of visual experience were discussed extensively by late 19th century introspective psychologists. However, I haven't been able to find English translations of his work. In connection with my recent thoughts on our visual experience with our eyes closed, I was particularly intrigued by the respectful citations of his work on this topic by introspective psychologists such as Hermann Helmholtz and E.B. Titchener. So I decided to struggle through some Purkinje in German. In the Underblog, I've posted amateurish translations of a few passages.

A few points of interest about the translated passages:

(1.) Purkinje claims to see a chessboard pattern of squares when his eyes are closed and he's facing toward the sun (as well as in many other conditions). He claims that this experience "was noticed by most individuals with whom I made the experiments" -- so much so that he thinks it must derive from general conditions in the human organism. Yet I don't seem to have such an experience in those conditions; nor has anyone in any condition, whom I've asked about visual experience with their eyes closed, reported such a chessboard pattern to me.

(2.) He makes an interesting point about the difficulty in finding the borderpoint of the visual field with the eyes closed (cf. my post on the limits of the visual field with respect to nasal-side phosphenes).

(3.) The portrayals of afterimages (with the exception of what is now called the "Purkinje afterimage") are two-dimensional -- a point about which he is fairly explicit in Section XXVIII (cf. my post on whether images are flat).

Monday, September 25, 2006

The New Philosophers' Carnival is

here!

Can You Touch Your Jaw and Feel It in Your Hand?

"Phantom limb" phenomena have been well-known since at least the time of Descartes. People with missing limbs will report feeling sensations in the missing areas. In 1998, V.S. Ramachandran famously showed that people with missing arms can sometimes be induced to feel phantom sensations (as though in their missing hands) if gently stroked on the face. (See this article, for example, which is rich with interesting descriptions.) The reason for this, evidently, is that as the nerves from the phantom limb area provide no useful input, other nearby regions of the brain begin to recruit neurons from the areas formerly dedicated to input from the phantom limb; and the primary cortical region associated with tactile input from the face is adjacent to that associated with tactile input from the hand. Apparently, plasticity in input can sometimes outrun plasticity in the felt sensation, so that the relevant neurons that used to respond to stimulus from the hand and trigger (appropriately) a sensation subjectively located in the hand can come to respond to stimulus from the face while still triggering (now inappropriately) a sensation subjectively located in the hand.

Recent research -- for example by Peter Hickmott here at UC Riverside -- has shown that in animals whose nerves have been cut, one can start seeing neural plasticity within minutes. Cortical neurons near the border between forepaw and jaw which formerly acted in synchrony with other forepaw neurons start to act in synchrony with the jaw neurons.

This leads me to think of the following experiment. If we somehow induced in people cortical input from the hand similar to that one would get from denervation of the hand (by sensory deprivation? by anaesthesia?), and then one gently stroked the jaw, a la Ramachandran, might the person report a sensation in the hand?

If this has been done, I haven't heard of it.

Friday, September 22, 2006

Are Images Flat?

Okay, here’s another post about the spatial properties (or not) of visual images. I seem to be on a kick!

Pete Mandik reminded me of this issue when he said something in his comments on my last post that seemed (perhaps only seemed?) to imply that he regarded images as generally two-dimensional.

We certainly talk, sometimes, as though they are. Most tellingly, I think, we call images “pictures” (in the mind’s eye), not (say) “sculptures”. Stephen Kosslyn, in his seminal 1980 book on imagery, describes the imagery space as “roughly circular” and compares its horizontal and vertical dimensions. He does not (that I recall) discuss its depth. Likewise, he says that position in the imagery matrix can be indicated by a pair of co-ordinates (polar co-ordinates r and theta) – not, as would be necessary for a three-dimensional imagery space, a triplet of co-ordinates (such as the Cartesian x,y,z or the polar r, theta, and phi). His sample portrayals of images never indicate depth.

In my posts here and here and my article on the question of whether things look flat, I suggest that our tendency to think of circular objects viewed obliquely as “looking elliptical” and distant objects as “looking small” derives primarily from an over-analogizing of visual experience to flat media such as paintings and pictures. I won’t rehearse those arguments again; but if that’s right, then perhaps our tendency (at least in some of us) to think of images as flat derives from a similar over-analogizing and should be treated with similar skepticism.

One can accept that images are often (or even typically?) three-dimensional without going so far as to say that we can imagine something from multiple perspectives at once -- just as we can say that our visual representations and visual experience are fundamentally and ineliminably marked out in three-dimensional space (even monocularly) without saying that we can see from more than one angle at once.

So I’m wary of our too easily supposing that our imagery appears as if on a flat plane. But maybe, nonetheless, it does. I wonder about readers' introspective sense of this; and I’d be interested to hear also if you have reflections on neuroscientific or behavioral tests that might shed light on the question.

Wednesday, September 20, 2006

Can People Imagine Things from Multiple Angles at Once?

Here's another question about imagery experience -- related to Monday's post about whether images have subjective location. Can people (at least some people, in some circumstances) imagine things from multiple angles at once? Francis Galton, in his seminal study of imagery experience (1880, 1907), says that some of the best imagers report being able to do this. Jorge Luis Borges describes a similar phenomenon in a fictional character obsessed with a coin he calls a "Zahir":

There was a time when I could visualize the obverse, and then the reverse. Now I see them simultaneously. This is not as though the Zahir were crystal, because it is not a matter of one face being superimposed upon another; rather, it is as though my eyesight were spherical, with the Zahir in the center.

Now is this really possible? I can't claim ever to have had such an imagery experience myself; but that doesn't mean others can't do it. On the other hand, I don't think we should simply take people at their word when they make unusual claims about their experiential lives.

Here are two reasons one might think multiple-angle imagery is impossible:

(1.) If images are located in subjective space, as some people report -- say, near your forehead -- then it seems natural to suppose (if not strictly implied) that we have a single visual angle on those images, presumably the angle from the center of one's subjective self to the image in question. (Now that I see these words in print, though, I must say there seems to me something a little fishy in them!)

(2.) If images are instantiated in the brain (or caused by the brain) in accord with some topography of either subjective or objective space (e.g., the right side of the image is created by this location in the brain, the left side by this other location), then that topography may well require a single visual angle or point of view (e.g., in "circular vision" right and left might not be well defined).

I don't mean to say that either of these points is decisive -- not by a long shot. I wonder: Have any readers of this blog had experiences they would describe as imagery from more than one angle simultaneously?

Monday, September 18, 2006

Are Images in Subjective Space?

I've interviewed a number of people about their imagery. Some I've asked to form images as we speak; others I've had wear random beepers during their normal daily activity, and they've reported having imagery at the "sampled" moments when the beep goes off. Interestingly, some people report that their images have a spatial location -- typically near their foreheads or some small distance directly in front of their foreheads (up to a couple feet) -- while others deny that their images are located in subjective space in this way: Neither in their heads, nor in front of their heads, nor anywhere else.

Now I wonder: Are these differing reports to be trusted? Do some people experience their imagery as located in subjective space, while others do not? Or is one or the other, or both, of the groups confused in some way?

Although I'm not especially optimistic about definitively answering that question, here's at least one thought about how, in principle, it might be testable. Maybe imagery interacts to some extent with vision. If you imagine something in some particular region of space you might be worse (or better) at seeing external objects in that region. If so, then maybe people whose imagery is subjectively located will have enhanced or diminished performance in perceptual tasks in regions of space associated with their imagery, while they are maintaining an image, compared to regions of space outside their usual imagery field; and not so for those who report imagery as having no subjective location.

Friday, September 15, 2006

Philosophy of Mind and Science Works in Progress

Pete Mandik's Brain Hammer will be hosting a "works in progress" series in Philosophy of Mind and Science. The first article is already up. Check it out here!

What is Low Self-Esteem? (by guest blogger Brad Cokelet)

We commonly explain habitual patterns of action by appeal to people’s degree of self-esteem. For example, if your friend, call her Jane, keeps dating people who treat her poorly and someone asks you why, you might say she has low self-esteem. But what does it mean to say that someone has a low level of esteem for his or her self?

Taken literally, self-esteem seems to suggest believing that one's character and/or achievements are praiseworthy – that one has done well and, perhaps, can take credit for having done so. But Jane might be very successful in a number of departments of her life – she is doing well at her career, has many friends, material security, etc. If asked, she might say that all of these things are accomplishments that she can take credit for. But she can believe that and still suffer from “low self-esteem” and date people who are bad for her.

So what is self-esteem if it is not belief that one has done things that deserve praise?

One possibility is that esteem is like love – you can believe you have reason to love someone, but not actually feel it, and you can believe you have reason to esteem yourself, but not feel it. But even if self-esteem does involve feeling in addition to belief, I doubt that this solves our problem, because counterexamples seem to exist. For example: Jane believes she has reason to think well of herself, and feels positive when she thinks of herself and her accomplishments, but she still engages in the imprudent behavior – entering into relationships that are bad for her.

A second possibility is that the term ‘self-esteem’ is misleading; it is respect, not esteem, that Jane is missing. To make this suggestion work we need an explanation of what self-respect – respect for one’s self – amounts to. Although it is initially appealing, I have doubts about this too. It is plausible to assume that a failure to respect X is a failure to have the attitude towards X that one is obligated to have, and that talk of obligation implies that the person in question is able to do what they are obligated to do at will (by choice); and these assumptions imply that Jane could choose to respect herself (at will). But this casts doubt on the claim that her imprudence is explained by a failure of self-respect: part of what is tragic about people like Jane, to whom we attribute low self-esteem, is that they often cannot solve their problem simply by choosing to (i.e., at will).

A third and final possibility is that what we have in mind is Jane’s lack of self-concern. On this view, Jane would likely refrain from acting so imprudently if she cared more about her own well-being. If so, we should stop talking about low self-esteem and talk about a lack of self-concern instead.

I am myself tempted to take that final approach and to stop talking of low self-esteem. Is this right? And do I need to fatten my diet of examples before drawing this general conclusion? Are there other cases that allow us to make better sense of talk of low self-esteem?

Wednesday, September 13, 2006

Black and Black

When I close my eyes, and I'm not looking toward a bright light, I'm tempted to say I see black -- or, more accurately, an assortment of colors (afterimages?) on a largely black background. (See this post for a broader discussion of what we experience when our eyes are closed.)

But here's something that shakes my confidence: When I put my hands over my eyes (without pressing), it seems to get considerably darker. When I then remove my hands, I'm inclined to think (a.) that I'm having pretty much the same visual experience as when I originally closed my eyes, and (b.) that experience is one of middle gray, or -- since that doesn't seem quite right, either -- at least something too bright to be black.

Now it seems to me that either I was wrong in my first judgment that I was seeing (largely) black, or I'm wrong in (a) or (b).

The fact that the experience gets blacker with the hands over the eyes does not, it seems to me, compel the conclusion that it wasn't black in the first instance. My jeans are black, but my desk is definitely quite a bit blacker. Does this necessarily imply that my jeans are really only gray? And my desk is shiny. It reflects the beige floor tiles, in places, in a way that really is rather bright. That doesn't seem incompatible with its also being a perceivably black object; but is my visual experience as I look at that patch of the black desk also a visual experience of blackness? -- not just an experience of a black object, but involving "black(-ish) phenomenology" itself? Hm! I worry that there's something objectionably simplistic in that question, though I can't quite put my finger on it.

Coming back to my visual experience with my eyes closed, some of these same questions and confusions arise.

But -- you might have thought -- what could be simpler than recognizing an experience of blackness? Aren't judgments about one's current visual phenomenology of a field of color exactly the kind of thing that many philosophers have thought it's impossible to be mistaken about?

Monday, September 11, 2006

What’s Wrong With Judging Others? - Part I (by guest blogger Brad Cokelet)

Although I am not a Theist, I have always found Jesus’s sayings
thought-provoking. Consider his take on judging others:

“Judge not, that ye be not judged. For with what judgment you judge,
you will be judged; and with the measure you use, it will be measured
back to you. And why do you look at the speck in your brother's eye,
but do not consider the plank in your own eye? Or how can you say to
your brother, 'Let me remove the speck from your eye'; and look, a
plank is in your own eye? Hypocrite!”

Jesus is clearly denouncing our tendency to judge others. But what
is his argument to that effect? And how should we understand the
warning about being judged ourselves?

One interpretation is as follows: if we judge another person against
some standard, S, and offer to correct them or help them improve when
they are judged to be lacking, then we will be judged against S too
(perhaps at the last judgment).

But even if it is true, does it give us reason to refrain from
judging others? I do not think so.

In some cases it clearly does not: I judge students in my logic class
- I measure their performance against a standard - and try to “remove
the specks” from their thinking, and I do this knowing that I still
make errors that are just like theirs. But, the possibility of
having my own thinking judged by the same standard is no reason to
refrain from my practice; in fact, I hope to make this possibility
actual by helping my students understand standards of sound
reasoning! I judge my students in part to help them develop their
own capacities to judge.

Analogous considerations seem to apply in the ethical realm. Ben
Franklin reports in his Autobiography that a Quaker friend told him he
(Franklin) was commonly thought proud, and that led Franklin to add
humility to the list of virtues he was trying to keep in mind and
develop. I do not see why the fact that the friend's own level of
pride (or humility) would be judged would count against his judging
Franklin. And I do not think the appropriateness of his judging
Franklin depends on whether he was himself proud or even had many bad
traits. After all, people who are working to overcome serious
problems of their own are often better at noticing more subtle
shortcomings in others; having a plank in your own eye can make you
more attentive to the speck in your brother’s eye, so why not tell him
about it and help him get it out of there?

Consequently, on this first reading (a second one will be considered
in the next post), I can’t agree with what Jesus says: I think we
should go ahead and judge each other and say to each other, ‘Let me
remove the speck from your eye’ even if -- maybe especially if -- we
have a plank in our own eye. By judging each other, we can hope to
become a bit more self-aware and clear-sighted.

Friday, September 08, 2006

The Troublesome Appeal of Eugenics

One thing I think I'll always remember from Robert Jay Lifton's excellent book, The Nazi Doctors -- though not the only thing -- is the ease with which I found myself able to sympathize with certain aspects of the Nazi mindset, the "Nazi biomedical vision" as Lifton calls it.

The Nazis (and some others!) gave eugenics a bad name, and few openly embrace eugenics today. Yet eugenics had many eminent supporters in the late 19th and early 20th centuries, and it's easy to see how people could be attracted to the idea of humanity taking control over its genetic pool and implementing eugenic measures designed to ensure that future generations are healthier, more intelligent, and of better moral character.

The view that racial differences are genetically important and that the races differ significantly in their intellectual and moral capacities has a similar history, involving some of the same figures. Like eugenics, the position had numerous eminent adherents in the 19th and early 20th centuries, only to become a political hot potato in the second half of the 20th century.

Both positions are of course abhorrent; let's take this as common ground. But I don't think they are obviously abhorrent. And it is that last fact to which I want to call your attention. In the current political climate, mainstream and liberal thinkers reflexively dismiss these views without, perhaps, appreciating their potential attractiveness to reasonable people in the right frame of mind and the right cultural context -- frames of mind and cultural contexts not too different from our own.

And of course, if you combine these two opinions (and certain views about the division and character of the races), one can come startlingly close to seeing merit in Nazi policy. In a Malthusian world, one might think it a moral duty to open up Lebensraum ("living space", i.e., new territory) for the genetically superior; given limited resources, one might think it best to trim away poor, and potentially genetically corrupting, stock. Evil can acquire the look of a moral imperative. If the heart rebels (as the ancient Chinese philosopher Mencius, one of my favorite moral psychologists, thinks it will), one might interpret that rebellion as misplaced compassion -- or at least compassion that should not be acted on, like the compassion that judges must sometimes set aside in delivering appropriately hard sentences.

That evil can disguise itself as reason is of course not news; but I think it salutary to remind ourselves sometimes how easily it can do so. Our ordinary, lazy habits of thinking tend to exaggerate the distance between ourselves and those we condemn.

Lifton writes:

Starvation as a method of killing was a logical extension of the frequent imagery of mental patients as "useless eaters." As a passive means of death, it was one more element of general neglect. In many places, mentally ill patients had already been fed insufficiently; and the idea of not nourishing them was "in the air" (p. 98).

and (you may wish to skip the following quote if you are easily upset):
I remember the gist of the following general remarks by Pfannmueller: These creatures (he meant the children) naturally represent for me as a National Socialist only a burden for the healthy body of the Volk. We do not kill (he could have here used a euphemistic expression for this word kill) with poison, injections, etc.; then the foreign press and certain gentlemen in Switzerland would only have new inflammatory material. No our method is much simpler and more natural, as you see. With these words, he pulled, with the help of a ... nurse, a child from its little bed. While he then exhibited the child like a dead rabbit, he asserted with a knowing expression and a cynical grin: For this one it will take two to three more days. The picture of this fat, grinning man, in his fleshy hand the wimpering skeleton, surrounded by other starving children, is still vivid in my mind (p. 62).

There are different kinds of evil -- evil in passion, evil in neglect -- but it is this cold evil, rigorously rationalized, whose shadowy potential in myself frightens me most.

Wednesday, September 06, 2006

Eastern Intuitions about Framing the Innocent (by guest blogger Brad Cokelet)

Consider this stock problem case for Utilitarians: if a judge frames an innocent person and has him killed in order to placate a violent mob, he will produce better overall results than if he refuses to do so.

Assuming the Utilitarian thinks we should choose to do whatever maximizes utility, he has to bite the bullet and condone the framing, which is a blatant injustice and therefore wrong.

To avoid condoning framing the innocent, many Utilitarians adopt forms of indirect Utilitarianism, according to which utility will not be maximized if people consciously aim to maximize it; they claim that utility will be maximized when people, including judges, don’t (directly) aim to bring about that result. We might defend this shift by appeal to a general methodological principle: when an ethical theory conflicts with an intuition that all reasonable people share, the theory needs to yield to the intuition. On this view, then, all reasonable people share the intuition that framing the innocent is an injustice and therefore wrong. But is that true?

In a forthcoming paper (available here), John Doris and Alexandra Plakias raise doubts about that very claim by citing empirical evidence that the anti-framing intuition is a parochial artifact of Western culture. More specifically, they appeal to a study (which is forthcoming) that contrasts the intuitions of “Americans of predominantly European descent and Chinese living in the People’s Republic of China,” and suggests that people in China are more likely to have pro-framing intuitions. Doris and Plakias suggest that the variability of intuitions (if it exists) is evidence for a surprising conclusion: the intuition that framing an innocent is unjust and wrong is something about which reasonable people can disagree.

Now even assuming that the empirical claim about cultural variability is true, one might resist the suggestion about reasonable disagreement on the grounds that the relevant Easterners -- Chinese living in the PRC -- have distorted intuitions. The most promising argument to this effect is that some background theory or value conception has distorted the intuitions. One possibility that Doris and Plakias mention is the collectivist conception of self that some attribute to Easterners. Other possibilities include Marxist theories and more traditional value conceptions (e.g. Buddhism, Taoism, Confucianism).

One thing to do, I suppose, would be to run the study in other Eastern countries that do not have the history of Marxist rule or the same traditional value conceptions. But it is also important to ask whether any of these background theories or value conceptions would actually (purport to) support, or have been thought to support, the pro-framing intuition. One question here is about whether a conception claims to support the intuition that framing and killing an innocent is not wrong; the other is about whether people who endorse the conception have in fact avowed the intuition. For example, some Japanese Zen Buddhists endorsed Japanese militarism, but that does not show that a Zen Buddhist conception would support militarism or war.

So I am wondering:

(1) Are there Chinese philosophers who explicitly discuss cases or issues like this and come down one way or the other?

(2) Would the Communist ideology promulgated in China support framing an innocent to placate a mob? Has it been taken that way in China or elsewhere?

(3) What is a collectivist conception of self and would it support pro-framing intuitions?

Monday, September 04, 2006

Philosopher's Carnival #35 is...

here! Thanks, Steve!

Do You Mostly See Double?

Raise a finger to about four inches in front of your nose. Focus on some object in the distance, then -- without changing your focus -- shift your attention back to your finger. Does it seem doubled? Most people claim to be able to experience this, at least after a few tries.

If you then focus carefully on your finger (bringing it out maybe to six or eight inches, depending on how close in you're able to bring your focus), do the objects in the far distance seem unfocused, blurry? Even doubled? Reports of doubling in this case are less common, but I think I can get some doubling in myself in this condition.

One of the great geniuses of the 19th century, a pivotal figure in physics and physiology and psychology, Hermann von Helmholtz writes:

When a person's attention is directed for the first time to the double images in binocular vision, he is usually greatly astonished to think that he had never noticed them before, especially when he reflects that the only objects he has ever seen single were those few that happened at the moment to be about as far from his eyes as the point of fixation. The great majority of objects, comprising all those that were farther or nearer than this point, were all seen double (1910/1962, III.7).

The eminent 18th-century philosopher Thomas Reid writes:
We find that when the eyes are sound and perfect, and the axes of both directed to one point, an object placed in that point is seen single.... Other objects at the same distance from the eyes as that to which their axes are directed do also appear single.... Objects which are much nearer to the eyes, or much more distant from them, than that to which the two eyes are directed, appear double. Thus, if the candle is placed at the distance of ten feet, and I hold my finger at arms-length between my eyes and the candle; when I look at the candle, I see my finger double; and when I look at my finger, I see the candle double: and the same thing happens with regard to all other objects at like distances which fall within the sphere of vision.... You may find a man that can say with good conscience, that he never saw things double all his life; yet this very man, put in the situation above mentioned, with his finger between him and candle, and desired to attend to the appearance of the object which he does not look at, will, upon the first trial, see the candle double, when he looks at his finger; and his finger double, when he looks at the candle. Does he now see otherwise than he saw before? No, surely; but he now attends to what he never attended to before. The same double appearance of an object hath been a thousand times presented to his eye before now; but he did not attend to it; and so it is as little an object of his reflection and memory, as if it had never happened (1764/1997, p. 133-134).

E.B. Titchener, the leading American introspective psychologist circa 1900 (second in eminence only to William James), writes:
The field of vision … shows a good deal of doubling: the tip of the cigar in your mouth splits into two, the edge of the open door wavers into two, the ropes of the swing, the telegraph pole, the stem of another, nearer tree, all are doubled. So long, that is, as the eyes are at rest, only certain objects in the field are seen single; the rest are seen double.... Our habitual disregard of double images is one of the curiosities of binocular vision (1910, p. 309).

I assume most people now would not agree with such claims.

What's going on? Are these guys, despite their reputations -- and despite Helmholtz's and Titchener's many subtle and interesting introspective discoveries -- simply bad introspectors? I can barely get any doubling, I think, except in the most extreme conditions. I focus on the bookshelf four feet away; the door handle ten feet away seems not at all doubled. Have they unwittingly trained themselves to see double? If so, do they really see most things double, most of the time?

Maybe I -- we? -- are the bad introspectors? Yet I find it very difficult to imagine that I'm wrong about the singular appearance of that doorknob....

Friday, September 01, 2006

Is Pride in a Sports Team Foolish Pride? (By Guest Blogger Brad Cokelet)

I had a friend in high school, let’s call him Randy the Rebel, who was proud to have never read any of the assigned books for any of his English classes. I remember one break during college when he said he was no longer proud of that -- he realized his pride had been foolish because not reading the books was nothing to be proud of.

This raises an interesting question: what should we, and what should we not, be proud of?

In thinking about this it is useful to distinguish between two reasons we might have for saying that someone’s pride is foolish. The first is that the person is proud of something morally or ethically objectionable. The Nazi guard’s pride in having killed more prisoners than any other is foolish, and itself immoral, because nothing immoral is something to be proud of. But, as Justin D’Arms and Daniel Jacobson have argued, we may also criticize someone’s pride simply because it is inappropriate; some things are nothing to be proud of even though they are not immoral. Take, for example, my friend Randy’s “feat” of not reading the assigned books.

But what, then, makes something an appropriate object of pride? What does Randy’s “feat” lack?

One suggestion, built on Philippa Foot's comments in her paper “Moral Beliefs,” is as follows: in order to be an appropriate object of pride a thing must (1) belong to the person who is proud of it and (2) provide the person with some advantage or be an achievement. On this view, Randy’s pride was inappropriate because not having read the books was not really much of an achievement and provided him with no overall advantage. Personally, I like this but lean towards making the advantage bit a necessary condition; I think it is foolish to be proud of something that is not good for you.

D’Arms and Jacobson have recently objected to this account by appeal to the example of a sports fan. Consider a fan who is proud of the Buccaneers. On Foot’s view, this seems inappropriate because the team is not something that belongs to the fan. Or so D’Arms and Jacobson claim. On the contrary, I think that when we say that a fan is proud of her team, we really mean that she is proud to be a fan of a winning team, and that *is* something that belongs to her.

But even if that response works, I have to admit that on my view being a sports fan, even of a winning team, is not much to be proud of, because it is not much of an achievement (it is mostly luck) and gives you only a minimal advantage (bragging rights, maybe the spoils from an office pool, etc.). I think it is foolish to be proud of your favorite team’s accomplishments.