Friday, August 31, 2007

Describing Inner Experience? -- book cover

Whew! I've been chugging away so hard at a draft of my "Eyes Closed" essay that I've barely come up for air the last two days. But instead of trying everyone's patience with more of that, I think I'll toss up something visual: The book cover design for Russ Hurlburt's & my forthcoming book Describing Inner Experience? Proponent Meets Skeptic.


Russ and I gave a subject, Melanie, a random beeper which she wore during her normal daily activities. When the beep went off, Melanie was to make her best effort to discern and recall her last undisturbed moment of inner experience immediately before the beep. She collected several such samples a day, and then we interviewed her about them -- Russ as a proponent of the method, regarding it as the best way he knows of getting at people's real inner lives; I as someone pretty skeptical about the accuracy of introspective reports.

We edited transcripts of six days of interviews, with Melanie describing her randomly sampled experiences as best she could, Russ probing her about them based on years of experience with such interviews, and me probing in my own way and asking skeptical questions. We added several dozen side boxes continuing our debates and connecting what's going on in the interviews with contemporary and historical literature in psychology and philosophy. Finally, Russ and I wrote introductory and concluding chapters, each from our own perspective.

But most importantly: What do you think of the tangled thread cover design? Pretty nifty?

Wednesday, August 29, 2007

Eyes Closed in the Sun

Okay, I confess, I'm obsessed. I've been trying to acquaint myself with the whole history of phenomenological reports of visual experience (excluding imagery experience and afterimages) with one's eyes closed. I've been running subjects and bothering undergraduates and Philosophy Department staff. I've even opened a new blog category: eyes closed.

Today, among other things, I obtained two reports on visual experience while facing the sun with eyes closed. Reviewing almost 200 years of literature, I have, amazingly, found no attempted replication of Purkinje's 1819 report that "most people" see checkerboard figures under these conditions. Here's a figure drawn from his own experience:



I don't think I've ever seen such checkerboard or latticework figures in the sun! (And now, of course, I've tried several times for extended periods.) I gave four people beepers and asked them to sit in the sun; only two, I think, actually did face the sun directly, and one reported a Purkinje-like "honeycomb" visual experience in a couple of cases, while the other reported no such thing.

So today, I coaxed two people to stare with closed eyes directly at the noontime sun for seven minutes (independently, one after the other), while I collected blow-by-blow reports. About halfway through each observation, I explicitly asked them if they saw any checkerboard or honeycomb-like figures.

The observers' reports were very similar: Both reported bright fields fluctuating in color from red to orange or yellow or white. Both reported the field as pretty uniform, apart from some perturbations (one reported diagonal lines that came and went, the other reported squiggles and lightning-like branching figures), and possibly a bit darker toward the periphery. Both explicitly denied any checkerboard, latticework, or honeycomb-like shape.

These reports are very similar to what I recall experiencing myself, though I believe I in no way suggested anything like this to the observers. When I invited him, the first observer said he knew what he would experience -- a bright red spot in the middle of his visual field -- and he was surprised to have to report otherwise.

Yet Purkinje's language about this is very strong:

Furthermore, I must mention that the described figures, especially the little squares, were noticed by most individuals with whom I made the experiments, insofar as, without drawings, it was possible to get an imperfect report through words.

They would come, therefore, not merely to particular individuals under quite special organic conditions, but rather would be grounded in general conditions of the organism or even in all subjects due to physical laws.

Curious!

[Update, Sept. 10: I've just discovered a translation of this passage in Purkinje (Wade & Brozek 2001) and I think I may have got a key phrase and idea wrong. It looks as though Purkinje thinks that one must maintain a fast waving of the fingers and that this is key to the phenomenon, which is almost stroboscopic. Helmholtz 1856/1909/1962, vol. 2, p. 256-7, reports similar phenomena; as do Smythies (1957) and others in discussion of "stroboscopic" effects....]

Monday, August 27, 2007

Eyes Closed Visual Experience -- Subject 2

What, if anything, do you visually experience when your eyes are closed? Historical reports are diverse (some of the most detailed are Purkinje's, partially translated here). So are casually collected contemporary introspective reports. To get some more data, I gave five people random beepers to wear while keeping their eyes closed for extended periods. About a week ago, I described what Subject 1 said was going on visually with him at the last undisturbed moment before he was beeped. Today, Subject 2.

While Subject 1 reported sensory visual experience in only 4 of his 14 samples (and visual imagery in many but not all of the rest), Subject 2 (who, like Subject 1, was a male graduate student in philosophy) described sensory visual experiences in all 10 of his samples. This immediately prompted me to wonder: Does Subject 1 really have relatively little visual experience when his eyes are closed, or did he simply forget his experience? Did Subject 2 really have visual experience in every single sample, or did he unwittingly fabricate some post hoc, after the beep occurred? Do you think you always have visual experience when your eyes are closed? Or does the experience fade away entirely when your mind is on other things? People appear to have divergent intuitions on this question.

When Subject 2 took his samples in a relatively dark environment, he tended to report something like a black/gray wash permeated with varying tones of yellows, oranges, and whites, shifting in intensity but not swirling or moving in an organized way. In some samples, one side of the field might be lighter gray than the other, or yellowish-orangish (though still, despite the hue, as dark as black -- reminding me of Paul Churchland's "chimerical colors"). Not so different from Subject 1's reports.

Subject 2 also took samples in direct sunlight -- the first was fairly similar to the dark samples, but with more yellow-orange as well as streaks or flashes of light. However, in the remaining sunlight samples, Subject 2 tended to report honeycombs and latticework over varying backgrounds. For example, in one sample Subject 2 reported a bright red field with a large magenta circle in the center, with a darker red honeycomb structure over the whole field.

One reason I found these sunlight reports interesting is that Purkinje in 1819 reported that most of his observers saw checkerboard or latticework patterns when they faced the sun with their eyes closed (see fig. 2 here) -- something I don't recall any other researcher saying, and something I don't think I normally experience. (Do you?) So almost two centuries later, is Purkinje's report being vindicated? I resolved to ask my three remaining subjects to take some samples in direct sunlight....

Friday, August 24, 2007

From Helmholtz's Treatise on Physiological Optics

Today I share a long quote from Hermann von Helmholtz's Treatise on Physiological Optics. Helmholtz was one of the great intellectual figures of the 19th century, making seminal contributions in physiology and physics as well as psychology.

It might seem that nothing could be easier than to be conscious of one’s own sensations; and yet experience shows that for the discovery of subjective sensations some special talent is needed, such as PURKINJE manifested in the highest degree; or else it is the result of accident or of theoretical speculation. For instance, the phenomena of the blind spot were discovered by MARIOTTE from theoretical considerations. Similarly, in the domain of hearing, I discovered the existence of those combination tones which I have called summation tones.... It is only when subjective phenomena are so prominent as to interfere with the perception of things, that they attract everybody’s attention. Once the phenomena have been discovered, it is generally easier for others to perceive them also, provided the proper precautions are taken for observing them, and the attention is concentrated on them. In many cases, however – for example, in the phenomena of the blind spot, or in the separation of the overtones and combination tones from the fundamental tones of musical sounds, etc. – such an intense concentration of attention is required that, even with the help of convenient external appliances, many persons are unable to perform the experiments. Even the after-images of bright objects are not perceived by most persons at first except under particularly favorable external conditions. It takes much more practice to see the fainter kinds of after-images. A common experience, illustrative of this sort of thing, is for a person who has some ocular trouble that impairs his vision to become suddenly aware of the so-called mouches volantes in his visual field, although the causes of this phenomenon have been there in the vitreous humor all his life. Yet now he will be firmly persuaded that these corpuscles have developed as a result of his ocular ailment, although the truth simply is that, owing to his ailment, the patient has been paying more attention to visual phenomena. No doubt, also there are cases where one eye has gradually become blind, and yet the patient has continued to go about for an indefinite time without noticing it, until he happened one day to close the good eye without closing the other, and so noticed the blindness of that eye.

When a person’s attention is directed for the first time to the double images in binocular vision, he is usually greatly astonished to think that he had never noticed them before, especially when he reflects that the only objects he has ever seen single were those few that happened at the moment to be about as far from his eyes as the point of fixation. The great majority of objects, comprising all those that were farther or nearer than this point, were all seen double (Helmholtz 1856/1909/1962, vol. 3, p. 6-7, emphasis in original).

In a Cartesian (1641/1984) or Price-ian (1932) mood, it can seem almost impossible to doubt the correctness of your concurrent judgments about your ongoing experience; but the leading figures of introspective psychology had quite the opposite opinion (as do I here and here). This was, no doubt, grounded in their experience of finding people disagreeing radically about their phenomenology, without any plausible physiological or behavioral or environmental differences underlying that disagreement; and of people changing their minds as their theories change, conforming too neatly to expectations, being swayed by the reports and opinions of their friends and advisors, and missing things that seem in retrospect to be obvious.

Consider Helmholtz’s own examples in this passage. The most familiar example to contemporary readers is the blind spot, which even in monocular vision can be very difficult to notice without aid. The musically or psychoacoustically trained will be familiar with combination tones and overtones, which are accompanying tones different in pitch from the fundamental tones produced by musical instruments. These tones surely add to our musical experience, but they can be very difficult to discern without training (see here for further explanation and a recreation of the combination tone training exercises of Titchener 1901-1905). Whether Helmholtz is right about there being summation tones in particular (tones of the pitch characteristic of A+B, supposedly produced when sounds of frequency A and B occur together) remains unclear -- buttressing his fundamental point. People sometimes notice bright afterimages -- especially those that interfere with ordinary perception, such as after having glanced at the sun -- but they rarely notice faint ones, which one might (with Helmholtz) think to be a more or less constant phenomenon of vision; nor do they notice the imperfections and floaters in the fluid that fills the eye, even when they're looking for them. But is this an imperfection in introspection, as Helmholtz supposes, or is our visual experience normally free of such perturbations?

Helmholtz’s final example is maybe the most striking: He suggests that most of the objects in the visual field, most of the time, are seen double, but we fail to notice that. Reid (1764/1997) and Titchener (1910) and others make similar remarks (I discuss this also here and here). If you hold your finger near your nose and focus in the distance, the finger may seem to you to double. But is our visual experience of most objects like that? I can’t say it seems to me that way as I gaze about the room. But I haven’t had 10,000 trials of introspective training yet! Or maybe it’s Helmholtz and Reid and Titchener who are mistaken? But that only advances Helmholtz’s central point about the difficulty of introspection. Or are we to suppose that Helmholtz and Reid saw most things double and the rest of us do not? -- that everyone is right about his own experiences and wrong about everyone else’s? Besides the physiological and psychological implausibility of that (unless we see appropriate corresponding physiological or psychological differences), that supposition makes nonsense of people’s changing their minds....

Wednesday, August 22, 2007

Self-Reported Vividness of Imagery and the Cortex

In 2002, I published an article critical of the generally weak-to-nonexistent relationships between self-reported vividness of imagery and performance on tasks psychologists have often thought to involve imagery, such as mental rotation tasks and tests of visual memory and visual creativity. Differences in subjective report about imagery, I suggested, may relate only poorly to real differences in imagery experience. This fits with my general skepticism about the trustworthiness of our reports about our own conscious experience.

Yesterday, on a tip from Anibal in a comment on Monday's post, I read two articles on the relationship between self-reported vividness of visual imagery and activation in the cortex during visual imagery tasks.

The self-report measure was the widely-used Vividness of Visual Imagery Questionnaire (VVIQ). The VVIQ asks respondents to imagine various scenes (e.g., some relative or friend -- the exact contours of face, head, shoulders, and body; characteristic poses, etc.) and then rate the "vividness" of the resulting image on a scale from 1 (perfectly clear and vivid as normal vision) to 5 (no image at all, you only "know" you are thinking of the object).

Amedi et al. 2005 looked at nine subjects. Those who rated their images as more vivid on the VVIQ showed a trend (not statistically significant, though) to have more activity in their visual cortex during visual imagery tasks and, perhaps more interestingly, a substantial (and statistically significant) tendency to show less activation in their auditory cortex -- which Amedi et al. interpreted as showing a narrow focus of concentration on visual matters.

Cui et al. 2007 retested the issue of visual cortex activation and were able to confirm Amedi et al.'s trend: In their eight subjects, there was a strong and significant tendency for those claiming more vivid imagery on the VVIQ to show more activation in the visual cortex during visual imagery tasks. Although Cui et al. don't comment on this, a striking trend appears in their time course data: The self-rated poor visualizers start out with as much visual cortex activation as their vividly-visualizing peers, but that activation rapidly declines relative to other brain activity (over 10 seconds), while the good visualizers keep their level of visual cortex activation constant or increase it. The possibility occurs to me, then, that the difference between them may be in maintaining focus on the task -- which would also harmonize with the Amedi et al. results that more vivid self-rated visualizers show more selective cortical activation.

It still puzzles and troubles me that VVIQ scores should relate so poorly to behavioral performance. If only there were more research like this, showing consistent relationships between self-report of conscious experience and third-person measures!

Monday, August 20, 2007

Can You Directly Will Sensory Experiences?

You can will changes in your sensory visual experience indirectly, of course, by deliberately looking one direction, or closing your eyes, or pressing on your eyelids. And you can directly will visual imagery experiences by, for example, deciding to form the image of your house as seen from the street. But normally we don't think we can directly will sensory experience: We don't think we can will ourselves simply to see red or see a cross-shaped figure.

In 1894, the eminent psychologist George Ladd asserted, to the contrary, that he and his students could form visual experiences by direct willing.

What they were asked to do was briefly this: to close the eyes, allow the after-images completely to die away, and then persistently and attentively to will that the color-mass caused by the Eigenlicht [that is, the dark or chaotic visual field one supposedly experiences with one's eyes closed] should take some particular form, - a cross being the most experimented with.... Of the sixteen persons experimenting with themselves, four only reported no success; nine had a partial success which seemed to increase with practice and which they considered undoubtedly dependent directly upon volition; and with the remaining three the success was marked and really phenomenal. It should be said, however, that of the four who reported 'no success,' only one appears to have tried the experiment at all persistently.

As far as I am aware no one has ever published an attempted replication of Ladd's experiment.

What do you think? Can you make a cross -- not just an image of a cross but a sensory experience of a cross -- by closing your eyes and trying hard? Ladd recommends a few trials of no more than 5-7 minutes.

The Chinese Government Blocks This Blog

... I've just learned.

Maybe it's because of my dim view of Laozi?

Someone over there must have a lot of time on his hands, if he's combing through obscure philosophy blogs.

How Not to Pack a Suitcase

My friend Doug King advises us How Not to Pack a Suitcase.

Friday, August 17, 2007

Eyes Closed Visual Experience -- Subject 1

Over the last year, I've been thinking a bit about visual experience with our eyes closed (e.g., here, here, here, here). A few months ago, I started giving volunteer subjects random beepers and having them keep their eyes closed for two hours a day over the course of three days. After each random beep, they were to note whether they had any sort of visual experience in the last undisturbed moment immediately before the beep, and if so what it was. After each day (about 3-6 beeps), each participant came to my office for an hour-long interview.

Subject 1 was a male graduate student in philosophy who expected not to find any visual experience, not even of blackness, with his eyes closed. (On people's differing opinions about the omnipresence or not of visual experience see here.) He distinguished sharply between sensory visual experiences and visual imagery experiences. (To understand that distinction, keep your eyes open and form a mental image of the front of your house. There's a difference of some sort -- even if only in vividness [per Hume and Perky] -- between that imagery experience and your ordinary sensory visual experience, no?)

Of the 14 sampled experiences we discussed, Subject 1 reported 8 with visual imagery experience only and no visual sensory experience, 1 with visual sensory experience only and no visual imagery experience (the very first sample), 2 with neither sort of experience, and 3 with both visual imagery and visual sensory experience. He did not think his visual sensory experience and his visual imagery experiences interacted at all -- for example, in one sample he reported a visual sensory experience of a uniform, darkish orange field of light with a slight texture (as I understood it, he meant not a textured depth, but a bit of repetitive random variation in the color). He also had complex visual imagery of a former apartment of his. The visual imagery was not tinted orange, nor was it located in space (next to, behind, etc.) relative to the sensory visual experience.

Subject 1 generally reported his visual sensory experience with his eyes closed to be flat, two dimensional, and located in a forward direction, but without any sense of depth or distance. One experience he described as being "black with staticky [colored] swirls"; the rest he described as a lightly textured uniform darkish orange.

In coming posts, I'll describe four more subjects' reports. The big question for me: What patterns will emerge in their reports? What will they tend to agree on and disagree on? Will they all describe their sensory experience as flat? Will they all have the same view of the difference between, and non-interaction between, visual imagery and visual sensation? Will they report relatively simple, undifferentiated visual sensations, like Subject 1's? Like Subject 1, will they report visual sensory experience in only a minority of samples?

Not since the early 20th century has this sort of thing been studied systematically -- not that I have found yet, anyway! -- and early authors diverged considerably in their opinions.

Wednesday, August 15, 2007

Zombies and anti-zombies (by guest blogger Keith Frankish)

I'm coming to the end of my stint as guest blogger. It's been fun, and I'd like to thank Eric and everyone who has commented. I thought I'd finish with another post about consciousness.

Every schoolboy knows how the zombie argument goes. Zombies -- physical duplicates of us that lack consciousness -- are clearly conceivable. If a scenario is clearly conceivable, then it is metaphysically possible (the conceivability-possibility, or CP, principle). So zombies are metaphysically possible, and therefore physicalism is false. (Physicalism is the view that consciousness supervenes metaphysically on the physical and thus that there is no world where the physical correlates of consciousness are instantiated without consciousness.) I suspect that the first premise here is false -- that zombies are not conceivable, at least in the rigorous way required by the argument. (For a persuasive statement of the case for this view, see Allin Cottrell's paper, 'Sniffing the Camembert'.) But even if that's wrong, I still don't think the argument works. Like many people, I'm suspicious of the CP principle. And one way to highlight the problem is to note that physicalists can also invoke the principle to argue for their position. Here's how it goes.

Consider anti-zombies. These are beings that are physical duplicates of humans, and that have no non-physical properties, but which are nonetheless conscious. They inhabit an anti-zombie world, which is a physical duplicate of ours, but where no non-physical properties are instantiated. (Physicalists think that we are anti-zombies, of course.) Then we can run an anti-zombie argument for physicalism, as follows. Anti-zombies are conceivable and therefore, by the CP principle, metaphysically possible. And if anti-zombies are metaphysically possible, then physicalism is true. The last step may seem a big one, but it should be uncontroversial. In the anti-zombie world consciousness is physical, so the microphysical features of that world are metaphysically sufficient for consciousness, and any world with the same microphysical features will have the same distribution of phenomenal properties. But, by definition, our world has the same microphysical features as the anti-zombie world. Hence the microphysical features of our world are metaphysically sufficient for the existence of consciousness, which is to say that physicalism is true.
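To display the parallel starkly -- the formalization here is mine, not part of Keith's original post -- write C(p) for 'p is conceivable' and ◇p for 'p is metaphysically possible', with Z the zombie scenario and A the anti-zombie scenario:

```latex
\begin{align*}
\text{Zombie:}\quad      & C(Z), \;\; C(Z) \to \Diamond Z, \;\; \Diamond Z \to \neg\text{Physicalism} \;\;\vdash\;\; \neg\text{Physicalism} \\
\text{Anti-zombie:}\quad & C(A), \;\; C(A) \to \Diamond A, \;\; \Diamond A \to \text{Physicalism} \;\;\vdash\;\; \text{Physicalism}
\end{align*}
```

The middle premise is the same CP principle in both cases, and the conclusions contradict each other -- which is why, as the next paragraphs argue, a friend of the zombie argument is forced to attack C(A).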

The argument has been anticipated by various writers -- notably Peter Marton -- but I've tried to give it a definitive statement in a recent paper (available here for those with a Blackwell Synergy subscription). As I stress in the paper, the only response available to defenders of the zombie argument is to deny the first premise, that anti-zombies are conceivable.

The point can be made independently by considering the unique world that is a physical duplicate of ours and where no further, non-physical properties are instantiated. This should be a zombie world, if any is. But it's also the only candidate for an anti-zombie world. Thus, the possibility of zombies is incompatible with that of anti-zombies. And if conceivability entails possibility, then the conceivability of zombies is incompatible with that of anti-zombies. So defenders of the zombie argument must deny that anti-zombies are conceivable.

Now of course physicalism is the view that we are anti-zombies, so if anti-zombies aren't conceivable then physicalism isn't conceivable either. In short, if you want to endorse the zombie argument, then you have to maintain that physicalism is inconceivable.

That's all from me. So long and thanks for all the fish.

Monday, August 13, 2007

On the Winnowing of Greats

Before about 1950 or so, it seems, academic giants straddled whole fields. They were few and far between. Now, it seems, there are no Greats, though there are a number of Very Goods. Consider 1850-1950: In philosophy: Mill, Marx, Nietzsche, Heidegger, Russell, Wittgenstein (and others). In psychology: Freud, Piaget, James (and others). So also in other academic fields. Where are the Einsteins and Darwins of the last 60 years? Have we run dry? (Let's not pretend Hawking and Crick are Einstein and Darwin, please!)

A few obvious factors:

* Fields are much more specialized. Given the number of researchers and articles, it's going to be very difficult to have an expert command of as broad an area as all of psychology or all of physics. Part of being Great might be having command of a wide area.

* Also because there are many more people in each field, one would expect the very best thinkers to have other nearly as good thinkers nearby. Greatness might be in part a comparative measure.

* With improved communication and travel, it's easier to keep abreast of the best and latest thinking from relatively remote places. This creates a more competitive, egalitarian atmosphere. It also may make it more difficult for the odd genius to incubate away from mainstream opinion.

But there's also, I think, the following effect, little remarked upon (and somewhat in tension with my Hawking/Crick remark above), which I'll call the winnowing of greats with distance: The farther away your perspective on any body of people varying in eminence, the more isolated and comparatively great will the most eminent among them seem.

Consider the matter abstractly, first. Let's say that a field has 100 eminent practitioners, with levels of eminence varying from 1 to 100 -- with most clustered near the bottom of this distribution and tailing off toward the top. For specificity, suppose the 10 most eminent are A (with an eminence of 91), B (80), C (75), D (64), E (58), F (57), G (50), H (46), I (45), and J (40). Suppose you're in the field, and you know of all 100 people. A is the most eminent, but B isn't far off, and all of A-J are among the very eminent -- in the 90th percentile among the 100 most eminent practitioners, after all! To the extent greatness involves being head-and-shoulders above everyone else, A, though the most eminent, is only one of a group.

Someone who knows much less of the field might only have heard of A-J. Everyone else, from their perspective, will be a non-entity. Someone who knows still less may know only A-E, or A-C, or A. In textbooks and summaries, where only one or a few people can be mentioned, A will be mentioned almost every single time, and B and C will sometimes be mentioned; D rarely so. Suddenly A, or A-C, are no longer the best among peers but peerless.

Consider early introspective psychology: All academics know James. All psychologists have heard of Wundt. Many psychologists know about Titchener. But only specialists like me know Kuelpe and Mueller and Stumpf and Calkins and Sanford. Though I don't deny that James was the best of the lot and indeed a rare genius, from my perspective he is not a peerlessly great, solitary figure. Similarly, we all know Chaplin, but when I started learning more about silent comedy, I learned about Keaton and Lloyd, and Chaplin no longer seemed quite as peerless.

As we back away from late 20th-century philosophy and our lists get shorter, will Rawls and Kripke (and a few others) look more and more like they stand alone?

One other effect as one backs away from a field: One's judgment becomes homogenized with that of others. An early film enthusiast may not actually think Chaplin is the best. I might think Kuelpe has the edge over Titchener. The implicit uniformitarianism of ignorant distance can produce a unanimous chorus that artificially gives the impression of greatness (though over time this may become the irresistible, self-fulfilling "judgment of history").

Friday, August 10, 2007

'What am I?' (by guest blogger Keith Frankish)

Eric's recent post about subjective time set me thinking about how strange and different the mental life of children can be. Here's a little example from my own experience. When I was a young child, around the age of four, I discovered that I could put myself into a rather odd state of mind simply by repeating to myself the question 'What am I?'. This had two effects. First, it generated a strong sense that I was not the boy Keith – the boy whose body I was associated with. The sense wasn't simply of a dissociation between mind and body; rather it was the sense of being a different person from Keith, in mind as well as body. It was as if I were someone who inhabited Keith's body and normally let him speak and act for me, but who was nonetheless quite distinct from Keith and didn’t wholly approve of him. The second effect was to generate a mild form of out-of-body experience. I felt as if I were slightly behind and above Keith's body, almost outside it, but not wholly separate. (Of course, I am describing all this in an adult vocabulary, but I'm trying to capture what it felt like, as far as I can remember it.)

I used to find this experience interesting rather than scary, and I would induce it quite often. I'm not sure how I interpreted the feelings it generated, though I do remember that I was puzzled enough to ask my mother what I was -- what I was -- to which of course I got the true but unsatisfactory response that I was a little boy. As the years passed, the experience became weaker and it became harder and harder for me to induce it, and by adolescence I had completely lost the knack.

What was going on? I don't think it was merely a problem with the indexical 'I'. The sense of distinctness was too real to be the product of semantic confusion, and I didn't have similar problems with other indexicals (I wasn't given to asking where was here, for example). Perhaps it was a side-effect of the acquisition of full-blown theory of mind -- which we know happens between three and four. With theory of mind in place, we are able to think, not only about the thoughts of others, but also about our own thoughts, and to a child this might easily generate a sense of puzzlement. 'If I am thinking about someone's mind,' a child might reason, 'then I must be separate from that person, especially if I disapprove of their thoughts and feelings' (as I said, I didn't wholly approve of Keith). In an imaginative child, this puzzlement might also generate some phenomenology of dissociation. One attraction of this account is that it would explain why I eventually lost the ability to induce the dissociative state, since the fallacy in the reasoning behind it would in time have become apparent.

I'd be interested to know if others can recall having similar experiences or if anyone knows of research that has been done on this topic.

Monday, August 06, 2007

New Essay: Do Ethicists Steal More Books?

I've put up a new essay on my homepage: Do Ethicists Steal More Books? This essay presents more formally and in more detail the data discussed in two previous posts: Still More Data on the Theft of Ethics Books and Liberating On Liberty (from the Library).

I'm planning to submit this essay to a psychology journal soon. So email me your devastating objections now, before I embarrass myself in public! (Well, I suppose my websites are public too -- but you all know and love me, right?)

The abstract:

If explicit reasoning about morality is morally useful, as Kohlberg and many ethicists have suggested, then one might expect ethics professors to behave particularly well. However, professional ethicists’ behavior has never been systematically studied. The present research examines the rates at which ethics books are missing from leading academic libraries, compared to other philosophy books. Study 1 found that contemporary (post-1959) ethics books were actually 25% more likely to be missing than non-ethics books. When the list was reduced to the relatively obscure books most likely to be borrowed exclusively by professional ethicists, ethics books were almost 50% more likely to be missing. Study 2 found that classic (pre-1900) ethics books were more than twice as likely to be missing as other classic philosophy books.

I'm off camping for a few days, so this is in lieu of the usual Wednesday post. See (see?) you all Friday!

Introspection and consciousness (by guest blogger Keith Frankish)

One of the deepest disagreements about consciousness is whether the subjective character of experience is exhausted by its intentional content or whether it also has an intrinsic, non-representational component. The latter view is the traditional one, but it has come under attack in recent years from first-order representational theorists, such as Fred Dretske and Michael Tye.

Now, you would think this dispute would be easy to settle. The putative intrinsic properties of experience are very different from the properties of external objects represented in experience. The reddishness (to use Joseph Levine's term) of an experience of a red apple is a very different property from the redness of the apple itself. And if our experiences have these distinctive non-representational properties, then surely introspection should reveal this to us. (Indeed it's not clear that anything else could reveal it.)

This invites a bit of experimental philosophy, so I ran an informal survey on the philosophy discussion lists at the Open University. I asked people to pay close attention to their perceptual experiences and say whether, when they did so, they were (a) aware only of properties of the objects of the experiences, or (b) aware both of properties of the objects of the experiences and of properties of the experiences themselves. Twenty-two people replied, of whom five said (a), ten said (b), and seven objected to the way the question was posed. (Four respondents said the answer was sometimes (a) and sometimes (b), so I counted them as a half member of each of those camps.) From their comments, it appeared that some people were answering (b) because they were aware of feelings and reactions associated with their perceptual experiences, so I re-ran the survey stressing that participants should ignore such associations and focus on the character of the experiences themselves. In the event, this seemed to make little difference. Fourteen people replied, of whom three said (a), six said (b), and five questioned the question.

Now, of course, this wasn't a serious piece of research, but the results are interesting all the same -- both because of the number of people who rejected the question and because of the disagreement among those who accepted it. Why should people reject the question? Either experiences possess non-representational properties or they don't, and introspection should be the best, if not the only, way to find out. And assuming people don't have radically different inner lives, how could they differ as to the answer to the question? If experiences of red things possess reddishness, then how could even a minimally attentive introspector miss the fact? And if they don’t, then how could introspection lead us to think they do?

Of course, I'm being a bit disingenuous here. I think that introspective reports are heavily theory-laden, so I wasn't surprised by the results. But the results ought to be surprising, I think, on a very common view of consciousness, which takes the nature of the phenomenon to be an unproblematic given. ('If you have to ask, you ain't never gonna know.') What would Jackson's Mary say, I wonder, if she knew that people on the outside had such differing views about what could be learned from introspection?

Friday, August 03, 2007

Do Business Ethics Courses Do Any Good?

... and by "doing any good" I mean do they actually cause students to behave more ethically?

A hard issue to study, but surely there's some research on it, even with some mickey-mouse measure of ethical behavior? Or maybe there's an epidemiological study or two of people convicted of white-collar crimes -- are they any more or less likely to have been exposed to business ethics courses than an appropriately matched group of non-criminals?

Well, shoot. I can't find a single study. As far as I can see from the journals, no one has ever studied the effects of taking a business ethics course on real-world behavior. Hm!

A number of studies have looked at whether taking a business ethics course is related to self-reported attitudes about business ethics or sophistication of reasoning about moral dilemmas. The results are mixed, with some studies finding that students completing business ethics courses show more ethical or more mature responses (Boyd, 1981; Glenn, 1992; Hiltebeitel & Jones, 1992; Murphy & Boatright, 1994; Loe & Weeks, 2000; Luthar & Karri, 2005) and others finding a very limited relationship (Duizend & McCann, 1998; Conroy & Emerson, 2004) or none at all (Wynd & Mager, 1989; Borkowski & Ugras, 1992; Smith & Oakley, 1996; Martin, 2007).

Many of these studies are flawed in not having control groups or control questions. Without control questions, students can be rated as "more ethical" by means of simple strategies. For example, a number of studies simply measure the degree of students' self-reported condemnatory attitudes about hypothetical violations of ethical standards. Students may then appear more ethical simply by showing a bias toward regarding any presented scenario or behavior as ethically problematic -- a response strategy that ethical training courses may tend to encourage but which needn't show any real improvement in moral understanding, much less in moral behavior. The literature is, if anything, even worse than the literature on the relationship between religion and moral behavior.

Let me hazard a guess as to why there are no published studies on the real-world effects of business ethics courses: There is no effect. Not overall (again, perhaps, like religion). But studies with a null effect have to be pretty good (or pretty large) to be published, and given the difficulty of the assessment no such good or large studies yet exist.

Shoast

Cati Porter urges us to eat our shoast.

Since a friend of mine (Dan George -- I'd link him, but his website has been hijacked!) and I invented the word, vanity -- and curiosity about the unlikely possibility that the word might make it into wider usage -- impel me to link it here!

Tuesday, July 31, 2007

Another puzzle about belief (by guest blogger Keith Frankish)

9 am: Jack enters his office and flips the light switch. Call this event A. It is plausible to think that there's an intentional explanation for A: Jack wants light and believes that flipping the switch will produce it. But light doesn’t come. The bulb goes pop, and Jack sets off to the store cupboard to get a replacement.

9.05 am: Bulb in hand, Jack re-enters his office, and again flips the switch -- then curses his stupidity. Call the second switch-flipping event B. Now what is the explanation for B? More specifically, is the explanation the same as for A, and is it an intentional one?

There are four options, and each has its problems:

1) The explanation is the same and it is intentional: Jack wants light and believes that flipping the switch will produce it. Problem: In the run-up to event B Jack surely doesn't believe that flipping the switch will produce light. After all, he knows that the bulb is blown and that blown bulbs don’t produce light, and he is minimally rational.

2) The explanation is the same and it is not intentional -- perhaps the movement is a reflex one. Problem: Flipping a light switch is just one of a vast array of routine unreflective behaviours for which we find it perfectly natural to give intentional explanations. If these actions are not intentional, then the realm of folk-psychological explanation will be massively reduced, vindicating at least a partial form of eliminativism.

3) The explanation is different and it is not intentional. Problem: It's implausible to think that A and B have different explanations. In a real-life version, I'd be willing to bet that the neurological processes involved in the two cases were of the same type.

4) The explanation is different and it is intentional. Problem: As for (3), plus it's hard to see what alternative beliefs and desires might have motivated B.

This puzzle about belief seems to me an important one, though it has received relatively little attention -- which is why I thought I’d give it an airing here. (One of the few extended discussions I know of is by Christopher Maloney in a 1990 Mind and Language paper titled 'It's hard to believe'. Eric also discusses cases of this sort in his draft paper 'Acting contrary to our professed beliefs'.)

My own view is that the plausibility of the options corresponds to the order in which I have stated them, with (1) being the most plausible. That is, I would deny that at the time of event B Jack doesn't believe that flipping the switch will produce light. The problem then, of course, is to explain how he can believe that the switch will work while at the same time believing that the bulb is blown and that blown bulbs don’t produce light. The only plausible way of doing this, I think, is to distinguish types, or levels, of belief which are relatively insulated from each other, and to claim that Jack's belief about the effect of flipping the switch is of one type and his belief about the condition of the bulb of the other. (Maloney takes broadly the same line, though he works out the details in a different way from me.) I happen to think that this view is independently plausible, so the puzzle is actually grist to my mill, though distinguishing types of beliefs has its own problems. I'd be interested to know how others react to the puzzle.

Monday, July 30, 2007

Religion and Crime

I've been reading the literature on the relationship between religious conviction and crime, as part of my thinking about the relationship between philosophical moral reflection and actual moral behavior. The literature is pretty weak. Much seems church-inspired and probably deserves about the same level of credence as drug-company funded research showing their blockbuster drugs are wonderful. Much of it is in weird journals.

I found a 2001 "meta-analysis" (Baier & Wright) of the literature that shows all the usual blindnesses of meta-analyses. Oh, you don't know what a meta-analysis is? As usually practiced, it's a way of doing math instead of thinking. First, you find all the published experiments pertinent to Hypothesis X (e.g., "religious people commit fewer crimes"). Then you combine the data using (depending on your taste) either simplistic or suspiciously fancy (and hidden-assumption-ridden) statistical tools. Finally -- voila! -- you announce the real size of the effect. So, for example, Baier and Wright find that the "median effect size" of religion on criminality is r = -.11!

What does this mean? Does being religious make you less likely to engage in criminal activity? Despite the a priori plausibility of that idea, I draw a negative conclusion.

First: A "median effect size" of religion on criminality of r = -.11 means that half the published studies found a correlation even weaker than -.11 -- which is to say, very close to zero.

Second: And that's half the published studies. It's generally acknowledged in psychology that most studies that find no effect -- especially smaller studies -- languish in file drawers without ever getting published. Robert Rosenthal, the dean of meta-analysis, suggests assuming for every published study at least five unpublished studies averaging a null result. (I'll sketch the arithmetic of these first two points below.)

Third: As Baier & Wright note (without sufficient suspicion), the large effects tend to come from the smaller studies and from the studies co-ordinated through religious organizations. Hm!

Fourth: The studies are correlational, not causal. Even if there is some weak relationship between religiosity and lack of criminality, some common-cause explanation (e.g., a tendency toward social conformity) can't be ruled out. Interestingly, two recent studies that tried to get at the causal structure through temporal analyses didn't confirm the religion-prevents-criminality hypothesis. Heaton (2006) found no decrease in crime after the Easter holiday. And Eshuys & Smallbone (2006) found, to their surprise, that sex offenders who were religious in their youth had more and younger victims than those who were comparatively less religious.
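As promised above, here's a minimal back-of-envelope sketch of the arithmetic behind the first two points. Everything in it except the -.11 median is a hypothetical assumption of mine -- the study count, the use of Fisher's z to average correlations, and the treatment of the median as if it were a mean are illustrative simplifications, not Baier & Wright's or Rosenthal's actual procedures:

```python
import math

def fisher_z(r):
    """Fisher's r-to-z transform, the standard way to average correlations."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    """Back-transform an averaged z to a correlation."""
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)

k_published = 20       # hypothetical number of published studies
r_published = -0.11    # Baier & Wright's reported median effect size
filed_per_pub = 5      # Rosenthal's rule of thumb, cited above
r_filed = 0.0          # assumed null result for the file-drawer studies

total = k_published * (1 + filed_per_pub)
z_mean = (k_published * fisher_z(r_published)
          + k_published * filed_per_pub * fisher_z(r_filed)) / total
print(round(inverse_fisher_z(z_mean), 3))  # -0.018 -- next to nothing
```

The exact number doesn't matter, of course; the point is how quickly a small effect melts away once the file drawer is opened.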

Does this suggest that religion is morally inert? Well, another possibility is that religion has effects that go in both directions -- some people using it as a vehicle for love and good, others as a vehicle for hate and evil. (Much like secular ethics, now that I think of it!)

Friday, July 27, 2007

Qualia: The real thing (by guest blogger Keith Frankish)

What is the explanandum for a theory of consciousness? The traditional view is that it is the qualia of experience, conceived of as ineffable, intrinsic, and essentially private properties -- classic qualia, we might say. Now classic qualia don't look likely to yield to explanation in physical terms, and physicalists typically propose that we start with a more neutral conception of the explanandum. They say that we shouldn't build ineffability, intrinsicality, and privacy into our conception of qualia, and that what needs explaining is simply the subjective feel of experience -- the 'what-it-is-likeness' -- where this may turn out to be effable (yes, there is such a word), relational, and public. Call this watered-down conception diet qualia. Though rejecting classic qualia, physicalists tend to assume that it's undeniable that diet qualia exist, and go on to offer reductive accounts of them -- suggesting, for example, that experiences come to have diet qualia in virtue of having a certain kind of representational content or of being the object of some kind of higher-order awareness.

Drawing a distinction between classic qualia and diet qualia (though not under those terms) is a common move in the literature, but I'm suspicious of it. I'm just not convinced that there is any distinctive content to the notion of diet qualia. To make the point, let me introduce a third concept, which I shall call zero qualia. Zero qualia are those properties of an experience that lead its possessor to judge that the experience has classic qualia and to make certain judgements about the character of those qualia. Now I assume that diet qualia are supposed to be different from zero qualia: an experience could have properties that dispose one to judge that it has classic qualia without it actually being like anything to undergo it. But what exactly would be missing? Well, a subjective feel. But what is that supposed to be, if not something intrinsic, ineffable, and private? I can see how the properties that dispose us to judge that our experiences have subjective feels might not be intrinsic, ineffable, and private, but I find it much harder to understand how subjective feels themselves might not be.

It may be replied that diet qualia are properties that seem to be intrinsic, ineffable, and private, but may not really be so. But if the suggestion is that they dispose us to judge that they are intrinsic, ineffable, and private, then I do not see how they differ from zero qualia. They are properties which dispose us to judge that the experiences that possess them have classic qualia -- in this case by disposing us to judge they themselves are classic qualia. If, on the other hand, the suggestion is that diet qualia involve some further dimension of seeming beyond this disposition to judge, then I return to my original question: what is this extra dimension, if not the one distinctive of classic qualia?

In short, I understand what classic qualia are, and I understand what zero qualia are, but I don't understand what diet qualia are; I suspect the concept has no distinctive content. If that's right, then the fundamental dispute between physicalists and anti-physicalists should be over the nature of the explanandum -- classic qualia or zero qualia -- not the explanans. The concept of diet qualia confuses the issue by leading us to think that both sides can agree about what needs to be explained.

Footnote: This shows just how easy it is to be confused about qualia, even when it comes to the real thing.

Wednesday, July 25, 2007

Subjective Life Span

When I was 7 years old, a year seemed a very long time. And indeed it was -- it was 1/7 of my life. Now that I'm 39, a year seems much shorter. But of course now a year is only 1/39 of my life. When I was 7, 30 minutes seemed a long time; now it doesn't seem nearly so long.

Let's suppose that the subjective length of any stretch of time is inversely proportional to one's age. The subjective time of any period is then the integral of 1/x, which is to say the difference between the natural logs of the end and the beginning of the period.

(Most recent psychological work about "subjective time" tends to be about subjective estimations of clock time, or about comparisons of periods close together in time as seeming to go relatively more quickly or more slowly. These are completely different issues than the one I'm contemplating here. They don't get at the fundamental question of whether the clock itself seems to speed up over the life span -- though see Wittmann & Lehnhoff 2005.)

On this model, since the integral of 1/x diverges as its lower limit approaches 0 (from the positive direction), it follows that our subjective life-span is infinite. We seem, to ourselves, subjectively, to have been alive forever. (Of course, I know I was born in 1968, but that's merely objective time.)

There's something that seems right about that result; but an alternative way of evaluating subjective life span might be to exclude the earliest years -- years we don't remember -- starting the subjective life span at, say, age 4.

Adopting that second method, we can calculate percentages of subjective life span. Suppose I live to age 80. At age 39, I've lived less than half my objective life span, but I've already lived 76% of my subjective life span ([ln(39)-ln(4)]/[ln(80)-ln(4)] = 0.76). At what age was my subjective life half over? 18. Whoa! I feel positively geriatric! (And these reflections about philosophers peaking at age 38 don't help either.)

Regardless of whether the subjective life span begins at age 0 or age 4, we can compare the subjective lengths of various periods. For example, the four years of high school (age 14-18) are subjectively 25% longer than the four years of college (age 18-22). Doesn't that seem about right? Similarly, it wasn't until I was teaching for 9 years (age 29-38) that I had been a teacher as long, subjectively, as I had been a high-school student; and it will take 'til age 60 for my subjective years of teaching to exceed my subjective years of high school, college, and grad school combined.

If we throw in elementary school, objective and subjective time get even more out of synch. Those 7 years (age 5-12) will be subjectively equivalent to the 42 years from age 30-72! And unless I teach until I'm 168 years old, I'll always have had more subjective time as a student than as a teacher. Is that too extreme? Maybe so. But I don't really know what it's like to be 168 years old; and I'm not sure how stable and trustworthy my judgments now could be about how long 3rd grade seemed to take. Is 6 times as long as a middle-aged year so unreasonable?
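For anyone who wants to check the arithmetic, here's a minimal sketch of the model (the function is just the ln(b) - ln(a) rule described above, applied to the ages in my examples):

```python
import math

def subjective_duration(a, b):
    """Subjective length of the period from age a to age b (a > 0):
    the integral of 1/x from a to b, i.e., ln(b) - ln(a)."""
    return math.log(b) - math.log(a)

# Fraction of an 80-year subjective life span (clock starting at age 4)
# already lived by age 39:
print(subjective_duration(4, 39) / subjective_duration(4, 80))    # ~0.76

# High school (ages 14-18) vs. college (ages 18-22):
print(subjective_duration(14, 18) / subjective_duration(18, 22))  # ~1.25

# Elementary school (ages 5-12) vs. the 42 years from age 30 to 72:
print(subjective_duration(5, 12), subjective_duration(30, 72))    # both ~0.88
```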

Monday, July 23, 2007

If you want my opinion … (by guest blogger Keith Frankish)

I'd like to thank Eric for inviting me to guest on The Splintered Mind. Blogging gives one a chance to express one's opinions, so I thought I’d begin by saying something about opinions.

When we talk of opinions I think we often have in mind states of the kind to which Daniel Dennett applies the term. An opinion in this sense is a reflective personal commitment to the truth of a sentence (see especially ch.16 of Brainstorms). Dennett suggests that we can actively form opinions and that we are often prompted to do so by social pressures. The need to give an opinion frequently forces us to create one – to foreclose on deliberation, find linguistic expression for an inchoate thought, and make a clear-cut doxastic commitment. This, Dennett suggests, is what we call making up our minds.

But what is the point of having opinions? Non-human animals get on well enough without them, and much of our behaviour seems to be guided without the involvement of these reflective, language-involving states. Dennett himself makes a sharp distinction between opinion and belief, and maintains that it is our beliefs and desires that directly predict our nonverbal actions, whereas our opinions manifest themselves only in what we say.

I disagree with Dennett here. I think that opinions can play a central role in conscious reasoning and decision-making. They can do so, I have argued, in virtue of our (usually non-conscious) higher-order attitudes towards them (see here for an early stab at the argument and here for the developed version). However, it’s undeniable that many of our opinions do not have much effect on how we conduct our daily lives. Many simply aren’t relevant. Few of us are deeply enough involved in politics for our political opinions to have a significant impact on our nonverbal behaviour. Moreover, opinions have drawbacks. They are hard to form. It’s not easy to arrive at a coherent set of opinions which one is prepared to commit to and defend in argument. They can be dangerously imprecise. People are all too ready to endorse blanket generalizations and sweeping moral prescriptions. And they can be inflexible. We sometimes hang on to our opinions beyond the point where a wiser person would revise or abandon them, and end up falling into dogmatism or self-delusion. (Someone once said of the British politician Enoch Powell that he had the finest mind in Parliament until he made it up.)

The wise course, it seems, would be to keep an open mind as far as possible, and then commit oneself only to qualified views, which one is always ready to reconsider. Why, then, are people so keen to form strong opinions and to broadcast them to others (a keenness very evident in the blogosphere)? The question is one for social psychologists, but I'll speculate a bit. One factor is probably security. It's a complicated world and doubt is unsettling, so it's comforting to have clear, well-entrenched opinions. A unified package of opinions can also serve as a badge of tribal loyalty, identifying one as a member of a particular party or sect and so fostering a sense of comradeship and belonging. Another factor, I suspect, is prestige: a set of clear, firmly held opinions is impressive, suggesting that one is knowledgeable, tough-minded, and decisive.

These benefits aren't negligible, but I doubt they outweigh the risks, and it might be better if we were all more cautious in our opinions. I'm not recommending quietism; it's often important to take a stand. But I think we should resist the pressures to form quick and easy opinions, and, in particular, that we should resist the pressure to choose them from the predefined packages offered to us by professional politicians and 'opinion formers'. Referring to opinion polls, Spike Milligan once said that one day the 'Don't knows' would get in, and then where would we be? Well, perhaps we'd be a bit better off, actually.

Friday, July 20, 2007

Making Sense of Dennett's Views on Introspection

Dan Dennett and I have something in common: We both say that people often go grossly wrong about even their own ongoing conscious experience (for my view, see here). Of course Dennett is one of the world's most eminent philosophers and I'm, well, not. But another difference is this: Dennett also often says (as I don't) that subjects can no more go wrong about their experience than a fiction writer can go wrong about his fictions (e.g., 1991, pp. 81, 94) and that their reports about their experience are "incorrigible" in the sense that no one could ever be justified in believing them mistaken (e.g., 2002, pp. 13-14).

But how can it be the case both that we often go grossly wrong in reporting our own experience and that we have nearly infallible authority about it? I recently published an essay articulating my puzzlement over this point (see also this earlier post), to which Dennett graciously replied (see pp. 253ff and 263ff here). Dennett's reply continued to puzzle me -- it didn't seem to me to address the basic inconsistency between saying that we are often wrong about our experience and saying that we are rarely wrong about it -- so I had a good long chat with him about it at the ASSC meeting in June.

I think I've finally settled on a view that makes sense of much (I don't think quite all) of what Dennett says on the topic, and which also is a view I can agree with. So I emailed him to see what he thought, and he endorsed my interpretation. (However, I don't really want to hold him to that, since he might change his mind with further reflection!)

The key idea is that there are two sorts of "seemings" in introspective reports about experience, which Dennett doesn't clearly distinguish in his work. The first sense corresponds to our judgments about our experience, and the second to what's in the stream of experience behind those judgments. Over the first sort of "seeming" we have almost unchallengeable authority; over the second sort we have no special authority at all. Interpretations of Dennett that ascribe to him the view that there are no facts about experience beyond what we're inclined to judge about our experience emphasize the first sense and disregard the second. Interpretations that treat Dennett as a simple skeptic about introspective reports emphasize the second sense and ignore the first. Both miss something important in his view.

Let me clarify this two-layer view with an example. People will often say about their visual experience that, at any particular instant, everything near the center has clearly defined shape, and that the periphery, where clarity starts to fade, begins fairly far out from the center -- say about 30 degrees. Both the falsity of this view and people's implicit commitment to it can be revealed by a simple experiment suggested by Dennett: Take a playing card from a deck and hold it at arm's length off to the side. Keeping your eyes focused straight ahead, slowly rotate the card toward the center of your visual field, noting how close you need to bring it to determine its suit, color, and value. Most people are amazed at how close they have to bring it before they can see it clearly! (If a card is not handy, you can get similar results with a book cover.) Although this isn't the place for the full story, I believe the evidence suggests that visual experience is not, as most people seem to think, a fairly stable field flush with detail, hazy only at the periphery, but rather a fairly fuzzy field with a rapidly moving and very narrow focal center. We don't notice this fact because our attention is almost always at the focal center. (See section vi of this essay.)

Now when people say, "Everything is simultaneously clear and precisely defined in my visual field, except at the far periphery," there's a sense in which they are accurately expressing how things seem to them -- a sense in which, if they are sincere, they are inevitably right about their experience of things -- that's how things seem to be, to them! -- and also a sense in which they are quite wrong about their visual experience. When Dennett attributes to subjects authority and incorrigibility about their experience, we should interpret him as meaning that they have authority and incorrigibility over how things seem to them in the first sense. When he says that people often get it wrong about their experience, we should interpret him as saying that they often err about their stream of experience in the second sense.

Dennett's view on these matters is complicated somewhat by his discussion of metaphor in his response to me, because metaphor itself seems to straddle the authoritative (it's my metaphor, so it means just what I intend it to mean) and the fallible (metaphors can be objectively more or less apt), but this post is already overlong....

Update, February 28, 2012:
As time passes, I find myself less convinced that Dennett should endorse this interpretation of his view. Unfortunately, however, I can't yet swap in a better interpretation.

Thursday, July 19, 2007

The Generosity of Philosophy Students

At the University of Zurich, when students register for classes, they have the option of donating to charities supporting needy students and foreign students. Bruno Frey and Stephan Meier found, in 2005, that economics students were a bit less likely to donate to the charities than other students (62% of economics students vs. 69% of others gave to at least one charity). However, the effect seemed to be more a matter of selection than training: Economics majors were less charitable than their peers from the very beginning of their freshman year. Thus, they were not made less charitable, Frey and Meier argue, by their training in economic theory.

How about philosophy students? Could the ethical component of philosophical education have any effect on rates of charitable giving? This relates to my general interest in whether ethicists behave any better, morally, than non-ethicists.

Frey and Meier kindly sent me their raw data, expanded with several new semesters not reported in the 2005 essay. Here are some preliminary analyses. I looked only at undergraduates no more than 30 years old. In total, there were 164,550 registered student semesters over the course of 6 years of data.

In any given semester, 72.0% of students gave to at least one charity. Majors with particularly high or low rates of giving and at least 1000 registered semesters were:

Below 65%
Teacher training in math & natural sciences: 54.8%
Business economics: 58.7%
Italian studies: 61.4%
Teacher training in humanities & social sciences: 62.9%

Over 80%
Sociology: 81.3%
Ethnology: 82.7%
Philosophy: 83.6%
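
(For concreteness, the tabulation behind these lists amounts to something like the following sketch -- assuming a hypothetical file with one row per registered student semester and columns "major", "age", and "gave"; the file and column names are my inventions, not Frey & Meier's actual variables.)

```python
# Minimal sketch of the per-major giving rates reported above, assuming a
# hypothetical CSV with one row per registered student semester and columns
# "major", "age" (in years), and "gave" (1 if the student gave to at least
# one of the two charities that semester). All names here are illustrative.
import pandas as pd

df = pd.read_csv("zurich_semesters.csv")      # hypothetical data file
df = df[df["age"] <= 30]                      # undergraduates age 30 or under

rates = df.groupby("major")["gave"].agg(rate="mean", n="size")
big = rates[rates["n"] >= 1000]               # majors with >= 1000 semesters
extremes = big[(big["rate"] < 0.65) | (big["rate"] > 0.80)]
print(extremes.sort_values("rate"))           # the extremes listed above
```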

Among large majors, philosophy students were the most generous! Does this bode well for the morally salutary effects of studying philosophy?

Unfortunately, as in the original Frey & Meier study, a look at the time-course of the charitable giving undermines the impression of an indoctrination or training effect.

Percentage of Philosophy majors giving to at least one charity, by year:
1st year of study: 85.4% (of 411 student semesters)
2nd year: 86.9% (of 289)
3rd year: 85.2% (of 250)
4th year: 85.2% (of 236)
5th year: 82.5% (of 171)
6th year: 83.1% (of 136)
7th year: 81.3% (of 107)
8th year or more: 73.2% (of 183)

It seems that studying philosophy is not making students more charitable. If anything, there is a decrease in contributions over time.

There is also a decrease among non-philosophers, from 75.4% in Year 1 to 66.0% in Year 7 and 61.0% in Year 8+. This looks like a sharper rate of decrease, but the difference in the decrease may not be statistically significant, given the small numbers of advanced philosophy students and the non-independence of the trials. Looking at individual students (under age 40) for whom there are at least 7 semesters of data, philosophy majors are just as likely to increase (37.4%) or decrease (27.2%) their rates of giving as are an age-matched sample of non-philosophy majors (42.4% up, 30.2% down).
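
(To give a feel for how little these subgroup sizes can show, here's a naive two-proportion z-test on the philosophers' own drop from Year 1 to Year 7 -- naive because it treats semesters as independent, which, as just noted, they aren't.)

```python
# Naive two-proportion z-test: philosophers giving in Year 1 (85.4% of 411
# semesters) vs. Year 7 (81.3% of 107 semesters), figures from the table above.
# Caveat: semesters from the same student are not independent, so even this
# already-insignificant result overstates the evidence.
from math import sqrt

p1, n1 = 0.854, 411
p2, n2 = 0.813, 107
pooled = (p1 * n1 + p2 * n2) / (n1 + n2)      # pooled giving proportion
z = (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
print(f"z = {z:.2f}")                         # about 1.0, well short of 1.96
```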

(Oddly, although overall rates of giving are lower among more advanced students, more students increase their rates of giving over time than decrease their rates of giving. These facts can be (depressingly) reconciled if students who donate to charity are less likely than students who don't donate to continue in their studies.)

Why are Zurich philosophy students more likely to donate to these charities than students of other majors? Does philosophy attract charitable people? I'm not ready yet to draw that conclusion: It could be something as simple as higher socio-economic status among philosophy majors. They might simply have more money to give. (Impressionistically, in the U.S., philosophy seems to draw wealthier students; students from lower income families tend, on average, to be drawn to more "practical" majors.)

Monday, July 16, 2007

Feeling bias in the measurement of happiness (by guest blogger Dan Haybron)

For starters, I want to thank Eric for letting me guest on his blog. This has been a lot of fun, with great comments, and has definitely converted me to the value of blogging! Thanks to all. Now...

Suppose you think of happiness as a matter of a person's emotional condition, or something along those lines. If you don't like to think of happiness that way, then imagine you want to assess the emotional aspects of well-being: how well people are doing in terms of their emotional states. What, exactly, would you look to measure?

An obvious thought is feelings of joy and sadness, but of course there’s more to it than that: cheerfulness, anger, fear, and worry also come to mind, as well as feelings of being stressed out or anxious. So if you’re developing a self-report-based instrument, say, you’ll want to ask people about feelings like these, and doubtless others.

Here’s what Kahneman et al. (2004) use in one of the better measures, the Day Reconstruction Method (DRM): “Positive affect is the average of happy, warm/friendly, enjoying myself. Negative affect is the average of frustrated/annoyed, depressed/blue, hassled/pushed around, angry/hostile, worried/anxious, criticized/put down.” Also measured, but not placed under the positive/negative affect heading, were feelings of impatience, tiredness, and competence. (I’d be inclined to put the former two under negative—detracting from happiness, and the latter under positive—adding to happiness.) Another question asks, “Thinking only about yesterday, what percentage of the time were you: in a bad mood, a little low or irritable, in a mildly pleasant mood, in a very good mood.”
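
(To make the scoring rule concrete, here's a toy version of that averaging -- the item lists come from the passage just quoted, but the episode ratings are invented for illustration.)

```python
# Toy DRM-style affect scoring: positive and negative affect as unweighted
# averages of the item ratings for a single reported episode. The item lists
# are from the passage quoted above; the ratings are made up.
positive_items = ["happy", "warm/friendly", "enjoying myself"]
negative_items = ["frustrated/annoyed", "depressed/blue", "hassled/pushed around",
                  "angry/hostile", "worried/anxious", "criticized/put down"]

episode = {"happy": 4, "warm/friendly": 5, "enjoying myself": 4,
           "frustrated/annoyed": 1, "depressed/blue": 0, "hassled/pushed around": 0,
           "angry/hostile": 0, "worried/anxious": 2, "criticized/put down": 0}

positive_affect = sum(episode[i] for i in positive_items) / len(positive_items)
negative_affect = sum(episode[i] for i in negative_items) / len(negative_items)
print(positive_affect, negative_affect)   # 4.33... and 0.5
```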

I think these are reasonable questions, but doubtless they can be improved. First, are these the right feelings to ask about? Second, should each of these feelings get the same weight, as the averaging method assumes? But third, should we only be looking at feelings?

Exercise: think about the most clearly, indisputably happy people you know. (Hopefully someone comes to mind!) Good measures of happiness should pick those individuals out, and for the right reasons. So what are the most salient facts about their emotional conditions? How do you know they are happy? Did you guess the integral of the feelings listed above over time? I doubt it! In my case, the first thing that comes to mind is not feelings at all, but a palpable confidence, centeredness, or settledness of stance. (BTW, the most blatantly cheerful people I know don’t strike me as very happy at all; their good cheer seems a way of compensating for a basically unsettled psyche.) For the people I’m thinking of, I’m guessing they’re happy because of what seems to me to be their basic psychic orientation, disposition, or stance. They are utterly at home in their skin, and their lives.

If this is even part of the story, then affect measures like those above appear to exhibit a "feeling bias," putting too much weight on feeling episodes rather than matters of basic psychic orientation. How to fix? I don't know, but one possibility is to use "mood induction" techniques, e.g. subjecting people to computer crashes and seeing how they respond. A happy person shouldn't easily fly into a rage. But this won't work well for some large surveys. And what about occurrent states like a constant, low-level stress that doesn't quite amount to a "feeling" of being stressed, or at least not enough to turn up in reports of feeling episodes, yet which may have a large impact on well-being? And how do you tell if someone is truly centered emotionally?

I believe that in the psychoanalytic tradition little stock is put in sums of occurrent feelings, much less reports of those feelings, since so much of unhappiness (and by extension happiness) in their view is a matter of the unconscious -- deep-down stuff that comes through only indirectly, in dreams, reactions to situations, etc. I think this is roughly right. But how do we measure that?

Basically, my question is, how should the sorts of affect measures used in the DRM be changed or supplemented to better assess happiness, or the quality of people’s emotional conditions?

Friday, July 13, 2007

Checkerboards and Honeycombs in the Sun

In 1819, the eminent physiologist Johann Purkinje drew the following picture of what he saw when he closed his eyes and faced toward the sun:



Purkinje said that most individuals with whom he tried this experiment report seeing such figures, especially the little squares. (For a fuller translation of this and surrounding passages, see here.)

When I face the sun with eyes closed it doesn't seem to me that I see checkerboard or honeycomb shapes. Rather, I'd say, my visual field is broadly and diffusely orange or light gray (slowly shifting between these two colors) -- and brighter, generally, in the direction of the sun. Sometimes it briefly becomes a vivid scarlet. Others I've asked to close their eyes and look at the sun also generally don't report Purkinje-like experiences (although one person on one occasion -- out of several occasions -- reported something like a honeycomb latticework).

So I'm curious: Was Purkinje simply mistaken? Did he have unusual experiences, accurately reported for his own part, and then subtly pressure his subjects into erroneously reporting similar things? Could this be the kind of experience that varies culturally? I'd be interested to hear if any of you experience checkerboards or latticeworks.

Thursday, July 12, 2007

The Social Biophilia Hypothesis (by guest blogger Dan Haybron)

Two posts back I suggested that people may have evolved with psychological needs for which they lack corresponding desires, or at least strong enough desires given the significance of the needs. For certain needs may have been met automatically in the environment in which we evolved, so that there wouldn’t be any point in having desires for them. Today I want to suggest a possible example of this: a need for close engagement with the natural environment.

Biologist E.O. Wilson and others have defended the “biophilia” hypothesis, according to which human beings evolved with an innate affinity for nature. They have noted a variety of results pointing to the measurable benefits of exposure to natural scenes, wilderness, etc. (E.g., hospital patients with a view of trees and the like tend to have better outcomes.) To be honest I have not read this literature extensively, but the root idea strikes me as very plausible.

Indeed, I suspect that human beings have a basic psychological need for engagement with natural environments, so that their well-being (in particular, their happiness) is substantially diminished insofar as they are removed from such environments. And yet we don’t perceive an overwhelming desire for it, because the need was automatically fulfilled for our ancestors.

I can't offer much argument here, but one reason to believe all this is that dealing with wilderness places intense cognitive demands on us, presenting us with an extremely rich perceptual environment that requires a high degree of attentiveness and discernment. (I don't mean enjoying a hike in the woods, perceived as a pleasant but indiscriminate blur of greens, browns, and grays -- I mean *knowing* the woods intimately, because the success of your daily activities depends on it.) The selection pressures on our hunter-gatherer ancestors to excel in meeting these demands must have been intense, and I think this is one of the things we are indeed really good at. Moreover, it is plausible that we really enjoy exercising these capacities (recall Rawls' "Aristotelian Principle"). Insofar as we fail to exercise these capacities, we may be deprived of one of the chief sources of human happiness (see Michael Pollan's excellent "The Modern Hunter-Gatherer"). I suspect that most artificial environments (think suburbia) are too simple and predictable, leaving these capacities mostly idled, and us bored. (Perhaps many people love cities precisely because they come closer to simulating the richness of nature.)

At the same time, we are obviously social creatures, most of whom have a deep need to live in community with others. Living alone in the forest is not a good plan for most of us. Distinguish two types of community: "land communities," where daily life typically involves a close engagement with the natural environment; and "pavement communities," where it does not. Virtually all of us now live in pavement communities.

Here’s a wild conjecture: human flourishing is best served in the context of a land community. Indeed, only in such a community can our basic psychological needs be met. Call this the “social biophilia hypothesis.” Plausible?

I suppose this will seem crazy to most readers, and maybe it is. For one thing, there is a conspicuous paucity of discussion of such ideas in the psychological literature. Why isn’t there more evidence for this hypothesis in the literature? I would suggest there are two reasons. First, current measures of happiness may be inadequate, e.g. focusing too little on stress and other states where we would expect to find the biggest differential. Second, psychologists basically don’t *study* people in land communities. Almost all the big studies of subjective well-being, the heritability studies, etc., focus on populations living in pavement communities. And there is virtually no work comparing the well-being of people closely engaged with nature and those who are not (but see Biswas-Diener et al. 2005). If the social biophilia hypothesis is true, then this would be a bit like studying human well-being using only hermits as subjects. (“Zounds, they’re all the same! Happiness must be mainly in the genes.”) The question is, how can we study the effects on well-being of living close to nature while controlling for other differences between people who do so and people living in pavement communities?

Monday, July 09, 2007

Big Things and Small Things in Morality

Hegel wrote that a great man's butler never thinks him great -- not, Hegel says, because the great man isn't great, but because the butler is a butler.

I don't really want to venture into the dark waters of Hegel interpretation, but the remark (besides being insulting to butlers and perhaps convenient for Hegel's self-image) suggests to me the following thought: Being good in small ways or accomplished in petty things -- in the kind of things a butler sees -- is unrelated, or maybe even negatively related, to being truly great. Einstein might not seem a genius to the man who handles his dry cleaning.

Does this apply to moral goodness or greatness? Is being good in small things -- civility with the cashier, not leaving one's coffee cup behind in the lecture hall -- much related to the big moral things, such as caring properly for one's children or doing good rather than harm to the world in one's chosen profession? Is it related to moral greatness of the sort seen in heroic rescuers of Jews during the Holocaust, such as Raoul Wallenberg, or moral visionaries such as Gandhi or Martin Luther King?

As far as I know, the question has not been systematically studied (although situationists might predict weak relationships among moral traits in general). Indeed, it's a somewhat daunting prospect, empirically. Although measuring small things like the return of library books is easy, it's hard to get an accurate measure of broader moral life. People may have views about the daily character of King and Gandhi, but such views are almost inevitably distorted by politics, or by idolatry, or by the pleasure of bringing down a hero, so that it's hard to know what to make of them.

The issue troubles me particularly because of my interest in the moral behavior of ethics professors. Suppose I find (as it's generally looking so far) that on a number of small measures such as the failure to return library books, contribution to charities supporting needy students, etc., that ethicists look no better than the rest of us. How much can I draw from that? Are such little things simply too little to indicate anything of moral importance? (Or maybe, I wonder, is the moral life mostly composed of an accumulation of such little things...?)

Friday, July 06, 2007

Indiscernible misery? (by guest blogger Dan Haybron)

I’m on the road at the moment, so here’s a quick traveler’s post. A couple years back I had the pleasure of flying to California over the holidays with a family suffering from stomach flu. In my case the worst had seemingly passed, yet I was still definitely not feeling well. In fact the flight became excruciatingly unpleasant--one of those times where you keep changing positions and never manage to relieve the feeling for more than a few moments. I wanted to run screaming from the plane.

The thing is: even at times of peak discomfort, when I wanted to jump out of my skin, I could not discern anything in my experience to account for it. When I paused to introspect what I was feeling, I couldn’t make out anything unpleasant--no discernible nausea, nothing. As if I felt fine. Except I didn’t--I felt horrible--even, I think, at those moments. At least, that’s what I recall, and I also recall at least getting some distraction thinking about these things at the time.

Has anyone experienced anything like this? Am I just confused? I don’t think the overall unpleasantness of the experience was simply a matter of my intense desire to be rid of it--rather, it seemed the desire was a result of the unpleasantness...

Wednesday, July 04, 2007

Seeing Through Your Eyelids -- Spreading Motion

When I close my eyes and wave my hand before my face, I seem to see motion. I think this isn't just the caver illusion (the sense people sometimes have, in complete darkness, that they can see their hands move), because the effect seems much stronger when I face toward a light source, and I can see a friend's hand in the same way. In some sense, I am seeing through my eyelids. This shouldn't be too surprising: Most people report being able to see the sun through their eyelids. Such a thin band of flesh is easily penetrated by light. I discussed this stuff a bit in a May post.

Although I was pretty confused in my May post, I'm finding more consistency now with directional and occlusion effects. If I move my hand slowly from one side to the other, I can locate the position of the movement as to the right or the left. If I face a bright light source and move my head, I can track the rough direction of the source. If I raise an occluding object between my face and my moving hand -- a newspaper, say, held eight inches before my face -- the impression of movement is much lessened. (Any sense of motion that remains might really be just the caver illusion.)

The oddest effect is when I slightly lower the occluding object, so that the tips of my fingers are not occluded, but the rest of my hand and arm is. Once again I have a vivid experience of motion -- but not as though located just at the top of the visual field. The motion seems to spread down the field, almost to the bottom, as though the newspaper were entirely removed, but somewhat less vivid. In fact, it seems to me that the primary effect of moving the newspaper up and down is increasing and decreasing the vividness of the sense of motion. The change in the visual extent of the motion experience appears relatively minor.

As far as I'm aware, this spreading of perceived motion when the eyes are closed has never been remarked on in the perception and consciousness literature. I wonder if others experience the same thing...?

Tuesday, July 03, 2007

Germs, dirt, and relationships: why people may not want what they need (by guest blogger Dan Haybron)

It is widely thought that happiness depends on getting what you want. Indeed, the switch in economics from happiness to preference satisfaction as the standard of utility was originally based on the idea that the latter is a good proxy for the former: happiness is a function of the extent to which you get what you want. Even if you don’t believe that, you might accept this weaker claim: basic human needs will normally be accompanied by desires for goods that tend to satisfy those needs; and the strength of those desires will reflect the importance of the needs. Thus human psychological needs will be reflected in people’s motives. Call this the Needs-Motivation Congruency Thesis (NMCT). Hunger would be a typical example: we strongly desire food because we strongly need food (not just for happiness, of course).

I see no reason to believe that this is true. Among other things, there’s an in-principle reason we should not expect the NMCT to hold: common human motivational tendencies will largely reflect the needs of our evolutionary ancestors. We want food because such a desire contributed to inclusive fitness: if you didn’t have that desire, your genes didn’t go very far. But here’s another physiological need humans apparently have: we seem to need early exposure to germs and dirt. Without it, we develop various allergies and immune deficiencies. Yet most people don’t have a particular attraction to germs and dirt (as such!). If anything, it’s the reverse. Why? Because such a desire would have done nothing for inclusive fitness when humans evolved: you couldn’t avoid encounters with lots of germs and dirt. If anything, it would have been adaptive to limit exposure to such things. So we need a dirty childhood, but don’t want one; kids are happy to sit in an antiseptic environment playing video games all day, puffing on albuterol inhalers.

The same thing may happen with happiness: we may need certain things for happiness but either have no particular desire for them, or our desire for them is weak compared to the need. Relationships may be an example. Good relationships are the strongest known source of happiness, and are clearly a deep psychological need for human beings. Now normal people do, clearly, desire social relationships. Yet many if not most of us choose to live in ways that compromise our relationships, often to the net detriment of our happiness. E.g., people often choose lucrative jobs at the expense of time with friends and family. It is easy to see how a strong desire for wealth and status might have been adaptive for early humans, whereas we probably didn’t need proportionately strong desires for friendship and family: you got those automatically. So our desire for wealth and status trumps our weaker desire for something more important: good relationships.

Next up: biophilia as another possible counterexample to the NMCT.

Friday, June 29, 2007

"Humbling" Experiences?

I'm packing up now for a vacation until the 17th (to see relatives and friends in Maryland and Florida). I'll try to keep posting (and Dan Haybron is still mid-run as a guest blogger), but since I don't have access to my books and articles, things will be a little less formal.

Informally, then: If ever I receive a major award, then maybe I'll know what's on people's minds when they call such awards "humbling" (Google humbling award for some examples). On the face of it, receiving awards seems generally to be the opposite of humbling. Nobel Prize and Academy Award recipients aren't, as a class, the humblest of folks. Nor do winners of lesser awards (various academic prizes, for example) seem generally to be made more humble by the experience. (A friend of mine went on a blind date with a winner of a MacArthur Fellowship. He handed her his business card, with "certified genius" embossed on it! Unfortunately, she forgot to ask for his autograph.)

Let's assume, then, that -- unlike truly humbling experiences -- winning awards doesn't make one humble. Yet the phrase is so common, I suspect there's something to it. Momentarily, at least, one can feel humbled by an award.

Here's my thought: If I receive an award that puts me in elevated company or that represents a very high appraisal of me by a group I respect, there may be a mismatch between my self-conception and the conception that others seem to have of me. My sense that I don't quite deserve to belong may be experienced as something like humility. However, ordinarily that feeling will pass. I'll adjust my self-conception upward; I'll slowly start to think of myself in terms of the award (since I'm so impressed by it!); it will be hard for me not to think less of those who haven't reached such heights.

Perversely, then, it may be exactly those people who are inclined to think of an award as "humbling" who are made less humble by attaining it. Those who were already arrogant will be unchanged -- they knew they deserved the award all along, and it's about time! And the type of person who is deeply, intrinsically humble (if there are any such people) may not be sufficiently inclined to see the award as a legitimate mark of comparison between oneself and other folks to have any striking experience of humility -- any "wow, me?!" -- in the face of it.

Wednesday, June 27, 2007

Why life satisfaction is (and isn’t) worth measuring (by guest blogger Dan Haybron)

A lot of people think of happiness in terms of life satisfaction, and take life satisfaction measures to tell us about how happy people are. There is something to this. But no one ever said “I just want my kids to be satisfied with their lives,” and for good reason: life satisfaction is very easy to come by. To be satisfied with your life, you don’t even have to see it as a good life: it just has to be good enough, and what counts as good enough can be pretty modest. If you assess life satisfaction in Tiny Timsylvania, where everyone is crippled and mildly depressed but likes to count their blessings, you may find very high levels of life satisfaction. This may even be reasonable on their part: your life may stink, but so does everyone’s, so be grateful for what you’ve got. Things could be a lot worse.

Many people would find it odd to call the folks of Tiny Timsylvania happy. At least, you would be surprised to pick up the paper and read about a study claiming that the depressed residents of that world are happy. If that’s happiness, who needs it? For this and other reasons, I think that life satisfaction does not have the sort of value we normally think happiness has, and that researchers should avoid couching life satisfaction studies as findings on “happiness.” To do so is misleading about their significance.

So are life satisfaction measures pointless? No: we might still regard them as useful measures of how well people’s lives are going relative to their priorities. Even if they don’t tell you whether people’s lives are going well, for reasons just noted, they might still tell you who’s doing better and worse on this count: namely, if people whose lives are going better by their standards tend to report higher life satisfaction than those whose lives are going worse. This might well be the case, even in Tiny Timsylvania. (Though caution may be in order when comparing life satisfaction between that nation and Archie Bunkerton, where people like to kvetch no matter how well things are going.)

This kind of measure may be important, either because we think well-being includes success relative to your priorities, or because respect for persons requires considering their opinions about their lives when making decisions on their behalf. The government of Wittgensteinia, populated entirely by dysthymic philosophers who don’t mind being melancholy as long as they get to do philosophy, should take into account the fact that its citizens are satisfied with their lives, even if they aren’t happy.

Note that the present rationale for life satisfaction as a social indicator takes it to be potentially important, but not as a mental state good. Rather, it matters as an indicator of conditions in people’s lives. Concern for life satisfaction is not, primarily, concern about people’s mental states. So rejecting mentalistic views of well-being is no reason for skepticism about life satisfaction.

Monday, June 25, 2007

Are Babies More Conscious than Adults?

Philosophers and doctors used to dispute (sometimes still do dispute) whether babies are conscious or merely (as Alison Gopnik puts it in her criticism of the view) "crying carrots". This view went so far that doctors often used to think it unnecessary to give anaesthesia to infants. Infants are still, I think, not as conscientiously anaesthetized as adults.

Gopnik argues that babies are not only conscious, they are more conscious than adults. Her argument for this view begins with the idea that people in general -- adults, that is -- have more conscious experience of what they attend to than of what they disregard. We have either no experience, or limited experience, of the hum of the refrigerator in the background or the feeling of the shoes on our feet, until we stop to think about it. In contrast, when we expertly and automatically do something routine (such as driving to work on the usual route) we are often barely conscious at all, it seems. (I think the issue is complex, though.)

When we attend to something, the brain regions involved exhibit more cholinergic activity and become more plastic and open to new information. We learn more and lay down new memories. What we don't attend to, we often hardly learn about at all.

Baby brains, Gopnik says, exhibit a much broader plasticity than adults' and have a general neurochemistry similar to that involved in adult attention. Babies learn more quickly than we do, and about more things, and pick up more incidental knowledge outside a narrow band of attention. Gopnik suggests that we think of attention, in adults, as something like a mechanism that renders part of our mature and slow-changing brains, for a brief period, flexible, quick-learning, and plastic -- baby-like -- while suppressing change in the rest of the brain.

So what is it like to be a baby? According to Gopnik, it's something like attending to everything at once: There's much less of the reflexive and ignored, the non-conscious, the automatic and expert. She suggests that the closest approximation adults typically get to baby-like experience is when they are in completely novel environments, such as very different cultures, where everything is new. In four days in New Guinea we might have more consciousness and lay down more memories than in four months at home. Also, she suggests, it may be something like certain forms of meditation -- those that involve dissolving one's attentional focus and becoming aware of everything at once. In such states, consciousness becomes not like a spotlight focused on one or a few objects of attention, with all else dark, but more like a lantern, shining its light on many things at once.

Now isn't that a nifty little thought?