Thursday, May 28, 2009

A Landmark

... The Splintered Mind's 250,000th visitor since its founding on April 27, 2006.

When Your Eyes Are Closed, What Do You See?

I've drafted the eighth and final chapter of my book in progress (working title Perplexities of Consciousness). Chapters 4 and 7 are not yet in circulatable form. Like the other chapters, this chapter is written to be comprehensible in isolation. As usual, I welcome feedback, either by email or as comments on this post. Unlike the other seven chapters, this chapter is not based on a previously published article.

Here's an abstract:

This chapter raises a number of questions, not adequately addressed by any researcher to date, about what we see when our eyes are closed. In the historical literature, the question most frequently discussed was what we see when our eyes are closed in the dark (and so entirely or almost entirely deprived of light). In 1819, Purkinje, who was the first to write extensively about this, says he sees "wandering cloudy stripes" that shrink slowly toward the center of the field. Later authors also say such stripes are commonly seen, but they differ about their characteristics. In 1897, for example, Scripture describes them as spreading violet rings. After Scripture, the cloudy stripes disappear from psychologists' reports. Other psychologists describe the darkened visual field as typically -- not just idiosyncratically, for themselves -- very nearly black (e.g., Fechner), mostly neutral gray (e.g., Hering), or bursting with color and shape (e.g., Ladd). I loaned beepers to five subjects and collected their reports about randomly sampled moments of experience with their eyes closed. Their reports were highly variable, and one subject denied ever having any visual experience at all (not even of blackness or grayness) in any of his samples. I also briefly discuss a few other issues: whether we can see through our eyelids, whether the closed-eye visual field is "cyclopean", whether the field is flat or has depth or distance, and whether we can control it directly by acts of will. The resolution of such questions, I suggest, will not be straightforward.

Thanks very much, by the way, to all the people who wrote about their eyes-closed visual experience in response to my queries in earlier posts.

Tuesday, May 26, 2009

On Debunking III: A Surprising Concession from JJC Smart

(by guest blogger Tamler Sommers)

In my previous post, I suggested that recent attempts at “selective debunking” in metaethics—explaining away non-consequentialist intuitions while leaving the consequentialist ones intact—have been unsuccessful. The debunking either works for both sets of intuitions or it doesn’t work at all. In this post, I look for support from a surprising source: Mr. “Embrace the Reductio” himself, JJC Smart.

The classic debate about utilitarian approaches to justice—as we’re taught in textbooks—looks like this. The utilitarian argues that retributive approaches to punishment are incoherent and that punishing criminals is only justified when society as a whole benefits. The retributivist then mounts a reductio ad absurdum argument, claiming that the utilitarian approach could make it just to punish an innocent person (e.g., the magistrate and the mob case). The utilitarian now has two choices: (1) claim (implausibly, in my view) that in real life it could never benefit society to punish an innocent person, or (2) embrace the reductio: claim that in those rare cases in which society benefits from punishing the innocent, it is morally right to do so. JJC Smart is associated with the latter response. When confronted by the fact that the common moral consciousness rebels against this conclusion, Smart famously replies “so much the worse for the common moral consciousness.” (p. 68 in Utilitarianism: For and Against)

Smart goes on to say that he is inclined to reject the common methodology of testing general principles by seeing if they match our feelings about particular cases. Why? Smart writes: “it is undeniable that we have anti-utilitarian feelings in particular cases but perhaps they should be discounted as far as possible as due to our moral conditioning in childhood.” (68)

What I’ve never seen reproduced in books that lay out this dialectic are Smart’s parenthetic remarks that immediately follow:
“(The weakness of this line of thought is that the approval of the general moral principle of utilitarianism may be due to moral conditioning too. And even if benevolence was in some way a ‘natural,’ not an ‘artificial,’ attitude, this consideration could at best have persuasive force without any clear rationale. To argue from the naturalness of the attitude to its correctness is to commit the naturalistic fallacy.)”

This critique of his own strategy parallels my comments on Greene and Singer. The strategy one chooses for debunking anti-utilitarian feelings, if it works at all, seems to apply to utilitarian feelings as well. Selective debunking is treacherous business. Smart’s position here is more subtle and complex than it sometimes appears from secondhand reports.

Friday, May 22, 2009

Do Ethicists Eat Less Meat?

At philosophy functions there seems to be an abundance of vegetarians or semi-vegetarians, especially among ethicists. In my quest for some measure by which ethicists behave morally better than non-ethicists, this has seemed to me, along with charitable donation, to be among the most likely places to look. (On my history of failure to find evidence in previous research that professional ethicists behave better than anyone else, see here.)

Earlier this year, Joshua Rust and I sent out a survey to three groups of professors: ethicists in philosophy, philosophers not specializing in ethics, and a comparison group of professors in other departments. After a number of prods (verging, I fear, on harassment), we achieved a response rate in the ballpark of 60%, which is pretty good for a survey study given the wide variety of reasons people don't respond. Among our questions were three about vegetarianism.

First we asked a normative question. The prompt was "Please indicate the degree to which the action described is morally good or morally bad by checking one circle on each scale". Nine actions were described, among them "Regularly eating the meat of mammals such as beef or pork". Responses were on the following nine-point scale (laid out horizontally, not vertically as here):
O very morally bad
O
O somewhat morally bad
O
O morally neutral
O
O somewhat morally good
O
O very morally good
We coded the responses from "1" (very morally bad) to "9" (very morally good).

It seems that ethicists are substantially more condemnatory of eating meat (at least beef and pork) than are non-ethicists. Among the 196 ethicists who responded to this question, 59.7% espoused the view that regularly eating the meat of mammals was somewhere on the morally bad end of the scale (that is, 4 or less in our coding scheme). Among the 206 non-ethicist philosophers, 44.7% said eating the meat of mammals is morally bad. Among the 168 comparison professors, only 19.6% said it is morally bad. (All differences are statistically significant.)
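For readers who want to poke at those proportions themselves, here is a minimal sketch of how the pairwise comparisons could be checked. The counts are only reconstructed from the percentages reported above (59.7% of 196, 44.7% of 206, 19.6% of 168), so they're approximate, and the post doesn't say which test we actually used -- the chi-square choice here is an illustrative assumption, not a report of our analysis.

```python
# Illustrative sketch only: pairwise 2x2 chi-square tests on the "morally bad"
# proportions reported above. Counts are reconstructed from the percentages
# (and so approximate); the actual test used is not specified in the post.
from scipy.stats import chi2_contingency

groups = {
    "ethicists": (117, 196),                 # ~59.7% rated meat-eating morally bad
    "non-ethicist philosophers": (92, 206),  # ~44.7%
    "comparison professors": (33, 168),      # ~19.6%
}

names = list(groups)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        (bad1, n1), (bad2, n2) = groups[names[i]], groups[names[j]]
        # 2x2 table: [morally bad, not morally bad] for each group
        table = [[bad1, n1 - bad1], [bad2, n2 - bad2]]
        chi2, p, _, _ = chi2_contingency(table)
        print(f"{names[i]} vs. {names[j]}: chi2 = {chi2:.1f}, p = {p:.4g}")
```

If the reconstruction is roughly right, all three pairwise differences should come out significant, matching the parenthetical claim above.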

We posed two questions about respondents' own behavior. One question was this: "During about how many meals or snacks per week do you eat the meat of mammals such as beef or pork?" On this question, 50 ethicists (25.5%), 40 non-ethicist philosophers (19.4%), and 23 other professors (13.7%) claimed complete abstinence (zero meals per week). (The difference between the ethicists and the comparison professors was statistically significant; the other differences were within the range of chance variation.) Ethicists reported a median rate of 3 meals per week, the other groups median rates of 4 meals per week (a marginal statistical difference vs. the non-ethicist philosophers, a significant difference vs. the comparison professors).

Now, by design, that question was a bit difficult to answer precisely and easy to fudge. We also asked a much more specific question that we thought would be harder to fudge: "Think back on your last evening meal (not including snacks). Did you eat the meat of a mammal during that meal?" We figured that if there was a tendency to fudge or misrepresent on the survey, it would show up as a difference in patterns of response to these two questions; and if there was such a difference in patterns of response, we thought the latter question would probably yield the more accurate picture of actual behavior.

So here are the proportions of respondents who reported eating the meat of a mammal at their last evening meal:
Ethicists: 70/187 (37.4%)
Non-ethicist philosophers: 65/197 (33.0%)
Professors in other departments: 75/165 (45.4%).
There is no statistically detectable difference between the ethicists and either group of non-ethicists. (The difference between the non-ethicist philosophers and the comparison professors ranged from significant to marginal, depending on the test.)
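Here is a similar sketch using the exact last-meal counts listed above. Fisher's exact test is just one reasonable choice for 2x2 count data; the post doesn't specify which tests produced the significant-or-marginal result, so the choice here is an assumption for illustration.

```python
# Illustrative sketch only: Fisher's exact test on the last-evening-meal counts
# reported above (70/187, 65/197, 75/165). The choice of test is an assumption;
# the post does not say which tests were actually run.
from scipy.stats import fisher_exact

ate_meat = {
    "ethicists": (70, 187),
    "non-ethicist philosophers": (65, 197),
    "comparison professors": (75, 165),
}

names = list(ate_meat)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        (a1, n1), (a2, n2) = ate_meat[names[i]], ate_meat[names[j]]
        # 2x2 table: [ate mammal meat, did not] for each group
        table = [[a1, n1 - a1], [a2, n2 - a2]]
        odds, p = fisher_exact(table)
        print(f"{names[i]} vs. {names[j]}: odds ratio = {odds:.2f}, p = {p:.3f}")
```

Under these counts the pattern described above should be reproducible, give or take the choice of test.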

Conclusion? Ethicists condemn meat-eating more than the other groups, but actually eat meat at about the same rate. Perhaps also, they're more likely to misrepresent their meat-eating practices (on the meals-per-week question and at philosophy functions) than the other groups.

I don't have anything against ethicists. Really I don't. In fact, my working theory of moral psychology predicted that ethicists would eat less meat, so I'm surprised. But this is how the data are turning out.

Tuesday, May 19, 2009

On Debunking Part Deux: Selective Debunking in Metaethics.

(by guest blogger Tamler Sommers)

In my last post, I explained why an evolutionary account of parental love in no way undermined or debunked my love for my daughter. Now I want to apply some ideas from that post and discussion to a debunking strategy employed by Peter Singer and Josh Greene* in “Ethics and Intuitions” and “The Secret Joke of Kant’s Soul” respectively.

Singer and Greene aim to accomplish two things: first, to debunk our deontological moral intuitions by appealing to an evolutionary account of their origin; and second, to explain why this account doesn’t reveal consequentialist intuitions to be equally misguided. (Balancing these two aims is tricky business, as I’ll try to explain below.)

Their basic line of reasoning is this: evidence from evolutionary biology (and neuroscience, social psychology, etc.) suggests that non-consequentialist intuitions are the product of emotional responses that enabled our hominid ancestors to leave more offspring. Since they were adaptive, we would have these intuitions whether or not they reflected moral truth of some kind. Consequently, we have no reason to trust these intuitions as guides to what we ought morally to do, or to take these intuitions as “data” to be justified by more general normative principles or as starting points in an attempt to reach reflective equilibrium. As Singer writes: “there is little point in constructing a moral theory designed to match considered moral judgments that themselves stem from our evolved responses to situations in which we and our ancestors lived during our period of evolution...” (348)

Here’s the key question. If this evolutionary account successfully debunks our non-consequentialist intuitions, then why doesn’t it debunk consequentialist intuitions as well, leading to moral nihilism? Singer provides one response, but I don’t think it can work. He claims that our consequentialist intuitions are not products of natural selection. They are better described as “rational intuitions.” Why? Well, Singer argues, to take one example, natural selection would not favor treating everyone’s happiness as equal. True, but that is precisely the consequentialist intuition that we don’t have. We believe it’s permissible (or obligatory) to favor our own children’s welfare over the welfare of others. As for the other crucial consequentialist intuition, not wanting people to suffer in general—this likely is a product of our evolved sense of empathy. By Singer’s reasoning, we should be suspicious of that intuition as well.

Josh Greene’s response is different, part of a divide and conquer strategy. He argues that the naturalistic and sentimentalist account undermines, at the very least, rationalist deontologists because it reveals them to be rationalizers. The normative conclusions they claim to be reaching through reason are actually a product of evolved emotional responses. As an analogy, he asks us to imagine a woman named Alice who (unbeknownst to her) has a height fetish and is only attracted to men over 6 foot 4. When she comes back from a date, she defends her view of the man’s attractiveness with claims about his wit, charm, intelligence, or lack thereof. But really it’s all about the height. Her claims are rationalizations of unconscious impulses, Greene argues, just like the theories of rationalist deontologists.

The analogy is interesting and perhaps not altogether favorable for Greene and Singer’s purposes—for consider what this true account of the causes of her taste in men doesn’t do. It doesn’t debunk her taste in men! The account does not show that it’s false that she finds tall men attractive; it just shows that she is attracted to them for different reasons than she originally thought. Is she going to start dating Danny DeVito types now that she’s aware of this? Surely not. And there’s no reason why she should. Similarly, those who, say, believe it permissible to favor one’s children’s welfare over the welfare of strangers can retain this intuition and just abandon the pretense that they’ve arrived at it through reason.

In short, while Greene’s strategy may undermine a certain kind of justification for non-consequentialist intuitions, it doesn’t seem to give us any reason to hold them in less regard than our consequentialist ones. If you agree that Singer has not demonstrated the inherent “rationality” of consequentialist judgments, then it seems the two sets of intuitions are, for the moment, equally justified or unjustified.

I can think of a host of objections here, but the post is long so I’ll stop for now. Comments and eviscerations welcome!

*Greene is officially a moral skeptic but one who only attempts to debunk non-consequentialist judgments and who believes that consequentialism is the most reasonable normative theory to endorse even if it is not, strictly speaking, true. The paper is available on his website.

Sunday, May 17, 2009

Do You Have Constant Tactile Experience of Your Feet in Your Shoes?

Chapter Six of my book in draft (working title Perplexities of Consciousness) is available here. As with the other chapters, I've tried to make it comprehensible without having read the previous chapters. Also as with the other chapters, I would very much value feedback, either by email or as comments on this post.

Abstract:

Do we have a constant, complex flow of conscious experience in many sensory modalities simultaneously? Or is experience limited to one or a few modalities, regions, or objects at a time? Philosophers and psychologists disagree, running the spectrum from saying that experience is radically sparse (e.g., Julian Jaynes) to saying it's radically abundant (e.g., William James). Existing introspective and empirical arguments (including arguments from "inattentional blindness") generally beg the question. I describe the results of an experiment in which I gave subjects beepers to wear during everyday activity. When a beep sounded, they were to note the last conscious experience they were having immediately before the beep. I asked some participants to report any experience they could remember. I asked others to report simply whether they had visual experience or not. Still others I asked if they had tactile experience or not, or visual experience in the far right visual field, or tactile experience in the left foot. Interpreted at face value, the data suggest a moderate view according to which experience broadly outruns attentional focus but does not occur anything like 100% of the time through the whole field of each sensory modality. However, I offer a number of reasons not to take the reports at face value. I suggest that the issue may, in fact, prove utterly intractable. And if so, it may prove impossible to reach justifiable scientific consensus on a theory of consciousness.

Tuesday, May 12, 2009

On Debunking (by guest blogger Tamler Sommers)

Not too long ago, I was one of those people who found the children of my friends annoying and the endless discussions about how fast they were growing unbearable (really, babies grow???!! Amazing!!). Then my daughter Eliza was born and I was smitten from day one. Teaching her to ride a bike, watching Charlie Chaplin movies with her, I feel like I’m in heaven. Now, if an evolutionary biologist comes along and tells me: “yes, but these feelings of ‘love’ are really just a bunch of neurons firing—these feelings have been naturally selected for so that parents would care for offspring long enough for them to pass along their genes,” I’d shrug my shoulders or perhaps ask for more details. But this mechanistic/evolutionary explanation wouldn’t in any way undermine my love for my daughter or debunk my belief that I truly love her. Why? Because I’m a naturalist and never presumed that love wouldn’t have this type of explanation.

However, I know people who don’t feel this way about love—someone named Ashley for example. For Ashley, real love cannot just be neurons firing because it was adaptive for her ancestors to have those neurons firing. Real love must have its source in something completely unrelated to the struggle for survival and reproduction. Naturalistic explanations terrify Ashley precisely because they do undermine her belief that she truly loves her children or partner.

But would/should these explanations debunk her belief that she loves her children? Well, that depends. It certainly seems strange (for Ashley) to think that she loves her son because it was adaptive for her ancestors to love their children. That doesn’t seem like real love. On the other hand, it also seems strange to her, given what she now knows, to say “it’s false that I love my son.” She still adores him, loves to play with him, would kill anyone who tried to harm him. So what, in the end, does/should Ashley think about her belief in the existence of her love—is it (a) false or (b) just in need of revision? The answer seems to depend in large part on which option, upon reflection, seems stranger, more counterintuitive. It also seems to be the case that whatever she chooses will be the result of her personal history, the particular ways in which Ashley acquired the concept of love (as opposed to, say, the way I acquired the concept).

I bring this up because lately I’ve been thinking that we have no agreed-upon method for determining when a belief has been explained and when it has been explained away. The above example makes me think that the success of debunking strategies is (a) tied to our preconceptions about the origins of the belief in question, and (b) indeterminate. In my next post, I’ll give my thoughts about how these considerations relate to specific naturalistic debunking strategies in metaethical debates (by Josh Greene, Richard Joyce, and Peter Singer). But first, I would love to hear others’ thoughts on the criteria for evaluating the success of debunking strategies in general, or debunking strategies in metaethics in particular.

Oh, and for a classic case of debunking (and a look back at one of Bob Barker’s lesser known enterprises) check this out. (Note that Randi would be providing an explanation rather than a debunking explanation if the preconception about what was causing the pages to move were different…)

Sunday, May 10, 2009

Titchener's Introspective Training Manual

... Chapter 5 of my book in draft, Perplexities of Consciousness, is now up on my homepage.

This chapter does not presuppose any particular knowledge of Chapters 1-4 (and Chapter 4 still isn't in circulatable shape, anyway), so if you're curious feel free just to dive in.

Here's a brief abstract:
The unifying theme of this book is people's incompetence in assessing their own stream of experience, even in what would seem to be favorable circumstances. In this chapter I consider the possibility that old-fashioned "introspective training" in Titchener's style might help produce more accurate reports. In particular, I examine Titchener's treatment of auditory "difference tones" heard when a musical interval is played, his treatment of the "flight of colors" in afterimages following exposure to bright white light, and his treatment of subtle visual illusions.
As always, comments welcomed and appreciated, here on this post or by email to my academic address!

Wednesday, May 06, 2009

When Are Introspective Judgments Reliable?

I'm a pessimist about the accuracy of introspective judgments -- even introspective judgments about currently ongoing conscious experience (e.g., visual experience, imagery experience, inner speech; see here). But I'm not an utter skeptic. If you're looking directly at a large, red object in canonical conditions and you judge that you are having a visual experience of redness, I think the odds are very good that you're right. So then the question arises: Under what conditions do I think judgments about conscious experience are reliable?

I think there are two conditions. (In my Fullerton talk last week, I said three, but I'm giving up on one.)

First, I believe our judgments about conscious experience ("phenomenology") are reliable when we can lean upon our knowledge of the outside world to prop them up. For example, in the redness case, I know that I'm looking at a red thing in canonical conditions. I can use this fact to support and confirm my introspective judgment that I'm having a visual experience of redness. Or suppose that I know that my wife just insulted me. This knowledge of an outward fact can help support my introspective judgment that the rising arousal I'm feeling is anger.

Second, I believe that our judgments about phenomenology are reliable when they pertain to features of our phenomenology about which it's important to our survival or functioning to get it right. It's important that we be right about the approximate locations of our pains, so that we can avoid or repair tissue damage. It's important that we recognize hunger as hunger and thirst as thirst. It's important that we know what objects and properties we're perceiving and in what sensory modality. It's also important, I think, that we be able to keep track of the general gist of our thoughts and imagery so that we can develop chains of reasoning over time.

However, about all other aspects of our phenomenology not falling under these two heads we are, I think, highly prone to error. This includes:
* the general structural morphology of our emotions (e.g., the extent to which they are experienced as bodily or located in space, their physiological phenomenology)

* whether our thoughts are in inner speech or are instead experienced in a less articulate way (except when we deliberately form thoughts in inner speech, in which case it is part of the general gist of our thoughts that we're forming the kind of auditory imagery associated with inner speech)

* the exact location of our pains (as you'll know if you've ever tried to locate a toothache for a dentist) and their character (shooting, throbbing, etc.)

* the structural features of our imagery (how richly detailed, whether experienced as in a subjective location such as inside the head, how stable, etc.)

* non-objectual features of sensory experience such as its stability or gappiness or extension beyond the range of attention

* virtually everything about our hunger and thirst apart from their very presence or absence.
About such matters, I think, when we're asked to report, we're prone simply to fly by the seat of our pants and make crap up. If we exude a blustery confidence in doing so, that's simply because we know no one else can prove us wrong.