Thursday, August 27, 2009

Reconstructive Memory (vs. Storage and Retrieval) and "Experience Sampling"

In both cognitive science and folk psychology, the dominant metaphor for memory – a metaphor that both reflects and reinforces a certain way of thinking about it – is the metaphor of storage and retrieval (often with a search in the middle). There’s one particular aspect of this metaphor I want to highlight in this post: On the storage-and-retrieval picture, memory is a process that, once initiated, can and typically should operate largely independently of other cognitive processes. Other processes like inferring, imagining, and perceiving interfere with pure remembering. To the extent those processes influence one’s final judgment about some remembered fact or event, one isn’t really quite remembering it.

This isn’t to say, of course, that on such a model inferring, imagining, or perceiving couldn’t sometimes be helpful. When something is difficult to recall, they might help one recall it, perhaps by giving clues about where to look in one’s memory stores. (If the clue is specific enough, they might even turn a recall task into a recognition task.) They might appropriately increase or decrease one’s confidence in the results of the retrieval process. But if one’s aim is as purely and cleanly as possible simply to remember, there’s something problematic in allowing such processes to play anything but a secondary role. And one might worry that they’re as likely, perhaps more likely, to distort and corrupt the memory as to enable it.

Bartlett (1932), Neisser (1967), and Roediger (1980) have ably described the various infelicities of this storage-and-retrieval picture. When the task is to remember a complex event or a complex passage (as in Bartlett’s seminal research), the core problem with the retrieval metaphor is more evident than when the task is to recall, say, a list of numbers or nonsense syllables. If I tell you a story about a cricket match and ask you to recall it later, you will not reproduce the story verbatim. Nor will you reproduce gappy but verbatim pieces of the story. Rather, you will produce a new version of the story, in light of your general background knowledge of cricket. This half-inventive process is especially revealed by your plausible mistakes and interpolations, but there’s no reason to suppose that it would only be the mistakes and interpolations that show the heavy influence of background knowledge. Someone without that background knowledge, for example, would not do nearly so well at remembering the story overall (even if certain mistakes are more likely). Nor is this simply a matter of a cricket-knowledgeable person encoding the story better in the first hearing and thus “storing” it differently (though no doubt hearing the story knowledgeably is very important to remembering it well later). Knowledge of cricket is also used to construct or reconstruct the story at the time of recall. If, in the intervening time, new knowledge of cricket is acquired, that will affect the reconstruction, probably for the better if the match was real and typical. (In my own case, I have particularly noticed the profound effect of new knowledge on my reconstructive memory of philosophical works I read as an undergraduate.)

Bartlett writes:

Remembering is not the re-excitation of innumerable fixed, lifeless and fragmentary traces. It is an imaginative reconstruction, or construction, built out of the relation of our attitude toward a whole active mass of organized past reactions or experience, and to a little outstanding detail which commonly appears in image or in language form. It is thus hardly ever really exact, even in the most rudimentary cases of rote recapitulation, and it is not at all important that it should be so (1932, p. 213).

From the fact that memory is reconstructive in this way – necessarily reconstructive, at least for complex events – it follows that imagination, inference, the application of pre-existing schemata, and other cognitive processes are not separable from the process of remembering but rather an integral part of it. They are not interfering or aiding forces from which an act of “pure” remembering could be isolated.

Let's apply this to an example, from "experience sampling" -- a topic close to my heart.

An event transpires in your stream of experience – an image of warplanes in flight, say – and then a randomly generated beep occurs, signaling that you are to try your best to recall that moment of experience, which is to say the last undisturbed moment of experience before the sampling beep. Russ Hurlburt (or someone else) will interview you about it later, trying to discover in this way the truth about randomly sampled moments of your everyday, lived experience. (Now that's pretty cool, don't you think?) Okay, so what's going to happen?

First, let's note the obvious: That target event is now gone. Furthermore, there’s no reason to think your brain would have stored a detailed and enduring record of that event as it was ongoing. As change blindness experiments have shown, as well as experiments about the forgetting of mundane everyday details (even details frequently seen like the layout of a penny), we almost instantly forget many, perhaps most, major features of the environment (Sanford 1917/1982; Nickerson and Adams 1979; Rensink, O’Regan, and Clark 1997, 2000; Simons and Levin 1998). You may try to retain that image of warplanes over the duration of the beep and the post-beep reflection, using that retained image as a model for the image as it existed the moment before the beep; but surely it’s plausible to suppose that the image might be transformed, elaborated, rendered artificial in the course of retention, and it may be very difficult to detect such changes reliably, accurately accounting for and subtracting them when reaching judgments about the target experience at the moment of the beep.

Or you may try to recreate the image, if it was momentarily lost, which would appear to invite all the same risks if not more.

Or you may try to recall the image without retaining or recreating it (perhaps purely linguistically?), but this too will be a constructive or reconstructive act, involving (for example) one’s knowledge of warplanes, how one takes them generally to look, knowledge of the outward event that inspired the image (a passage in a book, say), and probably also one’s general opinions about imagery. It will not be the simple retrieval of a recorded trace, in high or low pixelation, but rather elaborative, constructive, and plausibility- and schemata-grounded, like Bartlett’s subjects’ recollections of stories and passages of text.

Then, hours later, you are interviewed, and the reconstructive process begins again, with the target event less fresh, but – perhaps compensatingly – with more available bases for the reconstruction: all the general knowledge (or opinions), schemata, and skills that were available (except literal retention) in the first instance of recollection after the beep; plus also one’s knowledge of, or best recollection of, the judgments and other processes that occurred after the beep; plus one’s written notes; plus cues (maybe subtle) from the interviewer; plus one’s knowledge of the intervening beeps and interviews. From this confluence of forces issues an utterance, “they’re jet planes with a tapered nose and that kind of dark gray steel with a…”, which the interviewer interprets in accord with his own system of schemata and prejudices.

This, I think, is the cognitive process underlying interviews about sampled experiences – both in Hurlburt’s method and in related methods like Petitmengin's. You see, then, why I think there’s plenty of room for error.

Wednesday, August 26, 2009

The Unreliability of Naive Introspection

... Chapter 7 of my book in draft (provisionally titled Perplexities of Consciousness) is now up on my website. The chapter is independently readable -- a slightly revised version of my 2008 article of the same title -- and it's the argumentative core of the book. Comments and feedback more than welcome.

With this posting, a working draft of the entire book (except preface and references) is now available. Over the next couple of months (hopefully not too much longer) I will be tweaking and revising in light of further reflections, further reading, and the comments and criticisms that many people have kindly given.

Here's an abstract of the chapter:

We are prone to gross error, even in favorable circumstances of extended reflection, about our own ongoing conscious experience, our current phenomenology. Even in this apparently privileged domain, our self-knowledge is faulty and untrustworthy. We are not simply fallible at the margins but broadly inept. Examples highlighted in this chapter include: emotional experience (for example, is it entirely bodily; does joy have a common, distinctive phenomenological core?), peripheral vision (how broad and stable is the region of visual clarity?), and the phenomenology of thought (does it have a distinctive phenomenology, beyond just imagery and feelings?). Cartesian skeptical scenarios undermine knowledge of ongoing conscious experience as well as knowledge of the outside world. Infallible judgments about ongoing mental states are simply banal cases of self-fulfillment. Philosophical foundationalism supposing that we infer an external world from secure knowledge of our own consciousness is almost exactly backward.

Tuesday, August 25, 2009

A New Experimental Philosophy Page

with links to many articles in the area, is up here.

Tuesday, August 18, 2009

Philosophers' Honesty in Responding to Questionnaires

Last week, and in various previous posts, I've discussed a questionnaire Josh Rust and I sent to several hundred ethicists and non-ethicist professors (both inside and outside philosophy), soliciting self-reports of their moral attitudes and moral behavior on a variety of issues, such as vegetarianism and voting. Our guiding question: Do ethicists behave any better, or any more in accord with their espoused principles, than do non-ethicists? Based on our analyses so far, it doesn't look like ethicists' behavior is any better.

You might wonder, though -- as I do -- how honestly our survey respondents are answering our questions. Are those who have behaved (at least by their own lights) less than ideally well really going to report that fact, even in an anonymous survey like ours? Maybe ethicists really do behave better than non-ethicists but don't look that way because they respond more honestly. Josh and I tried to get a handle on this, in part, by asking a few questions whose answers we could verify. Respondents' honesty on these questions might help us estimate the honesty of their responses overall. Since honesty, of course, is also a moral behavior, it merits examination in its own right.

We asked one question whose answer we could directly verify for all philosophy professors: whether they were dues-paying members of the American Philosophical Association. (The APA publishes an annual list of members, which includes people up to 10 months late with their dues.) Among the philosopher respondents, 138 non-ethicists and 128 ethicists were listed by the APA as members. Of the remaining 59 non-ethicist respondents -- that is, those not on the APA membership list -- 23 (39.0%) claimed to be members. Of the remaining 61 ethicist respondents, 27 (44.3%) claimed to be members. In other words, nearly half of the respondents with the arguably immoral behavior (free-riding by not belonging to the APA) denied that behavior.

The APA's list is not perfect, I'm sure, and people's memories are sometimes fallible for reasons entirely innocent, but it seems plausible to me that much of the effect here is due to culpable inaccuracies -- even if not deliberate lying, a blameworthy bias toward misremembering and misportraying oneself in a positive light. (More attributable, probably, to purely innocent error, either by the respondents or the APA, are the 4% of respondents -- 7 ethicists and 8 non-ethicists -- who were on the APA's lists but did not claim to be members.)

Of course, it's disputable whether philosophy professors should, morally speaking, belong to the APA. In the attitudinal part of the survey, a majority of philosophers (64.7%) said it was morally good to "regularly [pay] membership dues to support one's main academic disciplinary society (the APA, the MLA, etc., as appropriate)", but that left a substantial minority who said it was morally neutral (very few said it was bad). Non-members who claimed to be members may have been somewhat more likely to say it is morally good to support the APA through one's membership dues than were the non-members who truly stated that they were non-members, but if so, the trend was modest (62.0% vs. 52.9%), and not statistically significant, given our relatively small sample of APA non-members.

So the answer to our question about how accurately philosophers portrayed their negative behavior in our survey appears to be: not very accurately at all. Nor do ethicists seem any more honest; in fact the trend (not statistically significant) was toward less honesty. This also fits with professors' evident exaggeration, in our survey, of their responsiveness to undergraduate emails (with ethicists appearing just as prone to such exaggeration). Josh and I have some other tests of honesty, too, not all analyzed, which I'll discuss later.
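
For readers who want to check the arithmetic: the claimed-membership percentages, and the non-significance of the ethicist/non-ethicist gap, can be reproduced from the counts reported above with a few lines of Python. This is a back-of-the-envelope sketch; the hand-rolled chi-square (with Yates continuity correction, one degree of freedom) approximates what a standard statistics package reports for a 2x2 table and may differ slightly from the exact analysis Josh and I ran.

```python
import math

def chi2_2x2(a, b, c, d, yates=True):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]],
    with optional Yates continuity correction. Returns (chi2, p)."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, r, cl in ((a, row1, col1), (b, row1, col2),
                       (c, row2, col1), (d, row2, col2)):
        exp = r * cl / n            # expected count under independence
        diff = abs(obs - exp)
        if yates:
            diff = max(diff - 0.5, 0.0)
        chi2 += diff * diff / exp
    # For 1 degree of freedom, P(X > chi2) = erfc(sqrt(chi2 / 2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Respondents not on the APA membership list who nonetheless
# claimed membership: 23 of 59 non-ethicists, 27 of 61 ethicists.
print(round(23 / 59 * 100, 1))   # 39.0
print(round(27 / 61 * 100, 1))   # 44.3

# Is the ethicist vs. non-ethicist difference significant? (No.)
chi2, p = chi2_2x2(23, 59 - 23, 27, 61 - 27)
print(p > 0.05)                  # True: trend not statistically significant
```

The same helper can be pointed at the other 2x2 comparisons in the post (e.g., the 93.2% vs. 85.3% split), given the underlying counts.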

Incidentally, near the end of the questionnaire we asked about the morality of "responding dishonestly to survey questions such as the ones presented here" and also "Were you dishonest in your answers to any previous questions?" Those who appear to have falsely claimed APA membership trended, if anything, toward being more likely than those who truly stated their non-membership to say it is bad to respond to such questions dishonestly (93.2% vs. 85.3%, chi-square p = .17). Also, 2 of 49 in the first group (4.1%) and 3 of 65 in the second group (4.6%) admitted having answered a survey question dishonestly.

Thursday, August 13, 2009

Chapter 4 of Perplexities: Human Echolocation

As you may know, I'm writing a book about the inaccuracy (in my view) of people's judgments about their stream of conscious experience (tentative title: Perplexities of Consciousness). Last winter and spring I posted drafts of six of the eight chapters (available from my academic homepage). In early summer, I got distracted with a trip to Australia and a few other things, but now I'm back in the saddle. So here's Chapter 4, "Human Echolocation", co-authored with psychologist Michael S. Gordon.

Abstract:

Most people, when asked explicitly, will deny that they can detect the properties of silent objects, such as shape, texture, and distance, using echoic information about how sound is reflected or otherwise modified by those objects. They'll deny, that is, that they can echolocate. It turns out, however, that people are surprisingly good at echolocation (if not as good as bats or dolphins). We are mistaken not just about our sensory capacities but also about our sensory experiences. There's "something it's like" to echolocate; echolocation has a kind of auditory phenomenology. You can hear, for example, the proximity of a wall as you approach it eyes closed; you can hear the wadded softness of a blanket as you speak into it; and, generally speaking, though they tend to deny it, people have a pervasive auditory echoic phenomenology of their environments and objects nearby.

Radio Interview: Are Ethicists Ethical?

By Australia's ABC national radio, featuring Simon Longstaff and me, here.

Tuesday, August 11, 2009

In Recruiting Members, the APA Doesn't Appeal As Effectively to Self-Interest As Do Other Academic Disciplinary Societies

Or so it seems, from the data I'm looking at here.

As discussed on this blog several times previously, last spring Josh Rust and I conducted a survey of the moral attitudes and moral behavior of philosophers (including ethicists) and other professors. Part I of the survey solicited attitudes about the morality or immorality of various actions -- eating meat, donating to charity, etc. -- using a nine-point scale from "very morally bad" to "very morally good", with "morally neutral" in the middle. Part II asked respondents to report their own behavior in such matters.

We asked two questions about membership in academic societies. In Part I, we asked about the morality or immorality of "regularly paying membership dues to support one's main academic disciplinary society (the APA, the MLA, etc., as appropriate)". In Part II, we asked "Are you currently a dues-paying member of your discipline's main academic society?"

Philosophers' attitudes toward the American Philosophical Association seemed about the same as other professors' attitudes. Philosophers were just as likely as non-philosophers to say it was good to pay membership dues to support their main academic disciplinary society, with 67.7% rating that action somewhere on the "morally good" side of the scale, compared to 64.7% of non-philosophers, a difference well within the range of the survey's sampling error (chi-squared, p = .48).

(I should mention, as a caveat, that among respondents who rated membership as morally good, philosophers rated it, on average, less good than did non-philosophers -- 6.89 vs. 7.53 on the 9-point scale [t-test, p < .001]. However, I believe this simply reflects philosophers' greater tendency to avoid the extreme ends of the scale. For every single one of the nine rated actions, philosophers' responses, when they were not neutral, were closer to neutral than were non-philosophers' responses -- an occupational hazard, perhaps, of philosophers' frequent reflection on unusual and extreme cases.)

However, philosophers were less likely than were other professors to report being members of their disciplinary societies: 78.0% of philosopher respondents were members, vs. 86.7% of the respondents from other disciplines (chi-square, p = .02). The difference is almost entirely driven by respondents who expressed the view that membership is morally neutral. Among those who said that membership was morally good, philosophers and non-philosophers differed little in their membership rates (82.6% vs. 87.9%, within chance, chi-square p = .21). But among professors who said they saw membership in their discipline's main society as morally neutral -- professors presumably motivated mainly by self-interest in their decision whether or not to be members -- philosophers' membership rates were considerably lower (68.0% vs. 84.5%, p = .02).

Put a bit differently, non-philosophers' membership rates hardly differed between those who saw membership as morally good and those who did not, suggesting that there are excellent prudential reasons for most professors in other disciplines to be members, while this is not as true for philosophers.

Perhaps the APA should take note.

(Incidentally, ethicists and non-ethicist philosophers didn't appear to differ in any of these respects, which is why I've combined them in the analyses here.)

Now I should say that all this concerns self-reported membership only. For the philosophers, I happen to have data about the actual membership rates of our survey respondents -- which, as you might expect, are somewhat lower than self-reported membership rates. I'll get to this in the next post. Unfortunately, for the comparison to non-philosophers, self-report is all we have to go on.

Sunday, August 02, 2009

Has Anyone Ever Pushed the Fat Man off the Footbridge?

In the famous "trolley problems" (developed by contemporary philosophers such as Philippa Foot and Judith Jarvis Thomson), a hypothetical observer is faced with several similar-seeming scenarios involving runaway trolleys or the like where there's a choice between letting five people die (if you do not intervene) and doing something that causes one other person to die, in order to save the five. The fun bit is this: Although many of the dilemmas seem similar, our moral intuitions tend to split on them, raising the question of what's driving the intuitions. Although discussion of these sorts of problems began in philosophy, with philosophers relying on their armchair intuitions, psychologists such as Marc Hauser have recently started to look at the psychology of this more systematically.

Two of the most famous scenarios are the "side track" and the "footbridge" scenarios.

In the side track scenario, you see a runaway trolley headed toward five people who will certainly die if you do nothing. You are standing next to a switch that would allow you to divert the trolley to a side track, saving the five people. Unfortunately, there is one person on the side track, who will certainly die if you divert the trolley. Question: Is it morally permissible (or even good) to divert the trolley?

The footbridge scenario is similar except that you're standing on a footbridge above the track. The only way to save the five people is to block the trolley with a sufficiently heavy object. The only sufficiently heavy object you can reach in time is a fat man (alternatively, perhaps more politely, a hiker with a heavy backpack) who is standing next to you. You could push him off the footbridge and the trolley would grind to a halt on his body, killing him but saving the five. You yourself are insufficiently heavy to stop the trolley with your own body. Question: Is it morally permissible (or even good) to push the fat man?

Most people seem to have the intuition that it is morally permissible to flip the switch to divert the trolley but that it's not morally permissible to push the fat man. Why, exactly, is a very interesting question that I won't go into here. All I want to ask is this: Are there any real life scenarios in which someone has done something like pushing the fat man? Given that there is a non-trivial minority of people who do think it's okay (or even good) to do so, you might think one of them would have been faced with such an opportunity and done the thing. Of course, I'm not asking just about runaway trolleys but about any scenario with a similar structure, in which one kills an innocent person through an act of direct personal violence in order to save several others. I exclude abstract decision-making involving the administrative balancing of the costs of lives against each other in times of war and emergency, such as deciding to reroute life-saving supplies from one place to another or asking one platoon to charge into the fire to save the battalion. What I'm asking about is archetypal violence -- murder, by a civilian in no position of authority, of an innocent bystander -- to save other people's lives.

In other words: Do people ever put their life-counting consequentialism into action? If so, you'd think it would make the news. In the 1980s Bernard Goetz made big headlines when he shot some muggers in a New York subway (and there's even a Wikipedia entry about it). You'd think pushing the fat man would be even bigger news. And although fat-man-like scenarios are surely very rare, in this world of billions they must sometimes arise.

(HT: Jeanette Kennett for forcefully posing this issue to me.)