Monday, November 19, 2007

The Caterpillar Effect in Ethical Reflection

The caterpillar who thinks about how its legs work falls on its chin, the story goes. Similarly, Joshua Rust (my co-author on The Moral Behavior of Ethicists: Peer Opinion) suggests that in cases where our spontaneous responses would be morally appropriate, moral reflection can tangle up the works. If ethicists in fact act worse than non-ethicists, as suggested by about one-third of the non-ethicist philosophers in our peer opinion survey, Josh believes the caterpillar effect may explain why.

Consider my finding that ethics books are more likely to be missing from academic libraries. Here's a Rustian (Rusty?) explanation: Our normal, unreflective treatment of library books includes returning them when they're due and being sure to check them out before leaving the library. If we start to think about the ethics of returning books, these spontaneously virtuous responses might get thrown off. We might find ourselves, for example, rationalizing and justifying theft or carelessness.

Or, Bernard Williams-style, consider the person who pauses to reflect on the moral pros and cons before helping a person in need versus the person who unreflectively leaps to assist.

I'm not sure I'm quite ready to get on board with Josh on this one yet, though. It seems to me that often our spontaneous reactions are self-serving, and habits of ethical reflection can break us away from those. I'm inclined to think that overall (even if not in every particular situation) it's good to have habits of moral reflection. This, I suppose, is part of why I find it interesting and puzzling that ethicists, who presumably do tend to reflect morally more often on average than non-ethicists, seem to behave no better than anyone else.

I've posted Josh's draft essay on this in the Underblog. I'm sure he'd appreciate comments!

18 comments:

Anonymous said...

Perceived ethical behavior is usually strongly tied to perceived justification. So when ethicists acquire more elaborate systems of justification, don't you think it's unsurprising that their perceived ethical behavior covers a wider range of activities (some of which are by most standards unethical)?

Eric Schwitzgebel said...

I'm not sure I agree, anon! Why shouldn't it get more restrictive instead? I'd be inclined to think that people are more prone to err on the side of seeing things as permissible that aren't (esp. for themselves) than on the side of seeing things as impermissible that are.

Anonymous said...

So, Eric, I wasn't sure whether to post a response to the essay draft or to your comments here. But my thoughts pertain less to the argument and implications of the essay as a whole than to the point you've commented on: the analogy with the caterpillar effect.

So, presumably the analogy to the experienced driver example works as follows: the actions involved in driving are normally controlled by automatic (sub- or unconscious) channels in the brain. When we think about them, these processes are interrupted by our conscious ones, and are hence disrupted. Now, this sort of automatic control of actions involves certain types of channels – types which are presumably relatively specific (though of course this is a question to be answered by neuroscientists). It seems like a bit of a stretch to suggest an analogy with behaviors like returning library books. There are a couple of points of comparison that seem stretched. First, degrees of required deliberation vs. automaticity. It's true the action involved in walking to the library and putting the book in the drop slot could presumably be controlled by automatic channels, sufficiently similar to those that control the action involved in driving. But the decision to take the actions of returning a book, even for non-ethicists, involves deliberation (or at least a sort of pseudo-deliberation) in a way that 'deciding' to turn slightly to the left, to follow the curve of the road, doesn't. In the former case, there's a more complex mapping from scenarios that inspire actions to the actions themselves, since the channels involve awareness of those scenarios. I am more aware of the fact that I have library-book-X lying on my shelf, when it ought to be at the library, than of the fact that there is a certain degree of curvature in the road which I ought to follow by turning the wheel in a certain way.

The other feature that a situation involving the return of library books lacks is that a certain sort of action is required – namely, relatively immediate action, which must take place simultaneously (or in quick alternating succession) with deliberation about that action. Presumably, cognitive resources are 'valuable' because we can only do so many things at once – our brains can't run an automatic action process and an introspective/reflective one about that otherwise-automatically-controlled action simultaneously. But the simultaneity requirement is lifted in the case of deliberating about whether to return a library book, so the problem can't be cognitive resources. At least, we can't mean the same thing by it in both cases.

If this is right, then the analogy cannot derive its explanatory power from an exact mapping between the two types of scenarios.
But from what then?

Perhaps the idea is more general: if we have this entrenched hyper-deliberative stance that is the burden of the ethicist, such that we feel inclined to deliberate about every little action, then we'll never do anything, because we just don't have the cognitive resources (or time in the day!) to deliberate about every action that could have some sort of ethical implications. That's why we normally allow socially conditioned 'automatic' action programmes to control 'deliberation' about behaviors like returning library books.

But it seems unlikely that this is his point. What he says is that the hyper-deliberative stance leads to 'moral omission or oversight'. Having now loosened the analogy, this would seem to suggest that if I'm thinking too much about whether I should return my book, I'll just overlook the obvious reasons for doing so, because I'll be too focused on the more subtle points. Now, I'm not an ethicist, but my introspective intuition leads me to doubt that this is why the ethicist fails to return her books. It seems more likely that ethicists would recognize the relative unimportance of such points, and spend their time tackling bigger fish.

Eric Schwitzgebel said...

Wow, great comment, Emily!

Your discussion of the contrasts between the caterpillar/driver and the act of returning books is helpful. Surely the issue is not one of automaticity in this sense. So maybe it's a matter of length or depth of reflection?

So then, as you point out, the story might be that too much reflection leads you to lose hold of the obvious points, getting tangled in your own thoughts. (This definitely can happen!) But that doesn't seem the likely explanation (*if* there's a phenomenon to be explained here).

Here's another thought: Maybe conventional wisdom is wiser than your average individual ethicist. Without much ethical reflection, we tend to go with conventional wisdom, but ethicists are tempted to strike out on their own?

Josh reads the blog, but I'll email him to make sure he's seen your comment. I'd be interested to hear his response!

Joshua Rust said...

Emily,

Thanks for your thoughtful and helpful comment! The question is how a hyper-deliberative stance can so undermine an ethicist's moral competence that he or she might fail to return library books. Following a couple of suggestions in my essay, you considered two readings of my text. The first reading is closer to what I had in mind; I take it for granted that if the ethicist did deliberate, their moral schema would (typically) tell them to return the library books.

So, I claim that if returning library books is like working a clutch, then deliberation undermines action in something of the same way that an otherwise competent driver can stall a car if he or she thinks too much about working the clutch (I've experienced this when trying to teach someone how to drive a manual transmission). You objected by pointing out that returning library books is more complex than the quick, reflexive actions which constitute a skill: i.e., using a clutch or tying shoelaces. In particular, you argued that skill-sets such as clutch operation and shoelace-tying can be automatic because action can be prompted by fairly straightforward stimuli (the engine is making a certain noise, my shoelaces have come undone). The stimuli that prompt the returning of library books are comparatively multivalent (the library book has to be distinguished from other books, it has to be overdue, etc.) and so require greater consciousness or deliberation.

I think you are right to emphasize the difference between two kinds of action, where returning the library book involves a more complex mapping of scenarios which prompt action. My own temptation is to grant the difference but contend that while a non-deliberative stance is most evident in quick, reflexive actions, that stance is not limited to those actions. Professional athletes and chess players (is it wrong to see these categories as mutually exclusive? :-) ) seem capable of responding to very complex, multivalent scenarios in a strikingly non-deliberative way. Similarly, it doesn't seem crazy to think that a librarian just sees an overdue book, rather than a book which may or may not be a library book and which may or may not be overdue, etc. A basketball player likewise just sees an open shot.

I think what I was trying to say was this (although I might say it much better here): we learn to perceive certain situations as requiring an appropriate response. Some of these responses are aided by deliberation, especially when contextual triggers are unfamiliar.
If a person has been trained in the right way, and the triggers are familiar (even if complex), deliberation is not necessary; indeed, it may be harmful. Generalists attempt to construct a set of rules which provides the right response no matter what the trigger (Dewey might be right to worry whether this could even be done!). How do we test our set of rules? By comparing the output of the various moral algorithms against each other in the really hard, exceptional cases (the trolley, organ transplants, torture to prevent a nuclear explosion, etc.). The problem here is that in focusing on these hard cases, we make the exceptional case of moral competence—the one which requires practical deliberation—the paradigm case; the exception becomes the rule. And I worry that such a focus can actually undermine the ability to recognize the standard boring cases—such as returning library books—as ethical.

What do you think?

Eric Schwitzgebel said...

Interesting comment, Josh! That's not the direction I thought you'd go.

I can see how thinking about the clutch, even if your thought is perfectly correct, can mess up skillful driving, but I don't see how that would work in the library book case. If you stop to deliberate, you suggest, you *would* still think you ought to return the library book. How does that interfere with actually returning it? I can imagine various scenarios, I suppose, but overall doesn't it seem that having that thought explicitly will make it *more* likely that you'll return it?

Anonymous said...

So, it's still a bit unclear to me that deliberation about returning a library book would be harmful in the way that thinking about working the clutch or the angle of one's basketball shot is. To highlight another distinction between library books and the other cases (driving, sports): the output required is not a complex action (requiring me to establish the proper angle, hold the ball a certain way, etc.). Either I should return the book or I shouldn't. Nor is the deliberative scenario complex like a chess game, where deliberation requires imaginative variation from courses of action to complex outcomes, such that my mind might lose track of the intuitive answer if I start to deliberate.

Also, I’m a bit concerned that I might be missing your point: your worry is that when “the exception becomes the rule”, we will become prone to deliberate in cases where we needn’t and perhaps shouldn’t. Such deliberation may be harmful. It would be better not to deliberate in the case of returning books.

Still, you agree that if one deliberates, their schema will recommend returning the book.

Is the idea, then, that although deliberation is normally harmful in this sort of case (because of something like the caterpillar effect), if one were able to deliberate successfully about it, her schema would recommend returning the book?

Joshua Rust said...

Emily and Eric,

Both of you expressed something of the same worry: if I grant that deliberation would result in proper behavior (returning the library books), how could deliberation be harmful?

The quick answer is as follows: while deliberation would, I'm assuming, speak in favor of returning the library books if the ethicist reflected on the case, I worry that an ethicist's training might undermine the chances that the ethicist would in fact reflect on the case in the first place. This would explain why the ethicist is less likely to return library books than non-ethicists.

On Aristotle's view, deliberation is only required in the difficult cases (abortion, etc.), given the right training; returning library books is not typically among these difficult cases. If the ethicist spends their professional career deliberating about the difficult cases, I worry that this can blind them to the more straightforward instances of ethical opportunity. In this way, their training can blind or undermine their moral competence.

The idea is that holding deliberation to be a hallmark of moral excellence can attune you to only a certain subset of ethical opportunities. You want a person with ethical training to help negotiate difficult decisions in, say, a biomedical context; however, these same people might perform less spectacularly in the realm of the ethical-mundane.

So the worry about the ethicist is not that they will over-think the returning of library books (and so not act); it's rather that they won't think about it at all (and so not act).

Anonymous said...

Ah! So I see I was misinterpreting. Thanks for explaining and for the interesting discussion, Josh!

Eric Schwitzgebel said...

Yes, I guess I was misunderstanding also! But can you explain more why you think a career in ethics would tend to reduce the likelihood of reflecting about whether to return a library book? I'd have guessed the opposite.

Anonymous said...

(Not being a philosopher myself) I sometimes wonder about the heavy emphasis discussions like this give to rational processes.
I was pleased to see Eric raise issues of unconsciousness in the original post. Let's assume that moral philosophers do steal books from libraries. There must be 101 ways she or he would rationalise this (that, after all, is his or her forte) as not theft but something else. And rationalisation is just one of the many conscious and unconscious defence mechanisms we use.
Then there's the question of character. The psychopath, for example, would easily justify keeping the books. So would the narcissist. So would the borderline....
Skills in ethical thinking can help us behave more ethically, but they can be and are used in precisely the opposite way - as tools to defend unethical behaviour.

Eric Schwitzgebel said...

Thanks, Steve. I agree with all that! But here's the question: On average, over the long run, do habits of moral reflection lead to better moral behavior? I'd hope so -- I think it's pretty dark to think that thinking about ethics is morally useless or *just* rationalization! -- but the empirical evidence so far isn't looking so good....

Joshua Rust said...

Emily and Eric--

Explaining why the study of ethics would reduce the chances that the ethicist would return library books--

The sphere of the ethical covers a huge range of cases, from the relatively uncontroversial (returning library books, keeping promises) to the highly contentious (abortion, trolley cases, torture). Typically, generalists adjudicate differences between the various ethical algorithms by focusing on the contentious, borderline cases. I take it for granted that both utilitarianism and Kantianism would have us return library books (of course, this mundane case could be turned into a contentious case if, say, the borrower knows that returning the book would lead to someone's death, etc.). Because the generalist tests their theories against the exceptional, contentious cases, I worry that this process can actually desensitize the generalist to the entire spectrum of the ethical. This is what I meant when I said that the generalist might lose the ability to see the library book as even ethical; it looks quite different from the contentious cases he or she is used to entertaining.

So, while the generalist might act with great subtlety and grace when confronted with a difficult case, I worry that their training can actually undermine the ability to see the returning of library books as an ethical opportunity.

By analogy, you can imagine an auto mechanic who--while skillful--fails to see that the car isn't starting simply because there is no fuel in the tank. His or her skill in negotiating the more unusual mechanical difficulties can blind him/her to the more mundane cases.


drsteve--

Thanks for the post!

The explanation I'm trying to give for why ethicists are less prone to return books isn't quite that they are better able to rationalize their actions. It's rather that the kind of cases one focuses on as an ethicist (in order to evaluate various theories) can desensitize one to the boring, everyday cases of ethical opportunity. So rather than rationalizing not returning library books, there is a certain way in which it doesn't come up for consideration in the first place. The returning of library books looks importantly different from the cases (abortion, etc.) they are used to considering (see my response to Eric and Emily above).

However, you might also be correct; perhaps, even when they do consider returning library books, some ethicists might rationalize away their obligation.

Eric was a professor of mine when I was a graduate student (as you can imagine, he's a terrific teacher). I have dim memories of his telling us of a case wherein elementary school students were taught the Kantian Categorical Imperative in order to nudge them to the next stage of Kohlberg's pyramid of moral development. Rather than improving their ability to be ethical, it was found that the young students simply used the CI to justify behavior that they wouldn't normally engage in! If this anecdote isn't apocryphal, it lends some support to the plausibility of your explanation.

Anonymous said...

Josh,
Thanks for the follow up! I understand your position more clearly now.
I’m wondering if this would adequately explain the purported phenomenon, though. The idea is the case of returning library books looks importantly different from these borderline cases, so the ethicist fails to recognize it as an ethical matter. But I wonder: would the ethicist, as a result, perform worse than your average Joe? Or just the same?

That is, are you suggesting that the ethicist’s deliberation about borderline cases, in addition to hindering their ability to recognize ordinary ethical situations, also erases the sort of intuitive schema that most of us have, which tells us we should return books? If Joe doesn’t need to deliberate in order to return a book, why should the ethicist?

It wouldn't make a lot of sense for the ethicist to see deliberation as a requirement for action in situations that don't seem to him to be morally significant. So if the book scenario doesn't seem morally significant, why wouldn't it be served by the same automatic programs that handle it for Joe? But perhaps I'm making the distinction too black and white. One could reply that on some implicit level, the ethicist still recognizes this as a moral issue, and hence tends not to act without deliberating. What do you think?

Anonymous said...

Eric - maybe it's neither that "habits of moral reflection lead to better moral behavior" nor '*just* rationalization'. Freud's view is that the roots of morality lie in the nursery, where children come to this: "I'll agree not to demand special treatment if, and only if, no-one else gets special treatment." Morality, then, has little to do with habits of thought or defences. It has to do with a bargain. (Dark, but not pessimistic, in my view.) This has been a most stimulating post!

Joshua - thanks for the clarification. It doesn't seem that plausible to me that because I deal in large matters at work, at home I'll overlook small matters. Having said that, it does help to explain the case of the professional football coach (these guys are notorious disciplinarians and control freaks) who has out-of-control children right under his nose!

Coathangrrr said...

Doesn't this all assume that it is ethicists who are most likely to check out books on ethics?

I mean, law students, business students, and all sorts of students have to take ethics. More so than metaphysics or really any other field of philosophy. Maybe the ethicists actually are better at returning books and the other students taking them out are worse.

Eric Schwitzgebel said...

I've tried to control for such factors -- for instance by separating out law books and excluding the better-known books more likely to be checked out by non-specialists. The results are the same. The full paper is here:
http://www.faculty.ucr.edu/~eschwitz/SchwitzAbs/EthicsBooks.htm

Anonymous said...

It's good to have habits of moral reflection at times, but most of the time it creates the caterpillar effect. I found an article about you on philosophyetc dot net and decided to research it a bit more, and came across this page. I don't know if you have read it, but: "Schwitzgebel's own suggestion is the 'bivalent view' that moral reflection affects our behaviour, in particular reducing conformity to prevailing social norms, but this is not always for the better (due to rationalization)." I enjoyed your post!

cheers.
Dianne