Wednesday, February 13, 2008

When Will Ethicists Behave Better and When Worse?

Suppose we accept that philosophical moral reflection is bivalent -- that sometimes it leads us toward morality and sometimes away from it. It would be nice to have a theory about when it will do one and when the other.

Here's a first thought. Suppose philosophical moral reflection simply introduces an element of randomness into one's ethical principles. Suppose that to a first approximation it's just a random walk away from pre-reflective standards. When conventional, thoughtless, ordinary behavior is morally good (returning one's library books, not pursuing elaborate schemes to evade taxes, waiting one's turn in line), whatever introduces randomness into that behavior is likely to lead away from the good; and thus we might predict that people who do a lot of philosophical moral reflection, such as ethics professors, will actually behave worse in such matters.

In other cases, ordinary, conventional, thoughtless behavior might have serious moral shortcomings, such as Americans' vast overuse of resources, meager charitable giving, and routine meat-eating. In these cases, deviation from the norm might be more likely to be deviation toward the good than away from it. So in these cases, we might predict that people prone to philosophical moral reflection will on average behave better than others.
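To make that reasoning concrete, here's a toy simulation -- just an illustration, under two artificial assumptions: that behavior can be scored on a bounded 0-to-1 scale of moral goodness, and that reflection simply adds symmetric random noise to the conventional baseline.

import random

def reflect(baseline, noise=0.2, trials=100_000):
    # Average "goodness" of behavior after one random-walk step away from the
    # conventional baseline, on a scale clipped to [0, 1].
    total = 0.0
    for _ in range(trials):
        perturbed = baseline + random.uniform(-noise, noise)
        total += min(1.0, max(0.0, perturbed))
    return total / trials

# Near the good end (returning library books, waiting in line), noise mostly hurts:
print(reflect(0.95))   # roughly 0.92 -- a bit worse than the 0.95 baseline

# Near the bad end (charitable giving, resource use), noise mostly helps:
print(reflect(0.05))   # roughly 0.08 -- a bit better than the 0.05 baseline

The asymmetry comes entirely from the ceiling and floor: random deviation from a norm that is already nearly as good as it can be has nowhere to go but down.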

Risky predictions from a simple theory! -- a theory too simple to be true, I'm sure. Can we test it? I've already done the books study and it fits. I've looked a little at charity, but my data are pretty limited and ambiguous. What else can we test? Hm, ethicists cutting in line -- that might be do-able....

5 comments:

Anonymous said...

Hello,

Do you not just take it for granted that conventional morality is the right one?

I mean, suppose one does engage in moral reflection and this does lead one to moral standards that are apart from the conventional standards.

Could this not be because the ethical reflector has discovered a genuine moral truth that the folk have been just ignorant of?

One example that comes to my mind is Henry David Thoreau. Did he not depart from traditional morality by refusing to pay taxes in order to protest slavery -- a practice regarded as morally permissible in his time?

So maybe, stealing ethics books from the library is the right thing to do. :-)

It's just that you haven't reflected on it enough. :-P

Anonymous said...

Sorry, it seems I hadn't read the third paragraph before commenting!

Anonymous said...

Hi Eric,

What you're calling the bivalence of moral reflection I would call an instance of a general truth concerning the underdetermination of action by reflection: for all P, there is no Q such that thinking about Q will make you act to bring about P instead of not P. For example, simply thinking that there's a tiger in the room is insufficient to make me run in the opposite direction. My ultimate behavior depends on much else besides, such as whether I want to be eaten by a tiger.

(Of course, if thinking is construed as an action then certain cogito thoughts will constitute counterexamples to the general claim. But let's set these aside.)

I'm guessing you'd like this moral bivalence stuff to be more, and thus more interesting, than just an instance of the general claim about underdetermination.

So what more do you think there is?

Eric Schwitzgebel said...

Cihan: I do agree that there's a potential tangle here regarding how to determine in which cases conventional behavior is less morally good than unconventional behavior. I hope that there are enough cases where that's surmountable (like perhaps the cases listed?) that the issue is still empirically approachable.

Eric Schwitzgebel said...

Hi Pete! I don't actually accept the standard view in philosophy of mind that there's a sharp divide between cognitive and conative states, or beliefs and desires, and that beliefs are powerless to drive action without desires (though I agree that in some cases an unusual set of desires can be what I call an "excusing condition" for not manifesting the behavior stereotypical of the belief, as in the tiger case you mention).

But setting that aside....

I'm not sure exactly what picture you have in mind, but one picture might be this: Ethical reflection leads us to better know what's morally good, but some people desire to do what's morally good (and those people it makes better) and some people desire to do what's morally bad (and those people it makes worse). I don't think that model works very well, though, because in order for the bivalence to be approximately balancing (so that on average ethicists don't behave better), there would have to be a substantial number of people for whom discovering that some action is morally bad makes them more likely to engage in that action. That seems unlikely to me.

I think the bivalence works at the level of principle and policy, rather than desire. Sometimes moral reflection leads us to morally better principles and policies (e.g., support environmental causes) and sometimes it leads us away from them (e.g., "it's okay for me to cheat on my taxes because [fill in self-serving rationalization]").

As you point out, there's a lot more meat that needs to be put on the model!