In an earlier post, The Problem of the Ethics Professors, I asked why ethics professors often behave so badly. What does this suggest about the connection between ethical reflection and moral behavior?
Of course, implicit in this question is the presumably empirically testable assumption that ethics professors do behave (at least) as badly as the rest of us. But how to test that assumption?
I can think of no better way than to ask people who know ethics professors. While asking people about their impressions invites, of course, a variety of problems, the alternatives -- actually trying to run a controlled study or a direct observational study (or looking at criminal records!) -- seem patently infeasible. And perhaps if people are asked in the right way, the answers they give will deserve some credit.
So I've designed a questionnaire. My thought is to set up a table at an APA meeting or two (if the APA will let me!) with a sign saying something like "take this brief questionnaire, get a brownie!"
Q's 1 and 2 of the questionnaire would be:
1. As best you can determine from your own experience, do professors specializing in ethics tend, on average, to behave morally better, worse, or about the same as philosophers not specializing in ethics? (Please circle one number below.)
[Here there'd be a Likert scale of 1-7, from "substantially morally better" (1) through "about the same" (4) to "substantially morally worse" (7). I can't reproduce the actual look of the scale here due to formatting constraints.]
2. As best you can determine from your own experience, do professors specializing in ethics tend, on average, to behave morally better, worse, or about the same as non-academics of similar social background? (Please circle one number below.)
[Here would be the same Likert scale as above.]
I would then ask questions about academic rank, area of teaching/research focus, and whether they knew in advance the topic of the questionnaire or had discussed it with anyone.
Thoughts, reactions, and suggestions welcome! Is this lame and pointlessly irritating? What would you predict for the results?
If you're reading this, the comment feature is working again!
Eric, you're right that this is an empirical question, but I'm not sure your survey's going to get at the real data. Philosophers are very good at over-thinking -- I'd expect a large number of respondents to just do the a priori sociology and tell you their view as to whether ethics professors tend to be better people than other people, without bothering to reflect on the particular people they know.
How to get around this? That's trickier. One idea might be to ask respondents to focus on a particular ethics specialist. "Think of the last ethics specialist you've interacted with", or maybe better: "think of the ethics specialist in your department whose name comes soonest after your name, alphabetically", or something like that. Then ask whether that person seems to be a good person.
You get less data to work with, since you're only able to aggregate on one person per respondent, but it might be more reliable data.
That's a very interesting suggestion, Jonathan. I might just do that! It does seem empirically cleaner. Thanks so much for the helpful feedback.
(If I do use your suggestion, I'll acknowledge you of course.)
(1) I would suggest adding another dimension to this aimed at assessing whether people think ethics professors are particularly good at giving ethical advice or helping one think through a personal ethical issue - this touches on worries some people have about the idea of ethical experts.
(2) Regarding methodology for both surveys: I recommend also polling for reactive attitudes. You might, for example, offer stories in which ethics professors (whom they know or not) and non-ethics professors act well or poorly, or give good or bad advice. The idea would then be to poll for the reader's reactive attitudes (ranging, say, from surprise to anger, disapproval, indignation, etc.).
I suspect the beliefs these reactions embody are functionally important and are not necessarily or usually correlated with the beliefs people would avow in more straightforward polling.
Polling for both would also get at differences in what we expect of people with different specialities and allow you to pick up on interesting divergences between the two types of beliefs.
It would be interesting to check for differences between cases where the story is one in which someone else is mistreated and one in which the reader is mistreated.
Other objects to test: guilt about lying to the person, cheating them, etc., or, on a different topic, willingness to accept an apology from them.
You should also probably add a "how certain are you of your answer" item, with numbers one through ten, when collecting this kind of data.
Those are intriguing thoughts, Brad. As always, your comments and suggestions seem to open new avenues of thought for me!
I think the first thing to do, though, is to keep the questionnaire really simple and clean. If I can pull that off, then in follow-ups I can start plunging in more deeply -- perhaps in some of the ways you've suggested. (I might run some more ideas by you then, too, if you don’t mind!)
Hi Eric,
I have a few suggestions.
1. You might consider adding to each question the option to leave it blank. Or you could provide subjects with a "cannot answer" option and then a space where they can let you know why they did not answer. Forcing an answer may result in your getting more bad data than you'd like.
2. You ask two main questions:
i. What is the difference between ethics philosophers and philosophers generally?
ii. What is the difference between ethics philosophers and non-academics?
I propose a slightly different line of questioning:
i. What is the difference between an academic and a non-academic?
ii. What is the difference between a non-philosopher academic and a philosopher?
iii. What is the difference between a general philosopher and an ethics philosopher?
I see at least two advantages to the second line of questioning. First, you get an added bit of information about the possible ethical advantage of being a scholar generally. Second, your questionnaire becomes less transparent. By less transparent I mean less obviously a questionnaire about ethics professors, and therefore less susceptible to the biases that may arise if your participants know what information you are most interested in.
3. However you handle the earlier questions, before asking the first question pertaining to ethics professors in particular, it might be good to prime your subjects by asking them to take a moment before answering the next question to review their academic history and the various ethics professors, both male and female, whom they have known. This may get you the most considered data and help you to avoid getting bad data resulting from subjects appealing to their most ‘memorable’ ethics professor (in a way similar to, but less restrictive on your data than, the question regarding the ethics professor alphabetically closest to your subjects).
4. Along these lines, I am of the opinion that there may be some interesting difference to be found between people who do meta-ethics exclusively and those who do either normative ethics or some combination of normative and meta-ethics. To get an idea of whether this may be true, I’d propose testing for this in conjunction with the question where the subject considers the alphabetically closest ethics professor. So in your Part II I’d ask as question three, “Where does this person fit on this scale?” and then provide a Likert-type scale of 1-7, with 1 being exclusively meta-ethics and 7 being exclusively normative (where the numbers in between = some combination of the two).
-JM
One more actually...
5. It may be good to have several versions where the questions are ordered differently. This way you can also check for biases in answers that result from ordering and unintentional priming.
-JM
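[To make the counterbalancing idea in suggestion 5 a little more concrete, here is a minimal sketch in Python, with placeholder question labels of my own; the names and numbers are illustrative assumptions, not part of the actual questionnaire, and the sketch simply generates several randomly ordered versions of the same item list.]

```python
import random

# Hypothetical sketch of suggestion 5 above: generate several versions of the
# questionnaire with the main items in different random orders, so that order
# effects and unintentional priming can later be checked for. The question
# labels are placeholders, not the actual wording of the survey items.
questions = [
    "Ethics specialists vs. other philosophers",
    "Ethics specialists vs. non-academics of similar background",
    "Academics vs. non-academics",
    "Non-philosopher academics vs. philosophers",
]

def make_versions(items, n_versions, seed=0):
    """Return n_versions randomly ordered copies of the question list."""
    rng = random.Random(seed)  # fixed seed so the versions are reproducible
    versions = []
    for _ in range(n_versions):
        order = items[:]       # copy so the original ordering is untouched
        rng.shuffle(order)
        versions.append(order)
    return versions

for i, version in enumerate(make_versions(questions, 3), start=1):
    print(f"Version {i}: {version}")
```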
Thanks for your thoughts, Jennifer! I disagree with #1, since I'd worry about whether having too many non-responders would distort the sample. I'll definitely adopt #3. The others I'll think about, too. They're good points, all of them.