Professors appear to think that voting regularly in public elections is about as morally good as donating 10% of one's income to charity. This seems, anyway, to be suggested by the results of a survey Josh Rust and I sent earlier this year to hundreds of U.S. professors, ethicists and non-ethicists, both inside and outside of philosophy. (The survey is also described in a couple of previous posts.)
In one part of the survey, we asked professors to rate various actions on a nine point scale from "very morally bad" through "morally neutral" to "very morally good". Although some actions we expected to be rated negatively (e.g., "not consistently responding to student emails"), there were three we expected to be rated positively by most respondents: "regularly voting in public elections", "regularly donating blood", and "donating 10% of one's income to charity". Later in the survey, we asked related questions about the professors' own behavior, allowing us to compare expressed normative attitudes with self-described behavior. (In some cases we also have direct measures of behavior to compare with the self-reports.)
Looking at the data today, I found it striking how strongly the respondents seemed to feel about voting. Overall, 87.9% of the professors characterized voting in public elections as morally good. Only 12.0% said voting was morally neutral, and a lonely single professor (1 of the 569 respondents or 0.2%) characterized it as morally bad. That's a pretty strong consensus. Political philosophers were no more cynical about voting than the others, with 84.5% responding on the positive side of the scale (a difference well within the range of statistical chance variation). But I was struck, even more than by the percentage who responded on the morally good side of our scale, by the high value they seemed to put on voting. To appreciate this, we need to compare the voting question with the two other questions I mentioned.
On our 1 to 9 scale (with 5 "morally neutral" and 9 "very morally good"), the mean rating of "regularly donating blood" was 6.81, and the mean rating of "donating 10% of one's income to charity" was 7.36. "Regularly voting in public elections" came in just a smidgen above the second of those, at 7.37 (the difference being within statistical chance, of course).
I think we can assume that most people think it's fairly praiseworthy to donate 10% of one's income to charity (for the average professor, this would be about $8,000). Professors seem to be saying that voting is just about equally good. Someone who regularly donates blood can probably count at least one saved life to her credit; voting seems to be rated considerably better than that. (Of course, donating 10% of one's income to charity as a regular matter probably entails saving even more lives, if one gives to life-saving type charities, so it makes a kind of utilitarian sense to rate the money donation as better than the blood donation.)
Another measure of the importance professors seem to invest in voting is the rate at which they report doing it. Among professors who described themselves as U.S. citizens eligible to vote, fully 97.8% said they had voted in the Nov. 2008 U.S. Presidential election. (Whether this claim of near-perfect participation is true remains to be seen. We hope to get some data on this shortly.)
Now is it just crazy to say that voting is as morally good as giving 10% of one's income to charity? That was my first reaction. Giving that much to charity seems uncommon to me and highly admirable, while voting... yeah, it's good to do, of course, but not that good. One thought, however -- adapted from Derek Parfit -- gives me pause about that easy assessment. In the U.S. 2008 Presidential election, I'd have said the world would be in the ballpark of $1 trillion better off with one of the candidates than the other. (Just consider the financial and human costs at stake in the Iraq war and the U.S. bank bailouts, for starters.) Although my vote, being only one of about 100,000,000 cast, probably had only about a 1/100,000,000 chance of tilting the election, multiplying that tiny probability by a round trillion leaves a $10,000 expected public benefit from my voting -- not so far from 10% of my salary.
Of course, that calculation is incredibly problematic in any number of ways. I don't stand behind it, but it helps loosen the grip of my previous intuition that of course it's morally better to donate 10% to charity than to vote.
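For what it's worth, the back-of-the-envelope arithmetic is easy to reproduce. The dollar figures are, again, rough assumptions rather than measurements:

```python
# Parfit-style expected-value estimate for a single vote.
# All inputs are the post's rough, contestable assumptions.
value_difference = 1e12       # assumed value gap between the candidates, in dollars
total_votes = 100_000_000     # approximate number of votes cast
p_decisive = 1 / total_votes  # crude chance that one vote tilts the election

expected_benefit = p_decisive * value_difference
print(expected_benefit)  # 10000.0 -- in the neighborhood of 10% of a professor's salary
```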
Update July 29:
As Neil points out in the comments, in this post I seem to have abandoned my usual caution in inferring attitudes from expressions of attitudes. Right: Maybe professors don't think this at all. But I found it a striking result, taken at face value. If it's not to be taken at face value, we might ask: Why would so many professors, who really think donating 10% of income is morally better than voting, mark a bubble more toward the "very morally good" end of the scale in response to the voting question than in response to the donation question? Moral self-defensiveness, perhaps, on the assumption (borne out elsewhere in the data) that few of them themselves donate 10%...?
Tuesday, July 28, 2009
Professors on the Morality of Voting
Posted by Eric Schwitzgebel at 3:56 PM 12 comments
Labels: ethics professors
Friday, March 20, 2009
Political Scientists and Political Philosophers Aren't More Likely to Show Extreme Patterns in Vote Rate
Last year, Josh Rust and I looked at the rates at which political scientists vote, compared to other professors. We also looked at the rates at which political philosophers voted, compared to ethicists in general and to philosophers not specializing in ethics or political philosophy. Our main finding (see here) was that all groups voted at about the same rate, except for political scientists, who voted about 10-15% more often. This fits with our general finding (so far) that by a variety of measures ethicists don't behave much differently than other people of similar social background.
The result that surprised me most from that study, though, and the one I keep coming back to in my mind, was this: The variance in voting rate was the same (really, virtually exactly the same) for all the groups. I had expected that extreme views about voting -- either about its pointlessness or its importance -- would be overrepresented among political scientists and political philosophers, and that this would be reflected in the voting patterns. Maybe political philosophers aren't any more likely to vote, on average, I thought -- but there'd be a fair number who were highly conscientious, voting in virtually every election, and a fair number who were principled non-voters. If this were the case, they should show a wider spread of voting rates -- or in other words a higher variance. However, we found no such thing.
Excluding the non-voters for a minute, let's look at the distribution of voting rates among the sampled groups: political philosophers, political scientists, non-ethicist philosophers, and the comparison group of other professors, in the following four charts. (Each group gets its own chart. On the x-axis is the number of votes per year, on the y-axis is the percentage of the group that votes at that rate.)
The thing to notice is that there's no more spread in any of these groups than any of the others. Each shows basically the same hump in the middle. (The dip just to the left of 1.00 votes per year in each group is due to the fact that professors are more likely to vote about once every two years [.50] or about once every year [1.00] than three times every four years [.75]. It's also worth noting that local election data are missing for some regions, so these charts somewhat underestimate the overall voting rate.)
The zeros are a little harder to interpret: For about 25% of sampled professors no voting record was found -- which might reflect a pattern of not voting among those professors, but might also reflect registration under a different name or in a different area. So the following numbers certainly overestimate the number of non-voters. But notice again that there is no tendency for overrepresentation at this end of the scale either, among political scientists or political philosophers (the variations in the percentages here are all within the range of chance variation).
Percentage of sampled professors with no voting record found:

political philosophers: 22.4%
political scientists: 26.2%
non-ethicist philosophers: 29.1%
comparison professors: 26.9%

I find the overall results particularly striking for political philosophers: They are neither, on average, more prone to vote than other professors, nor are they bimodally split between conscientious voters and principled non-voters. Most of them just vote occasionally, sporadically, like the rest of us. It's as though all their thinking about politics has no influence on their voting behavior. (I have other evidence that suggests that it has no influence on their political party, either, but that's for another day.)
Posted by Eric Schwitzgebel at 10:00 AM 5 comments
Labels: ethics professors
Tuesday, August 19, 2008
More Data on Professors' Voting Habits: Variability and Conscientiousness
I've a couple more thoughts to share from Josh Rust's and my study of the voting rates of ethicists and political philosophers vs. other professors. (Our general finding is that ethicists and political philosophers vote no more often than other professors, though political scientists do vote more often.)
(1.) Take a guess: Do you think extreme views about the importance or pointlessness of voting will be overrepresented, underrepresented, or proportionately represented among political scientists and political philosophers compared to professors more generally? My own guess would be overrepresented: I'd expect both more maniacs about the importance of voting and more cynics about it among those who study democratic institutions than among your average run of professors.
However, the data don't support that idea. The variance in the voting rates of political scientists and political philosophers in our study is almost spot-on identical to the variance in the voting rates of professors generally. Either political scientists and political philosophers are no more prone to extreme views than are other professors, or those extreme views have no influence on their actual voting behavior.
(2.) California professors are incredibly conscientious about voting in statewide elections. Half of our sample is from California, where we only have data for statewide elections. Among California professors whose first recorded vote is in 2003 or earlier, a majority (52%) voted in every single one of the six statewide elections from 2003-2006. 72% voted in at least five of the six elections. This compares with a statewide voting rate, for the June 2006 primary election alone, of only 33.6% of registered voters. (For other states, we have local election data too. There's no such ceiling effect once you include every single local ballot initiative, city council runoff election, etc.; professors aren't quite that conscientious!)
Posted by Eric Schwitzgebel at 9:42 AM 5 comments
Labels: ethics professors, Joshua Rust, moral psychology
Monday, December 10, 2007
Political Scientists' Voting: Predictions and Methods
A week and a half ago, I posted a brief, frustrated reflection on my failure to find any good research on the rates at which political scientists vote in public elections. After several more search attempts, I've given up. As far as I can see, no one has explored this issue since a few studies in the 1960s and 1970s -- studies so problematic as to be utterly useless.
I've informally asked a number of people to guess what Josh Rust and I will find when we analyze the data. Everyone except the political scientists said that they suspect we'll find that political scientists vote more often than other professors. The political scientists, however, were cagey. A couple mentioned a minority view in political science that voting is for suckers: Your vote never makes a difference, so voting is a waste of time. Yet one of these same professors said that he himself has voted in every single election, down to the tiniest little runoff, since the turn of the century.
So here's my prediction: Political scientists will have a more broadly spread distribution than other professors -- there will be more at the extreme of voting in almost every election but there will also be many who vote rarely or never. On average, though, I predict, the political scientists will vote more. Compared to the average non-political-science professor, the average political scientist will be more informed about elections, more invested in and interested in the outcomes, and more likely to have publicly embraced the view that one should vote.
Well, we'll see! Here's what Josh and I plan to do:
From university websites, we'll gather names of philosophers, political scientists, and a sample of professors in other fields. We'll then look for these names on voting records that have been provided to us by several states, calculating a rate of votes per year for each individual since that individual's first recorded vote in the state.
There are two main weaknesses in this method, and I especially welcome readers' reflections or suggestions about these. First, since we don't have street addresses for the professors in question, we will not be able to disambiguate between voters with identical names. If there are four John Millers who live within commuting distance of So-and-So College, we won't know which one is the professor, so we will have to discard the data. And second, if no voting record matches the professor's name, we will not know whether that professor is registered under a different name, registered in a different locale, a non-citizen, a felon, or simply a non-voter. So we'll have to exclude those professors too.
Because of these difficulties, we won't be able to reach conclusions about absolute rates of voting participation among political scientists, just comparative rates -- more or less than professors in other departments. But will these difficulties undermine our ability even to draw that conclusion? Although I don't see any reason to think there will be large differences in the rates at which professors in different departments are registered under different names or in different locales, there is reason to suspect that different departments may have different rates of common names and of non-citizens. But hopefully we can keep those confounds under control: We'll have an exact count of the common-name professors in the different departments, so we can attempt analyses that account for that; and hopefully we can estimate the rates of non-citizenship in departments by accessing c.v.'s or biographies of our non-voting professors where possible and by looking at general data on the citizenship of professors.
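To make the planned measure concrete, here's a minimal sketch of the votes-per-year calculation with both exclusion rules. The data layout is invented for illustration; the real state voter files are of course formatted differently:

```python
def votes_per_year(name, voter_file, current_year=2007):
    """Hypothetical sketch: voter_file maps a name to a list of
    registrants bearing that name; each registrant is a list of
    years in which a vote was recorded."""
    registrants = voter_file.get(name, [])
    if len(registrants) != 1:
        # No record found, or several same-named registrants we cannot
        # disambiguate without street addresses: exclude the professor.
        return None
    vote_years = registrants[0]
    years_active = current_year - min(vote_years) + 1  # since first recorded vote
    return len(vote_years) / years_active

voter_file = {
    "A. Professor": [[2000, 2002, 2004, 2006]],             # unique match
    "John Miller": [[2004], [2000, 2004], [2006], [2002]],  # four John Millers
}
print(votes_per_year("A. Professor", voter_file))  # 0.5  (4 votes over 8 years)
print(votes_per_year("John Miller", voter_file))   # None (ambiguous name: excluded)
```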
What do you think?
Posted by Eric Schwitzgebel at 3:06 PM 3 comments
Tuesday, June 17, 2008
Ethicists and Political Philosophers Vote Less Often, Apparently, Than Other Philosophers
I assume that voting in public elections is a duty (a duty that admits of excuses and exceptions, of course) and that it's morally better to vote conscientiously than not to vote.
In previous research, I've found that:
(1.) ethics books are more likely to be missing from academic libraries than other philosophy books (full essay here),
(2.) philosophy students at Zurich do not give increasing amounts to student charities as their education proceeds, and
(3.) (with Joshua Rust) a majority of philosophers think ethicists behave, on average, no better than non-ethicists of similar social background (full essay here).
With Josh Rust's and my current findings on voting patterns, that's now four consecutive studies suggesting that ethicists behave no better than, or maybe even worse than, comparable non-ethicists.
Looking at voter history data from California, Florida, North Carolina, and Washington State, we found voting rates among professors registered to vote:
Ethicists: 0.97 votes/year (227 records total)
Political philosophers (a subgroup of ethicists): 0.95 votes/year (96 records)
Non-ethicist philosophers: 1.07 votes/year (279 records)
Political scientists: 1.11 votes/year (244 records)
Other professors: 0.93 votes/year

The differences over .07 votes/year are statistically significant. The results are stable controlling for age, gender, ethnicity, state of residence, institution type, and political party. Controlling for rank doesn't substantially change the results, except that it raises the voting rate of the comparison group of "other professors" to a rate between that of ethicists and non-ethicists, so that it can't be said that philosophers vote more often than non-philosophers.
Now I'd have thought political philosophers, like political scientists, would be more engaged than average with the political process. Instead -- depressingly (to me; maybe you'll rejoice?) -- it seems that they're less engaged, at least if voting is taken as the measure of engagement.
When I face moral decisions -- decisions like "should I go out and vote even though I'd rather look for Weird Al videos on YouTube?" -- I often reflect on what I should do. I think about it; I weigh the pros and cons; I consider duties and consequences and what people I admire or loathe would do. I am implicitly and deeply committed to the value of reflection in making moral decisions and prompting moral behavior. To suppose that moral reflection is valueless is pretty dark, or at least pretty radical.
Yet if moral reflection does us moral good, you'd think that ethics professors, who are presumably champions of moral reflection, would themselves behave well -- or at least not worse!
(Josh Rust and I will be presenting these results as a poster at the Society for Philosophy and Psychology meeting next week. The full text of the poster will be available shortly on the Underblog.)
Update, June 26:
In the last couple of days, Josh and I were able to do a first analysis of new data from Minnesota. In that state, the ethicists and political philosophers appear to be so conscientious in their voting that it knocked the p-value of our main effect from .03 to .06 -- in other words, the trend in Minnesota was so strong in the other direction that we can no longer feel sufficiently confident (employing the usual statistical standards) that the trend we see for ethicists to vote less is not due simply to chance. So we should probably amend our thesis from "ethicists vote less" to the weaker "ethicists vote no more often". However, the Minnesota data also seem to introduce some potential confounds (such as that Minnesota philosophers seem to have unusual job stability) that complicate the interpretation and that we may want to try to compensate for statistically. So the final analysis isn't in!
Posted by Eric Schwitzgebel at 8:38 AM 19 comments
Labels: ethics professors, Joshua Rust, moral psychology
Friday, September 04, 2020
Randomization and Causal Sparseness
Suppose I'm running a randomized study: Treatment group A gets the medicine; control group B gets a placebo; later, I test both groups for disease X. I've randomized perfectly, it's double blind, there's perfect compliance, my disease measure is flawless, and no one drops out. After the intervention, 40% of the treatment group have disease X and 80% of the control group do. Statistics confirm that the difference is very unlikely to be chance (p < .001). Yay! Time for FDA approval!
There's an assumption behind the optimistic inference that I want to highlight. I will call it the Causal Sparseness assumption. This assumption is required for us to be justified in concluding that randomization has achieved what we want randomization to achieve.
So, what is randomization supposed to achieve?
Dice roll, please....
Randomization is supposed to achieve this: a balancing of other causal influences that might bear on the outcome. Suppose that the treatment works only for women, but we the researchers don't know that. Randomization helps ensure that approximately as many women are in treatment as in control. Suppose that the treatment works twice as well for participants with genetic type ABCD. Randomization should also balance that difference (even if we the researchers do no genetic testing and are completely oblivious to this influence). Maybe the treatment works better if the medicine is taken after a meal. Randomization (and blinding) should balance that too.
But here's the thing: Randomization only balances such influences in expectation. Of course, it could end up, randomly, that substantially more women are in treatment than control. It's just unlikely if the number of participants N is large enough. If we had an N of 200 in each group, the odds are excellent that the number of women will be similar between the groups, though of course there remains a minuscule chance (6 x 10^-61 assuming 50% women) that 200 women are randomly assigned to treatment and none to control.
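That minuscule figure assumes each of the 400 participants is independently assigned to treatment or control with probability 1/2, so the chance that all 200 women land in the treatment group is simply 2^-200:

```python
# Chance that all 200 women are randomly assigned to treatment when
# each participant is independently assigned with probability 1/2.
p_all_women_in_treatment = 0.5 ** 200
print(p_all_women_in_treatment)  # roughly 6.2e-61
```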
And here's the other thing: People (or any other experimental unit) have infinitely many properties. For example: hair length (cf. Rubin 1974), dryness of skin, last name of their kindergarten teacher, days since they've eaten a burrito, nearness of Mars on their 4th birthday....
Combine these two things and this follows: For any finite N, there will be infinitely many properties that are not balanced between the groups after randomization -- just by chance. If any of these properties are properties that need to be balanced for us to be warranted in concluding that the treatment had an effect, then we cannot be warranted in concluding that the treatment had an effect.
Let me restate in a less infinitary way: In order for randomization to warrant the conclusion that the intervention had an effect, N must be large enough to ensure balance of all other non-ignorable causes or moderators that might have a non-trivial influence on the outcome. If there are 200 possible causes or moderators to be balanced, for example, then we need sufficient N to balance all 200.
Treating all other possible and actual causes as "noise" is one way to deal with this. This is just to take everything that's unmeasured and make one giant variable out of it. Suppose that there are 200 unmeasured causal influences that actually do have an effect. Unless N is huge, some will be unbalanced after randomization. But it might not matter, since we ought to expect them to be unbalanced in a balanced way! A, B, and C are unbalanced in a way that favors a larger effect in the treatment condition; D, E, and F are unbalanced in a way that favors a larger effect in the control condition. Overall it just becomes approximately balanced noise. It would be unusual if all of the unbalanced factors A-F happened to favor a larger effect in the treatment condition.
That helps the situation, for sure. But it doesn't eliminate the problem. To see why, consider an outcome with many plausible causes, a treatment that's unlikely to actually have an effect, and a low-N study that barely passes the significance threshold.
Here's my study: I'm interested in whether silently thinking "vote" while reading through a list of registered voters increases the likelihood that the targets will vote. It's easy to randomize! One hundred get the think-vote treatment and another one hundred are in a control condition in which I instead silently think "float". I preregister the study as a one-tailed two-proportion test in which that's the only hypothesis: no p-hacking, no multiple comparisons. Come election day, in the think-vote condition 60 people vote and in the control condition only 48 vote (p = .04)! That's a pretty sizable effect for such a small intervention. Let's hire a bunch of volunteers?
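For readers who want to check the hypothetical numbers: a standard normal-approximation two-proportion z-test (one-tailed, as preregistered in the thought experiment) does give roughly p = .04. This is a generic textbook test, not a claim about how any actual analysis was run:

```python
from math import erf, sqrt

def one_tailed_two_proportion_p(x1, n1, x2, n2):
    """Normal-approximation z-test that group 1's rate exceeds group 2's."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 0.5 * (1 - erf(z / sqrt(2)))  # upper-tail normal probability

# 60/100 voters in the think-vote condition vs. 48/100 in control:
p = one_tailed_two_proportion_p(60, 100, 48, 100)
print(round(p, 3))  # roughly 0.04
```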
Suppose also that there are at least 40 variables that plausibly influence voting rate: age, gender, income, political party, past voting history.... The odds are good that at least one of these variables will be unequally distributed after randomization in a way that favors higher voting rates in the treatment condition. And -- as the example is designed to suggest -- it's surely more plausible, despite the preregistration, to think that that unequally distributed factor better explains the different voting rates between the groups than the treatment does. (This point obviously lends itself to Bayesian analysis.)
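A quick illustration of why the odds are good, under the simplifying assumption that the 40 covariates are independent and that we count one as problematically unbalanced when it falls in the 2.5% tail favoring higher voting in the treatment group:

```python
# Rough illustration (assuming 40 independent covariates): the chance
# that at least one is unbalanced toward the treatment group at the
# 5% level (2.5% in that particular direction) after randomization.
n_covariates = 40
p_one_direction = 0.025
p_at_least_one = 1 - (1 - p_one_direction) ** n_covariates
print(round(p_at_least_one, 2))  # roughly 0.64
```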
We can now generalize back, if we like, to the infinite case: If there are infinitely many possible causal factors that we ought to be confident are balanced before accepting the experimental conclusion, then no finite N will suffice. No finite N can ensure that they are all balanced after randomization.
We need an assumption here, which I'm calling Causal Sparseness. (Others might have given this assumption a different name. I welcome pointers.) It can be thought of as either a knowability assumption or a simplicity assumption: We can know, before running our study, that there are few enough potentially unbalanced causes of the outcome that, if our treatment gives a significant result, the effectiveness of the treatment is a better explanation than one of those unbalanced causes. The world is not dense with plausible alternative causes.
As the think-vote example shows, the plausibility of the Causal Sparseness assumption varies with the plausibility of the treatment and the plausibility that there are many other important causal factors that might be unbalanced. Assessing this plausibility is a matter of theoretical argument and verbal justification.
Making the Causal Sparseness assumption more plausible is one important reason we normally try to make the treatment and control conditions as similar as possible. (Otherwise, why not just trust randomness and leave the rest to a single representation of "noise"?) The plausibility of Causal Sparseness cannot be assessed purely mechanically through formal methods. It requires a theory-grounded assessment in every randomized experiment.
Posted by Eric Schwitzgebel at 8:28 AM 21 comments
Friday, June 04, 2010
Ethicists' vs Non-Ethicists' Honesty in Questionnaire Responses
In 2009, Josh Rust and I ran a survey asking hundreds of ethicists, non-ethicist philosophers, and comparison professors, first, a variety of questions about their views on ethical matters (e.g., vegetarianism, voting, staying in touch with one's mother) and second, about their own personal behavior on the same matters. (No identifying information was associated with the responses, of course.) Some previous posts on the survey are here and here and here.
Now one of the cool things about this study is that in some cases we also have data on actual behavior -- thus enabling a three-way comparison of normative attitude, self-reported behavior, and actual behavior (as you'll see in the links above).
Thus, a measure of honesty falls out of the questionnaire: How well are the self-reports related to the actual behavior? Actually, we have two different types of measures of honesty: For most of the topics on which we didn't have direct behavioral data, we asked two behavioral questions, one vague and easy to fudge, the other concrete and more difficult to fudge without explicit deceptive intent. So, for example, we asked how many meals per week the respondent ate the meat of a mammal (the fudgy question) and also whether she ate the meat of a mammal at her last evening meal, not including snacks (the concrete question). If, hypothetically, half the respondents who reported eating mammal meat at 3 or fewer meals per week also reported having eaten it at the last evening meal, we could infer that that group of respondents were fudging their answers.
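The logic of the fudge check can be spelled out with the post's own hypothetical numbers: someone who really eats mammal meat at three or fewer meals per week should report it at the last evening meal at most about 3/7 of the time, even on the extreme assumption that every such meal is a dinner:

```python
# Group-level fudge check with the post's hypothetical numbers.
claimed_meals_per_week = 3
max_expected_rate = claimed_meals_per_week / 7  # ~0.43, if all such meals were dinners
observed_rate = 0.50  # hypothetical: half report meat at the last evening meal

# An observed rate above the ceiling suggests fudging somewhere in the group
# (though it can't identify which individual respondents fudged).
print(observed_rate > max_expected_rate)  # True
```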
We created a composite of six types of suspicious (or demonstrably false) responses and we compared the rates of suspicious responding between the groups. The six measures were:
* comparison of self-reported number of votes since the year 2000 with actual voting records;
* comparison of claims to have voted in the Nov. 2008 U.S. general Presidential election with actual voting records;
* comparison of claims of 100% or 95% responsiveness to undergraduate emails with responsiveness to emails that we had sent that were designed to look as though they were from undergraduates (see here, and yes we got IRB approval);
* for philosophers only, comparison of claims of membership in the American Philosophical Association with membership records (excluded from the analysis in this post, but discussed here);
* comparison of a general claim about how often the respondent talks with her mother (if living) with a specific claim about date of last conversation;
* comparison of a general claim about how often the respondent donates blood (if eligible) with a specific claim about date of last donation;
* comparison of a general claim about meals per week at which the respondent eats the meat of a mammal with a specific claim about the last evening meal.

We found that all three groups showed similar rates of suspicious responding: 50% of ethicists, 49% of non-ethicist philosophers, and 49% of comparison professors gave at least one suspicious response -- variation well within chance, of course, given the number of respondents. (Remember that on the last three measures a suspicious response is not necessarily a lie or even an unconscious self-serving distortion, but only an answer or pattern of answering that seems more likely to be false or distortive than another pattern would be, when aggregated across respondents.)
Thus, as in previous research we failed to find any evidence that ethicists behave any better than other socially comparable non-ethicists.
This is assuming, of course, that lying or giving distorted answers on surveys like ours is morally bad. Now, as it happens, we asked our respondents about that very issue, and 87% (89% of ethicists) said it was morally bad to answer dishonestly on surveys like ours. We also had a measure of how bad they thought it was -- a 9-point scale from "very morally good" through "morally neutral" to "very morally bad". As it turned out, there was no statistically significant relationship between normative attitude and rate of suspicious responding.
Near the end of the survey, we also explicitly asked respondents whether they had answered any of the questions dishonestly. Few said they had, and answers to this question appeared to be unrelated to rates of suspicious responding: Among respondents with no suspicious-looking responses, 6 (2.2%) said they had answered dishonestly, compared to 7 (2.6%) of the respondents with at least one suspicious response.
Finally, we had asked the philosophy respondents whether they preferred a deontological, consequentialist, virtue ethical, or some other sort of normative ethical view. Deontologists are often portrayed as sticklers about lying -- Kant, the leading historical deontologist, was notoriously very strict on the point. However, we detected no difference in patterns of suspicious responding according to normative ethical view. To the extent there was a trend, it was for the consequentialists to be least likely to have suspicious or false responses (47%, vs. 56% for deontologists and 58% for virtue ethicists; this analysis includes the APA question).
Posted by Eric Schwitzgebel at 5:47 PM 21 comments
Labels: ethics professors
Friday, June 11, 2010
Do Kantians Really Behave Worse Than Other Ethicists?
As I noted in a previous post, Kantian ethicists seem to have a reputation among philosophers for behaving worse than other sorts of ethicists. But who has any systematic empirical data on this? Well, Josh Rust and I do!
Our data are based on a questionnaire Josh and I sent to hundreds of philosophers last year (more description and other results here, here, here, and here). The questionnaire asked first about the goodness or badness of theft from a friend, paying membership dues to the APA, voting in public elections, having regular conversations with your mother, eating the meat of mammals, being an organ donor, being a blood donor, responding to undergraduate emails, and donating to charity. Then, second, we solicited self-reports of these same behaviors (except for theft). In some cases we also have direct data about actual behavior. Near the end of the questionnaire, we asked about normative ethical view -- that is, about the philosopher's general theoretical approach to ethics. The response options were consequentialist, deontologist, virtue ethicist, skeptical, or no settled position.
(If you're not familiar with those terms: Consequentialists think, roughly, that one should act so as to produce the best expected consequences for everyone. Deontologists think, roughly, that one should act according to certain moral rules, such as don't lie or don't kill innocent people, even if you know that abiding by those rules won't produce the best consequences overall. Kant is currently the most eminent deontologist. Virtue ethicists think, roughly, that ethics is about having moral virtues like honesty, courage, and kindness.)
First, let's look at the general distribution of theoretical approach. Ethicists were more likely to be deontologists (29%) than consequentialists (10%), whereas non-ethicists were split about equally between those two positions (17% for both). One might wonder about causal direction here: Does seriously studying ethics tend to lead philosophers to abandon consequentialism? Or are consequentialists less likely to become ethicists? Or...? Rounding out the answers: 30% of ethicist and 27% of non-ethicist respondents espoused virtue ethics, 29% of ethicists and 33% of non-ethicists said they had no settled position, and 2% of ethicists and 6% of non-ethicists expressed skepticism. We detected no differences by age or gender.
There were two questions where I thought consequentialists, deontologists, and virtue ethicists would differ substantially in their normative opinions -- the meat question and the charity question. Consequentialists have a reputation for emphasizing the importance of charitable donation and vegetarianism (most famously Peter Singer). Surprisingly to me, however, the groups showed no difference in moral opinion on these questions. Only 52% of consequentialists rated "regularly eating the meat of mammals" toward the morally bad end of the scale, compared to 58% of deontologists and 53% of virtue ethicists -- well within statistical chance. Likewise, the mean percentage of income that respondents said that the typical professor should donate to charity was 7.5% (for consequentialists) vs. 7.4% (for both virtue ethicists and deontologists), again well within chance. Apparently, Singer's views aren't representative of consequentialists generally.
Virtue ethicists were less likely than others to say that it's morally bad not to be an organ donor (48% vs. 65% for consequentialists and deontologists) and that it's morally good to regularly donate blood (81% vs. 94% of consequentialists and 91% of deontologists). Virtue ethicists were also least likely to say it was good to pay membership dues to support one's main disciplinary society (59% vs. 80% of consequentialists and 74% of deontologists). Responses to the other normative questions seemed to be about the same for all groups.
Looking at self-reported and actual behavior: Virtue ethicists were least likely to belong to the APA (philosophers' main disciplinary society), based on our examination of the membership list: 58% vs. 65% of consequentialists and 74% of deontologists. However, they were just as likely to report being members (74% vs. 67% and 77%). Virtue ethicists were also least likely to report having an organ donor indicator on their driver's license (58% vs. 78% and 70%). On the other hand, the deontologists were the ones who reported the longest lapse of time since their last blood donation (among eligible donors): 1994 vs. 2001 for consequentialists and 2000 for virtue ethicists. (Does this have anything to do with Kant's odd views on the matter?) And consequentialists appeared to be more responsive to the survey's charity incentive: Half of the questionnaires went out with a promise that we would donate $10 to a charity of the respondent's choice (among six major charities) when the survey was returned. 67% of the consequentialists' returned surveys were charity-incentive surveys, compared to 46% of deontologists' and 51% of virtue ethicists'.
We found no other differences in self-reported behavior, or in actual measured behavior -- including voting rate in public elections (based on actual voting records), responsiveness to sham undergraduate emails we had sent, or detected honesty or dishonesty in their survey responses.
In sum, we found no evidence that deontologists -- many or most of whom are presumably Kantians, broadly speaking -- behave any worse than professors who favor other normative theories.
Posted by Eric Schwitzgebel at 8:37 AM 4 comments
Labels: ethics professors, moral psychology, psychology of philosophy
Monday, June 14, 2010
Do Metaethicists Really Behave Worse Than Other Ethicists?
On Friday, I presented data suggesting -- contrary to common opinion -- that deontologists behave no worse than virtue ethicists and consequentialists. Another opinion I've often heard from philosophers is that metaethicists -- that is, philosophers who focus on the most abstract general questions about ethics (such as whether there are moral truths at all and if so what their metaphysical grounding is) -- behave, on average, less well than do other ethicists, perhaps especially applied ethicists. The same data set provides evidence on this question too.
At the center of the dataset is a survey Josh Rust and I sent to hundreds of professional philosophers. Near the end of the survey we asked respondents, "If an ethics-related area is among your specializations, which of the following best reflects the level of abstraction at which you tend to consider ethical issues? (check all that apply)". Response options were "metaethics", "normative ethics" [i.e., theoretical debates about deontology vs. consequentialism, etc., an intermediate level of abstraction], "applied ethics", and "no ethics-related area among my specializations". 28% of philosophers claimed a specialization in metaethics, 45% in normative ethics, 32% in applied ethics (these three groups overlapping, of course), and 36% claimed no such specialization. Although there was a trend for the non-ethicists to be more male (78% vs. 72%) this was not statistically significant given our sample size (361 respondents). Non-ethicist respondents did tend to be a little younger (mean birthyear 1958 vs. 1954). We saw no gender or age differences by level of abstraction within ethics.
Looking at respondents' opinions on various applied ethical issues: Applied ethicists seemed to think it morally better to vote regularly than did the other groups; ethicists in general thought it morally worse than did non-ethicists not to keep in at least monthly telephone or face-to-face contact with one's mother; and ethicists asserted more of a duty to give to charity (13% of ethicists said it was not the case that the typical professor should donate to charity, vs. 24% of non-ethicists). We saw no detectable differences among the groups on the morality of belonging to one's main disciplinary society (for philosophers, the APA), eating the meat of mammals, being an organ or blood donor, or responding to student emails.
The groups did not differ in their self-reported rates of dues-paying membership in the APA, but looking directly at the APA membership lists, we found that applied ethicists were less likely than the other groups actually to be members (61% vs. 75%). Metaethicists tended to report voting in fewer public elections than did applied ethicists, and actual voting data obtained from public records appeared to bear out that trend (an estimated 1.26 votes/year for applied ethicists vs. 1.05 for metaethicists; I should clarify here that Josh and I "de-identified" the survey data, so we cannot make inferences about particular individuals' survey responses). Metaethicists also reported eating more mammal meat than did applied ethicists (mean 5.2 vs. 3.2 meals/week, and 50% vs. 31% reporting having eaten the meat of a mammal at the last evening meal). Metaethicists self-reported giving less to charity (mean 3.9% of income vs. 5.2%, excluding one applied ethicist who claimed to have given 500% of his income to charity), and they appeared to be less motivated by the survey's charity incentive. (Half of the survey recipients received a charity incentive: a promise by us, which we did fulfill, to give $10 to a charity of the respondent's choice among six major charities in return for the completed survey. 45% of returned metaethicists' surveys had the charity incentive, vs. 57% of applied ethicists'.) We found no differences in self-reported organ or blood donation, self-reported or actual responsiveness to undergraduate emails, overall rate of suspicious responding to the survey, or frequency of contact with one's mother (if living).
Obviously, there's room for difference of opinion about these measures, but my interpretation of the data is that they tend to weakly confirm the hypothesis that metaethicists behave not quite as morally well as do applied ethicists.
Posted by Eric Schwitzgebel at 11:16 AM 0 comments
Labels: ethics professors, moral psychology, psychology of philosophy, sociology of philosophy
Wednesday, March 16, 2011
The Self-Reported Moral Behavior of Ethics Professors
New essay in draft here, co-authored with Joshua Rust.
Abstract:
We examine the self-reported moral attitudes and moral behavior of 198 ethics professors, 208 non-ethicist philosophers, and 167 professors in departments other than philosophy on eight moral issues: academic society membership, voting, staying in touch with one's mother, vegetarianism, organ and blood donation, responsiveness to student emails, charitable giving, and honesty in responding to survey questionnaires. On some issues we also had direct behavioral measures that we could compare with self-report. Ethicists expressed somewhat more stringent normative attitudes on some issues, such as vegetarianism and charitable donation. However, on no issue did ethicists show significantly better behavior than the two comparison groups. Our findings on attitude-behavior consistency were mixed: Ethicists showed the strongest relationship between behavior and expressed moral attitude regarding voting but the weakest regarding charitable donation.
Warning: This essay is monstrously long -- 70 pages! In earlier (uncirculated) drafts we had tried to keep it to normal journal-article length, but eventually we decided to give up on that. It's a very complicated study, so it just takes some space to lay it all out properly.
Posted by Eric Schwitzgebel at 1:08 PM 2 comments
Labels: ethics professors, Joshua Rust, moral psychology
Monday, June 09, 2008
Political Affiliations of American Philosophers, Political Scientists, and Other Academics
As regular readers will know, I've been working hard over the last year thinking of ways to get data on the moral behavior of ethics professors. As part of this project, I have looked at the public voting records of professors in several states (California, Florida, North Carolina, Washington State, and soon Minnesota), on the assumption that voting is a civic duty. If so, we can compare the rates at which ethicists and non-ethicists perform this duty. Soon I'll start posting some of my preliminary analyses.
First, however, I thought you might enjoy some data on the political affiliation of professors in California, Florida, and North Carolina. (These states make party affiliation publicly available information.) Although U.S. academics are generally reputed to be liberal and Democratic, systematic data are sparser than one might expect. Here's what I found.
Among philosophers (375 records total):

Democrat: 87.2%
Republican: 7.7%
Green: 2.7%
Independent: 1.3%
Libertarian: 0.8%
Peace & Freedom: 0.3%

Among political scientists (225 records total):

Democrat: 82.7%
Republican: 12.4%
Green: 4.0%
Independent: 0.4%
Peace & Freedom: 0.4%

Among a comparison group drawn randomly from all other departments (179 records total):

Democrat: 75.4%
Republican: 22.9%
Independent: 1.1%
Green: 0.6%

By comparison, in California (from which the bulk of the data are drawn), the registration rates (excluding decline to state [19.4%]) are:

Democrat: 54.3%
Republican: 40.3%
Other: 5.3% [source]

Perhaps this accounts for my sense that if there's one thing that's a safe dinner conversation topic at philosophy conferences, it's bashing Republican Presidents.
Now I'm not sure 87.2% of professional philosophers would agree that there's good evidence the sun will rise tomorrow (well, that's a slight exaggeration, but we are an ornery and disputatious lot!), so why the virtual consensus about political party?
Conspiracy theories are out: There is no point in the job interview process, for example, at which you would discover the political leanings of an applicant who was not applying in political philosophy. We ask about research, teaching, and that's about it. Even when interviewing a political philosopher (and political philosophers are a small minority of philosophers), it will not always be evident whether the interviewee is "liberal" or "conservative", since her research will often be highly abstract or historical.
Self-interest also seems an insufficient explanation: Many professors are at private institutions, and few philosophy professors earn government grants, so even if Democrats are more supportive of funding for universities and research, many philosophy professors will at best profit very indirectly from that. Furthermore, it's not clear to me -- though I'm open to evidence on this -- that Democrats do serve professors' financial interests better than Republicans. For example, social services for the poor and keeping tuition low seem to have a higher priority among liberal Democrats in California than the salaries of professors.
Democrats might be tempted to flatter themselves with this explanation: Professors are smart and informed, and smart and informed people are rarely Republican. That would be interesting if it were true, and it's empirically explorable; but I suspect that in fact a better explanation has to do with the kind of values that lead one to go into academia and that an academic career reinforces -- though I find myself struggling now to discern exactly what those values are (tolerance of difference? more willingness to believe that knowledgeable people can direct society for the better? less respect for the pursuit of wealth as a career goal?).
Posted by Eric Schwitzgebel at 9:26 AM 20 comments
Labels: psychology of philosophy
Saturday, June 14, 2008
Political Scientists Vote More Often Than Other Professors
One theme of my recent research has been the moral behavior of ethics professors -- do they behave any better than others of similar social background? There's good reason to anticipate that they would: Presumably they care a lot and think a lot about morality, and one might hope (at least I would hope!) that would have a positive effect on their behavior.
However, some people don't think we should expect this. After all, doctors smoke, police commit crimes, economists invest badly. Whether they do so any less than anyone else is hard to assess. (However, the evidence I've seen so far suggests that doctors do smoke less and economists do invest better, contra the cynic. I don't know about police.)
Half a year ago I posted a couple of reflections on the lack of data regarding whether political scientists vote more often in public elections than other professors do (here and here). With perhaps more enthusiasm than wisdom, I decided to go out and get the data myself. Josh Rust and I (and some helpful RAs) gathered official voting histories of individuals in California, Florida, North Carolina, and Washington State (Minnesota pending) and matched those records with online information about professors in universities in those states. (The California data included only statewide elections; the other states include at least some local election data.) We looked at the years 2000-2007.
The data suggest that political scientists do vote more often, averaging 1.11 votes/year as opposed to 0.93 votes/year for a comparison group of professors drawn randomly from all other departments except philosophy.
We ruled out gender, political party, state of residence, age, ethnicity, and institution type (research-oriented vs. teaching-oriented) as explanatory factors. All of these factors either had no effect on vote rate (gender, party, institution type) or were balanced between the groups (state, age, ethnicity). The one factor that did have an effect and wasn't balanced between the groups was academic rank: Non-tenure-track faculty voted less often, and there were fewer tenure-track faculty in the comparison group than among the political scientists. However, even looking just at tenure-track faculty, political scientists still vote more: 1.12 votes/year for political scientists, 0.99 for comparison faculty. (Political science department affiliation also remains predictive of vote rate in multiple regression models including rank and other factors.)
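The key move in this analysis -- comparing vote rates within each rank stratum, so that rank can't drive the difference -- can be sketched in a few lines. All of the records and numbers below are invented for illustration; only the shape of the stratified comparison follows the post.

```python
from statistics import mean

# Hypothetical records: (dept, tenure_track, votes_per_year). All numbers
# are invented; the real data come from state voting records.
profs = [
    ("polisci", True, 1.15), ("polisci", True, 1.09),
    ("other",   True, 1.00), ("other",   True, 0.98),
    ("polisci", False, 0.90), ("polisci", False, 0.86),
    ("other",   False, 0.80), ("other",   False, 0.76),
]

def mean_rate(dept, tt):
    """Mean votes/year for one department group within one rank stratum."""
    return mean(v for d, t, v in profs if d == dept and t == tt)

# Compare within each rank stratum separately, so that any difference
# cannot be explained by the groups differing in rank.
for tt in (True, False):
    diff = mean_rate("polisci", tt) - mean_rate("other", tt)
    print("tenure-track" if tt else "non-tenure-track", round(diff, 2))
```

If political scientists outvote the comparison group within both strata, as in this toy data, rank alone can't account for the overall difference -- the same logic the post applies before turning to the full regression models.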
These data support my ethicists project in two ways: First, they show at least some relationship between professorial career choice and real-world behavior; and second, since voting is widely (and I think rightly) seen as a duty, it's a measure of one piece of moral behavior. We can see if ethicists (and perhaps especially political philosophers) are more likely to perform this particular duty than are non-ethicists. Results on that soon!
Posted by Eric Schwitzgebel at 9:02 AM 8 comments
Labels: ethics professors, Joshua Rust, moral psychology
Friday, July 25, 2008
In Philosophy, Women Move More Slowly to Tenure Than Do Men
It's well known (at least among feminist philosophers!) that only about 20% of philosophy professors are women. Surely this is partly due to a history of sexism in the discipline. The question is, does it also reflect current sexism?
No simple analysis could possibly settle that question, but here's one thought. If sexism is still prevalent in philosophy, we should expect women, on average, to move less quickly than men through the academic ranks -- from graduate student to non-tenure-track faculty to tenure-track Assistant Professor to tenured Associate or full Professor. It would then follow that women would be on average older than men at the lower ranks.
As it happens, the data Joshua Rust and I collected for our study of the voting rates of philosophers can be re-analyzed with this issue in mind.
For the voting study, we collected (among other things) academic rank data for most professors of philosophy in five states: California, Florida, Minnesota, North Carolina, and Washington State. Examining voter registration records, we found unambiguous name matches for 60.4% of those professors. Since four states (all but North Carolina) provided age data for registered voters, we were able to compare rank and age.
Overall, 23.1% of the philosophy professors in our study were female. The average birth years of men and women at each rank are:

Non-Tenure-Track: women 1958.1, men 1960.4
Assistant Professor: women 1965.3, men 1970.0
Tenured Professor: women 1955.3, men 1948.7

That the average male tenured professor is older than the average female tenured professor fits with the idea that the gender ratio in philosophy has improved over time; but that the average female untenured professor is older than the average male suggests that women are still slower to progress to tenure.
If you can bear with lists of numbers, the facts become clearer if we break down the data by birth year first, then gender and rank:

1900-1939 (54 profs.):
96% male (90% full, 10% assoc.)
4% female (50% full, 50% assoc.)

1940-1949 (100 profs.):
77% male (78% full, 13% assoc., 3% asst., 6% non-TT)
23% female (70% full, 9% assoc., 4% asst., 17% non-TT)

1950-1959 (104 profs.):
71% male (55% full, 28% assoc., 4% asst., 12% non-TT)
29% female (43% full, 27% assoc., 7% asst., 23% non-TT)

1960-1969 (99 profs.):
65% male (29% full, 39% assoc., 20% asst., 19% non-TT)
35% female (22% full, 39% assoc., 29% asst., 14% non-TT)

1970-1979 (57 profs.):
81% male (2% full, 11% assoc., 74% asst., 13% non-TT)
19% female (0% full, 18% assoc., 45% asst., 36% non-TT)

There's a general increase of representation of women in philosophy in the younger generations, but for almost all age groups women are underrepresented among full professors and overrepresented in the lower ranks. It seems to me that the natural interpretation is that although women are coming into philosophy at higher rates than they used to, they either progress more slowly through the ranks or enter philosophy later in their lives (which is perhaps just another way of progressing more slowly). Even the reversal of the gender ratio trend for women born in 1970 or later fits with this: Men may be more overrepresented in this group than in slightly older groups not because that generation has fewer women pursuing philosophy but rather because the men are completing their Ph.D.'s and moving into teaching more quickly.
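A breakdown of this kind can be produced mechanically once the records are matched. The sketch below uses invented example records (the real dataset is the five-state registration match described above) just to show the grouping-and-percentage step.

```python
from collections import Counter, defaultdict

# Hypothetical records of (birth_year, gender, rank) -- invented for
# illustration only.
profs = [
    (1948, "M", "full"), (1955, "F", "full"), (1962, "M", "assoc"),
    (1964, "F", "asst"), (1973, "M", "asst"), (1975, "F", "non-TT"),
    (1951, "M", "full"), (1966, "F", "assoc"),
]

def decade(year):
    """Map a birth year to a cohort label like '1960-1969'."""
    start = (year // 10) * 10
    return f"{start}-{start + 9}"

# Count ranks within each (cohort, gender) cell.
cells = defaultdict(Counter)
for year, gender, rank in profs:
    cells[(decade(year), gender)][rank] += 1

# Report each cell's rank distribution as percentages of the cell total.
for (cohort, gender), ranks in sorted(cells.items()):
    total = sum(ranks.values())
    pcts = {r: round(100 * n / total) for r, n in ranks.items()}
    print(cohort, gender, pcts)
```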
It doesn't follow straightaway that the cause of women's slower progression is sexism, of course. Childbearing and other factors may play a role. It's also encouraging, I think, to see the rank differences diminishing in the younger groups.
Update, July 30: Brian Leiter looks at the assistant professors from top departments twelve years ago, and Rob Wilson breaks it down by gender in the comments to this post.
Posted by Eric Schwitzgebel at 10:28 AM 16 comments
Friday, November 30, 2007
Do Political Scientists Vote More Often?
Well, I was hoping to work up a post today on the voting behavior of political scientists, but so far the only literature I can find on this is old and hideous -- and now I have to dash off to Cal State Long Beach to give a talk (on the moral behavior of ethicists)!
So a tidbit: Henry A. Turner and Charles B. Spaulding (1969) mailed questionnaires to academics in various disciplines, asking people about their voting histories. 61% of the questionnaires were returned (ah, the good old days!). 89% of the respondents said they voted in 1956 and 91% of the respondents said they voted in 1960. Were political scientists the most likely to have said they voted? Nope! Geologists were (95% and 97% in the two elections). The methodological shortcomings of this study are left as an exercise for the reader.
Chasing threads through citation databases, I found a cluster of articles in the same general vein in the 1960s and 1970s -- mostly focusing on the party affiliations of the respondents (overwhelmingly Democrat in the humanities). Then the citation thread peters out....
Hopefully next week I can dig up something more recent and methodologically better. Or will it be up to me? Surely someone must have studied whether political scientists actually vote!
(You ask why I care? Well, besides its being intrinsically interesting, I need a comparison group for when I go hunt down the data on whether political philosophers are more likely than others to vote.)
Posted by Eric Schwitzgebel at 10:13 AM 0 comments
Tuesday, November 15, 2016
Three Ways to Be Not Quite Free of Racism
Suppose that you can say, with a feeling of sincerity, "All races and colors of people deserve equal respect". Suppose also that when you think about American Blacks or South Asians or Middle Eastern Muslims you don't detect any feelings of antipathy, or at least any feelings of antipathy that you believe arise merely from consideration of their race. This is good! You are not an all-out racist in the 19th-century sense of that term.
Still, you might not be entirely free of racial prejudice, if we took a close look at your choices, emotions, passing thoughts, and swift intuitive judgments about people.
Imagine then the following ideal: being free of all unjustified racial prejudice. We can imagine similar ideals for classism, ableism, sexism, and for prejudice based on ethnicity, subculture, physical appearance, etc.
It would be a rare person who met all of these ideals. Yet not all falling short is the same. The recent election has made vivid for me three importantly distinct ways in which one can fall short. I use racism as my example, but other failures of egalitarianism can be analyzed similarly.
Racism is an attitude. Attitudes can be thought of as postures of the mind. To have an attitude is to be disposed to act and react in attitude-typical ways. (The nature of attitudes is a central part of my philosophical research. For a fuller account of my view, see here.) Among the dispositions constitutive of all-out racism are: making racist claims, purposely avoiding people of that race, uttering racist epithets in inner speech, feeling negative emotions when interacting with that race, leaping quickly to negative conclusions about individual members of that race, preferring social policies that privilege your preferred race, etc.
An all-out racist would have most or all of these dispositions (barring "excusing conditions"). Someone completely free of racism would have none of these dispositions. Likely, the majority of people in our culture inhabit the middle.
But "the middle" isn't all the same. Here are three very different ways of occupying it.
(1.) Implicit racism. Some of the relevant dispositions are explicitly or overtly racist -- for example, asserting that people of the target race are inherently inferior. Other dispositions are only implicitly or covertly racist, for example, being prone without realizing it to evaluate job applications more negatively if the applicant is of the target race, or being likely to experience negative emotion upon being assigned a cooperative task with a person of the target race. Recent psychological research suggests that many people in our culture, even if they reject explicitly racist statements, are disposed to have some implicitly racist reactions, at least occasionally or in some situations. We can thus construct a portrait of the "implicit racist": Someone who sincerely disavows all racial prejudice, but who nonetheless has a wide-ranging and persistent tendency toward implicitly racist reactions and evaluations. Probably no one is a perfect exemplar of this portrait, with all and only implicitly racist reactions, but it is probably common for people to match it to a certain extent. To that extent, whatever it is, that person is not quite free of implicit racism.
Implicit racism has received so much attention in the recent psychological and philosophical literature that one might think that it is the only way to be not quite free of racism while disavowing racism in the 19th-century sense of the term. Not so!
(2.) Situational racism. Dispositions manifest only under certain conditions. Priscilla (name randomly chosen) is disposed sincerely to say, if asked, that people of all races deserve equal respect. Of course, she doesn't actually spend the entire day saying this. She is disposed to say it only under certain conditions -- conditions, perhaps, that assume the continued social disapproval of racism. It might also be the case that under other conditions she would say the opposite. A person might be disposed sincerely to reject racist statements in some contexts and sincerely to endorse them in other contexts. This is not the implicit/explicit division. I am assuming both sides are explicit. Nor am I imagining a change in opinion over time. I am imagining a person like this: If situation X arose she would be explicitly racist, while if situation Y arose she would be explicitly anti-racist, maybe even passionately, self-sacrificingly so. This is not as incoherent as it might seem. Or if it is incoherent, it is a commonly human type of incoherence. The history of racism suggests that perfectly nice, non-racist-seeming people can change on a dime with a change in situation, and then change back when the situation shifts again. For some people, all it might take is the election of a racist politician. For others, it might take a more toxically immersive racist environment, or a personal economic crisis, or a demanding authority, or a recent personal clash with someone of the target race.
(3.) Racism of indifference. Part of what prompted this post was an interview I heard with someone who denied being racist on the grounds that he didn't care what happened to Black people. This deprioritization of concern is in principle separable from both implicit racism and situational racism. For example: I don't think much about Iceland. My concerns, voting habits, thoughts, and interests instead mostly involve what I think will be good for me, my family, my community, my country, or the world in general. But I'm probably not much biased against Iceland. I have mostly positive associations with it (beautiful landscapes, high literacy, geothermal power). Assuming (contra Mozi) that we have much greater obligations to family and compatriots than to people in far-off lands, my habit of not highly prioritizing the welfare of people in Iceland probably doesn't deserve to be labeled pejoratively with an "-ism". But a similar disregard or deprioritization of people in your own community or country, on grounds of their race, does deserve a pejorative label, independent of any implicit or explicit hostility.
These three ways of being not quite free of racism are conceptually separable. Empirically, though, things are likely to be messy and cross-cutting. Probably the majority of people don't map neatly onto these categories, but have a complex set of mixed-up dispositions. Furthermore, this mixed-up set probably often includes both racist dispositions and, right alongside, dispositions to admire, love, and even make special sacrifices for people who are racialized in culturally disvalued ways.
It's probably difficult to know the extent to which you yourself fail, in one or more of these three ways, to be entirely free of racism (sexism, ableism, etc.). Implicitly racist dispositions are by their nature elusive. So also is knowledge of how you would react to substantial changes in circumstance. So also are the real grounds of our choices. One of the great lessons of the past several decades of social and cognitive psychology is that we know far less than we think we know about what drives our preferences and about the situational influences on our behavior.
I am particularly struck by the potentially huge reach of the bigotry of indifference. Action is always a package deal. There are always pros and cons, which need to be weighed. You can't act toward one goal without simultaneously deprioritizing many other possible goals. Since it's difficult to know the basis of your prioritization of one thing over another, it is possible that the bigotry of indifference permeates a surprising number of your personal and political choices. Though you don't realize it, it might be the case that you would have felt more call to action had the welfare of a different group of people been at stake.
[image source Prabhu B Doss, creative commons]
Posted by Eric Schwitzgebel at 8:41 AM 27 comments
Labels: attitudes, culture, moral psychology
Wednesday, May 04, 2016
Possible Architectures of Group Minds: Perception
My favorite animal is the human. My favorite planet is Earth. But it's interesting to think, once in a while, about other possible advanced psychologies.
Over the course of a few related posts, I'll consider various possible architectures for superhuman group minds. Such minds regularly appear in science fiction -- e.g., Star Trek's Borg and the starships in Ann Leckie's Ancillary series -- but rarely do these fictions make the architecture entirely clear.
One cool thing about group minds is that they have the potential to be spatially distributed. The Borg can send an away team in a ship. A starship can send the ancillaries of which it is partly composed down to different parts of the planet's surface. We normally think of social groups as having separate minds in separate places, which communicate with each other. But if mentality (instead or also) happens at the group level, then we should probably think of it as a case of a mind with spatially distributed sensory receptors.
(Elsewhere, I've argued that ordinary human social groups might actually be spatially distributed group minds. We'll come back to that in a future post, I hope.)
So how might perception work, in a group mind?
Central Versus Distributed Perceptual Architecture:
For concreteness, suppose that the group mind is constituted by twenty groups of ten humanoids each, distributed across a planet's surface, in contact via relays through an orbiting ship. (This is similar to Leckie's scenario.)
If the architecture is highly centralized, it might work like this: Each humanoid aims its eyes (or other sensory organs) toward a sensory target, communicating its full bandwidth of data back up to the ship for processing by the central cognitive system (call it the "brain"). This central brain synthesizes these data as if it had two hundred pairs of eyes across the planet, using information from each pair to inform its understanding of the input from the other pairs. For example, if the ten humanoids in Squad B are flying in a sphere around an airplane, each viewing the airplane from a different angle, the central brain forms a fully three-dimensional percept of that airplane from all ten viewing angles at once. The central brain might then direct humanoid B2 to turn its eyes to the left because of some input from B3 that makes that viewpoint especially relevant -- something like how, when you hear a surprising sound to your left, you spontaneously turn your eyes in that direction, swiftly and naturally coordinating your senses.
Two disadvantages of this architecture are the bandwidth of information flow from the peripheral humanoids to the central brain and the possible delay of response to new information, as messages are sent to the center, processed in light of the full range of information from all sources, and then sent back to the periphery.
A more distributed architecture puts more of the information processing in the humanoid periphery. Each humanoid might process its sensory input as best it can, engaging in further sensory exploration (e.g., eye movements) in light of only its own local inputs, and then communicate summary results to the others. The central brain might do no processing at all but be only a relay point, bouncing all 200 streaming messages from each humanoid to the others with no modification. The ten humanoids around the airplane might then each have a single perspectival percept of the plane, with no integrated all-around percept.
Obviously, a variety of compromises are possible here. Some processing might be peripheral and some might be central. Peripheral sources might send both summary information and high-bandwidth raw information for central processing. Local sensory exploration might depend partly on information from others in the same group of ten, from the other 19 groups of ten, or from the central brain.
At the extreme end of central processing, you arguably have just a single large being with lots of sensory organs. At the extreme end of peripheral processing, you might not want to think about the system as a "group mind" at all. The most interesting group-mind-ish cases have both substantial peripheral processing and substantial control of the periphery either by the center or by other nodes in the periphery, with a wide variety of ways in which this might be done.
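The two extremes can be caricatured in a few lines of code. This is only an illustrative sketch under invented names (none of it comes from the scenario above beyond the idea itself): the centralized brain pools every raw stream into one integrated percept, while the distributed version leaves each humanoid with only its own percept plus the others' summaries.

```python
# Toy sketch of the two architectural extremes. All names and the toy
# "percept" format are my own illustrative inventions.

def centralized(raw_streams):
    """Full central processing: every humanoid forwards its raw sensory
    stream, and a single central brain integrates them all at once."""
    percept = set()
    for stream in raw_streams:
        percept |= stream          # one unified percept from every viewpoint
    return percept

def distributed(raw_streams):
    """Full peripheral processing: each humanoid keeps only its own
    locally processed percept plus the others' summaries; the center
    is a mere relay."""
    summaries = [frozenset(s) for s in raw_streams]   # local processing only
    return [{"own": own, "others": summaries} for own in summaries]

# Four viewpoints on the airplane, each seeing different features:
views = [{"nose"}, {"tail"}, {"left wing"}, {"right wing"}]
whole = centralized(views)    # single integrated percept of all features
local = distributed(views)    # four separate perspectival percepts
```

In the centralized case there is one representation and no humanoid-level percepts at all; in the distributed case there are four perspectival percepts and no integrated all-around one.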
Perceptual Integration and Autonomy:
I've already suggested one high integration case: having a single spherical percept of an airplane, arising from ten surrounding points of view upon it. The corresponding low integration case is ten different perspectival percepts, one for each of the viewing humanoids. In the first case, there's a single coherent perceptual map that smoothly integrates all the perceptual inputs; in the second case each humanoid has its own distinct map (perhaps influenced by knowledge of the others' maps).
This difference is especially interesting in cases of perceptual conflict. Consider an olfactory case: The ten humanoids in Squad B step into a meadow of uniform-looking flowers. Eight register olfactory input characteristic of roses. Two register olfactory input characteristic of daffodils. What to do?
Central dictatorship: All ten send their information to the central brain. The central brain, based on all of the input, plus its background knowledge and other sorts of information, makes a decision. Maybe it decides roses. Maybe it decides daffodils. Maybe it decides that there's a mix of roses and daffodils. Maybe it decides it is uncertain, and the field is 80% likely to be roses and 20% likely to be daffodils. Whatever. It then communicates this result to each of the humanoids, who adopt it as their own local action-guiding representation of the state of the field. For example, if the central brain says "roses", the two humanoids registering daffodil-like input nonetheless represent the field as roses, with no more ambivalence about it than any of the other humanoids.
Winner-take-all vote: There need be no central dictatorship. Eight humanoids might vote roses versus two voting daffodils. Roses wins, and this result becomes equally the representation of all.
Compromise vote: Eight versus two. The resulting shared representation is either a mix of the two flowers, with roses dominating, or some feeling of uncertainty about whether the field is roses (probably) or instead daffodils (possible but less likely).
Retention of local differences: Alternatively, each individual humanoid might retain its own locally formed opinion or representation even after receiving input from the group. A daffodil-smeller might then have a representation something like this: To me it smells like daffodils, even though I know that the group representation is roses. How this informs that humanoid's future action might vary. On a more autonomous structure, that humanoid might behave like a daffodil-smeller (maybe saying, "Ah, it's daffodils, you guys! I'm picking one to take back to the daffodil-loving Queen of Mars"), or it might be more deferential to the group (maybe saying, "I know my own input suggests daffodils, but I give that input no more weight than I would give to the input of any other member of the group").
Finally, no peripheral representation at all: An extremely centralized system might involve no perceptual representations at all in the humanoids, with all behavior issuing directly from the center.
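The aggregation options above can be sketched as simple functions over Squad B's ten local readings. A toy model only; the function names and data format are my own hypothetical choices, not anything from the scenario itself.

```python
from collections import Counter

readings = ["roses"] * 8 + ["daffodils"] * 2  # Squad B's ten local inputs

def dictatorship(readings, central_verdict):
    # The central brain decides; every humanoid adopts its verdict wholesale.
    return [central_verdict] * len(readings)

def winner_take_all(readings):
    # Majority vote; the winning percept becomes everyone's representation.
    winner = Counter(readings).most_common(1)[0][0]
    return [winner] * len(readings)

def compromise(readings):
    # Shared graded representation: each hypothesis weighted by its support.
    counts = Counter(readings)
    return {flower: n / len(readings) for flower, n in counts.items()}

def retain_local(readings):
    # Each humanoid keeps its own percept alongside the group tally.
    group = Counter(readings)
    return [{"mine": r, "group": group} for r in readings]
```

On this sketch, `winner_take_all` gives all ten humanoids "roses"; `compromise` gives everyone the shared uncertain representation `{"roses": 0.8, "daffodils": 0.2}`; and `retain_local` leaves the two daffodil-smellers with their dissenting percepts plus knowledge of the group tally.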
Conceptual Versus Perceptual:
There's an intuitive distinction between knowing something conceptually or abstractly and having a perceptual experience of that thing. This is especially vivid in cases of known illusion. Looking at the Müller-Lyer illusion, you know (conceptually) that the two lines minus the tails are the same length, but that's not how you (perceptually) see it.
The conceptual/perceptual distinction can cross-cut most of the architectural possibilities. For example, the minority daffodil smeller might perceptually experience the daffodils but conceptually know that the group judgment is roses. Alternatively, the minority daffodil smeller might conceptually know that her own input is daffodils but perceptually experience roses.
Counting Streams of Experience:
If the group is literally phenomenally conscious at the group level, then there might be 201 streams of experience (one for each humanoid, plus one for the group); or there might be only one stream of experience (for the group); or streams of experience might not be cleanly individuated, with 200 semi-independent streams; or something else besides.
The dictatorship, etc., options can apply to the group-level stream, as well as to the humanoid-level streams, perhaps with different results. For example, the group stream of consciousness might be determined by compromise vote (80% roses), while the humanoid streams of experience retain their local differences (some roses, some daffodils).
To Come:
Similar issues arise for group level memory, goal-setting, inferential reasoning, and behavior. I'll work through some of these in future posts.
I also want to think about the moral status of the group and the individuals, under different architectural setups -- that is, what sorts of rights or respect or consideration we owe to the individuals vs. the group, and how that might vary depending on the set-up.
Posted by Eric Schwitzgebel at 10:26 AM 5 comments
Labels: consciousness, science fiction, speculative fiction, USA consciousness
Thursday, December 30, 2010
Nazi Philosophers
Recently, I've done a fair bit of work on the moral behavior of ethics professors (mostly with Josh Rust). We consistently find that ethics professors behave no better than socially comparable non-ethicists. So far, the moral violations we've examined are mostly minor: stealing library books, not voting in public elections, neglecting student emails. One might argue that even if ethicists behave no better in such day-to-day ways, on grand issues of moral importance -- decisions that reflect one's overarching worldview, one's broad concern for humanity, one's general moral vision -- they show greater wisdom.
Enter the Nazis.
Nazism is an excellent test case of the grand-wisdom hypothesis for several reasons: For one thing, everyone now agrees that Nazism is extremely morally odious; for another, Germany had a robust philosophical tradition in the 1930s and excellent records are available on individual professors' participation in or resistance to the Nazi movement. So we can ask: Did a background in philosophical ethics serve as any kind of protection against the moral delusions of Nazism? Or were ethicists just as likely to be swept up in noxious German nationalism as were others of their social class? Did reading Kant on the importance of treating all people as "ends in themselves" (and the like) help philosophers better see the errors of Nazism or, instead, did philosophers tend to appropriate Kant for anti-Semitic and expansionist ends?
Heidegger's involvement with Nazism is famous and much discussed, but I see him as just a single data point. There were, of course, also German philosophers who opposed Nazism. My question is quantitative: Were philosophers any more likely to oppose Nazism -- or any less likely to be enthusiastic supporters -- than were other academics? I'm not aware of any careful, quantitative attempts to address this question (please do let me know if I'm missing something). It can't be an entirely straightforward bean count, because dissent was dangerous and the pressures on philosophers were surely not the same as the pressures on academics in other departments -- probably the pressures were greater than on fields less obviously connected to political issues -- but we can at least start with a bean count.
There's a terrific resource on philosophers' involvement with Nazism: George Leaman's Heidegger im Kontext, which contains a complete list of all German philosophy professors from 1932 to 1945 and provides summary data on their involvement with or resistance to Nazism. I haven't yet found a similar resource for comparison groups of other professors, but Leaman's data are nonetheless interesting.
In Leaman's data set, I count 179 professors with "habilitation" in 1932 when the Nazis started to ascend to power (including Dozents and ausserordentlichers but not assistants). (Habilitation is an academic achievement after the Ph.D., without an equivalent in Britain or the U.S., with requirements roughly comparable to gaining tenure in the U.S.) I haven't attempted to divide these professors, yet, into ethicists vs. non-ethicists, so the rest of this post will just look at philosophers as a group. Of these, 58 (32%) joined the Nazi Party, the SA, or the SS. Jarausch and Arminger (1989) estimate that the percentage of university faculty in the Nazi party was between 21% and 25%. Philosophers were thus not underrepresented in the Nazi party.
The tricky questions come after this first breakdown: To what extent did joining the party reflect enthusiasm for its goals vs. opportunism vs. a reluctant decision under pressure?
I think we can assume that membership in the SA or SS reflects either enthusiastic Nazism or an unusual degree of self-serving opportunism: Membership in these organizations reflected considerable Nazi involvement and was by no means required for continuation in a university position. Among philosophers with habilitation in 1932, two (1%) joined the SS and another 20 (11%) joined (or were already in) the SA (one philosopher joined both), percentages approximately similar to the overall academic participation in those organizations. However, I suspect this estimate substantially undercounts enthusiastic Nazis, since a number of philosophers (including briefly Heidegger) appear to have gone beyond mere membership to enthusiastic support through their writings. I haven't yet attempted to quantify this -- though one further possible measure is involvement with Alfred Rosenberg, the notorious Nazi racial theorist. Combining the SA, SS, and Rosenberg associates yields a minimum of 30 philosophers (17%) on the far right side of Nazism, not even including those who received their university posts after the Nazis rose to power (and thus perhaps partly because of their Nazism).
What can we say about the philosophers who were not party members? Well, 22 (12% of the 179 habilitated philosophers) were Jewish. Another 52 (29%) were deprived of the right to teach, imprisoned, or otherwise severely penalized by the Nazis for Jewish family connections or political unreliability (often both). It's somewhat difficult to tease apart how many of this latter group took courageous stands vs. found themselves insufferable to the Nazis due to family connections or previous political commitments outside of their control. One way to look at the data is this: Among the 157 non-Jewish habilitated philosophy professors, 37% joined the Nazi party and 30% were severely penalized by the Nazis (this second number excludes 5 people who were Nazi party members and also severely penalized), leaving 33% as what we might call "coasters" -- those who neither joined the party nor incurred severe penalty. Most of these coasters had at least token Nazi affiliations, especially with the NSLB (the Nazi organization of teachers), but probably NSLB affiliation alone did not reflect much commitment to the Nazi cause.
Membership in the Nazi party would not reflect a commitment to Nazism (or, also problematic, an unusually strong opportunistic willingness to fake commitment to further one's career) if joining the party was necessary simply to get along as a professor. The fact that about a third of professors could be "coasters" suggests that token gestures of Nazism, rather than actual Nazi party membership, were sufficient for getting along, as long as one did not actively protest or have Jewish affiliations. Nor were the coasters mostly old men on the verge of retirement (though there was a wave of retirements in 1933, the year the Nazis assumed power). If we include only the subset of 107 professors who were not Jewish, habilitated by 1932, and continuing to teach past 1940, we still find 30% coasters (28% if we exclude two emigrants).
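The percentages in the breakdown of the 157 non-Jewish professors can be rechecked with a few lines of arithmetic (counts taken from Leaman's figures as reported above; the five party members who were also severely penalized are counted once, on the party side):

```python
total = 179          # habilitated philosophy professors in 1932
jewish = 22
non_jewish = total - jewish          # 157
party = 58                           # Nazi party, SA, or SS members
penalized = 52 - 5                   # severely penalized, excluding the 5
                                     # who were also party members
coasters = non_jewish - party - penalized

print(round(100 * party / non_jewish))      # 37% joined the party
print(round(100 * penalized / non_jewish))  # 30% severely penalized
print(round(100 * coasters / non_jewish))   # 33% "coasters"
```

The three groups partition the 157 exactly, which is why the 37% / 30% / 33% figures sum (with rounding) to 100%.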
Here's what I tentatively conclude from this evidence: Philosophy professors were not forced to join the Nazi party. However, a substantial proportion did so voluntarily, either out of enthusiasm or opportunistically for the sake of career advancement. A substantial minority, at least 19% of the non-Jews, occupied the far right of the Nazi party, as reflected by membership in the SS, SA, or association with Rosenberg. Regardless of how the data look for other academic disciplines, it seems unlikely that we will be able to conclude that philosophers tended to avoid Nazism. Nonetheless, given that 30% of non-Jewish philosophers were severely penalized by the Nazis (including one executed for resistance and two who died in concentration camps), it remains possible that philosophers are overrepresented among those who resisted or were ejected.
Posted by Eric Schwitzgebel at 5:08 PM 32 comments
Labels: ethics professors
Tuesday, August 18, 2009
Philosophers' Honesty in Responding to Questionnaires
Last week, and in various previous posts, I've discussed a questionnaire Josh Rust and I sent to several hundred ethicist and non-ethicist professors (both inside and outside philosophy), soliciting self-reports of their moral attitudes and moral behavior on a variety of issues, such as vegetarianism and voting. Our guiding question: Do ethicists behave any better, or any more in accord with their espoused principles, than do non-ethicists? Based on our analyses so far, it doesn't look like ethicists' behavior is any better.
You might wonder, though -- as I do -- how honestly our survey respondents are answering our questions. Are those who have behaved (at least by their own lights) less than ideally well really going to report that fact, even in an anonymous survey like ours? Maybe ethicists really do behave better than non-ethicists but don't look that way because they respond more honestly. Josh and I tried to get a handle on this, in part, by asking a few questions whose answers we could verify. Respondents' honesty on these questions might help us estimate the honesty of their responses overall. Since honesty, of course, is also a moral behavior, it merits examination in its own right.
We asked one question whose answer we could directly verify for all philosophy professors: whether they were dues-paying members of the American Philosophical Association. (The APA publishes an annual list of members, which includes people up to 10 months late with their dues.) Among the philosopher respondents, 138 non-ethicists and 128 ethicists were listed by the APA as members. Of the remaining 59 non-ethicist respondents -- that is, those not on the APA membership list -- 23 (39.0%) claimed to be members. Of the remaining 61 ethicist respondents, 27 (44.3%) claimed to be members. In other words, nearly half of the respondents with the arguably immoral behavior (free-riding by not belonging to the APA) denied that behavior.
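As a quick arithmetic check on those denial rates (the variable names are mine, the counts are from the paragraph above):

```python
# Respondents not on the APA membership list who nonetheless claimed
# membership:
non_ethicists_off_list, non_ethicists_claiming = 59, 23
ethicists_off_list, ethicists_claiming = 61, 27

print(round(100 * non_ethicists_claiming / non_ethicists_off_list, 1))  # 39.0
print(round(100 * ethicists_claiming / ethicists_off_list, 1))          # 44.3
# Pooled across both groups:
print(round(100 * (23 + 27) / (59 + 61), 1))                            # 41.7
```

Pooled, 50 of the 120 apparent non-members claimed membership -- the "nearly half" figure.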
The APA's list is not perfect, I'm sure, and people's memories are sometimes fallible for reasons entirely innocent, but it seems plausible to me that much of the effect here is due to culpable inaccuracies -- even if not deliberate lying, a blameworthy bias toward misremembering and misportraying oneself in a positive light. (More attributable, probably, to purely innocent error, either by the respondents or the APA, are the 4% of respondents -- 7 ethicists and 8 non-ethicists -- who were on the APA's lists but did not claim to be members.)
Of course, it's disputable whether philosophy professors should, morally speaking, belong to the APA. In the attitudinal part of the survey, a majority of philosophers (64.7%) said it was morally good to "regularly [pay] membership dues to support one's main academic disciplinary society (the APA, the MLA, etc., as appropriate)", but that left a substantial minority who said it was morally neutral (very few said it was bad). Non-members who claimed to be members may have been somewhat more likely to say it is morally good to support the APA through one's membership dues than were the non-members who truly stated that they were non-members, but if so, the trend was modest (62.0% vs. 52.9%), and not statistically significant, given our relatively small sample of APA non-members.
So the answer to our question about how accurately philosophers portrayed their negative behavior in our survey appears to be: not very accurately at all. Nor do ethicists seem any more honest; in fact the trend (not statistically significant) was toward less honesty. This also fits with professors' evident exaggeration, in our survey, of their responsiveness to undergraduate emails (with ethicists appearing just as prone to such exaggeration). Josh and I have some other tests of honesty, too, not all analyzed, which I'll discuss later.
Incidentally, near the end of the questionnaire we asked about the morality of "responding dishonestly to survey questions such as the ones presented here" and also "Were you dishonest in your answers to any previous questions?" Those who appear to have falsely claimed APA membership trended, if anything, toward being more likely than those who truly stated their non-membership to say it is bad to respond to such questions dishonestly (93.2% vs. 85.3%, chi-square p = .17). Also, 2 of 49 in the first group (4.1%) and 3 of 65 in the second group (4.6%) admitted having answered a survey question dishonestly.
Posted by Eric Schwitzgebel at 3:15 PM 0 comments
Labels: ethics professors, psychological methods
Thursday, January 07, 2010
Might Ethicists Behave More Permissibly but Also No Better?
I've been thinking a fair bit about the relationship between moral reflection and moral behavior -- especially in light of my findings suggesting that ethicists behave no better than non-ethicists of similar social background. I've been working with the default assumption that moral reflection can and often does improve moral behavior; but I'm also inclined to read the empirical evidence as suggesting that people who morally reflect a lot don't behave, on average, better than those who don't morally reflect very much.
Those two thoughts can be reconciled if, about as often as moral reflection is morally salutary, it goes wrong in one of the following ways:
* it leads to moral skepticism or nihilism or egotism,
* it collapses into self-serving rationalization, or
* it reduces our ability to respond unreflectively in good ways.
But all this is rather depressing, since it suggests that if my aim is to behave well, there's no point in morally reflecting -- the downside is as big as the upside. (Or it is, unless I can find a good way to avoid those risks, and I have no reason to think I'm a special talent.)
But it occurs to me now that the following empirical claim might be true: The majority of our moral reflection concerns not what it would be morally good to do but rather whether it's permissible to do things that are not morally good. So, for example, most people would agree that donating to well-chosen charities and embracing vegetarianism would be morally good things to do. (On vegetarianism: Even if animals have no rights, eating meat causes more pollution.) When I'm reflecting morally about whether to eat the slightly less appealing vegetarian dish or to donate money to Oxfam -- or to kick back instead of helping my wife with the dishes -- I'm not thinking about whether it would be morally good to do those things. I take it for granted that it would be. Rather, I'm thinking about whether not doing those things is morally permissible.
So here, then, is a possibility: Those who reflect a lot about ethics have a better sense of which morally-less-than-ideal things really are permissible and which are not. This might make them behave morally worse in some cases -- for example, when most people do what is morally good but not morally required, mistakenly thinking it is required (e.g., voting? returning library books?); and it might make them behave morally better in others (e.g., vegetarianism?). On average, they might behave just about as well as non-ethicists, doing less that is supererogatory but better meeting their moral obligations. If so, then philosophical moral reflection might be succeeding quite well in its aim of regulating behavior without actually improving it, no skepticism or nihilism or rationalization or injury of spontaneous reactions required.
Posted by Eric Schwitzgebel at 1:09 PM 9 comments
Labels: ethics professors, moral psychology