Tuesday, June 24, 2008
What Is a Word, If a Baby Can Say It?
When my son Davy (who's now eight) was about ten months old he'd cruise around holding onto the couch saying "da da da da". He'd make the same "da da da" when he wanted to play with me. As an eager parent looking for milestones, I wondered if this was his first word but decided it didn't qualify. Soon he added "dis" to his repertoire: He'd point to something and say "dis", seemingly happy if I named what he was pointing at, but sometimes wanting more. Was that a word? I asked an expert on infant language. Emphatically (almost tyrannically), she said no. Then more gently she added that developmental psychologists generally didn't take such seeming-words seriously until there were at least ten of them. Then they were words.
But ten of what? A recent hot trend in certain circles is teaching babies sign language: For example, if a baby makes a fist with one hand, that means "milk"; if she brings her fingertips together, that means "more". Of course we don't want to be prejudiced against sign language: Words needn't be spoken aloud. The fist sign for milk is a word.
But now suppose that we have a psychologically identical case where instead of making a fist, the baby kicks her left foot in a distinctive way when she wants milk, and the parents learn to respond to that and reinforce it. Is that left-foot-kicking a word? Presumably we don't want to say that. To count such left-foot-kicking as linguistic seems to cheapen language too much. Ordinarily we think of language as something advanced, uniquely human or nearly so (except for maybe a few signing apes and Alex the parrot). The kicking doesn't seem to qualify, any more than we say a dog has language if he gets the leash when he wants a walk. In getting the leash, he communicates non-linguistically. So where to put on the brakes? Not every human communication is a word: A red stoplight is not a word, nor is a wink or a flag or a computer icon. (I think!) ;-)
Even very young babies will tighten their fists sometimes as a sign of hunger -- but surely that's not a word for a newborn. And babies seem quite naturally to point. Is pointing a word? My newly adopted sixteen-month-old daughter Kate will raise both hands over her head to communicate that she's "all done" eating. But just as the point might be a formalized reach, the hands up might be a formalized reaching up to be lifted out of her high chair. In fact, Kate has started raising her hands over her head in frustration when she has been trapped in her car seat too long and wants to be "all done" with that.
If a twelve-month-old says "cup" for cup, we call it a word. If she says "mup" for cup, we find it cute and still call it a word. It seems to follow that if she regularly makes a sign-language signal for cup, we should call that a word; and likewise it seems that if she makes her own unique sign for cup, we should call that a word too. But now the leash and the left-foot kicking seem to be back in.
So what is a word?
Posted by Eric Schwitzgebel at 7:15 PM 24 comments
Thursday, June 19, 2008
The Psychology of Philosophy
"Experimental philosophy", as a movement within philosophy, has so far been almost entirely focused on testing people's intuitions and judgments about philosophical puzzle cases. In this post on the Underblog, I argue for a broader vision of experimental philosophy, including the possibility of experiments on:
* introspective claims about the structure of conscious experience (e.g., beeper studies to test claims about ordinary lived experience)
* the causes (including the psychological and cultural factors) influencing philosophers' preferences for particular sorts of philosophical theories (e.g., studies of the psychological correlates of a preference for Kantianism over consequentialism in ethics)
* the real-life consequences of adopting or teaching particular philosophical theories (e.g., does teaching students utilitarianism, or Nietzsche, have any positive (or negative) effects on their behavior?)
I'll be presenting these ideas orally at the Experimental Philosophy Workshop pre-conference at the Society for Philosophy and Psychology meeting in Philadelphia next week. (The program shows my presentation title as "Introspection and Experiment", but I've broadened my topic and thus changed the title.)
Comments welcome!
Posted by Eric Schwitzgebel at 7:11 AM 8 comments
Labels: psychology of philosophy
Tuesday, June 17, 2008
Ethicists and Political Philosophers Vote Less Often, Apparently, Than Other Philosophers
I assume that voting in public elections is a duty (a duty that admits of excuses and exceptions, of course) and that it's morally better to vote conscientiously than not to vote.
In previous research, I've found that:
(1.) ethics books are more likely to be missing from academic libraries than other philosophy books (full essay here),
(2.) philosophy students at Zurich do not give increasing amounts to student charities as their education proceeds, and
(3.) (with Joshua Rust) a majority of philosophers think ethicists behave, on average, no better than non-ethicists of similar social background (full essay here).
With Josh Rust's and my current findings on voting patterns, that's now four consecutive studies suggesting that ethicists behave no better than, or maybe even worse than, comparable non-ethicists.
Looking at voter history data from California, Florida, North Carolina, and Washington State, we found voting rates among professors registered to vote:
Ethicists: 0.97 votes/year (227 records total)
Political philosophers (a subgroup of ethicists): 0.95 votes/year (96 records)
Non-ethicist philosophers: 1.07 votes/year (279 records)
Political scientists: 1.11 votes/year (244 records)
Other professors: 0.93 votes/year
The differences over .07 votes/year are statistically significant. The results are stable controlling for age, gender, ethnicity, state of residence, institution type, and political party. Controlling for rank doesn't substantially change the results, except that it raises the voting rate of the comparison group of "other professors" to a rate between that of ethicists and non-ethicists, so that it can't be said that philosophers vote more often than non-philosophers.
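(For the statistically curious: here's a rough Python sketch of the kind of regression one could run to check whether group differences in voting rate survive those controls. It is not our actual analysis script; the file name, column names, and group labels below are made up for illustration.)

```python
# Illustrative sketch only -- hypothetical data file and column names,
# not the analysis actually used for the results reported above.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("voting_records.csv")  # one row per professor (hypothetical file)

# votes_per_year: each professor's voting rate over 2000-2007 (hypothetical column)
# group: e.g., "ethicist", "political_philosopher", "other_philosopher",
#        "political_scientist", "other_professor" (hypothetical labels)
model = smf.ols(
    "votes_per_year ~ C(group) + age + C(gender) + C(ethnicity)"
    " + C(state) + C(institution_type) + C(party)",
    data=df,
).fit()
print(model.summary())  # check whether the group coefficients remain significant
```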
Now I'd have thought political philosophers, like political scientists, would be more engaged than average with the political process. Instead -- depressingly (to me; maybe you'll rejoice?) -- it seems that they're less engaged, at least if voting is taken as the measure of engagement.
When I face moral decisions -- decisions like "should I go out and vote even though I'd rather look for Weird Al videos on YouTube?" -- I often reflect on what I should do. I think about it; I weigh the pros and cons; I consider duties and consequences and what people I admire or loathe would do. I am implicitly and deeply committed to the value of reflection in making moral decisions and prompting moral behavior. To suppose that moral reflection is valueless is pretty dark, or at least pretty radical.
Yet if moral reflection does us moral good, you'd think that ethics professors, who are presumably champions of moral reflection, would themselves behave well -- or at least not worse!
(Josh Rust and I will be presenting these results as a poster at the Society for Philosophy and Psychology meeting next week. The full text of the poster will be available shortly on the Underblog.)
Update, June 26:
In the last couple of days, Josh and I were able to do a first analysis of new data from Minnesota. In that state, the ethicists and political philosophers appear to be so conscientious in their voting that it knocked the p-value of our main effect from .03 to .06 -- in other words, the trend in Minnesota ran so strongly in the other direction that we can no longer feel sufficiently confident (employing the usual statistical standards) that the trend we see for ethicists to vote less is not due simply to chance. So we should probably amend our thesis from "ethicists vote less" to the weaker "ethicists vote no more often". However, the Minnesota data also seem to introduce some potential confounds (such as that Minnesota philosophers seem to have unusual job stability) that complicate the interpretation and that we may want to try to compensate for statistically. So the final analysis isn't in!
Posted by Eric Schwitzgebel at 8:38 AM 19 comments
Labels: ethics professors, Joshua Rust, moral psychology
Saturday, June 14, 2008
Political Scientists Vote More Often Than Other Professors
One theme of my recent research has been the moral behavior of ethics professors -- do they behave any better than others of similar social background? There's good reason to anticipate that they would: Presumably they care a lot and think a lot about morality, and one might hope (at least I would hope!) that would have a positive effect on their behavior.
However, some people don't think we should expect this. After all, doctors smoke, police commit crimes, economists invest badly. Whether they do so any less than anyone else is hard to assess. (However, the evidence I've seen so far suggests that doctors do smoke less and economists do invest better, contra the cynic. I don't know about police.)
Half a year ago I posted a couple of reflections on the lack of data regarding whether political scientists vote more often in public elections than other professors do (here and here). With perhaps more enthusiasm than wisdom, I decided to go out and get the data myself. Josh Rust and I (and some helpful RAs) gathered official voting histories of individuals in California, Florida, North Carolina, and Washington State (Minnesota pending) and matched those records with online information about professors in universities in those states. (The California data included only statewide elections; the other states include at least some local election data.) We looked at the years 2000-2007.
The data suggest that political scientists do vote more often, averaging 1.11 votes/year as opposed to 0.93 votes/year for a comparison group of professors drawn randomly from all other departments except philosophy.
We ruled out gender, political party, state of residence, age, ethnicity, and institution type (research-oriented vs. teaching-oriented) as explanatory factors. All of these factors either had no effect on vote rate (gender, party, institution type) or were balanced between the groups (state, age, ethnicity). The one factor that did have an effect and wasn't balanced between the groups was academic rank: Non-tenure-track faculty voted less often, and there were fewer tenure-track faculty in the comparison group than among the political scientists. However, even looking just at tenure-track faculty, political scientists still vote more: 1.12 votes/year for political scientists, 0.99 for comparison faculty. (Political science department affiliation also remains predictive of vote rate in multiple regression models including rank and other factors.)
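(If you'd like to see roughly what that tenure-track-only comparison looks like in code, here's a sketch. Again, it's illustrative only: the file, column names, and group labels are hypothetical, and it's not the analysis we actually ran.)

```python
# Illustrative sketch: compare per-person vote rates for tenure-track
# political scientists vs. the comparison group (hypothetical columns).
import pandas as pd
from scipy import stats

df = pd.read_csv("voting_records.csv")
tt = df[df["tenure_track"]]  # restrict to tenure-track faculty

polisci = tt.loc[tt["department"] == "political_science", "votes_per_year"]
comparison = tt.loc[tt["department"] == "comparison", "votes_per_year"]

print(polisci.mean(), comparison.mean())                      # e.g., ~1.12 vs. ~0.99
print(stats.ttest_ind(polisci, comparison, equal_var=False))  # Welch's t-test
```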
These data support my ethicists project in two ways: First, they show at least some relationship between professorial career choice and real-world behavior; and second, since voting is widely (and I think rightly) seen as a duty, it's a measure of one piece of moral behavior. We can see if ethicists (and perhaps especially political philosophers) are more likely to perform this particular duty than are non-ethicists. Results on that soon!
Posted by Eric Schwitzgebel at 9:02 AM 8 comments
Labels: ethics professors, Joshua Rust, moral psychology
Friday, June 13, 2008
Experimental Philosophy Survey
Thomas Nadelhoffer has posted a new online survey, and he wants philosophical respondents. Link here. I shouldn't reveal the contents, though, lest I worsen the problems of self-selection bias! The survey took me about 15 minutes to complete, going pretty fast.
Posted by Eric Schwitzgebel at 1:14 PM 4 comments
Monday, June 09, 2008
Political Affiliations of American Philosophers, Political Scientists, and Other Academics
As regular readers will know, I've been working hard over the last year thinking of ways to get data on the moral behavior of ethics professors. As part of this project, I have looked at the public voting records of professors in several states (California, Florida, North Carolina, Washington State, and soon Minnesota), on the assumption that voting is a civic duty. If so, we can compare the rates at which ethicists and non-ethicists perform this duty. Soon I'll start posting some of my preliminary analyses.
First, however, I thought you might enjoy some data on the political affiliation of professors in California, Florida, and North Carolina. (These states make party affiliation publicly available information.) Although U.S. academics are generally reputed to be liberal and Democratic, systematic data are sparser than one might expect. Here's what I found.
Among philosophers (375 records total):
Democrat: 87.2%
Republican: 7.7%
Green: 2.7%
Independent: 1.3%
Libertarian: 0.8%
Peace & Freedom: 0.3%

Among political scientists (225 records total):
Democrat: 82.7%
Republican: 12.4%
Green: 4.0%
Independent: 0.4%
Peace & Freedom: 0.4%

Among a comparison group drawn randomly from all other departments (179 records total):
Democrat: 75.4%
Republican: 22.9%
Independent: 1.1%
Green: 0.6%

By comparison, in California (from which the bulk of the data are drawn), the registration rates (excluding decline to state [19.4%]) are:
Democrat: 54.3%
Republican: 40.3%
Other: 5.3% [source]

Perhaps this accounts for my sense that if there's one thing that's a safe dinner conversation topic at philosophy conferences, it's bashing Republican Presidents.
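(As a back-of-the-envelope check, here's a sketch of a chi-square test comparing the philosophers' party breakdown to the California registration baseline above. It's purely illustrative: it lumps the smaller parties into "other" and ignores the fact that some of the records come from Florida and North Carolina.)

```python
# Rough illustration only: compare philosophers' registrations (Dem / Rep / other)
# against the statewide California baseline reported above.
import numpy as np
from scipy.stats import chisquare

n = 375  # philosopher records
observed = np.array([0.872, 0.077, 0.051]) * n  # Dem, Rep, all smaller parties combined
baseline = np.array([0.543, 0.403, 0.053])      # CA registration, excluding decline-to-state
expected = baseline / baseline.sum() * n        # rescale so the totals match

print(chisquare(observed, f_exp=expected))      # very large chi-square, tiny p-value
```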
Now I'm not sure 87.2% of professional philosophers would agree that there's good evidence the sun will rise tomorrow (well, that's a slight exaggeration, but we are an ornery and disputatious lot!), so why the virtual consensus about political party?
Conspiracy theories are out: There is no point in the job interview process, for example, at which you would discover the political leanings of an applicant who was not applying in political philosophy. We ask about research and teaching, and that's about it. Even when interviewing a political philosopher (a small minority of philosophers), it will not always be evident whether the interviewee is "liberal" or "conservative", since her research will often be highly abstract or historical.
Self-interest also seems an insufficient explanation: Many professors are at private institutions, and few philosophy professors earn government grants, so even if Democrats are more supportive of funding for universities and research, many philosophy professors will at best profit very indirectly from that. Furthermore, it's not clear to me -- though I'm open to evidence on this -- that Democrats do serve professors' financial interests better than Republicans. For example, social services for the poor and keeping tuition low seem to have a higher priority among liberal Democrats in California than the salaries of professors.
Democrats might be tempted to flatter themselves with this explanation: Professors are smart and informed, and smart and informed people are rarely Republican. That would be interesting if it were true, and it's empirically explorable; but I suspect that in fact a better explanation has to do with the kind of values that lead one to go into academia and that an academic career reinforces -- though I find myself struggling now to discern exactly what those values are (tolerance of difference? more willingness to believe that knowledgeable people can direct society for the better? less respect for the pursuit of wealth as a career goal?).
Posted by Eric Schwitzgebel at 9:26 AM 20 comments
Labels: psychology of philosophy
Wednesday, June 04, 2008
Self-Blindness?
If introspection is essentially a matter of perceiving one's own mind, as philosophers like John Locke and David Armstrong have suggested, then just as one might lose an organ of outer perception, rendering one blind to events in that modality, so also, presumably, could one lose an organ of inner perception, leaving one either totally introspectively blind or blind to some subclass of one's own mental states, such as one's beliefs or one's pains.
Is such self-blindness possible? There are, as far as I can tell, no clear clinical cases -- no cases of people who feel pain but consistently have no introspective awareness of those pains, no cases of people who can tell what they believe only by noticing how they behave. (I'm excluding cases where the being lacks the concept of pain or belief and so can't ascribe those states at all; and with apologies to Nichols and Stich on schizophrenia.) Sydney Shoemaker suggests that the absence of such cases flows from a deep conceptual truth: There's a fundamental connection between believing and knowing what you believe, between feeling pain and knowing you're in pain, that there isn't between facing a red thing and knowing you're facing a red thing -- and thus in this respect introspection differs importantly from sensory perception.
Since I'm generally a pessimist about the accuracy of introspective reports, my first inclination is to reject Shoemaker's view and allow the possibility of self-blindness. But then why do there seem to be no clear cases of self-blindness? I suspect that in this matter the cases of pain and belief are different.
Consider pain first. Possibility one: The imagined self-blind person has both the phenomenology of pain and typical pain behavior (such as avoidance of painful stimuli), maybe even saying "ow!" If so, on the basis of this behavior, she could determine (as well as anyone else could, from the outside) that she was sometimes in pain; but she would have no direct, introspective knowledge of that pain. Contra Shoemaker, this seems to me not inconceivable. However, it also seems very likely that a real, plastic neural system would detect regularities in the neural outputs generating pain behavior and respond by creating shortcuts to judgments of, or representations of, pain -- shortcuts not requiring sensory detection of actual outward behavior. For example, the neural system could notice the motor impulse to say "ow!" and base a pain-judgment on that impulse (perhaps even if the actual outward behavior is suppressed). This, then, might start to look like (might even actually become?) "introspection" of the pain.
Alternatively, the self-blind person might show no pain behavior whatsoever. Then the person would behave identically to someone with total pain insensitivity (and cases of total pain insensitivity do exist). But now we're faced with the question: Do people normally classified as utterly incapable of feeling pain really feel no pain, or do they have painful phenomenology somewhere with no means to detect it and no way to act on it? The latter possibility seems extravagant to me but not conceptually impossible. And maybe even a fuller understanding of the neuropsychology of pain and pain insensitivity might help us decide whether there are some actual cases of the latter.
Regarding belief, I'm more in sympathy with Shoemaker. My own view is that to believe something is just a matter of being prone to act and react in ways appropriate to, or that we are apt to associate with, having the belief in question (taking other mental states and excusing conditions into account). Among the actions and reactions appropriate to belief is self-ascription of the belief in question, self-ascription that doesn't rely on the observation of one's own behavior. Someone who had the concept of belief but utterly lacked direct self-ascriptive capacity would be in some way defective not just as a perceiver of her beliefs but as a believer.
(Thanks to Amy Kind and Charles Siewert whose excellent articles criticizing Shoemaker on self-blindness prompted this post.)
Posted by Eric Schwitzgebel at 9:13 AM 25 comments
Labels: belief, introspection, self-knowledge