Wednesday, April 28, 2010
Heidegger and Wittgenstein Break Away from the Pack
A philosophical discussion arc (as discussed in Tuesday's post) is the trend, over time, in the use of a term or name as a keyword in philosophy articles and books. As I mentioned Tuesday, discussion of prominent philosophers' work tends to peak around age 55-70. However, not all philosophers show this trend.
Given that the dataset starts in 1940, the only age cohort suitable for examining breakaways -- philosophers whose discussion arcs continue to rise after age seventy -- is the cohort born around 1900. Only for that cohort does the dataset contain both peak-career discussion rates and later discussion rates.
I examined the discussion arcs of eight leading philosophers born between 1885 and 1915: Wittgenstein, Heidegger, Carnap, Ryle, Popper, Sartre, Merleau-Ponty, and Quine. On the x-axis is date, in five-year slices. On the y-axis is a ratio: the number of times the philosopher's name appears as a keyword, divided by a broad index of philosophical keywords in the database, times 100. Five of these philosophers show the usual career arc:
Heidegger and Merleau-Ponty, however, and maybe Wittgenstein, seem to show a different pattern:
To see more clearly how Heidegger and Wittgenstein broke away from the pack, let's remove the clutter by taking averages:
From about 1940-1954, Heidegger, Wittgenstein, Carnap, Ryle, Popper, Sartre, and Quine were all receiving about an equal amount of discussion. But only Heidegger and Wittgenstein increased and sustained that level of discussion in the ensuing decades.
In the previous generation of philosophers, we seem to see the tail end of a similar story, with Frege and Husserl breaking away while Royce, Bergson, Dewey, and Whitehead decline (though Dewey in decline is still discussed about as much as broken-away Frege).
Posted by Eric Schwitzgebel at 1:12 PM 6 comments
Labels: sociology of philosophy
Tuesday, April 27, 2010
Discussion Arcs
A philosophical discussion arc, as I'll use the term, is a curve displaying how often a topic or author is used as a "keyword" in a philosophical journal article or book abstract (i.e., in the article's or book's title, abstract, or list of key words). By looking at discussion arcs we can see what topics have been hot and what philosophers have been influential.
Let's begin with topical discussion arcs. On the x-axis is publication year, in five-year slices. The y-axis is a ratio: It's the number of articles in the Philosopher's Index containing the keyword, divided by a representative universe of articles, multiplied by 100. The data begin in 1940.
(A ratio is a much more accurate indicator of influence than a raw count, since the number of philosophy articles has increased about twenty-fold since the 1940s. I generated the representative universes, which serve as the denominators, by broad keyword searches, as indicated with each graph.)
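The computation behind each arc is simple enough to sketch in a few lines of code. The counts below are hypothetical placeholders, not figures from the Philosopher's Index; in the real charts each numerator comes from a keyword search and each denominator from the corresponding "representative universe" search:

```python
# A minimal sketch of the discussion-arc ratio described above.
# All counts here are invented for illustration; the actual numbers
# come from Philosopher's Index keyword searches.

def discussion_arc(keyword_counts, universe_counts):
    """For each five-year slice, divide the keyword hit count by the
    size of the representative universe and multiply by 100."""
    return {
        slice_: 100.0 * keyword_counts[slice_] / universe_counts[slice_]
        for slice_ in keyword_counts
    }

# Hypothetical hit counts for a keyword like "dualis*", by slice.
keyword_hits = {"1940-44": 12, "1945-49": 18, "1950-54": 25}
# Hypothetical sizes of the representative universe for the same slices.
universe_hits = {"1940-44": 900, "1945-49": 1400, "1950-54": 2100}

arc = discussion_arc(keyword_hits, universe_hits)
for slice_, ratio in sorted(arc.items()):
    print(f"{slice_}: {ratio:.2f}")
```

The point of dividing rather than plotting raw counts is exactly the one made above: with a twenty-fold growth in the literature, raw counts would make nearly every keyword look like a rising star.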
Some topics have generated consistent interest over the decades. Dualism is one, as you can see below. The * is a truncation symbol, so this chart tracks any keyword starting with "dualis". [Representative universe: language + epistemology + mind + metaphysics -- Lemmings for short.]
Interest in the leading 17th and 18th century philosophers is also steady across the period (with perhaps Kant gaining discussion and Locke losing discussion): [Representative universe: Lemmings + ethic* + moral* + polit*, or EMPLemmings for short]
Voguish topics, in contrast, are arc shaped.
Here, for example, is "twin earth" (a thought experiment about what the word "water" would mean in a world virtually identical to ours but with a different chemical formula for water): [Representative universe: Lemmings]
And here is "ordinary language" (a way of thinking about philosophical issues popular in the middle of the twentieth century): [Representative universe: Lemmings]
Here's a chart that displays the rise of Nietzsche from the second tier of historical figures into the first tier. (Note that Nietzsche's final y-axis numbers are higher than those of Descartes, Locke, or Hume in the chart above.) [Representative universe: EMPLemmings]
Twentieth century philosophers also have arcs. Here are five influential philosophers born in the 1910s. Note that discussion of their work tended to peak at about age sixty, with the exception of Donald Davidson: [Representative universe: Lemmings]
In fact, there's a fairly consistent pattern for the influence of 20th century analytic philosophers, as measured by discussion arc, to peak around age 55-70. The following chart shows average discussion-arc data from 26 prominent 20th century philosophers, with age on the x-axis. I normalized each philosopher's peak influence to 1. I did not truncate the philosophers' discussion arcs at death. [Representative universe: Lemmings; all included philosophers are Lemmings specialists]
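The peak-normalization and averaging step described above can be sketched as follows. The three arcs here are invented for illustration (the actual chart averages 26 philosophers), and the age brackets are assumed to be aligned across philosophers:

```python
# A sketch of the peak-normalized averaging described above,
# with made-up arcs for three hypothetical philosophers.

def normalize_to_peak(arc):
    """Divide every point in a philosopher's arc by the arc's maximum,
    so that peak influence equals 1."""
    peak = max(arc)
    return [x / peak for x in arc]

# Each list is a hypothetical discussion-arc ratio by age bracket,
# aligned so that the same index means the same age range.
arcs = [
    [0.2, 0.5, 1.1, 1.6, 1.4, 0.9],
    [0.1, 0.3, 0.8, 1.2, 1.3, 1.0],
    [0.4, 0.9, 1.5, 2.0, 1.8, 1.1],
]

normalized = [normalize_to_peak(a) for a in arcs]
# Average across philosophers at each age bracket.
average_arc = [sum(col) / len(col) for col in zip(*normalized)]
print(average_arc)
```

Normalizing each arc to its own peak before averaging keeps the most-discussed philosophers from swamping the shape of the average curve; what survives is the common trajectory, not the absolute discussion levels.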
I find it interesting that influence tends to peak at age 55-70, while the age at which philosophers tend to do their most influential work is about 35-40. (Here's a preliminary discussion of that last point; I hope to have fuller data on the matter soon.) I guess it takes time for word to get around!
Update, May 12:
In this more recent post, I (mostly) retract my claim about the age at which the most influential philosophical work tends to be done. Check out this cool chart!
Posted by Eric Schwitzgebel at 4:50 PM 12 comments
Labels: sociology of philosophy
Thursday, April 22, 2010
Chalmers's Fading/Dancing Qualia and Self-Knowledge
David Chalmers defends what he calls a principle of organizational invariance according to which if a system has conscious experiences then any other system with the same fine-grained functional organization will have qualitatively identical experiences. His main arguments for this principle are his "Fading Qualia" and "Dancing Qualia" arguments.
Both arguments are reductios. Let's start with Fading Qualia. Suppose, contra the principle of organizational invariance, that there could be a fine-grained functional isomorph of you without conscious experience -- perhaps a robot (call him Stu) with a brain made of silicon chips instead of neurons. If this is possible, then it should also be possible to create a series of intermediate beings between You and Stu -- perhaps, for example, beings in which different proportions of the neurons are replaced by silicon chips. If You have a hundred billion neurons in your brain, then maybe we can imagine a hundred billion minus one intermediate cases, each with one less neuron and one more silicon chip. The question is: What kind of consciousness do these intermediate beings have? Chalmers argues that there is no satisfactory answer.
There seem to be two ways to go: Consciousness might suddenly disappear somewhere in the progression, say between fifty billion and one neurons and fifty billion. But that seems bizarre. How could the replacement of one neuron make the difference between consciousness and its absence? You and Fifty-Billion-and-One are having vivid visual experience of a basketball game, say, while poor Fifty-Billion is a complete experiential blank. Surely we don't want to accept that.
Seemingly more plausible is a second option: Consciousness slowly fades out between You and Stu. But then what does Fifty-Billion experience? Half of a visual field? An entire visual field, but hazy or in unsaturated color? Note that since You, Stu, and Fifty-Billion are all identical at the level of functional organization, you will all exhibit exactly the same outward behavior. You will all, when asked to introspect, presumably say something like "I am having vivid visual experience of a basketball game". Stu is wrong about this, of course, if it makes sense to attribute assertions to him at all; but he is just a silicon robot without consciousness, so maybe that's okay. But Fifty-Billion is not just a silicon robot. He has some consciousness. But he seems to be badly wrong about it. His visual experience is not, as he says, vivid and sharp, but rather indistinct, or incomplete, or unsaturated. And Chalmers suggests that it's absurd to attribute that kind of radical error to him. Thus Chalmers completes the reductio: There's an absurdity in assuming the denial of the principle of organizational invariance. You, Stu, and Fifty-Billion all have qualitatively identical conscious experience.
I object to the last move in this argument, to the idea that it is absurd that Fifty-Billion could make that kind of mistake. My reason is this: Many of us make exactly the same mistake in ordinary instances of introspection. Some people, for example, when asked how detailed their conscious experience is at any one moment, say that it is extremely rich -- full of precise detail through a wide visual field, and simultaneously full of auditory detail, tactile detail, and detail in other modalities. Others say that their experience is very sparse -- they only experience one or a few things at a time. On the sparse view, when one is attending to the visual environment, one has no experience of the feet in one's shoes; when one is attending to one part of the visual field, one has no experience of the areas outside of attention; etc. I have argued that this dispute does not turn merely on a disagreement about terminology, and does not reflect radical differences in different people's experiences, but rather is a real substantive, phenomenological dispute. One or both parties must therefore be radically wrong about their experience. This is at least, I think, not an absurd view, given the potential sources of error about the richness of experience, such as the refrigerator light illusion (the possibility that thinking about experience in some modality or region creates experience in that modality or region where none was before, causing us to mistakenly think it was there all along). And if it's not absurd to suppose that ordinary people could be mistaken about how rich and detailed their experience is, it's not absurd to suppose that Fifty-Billion could be mistaken.
Dancing Qualia is a variation of Fading Qualia. It requires two visual processing systems with the same functional organization but different associated visual phenomenology, and it requires the capacity for you to switch swiftly between these systems. Since the functional organization of the systems is the same, you won't report any difference in experience when you switch from one to the other, thereby implying that some of your reports will be mistaken -- implausibly mistaken, in Chalmers's view. Therefore, by reductio, the systems cannot really differ in their associated visual phenomenology.
But in cases of "change blindness" -- for example here -- people fail to notice substantial changes in their visual experience. (Or at least this is true if experience is relatively rich.) Perhaps such failures aren't as severe as those a visual system switch might create, and, as Chalmers notes, many of them require that your attention not be on the object of change. However, not all change blindness cases seem to require lack of attention to the changed stimulus -- as when the person you are talking to changes after a brief interruption without your noticing (though what exactly qualifies as a target of attention may be difficult to determine in such scenarios). In any case, consideration of such cases should, I think, loosen our commitment to the seeming absurdity of failing, especially in weird scenarios, to notice radical changes in experience.
Furthermore, the Dancing Qualia case seems problematically pre-built to frustrate our ability to notice differences, much like radically skeptical brain-in-a-vat scenarios are pre-built to frustrate the sensory abilities on which we depend by giving the same sensory input despite a large change in the far-side objects. The following model is too simplistic, but conveys the idea I have in mind here: Imagine that introspection works by means of an introspection module located near the front of the brain, which receives input from the visual cortex in the back of the brain. The back of the brain has been changed so that experience is radically different (on the assumption of the reductio), but changed only in such a way that the input from the back to the front of the brain is exactly the same. In such a case, it seems not at all absurd to suppose that introspection would fail to notice a difference, despite a real difference in experience. Thus, the Dancing Qualia reductio fails.
Posted by Eric Schwitzgebel at 1:09 PM 12 comments
Labels: metaphysics, self-knowledge, stream of experience
Thursday, April 15, 2010
The Moral Behavior of Super-Duper Artificial Intelligences
David Chalmers gave a talk today (at the Toward a Science of Consciousness conference in Tucson) arguing that it is fairly likely that sometime in the next few centuries we will create artificial intelligence (perhaps silicon, perhaps biological) considerably more intelligent than ourselves -- and then those intelligent creatures will create even more intelligent successors, and so on, until there exist creatures that are vastly more intelligent than we are.
The question then arises, what will such hyperintelligent creatures do with us? Maybe they will be us, and we needn't worry. But what if human beings in something like the current form still exist alongside these hyperintelligent artificial creatures? If the hyperintelligent creatures don't care about our welfare, that seems like a pretty serious danger that we should plan ahead for.
Perhaps, Chalmers suggests, we should build only intelligences that value human flourishing or have benign desires. He also advises creating hyperintelligent creatures only in simulations that they can't escape. But, as he points out, these strategies for dealing with the risk might be tricky to execute successfully (as numerous science fiction works attest).
More optimistically, Chalmers notes that on certain philosophical views (e.g., Kant's; I'd add Socrates's) immorality is irrational. And if so, then maybe we needn't worry. Hyperintelligent creatures might necessarily be hypermoral creatures. Presumably such creatures would treat us well and allow us to flourish.
One thing Chalmers didn't discuss, though, was the shape of the moral trajectory: Even if super-duper hyperintelligent artificial intelligences would be hypermoral, unless intermediate stages en route are also very moral (probably more moral than actual human beings are), we might still be in great danger. It seems like we want sharply rising, monotonic improvement in morality, and not just hypermorality at the endpoint.
So the question arises: Is there good empirical evidence that bears on this question, evidence concerning the relationship between morality and intelligence? By "intelligence" let's mean something like the capacity to learn facts or reason logically or design complicated plans, especially plans to make more intelligent creatures. Leading engineers, scientists, and philosophers would tend to be highly intelligent by this definition. Is there any reason to think that morality rises sharply and monotonically with intelligence in this sense?
There is some evidence for a negative relationship between IQ and criminality (though it's tangled in complicated ways with socioeconomic status and other factors). However, I can't say that my personal and hearsay knowledge of the moral behavior of university professors (perhaps especially ethicists?) makes me optimistic about a sharply increasing monotonic relationship between intelligence and morality.
In which case, look out, great-great-great-great-great-great-grandchildren!
Posted by Eric Schwitzgebel at 5:13 PM 17 comments
Labels: moral psychology
Friday, April 09, 2010
What's in People's Stream of Experience During Philosophy Talks?
As you may know, Russ Hurlburt and I recently published a book centering on a woman's reports about her experience as she went about her normal day wearing a random beeper. When the beep sounded, her job was to try to recall her "last undisturbed moment of inner experience" just before the beep. Russ and I then interviewed her about these experiences, trying to get both at the truth about them and at methodological issues about the value of this sort of approach in studying consciousness.
Russ and I have presented our joint work in a number of venues now (including at an author-meets-critics session at the APA last week), and normally when we do so, we "beep" the audience. That is, we set up a random beeper to sound when Russ or I or a critic is presenting material. When the beep sounds, each audience member is to think about what was going on in her last undisturbed moment of inner experience before the beep. We then use a random number generator to select an audience member to report on her experience. We interview her right there, discussing her experience and the method with the audience and each other. We'll do this maybe three times in a three-hour session.
As a result, we now have a couple dozen samples of reported inner experience during our academic talks, and the most striking thing we've found is that people rarely report thinking about the talk. The most recent six samples are representative (three from a presentation by me at Claremont Wednesday, three from the APA).
(1.) Thinking that he should put his cell phone away (probably not formulated either in words or imagery); visual experience of cell phone and whiteboard.
(2.) Scratching an itch, noticing how it feels; having a visual experience of a book.
(3.) Feeling like he's about to fade into a sweet daydream but no sense of its content yet; "fading" visual experience of the speaker.
(4.) Feeling confused; listening to speaker and reading along on handout, taking in the meaning. [I'm counting this as an instance of thinking about the talk.]
(5.) Visual imagery of the "macaroni orange" of a recently seen flyer; skanky taste of coffee; fantasizing about biting an apple instead of tasting coffee; feeling need to go to bathroom; hearing the speaker's sentence. The macaroni orange was the most prominent part of her experience.
(6.) Reading abstract for next talk; hearing an "echo" of the speaker's last sentence; fighting a feeling of tiredness; maybe feeling tingling on tooth from permanent retainer.
Where is the cooking up of objections, the thinking through of consequences, the feeling of understanding the meaning of what is being said, the finding of connections to other people's work? In only one of these samples was taking in the meaning of the talk the foremost part of the experience.
It could just be that Russ and I and our critics are unusually deadening speakers, but I don't think so. My guess is that most audience members, listening to most academic talks, spend most of their time with some distraction or other at the forefront of their stream of experience. They may not remember this fact because when they think back on their experience of a talk, what is salient to them are those rare occasions when they did make a novel connection or think up an interesting objection. (I think the same is true of sex thoughts. People often say they spend a lot of time thinking about sex, but when you beep them they very rarely report it. It's probably that our sex thoughts, though rare, are much more frequently remembered than other thoughts and so are dramatically overrepresented in retrospective memory.)
Here are two hypotheses about understanding academic talks that harmonize with these observational data:
(1.) Our understanding of academic talks comes mostly from our ability to take them in while other things are at the forefront of consciousness. The information gets in there, despite the near-constant layer of distraction, and that information then shapes skilled regurgitations of the content of the talks.
(2.) Our understanding of academic talks comes mostly from those few salient moments when we are actually not distracted. Maybe this happens three or twelve or thirty times, for very brief stretches, during the course of the talk. The understanding we walk away with at the end is a reconstruction of what must plausibly have been the author's view based on our recollection of those few instances when we were actually paying attention to what she was saying.
Any bets on (1) vs. (2)? Or candidates for a (3)? If (2) is closer to the truth, then it may be possible to discover strategies to get much more out of talks by discovering ways to better focus our attention on the content.
Posted by Eric Schwitzgebel at 3:22 PM 17 comments
Monday, March 29, 2010
Introspection, What?
Never have I worked so long and hard on a paper and been as little satisfied with the result. I have various theories why.
Brief abstract:
I argue for two theses: First, introspection is a species of attention to conscious experience, one that aims to exhibit what I call relatively direct sensitivity to the experience. Second, introspection is not the operation of a single, dedicated mechanism or family of dedicated mechanisms (such as self-scanning or self-monitoring devices); rather, in introspecting we opportunistically deploy a variety of cognitive systems and processes.
Here it is, in draft. Maybe you can help.
Posted by Eric Schwitzgebel at 5:26 PM 6 comments
Labels: introspection, self-knowledge
Thursday, March 25, 2010
On Being Good at Seeming Smart
Once upon a time, there was a graduate student at UC Riverside whom I will call Student X. (If you are associated with UCR and think you can guess who Student X is, I think you will probably guess wrong.) There seemed to be a general sense among the faculty that Student X was particularly promising. For example, after a colloquium at which the student had asked a question, one faculty member expressed to me how impressive the student was. I was struck by that remark because I had thought the student's question had actually been pretty poor. But it occurred to me that the question had seemed, superficially, to be smart. That is, if you didn't think too much about the content but rather just about the tone and delivery, you probably would get a strong impression of smartness. In fact, my overall view of this student was that he was about average -- neither particularly good nor particularly bad -- but that he was a master of seeming smart: He had the confidence, the delivery, the style, all the paraphernalia of smartness, without an especially large dose of the actual thing.
Since then, I have been collecting anecdotal data on seeming smart. One thing I've noticed is what sort of person tends spontaneously to be described, in my presence, as "seeming smart". A very striking pattern emerges: In every case I have noted the smart-seeming person has been a young white male. Now my sample size is small and philosophy is about 75% white male anyway, so I want to be cautious in this inference. Women and minorities must sometimes "seem smart". And older people maybe have already proven or failed to prove their brilliance so that remarks about their apparent intelligence aren't as natural. (Maybe also it is less our place to evaluate them.) But still I would guess that there is something real behind that pattern, to wit:
Seeming smart is probably to a large extent about activating people's associations with intelligence. This is probably especially true when one is overhearing a comment about a complex subject that isn't exactly in one's expertise, so that the quality of the comment is hard to evaluate. And what do people associate with intelligence? Some things that are good: Poise, confidence (but not defensiveness), giving a moderate amount of detail but not too much, providing some frame and jargon, etc. But also, unfortunately, I suspect: whiteness, maleness, a certain physical bearing, a certain dialect (one American type, one British type), certain patterns of prosody -- all of which favor, I suspect, upper- to upper-middle class white men. If you look and sound like Lisa Kudrow and end every sentence with a rising intonation, it is going to be much harder to seem smart than if you look and sound like Matt Damon. But Lisa might actually be the smarter one. (I don't think there is a single trait of smartness, or even of being a smart philosopher, but let's bracket that for now.)
Here's the twist: Student X actually ended up doing very well in the program and writing an excellent dissertation. I suspect that's not because he started out with better tools but rather because he rose to his teachers' expectations. There is ample evidence in educational psychology that student performance tends to shift toward teacher expectations. Tell girls that girls on average do less well on math tests than do boys and the girls will in fact do less well. Tell a teacher that a particular student will do well and the change in the teacher's expectations will cause that student to actually do better (the Pygmalion effect). Life's not fair.
I hereby resolve to view skeptically all judgments of "seeming smart".
Posted by Eric Schwitzgebel at 9:04 AM 35 comments
Labels: professional issues in philosophy, psychology of philosophy, sociology of philosophy
Thursday, March 18, 2010
The Boxology of Self-Knowledge
It’s often helpful for cognitive scientists modeling psychological processes to describe the mind’s functional architecture using boxes and arrows, with the boxes indicating various functionally discrete processes or systems and the arrows indicating the causal or functional relationships among those discrete processes or systems. Figure 1 below expresses my view of self-knowledge, using the “boxology” of cognitive science. The model in that figure may be contrasted, for example, with the boxological models of self-knowledge on pages 162 and 165 of Nichols and Stich 2003, which feature tidy arrows in and out of the Belief Box, through a Monitoring Mechanism, a Percept-to-Belief Mediator, and a Theory of Mind Information store. You might also notice a resemblance between my model in Figure 1 and recent boxological models of visual processing, if the latter are squinted at.
Figure 1: The boxology of self-knowledge
Posted by Eric Schwitzgebel at 10:55 AM 4 comments
Labels: humor, introspection, self-knowledge
Tuesday, March 16, 2010
Knowing What You Don’t Believe
(by Blake Myers-Schulz and Eric Schwitzgebel)
Virtually every introduction to epistemology (online examples include the Stanford Encyclopedia and the Internet Encyclopedia of Philosophy) highlights the debate about what is commonly called the “JTB” theory of knowledge – the view according to which for some subject S to know some proposition P, it is necessary and sufficient that
(1.) P is true.
(2.) S believes that P is true.
(3.) S is justified in believing that P is true.
According to the JTB theory, knowledge is Justified True Belief. Perhaps the most-discussed issue in the last 40 years of epistemology is whether the JTB theory is true. Debate generally centers on whether there is a way of interpreting or revising the third (justification) condition or adding a fourth condition to avoid apparent counterexamples of various sorts (e.g., Gettier examples). Nearly all contemporary analytic philosophers endorse the truth of conditions (1) and (2): You can’t know a proposition that isn’t true, and you can’t know a proposition that you don’t believe. Few assumptions are more central to contemporary epistemology.
However, we – Blake and Eric – don’t find it intuitive that (2) is true. We think there are intuitively appealing cases in which someone can know that something is true without believing (or, per Eric, determinately believing) that it is true. We have four examples:
(A.) The unconfident examinee (from Colin Radford 1966, one of the very few deniers of (2)): Kate is asked on an exam to enter the date of Queen Elizabeth’s death. She feels like she is guessing, but enters the date correctly. Does Kate know/believe that Elizabeth died in 1603?
(B.) The absent-minded driver (from Schwitzgebel in draft): Ben reads an email telling him that a bridge he usually takes to work will be closed for repairs. He drives away from the house planning to take an alternate route but absent-mindedly misses the turn and continues toward the bridge on the old route. Does Ben know/believe that the bridge is closed?
(C.) The implicit racist (also from Schwitzgebel in draft): Juliet is implicitly biased against black people, tending to assume of individual black people that they are not intelligent. However, she vehemently endorses the (true and justified, let’s assume) claim that all the races are of equal intelligence. Does Juliet know/believe that all the races are intellectually equal?
(D.) The freaked-out movie-watcher: Jamie sees a horror movie in which vicious aliens come out of water faucets and attack people, and she is highly disturbed by it, though she acknowledges that it is not real. Immediately after the movie, when her friend goes to get water from the faucet, Jamie spontaneously shouts “Don’t do it!” Does Jamie know/believe that only water will come from the faucet?
In each case, we think, it is much more intuitive to ascribe knowledge than belief.
So, naturally (being experimental philosophers!), we checked with the folk. We used fleshed-out versions of the scenarios above (available here). Some subjects were asked whether the protagonist knew the proposition in question. Other subjects were asked whether the protagonist believed the proposition in question.
The results came in as predicted. Across the four scenarios, 75% of respondents (90/120, 1-prop z vs. 50%, p < .001) said that the protagonist knew, while only 35% said the protagonist believed (42/120; 1-prop z vs. 50%, p = .001). Considering each scenario individually, in each case a substantial majority said the protagonist knew and in no scenario did a majority say the protagonist believed. (A separate group of subjects were asked “Did Kate think that Queen Elizabeth died in 1603?” [and analogously for other scenarios]. The “think” results were very close to the “believe” results in all scenarios except for the unconfident examinee where they were closer to the “know” results.)
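For readers who want to check the statistics, the one-proportion z-tests reported above can be reproduced with only the standard library. The counts are the ones given in the post (90/120 "knew", 42/120 "believed"), each tested against a 50% null:

```python
# A sketch of the one-proportion z-test used above, reproducing the
# reported results: 90/120 said "knew", 42/120 said "believed".
import math

def one_prop_z(successes, n, p0=0.5):
    """Return the z statistic and two-sided p-value for a
    one-proportion z-test against null proportion p0."""
    p_hat = successes / n
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z_know, p_know = one_prop_z(90, 120)        # 75% said "knew"
z_believe, p_believe = one_prop_z(42, 120)  # 35% said "believed"
print(f"know:    z = {z_know:.2f}, p = {p_know:.6f}")
print(f"believe: z = {z_believe:.2f}, p = {p_believe:.6f}")
```

Run as written, this yields z of about 5.48 (p well below .001) for "knew" and about -3.29 (p near .001) for "believed", matching the figures reported above.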
We think epistemologists should no longer take it for granted that condition (2) of the JTB account of knowledge is intuitively obvious.
[Cross-posted at the Experimental Philosophy Blog.]
Posted by Eric Schwitzgebel at 9:59 AM 24 comments
Labels: belief, epistemology
Wednesday, March 10, 2010
Conference: Experimental Philosophy and the Ethics of Autonomy
... in Miami, this Friday and Saturday, organized by past guest blogger Brad Cokelet. Schedule of speakers:
Friday, March 12 (U of Miami, Learning Center, Room 192):
10:30-12:00, Dan Haybron, "Adventures in Assisted Living"
1:35-3:05, Eric Schwitzgebel, "The Moral Behavior of Ethics Professors"
3:15-4:45, Alfred Mele, "Autonomy and Neuroscience"
Saturday, March 13 (U of Miami, Memorial Bldg, Room 192):
10:30-12:00, Valerie Tiberius, "In Defense of Reflection"
1:30-3:00, Blaine Flowers, "Evolution, Sociality, and Eudaimonia: An Aristotelian Integration of Human Nature and Goods"
3:20-5:25, round table discussion
It should be good fun! If you're in south Florida, you might consider checking it out. For further info, contact Brad Cokelet at University of Miami.
Posted by Eric Schwitzgebel at 9:38 AM 3 comments
Labels: announcements
Monday, March 08, 2010
Kant on Killing Bastards, on Masturbation, on Wives and Servants, on Organ Donation, Homosexuality, and Tyrants
I'm going to be dissin' on Kant. If you loathe that sort of thing, maybe you'll enjoy reviewing the results of last year's poll by Brian Leiter, according to which Kant is the third most important philosopher of all time -- which should remind you that Kant's reputation is plenty safe from the likes of me.
According to Kant in The Metaphysics of Morals (not to be confused with his Groundwork for the Metaphysics of Morals):
(1.) Wives, servants, and children are possessed in a way akin to our possession of objects. If they flee, they must be returned to the owner if he demands them, without regard for the cause that led them to flee. (See esp. pages 278, 282-284 [original pagination], Gregor trans.) Kant does acknowledge that the owner is not permitted to treat these people as mere objects to "use up", but this appears to have no bearing on the owner's right to demand their return. Evidently, if such an owned person flees to us from an abusive master, we may admonish the master for behaving badly while we return what is rightly his.
(2.) Homosexuality is an "unmentionable vice" so wrong that "there are no limitations whatsoever that can save [it] from being repudiated completely" (p. 277).
(3.) Masturbation is in some ways a worse vice than the horror of murdering oneself, and "debases [the masturbator] below the beasts". Kant writes:
But it is not so easy to produce a rational proof that unnatural, and even merely unpurposive, use of one's sexual attribute is inadmissible as being a violation of duty to oneself (and indeed, as far as its unnatural use is concerned, a violation in the highest degree). The ground of proof is, indeed, that by it a man surrenders his personality (throwing it away), since he uses himself as a means to satisfy an animal impulse. But this does not explain the high degree of violation of the humanity in one's own person by such a vice in its unnaturalness, which seems in terms of its form (the disposition it involves) to exceed even murdering oneself. It consists, then, in this: That a man who defiantly casts off life as a burden is at least not making a feeble surrender to animal impulse in throwing himself away (p. 425).
(If masturbation caused a permanent reduction to sub-human levels of intelligence, this argument might make some sense, but as far as I'm aware, that consequence is rare.)
(4.) On killing bastards:
A child that comes into the world apart from marriage is born outside the law (for the law is marriage) and therefore outside the protection of the law. It has, as it were, stolen into the commonwealth (like contraband merchandise), so that the commonwealth can ignore its existence (since it rightly should not have come to exist in this way), and can therefore also ignore its annihilation (p. 336).
(5.) On organ donation:
To deprive oneself of an integral part or organ (to maim oneself) -- for example, to give away or sell a tooth to be transplanted into another's mouth... are ways of partially murdering oneself... cutting one's hair in order to sell it is not altogether free from blame.
(6.) Servants and women "lack civil personality and their existence is, as it were, only inherence" and thus should not be permitted to vote or take an active role in the affairs of state (p. 314-315).
(7.) Under no circumstances is it right to resist the legislative head of state or to rebel on the pretext that the ruler has abused his authority (p. 319-320). Of course, the ruler is supposed to treat people well -- but (as with wives and servants under abusive masters) there appears to be no legitimate means of escape if he does not.
These views are all, I hope you will agree, odious -- even if there are some good things too in The Metaphysics of Morals (e.g., Kant condemns slavery on p. 329 -- although that was hardly a radical position for a European at the time). But why bring out these aspects of Kant? Shouldn't we expect him to be a creature of his time, an imperfect discoverer of moral truths, someone prone to lapses as are we all?
I mention these aspects of Kant to draw two lessons:
First, from our cultural distance, it is evident that Kant's arguments against masturbation, for the return of wives to abusive husbands, etc., are gobbledy-gook. This should make us suspicious that there might be other parts of Kant, too, that are gobbledy-gook, for example, the stuff that transparently reads like gobbledy-gook, such as the transcendental deduction, and such as his claims that his various obviously non-equivalent formulations of the fundamental principle of morality are in fact "so many formulations of precisely the same law" (Groundwork, 4:436, Zweig trans.). I read Kant as a master at promising philosophers what they want and then effusing a haze of words with glimmers enough of hope that readers can convince themselves that there is something profound underneath.
Second, Kant's philosophical moral reasoning appears mainly to have confirmed his prejudices and the ideas inherited from his culture. We should be nervous about expecting more from the philosophical moral reasoning of people less philosophically capable than Kant.
Posted by Eric Schwitzgebel at 3:14 PM 102 comments
Labels: metaphilosophy, moral psychology
Wednesday, March 03, 2010
Get Beeped and Argue about it with Hurlburt and Schwitzgebel
at the biennial Toward a Science of Consciousness conference in Tucson.
We're running a four-hour pre-conference workshop from 9 am to 1 pm on April 12. Official description:
Psychologist Russell Hurlburt is known for his innovative methods of exploring inner experience. Philosopher Eric Schwitzgebel is known for his skepticism about such methods. Hurlburt and Schwitzgebel will team up (perhaps “square off” would be a better term) and interview participants in the workshop audience about the details of their inner experience. That interview will follow Hurlburt’s Descriptive Experience Sampling (DES) method: While Hurlburt and Schwitzgebel give presentations outlining their views, a random timer will be set for a beep to sound. When the beep sounds, participants are to reflect on their last undisturbed moment of inner experience just before the beep. A random member of the audience will then be selected to describe her experience, and Hurlburt, Schwitzgebel, and the tutorial attendees will question her about the beep, using a variation of what Hurlburt calls an “expositional interview”. During these interviews, we (all tutorial participants) will conduct “sidebar” discussions about: what are the characteristics of good and bad questions; how believable are the subjects’ reports; to what extent do we “lead the witness”; etc.
Russ and I have done this several times before, and it's always a kick. I expect a small group (five to ten), so it should be a good opportunity for multi-directional interaction. Plus, it would be cool to meet you.
Register here.
Posted by Eric Schwitzgebel at 2:03 PM 0 comments
Labels: announcements
Tuesday, March 02, 2010
Fooling Oneself: Comments on Moeller's The Moral Fool
Hans-Georg Moeller's recent book, The Moral Fool, just earned a harsh review from Michael Slater at Notre Dame Philosophical Reviews. And indeed, the book is bound to irritate the typical hard-working academic philosopher. The arguments are loose. Moeller's position is stated unclearly and it seems to shift around. Hardly any of the relevant literature is cited. Seen in one way, pretty much every criticism in Slater's review is on the mark. This is not a good piece of academic scholarship.
Thus, I recommend reading this book not as a piece of academic scholarship. Read it instead as an evocative diatribe, which is probably closer to Moeller's intention in writing it. Revel in its colorful prose, its iconoclasm, its anti-authoritarianism. Moeller's guiding idea is that morality, or moral discourse, or moral thinking -- try not to distinguish too precisely among these or you'll start to get frustrated -- makes the world a worse place. It is mostly a sham, a cover-up, a failure, an excuse for violence against people, post-hoc self-serving rationalization or rationalization of one's cultural prejudices. That's a thought, or a cluster of thoughts, or a broad attitude, worth some consideration -- worth more consideration than ethicists generally give it. Moeller plays around with those ideas and presents various thoughts that resonate in various ways with them.
My reactions to the book, using the approach I just described, are here. I'll be presenting these thoughts in an Author-Meets-Critics session on the book at the end of the month at the Pacific APA meeting in San Francisco.
Section ii of my comments may have some interest even to people unfamiliar with Moeller's book. It summarizes my current thinking about whether explicit moral reflection is, on average, an instrumentally good thing. The issue is, I fear, not as clear cut as one might hope.
Posted by Eric Schwitzgebel at 5:40 PM 0 comments
Labels: moral psychology
Thursday, February 25, 2010
The Review of Philosophy and Psychology
There's a new journal on the block. The first issue of The Review of Philosophy and Psychology is now out, and for a limited time all articles are available for free, here. The inaugural issue concerns experimental philosophy and was guest edited by Joshua Knobe, Tania Lombrozo, and Edouard Machery.
One of Josh Rust's and my papers is in it: Do Ethicists and Political Philosophers Vote More Often Than Other Professors? (Short answer: No.)
There's other good stuff in there too. I especially recommend Simon Cullen's critique of the methodology of experimental philosophy, Survey-Driven Romanticism.
Posted by Eric Schwitzgebel at 12:58 PM 1 comments
Labels: announcements
Wednesday, February 24, 2010
How Far Away Is the Television Screen of Visual Experience?
... not that I think there really is one, even in a loose, metaphorical sense. (See here.) But:
David Boonin (a visiting speaker from Colorado) and UCR graduate students Alan Moore and Matt Braich and I were hiking up Mt. Rubidoux. From the top, we could see several miles across town to the UCR campus. We pointed out to Boonin the clock tower, and then Alan said that the humanities building housing the philosophy department was also visible nearby, down and to the right, "about an inch and a half away".
What, an inch and a half away?! Alan's statement -- as I'm sure he knew -- sharply conflicted with my published views about the nature of the visual experience of perspective. And yet I knew exactly what Alan meant. He had effectively pointed out the spot. It seemed to me that "an inch and a half" was a much better description of the apparent distance than, say, a millimeter or twenty feet. (Of course the real distance is much larger than any of those.)
My thumb is about 3/4 of an inch wide. Holding it at arm's length, I saw that it almost perfectly occluded the distance between the clock tower and the humanities building. Thus, if the television screen of visual experience were arm's length away, Alan should have said that the distance was 3/4 of an inch. From the fact that the building's apparent distance (in some sense of "apparent distance"!) was an inch and a half, I thus geometrically derive the conclusion that the television screen of visual experience is about five feet away.
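The geometry here is just similar triangles: apparent sizes scale linearly with the distance of the imagined screen. A quick sketch of the calculation, assuming an arm's length of about 2.5 feet (my assumption; the post gives only the thumb width and the apparent gap):

```python
# Figures from the post, plus one assumption (arm's length):
arm_length_ft = 2.5      # assumed arm's length, in feet
thumb_width_in = 0.75    # width of the occluding thumb, at arm's length
apparent_gap_in = 1.5    # Alan's "inch and a half"

# Similar triangles: screen distance scales with apparent size.
screen_distance_ft = arm_length_ft * (apparent_gap_in / thumb_width_in)
# yields 5.0 feet
```

Doubling the apparent size doubles the inferred distance of the "screen", hence five feet.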
There, proved!
Posted by Eric Schwitzgebel at 12:50 PM 8 comments
Labels: sense experience, stream of experience
Friday, February 19, 2010
Second Annual Consciousness Online Conference
... here.
Posted by Eric Schwitzgebel at 2:49 PM 3 comments
Thursday, February 18, 2010
Another Simple Argument Against Any General Theory of Consciousness
... related to my first Simple Argument, but spun out a bit differently.
(1.) The history of philosophy shows that no theory of consciousness can avoid having some highly unintuitive consequences. (Or more cautiously, the history suggests that. The strength of the conclusion turns in part on the strength of this premise.)
For example, if functionalism is true, some very weird assemblages will be conscious. If consciousness depends upon material constitution, then beings behaviorally indistinguishable from us but materially different might entirely lack consciousness. And: Intuitive notions of consciousness seem to involve sharp boundaries not present in the evolution or development of conscious systems. And so on.
(2.) Therefore, something apparently preposterous must be true of consciousness.
(3.) Therefore, reflection on what is intuitively true -- and metaphysical speculations that depend on such intuitions -- cannot be a reliable guide to consciousness. (What such speculations yield, as is evident from the literature, is a variety of idiosyncratic hunches.)
(4.) Empirical observation of physical structure and behavior also cannot settle the question of which preposterous things are true, because their interpretation depends on prior assumptions about consciousness. (For example: Does observing such-and-such a functional structure establish that consciousness is present? Only given such-and-such functionalist assumptions.)
(5.) So we're stuck.
If we are stuck, the live options seem to be mysterianism (we will never know the truth about consciousness) or eliminativism (the concept of "consciousness" is broken to begin with, so good riddance).
Posted by Eric Schwitzgebel at 4:53 PM 27 comments
Labels: metaphysics
Tuesday, February 09, 2010
Cognitive Shielding
Here's a concept I'm playing with and may soon have occasion to deploy in my work on introspection: cognitive shielding.
Normally when I reach judgments, the processes driving the judgment are wild. I don't attempt to control the influences on my judgment. I just let the judgment flow from whatever processes might drive and affect it. I look out the window and think about whether it will rain. I'm not sure what exactly causes me to conclude that it will. Presumably the appearance of the clouds is a major factor, but maybe I'm also influenced by my knowledge of what month it is and how common rain is this time of year. Maybe I'm influenced, too, by wind and by temperature, reflecting sensitivity to contingencies between those and oncoming rain -- contingencies I may have no conscious knowledge of. Maybe I'm influenced by knowledge of yesterday's weather, of this morning's weather report, and who knows what else. I don't attempt to control any of this, and the judgment comes.
Sometimes, I intentionally launch processes with the aim of having those processes influence my judgment. So, for example, I might think to myself: "In the northern hemisphere, storms spin in such a way that the wind of the leading edge tends to come from the south. So I really should consider the direction of the wind in reaching my judgment about the likelihood of rain." [How true this generalization actually is, I don't know.] I notice that the wind is indeed from the south and this increases my confidence that it will soon rain. The decision to consider a particular factor launched a process that would not otherwise have occurred, with an influence on the conclusion.
And finally, sometimes I try to shield my judgments from certain influences. Maybe I know that I'm overly pessimistic and am biased toward anticipating rain whenever I'm planning a picnic. I am in fact planning a picnic, and I don't want the resulting pessimism to affect my judgment, so I attempt to put the picnic out of mind or compensate somehow for the bias it would otherwise introduce. Or -- a familiar example for professors -- in grading student essays I might be legitimately concerned that my like or dislike for the student as an individual might bias my grading. I might attempt to compensate for this by not looking at the names on the essays, and then no cognitive shielding is necessary. But sometimes I do know who has written the essay I am grading. I might then try to shield my judgment about the essay's quality from that potentially biasing influence. Wild judgment might unfairly favor the student if I like her, so I try to reach a judgment uninfluenced by my opinion about her as a person.
Two issues:
(1.) It's not always clear whether some series of thoughts is wild or launched. Similarly for shielding. Possibly there is a large gray area here. But if the distinction between spontaneously considering certain factors and intentionally considering (or setting aside) certain factors makes sense -- and I think it does -- then I think these distinctions can fly, despite the gray area.
It seems that launching will normally be successful. Shielding, on the other hand, may be difficult to execute successfully. One might try not to be influenced by certain things and yet nonetheless be influenced. But this is no objection to the taxonomy, as long as it's clear that we can try to shield our judgments from certain influences.
Thoughts? Reactions? Does this make sense? Is there someone in the literature who has already laid this out better than I?
Posted by Eric Schwitzgebel at 12:35 PM 10 comments
Labels: epistemology
Wednesday, February 03, 2010
My entry on "Introspection" is now up in the Stanford Encyclopedia of Philosophy
... here.
From the intro:
Introspection, as the term is used in contemporary philosophy of mind, is a means of learning about one's own currently ongoing, or perhaps very recently past, mental states or processes. You can, of course, learn about your own mind in the same way you learn about others' minds—by reading psychology texts, by observing facial expressions (in a mirror), by examining readouts of brain activity, by noting patterns of past behavior—but it's generally thought that you can also learn about your mind introspectively, in a way that no one else can. But what exactly is introspection? No simple characterization is widely accepted. Although introspection must be a process that yields knowledge only of one's own current mental states, more than one type of process fits this characterization.
Introspection is a key concept in epistemology, since introspective knowledge is often thought to be particularly secure, maybe even immune to skeptical doubt. Introspective knowledge is also often held to be more immediate or direct than sensory knowledge. Both of these putative features of introspection have been cited in support of the idea that introspective knowledge can serve as a ground or foundation for other sorts of knowledge.
Introspection is also central to philosophy of mind, both as a process worth study in its own right and as a court of appeal for other claims about the mind. Philosophers of mind offer a variety of theories of the nature of introspection; and philosophical claims about consciousness, emotion, free will, personal identity, thought, belief, imagery, perception, and other mental phenomena are often thought to have introspective consequences or to be susceptible to introspective verification. For similar reasons, empirical psychologists too have discussed the accuracy of introspective judgments and the role of introspection in the science of the mind.
Posted by Eric Schwitzgebel at 9:08 AM 7 comments
Labels: introspection, self-knowledge
Tuesday, February 02, 2010
Podcast of "An Empirical Perspective on the Mencius-Xunzi Debate about Human Nature"
given to the Confucius Institute of Scotland on Jan. 19,
here.
The podcast is audio-only, so you won't see the overheads. I don't think you need to see the overheads to understand the talk. But for completeness here they are (as MS PowerPoint 2003).
You may also be interested to see this article, which was part of the basis for the talk.
Posted by Eric Schwitzgebel at 9:21 AM 2 comments
Labels: chinese philosophy, moral development
Friday, January 29, 2010
Knowledge Is a Capacity, Belief a Tendency
In epistemology articles and textbooks (e.g., in the Stanford Encyclopedia), you'll often see claims like the following. S (some person) knows that P (some proposition) only if:
(1.) P is true.
(2.) S believes that P.
(3.) S is justified in believing P.
Although many philosophers (following Gettier) dispute whether someone's meeting these three conditions is sufficient for knowing P, and a few (like Dretske) also dispute the necessity of condition 3, pretty much everyone accepts that the first two conditions are necessary for knowledge -- or necessary at least for "propositional" knowledge, i.e., knowing that [some proposition is true], as opposed to, for example, knowing how [to do something].
But it's not clear to me that knowing a fact requires believing it. Consider the following case:
Ben the Forgetful Driver: Ben reads an email and learns that a bridge he normally drives across to get to work will be closed for repairs. He immediately realizes that he will have to drive a different route to work. The next day, however, he finds himself on the old route, headed toward the closed bridge. He still knows, I submit, in that forgetful moment, that the bridge is closed. He has just momentarily failed to deploy that knowledge. As soon as he sees the bridge, he'll smack himself on the forehead and say, "The bridge is closed, of course, I know that!" However, contra the necessity of (2) above, it's not clear that, in that forgetful moment as he's driving toward the bridge, he believes (or more colloquially, thinks) the bridge is closed. He is, I think, actually in an in-between state of believing, such that it's not quite right to say that he believes that the bridge is closed but also not quite right to deny that he believes the bridge is closed. It's a borderline case in the application of a vague predicate. (Compare: is a man tall if he is 5 foot 11 inches?) So: We have a clear case of knowledge, but only an in-betweenish, borderline case of belief.
Although I find that a fairly intuitive thing to say, I reckon that that intuition will not be widely shared by trained epistemologists. But I'm willing to wager that a majority of ordinary English-speaking non-philosophers will say "yes" if asked whether Ben knows the bridge is closed and "no" if asked whether he believes or thinks that the bridge is closed. (Actual survey results on related cases are pending, thanks to Blake Myers-Schulz.)
One way of warming up to the idea is to think of it this way: Knowledge is a capacity, while belief is a tendency. Consider knowing how to do something: I know how to juggle five balls if I can sometimes succeed, other than by pure luck, even if most of the time I fail. As long as I have the capacity for appropriate responding, I have the knowledge, even if that capacity is not successfully deployed on most relevant occasions. Ben has the capacity to respond knowledgeably to the closure of the bridge; he just doesn't successfully deploy that capacity. He doesn't call up the knowledge that he has.
Believing that P, on the other hand, involves generally responding to the world in a P-ish way. (If the belief is often irrelevant to actual behavior, this generality might be mostly in counterfactual possible situations.) Believing is about one's overall way of steering cognitively through the world. (For a detailed defense of this view, see here and here.) If one acts and reacts more or less as though P is true -- for example by saying "P is true", by inferring Q if P implies Q, by depending on the truth of P in one's plans -- then one believes. Otherwise, one does not believe. And if someone is mixed up, sometimes steering P-ishly and sometimes not at all P-ishly, then one's belief state is in between.
Consider another case:
Juliet the Implicit Racist: Juliet is a Caucasian-American philosophy professor. Like most such professors, she will sincerely assert that all the races are intellectually equal. In fact, she has better grounds for saying this than most: She has extensively examined the academic literature on racial differences in intelligence and she finds the case for intellectual equality compelling. She will argue coherently, authentically, and vehemently for that conclusion. Yet she is systematically racist in most of her day-to-day interactions. She (falsely) assumes that her black students will not be as bright as her white and Asian students. She shows this bias, problematically, in the way she grades her papers and leads class discussion. When she's on a hiring committee for an office manager, she will require much more evidence to become convinced of the intelligence of a black applicant than a white applicant. And so on.
Does Juliet believe that all the races are intellectually equal? I'd say that the best answer to that question is an in-betweenish "kind of" -- and in some attributional contexts (for example, two black students talking about whether to enroll in one of her classes) a simple "no, she doesn't think black people are as smart as white people" seems a fair assessment. At the same time, let me suggest that Juliet does know that all the races are intellectually equal: She has the information and the capacity to respond knowledgeably even if she often fails to deploy that capacity. She is like the person who knows how to juggle five balls but can only pull it off sporadically or when conditions are just right.
(Thanks to David Hunter, in conversation, for the slogan "knowledge is a capacity, belief a tendency".)
Posted by Eric Schwitzgebel at 5:57 AM 10 comments
Labels: belief
Monday, January 18, 2010
Supersizing Introspection
I've always enjoyed Andy Clark's work (hence my desire to emulate his drink preferences), but I hadn't ('til now) got around to reading his latest book, Supersizing the Mind. Clark is one of the leading advocates of the view that cognitive processes extend beyond the boundaries of the brain to include aspects of the body and environment. The boundary of skull and skin is no privileged border, such that human cognition can only take place within it. If mental images of Scrabble letters are part of your cognitive process when thinking about your next play, then so also are the actual physical tiles when you manipulate and use them in an analogous way. When you work to create an environment that helps you remember, your knowledge is partly distributed into that environment. The mind is not just in the skull; it is "supersized".
I've been struggling lately to develop a general account of what introspection is. I characterize my view as "pluralist" -- I think a variety of mechanisms drive what are rightly thought of as introspective judgments. It now suddenly dawns on me that what I'm really doing is "supersizing" introspection. Introspective processes -- what are sometimes thought of as the most "inward" things there are -- often include the body and world, and broader aspects of the mind than is generally supposed.
How do I know what emotion I'm in? Do I turn on the inner emotion-scanner mechanism, which then produces the judgment that I'm (say) envious? How do I know my preferences? My imagery? My sensory experience? Philosophical opinion basically divides into two camps: First (probably the mainstream) are those who advocate "detection-after" accounts, according to which I have the experience (or other mental process in question) and once that completes (and maybe also while it continues) a separate scanning process of some sort detects the presence or absence of that state.
Second are those who advocate one or another of a variety of non-detection processes. One example is Alex Byrne, who holds that figuring out whether I believe that P (e.g., whether it will rain tomorrow) involves figuring out whether P is true (that is, whether it really will rain tomorrow -- a fact about the outside world) and then applying a belief formation rule according to which when P is true it is permissible to form the belief that you believe that P. On such a view, we know our beliefs not by introspection, but rather by "extrospection" of the outside world, plus the application of some simple inference rule. Similarly, we might learn about our visual experience by attending not to the visual experience itself but rather to the outward objects that we are seeing. We might learn about our emotions just by attending, proprioceptively, to states of our body. There is no turning in, no self-scanning of the mind, in introspection.
It has always seemed to me that both types of view are partly right: Contra the detection-after views, it seems to me unlikely that introspection is the operation of a simple subpersonal scanning module wholly distinct from the cognitive process that is the target of introspection, and outward-looking processes must be part of the story. Contra the outward-looking views, however, it seems to me that outward-looking processes, too, are only part of the story.
Okay, so how do I know that I'm feeling envious? Partly, I look outward: I notice that I am in the type of situation that is apt to promote envy. Someone has something valuable that I don't have. Maybe I look more carefully at, or think more carefully about, that thing itself. Partly, perhaps, I notice, proprioceptively, my own physical state -- an arousal of a certain sort. Maybe I notice that I have a visual image of the person suffering a painful death. (How do I know what imagery I have? Well, maybe that's by introspection, too, and there will be a pluralist story to tell there also.) Maybe I try turning my thoughts toward what is enviable and not enviable about this person and notice whether my bodily arousal crests and falls. Maybe, in the very labeling of myself as "envious", I partly make it true; I was feeling more diffusely negative before, and the label crystallizes it. On top of such processes, I see no reason to reject the possibility, and I see several reasons to accept the possibility, that there are subpersonal causal processes (not, necessarily, the operation of dedicated modules) that show some sort of sensitivity directly to the emotional state itself, i.e., that work directly to increase the likelihood of my reaching the judgment that I'm envious, given that I am indeed envious.
I doubt we can usefully carve out some subpart of this multifarious mash-up and say that it, alone, is the "introspective process". I think that introspection, like much of cognition according to Clark, is multi-faceted, partly in short connections in the head, partly in broad interactions in the head, and partly spread out into the body and environment.
This is, of course, not independent of my view that we often get our introspective judgments badly wrong.
Posted by Eric Schwitzgebel at 11:43 AM 5 comments
Labels: introspection
Friday, January 08, 2010
British Tour
I'll be speaking around Britain the next couple of weeks. Here's my schedule, if anyone wants to come to a talk or meet for coffee:
Tues Jan 12, 12:30 pm: Arrive in London (overnighting in Oxford until the 19th).
Thurs Jan 14, 12:00 pm, University of London: "Acting Contrary to Our Professed Beliefs, or the Gulf Between Occurrent Judgment and Dispositional Belief" (Institute of Philosophy, School of Advanced Study).
Fri Jan 15, 4:00 pm, Bristol University: "Introspection, What?" (Common Room, Philosophy Department, 9 Woodland Road).
Sat Jan 16, 9:45 am - 6:00 pm, Oxford University: Limitations of Introspection Workshop. 1:45 pm: "Introspection, What?" (JCR Lecture Theatre, St. Catherine's College).
Mon Jan 18, 12:30 pm, Oxford University: "The Moral Behavior of Ethics Professors" (Wellcome Center for Neuroethics, Old Indian Institute).
Tues Jan 19, 6:00 pm, University of Edinburgh: "An Empirical Perspective on the Mencius-Xunzi Debate about Human Nature" (Abden House, Confucius Institute for Scotland).
Wed Jan 20, 11:00 am, University of Edinburgh: Seminar in Philosophy, Psychology, and the Language Sciences.
Wed Jan 20, 5:00 pm, University of Edinburgh: "The Moral Behavior of Ethics Professors" (Department of Philosophy).
Thurs Jan 21, 12:00 pm, University of Leeds: "Acting Contrary to Our Professed Beliefs, or the Gulf Between Occurrent Judgment and Dispositional Belief" (CETL, Philosophy Department).
Thurs Jan 21, 5:00 pm, University of York: "Introspection, What?" (Department of Philosophy).
Fri Jan 22, University of Warwick: "Acting Contrary to Our Professed Beliefs, or the Gulf Between Occurrent Judgment and Dispositional Belief" (Department of Philosophy).
Sat Jan 23, 4:00 pm: Depart from London.
Posted by Eric Schwitzgebel at 9:51 AM 10 comments
Thursday, January 07, 2010
Might Ethicists Behave More Permissibly but Also No Better?
I've been thinking a fair bit about the relationship between moral reflection and moral behavior -- especially in light of my findings suggesting that ethicists behave no better than non-ethicists of similar social background. I've been working with the default assumption that moral reflection can and often does improve moral behavior; but I'm also inclined to read the empirical evidence as suggesting that people who morally reflect a lot don't behave, on average, better than those who don't morally reflect very much.
Those two thoughts can be reconciled if, about as often as moral reflection is morally salutary, it goes wrong in one of the following ways:
* it leads to moral skepticism or nihilism or egotism,
* it collapses into self-serving rationalization, or
* it reduces our ability to respond unreflectively in good ways.
But all this is rather depressing, since it suggests that if my aim is to behave well, there's no point in morally reflecting -- the downside is as big as the upside. (Or it is, unless I can find a good way to avoid those risks, and I have no reason to think I'm a special talent.)
But it occurs to me now that the following empirical claim might be true: The majority of our moral reflection concerns not what it would be morally good to do but rather whether it's permissible to do things that are not morally good. So, for example, most people would agree that donating to well-chosen charities and embracing vegetarianism would be morally good things to do. (On vegetarianism: Even if animals have no rights, eating meat causes more pollution.) When I'm reflecting morally about whether to eat the slightly less appealing vegetarian dish or to donate money to Oxfam -- or to kick back instead of helping my wife with the dishes -- I'm not thinking about whether it would be morally good to do those things. I take it for granted that it would be. Rather, I'm thinking about whether not doing those things is morally permissible.
So here, then, is a possibility: Those who reflect a lot about ethics have a better sense of which morally-less-than-ideal things really are permissible and which are not. This might make them behave morally worse in some cases -- for example, when most people do what is morally good but not morally required, mistakenly thinking it is required (e.g., voting? returning library books?); and it might make them behave morally better in others (e.g., vegetarianism?). On average, they might behave just about as well as non-ethicists, doing less that is supererogatory but better meeting their moral obligations. If so, then philosophical moral reflection might be succeeding quite well in its aim of regulating behavior without actually improving it, no skepticism or nihilism or rationalization or injury of spontaneous reactions required.
Posted by Eric Schwitzgebel at 1:09 PM 9 comments
Labels: ethics professors, moral psychology
Wednesday, December 30, 2009
John Stuart Mill's Mother
... you wouldn't know he had one, from his Autobiography (which I just finished reading), other than that he mentions the presence of younger siblings and, in passing, his father's marriage. Pages containing the word "father" in the Autobiography: 127. Pages containing the word "mother": 1 (in reference to someone else's mother).
Biographers seem to conclude that Mill's mother had little influence on him (e.g., Wilson in the Stanford Encyclopedia entry on Mill) -- but I don't think that follows at all. A Google search of Harriet Barrow Mill doesn't reveal much; she seems to be lost in the dust of time.
Posted by Eric Schwitzgebel at 10:58 AM 4 comments
Monday, December 21, 2009
Worried about the University of California?
... this graph says it all. The blue line is below the yellow line because the blue line indicates the national average for all public higher education, including lower tier universities and two-year community colleges.
Source: http://keepcaliforniaspromise.org/?p=314
Posted by Eric Schwitzgebel at 8:57 PM 4 comments
Wednesday, December 16, 2009
Professorial Product Placement
Viewing the latest Lady Gaga video, with its ten product placements, I'm inspired by the thought: Why don't professors do product placements, too?
Actually, this first occurred to me a couple years ago, when I noticed Andy Clark sipping a Monster energy drink while speaking before a large audience at a plenary session of the biennial Tucson Toward a Science of Consciousness conference. Naturally -- I dare say inevitably -- I thought to myself: "Hey, Monster energy drinks must be cool if Andy Clark is drinking one. I should go out and buy one now! I wonder how much Monster would pay me to drink one at my plenary session?" (Admittedly, my experience at the moment was not sampled by a Hurlburt beeper, so my recollection may be slightly erroneous.)
There are many product placement opportunities for professors: We could display products like drinks or high fashion during classes and public lectures -- with all the respect we command from the high socio-economic status young adult demographic! We could mention products as examples in oral presentations and published articles. ("Suppose that a trolley is rolling out of control toward five people it will inevitably kill unless you push a heavy object over the tracks to stop it. The only available heavy object is a late-model Lexus RX10....") We could even link to them from our blogs.
However, the most dramatic impact would surely come from a tattoo on the face. Thus, I make the following standing offer: For $2,000,000 U.S., I will give over three inches square of real estate on my cheek, for an appropriately tasteful tattoo by a company that's not too evil. (Evil companies will have to pay a surcharge sufficient to bring the overall utilitarian considerations back into balance.) To preserve what's left of my dignity, I will immediately donate half the amount to Oxfam -- which should, conservatively, save at least ten people's lives. (That seems worth it, doesn't it? Would you want to face the ten people who died because you weren't willing to tattoo your face?)
Now admittedly, the U.C. Riverside / Schwitzgebel brand is probably not realistically worth enough to command that kind of money for an advertisement, but maybe an eminent professor at Harvard or Princeton could do so -- especially given the free press that would no doubt accompany the first professorial facial tattoo advertisement. Peter Singer seems like a natural choice given his high visibility, and with his attitudes toward famine and charity, how could he refuse the offer?
Posted by Eric Schwitzgebel at 9:19 AM 8 comments
Tuesday, December 15, 2009
Map of the Analytic Philosopher's Brain
Back in the 1990s, Joe Cruz and I joked around about drawing up a "map of the analytic philosopher's brain" -- a kind of phrenological map, with the size of the labeled areas proportional to their importance to the discipline. Twin Earth would have a major lobe, while the meaning of life would have only a tiny nodule. (Twin Earth is a science fiction thought experiment about a planet just like Earth in all ways detectable to the inhabitants but with some chemical XYZ rather than H2O running in streams and clouds and faucets. The question is whether this would change the content or meaning of the inhabitants' thoughts and words.) Although Twin Earth discussion has died down a bit since the 1990s, I'd wager it still gets considerably more mentions in analytic philosophy articles than does the meaning of life.
(As a rough check of this, I just did a JStor search of occurrences, since 1990, of "twin earth" and "meaning of life" in the sixty JStor philosophy journals. Sure enough, "Twin Earth" wins 552 to 377. Looking just at the four most elite general analytic journals [J Phil, Mind, Nous, and Phil Review], the ratio is even more lopsided, 174 to 48.)
It occurs to me that the recent Chalmers/Bourget survey of the philosophical community is a kind of map of the analytic philosophers' brain, too. With feedback from a fair number of beta testers (including me), they developed a list of thirty questions to send around to a huge chunk of the Anglophone philosophical community (including almost all faculty at major departments) -- questions they felt would provide a kind of sociological snapshot of the profession's views on a wide range of key issues.
Below, then, are the thirty questions they selected. Notice that the meaning of life makes no appearance. But we do see questions about zombies, teletransporters, and runaway trolleys. That these were the questions chosen is as interesting a fact about the sociology of the profession, I think, as the particular distribution of the answers.
A priori knowledge: yes or no?
Abstract objects: Platonism or nominalism?
Aesthetic value: objective or subjective?
Analytic-synthetic distinction: yes or no?
Epistemic justification: internalism or externalism?
External world: idealism, skepticism, or non-skeptical realism?
Free will: compatibilism, libertarianism, or no free will?
God: theism or atheism?
Knowledge claims: contextualism, relativism, or invariantism?
Knowledge: empiricism or rationalism?
Laws of nature: Humean or non-Humean?
Logic: classical or non-classical?
Mental content: internalism or externalism?
Meta-ethics: moral realism or moral anti-realism?
Metaphilosophy: naturalism or non-naturalism?
Mind: physicalism or non-physicalism?
Moral judgment: cognitivism or non-cognitivism?
Moral motivation: internalism or externalism?
Newcomb's problem: one box or two boxes?
Normative ethics: deontology, consequentialism, or virtue ethics?
Perceptual experience: disjunctivism, qualia theory, representationalism, or sense-datum theory?
Personal identity: biological view, psychological view, or further-fact view?
Politics: communitarianism, egalitarianism, or libertarianism?
Proper names: Fregean or Millian?
Science: scientific realism or scientific anti-realism?
Teletransporter (new matter): survival or death?
Time: A-theory or B-theory?
Trolley problem (five straight ahead, one on side track, turn requires switching, what ought one do?): switch or don't switch?
Truth: correspondence, deflationary, or epistemic?
Zombies: inconceivable, conceivable but not metaphysically possible, or metaphysically possible?
Posted by Eric Schwitzgebel at 3:42 PM 7 comments
Labels: metaphilosophy, sociology of philosophy
Friday, December 11, 2009
Do Ethicists Steal More Books?
... is now in print at Philosophical Psychology.
Abstract:
If explicit cognition about morality promotes moral behavior then one might expect ethics professors to behave particularly well. However, professional ethicists’ behavior has never been empirically studied. The present research examined the rates at which ethics books are missing from leading academic libraries, compared to other philosophy books similar in age and popularity. Study 1 found that relatively obscure, contemporary ethics books of the sort likely to be borrowed mainly by professors and advanced students of philosophy were actually about 50% more likely to be missing than non-ethics books. Study 2 found that classic (pre-1900) ethics books were about twice as likely to be missing.
My favorite table (click to enlarge):
Posted by Eric Schwitzgebel at 10:15 AM 6 comments
Labels: ethics professors, moral psychology
Wednesday, December 09, 2009
Does the Majority of Philosophers Think that the External World Exists?
... and other poll results here. (These are, of course, the results from David Chalmers' and David Bourget's PhilPapers survey of thousands of philosophers.)
It turns out that 81.6% of philosophers are non-skeptical realists about the external world, thus confirming my hypothesis that there is greater philosophical consensus in favor of the Democratic party than about the existence of a mind-independent external world.
Posted by Eric Schwitzgebel at 11:56 AM 1 comments
Labels: metaphilosophy
Thursday, December 03, 2009
Lebensraum / Elbow Room
The Schoolhouse Rock video "Elbow Room", celebrating the westward expansion of the U.S., got considerable TV airplay back in the 1970s when I was a kid. It looks very different to me now. What I find most chilling is the gleeful -- I'm sure unintentional -- parallel to the Nazi idea of "Lebensraum" (living space), used to justify German expansion.
Here's the video:
We were so fortunate to have an empty continent all to ourselves, don't you think?
Posted by Eric Schwitzgebel at 3:13 PM 5 comments
Labels: culture, moral psychology
Wednesday, November 25, 2009
The Experience of Reading
What kinds of imagistic or sensory experiences do you normally have when reading prose? Here are three possibilities, not exclusive:
(a.) Inner speech. You "hear" (or more accurately auditorily imagine) a voice -- maybe your own voice, or the voice of the author, or the voice of a character, or some other voice -- saying the words you are reading.
(b.) Visual imagery. You experience visual images of the events described or hinted at in the text, or maybe images in other modalities (auditory images besides those of the words you are reading, maybe tactile images, olfactory images, motoric images).
(c.) Sensory experience of the text. You visually experience the text on the page, that is, the black and white of ink on paper or pixels on the computer screen.
I'm inclined to say, in my own case, that (a) and (c) are pretty much constant and (b) comes and goes. I also would have been inclined to think that (a) and (c) would be pretty universal for everybody and (b) highly variable between people. But it turns out that reports of (a) and (c) are also highly variable.
For example, the research participant "Melanie", interviewed in my 2007 book with Russ Hurlburt, says that normally when she reads she starts out in inner speech and then "takes off" into images, leaving the inner speech behind (comparable to the difference between an airplane taxiing and flying; p. 101). When she is asked to report on two particular moments of experience while reading (having been interrupted by a beeper), she comes pretty close to explicitly denying that she has any sensory experience of the text on the page (e.g., p. 100).
Julian Jaynes says to his readers "And as you read you are not conscious of the letters or even of the words or even of the syntax or the sentences and punctuation, but only of their meaning" (1976, p. 26-27) -- thus seeming to deny at least visual experience of the text on the page, and probably auditory imagery or inner speech of the words as well.
In contrast, Bernard Baars seems to assume the near-universality of inner speech, writing: "Human beings talk to themselves every moment of the waking day. Most readers of this sentence are doing it now" (2003, p. 106).
Wittgenstein writes: "Certainly I read a story and don't give a hang about any system of language. I simply read, have impressions, see pictures in my mind's eye, etc. I make the story pass before me like pictures, like a cartoon story" (1967, p. 44e).
Charles Siewert writes, after quoting the Jaynes passage above: "[If] Jaynes is denying that we consciously see the book, the page, or anything printed on it, then it seems what we are asked to believe is this: typically when we read, we function with a kind of premium-grade blindsight.... I find this extreme denial of visual consciousness, once made plain, very strange, and just about as obviously false a remark as one could make about visual experience" (1998, p. 248-249).
Max Velmans, like Siewert, seems to find the visual experience of the text mandatory, inner speech more optional: "When consciously reading this sentence, for example, you become aware of the printed text on the page, accompanied, perhaps, by inner speech (phonemic imagery), and a feeling of understanding (or not)" (2002, p. 16).
Gavin and Susan Fairbairn, in a text intended to instruct college students in better reading, write: "In contrast to the experience of those who find that they are conscious of every word when they read fiction, many people find, especially but not exclusively when they are reading fiction, that when they 'get into' the text they seem to be aware of meanings, sounds and pictures, even smells and feelings, without any conscious awareness of the words used to convey them.... Hearing the sounds of words when you read can be a handicap" (2001, p. 25). This view seems rather close to Melanie's analogy to taxiing and flying.
Almost all these authors -- Melanie is of course an exception, and Wittgenstein may or may not be -- take these statements to describe the experience of reading in general, not just for themselves individually. Obviously, though, they reach very different conclusions. (Such is consciousness studies!) As far as I'm aware, however, no one has ever published a systematic study of the matter.
Quotes, descriptions of your own experience, etc., warmly welcomed in the comments section.
(Thanks to my student Alan Moore for some of the quotes above. His own interesting work on the experience of reading will hopefully be the topic of a future post.)
Posted by Eric Schwitzgebel at 3:17 PM 17 comments
Labels: stream of experience
Thursday, November 19, 2009
On Measuring People Twice
Lots of psychological studies involve measuring people twice. For example, in the imagery literature, there's a minor industry that seeks to relate self-reports about imagery to performance on cognitive tasks that seem to involve visual imagery, such as visual memory tests or mental rotation tasks.
(A typical mental rotation task presents two line drawings of 3-D figures and asks whether one is a simple rotation of the other; image from http://www.skeptic.com.)
Participants in such studies thus receive two tests, the cognitive test in question and also a self-report imagery test of some sort, such as the Vividness of Visual Imagery Questionnaire (VVIQ), which asks people to form various visual images and then rate their vividness. Correlations will often -- though by no means always -- be found. This will be taken to show that people with better (e.g. more vivid) imagery do in fact have more skill at the cognitive task in question.
This drives me nuts.
Reactivity between measures is, I think, a huge deal in such cases. Let me clarify by developing the imagery example a little farther.
Suppose you're a participant in an experiment on mental imagery -- an undergraduate, say, volunteering to participate in some studies to fulfill psychology course requirements. First, you're given the VVIQ, that is, you're asked how vivid your visual imagery is. Then, immediately afterward, you're given a test of your visual memory -- for example, a test of how many objects you can correctly recall after staring for a couple of minutes at a complex visual display. Now if I were in such an experiment and I had rated myself as an especially good visualizer when given the VVIQ, I might, when presented with the memory test, think something like this: "Damn! This experimenter is trying to see whether my imaging ability is really as good as I said it was! It'll be embarrassing if I bomb. I'd better try especially hard." Conversely, if I say I'm a poor visualizer, I might not put much energy into the memory task, so as to confirm my self-report or what I take to be the experimenter's hypothesis. Reactivity can work the other way, too, if the subjective report task is given second: say I bomb the memory (or some other) task, and then I'm given the VVIQ. I might be inclined to think of myself as a poor visualizer in part because I know I bombed the first task.
In general, participants are not passive innocents. Any time you give them two different tests, you should expect their knowledge of the first test to affect their performance on the second. Exactly how subjects will react to the second test in light of the first may be difficult to predict, but the probability of such reactivity should lead us to anticipate that, even if measures like the VVIQ utterly fail as measures of real, experienced imagery vividness, some researchers should find correlations between the VVIQ and performance on cognitive tasks. Therefore the fact that some researchers do find such correlations is no evidence at all of the reality of the posited relationship, unless there's a pattern in the correlations that could not just as easily be explained by reactivity.
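The worry can be made concrete with a toy simulation (my sketch, not from the post, with made-up parameters): give each simulated participant a "true" memory skill and a self-reported vividness score that are genuinely independent, then let the self-report influence effort on the memory task. A VVIQ-style correlation appears even though there is no real underlying relationship:

```python
import random

random.seed(0)

def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def simulate(n=2000, reactivity=0.0):
    """Simulate n participants with NO true link between self-reported
    vividness and memory skill.  With reactivity > 0, participants who
    just rated themselves vivid imagers try harder on the memory task,
    which boosts their measured score (the hypothesized confound)."""
    vviq, memory = [], []
    for _ in range(n):
        v = random.gauss(0, 1)        # self-reported vividness (VVIQ-like)
        skill = random.gauss(0, 1)    # true memory skill, independent of v
        effort = reactivity * v       # reactivity channel: report -> effort
        vviq.append(v)
        memory.append(skill + effort + random.gauss(0, 0.5))
    return pearson(vviq, memory)

print(round(simulate(reactivity=0.0), 2))  # near zero: no true relationship
print(round(simulate(reactivity=0.5), 2))  # clearly positive, from reactivity alone
```

The second correlation is driven entirely by the effort term, which is exactly the pattern that would be misread as evidence that the VVIQ tracks real imagery vividness.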
In the particular case at hand, actually, I think the overall pattern of data positively suggests that reactivity is the main driving force behind the correlations. For example, to the extent there is a pattern in the relationship between the VVIQ and memory performance, the tendency is for the correlations to be higher in free recall tasks than in recognition tasks. Free recall tasks (like trying to list items in a remembered display) generally require more effort and energy from the subject than recognition tests (like “did you see this, yes or no?”) and so might be expected to show more reactivity between the measures.
The problem of reactivity between measures will plague any psychological subliterature in which participants are generally aware of being measured twice -- including much happiness research, almost any area of consciousness studies that seeks to relate self-reported experience and cognitive skills, the vast majority of longitudinal psychological studies, almost all studies on the effectiveness of psychotherapy or training programs, etc. Rarely, however, is it even given passing mention as a source of concern by people publishing in those areas.
Posted by Eric Schwitzgebel at 12:42 PM 6 comments
Labels: imagery, psychological methods