Wednesday, July 30, 2008

The Linguistic U-Turn

In the early to mid-20th century, anglophone philosophy famously took a "linguistic turn". Debates about the mind were recast as debates about the terms or concepts we use to describe the mind; debates about free will were recast as debates about the term or concept "freedom"; debates about knowledge were recast as debates about how we do or should use the word "knowledge"; etc. Wittgenstein famously suggested that most philosophical debates were not substantive but rather confusions that arise "when language goes on holiday" (Philosophical Investigations 38); diagnose the linguistic confusion, dissipate the feeling of a problem. J.L. Austin endorsed a vision of philosophy as largely a matter of extracting the wisdom inherent in the subtleties of ordinary language usage. Bertrand Russell and Rudolf Carnap set philosophers the task of providing logical analyses of ordinary and scientific language and concepts. By no means, of course, did all philosophers go over to an entirely linguistic view of philosophy (not even Austin, Russell, or Carnap), but the shift was substantial and profound. In sweeping histories, the linguistic turn is often characterized as the single most important characteristic or contribution of 20th-century philosophy.

I view the linguistic turn partly sociologically -- as driven, in part, by philosophy's need to distinguish itself from rising empirical disciplines, especially psychology, as departmental and disciplinary boundaries grew sharper in anglophone universities. The linguistic turn worked to insulate philosophical discussion from empirical science: Psychologists study the mind; we philosophers, in contrast, study the concept of the mind. Physicists study matter; we study the concept of the material. "Analytic" philosophers could thus justify their ignorance of and disconnection from empirical work.

This move, however, could only work when psychology was in its youth and dominated by a behaviorist focus on simple behaviors and reinforcement mechanisms (and, even earlier, introspective psychophysics). As psychology has matured, it has become quite evident -- as indeed it should have been evident all along -- that questions about our words and concepts are themselves also empirical questions and so subject to psychological (and linguistic) study. This has become especially clear recently, I think, with the rise of "experimental philosophers" who empirically test, using the methods of psychology, philosophers' and ordinary folks' intuitions about the application of philosophical terms. (In fact, Austin himself was fairly empirical in his study of ordinary language, reading through dictionaries, informally surveying students and colleagues.)

A priori armchair philosophy is thus in an awkward position. The 20th-century justification for analytic philosophy as a distinct a priori discipline appears to be collapsing. I don't think aprioristic philosophers will want to hold on much longer to the view that philosophy is really about linguistic and conceptual analysis. It's clear that psychology and linguistics will soon be able to (perhaps already can) analyze our philosophical concepts and language better than armchair philosophers. Philosophers who want to reserve a space for substantive a priori knowledge through "philosophical intuition", then, have a tough metaphilosophical task cut out for them. George Bealer, Laurence BonJour, and others have been working at it, but I can't say that I myself find their results so far very satisfying.

Monday, July 28, 2008

Why Zombies Are Impossible (by guest blogger Teed Rockwell)

Does it make sense to speak of creatures which behave exactly like conscious beings, but are not actually conscious? Such creatures are called philosopher’s zombies, and they are very different from Hollywood zombies. Hollywood zombies are noticeably different from normal human beings: they drool, shuffle, etc. However, when we use this definition of “Hollywood zombie” for philosophical purposes, we must remember that there are many Hollywood zombies who are not sufficiently behaviorally anomalous to land movie contracts. In this technical philosophical sense, if a zombie’s behavior deviates from the behavior of conscious beings in any way whatsoever, she (it?) is a Hollywood zombie.

I think that philosopher’s zombies could not possibly exist, because they are as self-contradictory as round squares. The following argument is one reason why. The logic of the argument is unassailable, so it will stand or fall with the truth of the three premises, which I will now defend in turn.

1) Zombies are possible if and only if subjective experiences are epiphenomenal.

Philosopher’s zombies are possible only if there are mental states, often called qualia, which have no causal impact on the physical world whatsoever. If these qualia have any impact at all on the behavior of the creature that possesses them, then in principle another person could detect that impact, and consequently that creature would be a Hollywood zombie, not a philosopher’s zombie. There can never be such a thing as objective evidence for this kind of purely subjective consciousness, which means it would be epiphenomenal.

2) Subjective experiences are epiphenomenal if and only if we have a direct awareness of them.

Descartes claimed we have direct awareness of our mental states, which supposedly gives us absolute certainty; but granting this kind of certainty to our awareness of mental states gives second-class status to our knowledge of everything else (including the mental states of other people). If we don’t have a direct awareness of our subjective states, how else could there be any reason for believing that they exist? We have already stipulated that there must be no evidence for their existence in the external world. Everyone agrees that there is no such thing as what Dan Dennett calls a zagnet, i.e., an object which behaves like a magnet but lacks some kind of inner “magnetismo”. The only difference between a zagnet and a zombie is that we supposedly have a direct awareness of these epiphenomenal states which we possess and zombies lack.

3) There is no such thing as direct awareness.

See Sellars’ attack on the Myth of the Given and Quine’s on the second of the two dogmas of empiricism. These arguments are too complex to summarize briefly, but most people who have carefully studied them find them to be decisive. Check them out if you can find the time. Also, anyone who has studied the physiology of perception knows that there is nothing direct about how we become aware of our sensations. There is lots of complicated processing going on; they don’t just jump into our awareness and say “hi, I’m the color red.”

Therefore (by hypothetical syllogism and modus tollens), subjective experiences are not epiphenomenal, and zombies are not possible.
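
(To make the logical skeleton explicit -- the notation below is my illustration, not part of Rockwell's original post -- let Z stand for "zombies are possible", E for "subjective experiences are epiphenomenal", and D for "we have direct awareness of our subjective experiences".)

```latex
% A regimentation of the argument (illustrative notation):
% Z = zombies are possible, E = experiences are epiphenomenal,
% D = we have direct awareness of our subjective experiences.
\begin{align*}
\text{P1.}\quad & Z \rightarrow E && \text{(the direction of premise 1 the argument needs)}\\
\text{P2.}\quad & E \rightarrow D && \text{(the direction of premise 2 the argument needs)}\\
\text{P3.}\quad & \neg D          && \text{(premise 3)}\\
\text{C1.}\quad & Z \rightarrow D && \text{(P1, P2, hypothetical syllogism)}\\
\text{C2.}\quad & \neg E          && \text{(P2, P3, modus tollens)}\\
\text{C3.}\quad & \neg Z          && \text{(C1, P3, modus tollens)}
\end{align*}
```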

For my argument, I only need conditionals (if...then), not biconditionals (if and only if). However, I think the standard argument for zombies flips these conditionals around (which is only legitimate with a biconditional), adds a modal operator, and creates an argument that looks like this:

A) If we have a direct awareness of our subjective experiences, then they could be epiphenomenal.
B) If our subjective experiences could be epiphenomenal, then zombies are possible.
C) We do have a direct awareness of our subjective experiences.
Therefore (by hypothetical syllogism and modus ponens), zombies are possible.

The punchline of all this is that the best way to settle the zombie controversy is to refocus on the question of direct awareness. That is the only premise in this argument on which the two sides disagree, and the two classic papers by Quine and Sellars, cited above, pretty much finished off the concept of direct awareness, as far as many of us are concerned. I hear it’s coming back, though, and I don’t think it’s a coincidence that many of its supporters also think that zombies are possible. Let’s redirect the zombie problem to the question of whether there is direct awareness. If direct awareness goes, zombies go with it.

Sunday, July 27, 2008

Yet One More Reason to Believe in Divine Dispassion...

... from my eight-year-old son, Davy, an aspiring cartoonist!



(Yes, that's an alien spaceship with a pincer grip on Earth.)

Friday, July 25, 2008

In Philosophy, Women Move More Slowly to Tenure Than Do Men

It's well known (at least among feminist philosophers!) that only about 20% of philosophy professors are women. Surely this is partly due to a history of sexism in the discipline. The question is, does it also reflect current sexism?

No simple analysis could possibly settle that question, but here's one thought. If sexism is still prevalent in philosophy, we should expect women, on average, to move less quickly than men through the academic ranks -- from graduate student to non-tenure-track faculty to tenure-track Assistant Professor to tenured Associate or full Professor. It would then follow that women would be on average older than men at the lower ranks.

As it happens, the data Joshua Rust and I collected for our study of the voting rates of philosophers can be re-analyzed with this issue in mind.

For the voting study, we collected (among other things) academic rank data for most professors of philosophy in five states: California, Florida, Minnesota, North Carolina, and Washington State. Examining voter registration records, we found unambiguous name matches for 60.4% of those professors. Since four states (all but North Carolina) provided age data for registered voters, we were able to compare rank and age.

Overall, 23.1% of the philosophy professors in our study were female. The average birth years of men and women at each rank are:
Non-Tenure-Track: women 1958.1, men 1960.4
Assistant Professor: women 1965.3, men 1970.0
Tenured Professor: women 1955.3, men 1948.7
That the average male tenured professor is older than the average female tenured professor fits with the idea that the gender ratio in philosophy has improved over time; but that the average female untenured professor is older than the average male suggests that women are still slower to progress to tenure.

If you can bear with lists of numbers, the facts become clearer if we break down the data by birth year first, then gender and rank:

1900-1939 (54 profs.):
96% male (90% full, 10% assoc.)
4% female (50% full, 50% assoc.)
1940-1949 (100 profs.):
77% male (78% full, 13% assoc., 3% asst., 6% non-TT)
23% female (70% full, 9% assoc., 4% asst., 17% non-TT)
1950-1959 (104 profs.):
71% male (55% full, 28% assoc., 4% asst., 12% non-TT)
29% female (43% full, 27% assoc., 7% asst., 23% non-TT)
1960-1969 (99 profs.):
65% male (29% full, 39% assoc., 20% asst., 19% non-TT)
35% female (22% full, 39% assoc., 29% asst., 14% non-TT)
1970-1979 (57 profs.):
81% male (2% full, 11% assoc., 74% asst., 13% non-TT)
19% female (0% full, 18% assoc., 45% asst., 36% non-TT)
There's a general increase in the representation of women in philosophy among the younger generations, but for almost all age groups women are underrepresented among full professors and overrepresented in the lower ranks. It seems to me that the natural interpretation is that although women are coming into philosophy at higher rates than they used to, they either progress more slowly through the ranks or enter philosophy later in their lives (which is perhaps just another way of progressing more slowly). Even the reversal of the gender ratio trend for women born in 1970 or later fits with this: Men may be more overrepresented in this group than in slightly older groups not because that generation has fewer women pursuing philosophy but rather because the men are completing their Ph.D.'s and moving into teaching more quickly.
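
(For readers who want to see the mechanics, here is a minimal sketch of this kind of cohort-by-rank tabulation. The records, field names, and values are hypothetical placeholders, not Schwitzgebel and Rust's actual data or code.)

```python
# Sketch of a cohort-by-gender-by-rank tabulation (hypothetical data).
from collections import Counter, defaultdict

# Each record: (gender, rank, birth_year) -- placeholder values only.
professors = [
    ("F", "full", 1948), ("M", "full", 1945), ("F", "asst", 1968),
    ("M", "assoc", 1962), ("F", "non_tt", 1971), ("M", "asst", 1974),
]

def cohort(year: int) -> str:
    """Map a birth year to the decade bins used in the post."""
    if year < 1940:
        return "1900-1939"
    decade = (year // 10) * 10
    return f"{decade}-{decade + 9}"

tallies = defaultdict(Counter)
for gender, rank, year in professors:
    tallies[(cohort(year), gender)][rank] += 1

# Print each cohort/gender group with its rank distribution as percentages.
for (coh, gender), ranks in sorted(tallies.items()):
    total = sum(ranks.values())
    dist = ", ".join(f"{r}: {n / total:.0%}" for r, n in ranks.most_common())
    print(f"{coh} {gender}: n={total} ({dist})")
```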

It doesn't follow straightaway that the cause of women's slower progression is sexism, of course. Childbearing and other factors may play a role. It's also encouraging, I think, to see the rank differences diminishing in the younger groups.

Update, July 30: Brian Leiter looks at the assistant professors from top departments twelve years ago, and Rob Wilson breaks it down by gender in the comments to this post.

Sunday, July 20, 2008

Why there is No Magic in the Harry Potter Books (by guest blogger Teed Rockwell)

Philosophers and psychiatrists both use the word “magic” as a term of abuse. Intelligent Design Theory is accused of relying on magic. Psychiatric patients are accused of “Magical Thinking” when they believe that they can get what they want by simply wishing for it. Magical Thinking is often seen as the enemy of reason or science, but I think it would be more accurate to say that what Magical Thinking really rejects is causality. Knowledge and skills are dependent on knowing, and/or having a “grip” on, some causal factor that produces an effect we want. When defenders of Intelligent Design say that God interacts with the world “directly”, and that no further explanation is possible, they are renouncing causality itself. When people try to get what they want by just wishing for it, without bothering to find out how to get what they want, they are also renouncing causality, and putting their faith in magic.

Most people think that magic is just a collection of theories that aren’t true. By this definition, Newton would be a magician, because we now know that there is no such thing as absolute space. In fact, Newton also believed in alchemy, which is often called a branch of magic because it is so different from what scientists believe today. However, none of this detracts from Newton’s credentials as a scientist, which rest not on the content of his beliefs, but on his methods of doing research. The basic ideals of the scientific method rest on principles like causality, the principle of sufficient reason, and the search for abstract unity to account for perceived regular patterns. Anyone in the past, present, or future who rigorously adheres to these kinds of ideals has a right to be called a scientist, even if much of what they “discover” is eventually revealed to be wrong.

In fact, to be completely fair, we should not deny this honorific even to our colleagues in alternative possible realities. That is why, by the definition described above, there is actually no magic in the Harry Potter books. What Harry and his friends are studying at Hogwarts are the causal principles of an alternative reality very different from ours. The intellectual rigor required of a Hogwarts-style magician is essentially the same as that required of a scientist. Or perhaps more accurately, an engineer. The Weasley twins do some original research while developing their joke shop wares, and Dumbledore probably worked in something resembling a laboratory when he discovered the twelve new uses for dragon’s blood. But students in Hogwarts classes learn what was discovered by research outside their classrooms, and like engineers, they learn how to use these skills and facts to produce practical results.

Also like engineers, if Hogwarts students don’t fully learn the intricacies of the causal order, they don’t get the results they strive for. When Hermione Granger tried to transform herself using Polyjuice Potion, it was not enough to wish to be transformed. She had to get all the ingredients and procedures right, and one mistake (using a cat hair instead of a human hair) turned her into a cat instead of a Slytherin girl. In fact, almost every chapter shows examples of what happens when a student doesn’t fully understand the causal order of this alternative reality. Bones get completely removed instead of healed, teacups grow fur but refuse to turn into hamsters, and naïve people ignore the principles of evidence and believe in “non-existent” creatures like Crumple-Horned Snorkacks. Ironically, the Harry Potter books could be a child’s best cure for what psychiatrists call Magical Thinking. They might even be good introductory material for a class in scientific method.

Friday, July 18, 2008

Do Words Ever Feel Like They're Literally on the Tip of Your Tongue?

In the course of writing my book with Russ Hurlburt, I started to notice what I took to be a tendency in our subject Melanie, and in others, to over-literalize their metaphors into phenomenological reports -- that is, to regard themselves as having real conscious experiences that match the explicit or implicit metaphors they use to describe their mental lives: to think of themselves, for example, as literally seeing red when angry, being "blue" when depressed (Hurlburt and Schwitzgebel 2007, p. 72), or as experiencing their thoughts as sometimes literally in the back of their heads or minds (Hurlburt and Schwitzgebel 2007, p. 160). I'm generally suspicious of such claims.

So at the latest meeting of the Society for Philosophy and Psychology, when Jonathan Weinberg claimed to have a word on the tip of his tongue, I jumped in with the question of whether he really experienced a feeling like that of having a word near the tip of his tongue. Perhaps wisely, he denied it. Yet for all my skepticism, I feel some pull toward taking the "tip of the tongue" expression literally as an actual description of the phenomenology of having a word or name near to hand but not quite there.

In the case of seeing red when angry, I looked at cross-linguistic data: If people really do sometimes see red when angry, we might expect to see versions of that phrase across languages, or at least a cross-cultural association of red and anger; but we don't. For the tip-of-the-tongue phenomenon, however, there's much broader cultural agreement about the metaphor. Schwartz 1999 surveyed 51 Indo-European and non-Indo-European languages and found that 45 out of 51 used something like the "tip of the tongue" metaphor. Korean puts it particularly nicely with the phrase "sparkling at the end of the tongue".

So maybe there really is a widespread phenomenology here that this metaphor is latching onto?

Monday, July 14, 2008

How to Make a Rabbit’s Nose out of Electric Jello (by guest blogger Teed Rockwell)

We’ve all seen patterns created by causal forces “pushing” against each other (whether that description is literal or metaphorical depends on your metaphysics). Sometimes these patterns endure with a repeating persistence that makes them seem more like objects than events: waterfalls, tornados, rainbows. It seems a major accomplishment for these forces to coalesce into patterns that are solid enough to superficially resemble simple inanimate objects. But some argue that living organisms are more like interacting forces than they are like rocks. Unlike rocks, organisms do not passively endure. They must constantly interact with their environment, taking in matter and transforming it into energy and motion, or they destabilize and settle into a more enduring equilibrium. In other words, they die.

Advocates of Dynamic Systems Theory (DST) argue that this might mean that we would do a better job of understanding minds if we saw them as more like events than objects. Instead of building computers out of hard silicon modules, perhaps we should “build” them out of events. These events would endure much longer than waterfalls and rainbows, but they would not be discrete objects like the modular circuits in computers. Furthermore, I believe that these events could interact with each other to produce cognitively sophisticated behavior. Does this sound like a crazy speculation? That’s because it is. But thought-experiments, like science fiction, enable us to dream of things that never were, and say “why not?”. Neurobiologist Walter Freeman sees dynamic events as the primary cognitive factor in the olfactory brain of the rabbit (colloquially referred to as “rabbit’s nose” in the title). Perhaps if we had the right sort of hardware, we could create dynamic patterns that had the cognitive sophistication of computer modules.

Imagine a large colloidal suspension surrounded by oscillators, which send electrical charges and/or acoustic vibrations into that colloid. Let’s call this colloid “Electric Jello”. Imagine a keyboard that controls the amplitudes and frequencies of those oscillators to create what Dewey called a “System of Tensions” -- i.e., conflicting forces of various kinds which interact, then resolve into some kind of semi-equilibrium. A system of this sort would be neither stable nor unstable, but rather multi-stable. It could settle into different oscillating patterns, depending on the causal pressures it received from the oscillators. A system of this sort could in principle function like a decision tree in a computer program, and thus be arguably cognitive.
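
(As a toy illustration of multi-stability -- my own sketch, not part of Rockwell's proposal -- consider a damped particle in a double-well potential. A transient input "tension" determines which of two attractors the system settles into; that one-bit settling is the minimal sense in which a multi-stable system can play the role of a branch in a decision tree.)

```python
# A damped particle in the double-well potential V(x) = x^4/4 - x^2/2.
# The system is bistable: a brief input bias decides whether it settles
# into the attractor at x = -1 or at x = +1 (a one-bit "decision").

def settle(bias: float, steps: int = 20000, dt: float = 0.001) -> float:
    """Integrate x'' = -dV/dx - damping*x', with bias applied only briefly."""
    x, v = 0.0, 0.0
    damping = 0.5
    for i in range(steps):
        force = x - x**3          # -dV/dx for the double-well potential
        if i < 2000:              # transient push from the input "oscillators"
            force += bias
        v += (force - damping * v) * dt
        x += v * dt
    return x

if __name__ == "__main__":
    for b in (-0.4, 0.4):
        print(f"bias {b:+.1f} -> settles near x = {settle(b):+.2f}")
```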

There are systems in nature, such as Freeman’s olfactory rabbit brain and the ambulatory system of horses, which perform the function of the decision trees mapped by computer languages like LISP. (For more on this, see my Minds and Machines paper “Attractor Spaces as Modules”.) If DST is correct, hard-wired computer modules are only a brittle mechanical metaphor for these multi-stable systems. Until we come up with a new kind of hardware, however, these hard-wired computer systems will be the best that we can do. Electric Jello might be a form of hardware that could duplicate both the cognitive complexity and the dynamic flexibility of real embodied cognitive systems. In a Jules Verne-like spirit, I confess that I envision this machine with both typewriter keys and sliders that enable the programmer to adjust the parameters of numerous input oscillators until representations of tornado-like repeating patterns begin to emerge on a video screen. These tornado-like patterns would be what DST theorists call attractor spaces, and eventually it would be possible to flip from one space to another by manipulating the input oscillators. Somehow we would use the resulting changes in this system of tensions to manipulate outputs of some sort, and then we would have the functional equivalent of a computer control system. It would be, however, a system with unprecedented flexibility, because its “parts” would be events in multidimensional space, not hardwired modules.

Monday, July 07, 2008

Against Metaphysics, Especially the Metaphysics of Consciousness

I worry that metaphysics is a sham -- or at least, that it's a sham if it's thought of as an a priori discipline whereby one discovers a special class of truths while reflecting from an armchair. When you tilt back in your armchair and reflect, there's only one kind of thing you can discover, it seems to me: Facts about your own psychology. What gets called metaphysics, then, is really just a certain kind of self-study -- typically the study of the metaphysician's own concepts.

You can learn about the world by learning about your concepts, if your concepts contain in them important information about the world. Often they do contain such information: My concept of a bird as a type of biological organism -- warm-blooded, bipedal, winged, feathered, egg-laying, etc. -- contains, packed into it, information about a certain cluster of traits that tend to travel together. Because of this, reflecting on my concept of "bird" can bring forward facts I may not have explicitly considered in the past.

How do my concepts come to contain information about the world? They must do so by contact with the world -- either my own contact (directly by empirical observation or indirectly by hearing the reports of others) or my ancestors' contact if the concepts are innate. I don't see, then, how studying my own concepts could yield a different kind of information than studying the world directly; nor do I see how our concepts could provide an independent source of information about the world orthogonal to empirical observation or immune to empirical refutation. (For a parody of the idea that the truths suggested by conceptual reflection are immune to empirical refutation, see here.)

The weird, science fiction cases that philosophers tend to dwell on in metaphysical discussions are exactly the kinds of cases where we should expect our concepts to be least in touch with the world, aren't they? Consider disputes in the metaphysics of consciousness: Would a silicon robot that behaves just like a human being be genuinely conscious, or would it have no more real consciousness than the computer on which I'm writing this post? Is it "metaphysically possible" (even if in practice unlikely) that my conscious experience is radically different from yours (e.g., red-green inverted or worse) despite all the similarities in our behavior?

If we construe these questions as questions about our concepts or pre-existing ideas, then it's not unreasonable to think we can make progress on them from the philosopher's armchair (though other methods for studying our concepts may be equally or more illuminating, in the spirit of recent "experimental philosophy"). Some people apparently find it impossible to conceive that a robot that behaved like a human being wouldn't be conscious; others apparently find it impossible to conceive that such a robot would have human consciousness. That shows something about their concepts or background assumptions. But how could it be (as Searle and Block and Putnam and Lewis and many others seem to think) that armchair reflection could reveal whether robots really would be conscious? Our concept of "bird" works well for near-home cases but tells us nothing about life on other planets; so also our concept of "consciousness" works well enough for distinguishing waking from dreamless sleep, mundane red experiences from mundane yellow experiences, but how could it cast useful light on robots or inverted spectra?

Metaphysicians often respond to such concerns by pointing to mathematics: In math, it seems, we discover substantive facts about the universe from the armchair, so why not also in philosophy? But is it clear that in studying math we do discover substantive facts about the universe? Not every philosophy of math grants this assumption. Maybe what we do, in studying math, is simply invent and apply rules for symbol manipulation. Maybe we discover facts about the structure of our concepts and invent new concepts. The view (empirically grounded!) that from the armchair we can discover nothing beyond the circuit of our own minds seems to me so irrefutable that a conservative philosophy of math is mandatory. I find that considering alternative rules of logic (e.g., intuitionist logic or dialetheism) and alternative rules of arithmetic (e.g., Boolean algebra) helps me feel the pull of the idea that mathematics is more an invention than a discovery of mind-independent facts.
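
(A quick illustration of "alternative rules of arithmetic", in a sketch of my own: in Boolean algebra, "+" is standardly read as logical OR, so 1 + 1 = 1 rather than 2 -- the same symbols, different rules, different "truths".)

```python
# Boolean "addition" read as logical OR: idempotent, so 1 + 1 = 1.
def bool_add(a: int, b: int) -> int:
    return a | b  # bitwise OR on 0/1 values implements Boolean +

print(1 + 1)           # ordinary arithmetic: 2
print(bool_add(1, 1))  # Boolean algebra: 1
```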