Friday, August 29, 2008

Philosophical Dialogues

The great philosophical "dialogues" are, of course, hardly dialogues. One voice is that of the philosopher; the others are foils of varying degrees of density and compliance. Large stretches of Plato's dialogues are merely expository essays with "It is so, Socrates" (or similar) regularly tossed in. In his Dialogues Concerning Natural Religion, Hume gives his foil Cleanthes a bit more philosophy than is usual, but the crucial final two parts, XI and XII, are Philo's almost alone.

In my view, this is merely an accident of the mundane realities of writing and publishing. Nothing prevents the compelling presentation of more than one side of an issue in a dialogic format. Normally, though, this will require authors with divergent views and an ability to work co-operatively with each other. My recent experience writing in this way with Russ Hurlburt (in our recent book) has convinced me that this can be a very useful method both for the authors and for readers. There's nothing like genuinely engaging with an opponent.

The dialogue is very different from the pro and con essay with replies. The dialogue has many conversational turns; the essay and replies no more than four. The dialogue invites the reader to a vision of philosophy as collaborative and progressive, with the alternative views building on each other; the pro and con essay invites a combative vision. The dialogue is written and re-written as a whole to cast each view in its best light given what emerges at the end of the dialogue, eliminating mere confusions and accommodating changes of view.

David Lewis published a couple of genuine dialogues on holes. John Perry has published delightful introductory dialogues on personal identity and on good and evil (though Perry is summarizing existing arguments rather than developing new ones). Surely there are other good exceptions, but they are rare.

I wonder how different -- and how much better -- philosophy would be if the standard method were to meet one's opponents and hash out a dialogue rather than to write a standard expository essay....

Tuesday, August 19, 2008

More Data on Professors' Voting Habits: Variability and Conscientiousness

I've a couple more thoughts to share from Josh Rust's and my study of the voting rates of ethicists and political philosophers vs. other professors. (Our general finding is that ethicists and political philosophers vote no more often than other professors, though political scientists do vote more often.)

(1.) Take a guess: Do you think extreme views about the importance or pointlessness of voting will be overrepresented, underrepresented, or proportionately represented among political scientists and political philosophers compared to professors more generally? My own guess would be overrepresented: I'd expect both more maniacs about the importance of voting and more cynics about it among those who study democratic institutions than among your average run of professors.

However, the data don't support that idea. The variance in the voting rates of political scientists and political philosophers in our study is almost spot-on identical to the variance in the voting rates of professors generally. Either political scientists and political philosophers are no more prone to extreme views than are other professors, or those extreme views have no influence on their actual voting behavior.

(2.) California professors are incredibly conscientious about voting in statewide elections. Half of our sample is from California, where we only have data for statewide elections. Among California professors whose first recorded vote is in 2003 or earlier, a majority (52%) voted in every single one of the six statewide elections from 2003-2006. 72% voted in at least five of the six elections. This compares with a statewide voting rate, for the June 2006 primary election alone, of only 33.6% of registered voters. (For other states, we have local election data too. There's no such ceiling effect once you include every single local ballot initiative, city council runoff election, etc.; professors aren't quite that conscientious!)

Saturday, August 16, 2008

Zhuangzi's Coffin and Liu Ling's Trousers

A friend asked me today about philosophical humor. Of course there are philosophical jokes that play on our jargon (e.g., a "goy" is a girl if observed before time t and a boy if observed after; compare "grue"), but are there philosophical jokes with a deeper point? For some reason the two examples that leapt to mind were both from the Daoist tradition. Their similarity is, I'm sure, not at all accidental.

When Chuang-tzu [a.k.a. Zhuangzi, 4th c. B.C.E.] was dying, his disciples wanted to give him a lavish funeral. Said Chuang-tzu:

"I have heaven and earth for my outer and inner coffin, the sun and the moon for my pair of jade disks, the stars for my pearls, the myriad creatures for my farewell presents. Is anything missing in my funeral paraphernalia? What will you add to these?"

"Master, we are afraid that the crows and kites will eat you."

"Above ground, I'll be eaten by the crows and kites; below ground, I'll be eaten by the ants and molecrickets. You rob one of them to give to the other; how come you like them so much better?" (Graham 1981 trans., p. 125)
And:
Liu Ling [3rd c. C.E.] always indulged in wine and let himself be uninhibited. Sometimes he would take his clothes off and stay in his house stark naked. When people saw this, they criticized him. Ling said: "I take Heaven and Earth as my pillars and roof, and the rooms of my house as my trousers. Gentlemen, what are you doing by entering my trousers?" (Goldin 2001, p. 117)
I suppose it's natural for the anticonventional Daoists to use humor to help knock loose people's presuppositions. Interestingly, though, the best-known Daoist, Laozi, doesn't employ much humor. In this respect, as in many others, the tone and spirit of Laozi and Zhuangzi differ immensely, despite the superficial similarity of their views.

[Note: Revised and updated Aug. 17.]

Monday, August 11, 2008

Which Machine Is Conscious? (by guest blogger Teed Rockwell)

The following thought experiment ended up in my online paper “The Hard Problem Is Dead, Long Live the Hard Problem”. It was first sent out to the Cognitive Questions mailing list, where it received replies from a variety of interesting people.

Let us suppose that the laboratories of Marvin Minsky and Rodney Brooks get funded well into the middle of the next century. Each succeeds spectacularly at its stated goal, and completely stays off the other's turf.

The Minskians invent a device that can pass every possible variation on the Turing test. It has no sense organs and no motor control, however. It sits stolidly in a room, aware only of what has been typed into its keyboard. Nevertheless, anyone who encountered it in an internet chatroom would never doubt that they were communicating with a perceptive, intelligent being. It knows history, science, and literature, and can make perceptive judgments about all of those topics. It can write poetry, solve mathematical word problems, and make intelligent predictions about politics and the stock market. It can read another person's emotions from their typed input well enough to figure out which topics are emotionally sensitive, and it artfully changes the subject when that would be best for all concerned. It makes jokes when fed straight lines, and can recognize a joke when it hears one. And it plays chess brilliantly.

Meanwhile, Rodney Brooks' lab has developed a mute robot that can do anything a human artist or athlete can do. It has no language, neither spoken nor an internal language-of-thought, but it uses vector transformations and other principles of dynamical systems to master the uniquely human non-verbal abilities. It can paint and make sculptures in a distinctive artistic style. It can learn complicated dance steps, and after it has learned them it can choreograph steps of its own that extrapolate creatively from them. It can sword-fight against master fencers and often beat them, and if it doesn't beat them it learns their strategies so it can beat them in the future. It can read a person's emotions from her body language, and change its own behavior in response to those emotions in ways that are best for all concerned. And, to make things even more confusing, it plays chess brilliantly.

The problem that this thought experiment seems to raise is that we have two very different sets of functions that are unique and essential to human beings, and there seems to be evidence from Artificial Intelligence that these different functions may require radically different mechanisms. And because both of these functions are uniquely present in humans, there seems to be no principled reason to choose one over the other as the embodiment of consciousness. This seems to make the hard problem not only hard, but important. If it is a brute fact that X embodies consciousness, this could be something that we could learn to live with. But if we have to make a choice between two viable candidates X and Y, what possible criteria can we use to make the choice?

For me, at least, any attempt to decide between these two possibilities seems to rub our noses in the brute arbitrariness of the connection between experience and any sort of structure or function. So does any attempt to prove that consciousness needs both of these kinds of structures. (Yes, I know I'm beginning to sound like Chalmers. Somebody please call the Deprogrammers!) This question seems to be in principle unfalsifiable, and yet genuinely meaningful. And answering a question of this sort seems to be an inevitable hurdle if we are to have a scientific explanation of consciousness.

Friday, August 08, 2008

Working to Ignore Our Vice

In a forthcoming article, Piercarlo Valdesolo and David DeSteno tried the following experiment: In part one, participants were faced with the possibility of doing one of two tasks -- one a brief and easy survey, the other a difficult and tedious series of mathematics and mental rotation problems -- and they were given the choice between two decision procedures. Either they could choose the task they preferred, in which case (they were led to believe) the next participant would receive the other task, or they could allow the computer to randomly assign them to one of the two tasks, since "some people feel that giving both individuals an equal chance is the fairest way to assign the tasks". Perhaps unsurprisingly, 93% of participants chose simply to give themselves the easy task.

In part two, participants were asked to express opinions about various aspects of the experiment, including rating how fairly they acted (on a 7-point scale from "extremely fairly" to "extremely unfairly"). Some participants completed these questions under normal conditions; others completed the questions under "cognitive load" -- that is, while simultaneously being asked to remember strings of seven digits. A third group did not complete part one, but watched a confederate of the experimenter complete it, rating the confederate's fairness.

Again unsurprisingly, people rated the choice of the easy task as more unfair when they saw someone else make that choice than when they made that choice themselves. But here's the interesting part: They did not do so when they had to make the judgment under the "cognitive load" of memorizing numbers.

Consider two possible models of rationalization. On the first model, we automatically see whatever we do as okay (or at least more okay than it would be if others did it) and the work of rationalization comes after this immediate self-exculpatory tendency. On the second model, our first impulse is to see our action in the same light we would see the same action done by others, and we have to do some rationalizing work to undercut this first impulse and see ourselves as (relatively) innocent. The current experiment appears to support the second model.

I suspect that moral reflection is bivalent -- that sometimes it helps drive moral behavior but sometimes it serves merely to dig us deeper into our rationalizations and is actually morally debilitating. It is by no means clear to me now which tendency dominates. (I was originally inclined to think that moral reflection was overall morally improving, but my continued reflections on the moral behavior of ethics professors are leading me to doubt this.) Valdesolo and DeSteno's experiment and the second model of rationalization fit nicely with the negative side of the bivalent view: The more we devote our cognitive resources to reflecting on the moral character of our past behavior, the more we tend to make false angels of ourselves.

Tuesday, August 05, 2008

3 Science Stoppers (by guest blogger Teed Rockwell)

The most decisive criticism of the very idea behind “Intelligent Design” theory (ID) is that it is a “Science Stopper”. There is no such thing as evidence either for or against ID, because “God did it” is not an explanation. It is simply a way of filling a gap in our knowledge with an empty rhetorical flourish. If there is a God, he created everything in the Universe, and thus to use this claim as an explanation for a particular occurrence is either trivially false or trivially true.

There are, however, two other science stoppers which are not acknowledged as such.

1) The concept of “direct awareness”, so beloved by positivists and other empiricists. To say something is directly given implies that we have no explanation for how we are aware of it. I believe Hume refused to define experience, because its meaning was allegedly obvious. Ned Block made a similar claim for “phenomenal consciousness”. In this context, the word “obvious” basically means “any prejudice that is so widely accepted that no one feels a need to justify it.” The prejudice here is that because we are all familiar with experience, it does not require an explanation. However, once a scientific explanation becomes available for how our experience arises, the idea of direct awareness must be rejected. The essential point here is that this would happen regardless of what explanation was discovered. Once there is a mechanical cause-and-effect explanation for our experience, the experience is by definition no longer direct. In much the same way, “God created Life” seemed plausible until Darwin showed us how Life was created. However, the God-referring explanation would have been rendered inadequate by any possible causal explanation. Because of the principle of sufficient reason, explanations like “we are directly aware of X” or “God created X” are both science stoppers: gaps waiting to be filled by causal explanations. Chalmers apparently rejects the principle of sufficient reason as a metaphysical truth, for he believes that “there is nothing metaphysically impossible about unexplained physical events”. If so, what criteria does he suggest we use for dismissing Intelligent Design?

2) The concept of “Intrinsic Causal Powers.” This concept stops us a little further down the road, but stops us nevertheless. Causal explanations always need to see certain properties as intrinsic, so they can map and describe the relations between those intrinsic properties. If causal explanations didn’t stop somewhere and start talking about the relationships between something and something else, they’d never get off the ground. But this pragmatic fact about scientific practice does not justify the metaphysical claim that there are certain causal powers which are “intrinsically intrinsic.” Paradoxically, intrinsicality is itself a relational property. A property which is intrinsic in one science (say chemistry or biology) must be analyzable into a set of relationships in some other science (for example, physics). To deny this is to limit us to descriptions, and stop us from finding explanations.

One brand of physicalism claims that only very tiny particles possess intrinsic powers, but this contradicts another “physicalist” claim that brains have intrinsic powers. Those who believe in the mind/brain identity theory claim that environmental factors may cause experiences, but brain states embody those experiences. This is a fancy way of saying that brains have the intrinsic causal power to produce mental states, just as knives are intrinsically sharp. But saying that knives are sharp is just shorthand for saying that they can participate in bread-cutting events, cloth-cutting events, etc. Talk of sharpness intrinsically inhering in knives, or of mental states inhering in brains, is accurate enough for many purposes, but it mires us in an Aristotelian world of dispositional objects which limits scientific progress.