Regular readers will have noticed over the past year or so a variety of posts on group consciousness. I have finally synthesized my thoughts into a full-length paper draft, which I have posted online here.
Here's the core argument: There seems to be no principled reason to deny entityhood to spatially distributed but informationally integrated beings (such as Martian Smartspiders or Betelgeusian beeheads). The United States can be considered as a concrete, spatially distributed but informationally integrated entity. Considered as such, the United States is at least a candidate for the literal possession of real psychological states, including consciousness. The question, then, is whether it meets plausible materialistic criteria for consciousness. My suggestion is that if those criteria are liberal enough to include both small mammals and alien species that exhibit sophisticated linguistic behavior, then the United States probably does meet those criteria. The United States is massively informationally interconnected and responds in sophisticated, goal-directed ways to its surroundings. Its internal representational states are functionally responsive to its environment and not randomly formed or assigned artificially from outside by the acts of an external user. And the United States exhibits complex linguistic behavior, including issuing self-reports and self-critiques that reveal a highly-developed ability to monitor its evolving internal and external conditions.
I leave it open whether we should (a.) do modus ponens and accept the conclusion, (b.) do modus tollens and deny the antecedent, (c.) see this as a challenge to find a good justification for accepting the antecedent while denying the consequent, or (d.) see this exercise as supporting skepticism about a certain style of metaphysics.
As always, thoughts and comments welcome.
Tuesday, June 19, 2012
Chinese Room Persona
I just came across this passage in Karl Schroeder's science fiction novel Sun of Suns (from a conversation between high-tech engineer Aubri Mallahan and low-tech Admiral Chaison Fanning):
"Max is a Chinese Room persona, which makes him as real as you or I." She saw his uncomprehending stare, and said, "There are many game-churches where the members of the congregation each take on the role of one component of a theoretical person's nervous system -- I might be the vagus nerve, or some tiny neuron buried in the amygdala. My responsibility during my shift is to tap out my assigned rhythm on a networked finger-drum, depending on what rhythms and sounds are transmitted to me by my neural neighbors, who could be on the other side of the planet for all I--" She saw that his expression hadn't changed. "Anyway, all of the actions of all the congregation make a one-to-one model of a complete nervous system... a human brain, usually, though there are dog and cat churches, and even attempts at constructing trans-human godly beings. The signals all converge and are integrated in an artificial body. Max's body looks odd to you because his church is a manga church, not a human one, but there's people walking around on the street you'd never know were church-made."
Chaison shook his head. "So this Thrace is... a fake person?"
Aubri looked horrified. "Listen, Admiral, you must never say such a thing! He's real. Of course he's real. And you have to understand, the game-churches are an incredibly important part of our culture. They're an attempt to answer the ultimate questions: what is a person? Where does the soul lie? What is our responsibility to other people? You're not just tapping on a drum, you're helping to give rise to the moment-by-moment consciousness of a real person.... To let down that responsibility could literally be murder."

Although John Searle would likely disagree with Aubri's perspective on the Chinese Room, I'm inclined to think that on general materialist principles there's no good reason to regard such details of implementation as sufficient to differentiate beings who really have consciousness from those who don't. We don't want to be neurochauvinists, after all, do we?
Friday, June 15, 2012
New Data on The Generosity of Philosophy Students
In 2007, I analyzed data on student charitable giving at the University of Zurich, broken down by major. When paying registration fees, students at Zurich could choose to give to one or both of two charities (one for foreign students, one for needy students). Among large majors, philosophy students proved the most charitable. However, philosophy majors' rates of charitable giving didn't rise over the course of their education, suggesting that their high giving rates were not a result of their undergraduate training.
I now have some similar data from the University of Washington, thanks to Yoram Bauman and Elaina Rose. At UW from 1999-2002, when students registered for courses, they had the opportunity to donate to two charities: WashPIRG (a broadly left-leaning activist group) and (starting in fall 2000) "Affordable Tuition Now" (an advocacy group for reducing the costs of higher education in Washington). Bauman and Rose published an analysis of economics students' charity, and they kindly shared their raw data with me for reanalysis. My analysis focuses on undergraduates under age 30.
First, looking major-by-major (based on final declared primary major at the end of the study period), we see that philosophy students are near the top. The main dependent measure is the percentage of students in a major who, in any given term, gave to at least one of the two charities. Among majors with at least 1,000 student enrollment terms, the five most charitable majors were:
(Major: percent giving)

Comparative History of Ideas: 31%
International Studies: 24%
Philosophy: 24%
Physics: 22%
Anthropology: 20%

The five least charitable were:

Construction Management: 7%
Business Administration: 7%
Sociology: 7%
Economics: 8%
Accounting: 9%

These numbers compare with a 14% donation rate overall.
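(For concreteness, here is a minimal sketch of how a measure like this could be computed from term-level registration records. It is purely illustrative: the column names and the tiny sample data are invented, not the actual variables in Bauman and Rose's dataset.)

```python
# Hypothetical sketch: percent of student enrollment terms in each major with a
# donation to at least one of the two charities. Column names are invented.
import pandas as pd

records = pd.DataFrame({
    "major": ["Philosophy", "Philosophy", "Economics", "Economics"],
    "term": ["2001 Fall", "2001 Fall", "2001 Fall", "2001 Fall"],
    "gave_washpirg": [1, 0, 0, 0],
    "gave_atn": [0, 1, 0, 1],
})

# A student-term counts as charitable if either box was checked.
records["gave_any"] = records["gave_washpirg"] | records["gave_atn"]

# Percent of student-terms giving, by major.
rate_by_major = records.groupby("major")["gave_any"].mean() * 100
print(rate_by_major.sort_values(ascending=False))
```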
As reported by Bauman and Rose and also in Frey and Meier 2003 (the original source of my Zurich data), business and economics majors are among the least charitable. The surprise here (to me) is sociology: in the Zurich data, sociology majors are among the most charitable, while here they are near the bottom. (Hypotheses welcome.)
But what is the time course of donation? Bauman and Rose and Frey and Meier found that business and economics students were among the least charitable from the very beginning of their education and that their charity rates did not decline further. Thus, they suggest, their low rates of charity are a selection effect -- an effect of who tends to choose such majors -- rather than a result of college-level economics instruction. My analysis of the Zurich data suggests an analogous selection story for philosophers: their high rates of charity reflect who chooses philosophy rather than the power of philosophical instruction.
So here are the charity rates for non-philosophers, by year of schooling:
First year: 15%
Second year: 15%
Third year: 14%
Fourth year: 13%

And for philosophers (looking at 1172 student semesters total):

First year: 26%
Second year: 27%
Third year: 21%
Fourth year: 24%

So it looks like philosophers' donation rates are flat to declining, not increasing. Given the moderate-sized data set, the slight decline from 1st and 2nd to 3rd and 4th years is not statistically significant (though given the almost 70,000 data points the smaller decline among non-philosophers is statistically significant).
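(A back-of-the-envelope check, for anyone who wants to see why the numbers shake out that way: the sketch below runs a simple two-proportion z-test, assuming for illustration that the student-semesters split evenly across years, since the per-year counts aren't reported here. It is a rough approximation, not the analysis actually run on the data.)

```python
# Rough illustration of why a ~4-point decline over ~1,172 philosopher
# student-semesters can fail to reach significance while a ~1.5-point decline
# over ~70,000 non-philosopher semesters does. Per-year sample sizes below are
# assumed (even splits), not reported figures.
import math

def two_prop_ztest(p1, n1, p2, n2):
    """Two-proportion z-test; returns (z, two-sided p-value)."""
    x1, x2 = p1 * n1, p2 * n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, math.erfc(abs(z) / math.sqrt(2))

# Philosophers: years 1-2 (~26.5%) vs. years 3-4 (~22.5%), ~586 semesters each.
print(two_prop_ztest(0.265, 586, 0.225, 586))      # z ~ 1.6, p ~ 0.11: not significant

# Non-philosophers: ~15% vs. ~13.5%, ~35,000 semesters each.
print(two_prop_ztest(0.150, 35000, 0.135, 35000))  # z ~ 5.7, p << 0.001: significant
```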
One reaction some people have had to my work with Josh Rust on the moral behavior of ethics professors (e.g., here and here) is this: Although some professional training in the humanities is morally improving, one reaches a ceiling in one's undergraduate education after which further training isn't morally improving -- and philosophers, or humanities professors, or professors in general, have pretty much all hit that ceiling. That ceiling effect, the objection goes, rather than the failure of ethics courses to alter real-world behavior, explains Josh's and my finding that ethicists behave on average no better than do other types of professors. (Eddy Nahmias might suggest something like this in his critical commentary on one of Josh's and my papers next week at the Society for Philosophy and Psychology.)
I don't pretend that this is compelling evidence against that position. But it does seem to be a wee bit of evidence against that position.
Wednesday, June 13, 2012
Moral Order and Happiness in Fiction
Imagine this change to Macbeth: Macbeth kills King Duncan and some other inconvenient people, he feels briefly anxious and guilty, and then he gets over it. Macduff finds out, comes after Macbeth, and Macduff and his family are killed. Macbeth lives happily ever after. In a final soliloquy, perhaps, Macbeth expresses satisfaction with his decision to kill Duncan and with how his life has gone.
What would the message of such a play be?
It's hard to imagine Shakespeare writing such a play. Even if in real life evil sometimes prospers and the perpetrators feel no regret, in fiction evil cannot in the end prosper -- not unless the work is a very dark one. (Woody Allen's Crimes and Misdemeanors comes to mind.) In reality, I think, evil sometimes prospers and sometimes fails, but somehow this nuanced attitude is difficult to bring across in any but the most subtle fiction. We are, perhaps, prepared from early childhood to expect fictional wrongdoers to suffer by the end, so when they don't, it's jarring enough to suggest that the writer embraces a dark vision of the world.
Similarly, when the protagonist makes a difficult moral choice and then suffers as a result, we read the work as suggesting that the protagonist made the wrong choice. But why is that? In real life, sometimes we make the right choice and suffer. Here I think of the Spielberg film Saving Private Ryan. The platoon captures a German soldier and after much debate decides to set him free out of mercy rather than execute him. Later that same German soldier returns to kill one of the group that freed him. The audience, I think, is invited to conclude that setting him free had therefore been the wrong decision. Watching the film, it's hard to avoid feeling that! But might it instead have been the right decision that happened to have a bad consequence?
Here's what I'd like to see -- what I can't recall ever seeing (though I welcome suggestions! [update Jun. 14: in light of further suggestions and reflection, I think I overstated this point yesterday]) -- a work of fiction in which the protagonist makes the morally right decision, and suffers in consequence with neither an outward triumph in the end nor even a secret inner victory of the soul, no sweet comfort in knowing that the right choice was made. I want a work that advises us: Be a good guy and lose. Even if evil triumphs, even if the wicked thrive and the upright protagonist collapses in misery, even if the protagonist would have been much happier choosing evil, still he or she was right to have chosen the moral path.
Or is it just a fact about fiction that prudential triumph -- the final happiness of the protagonist -- will inevitably be understood as a kind of endorsement of the character's choices?
Update June 16:
This is one of those posts I would reorganize from scratch if I could. First: I think I overstate the conclusion somewhat. And second: I don't think I was sufficiently clear about what the conclusion was, even in my own head. Of course there are a number of works in which a morally upright protagonist loses, without even a secret inner victory, and the morally vile opponent thrives without even secret inner suffering. What I think is challenging -- not impossible! -- is to portray that in a way that is neither (a.) some sort of critique of conventional morality, nor (b.) dark enough to constitute some sort of critique of what might be thought of as the mainstream moralistic picture of the world. Such an outcome shouldn't have to be dark, one might think. Sometimes good guys lose and bad guys thrive in a relatively upbeat understanding of the real world (anything short of pollyannaish), so it should be possible to portray it in fiction without committing to it as typical. But maybe the darkness flows from the symbolic function of fiction as presenting the moral order, or lack of it, within the fiction as somehow representative of how things generally are...?
(The Book of Job is an interesting case, as pointed out in one comment.)
Friday, June 08, 2012
Where Do We Go from Here? Some Final Thoughts
(by guest blogger Carrie Figdor)
I've discussed growing public anger and confusion about science, and the roles that scientists, the popular science press, and philosophers may all play in contributing to this confusion or failing to alleviate it. I'll end my guest stint with a big "thank you" to Eric and the readers of his blog, and a few concluding ruminations.
There have already been calls for greater attention to miscommunication between "the folk" and scientists. In a 2010 article on neuroscience communication and the need to address public concerns, Illes et al. recommend a cultural shift among neuroscientists (including more openness regarding the potentials and limitations of the research), the development of neuroscience communication specialists, and additional empirical research on science communication. And at least one neuroscientist is openly critiquing the field online at the Neuroskeptic blog. However, there's also reason to think even these steps will not suffice to end miscommunication.
In a recent New Books in Philosophy interview about In Praise of Reason, Michael Lynch provides an interesting interpretation of data from a 2007 Gallup poll on American beliefs about evolution. According to the poll, the majority of Americans don’t believe in evolution. But why don’t they? Only 14 percent cited lack of evidence for evolution as the reason for their disbelief. That is, most agree there is overwhelming scientific agreement on evolution. Although the persistent lack of belief in the face of this evidence is often blamed on lack of scientific education, psychological factors, and so on, Lynch suggests an alternative (which is compatible with there being several factors): many Americans are implicitly skeptical about the methods and practices associated with science, and are not at all convinced that these methods are reliable when it comes to things that matter.
If so, the miscommunication problem is not just a matter of misleading uses of words and a lack of vigorous critical scrutiny of science by professional skeptics. These factors may just exacerbate prior widespread public skepticism about science and its methods of getting at truth when the truths are more complex than science is well-equipped to handle. Such skepticism needn't be an expression of religious conviction or a brute denial of empirical data; it may instead express an inchoate doubt that the simplifications required by the scientific method to generate empirical data will ever do justice to real phenomena. From this perspective, the public has good reason to be pissed off when scientists fail to treat it with respect: What makes them think their method is so keen and wonderful when it comes to understanding real humans? The question is apropos, because we're living in a moment in which science urges replacing our old vision, in which we are not machines, with a new vision in which we are. Ah, but everything, the public might say, hangs on the word "machine". As a professional skeptic, I'm inclined to agree.
Wednesday, June 06, 2012
Why Tononi Should Allow That Conscious Entities Can Have Conscious Parts
On March 23, I argued that eminent theorist of consciousness Giulio Tononi should embrace the view that the United States is conscious -- that is, literally possessed of a stream of phenomenal, subjective experience of its own, above and beyond the experiences of all its citizens and residents considered individually. My argument drew on Tononi's work from 2004 through 2009 arguing that any system in which information is integrated -- that is, virtually any causal system at all! -- is conscious. Tononi's one caveat in those works is that to count as a "system" in the relevant sense, an informational network must not be merely a subsystem within a more tightly integrated larger system. I argued that since the U.S. is a system in Tononi's sense, either it or some more tightly integrated larger system (the whole Earth?) must be conscious by Tononi's lights. While in other posts I had to do some work to show how I thought Daniel Dennett's, Fred Dretske's, and Nicholas Humphrey's views implied group consciousness, Tononi seemed an easy case.
However, my March interpretation of Tononi was out of date. More recently (here [in note 9] and here [HT Scott Bakker and Luis Favela]), Tononi has endorsed what I will call an anti-nesting principle: A conscious entity cannot contain another conscious entity as a part. Tononi suggests that whenever one information-integrated system is nested in another, consciousness will exist only in the system with the highest degree of informational integration.
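(To make the principle vivid, here is a toy sketch. This is emphatically not Tononi's actual phi calculus; the candidate systems and numbers are invented purely to illustrate how an anti-nesting rule lets exactly one member of a nested hierarchy count as conscious.)

```python
# Toy illustration (not Tononi's real phi computation) of an anti-nesting rule:
# among nested candidate systems, only the one with the highest integration
# score counts as conscious. All names and scores below are made up.

candidates = {
    "neuron": {"members": {"neuron"}, "phi": 0.2},
    "person": {"members": {"neuron", "rest_of_brain"}, "phi": 12.0},
    "nation": {"members": {"neuron", "rest_of_brain", "other_citizens"}, "phi": 3.0},
}

def nested(a, b):
    """True if one candidate's members are a subset of the other's."""
    return a["members"] <= b["members"] or b["members"] <= a["members"]

conscious = [
    name for name, c in candidates.items()
    if all(c["phi"] >= other["phi"]
           for other_name, other in candidates.items()
           if other_name != name and nested(c, other))
]
print(conscious)  # ['person']: the nested 'neuron' and the encompassing 'nation' are excluded
```

On such a rule, a small bump in the "nation" score above the "person" score would flip the result, which is the kind of sudden threshold effect discussed below.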
Tononi defends this principle by appeal to Occam's razor, with intuitive support from the apparent absurdity of supposing that a third group consciousness could emerge from two people talking. But it’s unclear why Tononi should put much weight on the intuitive resistance to group consciousness, given his near panpsychism. He thinks photodiodes and OR-gates have a little bit of conscious experience; so why not some such low-level consciousness from the group too? And Occam’s razor is a tricky implement: Although admitting the existence of unnecessary entities seems like a bad idea, what is an “entity” and what is “unnecessary” is often unclear, especially in part-whole cases. Is a hydrogen atom an unnecessary entity once one admits the proton and electron into one’s ontology? What makes it necessary, or not, to admit the existence of consciousness in the first place? It is obscure why the necessity of admitting consciousness in a large system should turn on whether it is also necessary to admit conscious experience in some of its subparts. (Consider my Betelgeusian beeheads, for example.) Tononi’s anti-nesting principle compromises the elegance of his earlier view.
Tononi's anti-nesting principle has some odd consequences. For example, it implies that if an ultra-tiny conscious organism were somehow to become incorporated into your brain, you would suddenly be rendered nonconscious, despite the fact that all your behavior, including self-reports of consciousness, might remain the same. (See Ned Block's "Troubles with Functionalism".) It also seems to imply that if there were a large enough election, with enough different ballot measures, the resulting informational integration would cause the voters, who can be conceptualized as nodes or node-complexes in a larger informational system, to lose consciousness. Perhaps we are already on the verge of this in California? Also, since “greater than” is a yes-or-no property rather than a matter of degree, there ought on Tononi’s view to be an exact point at which higher-level integration causes our human-level consciousness suddenly to vanish. Don’t add that one more vote!
Tononi's anti-nesting principle seems only to swap one set of counterintuitive implications for another, in the process abandoning general, broadly appealing materialist principles – the sort of principles that suggest that beings broadly similar in their behavior, self-reports, functional sophistication, and evolutionary history should not differ radically with respect to the presence or absence of consciousness.
Friday, June 01, 2012
The Armchair and the Lab Throne
(by guest blogger Carrie Figdor)
I've noted that the popular science press is both accomplice and victim in the miscommunication between god and mortal. But some philosophers too -- the other main class of professional skeptics -- may be in the same boat. Here a failure to be sufficiently skeptical can appear in the guise of burning the despised Armchair in favor of the Lab Throne, a.k.a. rejecting intuitions as evidence in favor of empirical data (even though elicited intuitions are empirical data).
It's often hard to know exactly what is being rejected. Sometimes it seems to be the counterexample tennis (e.g., the Gettier-problem cottage industry) in which intuition gets used. Sometimes it seems to be the scope of an intuition-based claim (I am always reminded of the joke in which Tonto says to the Lone Ranger: "What do you mean 'we', white man?"). Sometimes it seems to be the use of far-fetched hypothetical scenarios, although trolley problems are pretty far-fetched and what we are to make of the moral judgments (a.k.a. intuitions) they elicit is a lively subject of debate.
However, if burning the Armchair means not using intuitions as evidence, the Lab Throne will go up in flames along with it. Science depends heavily on intuition in the form of reasons that would be offered for background assumptions if such assumptions were made explicit and explicitly questioned. Why is memory cashed out in terms of encoding-storage-retrieval? Well, what else could memory be but a kind of storage? The fact that this intuitively plausible answer is only now being questioned by memory researchers seeking a more dynamical conception underlines how difficult it can be to root out the role of intuition in science.
Another way in which intuition is deeply embedded in science is in judgments about which aspects of a phenomenon (a stimulus, a task, a performance) are essential to its being the kind of phenomenon it is. One such case occurred to me during a recent talk by Stephen Stich, in which he argued that intuitive judgments exhibit wide variability based on apparently irrelevant factors (and so presumably cannot be relied on to track the truth). One example was the so-called Lady Macbeth Effect, in which moral judgments are affected by bodily cleansing. In one study, moral judgments of those who used an alcohol-based cleaning gel were significantly less severe than those of subjects who used a non-cleansing hand cream (Schnall et al. 2008).
But why think cleaning one's hands is irrelevant to moral judgment? Hopefully not because it seems intuitively plausible. My own off-the-cuff reason for doubt stemmed from generalizing from Goldin-Meadow's experiments with gesture and counting. If moral judgments are embodied, changing embodied aspects would be changing relevant features. (I should note Steve was wholly amenable to this interpretation; I also later found a similar view defended by two staff writers for the Association of Psychological Science.)
The moral here is that, in at least some cases, the problem with Armchair intuition is that it is disembodied, not that it is a priori. Moreover, some critics of intuition may simply presuppose a disembodied (Cartesian?) view of intuition. (Why?) In general, the problem with intuition seems to be that different disciplines have different Armchairs, and the Lab Throne is just another style of Armchair -- and philosophers who prefer the Lab Throne style should be especially critical of that particular Armchair.