Thursday, May 31, 2012
Ruth Barcan Marcus Memorial Celebration
Monday, May 28, 2012
Betelgeusian Beeheads
The Betelgeusians are quirky in a few ways, however. For example, cognitive activity takes them, on average, about ten times longer to execute. This has no overall effect on their intelligence, but it does test the patience of conversational partners unaccustomed to the Betelgeusians' slow pace. The Betelgeusians also find some tasks cognitively easy that we find cognitively difficult, and vice versa. They are baffled by our difficulty with simple logic problems like the Wason Selection Task (asked which of four cards showing E, K, 4, and 7 must be turned over to test the rule "if a card has a vowel on one side, it has an even number on the other", most people fail to pick just the E and the 7), but they are impressed by our skill in integrating auditory and visual information.
Over time, Betelgeusians migrate down from their orbiting ship. Patchy accommodations are made for their size and speed, and they start to attend our schools and join our corporations. Later, they start to run for political office, displaying roughly the same range of political virtues and vices as human politicians. Although Betelgeusians don't reproduce by coitus, they find some forms of physical contact arousing and have broadly human attitudes toward pair-bonding. Marriage equality is achieved. What a model of interplanetary harmony!
Everyone agrees that Betelgeusians are conscious, of course. In fact, the Betelgeusians have a long and impressive academic tradition in philosophy of mind and introspective psychology.
Why, you're surely wondering, do I call them "beeheads"? Well, it turns out that when you look inside their heads and humps, you find not neurons but rather tens of millions of squirming insects, each a fraction of a millimeter across. Each insect has a complete set of minute sensory organs and a nervous system of its own (compare Earth's mymarid wasps). The beeheads' behavior arises from complex patterns of interaction among these individually dumb insects. These mammoth creatures are much-evolved descendants of Betelgeusian bees that evolved in symbiosis with a brainless, living hive. (The hive itself began as a simple, non-living physical structure that over evolutionary history slowly incorporated symbiotic organisms -- as Earthly ant hives sometimes do -- which then merged into a larger whole.) The Betelgeusians' giant heads and humps have, altogether, a hundred times as many neuron-like cells as the human brain has; and the insects' interactions are so informationally efficient that neighboring insects can respond differentially to the behavioral or chemical effects of other insects' individual efferent neural impulses. The process, still, is an order of magnitude slower than our own.
Maybe there are little spatial gaps between the bees. Does it matter? Maybe, in the privacy of their homes, the bees sometimes fly apart and back together, exiting and entering through the mouth. Does it matter? Maybe if the exterior body is too severely injured, the bees recruit a new body from nutrient tanks -- and when they fly off to do this, they do so without too much interruption of cognition, able to report thoughts mid-transfer. They reconvene and say, "Oh it's such a free and airy feeling to be without a body! And yet it's a fearful thing too. It's good to feel again the power of limbs and mouth. May this new body last long and well! Shall we dance, then, love?"
If you accept the consciousness of Betelgeusian beeheads, you have accepted the first step in my argument that the United States is conscious.
Friday, May 25, 2012
What the Large Print Sayeth the Small Print Denieth
In my last post I noted how professional science journals and other peer-reviewed venues coopt folk-psychological terms to report experimental results. Popular science reporters are not introducing these terms as metaphors to describe neuroscience results in laymen's terms; the laymen's terms are right there in the original articles. Nevertheless, it is a common response to blame public misunderstanding of science results either on the popular science press, or on the public's lack of science education, or both.
For example, Racine and colleagues (2005) coined the term "neuro-realism" to label the way popular science coverage of fMRI studies "can make a phenomenon uncritically real, objective or effective in the eyes of the public." One of their examples of neuro-realism is a 2004 report in The Boston Globe: "[B]ecause fMRI investigation shows activation in reward centers when subjects ingest high-fat foods, one reads, 'Fat really does bring pleasure'." But once one examines how the term "reward" is used in peer-reviewed neuroscience articles, it is not clear why the Globe reporter should be thought to be miscommunicating the science, let alone to be responsible for the miscommunication.
Similarly, if adding neuro-babble to an excerpt of a psychological explanation makes non-experts rate the explanation more highly (Skolnick Weisberg et al. 2008), shouldn't the response be: "Hey, we're the gods of knowledge about reality! Maybe, just maybe, with great power comes great responsibility -- including responsibility for our language!" The public might well be excused for not being able to distinguish added neuro-babble ("Brain scans indicate that the 'curse' happens because of the frontal lobe circuitry known to be involved in self-knowledge") from serious claims in research articles ("Several fMRI studies reported increased prefrontal and parietal activity during lie ... Based on these findings, deception has been conceptualized as inhibition of truth and generation of lie mediated by the prefrontal cortex, with truth being a 'routine' response mediated by the posterior structures" (Langleben et al. 2005)).
Instead, where popular science reporters may be blameworthy is in their abdication of their role as professional skeptics. In a 2006 analysis of 134 popular science articles on fMRI studies appearing in the popular press between 1994 and 2004, Racine and colleagues found that "the vast majority of articles (n = 104, 79 percent) were uncritical in tone, whereas twenty-eight (21 percent) were balanced or critical", and that specialized sources were less critical than general news sources. In short, the ferociously aggressive skepticism to which political stories and figures are routinely subjected by the press is nowhere to be found. The Scientific American spoof may be a sign that this free ride is about to end.
Tuesday, May 22, 2012
Applying to PhD Programs in Philosophy, Part V: Statement of Purpose
I've never read a first draft of a statement of purpose (also called a personal statement) that was any good. These things are hard to write, so give yourself plenty of time and seek the feedback of at least two of your letter writers. Plan to rewrite from scratch at least once.
It’s hard to know even what a “Statement of Purpose” is. Your plan is to go to graduate school, get a Ph.D., and become a professor. Duh! Are you supposed to try to convince the committee that you want to become a professor more than the next guy? That philosophy is written in your genes? That you have some profound vision for the transformation of philosophy or philosophy education?
Some Things Not to Do
* Don’t wax poetic. Don’t get corny. Avoid purple prose. “Ever since I was eight, I've pondered the deep questions of life.” Nope. “Philosophy is the queen of disciplines, delving to the heart of all.” Nope. “The Owl of Minerva has sung to me and the sage of Königsberg whispers in my sleep: Not to philosophize is to die.” If you are tempted to write sentences like that last one, please do so in longhand, with golden ink, on expensive stationery which you then burn without telling anyone.
* Don’t turn your statement into a sales pitch. Ignore all advice from friends and acquaintances in the business world. Don’t sell yourself. You don’t want to seem arrogant or grandiose or like a BS-ing huckster. You may still (optionally!) mention a few of your accomplishments, in a dry, factual way, but to be overly enthusiastic about accomplishments that are rather small in the overall scheme of academia is somewhat less professional than you ideally want to seem. If you’re already thinking like a graduate student at a good PhD program, then you won’t be too impressed with yourself for having published in the Kansas State Undergraduate Philosophy Journal (even if that is, in context, a notable achievement). Trust your letter writers. If you’ve armed them properly with a brag sheet, the important accomplishments will come across in your file. Let them do the pitch. Also, don’t say you plan to revolutionize philosophy, reinvigorate X, rediscover Y, finally find the answer to timeless question Z, or even teach in a top-ten department. Do you already know that you will be a more eminent professor than the people on your admissions committee? You’re aiming to be their student, not the next Wittgenstein – or at least that’s how you want to come across. You want to seem modest, humble, straightforward. If necessary, consult David Hume or Benjamin Franklin for inspiration on the advantages of false humility.
* If you are applying to a program in which you are expected to do coursework for a couple years before starting on a dissertation – that is, U.S.-style programs as opposed to British-style programs – then I recommend not taking stands on particular substantive philosophical issues. In the eyes of the admissions committee, you probably aren’t far enough in your philosophical education to be adopting hard philosophical commitments. They want you to come to their program with an open mind. Saying "I would like to defend Davidson's view that genuine belief is limited to language-speaking creatures" comes across a bit too strong. Similarly, "I showed in my honors thesis that Davidson's view...". If only, in philosophy, honors theses ever really showed anything! (“Argued” would be okay.) Better: "My central interests are philosophy of mind and philosophy of language. I am particularly interested in the intersection of the two, for example in Davidson's argument that only language-speaking creatures can have beliefs in the full and proper sense of 'belief'."
* Don’t tell the story of how you came to be interested in philosophy. It’s not really relevant.
What to Write
So how do you fill up that awful, blank-looking page? In April, I solicited sample statements of purpose from successful recent PhD applicants. About a dozen readers kindly sent in their statements and from among these I chose three that I thought were good and also diverse enough to illustrate the range of possibilities. Follow the links below to view the statements.
- Statement A was written by Allison Glasscock, who was admitted to Chicago, Cornell, Penn, Stanford, Toronto, and Yale.
- Statement B was written by a student who prefers to remain anonymous, who was admitted to Berkeley, Missouri, UMass Amherst, Virginia, Wash U. in St. Louis, and Wisconsin.
- Statement C was written by another student who prefers to remain anonymous, who was admitted to Connecticut and Indiana.
Statement A is autobiographically structured, with considerable detail; Statement B is topically structured and minimalist; Statement C is topically structured but salted with information about coursework relevant to the applicant’s interests. Any of these approaches is fine, though the topical structure is more common and raises fewer challenges about finding the right tone. Each of the statements also adds something beyond a description of areas of interest, though it is not really necessary to add anything else. Statement B starts with pretty much the perfect philosophy application joke. (Sorry, now it’s taken!) Statement C concludes with a paragraph describing the applicant’s involvement with his school’s philosophy club.
Statement A concludes with a paragraph specifically tailored for Yale. Thus we come to the question of...
Tailoring Statements to Particular Programs
It's not necessary, but you can adjust your statement for individual schools. If there is some particular reason you find a school attractive, there's no harm in mentioning that. Committees think about fit between a student’s interests and the strengths of the department and about what faculty could potentially be advisors. You can help the committee on this issue if you like, though normally it will be obvious from your description of your areas of interest.
For example, if you wish, you can mention 2-3 professors whose work especially interests you. But there are risks here, so be careful. Mentioning particular professors can backfire if you mischaracterize the professors, or if they don't match your areas of stated interest, or if you omit the professor in the department whose interests seem to the committee to be the closest match to your own.
Similarly, you can mention general strengths of the school. But, again, if you do this, be sure to get it right! If someone applies to UCR citing our strength in artificial intelligence, we know the person hasn’t paid attention to what our department is good at. No one here works on AI. But if you want to go to a school that has strengths in both mainstream “analytic” philosophy and 19th-20th century “Continental” philosophy, that’s something we at UCR do think of as a strong point of our program.
I'm not sure I'd recommend changing your stated areas of interest to suit the schools, though I see how that might be strategic. There are two risks in changing your stated areas of interest: One is that if you change them too much, there might be some discord between your statement of purpose and what your letter writers say about you. Another is that large changes might raise questions about your choice of letter writers. If you say your central passion is ancient philosophy, and your only ancient philosophy class was with Prof. Platophile, why hasn’t Prof. Platophile written one of your letters? That’s the type of oddness that might make a committee hesitate about an otherwise strong file.
Some people mention personal reasons for wanting to be in a particular geographical area (near family, etc.). Although this can be good because it can make it seem more likely that you would accept an offer of admission, I'd avoid it since graduating Ph.D.'s generally need to be flexible about location and it might be perceived as indicating that a career in philosophy is not your first priority.
Explaining Weaknesses in Your File
Although hopefully this won't be necessary, a statement of purpose can also be an opportunity to explain weaknesses or oddities in your file -- though letter writers can also do this, often more credibly. For example, if one quarter you did badly because your health was poor, you can mention that fact. If you changed undergraduate institutions (not necessarily a weakness if the second school is the more prestigious), you can briefly explain why. If you don't have a letter from your thesis advisor because he died, you can point that out.
Statements of Personal History
Some schools, like UCR, also allow applicants to submit “statements of personal history”, in which applicants can indicate disadvantages or obstacles they have overcome or otherwise attempt to paint an appealing picture of themselves. The higher-level U.C. system administration encourages such statements, I believe, because although state law prohibits the University of California from favoring applicants on the basis of ethnicity or gender, state law does allow admissions committees to take into account any hardships that applicants have overcome – which can include hardships due to poverty, disability, or other obstacles, including hardships deriving from ethnicity or gender.
Different committee members react rather differently to such statements, I suspect. I find them unhelpful for the most part. And yet I also think that some people do, because of their backgrounds, deserve special consideration. Unless you have a sure hand with tone, though, I would encourage a dry, minimal approach to this part of the application. It’s better to skip it entirely than to concoct a story that looks like special pleading from a rather ordinary complement of hardships. This part of the application also seems to beg for the corniness I warned against above: “Ever since I was eight, I’ve pondered the deep questions of life...”. I see how such corniness is tempting if the only alternative seems to be to leave an important part of the application blank. As a committee member, I usually just skim and forget the statements of personal history unless something is particularly striking.
For more general thoughts on the influence of ethnicity and gender on committee decisions, see Part VI of this series.
***
For further advice on statements of purpose, see this discussion on Leiter Reports -- particularly the discussion of the difference between U.S. and U.K. statements of purpose.
See here for comments on the 2007 version of this post. You might want to skim through those comments before posting a comment below.
Friday, May 18, 2012
Can a Rat Walk Down "Memory" Lane?
In my first post I introduced the issue of rising public anger at science due to the apparently cavalier way in which empirical results affecting issues the folk hold dear -- silly things like human nature and the nature of the universe -- are being disseminated. I'd like to provide examples of the forms this miscommunication can take. Given my philosophical interests, my examples focus on neuroscience (although the Krauss-Albert tussle over the term "nothing" is clearly germane).
Sometimes miscommunication is one-off. While researching the paper described below, I came across a 1956 Scientific American article in which the scientist, reporting his results to the public, credits B.F. Skinner with refining the methods for measuring pleasurable and painful feelings (Olds 1956). I thought this was hilarious (Skinner?!) until I tried to come up with a good explanation for why he would describe Dr. Radical Behaviorist this way, let alone in an outlet where the audience is not casual and space is not at such a premium. I couldn't, or at least not one in which the scientist came out looking good. (Note: this criticism has nothing to do with the brilliance of the research.) Again, while searching for a related cognitive neuroscience article in the peer-reviewed Journal of Comparative Neurology, I was taken aback to find it in a special issue entitled "The Anatomy of the Soul". Were they serious? Were they joking? Which answer is less bad?
But a systematic source of foreseeable public confusion stems from the use of folk psychological terms to report neuroscience results in professional contexts. Such terms are coopted into neuroscience discourse to report results and translational implications and recycled back into public discourse via the popular science press without mention of possible shifts in meaning. In this paper I compared uses of some terms ("reward", "fear", and "memory") in bibliographically linked studies and found that it is at least an open question whether syntactically identical terms taken from the folk mean what they ordinarily mean, or even whether they remain semantically identical across studies and over time. Unless the public is clearly warned not to assume these words mean what they ordinarily do, miscommunication is pretty much guaranteed.
Here's one example. In a 1954 study of brain areas associated with "reward" in rats, "reward" is behaviorally defined as a stimulus associated with increased frequency of response, and electrical stimulation in certain brain areas is "rewarding in the sense that the experimental animal will stimulate itself in these places frequently and regularly for long periods of time if permitted to do so" (Olds & Milner 1954). By the time of a related 2005 fMRI study on romantic love, human subjects who self-report being madly in love are scanned while passively viewing photographs of their loved ones. The reported result is that areas of the brain associated in earlier studies (including the 1954 rat study) with motivation to acquire a "reward" are among those associated with being in love. But is love "rewarding" the way electrical stimulation of the brain is "rewarding"? Is a photograph passively viewed by a lover a "reward" the way an electrical brain stimulus self-administered with increasing frequency is a "reward"? There's certainly self-stimulation to photographs in the vicinity, but my guess is that this sort of "reward" and "rewarding" feeling isn't what subjects felt in the experiment. So while I don't doubt the septum and caudate nucleus are part of a "reward and motivation" system, I'm not at all sure what "reward and motivation" could mean such that it covers all these cases. Of course, the public has no way of figuring out (and should not be expected to figure out) that "reward" and its cognates have undergone shifts in meaning since such terms were seized from the public sphere.
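To make vivid just how thin the 1954 behavioral definition is, here is a minimal sketch of the operational criterion in code. (This is purely my own illustration; the function and the figures are invented, not drawn from either study.)

```python
# Toy rendering of the Olds & Milner (1954) behavioral criterion:
# a stimulus is a "reward" just in case making it contingent on a
# response increases the frequency of that response.

def is_rewarding(baseline_rate: float, rate_with_stimulus: float) -> bool:
    """Return True iff the response rate rises when responses produce
    the stimulus -- which is the whole operational content of 'reward'."""
    return rate_with_stimulus > baseline_rate

# Hypothetical figures: a rat pressing a lever 25 times/min when presses
# deliver septal stimulation, versus 2 times/min at baseline.
print(is_rewarding(2.0, 25.0))  # True: the stimulation is "rewarding"

# A lover passively viewing a photograph emits no operant response at
# all, so the criterion doesn't even apply -- which is the point.
```

On this definition, "rewarding" is exhausted by a change in response rate; nothing about pleasure, feeling, or love is built into it.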
Tuesday, May 15, 2012
On the Epistemic Status of Deathbed Regrets
I can find no systematic research about what people on their deathbeds do in fact say that they regret. A PsycInfo database search of "death*" and "regret*" turns up this article as the closest thing. Evidently, what elderly East Germans most regret is having been victimized by war. There's also this inspiring pablum, widely discussed in the popular press.
Let's grant, however, that the commencement truisms have a prima facie plausibility. With their dying breaths, grandparents around the world say, "If only I had pursued my dreams and worried less about money!" Does their dying perspective give them wisdom? Does it matter that it's dying grandparents who are saying this rather than, say, 45-year-old parents or high school counselors or assistant managers at regional banks? The deathbed has rhetorical cachet. Does it deserve it?
I'm reminded of the wisdom expressed by Zaphod Beeblebrox IV in Hitchhiker's Guide to the Galaxy. Summoned in a seance, he says that being dead "gives one such a wonderfully uncluttered perspective. Oh-ummm, we have a saying up here: 'life is wasted on the living'."
There's something to that, no doubt. Life is wasted on the living. But here's my worry: The dead and dying are suspiciously safe from having to live by their own advice. If I'm 45 and I say, "Pursue your dreams! Don't worry about money!" I can be held to account for hypocrisy if I don't live that way myself. But am I really going to live that way? Potential victimization by my own advice might help me more vividly appreciate the risks and stress of chucking the day job. Deathbed grandpa might be forgetting those risks and stresses in a grandiose, self-flagellating fantasy about the gap between what he was and what he might have been.
A smaller version of this same pattern occurs day by day and week by week: Looking back, I can always fantasize having been more energetic, more productive, having seized each day with more gusto. Great! That would have been better. But seizing every day with inexhaustible gusto is superhuman. I forget, maybe, how superhuman that would be.
Another source of deathbed distortion might be this: To the extent one's achievements are epistemic and risk-avoidant, their costs might be foolishly easy to regret. Due to hindsight bias, opportunities sacrificed and energy spent to prove something (for example, to prove to yourself that you could be successful in business or academia) or to avoid a risk that never materialized (such as the risk of having to depend on substantial financial savings in order not to lose one's home) can seem not to have been worth it: Of course you would have succeeded in business, of course you would have been fine without that extra money in the bank. On your deathbed, you might think you should have known these things all along -- but you shouldn't have. The future is harder to predict than the past.
I prefer the wisdom of 45-year-olds -- the ones in the middle of life, who gaze equally in both directions. Some 45-year-olds also think you should pursue your dreams (within reason) and not worry (too much) about money.
Second Annual "Experiment Month" Initiative
The initiative is intended to help philosophers who are interested in running an experiment by pairing them with experts who can help with experimental design, participant recruitment, and statistical analysis.
Here are some of last year's results.
More info here.
Friday, May 11, 2012
Twilight of the (Scientific) Gods?
Is it a mere coincidence that the Metropolitan Opera is offering its latest Ring-cycle blitz at about the same time as my stint as an invited guest blogger for Eric? The skeptic in me warns against hasty judgment, yet I think there's an interesting relationship between the two series. Isomorphisms come cheap, but the best things in life are free, so even a cheap isomorphism is worth more than the best things in life.
Before I go on, I'll introduce myself: I'm a philosopher of mind and metaphysician at the University of Iowa, in the state where Herbert Hoover and Captain James T. Kirk are local notables and gay marriage is legal. I'm also a former Associated Press newswoman, hence a professional gadfly twice over. The news story I'm interested in maps Wagner's distinction between ordinary mortals and the gods to our distinction between "the folk" and scientists, who occupy the most powerful intellectual position in our culture. This story (but not Wagner's) is that the folk are getting deeply and inchoately pissed at the scientific gods. In this series of posts, I want to explore this anger and what it means for science, philosophy and the folk.
One unmistakable expression of it came on April 1, 2012, when Scientific American published a spoof of neuroscience claims, carefully labeled as such just in case the joke was not immediately obvious just by reading it: "Neuroscientists: We Don't Really Know What We're Talking About, Either." It began: "NEW YORK—At a surprise April 1 press conference, a panel of neuroscientists confessed that they and most of their colleagues make up half of what they write in research journals and tell reporters." I suspect the editor had inserted a less generous percentage in an earlier draft.
A second came on April 23, 2012, when The Atlantic Monthly included the following paragraph in an interview with Lawrence Krauss that was sparked by the Krauss-Albert affair (a clash of titans worthy of Wagner):
Because the story of modern cosmology has such deep implications for the way that we humans see ourselves and the universe, it must be told correctly and without exaggeration -- in the classroom, in the press and in works of popular science. To see two academics, both versed in theoretical physics, disagreeing so intensely on such a fundamental point is troubling. Not because scientists shouldn't disagree with each other, but because here they're disagreeing about a claim being disseminated to the public as a legitimate scientific discovery. Readers of popular science often assume that what they're reading is backed by a strong consensus.

I'll borrow from The Atlantic to elaborate the issue: Because the story of neuroscience or physics (i.e., science) has such deep implications for the folk, it is important to tell that story to the folk correctly and without exaggeration. And yet it is not being told that way. The Atlantic is troubled because the public is being fed what may or may not be a crock, not because the gods are clashing (which is only to be expected). Scientific American effectively accuses neuroscientists of being full of it -- half the time, but which half? -- and by adding that "either" practically screams that its staff is getting really tired of being played.
Both missives from leaders in the mortal sphere imply that the folk are not being treated as they believe they should be. This is all the more annoying when your offerings -- um, taxpayer dollars -- are rabidly sought by these gods in the form of NSF grants. And so the question arises: how much longer will, or should, this situation go on? What can be done to change it, for the good of the folk and science?
Monday, May 07, 2012
Grounds for Dream Skepticism
Trudeau obliges. The key panels:
Life going well? Implausibly well? Wake up and smell the latrine, baby!
The same reasoning might apply if things are implausibly hellish.
Such reasoning should apply especially to Wittgenstein himself. I mean, what's the prior probability of that being your life -- impoverished scion of a suicidal Austrian family of immense wealth, arguably the greatest philosopher of your day though unemployed and hardly publishing, etc.? At the time he wrote On Certainty, Wittgenstein should have thought: Surely all this is some weird dreambrain mashup of wish fulfillment and nightmare!
Philosophers at the peak of public fame should all be dream skeptics. QED.
Friday, May 04, 2012
Martian Rabbit Superorganisms, Yeah!
Most philosophers of mind (but not all) likewise believe that if we were visited by a highly intelligent, naturally evolved alien species -- let's call them "Martians" -- that alien species might possess a radically different biology from ours and yet still have conscious experience. Outwardly, let's suppose, Martians look rather like humans; and they behave rather like humans too, despite their independent evolutionary origins. They visit us, learn English, and soon integrate into schools and corporations, maybe even marriages. They write philosophical treatises about consciousness and psychological treatises about their emotions and visual experiences. We will naturally, and it seems rightly, think of such Martians as genuinely conscious. Inside, though, they have not human-style neurons but rather complicated hydraulics or optical networks or the like. To think that such beings would necessarily be nonconscious zombies, simply because their biology is different from ours, seems weirdly chauvinistic in a vast universe in which complex systems, intelligently responsive to their environments, presumably can and do evolve in myriad ways.
Okay, so how about Martian rabbits? Martian rabbits would be both biologically and behaviorally very different from you and me. But it seems hard to justify excluding them from the consciousness club, if we let in both Earthly rabbits and Martian schoolteachers. Right?
Ready for a weirder case? Martian Smartspiders, let's suppose, are just as intelligent and linguistically sophisticated as the Martian bipeds we love so well. In fact, we couldn't distinguish the two in a Turing Test. But the Smartspiders are morphologically very different from bipeds. Smartspiders have a central body that contains basic biological functions, but most of their cognitive processing is distributed among their 1000 legs (which evolved from jellyfish-like propulsion and manipulation tentacles). Information exchange among these thousand legs is fast, since in the Martian ecosystem peripheral nerves operate not by the slowish chemical-electrical processes we use but rather by shooting light through reflective capillaries (fiber optics), saving precious milliseconds of reaction time. Thus the 1000 distributed centers of cognitive processing can be as quickly and tightly informationally integrated as are different regions of our own brains -- and the ultimate output is just as smart and linguistic as ours. If there were such Turing-Test-passing Martian Smartspiders, it seems we ought to let them into the bounds of genuinely conscious organisms, if we're letting in the bipedal Martians and the Martian rabbits.
Suppose Martian Smartspiders evolve so that their legs become detachable, while remaining capable of movement and control by the organism as a whole. The detachment can work because the nerve signals are light-based: The Martian just needs to replace the direct fiber connections with transducers that can propagate the light signal across the gap between the surface of the limb and the portion of the central body where that limb had previously been attached. One can see how detachable limbs might be advantageous in hunting and in reaching into narrow spaces. Detaching a leg should have negligible impact on cognition speed as long as there are suitable transducers on the surface of the leg and on the surface of the main body where it normally attaches, since the information will cross the gap at lightspeed. If we put the Martian Smartspider in a sensory deprivation room and disable its proprioceptive system, it might not even be able to tell whether its legs are attached.
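For a rough sense of why the gap costs nothing, here is a back-of-the-envelope comparison. (The figures are mine and merely illustrative: fast myelinated axons on Earth conduct at roughly 100 m/s, while light in optical fiber travels at roughly two-thirds the speed of light in vacuum.)

```python
# Illustrative signal-latency comparison: biological axon vs. optical link.

NERVE_SPEED = 100.0      # m/s, a typical fast myelinated axon on Earth
FIBER_LIGHT_SPEED = 2e8  # m/s, light in glass fiber (~c / 1.5)

def latency_ms(distance_m: float, speed_m_per_s: float) -> float:
    """Travel time in milliseconds for a signal over a given distance."""
    return distance_m / speed_m_per_s * 1000.0

leg = 0.5  # meters: a hypothetical Smartspider leg
print(latency_ms(leg, NERVE_SPEED))        # ~5 ms by Earthly nerve
print(latency_ms(leg, FIBER_LIGHT_SPEED))  # ~0.0000025 ms by light

# Even a multi-meter gap between detached leg and central body adds only
# nanoseconds -- negligible next to any biological processing time.
```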
So here's the Smartspider. Her thousand limbs venture out, all under her constant, distributed control -- just as much control and integration as if they were attached. She's still a conscious organism, though spatially distributed, right?
Now imagine the Smartspider gets dumb. As dumb as a rabbit. Evolutionary pressures support a general specieswide reduction in intelligence. Now we have a Notsosmartspider. Is there any good reason to think it wouldn't be conscious if the rabbit is conscious?
Finally, let's take these thoughts home to Earth. The most sophisticated ant colonies (large leafcutter colonies, for example) are as intelligent and sophisticated in their behavior as rabbits. If we're going to reject biological chauvinism and contiguism (prejudice against discontiguous entities) on behalf of the Martians, why not reject those prejudices on behalf of Earthly superorganisms too? Maybe the colony as a whole has a distinctive, unified stream of conscious experience, of roughly mammal-level intelligence, above and beyond the consciousness of the individual ants (if individual ants even have individual consciousness).
There are potentially important differences between the Notsosmartspider and an ant colony. Individual ants might not be as tightly informationally integrated as are the pieces of the Notsosmartspider as I've imagined it, and the rules governing their interaction might be fairly simple, despite leading to sophisticated emergent behavior. But should we regard such differences as decisive? As a thought experiment, imagine those differences first in the brain of a single Martian biped.
And if ant colonies are conscious, might the United States be conscious too?
[Revised May 5]