Thursday, May 31, 2012
Ruth Barcan Marcus Memorial Celebration
My friends at Yale have asked me to post the following:

Students, colleagues, friends and admirers of Ruth are warmly invited to join us for a CELEBRATION OF THE LIFE OF RUTH BARCAN MARCUS, Saturday afternoon, September 29th, 2012, Yale University Campus, New Haven, CT. Details of venue, time and program will be settled soon, and a more formal announcement will then be distributed.
Posted by Eric Schwitzgebel at 3:45 PM 0 comments
Labels: announcements
Monday, May 28, 2012
Betelgeusian Beeheads
On the surface of a planet around Betelgeuse lives a species of animals who look like woolly mammoths but who act very much like human beings. I have gazed into my crystal ball and this is what I see: Tomorrow they visit Earth. They watch our television shows, learn our language, and politely ask permission to tour our lands. It turns out they are sanitary, friendly, excellent conversationalists, and well supplied with rare metals for trade, so they are welcomed across the globe.
The Betelgeusians are quirky in a few ways, however. For example, cognitive tasks take them, on average, about ten times longer to execute. This has no overall effect on their intelligence, but it does test the patience of conversational partners unaccustomed to the Betelgeusians' slow pace. The Betelgeusians find some tasks cognitively easy that we find cognitively difficult, and vice versa. They are baffled by our difficulty with simple logic problems like the Wason Selection Task, but they are impressed by our skill in integrating auditory and visual information.
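For readers unfamiliar with the Wason Selection Task mentioned above, here is a minimal sketch in Python. I'm assuming the classic vowel/even-number card version of the task; the code and card set are my illustration, not anything from the post:

```python
# Wason Selection Task (classic version, assumed here): four cards show
# A, K, 4, 7. Rule to test: "If a card has a vowel on one side, it has
# an even number on the other." Which cards MUST you turn over?
#
# A card matters only if its hidden side could falsify the rule, i.e.,
# only if the card might pair a vowel with an odd number.

def must_turn(visible):
    """Return True if the card showing `visible` could conceal a
    counterexample to the rule."""
    if visible.isalpha():
        # A visible vowel might hide an odd number; a consonant is irrelevant.
        return visible.lower() in "aeiou"
    # A visible odd number might hide a vowel; an even number is irrelevant.
    return int(visible) % 2 == 1

cards = ["A", "K", "4", "7"]
print([c for c in cards if must_turn(c)])  # → ['A', '7']
```

Most people correctly pick A but then pick 4 rather than 7; the 4 card cannot falsify the rule no matter what is on its other side, which is why the task is a standard example of an "easy" logic problem humans reliably get wrong.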
Over time, Betelgeusians migrate down from their orbiting ship. Patchy accommodations are made for their size and speed, and they start to attend our schools and join our corporations. Later, they start to run for political office, displaying roughly the same range of political virtues and vices. Although Betelgeusians don't reproduce by coitus, they find some forms of physical contact arousing and have broadly human attitudes toward pair-bonding. Marriage equality is achieved. What a model of interplanetary harmony!
Everyone agrees that Betelgeusians are conscious, of course. In fact, the Betelgeusians have a long and impressive academic tradition in philosophy of mind and introspective psychology.
Why do I call them "beeheads", you're surely wondering? Well, it turns out that when you look inside their heads and humps, you find not neurons but rather tens of millions of squirming insects, each a fraction of a millimeter across. Each insect has a complete set of minute sensory organs and a nervous system of its own (compare mymaridae wasps on Earth). The beeheads' behavior arises from complex patterns of interaction among these individually dumb insects. These mammoth creatures are much-evolved descendants of Betelgeusian bees that evolved in symbiosis with a brainless, living hive. (The hive itself began as a simple, non-living physical structure that over evolutionary history slowly incorporated symbiotic organisms -- as Earthly ant hives sometimes do -- which then merged into a larger whole.) The Betelgeusians' giant heads and humps have, altogether, a hundred times as many neuron-like cells as the human brain has; and the insects' interactions are so informationally efficient that neighboring insects can respond differentially to the behavioral or chemical effects of other insects' individual efferent neural impulses. The process, still, is an order of magnitude slower than our own.
Maybe there are little spatial gaps between the bees. Does it matter? Maybe, in the privacy of their homes, the bees sometimes fly apart and back together, exiting and entering through the mouth. Does it matter? Maybe if the exterior body is too severely injured, the bees recruit a new body from nutrient tanks -- and when they fly off to do this, they do so without too much interruption of cognition, able to report thoughts mid-transfer. They reconvene and say, "Oh it's such a free and airy feeling to be without a body! And yet it's a fearful thing too. It's good to feel again the power of limbs and mouth. May this new body last long and well! Shall we dance, then, love?"
If you accept the consciousness of Betelgeusian beeheads, you have accepted the first step in my argument that the United States is conscious.
Posted by Eric Schwitzgebel at 9:02 AM 5 comments
Labels: metaphysics, stream of experience, USA consciousness
Friday, May 25, 2012
What the Large Print Sayeth the Small Print Denieth
(by guest blogger Carrie Figdor)
In my last post I noted how professional science journals and other peer-reviewed venues coopt folk-psychological terms to report experimental results. Popular science reporters are not introducing these terms as metaphors to describe neuroscience results in laymen's terms; the laymen's terms are right there in the original articles. Nevertheless, it is a common response to blame public misunderstanding of science results either on the popular science press, or on the public's lack of science education, or both.
For example, Racine and colleagues (2005) coined the term "neuro-realism" to label the way popular science coverage of fMRI studies "can make a phenomenon uncritically real, objective or effective in the eyes of the public." One of their examples of neuro-realism is a 2004 report in The Boston Globe: "[B]ecause fMRI investigation shows activation in reward centers when subjects ingest high-fat foods, one reads, 'Fat really does bring pleasure'." But after examining how the term "reward" is used in peer-reviewed neuroscience articles, it is not clear why the Globe reporter is thought to be miscommunicating the science, let alone responsible for the miscommunication.
Similarly, if adding neuro-babble to an excerpt of a psychological explanation makes non-experts rate the explanation more highly (Skolnick Weisberg et al. 2008), shouldn't the response be: "Hey, we're the gods of knowledge about reality! Maybe, just maybe, with great power comes great responsibility -- including responsibility for our language!"? The public might well be excused for not being able to distinguish added neuro-babble ("Brain scans indicate that the 'curse' happens because of the frontal lobe circuitry known to be involved in self-knowledge") from serious claims in research articles ("Several fMRI studies reported increased prefrontal and parietal activity during lie ... Based on these findings, deception has been conceptualized as inhibition of truth and generation of lie mediated by the prefrontal cortex, with truth being a 'routine' response mediated by the posterior structures" (Langleben et al. 2005)).
Instead, where popular science reporters may be blameworthy is in their abdication of their role as professional skeptics. In a 2006 analysis of 134 popular science articles on fMRI studies appearing in the popular press between 1994 and 2004, Racine and colleagues found that "the vast majority of articles (n = 104, 79 percent) were uncritical in tone, whereas twenty-eight (21 percent) were balanced or critical", and that specialized sources were less critical than general news sources. In short, the ferociously aggressive skepticism to which political stories and figures are routinely subjected by the press is nowhere to be found. The Scientific American spoof may be a sign that this free ride is about to end.
Posted by Eric Schwitzgebel at 1:13 PM 2 comments
Labels: Carrie Figdor, neuroscience
Tuesday, May 22, 2012
Applying to PhD Programs in Philosophy, Part V: Statement of Purpose
[This is an update of the fifth part of a series published in 2007. For the full series, see here.]
I've never read a first draft of a statement of purpose (also called a personal statement) that was any good. These things are hard to write, so give yourself plenty of time and seek the feedback of at least two of your letter writers. Plan to rewrite from scratch at least once.
It’s hard to know even what a “Statement of Purpose” is. Your plan is to go to graduate school, get a Ph.D., and become a professor. Duh! Are you supposed to try to convince the committee that you want to become a professor more than the next guy? That philosophy is written in your genes? That you have some profound vision for the transformation of philosophy or philosophy education?
Some Things Not to Do
* Don’t wax poetic. Don’t get corny. Avoid purple prose. “Ever since I was eight, I've pondered the deep questions of life.” Nope. “Philosophy is the queen of disciplines, delving to the heart of all.” Nope. “The Owl of Minerva has sung to me and the sage of Königsberg whispers in my sleep: Not to philosophize is to die.” If you are tempted to write sentences like that last one, please do so in longhand, with golden ink, on expensive stationery which you then burn without telling anyone.
* Don’t turn your statement into a sales pitch. Ignore all advice from friends and acquaintances in the business world. Don’t sell yourself. You don’t want to seem arrogant or grandiose or like a BS-ing huckster. You may still (optionally!) mention a few of your accomplishments, in a dry, factual way, but to be overly enthusiastic about accomplishments that are rather small in the overall scheme of academia is somewhat less professional than you ideally want to seem. If you’re already thinking like a graduate student at a good PhD program, then you won’t be too impressed with yourself for having published in the Kansas State Undergraduate Philosophy Journal (even if that is, in context, a notable achievement). Trust your letter writers. If you’ve armed them properly with a brag sheet, the important accomplishments will come across in your file. Let them do the pitch. Also, don’t say you plan to revolutionize philosophy, reinvigorate X, rediscover Y, finally find the answer to timeless question Z, or even teach in a top-ten department. Do you already know that you will be a more eminent professor than the people on your admissions committee? You’re aiming to be their student, not the next Wittgenstein – or at least that’s how you want to come across. You want to seem modest, humble, straightforward. If necessary, consult David Hume or Benjamin Franklin for inspiration on the advantages of false humility.
* If you are applying to a program in which you are expected to do coursework for a couple years before starting on a dissertation – that is, U.S.-style programs as opposed to British-style programs – then I recommend not taking stands on particular substantive philosophical issues. In the eyes of the admissions committee, you probably aren’t far enough in your philosophical education to be adopting hard philosophical commitments. They want you to come to their program with an open mind. Saying "I would like to defend Davidson's view that genuine belief is limited to language-speaking creatures" comes across a bit too strong. Similarly, "I showed in my honors thesis that Davidson's view...". If only, in philosophy, honors theses ever really showed anything! (“Argued” would be okay.) Better: "My central interests are philosophy of mind and philosophy of language. I am particularly interested in the intersection of the two, for example in Davidson's argument that only language-speaking creatures can have beliefs in the full and proper sense of 'belief'."
* Don’t tell the story of how you came to be interested in philosophy. It’s not really relevant.
What to Write
So how do you fill up that awful, blank-looking page? In April, I solicited sample statements of purpose from successful recent PhD applicants. About a dozen readers kindly sent in their statements and from among these I chose three that I thought were good and also diverse enough to illustrate the range of possibilities. Follow the links below to view the statements.
- Statement A was written by Allison Glasscock, who was admitted to Chicago, Cornell, Penn, Stanford, Toronto, and Yale.
- Statement B was written by a student who prefers to remain anonymous, who was admitted to Berkeley, Missouri, UMass Amherst, Virginia, Wash U. in St. Louis, and Wisconsin.
- Statement C was written by another student who prefers to remain anonymous, who was admitted to Connecticut and Indiana.
Each of the statements also adds something beyond a description of areas of interest, though such additions aren't strictly necessary. Statement B starts with pretty much the perfect philosophy application joke. (Sorry, now it’s taken!) Statement C concludes with a paragraph describing the applicant’s involvement with his school’s philosophy club. Structurally, Statement B is topical and minimalist; Statement C is topical but salted with information about coursework relevant to the applicant’s interests; and Statement A is autobiographical, with considerable detail. Any of these approaches is fine, though the topical structure is more common and raises fewer challenges about finding the right tone.
Statement A concludes with a paragraph specifically tailored for Yale. Thus we come to the question of...
Tailoring Statements to Particular Programs
It's not necessary, but you can adjust your statement for individual schools. If there is some particular reason you find a school attractive, there's no harm in mentioning that. Committees think about fit between a student’s interests and the strengths of the department and about what faculty could potentially be advisors. You can help the committee on this issue if you like, though normally it will be obvious from your description of your areas of interest.
For example, if you wish, you can mention 2-3 professors whose work especially interests you. But there are risks here, so be careful. Mentioning particular professors can backfire if you mischaracterize the professors, or if they don't match your areas of stated interest, or if you omit the professor in the department whose interests seem to the committee to be the closest match to your own.
Similarly, you can mention general strengths of the school. But, again, if you do this, be sure to get it right! If someone applies to UCR citing our strength in artificial intelligence, we know the person hasn’t paid attention to what our department is good at. No one here works on AI. But if you want to go to a school that has strengths in both mainstream “analytic” philosophy and 19th-20th century “Continental” philosophy, that’s something we at UCR do think of as a strong point of our program.
I'm not sure I'd recommend changing your stated areas of interest to suit the schools, though I see how that might be strategic. There are two risks in changing your stated areas of interest: One is that if you change them too much, there might be some discord between your statement of purpose and what your letter writers say about you. Another is that large changes might raise questions about your choice of letter writers. If you say your central passion is ancient philosophy, and your only ancient philosophy class was with Prof. Platophile, why hasn’t Prof. Platophile written one of your letters? That’s the type of oddness that might make a committee hesitate about an otherwise strong file.
Some people mention personal reasons for wanting to be in a particular geographical area (near family, etc.). Although this can be good because it can make it seem more likely that you would accept an offer of admission, I'd avoid it since graduating Ph.D.'s generally need to be flexible about location and it might be perceived as indicating that a career in philosophy is not your first priority.
Explaining Weaknesses in Your File
Although hopefully this won't be necessary, a statement of purpose can also be an opportunity to explain weaknesses or oddities in your file -- though letter writers can also do this, often more credibly. For example, if one quarter you did badly because your health was poor, you can mention that fact. If you changed undergraduate institutions (not necessarily a weakness if the second school is the more prestigious), you can briefly explain why. If you don't have a letter from your thesis advisor because he died, you can point that out.
Statements of Personal History
Some schools, like UCR, also allow applicants to submit “statements of personal history”, in which applicants can indicate disadvantages or obstacles they have overcome or otherwise attempt to paint an appealing picture of themselves. The higher-level U.C. system administration encourages such statements, I believe, because although state law prohibits the University of California from favoring applicants on the basis of ethnicity or gender, state law does allow admissions committees to take into account any hardships that applicants have overcome – which can include hardships due to poverty, disability, or other obstacles, including hardships deriving from ethnicity or gender.
Different committee members react rather differently to such statements, I suspect. I find them unhelpful for the most part. And yet I also think that some people do, because of their backgrounds, deserve special consideration. Unless you have a sure hand with tone, though, I would encourage a dry, minimal approach to this part of the application. It’s better to skip it entirely than to concoct a story that looks like special pleading from a rather ordinary complement of hardships. This part of the application also seems to beg for the corniness I warned against above: “Ever since I was eight, I’ve pondered the deep questions of life...”. I see how such corniness is tempting if the only alternative seems to be to leave an important part of the application blank. As a committee member, I usually just skim and forget the statements of personal history, unless something was particularly striking.
For more general thoughts on the influence of ethnicity and gender on committee decisions, see Part VI of this series.
***
For further advice on statements of purpose, see this discussion on Leiter Reports – particularly the discussion of the differences between U.S. and U.K. statements of purpose.
See here for comments on the 2007 version of this post. You might want to skim through those comments before posting a comment below.
Posted by Eric Schwitzgebel at 3:21 PM 50 comments
Friday, May 18, 2012
Can a Rat Walk Down "Memory" Lane?
(by guest blogger Carrie Figdor)
In my first post I introduced the issue of rising public anger at science due to the apparently cavalier way in which empirical results affecting issues the folk hold dear -- silly things like human nature and the nature of the universe -- are being disseminated. I'd like to provide examples of the forms this miscommunication can take, which due to my philosophical interests focus on neuroscience (although the Krauss-Albert tussle over the term "nothing" is clearly germane).
Sometimes miscommunication is one-off. While researching the paper described below, I came across a 1956 Scientific American article in which the scientist, reporting his results to the public, credits B.F. Skinner with refining the methods for measuring pleasurable and painful feelings (Olds 1956). I thought this was hilarious (Skinner?!) until I tried to come up with a good explanation for why he would describe Dr. Radical Behaviorist this way, let alone in a news outlet where the audience is not casual and space is not at such a premium. I couldn't, or at least not in a way in which the scientist came out looking good. (Note: this criticism has nothing to do with the brilliance of the research.) Again, while searching for a related cognitive neuroscience article in the peer-reviewed Journal of Comparative Neurology, I was taken aback to find it in a special issue entitled "The Anatomy of the Soul". Were they serious? Were they joking? Which answer is less bad?
But a systematic source of foreseeable public confusion stems from the use of folk psychological terms to report neuroscience results in professional contexts. Such terms are coopted into neuroscience discourse to report results and translational implications and recycled back into public discourse via the popular science press without mention of possible shifts in meaning. In this paper I compared uses of some terms ("reward", "fear", and "memory") in bibliographically linked studies and found that it is at least an open question whether syntactically identical terms taken from the folk mean what they ordinarily mean, or even whether they remain semantically identical across studies and over time. Unless the public is clearly warned not to assume these words mean what they ordinarily do, miscommunication is pretty much guaranteed.
Here's one example. In a 1954 study of brain areas associated with "reward" in rats, "reward" is behaviorally defined as a stimulus associated with increased frequency of response, and electrical stimulation in certain brain areas is "rewarding in the sense that the experimental animal will stimulate itself in these places frequently and regularly for long periods of time if permitted to do so" (Olds & Milner 1954). By the time of a related 2005 fMRI study on romantic love, human subjects who self-report being madly in love are scanned while passively viewing photographs of their loved ones. The reported result is that areas of the brain associated in earlier studies (including the 1954 rat study) with motivation to acquire a "reward" are among those associated with being in love. But is love "rewarding" the way electrical stimulation of the brain is "rewarding"? Is a photograph passively viewed by a lover a "reward" the way an electrical brain stimulus self-administered with increasing frequency is a "reward"? There's certainly self-stimulation to photographs in the vicinity, but my guess is that this sort of "reward" and "rewarding" feeling isn't what subjects felt in the experiment. So while I don't doubt the septum and caudate nucleus are part of a "reward and motivation" system, I'm not at all sure what "reward and motivation" means that would cover all these cases. Of course, the public has no way of figuring out (and should not be expected to) that "reward" and cognates have undergone shifts in meaning since such terms were seized from the public sphere.
Posted by Eric Schwitzgebel at 1:16 PM 0 comments
Labels: Carrie Figdor, history of psychology, neuroscience
Tuesday, May 15, 2012
On the Epistemic Status of Deathbed Regrets
It's graduation time, so that means it's time to hear about what people do and do not regret on their death beds. Intended lesson: Pursue your dreams! Don't worry about money!
I can find no systematic research about what people on their deathbeds do in fact say that they regret. A PsycInfo database search of "death*" and "regret*" turns up this article as the closest thing. Evidently, what elderly East Germans most regret is having been victimized by war. There's also this inspiring pablum, widely discussed in the popular press.
Let's grant, however, that the commencement truisms have a prima facie plausibility. With their dying breaths, grandparents around the world say, "If only I had pursued my dreams and worried less about money!" Does their dying perspective give them wisdom? Does it matter that it's dying grandparents who are saying this rather than, say, 45-year-old parents or high school counselors or assistant managers at regional banks? The deathbed has rhetorical cachet. Does it deserve it?
I'm reminded of the wisdom expressed by Zaphod Beeblebrox IV in Hitchhiker's Guide to the Galaxy. Summoned in a seance, he says that being dead "gives one such a wonderfully uncluttered perspective. Oh-ummm, we have a saying up here: 'life is wasted on the living'."
There's something to that, no doubt. Life is wasted on the living. But here's my worry: The dead and dying are suspiciously safe from the need of having to live by their own advice. If I'm 45 and I say, "Pursue your dreams! Don't worry about money!" I can be held to account for hypocrisy if I don't live that way myself. But am I really going to live that way? Potential victimization by my own advice might help me more vividly appreciate the risks and stress of chucking the day job. Deathbed grandpa might be forgetting those risks and stress in a grandiose, self-flagellating fantasy about the gap between what he was and what he might have been.
A smaller version of this same pattern occurs day by day and week by week: Looking back, I can always fantasize having been more energetic, more productive, having seized each day with more gusto. Great! That would have been better. But seizing every day with inexhaustible gusto is superhuman. I forget, maybe, how superhuman that would be.
Another source of deathbed distortion might be this: To the extent one's achievements are epistemic and risk-avoidant, their costs might be foolishly easy to regret. Due to hindsight bias, opportunities sacrificed and energy spent to prove something (for example, to prove to yourself that you could be successful in business or academia) or to avoid a risk that never materialized (such as the risk of having to depend on substantial financial savings in order not to lose one's home) can seem not to have been worth it: Of course you would have succeeded in business, of course you would have been fine without that extra money in the bank. On your deathbed, you might think you should have known these things all along -- but you shouldn't have. The future is harder to predict than the past.
I prefer the wisdom of 45-year-olds -- the ones in the middle of life, who gaze equally in both directions. Some 45-year-olds also think you should pursue your dreams (within reason) and not worry (too much) about money.
Posted by Eric Schwitzgebel at 5:52 PM 12 comments
Labels: epistemology
Second Annual "Experiment Month" Initiative
Deadline for submissions June 15th.
The initiative is intended to help philosophers who are interested in running an experiment by pairing them with experts who can help with experimental design, participant recruitment, and statistical analysis.
Here are some of last year's results.
More info here.
Posted by Eric Schwitzgebel at 12:50 PM 0 comments
Labels: announcements
Friday, May 11, 2012
Twilight of the (Scientific) Gods?
(by guest blogger Carrie Figdor)
Is it a mere coincidence that the Metropolitan Opera is offering its latest Ring-cycle blitz at about the same time as my stint as an invited guest blogger for Eric? The skeptic in me warns against hasty judgment, yet I think there's an interesting relationship between the two series. Isomorphisms come cheap, but the best things in life are free, so even a cheap isomorphism is worth more than the best things in life.
Before I go on, I'll introduce myself: I'm a philosopher of mind and metaphysician at the University of Iowa, in the state where Herbert Hoover and Captain James T. Kirk are local notables and gay marriage is legal. I'm also a former Associated Press newswoman, hence a professional gadfly twice over. The news story I'm interested in maps Wagner's distinction between ordinary mortals and the gods to our distinction between "the folk" and scientists, who occupy the most powerful intellectual position in our culture. This story (but not Wagner's) is that the folk are getting deeply and inchoately pissed at the scientific gods. In this series of posts, I want to explore this anger and what it means for science, philosophy and the folk.
One unmistakable expression of it came on April 1, 2012, when Scientific American published a spoof of neuroscience claims, carefully labeled as such just in case the joke was not immediately obvious just by reading it: "Neuroscientists: We Don't Really Know What We're Talking About, Either." It began: "NEW YORK—At a surprise April 1 press conference, a panel of neuroscientists confessed that they and most of their colleagues make up half of what they write in research journals and tell reporters." I suspect the editor had inserted a less generous percentage in an earlier draft.
A second came on April 23, 2012, when The Atlantic Monthly included the following paragraph in an interview with Lawrence Krauss that was sparked by the Krauss-Albert affair (a clash of titans worthy of Wagner):

"Because the story of modern cosmology has such deep implications for the way that we humans see ourselves and the universe, it must be told correctly and without exaggeration -- in the classroom, in the press and in works of popular science. To see two academics, both versed in theoretical physics, disagreeing so intensely on such a fundamental point is troubling. Not because scientists shouldn't disagree with each other, but because here they're disagreeing about a claim being disseminated to the public as a legitimate scientific discovery. Readers of popular science often assume that what they're reading is backed by a strong consensus."
I'll borrow from The Atlantic to elaborate the issue: Because the story of neuroscience or physics (i.e., science) has such deep implications for the folk, it is important to tell that story to the folk correctly and without exaggeration. And yet it is not being told that way. The Atlantic is troubled because the public is being fed what may or may not be a crock, not because the gods are clashing (which is only to be expected). Scientific American effectively accuses neuroscientists of being full of it -- half the time, but which half? -- and by adding that "either" practically screams that its staff is getting really tired of being played.
Both missives from leaders in the mortal sphere imply that the folk are not being treated as they believe they should be. This is all the more annoying when your offerings -- um, taxpayer dollars -- are rabidly sought by these gods in the form of NSF grants. And so the question arises: how much longer will, or should, this situation go on? What can be done to change it, for the good of the folk and science?
Posted by Eric Schwitzgebel at 4:50 PM 6 comments
Labels: Carrie Figdor, neuroscience
Monday, May 07, 2012
Grounds for Dream Skepticism
In his famous anti-skeptical work, On Certainty, Wittgenstein wants "grounds for doubt". He wants positive reason to accept a radically skeptical hypothesis.
Trudeau obliges.
The key panels:
The same reasoning might apply if things are implausibly hellish.
Such reasoning should apply especially to Wittgenstein himself. I mean, what's the prior probability of that being your life -- impoverished scion of a suicidal Austrian family of immense wealth, arguably the greatest philosopher of your day though unemployed and hardly publishing, etc.? At the time he wrote On Certainty, Wittgenstein should have thought: Surely all this is some weird dreambrain mashup of wish fulfillment and nightmare!
Philosophers at the peak of public fame should all be dream skeptics. QED.
Posted by Eric Schwitzgebel at 8:35 AM 2 comments
Labels: dreams, epistemology, humor, skepticism
Friday, May 04, 2012
Martian Rabbit Superorganisms, Yeah!
Most philosophers of mind (but not all) believe that rabbits have conscious experiences -- that rabbits are not, as it were, mere machines all dark inside but rather that there's "something it's like" to be a rabbit, that they can have sensory experiences, that they can experience pain, that they have (in contemporary jargon) "phenomenology". After all, rabbits are not so different from us, biologically. Rabbits might lack language and higher forms of abstract and self-reflective cognition, but few philosophers think that such differences between us and them are sufficient to render rabbits nonconscious.
Most philosophers of mind (but not all) likewise believe that if we were visited by a highly intelligent naturally-evolved alien species -- let's call them "Martians" -- that alien species might possess a radically different biology from us and yet still have conscious experience. Outwardly, let's suppose, Martians look rather like humans; and also they behave rather like humans, despite their independent evolutionary origins. They visit us, learn English, and soon integrate into schools and corporations, maybe even marriages. They write philosophical treatises about consciousness and psychological treatises about their emotions and visual experiences. We will naturally, and it seems rightly, think of such Martians as genuinely conscious. Inside, though, they have not human-style neurons but rather complicated hydraulics or optical networks or the like. To think that such beings would necessarily be nonconscious zombies, simply because their biology is different from ours, seems weirdly chauvinistic in a vast universe in which complex systems, intelligently responsive to their environments, can and do presumably evolve myriad ways.
Okay, so how about Martian rabbits? Martian rabbits would be both biologically and behaviorally very different from you and me. But it seems hard to justify excluding them from the consciousness club, if we let in both Earthly rabbits and Martian schoolteachers. Right?
Ready for a weirder case? Martian Smartspiders, let's suppose, are just as intelligent and linguistically sophisticated as the Martian bipeds we love so well. In fact, we couldn't distinguish the two in a Turing Test. But the Smartspiders are morphologically very different from bipeds. Smartspiders have a central body that contains basic biological functions, but most of their cognitive processing is distributed among their 1000 legs (which evolved from jellyfish-like propulsion and manipulation tentacles). Information exchange among these thousand legs is fast, since in the Martian ecosystem peripheral nerves operate not by the slowish chemical-electrical processes we use but rather by shooting light through reflective capillaries (fiber optics), saving precious milliseconds of reaction time. Thus the 1000 distributed centers of cognitive processing can be as quickly and tightly informationally integrated as are different regions of our own brains -- and the ultimate output is just as smart and linguistic as ours. If there were such Turing-Test-passing Martian Smartspiders, it seems we ought to let them into the bounds of genuinely conscious organisms, if we're letting in the bipedal Martians and the Martian rabbits.
Suppose Martian Smartspiders evolve so that their legs become detachable, while still capable of movement and control by the organism as a whole. The detachment can work because the nerve signals are light-based: The Martian just needs to replace directly connective fibers with transducers that can propagate the light signal across the gap from the surface of the limb to the portion of the central body where that limb had previously been attached. One can see how detachable limbs might be advantageous in hunting and in reaching into narrow spaces. Detaching a leg should have negligible impact on cognition speed as long as there are suitable transducers on the surface of the leg and on the surface of the main body where it normally attaches, since the information will cross the gap at lightspeed. If we put the Martian Smartspider in a sensory deprivation room and disable its proprioceptive system, it might not even be able to tell if its legs are attached or not.
So here's the Smartspider. Its thousand limbs venture out, all under the constant distributed control of the Smartspider -- just as much control and integration as if they were attached. She's still a conscious organism, though spatially distributed, right?
Now imagine the Smartspider gets dumb. As dumb as a rabbit. Evolutionary pressures support a general specieswide reduction in intelligence. Now we have a Notsosmartspider. Is there any good reason to think it wouldn't be conscious if the rabbit is conscious?
Finally, let's take these thoughts home to Earth. The most sophisticated ant colonies (e.g., large leaf cutter colonies) are as intelligent and sophisticated in their behavior as rabbits. If we're going to reject biological chauvinism and contiguism (prejudice against discontiguous entities) on behalf of the Martians, why not reject those prejudices on behalf of Earthly superorganisms too? Maybe the colony as a whole has a distinctive, unified stream of conscious experience, of roughly mammal-level intelligence, above and beyond the consciousness of the individual ants (if individual ants even do have individual consciousness).
There are potentially important differences between the Notsosmartspider and an ant colony. Individual ants might not be as tightly informationally integrated as are the pieces of the Notsosmartspider as I've imagined it, and the rules governing their interaction might be fairly simple, despite leading to sophisticated emergent behavior. But should we regard such differences as decisive? As a thought experiment, imagine those differences first in the brain of a single Martian biped.
And if ant colonies are conscious, might the United States be conscious too?
[Revised May 5]
Posted by Eric Schwitzgebel at 5:34 PM 15 comments
Labels: metaphysics, stream of experience, USA consciousness
Friday, April 27, 2012
Adolf Eichmann, Hannah Arendt, Stanley Milgram, and King Xuan of Qi
Perhaps my favorite Mencius passage is 1A7. At its core is a story of a king's mercy on an ox.
While the king was sitting up in his hall, an ox was led past below. The king saw it and said, "Where is the ox going?" Hu He replied, "We are about to ritually anoint a bell with its blood." The king said, "Spare it. I cannot bear its frightened appearance, like an innocent going to the execution ground." Hu He replied, "So should we dispense with the anointing of the bell?" The king said, "How can that be dispensed with? Exchange it for a sheep." (Van Norden, trans.)
Mencius asks the king (King Xuan of Qi):
If Your Majesty was pained at its being innocent and going to the execution ground, then what is there to choose between an ox and a sheep?... You saw the ox but had not seen the sheep. Gentlemen cannot bear to see animals die if they have seen them living. If they hear the cries of their suffering, they cannot bear to eat their flesh. Hence, gentlemen keep their distance from the kitchen.
(Note that Mencius does not conclude that gentlemen should become vegetarians. Interesting possibilities for reflection arise regarding butchers, executioners, soldiers, etc., but let's not dally.) To understand the next part of the passage, you need to know what kind of person this king was. Skip forward to passage 1B11 where Mencius says to King Xuan:
Yan was ferocious to its people. Your Majesty went out and attacked it. The people thought that You were going to deliver them as from flood and fire. They welcomed Your Majesty with baskets and food and pots of soup. But if You kill their fathers and older brothers, put burdens on [enslave? capture? take hostage?] their sons and younger brothers, destroy their shrines and temples, plundering their valuable goods -- how could that be acceptable?
The invasion of Yan probably occurred after his sparing of the ox, but it reveals King Xuan's character: He has mercy on an ox because the ox looks like an innocent person, but at the same time he is perfectly willing to kill innocent people. Now back to 1A7. Mencius says to the king:
Suppose there were someone who reported to Your Majesty, 'My strength is sufficient to lift five hundred pounds, but not sufficient to lift one feather. My eyesight is sufficient to examine the tip of an autumn hair, but I cannot see a wagon of firewood.'... In the present case your kindness is sufficient to reach animals, but the effects do not reach the commoners.... Measure it, and then you will distinguish the long and the short. Things are all like this, the heart most of all. Let Your Majesty measure it.
I can't read Hannah Arendt's famous portrayal of Adolf Eichmann without thinking of this passage from Mencius. Eichmann (at least in Arendt's portrayal) respects and values his Jewish acquaintances, friends, and relatives -- even at one point has a Jewish lover. When he goes east to see the killing operations, he finds it morally horrible and can't bear to look. Yet he masterfully shipped hundreds of thousands of Jews to their deaths in the Holocaust. Near the end, Eichmann even defied Himmler's order to stop having Jews killed, since he knew Himmler's order would be contrary to Hitler's wish. Like King Xuan of Qi, Eichmann is merciful and soft (perhaps too soft) to those he sees, while indifferent to those outside his field of view, failing to note the similarity between the cases -- failing to "measure his heart".
You have probably heard of the Milgram experiment. What most people remember about it is that it was amazingly easy for Stanley Milgram to convince research subjects to deliver high-voltage, maybe even fatal, shocks to another research subject. (All shocks were actually faked.) What some people forget, but what Milgram himself emphasizes, is that people's obedience to instructions to deliver high-voltage shocks was very much contingent on the relative distances of the victim and of the authority issuing the instructions. If the victim was near at hand and the authority far away, almost no one complied. If the authority was nearby and the victim neither visible nor audible, almost everyone complied.
King Xuan and Eichmann would presumably be the perfect Milgram subjects.
Think and you will get it, Mencius says. Take the heart that is over here and apply it over there. Note how you react in the nearby, vivid cases; then note, intellectually, the lack of relevant difference between those cases and more distant, less vivid cases. For Mencius, this attention to the natural impulses of the heart, and the rational extension of those impulses, is the key to moral development.
Worth noting in conclusion: It's not all about extending impulses of sympathy or pity, as in 1A7 (and in some recent accounts of moral development). Mencius holds that one can also notice and intellectually extend respect, ritual propriety, and uprightness (3A5, 6A10, 7A15, 7B31).
Posted by Eric Schwitzgebel at 1:44 PM 8 comments
Labels: chinese philosophy, moral development, moral psychology
Tuesday, April 24, 2012
Bleg: Statements of Purpose / Personal Statements
I'm planning to update my series on applying to PhD programs in philosophy. One thing I'd like to do is display some actual statements of purpose (also known as personal statements) from successful applicants, so that future applicants can see what these things really look like at full length. Thus, I'm hoping that some readers will be willing to send me their past statements of purpose for this use.
If you're willing to help out, please email me (eschwitz at domain: ucr.edu) the following:
(1.) your statement of purpose;
(2.) the academic year in which you used it;
(3.) what schools you were admitted to (not just where you accepted but your full range of admittances);
(4.) whether you want me to list your name and link to your homepage (anonymous is fine if you prefer, but I also want to give credit where it's due).
I'll select a few statements to post on the Underblog and link to from the main blog when I update the PhD application series. I'll aim to display a few different flavors, to give readers a sense of the range of statement types. Selection criteria will include: recency (past 5 years preferable), success of application (elite schools nice but not necessary), and my sense of the statement's quality, representativeness, and difference from other selected statements.
Thank you for your awesomeness!
Posted by Eric Schwitzgebel at 8:26 PM 11 comments
Labels: applying to grad school
Tuesday, April 17, 2012
How Much Should You Care about How You Feel in Your Dreams?
Psychological hedonists say that people are motivated mainly or exclusively by the pursuit of pleasure and the avoidance of displeasure or pain. Normative hedonists say that what we should be mainly or exclusively concerned about in our actions is maximizing our own and others' pleasure and minimizing our own and others' displeasure. Both types of hedonism have fallen on hard times since the days of Jeremy Bentham. Still, it might seem that hedonism isn't grossly wrong: Pleasure and displeasure are crucial motivators, and increasing pleasure and reducing displeasure should be a major part of living wisely and of structuring a good society.
Now consider dreams. Often a dream is the most pleasant or unpleasant thing that occurs all day. Discovering that you can fly, whee! How much do you do in waking life that's as fun as that? Conversely, how many things in waking life are as unpleasant as a nightmare? Here's a great opportunity, then, to advance the hedonistic project! Whatever you can do to improve the ratio of pleasant to unpleasant dreams should have a big impact on the balance of pleasure vs. displeasure in your life.
This fact, naturally, explains the huge emphasis utilitarian ethicists have placed on improving one's dream life. It also explains why companies offering dream-improvement regimens make so much more money than those promising merely weight loss.
Not. Of course not! When I ask people how concerned they are about the overall hedonic balance of their dreams, their response is almost always "Meh". But if the overall sum of felt pleasure and displeasure is important -- even if it's not the whole of what makes life valuable -- shouldn't we take at least somewhat seriously the quality of our dream lives?
Dreams are usually forgotten, but I'm not sure how much that matters. Most people forget most of their childhood, too, and within a week they forget almost everything that happened on any given day. That doesn't seem to make the hedonic quality of those events irrelevant. Your three-year-old may entirely forget her birthday party a year later, but you still want her to enjoy it, right? And anyway: We can easily work to remember our dreams if we want. Simply jotting down one's dreams in a diary hugely increases dream recall. So if recall were important, one could pursue a two-step regimen: First, work toward improving the hedonic quality of your dreams (maybe by learning lucid dreaming), and second, improve your dream memory. The total impact on the amount of remembered pleasure in your life would be enormous!
Robert Nozick famously argued against hedonism by saying that few people would choose the guaranteed pleasure one could get by plugging into an experience machine over the uncertain pleasures of real-life accomplishment. Nozickian experience machines don't really exist, of course, but dreams do, and, contra hedonism, our indifference about dreams suggests that Nozick is right: Few people value even the great pleasures and displeasures of dream life over the most meager of real-world accomplishments.
(I remember chatting with someone at the Pacific APA about this a couple weeks ago -- Stephen White, maybe? In the fog of memory, I can't recall exactly who it was or to what extent these thoughts originated from me as opposed to my interlocutor. Apologies, then, if they're due!)
[Related post: On Not Seeking Pleasure Much.]
Posted by Eric Schwitzgebel at 2:13 PM 9 comments
Labels: dreams, moral psychology, stream of experience
Friday, April 13, 2012
Steven Pinker: "Wow, How Awesome We Liberal Intellectuals Are!"
Okay, maybe that's not a direct quote.
There aren't many 700-page books I enjoy from beginning to end. Steven Pinker's The Better Angels of Our Nature was one. Pinker's sweep is impressive, his ability to angle in on the same issue in many ways, his knack for extracting central points from a morass of scholarship, his engagingly accessible but rigorous prose. He is a gifted scholar; his mind scintillates.
But the book also has a comfortable, self-congratulatory tone that leaves me uneasy. By "self-congratulatory" I don't mean that Pinker congratulates himself personally, but rather that he congratulates us -- us Western, highly educated, cosmopolitan liberals, with our broad, sober, rational sense of the world, with our far-reaching sympathies, with our ability to take the long view and to keep human vice in check.
One manifestation of this self-congratulation is how impressed Pinker seems to be that it has been almost seventy years since Europeans and North Americans have killed each other in war by the tens of millions. He calls this "the Long Peace", and he concludes Chapter 5 with the thought that "perhaps, at last, we're learning" to avoid war (p. 294). Credit for the Long Peace, in Pinker's view, goes to liberalism, democracy, "gentle commerce", rising levels of education, and the increasingly open exchange of ideas. The same forces for good also get credit for the "Rights Revolutions": minority rights, women's rights, gay rights, children's rights, and animal rights. The printing press, books, iPhones, university education, hooray! I love all those things too. But it makes me nervous to find myself praising my era above all other eras, my political system above all other political systems, and my types of contribution to society (books, education, technology, communication) as the foundation of all this excellent progress. I wish I could detect any hint of self-suspicious nervousness in Pinker.
Pinker concludes his chapter on the "Better Angels" -- on the sources of all our new peace and rights -- in praise of reason as the best and most dependable source of our progress. He argues that in the past hundred years our ability to think abstractly has risen enormously due to formal schooling, as revealed by massive improvements in people's performance on IQ tests (the Flynn Effect). And this increase in abstract reasoning capacity has, in turn, resulted in immense moral improvement. Men can now imagine much better what it's like to be a woman; white people can imagine what it's like to be black; adults can imagine what it's like to be children. Also, we can reason much better from abstract principles such as "all people are created equal" without being blinded by parochial bunk about the special destiny of our nation, etc. Pinker writes:
The other half of the sanity check is to ask whether our recent ancestors can really be considered to be morally retarded. The answer, I am prepared to argue, is yes. Though they were surely decent people with perfectly functioning brains, the collective moral sophistication of the culture in which they lived was as primitive by modern standards as their mineral spas and patent medicines are by the medical standards of today. Many of their beliefs can be considered not just monstrous but, in a very real sense, stupid. They would not stand up to intellectual scrutiny as being consistent with other values they claimed to hold, and they persisted only because the narrower intellectual spotlight of the day was not routinely shone on them (p. 658).
Parody: Come to Harvard, study with us, and become a moral genius!
Pinker describes empirical evidence for seven connections between abstract reasoning and moral virtue:
(1.) People with higher IQs commit fewer crimes.
(2.) People with higher IQs are more likely to cooperate in "Prisoner's Dilemma" experiments.
(3.) People with higher IQs are more likely to be liberals.
(4.) People with higher IQs are more likely to support economic policies, like free trade, that (Pinker argues) tend to lead to peace between nations.
(5.) Countries whose populace had higher IQs in the 1960s were found in one study to be more likely to have prosperity and democracy in the 1990s.
(6.) Another study found countries with better educated populations to be less likely to enter civil war.
(7.) Another study found that politicians who speak in more nuanced, complex manner are less likely to lead their countries into war.
All these connections are interesting, but I don't see a compelling case here for the power of formal schooling and intellectual thought about moral issues to transform moral morons into better angels. Although Pinker sometimes notes that the studies in question control for confounding factors like income, it is hard to control for all potential confounds, and there are certainly some confounds that leap to mind. Higher IQ, for example, in our society, seems to relate to greater opportunity to advance one's interests other than by criminal means. People with more schooling might also react differently to the situation of being brought into a laboratory and given a Prisoner's Dilemma game; for example, they might be inclined to game the situation at a higher level by cooperating mainly as a means of communicating their cooperative nature to the experimenter. (As a Prisoner's Dilemma subject in a Stanford experiment in the 1980s, I seem to remember choosing to cooperate for exactly this reason.) Etc.
My own research on the moral behavior of ethics professors might be interpreted as evidence against Pinker's thesis. If we're really interested in the effect of intellectual moral reflection on real-world moral behavior, the comparison of ethics professors versus non-ethicist philosophers and other professors is potentially revealing because ethicists and other groups of professors will be similar, overall, in amount of formal schooling and in overall ability at abstract thought. But plausibly, ethics professors will have, on average, devoted considerably more abstract reasoning to moral issues like charitable donation, vegetarianism, and the nature of interpersonal virtue, than non-ethicists will have. And I have consistently found that ethicists behave on average no morally better in such matters than do comparison groups of other professors.
Pinker seems to recognize the potential threat to his thesis from the not-especially-admirable behavior of intellectuals. Unfortunately, he offers no detailed response, saying only:
It's also important to note that [Pinker's hypothesis] is about the influence of rationality -- the level of abstract reasoning in society -- and not about the influence of intellectuals. Intellectuals, in the words of the writer Eric Hoffer, "cannot operate at room temperature." They are excited by daring opinions, clever theories, sweeping ideologies, and utopian visions of the kind that caused so much trouble during the 20th century. The kind of reason that expands moral sensibilities comes not from grand intellectual "systems" but from the exercise of logic, clarity, objectivity, and proportionality. These habits of mind are distributed unevenly across the population at any time, but the Flynn Effect lifts all boats, and so we might expect to see a tide of mini- and micro-enlightenments across elites and ordinary citizens alike.
I find it hard to see the merit in this response. It seems to be simultaneously a kind of self-flattery -- it's the kind of abstract moral and political reasoning that we intellectuals are so good at that generates moral enlightenment -- and a self-flattering moral excuse -- but don't expect us intellectuals to achieve much personal moral progress from our reasoning! We're too hot; we can't operate at room temperature! Take our intellectualist morals, please, our U.S. higher education, our professorial sense of right and wrong; treasure the moral improvements that flow from the formal schooling we provide; but don't expect us to exemplify the moral standards we impart to you.
No, no, it's not that bad. But I do wish that Pinker had worried more that it might be that bad.
Posted by Eric Schwitzgebel at 3:27 PM 6 comments
Labels: ethics professors, moral psychology
Tuesday, April 03, 2012
On Whether the Job of an Ethicist Is Only to Theorize about Morality, Not to Be Moral
Over several studies, I've found that professional ethicists tend to behave no better than non-ethicists. Ethicists sometimes react to my work by saying "My job is to theorize about ethics, not to live the moral life." What should we make of this response?
First: I agree about the formal job description.
Second: If the idea is that an ethicist's professionally espoused moral views both are and should be entirely isolated from her personal life, that seems an odd position to endorse. Taken to its natural conclusion, it seems to imply that ethicists advocating vegetarianism should be expected to consume cheeseburgers at the same rate as does everyone else. We should resist that conclusion. Both normatively and descriptively, we should expect Peter Singer to live at least approximately the vegetarianism he so passionately advocates. Analogously, if not quite as starkly, it seems reasonable to expect those Kantians who think that lying is particularly heinous to lie a bit less, on average, than do other people, and to expect Confucians who see filial duty as important to be a bit more attentive to their parents, and to expect consequentialists who emphasize the huge importance of donating to famine relief to donate a bit more to famine relief than do other people. Sainthood would be too much to expect. But some movement to harmonize one's life and one's moral theories seems both normatively appropriate and descriptively likely, at least on the face of it.
Third: Ethicists seem, on average, to espouse somewhat more stringent moral views than do non-ethicists. For example, ethicists seem to be more likely than non-ethicists to say it's morally bad to eat meat, and on average they seem to think that people should donate more to charity than non-ethicists seem to think people should donate. (See here.) Unless there is some countervailing force, then, movements to harmonize normative attitude and real-world behavior ought to lead ethicists, on average, to regulate their moral behavior a bit more stringently than do non-ethicists. The problem is that this doesn't seem empirically to be the case. For example, although Josh Rust and I found 60% of ethicists to describe "regularly eating the meat of mammals such as beef and pork" as morally bad, compared to 45% of other philosophers and only 19% of professors in other departments, when asked if they had eaten meat at their previous evening meal, we found no statistically significant difference among the three groups.
Fourth: So we might consider some countervailing forces. One possibility is that there's some kind of "moral licensing" effect. Suppose, for example, that a consequentialist donates a wad to charity. Maybe then she feels free to behave worse in other ways than she otherwise would have. Suppose a Kantian remains rigorously honest at some substantial cost to his welfare. Maybe then he feels freer to be a jerk to his students. One depressing thought is that all this cancels out: Our efforts to live by our ethical principles exert sufficient psychic costs that we compensate by acting worse in other ways, only moving around the lump under the rug.
A very different possibility: Maybe those of us attracted to moral theorizing tend to be people with deficient moral emotional reactions, which we compensate for intellectually. Our moral reflection as ethicists does morally improve us, after all, relative to where we would be without that reflection -- but that improvement only brings us up to average.
Still another possibility: Ethicists are especially talented at coming up with superficially appealing rationalizations of immoral behavior, setting them free to engage in immoralities that others would resist. On average, the boost from harmonizing to stricter norms and the loss from toxic rationalization approximately cancel out.
There are other possibilities, too, interesting and empirically risky. We should explore such possibilities! But I don't think that such exploration is what ethicists have in mind when they say their job is only to theorize about morality, not to live it.
Fifth: I acknowledge that there is something a bit unfair, still, about holding ethicists to especially high standards because of their career choice. I don't really want to do that. In fact, I find something admirable in embracing and advocating stringent moral standards, even if one doesn't succeed in living up to those standards. Ultimately, most of the weight in evaluating people's moral character should rest on how they behave, not on how far they fall short of their own standards, which might be self-scathingly high or self-flatteringly low.
My aim is not to scold ethicists for failing to live up to their often high standards but rather to confront the issue of why there seems to be such a tenuous connection between philosophical moral reflection and real-world moral behavior. The dismissive "that's not my job" seems to me to be exactly the wrong spirit to bring to the issue.
Posted by Eric Schwitzgebel at 12:09 PM 27 comments
Labels: ethics professors, moral psychology
Saturday, March 31, 2012
On Whether the Rich Are Jerks
A recent article by Paul Piff and collaborators, purporting to show that rich people are jerks (more formally: "higher social class predicts increased unethical behavior"), has been getting attention in the popular media. Numerous people have sent me the article, correctly surmising that I'd be interested given my own research on the moral behavior of (generally high socioeconomic status) ethicists.
The article nicely displays some of the difficulties of researching moral behavior.
First, let me express a thought about the article's reception. If Piff and collaborators had found no differences in behavior, it seems a reasonable conjecture that the article would have received less attention. It might even have been difficult to publish at all. The same might be true even if Piff et al had found significant results but in the opposite direction, that is, if they had found the rich better behaved. Readers' and referees' critical acumen would probably have been activated, much more so than for a sexy result that tickles our fancy. Consider the many filters that a study must pass -- from approval by one's advisor, to design, to data collection, to analysis and write-up, to refereeing, to editorial acceptance, to public dissemination. At each step, the sexy study has an advantage over less-sexy competitors. The cumulative advantage in the marketplace of ideas should make us nervous about forming our opinions based on what we see in the news. (I recognize this applies to my own research on ethics professors too. So far, my most frequently mentioned study on ethicists is the one study that found ethicists behaving worse.)
Now let's consider the methods. The authors report the results of seven different studies.
Two studies examine the rudeness of drivers. Piff et al report that people driving fancy cars are less likely to wait their turn at a four-way stop and less likely to stop for a pedestrian entering a crosswalk. While I like the real-world naturalness of this study ("ecological validity"!), this particular measure seems very likely to be subject to experimenter effects -- that is, distortion in coding and results so as to favor the hypothesis of the experimenter. Experimenter effects can be large even when there is no obvious source of bias (hence medical research typically aims to be "double blind"). In this case the sources of possible coder bias seem obvious and very difficult to control. This is especially true of the crosswalk study. A confederate of the experimenter steps out into the crosswalk, and the experimenter codes both the perceived status of the car and whether it stops. Wisely, the status of the car is coded before the experimenter knows whether it has stopped. But anyone who has been a pedestrian in the San Francisco Bay Area (where the study was conducted) knows that the crosswalk is a place of subtle communication between ped and driver: You take a step out, you catch the driver's eye. How confidently you step, the look on your face, your reaction (or not) to the driver's glance and to the change (or not) of velocity -- all this has a big effect on what happens. The results might as easily reflect the expectations of the experimenter as any real difference in driving patterns. So this is exactly the sort of case in which one would expect large experimenter effects. Since the results have only mid-grade p values (.05 > p > .01), a small experimenter effect could vitiate them entirely.
(Piff et al state that the coders and confederates were "blind to the hypothesis of the study", but it is hard to imagine that the coders don't at least have strong suspicions, given that they are being asked to code the luxuriousness of the vehicles. At the very least, this should make vehicle status very salient to them, amping up any of the coders' prior expectations of a relationship between vehicle status and driving behavior.)
How about the other studies? Studies 3 and 5 asked participants to read scenarios and then describe how ethically or unethically they would act in those scenarios. Piff et al report that participants reporting higher social class also report that they would act less ethically in such scenarios. Would it seem too fussy of me to say that I don't fully trust self-report of moral behavior in hypothetical scenarios? I would like some evidence that this isn't, say, actually a measure of honesty and frankness instead of a measure of differences in how one would really act in such scenarios, with self-reports of less moral behavior revealing more honesty and frankness than do self-reports of moral perfection. That interpretation would completely flip the moral significance of Piff et al's results. Or maybe the measure is really something more like a measure of one's opinion about one's own moral character, which might have a zero correlation with real differences in moral character (as I suggest here)?
In Study 4, after completing filler tasks, participants were offered candy from a jar ostensibly for children in a nearby laboratory. Afterwards, they were asked how many candies they had taken from the jar. Participants who had been primed to think of themselves as relatively low class (by being asked to compare themselves to the rich, the well educated, and the prestigiously employed) reported having taken less candy than participants who had been primed to think of themselves as relatively high class [edited 5/28]. "Wait, what?" I hear you asking. The high-class-primed participants reported having taken more candy? But did they actually take more candy? If I'm reading the article correctly, the experimenters chose not to measure actual theft, relying on self-report instead, though the subjects' fingers were right there in the jar! Thus, honesty is confounded with immorality, as in Studies 3 and 5. Perhaps I can also mention the weirdness of coming into a psychology lab and then being offered candy ostensibly for children elsewhere. Are participants really buying this cover story? I participated in a few psychology studies as an undergrad, and I suspect I wouldn't have bought it for a minute. Educated undergrads expect to be lied to by psychologists.
Study 6 also has cover story problems (see also my discussion of a similar study by Gino and Ariely). Participants are set in front of a computer ostensibly presenting them with the outcome of random die rolls. Participants are asked to self-report the outcome -- without the experimenter checking -- and they are told they will have a higher chance of winning a prize if they self-report higher results. I ask you to imagine yourself as a participant in this experiment. What do you think is going on? Is there a moral obligation to tell the truth? Or is the whole thing just silly? The experimenters have brought you into this weird situation in which they seem, pretty much explicitly, to be asking you to lie to them. They, of course, are themselves lying to you, as you probably suspect. The connection between behavior in this setting and real-world honesty seems dubious at best.
In Study 7, participants were either asked to list three things about their day or three benefits of greed. They were then asked to self-report whether they would engage in immoral behavior in hypothetical scenarios. Participants who had been asked to list positive features of greed said that they would engage in more immoral behavior in the hypothetical scenarios, and this was especially the case for the lower socioeconomic status participants. Therefore...? In addition to the general types of concerns raised above, I might mention that an experimental context in which a researcher is asking you to list advantages of greed might encourage the respondent to entertain certain hypotheses about the experiment that influence her answers. It might also encourage the respondent to expect a more forgiving moral atmosphere in which self-report of selfish behavior would be viewed less negatively.
Real moral behavior is hard to measure. I appreciate the difficulty of the researchers' task. Three cheers for convergent measures! I think it's cool that this is being done, and I enjoyed reading the article and thinking about the issues. But I hope I will be forgiven for not buying it in this case.
Update, May 28:
Readers of the post might also be interested in this critical reaction and response (HT Rolf Degen).
Posted by Eric Schwitzgebel at 6:57 AM 10 comments
Labels: jerks, moral psychology, psychological methods
Friday, March 23, 2012
Why Tononi Should Think That the United States Is Conscious
This is the fourth and probably last in a series of posts on why several major theorists of consciousness should attribute literal "phenomenal" conscious experience to the United States, considered as a concrete but spatially distributed entity at least partly composed of citizens and residents. Previous posts treated Dennett, Dretske, and Humphrey. Humphrey and I have an extended exchange in the comments field of my post on his work, and I have offered general considerations supporting the view that if materialism is true the United States is probably conscious here, here, and here (page 18 ff). A full-length paper is in the works but not yet in circulatable shape.
I chose Dennett, Dretske, Humphrey, and Tononi as my sample theorists for two reasons: First, they represent a diverse range of very prominent materialist theories of consciousness. And second, they are theoretically ambitious, trying to explain consciousness in general in any possible organism (and not just human consciousness or consciousness as it appears on Earth, like most scientific and neural accounts), covering the metaphysics from top to bottom (and not, say, resting upon a relatively unanalyzed notion of "representation" on which it would be unclear whether the United States literally has the right sort of representations).
Of our four theorists, neuroscientist Giulio Tononi’s view (2004, 2008; Balduzzi and Tononi 2009) enables the quickest argument to the consciousness of the United States. Tononi equates consciousness with “integrated information”. “Information”, in Tononi’s sense, is abundant in the universe – present everywhere or almost everywhere there is causation. And information is integrated, at least to a tiny degree, whenever there are contingent causal connections within a system with a bit of structure – a system that is not collapsed into maximum entropy. Since integrated information is pervasive, so also, Tononi says, is consciousness. He says that “even a binary photodiode is not completely unconscious, but rather enjoys exactly 1 bit of consciousness” (2008, p. 236; cf. Chalmers 1996 on thermostats). Likewise, Tononi attributes “qualia” (that is, consciousness) to simple logical AND and OR gates (Balduzzi and Tononi 2009). On Tononi’s view, what distinguishes human consciousness from photodiode consciousness, OR-gate consciousness, and speck-of-dust consciousness is its richness of detail: The brain is massively informationally complex and integrated, and thus enjoys consciousness orders of magnitude more complex than that of simple systems.
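Tononi's actual measure Φ is considerably more involved, but the photodiode figure tracks a simple information-theoretic point: a device with a two-state repertoire can carry at most one bit. A minimal illustration (this is plain Shannon entropy, not a Φ calculation):

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A binary photodiode merely discriminates light from dark. With equal
# priors over its two states it carries log2(2) = 1 bit -- the sense in
# which Tononi credits it with "exactly 1 bit".
photodiode = entropy_bits([0.5, 0.5])   # 1.0 bit
```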
Before we saddle Tononi straightaway with commitment to the consciousness of the United States, though, there is one issue to address: Despite the liberality of his view, Tononi does not regard every putative system as an “entity” that could be the locus of consciousness. If a putative system contains no causal, that is, informational, connections between its parts, then it is not an entity in the relevant sense; it is not, he says, a “complex”. Also, a putative system is not a conscious entity or complex if a larger, more informationally integrated system entirely subsumes it. For example, two disparate nodes do not constitute a conscious complex if a third node lies between them creating a more informationally integrated network. This restriction on the possible loci of consciousness is still extremely liberal by commonsense standards: Complexes can nest and overlap, for example, within the brain, where tightly integrated subsystems interact within larger less-integrated systems.
It seems straightforward that residents of the United States also form multiple overlapping, causally connected complexes. Despite Tononi’s general caveat about what can legitimately count as an entity or a complex, there seem to be no Tononian grounds for denying that the United States is such an entity or complex and thus a locus of consciousness. Its subsystems are informationally connected, and it doesn’t appear to be subsumed within any more tightly informationally integrated system. (I’m assuming the world community and the Earth as a whole are not more tightly informationally integrated than is the U.S., but it doesn’t matter for my ultimate argument if we relax this assumption and grant that on Tononi’s view it would be the world community or planet as a whole that is conscious, rather than the United States.) This conclusion seems especially evident given Tononi’s assertion that conscious complexes exist “at multiple spatial and temporal scales” “in most natural (and artificial) systems” (2004, p. 19). Choose the right temporal and spatial scale and Tononi’s view will deliver group consciousness.
The only question that would appear to remain is whether the United States is informationally integrated enough to have a rich stream of conscious experience, or whether its consciousness is substantially impoverished compared to that of a normal human being. This matter is somewhat difficult to assess, but given the massive informational transfer between people and the highly sensitive complex contingencies in human interaction, including in large-group interactions over longish time frames, I would think a plausible first guess from Tononi’s perspective should be that the United States (or world community), when assessed at the appropriate time scale, has at least as rich a stream of conscious experience as does a small mammal.
Update April 3:
In the comments section, Scott Bakker has kindly pointed me toward a new paper by Tononi. This paper seems to reflect a substantial change in Tononi's position with respect to the issues above. While I think the view above accurately captures Tononi's view through at least 2009, it will require substantial modification in light of his most recent remarks.
Update June 6:
See here for my reaction to Tononi's updated position.
Posted by Eric Schwitzgebel at 10:19 AM 30 comments
Labels: stream of experience
Friday, March 16, 2012
Final Call for Papers: Consciousness and Moral Cognition
Submissions for a special issue of the Review of Philosophy and Psychology on consciousness attribution in moral cognition are due at the end of this month. The list of invited authors includes: Kurt Gray (Maryland) and Chelsea Schein (Maryland), Anthony I. Jack (Case Western Reserve) and Philip Robbins (Missouri), Edouard Machery (Pittsburgh) and Justin Sytsma (East Tennessee State), and Liane Young (Boston College).
Submissions are due March 31, 2012.
The full CFP, including relevant dates and submission details, is available on RoPP's website.
Posted by Eric Schwitzgebel at 1:28 PM 0 comments
Labels: announcements
Ethicists No More Likely Than Non-Ethicists to Pay Their Registration Fees at APA Meetings
As some of you will know, I have an abiding interest in the moral behavior of ethics professors. I've collected a variety of evidence suggesting that ethics professors behave on average no morally better than do professors not specializing in ethics (e.g., here, here, here, here, and here). Here's another study.
Until recently, the American Philosophical Association had more or less an honor system for paying meeting registration fees. There was no serious enforcement mechanism for ensuring that people who attended the meeting -- even people appearing on the program as chairs, speakers, or commentators -- actually paid their registration fees. (Now, however, you can't get the full program with meeting room locations without having paid the fees.)
Registration fees are not exorbitant: Since at least the mid-2000s, pre-registration for APA members has been $50-$60. (Fees are somewhat higher for non-members and for on-site registration. For students, pre-registration is $10 and on-site registration is $15.) According to the APA, these fees don't fully cover the costs of hosting the meetings, with the difference subsidized from other sources of revenue. Barring exceptional circumstances, people attending the meeting plausibly have an obligation to pay their registration fees. This might be especially true for speakers and commentators, since the APA has given them a podium to promulgate their ideas.
From personal experience, I believe that almost everyone appearing on the APA program attends the meeting (maybe 95%). What I've done, then, is this: I have compared published lists of Pacific APA program participants from 2006-2008 with lists of people who paid their registration fees at those meetings -- data kindly provided by the APA with the permission of the Pacific Division. (The Pacific Division meeting is the best choice for several reasons, and both of the recent Secretary-Treasurers, Anita Silvers and Dom Lopes, have been generous in supporting my research.)
Let me emphasize one point before continuing: The data were provided to me with all names encrypted so that I could not determine the registration status of any particular individual. This was a condition of the Pacific Division's cooperation and of UC Riverside's review board approval. It is also very much my own preference. I am interested only in group trends.
To keep this post to manageable size, I've put further details about coding here.
Here, then, are my preliminary findings:
Overall, 76% of program participants paid their registration fees: 75% in 2006, 76% in 2007, and 77% in 2008. (The increasing trend is not statistically significant.)
74% of participants presenting ethics-related material (henceforth "ethicists": see the coding details) paid their registration fees, compared to 76% of non-ethicists, not a statistically significant difference (556/750 vs. 671/885, z = -0.8, p = .43, 95% CI for diff -6% to +3%).
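For readers who want to check the arithmetic, the reported statistics follow directly from the counts. Here is a sketch of the standard two-proportion z-test (pooled SE for the test statistic, unpooled SE for the confidence interval, as conventionally reported):

```python
import math

def two_prop_test(x1, n1, x2, n2):
    """Two-proportion z-test plus a 95% CI for the difference p1 - p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se_pooled = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se_pooled
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    se_diff = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    lo, hi = (p1 - p2) - 1.96 * se_diff, (p1 - p2) + 1.96 * se_diff
    return z, p, lo, hi

# Ethicists 556/750 vs. non-ethicists 671/885:
z, p, lo, hi = two_prop_test(556, 750, 671, 885)
# z ≈ -0.8, p ≈ .43, CI roughly -6% to +3%, matching the figures above
```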
Other predictors:
* People on the main program were more likely to have paid their fees than were people whose only participation was on the group program: 77% vs. 65% (p < .001).
* Gender did not appear to make a difference: 75% of men vs. 76% of women paid (p = .60).
* People whose primary participation was in a (generally submitted and blind refereed) colloquium session were more likely to have paid than people whose primary participation was in a (generally invited) non-colloquium session on the main program: 81% vs. 74% (p = .004).
* There was a trend, perhaps not statistically significant, for faculty at Leiter-ranked PhD-granting institutions to have been less likely to have paid registration fees than students at those same institutions: Leiter-ranked faculty 73% vs. people not at Leiter-ranked institutions (presumably mostly faculty) 75% vs. students at Leiter-ranked institutions 81% (chi-square p = .11; Leiter-ranked faculty vs. students, p = .03).
* There was a marginally significant trend for speakers and commentators to have been more likely to have paid their fees than people whose only role was chairing: 76% vs. 71% (p = .097).
Ethicists differed from non-ethicists in several dimensions.
* 33% of ethicists were women vs. 18% of non-ethicists (p < .001).
* 63% of participants whose only appearance was on the group program were ethicists vs. 42% of participants who appeared on the main program (p < .001).
* Looking only at the main program, 35% of participants whose highest level of participation was in a colloquium session were ethicists vs. 49% whose highest level of participation was in a non-colloquium session (p < .001). (I considered speaking as a higher level of participation than commenting and commenting as a higher level of participation than chairing.)
* Among faculty in Leiter-ranked departments, a smaller percentage were ethicists (38%) than among participants who were not Leiter-ranked faculty (49%, p < .001). (I've found similar results in another study too.)
I addressed these potential confounds in two ways.
First, I ran split analyses. For example, I looked only at main program participants to see if ethicists were more likely to have registered than were non-ethicists (they weren't: 77% vs. 77%, p = .90), and I did the same for participants who were only in group sessions (also no difference: 65% vs. 64%, p = .95). No split analysis revealed a significant difference between ethicists and non-ethicists.
Second, I ran logistic regressions, using the following dummy variables as predictors: ethicist, group program participant, colloquium participant, student at Leiter-ranked institution, chair. In one regression, those were the only predictors. In a second regression, each variable was crossed as an "interaction variable" with ethicist. No interaction variable was significant. In the non-interaction regression, colloquium role and main program participation were both positively predictive of having registered (p < .01) and participation only as chair was negatively predictive (p < .01). Being a student at a Leiter-ranked institution was not predictive (p = .18) and -- most importantly for my analysis -- being an ethicist was also not predictive (logistic beta = .04, p = .72), confirming the main result of the non-regression analysis.
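Since the actual dataset is confidential, here is a purely illustrative sketch of the kind of logistic regression described above, fit to simulated data with invented coefficients. Only the signs mirror the reported directions (ethicist effect near zero, group-program-only negative, colloquium positive); nothing here reproduces the real estimates.

```python
import math
import random

random.seed(42)

# Simulated stand-in for the confidential registration data.
# Columns: intercept, ethicist, group-program-only, colloquium dummies.
n = 600
X = [[1.0,
      float(random.random() < 0.45),   # ethicist
      float(random.random() < 0.20),   # group program only
      float(random.random() < 0.50)]   # colloquium participant
     for _ in range(n)]
true_beta = [1.0, 0.0, -1.0, 0.4]      # invented; ethicist effect set to 0

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# Simulate "paid registration" outcomes from the invented model.
y = [float(random.random() < sigmoid(sum(b * x for b, x in zip(true_beta, row))))
     for row in X]

# Fit by plain gradient ascent on the average log-likelihood.
beta = [0.0] * 4
for _ in range(800):
    grad = [0.0] * 4
    for row, yi in zip(X, y):
        err = yi - sigmoid(sum(b * x for b, x in zip(beta, row)))
        for j in range(4):
            grad[j] += err * row[j]
    beta = [b + 2.0 * g / n for b, g in zip(beta, grad)]
# beta now approximately recovers true_beta; with real data one would
# read off the ethicist coefficient and its p value.
```

In practice one would use a standard package rather than hand-rolled gradient ascent, but the sketch shows the structure: dichotomous registration outcome regressed on the dummy predictors, with the ethicist coefficient as the quantity of interest.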
[Thanks to the Pacific Division of the American Philosophical Association for providing access to their data, anonymously encoded, on my request. However, this research was neither solicited by nor conducted on behalf of the APA or the Pacific Division.]
Update March 17, for those concerned about privacy: See the comments section for a bit more detail on the methods used to ensure that no one outside the APA was able to determine any individual's registration status.
Posted by Eric Schwitzgebel at 11:04 AM 7 comments
Labels: ethics professors, sociology of philosophy
Thursday, March 15, 2012
Women's Roles in APA Meetings
I've been looking into data on whether ethicists are more or less likely than non-ethicists to pay their registration fees at meetings of the American Philosophical Association. As part of this project, I've coded program participation data from the Pacific APA from 2006-2008. Given the gender issues in philosophy, I thought readers might be interested to see the data broken down by gender.
Gender coding was based on first name only, excluding people with gender-ambiguous first names, first initials only, and foreign names if the gender was not obvious to the U.S. coders (altogether 10% of the program slots were excluded from gender coding for these reasons).
First: Women occupied 25% of the Pacific APA program slots each year. This rate was remarkably consistent, in fact: 25.3% in 2006, 24.8% in 2007, and 24.8% in 2008, with about 1000 gender-coded program slots each year. This 25% representation on the program is approximately in line with estimates of the percentage of U.S. philosophers who are women (compare, e.g., my 23% estimate across 5 U.S. states, Leiter's report of 21% from the National Center for Education Statistics, and the 2009 Survey of Earned Doctorates finding that women receive about 30% of U.S. philosophy PhDs).
One very consistent finding in my research is that female philosophers are more likely to be ethicists than non-ethicists. My Pacific APA data fit this pattern. I coded talks, by title, as "ethics", "non-ethics", or "excluded". "Ethics" was construed broadly to include political philosophy and philosophy of law. "Excluded" talks included talks on religion, philosophy of action, gender, race, and issues in the profession (such as technology or teaching) unless the title of those talks suggested an ethical focus. Philosophers chairing or commenting on sessions containing a mix of ethics and non-ethics talks were also excluded from this analysis. 33% of participant slots in ethics were occupied by women, compared to 18% in non-ethics (363/1085 vs. 232/1315, p < .001).
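As a sanity check, the ethics/gender association can be recomputed from the raw counts with a minimal Pearson chi-square on the 2x2 table (no continuity correction):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]];
    1 degree of freedom, no continuity correction."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Women in ethics slots vs. women in non-ethics slots:
# 363 of 1085 vs. 232 of 1315.
chi2 = chi2_2x2(363, 1085 - 363, 232, 1315 - 232)
# chi2 ≈ 80 on 1 df, far beyond the 10.83 cutoff for p = .001
```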
I also broke the data down by role in the program. Women were slightly more likely to be on the "group program" than the "main program": 28% vs. 24% (198/704 vs. 579/2408, p = .03). However, this effect appears to be driven by the fact that the group program had proportionately more ethics slots than did the main program (60% of group program participant slots were ethics vs. 41% of main program participant slots, 338/561 vs. 876/2126, p < .001). As noted above, women were more likely to occupy ethics slots. Regression analysis suggests that women were not more likely to be group program than main program participants when this other factor is taken into account.
Within the main program, I found no statistically detectable difference in women's likelihood of appearing in the (usually submitted and blind refereed) colloquium sessions as opposed to the (usually invited) non-colloquium sessions (23% vs. 25%, 245/1091 vs. 326/1318, p = .38) (see also Dom Lopes' analysis here and here). Nor did I find a difference in the likelihood of serving in the chair role as opposed to speaking or commenting (27% vs. 24%, 209/780 vs. 568/2332, p = .18).
Posted by Eric Schwitzgebel at 9:31 AM 7 comments
Labels: sociology of philosophy
Tuesday, March 13, 2012
Cohen, Dennett, and Humphrey
Readers might be interested to see Cohen and Dennett's reply to my February 28 post on their work on reportability and consciousness (which I have added as an update to that post) and/or my extended discussion with Nick Humphrey in the comments section of my March 8 post on why he should think that the United States is conscious.
Posted by Eric Schwitzgebel at 9:06 AM 0 comments
Labels: stream of experience
Thursday, March 08, 2012
Why Humphrey Should Think That the United States Is Conscious
In February, I argued that Daniel Dennett and Fred Dretske should, given their other views, hold that the United States is a spatially distributed group entity with a stream of experience of its own (a stream of experience over and above the experiences of the individual citizens and residents of the United States). Today I'm going to suggest the same about psychologist Nicholas Humphrey.
My general project is to argue that if materialism is true, the United States is probably conscious. I've advanced some general considerations in favor of that claim. But I also want to examine some particular materialist theories in more detail. I've chosen Dennett, Dretske, Humphrey, and (coming up) Giulio Tononi because their theories are prominent, aim to explain consciousness in any possible organism (not just human beings), and cover the metaphysics top to bottom.
Humphrey is a particularly interesting case because, awkwardly for my view, he explicitly denies that collective entities made of separate bounded individuals, such as swarms of bees, can have conscious experience (1992, p. 194). I will now argue that Humphrey, by the light of his own theory, should recant such remarks.
Humphrey argues that a creature has conscious experience when it has high-fidelity recurrent feedback loops in its sensory system (1992, 2011). A "sensory system", per Humphrey, is a system that represents what is going on inside the creature and directs behavior accordingly. No fancy minds are required for "sensation" in Humphrey's sense; such systems can be as simple as the reactivity of an amoeba to chemicals or light. (Humphrey also contrasts sensation with "perception", which provides information not about states of the body but rather about the outside world.) For consciousness, the only thing necessary besides a sensory system, on Humphrey's (1992) view, is that there be high-fidelity, momentarily self-sustaining feedback loops within that system -- loops between input and output, tuned and integrated across subjective time.
At first glance, you might think this theory would imply a superabundance of consciousness in the natural world, since sensory systems (by Humphrey's liberal definition) are cheap and feedback loops are cheap. But near the end of his 1992 book, Humphrey proves conservative. He rules out, for example, worms and fleas, saying that their sensory feedback loops "are too long and noisy to sustain reverberant activity" (1992, p. 195). Consciousness may even be limited to "higher vertebrates such as mammals and birds, although not necessarily all of these" (ibid.).
Humphrey argues against conscious experience in spatially discontinuous entities as follows: Collective entities, he says, don't have bodies, and thus they lack a boundary between the me and the not-me (1992, p. 193-194). Since sensation (unlike perception) is necessarily directed at one's own bodily states, collective organisms necessarily lack sensory systems. Thus, they're not even candidates for consciousness. The argument is quick. Its subpoints are undefended, and it occupies less than a paragraph.
One plausible candidate for the boundary of a collective organism is the boundary of the discontinuous region occupied by all its members' bodies. The individual bees are each part of the colony; the flowers and birds and enemy bees are not part of the colony. That could be the me and the not-me -- at least as much me/not-me as an amoeba has! The colony reacts to disturbances of this body so as to preserve as much of it as possible from threats, and it deploys parts of its body (individual bees) toward collective ends, for example via communicative bee dances in a tangled informational loop at the center of the colony.
Humphrey makes no mention of spatial contiguity in developing his account of sensation, representation, responsiveness, and feedback loops. Nor would a requirement of contiguity appear to be motivated within the general spirit of his account. A sensory signal travels inward along a nerve from the bodily surface to the central tangle. Perhaps a species could evolve that sends its nerve signals by lightwave instead, along hollow reflective capillaries, saving precious milliseconds of response time. Perhaps this adaptation then allows a further adaptation in which peripheral parts can temporarily detach from the central body while still sending their light signals to targeted receptors. You can see how this adaptation could be useful for ambushing prey or reaching into long, narrow spaces. Voila, discontinuous organisms! Nothing in Humphrey's account seems to motivate ruling such possibilities out. Humphrey should allow that beings with discontinuous bodies can, at least in principle, have spatially distributed sensory surfaces that communicate their disturbances to the center of the organism and whose behavior is in turn governed by signals outbound from the center. He should allow the possibility of sensation, body, and the me/not-me distinction in spatially distributed organisms. And then for consciousness there remains only the question of whether there are sustained, high-fidelity feedback loops within those sensory systems.
So much for in-principle possibility. How about actual consciousness in actually existing distributed organisms? Since Humphrey sets a high bar for "high fidelity", bee colonies still won't qualify as conscious organisms by his standards; their feedback loops won't be high fidelity enough. But how about the United States? I think it will qualify. It will be helpful, however, to consider a cleaner case first: an army division.
An army division has clear boundaries. There are people who are in it and there are people who are outside of it. There's the division on the one hand and the terrain on the other. The division will act to preserve its parts, for example under enemy attack. Disturbances on the periphery (e.g., on the retinas of scouts) will be communicated to the center, and commands from the center will govern behavior at the periphery. If we can set aside prejudice against discontiguous entities and our commonsensical distaste at conceiving of human beings as mere parts of a larger organism, it seems that an army division has a body by Humphrey's general standards.
Does the division also have a sensory system? Again, it seems it should, by Humphrey's standards: Conditions on the periphery are represented by the center, which then governs the behavior of the periphery in response. That's all it takes for the amoeba, and if Humphrey is to be consistent that's all it should take for the division.
Now finally for the condition that Humphrey uses to exclude earthworms and fleas: Does the army division have high-fidelity, temporally extended feedback loops from the sensory periphery to the center? (Alternatively it might have, as in the human case per Humphrey, a more truncated loop from output signals, which needn't actually make it to the periphery, back to the center.) It seems so, at least sometimes. The commander can watch in real time on high-fidelity video as her orders are carried out. She can even stream live satellite video back and forth with her scouts and platoon leaders. Video feeds from the scouts' positions can come high-fidelity in a sustained stream to her eyes. Auditory feeds can return to her ears -- including auditory feeds containing the sound of her own voice issuing commands. For a modern army, there's plenty of opportunity for sustained high-fidelity feedback loops between center and periphery. With good technology, the feedback can be much higher fidelity, higher bandwidth, and more sustained than in the proprioceptive feedback loops I get when I close my eyes and wiggle my finger.
(In his 2011 book, Humphrey gestures toward further complexities of information flow that feedback loops enable (p. 57-59). However, as he suggests, such emergent complexities arise quite naturally once feedback loops are sustained and high-quality, and the same will presumably be true for some such feedback loops in an army division. In any case, since such remarks are gestural rather than fully developed, I focus primarily on the account in Humphrey's 1992 book.)
If I can convince you that a Humphrey-like view implies that army divisions have conscious experience, that's enough for my overall purposes. But to bring it back specifically to the United States: The U.S. has a boundary of me/not-me and a spatially distributed body, in roughly the same way an army division does. In Washington, D.C., it has a center of control of its official actions, which governs behaviors like declaring war, raising tariffs, and sending explorers to the moon. Signals from the periphery (and from the interior too, as in the human case) provide information to the center, and signals from the center command the periphery. And with modern technology, the feedback loops can be high fidelity, high bandwidth, temporally sustained, and almost arbitrarily complex. Humphrey's criteria are all met. Humphrey should abandon his apparent bias against discontinuous organisms and accept that the United States is literally conscious.
Update, March 13:
Readers might be interested to see Nick Humphrey's reply in the comments section and the exchange between Nick and me that grows out of it.
Posted by Eric Schwitzgebel at 3:16 PM 15 comments
Labels: metaphysics, stream of experience