Thursday, February 09, 2012

Why Dennett Should Think That the United States Is Conscious

If you're reading this, you probably know who Dan Dennett is. His book Consciousness Explained is probably the best-known contemporary philosophical work on consciousness from a materialist perspective. Today I'll argue that Dennett should, by his own lights, accept the view that the United States is conscious -- that is, the view that the United States has a stream of subjective experience over and above the individual experiences of its residents and citizens, the view that there's "something it's like" to be the United States, as truly as there is something it's like to be you and me.

I offer these reflections not as an attempted refutation of Dennett's view. I think that the literal consciousness of the United States is a real possibility that merits serious consideration.

To start, Dennett is liberal about entityhood. In his essay "Real Patterns", Dennett accepts the real or real-enough existence of things like his "lost sock center", the center of the smallest sphere that can be inscribed around all the socks he has ever lost. Dennett also appears comfortable with spatially distributed thinking subjects whose parts are connected by radio signals (e.g., Ned Block's Chinese Nation, accepted as conscious in Consciousness Explained, and a fictionalized version of himself in his essay "Where Am I?"). Dennett should have no trouble granting that the United States is a real or real-enough entity that could at least potentially have mental states.

And Dennett is similarly liberal about the ascription of beliefs and desires. In Dennett's view, as long as an entity is usefully describable as possessing beliefs and desires and acting rationally upon them -- as long as belief-desire ascriptions capture some pattern in the entity's behavior that would be left out or much more difficult to see without the help of belief-desire ascriptions -- then that entity has beliefs and desires. Even chess-playing computers qualify. (See especially his book The Intentional Stance.) It seems clear that the United States is usefully describable as having beliefs and desires. The United States wants Iran to cease pursuing nuclear technology. The United States believes that stable democracies make better international partners than do Islamic theocracies.

Thus, it seems clear that Dennett should accept that the United States is a real entity with real beliefs and desires. Ascribing attitudes to this entity helpfully captures "real patterns" in world politics. This isn't an especially unusual view in contemporary philosophy. What's highly unusual is ascribing literal consciousness to group entities.

And about consciousness Dennett is conservative. He's hesitant to ascribe full-blown consciousness even to non-human mammals. He argues that consciousness is a vague-bordered concept from folk psychology that begins to break down when applied to their simpler minds (e.g., here).

Why does consciousness-talk break down, exactly, on Dennett's view? Dennett argues that consciousness requires being an agent with a point of view. That is, consciousness requires "private, perspectival, interior ways of being apprised of some limited aspects of the wider world and our bodies' relations to it"; and this point of view should be conceivably shapable into a narrative if the entity could talk (for the quote, see here, p. 173). That sounds pretty fancy! But it's clear that Dennett intends "private", "perspectival", and "point of view" minimalistically in remarks of this sort, rather than with the full trappings of robust humanocentrism. This minimalism is evident from his assertion that even cherry trees and microphones meet a first-pass application of these criteria. For Dennett, what distinguishes human beings, fully capable of conscious experience, from beings incapable of consciousness and beings in the gray zone, is that the kind of environmental responsiveness possessed by cherry trees and simple robots is directed simply outward, whereas our own responsiveness is also directed at our own states, in complex, recursive, self-regulating loops.

So the question, then, for the Dennettian view, is whether the United States is a gray-zone case in this respect, or whether it has a sufficiently sophisticated perspective on its own internal states. Is its play of representation and responsiveness massively simpler than that of a conscious human being, on the order of the organization of a non-human mammal? Or is its recursive, complex self-sensitivity roughly of human magnitude or more? The latter seems the more plausible. The United States, considered as a single, massive system, is vastly complex, arguably embracing within it much of the complexity of each human being who helps compose it. The complex and self-referential structures internal to citizens' minds, and between citizens in conversation, feed our choices of President, our decisions to go to war, our environmental policies. That the people of the United States don't agree but adopt multiple competing perspectives is no objection here, and in fact fits very nicely with Dennett's remarks about consciousness involving multiple drafts and "fame in the brain". Both in the U.S. and in the individual human mind, subsystems with inconsistent perspectives compete for control of behavior and meaning-making.

Dennett emphasizes the importance of language and narratives in generating the complexities necessary for consciousness. And the United States exudes language. It apologizes to the descendants of slaves. It announces official foreign policies. It promises to reduce global warming. This isn't the same thing as any single individual apologizing, announcing, or promising, or any several individuals doing so in unison. It's one thing for Obama to apologize and quite another for the U.S. to apologize, even if the U.S. does so using Obama's mouth.

The U.S. even seems to generate "heterophenomenological" self-reports about the processes by which it reaches its beliefs and decides among its actions. We can ask the Census Bureau about its methods for counting residents. We can ask the National Archives and Records Administration how the President is chosen. Congressional leaders sometimes (dubiously) describe the methods by which the governing body has reached its decisions. Dennett seems happy to count even simple computer outputs as "heterophenomenological reports", if they present themselves as descriptions of the computer's internal workings (e.g., "Shakey" in Consciousness Explained, "Cog" in "The Case for Rorts" -- though actually existing robot systems may still be too simple to be literally conscious on Dennett's view). I see no Dennettian grounds for excluding bureaucratic self-descriptions as heterophenomenological reports, by the United States, of its internal workings. Indeed, few if any of us achieve bureaucratic heights of linguistic self-report.

Since Dennett has a generally pragmatic attitude about mental-state ascriptions, one potential Dennettian hesitation is that ascribers might too swiftly leap from what they think they know about human consciousness to conclusions about the conscious experiences of the United States. If you're upset about Iran's pursuit of nuclear technology, there might be some physiological changes in your stomach that contribute to your experience. If the U.S. is upset about Iran's pursuit of nuclear technology, its internal changes will be structurally rather different. But the correct advice here should be pragmatic caution in our heterophenomenological explorations. The same caution would be required if we met cognitively sophisticated aliens very structurally different from us. The same caution would be required in evaluating robots capable of outputs that are linguistically interpretable as sophisticated self-reports of their internal workings -- robots that Dennett argues should be viewed as conscious (e.g., in "The Case for Rorts").

I conclude that philosophers of a broadly Dennettian bent should accept that the United States has a stream of experience in the same sense that you and I do. That we tend to think otherwise, they should say, is merely evolutionarily and developmentally comprehensible, but theoretically ungrounded, morphological prejudice against discontiguous entities.

[Revised Feb. 10.]

19 comments:

Baron P said...
This comment has been removed by the author.
Carl said...

Hmm, it's definitely a fair question to ask.

One terminological note: I don't like how "consciousness" is used to mean "reflexively self-aware consciousness." In Latin, conscious is just "with" + "knowing," so it makes more sense to me for "consciousness" to be our term for bare receptivity, possession of intentional states, modeling of the outside world, etc.

Human consciousness is (apparently) unique because it goes beyond modeling the world to also model itself, and through language we can share our internal models of self with each other. (Other animals can share their models of the world—"Hey, there's a hawk! Everybody run!"—but apparently they can't communicate about their internal state in a very sophisticated way.)

I think the case for the US as having an internal state that models the world is pretty strong, but I'm not sure it has much in the way of reflexivity, your examples aside. But maybe I'm being too curmudgeonly about how "weird" of a thing it is.

An interesting Bayesian question is why, if entities larger than us like nations or smaller than us like cells are also conscious, we are the consciousnesses of human-sized objects rather than the consciousnesses of something larger or smaller. Can we infer from the fact of our being human any evidence for the proposition that this is the only kind of consciousness to be? Probably not, but it seems difficult to argue against.

Anonymous said...

Unlike yourself, and provided Dennett is committed to this claim, I'd take that fact to be a nearly decisive refutation of his view. Call me conservative, but frankly, it's radically implausible that the United States of America is conscious in the same way I'm conscious. (And, of course, I'm not opposed to folks attempting to defend radically implausible positions.)

I'm a bit confused by at least one thing you've said, however. You claim that Dennett is committed to the claim that there is "something it is like" to be the United States of America. You seem to think that if the United States has "a stream of conscious experience" then there is "something it is like to be the United States". And so you appear to attribute qualia to the things that have streams of conscious experience. But Dennett doesn't seem to think there are qualia, or that there is "something it's like" to be something (in any standard sense of the phrase "something it's like"). How were you using the locution "something it's like"?

Eric Schwitzgebel said...

Thanks for the comments, folks!

@ Carl: Will chess-playing computers have genuine consciousness on the view you propose? Those of a "curmudgeonly" bent might find that objectionably weird, too! On why we have middle-sized consciousness: Possibly, that's just where we find ourselves and nothing particularly probabilistically surprising -- just as it's not really special that we find ourselves on Earth or in a universe capable of supporting life. (Yes, hints of the anthropic principle here.)

@ Anon: Although Dennett rejects "qualia", I interpret that rejection as contingent upon qualia being things that are supposed to fill a role that Dennett thinks can't be filled -- things irreducible to (for example) "mechanically accomplished dispositions to react". But about consciousness, I interpret Dennett as a realist, or at least a "real patterns"-ist (though not an "industrial-grade" realist). On "what it's like" in particular, he seems okay with that language in 1991, ch. 14.

Anonymous said...

Ascribing consciousness to a chess-playing computer seems pretty hard to defend, at least if one is referring to contemporary chess computers, and not imaginary future ones.

Today's computers are not creative. They have relative values entered for the pieces (i.e., a knight is worth slightly less than a bishop in most endgame scenarios) and are preloaded with massive endgame tablebases, such that the computer plays purely mechanically once you get down to few enough pieces. In the middle game it's just a matter of running preloaded algorithms. If a chess-playing computer is conscious then so, I think, is a TI-83.
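
To make the "purely mechanical" point concrete, here is a minimal Python sketch of the kind of evaluation at issue. (The piece codes and helper functions are hypothetical illustrations, not any real engine's interface.) Real programs add deep search and tablebases, but the underlying operations are still table lookup and arithmetic:

# A toy material-count evaluator: fixed piece values, no creativity.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3.25, "R": 5, "Q": 9}  # rough conventional values

def material_score(position):
    """position: dict mapping squares to piece codes like 'wN' or 'bQ' (assumed format)."""
    score = 0.0
    for piece in position.values():
        value = PIECE_VALUES.get(piece[1], 0)
        score += value if piece[0] == "w" else -value
    return score

def pick_move(position, legal_moves, apply_move):
    # Choose whichever legal move leaves White with the best material balance:
    # a one-ply lookahead, i.e. table lookup plus arithmetic, nothing more.
    return max(legal_moves, key=lambda move: material_score(apply_move(position, move)))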

Eric Schwitzgebel said...

Near the heart of Dennett's view (and the view of many materialists) is the idea that we all run "purely mechanically" -- and yet somehow we're conscious! Options: (1.) reject materialism, (2.) accept TI-83 consciousness, (3.) adopt a non-standard materialist view, (4.) say that consciousness requires further specific types of mechanisms that TI-83s lack, such as sophisticated self-regulation algorithms. Dennett takes the 4th tack.

Anonymous said...

I think the retinoid model of consciousness falls in your 4th category. See here:

http://evans-experientialism.freewebspace.com/trehub01.htm

Carl said...

"Will chess-playing computers have genuine consciousness on the view you propose? Those of a "curmudgeonly" bent might find that objectionably weird, too!"

It's definitely a weird view, but I'm a pan-experientialist, so I'm not too worried about it. :-) If an electron experiences a proton, why can't chess computers experience things? That said, I wouldn't use the terminology that a chess computer has "genuine" consciousness, because "genuine" seems to mean "human-like." On the other hand, is a chess computer or a TI-83 as conscious as a sunflower? Sure, why not. At the least, they're in similar ballparks. Of course, all of these things are leagues away from even primitive animal consciousness, to say nothing of human-style reflexively self-aware consciousness, which seems to be another ball game altogether. But my view is that more or less everything that's a natural object is minimally conscious.

Unknown said...

A few things:

(1) I heartily thank you for posting this kind of stuff in an easy-to-access form, free for everyone to read. I wish more philosophers published at least some of their thoughts on blogs.

(2) It would seem that the reflexivity and looping of the United States' experiences is hosted, in part, by inanimate objects. For instance, keyboards, cameras, processor chips, cables, and monitors do much of the work of social networking, which surely provides a venue for the reflexive behavior you mention. Things like TV shows, radios, newspapers, cell phones, automobiles, and roads (as well as their component parts and mediums, etc.) seem to play a non-negligible role in hosting social reflexivity and looping. I would imagine that dead people also play a role in hosting what you have called the consciousness of the United States. So when you say the United States is conscious you mean more than just [a conglomeration of people in the United States]. You mean that the functional sum of people, technology, inanimate objects, fictional people, and in some cases plain old matter hosts this consciousness. That this sum could be a host of what most people understand by the word consciousness is a claim that needs much more support. I wonder how you would support the claim that these inanimate objects play an essential part in hosting consciousness. Would you take the metaphorical road or not?

After all, if all this inanimate stuff between conscious persons fails to be a feasible host of consciousness in the way it would need to be, then all we have is a complex network of conscious persons that—given their ability to interact using non-conscious tools—appears to be instantiating group consciousness. Or perhaps worse: if the tools that allow conscious people to network are actually the stuff of consciousness-hosting, then I wonder if we have taken a non-negligible step towards panpsychism.

(3) It seems we have not surmounted Dennett's criterion of subjective perspective. The US might have more of a subjective perspective than, say, the entire world, but that does not necessarily fit Dennett's idea of subjective perspective (...but that could be my mere subjective perspective).

Nick
www.critiquemythinking.com

Michael Drake said...

But the U.S. doesn't "exude language" qua internal, first-person narrative. (Not to mention first-person singular narrative! Instead, and quite conspicuously, the subject of its statements is always cast as either "We" or "the United States.") Rather, its consensus statements are frank fictions understood as reflecting an agreement among some subset of the conscious agents the U.S. comprises.

Aside from which (though still relatedly), the U.S.'s linguistic products are the result of processes that have no real analogue in the brain, viz., in that they are created, revised, and recited or recorded by individual agents, all of whom consciously attend to both syntactic and semantic features of those products.

I don't think this shows that the U.S. can't have a stream of consciousness in some sense. But I think it does show (from a broadly Dennettian perspective) that if it does have such a stream of consciousness, it's not likely to be "in the same sense that you and I do."

Zach said...

Professor S, I consider myself a philosopher "of a broadly Dennettian bent," and I generally like and agree with your ideas as well. I'd love to be the firebrand that bites the bullet and defends this craziness, but I think you stretch Dennett too far.

I agree that Dennett would/should accept that the United States is a real entity with real beliefs/desires. I also agree that there is support for this attribution of consciousness in much of Dennett's writing (especially Sweet Dreams). But I don't think Dennett would attribute consciousness to the USA.

Consider the USA as a heterophenomenological subject: How well can we explain the USA's behavior without positing that the USA enjoys "a stream of subjective experience over and above the individual experiences of its residents"? Simply, I think we can do it quite well. Take the USA's bureaucratic "self-reports." In my view, they are satisfactorily explained solely by appealing to the conscious census workers that carried out the bureaucracy; there is no need to suppose that the USA is conscious in addition.

Now consider Block's Chinese Nation. Suppose that the simulated brain and a person Sarah discuss politics:

SARAH: Who was the best president?
NATION: Mmm, I'd have to say Lincoln: He was the only president to balance the importance of morality with respect for the individual.

(Apologies to any history buffs out there; I'm sure this is awful.)

We could try to explain NATION's response simply by appealing to the behavior of the conscious Chinese citizens carrying out the simulation. But this is no more satisfying than explaining my intelligent behavior by saying "My behavior is the result of neurons."

Importantly, it might be illustrative to view the USA as an intentional agent (e.g. "the USA 'judged' that Iran was a threat and 'decided' to protect itself"). But this is different: Dennett is willing to apply the intentional stance to non-human animals but is skeptical that they are conscious. So too he might be willing to apply it to the USA without granting that it is conscious.

Thanks for the thought-provoking entry, as usual.

readingandphilosophy said...

As always a good read Eric.

Eric Schwitzgebel said...

Thanks for the continuing comments, folks!

@ Anon Feb 10: Yes, that seems right.

@ Carl: I see! If you're a pan-experientialist and are willing to treat the U.S. as an object with experience, good enough for me! I agree that it is structurally very different from a human being and so presumably experientially very different.

@ Nick: Thanks for the kind words. I think there are two ways of conceptualizing the U.S. for these purposes: (a.) as a collectivity of people, in which case that collectivity implements its cognition with the help of a multitude of external devices, much as we might cognize in constant interaction with external objects like our iPhones and the things we're looking at; or (b.) as you might prefer, as a collectivity that includes lots of non-human objects among its parts. As long as one or the other conception (not necessarily both) generates a stream of experience, my thesis is supported. On (3), I'm not sure why not. Tell me more if you have further thoughts.

Eric Schwitzgebel said...

@ Zach: Well, I haven't convinced Dennett yet either, so you're in good company! However, I haven't yet heard from him what I would regard as a good argument against the conclusion. (No doubt, he regards me as dense.)

What you say about the comparison to non-human animals fits with what Dan has tentatively suggested to me in a couple of casual emails. What you add that is different from what Dan has expressed to me is the thought that for the USA, unlike the human or the Chinese nation, you can explain the bureaucratic self-reports well enough without appeal to consciousness. That's an interesting suggestion, giving "consciousness"-attribution a similar role to belief ascription on Dennett's view: It's real or real enough if it can pick up patterns left out by stingier attributions.

I have two reasons to be hesitant about applying that move to save Dennett from my conclusion. (A.) That's not the kind of consideration he tends to invoke in denying consciousness to animals, at least in what is fresh in my mind now. Instead the kind of consideration he invokes is the lack of sophisticated, self-reflective, narrative-giving capacities. And the U.S. *does* seem to have those (on the assumption that it is a real-enough entity with real-enough beliefs and desires). (B.) Even accepting the real-patternish criteria you mention, I think the USA still would qualify. Although in principle one could explain the heterophenomenological reports of the bureaucracy in terms of the behavior of individuals, that seems to miss the larger pattern behind (say) the broad consistency of the Census Bureau's standards over time despite the ever-changing individuals who compose it. Similarly we *could* explain the chess-playing computer's moves by appeal to low-level programming features, but that misses out on the real pattern elegantly captured by, for example, saying that it wants to protect its queen. If Dennett thinks that attitude attribution is justified in the chess-computer case, and if he adopts a similar real-patternish approach to heterophenomenological reports, then he ought to accept the consciousness of either the Census Bureau, the U.S. as a whole, or some other currently existing large corporate entity of that sort.

Zach said...

If Dennett thinks you're dense, then he and I have at least one major disagreement.

On your point (A): I just scoured my copies of Consciousness Explained, Sweet Dreams, and Kinds of Minds, as well as a few of his essays, for anything that supports the consciousness-attribution criterion I credited to Dennett. I have to admit, I didn't find much.

In formulating that criterion, I think I was thinking of Dennett's discussion of 'competence without comprehension.' The Sphex wasp and the fledgling cuckoo demonstrate remarkably 'intelligent' behaviors; Dennett observes that we can explain their competences without supposing that these creatures comprehend their competences.

That aside, I'll concede point (A): my position probably is a departure from Dennett's stated views. But I'll stake it out on my own and push back against (B)!

I understand and agree with Dennett's contention that appealing to lower-level features can be insufficient to adequately explain larger patterns (like the chess-playing computer's tendency to protect its queen). In certain cases, however, lower-level features are sufficient to explain a larger pattern. Suppose you and I were trotting about in this costume. Suppose that the horse is seen running away from the source of a loud noise. One should explain this behavior simply by appealing to our shared fear of loud things we can't see. There's no need to suppose that the horse was scared, too.

You offered "the broad consistency of the Census Bureau's standards over time" as an example of a larger pattern that cannot be adequately explained by appealing to lower-level features. I think such consistency (if it exists) could certainly be explained by an appeal to such lower-level features: Perhaps the executive director of the bureau crafted an explicit set of standards during the first Census and the bureau has adhered to it ever since; perhaps there has always been overwhelming pressure from the Department of Commerce not to stray from the successful censuses of previous years; perhaps it is widely believed that the current standards are nearly perfect to bring about the desired results. I might be confused, but to me, this particular pattern seems more similar to the scared horse than to the chess-playing CPU.

Here is an example of the sort of large pattern that would not be explicable in terms of lower-level features: Stringing together chronologically the first letter of the last name of every U.S. president produces "iamconsciousiamconsciousiamconsciousiamconsc..."

Zach said...

Above, I meant to link to this horse costume.

Khannea said...

That may be true, but if the US is conscious, it might also be stone smashed drunk.

Eric Schwitzgebel said...

Zach: Thanks for the very thoughtful reply, and sorry for the slow turnaround on my part!

I appreciate your having looked through the Dennett for support for your interpretation. I don't want to get the Dennett wrong.

On your defense of (B): I suppose it's a hard call. I'm pretty ready to think of complex interactions leading to emergent patterns (in some modest sense of "emergent") that aren't easily visible or calculable from looking at the lower-level stuff, so I would (armchairishly) find it surprising if that weren't the case for the goal-directed behavior of the USA. Maybe the Census standards example is a poor choice, though, for the reasons you say. I'll have to think about it more. It does seem that some of the most obvious types of group-level behavior (electing a president, declaring war, the army's moving about) arise from fairly centralized decisions of individuals or from straightforward applications of an algorithm to individuals' inputs. This would fit well with your view. The dynamics of an army in retreat or the settling of the West may be more emergent-dynamics-ish, but I'd want more examples of that sort, and maybe more compelling ones. Hm! Thanks!

Bence said...

Cool stuff. But couldn't Dennett just deny that the US is an intentional system? In that case, all the considerations about perspective etc. would be irrelevant.

And why should we think that attributing beliefs and desires to the US as an entity would increase predictive and explanatory power? What you say is:

"It seems clear that the United States is usefully describable as having beliefs and desires. The United States wants Iran to cease pursuing nuclear technology. The United States believes that stable democracies make better international partners than do Islamic theocracies."

But is it useful to describe the US, rather than some (not all) US citizens, as having these beliefs and desires? I don't think so. It is definitely not MORE useful, and, arguably, Dennett's necessary condition for being an intentional system is that describing it as having beliefs and desires has some extra predictive/explanatory value over the design stance. I don't see why there would be such extra explanatory/predictive value in the US case.

Anyway, this would be an easy way to block your argument. What do you think?