For a couple of years now, I have been arguing that if materialism is true, the United States probably has a stream of conscious experience over and above the conscious experiences of its citizens and residents. As it happens, very few materialist philosophers have taken the possibility seriously enough to discuss it in writing, so part of my strategy in approaching the question has been to email various prominent materialist philosophers to get a sense of whether they thought the U.S. might literally be phenomenally conscious, and if not, why not.
To my surprise, about half of my respondents said they did not rule out the possibility. Two of the more interesting objections came from Fred Dretske (my undergrad advisor, now deceased) and Dan Dennett. I detail their objections and my replies in the essay in draft linked above. Although I didn't target him because he is not a materialist, [update 3:33 pm: Dave points out that I actually did target him, though it wasn't in my main batch] David Chalmers also raised an objection about a year ago in a series of emails. The objection has been niggling at me ever since (Dave's objections often have that feature), and I now address it in my updated draft.
The objection is this: The United States might lack consciousness because the complex cognitive capacities of the United States (e.g., to war and spy on its neighbors, to consume and output goods, to monitor space for threatening asteroids, to assimilate new territories, to represent itself as being in a state of economic expansion, etc.) arise largely in virtue of the complex cognitive capacities of the people composing it and only to a small extent in virtue of the functional relationships between the people composing it. Chalmers has emphasized to me that he isn't committed to this view, but I find it worth considering nonetheless, and others have pressed similar concerns.
This objection is not the objection that no conscious being could have conscious subparts (which I discuss in Section 2 of the essay and also here); nor is it the objection that the United States is the wrong type of thing to have conscious states (which I address in Sections 1 and 4). Rather, it's that what's doing the cognitive-functional heavy lifting in guiding the behavior of the U.S. are processes within people rather than the group-level organization.
To see the pull of this objection, consider an extreme example -- a two-seater homunculus. A two-seater homunculus is a being who behaves outwardly like a single intelligent entity but who, instead of having a brain, has two small people inside who jointly control the being's behavior, communicating with each other through very fast linguistic exchange. Plausibly, such a being has two streams of conscious experience, one for each homunculus, but no additional group-level stream for the system as a whole (unless the conditions for group-level consciousness are weak indeed). Perhaps the United States is somewhat like a two-seater homunculus?
Chalmers's objection seems to depend on something like the following principle: The complex cognitive capacities of a conscious organism (or at least the capacities in virtue of which the organism is conscious) must arise largely in virtue of the functional relationships between the subsystems composing it rather than in virtue of the capacities of its subsystems. If such a principle is to defeat U.S. consciousness, it must be the case both that
(a.) the United States has no such complex capacities that arise largely in virtue of the functional relationships between people, and

(b.) no conscious organism could have the requisite sort of complex capacities largely in virtue of the capacities of its subsystems.

Contra (a): This claim is difficult to assess, but being a strong, empirical negative existential (the U.S. has not even one such capacity), it seems a risky bet unless we can find solid empirical grounds for it.
Contra (b): This claim is even bolder. Consider a rabbit's ability to swiftly visually detect a snake. This complex cognitive capacity, presumably an important contributor to rabbit visual consciousness, might exist largely in virtue of the functional organization of the rabbit's visual subsystems, with the results of that processing then communicated to the organism as a whole, precipitating further reactions. Indeed, turning (b) almost on its head, some models of human consciousness treat subsystem-driven processing as the normal case: The bulk of our cognitive work is done by subsystems, which cooperate by feeding their results into a global workspace or which compete for fame or control. So grant (a) for the sake of argument: The relevant cognitive work of the United States is done largely within individual subsystems (people or groups of people) who then communicate their results across the entity as a whole, competing for fame and control via complex patterns of looping feedback. At the very abstract level of description relevant to Chalmers's expressed (but let me re-emphasize, not definitively endorsed) objection, such an organization might not be so different from the actual organization of the human mind. And it is of course much bolder still to commit to the further view implied by (b), that no conscious system could possibly be organized in such a subsystem-driven way. It's hard to see what would justify such a claim.
The two-seater homunculus is strikingly different from a rabbit or human system (or even a Betelgeusian beehead) because the communication is only between two sub-entities, at a low information rate; but the U.S. is composed of about 300,000,000 sub-entities whose informational exchange is massive, so the case is not similar enough to justify transferring intuitions from the one to the other.