One of the most prominent theories of consciousness is Giulio Tononi's Integrated Information Theory. The theory is elegant and interesting, if a bit strange. Strangeness is not necessarily a defeater if, as I argue, something strange must be true about consciousness. One of its stranger features is what Tononi calls the Exclusion Postulate. The Exclusion Postulate appears to render the presence or absence of consciousness almost irrelevant to a system's behavior.
Here's one statement of the Exclusion Postulate:
The conceptual structure specified by the system must be singular: the one that is maximally irreducible (Φ max). That is, there can be no superposition of conceptual structures over elements and spatio-temporal grain. The system of mechanisms that generates a maximally irreducible conceptual structure is called a complex... complexes cannot overlap (Tononi & Koch 2014, p. 5).

The basic idea here is that conscious systems cannot nest or overlap. Whenever two information-integrating systems share any parts, consciousness attaches to the one that is the most informationally integrated, and the other system is not conscious -- and this applies regardless of spatio-temporal grain.
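The winner-take-all logic of the Exclusion Postulate can be sketched as a toy computation. To be clear, the Φ values and element sets below are stipulated inputs, not anything computed from IIT's actual formalism; the sketch only illustrates the exclusion rule itself:

```python
# Toy sketch of the Exclusion Postulate's winner-take-all rule.
# Phi values are stipulated, NOT computed by IIT's real formalism.

def conscious_complexes(systems):
    """systems: dict mapping name -> (phi, frozenset of elements).
    Returns the names that count as conscious complexes: a system is
    excluded if it shares any elements with a higher-phi system."""
    winners = []
    for name, (phi, parts) in systems.items():
        excluded = any(
            other != name and other_phi > phi and parts & other_parts
            for other, (other_phi, other_parts) in systems.items()
        )
        if not excluded:
            winners.append(name)
    return winners

# Two people, plus the social system that contains both of them.
systems = {
    "alice":   (10.0, frozenset({"a1", "a2"})),
    "bob":     ( 9.0, frozenset({"b1", "b2"})),
    "society": ( 8.0, frozenset({"a1", "a2", "b1", "b2"})),
}
print(conscious_complexes(systems))  # both people win; society is excluded
```

Note how the rule is dichotomous: raising the stipulated society-level Φ above 10.0 would exclude both people at once, which is the feature the rest of this post presses on.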
The principle is appealing in a certain way. There seem to be lots of information-integrating subsystems in the human brain; if we deny exclusion, we face the possibility that the human mind contains many different nesting and overlapping conscious streams. (And we can tell by introspection that this is not so -- or can we?) Also, groups of people integrate information in social networks, and it seems bizarre to suppose that groups of people might have conscious experience over and above the individual conscious experiences of the members of the groups (though see my recent work on the possibility that the United States is conscious). So the Exclusion Postulate allows Integrated Information Theory to dodge what might otherwise be some strange-seeming implications. But I'd suggest that there is a major price to pay: the near epiphenomenality of consciousness.
Consider an electoral system that works like this: On Day 0, ten million people vote yes/no on 20 different ballot measures. On Day 1, each of those ten million people gets the breakdown of exactly how many people voted yes on each measure. If we want to keep the system running, we can have a new election every day, and individual voters can be influenced in their Day N+1 votes by the Day N results (via their own internal information-integrating systems, which are subparts of the larger social system). Surely this is society-level information integration if anything is. Now, according to the Exclusion Postulate, whether the individual people are conscious or instead the societal system is conscious will depend on how much information is integrated at the person level vs. the societal level. Since "greater than" is sharply dichotomous, there must be an exact point at which societal-level information integration exceeds person-level information integration. (Tononi and Koch appear to accept a version of this idea in their 2014 paper, endnote xii [draft of 26 May 2014].) As soon as this crucial point is reached, all the individual people in the system will suddenly lose consciousness. However, there is no reason to think that this sudden loss of consciousness would have any appreciable effect on their behavior. All their interior networks and local outputs might continue to operate in virtually the same way, locally inputting and outputting very much as before. The only difference might be that individual people hear back about X+1 votes on the Y ballot measures instead of X votes. (X and Y here can be arbitrarily large, to ensure sufficient informational flow between individuals and the system as a whole. We can also allow individuals to share opinions via widely read social networks, if that increases information integration.)
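The daily feedback loop of this electoral system can be sketched as a minimal simulation. The population size, number of days, and the update rule (a slight nudge toward the previous day's majority) are all placeholders of my own, standing in for whatever internal integration real voters would do:

```python
import random

random.seed(0)
N_VOTERS, N_MEASURES = 1000, 20  # scaled down from ten million voters

# Each voter's yes-probability on each measure; Day 0 leanings are arbitrary.
lean = [[random.random() for _ in range(N_MEASURES)] for _ in range(N_VOTERS)]

def run_day(lean):
    """Everyone votes yes/no on every measure; return the yes-tally per measure."""
    return [sum(random.random() < lean[v][m] for v in range(N_VOTERS))
            for m in range(N_MEASURES)]

for day in range(3):
    tallies = run_day(lean)
    # Day N results feed Day N+1 votes: each voter drifts slightly toward
    # the reported majority (a stand-in for their internal integration).
    for v in range(N_VOTERS):
        for m in range(N_MEASURES):
            majority = tallies[m] / N_VOTERS
            lean[v][m] += 0.1 * (majority - lean[v][m])
```

The point of the sketch is just that the only society-to-person channel is the tally each voter hears back; a one-vote change in a tally is the entire behavioral difference the threshold crossing could make.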
Tononi offers no reason to think that a small threshold-crossing increase in the amount of integrated information (Φ) at the societal level would profoundly influence the lower-level behavior of individuals. Φ is just a summary number that falls out mathematically from the behavioral interactions of the individual nodes in the network; it is not some additional thing with direct causal power to affect the behavior of those nodes.
I can make the point more vivid. Suppose that the highest-level Φ in the system belongs to Jamie. Jamie has a Φ of X. The societal system as a whole has a Φ of X-1. The highest-Φ individual person other than Jamie has a Φ of X-2. Because Jamie's Φ is higher than the societal system's, the societal system is not a conscious complex. Because the societal system is not a conscious complex, all those other individual people with Φ of X-2 or less can be conscious without violating the Exclusion Postulate. But Tononi holds that a person's Φ can vary over the course of the day -- declining in sleep, for example. So suppose Jamie goes to sleep. Now the societal system has the highest Φ and no individual human being in the system is conscious. Now Jamie wakes and suddenly everyone is conscious again! This might happen even if most or all of the people in the society have no knowledge of whether Jamie is asleep or awake and exhibit no changes in their behavior, including in their self-reports of consciousness.
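The Jamie scenario's flip can be made concrete with stipulated numbers. Again, nothing here is IIT's real Φ calculation; the function just encodes the winner-take-all comparison, on the assumption (from the scenario) that every individual overlaps the societal system:

```python
def conscious_people(jamie_phi, society_phi, others_phi):
    """Winner-take-all over overlapping systems, per the Exclusion Postulate.
    Everyone is a subpart of the societal system, so individuals are
    conscious exactly when some individual's phi beats the society's."""
    top_individual = max(jamie_phi, others_phi)
    if top_individual > society_phi:
        return "individuals conscious, society not"
    return "society conscious, no individual is"

X = 10.0
print(conscious_people(jamie_phi=X, society_phi=X - 1, others_phi=X - 2))
# Jamie awake: Jamie's phi beats the society's, so the individuals win.
print(conscious_people(jamie_phi=X - 3, society_phi=X - 1, others_phi=X - 2))
# Jamie asleep (phi drops): the society wins and everyone "loses"
# consciousness, though no one's behavior need change at all.
```

Nothing in the second call changes any input describing the other individuals, yet the attribution of consciousness to all of them flips.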
More abstractly, if you are familiar with Tononi's node-network pictures, imagine two very similar largish systems, both containing a largish subsystem. In one of the two systems, the Φ of the whole system is slightly less than that of the subsystem. In the other, the Φ of the whole system is slightly more. The node-by-node input-output functioning of the subsystem might be virtually identical in the two cases, but in the first case, it would have consciousness -- maybe even a huge amount of consciousness if it's large and well-integrated enough! -- and in the other case it would have none at all. So its consciousness or lack thereof would be virtually irrelevant to its functioning.
It doesn't seem to me that this is a result that Tononi would or should want. If Tononi wants consciousness to matter, given the Exclusion Postulate, he needs to show why slight changes of Φ at the higher level, up or down, would reliably cause major changes in the behavior of the subsystems whenever the Φ max threshold is crossed. There seems to be no mechanism that ensures this.