Wednesday, July 16, 2014

Tononi's Exclusion Postulate Would Make Consciousness (Nearly) Irrelevant

One of the most prominent theories of consciousness is Giulio Tononi's Integrated Information Theory. The theory is elegant and interesting, if a bit strange. Strangeness is not necessarily a defeater if, as I argue, something strange must be true about consciousness. One of the theory's stranger features is what Tononi calls the Exclusion Postulate, which appears to render the presence or absence of consciousness almost irrelevant to a system's behavior.

Here's one statement of the Exclusion Postulate:

The conceptual structure specified by the system must be singular: the one that is maximally irreducible (Φ max). That is, there can be no superposition of conceptual structures over elements and spatio-temporal grain. The system of mechanisms that generates a maximally irreducible conceptual structure is called a complex... complexes cannot overlap (Tononi & Koch 2014, p. 5).
The basic idea here is that conscious systems cannot nest or overlap. Whenever two information-integrating systems share any parts, consciousness attaches to the one that is the most informationally integrated, and the other system is not conscious -- and this applies regardless of temporal grain.
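To see the shape of the rule, here is a minimal sketch in Python -- my own toy code, not Tononi's algorithm. Real IIT computes Φ from a system's cause-effect structure; here the Φ values are simply stipulated inputs. Among candidate systems that share parts, only the candidate with maximal Φ counts as a conscious complex:

    def conscious_complexes(candidates):
        # candidates: a list of (name, set_of_elements, phi) triples,
        # with phi stipulated rather than computed.
        ranked = sorted(candidates, key=lambda c: c[2], reverse=True)
        complexes, claimed = [], set()
        for name, elements, phi in ranked:
            if elements & claimed:
                continue  # shares parts with a higher-phi complex: excluded
            complexes.append(name)
            claimed |= elements
        return complexes

    # Toy example: a "society" whose parts are the five people in it.
    people = [("person_%d" % i, {i}, 10.0) for i in range(5)]
    print(conscious_complexes(people + [("society", set(range(5)), 9.5)]))
    # -> all five people; the lower-phi society is excluded
    print(conscious_complexes(people + [("society", set(range(5)), 10.5)]))
    # -> ['society']; now every individual person is excluded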

The principle is appealing in a certain way. There seem to be lots of information-integrating subsystems in the human brain; if we deny exclusion, we face the possibility that the human mind contains many different nesting and overlapping conscious streams. (And we can tell by introspection that this is not so -- or can we?) Also, groups of people integrate information in social networks, and it seems bizarre to suppose that groups of people might have conscious experience over and above the individual conscious experiences of the members of the groups (though see my recent work on the possibility that the United States is conscious). So the Exclusion Postulate allows Integrated Information Theory to dodge what might otherwise be some strange-seeming implications. But I'd suggest that there is a major price to pay: the near epiphenomenality of consciousness.

Consider an electoral system that works like this: On Day 0, ten million people vote yes/no on 20 different ballot measures. On Day 1, each of those ten million people learns exactly how many people voted yes on each measure. If we want to keep the system running, we can hold a new election every day, with individual voters influenced in their Day N+1 votes by the Day N results (via their own internal information-integrating systems, which are subparts of the larger social system). Surely this is society-level information integration if anything is.

Now according to the Exclusion Postulate, whether the individual people are conscious or instead the societal system is conscious depends on how much information is integrated at the person level vs. the societal level. Since "greater than" is sharply dichotomous, there must be an exact point at which societal-level information integration exceeds person-level information integration. (Tononi and Koch appear to accept a version of this idea: see their 2014, endnote xii [draft of 26 May 2014].) As soon as that crucial point is reached, all the individual people in the system suddenly lose consciousness. However, there is no reason to think that this sudden loss of consciousness would have any appreciable effect on their behavior. All their interior networks and local outputs might continue to operate virtually as before. The only difference might be that individual people hear back about X+1 votes on the Y ballot measures instead of X votes. (X and Y here can be arbitrarily large, to ensure sufficient informational flow between individuals and the system as a whole. We can also allow individuals to share opinions via widely-read social networks, if that increases information integration.) Tononi offers no reason to think that a small threshold-crossing increase in the amount of integrated information (Φ) at the societal level would profoundly influence the lower-level behavior of individuals. Φ is just a summary number that falls out mathematically from the behavioral interactions of the individual nodes in the network; it is not some additional thing with direct causal power to affect the behavior of those nodes.
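The structural point can be put in code. Below is a toy version of the daily election (scaled down to a thousand voters, and with a made-up stand-in for Φ and an arbitrary threshold -- my own illustrative sketch, not Tononi's measure). Each voter's next vote depends only on the previous tallies; the "phi" number and the consciousness label are computed after the fact from those same tallies and never feed back into the update rule:

    import random

    def step(votes, tallies):
        # Each voter's Day N+1 vote on each measure depends only on the
        # publicly announced Day N tallies (a crude conformity rule).
        n = len(votes)
        return [[random.random() < t / n for t in tallies] for _ in votes]

    def toy_phi(tallies, n_voters):
        # A made-up stand-in for phi, NOT Tononi's measure: just a summary
        # statistic computed from the same tallies the voters see. Like the
        # real phi, it is read off the system's behavior; it exerts no
        # separate causal push of its own on the nodes.
        return sum(min(t, n_voters - t) for t in tallies) / n_voters

    n_voters, n_measures = 1000, 20
    votes = [[random.random() < 0.5 for _ in range(n_measures)]
             for _ in range(n_voters)]
    for day in range(3):
        tallies = [sum(v[m] for v in votes) for m in range(n_measures)]
        status = ("society is the complex" if toy_phi(tallies, n_voters) > 5.0
                  else "individuals are the complexes")
        print(day, tallies[:3], status)  # the label never feeds back into step()
        votes = step(votes, tallies)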

I can make the point more vivid. Suppose that the highest-level Φ in the system belongs to Jamie. Jamie has a Φ of X. The societal system as a whole has a Φ of X-1. The highest-Φ individual person other than Jamie has a Φ of X-2. Because Jamie's Φ is higher than the societal system's, the societal system is not a conscious complex. Because the societal system is not a conscious complex, all those other individual people with Φ of X-2 or less can be conscious without violating the Exclusion Postulate. But Tononi holds that a person's Φ can vary over the course of the day -- declining in sleep, for example. So suppose Jamie goes to sleep. Now the societal system has the highest Φ and no individual human being in the system is conscious. Now Jamie wakes and suddenly everyone is conscious again! This might happen even if most or all of the people in the society have no knowledge of whether Jamie is asleep or awake and exhibit no changes in their behavior, including in their self-reports of consciousness.
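In toy numbers (stipulated, not computed), the flip looks like this. Note that nothing about the other 999 people changes between the two calls; only Jamie's Φ does:

    def exclusion_verdict(person_phis, society_phi):
        # The bare exclusion comparison: whichever side has the higher phi
        # is the conscious complex; the overlapping side is excluded.
        if society_phi > max(person_phis):
            return "society conscious; no individual person conscious"
        return "each individual conscious; society not conscious"

    X = 100.0
    awake = [X] + [X - 2] * 999        # Jamie awake, phi = X
    asleep = [X - 50] + [X - 2] * 999  # Jamie asleep, his phi drops
    society_phi = X - 1

    print(exclusion_verdict(awake, society_phi))   # individuals win
    print(exclusion_verdict(asleep, society_phi))  # society wins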

More abstractly, if you are familiar with Tononi's node-network pictures, imagine two very similar largish systems, both containing a largish subsystem. In one of the two systems, the Φ of the whole system is slightly less than that of the subsystem. In the other, the Φ of the whole system is slightly more. The node-by-node input-output functioning of the subsystem might be virtually identical in the two cases, but in the first case, it would have consciousness -- maybe even a huge amount of consciousness if it's large and well-integrated enough! -- and in the other case it would have none at all. So its consciousness or lack thereof would be virtually irrelevant to its functioning.

It doesn't seem to me that this is a result that Tononi would or should want. If Tononi wants consciousness to matter, given the Exclusion Postulate, he needs to show why slight changes of Φ, up or down at the higher level, would reliably cause major changes in the behavior of the subsystems whenever the Φmax threshold is crossed at the higher level. There seems to be no mechanism that ensures this.

27 comments:

  1. I've come across the work of Tononi recently (through Scott Aaronson's blog, http://www.scottaaronson.com/blog/ -- Aaronson is skeptical of IIT) and I don't pretend to understand a word of it. To your knowledge does it (or would it) have any practical, technological (empirical) applications? Or are we dealing with yet another mathematically elegant theory that ultimately will have no useful bearing on reality?

  2. Hi Eric. I have not read this more recent Tononi work, and it's not like I understand phi very well anyway. But just so we can be clearer about your objection: Couldn't Tononi bite the bullet on your thought experiments, but suggest that they are almost inconceivable scenarios, not at all likely given the facts on the ground? Couldn't he say that the complexity of the human brain is such that the level of phi in it (while awake at least) is not going to be exceeded by a group of humans, even acting in harmony? Given the simpler brains of ants or bees, maybe the colony or hive would have more phi than the individuals, but that's not so counter-intuitive. Anyway, I'm probably misunderstanding the problem you're posing.

  3. Thanks for the comments, folks!

    modvs1: Tononi thinks it's useful in thinking about anaesthesia and twilight states, and for modeling the structure of the stream of consciousness. I wouldn't rule out the possibility that if IIT were true, it could have implications for such cases.

    Eddy: Yes, I think Tononi could and probably would say this. But (1.) it's so unclear how to calculate phi in such cases that it's not clear that he *can* legitimately be confident about this. And (2.) the theoretical objection stands in any case: without a mechanism to ensure that small threshold-crossing variations in phi at the larger-system level (whatever the larger system might be) have a major impact on the functioning of the subsystems, his theory has the presumably unwelcome consequence that the presence or absence of consciousness in the lower-level systems will often be nearly irrelevant to their behavior.

  4. It seems a refutation, but there seems to be some merit to the extension of the idea? I mean, assembly-line workers, going through repetitive, mindless actions over and over as they serve a larger system? That sort of lines up with the idea, doesn't it? Or did I not really get the idea?

  5. I wonder again if the domain of consciousness might not be a way of getting around this. In your democracy, the citizens are voting only on a few specific issues, one assumes. And I actually do think it's psychologically plausible to say that if you do a particular thing in concert with a fixed group of other people, getting continuous feedback on it, then you can enter into a form of group consciousness with those people *for that activity*. Sports are the obvious example.

    But individuals have many other areas of their lives which are not part of the gestalt. What they eat, read, enjoy... in these domains, consciousness remains at the individual level. Presumably Tononi's theory could accommodate this because it includes information, and the domain of information has to be defined.

    I think there's something about agency missing, though. I don't believe Tononi's idea because I don't think human beings are much like information processors, and the thing that makes us conscious in particular includes our agency and will. For instance, I think being conscious necessarily involves the ability to change our consciousness - that's why we're unconscious when we dream, and that's why lucid dreaming is so weird. But that's a separate argument.

  6. Do quarks, neutrinos, and other particles have consciousness? They move and MAY have free will.
    Do molecules have consciousness? There may be communication and judgement at that level of being. We have no way of discovering.

  7. Thanks for the continuing comments, folks!

    Callan: I'm not sure -- were you meaning that as a positive example of consciousness at the group level or as a negative example?

    chinaphil: It doesn't have to be voting -- voting is just informationally relatively simple and in principle numerically extendable. Maybe sports teams, or informal interactions at the nation level, would serve just as well or better. I think the 2004 and 2008 versions of Tononi's theory could accommodate this. It's his introduction of Exclusion in 2012 that raises the trouble. On agency: I have some sympathy with that idea, too, but it's not clear what justifies acceptance or denial of it.

    Anon: Such panpsychism is, I think, on the table -- and Tononi's theory seems to imply that they would often have a tiny bit of consciousness, if they are not part of a larger complex of higher phi.

  8. Is Tononi saying that either the person inside the Chinese room or the whole system CR can be conscious, but not both?

    If so, why on earth would he think that?

  9. Anon Jul 22: Yes, I believe that would follow from his exclusion postulate. As to *why* he accepts the exclusion postulate, the main explicit reason he offers is Occam's Razor: It's simpler. This seems to me not an especially compelling reason in this context. He also defends exclusion by appeal to the intuition that individual people have only one stream of experience (not relevant in part/whole cases, I think, but maybe relevant to overlap cases); and to the intuition that two people having a conversation wouldn't form a third conscious entity that includes them as parts (an odd intuition to take seriously, given that his near-panpsychism seems to conflict with similar intuitions against the consciousness of simple photodiodes).

  10. How does the exclusion postulate affect streams of consciousness in individuals? Is he implying that all thoughts are distinct conscious entities, in that they require a cascade of different neural circuits to be active at different times? Or that, within a complete thought, the most dominant (Phi-intensive) circuit is the one that "owns" the consciousness?

  11. Anon Jul 22 05:10: The latter. In Tononi's view, this is usually the corticothalamic system.

  12. Eric, does it matter whether it would be positive or negative (in my subjective evaluation)? I was proposing it kind of makes sense as something that might just be. In my opinion the workers' position is a poor one inflicted on them (and not by PVE, if I may use an MMORPG term).

  13. I support IIT (because it can be a tool for discussion about consciousness).

    This looks like a criticism of the Exclusion Postulate, but I think there is a misunderstanding. Consciousness should show nesting and overlapping. I don't think Tononi said anything about social networks.

    I support consciousness of social networks too, and also that of the photodiode. Further discussion within IIT should be possible.

  14. I disagree strongly with the claim that introspection suggests that there are no nested or overlapping consciousnesses in our heads. I regard the exclusion postulate as a counterintuitive bug at best, not a feature of IIT.

    As to IIT as a whole, I don't understand how the integration of information occurs except functionally, i.e. in such a way as to be defined in terms of an airy-fairy cloud of unrealized hypotheticals, and I don't see how those can make consciousness be or not be.

    -John Gregg
    www.jrg3.net/mind

  15. Thanks for the continuing comments, folks!

    Callan: Are you imagining both the workers AND the larger entity to be conscious, or only the larger entity? That was the question I meant to ask, though I didn't phrase it clearly.

    Mambo: Then you'll probably like the 2004-2008 versions of IIT better than the more recent versions with Exclusion.

    John: I'm not sure what introspection shows, so I don't really disagree with you on that point. I do also think it's a bit weird that hypotheticals could do so much work -- they do seem, in a way, pretty "airy-fairy"! But without hypotheticals I think you collapse pretty fast into Greg Egan's dust theory. So I feel like we're stuck giving them an important role. On Egan, see my post here:
    http://schwitzsplinters.blogspot.com/2009/01/dust-hypothesis.html

  16. This comment has been removed by the author.

  17. I know IIT v3.0.
    I could not find exactly the same expression of the exclusion postulate, but I found a similar one in the IIT v3.0 paper.
    Probably Tononi misunderstood. I think the exclusion postulate is not the most important part of IIT v3.0.
    I will think a little.

  18. Eric - only the larger entity is conscious (whatever that is). The workers have been assimilated. At least at the time of work - outside of that...?

    I mean, people talk about soul-draining jobs - what you're talking about might actually be involved somehow!?

  19. So if Tononi's exclusion postulate is correct, then there would seemingly be conditions under which what you describe is actually realized. Right!

  20. What you described, Eric, not me! :) It just seemed to resonate in a horrible way to me. I mean think of a small business - an employee might have an idea for how the business works and it might actually influence the procedures of that business. Now scale up the business in size - the very same employee might just be ignored at that point! Indeed they might not even bother trying to think of the idea, for how little they...I don't know how to describe it except perhaps for how little they matter?

  21. The problem with your California election example is that the result of the election -- the victories in some combination of 40 or 50 different candidates or propositions -- is not an irreducible concept. It's a composite, which, by Tononi's definition, cannot be the subject of consciousness. And it's not embodied in a singular experience that can be the subject of consciousness -- for example, reading about the results in the LA Times over breakfast.

  22. Bill: I'm not sure I understand. The idea of what is composite or not, and what is singular or not, should flow out of the mathematics of his model, right? Is there some reason to think that the mathematics of his model is structured so as to avoid group consciousness in California as a result of integration through an election structured in the right way?

  23. Take his two core examples: the dipolar switch and the digital camera. The dipolar switch integrates what little information it has in a singular event. The digital camera has no similar capacity for integration. The California election is like the digital camera. There is no single point at which the results in all of the races are integrated. They might all be reported on the same piece of paper, but integrating them in that way does not add any information to the results in each race that have already been calculated somewhere else. As I understand it, Tononi's theory requires both. Information has to be physically integrated (i.e. brought together) AND the integration has to add information that wasn't there before.

  24. Thanks for a thought-provoking post!
    As Bill Lane pointed out, it's not clear whether this particular election example would generate high phi. Nevertheless, such nested minds are worth exploring, even if this particular mechanism wouldn't work. Let's consider two more examples: the Borg (or any other sci-fi hive mind you like) and the China brain.

    The Borg is made up of many individual minds connected through some kind of high-bandwidth neural lace. IIT predicts that, if they become integrated enough, they should merge into a collective consciousness. Now, let's try to recreate the scenario with Jamie going to sleep:
    Suppose that the highest-level Φ in the system belongs to Jamie. Jamie has a Φ of X. The societal system as a whole has a Φ of X-1. The highest-Φ individual person other than Jamie has a Φ of X-2. Because Jamie's Φ is higher than the societal system's, the societal system is not a conscious complex. Because the societal system is not a conscious complex, all those other individual people with Φ of X-2 or less can be conscious without violating the Exclusion Postulate.
    Now, we have to remember what integration means - it means that two systems are causally intertwined, and disconnecting them causes major disruption of their behavior. That means that if Jamie goes to sleep, the behavior of the collective changes, and we can guess that its phi drops - just like turning off your prefrontal cortex would make you less conscious. Now, it can turn out that the other individuals stay conscious, because the collective phi has fallen below theirs. In the election example this was less obvious, because it makes it too easy to ignore what integration really means.

    The China brain example is trickier. Each of the individuals is told to simulate the behavior of a neuron - let's assume that they receive calls from other 'neurons', then mentally compute whether they should "fire", then call their connected 'neurons'.
    Even though the China brain is "computed" on those individuals, it seems that they aren't integrated with the China brain. To see this, consider Chen wondering what she should eat for dinner. Assuming she performs her assigned computations carefully, this thought wouldn't influence her output. The reverse is also true - the China brain's thoughts wouldn't influence Chen's decision about what to eat, or any other thought (at least not significantly; maybe you could argue that some minimal influence happens). This shows that they aren't causally linked, so they cannot be integrated.

    Still, there is something spooky about one event taking part in two different experiences - e.g., Chen, taking a call, experiences someone saying 'I fired', and the same event builds the China brain's experience. Maybe you could explain it by saying that this event is used by the two systems at different levels of abstraction (or, as Tononi would say, different spatio-temporal grains). I admit I'm not sure how IIT treats such cases.
    I feel IIT is quite underspecified when it comes to this "grain". It says "just choose the grain that maximizes phi". This China brain example shows it's more complicated than that - you can have separate conscious beings existing at different grains, and, as you rightly said, it would be ridiculous to say that the emergence of the "higher" being would suddenly turn the "lower" beings into zombies.

  25. Thanks for these continuing comments, Filip!

    I'm not sure why we should assume a radical change in the system's behavior when Jamie goes to sleep. If one is antecedently committed to IIT and to the value of some particular way of computing integration, then maybe one should think that must be so; but from a more neutral perspective it's hard to see, if we assume that Jamie is just one of a billion people who happens to have a slightly higher phi than the next-highest-phi person. Remember, also, that IIT is committed to a *universal* claim here: this must be the case for all possible Jamies in a setup like the one I've described.

  26. It doesn't need to be radical. But it will be non-zero. If Jamie going to sleep had no effect AT ALL on the system, that would mean that they aren't causally linked. And according to IIT, two systems which are causally independent cannot be integrated. All IIT versions obeying the Integration Postulate will have this feature.

    It's true that if Jamie is just one of a billion people in this system, his influence will probably be tiny. And the difference between Jamie and the next-highest-phi person (let's call her Mary) can be arbitrarily small. But in the election example, we assume that the phi of the system is somewhere between Jamie's and Mary's, so the amount that the system's phi needs to fall to 'save' the people from becoming zombies is tiny too. Of course, whether it really would happen depends on the mathematical details of our version of IIT, and there may be versions which are spooky and create zombies suddenly. But this doesn't stop us from looking for other, 'sane' IIT versions.

    Although I must admit, the more I think about these strange cases, the more doubts I have. One case which really gets me is this:
    Consider a patient with a perfectly symmetrical brain who is scheduled to have his hemispheres surgically separated. The operation is performed while he is conscious, and the doctors cut one connection at a time. At the end, we get two separate hemispheres - so two separate conscious beings.
    There must be one tipping point at which one being suddenly becomes two, even though we only cut a single connection.
    It's extremely spooky, and I'm not sure how to avoid this problem.

    Maybe we should abandon the idea that conscious beings must be so clearly and sharply separate, and allow some 'softness'. We could then say that there was no single tipping point, but rather a gradual process of one being becoming two. Although I fear that this brings even more problems than it solves. I'm really curious what your thoughts are on that.

  27. The exclusion postulate is motivated partially by introspection and a reasonable enough rejection of the counterintuitive notion of nested consciousness in one brain, but also by a principle of causal parsimony that disallows overdetermination of causes in a single event. The latter consideration is the more philosophically rigorous.
