Wednesday, June 06, 2012

Why Tononi Should Allow That Conscious Entities Can Have Conscious Parts

On March 23, I argued that eminent theorist of consciousness Giulio Tononi should embrace the view that the United States is conscious -- that is, literally possessed of a stream of phenomenal, subjective experience of its own, above and beyond the experiences of all its citizens and residents considered individually. My argument drew on Tononi's work from 2004 through 2009 arguing that any system in which information is integrated -- that is, virtually any causal system at all! -- is conscious. Tononi's one caveat in those works is that to count as a "system" in the relevant sense, an informational network must not be merely a subsystem within a more tightly integrated larger system. I argued that since the U.S. is a system in Tononi's sense, either it or some more tightly integrated larger system (the whole Earth?) must be conscious by Tononi's lights. While in other posts I had to do some work to show how I thought Daniel Dennett's, Fred Dretske's, and Nicholas Humphrey's views implied group consciousness, Tononi seemed an easy case.

However, my March interpretation of Tononi was out of date. More recently (here [in note 9] and here [HT Scott Bakker and Luis Favela]), Tononi has endorsed what I will call an anti-nesting principle: A conscious entity cannot contain another conscious entity as a part. Tononi suggests that whenever one information-integrated system is nested in another, consciousness will exist only in the system with the highest degree of informational integration.

Tononi defends this principle by appeal to Occam's razor, with intuitive support from the apparent absurdity of supposing that a third group consciousness could emerge from two people talking. But it’s unclear why Tononi should put much weight on the intuitive resistance to group consciousness, given his near panpsychism. He thinks photodiodes and OR-gates have a little bit of conscious experience; so why not some such low-level consciousness from the group too? And Occam’s razor is a tricky implement: Although admitting the existence of unnecessary entities seems like a bad idea, what is an “entity” and what is “unnecessary” is often unclear, especially in part-whole cases. Is a hydrogen atom an unnecessary entity once one admits the proton and electron into one’s ontology? What makes it necessary, or not, to admit the existence of consciousness in the first place? It is obscure why the necessity of admitting consciousness in a large system should turn on whether it is also necessary to admit conscious experience in some of its subparts. (Consider my Betelgeusian beeheads, for example.) Tononi’s anti-nesting principle compromises the elegance of his earlier view.

Tononi's anti-nesting principle has some odd consequences. For example, it implies that if an ultra-tiny conscious organism were somehow to become incorporated into your brain, you would suddenly be rendered nonconscious, despite the fact that all your behavior, including self-reports of consciousness, might remain the same. (See Ned Block's "Troubles with Functionalism".) It also seems to imply that if there were a large enough election, with enough different ballot measures, the resulting informational integration would cause the voters, who can be conceptualized as nodes or node-complexes in a larger informational system, to lose consciousness. Perhaps we are already on the verge of this in California? Also, since “greater than” is a yes-or-no property rather than a matter of degree, there ought on Tononi’s view to be an exact point at which higher-level integration causes our human-level consciousness suddenly to vanish. Don’t add that one more vote!

Tononi's anti-nesting principle seems only to swap one set of counterintuitive implications for another, in the process abandoning general, broadly appealing materialist principles – the sort of principles that suggest that beings broadly similar in their behavior, self-reports, functional sophistication, and evolutionary history should not differ radically with respect to the presence or absence of consciousness.

25 comments:

Scott Bakker said...

I really need to reread these, but as a preliminary, why is it that I’ve always assumed Tononi regards consciousness as an emergent product of a certain way (temporal, spatial, structural) of organizing information? The diode example he uses to make a simple point about information (not consciousness). The camera metaphor he uses to make a point about information integration.
His object of empirical study is consciousness as we know it, which is to say, something seated in the human brain. I agree that he sometimes makes gestures to ‘consciousness in general,’ but he always seems quick (as far as I can tell) to heavily qualify these forays. So he says in Note 9, for instance,

“An alternative notion enforces a ‘superposition’ principle. According to this notion (which was presented in some previous work), complexes can overlap, in whole or in part. Any set of elements with phi > 0 would then constitute a complex, even if it contains a subset of much higher phi. As a consequence, the same set of elements can support more than one consciousness, and different consciousnesses can overlap, although they share part of their informational structure. That an exclusion principle might apply is perhaps more in line with the intuitions that each of us has a single, sharply demarcated consciousness. Phenomena such as binocular rivalry and other kinds of metastable perception, not to mention dissociative personality disorders, are also suggestive of an exclusion principle.”

He’s relying on a phenomenological intuition of unitary consciousness, but remains open to alternative possibilities (superpositioning). His exclusion principle seems warranted given consciousness as we experience it - the immediate object of his concern.

I guess I’m not convinced he’s making the ‘consciousness in general’ argument you attribute to him (so much as airing tangential conceptual possibilities). But even if he is, the specificity of the constraints he poses always leaves me asking how noisy, ramshackle, obnoxious systems like the Californian electorate could ever generate the kind of high phi value integration required for consciousness to emerge.

Eric Schwitzgebel said...

Thanks, Scott. It's interesting that in his more recent paper, where he comes closer to explicitly endorsing the exclusion principle, he does not appeal to the intuitions he does in this note. (That's why, in the brief treatment in the post, I didn't broach these other reasons.) My guess is that it's not entirely clear that we do intuitively agree that we have a sharply demarcated conscious stream, though it is much more commonsensical to think that a third stream of consciousness would not arise from two people talking, so he has moved to what he thinks is the stronger argument. (Just a guess.)

Tononi appears to come at the issue from an interest in the human case and in issues like sleep and neural organization, but it seems clear to me that he is making very broad claims on this basis about consciousness in general -- so I want to hold him responsible for the implications of those claims!

I'm not sure how high a phi Tononi thinks one needs to have a human-like stream of conscious experience. I can't recall his committing to an approximate quantity. (If you know of something, I'd like to hear!) But the California election case was chosen because it seems likely to be high phi and *not* messy. I don't mean Californians talking to each other but rather the toting up of votes on voting day. Tononi is flexible about temporal grain. I choose a temporal grain of one day. On Tuesday early morning no one had voted. 24 hours later, we have an integrated output of 8,000,000 voters across (if it's a long slate) 25 elements, some of which are more than one-bit choices. The output I'm imagining is: 4,103,414 for Prop X, 3,611,103 against; etc. Informationally, that's pretty rich.
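
To put a rough number on "pretty rich", here is a back-of-the-envelope sketch (my own toy calculation, not a phi computation), assuming all 8,000,000 voters cast a yes-or-no vote on each of 25 measures:

```python
import math

# Back-of-the-envelope only: how many bits does the statewide tally carry?
# Toy assumptions: 8,000,000 voters, 25 yes/no measures, everyone votes on
# everything, and the output we read off is just the per-measure "yes" count.
voters = 8_000_000
measures = 25

# Each measure's "yes" total can be any integer from 0 to 8,000,000,
# so the tally for one measure distinguishes voters + 1 possible outcomes.
bits_per_measure = math.log2(voters + 1)
total_bits = measures * bits_per_measure

print(f"~{bits_per_measure:.1f} bits per measure, ~{total_bits:.0f} bits overall")
# ~22.9 bits per measure, ~573 bits overall
```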

Scott Bakker said...

You know what, it just dawned on me that I'm not even clear on what Tononi's position on functionalism is...

I see what you're saying. He does seem to be more invested in overtly considering philosophical aspects of his research in these papers - almost as if he's trying to conceptually clarify things for himself.

But I still think he would say the devil is in the informatic details, especially now that he's committed to considering II in dynamic terms. I'm not sure he would agree there's enough complexity of the right kind in your example - but I'm pretty sure that its one-off nature would disqualify it.

If you have access, Seth has a great paper comparing the mathematical virtues and liabilities of II and Causal Density approaches. (It's locked behind a paywall, now, so I'm screwed).

http://rsta.royalsocietypublishing.org/content/369/1952/3748.abstract

Edelman, if I remember correctly, thinks II as mathematically defined by Tononi will be but one among a number of ways of quantifying (and so identifying) consciousness-correlated neural activity.

Maybe it'll turn out to be an informatic 'Omega' (a la Chaitin) and transcendental philosophy will be able to sigh in relief!

Matt Sigl said...

More great IIT stuff!

I think you might be getting the anti-nesting principle wrong. The anti-nesting principle states, as I understand it, that within a complex -- at a certain causal "level" -- there cannot be nested consciousnesses. All subsets with a phi value are "subsumed" within the larger, higher-phi complex.

The key point here is that it doesn't matter what the nodes ARE. Whether the nodes themselves are objects like people, which have their own phi values, or just transistors in a computer, within the complex they are just nodes. The anti-nesting principle is "complex-sensitive."

I don't think Tononi would believe that the members of the Chinese Nation on cellphones would each lose their individual consciousness when they recreate the functional organization of a brain. After all, how could they? How could the information of their consciousnesses "get to" the new complex? Tononi's theory isn't magic; information has to have channels of intelligible transfer. The complex has no access to the minds of the Chinese Nation; all it can say is: there is a node integrating information, and what that node is made of doesn't matter.

If I'm totally off-the-mark in my interpretation (totally possible) I'd love to learn why.

Matt Sigl said...

Just one further point.

Re-reading Tononi's note #9, I think the anti-nesting principle is stated more ambiguously than I previously thought. Nonetheless, I suspect that it's best to read the "exclusion principle" or "anti-nesting" principle as still occurring on one "causal level."

Its opposite, the superposition principle, has to do with the conscious autonomy of complexes within a complex. For example, if my visual field is an informational network that generates a phi value greater than zero, does my visual field have its own consciousness? Certainly there is a higher-phi complex, namely what I usually call my "normal consciousness," that the visual phi network contributes to but, if superposition holds, then there is something it is like to be my visual field also. If exclusion is correct, this smaller phi complex gets wholly integrated into the larger one, losing all independent "existence" in the process.

So, I don't think the example of the California voters holds because the Phi complexes that generate the individual minds of the voters are not contributing pertinent information to the new macro-complex other than as nodes.

The one complication I see is that the effects of this "supra-voter-mind" should be causal and, since the nodes are people, should affect their behavior. But if their minds aren't integrated or subsumed into this larger complex, how can the effects of the supra-mind cause behavior? It's a tricky question, but I see no contradiction in holding that complexes need not have access to all the causes of their behavior. It may be a better explanation that a super-ordinate mind "selects" certain outputs for the system and the node behaves accordingly, never fully "knowing" the cause of its behavior in totality. This may threaten the close link we feel between intentionality and action, but that was already a very tricky and ambiguous problem to begin with.

It's interesting to consider this as an explanation for the emergent properties of group behavior. When a certain critical mass makes a crowd of revolutionaries break through the barricades and storm the Bastille, might there be a supra-mind "choosing" that they charge ahead? (Not that this group mind would "know" that this is what it is doing. Its consciousness may be totally foreign to us and contain no content related to revolution or France.) It’s a bizarre way to imagine causation but, has some intuitive elegance. There is no question that human behavior does change drastically in group situations; perhaps this is why. What the Bastille-chargers DO NOT create is a super-ordinate mind that contains the contents of the minds of all its nodes. After all, how could it even know the node has a mind at all?

Of course, it may be that generating high-phi in reality is so immensely complicated and rare that only specific, selected-for structures like brains succeed at the task. Yes, when two people talk, they may create a third consciousness, but it may be only minimally so. In theory this is, of course, an empirical question.

Eric Schwitzgebel said...

@ Scott: Thanks for the reference. I've seen the Seth et al., but it's hard for me to make much of it given the current limits on my math skills (though I have recently started working again on developing those skills). It seems that it doesn't draw out the philosophical implications as clearly as Tononi does.

Eric Schwitzgebel said...

@ Matt: It would be nice, I think, if we could read Tononi in the way you suggest in your first comment. It seems to me that the most natural interpretation of what he says doesn't allow it, but maybe there's space to cut him some slack and read him "charitably" on the point -- if he *wants* to be read charitably in that way!

You write: "Its opposite, the superposition principle, has to do with the conscious autonomy of complexes within a complex. For example, if my visual field is an informational network that generates a phi value greater than zero, does my visual field have its own consciousness? Certainly there is a higher-phi complex, namely what I usually call my "normal consciousness," that the visual phi network contributes to but, if superposition holds, then there is something it is like to be my visual field also. If exclusion is correct, this smaller phi complex gets wholly integrated into the larger one, losing all independent "existence" in the process."

I wonder if we can do what you want with Tononi's *earlier* definition of a complex. A set of nodes won't count as a complex if one can add new nodes to create a higher-phi complex. If the visual system and the brain as a whole (or whatever hosts my reportable stream of experience) have the same spatiotemporal grain, and the visual system is lower-phi than the brain as a whole, then the visual system won't count as a complex on Tononi's 2004-2009 view and won't have its own stream of experience. But that's a different kind of restriction, and much more modest, than the principle Tononi flirts with in 2010 note 9 and more clearly embraces in his 2011/2012 piece.

If the voting case wouldn't count as a violation of the new anti-nesting principle, I wonder, then why would the two-people-in-conversation case, which Tononi says provides intuitive support for the anti-nesting principle? Couldn't we see the conversation partners as contributing information also as nodes? (What would it be, anyway, to contribute information only as a node?)

I agree that the case is wholly different if the higher-level integration changes the behavior of the elements, as maybe in the mob case. But Tononi's anti-nesting principle doesn't seem to require that sort of causal influence from whole back to node. It's just: Tote up phi for the subsystem (e.g., a personal stream of consciousness); tote up phi for the supersystem (e.g., the California electorate); assign consciousness only to the system with the higher phi. At least, that's what it *seems* like he is saying.
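
Schematically, with made-up phi numbers purely to illustrate how I'm reading the rule:

```python
# Toy gloss of the anti-nesting rule as I'm reading it (my paraphrase, not
# Tononi's formalism); the phi values are invented purely for illustration.
def conscious_system(phi_by_system):
    # phi_by_system: dict mapping nested system name -> its phi value
    return max(phi_by_system, key=phi_by_system.get)

print(conscious_system({"one voter": 12.0, "California electorate": 15.0}))
# -> "California electorate": on this reading, the individual streams lose out
```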

Matt Sigl said...

Great thoughts.

I was being imprecise when I talked about "complexes within complexes;" if the anti-nesting principle is true then, obviously, you can't have complexes within complexes, as a complex is necessarily unitary and composed of no other complexes. What I should have said instead is, in Tononi's language, "entangled q-arrows within complexes." A q-arrow is entangled if--and I quote from Tononi's paper on qualia-- "it's sub q-arrows generate information differently taken together than they do taken separately (note the analogy with phi.)" Sounds a lot like "complexes within a complex" to me. The anti-nesting principle claims that these entangled q-arrows have no conscious existence of their own. Why isn't clear. If they do, then Tononi may just have to bite the bullet and accept that complexes can indeed nest and entangled q-arrows are also complexes. I still prefer the (let's call it "soft," to be contrasted with "hard" shortly) anti-nesting principle for its elegance and avoidance of counterintuitive implications, like my sense of taste having its own conscious autonomy. (I can't help but think of Woody Allen's "Everything You Always Wanted to Know About Sex But Were Too Afraid To Ask," where the different parts of the brain were all autonomous characters working together to create the experience of "being the man" about to have sex. Especially funny is the pleasure center, which consists of a man in a sterile white room experiencing perpetual orgasm!)

As for the "hard" anti-nesting principle which implies a possible radical disconnect between behavior and presence of consciousness, I still think the theory steers clear. The answer it seems to me has to do with the spatio-temporal grain of a complex. In your voter-ballot example you give the granularity of 1 day. The complexes that compose the nodes work on a timescale of milliseconds. As such, the voters can be seen as "nodes" on the scale of days and "complexes" on the scale of milliseconds. There need not be a moment then when the voters lose consciousness or find their consciousnesses merged because those complexes are operating at an entirely different temporal scale. This is true of our brains too; the atoms that make up our neurons operate on a scale that is far, far faster than the causal relationship between our cells. Whatever consciousness is generated by these ultra-microscopic causal interactions remain "locked away" in a spatio-temporal grain that we do not have access to. The phenomenological content of atomic interaction cannot be informationally incorporated into our millisecond-based consciousness.

Finally, you write, "A set of nodes won't count as a complex if one can add new nodes to create a higher-phi complex." If I understand you right, I think this is totally wrong. If I remember correctly, Tononi even rhetorically asks in one of his papers (I can't remember which) if merely adding nodes to a causal network can increase its phi value. (This might be a brute-force way to create conscious AI, it seems to me.) And why not? While I certainly have a high phi value now, who's to say that adding a bunch of artificial but functionally operative neurons to my brain wouldn't increase my phi value? I suspect it would, though organization may be far more important than quantity. (Remember, random networks generate little phi.)

Unknown said...

I haven't read the other comments to this article (I'll get to it), so apologies if someone already brought this up.

Responding to:

> it implies that if an ultra-tiny conscious organism were
> somehow to become incorporated into your brain, you would
> suddenly be rendered nonconscious,

Technically speaking, this is not in agreement with Tononi. Tononi doesn't care what things are members of the same physical space; he cares what things are part of the same informational mesh.

If you implanted a tiny conscious organism in my brain, the organism's causal interactions with the rest of my brain would be very small -- sufficiently small that the organism and my brain would be considered boundaries of two separate "complexes" (in Tononi-speak); however, I think the tiny conscious organism would still be the "main complex".

If the conscious organism actually did lots of fancy causal interactions with my brain, then yes, as far as I know, your argument holds. As far as I know, Tononi attributes consciousness to all complexes, not just main complexes.

Eric Schwitzgebel said...

Virgil, are you sure you're not still in Tononi 2004-2009? I think that's exactly the issue on which he changed his view. NOT all complexes are conscious. If they nest, consciousness only goes with the highest phi complex. Now maybe Tononi would want to say that a tiny organism who acted by himself as a single node (or even less) wouldn't violate the "exclusion principle" or anti-nesting principle due to some unmentioned constraint on temporal or spatial grain, with exclusion only pertaining to same grain-size complexes? But then I don't think Tononi could use his new principle to rule out a separate consciousness arising when two people are in conversation, since the temporal grain will be much slower....

Matt Sigl said...

@ Eric

I've read and re-read the Tononi passages ad nauseam now and, you know what, I think you're totally right, even in regards to complexes that exist on different spatio-temporal scales. He writes:

"the exclusion principle would also mandate that the spatial scale at which Φ is maximal, be it
neurons or neuronal groups, excludes finer or coarser groupings: informationally there is no superposition of (conscious) entities at different spatio-temporal scales."

And,

"the exclusion principle would mandate that, whatever the
temporal scale that maximizes Φ, be it spikes or bursts, there is no superposition of
(conscious) entities evolving at different temporal scales."

I also think you're right that his appeal to Occam's razor here is weak. I suspect the anti-nesting principle and Occam's razor ARE potentially (but only potentially) effective within specific causal timeslices, but it seems to me, as mentioned before, that different spatio-temporal granularities should allow for different complex structures that in some "physical" sense overlap with others. (Like in your very effective bee-brain example.)

Re the comment to Virgil: I'm not sure that the timescale of two people talking need be that different from the milliseconds that characterize normal consciousness; it's just that in the case of two people communicating the system is not one brain but two. Their neurons are causally linked through communication channels, in this case sound waves and not only synapses. In conversation, the neuronal activity in one brain is causally affecting the neurons in another at near the speed of thought. Still, if you're right and the two-brain system is too slow to generate a supra-phi at the 350-millisecond timescale like one brain does (assuming that superposition may be correct), one can nonetheless imagine a future microchip-enabled direct brain-to-brain communication that, while not creating a higher phi than any single brain, generates a greater-than-zero phi level for the system of the two brains together, even at the same spatio-temporal scale as normal consciousness. It strikes me as weird then that the two-brain system would have no consciousness; where would that two-brain phi information "go" if not into another complex?

I'm sure there will be a lot more to say on this issue over a very long temporal scale. ;)

Oh, and I promise I'm done ranting.

Scott Bakker said...

Consciousness rants are a pastime of mine, Matt! You're making me feel, er... self-conscious.

I've been trying to pin down what's been troubling me about your characterization, Eric, to no avail. His exclusion principle does strand him with some peculiar implications. So what keeps nagging me? I really have no problem with the ubiquity of 'consciousness,' largely because I think most of the things that make consciousness seem so 'special' are likely illusory in various respects.

I think my real problem is that I don't want to see Tononi get bogged down (like I am) on these kinds of issues. When I first encountered him 8 years back, what excited me was the empirical prospect of mapping the informatic limits of consciousness through the brain. I wanted a bloody map beyond the hazy TCS. But in these latest papers this prospect seems as distant as ever.

I think my real problem with your post is that it will encourage him to waste more time with the likes of us!

Professional ranters.

On a different note, I would be very interested to hear what you think of the prospects of the Human Brain Project, Eric.

What would the simulation of the processes underwriting consciousness look like - from the inside?

windwheel said...

I wonder whether your objection to the anti-nesting principle appeals to me because I believe consciousness is something which evolved and that God doesn't exist. Clearly our evolved consciousness must incorporate something like mirror neurons - or, going farther back, something like Gabriel Tarde's mimetic monadology - and therefore you must be right. To think otherwise is to imagine that what we call consciousness didn't really evolve but just emerged out of some complexity tipping point. But, in that case, Hayy ibn Yaqzan - the babe brought up by a deer on a desert isle - would in fact have something we recognize as consciousness, indeed one of a superior sort.
Of course, simply evoking the dread name of Evo Devo is no sort of service to this very interesting debate but, in this context, it points to the existence of a powerful argument likely to be productive in other areas as well.

CLains said...

How is the voting case a case of integrated information? Nothing there depends in specific ways on the voting.

Compare to the visual system where each conscious experience depends specifically on whether a neuron in V1 went for a line with 60% orientation or 70% orientation, or whether this or that dot was blue32 or blue21.

A priori each informational reduction, each exclusion of an alternative, has to figure meaningfully in the whole?

And the idea that consciousness only exists at the highest level doesn't imply that a conscious entity cannot, in some sense, exist within it. If our society is conscious, I can still be conscious within my head. It would be my, as it were, output that determines the consciousness of the whole, and that would not be conscious?

Eric Schwitzgebel said...

CLains: Before the election, each voter's informational state can be represented as a vector with one dimension for each of his candidate/issue preferences and one dimension for the number of people who voted his way on each candidate/issue. The latter dimensions are spread across a wide probability space. After the election, reading the news about exactly how many people voted each way, the probability collapses through the second set of dimensions.

I agree with your last rhetorical question, by the way. But Tononi does not, which is part of my objection to his newly modified view.
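
For concreteness, here is a toy rendering of that vector picture for one voter and one measure (my illustration only, not Tononi's formalism); the collapse shows up as the tally dimension going from a spread-out distribution to a single value:

```python
import math

# Toy rendering of the pre- and post-election state vector described above
# (illustration only). One voter, one ballot measure: a preference dimension
# plus a "how many people voted my way" dimension.
voters = 8_000_000

before = {
    "preference": "yes",                       # settled before election day
    "same_way_tally": ("uniform", 0, voters),  # spread over all possible tallies
}

after = {
    "preference": "yes",
    "same_way_tally": 4_103_414,               # collapsed once the results are read
}

# Assuming a uniform prior over tallies, reading the news removes about
# log2(8,000,001) bits of uncertainty along that dimension for this one measure.
print(f"~{math.log2(voters + 1):.1f} bits collapsed")  # ~22.9 bits
```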

Tam Hunt said...

Eric, I'm working on a paper that uses a different solution than Tononi's ad hoc exclusion principle to solve the problem of attributing consciousness to what should be "mere aggregates." The solution I'm working on relies on a notion of quantized time and inherent limits on the speed of information/causation. May I send you a draft for feedback?

Eric Schwitzgebel said...

Sure, send away! I receive a lot of papers, so I can't promise a full read, but I will at least take a brief look -- maybe more than that, depending on what else is pressing for my time.

Tam Hunt said...

Great, thanks Eric. I'll send when ready.

csimmons said...

http://rstb.royalsocietypublishing.org/content/370/1668/20140167

Tononi 2015 certainly doesn't have an anti-nesting clause. He clearly believes that split brain patients are conscious, and that the left and right portions of the brain are separately conscious when the corpus callosum has been cut.

He does have an anti-wrapping clause. Individuals with high-bandwidth internal connections, linked to one another via low-bandwidth connections, form an aggregate that is not conscious because the consciousness lies with the individuals. This seems a difficult position to hold, since the brain is an aggregation of cells and since smaller components tend to have higher internal bandwidths simply because communicating locally is easier than communicating non-locally.

Eric Schwitzgebel said...

csimmons: I read the "exclusion" material in b.v as straightforwardly anti-nesting, no? The relation to bandwidth is complicated, I think: Broader bandwidth can result in lower phi in some conditions.

Filip Sondej said...

Thanks for a great post!
You write about one consequence of the theory:
"it implies that if an ultra-tiny conscious organism were somehow to become incorporated into your brain, you would suddenly be rendered nonconscious, despite the fact that all your behavior, including self-reports of consciousness, might remain the same"

I don't think that this theory predicts that. We need to be explicit about what we mean by "incorporated". Clearly, it means something other than just occupying the same physical space. We need to have causal relationships between those two systems. Consider two situations:
(1) The tiny system is loosely connected to the big one. The theory predicts that they are separate complexes because separating them has very little effect. From what I understand, the exclusion principle has no effect here, because the causal networks of those systems don't overlap.
(2) The two systems are densely connected with a lot of causal relationships. Now they are counted as one system because you cannot separate them without a major disruption. But also, it looks like the behavior of the big system won't remain the same! The more causal connections we add between the systems, the more they influence each other's behavior.

In (2), saying that the big system ceases to be conscious in favor of the newly formed big+tiny system seems reasonable.

What's interesting is what happens between (1) and (2). I agree with you that it would be a huge problem for the theory if there is some tipping point - some state where adding one connection suddenly changes complexes. It's reasonable that minimal physical change should correspond to minimal phenomenal change.

Although I must admit, it's hard for me to tell just from looking at the equations whether such a discontinuity would happen. Maybe each added connection from the tiny system would "eat up" the conscious complex of the bigger system gradually? Fortunately, it's something that can be tested, because we have a tool to compute exact values of phi for small systems:
https://github.com/wmayner/pyphi
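
For instance, here is a minimal sketch of that kind of check (assuming pyphi's documented 1.x interface and its bundled three-node example network; the function names and exact values may differ across versions):

```python
import pyphi

# Assumes pyphi ~1.x: the example network and these calls follow the
# library's documentation and may differ in other releases.
network = pyphi.examples.basic_network()  # bundled 3-node toy network
state = (1, 0, 0)                         # a current state for nodes A, B, C

# Phi for the whole 3-node system...
whole = pyphi.Subsystem(network, state, (0, 1, 2))
print(pyphi.compute.phi(whole))

# ...versus a nested 2-node subsystem. Sweeping comparisons like this over
# progressively stronger couplings between two networks is the kind of test
# that could show whether complex boundaries shift gradually or all at once.
part = pyphi.Subsystem(network, state, (0, 1))
print(pyphi.compute.phi(part))
```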

Also, as Max Tegmark pointed out here:
https://www.youtube.com/watch?v=QnEtNC8eFso
there are hundreds of alternative ways to compute phi, so we could test many of them - maybe we would find some which wouldn't violate this continuity rule.

Eric Schwitzgebel said...

Thanks for this helpful comment, Filip!

I agree that I was too quick about the case of inhaling the organism, and my critique of Tononi now rests on a different type of case that relies on a similar point to the one you make about the space between your (1) and (2).

Phi is so intractable and unsettled for systems of the size of animals that it becomes unclear how much testable empirical content the theory actually has -- except in so far as it is empirically committed, or seems to be, to principles like sudden discontinuity of the sort you mention in your post. I have more discussion of the problem of discontinuity in posts here:
https://schwitzsplinters.blogspot.com/2014/07/tononis-exclusion-postulate-would-make.html

and here:
https://schwitzsplinters.blogspot.com/2018/11/the-phi-value-of-integrated-information.html