Tuesday, January 26, 2016

New Essay in Draft: Is the United States Phenomenally Conscious? Reply to Kammerer

Francois Kammerer has a forthcoming piece in Philosophia responding to my 2015 paper, "If Materialism Is True, the United States Is Phenomenally Conscious". I've drafted a reply to Kammerer.

In Schwitzgebel (2015) I argued that the United States, considered as a concrete entity with people as some or all of its parts, meets all plausible materialistic criteria for consciousness. Kammerer (forthcoming) defends materialism against this seemingly unintuitive conclusion by means of an "anti-nesting principle" according to which group entities cannot be literally phenomenally conscious if they contain phenomenally conscious subparts (such as people) who stand in a certain type of functional relation to the group as a whole. I raise three concerns about Kammerer's view. First, it does not appear to exclude the literal phenomenal consciousness of actually existing groups of people, as one might hope such a principle would do. Second, Kammerer's principle appears to make the literal phenomenal consciousness of a group depend in an unintuitive way on the motivations of individuals within the group. Third, the principle appears to be ad hoc.

Many thanks for reader comments on this earlier post, and especially to Francois Kammerer. Further thoughts, concerns, and comments welcome, either in the comments field below or by email to me.


Arnold said...

Does the word 'influence' with Phenomena help Foundationalism in proposing experiences of Consciousness when considering Materialism...

Ref: Quine's 'behaviorist theory of meaning' as a phenomenal influence for Observation...

chinaphil said...

All very good stuff. I think one point which might help to make some meaningful distinctions between different cases of potential consciousness is the apparent exclusiveness of mono-consciousnesses.
What I mean is: one of the defining features of consciousness as we know it is often taken to be that the conscious being has unique and authoritative access to its own conscious states. I.e. if I say I'm in pain, I'm in pain, and it doesn't matter what your nervometers say.

In the article you refer to a car+driver whole, and also to multi-consciousness wholes (like the USA). Part of the argument is about whether a constituent of the whole is able to represent the whole. In the car+driver case, it seems that the driver is able to represent the whole to herself, quite straightforwardly. But in a multi-consciousness whole, there must necessarily be some parts of the whole that each sub-consciousness cannot access, and therefore cannot accurately represent.

I haven't yet thought through how this inability to represent parts relates to the ability of the sub-consciousness to represent the whole. Another issue is that I may be assuming a level of "base" consciousness, i.e. that there are no sub-sub-consciousnesses inside the sub-consciousness. But maybe not. Perhaps we can say this: whether or not there are sub-consciousnesses making us up, or super-consciousnesses of which we are a part, one thing we know about the consciousnesses on "our level" (whatever that means) is that they are exclusive.

If one wanted to work on an anti-nesting principle, that might be a better place to start. It feels like one ought to be able to argue from the exclusivity of same-level consciousnesses to the exclusivity of different-level consciousnesses. I'll work on it!

François Kammerer said...

Dear Eric,

Thanks for your interesting response paper. I will make three comments about it (one for each of your concerns regarding my own previous response paper).

1/ Concerning your first concern, I just want to make clear that, even though I agree that my principle leaves room for some possible group consciousness, I also think that, given my principle, group consciousness is quite unlikely to occur in the actual world (I set aside here the problem you raised concerning reference - I think I would go for a "magnetic" response here). Indeed, I think it quite unlikely (though not impossible) that a group whose members are conscious (and as autonomous as humans can be) ever comes to a degree of functional organization and coordination as a whole without any of its subparts having a conscious representation of the whole itself (and without these representations playing a crucial role). Of course I have no argument to prove it, but it seems to me that in such a case the burden of proof would be on your side: that is to say, I think it would be up to you to describe an actual group that behaves in such a way (or that is likely to behave in such a way in the future). Again, though, I think it is perfectly possible to get group consciousness; I just think it is extremely unlikely to happen in our world, with actual humans.

This first point is not an objection to your concern, but rather a specification of what may (or may not) be different between our views.

François Kammerer said...

2/ My second point is quite similar to one I already raised in the comments on your previous blog post about my paper. It can be explained as follows: I have a small problem with your second concern, the objection to my principle that you describe through the case of Leo. I may be missing an important point, but I don't see why the anti-nesting principle would apply in Leo's case. It seems to me that Leo's situation doesn't fulfill condition B of the principle: it is not true that, if Leo did not have a mental state representing the whole as a whole, the overall functional role which allows the whole to "see" the Tonga Trench would no longer exist. And the proof of that is precisely that (by stipulation in your story), before Leo discovered the existence of the group, he was playing his part in a way which allowed for the relevant global functioning of the whole! So, to use again the terms in which I formulated condition B, it is not true that "if a functional role of such a kind that it requires that the subpart of the whole performing it has conscious mental states representing the whole, was not performed by at least one of the subparts of the whole, then the whole would not display anymore the kind of functional organization/behavior that seemed to grant that it has a certain conscious state".

In the same way, consider Ned Block's tiny intelligent organism who plays the role of one of my neurons. I think my anti-nesting principle states that, as long as this organism really plays the role of one of my neurons, it doesn't matter (for my phenomenal states) whether this organism (let's call him Pierre) thinks about the whole (me) or not, or needs (in order to keep playing his role) to remember how much he likes playing the role of one of my neurons, because he wants me to keep acting the way I acted before he arrived. In such a case, condition B of the principle is not fulfilled (because there is no need for Pierre to have those representations of the whole as a whole in order for my brain to function as it does, and the proof is that my brain DID function as it does before the arrival of Pierre, when my head was just full of neurons).

So, in my view, the case you're describing doesn't fulfill condition B of the principle. On the basis of your description, then, I think my principle allows the group to be visually conscious of the Tonga Trench the whole time (provided my anti-nesting principle doesn't apply to other aspects of the functional organization of the group).

Of course, I may be missing something in your argument, and I'm sorry if that is the case, or if I misunderstood you.

François Kammerer said...

3/ Finally, when it comes to your third concern: I think this is probably the strongest objection one could make to this principle (or to any anti-nesting principle, for that matter). One way to put your point is as follows: I try to justify my anti-nesting principle by appealing to simplicity considerations. But if we followed simplicity considerations radically (that is to say, if we radically tried to minimize the number of entities our explanations appeal to), then we should probably just give up all ascriptions of consciousness! And we should also stop saying that baseballs break windows. But if we don't want to do that, that is to say, if we want to say that baseballs break windows and that some entities are conscious (and that their conscious states play a role in explaining their behavior), then we have renounced radical simplicity, and then why not say that groups are conscious too?

That seems like an interesting point to me. However, I'm not sure it is completely successful. I think we should recognize that, while simplicity considerations indeed motivate and constrain much of our explanatory talk, they are not (and cannot be) the only constraint, or the only criterion, by which we assess our explanations. Elegance, predictive efficiency, and usefulness are also criteria we use to assess whether a certain kind of explanation is a good one, and therefore whether we can make an inference to the best explanation regarding the properties or entities posited by that explanation.
For example, it is true that we accept that baseballs break windows (even though we also think we could describe the breaking of the window from a fundamental point of view, by appealing only to the particles composing the baseball and the window). However, our explanatory talk about baseballs and windows (which takes place in the (probably naive) conceptual framework of middle-sized physical objects) is nevertheless ALSO constrained by considerations of simplicity. For example, we don't say that the right hemisphere of the baseball broke the window (even though it was the part which touched the window, and its mass and velocity were sufficient to cause the destruction of the window). That example (and I think many others are available) shows that simplicity is not our only guide when we try to explain phenomena, but it is nevertheless a guide, and all of our "emergent" explanations (whatever your precise theory of emergence says) are still constrained by considerations of simplicity.

I think the same applies to consciousness. We ascribe consciousness because it is a convenient, useful way to describe, predict (and maybe justify) the behavior of some entities. If we are materialists, then it is true that if simplicity (in the sense of minimizing the number of entities we posit) were our only goal, we wouldn't ascribe conscious states at all. We would just talk about, say, fundamental particles in the spatial regions some people call their "brains". However, even if simplicity is not our only goal, it is one of them, and we try to reach it by avoiding multiplying ascriptions of consciousness. It is not easy to give a precise description of what it means to avoid multiplying ascriptions of consciousness, and to restrict ourselves only to the ascriptions that are truly useful; however, I think that's still what we do intuitively and implicitly. My paper was an attempt to formulate (in the context of the problem of group consciousness) a rather principled and applicable version of the kind of reasoning we intuitively and implicitly follow in our ascriptions of consciousness when we try to avoid multiplying those ascriptions.

François Kammerer said...

Overall, I certainly agree with you that my principle is not perfectly formulated, and that some of its aspects are still obscure. I also agree that this is an under-explored and neglected topic, even though it is essential for any fundamental theory of consciousness; so I hope that people will manage to clarify this issue in the future better than I do now.

Arnold said...

But Philosophy should not propose Formulation-ism, unless pursuing what parts make up a whole; the role of parts then is to have a certain place, simply clarifying The whole...

Ref: when philosophy turns to metaphysics...

Eric Schwitzgebel said...

Thanks for all those comments! Lots of meetings today, and I need to gather my thoughts. Back with some replies soon.

Eric Schwitzgebel said...

chinaphil: Interesting thought about the exclusivity of same-level consciousness. By this, I take it that you mean something like: I know that at roughly the organismic / human-size level, there is only one stream of consciousness in me. There isn't, say, one stream of consciousness for my entire body and another stream of consciousness for just my brain. Philosophers and consciousness scientists might disagree about exactly what the substrate is for "my" stream of conscious experience, but whether it is part of the brain, all of the brain, brain + body, or brain + body + immediate environment, they almost all agree it's only one basis and one stream. If we grant this, maybe we can leverage that fact somehow into a principle that disallows nesting (at least under certain conditions) between levels.

I like the idea -- though of course it's more of a sketch of a possible direction for an argument than an argument itself. Worth more thought, for sure.

Eric Schwitzgebel said...

Francois: Thanks for your detailed reply!

On 1: You write: "I think that it is quite unlikely (though not impossible) that a group, the members of which are conscious (and as autonomous as humans can be), ever comes to a degree of functional organization and coordination as a whole, without any of its subparts having a conscious representation of the whole itself (and without these representations playing a crucial role)." I'm not sure why you think this, but one possibility -- since you seem okay with my Antarean Antheads -- is that you think if the subparts have cognitive sophistication of the sort human beings have, they would naturally notice how much coordination there is and what the higher-level behavioral result of that coordination is, and then develop a concept of that higher level entity and then at least sometimes shape their contributions to the group level in light of that conceptualization. If that's your thinking, I see some plausibility in it -- and then the issue probably turns on the referential magnetism issue: How much looseness can there be between the ontology of the thing that houses the higher-level consciousness and the conceptualization of it that its subparts have, before the conceptualization is so far from the ontological target that your exclusion criteria fail to apply?

Eric Schwitzgebel said...

On 2: This is a pretty complicated issue. It might take a few go-rounds for us to get it right. The crucial bit of Condition B is "If such a functional role... was not performed by at least one of the subparts of W, W would no longer have the property P." This is a counterfactual claim about what-would-happen-if. In possible-worlds jargon, that means: go to the nearest possible world in which the antecedent holds. Leo, in my example, is depressed and inclined not to do his part, but only becomes motivated to continue contributing when he explicitly reflects on the group's need of his contribution. As I intended the example, in the nearest possible world in which Leo fails to represent the group, he is unaware of the group's need, so he loses that motivation and the depression does its job. He fails to do his part, and consequently the group fails to have visual experience of the relevant sector of the Tonga Trench. Although there are *more remote* possible worlds in which the job still gets done (by someone else, or by more remote counterparts of Leo), it is the nearest case that is most relevant to evaluating the counterfactual as stated -- no?

Eric Schwitzgebel said...

On 3: Ah, well, simplicity is a complex business! I agree it's not the only constraint. But I would also push back a bit against part of what you say. As I suggest with my army example, there are simplicity considerations on *both* sides of the equation. There is a sense in which ascription of mentality to the group (and causation to baseballs) is much simpler, explanatorily, than insisting on an explanation that appeals only to component parts.

Anonymous said...

OK, I hope this is brief. Let me start backwards. I used to see movies of real people in my head; it actually looks like looking through the wrong side of a door peephole. It scared me and I stopped thinking about it. After years I decided to try seeing the images without being afraid. It was always the same movie: black and white, two people sitting at a counter talking to each other. The movie didn't come all the time. BUT now I can just think and see all kinds of things, like the movies and faces are real. Still, it's like looking through the wrong end of a door peephole.

One night years ago, I had a dream I actually left my body. I rose up straight, lying flat. I floated through the house, up the stairs, hovered over my children's bed. It was quiet and peaceful. I floated down the stairs and looked at the door to leave the house. I wondered how wonderful it would be to go outside. I got really scared and chose not to. I floated back to my room and descended back into my body. My next experience: I was sound asleep by myself. My eyes suddenly opened without the rest of my body moving. In my face, almost touching, was the ugliest face I have ever seen. I screamed and hid under the covers, sweating and heart pounding. I asked who was there? Next: I was leaving the barn where the hay was just stored. As I was leaving I shut the doors behind me and locked them. I had a cigarette in my hand and I flipped it in the dirt. There was a little breeze that day. I felt three taps on my shoulder: one, two, three. It startled me. I turned quickly to see who was there. No one, but just in time to see the cigarette roll under the barn door toward the hay bales. I picked up the cigarette and just said thank you. Is something wrong with me? One more: I was in the doctor's office waiting to be seen. It was a large room, and I was bored. I then had a thought. In my mind I had a string. I tied it to the leg of a chair across the room, just imagining it, and tied the other end to my chair. I kid you not: the next person that walked in front of me TRIPPED. There was nothing on the floor, just flat carpet. I quickly stopped that thought and never did anything like that again. No, I don't do drugs, I drink just a little, I don't smoke anymore, and I am not on any meds.

François Kammerer said...

Eric: thanks for those nice and very stimulating comments. Here are my thoughts:

On 1: Yes I had something like that in mind. I also had the intuition that, anyway, a group could not reach a certain degree of coordination without some of its members having a representation of the whole as a whole. For example: let's say that many citizens of the United States have conscious representations of their particular States (Colorado, Iowa, etc.), but that no one has a conscious representation of the US itself. In such a situation, I think that we could not expect a high degree of coordination between States, and I doubt that the “US” as a coordinated whole could still exist (NB: I’m not saying that it’s not possible to imagine a sophisticated case in which, in spite of the fact that no member of the group has a conscious representation of the US, the US still functions as a coordinated whole - I’m just saying that it seems to me to be very unlikely to happen in our world).
Of course, in the end the reference problem is still there. But again, my intuition would be as follows: in "real" (human) groups, if members of the group have a representation of the whole that doesn't bear much similarity to the ACTUAL group they are in (so that the reference of the representation is really different from the actual group, even on a magnetic conception of reference), then it is unlikely that this actual group comes to the required degree of coordination. In "normal" human groups, there need to be enough structural similarities between the members' representation of the group and the actual group for the group to function as a whole. If there is no such mapping, then I think the group won't reach a high degree of coordination. This is of course an unargued intuition, though I think it could be shared by many.

François Kammerer said...

On 2: You're right to say that this is a complicated issue. The complexity is probably my fault, as I should have defined my principle more clearly. What is certain, though, is that when I wrote "If such a functional role... was not performed by at least one of the subparts of W, W would no longer have the property P", I didn't have in mind the kind of interpretation you give (even though I grant that the interpretation you're suggesting is both quite natural and perfectly charitable). I will now distinguish interpretation 1 of condition B (the one you had in mind) from interpretation 2 of condition B (the one I had in mind).

Interpretation 1/ "In the closest world in which the subpart doesn't have the conscious mental state representing the whole, does the subpart still play a role which allows the whole to have the relevant organization? If yes, then condition B is not fulfilled; if no, then condition B is fulfilled, and the anti-nesting principle may apply."

This is, I think, your interpretation (which is again a very natural interpretation of what I wrote, so it's my bad). But actually what I had in mind was something like this:

Interpretation 2/ “Would it be possible to replace the subpart which is conscious of the whole by a subpart without any conscious representation of the whole (given our theory of consciousness), and in such a way that this subpart would still play the role which allowed the whole to have the relevant organization? If yes, then the condition B is not fulfilled; if no, then the condition B is fulfilled, and the anti-nesting principle may apply”

I need, of course, to refuse interpretation 1 and to embrace interpretation 2 (or something in the vicinity). I need this not only to answer your counter-example, but also because, if I embraced interpretation 1, I'm not sure I could grant that in Block's case (tiny organisms playing the role of my neurons) I would still be conscious (whereas I want to say that in this case I'm still conscious). After all, suppose those tiny organisms, like Leo, lose the conscious desire to make me act as I did previously, so that they stop acting as my neurons! (The case is exactly parallel to the Leo case you described.) In the same way, even though I don't mention it in my paper, I think that Block's classical example (the Chinese Nation) is conscious (as long as we really specify that the people in Block's Chinese Nation don't do anything fancy, but really act as neurons would when they make their phone calls), even though most people find that counter-intuitive. But interpretation 1 of condition B would probably preclude me from saying that Block's Chinese Nation is conscious (whereas interpretation 2 would not; at least, so it seems to me).

Eric Schwitzgebel said...

Thanks for following up, Francois. On 1, I think the issues are difficult to resolve, but I think I get the general thrust of your idea here. I suspect that pushing more on this issue will turn it into some version of Issue 2 or 3 (unless it becomes a philosophy of language thing instead).

On 2: That's helpful. I worry that Interpretation 1 will make it too easy to say "no" and get stuck with applying Condition B and denying consciousness (as you say, maybe in Block's tiny alien case too); but that going instead to Interpretation 2 will make it too hard to say "no" with the result that Condition B will not apply and you will end up with consciousness in systems you don't want consciousness in. That there's *some replacement or other* that could play a sufficient functional role in the system without consciousness of the whole system might quite commonly be the case.

It's going to be hard to evaluate that claim for specific cases, for two reasons: (a.) We don't have a theory of consciousness and (b.) a non-conscious entity would probably contribute *differently* to the organization of the whole, but the question is whether it would still contribute *sufficiently*. However, let me present an abstract case:

Suppose you go for some sort of "homuncular functionalism" a la Fodor, on which every cognitive process can be subdivided into smaller and stupider subparts until you get down to very stupid elementary processes. Suppose that the whole group entity exhibits the minimum level of sophistication for consciousness, via the activity of its conscious subparts (people) who represent the whole. Ex hypothesi, those individual people's functional contribution to the whole is simple enough that it could be done without consciousness (since it is below the minimum threshold of sophistication necessary for consciousness). So Condition B is not fulfilled, and the anti-nesting principle does not apply, and the group is conscious. But it's plausible that societies have various degrees of higher-level organization -- both increasing over time and nesting within each other (U.S., California, Riverside, the U.C. system, whatever) -- so that even if the organization of the U.S. is too sophisticated to be done other than by conscious entities, there is likely to be an earlier stage of that group when it was less sophisticated or alternatively a currently existing subgroup that is sufficiently unsophisticated that the human functional contribution to that earlier group or subgroup could be substituted by some simpler non-conscious entity's functional contribution without losing the necessary level of sophistication. The somewhat surprising result, then, might be that the U.S. is *too* cognitively sophisticated at a group level to be conscious by Interpretation 2 of Condition B, but *less* cognitively sophisticated groups *would* be conscious.

Yeah, that's pretty abstract with lots of negations and intermediate steps -- rough going, but maybe still clear enough? The short version is that Interpretation 2 seems to make group-level consciousness likely in any groups that are just above whatever minimum threshold level of organization is necessary for consciousness.

François Kammerer said...

Hi Eric,
I agree with you that my anti-nesting principle is hard to evaluate, as it is merely a kind of extra component that has to be added to a pre-existing theory of consciousness (and we don't yet have a fully established and detailed theory of consciousness, of course).
I see how your abstract case is supposed to function. However, I have some concerns about it. First, let's go back to your case: if I understand you correctly, the case you're describing would be a case of
1/ A group entity whose subparts are conscious of the whole
2/ These subparts act in a way that makes them coordinated enough to give to the group entity enough functional complexity and unity to grant a certain level of consciousness
3/ Their contributions to this complex functioning of the whole actually relies on their conscious representation of the whole (in the counterfactual sense that, if they suddenly stopped being conscious of the whole, the appropriate functioning would not take place)
4/ However, this contribution is quite simple and could, in theory, be made by a stupid, non-conscious homunculus
You say a few things about this case. The first is that, in such a case, condition B of my anti-nesting principle would not apply, and therefore the whole would be conscious. I agree with your diagnosis. Moreover, I rather like it, as I find it quite intuitive. Indeed, this situation describes rather well what happens in Block's Chinese Nation (and I personally would like to grant consciousness to Block's Chinese Nation). It also describes what would happen in a case that is a little variation on one of your imaginary cases. Let's call this case "Antarean Clever-Ant-Heads". It is similar to your "Antarean Antheads", except that we make the ants in the Anthead's brain more clever, so that they manage to develop a representation of the Anthead as a whole. They actually rely on this representation of the whole in order to motivate themselves to act as they do. But, by hypothesis, their behavior (in virtue of which they make the Anthead conscious) remains exactly the same: the contribution of the Clever Ants to the functioning of the Anthead is exactly identical to the contribution of the normal ants.
I think this case fits the abstract case you're describing perfectly, and I'm quite happy that my view predicts that consciousness is maintained in such a case since, again, I find this intuitive.
Now, you seem to build a kind of objection on the basis of this abstract case. This objection seems twofold to me:

François Kammerer said...

1/ Such cases could already be instantiated in the real world, which would mean that group consciousness actually exists (which is something I don't want to grant)
2/ It could be the case that such a conscious group exists as a subpart of a group which is more complex but not conscious, which seems a bit weird. For example, maybe Paris is conscious but not France as a whole, though Paris is a part of France (actually, most Parisians are so snooty about their city that I'm sure they would like that); maybe UC Riverside is conscious, but not the US.
My answer would be as follows:
1/ I don't have a proof that such a case is not instantiated in the real world, but I'm not sure the burden of proof lies on my side. Moreover, it also seems to me that such a case is unlikely. Here is why: conscious beings, such as human beings, are notoriously difficult to coordinate, because they tend to have their own thoughts, beliefs, and desires. In order to coordinate a huge number of human beings, you will need, at one level or another, a representation of the whole formed by these human beings, and a certain way of enforcing their coordination (this in turn requires that this representation of the whole be implemented in a system able to adapt to many situations in order to enforce the coordination, which makes it likely that the representation of the whole is conscious). Now, where do this representation of the whole and the mind or minds enforcing the coordination (both of which are NECESSARY, I think, for the coordination of a huge number of conscious beings) lie? If within the group, then condition B applies and group consciousness does not exist. If outside the group, that means we are in a "Chinese Nation" case, and group consciousness does exist. But if such a group organization were instantiated, surely we would know about it, because it would require that a huge number of people be required (perhaps willingly) to act in a coordinated way (and not just any way, but a very complex one), each of them acting in a way that could be done by a stupid homunculus, directed by someone or something who has a representation of the whole as a whole. I'm not saying that this kind of case is impossible, but I am saying that it is very unlikely to happen unbeknownst to us.
2/ Even if it happened: say the entire population of Paris were hugely paid by a crazy French billionaire to make coordinated phone calls whose functioning simulates the brain of a simple organism with some rough phenomenal states. The phone calls would be very simple to make, so that the people of Paris could in theory be replaced by simple chips: the anti-nesting principle would not apply, and "Paris" would be conscious, in this sense. But France, of which Paris is a part, and which has a global functioning more sophisticated than that of Paris, would not! However, I'm not sure this is a problem. In such a case, the complexity of the functioning of France does partially rely on the complexity of Paris, but not in virtue of the people in Paris making those dumb phone calls (only in virtue of their also doing other things: going to Parliament, making decisions collectively, etc.). So described, I don't find it really counter-intuitive, so I'm ready to bite the bullet!

PS: sorry for the very long comments!

Eric Schwitzgebel said...

I appreciate the long comments, Francois -- very helpful in making things clear. I agree with your restatement of my argument, and I allow that intuitions on 2 might legitimately differ. However, the main thought of my defense of 1 was something like a slippery slope argument. If we grant France-like non-conscious wholes that are non-conscious *because* their organizational sophistication is so high that it would have to rely upon conscious representations of the whole by some of the parts; and if we think it plausible that such levels of organization are not achieved by sudden leaps; then it seems natural to think there would be intermediate cases of wholes, constituted by people, whose organization is *not* so sophisticated that it requires conscious representations by the subparts.

That's the temporal slippery slope case. There will be an analogous levels-of-organization slippery slope case, which is what I was going for with the U.S. - California - Riverside idea.

One way to avoid the slippery slope might be to say that in plausible actual cases a group can't get across the threshold of organization sufficient for consciousness without having a level of organization so high that it could (in plausible actual cases) only be instantiated by entities with conscious representation of the group. That might sound plausible, but I think it's much more plausible on Interpretation 1 of the counterfactual (in the nearest world in which humans are not conscious of the group, the organization is missing) than on Interpretation 2 (there is no way in principle to instantiate that organization other than by beings that consciously represent the whole).

François Kammerer said...

Eric: Overall, I agree with you. I think I would indeed take the route you're describing in order to answer this kind of counter-example (by saying "that in plausible actual cases a group can't get across the threshold of organization sufficient for consciousness without having a level of organization so high that it could (in plausible actual cases) only be instantiated by entities with conscious representation of the group").

I also agree that Interpretation 1 of condition B would make the answer to this kind of counter-example much easier; however, I think that interpretation brings a lot of problems and does not allow us to deal properly with Block's case of a tiny organism in my brain (or with your "Leo" counter-example). So, overall, Interpretation 2 seems much better to me. I also find it more intuitive; finally, I must say that I had Interpretation 2 in mind from the start (not that this is a sufficient reason to keep endorsing it, of course!)

Thanks for the stimulating discussion!