Anti-nesting principles, in consciousness studies, are principles according to which one stream of consciousness cannot "nest" inside another. According to such principles, a conscious being cannot have conscious subparts -- at least under certain conditions -- even if it meets all other plausible structural criteria for being a conscious system. Probably the best-known anti-nesting principles are due to Hilary Putnam (1965, p. 434) and Giulio Tononi (2012, p. 297). Putnam's version is presented bare, and almost unmotivated, and has been criticized by Ned Block (1981, pp. 74-76). Tononi's version is more clearly motivated within his "Integrated Information Theory" of consciousness, but still (I think) has significant shortcomings.
In this forthcoming paper in Philosophia, Francois Kammerer takes another swing at an anti-nesting principle.
Though relatively neglected, nesting issues are immensely important to consciousness studies. Intuitively or pre-theoretically, it seems very plausible that neither subparts of people nor groups of people are literally phenomenally conscious. (Unless maybe the brain as a whole is the relevant subpart.) If we want to retain this intuitive idea, then either (a.) there must be some structural feature, plausibly necessary for consciousness, that individuals have but that groups and subparts of individuals lack, or (b.) consciousness must not nest for some other reason, even in cases where human groups or subparts would have the structural features otherwise necessary for consciousness.
In "If Materialism Is True, the United States Is Probably Conscious", I argue that human groups do have all the structural features that materialists normally regard as characteristic of conscious systems. A materialist who accepts that claim but wishes nonetheless to deny that groups of people are literally phenomenally conscious might then be attracted to an anti-nesting principle.
Kammerer's principle is a bit complex. Here it is in his own words:
"Given a whole W that instantiates the functional property P, such that W’s instantiation of P is normally sufficient for W to instantiate the conscious mental state S, W does not instantiate S if W has at least one subpart that plays a role in its functional organization which fulfills at the same time the two following conditions:
(A) The performing of this role by the subpart requires (given the nature of this functional role and our theory of consciousness) that this subpart has conscious mental states (beliefs, emotions, hopes, experiences, desires, etc.) that represent W (what it is, what it does, what it should do). That is to say, this subpart has a functional property Q, Q being a sufficient condition for the subpart having the conscious mental state R (where R is a mental state representing W). (B) If such a functional role (i.e., a functional role of such a kind that it requires that the subpart performing it has conscious mental states representing W) was not performed by at least one of the subparts of W, W would no longer have the property P (or any other functional property sufficient for the having of S). In other words: if no subpart of W had R, then W would no longer have S."
Short, somewhat simplified version: If the reason a larger entity acts like it’s conscious is that it contains smaller entities within it who have conscious representations of that larger entity, then that larger entity is not in fact conscious. (I hope that's fair, and not too simple to capture Kammerer's main idea.)
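For readers who like their principles regimented, here is one rough schematic rendering. The notation is mine, not Kammerer's: I read his "requires"/"sufficient condition" talk in clause A as a necessitated conditional, read clause B as a counterfactual, and simplify away his "(or any other functional property sufficient for the having of S)" qualification.

$$
\big[\, P(W) \;\wedge\; \mathrm{(A)} \;\wedge\; \mathrm{(B)} \,\big] \;\rightarrow\; \neg S(W)
$$

$$
\mathrm{(A)}:\quad \exists x \,\big( x \sqsubset W \;\wedge\; Q(x) \;\wedge\; \Box\big( Q(x) \rightarrow R(x) \big) \big)
$$

$$
\mathrm{(B)}:\quad \neg \exists x \,\big( x \sqsubset W \;\wedge\; R(x) \big) \;\;\Box\!\!\rightarrow\;\; \neg P(W)
$$

Here $x \sqsubset W$ means that $x$ is a proper subpart of $W$, $Q(x)$ that $x$ plays the relevant functional role in $W$'s organization, $R(x)$ that $x$ has a conscious mental state representing $W$, and "$\Box\!\!\rightarrow$" is the counterfactual conditional: were no subpart of $W$ to consciously represent $W$, $W$ would no longer instantiate $P$. If both conditions hold, $W$ does not instantiate the conscious state $S$.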
Though Kammerer's anti-nesting principle avoids some of the (apparent) problems with Putnam's and Tononi's principles, and is perhaps the best-developed anti-nesting principle to date, I'm not convinced that we should embrace it.
I'm working on a formal reply (which I'll probably post a link to later), but my main thoughts are three:
First, Kammerer's principle doesn't appear to fulfill the intended(?) role of excluding group-level consciousness among actually existing humans, since it excludes group consciousness in only a limited range of cases.
Second, Kammerer's principle appears to make the existence of consciousness at the group level depend oddly on factors on the individual-person level that might have no influence on group-level functioning (such as whether an individual's thinking of herself as part of the group is emotionally motivating to her, which might vary with her mood even while her participation in the group remains the same, creating "dancing qualia" cases).
Third, it appears to be unmotivated by a general theory that would explain why satisfying or failing to satisfy (A) or (B) would be crucial to the absence or presence of group-level consciousness.
None of these three points would be news to Kammerer, so making them stick would require more development than I'm going to give them today. But before doing that, I thought I'd solicit reactions from others -- either to the general issue of anti-nesting principles or to Kammerer's specific principle.
Update Jan. 26, 2016:
I have now drafted a more formal reply essay here.
---------------------------------------------------------
Related posts:
Martian Rabbit Superorganisms, Yeah! (May 4, 2012)
Tononi's Exclusion Postulate Would Make Consciousness (Nearly) Irrelevant (Jul 16, 2014)
The Copernican Sweets of Not Looking Too Closely Inside an Alien's Head (Mar 14, 2014)
Why [X] Should Think the United States Is Conscious (X = Dennett, Dretske, Humphrey) (Winter 2012).
Regarding whether one can motivate Kammerer's, or anyone's, anti-nesting principles by appeal to a more general theory, my 2011 Synthese paper, "Supervenience and Neuroscience," might perhaps be relevant. In it I derive a principle of "fine-grained" supervenience from a more general "no mental differences without physical differences" supervenience principle. The gist of the relevance for the nesting debate goes something like this: If some alleged group mind, M1, were implemented in virtue of the cooperative cognitions of a bunch of individuals with their own minds M2-Mn, there's a danger of there being two sets of mental facts implemented in virtue of precisely the same physical facts, but this violates the general principle of "no mental differences without physical differences." The M1 facts are presumably different from the M2 etc. facts--perhaps only M1 has any red qualia--but there would be no physical facts in virtue of which the difference arises.
Even if that's any good, I don't think it would threaten your USA arguments, Eric. I suppose your response should be something along the following lines: There are physical parts of the whole system that are included in the supervenience base of the group mind that don't figure in any of the individual minds, and those physical parts can be appealed to in spelling out the physical differences that make the mental difference. I wonder, though, if part of Kammerer's point (or possible point) in focusing on cases in which the component minds are thinking about the whole is to bring in some sort of externalist semantics whereby the component minds would have supervenience bases so broad as to compete with the candidate supervenience base of the hypothetical group mind. Like you say, though, these would only block, at best, a limited kind of nesting scenario. When the components aren't thinking such thoughts, it could be nesting agogo!
Won't the application of this (or perhaps any) nesting principle depend on where we start our analysis? E.g. If a brain in a vat could be conscious, then maybe my brain is conscious. But if my brain is conscious, and its role in my functional organization requires that it represent "me" as a whole, etc. then *I* (an entity of which my brain is a proper subpart) am not conscious, by the nesting principle. But we can also run it the other direction: if I am conscious, and my brain is a proper subpart of me, then my brain can't be conscious.
So in the case of the USA arguments, it seems that the nesting principle gives no way of deciding whether individual US citizens are conscious but the USA as a whole is not, or the USA is conscious and its citizens are not. (Obviously one of those choices is more plausible than the other, but the nesting principle itself doesn't tell us this.)
Consciousness nesting with our other nesting functions still leaves us with our ourness intact...even subjugating function to effort and will denies partial anti-nesting influences, as our ourness persists in representing some kind of wholeness...
re: certainty in ontology...
Pete: That's an interesting argument. I'm not quite sure I understand how the problem is supposed to go for either the Chinese Nation case or a hypothetical externalized version of my USA case. FGS from your paper is "If, at a given time, a single entity instantiates two distinct mental properties, it must do so in virtue of instantiating two distinct physical properties." Could it be that FGS is irrelevant to these cases because the Chinese Nation entity and/or USA is a distinct entity from the entity constituted by each (possibly Clarkishly extended) individual? Maybe the group entity is not distinct from the collection of all (extended) individuals, but the collection might either have no mental states or its mental state might just be the mental state of the organized system.
Nonmanifestation: Yes! Part of straightening out nesting will involve committing on the issue of whether the whole person or the brain is the locus of consciousness. But no one that I know of would say that the whole person is the location of one conscious stream and the brain is the location of a *different* conscious stream! (Actually, that would be a pretty interesting position to try to develop.) In the USA case, I assume that Kammerer and other anti-nesters will assume (with some justice) that we know that individual people are conscious, so if there's a nesting situation, it must be such that the USA is the entity deprived of consciousness thereby. (I could see this giving rise, possibly, to unintuitive epistemic consequences if one can learn from introspection that certain functional relationships don't hold among other people in the US, depending on how exactly the anti-nesting principle is supposed to work.)
Unknown 4:47: I'm not quite following. Maybe you could expand a bit on that idea?
We don't need to violate the anti-nesting principle to get novel kinds of conscious beings like corporations. Personally, I see no motivation for thinking some physical process can only be part of one stream of consciousness, but let's suppose that's true. Even so...
If anything has a claim to the consciousness that I'm aware of right now, it's the neural correlates of my consciousness. We know that these NCCs are a sub-set of the neurons in the brain. Some, such as those in the dorsal stream of visual processing, aren't feeding into the consciousness I'm referring to. But functionally isomorphic neural pathways are responsible for crocodile vision. It's plausible to think that crocodiles see; therefore it's plausible to think that my dorsal stream can see.
Whether one accepts that reasoning or not, it postulates multiple conscious beings within a single human head without violating the anti-nesting principle, because my NCCs do not physically overlap with the dorsal stream.
One might also say that an economy, organization, or nation is conscious without violating the nesting principle. It's possible that, say, the internet has consciousness, without that consciousness being composed of any organic matter. Corporations and nations could be similarly defined by sets of documents, external behaviors of humans, transactions, etc.
As far as I can tell, you don't need any physical overlap to include strange kinds of conscious beings.
Hi Eric,
It's a nice post. It's great to see this issue discussed here. I will try to answer your thoughts as best I can (and sorry in advance for my English). Sorry also that I had to split my comment in two (it was slightly too long for the blog format).
First of all: for the people interested who cannot get access to my paper on the Springer Website, I put a version of it on academia.edu:
https://www.academia.edu/5975605/How_a_materialist_can_deny_that_the_USA_is_conscious
Now, concerning the content:
First, I am really sorry for the intricacy of my formulation of my anti-nesting principle. I really tried to make it simpler, but I didn't manage to do it without leaving out some of the things I wanted to say. I think that your own (shorter) version of it is broadly correct (it partly captures the spirit of what I wanted to say), even though it seems to me that you perhaps did not give as much weight as I did to clause B of my principle (and that may perhaps explain why I disagree with the point you made in your second thought, as I will explain below).
Second, concerning your three thoughts on my anti-nesting principle.
1/ You say that my anti-nesting principle does not exclude, in principle, all kinds of group consciousness among humans. Actually, I rather take it to be an advantage of that principle, as I tend to think of group consciousness as a genuine possibility (even though I don’t think it is actual). What I wanted was a principle which would allow us to deny group consciousness in cases in which such group consciousness seems too counter-intuitive (and in a way that kind of captures what is so counter-intuitive about those cases).
But I would be curious to know what kind of group consciousness you had in mind in that “first thought”. Your talk about “actually existing humans” makes me think that you were maybe referring to cases of group consciousness that would be instantiated in our world. If that’s what you wanted to say, I would perhaps indeed find that more problematic. So I think it would be nice to have an example of what you were thinking about.
Eric:
2/ In your second thought, you seem to suggest that an objection (of the "dancing qualia" kind) could be mounted against my principle. The way I understand your remark, you're thinking about a case in which, for example, an individual (let's call her Laura) would act on Monday on the basis of her thoughts, emotions, and desires regarding her group as a whole. Laura would then have exactly the same behavior on Tuesday, except that this behavior would then be caused only by thoughts, emotions, and desires which do not involve representations of the group as a whole. This seems an open possibility at the very least.
You seem to imply that, in such a case, my principle would commit me to the idea that some extra conscious states S (partly constituted by the functioning involving Laura's behavior) belong to the whole on Tuesday, but not on Monday (because, on Monday, the anti-nesting principle applies and precludes the whole from enjoying those extra states), even though the functioning of the whole as a whole was exactly the same on those two days.
This would indeed be a rather problematic consequence of my principle.
However, I actually don't think at all that my principle is committed to this unfortunate consequence. Indeed, in the Monday/Tuesday case, it is obvious that the behavior of Laura which is responsible for the extra conscious states S that we are considering does not presuppose the having of a representation of the whole by Laura, as Laura can (by hypothesis) have this behavior on Tuesday (when she does not entertain those representations). So, when it comes to those conscious states S that we are considering, my anti-nesting principle would not apply (as clause B of the principle would not be fulfilled by the situation). In other words, I think that my principle grants that, if Laura can have the relevant behavior (and therefore give rise to the relevant functioning of the whole) without entertaining representations of the whole, then the conscious states of the whole hypothetically brought about by this behavior would exist whether or not they are actually caused by some representations of the whole. This is the very idea behind clause B of my principle (and my principle states that a situation has to fulfill both A and B to be covered by the anti-nesting principle). So, I think that my principle does not fall prey to this kind of 'dancing qualia' argument (at least if it takes this specific form).
Maybe I have misconstrued your objection (which was only sketchily described), in which case I'm really sorry and I would be glad to hear it in a more developed form.
3/ Your third point is perfectly correct, and I think that it is indeed a problem for my anti-nesting principle (and probably for most anti-nesting principles as well). I tried to tackle this problem as best I could in the fifth section of my paper. I acknowledge that my answer to this problem is probably not as convincing as one could wish.
It’s great to be able to have this discussion. I am looking forward to reading your detailed response to my own response to your paper.
Pete Mandik: I'm not sure I perfectly get the idea you mentioned (as I haven't had time to read your paper yet), but it seems to me that, if you grant that "physical" properties include "functional" properties, then cases of alleged nested consciousness presumably are not cases in which there is a mental difference without a physical difference, as some special functional organization is required of the whole in order for the whole to be conscious (for those who believe in nested consciousness). But perhaps there is something I don't get here.
Nonmanifestation: That's an interesting point indeed. Prima facie, I'm with Eric Schwitzgebel on this one: my anti-nesting principle is not supposed to be a theory of consciousness, but rather a principle that we add to a pre-existing theory (which can be expected to state whether I or my brain is conscious). Moreover: in itself, it does not seem counter-intuitive to make a kind of anti-nesting move in the case of brains and persons, because whatever your pet answer is ("what is conscious is the brain"; "what is conscious is the person, of which the brain is a subpart"), no one (or almost no one?) wants to say that the brain and the person are both genuinely conscious, with two distinct streams of consciousness.
This isn't a direct comment on the paper, but I just wanted to react to one of Eric's prefatory comments:
ReplyDelete"Intuitively or pre-theoretically, it seems very plausible that neither subparts of people nor groups of people are literally phenomenally conscious."
That's right, but we don't treat computers the same way, and so I think this question of groups or subparts will come up more pointedly if and when we start thinking that a computer might be conscious. The thought struck me when I was playing Battleships against a computer. Clearly I had input into the computer where my ships were; yet it was firing as if it did not know this information. And I trusted it to do so.
It struck me that this is one clear difference between human and computer: a human cannot "unknow" something; similarly, there are various capacities which a human cannot "unhave" - hearing, the ability to discern language, the sense of touch, etc. (And relatedly, suspension of these capacities, as in an isolation tank, is sometimes related to weird, out-of-body, or even out-of-identity experiences). But given current varieties of technology, we would believe a computer to be able to shut down any part of its knowledge, and almost any one of its cognitive faculties.
The relevance to this debate being: if we had a conscious computer, it almost certainly could divide itself into multiple subparts, each with/without consciousness. So it's not clear how much the pretheoretic intuition really holds.
I think I'd resist nesting principles at all, because I regard intention as being an important part of consciousness, and I reckon people can simultaneously hold two intentions, and so quite possibly have two consciousnesses (though I'm not sure if they'd be nested or parallel). But I haven't thought any of that through very carefully.
Francois, thanks for your gracious and detailed reply! It's not entirely fair to you, I know, to gesture briefly at concerns rather than developing them fully. I didn't want to overburden the reader with a lot of details. I'll expand a bit more now, and once I have a full draft version of my comments in circulating shape, I'll send you that too.
The first point really has two main subpoints. One is something like a burden of proof point. You don't, I think, do anything that *shows* it unlikely that there is actually existing group consciousness right now. You do aim to undermine my best case. I don't know that my argument depends on that specific case working out. Despite the title, my thesis is something closer to "if materialism is true, some actually existing group of humans is probably conscious, for example the United States". To the extent my argument works by appeal to the specific case of the U.S., your principle, if accepted, clearly shifts the burden back to me. However, to the extent my argument works by appeal to more general observations, it's less clear where the burden shifts if your principle is accepted.
The second subpoint is this: How *I* conceive of "the United States" for purposes of the argument (as a spatially distributed group entity with people as some or all of its parts) might not be how ordinary people conceive of "the United States". Depending on details of ontology and reference, then, it might turn out that even if we accept your principle and even if folk representation of "the United States" is necessary, the folk representation does not pick out exactly the entity I mean by "the United States" and thus the consciousness of *that* entity might not be excluded by your principle.
Continued....
On the second point, maybe I'll just paste the specific example I have in mind in the current draft of my reply piece:
"Required" is a tricky word. It's hard to know exactly how to read it, or how far afield to go in thinking about counterfactual cases, but on one possible flat-footed reading, proper motivation can sometimes be "required" for a person to do something. So let's suppose that Sunyi Lopes is a member of a conscious group and she is responsible for one important task – a task, perhaps, essential to the group entity's "seeing" something in some region of its environment – maybe she manages the video feeds from Sector 27A of the Tonga Trench. On Monday, she did not know about or represent the conscious group to which she belongs; she just did her business, making her contribution. On Tuesday, she learns about the existence of this conscious group, and now she does represent it, but she keeps on doing her thing. So far it seems plausible to say that her representation of the group is not "required" for her contribution to the group-level processes in virtue of which the group is conscious. So on Tuesday Condition A continues not to be met and thus the group remains visually conscious of Sector 27A of the Tonga Trench. On Wednesday, however, Sunyi becomes depressed. She momentarily decides that she's going to stop processing the video feeds. But then she thinks to herself that stopping would cause the group to lose visual information from that sector of the Tonga Trench. On Monday such considerations wouldn't have moved her, but now she represents the larger group as a group, and because of this she feels bad enough about possibly stopping that she decides to continue processing the feeds. No one else in the group is aware of Sunyi's hesitation, and it has no impact on how she processes the feeds. But now Condition A is arguably met: Sunyi's representation of the group is (motivationally, emotionally) required for her to contribute her part of the group cognition. Thus, applying Condition A, now the group is not visually conscious of Sector 27A of the Tonga Trench. This, despite presumably no difference in group-level self-report or behavior with regard to the Tonga Trench. Now suppose on Thursday Sunyi feels better, and the group-level representation is no longer required for her to feel motivated to do her task. Consciousness would seem to be restored, by Kammerer's criteria. Again, there need be no change in the behavior or processing of the system as a whole. This seems unintuitive.
On the third issue, I don't discuss your attempt to address it in the post, but I recognize that you do address it. One concern is that if your simplicity considerations were applied generally, you'd risk running into Kim-style causal exclusion problems for human-level mental states.
chinaphil: That's a very interesting point! I certainly agree that if AI continues to develop, we will see our intuitions about the mind -- already subject to problems for the human case, which I try to highlight in my work -- really go haywire. This is a nice way of developing that point.
Nice connection to "contradictory intention" cases too. I've thought a bit about "contradictory belief" cases, which don't fit within my own preferred ontology, since I prefer to keep belief whole-person and treat seemingly contradictory cases as in-between cases instead. My hunch is the same for intention, but I haven't worked that through carefully. So yes, this is also tied up with nesting. The more I think of it, the more amazing it is that there isn't a lot more work on the nesting issue....
Unknown Oct 19 8:43 pm: Part of me agrees with you about all of that -- but part of me also sees a slippery slope there that leads to a view even more wildly at variance with common sense than my USA consciousness view, enough at variance to give me substantial pause. I'm not sure where to go with all this, but because of cases like those you mention I do think nesting issues deserve a lot more attention in consciousness studies than they currently receive.
Although most of the cases you mention wouldn't violate Kammerer's anti-nesting principle, most of them would violate other anti-nesting principles, like Putnam's and Tononi's. On a moderate approach like Kammerer's, nesting is possible but limited to certain types of cases.
Yes, the most I can now say (congratulations François Kammerer responded to you) is...Are positions about conscious function then about the phenomena of function and phenomenal itself...
ReplyDeleteI can not access your email or post in a timely manner, so...
Wonderful the quality of your "substantial pause" and Kammerer's "wish" in consciousness...
Hi Eric,
Nice ideas. Here is my opinion on those:
Concerning your first point:
-I agree that there may be cases of nesting that are not covered by my principle, and that are in principle allowed by my principle. I also agree that amongst those cases of nesting, it may be that some are counter-intuitive cases of nesting that I wouldn't be glad to accept (even though I can't currently think of one in particular). However, nothing forces me to say that this principle is the only anti-nesting principle that one should accept. Perhaps a complete anti-nesting theory would require the formulation of many anti-nesting principles, which would all be considered true at the same time. It could even be quite natural if we consider that the exclusion of nested forms of consciousness comes from a need for simplicity and economy in consciousness ascriptions (as I tried to say briefly in the fifth part of my paper).
-The reference move seems to me to be a smart move. However, it seems to me that my principle would still hold if one grants a theory of reference that allows for some kind of "magnetic" component (and not a purely descriptivist theory). More generally, it seems to me that, if the representations of the people composing the United States were such that they referred to something fundamentally different (even if we grant a magnetic "direct reference" component in our theory of reference) -- say, if we could really distinguish between two seriously distinct entities, "United States 1" (as conceived by the members of the group) and "United States 2" (the one you are talking about and that would be hypothetically conscious) -- then it would be unlikely that United States 2 would still be able to act in a sufficiently coherent manner as a whole. And it would also be surprising that you (and not the other members of the group) managed to get a representation of this US2 whole (it seems to me that you would have to elaborate on that). But perhaps I didn't fully get your point.
Concerning your second point:
-I see your point more clearly now. My answer would be as follows: in both cases, the kind of functioning that is brought about by Sunyi cannot constitute a conscious state of the whole (cannot lead, for example, to a "conscious perception" of the Tonga Trench), whether or not Sunyi thinks about the US when she behaves the way she does. Indeed, it is not Sunyi's behavior alone that could be the basis of such a conscious state, but only her behavior coupled with many other facts (such as structural facts). But this function, of which her behavior is only a part, and which necessitates the instantiation of some structural properties (such as her spatial position, the social fact that she has such a job, the fact that machines have been built around her to allow her to do her job, the fact that she decided to take that job in virtue of some socio-economic considerations, etc.), seems to me to necessitate that some representations of the whole as a whole are consumed at least by some subparts which play an essential role in this functioning. It seems to me that this is enough for the hypothetical conscious states that we are considering to fall under the anti-nesting principle, and therefore to be dismissed.
I hope my point is clear enough (I grant that I should develop it in a more detailed manner, but I hope you got the gist of my response).
Eric:
Concerning the third point:
I see your point, as well as the danger of any "simplicity consideration" when it comes to consciousness. After all, why can't we do without any consciousness at all? I think that we have to distinguish here between metaphysical simplicity and epistemological simplicity. In the metaphysical sense, it is true that we could, in a way, "do without consciousness", and it's not surprising, because if we are materialists then we think that "consciousness" only refers to some set of physical/functional properties, and that there is no conscious extra-ingredient to look for.
But from an epistemological point of view, we can still talk about "consciousness" as a convenient way to describe the world, to explain/justify thought, behavior, etc.; the same way we can still talk about "chairs" and "tables" even though there is no extra-ingredient in them in addition to the atoms arranged table-wise. But we have to respect some simplicity rules when we talk of chairs and tables (and when we say truthfully that there is a chair there, for example). Otherwise, couldn't there be many chairs in front of me right now? The one I normally talk about, but also the one that is made of the same atoms minus one on the left, etc. I think that the same thing roughly applies to consciousness.
Unknown Mon Oct 19, 08:43:00
I don't think that my anti-nesting principle can exclude the cases you describe, even though (contrary to what you think) I would like to exclude them. I think that a complete theory of consciousness would therefore have to include many anti-nesting principles. (Ideally, all of them could be deduced from a single principle. I'm not sure that's possible; but we could still have many principles, all of them motivated by – though not deducible from – a single general principle governing consciousness ascriptions, which would have to do with the epistemological simplicity that should govern those ascriptions.)
Unknown Tue Oct 20, 10:07:00
I’m sorry but I simply don’t get your point. This may be linked to the fact that English is not my first language, and not even my usual working language. Sorry about that.
Eric Schwitzgebel-François Kammerer, thank you....
Remembering-reflecting "single principle" for stream of consciousness, as instinct (reproduction-survival), also provides (apparently) energy, in the form of stimulation
to the cause of epistemologists and metaphysicists attracted to simplicity-reaches natures origins by individuals for community...
After reading Schwitzgebel and Kammerer's papers, I led a discussion on strange minds (brain sub-regions, corporations, etc.) in our school's philosophy club (approximately 25 philosophically oriented undergrads in attendance). I did my best to represent Schwitzgebel's position, but without explicitly spelling out the arguments for or against believing the US (etc.) is conscious.
What I found most interesting is this. While the strong majority of students didn't agree that any strange minds exist, not one person gave anything like the anti-nesting principles as the explanation. They seemed just as resistant to believing in strange minds without nesting as they were to strange minds that violated the anti-nesting principle. Towards the end, I raised the suggestion of an anti-nesting principle, but the undergrads seemed unmoved. While they liked the upshot (the US is not conscious), they didn't seem to care much for this particular reason, and a few pointed out that it doesn't address *all* of the unusual kinds (a point raised earlier in this discussion), so they didn't see the point of bringing it up.
I think I'm with them. It's hard to see any non-ad hoc motives for introducing the principle, and it doesn't even do the work it's meant to do, even in the ad hoc way.
Interesting, Andrew -- thanks for that!