Call a theory of consciousness nonlocal if two entities that are molecule-for-molecule perfectly similar in their physical structure could nonetheless differ in their conscious experiences. My thought today is: Nonlocal theories of consciousness face an unattractive dilemma between (a.) allowing for physically implausible means of knowledge or (b.) allowing for the in-principle introspective inaccessibility of consciousness.
Clarifications:
Of course everyone agrees that entities with different causal histories and environments will tend to differ in their physical structure. Nonlocality requires the more unusual (and, to many, counterintuitive) view that two entities could differ in their conscious experiences even if their local physical structures were somehow exactly the same.

I intend the "could" in "could nonetheless differ" to reflect natural or nomic possibility -- that is, consistency with the laws of nature -- rather than conceptual or metaphysical possibility. So a view that holds, for example, that consciousness-lacking "zombie" twins of us are metaphysically but not nomically possible still counts as a local theory, as long as molecule-for-molecule locally identical twins would be nomically guaranteed to have the same conscious experiences.

"Local" needn't mean "in the brain". Theories on which conscious experience depends on states of the body or nearby environment still count as local in the intended sense. I won't try to draw a principled line between local and nonlocal, but nonlocal theories of the sort I have in mind make consciousness depend on events far away or deep in the past.

For the sake of this argument, I'm assuming the falsity of certain types of dualist and non-naturalist views. If consciousness depends on an immaterial substance not located in space, today's arguments don't apply.

Examples of nonlocal theories:

David Lewis's view in "Mad Pain and Martian Pain". Lewis holds that whether a brain state is experienced as pain depends on the causal/functional role that that type of brain state plays in an "appropriate population" (such as your species). This is a nonlocal theory because what role a brain state type plays in a population depends on what is going on with other members of that population, who can presumably be far away from you or exist in the past.

Views on which conscious experience depends on functions or representations that nonlocally depend on evolutionary or learning history. Fred Dretske's view in Naturalizing the Mind is an example. Your heart has the function of circulating blood due to its evolutionary history. If by freak quantum chance a molecule-for-molecule locally identical heart-looking thing were to congeal in a swamp, it would not have that evolutionary history, and it would not have that same function. Similarly for mental states, including conscious experiences, on Dretske's view: If a freak molecule-for-molecule duplicate of you were to randomly congeal ("Swampman"), the states of its brain-like subpart would lack functions and/or representational content, and on views of this sort it would have either no conscious experiences or conscious experiences very different from your own.

The Dilemma, Illustrated with a Simplistic Version of Lewis's View
Consider a crude version of Lewis's theory: You are in Brain State X. Whether Brain State X is experienced as painful depends on whether Brain State X plays the causal/functional role of pain for the majority of the currently existing members of your species. Suppose that Brain State X does indeed play the causal/functional role of pain for 90% of the currently existing members of your species. For that majority, it is caused by tissue stress, tissue damage, etc., and it tends to cause avoidance, protective tending, and statements like "that hurts!". However, for 10% of the species, Brain State X plays the causal role of a tickle: It is caused by gentle, unpredictable touching of the armpits and tends to cause withdrawal, laughter, and statements like "that tickles!". On Lewis's theory, this minority will be experiencing pain, but in a "mad" way -- caused by gentle, unpredictable touching and causing tickle-like reactions.
Now suppose a tragic accident kills almost all of that 90% majority while sparing the 10% minority. Brain State X now plays the causal role of pain only for you and a few others. For the majority of currently existing members of your species, Brain State X plays the causal role of a tickle. On this implementation of Lewis's theory, your experience of Brain State X will change from the experience of pain, caused in the normal way and with the normal effects, to the experience of a tickle, but caused in a "mad" way with "mad" effects.
If this seems bizarre, well, yes it is! With no internal / local change in you, your experience has changed. And furthermore, it has changed in a peculiar way -- into a tickle that plays exactly the causal role of pain (caused in the same way as pain and causing the same reactions). However, as I have argued elsewhere (e.g., here and here), every philosophical theory of consciousness will have bizarre implications, so bizarreness alone is no defeater.
If I can tell that my pain has turned into a tickle, then physically implausible forms of communication become possible. Suppose that, pre-accident, almost all of the 90% majority of normals live on an island on the other side of the globe. I am worried that a bomb might be dropped that would kill them all. So I pinch myself, creating a state of pain: Brain State X. As soon as the bomb is dropped, Brain State X becomes a tickle, and I know they are dead, even though no signal has been sent from that far-away island. If the far-away island is instead on a planet around a distant star, the signal might even constitute an instance of faster-than-light communication.
But maybe I can't tell that my pain has turned into a tickle. If the causal role of Brain State X remains exactly the same, and if our knowledge of our own conscious states is an ordinary causal process, then maybe this is the more natural way to interpret this implementation of Lewis's view. I will still say and judge that I am in pain, despite the fact that my experience is actually just a tickle. This is a bit odd, but introspection is fallible and perhaps even sometimes massively and systematically mistaken. Still, my ignorance is remarkably deep and intractable: There is no way, even in principle, that I could know my own experience by attending just to what's going on locally in my own mind. I can only know by checking to see whether the distant population still exists. Self-knowledge becomes in principle a nonlocal matter of knowing what is going on with other people. After all, if there were any way of knowing, locally, about the change in my experience, that would put us back on the first horn of the dilemma, allowing physically implausible forms of communication.
(For a similar argument, see Boghossian's objection to self-knowledge of externally determined thought contents. The externalists' containment/inheritance reply might work for Boghossian's specific objection, but it seems more strained for this case, especially when the difference might be between Experience X and no experience at all.)
The Dilemma, for Evolutionary Types
Alternatively, consider a view on which Brain State X gives rise to Experience Y because of its evolutionary history. Now of course that particular instance of Brain State X, and you as a particular person, did not exist in the evolutionary past. What existed in the past, and was subject to selection pressures, were brain states like X, and people like you.
We thus end up with a version of the same population problem that troubles the Lewis account. If what matters is the selection history of your species, then whether you are experiencing Y, experiencing Z, or experiencing nothing will depend on facts about the membership of your species that might have no physical connection to you -- members who were not your direct ancestors, who perhaps migrated to a remote island and had no further contact with your line. If you have any way to tell whether you are experiencing Y, Z, or nothing, you can now in principle know something about how they fared, despite no ordinary means of information transfer. If you have no way to tell whether you are experiencing Y, Z, or nothing, you are awkwardly ignorant about your own experience.
The dilemma can't be avoided by insisting that the only relevant members of the evolutionary past are your direct ancestors. This is clearest if we allow cases where the relevant difference is between whether you currently experience Y or nothing (where the latter is possible if the state doesn't have the right kind of evolutionary history, e.g., is a spandrel or due to an unselected recent mutation). If whether you experience Y or nothing depends on whether the majority of your ancestors had Feature F in the past, we can construct alternative scenarios in which 60% of your ancestors had Feature F and in which only 40% of your ancestors had Feature F, but the genetic result for you is the same. Now again, you can either mysteriously know something about the past with no ordinary means of information transfer or you are in principle ignorant about whether you are having that experience.
Other ways of attempting to concretize the role of evolutionary history generate the same dilemma. The dilemma is inherent in the nonlocality itself. To the extent your current experience depends on facts about you that don't depend on your current physical structure, either you seemingly can't know whether you are having Experience Y, or you can know nonlocal facts by means other than ordinary physical Markov processes.
[Whitefield Green Man by Paul Sivell]
The Dilemma, for Swamp Cases
Let's tweak the Swampman case: You walk into a swamp. Lightning strikes. You black out and fall on your face. By freak quantum chance a swamp duplicate of you is formed. You wake fifteen minutes later, face down in the mud, side-by-side with a duplicate. Are you the one who walked in, or are you the duplicate?
If an evolutionary history is necessary for consciousness, and if you can tell you are conscious, then you know you aren't the duplicate. But can you tell you're conscious? If so, it wouldn't seem to be by any ordinary, locally causal process, since those processes are the same in you and the duplicate. If not, then introspection has failed you catastrophically. So we see the same dilemma again: either a source of knowledge that fits poorly with naturalistic understandings of how knowledge works, or a radical failure of self-knowledge.
Or consider partial swamp cases. You and your twin stroll into the swamp. Lightning strikes and you both collapse. To one of you, the following happens: One part of your brain is destroyed by the lightning, but by freak quantum accident, fifteen seconds later, molecules congeal with exactly the same structure as the destroyed part. Suppose the visual areas are destroyed. Then you both wake up. On the natural reading of an evolutionary account, although both you and your twin are conscious and able to use meaningful language (unlike in evolutionary interpretations of the original Swampman case), one of you has no visual experiences at all. Again, either you can know which one you are by some method at odds with our usual understanding of how knowledge works, or you can't know and are radically in-principle ignorant about whether you have visual experience.
Of course all such swamp-cases are far-fetched! But on current scientific understandings, they are nomically possible. And they are just the sort of pure-form thought experiment needed to illustrate the commitments of nonlocal theories of consciousness. That is, it's a distilled test case, designed to cleanly separate the relevant features -- a case in which entities are locally identical but differ in history and thus, according to history-based nonlocal theories, also differ in conscious experience. (If there were no such possible cases, then consciousness would supervene locally and history would contribute only causally and not constitutively to conscious experience.)
The Dilemma, in General
Nonlocal theories of consciousness allow in principle for local twins with different experiences. If the local twins' self-knowledge tracks these differences in experience, it must be by some means other than normal causal traces. So either there's a strange form of knowing at variance with our ordinary physical accounts of how knowledge works, or the differences in experience are in principle unknowable.
---------------------------------
Related:
"The Tyrant's Headache", Sci Phi Journal, issue #3 (2015), 78-83.
The Weirdness of the World, Chapter 2, Princeton University Press.
"David Lewis, Anaesthesia by Genocide, and a Materialistic Trilemma" (Oct 13, 2011).