Consciousness must, it seems, be a vague phenomenon. The newly formed zygote has no conscious experience; the two-year-old child has conscious experience. I find it hard to believe that consciousness suddenly pops in, in a quantum leap. Even if it emerges fairly suddenly, it must still spread out across some stretch of time (a day, a second, a hundred milliseconds?) such that at a narrow enough temporal resolution it is gradual. Actually, I suspect the transitional period is fairly long: I know of no sudden neural or behavioral change (even at birth) that suggests anything other than an extended gradualism. In the transitional period, by definition, it will be a vague matter whether the organism is conscious, such that it's not quite right simply to say it's conscious and not quite right to say it's not, except perhaps as governed by flexible practical norms. (Compare going bald.)
The same considerations apply phylogenetically: Humans clearly have consciousness. Viruses clearly do not (unless you go panpsychist and say that everything is conscious -- that would clear up the zygote problem too -- but I assume we don't want to do that!). Why think there will be a sharp line across the phylogenetic tree? The only plausible place, it seems, for a sharp line would be between human beings and all others. But then evolutionary history becomes a problem: Modern humans, early Homo sapiens, Homo erectus, Homo habilis, Australopithecus, etc. -- which of these have conscious experience?
And yet I can't get my head around this vagueness. I can imagine what it is like for it to be a vague matter whether one is bald; I can picture the gradual transition. I can imagine what it is for it to be a vague matter whether a bottle is in a backpack (perhaps the bottle is partly melted or hanging half out). But how can it be a vague matter whether a particular being has conscious experience or whether a particular state is conscious?
Consider visual experience: I can imagine a very small speck of visual experience, with the visual field limited to one second of arc. But that's still straightforwardly a case of conscious experience, even if it's very limited. It wouldn't be a matter of convention or pragmatics whether to regard such a case as qualifying as conscious experience. I can imagine a very hazy visual experience, or a gappy experience, or one fading out toward medium gray; but all these too are straightforwardly cases of experience, however impoverished that experience is. They are discretely different in kind from lacking visual experience. So what then would it be to be partway between having visual experience and lacking it, so that it's a vague matter, so that it would be a pragmatic decision like with baldness or the water bottle half hanging out of the pack? As soon as there's the itsiest bit of visual consciousness, there's visual consciousness, end of story -- right?
You might point to cases of organisms with barely any sensory capacities as plausibly having perceptual consciousness so limited that it would be a vague case. But what would those organisms be? Not snails or ants, surely, who have complex sensory systems; the most extremely limited perceptual manifold must belong to the simplest of the multicellular animals, maybe even single-celled organisms. That seems rather far down the phylogenetic tree to find vague cases of consciousness, doesn't it? And if such simple organisms are conscious, why not also the personal computer you're viewing this post on, or the nation in which you live, which is in some ways as complicated in its reactions? If simple reactivity to the environment is sufficient for consciousness, why not go wholly panpsychist? If it takes more than simple reactivity, then there will be organisms with complex perceptual manifolds but for which it is transitional, vague, indeterminate, a pragmatic decision about the application of terms, whether they are conscious. So there would be complex perceptual not-quite-experience-not-quite-non-experience. Huh?
Theoretically, it seems to me that there must be vague cases, and that those cases must be reasonably far along the developmental and phylogenetic spectrum, to the point where the organism's reactivity is fairly richly structured. But I cannot imagine such vague cases. I can't make sense of them. I can't understand what they would be like.
And that seems to indicate that there's something fundamentally flawed in my concept of consciousness.
Hey Eric,
I love your conclusion. Also, I agree with it. There's something wrong with your concept of consciousness. Mine too. The interesting question, to me, is what to do about it. And the answer I favor is: let's roll up our sleeves and revise until we have a concept that's worth having.
The skills of conceptual analysis are relatively useless for discovery, since it seems like no concepts actually have analyses. But the skills we have as trained analytic philosophers can be pretty darned useful in various projects of conceptual re-engineering, so let's have a go at it!
It seems like having a simpler brain that can only "think" a limited scope of thoughts would limit how rich the content of your thoughts about yourself would be. Maybe we think our personal experience of the world is rich and deep and profound simply because there are so many different thoughts we can think about ourselves. So maybe it doesn't "feel like" much to be a mouse because mice can notice fewer things about the experience, and spend less time trying.
If that's all true then being a mouse might feel to a mouse like thinking about your foot feels to you: something you can be aware of, but aren't most of the time.
Somewhere I once read that consciousness is a constant line and exists so long as you can perceive time. So to remove the vagueness, one would have to determine when one begins to perceive time. Since the only true way we have to do that is through change, that is, knowing when one begins to notice and recognize change, there is of course no way to determine that as an external entity.
You really need to talk to some yogis.
The Western mindset is not cut out for anything to do with consciousness.
The ability or inability to conceptualise a particular state of affairs really says nothing about whether or not that state of affairs is possible. Especially when one is thinking about consciousness - introspection is notoriously unreliable.
It seems unremarkable to me that a fully conscious being cannot imagine partial consciousness. You would have to be partially conscious in order to conceive it (obviously ignoring the fact that a partially conscious being would not be capable of abstract conceptions). It's something like the idea of not being able to unlearn a skill. If e.g. you are a good guitar player it is well nigh impossible to remember how it felt to not be able to make chord shapes with your fingers and change quickly between them.
I agree with ledge - being unable to grasp the concept of partial consciousness says very little about our theories of consciousness. (Although, I'm not sure what the problem actually is, because I can understand more limited consciousness ranging down from our rich, reflective self-consciousness to less concept of self/other, to more immediate consciousness without reflection, to externally-triggered conceptual flashes like "danger" "food".)
Just to throw analogies around, assuming you are a sighted person, can you imagine being blind from birth? I can't. I know I can close my eyes and imagine seeing nothing but darkness, but seeing only darkness is not lacking sight - especially if it's, say, brain damage in the visual cortex where the brain cannot even comprehend sight. My inability to conceive of a total lack of vision, as opposed to seeing nothing but darkness, has no bearing whatsoever on the existence of either state or on any theories of vision. I've simply been locked into one so long that I cannot conceive of the other. The limitation is in my perceptions, not in any theory.
To flip the issue in the other direction, is it possible to be more conscious - have an even richer, more reflective awareness? Well, if you believe consciousness is neural activity, then there's no reason to think that we are the absolute pinnacle.
Take whales and dolphins, for example - although the logic/reason portions of their cerebral cortex are very primitive compared to ours, the emotional areas put ours to shame. So they, theoretically, have an emotional richness that we physically could never have. Can I conceive of what that would be like? No, not really. Does it make it any less true or mean there are automatically flaws in my theories of emotion? Of course not.
(Also, just to toss another fun fact out there - unlike primate brains where there is a visual cortex, auditory cortex, etc. - cetaceans have all of their senses feeding into a single area of the brain. So, *perhaps*, they don't see, hear, etc. but merely "sense" their surroundings holistically. I cannot even begin to imagine what that sort of perception would be like where hearing and sight and smell are indistinguishable. But I also don't think my lack of understanding makes the reality any less real.)
Great post. I think anyone interested in the problem would want to ruminate on this aspect of it.
Best regards,
- Steve Esser
Thanks for all the interesting comments, folks!
Pete: I'm pessimistic about how well grounded such a conceptual re-engineering can be in this case, and you're optimistic, a key difference between us!
Chris: Maybe it doesn't feel like much to be a mouse, but if it feels like anything then it's not a vague case, is it?
James: I'm not sure about the background theory about time. Why accept that?
Gregory: Maybe so! (Though I confess I'm not optimistic.)
Ledge: Yes, I agree that conceivability and possibility (in the sense of possibility I care about) often come apart. The guitar-player case is different from the vagueness case in the following way, I think: Even if I can't really imagine what it was like to be a beginner, beginner-like experience with such-and-such properties seems to me like a coherent possibility. In the case of vagueness about consciousness, I can't even conceive of it as a coherent possibility, except in a very abstract way. In-between cases of consciousness don't *make sense* to me the way it makes sense to me that a beginning guitarist might [fill in the blank].
Ken: I like your examples. Partly, I want to respond as I responded to ledge: I think my point here is different from the widely-acknowledged point that it's hard or maybe impossible to know what it's like to have a very different stream of experience. I haven't done a very good job, perhaps, articulating why. It seems to me that there's no incoherence in the idea of a being with radically different sensory or emotional experience; that doesn't seem to undermine my conception of consciousness in any way. But the idea of an in-between case of consciousness, that does seem incoherent to me -- unimaginable in some stronger sense -- and if it's also true that there must be such cases, then my concept of consciousness must be somehow flawed. All the "limited" cases of consciousness you mention seem to be straightforward cases of consciousness, not vague cases, right? So we still haven't managed to concoct a plausible example of an in-between case, have we?
Suggestion: what about peripheral vision? I am aware of the door in my peripheral vision, but it's quite a vague awareness. As it happens, I think there are even studies about peripheral vision which show that your brain has a lot more information about what's going on than you are aware of; so you might guess the pattern on the door right more often than not even if you can hardly "see" it.
What about the subconscious? Certainly lots of interesting stuff going on there that we are not conscious of.
By the way, personally, I lean towards panpsychism, for roughly the reasons you outline. It seems absurd to me to say that consciousness just starts at a certain point... I think there is probably a (boring) thing-that-it-is-like to be a thermostat.
I wonder what you think the well-grounded-ness of a re-engineered concept consists in.
Sheep: My sense is that peripheral objects are still visually experienced -- at least when my attention is right. When I'm not attending at all peripherally, maybe they're not experienced at all (it's a matter of debate). It might even be a vague matter, but if so still not one I can wrap my mind around: It seems to me they're either determinately experienced (in low resolution) or determinately not experienced.
Pete: It would have to capture the what-it's-like-ness, not change the subject -- otherwise it would be a different concept, not the same one fixed up. I more or less share Chalmers's 1995 and 1996 assessment of the bait-and-switch problem in theories of consciousness. Although I see how I could tweak around and clean up certain of my concepts -- e.g., my concept of belief, or of chairs -- I don't see how I'd even begin to do so with my concept of consciousness.
But I'm still worried that there's a flaw in it.
Eric,
This is a helpful way to focus our disagreement. I reject Chalmers's "bait-and-switch" claims.
On this issue, I recommend pp. 8-10 of Weisberg's and my paper, linked here:
http://www.petemandik.com/philosophy/papers/typeq.pdf
It seems to me that the main confusion in your post (and one I'm confused about too) is that you acknowledge that it doesn't make sense to view development (either individual or evolutionary) as having a sudden transition in terms of conscious-having, but then you insist on using the word "conscious" as a binary all-or-nothing trait. That is, you're searching for "vague" cases, by which it appears, from your usage, that you've already decided that conscious-having is a yes/no trait, and that you're looking for cases where it's hard to decide whether to give a "yes" or "no." But then you'll always have a continuum of cases, and cases near each other on either side of the yes/no dividing line you've drawn, which will make your line look arbitrary. Given that you don't see individual or evolutionary development as having a sharp dividing line, perhaps it would make more sense to think of conscious-having as continuous rather than binary; some things are more conscious, and other things less conscious. That is, being conscious has something to do with the richness of how you process your environment, and I process my environment in a way that's richer than that of a newborn human baby, which is in turn richer than that of a snail. So we are all conscious, but at different levels. And as Ken points out, perhaps it's not even a single-dimensional variable; rather, I might be more conscious than entity X in one category, but less conscious in another category.
In a similar vein, Sheep gives peripheral vision as an example where one's visual experience is significantly diminished, and your response is that it's still determinately experienced or not. It seems to me that the same issue arises here: doesn't it make more sense to describe visual experience as a continuum? Something that I barely process out of the corner of my eye as "something, probably mid-sized, of indeterminate color and shape, or possibly nothing" may still qualify as a visual experience, but one so quantitatively different that it should be rated as visual experience, level=28.5, rather than my normal full-on, focused visual experience, level=77.6. An organism that has nothing more than a single photoreceptor that it uses for a binary response to move towards or away from light would have visual experience, level=0.1.
Thanks for the link, Pete, and the continuing discussion. We should go out for coffee one of these days and hash it out!
I'm with Chalmers and Carnap -- even more Carnap -- on the analytic/synthetic distinction. Quine's criticism of it seems facile to me.
My reason for thinking that there's a bait and switch going on is partly inductive. Every functional analysis I've seen *seems* to me like a bait and switch; I can't seem to fashion one myself that's not; therefore I infer that they probably all are. Also (more a priori) it just seems to me to be part of the concept that consciousness is not amenable to functional reduction. But I don't regard this as airtight.
A resource available to the friend of analytic definition, when definitions shift in the face of empirical facts, is to individuate concepts finely: Here's an a priori definition of one concept (which we might call "pain") which makes phenomenality essential to it; here's an a priori definition of another concept (which we might also or instead call "pain") that does not do so. Emerging empirical knowledge or pragmatic considerations of various sorts may militate in favor of one or the other.
Thanks for the comment, Autumnal! That's right, my problem is that I think reality must be continuous and have vague cases while my conception of consciousness does not allow for them. In cases like the ones you mention, I allow for degree, but even the lowest degree is still determinately a case of consciousness, it seems to me -- unless there is no consciousness whatsoever! It's like money: You can have one cent or $10 or $1,000,000, but even having one cent is having money, and is discretely different from having not even a penny. (But this money analogy isn't perfect since I can comprehend vague cases of having money, where it would be a pragmatic decision what to say, e.g., if one owes more than one has, or if one is owed money by someone who may not pay back, or if the electronic transfer is temporarily hung up in cyberspace, or....)
I have for some time shared Eric's skepticism of carving off just what we find so interesting about our phenomenal life (what it is like to be at time t) at the point of reductive modeling or scientific explanation; and no doubt philosophers will be giving the normative 'no, no, no' to many eager scientists over the next few decades. However, I have monitored how my imagination has drastically changed just in the course of a few years as I have poked around in the mind science literature: with respect to, for example, the unity, transcendence, historical priority, and sophistication of conscious experience. Many of our folk concepts pertaining to consciousness - access, control, sophistication, etc. - have turned out to be globally illusory, and I am skeptical (!) that armchair skepticism has had much light to offer on the subject. I am also fascinated by the possibility of changing the intellectual history of the last couple hundred years by simply transporting - in a time machine for books - a few rough scientific notes on brain damage to Rene Descartes. I am therefore inclined to agree with Pete's conclusion: "Now let's do some science." (Mandik & Weisberg, 2008)
. . . to clarify: I think you make a very good point here, Eric, about our current concept of consciousness; it is just that I am not all that worried anymore about science's ability, in conjunction with other interdisciplinary work (in the New Enlightenment), to throw extremely satisfying light on this difficulty.
Is it really so hard to imagine something a bit lower on the scale of consciousness? What about a creature that experiences subjective sensations of pain, fear, joy, the taste of food, etc., but has extremely limited cognitive abilities so that it perceives these things only in a real-time, relatively unstructured way? What kind of consciousness is that? What about a creature with a more-or-less human brain, but without any education, language, or civilization -- some kind of feral child, say? Don't you think that the consciousness of such a creature would be different than our own?
Maybe what's wrong with your concept of consciousness is that you seem to assume that it's a unitary thing. But why couldn't what we experience as consciousness be the consequence of a lot of different, interacting systems in the brain, some of which other creatures might possess and some of which they might not? (And, of course, they might have others that we don't.)
Eric, it seems the problem only arises if you view conscious-having as so drastically different from not-conscious-having that it becomes a serious problem that there are cases where the level of consciousness is so low that we're not sure whether it's zero or just tiny. But I don't think that ambiguity is generally regarded as a serious problem for cases where we have a continuous range of levels, because tiny is not really that different from zero. To use your analogy, having net wealth, net debt, or precisely zero wealth are discrete categories. But we don't generally worry a ton about how to classify someone who is near a border. You're right, of course, that someone with one penny can be classified as "having money," while someone who owes two cents can be classified as in "debt." And for a homeless person who owns the clothes on her back (what's their monetary value?), has a stolen shopping cart (is this now "hers"?), and owes another person a pack of cigarettes (is that debt enforceable?), we might have a lot of difficulty pinning down which of the three discrete categories she falls into. But we also wouldn't find it that problematic that her position is a little fuzzy. Similarly, if we accept that conscious-having falls on a continuum, with humans at 45.2, bats at 16.1, and amoebae at 0.2, why is it so troubling that there are cases (e.g. viruses) where we're not clear whether the value is tiny or zero?
Anacoluthons: I think I can imagine that -- but those are still straightforward cases of consciousness, right? If so, they're not the vague cases I'm looking for. But still, you may be right that there's something problematically "unitary" in my notion of consciousness; I just can't see around that.
Michael: I agree that if the concept of consciousness has changed some already, it's plausible to suppose that it may change more in light of further scientific discoveries. I wonder how much it has changed, in your case or in mine, as opposed to views about the contents or properties of consciousness having changed (if the two types of change can be distinguished). I don't feel that my concept of consciousness has changed in any radical way at least since my first exposure to philosophy of mind (and maybe considerably farther back).
Autumnal: I agree with you about money -- and in the abstract, perhaps, it seems that consciousness must work the same way. But I feel like the money cases make sense to me in the way that the consciousness case does not. I still can't get my head around what would be involved in one of these vague cases. I hope I'm not just being deliberately thick-witted!
One possibility is that part of the issue is the conceptual reducibility, or not, of consciousness. I understand the in-between cases of money and baldness and having a bottle in the pack because I know the types of lower-level properties undergirding these notions and I can imagine having some amount or degree of such properties. No such hope, though, for me, in the case of consciousness, where no such reductive base is apparent to me....
On my view, we will be left with some sort of metaphorical mapping - as is perhaps the case with all such inquiry - as we understand more fully what in-between cases of consciousness are like. So just to begin, why not consider (and I am truly just wondering out loud here) the vague recollections, combined with reports from conscious witnesses, of what it was like to wake up from having your wisdom teeth pulled? From what I recall, that was some really weird in-betweenness. And what of those strange moments when waking up takes a bit too long? I had an occasion a few days ago trying to analyze my experience as I stared at my hand, which I had not fully owned again yet. And what about those more bizarre dream experiences that you can barely grasp only when startled out of your sleep? Or what about getting fully taken up in a rage, with primitive, hot emotion crowding out prototypical forms of human consciousness? Our brains tell a story as do the rings of a tree, right? Perhaps we really do have some historical resources that will allow us to at least get on with the metaphorical mapping. No?
And Eric, as for my concept of consciousness, I think it really has changed significantly. But perhaps you and I started out with different versions. Or perhaps you have not been as affected by the idea of the sophistication of the unconscious mind. Whatever the case, this idea of a changing concept of consciousness is fascinating, so thanks for the post! (And I wonder if we now need to change our concept of concepts before reinvestigating this question.)
Hi Eric,
I've worked a fair bit on this (though I'm not currently). I've got two papers you might want to take a look at, both of which can be accessed from my web page (along with a few other relevant papers):
(1) "Are Our Concepts Conscious State and Conscious Creature Vague?" Erkenntnis 68(2), March 2008, pp. 239-263.
(2) "Vagueness and the Metaphysics of Consciousness," Philosophical Studies 128(3), 2006, 515-538
Best wishes,
Michael Antony
Please, all of you, read the works of Bjorn Merker, especially 'The Liabilities of Mobility', in Consciousness and Cognition and available on Scribd, and 'Consciousness without a Cortex' in Behavioral and Brain Sciences.
His papers remove the vagueness from the concept of consciousness. Googling those two titles brings up many other interesting papers.
Michael: Yes, someone else mentioned to me your work on this. I apologize for not knowing it. I've put it near the top of my reading list.
Interstellar Bill: Merker's view is interesting, but I don't think it removes the vagueness. Nor do I think (independently of that) that it compels assent. There are *so many* theories of consciousness; this one seems to me not particularly better or worse than the competitors. For one thing, it seems to be what Hurley called a "classical sandwich" view of the sort that has been criticized by people like Hurley, Noe, and Wilson.
I think Julian Jaynes has the most eloquent, logical, and empirically supported concept of consciousness out there. I recommend everyone check out his work and see for themselves. To account for your paradox, he says:
"If understanding a thing is arriving at a familiarizing metaphor for it, then we can see that there always will be a difficulty in understanding consciousness. For it should be immediately apparent that there is not and cannot be anything in our immediate experience that is like immediate experience itself. There is therefore a sense in which we shall never be able to understand consciousness in the same way that we can understand things that we are conscious of."
The solution here is to stop thinking of consciousness or subjectivity as something we can or cannot "possess" and instead see it as a cognitive operator based on a vocabulary of embodied metaphors.
"Subjective conscious mind is an analog of what is called the real world. It is built up with a vocabulary or lexical field whose terms are all metaphors or analogs of behavior in the physical world. Its reality is of the same order as mathematics. It allows us to shortcut behavioral processes and arrive at more adequate decisions. Like mathematics, it is an operator rather than a thing or repository. And it is intimately bound up with volition and decision."
This approach to consciousness as a social-linguistic construction based on metaphors of behavioral experience fits in nicely with the work of Lakoff, Johnson, and Dennett. It also works well with the 4E paradigm of people like Noë and Clark, who emphasize that language expands cognitive capacity through scaffolding. See Douglas Hofstadter for an explication of this scaffolding in terms of self-referential analogies of self.
According to Jaynes, then, consciousness is nothing but a linguistic scaffold that operates as an analog of the real world and our experience in that world. Because we are primarily visual creatures, most metaphors of mind are based on visual access to a landscape. We can have mental "viewpoints" and "see" solutions to problems. "Seeing is believing." Mental space is an analog of real space, wherein things can be at the "back" or "top" of our minds. Lakoff and Johnson develop an extensive account of this metaphor of "Knowing as Seeing," in addition to all the other metaphors we employ in understanding the mind.
Jaynes even has an empirically worked out theory of how this linguistic consciousness developed, in accordance with historical and archeological evidence. I highly, highly recommend reading the first chapter of his "The Origin of Consciousness in the Breakdown of the Bicameral Mind". In fact, read the whole book; you won't regret it. He goes over all the failed attempts at understanding consciousness and dissects them in terms of being bad metaphors. The history of psychology is filled with such failed metaphors: "Consciousness as a Property of Matter," "Consciousness as a Property of Protoplasm," "Consciousness as Learning," "Consciousness as a Metaphysical Imposition," "The Helpless Spectator Theory," "Emergent Evolution," "Behaviorism," "Consciousness as the Reticular Activating System".
Jaynes presents a thorough destruction of all these failed theories and provides a precursor theory to all modern embodied/embedded approaches. Consciousness is not a "thing" but a metaphorical process based on analogies of embodied experience.
For what it's worth, I think it's possible I once experienced something which was somewhere in the vague area between visual experience and lack of visual experience.
I was at a party and had passed out. (Don't these stories always start this way?) Not due to drunkenness or anything. Long story. Anyway, point is, I'd passed out. And as I was coming to, I began to experience something _like_ visual experience, but not quite up to the level of full blooded visual experience. I can describe it in hindsight as follows. It seems that all of the visual impressions that should have been there were there. But there was no sense in which the impressions came together to form experiences of objects. Nor was it like there were floating color patches in front of my face. Indeed, there was no seeing going on. There wasn't even an attempt to puzzle out what I was seeing--because I _wasn't_ seeing. Nevertheless, I clearly remember there was something going on visually. I remember the impressions being there before my mind, so to speak. It's just that they did not cohere at all as _images_ or even as _parts_ of images.
I'm sorry I can't describe it any more clearly or accurately than that. But it's what happened, and it seems to me to be a candidate for something that is vaguely similar to visual experience, but also vaguely similar to having no visual experience at all.
Gary: That's a very interesting quote from Jaynes at the beginning of your comment. I definitely think there's something to that. Jaynes's positive theory I find unsatisfying, though. For one thing, I have a lot of trouble with the thought that consciousness only emerged in recent history.
Kris: Interesting experience! But it sounds like it was definitely an *experience* of some sort or other, however vague in its contents, yes? If so, it wouldn't be a vague case between having experience and not having it, would it?
I don't see why there is anything wrong with consciousness suddenly turning on at some point in development. After all, we wouldn't expect to see a difference in behavior based on that. Of course, when the lights turned on seems to be an empirical question with no possible way to answer it.
Read Tononi on consciousness and meanwhile think about falling asleep, which we talk about as a gradual diminution of consciousness or awareness. Sure, one is in a sense either asleep or awake, but something essential about consciousness--about our experience of it--is not so simply binary. Also read Alison Gopnik on baby vs. adult attention, if you haven't. Babies seem to be aware of everything at once, arguably because they have no algorithms to say what's salient, and nothing has been made rote or unconscious (the way driving a car or playing Rachmaninoff becomes rote after you've learned). Your concern for the all-or-nothing of consciousness suggests to me a belief in something sacred. But I think you're looking at the very opposite, the thing people have in common with dogs and perhaps fish. I imagine you disapprove of "pulling the plug" on Schiavo-like human bodies with no cortical brain function, because of this concern. Bodies that will never awake have no rights, I would say. Self-evidently, we are all created equal, but this is only trivially true when we are dead or asleep.
Thanks for the comment, MT. I have read Tononi and Gopnik, but I'm afraid I'm still confused!
ReplyDeleteEric
I don't think you are confused at all in terms of your concept of consciousness. Consider that there can be no middle ground between:
(a) something it is like to be X
and
(b) nothing it is like to be X
You can change your concept of consciousness if you like, but please don't! Then you'll just not be talking about consciousness any more.
Why not go panpsychist? It has its problems, but none so grave as the problems of finding a suitable non-vague event in complex animals (or even chemistry) for consciousness to emerge in.
I agree panpsychism is elegant. Rather hard to swallow, though! I'm not so sure about your claim that there's no middle ground between (a) and (b). I'm not a big fan of the law of the excluded middle in logical models of vague cases, and to assume a lack of vagueness is to beg the question at the heart of this post.
ReplyDeletethis is tough because
1) With consciousness, we haven't really got the definition tied down.
2) Consciousness is most accurately ascribed to processes, not to structures like an animal (e.g., when my process, life, ceases, I am no longer conscious despite my physical structure remaining mostly the same).
So our question starts from a very difficult position.
I tend towards saying that consciousness is a process from which there is a meaningful "what it is like" interpretation (and that there is no other requirement). The first time that occurred is the first "flash" of consciousness, regardless of the structure on which it occurred.
Am I a panprotopsychist? Or maybe a panprotoexperientialist? Or something else?
GNZ
I'm inclined to agree with your characterization, gnz, though I think one can accept that and still think that (proto-?)consciousness is sparse rather than abundant. Radical abundance is theoretically elegant in a way, since it avoids the sudden occurrence issue.
ReplyDeleteSo what is the concern in regard to radial (proto) abundance? i mean, if it more elegant, I feel we would need a positive reason (based on logic) to reject it.
I guess I would have an issue with it as an explanation if it were mystical radical abundance, but I don't take it that way...
But I'm not sure if the key thing is that, or just a difference in our views on the likelihood and required complexity of having a process for which there is a meaningful "what it is like"...
GNZ
So I am trying to imagine your marginal case... Maybe this will make some sense...
I imagine a brief flash of "consciousness" with the minimal amount of context (/richness) and then nothing.
This process is one that meets whatever criteria we have set for "there being something it is like," but it is so vague and short that we could not even make sense of it as consciousness if it happened in our minds amongst our other thoughts.
The thought is very simple - like 'that something is not nothing' - and does not link to anything like a visual experience of colour, with all the richness that we might associate with that.
GNZ
GNZ: Against panpsychism, I appeal to its radical conflict with common sense, coupled with the lack of much to support it *other* than theoretical elegance. I have argued that conflict with common sense shouldn't be a defeater, and in fact all ambitious metaphysical theories of the mind inevitably conflict with common sense, so I do think panpsychism is a live option. (For more on this see "The Crazyist Metaphysics of Mind" in draft.)
GNZ: Like you, I have trouble conceptualizing an in-between case. But I also think that (barring panpsychism!) there are pretty attractive theoretical reasons to think there must be in-between cases. And that's exactly why I'm concerned about my concept! On the general issue of vagueness and two-valued logic, well, it's a big issue. But see my treatment of in-between cases of belief for some discussion of it, e.g., my 2010 paper "Acting Contrary...". Short version: I'm not a fan of two-valued logic in modeling vague cases.