Thursday, November 12, 2020

The Nesting Problem for Theories of Consciousness

In 2016, Tomer Fekete, Cees Van Leeuwen, and Shimon Edelman articulated a general problem for computational theories of consciousness, which they called the Boundary Problem. The problem extends to most mainstream functional or biological theories of consciousness, and I will call it the Nesting Problem.

Consider your favorite functional, biological, informational, or computational criterion of consciousness, criterion C. When a system has C, that system is, according to the theory, conscious. Maybe C involves a certain kind of behaviorally sophisticated reactivity to inputs (as in the Turing Test), or maybe C involves structured meta-representations of a certain sort, or information sharing in a global workspace, or whatever. Unless you possess a fairly unusual and specific theory, probably the following will be true: Not only will the whole animal (alternatively, the whole brain) meet criterion C. So also will some subparts of the animal and some larger systems to which the animal belongs.

If there are relatively functionally isolated cognitive processes, for example, they will also have inputs and outputs, and integrate information, and maybe have some self-monitoring or higher-order representational tracking -- possibly enough, in at least one subsystem, if the boundaries are drawn just so, to meet criterion C. Arguably too, groups of people organized as companies or nations receive group-level inputs, engage in group-level information processing and self-representation, and act collectively. These groups might also meet criterion C.[1]

Various puzzles, or problems, or at least questions immediately follow, which few mainstream theorists of consciousness have engaged seriously and in detail.[2] First: Are all these subsystems and groups conscious? Maybe so! Maybe meeting C truly is sufficient, and there's a kind of consciousness transpiring at these higher and/or lower levels. How would that consciousness relate to consciousness at the animal level? Is there, for example, a stream of experience in the visual cortex, or in the enteric nervous system (the half billion neurons lining your gut), that is distinct from, or alternatively contributes to, the experience of the animal as a whole?

Second: If we want to attribute consciousness only to the animal (alternatively, the whole brain) and not to its subsystems or to groups, on what grounds do we justify denying consciousness to subsystems or groups? For many theories, this will require adjustment to or at least refinement of criterion C or alternatively the defense of a general "exclusion postulate" or "anti-nesting principle", which specifically forbids nesting levels of consciousness.

Suppose, for example, that you think that, in humans, consciousness occurs in the thalamocortical neural loop. Why there? Maybe because it's a big hub of information connectivity around the brain. Well, the world has lots of hubs of complex information connectivity, both at smaller scales and at larger scales. What makes one scale special? Maybe it has the most connectivity? Sure, that could be. If so, then maybe you're committed to saying that connectivity above some threshold is necessary for consciousness. But then we should probably theorize that threshold. Why is it that amount rather than some other amount? And how should we think about the discontinuity between systems that barely exceed the threshold versus barely fall short?

Or maybe instead of a threshold, it's a comparative matter: Whenever systems nest, whichever has the most connectivity is the conscious system. But that principle can lead to some odd results. Or maybe it's not really C (connectivity, in this example) alone but C plus such-and-such other features, which groups and subsystems lack. Also fine! But again, let's theorize that. Or maybe groups and subsystems are also conscious -- consciousness happens simultaneously at many levels of organization. Fine, too! Then think through the consequences of that.[3]

My point is not that these approaches won't work or that there's anything wrong with them. My point is that this is a fundamental question about consciousness which is open to a variety of very different views, each of which brings challenges and puzzles -- challenges and puzzles which philosophers and scientists of consciousness, with a few exceptions, have not yet seriously explored.

--------------------------

Notes

[1] For an extended argument that the United States, conceived of as an entity with people as parts, meets most materialist criteria for being a conscious entity, see my essay here. Philip Pettit also appears to argue for something in this vicinity.

[2] Giulio Tononi is an important exception (e.g., in Oizumi, Albantakis, and Tononi 2014 and Tononi and Koch 2015).

[3] Luke Roelofs explores a panpsychist version of this approach in his recent book Combining Minds, which was the inspiration for this post.


23 comments:

Arnold said...

Psychological-consciousness or physiological/neurological-consciousness...are problems...

Phenomenological-consciousness's problem is understanding...what is an object...
...that understanding and consciousness are objects of self, of an objective self...

You wrote, 'a general problem for computational theories of consciousness, which they called the Boundary Problem.', thanks for searches...
...Do you have a view about the nestinglessness of phenomenological-consciousness...

Caesar said...

Thanks for this post; it was an interesting read. I do have a question that I hope you can answer.

We seem to be, in many of these instances, conflating Consciousness with what may end up being other, non-consciousness related things.

We have the following givens when talking about Consciousness.

1) Our only experience of Consciousness right now is in biological organisms, and that too one particular biological organism, homo sapiens.

2) Along with Consciousness, biological organisms have evolved other traits and features over time in order to survive. Things such as locomotion, vision, hearing, etc.

3) So Consciousness is either a feature we evolved to aid in survival or an epiphenomenon arising from something else that evolved for survival.

Now, when talking about other biological features, we don't go assigning them to anything other than the organism itself, so why do it with Consciousness?

For example, we wouldn't say that the locomotive ability of an animal means that we must somehow consider that all its parts have some small bits of Locomotion in them. Nor would we say that all the individual locomotions of each member of a group of animals somehow add up to a group level locomotion feature. We would consider these sorts of conjectures absurd for most biological features but not when talking about Consciousness.

Any thoughts why?

Thanks,
Caesar

SelfAwarePatterns said...

To be fair to many of these theories, it matters what domain the theory claims to apply to. A theory that simply says what it postulates *is* consciousness, like IIT, will claim that any system that meets its criteria is conscious. However, many other theories, like higher order or global workspace, make no such broad claims. Their domain is organic brains. In other words, they're only claiming to be a theory of how consciousness works in an overall brain, not consciousness in any system anywhere. That's not to say those theories may not pertain to AI, but you have to bring in a lot of other neuroscientific concepts.

That said, each of these theories involves a particular definition of consciousness. And that's problematic, since there is no consensus on such a definition. In truth, no one consistent definition of consciousness seems to include all systems we intuitively see as conscious and reliably excludes systems we intuitively see as not.

I think the reality is, consciousness is in the eye of the beholder. We can talk about systems that have memory, perception, attention, learning, imagination, emotion, and introspection, but which collections of such capabilities are conscious will always be a judgment call.

This shouldn't be surprising. The concept of consciousness began rooted in Cartesian dualism. As that notion has become untenable, that conception of consciousness has as well.

Put another way, when Alice ponders Bob's consciousness, she's pondering how much Aliceness Bob has. When Bob ponders Alice's consciousness, he's pondering how much Bobness Alice has. When people ponder animal consciousness, it's a question of how much humanness the animal might have. And when we ponder a machine, it's a question of how much lifeness it might have. In other words, what we really seem to mean by "consciousness" is: "like us."

When we look at it that way, it becomes much more obvious that there's no fact of the matter, just degrees of similarity.

Eric Schwitzgebel said...

Thanks for the comments, folks!

Arnold: My own view is that there are a variety of options, all with bizarre-seeming implications, and we don't have a good method for settling among them.

Caesar: I'm not sure about the exact meaning of claim 1. In a certain sense, my only experience of consciousness is in a single individual organism: me. In another sense, I experience other people and other mammals (such as my pet dog Cocoa) as conscious. Claim 3 seems right (on a broad reading of "epiphenomenon"), but is quite weak. The analogy to locomotion doesn't quite work, I think. That is a trait that is clearly applicable to the whole, yes. But other traits are applicable both to parts and wholes, such as being alive or representing. With being alive and representing, there doesn't seem to be any big obvious problem with nesting attributions (there might be non-obvious problems), but with consciousness, nesting attributions seem to lead to consequences that many people would find bizarre.

SelfAware: Right, good point that some of the more empirically-grounded theories of consciousness are explicitly limited to certain types of system and don't aim at a general C. I do think the question still arises about how to extend their domain-specific C to other entities or other levels, even if the theorist avoids commitment about how it would extend. The shift to being in "the eye of the beholder" seems wrong to me though. I have trouble wrapping my mind around non-realist views of that sort. I'm a phenomenal realist in the weak sense of that term (not committed to denying reduction, or materiality, or anything spooky): I think there are facts of the matter, independent of our judgments about those facts, about what organisms are or are not conscious. (Such a view is compatible with in-between or indeterminate cases.) I'm happy to hear more, but that's probably an immovable starting point for me.

SelfAwarePatterns said...

Eric,
I should clarify that I think for any precise definition of "consciousness" there is a fact of the matter. The difficulty is in getting everyone to agree with such a definition.

It's worth noting that consciousness isn't the only concept that's difficult to define. Life is another one. Are viruses alive? What about viroids? Or prions? The United States? :-) We can talk about whether these things replicate, maintain homeostasis, undergo evolution, etc. Each of those questions seems to have a definite answer. On the other hand, whether the entities in question have a single vitalistic life force depends on what we mean.

Likewise with consciousness, we can talk about whether a system has distance senses it uses to build image maps, exogenous attention, endogenous attention, episodic memory, various types of learning, introspection, etc. Each of these is easier to get a definite answer on than an overall assessment of consciousness.

The overall conclusion I reach is that consciousness, like life, is unlikely to ever be solved with a single theory. It will likely be a galaxy of theories, many of them complex and difficult for the lay public to understand, just as the various microbiological ones on the border between biology and chemistry are.

Patrick Glass said...

As I write my PhD sample there is definitely tension between addressing the fringe and more orthodox issues. I rather expect that my panpsychist views make the more conservative approach the wiser one.

Philosopher Eric said...

Professor,
I wonder if you could clarify your current stance on IIT? I was just reading Scott Aaronson’s 2014 assessment, though the conclusion left me unfulfilled. Instead of explaining that some sort of non-human object had high Phi / consciousness, what he actually said was:

“We could achieve pretty-good information integration using a bipartite expander graph together with some fixed, simple choice of logic gates at the nodes: for example, XOR gates.”

What the hell is that? A mathematical description fit for the printed page? A general symbolic representation? But fortunately he also linked to your 2012 post about IIT implying that the USA itself should be conscious. Apparently there was a hiccup however because you weren’t using the latest IIT version. Though I think you did do a good job undressing that version, apparently right now you’re giving IIT a pass. Are they allowed to simply state things such as “By the way, we’re no longer panpsychists” or “Nestings and expansions of our theory are no longer considered valid interpretations”?

Anyway I’m pleased that you’ve observed this trait in various other proposals. It doesn’t make them “wrong” as you’ve noted, though the point I’d include as well is that their open nature should put them in the class of “supernatural” when they are taken alone, or at least in need of a statement that they’re agnostic regarding whatever it is that effectively does create qualia. To stay right with John Searle and my own “thumb pain” version of his Chinese room, in a natural world processed information will require mechanical instantiation.

In a 2002 paper UK professor Johnjoe McFadden offered such theories a potential escape, though without takers. Note that there is no place in the body which produces the complex electromagnetic radiation which McFadden theorizes to exist as qualia, except for the brain.

Beyond his recent paper I’ve heard that he’s finally writing a book on his cemi field theory, hopefully to be published next year. Personally I think he should quit being so congenial and go on the offensive, preferably by incorporating my own thought experiment. Do you remember it professor? The TL;DR is that information based consciousness theories imply that if the right information on paper were properly converted into other information on paper, then something in this paper shuffle would thus feel the qualia that we know of when our thumbs get whacked. To avoid such funkiness a given theorist could either admit “Beyond my theory itself I have no opinion about which mechanisms create qualia”, or try to incorporate a theory which does specifically address those mechanics, such as McFadden’s.

jblackmon said...

I think there are good reasons to accept nesting despite its initial counterintuitive feel for many of us. For one, it's a medical reality that some people who have lost parts of their brains can report on the before/after phenomenal differences. I've argued that the best (though certainly not the only) interpretation of this is that the portion which survives the disconnection was already independently conscious beforehand. Our brains include multiple overlapping independently conscious regions. It seems crazy until you compare it to the alternatives, and some of the initially compelling objections (I only introspect myself, not others) don't hold up. Consciousness then would be like other physical properties that permit nesting and overlap: temperature, charge, volume, mass, energy.

Eric Schwitzgebel said...

Thanks for the continuing comments, folks!

SelfAware: Ah, that's clearer. For sure, there are complications about defining the term. I *hope* that there is a single best natural-kind "reference magnet" once we get our positive and negative examples lined up right -- i.e., a fact in nature about what-it's-like/consciousness/experience that is the natural referent of terms like "phenomenal consciousness" in philosophers' usage. But that is a risky assumption. (I have a paper about this in JCS in 2016.)

Patrick: Some people find panpsychism laughably absurd. I don't. But there is a risk!

P Eric: I definitely don't want to give IIT a pass, though I am politic in this post. I think the critique of Exclusion that I posted in 2014 (a version is also in my 2015 USA consciousness paper) is devastating to IIT 3.0 and later and has never been seriously addressed by friends of IIT, despite some off-the-cuff remarks in Tononi and Koch 2015. Luis Favela has the best engagement with it, in one of his papers (2018?), and suggests that IIT might need to revise Exclusion to be workable, though it's also a brief treatment and he doesn't have a full positive solution. On the expander graphs: I think they can be instantiated physically though of course they are usually instantiated virtually in typical serial computers. On paper: Yes. A related case is simply a Turing machine instantiated as Turing imagined on printed paper with a read/write head. If you accept that the Turing-machine equivalent of any conscious being is also conscious, and that conscious beings do have Turing-machine equivalents, then such an arrangement would be conscious.

jblackmon: That is an interesting idea, and the comparison to physical quantities is similar to Roelofs. It has some wild-seeming implications about the number of different streams of consciousness going on in your head, and working out their relationships and boundaries will be weird and interesting. But something wild and weird must be true. I completely agree that all the alternative views also have bizarre implications. There are no non-bizarre options left, and something "crazy"-seeming must be true! (On the last point, see my 2014 paper in AJP.)

jblackmon said...

I've yet to read Roelofs. Unger's 'The Problem of the Many' makes clear how absurd he finds the view. I argue for it in my 2016 paper on hemispherectomies and independently conscious brain regions.

Arnold said...

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4731102/ ...
...'For many decades, neuroscientists understood the brain as a 'stimulus–response' organ, consisting … In this traditional model, learning and experience merely modulate neural activity that is driven by sensory events in the world'...

Isn't it still too soon to attach consciousness to anything...
...that "many deep brain neural prediction system layer's" work, is to sort...
...not 'stream consciousness'...

Arnold said...

https://phys.org/
Does the human brain resemble the Universe?..by Università di Bologna, NOVEMBER 16, 2020

Article about the possible commonality of universes, cosmoses, brain-systems, bio-systems, AI-systems and consciousness (systems)...thanks

Patrick Glass said...

I was naive last year, and, not realizing the risk, I believe I leaned into the panpsychism a bit too much. I read Phenomenal Consciousness, Defined and Defended as Innocently as I Can Manage and it has been a guide for me as I rewrite my sample.

Your blog was/is incredibly helpful in the admission process, even though I found it late in the game. I'm hopeful that I can improve on last year's outcome of two wait-lists and one (disappointing) offer.

chinaphil said...

In the full knowledge of how dumb it is to declare that there's an easy solution to a hard problem... I'm going to declare that I still think there is a reasonably straightforward solution to this problem.

First, it seems fine that consciousness exists within some subset of the thing to which we normally assign consciousness - and that doesn't imply multiple streams of consciousness at all. So for example, I think consciousness is a feature of humans, and humans are 100% physical beings (is that called physicalism? I can never get these terms straight). If you cut a finger off a person, that would make them physically smaller, but wouldn't alter the fact that they're conscious. Clearly, a human being's consciousness can survive the removal of some physical parts of their body. So the fact that the term "conscious" can be applied to a complete human, but also some subset of a human, shouldn't strike us as inherently problematic. The fact that we *apply the term conscious* to a human, and also to some subset of a human, doesn't mean that two consciousnesses exist. It's just a feature of the way the word "conscious" (and most other words) works.

Nesting would exist as a *problem* iff we see certain things in the universe as conscious, AND believe that certain subsets of those things are also conscious, AND we believe that the consciousnesses we see in the things and their subsets are different consciousnesses. But in general, it's normal to see things (people) as conscious, and some subset of each of those people as conscious, simply because we usually work with "people" as a unit, without worrying about exactly which section of the person does what. In this sense, consciousness would be no different from redheadedness or jollity.

Even if we do see separate consciousnesses, I'm not keen to jump straight to Tononi's exclusion principle "at any given time there is only one experience having its full content...[within] a particular spatial and temporal grain". That just doesn't sound much like life as I know it. I feel like I'm having several experiences at the same time - as I write I'm sitting at the table, my wife and son are both at the table too, and I'm aware of them and of what I'm writing here. I don't see any reason to define this experience as "only one"; nor do I think that I necessarily spatially exclude other consciousnesses. For example, I could have a "split personality," with two persons inside my body having different experiences at the same time (I don't know if that's psychologically possible, but I don't think there's any conceptual problem with it).

So, I'm still leaning towards a Dennett-style view that there's nothing to see here. These worries about nesting and exclusion just seem to be... invented.

Eric Schwitzgebel said...

Thanks for the continuing comments, folks!

Jblackmon: Yes, interesting article! I think Unger misses the severity of the problem by assimilating it too quickly to superficially similar-seeming problems about object constitution.

Arnold: Thanks for the suggestion!

Patrick: Thanks. I'm glad you found the series, and the article, helpful.

chinaphil: As you say, the problem is:
"Nesting would exist as a *problem* iff we see certain things in the universe as conscious, AND believe that certain subsets of those things are also conscious, AND and we believe that the consciousnesses we see in the things and their subsets are different consciousnesses." As I see it, it's not a Tibbles/Tib+tail problem, like your finger case, which I'll leave to be sorted out in the metaphysics of objects. It's rather that you (arguably) get different streams of consciousness at different levels. I might also pull apart a couple of things in your last paragraph: A single unified stream of experience can presumably have several elements in it (see, e.g., Tim Bayne) or the elements could really belong to separate streams of experience that are only partly unified. The first option doesn't challenge Exclusion at all and is a very of saying "exactly one stream, exactly one privileged level". The second is more challenging to standard views, introducing questions and puzzles that people haven't yet worked out very well -- for example, concerning unity, introspective privilege, resolution of conflict, and a potentially unintuitive multiplicity of streams of consciousness.

jblackmon said...

Btw, I meant to refer to Unger's 'The Mental Problems of the Many', which is not just the good ol' problem of the many.

Stephen Wysong said...

I believe a clear and useful consensus definition can be crafted of consciousness as sentience, which means feeling (not intelligence as is often incorrectly assumed). Why not make the attempt and see how it develops? Here’s my starter contribution:

Definition:

Consciousness, noun. A biological, embodied simulation in feelings of external and internal sensory events and neurochemical states that is produced by activities of the brain.

Description:

That consciousness is a simulation is routinely overlooked, but is obvious when considering, for example, that we don’t see wavelengths of light, we see colors; we don’t hear compression waves in the air, we hear sounds ... and so on for each sensory mode. Consciousness is the simulation. The feelings are the contents of consciousness. Feelings are physical feelings like touch, pain, hunger, sight, sound and smell. Each sensory track is simulated in unique feelings. Diverse elements of a single sensory track are uniquely simulated, as in feelings of different colors in vision and different odors in olfaction. Conscious thought in words or pictures seems non-physical, but thought is sensory-inhibited physical feeling—speech-inhibited or vision-inhibited feeling respectively. Emotions are feelings that are simulations of a brain’s limbic system and neurochemical states: fear, anger, depression and so on. Normal conscious experience presents as a unified “movie-like” flow, as in “stream of consciousness.”

Note that David Chalmers of “hard problem” fame said in an interview that Thomas Nagel’s “what it’s like” to be a particular animal is what it feels like to be that creature. In his critique (“Is There Anything It Is Like to Be a Bat?”) of the “something it is like” phrase, P. M. S. Hacker uses the words “feel” and “feelings” about 60 times. Those two instances alone (and there are many more) would seem to validate a definition of consciousness as sentience.

Unconscious (non-conscious) brain processes may be involved in the resolution of feelings but should not be mistaken as consciousness. For instance, intelligence (pattern matching) and expectation resolution are not conscious processes.

Armed with the sentience-centered definition and description we can ask and answer questions about who or what is conscious. As I’ve mentioned in previous “The Splintered Mind” comments, even though we can only be certain about our own consciousness, we can infer consciousness in others based on biostructural, neurochemical/biochemical and DNA similarity. We can very confidently infer that primates and all mammals are conscious. The inference strength for the consciousness of birds and octopuses is very high as well. Rocks and computer laptops are not conscious. The United States is not conscious. We frequently consider the possible consciousness of AI systems, apprehensively or for moral reasons, but AI systems like Star Trek’s Commander Data are not conscious. That’s not to say, however, that we should rule out moral consideration for a non-sentient system that computes itself centered in a world.

Some philosophical preferences or conceptions would surely take issue with this definition but I believe the objections should be disallowed. GWT and IIT are theories with philosophical abstractions about the creation of consciousness, but workspaces and information relationships are not biological. Panpsychists and neutral monists routinely fail to define consciousness but use the term in a non-biological disembodied way, leading to the conclusion that panpsychism, neutral monism and similar philosophical beliefs are not about biological sentience.

Take any claim or alternate definition of consciousness, weigh it against the preliminary descriptive definition I’ve provided and let me know how it goes. A group effort would be helpful in refining this definition and overcoming its weaknesses, so jump right in with suggestions for improvements and clarifications.

Stephen Wysong said...

By the way, this definition was suggested by William James of stream-of-consciousness fame, in a talk of December 1, 1884 as printed in his book The Meaning of Truth. In the biography William James, Robert D. Richardson writes

“The Function of Cognition,” which might more helpfully have been called “What Cognition Is,” claims only to be “a chapter in descriptive psychology ... not an inquiry into the ‘how it comes,’ but merely into the ‘what it is’ of cognition.” It is, says James, “a function of consciousness,” which “at least implies the existence of a feeling.” He explains that he is using the word “feeling” to “designate generically all states of consciousness,” including those sometimes called “ideas” or “thoughts.” “Feeling” remains for James the most general, most inclusive term for “state of consciousness.”

Philosopher Eric said...

We seem to be really close on your proposal Stephen, so I’d like to help. Furthermore after reading the professor’s challenge to Keith Frankish, “Phenomenal Consciousness, Defined and Defended as Innocently as I Can Manage” (thanks to Patrick Glass above), I’d think he’d be open to your “sentience” based consciousness definition as well, rather than “intelligence”. (Before this post I hadn’t realized what a bad ass professor S. happens to be! Like Searle before him he seems to use sensible reasoning to challenge all sorts of pompous intellectuals who use their verbal acumen and charm to “bewitch” many intelligent people into funky positions.)

One suggestion that I’d make would be for us to reduce your “biology” requirement to something more basic. Notice that evolution should use the properties of physics / chemistry to “blindly engineer” the traits of life. Thus in principle it should be possible for something other than evolution to technologically create a sentient entity using that same physics / chemistry, and yet not also by means of any “evolved biological stuff”.

Personally I’m not optimistic about humanity building functional conscious robots some day, since our machines seem ridiculously primitive when compared against biological machines. But I also see no reason for us to draw our definitional circle smaller than it technically should be drawn. Accuracy here might even help our cause in the eyes of any sensible sci-fi lovers out there. I suspect that many can’t stand Searle mainly because they falsely presume his opposition to robot consciousness, essentially given his prominent use of the “biology” term. Beyond his strengths, let’s also use the faults of this UC Berkeley professor to help instruct us.

Eric Schwitzgebel said...

Thanks for the continuing thoughtful comments, everyone!

jblackmon: Right, I figured! (I do still think it overassimilates to the non-mental problem of the many.)

Stephen & Phil Eric: While I think Stephen's definition might work reasonably well for someone who is already in broad theoretical agreement with him, I prefer the much less theory-laden definition by example that I offer in the 2016 paper that Phil Eric mentions. It is for instance unclear to me why biology or neurochemistry should be required for consciousness, unless "biology" and "neurochemistry" are construed very broadly to include, for example, AIs that we might create in the future and organized groups and various types of hypothetical (possibly actual) alien entities -- maybe (if we want to engage also with dualists or idealists) even immaterial souls. Such entities might or might not be conscious (or in the case of souls, might or might not exist), but in my view that's a matter to be settled by reasoning and evidence rather than a matter that can be settled by the *definition* of "consciousness".

Kaplan Family said...

what if consciousness explains the part/whole relationship? -- in other words for something to be a part is for us to be conscious of it as a part, for something to be a whole is to think of it as a whole. If that's the order of explanation, it would make sense to me that we will never get anywhere trying to use part/whole logic to explain consciousness.
[ironically to leave this comment I had to check the squares with traffic lights -- there was a piece of a traffic light in one square and a piece in another square. I had to decide does "a piece of a traffic light" count as a traffic light. Signs and wonders!]

Philosopher Eric said...

Professor,
The reason I speculated that you might agree with Stephen’s “sentience” definition of consciousness is that it seems isomorphic with the “phenomenal experience” definition that you championed in that 2016 paper, effectively argued by means of relevant examples. I consider each of us to be referring to the same essential idea here. Hopefully Stephen will be able to reduce his “biology” stipulation to something more basic, at least conceptually, since you and I consider this essential.

Furthermore, beyond your illustration by example (which is clearly a great way to go), I’m interested in your “wonderfulness condition”. I agree that whatever creates phenomenal experience should remain wondrous to us, even after being somewhat explained.

An analogy lies in Newton’s gravity. He famously left the reason that mass attracts mass open for future natural philosophers to address. If he had instead postulated that his theory was the whole of it, and thus that no underlying wonder was left to discover, this would be analogous to the position of global workspace theorists today. It may be that their theory has some validity to it, but just as Newton postulated something wondrous beyond his base theory itself (a wonder later confirmed by Einstein and others), they should open up this door as well. There’s nothing wondrous about global information processing producing phenomenal experience, any more than about mass inherently attracting mass. Furthermore, this puts them under fire from accomplished philosophers such as Searle and yourself. (My own “thumb pain” thought experiment has yet to see the academic light of day, though I consider it devastating.)

Here Bernard Baars and newer proponents might ask, “Okay, what would it effectively take for us to follow the model of Newton and stay right with Eric Schwitzgebel’s ‘wonderfulness condition’?” It seems to me that they’d need to stop claiming that their theory does more than a natural world would permit it to. For example, I personally am quite impressed with the electromagnetic radiation proposal of Johnjoe McFadden, but in any case, to deny “wonderfulness” here is also to deny naturalism.

Stephen Wysong said...

Eric, when SelfAware commented about “systems we intuitively see as conscious … and not,” he wrote that “consciousness is in the eye of the beholder.” You responded “I think there are facts of the matter, independent of our judgments about those facts, about what organisms are or are not conscious.”

The sentience definition I proposed precisely specifies the facts of the matter of consciousness. I repeat it here for easy reference:

Consciousness, noun. A biological, embodied simulation in feelings of external and internal sensory events and neurochemical states that is produced by activities of the brain

This definition states the facts of the matter for the only consciousness we know of—human consciousness. Respecting evolutionary knowledge and biostructural, bio-neurochemical, and DNA similarity, the definition generalizes ‘human’ to ‘biological’ so that it encompasses closely related animals and even distantly related species like the octopus. I believe the remaining elements of the definition are scientifically valid and uncontroversial facts of the matter—activities of the brain resolve sensory events and neurochemical states and produce simulations of them in feelings.

If biology and neurochemistry weren’t required, as you and Phil Eric suggest, the definition would depart from known facts of the matter and be diverted into imaginative territory. The resulting evidence-free, nonfactual undefinition would then truly be in the eye of the beholder, which, in my opinion and perhaps that of SelfAware, is the current confusing state of much thinking and writing about consciousness.

Eric, if you would like a word to refer to an AI’s computation of itself centered in a world, why not invent one? Philosophy commonly invents vocabulary, and with the word consciousness already taken and clearly defined, inventing a new word would avoid creating confusion and promoting nonsense. For the AI case I suggest ‘aiwareness’ (pronounced eyewareness). If alien entities, when encountered, provide evidence that they are biological organisms structured similarly to ourselves, with feeling-based consciousness such as our own, they’d fit the facts-of-the-matter consciousness definition. If not, create another new word for the phenomenon the aliens exhibit, perhaps ‘alienwareness’. Create another new word to refer to group alienwareness once we have evidence it exists. And, since consciousness is already taken, why not also create new words for the fundamental ‘sensitivity’ of some kind that panpsychists and neutral monists suppose?

The existence of ghostly immaterial minds (dualism) and souls (religion) must be demonstrated before their features can be investigated, so, until convincing evidence is provided, a consideration of their sensitivities is meaningless.

The new vocabulary would probably not be as popular in philosophical publications as the current equivocal use of the word consciousness, but I hope that consideration isn’t relevant to Philosophy’s quest to understand.