Thursday, June 14, 2018

Slippery Slope Arguments and Discretely Countable Subjects of Experience

I've become increasingly worried about slippery slope arguments concerning the presence or absence of (phenomenal) consciousness. Partly this is in response to Peter Carruthers' new draft article on animal consciousness, partly it's because I'm revisiting some of my thought experiments about group minds, and partly it's just something I've been worrying about for a while.

To build a slippery slope argument concerning the presence of consciousness, do this:

* First, take some obviously conscious [or non-conscious] system as an anchor point -- such as an ordinary adult human being (clearly conscious) or an ordinary proton (obviously(?) non-conscious).

* Second, imagine a series of small changes at the far end of which is a case that some people might view as a case of the opposite sort. For example, subtract one molecule at a time from the human until you have only one proton left. (Note: This is a toy example; for more attractive versions of the argument, see below.)

* Third, highlight the implausibility of the idea that consciousness suddenly winks out [winks in] at any one of these little steps. (A toy sketch after this list makes the forced flip explicit.)

* Finally, conclude that the disputable system at the end of the series is also conscious [non-conscious].
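
Here's a minimal sketch of why the third step has bite, assuming consciousness is strictly all-or-nothing. Everything in it is invented for illustration (the 100-step series, the cutoff at step 60); the point is just that a binary predicate anchored oppositely at the two ends of a finite series must flip at some single step:

```python
# Toy illustration: a strict yes/no consciousness predicate, anchored
# "conscious" at one end of a finite series and "non-conscious" at the
# other, must flip at exactly one step, however tiny the steps are.

def find_flip(series):
    """Return the index of the first adjacent pair where the predicate flips."""
    for i in range(len(series) - 1):
        if series[i] != series[i + 1]:
            return i
    return None

# Invented stand-in: an arbitrary cutoff plays the role of the (implausibly
# precise) point at which consciousness "winks out" along the series.
series = [step < 60 for step in range(100)]  # True = conscious, False = not
print(find_flip(series))  # 59 -- one tiny step makes all the difference
```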

Now, slippery slope arguments are generally misleading for vague predicates like "red". Even if we can't pinpoint an exact transition from red to non-red in a series of shades from red to blue, it doesn't follow that blue is red. Red is a vague predicate, so it ought to admit of vague, in-betweenish cases. (There are some fun logical puzzles about vague predicates, of course, but I trust that our community of capable logicians will eventually sort that stuff out.)

However, unlike redness, the presence or absence of consciousness seems to be a discrete, all-or-nothing affair, which makes slippery slope arguments more tempting. As John Searle says somewhere (hm... where?), having consciousness is like having money: You can have a little of it or a lot of it -- a penny or a million bucks -- but there's a discrete difference between having only a little and having not a single cent's worth. Consider sensory experience, for example. You can have a richly detailed visual field, or you can have an impoverished visual field, but there is, or at least seems to be, a discrete difference between having a tiny wisp of sensory experience (e.g., a brief gray dot, the sensory equivalent of a penny) and having no sensory experience at all. We normally think of subjects of experience as discrete, countable entities. Except as a joke, most of us wouldn't say that there are two-and-a-half conscious entities in the room or that an entity has 3/8 of a stream of experience. An entity either is a subject of conscious experience (however limited their experience is) or has no conscious experience at all.

Consider these three familiar slippery slopes.

(1.) Across the animal kingdom. We normally assume that humans, dogs, and apes are genuinely, richly phenomenally conscious. We can imagine a series of less and less sophisticated animals all the way down to the simplest animals or even down into unicellular life. It doesn't seem that there's a plausible place to draw a bright line, on one side of which the animals are conscious and on the other side of which they are not. (I did once hear an ethologist suggest that the line was exactly between toads (conscious) and frogs (non-conscious); but even if you accept that, we can construct a fine-grained toad-frog series.)

(2.) Across human development. The fertilized egg is presumably not conscious; the cute baby presumably is conscious. The moment of birth is important -- but it's not clear that it's so neurologically important that it is the bright line between an entirely non-conscious fetus and a conscious baby. Nor does there seem to be any other obvious sharp transition point.

(3.) Neural replacement. Tom Cuda and David Chalmers imagine replacing someone's biological neurons one by one with functionally equivalent artificial neurons. A sudden wink-out between N and N+1 replaced neurons doesn't seem intuitively plausible. (Nor does it seem intuitively plausible that there's a gradual fading away of consciousness while outward behavior, such as verbal reports, stays the same.) Cuda and Chalmers conclude that swapping out biological neurons for functionally similar artificial neurons would preserve consciousness.

Less familiar, but potentially just as troubling, are group consciousness cases. I've argued, for example, that Giulio Tononi's influential Integrated Information Theory of consciousness runs into trouble in employing a threshold across a slippery slope (e.g., here and Section 2 here). Here the slippery slope isn't between zero and one conscious subjects, but rather between one and N subjects (N > 1).
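
To see why a hard threshold sits so uncomfortably on a slippery slope, consider a toy sketch. This is not Tononi's actual Φ calculus; the integration measure, the 0.5 cutoff, and the count of 100 subjects are all invented for illustration. The point is only that any theory pairing a smoothly varying integration measure with a hard cutoff makes the number of subjects jump at one arbitrarily small step:

```python
# Toy illustration (not IIT's real phi): a smoothly varying "integration"
# measure plus a hard threshold forces a discrete jump in the count of
# conscious subjects at one arbitrarily small step.

def count_subjects(integration, threshold=0.5, n=100):
    """Toy theory: n separate subjects below the threshold, one group
    subject at or above it."""
    return 1 if integration >= threshold else n

previous = None
for step in range(101):
    integration = step / 100  # varies smoothly from 0.0 to 1.0
    subjects = count_subjects(integration)
    if previous is not None and subjects != previous:
        print(f"At integration {integration:.2f}: {previous} subjects -> {subjects}")
    previous = subjects
# Output: At integration 0.50: 100 subjects -> 1
```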

(4.) Group consciousness. At one end, anchor with N discretely distinct conscious entities and presumably no additional stream of consciousness at the group level. At the other end, anchor with a single conscious entity with parts none of which, presumably, is an individual subject of experience. Any particular way of making this more concrete will have some tricky assumptions, but we might suppose an Ann Leckie "ancillary" case with a hundred humanoid AIs in contact with a central computer on a ship. As the "distinct entities" anchor, imagine that the AIs are as independent as ordinary human beings are, and the central computer is just a communications relay. Intermediate steps involve more and more information transfer and central influence or control. The anchor case on the other end is one in which the humanoid AIs are just individually nonconscious limbs of a single fully integrated system (though spatially discontinuous). Alternatively, if you like your thought experiments brainy, anchor on one end with normally brained humans, then construct a series in which these brains are slowly neurally wired together and perhaps shrunk, until there's a single integrated brain again as the anchor on the other end.

Although the group consciousness cases are pretty high-flying as thought experiments, they render the countability issue wonderfully stark. If streams of consciousness really are countably discrete, then you must do one of the following:

(a.) Deny one of the anchors. There was group consciousness all along, perhaps!

(b.) Affirm that there's a sharp transition point at which adding just a single bit's worth of integration suddenly shifts the whole system from N distinct conscious entities to only one conscious entity, despite the seemingly very minor structural difference (as on Tononi's view).

(c.) Try to wiggle out of the sharp transition with some intermediate number between N and 1. Maybe this humanoid winks out first while this other virtually identical humanoid still has a stream of consciousness -- though that's also rather strange and doesn't fully escape the problem.

(d.) Deny that conscious subjects, or streams of conscious experience, really must come in discretely countable packages.

I'm increasingly drawn to (d), though I'm not sure I can quite wrap my head around that possibility yet or fully appreciate its consequences.

[image adapted from Pixabay]

11 comments:

  1. Would split brains give us a way of imagining that there are no discrete subjects of experience? I think that's my preferred way of understanding things. When it's not clear whether there are two minds rather than one, maybe that's because there's just no answer. (Why do we think that subjects are countable anyway? Is it because the unity of consciousness is supposed to imply *a* unified consciousness?)

  2. Hasn't the slippery slope been on-going since we left Socratic here-ism for Platonic there-ism...
    ...That we seem to forget we are conscious--forgetting we have the only means at our disposal--our consciousness--with which to Be conscious...
    ...Perhaps we have--just so much attention towards consciousness because everything else on this planet is so interesting that we set aside our consciousness--as though it does not need attention...

  3. good stuff, see what you make of:
    http://newbooksnetwork.com/edouard-machery-philosophy-within-proper-bounds-oxford-up-2017/

  4. Let me see if I can give you a bit more fodder for option (d).

    I believe we can agree that not all streams of consciousness are of equal quality: if you consider an ordinary person to have a stream of consciousness, and a person in a vegetative state (functionally brain-dead) NOT to have one, there will be situations where a person wavers between the two, such as someone going in and out of a coma. It could be argued that every time we go to sleep there is an interruption in our stream of consciousness anyway... which raises some interesting questions about continuity of person from day to day. Are we even the same consciousness as yesterday, or are we each a brand new consciousness, only gifted with the memories of the person we were before?

    Similarly, there is a qualitative difference between an ordinary adult and someone in the late stages of Alzheimer's, whose stream of consciousness is fragmented.

    I would posit that the same qualitative differences exist within the spectrum of the animal kingdom as well; a toad does not have the same depth and breadth of consciousness as a human does. Perhaps as a tadpole they exist primarily on instinct, but then rise out of it to gain a consciousness with some continuity as an adult... but not a very strong continuity.

    A slightly dumber toad, however, would be functionally identical to the conscious toad while perhaps not meeting what your ethologist claims to be the threshold for consciousness. But in the same way that our quality of consciousness varies from day to day (there are definitely days I go more on autopilot than others), a toad might actually dip below and above that arbitrary threshold from day to day... but be, on the whole, functionally identical on those days.

    The "functionally identical" part is the critical part. If there is no practical difference, is there a difference that actually means anything?

  5. (Apologies if this is a duplicate. First attempt to post it got no type of confirmation, so I'm assuming that it failed...)

    Surely (a) cannot be the answer for all slippery slopes*, (b) is unacceptably magical without some explanation of *what* might be so crucial at, e.g., the ethologist's toads/frogs frontier, and (c), even if you were to accept its strangeness in the group consciousness case, doesn't extend to slopes of other kinds.

    (d) FTW! What is so strange about it actually? Everything we know about consciousness suggests a systems phenomenon, so it wouldn't be surprising if the slippery slope for consciousness turned out to be similar to the slippery slope for making a system simpler and simpler: it doesn't cross a boundary into non-system-hood, it just gradually becomes trivial.

    Doesn't it seem likely that the notion of "unitary subject / binarily conscious or not" is an artifact of how we've evolved to represent other actors in the environment, projected back onto ourselves?

    *Perhaps a panpsychist could say, "No, no: (a) is my answer for all slopes, because there's just never a non-conscious anchor for any of them." But I'd claim she's stuck with (d) anyway; otherwise it's death by the combination problem.

  6. How about inner speech in bilinguals? Once one becomes fluent, thinking (and dreaming) in the second language becomes the usual experience, but recollections of one's earlier life might be in the first language, as well as auditory verbal hallucinations. Is there a slippery slope between the two? (younger children, creoles)

    This reminds me of the genre of arguments Dennett argued against in the 90s. Do you know / remember "is consciousness the leading edge of memory" or "consciousness is more like fame than TV"? They're classic Dennett intuition pumps to try to stop us from expecting that intentional stance discriminations (conscious / not-conscious) have to have neurological correlates. And without them I'm not sure your SSAs slip.
    Maybe Searle is wrong about consciousness?

  8. If you asked the question instead in terms of movement (specifically self powered movement), would it be as hard to pin down?

    Perhaps the issue is that what movement does can be understood fairly readily. So what does consciousness do? Really?

  9. What if “Consciousness” was related to a class of processes in the same way digestion (digestiveness?) is related to a class of processes?

    What if these conscious-related processes could be described as processes wherein there is an agent (or mechanism, or constructor) such that when the agent is presented with a given input it generates a given output and the Agent itself is relatively unchanged so that it remains capable of repeating the process?

    Input —>[agent]—> Output. Let’s call such a process a “task.”

    What if such agents could be linked, outputs from some tasks becoming inputs to others, creating higher order agents?

    What if various properties relative to the inputs and outputs “emerge” as agents are combined in particular ways?

    What if there were a hierarchy of tasks that looked something like this:

    1. Lowest level: at least one task. Example Agent: proton

    2. A task which can sensibly be described as having a functional purpose. Example Agent: cell surface receptor? Single cell? Virus?

    3. A task whose purpose is to generate an essentially arbitrary signal intended to be used as input to a subsequent task. Example Agent: neuron

    4. A task wherein the input constitutes semantic information (i.e., an arbitrary signal created by a previous task) and the output constitutes a valuable response to the meaning of the input. Example Agent: [here be dragons.]

    5a. A task wherein the valuable output to semantic input includes memory.

    5b. A task wherein the valuable output to semantic input includes system-wide influences, aka, emotions.

    6. A task wherein the valuable output constitutes the establishment of a concept.

    7. A task wherein the input is semantic information whose meaning references a concept.

    8. A task wherein the valuable output constitutes an abstract (non-physical) concept.

    9. A task wherein the input is two or more concepts and the output is a single concept of the combination.

    10. A task wherein the input concepts are incompatible and the output concept is self-contradictory, e.g., complex numbers, “John the married bachelor”, “philosophical zombies”.

    What if we gave a name to the task which had the minimum necessary requirement for consciousness? What if we called that task a psychule (like a molecule is the minimal unit of a given substance)? Thus a panpsychist could say that the level 1 task was the psychule. A functionalist might say the level 2 task is the psychule. Many psychologists and neuroscientists might choose level 3. I, personally, would choose level 4. Those requiring an awareness of self (mirror test) might choose level 6 or higher. [There’s conceptual work to be done here.]

    If the above is correct, then it would be easier to pinpoint where on the slide from one anchor to the other one loses the capability of performing psychules, depending on what level one has placed the psychule.

    Quick notes:
    — you would not say the Agent is conscious. The agent would be part of a system which is conscious. As Dennett would say, the Agent has competence without comprehension.
    — it’s perfectly acceptable to refer to the consciousness of composite agents, such as the United States. You would simply be referring to the repertoire of psychules available to that agent.
    — the consciousness of sub agents would not be changed when they are considered part of a composite agent.

    *

  10. I don't find Searle's consciousness/money analogy very convincing. Here are some edge cases where there's no straight answer whether a person has money or not:

    - someone with no money apart from a foreign denomination coin
    - someone with no money apart from a Roman denarius
    - someone who is heavily in debt but believes they will be able to pay it back
    - someone who is heavily in debt and doesn't believe they will be able to pay it back
    - someone without any money of their own who is holding onto a coin for someone else
    - a cult leader who is provided with everything they need by their followers

    I agree you can't say things like 'two-and-a-half streams of consciousness', but if consciousness were a vague property you wouldn't expect to be able to say them. If the six people in the examples above were gathered together in a room, you couldn't say "so-and-so many people here have money"; all you could do would be to describe their individual situations.

    Sorry about the very slow reply, folks! I stopped getting comment alerts in my inbox, and then I was traveling and didn't think to look.

    Anon Jun 14: Split brains are one interesting case -- as are craniopagus twins!
    https://en.wikipedia.org/wiki/Craniopagus_twins

    Sparro: I agree that the amount of consciousness can vary, but I'm not sure about your claims about functional identity. Problematizing that supposed threshold is part of my aim in this post.

    Michael: If I can get my head around (d), that would be my preference. I'm not sure I can, though.

    David: Maybe that's a slippery slope, but not from conscious to non-conscious, right? So I'm not sure how it's relevant?

    James: Maybe so. I bet that we can create gray zones between your discrete numbers too, though.

    John: I agree about money. I meant it only to convey the idea, on a certain simple way of thinking about money. If it's not helpful, toss out the analogy!
