In an earlier post, I argued that the question “is there something it’s like to be a garden snail?” or equivalently “are garden snails conscious?” admits of three possible answers – yes, no, and *gong* (that is, neither yes nor no) – and that each of these answers has some antecedent plausibility. That is, prior to detailed theoretical argument, all three answers should be regarded as viable possibilities (even if we have a favorite). To settle the question, then, we need a good theoretical argument that would reasonably convince people who are antecedently attracted to a different view.
It is difficult to see how such an argument could go, for two related reasons: (1.) lack of sufficient theoretical common ground and (2.) the species-specificity of introspective and verbal evidence.
Lack of sufficient theoretical common ground.
Existing theories of consciousness, by leading researchers, range over practically the whole space of possibilities from panpsychism on one end, according to which consciousness is ubiquitous, to very restrictive meta-representational views on the other end that deny consciousness even to dogs.
The most common (which is not to say the best) arguments against these extreme views illustrate the common ground problem. The most common argument against panpsychism – the reason most people reject it, I suspect – is just that it seems absurd to suppose that consciousness is literally everywhere, even in, say, protons or simple logic gates. We know, we think, prior to our theory-building, that the range of conscious entities does not include protons or simple logic gates! Some of us – including those who become panpsychists – might hold that commitment only lightly, ready to abandon it if presented with attractive theoretical arguments to the contrary. However, many of us strongly prefer more moderate views. We feel, not unreasonably, more confident that there is nothing it is like to be a proton than we could ever be that a clever philosophical argument to the contrary was in fact sound. Thus, we construct and accept our moderate views of consciousness partly from the starting background assumption that consciousness isn’t that abundant. If a theory looks like it implies that protons are conscious, we reject the theory rather than accepting the implication; and no doubt we can find some dubious-enough step in the panpsychist argument if we are motivated to do so.
Similarly, the most common argument against extremely sparse views that deny consciousness to dogs and babies is that it seems absurd to suppose that dogs and babies are not conscious. We know, we think, prior to our theory-building, that the range of conscious entities includes dogs and babies. Thus, we construct and accept our moderate views of consciousness partly on the starting background assumption that consciousness isn’t that sparse.
In order to develop a general theory of consciousness, one needs to make some initial assumptions about the approximate prevalence of consciousness. Some theories, from the start, will be plainly liberal in their implications about the abundance of consciousness. Others will be plainly conservative. Such theories will rightly be unattractive to people whose initial assumptions are very different; and if those initial assumptions are sufficiently strongly held, theoretical arguments with the type of at-best-moderate force that we normally see in the philosophy and psychology of consciousness will be insufficiently strong to reasonably dislodge those initial assumptions.
For example, Integrated Information Theory is a lovely theory of consciousness. Well, maybe it has a few problems, but it is renowned, and it has a certain elegance. It is also very nearly panpsychist, holding that consciousness is present wherever information is integrated, even in tiny systems with simple connectivity, such as logic gates. For a reader who enters the debates about consciousness attracted to the idea that consciousness might be sparsely distributed in the universe, it’s hard to imagine any sort of foreseeably attainable evidence that ought rightly to lead them to reject that sparse view in favor of a view so close to panpsychism. They might love IIT, but they could reasonably regard it as a theory of something other than conscious experience – a valuable mathematical measure of information integration, for example.
Or consider a moderate view, articulated by Zohar Bronfman, Simona Ginsburg, and Eva Jablonka. Bronfman and colleagues generate a list of features of consciousness previously identified by consciousness theorists, including “flexible value systems and goals”, “sensory binding leading to the formation of a compound stimulus”, a “representation of [the entity’s] body as distinct from the external world, yet embedded in it”, and several other features (p. 2). It’s an intriguing idea. Determining the universal features of consciousness and then looking for a measurable functional relationship that reliably accompanies that set of features – theoretically, I can see how that is a very attractive move. But why those features? Perhaps they are universal to the human case (though even that is not clear), but someone antecedently attracted to a more liberal theory is unlikely to agree that flexible value systems are necessary for low-grade consciousness. If you like snails... well, why not think they have integration enough, learning enough, flexibility enough? Bronfman and colleagues’ criteria are more stipulated than argued for.
The species-specificity of verbal and introspective evidence.
The study of consciousness appears to rely, partly but importantly, on researchers’ or participants’ introspections, judgments about their experiences, or verbal reports, which need somehow to be related to physical or functional processes. We know about dream experiences, or inner speech, or visual imagery, or the presence or absence of an experience of unattended phenomena in our perceptual fields, partly because of what people judge or say about their experiences. Despite disagreements about ontology and method, this appears to be broadly accepted among theorists of consciousness.
Behavior and physiology are directly observable (or close enough), but the presence or absence of consciousness must normally be inferred – or at least this is so once we move beyond the most familiar cases of intuitive consensus. However, the evidential base grounding such inferences is limited. The farther we move away from the familiar human case, the shakier our ground. We have to extrapolate in a risky way, far beyond the scope of our direct introspective and verbal evidence. Perhaps an argument for extrapolation to nearby species (apes? all mammals? all vertebrates?) can be made on grounds of evolutionary continuity and morphological similarity. Extrapolating beyond the familiar cases to, for example, garden snails will inevitably be conjectural and uncertain. The uncertainties involved provide ample basis for reasonable doubt among theorists who are antecedently attracted to very different views.
Let’s optimistically suppose that we learn that, in humans, consciousness involves X, Y, and Z physiological or functional features. Now, in snails we see X’, Y’, and Z’, or maybe W and Z”. Are X’, Y’, and Z’, or W and Z”, close enough? Maybe consciousness in humans requires recurrent neural loops of a certain sort (Humphrey 2011; Lamme 2018). Well, snail brains have some recurrent processing too. But of course it looks neither entirely like the recurrent processing that we see in the human case when we are conscious, nor entirely like the recurrent processing that we see in the human case when we’re not conscious. Or maybe consciousness involves availability to, or presence in, working memory or a “global workspace” (Baars 1988; Dehaene and Changeux 2011; Prinz 2012). Well, information travels broadly through snail brains, enabling coordinated action. Is that global workspace enough? It’s like our workspace in some ways, unlike it in others. In the human case, we might be able to – if things go very well! – rely on introspective reports to help ground a theory about how broadly information must be shared within our cognitive system for that information to be consciously experienced, but it is by no means clear how we should then generalize such findings to the case of the garden snail.
So we can imagine that the snail is conscious, extrapolating from the human case on grounds of properties we share with the snail; or we can imagine that the snail is not conscious, extrapolating from the human case on grounds of properties we don’t share with the snail. Both ways of extrapolating seem defensible, and we can construct attractive, non-empirically-falsified theories that deliver either conclusion. We can also think, again with some plausibility, that the presence of some relevant properties and the lack of other relevant properties makes this a case where the human concept of consciousness fails to determinately apply.