Friday, October 04, 2019

What Makes for a Good Philosophical Argument, and The Common Ground Problem for Animal Consciousness

What is it reasonable to hope for from a philosophical argument?

Soundness would be nice -- a true conclusion that logically follows from true premises. But soundness isn't enough. Also, in another way, soundness is sometimes too much to demand.

To see why soundness isn't enough, consider this argument:

Premise: Snails have conscious sensory experiences, and ants have conscious sensory experiences.

Conclusion: Therefore, snails have conscious sensory experiences.

The argument is valid: The conclusion follows from the premise. For purposes of this post, let's assume that the premise, about snails and ants, is also true and that the philosopher advancing the argument knows it to be true. If so, then the argument is sound and known to be so by the person advancing it. But it doesn't really work as an argument, since anyone who isn't already inclined to believe the conclusion won't be inclined to believe the premise. This argument isn't going to win anyone over.
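For those who like to see the logical form laid bare, here is a minimal, purely illustrative sketch in the Lean proof assistant. The inference is just conjunction elimination, so validity is not where the trouble lies:

-- From a premise of the form "P and Q", the conclusion P follows
-- by conjunction elimination. The argument is trivially valid;
-- the problem is that no one who doubts P will grant the premise.
example (P Q : Prop) (h : P ∧ Q) : P := h.left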

So soundness isn't sufficient for argumentative excellence. Nor is it necessary. An argument can be excellent if the conclusion is strongly suggested by the premises, despite lacking the full force of logical validity. That the Sun has risen many times in a regular way and that its doing so again tomorrow fits with our best scientific models of the Solar System is an excellent argument that it will rise again tomorrow, even though the conclusion isn't a 100% logical certainty given the premises.

What, then, should we want from a philosophical argument?

First, let me suggest that a good philosophical argument needs a target audience, the expected consumers of the argument. For academic philosophical arguments, the target audience would presumably include other philosophers in one's academic community who specialize in the subarea. It might also include a broader range of academic philosophers or some segment of the general public.

Second, an excellent philosophical argument should be such that the target audience ought to be moved by it. Unpacking "ought to be moved": A good argument ought to incline members of its target audience who begin neutral or negative concerning its conclusion to move in the direction of endorsing that conclusion. Also, members of its target audience antecedently inclined in favor of the conclusion ought to feel that the argument provides good support for it, reinforcing their confidence in the conclusion.

I intend this standard to be a normative standard, rather than a psychological standard. Consumers of the argument ought to be moved. Whether they are actually moved is another question. People -- even, sad to say, academic philosophers! -- are often stubborn, biased, dense, and careless. They might not actually be moved even if they ought to be moved. The blame for that is on them, not on the argument.

I intend this standard as an imperfect generalization: It must be the case that generally the target audience ought to be moved. But if some minority of the target audience ought not to be moved, that's consistent with excellence of argument. One case would be an argument that assumes as a premise something widely taken for granted by the target audience (and reasonably so) but which some minority portion of the target audience does not, for their own good reasons, accept.

I intend this standard to require only movement, not full endorsement: If some audience members initially have a credence of 10% in the conclusion and they are moved to a 35% credence after exposure to the argument, they have been moved. Likewise, someone whose credence is already 60% before reading the argument is moved in the relevant sense if they rationally increase their credence to 90% after exposure to the argument. But "movement" in the relevant sense needn't be understood wholly in terms of credence. Some philosophical conclusions aren't so much true or false as endorsable in some other way -- beautiful, practical, appealing, expressive of a praiseworthy worldview. Movement toward endorsement on those grounds should also count as movement in the relevant sense.

You might think that this standard -- that the target audience ought to be moved -- is too much to demand from a philosophical argument. Hoping that one's arguments are good enough to change reasonable people's opinions is maybe a lot to hope for. But (perhaps stubbornly?) I do hope for it. A good, or at least an excellent, philosophical argument should move its audience. If you're only preaching to the choir, what's the point?

In his preface to Consciousness and Experience, William G. Lycan writes:

In 1987... I published a work entitled Consciousness. In it I claimed to have saved the materialist view of human beings from all perils.... But not everyone has been convinced. In most cases this is due to plain pigheadedness. But in others it results from what I now see to have been badly compressed and cryptic exposition, and in still others it is articulately grounded in a peril or two that I inadvertently left unaddressed (1996, p. xii).

I interpret Lycan's preface as embracing something like my standard -- though with the higher bar of convincing the audience rather than moving the audience. Note also that Lycan's standard appears to be normative. There may be no hope of convincing the pigheaded; the argument need not succeed in that task to be excellent.

So, when I write about the nature of belief, for example, I hope that reasonable academic philosophers who are not too stubbornly committed to alternative views will find themselves moved in the direction of a dispositional approach (on which belief is at least as much about walking the walk as talking the talk) -- and I hope that other dispositionalists will feel reinforced in their inclinations. The target audience will feel the pull of the arguments. Even if they don't ultimately endorse my approach to belief, they will, I hope, be less averse to it than previously. Similarly, when I defend the view that the United States might literally be conscious, I hope that the target audience of materialistically-inclined philosophers will come to regard the group consciousness of a nation as less absurd than they probably initially thought. That would be movement!

Recently, I have turned my attention to the consciousness, or not, of garden snails. Do garden snails have a real stream of conscious experience, like we normally assume that dogs and ravens have? Or is there "nothing it's like" to be a garden snail, in the way we normally assume there's nothing it's like to be a pine tree or a toy robot? In thinking about this question, I find myself especially struck by what I'll call The Common Ground Problem.

The Common Ground Problem is this. To get an argument going, you need some common ground with your intended audience. Ideally, you start with some shared common ground, then perhaps introduce factual considerations from science or elsewhere that you expect the audience will (or ought to) accept, and then you deliver the conclusion that moves them in your direction. But on the question of animal consciousness specifically, people start so far apart that finding enough common ground to reach most of the intended audience becomes a substantial problem, maybe even an insurmountable one.

I can illustrate the problem by appealing to extreme cases, but I don't think the problem is limited to extreme cases.

Panpsychists believe that consciousness is ubiquitous. That's an extreme view on one end. Although not every panpsychist would believe that garden snails are conscious (they might think, for example, that subparts of the snail are conscious but not the snail as a whole), let's imagine a panpsychist who acknowledges snail consciousness. On the other end, some philosophers, such as Peter Carruthers, argue that even dogs might not be (determinately) conscious. Now let's assume that you want to construct an argument for (or against) the consciousness of garden snails. If your target audience includes the whole range of philosophers, from panpsychists to those with views about consciousness as restrictive as Carruthers's, it's very hard to see how you could speak to that whole range of readers. What kind of argument could you mount that would reasonably move a target audience with such a wide spread of starting positions?

Arguments about animal consciousness seem always to start already from a set of assumptions about consciousness (this kind of test would be sufficient, this other kind not; this thing is an essential feature of consciousness, the other thing not). The arguments will generally beg the question against audience members who start out with views too far away from one's own starting points.

How many issues in philosophy have this kind of problem? Not all, I think! In some subareas, there are excellent arguments that can or should move, even if not fully convince, most of the target audience. Animal consciousness is, I suspect, unusual (but probably not unique) in its degree of intractability, and in the near-impossibility of constructing an argument that is excellent by the standard I have articulated.


12 comments:

SelfAwarePatterns said...

I think when it comes to the common ground problem, one strategy is to clarify exactly what is being discussed. This is particularly problematic with something like consciousness, where people argue past each other endlessly with different definitions. But if we can delineate more specific capabilities, then the nature of the debate might become clearer.

A good example is Joseph LeDoux, who is on the skeptical side for animal consciousness (although not to the extent of Carruthers), and Antonio Damasio, who is far more open to the idea. LeDoux recently noted in an interview that he and Damasio agree on the basic facts. They just interpret them differently.

So when looking at a particular animal, we could ask:
1. Does it react to the environment?
2. Does it build sensory representations of that environment?
3. Does it have valence-oriented action selection?
4. Can it deliberate in an imaginative manner?
5. Does it introspect?

It seems like these capabilities, or ones like them, are much easier to reach agreement on than whether a particular animal is conscious. Once that agreement is reached, there can be a separate discussion on which definition of consciousness is most applicable.

A similar approach can be taken for a debate on something like free will. Exactly what kind of freedom are we talking about? And what do we mean by "will"?

Josh Rust said...


Great post Eric! When it comes to finding common ground about snail consciousness, one strategy is to see if there might be more common ground concerning a related but sufficiently distinct concept. Perhaps, for example, more common ground could be found regarding the question of whether normative terms are genuinely applicable to snails. In a universe without human cognizers, does it make sense to say of a snail that didn’t find food that it *failed*? Searle doesn’t think so (1995), but maybe Searle’s view is in the minority. If there is common ground here, and if it can be shown that there is some link between the possibility of failure and the possibility of consciousness (both of these are big ifs), then maybe those who think that snails can fail but deny that they are conscious might then find the claim that snails have consciousness slightly more convincing.

Anonymous said...

The solipsistic self-model is driven by a pathology, and that pathology is the innate need for a sense of control. The structural qualitative properties of any given solipsistic self-model are determined by that need, and those properties will underwrite the pathology of what is required for a sensation of control. In order to move an audience with a persuasive argument, there has to be a payoff for the individuals within that targeted audience. And that payoff is this: "Does the argument I just heard reinforce a sensation of control which my own beliefs already provide, or does it destabilize the foundations upon which my own beliefs are grounded and therefore destabilize my own sensation of control?"

Feelings and sensations will trump the validity of intellectual arguments every time...

Eric Schwitzgebel said...

Thanks for the interesting comments, folks!

SelfAware: Consciousness does seem harder to agree on than these other issues. But I don't think that that's primarily because people mean different things by the word "conscious". To see the debate as basically a verbal dispute about the meaning of that word, while the facts are agreed on, gets the dialectical situation wrong and underplays the real difficulty of the issue! I think, or at least hope, that we can agree on a relatively theoretically innocent definition of consciousness or phenomenal consciousness or conscious experience; and despite agreeing on that innocent definition we might find ourselves far apart on issues of substance about how far consciousness in that sense applies. (On defining consciousness, see my
https://faculty.ucr.edu/~eschwitz/SchwitzAbs/DefiningConsciousness.htm )

Josh: Great to hear from you! Maybe it's not totally hopeless. It would be terrific to find some link -- for example between normativity and consciousness -- that can be widely agreed on by the target audience, which can then be used as an effective argumentative lever in moving opinions. I do somewhat despair, however, after mucking around in this particular debate for a while!

Lee: That's a somewhat darker view than my own. Of course you might say that I only fail to accept the view you advocate because it doesn't reinforce my sensation of control!

SelfAwarePatterns said...

Eric,
I think there's value in defining a theoretically innocent version of consciousness, purely from a phenomenological perspective. It's why, although largely agreeing with the illusionists ontologically, I prefer to simply say that phenomenal consciousness only exists, well, phenomenally, that is, subjectively.

The problem, though, is: what then? Any theory-neutral version we come up with will be from the human perspective, the only perspective we can really take. We can't ask a garden snail for *its* theoretically neutral idea of whatever consciousness it might have. We have no choice but to take the pre-theory version and try to work out, with as few theoretical assumptions as possible, what observable capabilities might be entailed, and then, using that, try to infer what might or might not be there.

The result, I think, is that a particular non-human animal will have some of your positive examples but not others, and will have some of them in a greatly reduced, perhaps borderline unrecognizable fashion. Reality seems to delight in frustrating our common sense definitions.

Eric Schwitzgebel said...

SelfAware: I basically agree with everything you just said — which is the pessimistic conclusion of my reflections on garden snails. Because of these problems, there is no good, non-question-begging way to settle among yes, no, or a third option I call *gong* (reject the question).

Philosopher Eric said...

Lee,
I think you went too dark there as well, and ironically so. If you’re essentially right (and if cleaned up a bit, I think your rant was about right), then why take such a dark angle? If we’re all self-interested products of our circumstances which thus seek control, then why not use some diplomacy to help us feel a bit better about your message? Speaking of us in terms of our “pathologies” does not give us a feeling of “control”!

Professor,
Great pairing of topics! I’m going to get a bit deep so hopefully this can be followed.

It seems to me that if we define primary consciousness as the standard “What it’s like” idea, then anything which is conscious may be assessed in terms of something that feels positive to negative — remove that and no consciousness shall exist for anything. So here we’re talking about a machine which harbors personal value. If existence can feel good or bad to a garden snail, then it’s conscious… conceptually simple.

But now let’s add a variety of life to this situation which is quite clever, or the human. It’s a highly social creature which is thus able to grasp that others desire their own happiness as well, or has tremendous theory-of-mind skills. So if it’s able to grasp that we’re all self-interested products of our circumstances, this could be disconcerting when openly stated. We don’t want others to behave selfishly against us (even if we behave this way ourselves from time to time; not that we always grasp our own selfishness). So instead of being honest about our nature, it may be best for us to claim that selfishness itself is “immoral”. Here we might reap the rewards of our portrayed altruism by gaining the trust of others. Conversely, the “honest” (like my good friend Lee) may be persecuted for that very reason.

So strong is this “morality paradigm”, I think, that our mental and behavioral sciences have not yet been permitted to honestly grasp the essentials of our nature. Thus these fields remain quite soft, unlike the less humanly connected fields, which carry no such burden. I perceive a misconception that the reason psychology remains such a soft variety of science today is that it’s actually far more complex than harder sciences such as physics. Consider the thought that it may instead be taboo…

chinaphil said...

In your discussion of what an argument is, I think you undervalue innovation. A good philosophical argument gives an audience a new perspective, either by introducing a new concept or applying a concept in a novel way. This idea of innovation also opens up the possibility of good arguments that don’t persuade: it could still be a good argument if it gives me a new way to think, even if I’m completely unmoved by the specific conclusion argued for. This aspect may be particularly important in arguments used in philosophical education.

Anonymous said...

Wait, I am confused by your 'argument'
"Premise: Snails have conscious sensory experiences, and ants have conscious sensory experiences.

Conclusion: Therefore, snails have conscious sensory experiences."

You have no premise for 'conscious sensory experience'.
Then you merely list part of your premise as the conclusion.

Maybe there was a bad edit or I am missing something being assumed.
stu

Stephen Wysong said...

What’s the purpose of a philosophical argument, Eric? Do we possess knowledge that we wish to share with others? Do we desire a community of believers for our own beliefs?

Being a wholly subjective phenomenon, the consciousness of any organism, including other humans, cannot be known with certainty, and I suggested many posts ago that we can only infer the consciousness of other organisms. We can then rate the strength of our inference on some scale from zero to near-certainty.

But I suggest that knowledge of the consciousness of snails is not a philosophical concern at all, and that philosophical arguments pro and con are irrelevant. Since I vote with Hacker that the goal of Philosophy is not knowledge but understanding, I suggest that the concern of any philosophical argument should be the structure and validity of arguments and claims in other domains. In this case—the consciousness of a snail—the soundness of the definitions and the validity of the grounds chosen for evaluating inference strength would be the ingredients of Philosophy’s concern.

But, of course, that approach takes away all of the fun, doesn’t it? It bursts the balloon of high-flying philosophical Consciousness Studies and lets all of the HOT air escape, while leaving the speculative pleasures to the neurospecialists. Not all is lost though—Philosophy must still hold the neuronuts accountable.

Eric Schwitzgebel said...

Thanks for the continuing comments, folks!

Phil Eric: I'm inclined to agree that part of the difficulty with psychology as a science is the complexity of its subject matter compared to, say, planets orbiting a star. I'm not sure I agree with you about selfishness, though. People often sacrifice their self-interest, sometimes even their lives. As you note, the mind is complex! Reducing motivation to selfishness without allowing morality to also play a role is probably too simple.

Chinaphil: Yes, I agree that innovation is an important value in philosophical argument. Thanks for the reminder and correction!

Stu: Right, the point is that it is question-begging. It is still technically valid and maybe even sound. The point is to remind readers that soundness isn't enough.

Stephen: While I don't wholly disagree, my perspective is slightly different. At the highest levels of generality, on the biggest-picture questions, empirically-oriented philosophy and philosophically-acute science merge into each other, so that (for example) what Peter Carruthers and Eva Jablonka write on animal consciousness is not radically different in general approach. I think this is more how things do and should go than having a strict division of labor between the empirical science and the abstract philosophy.

Philosopher Eric said...

Actually professor, my point was that psychological dynamics probably aren’t any more complex than the dynamics of physics. The theory is that while it’s not difficult for us to objectively explore the nature of physics, it can be extremely difficult for us to objectively explore ourselves (and thus effective broad theory has eluded us so far). Apparently there’s an element of our nature which we’d rather not formally admit, given that the social tool of morality stands in opposition (as I discussed earlier).

The taboo element, I think, is this: Feeling good, as opposed to bad, constitutes the welfare of anything conscious. This will apply to a garden snail, a human, or anything sentient at a given moment. So if you want to know the welfare of a prisoner over a five day period, take a summation of his or her positive minus negative sensations (not that we have very good measurement tools for this yet, but conceptually at least). Or we might want to understand the welfare of an entire society of people. Is recreational drug legalization best for California? Theoretically the answer will be determined by the happiness of its people under either condition.
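In rough schematic form (my gloss only, with p_t and n_t as hypothetical measures of positive and negative sensation at moment t, and T the number of moments in the interval):

W = \sum_{t=1}^{T} (p_t - n_t)

where W would be total welfare over that interval. Again, this is conceptual rather than something we can currently measure.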

Utilitarians fail, I think, because they tend to bastardize their theory by trying to keep it moral. My own theory is instead amoral and so conforms with all accepted theory in our harder varieties of science. To be sure, highly repugnant implications do exist. But as long as our paradigm of morality prevents mental and behavioral scientists from formally acknowledging the nature of the welfare of what they study, it seems to me that they will continue to struggle with effective general theory regarding our nature, and so remain soft. This is a problem which I’d like to help fix.