Thursday, September 27, 2018

Philosophical Skepticism Is, or Should Be, about Credence Rather Than Knowledge

Philosophical skepticism is usually regarded as primarily a thesis about knowledge -- the thesis that we don't know some of the things that people ordinarily take themselves to know (such as that they are awake rather than dreaming or that the future will resemble the past). I prefer to think about skepticism without considering the question of "knowledge" at all.

Let me explain.

I know some things about which I don't have perfect confidence. I know, for example, that my car is parked in Lot 1. Of course it is! I just parked it there ninety minutes ago, in the same part of the parking lot where I've parked for over a year. I have no reason to think anything unusual is going on. Now of course it might have been stolen or towed, for some inexplicable reason, in the past ninety minutes, or I might be having some strange failure of memory. I wouldn't lay 100,000:1 odds on it -- my retirement funds gone if I'm wrong, $10 more in my pocket if I'm right. My confidence or credence isn't 1.00000. Of course there's a small chance it's not where I think it is. Acknowledging all of this, it's still, I think, reasonable for me to say that I know where my car is parked.
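To make the wager concrete, here is a minimal sketch of the expected-value arithmetic behind declining those odds. (The $1,000,000 retirement-fund figure is an assumed stand-in; the post specifies only the 100,000:1 stakes and the $10 gain.)

```python
# Illustrative sketch of the wager arithmetic. The $1,000,000 loss
# figure is an assumption for the example, not a claim from the post.

def break_even_credence(gain: float, loss: float) -> float:
    """Credence at which the bet's expected value is exactly zero."""
    return loss / (gain + loss)

def expected_value(credence: float, gain: float, loss: float) -> float:
    """Expected dollar value of taking the bet at a given credence."""
    return credence * gain - (1 - credence) * loss

# At roughly 100,000:1 stakes ($10 gain vs. a $1,000,000 loss), taking
# the bet is rational only if your credence exceeds about 0.99999.
p_star = break_even_credence(gain=10, loss=1_000_000)
print(p_star)  # just under 1

# Even a very high credence of 0.999 makes this a bad bet.
print(expected_value(0.999, gain=10, loss=1_000_000))  # negative: decline
```

The point matches the post: a credence of 0.999 can be high enough to warrant an everyday knowledge claim while still being far too low to rationalize the 100,000:1 wager.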

Now we could argue about this; and philosophers will. If I'm not completely certain that my car is in Lot 1, if I can entertain some reasonable doubts about it, if I'm not willing to just entirely take it for granted, then maybe it's best to say I don't really know my car is there. There is something admittedly odd about saying, "Yes, I know my car is there, but of course it might have recently been towed." Explicitly allowing that possibility stands in tension, somehow, with simultaneously asserting the knowledge.

In a not-entirely-dissimilar way, I know that I am not currently dreaming. I am almost entirely certain that I am not dreaming, and I believe I have excellent grounds for that high level of confidence. And yet I think it's reasonable to allow myself a smidgen of doubt on the question. Maybe dreams can be (though I don't think so) this detailed and realistic; and if so, maybe this is one such super-realistic dream.

Now let's imagine two sorts of debates that we could have about these questions:

Debate 1: same credences but disagreement about knowledge. Philosopher A and Philosopher B both have 99.9% credence that their car is in Lot 1 and 99.99% credence that they are awake. Their degrees of confidence in these propositions are identical. But they disagree about whether it is correct to say, in light of their reasonable smidgens of doubt, that they know. [ETA 10:11 a.m.: Assume these philosophers also regard their own degrees of credence as reasonable. HT Dan Kervick.]

Debate 2: different credences but agreement about knowledge. Philosopher C and Philosopher D differ in their credences: Philosopher C thinks it is 100% certain (alternatively, 99.99999% certain) that she is awake, and Philosopher D has only a 95% credence; but both agree that they know that they are awake. Alternatively, Philosopher E is 99.99% confident that her car is in Lot 1 and Philosopher F is 99% confident; but they agree that, given their small amounts of reasonable doubt, they don't strictly speaking know.

I suggest that in the most useful and interesting sense of "skeptical", Philosophers A and B are similarly skeptical or unskeptical, despite the fact that they would say something different about knowledge. They have the same degrees of confidence and doubt; they would make (if rational) the same wagers; their disagreement seems to be mostly about a word or the proper application of a concept.

Conversely, Philosophers C and E are much less skeptical than Philosophers D and F, despite their agreement about the presence or absence of knowledge. They would behave and wager differently (for instance, Philosopher D might attempt a test to see whether he is dreaming). They will argue, too, about the types of evidence available or the quality of that evidence.

The extent of one's philosophical skepticism has more to do with how much doubt one thinks is reasonable than with whether, given a fixed credence or degree of doubt, one thinks it's right to say that one genuinely knows.

How much doubt is reasonable about whether you're awake? In considering this issue, there's no need to use the word "knowledge" at all! Should you just have 100% credence, taking it as an absolute certainty foundational to your cognition? Should you allow a tiny sliver of doubt, but only a tiny sliver? Or should you be in some state of serious indecision, giving the alternatives approximately equal weight? Similarly for the possibility that you're a brain in a vat, or that the sun will rise tomorrow. Philosophers in the first group are radically anti-skeptical (Moore, Wittgenstein, Descartes by the end of the Meditations); philosophers in the second group are radically skeptical (Sextus, Zhuangzi in Inner Chapter 2, Hume by the end of Book 1 of the Treatise); philosophers in the middle group admit a smidgen of skeptical doubt. Within that middle group, one might think the amount of reasonable doubt is trivially small (e.g., 0.00000000001%), or one might think that the amount of reasonable doubt is small but not trivially small (e.g., 0.001%). Debate about which of these four attitudes is the most reasonable (for various possible forms of skeptical doubt) is closer to the heart of the issue of skepticism than are debates about the application of the word "knowledge" among those who agree about the appropriate degree of credence.

[Note: In saying this, I do not mean to commit to the view that we can or should always have precise numerical credences in the propositions we consider.]

--------------------------------------------

Related: 1% Skepticism (Nous 2017).

Should I Try to Fly, on the Off-Chance This Might Be a Dream Body? (Dec 18, 2013).

Thursday, September 20, 2018

Are Garden Snails Conscious? Yes, No, or *Gong*

If you grew up in a temperate climate, you probably spent some time bothering brown garden snails (Cornu aspersum, formerly known as Helix aspersa). I certainly did. Now, as a grown-up (supposedly) expert (supposedly) on the science and philosophy of consciousness, I've decided to seriously consider a question that didn't trouble me very much when I was seven: Are garden snails conscious?

Being an "experimental philosopher", I naturally started with a Facebook poll of my friends, who obligingly fulfilled my expectations by answering, variously, "yes" (here's why), "no" (here's why not), and "OMG that is the stupidest question". I'll call this last response "*gong*" after The Gong Show, an amateur talent contest in which performers whose acts were sufficiently horrid would be interrupted by a gong and ushered off the stage.

It turns out that garden snails are even cooler than I thought, now that I'm studying them more closely. Let me fill you in.

Garden Snail Cognition and Behavior

(Most of this material is drawn from Ronald Chase's 2002 book Behavior & Its Neural Control in Gastropod Molluscs.)

The central nervous system of the brown garden snail contains about 40,000 neurons. That's quite a few more neurons than the famously mapped 302 neurons of the Caenorhabditis elegans roundworm, but it's modest compared to the quarter million neurons of an ant or fruit fly. The snail's brain is organized into several clumps of ganglia, mostly in a ring around its esophagus. Gastropod neurons generally resemble vertebrate neurons, with a few notable exceptions. One exception is that gastropod neurons usually don't have a bipolar structure with axons on one side of the cell body and dendrites on the other side. Instead, input and output typically occur on both sides, without a clear differentiation between axon and dendrite. Another difference is that although gastropods' small-molecule neurotransmitters are the same as in vertebrates (e.g., acetylcholine, serotonin), their larger-molecule neuropeptides are mostly different.

Snails navigate primarily by chemoreception, or the sense of smell, and mechanoreception, or the sense of touch. They will move toward attractive odors, such as food or mates, and they will withdraw from noxious odors and tactile disturbance. Although garden snails have eyes on the tips of their posterior tentacles, their eyes seem to be sensitive only to light versus dark and the direction of light sources, rather than to the shapes of objects. The internal structure of snail tentacles shows much more specialization for chemoreception, with the higher-up posterior tentacles perhaps better for catching odors on the wind and the lower anterior tentacles better for odors closer to the ground. Garden snails can also sense the direction of gravity, righting themselves and moving toward higher ground to avoid puddles.

Snails can learn. Gastropods fed on a single type of plant will prefer to move toward that same plant type when offered the choice in a Y-shaped maze. They can also learn to avoid foods associated with noxious stimuli, in some cases even after only a single trial. Some species of gastropod will modify their degree of attraction to sunlight if sunlight is associated with tumbling inversion. In Aplysia californica, a warm-ocean gastropod, the complex role of the central nervous system in governing reflex withdrawals has been extensively studied. Aplysia reflex withdrawals are centrally mediated and can be inhibited, amplified, and coordinated, maintaining a singleness of action across the body and regulating withdrawal according to circumstances, with both habituation and sensitization possible. Garden snail nervous systems appear to be similarly complex, generating unified action that varies with circumstance.

Garden snails can coordinate their behavior in response to information from more than one modality at once. For example, as mentioned, when they detect that they are surrounded by water, they can seek higher ground. They will cease eating when satiated, withhold from mating while eating despite sexual arousal, and exhibit less withdrawal reflex while mating. Before egg laying, garden snails use their feet to excavate a shallow cavity in soft soil, then insert their head into the cavity for several hours while they ovulate.

Garden snail mating is famously complex. Cornu aspersum is a simultaneous hermaphrodite, playing both the male and the female role at once. Courtship and copulation require several hours. Courtship begins with the snails touching heads and posterior tentacles for tens of seconds, then withdrawing and circling to find each other again, often consuming each other's slime trails, or alternatively breaking off courtship. They repeat this process several times. During mating, snails will sometimes bite each other, then withdraw and reconnect. Later in courtship, one snail will shoot a "love dart" consisting of calcium and mucus at the other, succeeding in penetrating the skin about one third of the time; tens of minutes later, the other snail will reciprocate. Courtship continues regardless of whether the darts successfully land. Sex culminates when the partners manage to simultaneously insert their penises into each other, which may require dozens of attempts.

Impressive accomplishments for creatures with brains of only 40,000 neurons! Of course, snail behavior is limited compared to the larger and more flexible behavioral repertoire of mammals and birds.

Garden Snail Consciousness: Three Possibilities

So, knowing all this... are garden snails conscious? Is there something it's like to be a garden snail? Do snails have, for example, sensory experiences?

Suppose you touch the tip of your finger to the tip of a snail's posterior tentacle, and the tentacle retracts. Does the snail have a tactile experience of something touching its tentacle, a visual experience of a darkening as your finger approaches and occludes the eye, an olfactory or chemosensory experience of the smell or taste or chemical properties of your finger, a proprioceptive experience of the position of its now-withdrawn tentacle?

(1.) Yes. It seems like we can imagine that the answer is yes, the snail does have sensory experiences. Any specific experience we try to imagine from the snail's point of view, we will probably imagine too humanocentrically. Withdrawing a tentacle might not feel much like withdrawing an arm; and with 40,000 neurons total, presumably there won't be a wealth of detail in any sensory modality. Optical experience in particular might be so informationally poor that calling it "visual" is already misleading, inviting too much analogy with human vision. Nonetheless, I think we can conceive in a general way how it might be the case that garden snails have sensory experiences of some sort or other.

(2.) No. We can also imagine, I think, that the answer is no, snails entirely lack sensory experiences of any sort -- and thus, presumably, any consciousness at all, on the assumption that if snails are conscious they have at least sensory consciousness. If you have trouble conceiving of this possibility, consider dreamless sleep, toy robots, and the enteric nervous system. (The enteric nervous system is a collection of about half a billion neurons lining your gut, governing motor function and enzyme secretion.) In all three of these cases, most people think, there is no genuine stream of conscious experience, despite some organized behavior and environmental reactivity. It seems that we can coherently imagine snail behavior to be like that: no more conscious than turning over unconsciously in sleep, or than a toy robot, or than the neurons lining your intestines.

We can make sense of both of these possibilities, I think. Neither seems obviously false or obviously refuted by the empirical evidence. One possibility might strike you as intuitively much more likely than the other, but as I've learned from chatting with friends and acquaintances (and from my Facebook poll), people's intuitions vary -- and it's not clear, anyway, how much we ought to trust our intuitions in such matters. You might have a favorite scientific or philosophical theory from which it follows that garden snails are or are not conscious; but there is little consensus on general theories of consciousness, and leading candidate theories yield divergent answers. (More on this, I hope, in a follow-up post.)

(3.) *Gong*. To these two possibilities, we can add a third, the one I am calling *gong*. Not all questions deserve a yes or a no. There might be a false presupposition in the question (maybe "consciousness" is an incoherent concept?), or the case might be vague or indeterminate such that neither "yes" nor "no" quite serves as an adequate answer. (Compare vague or indeterminate cases between "green" and "not green" or between "extraverted" and "not extraverted".)

Indeterminacy is perhaps especially tempting. Not everything in the world fits neatly into determinate, dichotomous yes-or-no categories. Consciousness might be one of those things that doesn't dichotomize well. And snails might be right there at the fuzzy border.

Although an indeterminate view has some merits, it is more difficult to sustain than you might think at first pass. To see why, it helps to clearly distinguish between being a little conscious and being in an indeterminate state between conscious and not-conscious. If one is a little conscious, one is conscious. Maybe snails just have the tiniest smear of consciousness -- that would still be consciousness! You might have only a little money. Your entire net worth is a nickel. Still, it is discretely and determinately the case that if you have a nickel, you have some money. If snail consciousness is a nickel to human millionaire consciousness, then snails are conscious.

To say that the dichotomous yes-or-no does not apply to snail consciousness is to say something very different than that snails have just a little smidgen of consciousness. It's to say... well, what exactly? As far as I'm aware (correct me if I'm wrong!), there's no well-developed theory of kind-of-yes-kind-of-no consciousness. We can make sense of a vague kind-of-yes-kind-of-no for "green" and "extravert"; we know more or less what's involved in being a gray-area case of a color or personality trait. We can imagine gray-area cases with money too: Your last nickel is on the table over there, and here comes the creditor to collect it. Maybe that's a gray-area case of having money. But it's much more difficult to know how to think about gray-area cases of being somewhere between a little bit conscious and not at all conscious. So while in the abstract I feel the attraction of the idea that consciousness is not a dichotomous property and garden snails might occupy the blurry in-between region, the view requires entering a theoretical space that has not yet been well explored.

The Possibilities Remain Open

There is, I think, some antecedent plausibility to all three possibilities, yes, no, and *gong*. To really decide among them, to really figure out the answer to our question about snail consciousness, we need an epistemically well-grounded general theory of consciousness, which we can apply to the case.

Unfortunately, we have no such theory. The live possibilities appear to cover the entire spectrum from the panpsychism or near-panpsychism of Galen Strawson and of Integrated Information Theory to very restrictive views, like those of Daniel Dennett and Peter Carruthers, on which consciousness requires fairly sophisticated self-representational capacities of a sort well beyond the capacity of snails.

Actually, I think there's something wonderful about not knowing. There's something marvelous about the fact that I can go into my backyard, lift a snail, and gaze at it, unsure. Snail, you are a puzzle of the universe, right here in my garden, eating the daisies!

[image by Bryony Pierce]

Wednesday, September 12, 2018

One-Point-Five Cheers for a Hugo Award for a TV Show about Ethicists’ Moral Expertise

[cross-posted at Kittywumpus]

When The Good Place episode “The Trolley Problem” won one of science fiction’s most prestigious awards, the Hugo, in the category of best dramatic presentation, short form, I celebrated. I celebrated not because I loved the episode (in fact, I had so far only seen a couple of The Good Place’s earlier episodes) but because, as a philosophy professor aiming to build bridges between academic philosophy and popular science fiction, the awarding of a Hugo to a show starring a professor of philosophy discussing a famous philosophical problem seemed to confirm that science fiction fans see some of the same synergies I see between science fiction and philosophy.

I do think the synergies are there and that the fans see and value them – as also revealed by the enduring popularity of The Matrix, and by Westworld, and Her, and Black Mirror, among others – but “The Trolley Problem”, considered as a free-standing episode, fumbles the job. (Below, I will suggest a twist by which The Good Place could redeem itself in later episodes.)

Yeah, I’m going to be fussy when maybe I should just cheer and praise. And I’m going to take the episode more philosophically seriously than maybe I should, treating it as not just light humor. But taking good science fiction philosophically seriously is important to me – and that means engaging critically. So here we go.

The Philosophical Trolley Problem

The trolley problem – the classic academic philosophy version of the trolley problem – concerns a pair of scenarios.

In one scenario, the Switch case, you are standing beside a railroad track watching a runaway railcar (or “trolley”) headed toward five people it will surely kill if you do nothing. You are standing by a switch, however, and you can flip the switch to divert the trolley onto a side track, saving the five people. Unfortunately, there is one person on the side track who will be killed if you divert the trolley. Question: Should you flip the switch?

In another scenario, the Push case, you are standing on a footbridge when you see the runaway railcar headed toward the five people. In this case, there is no switch. You do, however, happen to be standing beside a hiker with a heavy backpack, who you could push off the bridge into the path of the trolley, which will then grind to a halt on his body, killing him and saving the five. (You are too light to stop the trolley with your own body.) He is leaning over the railing, heedless of you, so you could just push him over. Question: Should you push the hiker?

The interesting thing about these problems is that most people say it’s okay to flip the switch in Switch but not okay to push the hiker in Push, despite the fact that in both cases you appear to be killing one person to save five. Is there really a meaningful difference between the cases? If so, what is it? Or are our ordinary intuitions about one or the other case wrong?

It’s a lovely puzzle, much, much debated in academic philosophy, often with intricate variations on the cases. (Here’s one of my papers about it.)

The Problem with “The Trolley Problem”

“The Trolley Problem” episode nicely sets up some basic trolley scenarios, adding also a medical case of killing one to save five (an involuntary organ donor). The philosophy professor character, Chidi, is teaching the material to the other characters.

Spoilers coming.

The episode stumbles by trying to do two conflicting things.

First, it seizes the trope of the philosophy professor who can’t put his theories into practice. The demon Michael sets up a simulated trolley, headed toward five victims, with Chidi at the helm. Chidi is called on to make a fast decision. He hesitates, agonizing, and crashes into the five. Michael reruns the scenario with several variations, and it’s clear that Chidi, faced with a practical decision requiring swift action, can’t actually figure out what’s best. (However, Chidi is clear that he wouldn’t cut up a healthy patient in an involuntary organ donor case.)

Second, incompatibly, the episode wants to affirm Chidi’s moral expertise. Michael, the demon who enjoys torturing humans, can’t seem to take Chidi’s philosophy lessons seriously, despite Chidi’s great knowledge of ethics. Michael tries to win Chidi’s favor by giving him a previously unseen notebook of Kant’s, but Chidi, with integrity that I suppose the viewer is expected to find admirable, casts the notebook aside, seeing it as a bribe. What Chidi really wants is for Michael to recognize his moral expertise. At the climax of the episode, Michael seems to do just this, saying:

Oh, Chidi, I am so sorry. I didn’t understand human ethics, and you do. And it made me feel insecure, and I lashed out. And I really need your help because I feel so lost and vulnerable.

It’s unclear from within the episode whether we are supposed to regard Michael as sincere. Maybe not. Regardless, the viewer is invited to think that it’s what Michael should say, what his attitude should be – and Chidi accepts the apology.

But this resolution hardly fits with Chidi’s failure in actual ethical decision making in the moment (a vice he also reveals in other episodes). Chidi has abstract, theoretical knowledge about ethical quandaries such as the trolley problem, and he is in some ways the most morally admirable of the lead characters, but his failure in vividly simulated trolley cases casts his practical ethical expertise into doubt. Nothing in the episode satisfactorily resolves that practical challenge to Chidi’s expertise, pro or con.

Ethical Expertise?

Now, as it happens, I am the world’s leading expert on the ethical behavior of professional ethicists. (Yes, really. Admittedly, the competition is limited.)

The one thing that emerges most clearly from my and others’ work on this topic, and which is anyway pretty evident if you spend much time around professional ethicists, is that ethicists, on average, behave more or less similarly to other people of similar social background – not especially better, not especially worse. From the fact that Chidi is a professor of ethics, nothing in particular follows about his moral behavior. Often, indeed, expertise in philosophical ethics appears to become expertise in constructing post-hoc intellectual rationales for what you were inclined to do anyway.

I hope you will agree with me about the following, concerning the philosophy of philosophy: Real ethical understanding is not a matter of what words you speak in classroom moments. It’s a matter of what you choose and what you do habitually, regardless of whether you can tell your friends a handsome story about it, grounded in your knowledge of Kant. It’s not clear that Chidi does have especially good ethical understanding in this practical sense. Moreover, to the extent Chidi does have some such practical ethical understanding, as a somewhat morally admirable person, it is not in virtue of his knowledge of Kant.

Michael should not be so deferential to Chidi’s expertise, and especially he should not be deferential on the basis of Chidi’s training as a philosopher. If, over the seasons, the characters improve morally, it is, or should be, because they learn from the practical situations they find themselves in, not because of Chidi’s theoretical lessons.

How to Partly Redeem “The Trolley Problem”

Thus, the episode, as a stand-alone work, is flawed both in plot (the resolution at climax failing to answer the problem posed by Chidi’s earlier practical indecisiveness) and in philosophy (being too deferential to the expertise of theoretical ethicists, in contrast with the episode’s implicit criticism of the practical, on-the-trolley value of Chidi’s theoretical ethics).

When the whole multi-season arc of The Good Place finally resolves, here’s what I hope happens, which in my judgment would partly redeem “The Trolley Problem”: Michael turns out, all along, to have been the most ethically insightful character, becoming Chidi’s teacher rather than the other way around.

[image source]

-----------------------------------------------

Update, October 21, 2018:

Wisecrack has a terrific treatment of the philosophy of The Good Place, revealing that the show has a more nuanced view of the role of ethics lessons than one might infer from treating "The Trolley Problem" as a stand-alone work. Bonus feature: I am depicted wearing a "Captain Obvious" hat.

Thursday, September 06, 2018

Inflate and Explode

Here's a way to deny the existence of things of Type X. Assume that things of Type X must have Property A, and then argue that nothing has Property A.

If that assumption is wrong -- if things of Type X needn't necessarily have Property A -- then you've given what I'll pejoratively call an inflate-and-explode argument. This is what I think is going on in eliminativism and "illusionism" about (phenomenal) consciousness. The eliminativist or illusionist wrongly treats one or another dubious property as essential to "consciousness" (or "qualia" or "what-it's-like-ness" or...), argues perhaps rightly that nothing in fact has that dubious property, and then falsely concludes that consciousness does not exist or is an illusion.

I am motivated to write this post in part due to influential recent work by Keith Frankish and Jay Garfield, who I think make this mistake.

-----------------------------------------

Some earlier examples of the inflate-and-explode strategy include:

Paul Feyerabend (1965) denies that mental processes of any sort exist. He does so on the grounds that "mental processes", understood in the ordinary sense, are necessarily nonmaterial, and only material things exist.

Patricia Churchland (1983) argues that the concept of consciousness may "fall apart" or be rendered obsolete (or at least require "transmutation") because the idea of consciousness is deeply, perhaps inseparably, connected with false empirical views about the transparency of our mental lives and the centrality of linguistic expression.

Daniel Dennett (1991) argues that "qualia" do not exist, on the grounds that qualia are supposed by their nature to be ineffable and irreducible to scientifically discoverable mental mechanisms.

Unfortunately, philosophical enthusiasts for the importance of conscious experience tend to set themselves up for the inflate-and-explode move, making Feyerabend's, Churchland's, and Dennett's criticisms understandable.

The problem on the enthusiasts' side, as I see it, is that they tend to want to do two things simultaneously:

(1.) They want to use the word "consciousness" or "phenomenology" or "qualia" or whatever to refer to that undeniable stream of experience that we all have.

(2.) In characterizing that stream, or for the sake of some other philosophical project, they typically make some dubious assertions about its nature. They might claim that we know it infallibly well, or that it forms the basis of our understanding of the outside world, or that it's irreducible to merely functional or physical processes, or....

Now if the additional claims that the enthusiasts make in (2) were correct, the double purpose would be approximately harmless. However, I'm inclined to think that these types of claims are generally not correct, or at least are quite legitimately disputable. Thus, the enthusiasts unfortunately invite inflate-and-explode. They invite critics to think that those dubious claims are essential to the existence of consciousness in the intended sense, such that if those dubious claims prove false, that's sufficient to show that consciousness doesn't exist.

The reason I think that Feyerabend, Churchland, and Dennett are inflating the target, rather than just correctly interpreting the target, is that I believe the enthusiasts would much more readily abandon the dubious claims, if required to do so by force of argument, than they would deny the existence of consciousness. Those claims aren't really ineliminably, foundationally important to their concept of consciousness. It's not like the relation between magical powers and witches on some medieval European conceptions of witches, such that if magical powers were shown not to exist, the right conclusion would be that witches don't exist. Even if we must jettison thoughts of infallibility or immateriality, consciousness in our communally shared sense of the term still exists. The core conception of phenomenal consciousness in philosophy of mind is, I think or suspect or at least hope, the conception of the stream of experience that it is almost impossible to deny the existence of -- not that stream-of-experience-plus-such-and-such-a-dubious-property.

-----------------------------------------

Frankish's and Garfield's more recent illusionist arguments, as I see them, employ the same mistaken inflate-and-explode strategy. Keith Frankish (2016) argues that phenomenal consciousness is an "illusion" because there are no phenomenal properties that are "private", ineffable, or irreducible to physical or functional processes. Jay Garfield (2015) denies the existence of phenomenal consciousness on the broadly Buddhist grounds that there is no "subject" of experience of the sort required and that we don't have the kind of infallibility about experience that friends of phenomenal consciousness assume.

Now it is true that many recent philosophers think that consciousness involves privacy, ineffability, irreducibility, infallibility, or a subject of experience of the sort not countenanced by (some) Buddhists; and maybe they are wrong to think so. On these matters, Frankish's and Garfield's (and Feyerabend's and Churchland's and Dennett's) criticisms have substantial merit. But it does not follow that consciousness is a mere illusion or does not exist. We can, and I think normally do, conceptualize consciousness more innocently. We need not commit to such dubious theses; our shared conception can survive without them.

To avoid commitment to dubious theses, we can and do define consciousness primarily by example. We gesture, so to speak, toward our sense experiences, our imagery experiences, our vividly felt emotions, our inner speech. We notice that there is something extremely obvious that all of these examples vividly share. Consciousness is that obviously shared thing. Maybe it's reducible; maybe not. Maybe there's a "subject" in a Cartesian sense; maybe not. Why commit on such matters, right out of the gate? Keep it theoretically innocent! Consciousness, in this innocent sense, is almost undeniably real. (I say "almost" because the clever philosopher can find a way to deny anything.)

Now admittedly, this sort of theoretically innocent definition by example is not quite as simple as I've just portrayed it. For a more careful attempt see Schwitzgebel 2016.

-----------------------------------------

I've tried this argument on both Frankish and Garfield, in critical commentaries (contra Frankish; contra Garfield). They remain unconvinced. (Well, this is philosophy!) Let me summarize their replies and share my reaction.

Frankish says that he agrees that consciousness, defined innocently by example as I have done, does indeed exist. He graciously allows that I have executed the important task of identifying a "neutral explanandum" for theories of consciousness that both realists and illusionists can accept (p. 227). However, Frankish also asserts that my definition is not substantive "in the substantive sense created by the phenomenality language game" (ibid.), and thus he feels licensed to continue to embrace illusionism about phenomenal consciousness.

I remain unsure why my definition by example is insufficiently substantive. Surely some definitions by example are substantive, or substantive enough. For instance, I might define "furniture" by reference to a diversity of positive and negative examples. That seems to pick out a substantive target of things that exist, and done well, it's good enough to let us start counting pieces of furniture (maybe with some disputable cases), evaluating the quality and function of different types of furniture, etc. Why wouldn't definition by example of consciousness work similarly? What is missing?

Garfield responds differently, doubling down, as I see it, on the inflation move:

I argue that if by 'qualitative states' we mean states that are the objects of immediate awareness, the foundation of our empirical knowledge, inner states that we introspect, with qualitative properties that are properties of those states and not of the objects we perceive, there are no such states (Garfield 2018).

Whoa! I don't think I meant all that! My whole aim in definition by example is to avoid such commitments.

Maybe Garfield takes himself to be denying the existence only of properties that most 21st century Anglophone philosophers don't actually endorse? No, I don't think so. It is clear from context that in denying the existence of qualitative properties, Garfield takes himself to be in conflict with the mainstream view in philosophy of mind, the view of people like me who accept the existence of phenomenal consciousness. But I don't see why Nagel, Block, Searle, Chalmers, Strawson, Carruthers, Kriegel, Siegel, Siewert, Thompson, etc. need to be committed to the dubious package of views Garfield lists in the blockquote above, simply by virtue of accepting the existence of consciousness. Of course they may also make other, further claims about consciousness, besides merely asserting that it exists, and those further claims might commit some of them to the dubious theses that Garfield wisely rejects.

-----------------------------------------

[image source]