Friday, February 15, 2019

Studying Ethics Should Influence Your Behavior (But It Doesn't Seem to)

Some academic disciplines have direct relevance to day-to-day life. Studying these disciplines, you might think, would have an influence on one's practical behavior. Studying nutritional health, it seems plausible to suppose, would have some influence on your food choices. Studying the stock market would likely influence your investment strategies. Studying parenting styles in developmental psychology would influence your parenting decisions. The effects might not be huge: A scholar of nutrition might not be able to entirely give up Twinkies. A scholar of parenting styles might sometimes lose her temper in ways she knows from her research to be counterproductive. But it would be strange if studying such topics had no effect whatsoever -- if there were a perfect isolation between one's research on nutrition, investment, or parenting and one's personal food choices, investments, and approaches to parenting.

[A doctor doing what doctors in fact don't do very much of.]

Other academic topics have tenuous connections at best to practical matters of day-to-day life: studying the first second of the Big Bang, or mereological approaches to objecthood, or tortoise-shell divination in ancient China. Of course, studying such things could have behavioral effects. Maybe immersion in Big Bang cosmology inspires one to a broader, less parochial worldview. But I don't think we should particularly expect that or think something is strange if it doesn't. It's not strange for a cosmologist to be parochial in the same way it would be for an anti-trans-fat health researcher to not attempt to reduce her own trans-fat intake.

Ethics seems clearly to be in the category of academic disciplines that are directly relevant to scholars' day-to-day lives. Not every sub-issue of every sub-specialization of ethics is so, of course. Some ethical questions are highly abstract or concern matters irrelevant to the immediate choices of the scholars' lives; but few ethicists spend all of their energy on issues of that sort. Issues like our obligations to the poor, the ethics of honesty and kindness, animal rights and environmentalism, prejudice, structural injustices in our society, the proper weighing of selfish concerns against the demands of others, the question of how much to abide by laws or directives with which you disagree -- all seem directly relevant to our lives. It would be odd if devoting a substantial part of one's career to thinking about such issues had no influence of any sort on one's day-to-day behavior.

And yet it's not clear to me that studying ethics does have any influence on day-to-day behavior. Across a wide range of studies, my collaborators and I have found no convincing evidence of systematic behavioral differences between ethicists and non-ethicists of similar social background. Also, impressionistically, in my personal interactions with professional ethicists, my sense is that they behave overall similarly to non-ethicists. Furthermore, there's little evidence that university-level ethics classes influence students' behavior either.

Maybe studying ethics does sometimes have a practical effect. It would, in my mind, be stunning if studying ethics never had any influence of any sort on one's behavioral choices! But the effects, if any, are subtle and difficult to detect empirically.

Why this should be so is an underappreciated puzzle.

The easiest answers -- "academic ethics is all abstract and impractical", "ethics is all post-hoc rationalization of what you were going to do anyway", "our immoral desires are so compelling that no amount of rational thought could lead us to act otherwise" -- don't withstand critical scrutiny as fully adequate answers (although each may have some element of truth).

For several of my imperfect attempts to resolve this puzzle, see:

"The Moral Behavior of Ethicists and the Power of Reason" (with Joshua Rust), Advances in Experimental Moral Psychology (ed. H. Sarkissian and J. Wright, 2014).

"Rationalization in Moral and Philosophical Thought" (with Jon Ellis), Moral Inferences (ed. J.F. Bonnefon and B. Tremoliere, 2017).

"Aiming for Moral Mediocrity" (manuscript in draft).

I'm still banging my head against it.

[image source]

Friday, February 08, 2019

Is a Blind Person's Consciousness Partly Contained in Her Cane?

Karina Vold spoke interestingly last week at UC Riverside on "vehicle externalism" about consciousness. (Here's some of her published work on the topic.) According to vehicle externalism, the "vehicle" of consciousness -- that is, its actual physical basis -- is not, as many think, the brain, but rather the brain plus some of the world beyond the body. Central to Vold's presentation was the classic example of the blind person and her cane.

Before discussing that example, I want to start (as Vold did) with a well-known argument from Andy Clark and David Chalmers. Otto is a forgetful fellow, and so he always carries a notebook with him, full of reminders. When he wants to go to the museum, he pulls out his notebook, which reminds him that the museum is on 53rd Street, and then heads that way. Without the notebook, he would be lost. Clark and Chalmers argue that Otto's retrieving information from the notebook is not relevantly functionally different from a less forgetful person's retrieving the same information from memory. Because of this, they argue, it's correct to say that as long as the notebook is reliably in his pocket (even before he consults its contents), Otto knows that the museum is on 53rd Street. Otto's memory, and thus his mind, is not entirely confined within his skull. It extends to the notebook in his pocket.

Vold and some other defenders of vehicle externalism (but not Clark and Chalmers) want to say something similar about consciousness. To explain how and why, Vold invokes a blind person with a cane -- let's call her Genesis [name randomly selected].

Genesis skillfully uses her cane to help her move about, swinging and tapping it to detect obstacles, objects, slopes, textures, and materials. A skilled cane user can learn a lot from a tap or two! Her cane is always with her when she walks, and in some sense it feels almost as if it were a part of her body. (You might experience a lesser version of this if you take a pen in your hand and use it to stroke and tap things. Many people report that it feels almost as though they are directly feeling objects via the pen, as opposed to feeling the pen in their fingers and inferring from those sensations what the objects beyond must be like.) Genesis experiences the world via a sensorimotor loop that runs from brain through arm and hand into the cane, and then back to the brain, both through the air into her ears and up through her arm.

If a brain-based view of consciousness is correct, it is from her brain alone that Genesis's sensory experience arises, with signals up the arm serving only as input to the brain processes supporting consciousness. In contrast, according to vehicle externalism, the cane and arm do not merely provide inputs. Rather, consciousness arises from cane and arm and brain jointly operating as an integrated system. Consciousness does not depend only causally on the input Genesis receives from the stick. Rather the stick itself is a constituent of an extended system that as a whole constitutes or gives rise to consciousness.

The difference between the internalist input view and the vehicle externalist view can be a little hard to fathom. Here, I think the philosophical concept of supervenience can help. But before I get to supervenience, let me stave off one possible error. The vehicle externalist is not merely saying that consciousness feels as though it is located in the cane. Everyone agrees that consciousness can feel as though it's somewhere other than the location of the system that gives rise to the experience. Pain can feel as though it's in your toe even though the material process that directly gives rise to it, on an internalist view, is in the brain rather than the toe. This seems especially clear in the case of phantom limb pain.

A set of properties, properties of Type A, supervenes on another set of properties, properties of Type B, if and only if there is no possible difference in the As without a difference in the Bs. For example, the score of a baseball game supervenes on the number of times each individual player has officially crossed home plate. There is no possible difference in score without some difference in the number of times at least one individual player has crossed home. (The converse is not the case: It is possible for the score to be the same even if some individual runners had scored a different number of runs.)
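The baseball example lends itself to a small executable illustration. Here is a toy sketch (the `team_score` function and the player names are my own invention, purely for illustration): supervenience corresponds to the score being a function of the per-player run counts, so sameness of run counts guarantees sameness of score, while the converse can fail.

```python
# Toy illustration of supervenience, using the baseball example.
# The score (the A-property) supervenes on the per-player run counts
# (the B-properties): the score is a *function* of the run counts,
# so no difference in score is possible without some difference in runs.

def team_score(runs_per_player):
    """The supervening property: total runs scored by the team."""
    return sum(runs_per_player)

# Two games with identical run counts must have identical scores...
game1 = {"Ana": 2, "Ben": 1, "Cy": 0}
game2 = {"Ana": 2, "Ben": 1, "Cy": 0}
assert team_score(game1.values()) == team_score(game2.values())

# ...but the converse fails: different run counts can yield the same score.
game3 = {"Ana": 0, "Ben": 0, "Cy": 3}
assert game3 != game1
assert team_score(game3.values()) == team_score(game1.values())
```

The asymmetry in the two assertions at the end is the whole point: fixing the B-facts fixes the A-facts, but fixing the A-facts leaves the B-facts open.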

The core question of vehicle externalism, as I see it, can be expressed in terms of supervenience: Does consciousness supervene on the brain, as the internalist would say? Or is it possible to have different conscious experiences despite having exactly the same brain states? (Not all vehicle externalists need actually be committed to denying the supervenience of consciousness on the brain, but then I worry that their view might collapse into a notational variant of the internalist input view.)

Consider Otto. If you accept vehicle externalism about Otto's memory, then it should be possible to change Otto's memory state without changing his brain state at all. Otto-1 has the notebook in his pocket. Otto-2 has been pickpocketed, though he hasn't noticed it. Assume that Otto-1 and Otto-2 have exactly the same brain states. On Clark and Chalmers's vehicle externalism, Otto-2 no longer remembers where the museum is (and many other things), although Otto-1 does remember. Despite this difference in memory, on Clark and Chalmers's internalist view about consciousness, Otto-1 and Otto-2 will differ not at all in their conscious experiences. Their conscious experiences will only start to diverge when Otto-2 reaches for his missing notebook. Otto's consciousness, but not his memory, supervenes on his brain state.

Now consider Genesis. If a substantive form of vehicle externalism about consciousness is correct, we should be able to concoct a case in which Genesis-1 and Genesis-2 have identical brain states but non-identical conscious experiences. To claim that such a case is possible is just what it is to deny that consciousness supervenes on the brain.

In other words, we want to imagine changing the cane in some important way, without changing the brain. The vehicle externalist ought to say that the conscious experiences will be different. If consciousness is literally partly contained in or constituted by the cane, Genesis-1 and Genesis-2, with identical brains but different canes, ought to have different conscious experiences.

Let's consider a small slice of time -- a third of a second -- when Genesis's cane is swinging through the air between things. Suppose, hypothetically, that a clever prankster were able to undetectably change the tip of the cane just before that third of a second. Genesis is swinging her cane and touches a passerby who "accidentally" jostles the cane a bit. In Genesis-1, the cane is unchanged after the jostling. In Genesis-2, the long tip of her cane has been covertly swapped for a shorter tip of the right weight (slightly heavier, to compensate for the difference in leverage). The swap is so perfect that for the entire third of a second after the bump and before her cane strikes the next object, the signals into their brains are identical. For this brief duration, Genesis-1 and Genesis-2 have identical brain states but different cane states.

Shortly, their brain states will diverge: Genesis-2 will think the next object is closer than it is, and after the strike she might notice some difference in the sound and tactile dynamics of her cane. The question concerns what happens before that next tap of the cane. The canes are different. The sensorimotor contingencies are different (though that difference has not yet resulted in divergent input signals). Will the conscious experiences, already, be different?

It seems unintuitive that they would be. Most of the vehicle externalists about consciousness I have pressed on this question either hesitate to answer, or try to wiggle out in some way. Also, if introspection is something that happens in the brain, and if consciousness is always immediately available for introspection, the case seems to present trouble. Some philosophers might think that if the view implies a not-immediately-introspectible difference in conscious experience, that would be a fatal flaw. But I disagree. Intuitions about consciousness aren't always correct, and simple, immediate-access models of introspection may not be correct. That Genesis-1 and Genesis-2 have different conscious experiences already in that first third of a second is an interesting possibility, worth considering, and it seems to flow naturally from vehicle externalism. If this is a "bullet", I think the vehicle externalist should bite it.

[image by Ken Walton]

Friday, February 01, 2019

Do You Have Whole Herds of Swiftly Forgotten Microbeliefs?

I believe that the Earth revolves around the Sun, that I am wearing shoes, and that my daughter is in school right now. That seems unproblematic. But do I believe (assuming I don't stop to explicitly think about it) that my desk chair will swivel to the left, as I push off with my right foot to reach for a book? Driving to campus today, did I believe that the SUV approaching on the crossroad wouldn't blow through the red light? Reaching into my box of Triscuits, do I believe that the inner plastic bag is folded over twice to the left and that I need to unfold it to the right to access the snacks inside, as I absentmindedly, but skillfully enough, do just that?

Many philosophers, following Donald Davidson, hold that when you act intentionally, you must have beliefs about what you are doing -- beliefs that what you are doing will (or at least has a chance of) bringing about the event you want to bring about by doing that thing. Intricacies bloom if you analyze the idea carefully (which is what makes it fun philosophy), but the basic thought is that when I intentionally flip the switch to turn on the light, I want to turn on the light and I believe that by flipping the switch I will do so.

But those types of beliefs are about your actions and their effects -- not about the details of the world that shapes your actions. It would be odd to think that you could intentionally act in the world without a broad range of beliefs about your environment, but it's much less clear how much you have to believe, that is, how fine-grained or detailed your beliefs need to be. If I'm intentionally driving to work, I have to believe that I'm in my car. Maybe I have to believe that this is the route to work (though absent-mindedly driving toward a bridge you know is closed is a problem case for this claim). Probably I have to believe that the light is red, as I stop for it. But do I have to believe that all four wheels of my car are currently touching the road? Do I have to believe that the drivers nearby will stay in their lanes? Do I believe that that lightpost, which I don't really notice but certainly see, is right there?

Here are two extreme views:

Restrictive View: I only believe what I explicitly reflect on. Unless the thought "I am on the correct route to work" actually bubbles up to consciousness, I don't really believe that. Unless I specifically attend to the fact that I still have two arms, I don't believe that I do.

Liberal View: I believe everything about the world that my skillful actions depend on. I believe that by shifting my weight such-and-so as I walk, I won't tip over. I believe that my right forefinger now needs to come down a half inch from the Y to hit the H I am about to strike on the keyboard. I believe that this patch of road is not yellow, and this patch, and this patch, for every patch of road that flies past my peripheral vision if it is the case (as it presumably is) that were one of those patches yellow I would steer slightly differently to give it a wider berth.

The Restrictive View seems too restrictive -- at least on the standard Anglophone philosopher's understanding of "belief", according to which belief is, to a first approximation, synonymous with a certain standard usage of "think" or "thought" -- not the active use ("I am thinking of Triscuits") but a more passive use ("I thought there were Triscuits in the cupboard, honey?"). I can truly say to a colleague "I thought you were coming to the department meeting", i.e., I believed that he was, even if I only assumed that he was coming and I didn't entertain the specific conscious thought "Isaiah will be coming to the meeting". If knowledge of the fact that P requires believing that P, as most philosophers think (other than me, but never mind that!), it seems that we can truly say "I knew you wouldn't let me down" even if the thought that you wouldn't let me down never explicitly came to mind -- perhaps especially if the thought that you wouldn't let me down never explicitly came to mind.

On the other hand, the Liberal View seems too liberal. Normally I can report my beliefs, if you ask, but I cannot report these. (I might not even remember where the Y and the H are relative to each other, if you asked me when I didn't have a keyboard to look at.) Normally, also, our beliefs can broadly inform our reasoning, but these cannot do so. Although our habits and our fine-grained motor skills depend in some way on responsiveness to environmental details, whatever is guiding that responsiveness appears to be isolated to the execution of specific tasks rather than being generally available for cognition.

If what I've said so far is right, the best view is somewhere in the middle, between the Restrictive and the Liberal. But where in the middle? Do we have whole herds of microbeliefs that guide our action -- and which we could report, perhaps, if asked that very split second -- but which are almost all swiftly forgotten? Or are our beliefs more coarse-grained and durable than that -- the big-picture stuff that we are likely to remember for at least a minute or two?

Here's what I think: It's fine to talk either way, within broad limits, as long as you are consistent about it.

What's not fine, I'd suggest, is to commit to there being an ontologically or psychologically real sharp line, such that exactly this much availability and reportability and memory (or whatever), no more, no less, is what's necessary for belief. What's not fine, I'd suggest, is the kind of industrial-grade realism that holds that there is a precise fact of the matter, if only we could know it, that I believe exactly P, Q, and R while I'm driving and not the ever-so-slightly-more-fine-grained T, U, and V.

I hope you find this assertion plausible, reflecting on the range of examples I've given. If not, maybe I can strengthen my case by referring to other classes of phenomena where the boundaries of belief appear to be fuzzy (e.g., here, here, here, and here).

If I am right in denying this sharp line, that fact fits much more comfortably with dispositionalism about belief -- which treats believing as a matter of matching, to a sufficient degree, a pattern of actions, thoughts, and reactions characteristic of that belief -- than with a view on which believing requires that you possess discrete representations of P, Q, and R, which are always either discretely stored, or not stored, somewhere in the functional architecture of your mind.



Do You Have Infinitely Many Beliefs about the Number of Planets? (Oct 17, 2012)

It's Not Just One Thing, to Believe There's a Gas Station on the Corner (Feb 28, 2018)

In-Between Believing (Philosophical Quarterly, 2001)

A Phenomenal, Dispositional Account of Belief (Nous, 2002)

A Dispositional Approach to the Attitudes: Thinking Outside of the Belief Box, in Nottelmann, ed., 2013

And Daniel Dennett's classic Real Patterns (Journal of Philosophy, 1991).

[image source]