Friday, February 01, 2019

Do You Have Whole Herds of Swiftly Forgotten Microbeliefs?

I believe that the Earth revolves around the Sun, that I am wearing shoes, and that my daughter is in school right now. That seems unproblematic. But do I believe (assuming I don't stop to explicitly think about it) that my desk chair will swivel to the left, as I push off with my right foot to reach for a book? Driving to campus today, did I believe that the SUV approaching on the crossroad wouldn't blow through the red light? Reaching into my box of Triscuits, do I believe that the inner plastic bag is folded over twice to the left and that I need to unfold it to the right to access the snacks inside, as I absentmindedly, but skillfully enough, do just that?

Many philosophers, following Donald Davidson, hold that when you act intentionally, you must have beliefs about what you are doing -- beliefs that what you are doing will bring about (or at least has a chance of bringing about) the event you want to bring about by doing that thing. Intricacies bloom if you analyze the idea carefully (which is what makes it fun philosophy), but the basic thought is that when I intentionally flip the switch to turn on the light, I want to turn on the light and I believe that by flipping the switch I will do so.

But those types of beliefs are about your actions and their effects -- not about the details of the world that shape your actions. It would be odd to think that you could intentionally act in the world without a broad range of beliefs about your environment, but it's much less clear how much you have to believe, that is, how fine-grained or detailed your beliefs need to be. If I'm intentionally driving to work, I have to believe that I'm in my car. Maybe I have to believe that this is the route to work (though absent-mindedly driving toward a bridge you know is closed is a problem case for this claim). Probably I have to believe that the light is red, as I stop for it. But do I have to believe that all four wheels of my car are currently touching the road? Do I have to believe that the drivers nearby will stay in their lanes? Do I believe that the lightpost that I don't really notice, but certainly see, is right there?

Here are two extreme views:

Restrictive View: I only believe what I explicitly reflect on. Unless the thought "I am on the correct route to work" actually bubbles up to consciousness, I don't really believe that. Unless I specifically attend to the fact that I still have two arms, I don't believe that I do.

Liberal View: I believe everything about the world that my skillful actions depend on. I believe that by shifting my weight such-and-so as I walk, I won't tip over. I believe that my right forefinger now needs to come down a half inch from the Y to hit the H I am about to strike on the keyboard. I believe that this patch of road is not yellow, and this patch, and this patch, for every patch of road that flies past my peripheral vision, if it is the case (as it presumably is) that, were one of those patches yellow, I would steer slightly differently to give it a wider berth.

The Restrictive view seems too restrictive -- at least on the standard Anglophone philosopher's understanding of "belief", according to which belief is, to a first approximation, synonymous with a certain standard usage of "think" or "thought" -- not the active use ("I am thinking of Triscuits") but a more passive use ("I thought there were Triscuits in the cupboard, honey?"). I can truly say to a colleague "I thought you were coming to the department meeting", i.e., I believed that he was, even if I only assumed that he was coming and I didn't entertain the specific conscious thought "Isaiah will be coming to the meeting". If knowledge of the fact that P requires believing that P, as most philosophers think (other than me, but never mind that!), it seems that we can truly say "I knew you wouldn't let me down" even if the thought that you wouldn't let me down never explicitly came to mind -- perhaps especially if the thought that you wouldn't let me down never explicitly came to mind.

On the other hand, the Liberal view seems too liberal. Normally I can report my beliefs, if you ask, but I cannot report these. (I might not even remember where the Y and the H are relative to each other, if you asked me when I didn't have a keyboard to look at.) Normally, also, our beliefs can broadly inform our reasoning, but these cannot do so. Although our habits and our fine-grained motor skills depend in some way on responsiveness to environmental details, whatever is guiding that responsiveness appears to be isolated to the execution of specific tasks rather than being generally available for cognition.

If what I've said so far is right, the best view is somewhere in the middle, between the Restrictive and the Liberal. But where in the middle? Do we have whole herds of microbeliefs that guide our action -- and which we could report, perhaps, if asked that very split second -- but which are almost all swiftly forgotten? Or are our beliefs more coarse-grained and durable than that -- the big-picture stuff that we are likely to remember for at least a minute or two?

Here's what I think: It's fine to talk either way, within broad limits, as long as you are consistent about it.

What's not fine, I'd suggest, is to commit to there being an ontologically or psychologically real sharp line, such that exactly this much availability and reportability and memory (or whatever), no more, no less, is what's necessary for belief. What's not fine, I'd suggest, is the kind of industrial-grade realism that holds that there is a precise fact of the matter, if only we could know it, that I believe exactly P, Q, and R while I'm driving and not the ever-so-slightly-more-fine-grained T, U, and V.

I hope you find this assertion plausible, reflecting on the range of examples I've given. If not, maybe I can strengthen my case by referring to other classes of phenomena where the boundaries of belief appear to be fuzzy (e.g., here, here, here, and here).

If I am right in denying this sharp line, that fact fits much more comfortably with dispositionalism about belief, which treats believing as a matter of matching, to a sufficient degree, a profile pattern of actions, thoughts, and reactions characteristic of that belief, than it fits with a view on which believing requires that you possess discrete representations of P, Q, and R, which are always either discretely stored, or not stored, somewhere in the functional architecture of your mind.



Do You Have Infinitely Many Beliefs about the Number of Planets? (Oct 17, 2012)

It's Not Just One Thing, to Believe There's a Gas Station on the Corner (Feb 28, 2018)

In-Between Believing (Philosophical Quarterly, 2001)

A Phenomenal, Dispositional Account of Belief (Noûs, 2002)

A Dispositional Approach to the Attitudes: Thinking Outside of the Belief Box, in Nottelmann, ed., 2013.

And Daniel Dennett's classic Real Patterns (Journal of Philosophy, 1991).



SelfAwarePatterns said...

Interesting post, Eric, as always.

I tend to think of beliefs as something that is part of our mental model of reality. It might not be an aspect of the model we're presently considering, but it's something such that, if it were brought up, we'd recall our stance toward it.

Included in this category would be conscious and unconscious beliefs, that is, beliefs we know we have, and beliefs we may not realize we have. For example, there have been a lot of studies demonstrating implicit biases in people, biases they often aren't aware of.

Then we have habits, such as driving to work while thinking about something else. I've formed habits in the past because of something I believed, but continued the habit long after the original belief had faded, although often I wasn't consciously aware it had faded. For example, I can recall on multiple occasions reflexively making an argument for something, before realizing that I no longer believed what I was arguing for, that it had faded at some point without my realizing it.

Where things get particularly murky is on the border between habits and unconscious beliefs. For example, maybe I was picked on as a boy by a red-headed boy with a certain look. As an adult, I encounter a red-headed adult with a similar look and instantly have a dislike for them. Do I really believe this new person will behave like my childhood nemesis? It could be argued that at some level I do, although it could also be argued that it's just an unconscious, unthinking bias.

All of which is to say that there are beliefs, semi-beliefs, and habits, not to mention innate impulses. Actions might be driven by any of these.

Anonymous said...

Just to make sure I understand the position: could it make room for something like this? We in fact do have a whole host of such micro-beliefs that reflect our engagement with our environment, but depending on our practice and fluency with reflective concepts (our capacity to articulate our beliefs), more or fewer of these beliefs may be available to consciousness, depending on the individual in question. Insofar as such micro-beliefs influence our action and have an impact on our dispositions for action (e.g., would make a difference to how I would behave or react in a given situation), they are just as real as any of our other beliefs, even as they typically simmer under the veil of conscious thought. So, for example, two people in the exact same situation might have the exact same, or very nearly the same, set of micro-beliefs even though they would make very different reports if you interrogated them about them. Someone whose mind isn't constantly rattling off the stuff of such reports would not make the same reports after the fact about their beliefs. It depends on how reflectively engaged they are at the time, but those beliefs can nevertheless still make a difference to their behavior.

David Duffy said...

If our cognition is at least partly like current artificial neural network models, then propositional-type knowledge will not be explicitly represented anywhere, until we make a verbal summary of the regularities implicit in the model.

Kati Farkas said...

Hi Eric, I'm also very interested in micro-beliefs. I've been puzzling over the following question coming from epistemology: if I watch a plane landing, I seem to know at each moment where the plane is. On the usual analysis, this means that I know an answer to the question "where is the plane?". Since this has about 100 discernible answers over the 1 minute I'm watching (it's there-1, it's there-2, etc.), if belief is a necessary condition for knowledge, I adopt and abandon 100 beliefs in 1 minute. We can of course deny that knowledge requires belief (I myself am sympathetic to that option), but it would be for quite different reasons here than in the cases you discuss in the paper you link in the post.

Kati Farkas said...

I'm not sure I quite understand the contrast between dispositionalism and the view that beliefs are stored representations in the functional architecture of the mind. I take it that on the second view, whether a stored representation is the content of a belief or a doubt or a desire depends on its place in the functional architecture. The key here is "a place in the functional architecture". Isn't that something that includes "a profile pattern of actions, thoughts, and reactions characteristic of that belief"? (Functional architecture may be richer than dispositions, because it includes not only possible effects but also causes; but this doesn't matter for the point I'm trying to make here.) If that's right, then the alternative view can allow the same gradual ascription of belief as the dispositionalist view. The difference is in a commitment on how the contents of attitudes are stored: your view has no such commitment, the other view has a certain commitment. But on what makes a belief a belief (i.e., this attitude), the two views could give the same answer. Tell me what I'm missing.

Eric Schwitzgebel said...

Thanks for the comments, everyone!

SelfAware: That seems sensible. The key question for me will be what it is to have a model.

Anon: Yes, my view does make room for that possibility. The key question here, for me, would be how broad-ranging and temporally stable a person's relevant dispositions have to be, in order for it to be a belief.

David: I agree.

Kati: Yes, as I was writing this post, I found myself thinking that this is another angle into denying that K -> B (different from the considerations in my paper with Myers-Schulz). On your second point, I realize I wasn't clear enough in the original post that the model I'm objecting to has TWO forms of discreteness: one in whether a representation that P is present or not, and one in how broadly it fits into the functional architecture. You mention the latter kind (as I also do, as a path to in-between belief in my 2010 paper "Acting Contrary"), but the core architectural problem, in my view, is the former kind of discreteness. Does that make sense?

John Holbo said...

Berkeley uses the figure of ideas 'crowding into' the mind, but only when he is considering the case of the blind man, suddenly given sight, who can't at first synthesize all the simple ideas into one complex one. The idea is basically that healthy, normal thought needs to be a unity, not some jostling herd.

And here is Dennett in the NY Times:

'the role words play inside the brain. Learn a bit of wine speak — “ripe black plums with an accent of earthy leather” — and you are suddenly equipped with anchors to pin down your fleeting gustatory impressions. Words, he suggested, are “like sheepdogs herding ideas.”'

I got curious about that metaphor and tried some searches for earlier metaphors along the same lines. Here's one:

"In swift Phlox my thoughts run like timorous sheep/
Yet they're never beguiled from the paths they should keep"

Read on, he's got some sheepdog keeping his thoughts herded.

Funny but not really helpful, I suppose. Focusing on the 'herd' metaphor, that is. I agree there needs to be something in the middle.

Eric Schwitzgebel said...

Fun stuff on the herd metaphor, John! :-)