Since the beginning, two concerns have continually nagged at me.
One concern is the metaphysical relation between belief and outward behavior. It seems that beliefs cause behavior and are metaphysically independent of behavior. But it's not clear that my dispositional account allows this -- a topic for a future post.
The other concern, my focus today, is this: My account struggles to explain what has gone normatively wrong in many "in-between" cases of belief.
The Concern
To see the worry, consider personality traits, which I regard as metaphysically similar to beliefs. What is it to be extraverted? It is just to match, closely enough, the dispositional stereotype that we tend to associate with being extraverted -- that is, to be disposed to enjoy parties, to be talkative, to like meeting new people, etc. Analogously, on my view, to believe there is beer in the fridge is, ceteris paribus, to be disposed to go to the fridge if one wants a beer, to be disposed to feel surprise if one were to open the fridge and find no beer, to answer "yes" when asked if there is beer in the fridge, etc.
One interesting thing about personality traits is that people are rarely 100% extravert or 100% introvert, rarely 100% high-strung or 100% mellow. Rather, people tend to be between the extremes, extraverted in some respects but not in others, or in some types of contexts but not in others. One feature of my account of belief which I have emphasized from the beginning is that it easily allows for the analogous in-betweenness: We often match only imperfectly, and in some respects, the stereotype of the believer in racial equality, or of the believer in God, or of the believer that the 19th Street Bridge is closed for repairs. ("The Splintered Mind"!)
The worry, then, is this: There seems to be nothing at all normatively wrong -- no confusion, no failing -- with being an in-between extravert who has some extraverted dispositions and other introverted ones; in contrast, it does seem that typically something has gone wrong in structurally similar cases of in-between believing. If some days I feel excited about parties and other days I loathe the thought, with no particular excuse or explanation for my different reactions, no problem, I'm just an in-between extravert. In contrast, if some days I am disposed to act and react as if Earth is the third planet from the Sun and other days I am disposed to act and react as if it is the fourth, with no excuse or explanation, then something has gone wrong. Being an in-between extravert is typically not irrational; being an in-between believer typically is irrational. Why the difference?
My Answer
First, it's important not to exaggerate the difference. Too arbitrary an arrangement of, or fluctuation in, one's personality dispositions does seem at least a bit normatively problematic. If I'm disposed to relish the thought of a party when the wall to my left is beige and to detest the thought of a party when the wall to my left is a truer white, without any explanatory story beneath, there's something weird about that -- especially if one accepts, as I do, following McGeer and Zawidzki, that shaping oneself to be comprehensible to others is a central feature of mental self-regulation. And on the other hand, some ways of being an in-between believer are entirely rational: for example, having an intermediate degree of confidence or having procedural "how to" knowledge without verbalizable semantic knowledge. But this so far is not a full answer. Wild, inexplicable patterns still seem more forgivable for traits like extraversion than for attitudes like belief.
A second, fuller reply might be this: There is a pragmatic or instrumental reason to avoid wild splintering of one's belief dispositions that does not apply to the case of personality traits. It's good (at least instrumentally good, maybe also intrinsically good?) to be a believer of things, roughly, because it's good to keep track of what's going on in one's environment and to act and react in ways that are consonant with that. Per impossibile, if one were faced with the choice between being a creature with the capacity to form dispositional structures in response to evidence -- structures that stay mostly stable except under the influence of new evidence and that guide one's behavior accordingly -- and being a creature without the capacity to form such evidentially stable dispositional structures, it would be pragmatically wise to choose to be the former. On average, plausibly, one would live longer and attain more of one's goals. So perhaps the extra normative failing in wildly splintering belief dispositions derives from that. An important part of the value of having stable belief-like dispositional sets is to guide behavior in response to evidence. In normatively defective in-between cases, that value isn't realized. And if one explicitly embraces wild in-betweenness in belief, one goes the extra step of thumbing one's nose at such structures, when one could, instead, try to employ them toward one's ends.
Whether these two answers are jointly sufficient to address the concern, I haven't decided.
[Thanks to Sarah Paul and Matthew Lee for discussion.]
Not to add any more complexity to an already complex set of issues, but what role does so-called "occurrent judgment" play here? It seems like confusion would need to manifest itself after one observes one's own splintering (or has it pointed out by a third-party observer). The story then would appear to motivate some kind of account of belief-revision, where noticed inconsistencies motivate an agent to monitor or otherwise alter their future behavior in a way that would lead to a more consistent dispositional profile.
Being a big fan of your account and finding much to recommend it, the only thing that ever stuck in my craw was not making the leap from occurrent judgment to actual occurrent belief. Seems to me like the latter would be able to be systematically interpreted, reasoned over, acted upon, etc. in ways that the former might not be.
Does this make any sense to you?
Thanks for the interesting comment, Paul! I think that the thing that "stuck in your craw" might be -- if I'm understanding correctly -- a version of the same issue I'm discussing in the post.
Hypothetical case: I "occurrently judge" that P. (In my view, as you probably know but for the sake of other readers, the disposition to occurrently judge that P is only one -- albeit a central one -- of the suite of dispositions constitutive of believing that P.) Then I notice that in lots of other ways I act non-belief-that-P-ishly. Then I analogize to personality traits. I say to myself: There's nothing wrong with this any more than there's something wrong with being an extravert-in-regard-to-enjoying-parties but a non-extravert-in-regard-to-being-talkative.
That seems the wrong reaction to have! So this post could be seen as an attempt at a diagnosis of what has gone wrong in that hypothetical case. My answer: This is the "thumbing one's nose" at the value of belief structures that I mention in the penultimate sentence of the post.
As I acknowledge at the end of the post, though, I'm still undecided whether this answer gives me everything I should want in reply to the issue. It's a troubling issue -- the one that probably troubles me most, of all the objections I've heard over the years.
I suspect at least one of your hunches about traits is exactly right -- if one sees things like beliefs and traits as being something like "covering concepts," traits are in some sense generally more abstract. They cover a wider swath of behavior. Perhaps in virtue of this, it seems to be more tolerable to have splintering in trait-space than in belief-space. Evidentially, I think that information about a particular agent's beliefs is probably more useful in terms of prediction than information about their character traits -- it's just a guess, but perhaps our tolerance for ambiguity in trait-space is nothing more than a reaction to the general utility of such information.
All speculative, of course. In any case, I love when you post about this stuff. Always a pleasure to read. Have a great holiday weekend!
I'm having trouble seeing the disanalogy. Is there a closely matched case that could explicate the difference between trait and attitude with minimal confounds? It doesn't seem like being fickle about going to individual parties is a very fair comparison with acting as though the position of the earth has changed. In any event, aren't broad traits, to the extent they exist, fairly consistent over time, like broad beliefs usually are?
I guess that means I'm a fan of your exaggeration response to the worry. It would be problematic if both trait and attitude were wildly unstable in general. There's nothing problematic about both varying in many specific instances. I don't have the intuition that inexplicable patterns are less problematic for traits than attitudes. If a friend suddenly stopped socializing entirely, or vice versa, I'd probably become very concerned.
Thanks, Wesley! Of course, I'm happy to hear that you think the exaggeration response has legs. I do think that the more closely matched the cases, the less extreme the difference seems to be. And it would be most convenient for my view if there were no difference at all.
Although I didn't fully flesh it out in the post, let's consider the beige wall case in more detail.
Extraversion: When there's a beige wall to my right, the thought of going to parties excites me, and I want to meet new people, and I'm very talkative, etc. When the wall is other than beige, the thought of going to parties seems loathsome, I'd rather stick to my few friends, and I clam up, etc. (Assume the ceteris paribus clause is satisfied: Nothing unusual is going on that makes sense of this, it's just a wild splintering.)
Belief that Earth is the 3rd planet: When there's a beige wall to my right, I sincerely say that Earth is the 3rd planet, I would be surprised by a map of the solar system that put Earth 4th, etc. When the wall is other than beige, I say that Earth is the 4th planet, I would be not at all surprised by a map of the solar system that put Earth 4th, etc. (Assume again the ceteris paribus clause is met; e.g., I haven't been told by God that if Earth is 4th, He'll let me know by painting the wall beige.)
My hunch is that there's something weird and wrong about the first case, but something weirder and more wrong about the second case.
Now part of what's weird and wrong about the extraversion case is my wild unpredictability to others; but I'd also add that I'm going to have trouble satisfying my goals. For example, I'll say "Yes, let's party!" when I'm next to the wall, but then as soon as I step away from the wall to go to the party I'll regret having agreed. But even acknowledging that, I still have the hunch that there's something further gone awry in the Earth case. It's even more baldly irrational, or something. And that was what I was hoping to capture (maybe not successfully) with my second argument.
I guess I have a hard time assessing whether one of those cases is more weird or wrong than the other, since both seem pretty impractical and hard to believe. But I agree closer matched cases decrease potential weirdness, which makes me doubt the problem. So it might help to consider cases with more closely related content. After all, the belief about the earth happens to involve a super obvious deeply familiar scientific fact that the trait example didn’t happen to involve, which could be responsible for differences. On the other hand, maybe we just don’t like calling people’s preferences irrational, which is kind of what the trait example feels like we’re doing.
ReplyDeleteWesley: All the better if there's no problem, of course! But I share the hunch that you mention in the last paragraph -- that we are disinclined to call preferences irrational. Or maybe, at least there's more rational leeway in inconstancy about them. (Still bad to have synchronous non-transitive preferences of A > B > C > A.) And that might be close to the root issue here.
So -- right to the issue of the nature of normativity/rationality. What a mess of a topic! If I can finesse that topic with my two moves above, that would have the benefit of insulating my account of belief from too deep an entanglement in that mess. I guess that's kind of the secret agenda!
(The secret agenda behind the secret agenda might be hoping that not much of the tangle is really left once all the finessing is done properly, across the board.)
RE: "There's nothing problematic about both [beliefs and traits] varying in many specific instances."
I wonder if one could go further. Both empirical work and anecdotal experience suggest that beliefs do vary based on context -- sometimes mildly, but occasionally to the point of patent inconsistency. We do/say something P-ish and then someone points out that some of our other views are rather not-P-ish. When this happens to us, we bite the bullet, double down, or try to tell a coherent/consistent story about how the dissonance is only apparent. Or if we're stumped, we'll say something to the effect of "I'll have to figure out how to make sense of that." We don't say, "By Jove, you're right! That is inconsistent/irrational/etc. I should change my view(s)!"
There are plenty of reasons we might respond this way. And not all of them turn out to be irrational. For instance, we might have pragmatic reasons for maintaining seemingly inconsistent beliefs across contexts (Stich 1990). Or maybe we arrive at such seemingly inconsistent beliefs via strategies/processes that are robustly reliable in each respective context (Bishop & Trout 2004, 2008, forthcoming).
All that to say the following: not only is it not necessarily irrational to have differential/inconsistent dispositions to believe P across contexts; in realistic cases, at least, it might be positively rational.
Of course, I'm open to hearing problems with this proposal.
I'm tempted to say something like this. Lots of stereotypical belief-that-p behaviors will only "succeed" when p. E.g., a stereotypical belief-that-it-will-rain behavior is to carry an umbrella. That bit of behavior only "succeeds" if it rains--if it doesn't rain, then carrying an umbrella is wasted energy. I suspect that the cases of bad-seeming in between belief will involve various behaviors, not all of which can succeed. E.g., if I carry an umbrella, but I also wear a silk shirt that will be ruined if it gets wet, at least one of those behaviors will turn out badly--either carrying the umbrella will be a waste (if it doesn't rain) or the shirt will be ruined (if it does). There's nothing analogous going on if I'm extroverted on Tuesdays and introverted on Wednesdays--there's no behavior I'll be engaged in that's guaranteed to "fail" for any natural sense of failure. This is obviously far from the whole story--there are lots of times when it's normatively fine to "hedge one's bets", by performing each of a set of actions, not all of which can turn out to have been for the best (e.g., in cases of literally hedging one's bets). But I still suspect that part of the explanation of the difference will turn on the idea that in-between belief often involves self-defeating behavior in a way that in-between extroversion does not.
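[A small illustrative sketch of Daniel's umbrella/silk-shirt point, not from the original thread; the behaviors and success conditions are made up for illustration. The idea: the in-between believer's mix of behaviors is guaranteed to have at least one failure in every possible state of the world, whereas a coherent believer's behaviors can all come out well together.]

# Illustrative sketch: each behavior "succeeds" only in certain states of the world.
def failures(behaviors, rain):
    """Return the behaviors that fail when it does/doesn't rain."""
    return [name for name, succeeds_if in behaviors.items() if not succeeds_if(rain)]

# An in-between believer mixes behaviors keyed to belief-that-it-will-rain
# and belief-that-it-won't; their success conditions can't all be met at once.
in_between = {
    "carry umbrella": lambda rain: rain,        # wasted energy unless it rains
    "wear silk shirt": lambda rain: not rain,   # ruined if it rains
}

# A coherent rain-believer's behaviors can all succeed together (when it rains).
coherent = {
    "carry umbrella": lambda rain: rain,
    "wear raincoat": lambda rain: rain,
}

for rain in (True, False):
    print("rain =", rain,
          "| in-between failures:", failures(in_between, rain),
          "| coherent failures:", failures(coherent, rain))

[Running this shows the in-between set failing in both states, while the coherent set has at least one state in which nothing fails -- which is one way of cashing out the "self-defeating" flavor of in-between belief.]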
I'm very unconvinced by personality traits, so the problem would barely exist for me. That is, I think that the difference between an extravert and an introvert is 99% social construction, 1% actual statistical behaviour, so I would in fact expect a person's character to fluctuate wildly depending on who they're with.
That points to when fluctuations in beliefs are acceptable and when they're not: beliefs about the physical world shouldn't vary much (and in fact don't); beliefs about other people should vary all the time (because they are all mostly social construction anyway), and in fact they do.
Thanks for the continuing comments, folks!
Nick: Nice point. I think I agree with everything you say, as long as you don't mean anything *too* radical by it. By "too radical" I mean something like this: There's no rational pressure against having, say, a belief-that-P-ish set of verbal dispositions simultaneously with a belief-that-not-P-ish set of dispositions for short-term behavioral planning, ceteris paribus. I would agree that sometimes it can make practical sense to allow oneself to remain splintered; but there's normally a cost, too, even if that cost might be outweighed.
Daniel: Yes, that seems right. A nice way of thinking about it. It also helps a bit in supporting my first response: Splintering personality *can* be irrational when it's self-defeating, as when extraverted-Wednesday-you agrees to attend a party on introverted-Thursday.
chinaphil: I agree that personality is often overestimated as a predictor (perhaps especially in the West), but 99%/1% seems a stronger split than the empirical evidence supports. (I don't think that's what Nisbett concludes, for example.) But bracketing that difference of degree between our opinions, it's an interesting question whether beliefs about people should be highly temporally variable. I'm not sure that follows from the skeptical literature on personality traits. Maybe we should stably believe of people that they don't have much by way of broad, stable personality traits, and instead stably believe that they are driven by X, Y, and Z?
I will go and read the literature - my 99% is a pre-empirical bias with which to approach the subject (set up as deliberately contrarian, to some extent!), not a real, supported "opinion".
On the relationship between traits and beliefs: the problem is that I don't think we're going to be able to work out what X, Y and Z are. X, Y and Z are by their nature the kinds of things we miss; the kind of thing we elide when we talk about personality traits. Talk of personality traits isn't stupid. It's the best we can do given our woefully inadequate knowledge of what's going on inside and around other people (and ourselves). So I'm not saying it's impossible to have stable, true beliefs about people. I'm saying that people are intrinsically in a bad position to develop them. It's like trying to develop beliefs about microfauna before the invention of the microscope. Our tools for looking into people's behaviour (or minds) are gradually developing, but it's a slow process. With a bit of luck, big data will help drive a new round of insights.