I'm a dispositionalist about belief. To believe that there is beer in the fridge is nothing more or less than to have a particular suite of dispositions. It is to be disposed, ceteris paribus (all else being equal, or normal, or absent countervailing forces), to behave in certain ways, to have certain conscious experiences, and to transition to related mental states. It is to be disposed, ceteris paribus, to go to the fridge if you want a beer, and to say yes if someone asks whether there is beer in the fridge; to feel surprise should you open the fridge and find no beer, and to visually imagine your beer-filled fridge when you try to remember the contents of your kitchen; to be ready to infer that your Temperance League grandmother would have been disappointed in you, and to see nothing wrong with plans that will only succeed if there is beer in the fridge. If you have enough dispositions of this sort, you believe that there is beer in the fridge. There's nothing more to believing than that. (Probably some sort of brain is required, but that's implementational detail.)
To some people, this sounds uncomfortably close to logical behaviorism, a view according to which all mental states can be analyzed in terms of behavioral dispositions. On such a view, to be in pain, for example, just is, logically or metaphysically, to be disposed to wince, groan, avoid the stimulus, and say things like "I'm in pain". There's nothing more to pain than that.
It is unclear whether any well-known philosopher was a logical behaviorist in this sense. (Gilbert Ryle, the most cited example, was clearly not a logical behaviorist. In fact, the concluding section of his seminal book The Concept of Mind is a critique of behaviorism.)
Part of the semi-mythical history of philosophy of mind is that in the bad old days of the 1940s and 1950s, some philosophers were logical behaviorists of this sort; and that logical behaviorism was abandoned due to several fatal objections that were advanced in the 1950s and 1960s, including one objection by Hilary Putnam that turned on the idea of super-spartans. Some people have suggested that 21st-century dispositionalism about belief is subject to the same concerns.
Putnam asks us to "engage in a little science fiction":
Imagine a community of 'super-spartans' or 'super-stoics' -- a community in which the adults have the ability to successfully suppress all involuntary pain behavior. They may, on occasion, admit that they feel pain, but always in pleasant well-modulated voices -- even if they are undergoing the agonies of the damned. They do not wince, scream, flinch, sob, grit their teeth, clench their fists, exhibit beads of sweat, or otherwise act like people in pain or people suppressing their unconditioned responses associated with pain. However, they do feel pain, and they dislike it (just as we do) ("Brains and Behavior", 1965, p. 9).
A couple of pages later, Putnam expands the thought experiment:
[L]et us undertake the task of trying to imagine a world in which there are not even pain reports. I will call this world the 'X-world'. In the X-world we have to deal with 'super-super-spartans'. These have been super-spartans for so long, that they have begun to suppress even talk of pain. Of course, each individual X-worlder may have his private way of thinking about pain.... He may think to himself: 'This pain is intolerable. If it goes on one minute longer I shall scream. Oh No! I mustn't do that! That would disgrace my whole family...' But X-worlders do not even admit to having pains (p. 11).
Putnam concludes:
If this last fantasy is not, in some disguised way, self-contradictory, then logical behaviourism is simply a mistake.... From the statement 'X has a pain' by itself no behavioral statement follows -- not even a behavioural statement with a 'normally' or 'probably' in it. (p. 11)
Putnam's basic idea is pretty simple: If you're a good enough actor, you can behave as though you lack mental state X even if you have mental state X, and therefore any analysis of mental state X that posits a necessary connection between mentality and behavior is doomed.
Now I don't think this objection should have particularly worried any logical behaviorists (if any existed), much less actual philosophers sometimes falsely called behaviorists such as Ryle, and still less 21st-century dispositionalists like me. Its influence, I suspect, has more to do with how it conveniently disposes of what was, even in 1965, only a straw man.
We can see the flaw in the argument by considering parallel cases of other types of properties for which a dispositional analysis is highly plausible, and noting that Putnam's objection seems to apply equally well to them. Consider solubility in water. To say of an object that it is soluble in water is to say that it is apt to dissolve when immersed in water. Being water-soluble is a dispositional property, if anything is.
Imagine now a planet on which there is only one small patch of water. The inhabitants of that planet -- call it PureWater -- guard that patch jealously with the aim of keeping it pure. Toward this end, they have invented technologies so that normally soluble objects like sugar cubes will not dissolve when immersed in the water. Some of these technologies are moderately low-tech membranes which automatically enclose objects as soon as they are immersed; others are higher-tech nano-processes, implemented by beams of radiation, that ensure that stray molecules departing from a soluble object are immediately knocked back to their original location. If Putnam's super-spartans objection is correct, then by parity of reasoning the hypothetical possibility of the planet PureWater would show that no dispositional analysis of solubility could be correct, even here on Earth. But that's the wrong conclusion.
The problem with Putnam's argument is that, as any good dispositionalist will admit, dispositions only manifest ceteris paribus -- that is, under normal conditions, absent countervailing forces. (This has been especially clear since Nancy Cartwright's influential 1983 book on the centrality of ceteris paribus conditions to scientific generalizations, but Ryle knew it too.) Putnam quickly mentions "a behavioural statement with a 'normally' or 'probably' in it", but he does not give the matter sufficient attention. Super-super-spartans' intense desire not to reveal pain is a countervailing force, a defeater of the normality condition, like the technological efforts of the scientists of PureWater. To use hypothetical super-super-spartans against a dispositional approach to pain is like saying that water-solubility isn't a dispositional property because there's a possible planet where soluble objects reliably fail to dissolve when immersed in water.
Most generalizations admit of exceptions. Nerds wear glasses. Dogs have four legs. Extraverts like parties. Dropped objects accelerate at 9.8 m/sec^2. Predators eat prey. Dispositional generalizations are no different. This does not hinder their use in defining mental states, even if we imagine exceptional cases where the property is present but something dependably interferes with its manifesting in the standard way.
Of course, if some of the relevant dispositions are dispositions to have certain types of related conscious experiences (e.g., inner speech) and to transition to related mental states (e.g., in jumping to related conclusions), as both Ryle and I think, then the super-spartan objection is even less apt, because super-super-spartans do, by hypothesis, have those dispositions. They manifest such internal dispositions when appropriate, and if they fail to manifest their pain in outward behavior that's because manifestation is prevented by an opposing force.
(PS: Just to be clear, I don't myself accept a dispositional account of pain, only of belief and other attitudes.)
17 comments:
Eric, as you know I like your dispositionalist account of belief, and I think you are right that the Super Spartan problem is inconsequential, something the last paragraph of the post is sufficient to demonstrate.
I think the main implausibility of your view is that it must deny what the vast majority of people will insist is true of belief, viz., that we go to the fridge to fetch a beer because we believe there is beer in the fridge, express surprise when we do not find it because we believed there was beer in the fridge, etc. "I did X because I was disposed to do X" can have some minimal explanatory content in specific contexts, but not nearly enough to justify our intuitions about the explanatory significance of belief attributions.
This makes it either an error theory (the vast majority of our ordinary belief-based explanations are radically false) or an act of explication, which involves advancing a distinct meaning for technical purposes. Do you classify your view as one or the other? Both?
Interesting analysis. But it brings to my mind concerns I've had about ceteris paribus clauses in general. Namely: how are we to distinguish between a case where the ceteris paribus clause is violated and a counterexample to the claim in question? Obviously if you have a huge dataset you could run some statistical analyses that might settle the question to the satisfaction of all reasonable parties, but what if we don't have such a dataset (as we often don't)? It seems easy to abuse (deliberately or accidentally) this kind of escape hatch, immunizing claims against counterexample.
This is complicated by the fact that the dispositions in question are often cluster properties, and the items in the cluster often overlap with other clusters for completely different states. People in pain often wince, scream and inhale deeply, but people also do those things when they're having an orgasm.
These complications seem to create problems for any pragmatic application of a dispositionalist account of belief. In principle at least (and I suspect in practice often) there will be multiple, mutually incompatible dispositions that are consistent with the behaviors exhibited, and the ceteris paribus clause will monkey wrench any attempts to disambiguate.
So where does logical reasoning fit in a dispositionalist picture, in the sense that some dispositions might completely dissolve when I learn a single new fact? I tend to think of dispositions as a single scalar value, like a weight that applies to a range of relevant behaviours. This works well for a broad disposition to search out beer wherever it might be, even though the mechanics might be complex, e.g., I might have to know what a refrigerator is, or where I hid that last bottle.
Thanks for the comments, folks!
Randy: I often analogize beliefs to personality traits. Many people seem willing to accept that personality traits are dispositional. To be hot-tempered, for example, is to be prone to anger at slight provocations, slow to cool down after a fight, quick to enter conflict, etc. Now consider the question: "Why did he react so strongly when Jill called him a dweeb?" "He's hot-tempered." Or: "Why did he say yes to coming to that party despite the fact that he knows only one person going?" "He's an extravert."
These are dispositional explanations. On my view, belief works the same way. There's some debate about whether dispositional explanations are causal explanations, but either way they're natural. So the folk locution "because she believed such-and-such" can still be preserved as an explanation. What can't be preserved is a conjunction of views about (1) beliefs having a certain specific type of causal role in producing behavior, and (2) the metaphysical view that dispositional structures can't play that type of causal role.
I don't think that my view needs to be seriously revisionary. However, I do accept that it's an explication with *some* revisionary aspects. Actually, I think that's inevitable because the folk conception of belief is incoherent -- so any coherent theory must be at least somewhat revisionary!
David: Yes, it's somewhat amazing, on my view, that hearing a sentence like "I drank your last beer" can change so many dispositions, all in a twinkling! (Often the change is imperfect, hence in-between/fragmentation cases.) What the mechanism is for this, I don't think we know. The underlying story might be partly representationalist -- though if so, I doubt there's a clean belief-boxish mapping between person level beliefs that P and P-content representations that play a wide range of belief-like cognitive roles.
Eric, yes, I agree that's the minimal explanatory content that we can get out of dispositional traits. If you subscribe to a dispositionalist view of personality traits, then "He's hot-tempered," is short for "He's the sort of person who does that sort of thing." It sounds tautological, but it has real explanatory value because it informs us that it is not an aberration, but normal behavior, and so it improves our predictive capacities through reclassification.
I think the analogy with beliefs is pretty strong when we think of them as constitutive of personal identity or group membership ("He believes in dispositionalism," "She believes in reproductive freedom.") But it seems less compelling when we are talking about beliefs that are subject to change. I am not the sort of person who believes there is beer in the fridge. It's just that, at this moment, I believe that there is beer in the fridge, and if I am right, very shortly there will not be.
As long as we're not trying to naturalize belief (i.e., explicate it as a natural kind term for scientific purposes) but rather to rationalize it so that it can do better work within the manifest image, I'm just not sure what the payoff is for denying the causal explanatory role that most people attribute to beliefs.
This all sounds pretty good to me, and I especially like that you're defending Ryle against the entirely unfair caricature that his work has become.
I think this highlights a possible difference between what Putnam was arguing, and what Putnam's article is usually used to teach students. As I read it, the point of Putnam's Super-Spartans seems to be: If an analysis of a qualitative mental state (like pain) says that the correct ascription of the mental state entails the truth of certain behavioural (and only behavioural) statements, then such an analysis can't be right, because it is coherent that a being is in that mental state while no true behavioural statements follow. So, he says, logical behaviourism can't be right because it is committed to the idea that the true ascription of a mental state entails the truth of a bunch of behavioural statements.
I think that you are right to say that this is not a good argument, and also that it attacks a strawman view.
However, it seems to me that the article is often used, in classroom context, to 'demonstrate' the following: that saying that mental states can be analyzed into behaviour alone (whether as a matter of strict meaning-identity, or logical entailment) would leave out something important about some mental states--qualitative ones--namely, that there is a subjective experience component that isn't captured in mere behaviour.
But when I read Putnam's article, I really don't think that's the argument he's making. But this argument is attributed to him, and is part of the general story about how behaviourism was left behind in favour of other theories.
So, is *that* argument any better off? Or is it also knocking down a strawman?
Thanks for the continuing comments, folks!
Randy: Even the quick-changing beliefs can be thought of dispositionally, I think. One argument toward this conclusion (though I worry that maybe it's as much a straw man as Putnam's!) is my BetaHydri argument: We might know nothing about the internal structures of aliens from BetaHydri or the causes of their attitudes, but if they perfectly match the stereotype, that's conceptually sufficient for belief (and not just nomologically sufficient), on one reasonable conception of belief. This applies as much to beer-in-the-fridge as other beliefs.
On dumping the causal story. I'm not sure how essential it is to the folk conception, but even if it is an important part, I think an explication can dump it in favor of a cleaner account that is more scientifically and philosophically useful. A thought experiment for the folk to help determine if the causal story really is important (as opposed to being some adjunct thing): Imagine a BetaHydri case but deny that there is any one unified thing or state inside that causes all of the different manifestations. Not sure exactly how to spell that out clearly and plausibly....
Brandon: Thanks for the kind comments! Interesting suggestion about the classroom usage of the super-spartan case. Whether the case does demonstrate that there's a subjective experiential aspect of pain that isn't captured by mere behavior, hm -- I'm inclined to think it does at least *illustrate* that view or render it vivid. One odd thing about using it that way, though, is that if you dissociate causes of pain in the same way it seems like you can similarly illustrate inverted qualia and the like, which good functionalists like Putnam of the Turing-machine phase would have rejected!
Garett: Sorry I missed your comment earlier! That's a tricky question for me and a source of concern about my own account. Currently, I favor a pragmatic/normative answer: If your model is useful in accurately capturing/describing the range of phenomena of interest, and you either are uninterested in the error introduced when you disregard a class of forces or cases, or you are willing to specially mark or bracket off a class of cases as "not normal", then go ahead and use the ceteris paribus clause for that. So it's *not* statistical. For example, most dropped objects don't actually accelerate at 9.8 m/sec^2. Also, ceteris paribus, if you think it's going to rain you bring your umbrella (silently assuming in the ceteris paribus clause that it's "normal" to have access to an umbrella and to not want to get wet, etc.).
Eric, I think that's right, except that I think multiple-platform analyses are most compelling when our aim is to explicate a concept for service within the scientific image. I don't see you doing that with belief. For example, you've argued elsewhere against intellectualism and in favor of dispositionalism on the grounds that it best captures our moral intuitions about belief (that what is most important about belief is not what it makes us say, but what it makes us do.) I think that when our philosophical aim is to clean up the network of concepts populating the manifest image as much as possible -- a very important aim in my view -- we simply accept that lots of the entities and processes to which we make reference exist intersubjectively, not objectively. And that plausibly goes for the causal powers of belief as well. I just wonder if we don't do unnecessary damage to the MI by trying to restrict it to a purely dispositional account if so much of the belief-based reasoning we naturally approve of depends on a causal account. Of course, I am assuming that when we do analysis we are doing one or the other -- improving the MI or contributing to the SI -- and you may simply reject that.
Fascinating piece. Since the evolutionary point of the cognitive systems underwriting belief talk is to troubleshoot ourselves and our fellow travelers, we should expect that physical facts -- be they brain states or behaviours or dispositions to behave -- about those travelers will systematically covary with belief talk *somehow.* But given the intractable complexities of our fellow travelers -- not to mention ourselves! -- we should expect that covariance will be strategic, and that the way physical facts covary *will itself vary* depending on the kinds of problems belief talk engages. Belief talk is nothing if not heuristic, reliant on cues and ecologies.
Now it makes sense to study belief talk, to discover the kinds of events/behaviours cuing the successful application of belief talk, and that list, I suspect, will be consonant with your list of dispositions to X. Simply because belief talk is heuristic, it’s reliant on cues. Thus the covariance between applications of belief talk and what generally cues those applications is bound to be robust. Since those cues possess physical sources, that robust covariance carries over to dispositions to cue applications of belief talk.
If you overlook the heuristic nature of belief talk, and think that 'beliefs' refer to something out there in nature (as opposed to organizing natures), then dispositions to X are going to be awfully tempting, are they not?
Are you overlooking the heuristic nature of belief talk, here? And if not, how does that nature fit in with your account?
Thanks for the continuing comments!
Randy: I think what I'm probably doing, in the MI vs SI framework, is working on an SI approach that is parasitic on features of the MI, since I think, when it comes to theorizing belief, there's probably still more value in the stereotypes of the MI than in representationalist boxologies.
Scott: Although your question is very different from Randy's I think my answer is similar. I think there's more interesting juice in the folk heuristic associations than there is in (current) representation-in-the-box theories of the metaphysics of belief. So I'm building an approach to belief that is parasitic on those heuristics while it ignores commitments to (a) underlying mechanisms or (b) any tight or reliable connection among the features that are heuristically related.
Eric: "So I'm building an approach to belief that is parasitic on those heuristics while it ignores commitments to (a) underlying mechanisms or (b) any tight or reliable connection among the features that are heuristically related."
So would you be comfortable with amending your definition into:
"To believe there's beer in the fridge is just to be disposed to cue (via exogenous or endogenous inputs) first/second/third person reports that you believe there's beer in the fridge."
If you say, no, it seems to me, you just plain reject a heuristic understanding of belief. But if you say, yes, it becomes very unclear what dispositions (and I admit it could just be a failure of imagination/legwork on my part) add to the picture. It could be argued that knapping an account of belief, as you do, possessing the flexibility to cover 'in-between cases' does more to cover over the heuristic nature of belief talk, insofar as those cases allow us to map the boundaries between felicitous and infelicitous applications.
I say no. And maybe that means rejecting a heuristic account, but I'm not sure. The issues are at least two: there are other factors that can cue reports, and it's not the reports that matter in counterfactual cases.
Consider friendliness. Although our concept might be built out of a stereotype parasitic on folk psychology, according to which "friendly" people are disposed in X, Y, and Z ways, we probably don't want to say that to be friendly is just to be disposed to cue reports of friendliness. Or maybe we *could* say that, if we had the right reading of being "disposed to cue reports of friendliness" -- but the heart of the matter is whether the person really is disposed in X, Y, and Z ways, not what people would say about her. So the folk understanding gives us the pairing of X, Y, and Z. But then once we have the X, Y, and Z in hand we can start to diverge from the folk.
But counterfactual cases are also grounded in reports, are they not? If someone like Trump, say, regularly cues reports of friendliness in some contexts and cues contrary reports in other contexts, and we discover that the former contexts only include family members and people who flatter him, the commonsense thing to say is that he isn't really friendly, and this provides us with a certain kind of predictive power. If I flatter him, odds are he will be friendly.
Where does the ‘fact of the matter’ of friend talk lie here? In Trump’s ‘dispositions,’ which are meaningless absent the reports cued, or in the larger communicative-behavioural system?
The worry has to be that *defining* friendliness (or belief, etc.) dispositionally amounts to the confusion of a component of the friend talk system with that system. The 'friendliness of Trump' *is a determination of that system,* not something pertaining to dispositional properties belonging to this or that component of the system. The reason I’m so enamoured with a heuristic approach is that it provides a genuinely naturalistic handle on this systematic level absent any normative, Wittgensteinian claptrap.
Wait, why think that the dispositions are meaningless absent the reports that are cued? Isn't it more important to friendliness that one (ceteris paribus) do and be disposed to do friendly things than that one cue *reports* that one is so disposed?
The folk stereotype reflects some (defeasible) wisdom about what sorts of things tend to hang together (smiling in certain situations, offering help when needed, greeting people warmly). This gives us a starting point in thinking about attitudes that is currently better than more seemingly scientific accounts in terms of oxytocin or "belief boxes" or whatever. Once we have this starting point, we can see where things go. Good science > folk psychology > cartoon science.
Is a punch in the shoulder friendly? Is a compliment friendly? A scowl or a grin? The dispositions to X are indeterminate outside the superordinate communicative circuits they contribute to. All of us do 'friendly' things that turn out to be 'mean-spirited' all the time, though we quite often have to rely on more clear-eyed spouses or friends to realize as much. The point being that any individual's dispositions only comprise one indeterminate piece of a larger, dynamic puzzle.
On this way of looking at things what renders Garrett's concerns forceful is simply the way generalized appeals to 'normal circumstances/situations' throw a blanket over these integral elements, feeding the illusion that things like friendliness, belief, etc., can be atomistically defined.
I was surprised by your statement that 'it's somewhat amazing, on my view, that hearing a sentence like "I drank your last beer" can change so many dispositions, all in a twinkling!' -- so much so, that I wondered if you were being humorous, but if so, the joke has gone right over my head. Does not your profession and vocation (or, at the very least, this and similar articles) have a large component devoted to persuading people to change their minds (or at least to persuading third parties that you have the better argument)?
My views on climate change have shifted in response to evidence, but my expectation that it will end badly is at least partly a reflection of my pessimistic disposition. Is belief bimodal?