I'm working again on the nature of belief. Increasingly, I find myself drawn to be explicit about my pragmatist approach to the metaphysics of attitudes.
Sometimes the world divides into neat types -- neat enough that you can more or less just point your science at it and straightforwardly sort the As from the Bs. Sometimes instead the world is fuzzy-bordered, full of intermediate cases and cases where plausible criteria conflict. When the world is the latter way, we face antecedently unclear cases. Antecedently unclear cases are, or can be, decision points. Do you want to classify this thing as an A or a B? Would there be some advantage in thinking of the category of "A" so that it sweeps in that case? Or is it better to think of "A" in a way that excludes that case or leaves it intermediate? Such decisions can reflect, often do at least implicitly reflect, our interests and values. Such decisions can also shape, often do at least implicitly shape, future outcomes and values, influencing both how we think about that particular type of case and how we think about As in general.
Pragmatic metaphysics is metaphysics done with these thoughts explicitly in mind. For instance: There are lots of ways of thinking about what a person is. Usually, the cases are antecedently clear: You are a person, I am a person, this coffee mug is not a person. But some interesting cases are intermediate or break in different directions depending on what criteria are emphasized: a fetus, a human without much cortex, a hypothetical conscious robot, a hypothetical enhanced chimpanzee. There is no settled fact about what exactly the boundaries of personhood are. We can choose to think of personhood in a way that includes or excludes such cases or leaves them intermediate -- and in doing so we both express and buttress certain values, for example, about what sorts of being deserve the highest level of moral consideration.
The human mind is a complex and fuzzy-bordered thing, right at the center of our values. Because it is complex and fuzzy-bordered, there will be lots of antecedently unclear cases. Because it is central to our values, how we classify such cases matters. Does being happy require feeling happy? Is compassion that doesn't privilege its object as irreplaceably special still love? Our classification decisions here aren't compelled by the phenomena. Instead, we can decide. What range of phenomena deserves such important labels as "happiness" and "love"? We might think of metaphysical battles over the definitions of those terms as political battles between philosophers with different visions and priorities, for control of our common disciplinary language.
At the center of my interest in belief is a set of antecedently unclear cases in which one intellectually assents to a proposition (e.g., "death is not bad", "women are just as intelligent as men") but fails to act and react generally as though that proposition is true (e.g., quakes with fear on the battlefield, treats most women as stupid). The pragmatic metaphysical question is: How should we classify such cases? What values are expressed in saying, about such cases, that we really do or really do not believe what we say we believe? What vision of the world manifests in these different ways of speaking, what projects are supported, what phenomena are rendered more and less visible?
----------------------------------------------
Related post:
Against Intellectualism About Belief (July 31, 2015)
11 comments:
It's worth distinguishing two ways this can go. As it happens, I have some terminology for the difference.
1. We start out with a vague category and have to make a choice about how to precisify it, but (unbeknownst to us) only one of the choices would support successful enquiry. There is a natural kind in our domain of enquiry but we don't start out knowing what it is. Our choice is effectively a bet as to where the borders of the natural kind will be.
2. We start out with a vague category and have to make a choice about how to precisify it, but our choice is also a decision about how to treat these things. Either choice will lead to successful enquiry, because in each case the objects of enquiry will be crafted to fit the category scheme. There isn't a natural kind here, because the world doesn't constrain our taxonomy. Instead, it's what I call a "fungible kind". For fungible kinds, scientific success can be secured using different category schemes because each approach involves different adjustments to the domain of enquiry.
Thanks, P.D. That's a useful distinction. I'm something of a pragmatist about "natural kinds" too, so the differences between 1 and 2 won't be sharp, in my view.
I'm also a pragmatist about natural kinds, but I think it can be helpful to recognize that the world condemns many taxonomies to failure. In domains where that kind of constraint is strong, I'm willing to call the kinds in the successful taxonomy "natural". Since this constraint is a matter of scope and degree, it will often be a comparative matter: that one kind is more natural than an alternative. [/soapbox]
I agree with that, P.D. [handing you the megaphone]
"The human mind is a complex and fuzzy-bordered thing, right at the center of our Values."
...Are metaphysical concerns pragmatic by necessity when facing intellectualism...
...Attitude in the beginning is questioning oneself, questioning one's Confidence in doing something metaphysical...
...What is doing then, is it affirmation-founding of senses, emotions and mentations in searching-'drawn' to be explicit...
...Provided there is or is not an Interaction Value observed...
I'm sympathetic to the idea of 'fungible' versus 'natural' kinds, but since 'kinds' are themselves fungible, I fear the distinction will only serve to inflect, as opposed to dissolve, the controversy.
What we need is some general, empirically responsible theory of heuristic cognition, isn't it? Belief may not be specifiable in high-dimensional, mechanical terms (a 'natural kind'), but the systems that leverage solutions out of such posits almost certainly are. So I would add a third alternative to the two possibilities provided by PD: treat belief as a *heuristic posit* belonging to a system adapted to predict/explain/manipulate individuals absent access to any mechanical information.
This approach forces you to make more in the way of empirical bets (always a good thing) while simultaneously side-stepping intentional ontological baggage. Most importantly, I think, it detaches the issue from any commitment to pragmatism.
How a posit like belief is modified to solve different cases/problems can always be rationalized via our projects, but the posit itself is explicable in natural terms otherwise. 'Heuristic posit' allows us to split belief along these explanatory axes. You could say it's a natural kind that allows us to understand the distinction between natural and fungible kinds.
Scott: When you talk about mechanical terms and ontological baggage, it looks like you're assuming that natural kinds are necessarily spelled out in terms of the configuration of fundamental particles. They don't need to be.
If we think of kinds as heuristic, we can still ask about how large the space of viable heuristic categories is. If scientific success requires these kinds rather than others, then they are (on my account) natural kinds. If scientific success would go just as well given almost any kinds, then they aren't.
Moreover, a focus on heuristic utility rather than ontology won't sidestep the entanglement with values that Eric mentions in the original post. Heuristic for what? Why that and not something else?
PD: So you don't think science trades in 'fungible kinds'? I think it clear that not all modes of scientific theorization/explanation are equal, and that those termed 'mechanistic' in biological contexts deliver the biggest cognitive punch. This is where I would speak of 'natural kinds.'
The advantage of this is that it allows you to understand heuristics in mechanistic terms, which is to say, something empirically tractable. How would you render them tractable?
Regarding values. What triggers the application and reinforcement of any brand of heuristic cognition can always be intentionally rationalized, sure. But again, the virtue of my approach, I think, is that it allows us to separate these angles of explanation. The global 'value' of a given attenuation of 'belief,' in other words, has no bearing on its local efficacy. The scientist's interests become a separate question, which is certainly preferable, it seems to me, given the radically heuristic nature of folk-psychological explanations!
It makes me itchy to use systems (such as those involved in determinations of value) that are designed to get around our lack of access to what's going on, in order to provide an account of what's going on with scientific ontologies.
I think that part of the issue which should be made explicit is the question of levels. You give the example of personhood, and it seems like the concept of personhood is more fundamental than the concept of a foetus, or a chimpanzee. When you sculpt a fundamental concept like personhood specifically based on less fundamental factors, like chimpanzeeness, that feels like a weird, marked thing to do. Whereas we allow fundamental concepts to sculpt the way we understand more specific concepts all the time. Do platypuses feed their young milk? Well, they're mammals, so yes. You don't have to go to Australia to find out, because the more fundamental concept tells you, and we're happy to allow that to happen.
But if you deny that these levels that I'm talking about are a meaningful construct, then I think the distinction disappears. And I think I ultimately would deny the levels, because they continually feed into one another. We define solidity by a human sense; but based on the human senses we do lots of science, and now we know that solid things are actually mostly empty space. That scientific concept of an atom feeds back into our mental understanding of solidity, and perhaps ultimately into our very sense data.
One example where this is going to happen really soon is in VR and mind manipulation. At the moment, we have a concept "red" which involves both light of a certain wavelength and the firing of certain cells in our eyes (this is already a highly evolved understanding of red, of course!). Fairly soon, these two phenomena will become separate, as we begin to be able to create the sensation of red without firing that kind of light at people's eyes. We will then develop new concepts, maybe even new words, to handle the new concepts, and they will feed round into various other aspects of our lives.
I think all of what I'm saying fits roughly inside P.D.'s category 2. For the most part I don't think there are natural kinds. Or, they might exist, but they inform very little of our mental world, I think.
Scott: I think science (sometimes) works with fungible kinds. I don't make the distinction as some kind of demarcation criterion. Instead, it can be helpful to consider how and to what extent our choice of category scheme works out because of how the world is (more natural) and to what extent it works out because we are selecting and constructing domains as we go (more fungible).
In biological contexts, particular lineages are important taxa which often turn out to be natural kinds. I don't see them as "mechanistic" in any interesting sense, but I'm not sure I follow how you're using "mechanistic".
PD: In as flat-footed a manner as possible: 'mechanisms' are where information affords the most leverage by way of explanation, prediction, and manipulation. Or as I like to call it, MEAT.
"We are selecting and constructing domains as we go" construed pragmatically effectively delivers cognition to perpetual underdetermination. I can't help but see this as a problem. We can reference domain specificity of different cognitive modes via countless different normative apparatuses, in which case we will never understand cognition, or we can reference it in terms of heuristic mechanisms, in terms of what gets triggered in what environments.
Like I say, this provides a nonintentional way to generalize over activities that (for myriad reasons) would otherwise cue various ('intentional') heuristic systems out of school, fooling us into using systems adapted to get by in the absence of information regarding what's really going on (aka, 'the meat').