Standard philosophical representationalist approaches to belief seem to fare poorly when faced with this question.
Here's a typical first-pass representationalist account of belief: To believe some proposition P (e.g., that the solar system has fewer than 9 planets) is to have stored somewhere in one's mind a language-like representation with the content P -- and to have it stored in such a way that the representation can be deployed for appropriate use in inference and practical reasoning. Now, one possibility, on such a view, is to say we have all the beliefs described above and thus that we have a vast number of stored representations with very similar content. But that doesn't seem very appealing. Another possibility seems initially more promising: Say that we really only have a few stored representations concerning the number of planets. Probably, then, you didn't believe (until you thought about it just now) that there were fewer than 14 planets.
But there are two problems with this approach. First, although I certainly agree that it would be weird to say that you believe, before doing the relevant calculation, that there are -i^2*e^0*sqrt(64) planets, it seems weirdly misleading to say that you don't believe that there are fewer than 14. But if we do want to include the latter as a belief, there are probably going to have to be, on this view, quite a few stored representations regarding the number of planets (at least the 15 representations indicating that the number is >0, >1, >2, ... <14). Second, the line between what people believe and what they don't believe turns out, now, to be surprisingly occult. Does my wife believe that the solar system contains more than just the four inner planets? Well, I know she would say it does. But whether she believes it is now beyond me. Does she have that representation stored or not? Who knows?
Jerry Fodor and Dan Dennett, discussing this problem, suggest that the representationalist might distinguish between "core" beliefs that require explicitly stored representations and "implicit" beliefs, which can be swiftly derived from the core beliefs. So, if I have exactly one stored representation for the number of planets (that there are 8), I have a core belief that there are 8 and I implicitly believe that there are fewer than 14 and fewer than 134,674.6, etc. Although this move lets me safely talk about my wife -- I know she believes either explicitly or implicitly that there are fewer than 14 planets -- the occult is not entirely vanquished. For now there is a major, sharp architectural distinction in the mind -- the distinction between "core" beliefs and the others (and what could be a bigger architectural difference, really, for the philosophical representationalist?) -- with no evident empirical grounding for that distinction and no clear means of empirical test. I suspect that what we have here is nothing but an ad hoc maneuver to save a theory in trouble by insulating it from empirical evidence. Is there some positive reason to believe that, in fact, all the things we would want to say we believe are either the explicit contents of stored representations or swiftly derivable from those contents? It seems we're being asked merely to accept that it must be so. (If the view generated some risky predictions that we could test, that would be a different matter.)
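To fix ideas, here's a toy sketch of the sort of architecture being proposed (Python; the data structure, the step budget, and the derivation rule are my own invented stand-ins, not anything Fodor or Dennett commit to):

```python
# Toy sketch of the "core + swiftly derivable" picture. All details here are
# invented stand-ins, not anything Fodor or Dennett specify.
CORE = {("planet_count", 8)}           # the explicitly stored representation(s)
SWIFT_STEP_BUDGET = 3                  # how much derivation still counts as "swift"

def belief_status(relation: str, n: float) -> str:
    """Classify 'the planet count <relation> n' as core, implicit, or unattributed."""
    if relation == "equals":
        return "core" if ("planet_count", n) in CORE else "no belief attributed"
    (_, stored), = CORE                                  # fetch the one stored count...
    derivable = stored < n if relation == "less_than" else stored > n
    steps_needed = 1                                     # ...one quick comparison
    if derivable and steps_needed <= SWIFT_STEP_BUDGET:
        return "implicit"                                # swiftly derivable
    return "no belief attributed"

# belief_status("equals", 8)      -> "core"
# belief_status("less_than", 14)  -> "implicit"
```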
An alternative form of representationalism -- the "maps" view -- has some advantages. A mental map or picture of the solar system would, it seems, equally well represent, in one compact format, the fact that the solar system has more than 1 planet, more than 2, more than 3,... exactly 8, fewer than 9, ... fewer than 14.... That's nice; no need to duplicate representations! Similarly, the same representation can have the content that Oregon is south of Washington and that Washington is north of Oregon. On the language view, it seems, either both representational contents would have to be explicitly stored, which seems a weird duplication; or one would have to be core and the other merely implicit, which seems weirdly asymmetrical for those of us who don't really think much more about one of those states than about the other; or there'd have to be some different core linguistic representation, an unfamiliar concept, from which xNORTHy and ySOUTHx were equally derivable as implicit beliefs, which seems awkward and fanciful, at least absent supporting empirical evidence.
However, these very advantages for the maps view become problems when we consider other cases. For it seems like a map of the solar system represents that there are -i^2*e^0*sqrt(64) planets, and that there are 1000(base 2) planets, just as readily as it represents that there are 8. Maps aren't intrinsically decimal, are they? And it seems wrong to say that I believe those things, especially if I am disposed to miscalculate and thus mistakenly deny their truth. For related reasons, it seems difficult if not impossible to represent logically inconsistent beliefs on a map; and surely we do sometimes have logically inconsistent beliefs (e.g., that there are four gas giants, five smaller planets, and 8 planets total).
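(If you'd like to check the arithmetic on those two ways of writing the number, both come out to 8; the snippet below is just a calculator, nothing more.)

```python
import math

i = 1j                                   # the imaginary unit
value = -(i ** 2) * math.e ** 0 * math.sqrt(64)
print(value.real)                        # 8.0: -(i^2) = 1, e^0 = 1, sqrt(64) = 8

print(int("1000", 2))                    # 8: "1000" read as a base-2 numeral
```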
It seems problematic to think of belief either in terms of discretely stored language-like representations (perhaps plus swift derivability allowing implicit beliefs), or in terms of map-like representations. Is there some other representational format that would work better?
Maybe the problem is in thinking of belief as a matter of representational storage and retrieval in the first place.
Fun post!
Why couldn't many of those beliefs be derived? I find myself thinking that the belief "there are fewer than 14 planets" is just as "calculated" as the belief that there are -i^2*e... etc. planets. It is just so much swifter and easier, so we tend to think of it as a core or standing belief.
Or perhaps best of both worlds: there are maps, but the maps don't have any explicitly numerical representations. (Don't ask me what they do have). For both questions, we use the map to derive the relevant belief (or figure out whether we assent/dissent). Seeing that there are around 8, and knowing that 8<14, is much easier and quicker than the relevant math for the more complex belief...
You and I must be tuned into the same psychic channel, Eric! I'm wrestling with Churchland's use of maps and second-order resemblances in *Plato's Camera* at the moment. He spends so much time describing homomorphisms in terms of activation spaces, talking in strict causal terms, which is to say 'maps FROM,' only to make the magical jump to talking in terms of 'maps ABOUT' - pulling the semantic rabbit out of a neural hat, in effect.
As far as I can see, the massively parallel nature of the neurofunctional 'maps' he discusses does slip the noose of any frame problem, but like I say, I find it hard to believe that he's actually describing 'maps' beyond a certain metaphoric rhetorical conceit.
The thing is, when he discusses van Gelder he talks about the similarity between his view and the dynamic systems approach, arguing for an important homomorphic component to machinery in general, not just the brain. You suddenly realize that all his talk of maps and concepts amounts to a rather arbitrary focus on a single species of cog in a far greater mechanistic system - primarily to discharge some nostalgic commitment to semantics and Scientific Realism. To rationalize a more traditionally palatable conclusion, in effect.
Which leads me to my question apropos your consideration: to what degree do you think the problem is not so much one of structural homomorphisms as one of trying to use these as anchors for our assumptions regarding intentionality more generally?
An alternative for the language-like approach might be that we actually store algorithms for determining the truth of appropriately-presented claims. 'Core' beliefs could just be those claims that are decided very quickly (perhaps, for the very core of the core, because there's an algorithm with a condition like 'if the claim is "there are 8 planets", the claim is true').
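To make that a bit more concrete, here's a toy sketch (Python; the claim strings, the parsing, and the use of eval as a stand-in for 'doing the arithmetic' are all just illustrative assumptions):

```python
PLANET_COUNT = 8

def claim_is_true(claim: str) -> bool:
    """Toy stored 'algorithm' for claims of the form 'there are <expr> planets'.
    The literal core claim is decided instantly; anything else gets computed."""
    if claim == "there are 8 planets":                 # the very core of the core
        return True
    expr = claim.removeprefix("there are ").removesuffix(" planets")
    return eval(expr) == PLANET_COUNT                  # slower path: do the arithmetic

# claim_is_true("there are 8 planets")    -> True, no calculation needed
# claim_is_true("there are 2**3 planets") -> True, but only after evaluating 2**3
```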
On the map-like approach, aren't there also going to be 'maps' that allow you to represent sentence structures to understand the sentences? We could say that we have at least two faculties: one for determining the subject matter of a claim via our maps of said claim's structure, and a second for determining whether, given maps for both a claim and its subject matter, the two fit together in such a way that the claim is true. That would mean that "there are 8 planets" and "there are 2^3 planets" are processed differently at the stage of mapping the claims themselves, but are referred to the same subject-matter map.
On inconsistent beliefs, if subject matters are finely enough individuated then inconsistency becomes possible, because there might be one map for the subject matter 'gas giant', one for 'rocky planet' and one for 'planet', or whatever, and the person in question at that moment fails to compare or combine them in an appropriate way. (I can't say that this answer really appeals, but I can imagine someone proposing it.)
It's also plausible that, even if the ability to form, combine and discard beliefs is dependent on the existence of maps, we also have a store of quick-access beliefs that we respond to reflexively without going back to the maps that support them. These might be the equivalents of 'core beliefs' for the language-like approach. That would mean, though, that there's a dispositional aspect to having beliefs that goes beyond representation, which is perhaps just what you want. Clearly, for our dispositions to certain responses to count as beliefs, they'll have to interact with each other and our representational faculties in the right ways. But our representations will be for reflecting on beliefs, which is a distinct activity from merely holding them.
Terrific comments, folks!
@ Brandon: My hunch is against a sparse view on which we somehow get by with very few representations and derive the rest. For one thing, it seems unstable and nonredundant in a way I am generally disinclined to think is true of the mind. Lose one rep and you're in trouble! On your second point: It makes me think of the notorious Stalnaker move of saying that we do believe [insert complex mathematical truth], we just don't believe that the *sentence* expressing that complex mathematical truth is true.
@ Scott: I'm inclined to react to Churchland's approach with the thought that if you have a liberal structure of vectors of sufficient dimensionality, you can model anything; at some point you lose your risky empirical content. But there's a lot to be said for starting with multi-dimensional state spaces, as physicists sometimes do, and then making risky predictions about what spaces are possible or likely and how they evolve over time. To fully evaluate the power of the approach, we need to see it at work generating cool results we wouldn't otherwise have arrived at. I am sympathetic with state-space dynamical systems approaches as at least not oversimplifying from the start.
On your last point, Scott: I think what we have here is another case of using structures we know about the outside world (sentences, maps) and trying to model the mind on the basis of them -- not unlike how we lead ourselves astray by trying to model dreams and imagery on pictures.
@ Kester: I'm not sure what it means to say we "store algorithms" for evaluating claims. Interpreted weakly, maybe it means only that we do in fact evaluate claims? Of course that's true! Interpreted more strongly it means... what?
I like your idea of taking the maps view back to sentence structures. If we have "maps" of sentence structures, too, then that gives us a model for what I called the "Stalnaker move" in my reply to Brandon. We already believe there are 2^3 planets (and -i^2*e^0*SQRT(64) planets) but we need a mapping procedure to evaluate those sentences, explaining our hesitation and possible disagreement.
I agree that you can get inconsistency on the maps view by having different maps that are inconsistent with each other. But that risks a proliferation of maps and demands a mechanism of map comparison. Would there be separate maps anytime there's even a *possibility* of inconsistency? Then the maps view loses its elegant leanness.
In your last paragraph you're right about what I want. Consider this further example, too, from Dennett: A chess-playing computer generally gets its queen out early. There might be no explicit representation "get the queen out early" or any small set of representations from which that is quickly derivable; it falls out, in a complex way, from a number of features of the programming. If we now say that it wants to get its queen out early or believes that it should (bracket for now whether such computers have beliefs and desires at all), it really seems like the dispositional structures, rather than the underlying architecture, are what the attribution is tied to.
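To gesture at how the tendency might fall out of the programming without ever being written down anywhere, here's a deliberately crude toy (Python; the features and weights are invented for illustration, and real chess programs are of course vastly more complicated):

```python
# Nothing below says "get the queen out early"; the tendency emerges from how the
# (invented) evaluation rewards mobility, which the queen happens to maximize.
PIECE_MOBILITY = {"pawn": 2, "knight": 8, "bishop": 13, "rook": 14, "queen": 27}

def score_developing_move(piece: str, centralizes: bool) -> float:
    """Crude evaluation: reward squares attacked plus a small bonus for centralizing."""
    return PIECE_MOBILITY[piece] + (5 if centralizes else 0)

candidate_moves = [("knight", True), ("pawn", True), ("queen", True), ("bishop", False)]
best = max(candidate_moves, key=lambda move: score_developing_move(*move))
print(best)   # ('queen', True): the queen comes out early as an emergent byproduct
```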
BTW, I followed the link to your profile and see we share an appreciation of Al Stewart!
Eric, you ask, "Would there be separate maps anytime there's even a *possibility* of inconsistency?"
To follow out the analogy, maps can result in inconsistent claims either because we have maps that disagree or because we have conflicting map-reading practices of a single map. In a similar way, inconsistent beliefs might arise from a single cognitive map because of separate cognitive mechanisms operating on it - or even because of one cognitive mechanism operating with different parameters.
Wait, we have to *read* our maps? I'm worried that will lead to a regress, like the one Wittgenstein imagines about interpreting our images. Do we need to make a map of the map?
In a behavioral approach to PAs, it appears that the distinction between "core" and "implicit" beliefs can be described as the distinction between dispositions to assert propositions and dispositions to assent to propositions. In my version of such an approach, a person's belief that P="there are eight planets in our solar system" is the behavioral disposition to assert P in certain contexts. As you note, the person may not (and typically doesn't) have a disposition to assert (for example) Q="there are -i^2*e^0*sqrt(64) planets in our solar system". However, when the context is a person's being confronted with someone's asserting Q, that person may have a set of behavioral dispositions that come together to form a disposition to assent to Q (e.g., the dispositions necessary to effect determination of the numerical content of Q and comparison of that content and the numerical content of P).
Of course, this doesn't answer your specific question, which is expressed in terms of belief that Q. But it does suggest an out: while the intentional idiom of beliefs, desires, etc., is convenient in some contexts, in others it causes more trouble than it's worth, so that dropping it and using instead the somewhat more cumbersome vocabulary of behavioral dispositions may be appropriate. In that vocabulary, there is no "occult" issue of whether one has a "belief box" that may or may not "contain" Q. There is only the question of having the set of behavioral dispositions necessary to assent to Q. The straightforward description of your state is then simply that you have a disposition to assert P in certain contexts and also a disposition to assent to Q in other contexts.
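A minimal sketch of that division of labor might look like this (Python; the contexts and the comparison mechanism are simplifying assumptions of mine, not a claim about how the dispositions are actually realized):

```python
PLANET_COUNT = 8     # grounds the disposition to assert P when asked (context C1)

def respond(context, asserted_count=None):
    """Context-sensitive behavioral dispositions; no stored 'belief box' anywhere."""
    if context == "asked how many planets there are":             # context C1
        return f"assert: there are {PLANET_COUNT} planets"        # disposition to assert P
    if context == "someone asserts a planet count" and asserted_count is not None:
        # assent emerges from working out the asserted number and comparing it with 8
        return "assent" if asserted_count == PLANET_COUNT else "dissent"
    return "no relevant disposition engaged"

# respond("asked how many planets there are")        -> "assert: there are 8 planets"
# respond("someone asserts a planet count", 2 ** 3)  -> "assent"
```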
Note that you implicitly do this in your post:
Does my wife believe that the solar system contains more than just the four inner planets? Well, I know she would say it does.
In other words, she may not be disposed to assert the proposition, but she is disposed to assent to it.
Charles, I am very sympathetic with the general idea that in such cases the best approach is to look at the dispositional patterns rather than try to judge whether some content is in or out of the "belief box". In fact, supporting that idea is really my background agenda.
My only quibble -- I don't know how serious this is from your perspective -- is that by tying it so tightly to assertion and assent, you privilege linguistic dispositional structures more than I would be inclined to do.
Eric: "I think what we have here is another case of using structures we know about the outside world (sentences, maps) and trying to model the mind on the basis of them -- not unlike how we lead ourselves astray by trying to model dreams and imagery on pictures."
On my Blind Brain account, this 'importation problem' actually pertains to a more fundamental challenge (the one I argued you were driving at in Perplexities): the fact that our cognitive systems are *environmentally adapted* heuristics. The fact that we 'see environments' when confronted with certain kinds of environmentally homomorphic structures demonstrates this powerful 'environment bias' - it creates them when they're not even there!
But the real problem, on the BBT account, isn't so much modelling the inner by reference to some environmental analogue, but in the structure of environmentally adapted *informatic neglect* exhibited by the 'representational heuristic' itself.
'Aboutness,' whatever the hell it is, relates us to our environments absent any substantial causal information pertaining to this relation. Thus, 'transparency,' so-called. This is well and fine dealing with functionally independent systems in our environment, but when applied to functionally *dependent* systems *in our own brain* it clearly neglects/occludes the *very information we require*!
This explains many, many things... I'm tweaking a post on this that I'll link very soon.
Seems like what is believed in can, by the nature of itself, distribute belief. The nature of numbers means 8 is smaller than 14. Believe there is eight and the nature of numbers distributes belief that it's below 14. Kind of like if the idea is a bucket, there's a pipe leading off to distribute the water to other things as well. Which is interesting in how the logical nuances of what is believed in can divert belief so autonomously.
Of course your many scenarios are exhausting. It's interesting to note how belief can exhaust and reduce to the most straightforward structure that can be managed (kind of like zipping up a file).
Thanks for the reply.
First of all: Hooray for Al Stewart and all who appreciate him! I'll have to make sure to mention him regularly.
I'm trying to remember why I brought up algorithms. I think the plan was to defend the language-like approach by looking for a language-like item that could be understood to underlie all those infinitely many beliefs, as well as the fact that some are more 'core' than others. The idea then is that we don't learn or store anything with a structure like that of the sentence 'There are 8 planets', but like that of a rule (I think I said 'algorithm' because it sounded more linguistic) for generating those things (or responses to them). That rule is easily confused with its most core (immediate) output, the sentence 'there are 8 planets', so we attribute possession of the rule using the sentence. I don't know whether the rule itself should be counted as the content of the belief - that would get very revisionary.
I suppose a map theorist would merely say that there is a possibility of different maps when there is a possibility of inconsistency. The actual mechanics of their generation, persistence and dissolution could be discoverable only empirically.
I have an over-long expression of confusion over the chess case that I'll post separately.
On the chess-playing computer, I remember the example, but I need the details fleshing out. Is it just that in most tactical situations the computer will respond by getting its queen out early? If so, it seems wrong to say that it believes without qualification that it should get its queen out early. Otherwise it would be revising its beliefs in situations where it doesn't respond that way, and it seems inappropriate to me to attribute a change of belief there.
If the case is such that there are lots of situations in which the computer responds that way, though, then all we are really saying about it is that there are lots of descriptions D of the early tactical situation such that the computer believes 'if D, then I should get my queen out now'. It's less plausible to me, though, that these situation-based beliefs don't have a fairly simple representational basis. (It might be that this is too simplistic in that the computer might not always have a belief like 'I should perform this move now'.)
If the above looks like I'm saying that human players also couldn't believe they should get the queen out early, I'm not. The difference is that humans can make generalisations that computers can't, e.g. when teaching less experienced players or writing books, and so genuinely experience a change of belief when faced with an unexpected tactical situation that forces them not to get their queen out early. A computer capable of learning might be capable of the same beliefs. I don't know a lot about what the internal workings of such a thing might be like.
Apologies for expending so many words on the case, but I want to understand what the details actually are.
Kester, I think such change is more a matter of being far less dedicated to getting the queen out early, than changing belief.
@ Scott: I think I agree with you about this. Looking forward to seeing your post!
@ Callan: The hydraulics of the mind. I like it! I like it not so much because I think it a model that will work well across all cases but rather because I think that by diversifying our range of models and seeing how certain ones work well in some situations and others work better in other situations we undercut views that privilege any one of these types of models as accurately capturing the real structure.
@ Kester: Right, either revisionary or superficial. I agree those are the choices. In a way, that's what I'm trying to get at in my "Belief Box" paper, recently posted on my website. On the chess-playing computer: I'd treat human beings the same way, e.g., in racist attitudes. If lots of specific situations draw forth racist reactions, I want to attribute the racist belief even if the person would deny the generalization. (Actually, my view is a bit more complicated than that, but that will work as a first pass.)
Eric, ooh, I wouldn't try to say it's the real structure - the real structure is the real structure, down to each synaptic connection. Representations hover above the real structure (rather than being flush with it); the coarser the representation, the higher they float.
I hadn't really thought of it as hydraulics, but given I imagined gravity in the model I guess it is under pressure!
by tying [the idea of behavioral dispositions] so tightly to assertion and assent, you privilege linguistic dispositional structures
In my previous comment, the statement "In my ... approach, a person's belief that P ... is the behavioral disposition to assert P in certain contexts" may have misled you. It was a mistake and evidence of how hard it is to shake a familiar vocabulary. Since I don't aggregate dispositions into PAs, I have no need for the intentional vocabulary and should have said simply "A person may have a disposition to assert P in certain contexts".
I don't limit dispositions to linguistic behaviors such as assertion and assent, although for my simple examples I did assume contexts (ie, scenarios) in which the behavioral dispositions are linguistic. In the case of P, it was assumed (implicitly) that the subject has been asked "How many planets are in our solar system?" - to which a natural response is to assert P. (Call this context "C1".) But I could have assumed instead that the subject is asked to select coins from a jar and arrange them on a table so as to roughly represent our solar system. If the subject selected nine coins and arranged them so that it was clear that some large coin - perhaps a silver dollar - represented the sun and smaller coins represented the planets, that would be evidence that the subject's likely disposition in C1 would be to assert P. However, I couldn't think of any context that would elicit a behavior with respect to Q other than the linguistic one that I assumed. So, I assumed linguistic context C1 for P as well.
that idea [dispositional patterns] is really my background agenda.
Because I've developed a bad habit of skimming over discussions that include "representation", I didn't catch your background agenda until a more careful reading after I'd already posted my comment. Apologies for jumping the gun.
@ Charles: No need for apology! It sounds like our overall views are pretty similar.
Ask and ye shall receive:
http://rsbakker.wordpress.com/2012/10/22/v-is-for-defeat-the-total-and-utter-annihilation-of-representational-theories-of-mind/
I'd like to encourage anyone rubbing shoulders with representationalists to pass this link on. I really have no idea how they would respond to the dilemma posed by my two questions. At the moment, anyway, it really feels like it puts them in a real pickle.
Thanks, Scott!