Sentence-Like vs. Map-Like Representations
Wednesday, August 30, 2006

Most philosophers of mind think that believing requires having mental representations. Jerry Fodor and many others suggest that these representations have a language-like structure. Frank Jackson and a few others suggest that their structure is more map-like.
Consider this example. Sam believes that the mountain peak is 10 km north of the river and 10 km south of the coast and that there's an oasis 5 km due east of it. What happens when he learns that the mountain peak is actually 15 km north of the river and 5 km south of the coast?
If Sam's representations are map-like, then changes to any part will ramify automatically throughout the system: Moving the mountain peak farther north on the map automatically changes its position relative to the oasis (which is now southeast, not due east, and a bit farther away), automatically changes the distance between it and the road south of the river, the length of the northbound trail, etc. A single change on the map draws with it a vast number of changes in the relationships of the elements. If his representations are language-like, no automatic ramification ensues: One can change the sentences about the distance of the mountain from the river and the coast without altering anything else.
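To make the contrast concrete, here's a minimal Python sketch (the coordinates and helper names are my own invention for illustration, not part of any worked-out theory): the map-like store keeps one location per landmark and derives all relations, while the sentence-like store keeps one token per proposition.

```python
import math

# Map-like representation: one coordinate per landmark (km east, km north).
# Relations such as distances and bearings are derived, never stored.
map_rep = {
    "river": (0.0, 0.0),
    "peak":  (0.0, 10.0),   # 10 km north of the river
    "coast": (0.0, 20.0),   # hence 10 km north of the peak
    "oasis": (5.0, 10.0),   # 5 km due east of the peak
}

def distance(rep, a, b):
    (ax, ay), (bx, by) = rep[a], rep[b]
    return math.hypot(bx - ax, by - ay)

# Sentence-like representation: one token per proposition.
sentences = {
    "The peak is 10 km north of the river.",
    "The peak is 10 km south of the coast.",
    "The oasis is 5 km due east of the peak.",
}

# Sam learns the peak is really 15 km north of the river:
map_rep["peak"] = (0.0, 15.0)               # a single change to the map
print(distance(map_rep, "peak", "oasis"))   # ~7.07 km: the oasis is now
# farther away and to the southeast -- the update ramified automatically.

sentences.discard("The peak is 10 km north of the river.")
sentences.add("The peak is 15 km north of the river.")
# The oasis sentence is untouched -- and now false. Nothing ramifies
# unless some further process goes looking.
```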
Whether the advantage here goes to the maps view or the language view is unclear: On the one hand, the maps view nicely captures the fact that when one belief changes, we seem to update our connected beliefs effortlessly and automatically. (Consider, analogously, where a geographical map isn't at issue, the case of learning that Gary has replaced Georgia as chair; I now know that Gary will be running the meetings, that he'll be signing the forms, that Georgia will be back in her regular office, etc., for quite a large number of propositions, with no noticeable effort on my part.) On the other hand, the sentences view seems more naturally to account for those cases in which we fail to ramify -- if, for example, Sam fails to take account of the fact that the oasis will no longer be due east of the mountain, or if I go to visit Georgia in the chair's office after Gary has moved in.
Relatedly, the sentences view seems more naturally and easily to account for inconsistency and indeterminacy in one's belief system. Can a map even be logically inconsistent? The maps view can perhaps avoid this problem by postulating multiple maps, each inconsistent with the others, but that drags in its train a lot of questions and apparatus pertaining to issues like reduplication and the relationship of the maps to each other.
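A minimal illustration of that asymmetry (again a toy of my own, not an argument): a set of sentence-tokens will happily contain a contradiction, whereas any single assignment of map coordinates makes at most one of the conflicting claims true.

```python
# A sentence store holds an inconsistent pair without complaint:
beliefs = {
    "The oasis is east of the peak.",
    "The oasis is west of the peak.",
}

# A single map cannot: whatever coordinates we pick, at most one
# of the two relations comes out true.
map_rep = {"peak": (0.0, 10.0), "oasis": (5.0, 10.0)}
east = map_rep["oasis"][0] > map_rep["peak"][0]   # True here
west = map_rep["oasis"][0] < map_rep["peak"][0]   # False here
assert not (east and west)   # holds for every possible assignment
```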
The maps view seems to avoid the problems of implicit belief and overcrowding of the "belief box" in cases like that of the number of planets (less than 100, less than 101, less than 102, etc.: many sentence-tokens, but a small map; see this post). On the other hand, we probably don't want to say that you believe every consequence of what appears on your map, and it's not clear where to draw the line.
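The planets case, rendered along the same lines (the stored count and the enumeration range are arbitrary placeholders): a map-style store holds a single magnitude from which the "less than n" beliefs are derived on demand, while a sentence-style store needs a token per proposition and has to stop enumerating somewhere.

```python
# Map-style: store one magnitude; "less than n" beliefs are implicit,
# derived on demand rather than written down.
planet_count = 8   # a single stored value

def believes_less_than(n):
    return planet_count < n

# Sentence-style: one explicit token per proposition. The belief box
# overcrowds, and the enumeration has to stop somewhere arbitrary.
tokens = [f"There are less than {n} planets." for n in range(100, 200)]

print(believes_less_than(100))   # True, though never explicitly stored
print(len(tokens))               # 100 tokens, with no principled cutoff
# The flip side: believes_less_than endorses *every* consequence of the
# stored magnitude, and it's unclear where genuine belief should stop.
```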
Is there a fact of the matter here (or a third model that avoids all these problems?) -- or just competing models with different strengths and weaknesses?
Posted by Eric Schwitzgebel at 7:55 AM
4 comments:
Charles said...

Pure armchair speculation, but could you have a language-like module where beliefs are nodes in a map with weighted connections to other language-like nodes? The picture here is a map of nodes with weighted connections, where each node is sententially structured. Depending on the "strength" of the belief change (i.e., the strength of the input signal, affected by things like attention, and perhaps also by frequency of node access [i.e., the 'importance' of the node in one's conceptual scheme]), a change at one node will ramify to other nodes by virtue of their conceptual connections, while remaining compatible with the idea of there being inconsistency in the "map".
On second thought, I'm not sure how such a model could be computationally implemented without bringing the syntax AND semantics of the node language into the computations themselves.
Eric Schwitzgebel said...

Thanks for the comment, Charles! Yes, it's tempting to try to work out something like you suggest in your first paragraph. I think Fodor and "language of thought" types would resist, though, for reasons similar to those you cite in your second paragraph: For the changes to ramify in the right way, there will have to be (at least, and maybe only) "syntax"-like connections between the nodes, not simple weights -- it's not pure association, but has a logical structure. But now you're looking at something like inference after all, rather than dumb association.
But this doesn't take away from your thought that to a certain extent the ramification might be automatic -- but how much so, and how far? And is there a plausible way to develop this with discrete states discretely held?
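To fix ideas, here is one toy Python rendering of the kind of weighted-node model Charles sketches (the weights, threshold, and decay rule are invented placeholders, not a serious proposal): sentence-like contents sit at the nodes, and an update injects a signal that decays across weighted edges, so strongly connected beliefs get revised while weakly connected ones survive unrevised -- leaving exactly the sort of local inconsistency discussed above.

```python
from collections import deque

# Toy hybrid model: sententially structured nodes joined by weighted
# edges. An update injects a signal that decays as it spreads; nodes
# reached above threshold are revised, the rest keep their old content.
nodes = {
    "peak-river": "The peak is 10 km north of the river.",
    "peak-coast": "The peak is 10 km south of the coast.",
    "trail":      "The northbound trail is 10 km long.",
    "oasis":      "The oasis is due east of the peak.",
}
edges = {  # symmetric connections; weights = associative strength
    ("peak-river", "peak-coast"): 0.9,
    ("peak-river", "trail"):      0.8,
    ("peak-river", "oasis"):      0.3,   # weak link
}

def neighbors(node):
    for (a, b), w in edges.items():
        if a == node:
            yield b, w
        elif b == node:
            yield a, w

def update(start, new_content, signal=1.0, threshold=0.5):
    """Revise the start node, then propagate a decaying signal."""
    nodes[start] = new_content
    queue, seen = deque([(start, signal)]), {start}
    while queue:
        node, s = queue.popleft()
        for nbr, w in neighbors(node):
            if nbr in seen:
                continue
            seen.add(nbr)
            if s * w >= threshold:
                # Placeholder revision: actually computing the new
                # sentence would require the node language's syntax and
                # semantics -- exactly the worry raised above.
                nodes[nbr] = f"[revised in light of: {new_content}]"
                queue.append((nbr, s * w))
            # else: the old sentence survives, now possibly inconsistent

update("peak-river", "The peak is 15 km north of the river.")
# peak-coast and trail ramify (0.9, 0.8 >= 0.5); the oasis node does
# not (0.3 < 0.5) -- a mechanical version of Sam's failure to update.
```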
Tanasije said...

It might be that the issue needs to be seen in a wider context.

Instead of analyzing having a belief as some kind of static representation that "suddenly appears" in someone's mind/brain-state, it seems to me that we need to look at the notion of acquiring a belief.

For example, if I hear a sentence uttered by someone I trust, how does that sentence "make its way" into the belief-representation system (whatever it might be)?
Paying attention to the phenomenology of acquiring a belief on the basis of that sentence (so explicitly ignoring issues like implicit learning), it is unproblematic to say that acquiring the belief is connected to comprehending what that person is saying to us - understanding the uttered sentence, understanding what the sentence means.

It seems to me that it is this understanding which can't be separated from the issue of holding a belief - if we trust our friend, we will end up believing what we understood the sentence to mean. Or we can say that understanding the sentence is necessary, but not sufficient, for the possibility of believing that sentence.

I guess it is obvious what I want to say now in connection with the problems of representing a belief. We believe something we have understood. The representation of that something will depend on how that something is understood.

There are a few things which might be noted here in connection with understanding and belief-representation...

Understanding most of the time happens in some kind of motivated communication. That is, our friend is telling us something because that something matters somehow. In such a situation, the understanding involves not merely some superficial understanding of the sentence as some "closed" system; rather, we can say that we have properly understood the sentence if we understand what the fact communicated to us MEANS, i.e., how it matters for us. If we fail to see such a connection, we might be puzzled by the fact presented to us, and even ask, "What do you mean?"

So it seems that the presented sentence always needs to be understood through its implications. What the sentence means in this way should not be seen merely as some semantic content which can be found in the sentence itself; it has a meaning which connects to seeing, well - what it means.
So, if my friend says to me:
"Gary has replaced Georgia as chair", it might be motivated by my need to know that thing, because it will have consequences on MY work. I.E I need to send some documents to Gary instead to Georgia. Observer that we could say also that IT MEANS that I weill nead to send the documents to Gary instead to Georgia.
Or if my friend says to me:
"The mountain peak is actually 15 km north of the river, and not 10 km", it might be motivated by my need to know that thing, as I will need to go there, so it would MEAN that I need to e.g. leave earlier to get their on time.
But on other side, if we are working in e.g. cable company, the sentence will have different meaning, it would mean that we need to change the calculations of how to e.g. most efficently put the cable between all those places (mountain peak, river, coast, oasis).
This kind of use of "understanding" and "meaning" which is connected to motivated communication, can be also seen in ordinary language, when after someone has been told something, but doesn't react as we expect, we can say: -He still doesn't understand what that means.
So, basicly, I think that holding of belief, is tightly connected to the understanding. As I said... we can only believe something we understand/comprehend. But this understanding/comprehension as motivated might include (or maybe must include), not just a mechanical remembering of the fact (would that even count as a belief?), but understanding what that fact means (as in - what it means for the person who believes it, for someone he knows, for the other persons -real or imagined, for society, etc...).
So, such understanding might include imaginaing of spatial relations, or imagination of communications between people, and what not.
It seems to me that acquiring of belief hence is not merely just ending with some static representation in the mind, but usually is more holistic. Involves all those mental processes of acquiring of belief, maybe leave us with problems we need to solve, and so on.
Eric Schwitzgebel said...

Thanks, Tanasije -- that's a very good point! A completely isolated factoid, entirely unconnected with anything else, might not even qualify as a thing believed. In some sense, to comprehend "Gary has replaced Georgia as chair" involves, necessarily, believing lots of other things -- perhaps especially things relevant in the conversational context. This is, in a way, a familiar point to "holists" like Quine, Davidson, and Dennett.
It seems to me that such observations fit more naturally with the maps view than the sentences-in-the-belief-box view. But I should refresh myself on Fodor & Lepore's criticism of holism....