If you're like me, you believe that the solar system contains eight planets. (Stubborn Plutophiles may adjust accordingly.) You probably also believe that the solar system contains fewer than nine planets. And you probably believe that it contains more than just the four inner planets. Do you also believe that the solar system contains fewer than 14 planets? Fewer than 127? Fewer than 134,674.6 planets? That there are eight planet-like bodies within half a light year? That there are 2^3 planets within the gravitational well of the nearest large hydrogen-fusing body? That there are 1000(base 2) planets, or -i^2*e^0*sqrt(64) planets? That Shakespeare's estimate of the number of planets was probably too low? Presumably you can form these beliefs now, if you didn't already have them. The question, really, is whether you believed these things before thinking specifically about them.
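(For the skeptical reader: all of those disguised expressions really do come out to eight. A quick sketch in Python confirms the arithmetic -- note that `1j` is Python's imaginary unit, so the complex result has to be checked for a zero imaginary part.)

```python
import math

planet_count = 8

# 2^3 planets
assert 2 ** 3 == planet_count

# 1000 in base 2
assert int("1000", 2) == planet_count

# -i^2 * e^0 * sqrt(64): since i^2 = -1, the whole product is 1 * 1 * 8
val = -(1j ** 2) * math.e ** 0 * math.sqrt(64)
assert val.real == planet_count and val.imag == 0
```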
Standard philosophical representationalist approaches to belief seem to fare poorly when faced with this question.
Here's a typical first-pass representationalist account of belief: To believe some proposition P (e.g., that the solar system has fewer than 9 planets) is to have stored somewhere in one's mind a language-like representation with the content P -- and to have it stored in such a way that the representation can be deployed for appropriate use in inference and practical reasoning. Now, one possibility, on such a view, is to say that we have all the beliefs described above and thus that we have a vast number of stored representations with very similar content. But that doesn't seem very appealing. Another possibility seems initially more promising: Say that we really only have a few stored representations concerning the number of planets. Probably, then, you didn't believe (until you thought about it just now) that there were fewer than 14 planets.
But there are two problems with this approach. First, although I certainly agree that it would be weird to say that you believe, before doing the relevant calculation, that there are -i^2*e^0*sqrt(64) planets, it seems weirdly misleading to say that you don't believe that there are fewer than 14. But if we do want to include the latter as a belief, there are probably going to have to be, on this view, quite a few stored representations regarding the number of planets (at least the 15 representations indicating that the number is >0, >1, >2, ... <14). Second, the line between what people believe and what they don't believe turns out, now, to be surprisingly occult. Does my wife believe that the solar system contains more than just the four inner planets? Well, I know she would say it does. But whether she believes it is now beyond me. Does she have that representation stored or not? Who knows?
Jerry Fodor and Dan Dennett, discussing this problem, suggest that the representationalist might distinguish between "core" beliefs, which require explicitly stored representations, and "implicit" beliefs, which are beliefs that can be swiftly derived from the core beliefs. So, if I have exactly one stored representation for the number of planets (that there are 8), I have a core belief that there are 8 and I implicitly believe that there are fewer than 14 and fewer than 134,674.6, etc. Although this move lets me safely talk about my wife -- I know she believes either explicitly or implicitly that there are fewer than 14 planets -- the occult is not entirely vanquished. For now there is a major, sharp architectural distinction in the mind -- the distinction between "core" beliefs and the others (and what could be a bigger architectural difference, really, for the philosophical representationalist?) -- with no evident empirical grounding for that distinction and no clear means of empirical test. I suspect that what we have here is nothing but an ad hoc maneuver to save a theory in trouble by insulating it from empirical evidence. Is there some positive reason to believe that, in fact, all the things we would want to say we believe are either the explicit contents of stored representations or swiftly derivable from those contents? It seems we're being asked merely to accept that it must be so. (If the view generated some risky predictions that we could test, that would be a different matter.)
An alternative form of representationalism -- the "maps" view -- has some advantages. A mental map or picture of the solar system would, it seems, equally well represent, in one compact format, the fact that the solar system has more than 1 planet, more than 2, more than 3,... exactly 8, fewer than 9, ... fewer than 14.... That's nice; no need to duplicate representations! Similarly, the same representation can have the content that Oregon is south of Washington and that Washington is north of Oregon. On the language view, it seems, either both representational contents would have to be explicitly stored, which seems a weird duplication; or one would have to be core and the other merely implicit, which seems weirdly asymmetrical for those of us who don't really think much more about one of those states than about the other; or there'd have to be some different core linguistic representation, an unfamiliar concept, from which xNORTHy and ySOUTHx were equally derivable as implicit beliefs, which seems awkward and fanciful, at least absent supporting empirical evidence.
However, these very advantages for the maps view become problems when we consider other cases. For it seems like a map of the solar system represents that there are -i^2*e^0*sqrt(64) planets, and that there are 1000(base 2) planets, just as readily as it represents that there are 8. Maps aren't intrinsically decimal, are they? And it seems wrong to say that I believe those things, especially if I am disposed to miscalculate and thus mistakenly deny their truth. For related reasons, it seems difficult, if not impossible, to represent logically inconsistent beliefs on a map; and surely we do sometimes have logically inconsistent beliefs (e.g., that there are four gas giants, five smaller planets, and 8 planets total).
It seems problematic to think of belief either in terms of discretely stored, language-like representations (perhaps plus swift derivability allowing implicit beliefs), or in terms of map-like representations. Is there some other representational format that would work better?
Maybe the problem is in thinking of belief as a matter of representational storage and retrieval in the first place.