Friday, June 30, 2023

Mostly Overlapping Minds: A Challenge for the View that Minds Are Necessarily Discrete

Last fall I gave a couple of talks in Ohio. While there, I met an Oberlin undergraduate named Sophie Nelson, with whom I have remained in touch. Sophie sent some interesting ideas about my draft paper "Introspection in Group Minds, Disunities of Consciousness, and Indiscrete Persons", so I invited her on as a co-author, and we have been jointly revising. Check out today's new version!

Let's walk through one example from the paper, originally suggested by Sophie but mutually written for the final draft. I think it stands on its own, without needing the rest of the paper as context. For the purposes of this argument, we assume that broadly human-like cognition and consciousness are possible in computers and that functional and informational processes are what matter to consciousness. (These views are widely but not universally shared among consciousness researchers.)

(Readers who aren't philosophers of mind might find today's post to be somewhat technical and in the weeds.  Apologies for that!)

Suppose there are two robots, A and B, who share much of their circuitry. Between them hovers a box in which most of their cognition transpires. Maybe the box is connected by high-speed cables to each of the bodies, or maybe instead the information flows through high-bandwidth radio connections. Either way, the cognitive processes in the hovering box are tightly integrated with A's and B's bodies and the remainders of their minds -- as tightly integrated as the processes within an ordinary unified mind. Despite the bulk of their cognition transpiring in the box, some cognition also transpires in each robot's individual body and is not shared by the other robot. Suppose, then, that A has an experience with qualitative character α (grounded in A's local processors), plus experiences with qualitative characters β, γ, and δ (grounded in the box), while B has experiences with qualitative characters β, γ, and δ (grounded in the box), plus an experience with qualitative character ε (grounded in B's local processors).
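To keep the bookkeeping straight, here is a minimal sketch of the setup's grounding structure in Python. This is just our illustrative model for this post, not part of the paper's formal apparatus; the names and representation are invented:

    # Toy model of the two-robot case: which experiences are grounded where.
    LOCAL = {"A": {"alpha"}, "B": {"epsilon"}}  # each robot's local processors
    SHARED_BOX = {"beta", "gamma", "delta"}     # processes in the hovering box

    def phenomenal_field(robot):
        """A robot's total experience: its local experiences plus the box's."""
        return LOCAL[robot] | SHARED_BOX

    print(sorted(phenomenal_field("A")))  # ['alpha', 'beta', 'delta', 'gamma']
    print(sorted(phenomenal_field("B")))  # ['beta', 'delta', 'epsilon', 'gamma']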

If indeterminacy concerning the number of minds is possible, perhaps this isn't a system with a whole number of minds. Indeterminacy, we think, is an attractive view, and one of the central tasks of the paper is to argue in favor of the possibility of indeterminacy concerning the number of minds in hypothetical systems.

Our opponent -- whom we call the Discrete Phenomenal Realist -- assumes that the number of minds present in any system is always a determinate whole number. Either there's something it's like to be Robot A, and something it's like to be Robot B, or there's nothing it's like to be those systems, and instead there's something it's like to be the system as a whole, in which case there is only one person or subjective center of experience. "Something-it's-like-ness" can't occur an indeterminate number of times. Phenomenality or subjectivity must have sharp edges, the thinking goes, even if the corresponding functional processes are smoothly graded. (For an extended discussion and critique of a related view, see my draft paper Borderline Consciousness.)

As we see it, Discrete Phenomenal Realists have three options when trying to explain what's going on in the robot case: Impossibility, Sharing, and Similarity. According to Impossibility, the setup is impossible. However, it's unclear why such a setup should be impossible, so pending further argument we disregard this option. According to Sharing, there are two determinately different minds that share tokens of the very same experiences with qualitative characters β, γ, and δ. According to Similarity, there are two determinately different minds who share experiences with qualitative characters β, γ, and δ but not the very same experience tokens: A's experiences β1, γ1, and δ1 are qualitatively but not quantitatively identical to B's experiences β2, γ2, and δ2. An initial challenge for Sharing is its violation of the standard view that phenomenal co-occurrence relationships are transitive (so that if α and β phenomenally co-occur in the same mind, and β and ε phenomenally co-occur, so also do α and ε). An initial challenge for Similarity is the peculiar doubling of experience tokens: Because the box is connected to both A and B, the processes that give rise to β, γ, and δ each give rise to two instances of each of those experience types, whereas the same processes would presumably give rise to only one instance if the box were connected only to A.
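The transitivity problem can be read directly off the toy model above (again, our illustration only): α and β phenomenally co-occur in A, and β and ε phenomenally co-occur in B, but α and ε co-occur in no one's phenomenal field.

    # Phenomenal co-occurrence is intransitive in the two-robot case.
    field_A = {"alpha", "beta", "gamma", "delta"}
    field_B = {"beta", "gamma", "delta", "epsilon"}

    def co_occur(x, y):
        return any(x in field and y in field for field in (field_A, field_B))

    print(co_occur("alpha", "beta"))     # True: both in A's field
    print(co_occur("beta", "epsilon"))   # True: both in B's field
    print(co_occur("alpha", "epsilon"))  # False: transitivity fails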

To make things more challenging for the Discrete Phenomenal Realist who wants to accept Sharing or Similarity, imagine that there's a switch that will turn off the processes in A and B that give rise to experiences α and ε, resulting in A's and B's total phenomenal experience having an identical qualitative character. Flipping the switch will either collapse A and B to one mind, or it will not. This leads to a dilemma for both Sharing and Similarity.
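In terms of the same toy model, the switch simply cuts off the locally grounded experiences, leaving the two robots' total experiences qualitatively identical (a hedged sketch, not the paper's machinery):

    # The switch disables the local processes that ground alpha and epsilon.
    LOCAL = {"A": {"alpha"}, "B": {"epsilon"}}
    SHARED_BOX = {"beta", "gamma", "delta"}

    def phenomenal_field(robot, switch_on=False):
        local = set() if switch_on else LOCAL[robot]
        return local | SHARED_BOX

    # With the switch flipped, A's and B's total fields are indistinguishable:
    print(phenomenal_field("A", switch_on=True) ==
          phenomenal_field("B", switch_on=True))  # True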

If the defender of Sharing holds that the minds collapse, then they must allow that a relatively small change in the phenomenal field can result in a radical reconfiguration of the number of minds. The point can be made more dramatic by increasing the number of experiences in the box and the number of robots connected to the box. Suppose that 200 robots each have 999,999 experiences arising from the shared box, and just one experience that's qualitatively unique and localized -- perhaps a barely noticeable circle in the left visual periphery for A, a barely noticeable square in the right visual periphery for B, etc. If a prankster were to flip the switch back and forth repeatedly, on the collapse version of Sharing the system would shift back and forth from being 200 minds to one, with almost no difference in the phenomenology. If, however, the defender of Sharing holds that the minds don't collapse, then they must allow that multiple distinct minds could have the very same token-identical experiences grounded in the very same cognitive processors. The view raises the question of the ontological basis of the individuation of the minds; on some conceptions of subjecthood, the view might not even be coherent. It appears to posit subjects with metaphysical differences but not phenomenological ones, contrary to the general spirit of phenomenal realism about minds.

The defender of Similarity faces analogous problems. If they hold the number of minds collapses to one, then, like the defender of Sharing, they must allow that a relatively small change in the phenomenal field can result in a radical reduction in the number of minds. Furthermore, they must allow that distinct, merely type-identical experiences somehow become one and the same when a switch is flipped that barely changes the system's phenomenology. But if they hold that there's no collapse, then they face the awkward possibility of multiple distinct minds with qualitatively identical but numerically distinct experiences arising from the same cognitive processors. This appears to be ontologically unparsimonious phenomenal inflation.

Maybe it will be helpful to have the possibilities for the Discrete Phenomenal Realist depicted in a figure:

[figure: the Impossibility, Sharing, and Similarity options, with the collapse vs. no-collapse dilemma for Sharing and Similarity]

Friday, June 23, 2023

Dishonesty among Honesty Researchers

Until recently, one of the most influential articles on the empirical psychology of honesty was Shu, Mazar, Gino, Ariely, and Bazerman 2012, which purported to show, across three studies, that people who sign honesty pledges at the beginning of a process in which dishonesty is possible will be much more honest than those who sign at the end of the process. The result is intuitive (if you sign before doing a task, that might change your behavior in a way that signing after wouldn't), and it suggests straightforward, practical interventions: Have students, customers, employees, etc., sign honesty pledges before occasions in which they might be tempted to cheat.

Unfortunately, there appear to have been not just one but two separate instances of fraud in this article, and its results appear not to replicate in a (presumably honest) preregistered replication attempt.

The first fraud was revealed in 2021, and concerned customers' honest or dishonest reporting of mileage to an insurance company. The data appear to have been largely fabricated, either by Ariely or by whoever supplied him the data; none of the other collaborators are implicated.

The second fraud was revealed last week, and appears to be entirely separate, involving multiple papers by Gino, including Study 1 in the already-retracted Shu et al. 2012. In Shu et al., Study 1, participants could receive financial advantage by overreporting travel expenses or how many math puzzles they had solved earlier in the experiment, and purportedly there was less overreporting if participants signed an honesty pledge first. Several participants' results appear to have been strategically shifted from one condition to another to produce the reported effect. In light of an apparent pattern of fraud across several papers, Harvard has put Gino on administrative leave.

Yes, two apparently unrelated instances of fraud, by different researchers, on the very same famous article about honesty.

[some of the evidence of fraud, from https://datacolada.org/109]

For those who follow such news, Gino's case might bring to mind another notorious case of fraud by a Harvard psychologist: In 2010, Marc Hauser was found to have faked and altered data in his work on rule-learning in monkeys (e.g., here) and subsequently resigned his academic post.

I have three observations:

First, Gino, Ariely, and Hauser are (or were) three of the most prominent moral psychologists in the world. Although Hauser's discovered fraud concerned monkey rule-learning, he was probably as well known for his work on moral cognition, which culminated in his 2006 book Moral Minds. This is a high rate of discovered fraud among leading moral psychology researchers, especially if we assume that most fraud goes undiscovered. I am struck by the parallel to my series of papers on the moral behavior of ethicists (overview here). Ethicists, and apparently also moral psychologists, appear to behave no better on average than socially similar people who don't study morality.

One might think that ethics and moral psychology would either (a) tend to draw people particularly interested in advancing ethical ends (for example, people particularly attuned to the importance of honesty), who would thus presumably be personally more ethical than average, or (b) at least make ethics personally more salient for them and thus presumably motivationally stronger. Either neither (a) nor (b) is true, or studying ethics and moral psychology also has some countervailing negative effect.

Second, around the time of the discovery of his fraud, Hauser was working on a book titled Evilicious, concerning humans' widespread appetite for behaving immorally, and similarly Gino recently published a book titled Rebel Talent: Why It Pays to Break the Rules at Work and in Life. The titles perhaps speak for themselves: Part of studying moral psychology is studying bad moral psychology. (The "rebels" Gino celebrates might not be breaking moral rules -- celebrating that might impair sales and lucrative speaking gigs -- but the idea does appear to generalize.)

Third, the first and second observations suggest a mechanism by which the study of ethics and moral psychology can negatively affect the ethics of the researcher. If people mostly aim -- as I think they do -- toward moral mediocrity, that is, aim not to be good or bad by absolute standards but rather to be about as morally good as their peers, then as your opinion about what is common changes, your own behavior will tend to shift to match. The more you study the worst side of humanity, the more you can think to yourself "well, even if X is bad, it's not as bad as all that, and people do X all the time". If you study dishonesty, you might be struck by the thought that dishonesty is everywhere -- and then, if you are tempted to be dishonest, you might think, "well, everyone else is doing it". I can easily imagine someone in Gino's position thinking: probably most researchers have from time to time shifted around a few rows of data to make their results pop out better. Is it really so bad if I do it too? And then, once the deed has been done, it probably becomes easier, for multiple reasons, to assume that such fraud is widespread and just part of the usual academic game (grounds for thinking this might include rationalization, positive self-illusion, and using oneself as a model for thinking about others).

I do still think and hope that fraud is very much the exception. In interacting with dozens of researchers over the years and working through a variety of raw datasets, I've seen some shaky methodology and maybe a few instances of unintentional p-hacking; but I have never witnessed, suspected, seen signs of, or heard any suggestion of outright fraud or data alteration.

Saturday, June 17, 2023

Flipping Pascal's Wager on Its Head

In his famous Wager, Pascal contemplates whether one should choose to believe in God. (Maybe we can't directly choose to believe in God any more than we can simply choose to believe that the Sun is purple; but we can choose to expose ourselves to conditions, such as regular association with devoted theists, that are likely to eventually lead us to believe in God.) Although there's some debate about how exactly Pascal conceptualizes the decision, one interpretation is this:

  • Choose to believe: If God exists, infinite reward; if God does not exist, status quo.
  • Choose not to believe: If God exists, infinite punishment; if God does not exist, status quo.
  • Suppose that your antecedent credence that God exists is some probability p strictly between 0 and 1. Employing standard decision theory, the expected payoff of believing is p * ∞ [the expected payoff if God does exist] + (1-p) * 0 [the expected payoff if God does not exist] = ∞. The expected payoff of not believing is p * -∞ + (1-p) * 0 = -∞. Since ∞ > -∞ (to put it mildly), belief is the rational choice.
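
For concreteness, here is the same calculation run naively with IEEE infinities in Python -- a toy illustration only; any credence p strictly between 0 and 1 yields the same verdict:

    import math

    p = 1e-6  # any credence strictly between 0 and 1 will do
    ev_believe = p * math.inf + (1 - p) * 0         # = inf
    ev_not_believe = p * (-math.inf) + (1 - p) * 0  # = -inf
    print(ev_believe > ev_not_believe)              # True: believe, says Pascal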

    Now maybe it's cheating to appeal to infinitude. Is Heaven literally infinitely good? (There might, for example, be diminishing returns on joyful experiences over time.) And maybe decision theory in general breaks down when infinitudes are involved (see my recent discussion here). But finite values also work. As long as the "status quo" value is the same in both conditions (or better in the belief condition than the non-belief condition), the calculus still yields a positive result for belief.

    If not believing is better in the absence of God, it's a bit more complicated. (Non-belief might be better in the absence of God if believing truths is intrinsically better than believing untruths or if believing that God exists leads one to make sacrifices one wouldn't otherwise make.) But if Heaven would be as good as advertised, even a smidgen of a suspicion that God exists favors belief. For example, if life without belief is one unit better than life with belief, contingent on the non-existence of God, and if Heaven is a billion times better than that one-unit difference and Hell a billion times worse, then the expected payoff for believing in God is p * 1,000,000,000 + (1-p) * -1, and the expected payoff of not believing is p * -1,000,000,000 + (1-p) * 0. This makes belief preferable as long as you think the chance of God's existing is greater than about one in two billion.
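A quick numerical check of that threshold, using the payoffs from the example above (a sketch; the exact crossover is p = 1/2,000,000,001):

    HEAVEN, HELL = 1_000_000_000, -1_000_000_000

    def ev_believe(p):
        return p * HEAVEN + (1 - p) * (-1)

    def ev_not_believe(p):
        return p * HELL + (1 - p) * 0

    threshold = 1 / 2_000_000_001  # where the two expected payoffs cross
    for p in (threshold / 2, threshold * 2):
        print(ev_believe(p) > ev_not_believe(p))  # False, then True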

    So far, so Pascalian. But there's God and then there's the gods. It seems that a more reasonable approach to the wager would consider theistic possibilities other than Pascal's God. Maybe God is an adolescent gamer running Earth in a giant simulation. Maybe the universalists are correct and a benevolent God just lets everyone into Heaven. Or maybe a jealous sectarian God condemns everyone to Hell for failing to believe the one correct theological package (different from Pascal's).

If so, then the decision matrix looks something like this:

[figure: decision matrix over multiple theistic possibilities]

    In other words, quite an un-Pascalian mess! If the positive and negative values are infinite, then we're stuck adding ∞ and -∞ in our outcomes, normally a mathematically undefined result. If the values are finite but large, then the outcome will depend on the particular probabilities and payoffs, which might be sensitive to hard-to-estimate facts about the total finite goodness of Heaven or badness of Hell. And of course even the decision matrix above is highly simplified compared to the range of diverse theistic possibilities.
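Here's a hedged numerical sketch of that mess with finite values. Every credence and payoff below is invented purely for illustration -- the point is just that the verdict is hostage to these hard-to-estimate numbers:

    # Invented states of the world: (credence, payoff if believe, payoff if not).
    states = {
        "no God":              (0.90,              -1,               0),
        "Pascal's God":        (0.05,   1_000_000_000,  -1_000_000_000),
        "universalist God":    (0.03,   1_000_000_000,   1_000_000_000),
        "rival sectarian God": (0.02,  -1_000_000_000,  -1_000_000_000),
    }

    ev_believe = sum(p * b for p, b, _ in states.values())
    ev_not_believe = sum(p * n for p, _, n in states.values())
    print(ev_believe, ev_not_believe)  # small shifts in credence flip the verdict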

But let me suggest one way of clarifying the decision. If God is not benevolent, all bets are off. Who knows what, if anything, an unbenevolent God might reward or punish? Little evidence on Earth points toward one versus another strategy for attaining a good afterlife under a hypothetical unbenevolent deity. I propose that we simplify by removing this possibility from our decision-theoretic calculus, instead considering the decision space on the assumption that if God exists, God is benevolent. Doing that, we can get some decision-theoretic traction: a benevolent God, if he/she/it/they reward anything, should reward what's good.

This, then, gives us mortals some (additional) reason to do whatever is good.

Here's something that's good: apportioning one's beliefs to the evidence. The world is better off, generally speaking, if people's credence that it will rain on Tuesday tends to match the strength of the evidence that it will rain on Tuesday. The world is better off, generally speaking, if people come to believe that cigarette smoking is bad for one's health once the evidence shows that, if people come to believe in anthropogenic climate change once the evidence shows that, if people decline to believe in alien abductions when the evidence tells against them, and so on. Apportioning our beliefs to the evidence is both a type of intellectual success that manifests the flourishing of our reasoning and a pragmatic path to the successful execution of our plans.

This is true for religious belief as well. Irrationally high credence in some locally popular version of God doesn't improve the world; in fact, it has historically been a major source of conflict and suffering. Humanity would be better off without a tendency toward epistemically unjustified religious dogmatism. Nor should a benevolent God care much about being worshipped or believed in; that's mere vanity. A truly benevolent God, with our interests at heart, should care mainly that we do what is good -- and this, I suggest, includes apportioning our religious beliefs to the evidence.

The evidence does not suggest that we should believe in the existence of God. (We could get into why, but that's a big topic! We can start by considering religious disagreement and the problem of evil.) If a benevolent God rewards, or at least does not punish, those who apportion their belief to the evidence, then a benevolent God should reward, or at least not punish, non-believers.

If God does not exist, we're better off apportioning our (non)belief to the (non)evidence. If a benevolent God exists, we're still better off not believing in God. If God exists but is not benevolent, all decision-theoretic bets are off. Thus, we can flip Pascal's Wager on its head: Unless we reject decision theory entirely as a means of evaluating the case, we're better off not believing than believing.

    Thursday, June 08, 2023

    New Paper in Draft: Introspection in Group Minds, Disunities of Consciousness, and Indiscrete Persons

    I have a new paper in draft, for a special issue of Journal of Consciousness Studies.  Although the paper makes reference to a target article by Francois Kammerer and Keith Frankish, it should be entirely comprehensible without knowledge of the target article, and hopefully it's of independent interest.

    Abstract:

    Kammerer and Frankish (this issue) challenge us to expand our conception of introspection, and mentality in general, beyond neurotypical human cases. This article describes a technologically possible "ancillary mind" modeled on a system envisioned in Ann Leckie's (2013) science fiction novel Ancillary Justice. The ancillary mind constitutes a borderline case between an intimately communicating group of individuals and a single, unified, spatially distributed mind. It occupies a gray zone with respect to personal identity and subject individuation, neither determinately one person or conscious subject nor determinately many persons or conscious subjects. Advocates of a Phase Transition View of personhood or Discrete Phenomenal Realism might reject the possibility of indeterminacy concerning personal identity and subject individuation. However, the Phase Transition View is empirically unwarranted, and Discrete Phenomenal Realism is metaphysically implausible. If ancillary minds defy discrete countability, the same might be true for actual group minds on Earth and human cases of multiple personality or Dissociative Identity.

    ----------------------------------------

    Full draft here.  As usual, comments, questions, objections welcome, either as comments on this post or directly by email to my academic address.

    Wednesday, June 07, 2023

    The Fundamental Argument for Dispositionalism about Belief

    I'm back from travel to Paris and Antwerp, where I spoke with people influenced by, and critical of, my "dispositionalist" approach to belief.  A critique of my work has also just appeared in the journal Theoria.  It's an honor to be criticized!

    Over the years, I've advanced various arguments for dispositionalism about belief (here, here, here, here, here, here, here).  Today, I want to synthesize and restate the most fundamental one.

    Dispositionalism Characterized

According to dispositionalism as I understand it, to believe some proposition P is nothing more or less than to have a certain suite of behavioral, cognitive, and phenomenal (that is, conscious-experience-involving) dispositions.  Which dispositions?  Dispositions of the sort that we are apt to associate with belief that P.  This might sound circular, but it's not: It is to ground metaphysics in commonsense psychology.  Stealing an example from Gilbert Ryle, to believe that the ice is dangerously thin is to be prone to skate warily, to dwell in the imagination on possible disasters, to warn other skaters, to agree with other people's assertions to that effect, to feel alarm and surprise upon seeing someone skate successfully across it, and so on, including being ready generally to make plans and draw consequences that depend on the truth of P.

    [Midjourney rendition of skaters approaching thin ice]

    As is typical of dispositions in general, the relevant dispositions hold only ceteris paribus (all else being equal or normal or right).  For example, you might not be disposed to warn other skaters if you'd like to see them fall in.  The dispositions are potentially limitless in number: Less obvious ones include the disposition to assume that a circular disk cut from the ice would be fragile and the disposition to attend curiously to a fox walking across the ice.  One needn't possess every disposition on the infinitely expandable list in order to count as a P believer -- just enough of the dispositional structure that attributing the belief to you adequately captures your general cognitive posture toward P.

Although dispositionalism about belief can seem confusing, dispositionalism about personality traits is intuitive.  Thus, comparison to personality traits is instructive.  Consider extraversion.  To be an extravert is nothing more or less than to have the dispositional profile stereotypical of extraversion: a tendency to say yes to party invitations, a tendency to enjoy meeting new people, a readiness to take the lead in conversation and in organizing social events, and so forth.  Of course all of these dispositions are ceteris paribus, and no one is going to be 100% extraverted down the line.  Match the dispositional characterization of extraversion closely enough, and you're an extravert; that's all there is to it.  Similarly, match the dispositional characterization of being a thin-ice believer closely enough, and you are one; that's all there is to it.
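Here is a toy rendering of the "close enough match" idea -- purely our illustration: the profile items and numerical threshold are invented, and a sharp cutoff is itself a simplification that dispositionalism needn't endorse:

    # A toy matcher: count as a thin-ice believer anyone who matches enough
    # of the stereotypical dispositional profile. All items are invented.
    THIN_ICE_PROFILE = {
        "skates warily",
        "warns other skaters",
        "assents to 'the ice is dangerously thin'",
        "feels alarm if someone skates across",
        "plans around the ice's fragility",
    }

    def counts_as_believer(dispositions, threshold=0.7):
        overlap = len(THIN_ICE_PROFILE & dispositions)
        return overlap / len(THIN_ICE_PROFILE) >= threshold

    skater = {"skates warily", "warns other skaters",
              "assents to 'the ice is dangerously thin'",
              "plans around the ice's fragility"}
    print(counts_as_believer(skater))  # True: 4/5 of the profile is close enough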

    The Fundamental Argument

Now consider some alternative, non-dispositionalist account of belief, where believing is constituted by some feature other than one's overall dispositional profile with respect to P.  Call that alternative feature Feature X.  Maybe Feature X is having a stored representation with the content that P.  Maybe Feature X is having a particular neural structure.  Maybe Feature X involves being responsive to evidence for or against P.  If Feature X is metaphysically distinct from having a belief-that-P-ish dispositional structure, then it ought to be possible in principle to imagine cases in which Feature X is absent and the dispositional structure is present and vice versa.  I submit that in such cases we intuitively do, and pragmatically ought to, attribute belief in a way that tracks the belief-that-P-ish dispositional structure rather than the presence or absence of Feature X.  That's the fundamental argument.

    As a warm-up, consider space aliens.  Tomorrow, they arrive from Alpha Centauri.  They learn our languages, trade with us, fall in love with some of us, join our corporations and governments, write philosophy and psychology articles, reveal their technological secrets, and recount amazing tales about their home planet.  Stipulate, too, that we somehow know their cognitive dispositions and their phenomenology (that is, their streams of conscious experience).  We know that if they say "P is so, I'm sure of it!" and then not-P is revealed to be true, they normally feel surprise.  We know that they have inner speech and imagery like ours, and that when they assert that P they tend to draw further logical conclusions from P and make plans that will only work if P is true.

    I submit that if we know all these dispositional facts about the aliens, we know that they have beliefs.  What kinds of brains do they have?  What kind of underlying cognitive architecture?  How did they come to have their present dispositional structures?  Who knows!  As far as belief is concerned, it doesn't matter.  They have what it takes to believe.

    Suppose that Feature X is having internal structured representations of a certain sort.  Imagine, now, a space alien -- call her Breana -- with a radically different cognitive architecture from ours, lacking internal structured representations of the required sort, but possessing the behavioral, cognitive, and phenomenal dispositions characteristic of belief.  Breana will act and react, behaviorally, emotionally, and cognitively, just like an entity with beliefs.  Maybe Breana says "microbes live beneath the ice of Europa" with a feeling of sincerity and accompanying visual imagery of slime beneath the ice.  She draws relevant conclusions (Earth is not the only planet with life in the Solar System).  She would feel confused and surprised if a seemingly knowledgeable friend contradicted her.  If a human asked where we should probe to find alien life, she would recommend Europa.  And so on.  Breana believes, despite lacking internal structured representations of the required sort.

My opponent might say that if Breana acts and reacts as I have described her, then she must have internal representations of the required sort and thus would not be a counterinstance to a representationalist account.  I respond with a dilemma: Either representationalism does not commit to the existence of internal representations of the required sort whenever the belief-that-P-ish dispositional structure is present, or it does.  If the former, then we ought to be able to construct a Breana-like case without the required underlying representations, producing the objectionable case of the previous paragraph.  If the latter -- that is, if it is metaphysically or conceptually necessary that whenever a belief-that-P-ish dispositional structure is present, the entity has the required representational structure -- then the so-called "representationalism" collapses into dispositionalism: The dispositions drive the metaphysical car, so to speak, and having the pattern of dispositions is just what it is to represent.

    How about the converse case, where the representational structure is present but the dispositions are absent?  Breana has a stored representation with the content P, but she has no inclination to act or react, or to reason or plan, accordingly.  "Microbes live beneath the ice in Europa" is somehow internally represented, but she would not assent to that proposition verbally or in inner speech; she would feel no surprise upon learning that it is false; she would never make any plans contingent upon its truth; when she imagines Europa, she pictures it as sterile; if quizzed on the topic, she would fail; in no condition could she be provoked to remember that she learned this; and so on.  Breana will of course sincerely deny that she believes P.  It would be strange to insist that despite her sincere protests, she really does believe.

    Thus, to the extent that representationalism's Feature X comes apart from dispositional structure, our intuitive standards of belief attribution follow the dispositional structure rather than the presence or absence of Feature X.  And that's how it should be.  What we can and do care about in belief attribution is the believer's overall cognitive posture toward the world -- how they are disposed to act and react, what they will affirm and depend on, what they take for granted in their inferences, and so forth -- not whether there's a tokening of "P" in a hidden mental warehouse.

    Note that unlike old-school behaviorist dispositionalism, this argument is immune to concerns about puppets and play-acting.  (Actually, I think the behaviorist has some underappreciated resources here, but dispositionalism of the form I prefer need not rely on those resources.)  Puppets aren't conscious and don't reason, so they have no phenomenal or cognitive dispositions.  Actors who pretend to believe that P while really holding not-P have the phenomenal and cognitive dispositions characteristic of not-P, including relying on not-P in their own deceptive plans, and the ceteris paribus clause is triggered for their behavioral dispositions, explaining why they don't act like P believers.  (Compare: An extravert paid a large sum to act like an introvert is really still an extravert.)

    Other approaches to belief are vulnerable to the same fundamental argument.  Suppose that Feature X involves the capacity to rationally revise your belief in the face of counterevidence.  Now imagine someone who acts and reacts exactly like a P-believer, matching the dispositional profile in every respect, except that they stubbornly eschew any counterevidence.  We ordinarily would, and should, describe that person as a P-believer.  They will self-ascribe the belief, insist on its truth, reason from it, plan on its basis, and so forth.  Of course they believe it!  In fact, if they would never revise it, we might be inclined to say they believe it very strongly indeed.

    Suppose that Feature X involves something about the causal history of the belief state: To be a belief, the cognitive state must have been caused in a certain way.  Now imagine Swampman: Lightning strikes a swamp, and by freak quantum chance a molecule-for-molecule duplicate of your favorite philosopher emerges.  Let's stipulate (though it's complicated) that this philosopher has all the dispositions characteristic of belief.  We would, and should, say that Swampman believes.  Or imagine a conscious robot, printed fresh from the factory with a full set of behavioral, cognitive, and phenomenal dispositions.  This robot believes, even though the disposition, for example, to say "Earth is approximately spherical" did not arise in the usual way from evidence or testimony to that effect.  Again, this seems both natural to say and to track what we do and should care about in ascribing beliefs. 

    Suppose that Feature X involves a normative standard.  A belief is a state that is "correct" only if it is true.  Either assessibility by this standard corresponds with having the required dispositional structure, or assessibility by this standard can come apart from having the required dispositional structure.  If the former, then being assessible by the normative standard is a consequence of having the dispositional structure constitutive of believing.  If the latter, then either there are beliefs that are not normatively assessible in that way (e.g., religious beliefs?) or there are states that are normatively assessible in the way we normally assess beliefs but which are not in fact beliefs (e.g., delusions?).

    The Metaphilosophical Frame

    Although I think alternative views often face empirical problems (especially concerning "in-between" cases of belief in which people only approximately match the relevant dispositional profiles), the fundamental argument as I've described it here doesn't rely on empirical objections to alternative views.  Instead, it relies on a particular metaphilosophical approach.

    Stipulate that we can define "belief" in a dispositionalist way or alternatively in some other way, and that both definitions are coherent and face no insuperable empirical obstacles.  Now we, as philosophers, face a choice.  How do we want to define belief?  What definition captures what we do care about and should care about in belief ascription?  What way of thinking about belief sorts cases in the way it's most useful to sort them?

    We will ordinarily, I think, find it natural and intuitive to sort cases by the dispositionalist criteria.  That creates a default supposition in favor of dispositionalism.  But dispositionalism might not always track ordinary patterns of belief ascription.  In some cases, our intuitions are ambivalent or go the other way.  (See my discussion of intellectualism, for example.)  More important, the dispositionalist approach tracks what matters in belief ascription.  Dispositionalism captures what we should most want to capture with the term "belief" -- that is, our general dispositional posture toward the truth of P, whether we will affirm it, defend it, rely on it in planning and inference, feel confident when we contemplate it, and so forth.