Monday, February 25, 2013

An Objection to Some Accounts of Self-Knowledge of Attitudes

You believe some proposition P. You believe that Armadillo is the capital of Texas, say.[footnote 1] Someone asks you what you think the capital of Texas is. You say, "In my opinion, the capital of Texas is Armadillo." How do you know that that is what you believe?

Here's one account (e.g., in Nichols and Stich 2003): You have in your mind a dedicated "Monitoring Mechanism". The job of this Monitoring Mechanism is to scan the contents of your Belief Box, finding tokens such as P ("Armadillo is the capital of Texas") or Q ("There's an orangutan in the fridge"), and producing, in consequence, new beliefs of the form "I believe P" or "I believe Q". Similarly, it or a related mechanism can scan your Desire Box, producing new beliefs of the form "I desire R" or "I desire S". Call this the Dedicated Mechanism Account.
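
To make the picture concrete, here is a deliberately minimal toy sketch in Python of the sort of architecture a Dedicated Mechanism Account posits. It is purely illustrative: the names (Mind, belief_box, monitoring_mechanism) are hypothetical labels of my own, not anything Nichols and Stich propose, and nothing in the argument below depends on the details.

    # A toy sketch of the Dedicated Mechanism Account, for illustration only.
    # The names here are hypothetical labels, not Nichols and Stich's own.

    class Mind:
        def __init__(self):
            self.belief_box = set()   # first-order beliefs, stored as plain strings
            self.desire_box = set()   # first-order desires

        def monitoring_mechanism(self):
            """Scan the Belief Box and Desire Box and produce new second-order
            beliefs of the form 'I believe that P' / 'I desire that R'."""
            second_order = set()
            for p in self.belief_box:
                second_order.add("I believe that " + p)
            for r in self.desire_box:
                second_order.add("I desire that " + r)
            self.belief_box |= second_order   # the new beliefs are themselves stored
            return second_order

    mind = Mind()
    mind.belief_box.add("Armadillo is the capital of Texas")
    mind.desire_box.add("R")
    for ascription in sorted(mind.monitoring_mechanism()):
        print(ascription)
    # I believe that Armadillo is the capital of Texas
    # I desire that R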

One alternative account is Peter Carruthers's. Carruthers argues that there is no such dedicated mechanism. Instead, we theorize on the basis of sensory evidence and our own imagery. For example, I hear myself saying -- either aloud or in inner speech (which is a form of imagery) -- "The capital of Texas is Armadillo", and I think something like, "Well, I wouldn't say that unless I thought it was true!", and so I conclude that I believe that Armadillo is the capital. This theoretical, interpretative reasoning about myself is usually nonconscious, but in the bowels of my cognitive architecture, that's what I'm doing. And there's no more direct route to self-knowledge, according to Carruthers. We have to interpret ourselves given the evidence of our behavior, our environmental context, and our stream of imagery and inner speech.

Here's an argument against both accounts.

First, assume that to believe that P is to have a representation with the content P stored in a Belief Box (or memory store), i.e., ready to be accessed for theoretical inference and practical decision making. (I'm not keen on Belief Boxes myself, but I'll get to that later.) A typical deployment of P might be as follows: When Bluebeard says to me, "I'm heading off to the capital of Texas!", I call up P from my Belief Box and conclude that Bluebeard is heading off to Armadillo. I might similarly ascribe a belief to Bluebeard on that basis. Unless I have reason to think Bluebeard ignorant about the capital of Texas or (by my lights) mistaken about it, I can reasonably conclude that Bluebeard believes that he is heading to Armadillo. All parties agree that I need not introspect to attribute this belief to Bluebeard, nor call upon any specially dedicated self-scanning mechanism (other than whatever allows ordinary memory retrieval), nor interpret my own behavior and imagery. I can just pull up P to join it with other beliefs, and conclude that Q. Nothing special here or self-interpretive. Just ordinary cognition.

Now suppose the conclusion of interest -- the "Q" in this case -- is just "I believe that P". What other beliefs does P need to be hooked up with to license this conclusion? None, it seems! I can go straightaway, in normal cases, from pulling up P to the conclusion "I believe that P". If that's how it works, no dedicated self-scanning mechanism or self-interpretation is required, but only ordinary belief-retrieval for cognition, contra both Carruthers's view and Dedicated Mechanism Accounts.

That will have seemed a bit fast, perhaps. So let's consider some comparison cases. Suppose Sally is the school registrar. I assume she has true beliefs about the main events on the academic calendar. I believe that final exams end on June 8. If someone asks me when Sally believes final exams will end, I can call up P1 ("exams end June 8") and P2 ("Sally has true beliefs about the main events on the academic calendar") to conclude Q ("Sally believes exams end June 8"). Self-ascription would be like that, but without P2 required. Or suppose I believe in divine omniscience. From P1 plus divine omniscience, I can conclude that God believes P1. Or suppose that I've heard that there's this guy, Eric Schwitzgebel, who believes all the same things I believe about politics. If P1 concerns politics, I can conclude from P1 and this knowledge about Eric Schwitzgebel that this Eric Schwitzgebel guy believes P1. Later I might find out that Eric Schwitzgebel is me.

Do I need to self-ascribe the belief that P1 before reaching that conclusion about the Eric Schwitzgebel guy? I don't see why I must. I know that moving from "P1 is true and concerns politics" to "that Eric Schwitzgebel guy believes P1" will get me true conclusions. I can rely on it. It might be cognitively efficient for me to develop a habit of thought by which I leap straight from one to the other.

Alternatively: Everyone thinks that I can at least sometimes ascribe myself beliefs as a result of inference. I subscribe to a general theory, say, on which if P1 and P2 are true of Person S and if P3 and P4 are true in general about the world, then I can conclude that S believes Q. Now suppose S is me. And suppose Q is "I believe P" and suppose P3 is P. And then jettison the rest of P1, P2, and P4. Voila![footnote 2]
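
To see the collapse concretely, here is another purely illustrative toy sketch (the function and parameter names are hypothetical, and of course no one in this debate thinks the mind literally runs code like this): an attribution rule that requires a bridge premise in the third-person case, like P2 about Sally the registrar, but reduces in the first-person case to the bare rule from P to "I believe that P".

    # Toy illustration of the collapse, with hypothetical names throughout.
    # General schema: from P plus a bridge premise about a person S, conclude
    # that S believes P.  In the first-person case the bridge premise drops
    # out, leaving the bare rule: from P, conclude "I believe that P".

    def attribute_belief(p, subject, bridge_premise=None):
        """Attribute the belief that p to subject, if licensed."""
        if subject == "I":
            # Degenerate, self-ascriptive instance: no auxiliary premise needed.
            return "I believe that " + p
        if bridge_premise is not None:
            # Third-person instance, licensed by the bridge premise (e.g. P2:
            # "Sally has true beliefs about the academic calendar").
            return subject + " believes that " + p
        return None  # no basis for the attribution

    print(attribute_belief("final exams end on June 8", "Sally",
                           bridge_premise="Sally has true beliefs about the academic calendar"))
    # Sally believes that final exams end on June 8

    print(attribute_belief("final exams end on June 8", "I"))
    # I believe that final exams end on June 8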

If there is a Desire Box, it might work much the same way. If I can call up the desire R to join with some other beliefs and desires to form a plan, in just the ordinary cognitive way that desires are called up, then it seems I should also be able to call it up for purposes of self-ascription. It would be odd if we could call up beliefs and desires for all the wide variety of cognitive purposes that we ordinarily call them up for but not for the purposes of self-ascriptive judgment. What would explain that strange incapacity?

What if there isn't a Belief Box, a Desire Box, or a representational storage bin? The idea remains basically the same: Whatever mechanisms allow me to reach conclusions and act based on my beliefs and desires should also allow me to reach conclusions about my beliefs and desires -- at least once I am cognitively sophisticated enough to have adult-strength concepts of belief and desire.

This doesn't mean I never go wrong and don't self-interpret at all. We are inconsistent and unstable in our belief- and desire-involving behavioral patterns; the opinions we tend to act on in some circumstances (e.g., when self-ascription or verbal avowal is our task) might very often differ from those we tend to act on in other circumstances; and it's a convenient shorthand -- too convenient, sometimes -- to assume that what we say, when we're not just singing to ourselves and not intending to lie, reflects our opinions. Nor does it imply that there aren't also dedicated mechanisms of a certain sort. My own view of self-knowledge is, in fact, pluralist. But among the many paths, I think, is the path above.

(Fans of Alex Byrne's approach to self-knowledge will notice substantial similarities between the above and his views, to which I owe a considerable debt.)

Update, February 27

Peter Carruthers replies as follows:

Eric says: “I can just pull up P to join it with other beliefs, and conclude that Q. Nothing special here or self-interpretive. Just ordinary cognition.” This embodies a false assumption (albeit one that is widely shared among philosophers; and note that essentially the same response to that below can be made to Alex Byrne). This is that there is a central propositional workspace of the mind where beliefs and desires can be activated and interact with one another directly in unconstrained ways to issue in new beliefs or decisions. In fact there is no such amodal workspace. The only central workspace that the mind contains is the working memory system, which has been heavily studied by psychologists for the last half-century. The emerging consensus from this work (especially over the last 15 years or so) is that working memory is sensory based. It depends upon attention directed toward mid-level sensory areas of the brain, resulting in globally broadcast sensory representations in visual or motor imagery, inner speech, and so on. While these representations can have conceptual information bound into them, it is impossible for such information to enter the central workspace alone, not integrated into a sensory-based representation of some sort.

Unless P is an episodic memory, then (which is likely to have a significant sensory component), or unless it is a semantic memory stored, at least in part, in sensory format (e.g. a visual image of a map of Texas), then the only way for P to “join with other beliefs, and conclude that Q” is for it to be converted into (say) an episode of inner speech, which will then require interpretation.

This is not to deny that some systems in the mind can access beliefs and draw inferences without those beliefs needing to be activated in the global workspace (that is, in working memory). In particular, goal states can initiate searches for information to enable the construction of plans in an “automatic”, unconscious manner. But this doesn’t mean that the mindreading system can do the same. Indeed, a second error made by Eric in his post is a failure to note that the mindreading system bifurcates into two (or more) distinct components: a domain-specific system that attributes mental states to others (and to oneself), and a set of domain-general planning systems that can be used to simulate the reasoning of another in order to generate predictions about that person’s other beliefs or likely behavior. On this Nichols & Stich and I agree, and it provides the former the wherewithal to reply to Eric’s critique also. For the “pulling up of beliefs” to draw inferences about another’s beliefs takes place (unconsciously) in the planning systems, and isn’t directly available to the domain-specific system responsible for attributing beliefs to others or to oneself.

Peter says: "Unless P is an episodic memory... or unless it is a semantic memory stored, at least in part, in sensory format (e.g. a visual image of a map of Texas), then the only way for P to “join with other beliefs, and conclude that Q” is for it to be converted into (say) an episode of inner speech, which will then require interpretation." I don't accept that theory of how the mind works, but even if I did accept that theory, it seems now like Peter is allowing that if P is a "semantic memory" stored in partly "sensory format" it can join with other beliefs to drive the conclusion Q without an intermediate self-interpretative episode. Or am I misunderstanding the import of his sentence? If I'm not misunderstanding, then hasn't he just given me all I need for this step of my argument? Let's imagine that "Armadillo is the capital of Texas" is stored in partly sensory format (as a visual map of Texas with the word "Armadillo" and a star). Now Peter seems to be allowing that it can drive inferences without requiring an intermediate act of self-interpretation. So then why not allow it to also drive the conclusion that I believe that Armadillo is the capital? We're back to the main question of this post, right?

Peter continues: "This is not to deny that some systems in the mind can access beliefs and draw inferences without those beliefs needing to be activated in the global workspace (that is, in working memory). In particular, goal states can initiate searches for information to enable the construction of plans in an “automatic”, unconscious manner. But this doesn’t mean that the mindreading system can do the same." First, let me note that I agree that the fact that some systems can access stored representations without activating those representations in the global workspace doesn't strictly imply that the mindreading system (if there is a dedicated system, which is part of the issue in dispute) can also do so. But I do think that if, for a broad range of purposes, we can access these stored beliefs, it would be odd if we couldn't do so for the purpose of reaching conclusions about our own minds. We'd then need a pretty good theory of why we have this special disability with respect to mindreading. I don't think Peter really offers us as much as we should want to explain this disability.

... which brings me to my second reaction to this quote. What Peter seems to be presenting as a secondary feature of the mind -- "the construction of plans in an 'automatic', unconscious manner" -- is, in my view, the very heart of mentality. For example, to create inner speech itself, we need to bring together a huge variety of knowledge and skills about language, about the social environment, and about the topic of discourse. The motor plan or speech plan constructed in this way cannot mostly be driven by considerations that are pulled explicitly into the narrow theater of the "global workspace" (which is widely held to host only a small amount of material at a time, consciously experienced). Our most sophisticated cognition tends to be what happens before things hit the global workspace, or even entirely independent of it. If Peter allows, as I think he must, that that pre-workspace cognition can access beliefs like P, what then remains to be shown to complete my argument is just that these highly sophisticated P-accessing processes can drive the judgment or the representation or the conclusion that I believe that P, just as they can drive many other judgments, representations, or conclusions. Again, I think the burden of proof should be squarely on Peter to show why this wouldn't be possible.

Update, February 28

Peter responds:

Eric writes: “it seems now like Peter is allowing that if P is a "semantic memory" stored in partly "sensory format" it can join with other beliefs to drive the conclusion Q without an intermediate self-interpretative episode.”

I allow that the content of sensory-based memory can enter working memory, and so can join with other beliefs to drive a conclusion. But that the content in question is the content of a memory rather than a fantasy or supposition requires interpretation. There is nothing about the content of an image as such that identifies it as a memory, and memory images don’t come with tags attached signifying that they are memories. (There is a pretty large body of empirical work supporting this claim, I should say. It isn’t just an implication of the ISA theory.)

Eric writes: “But I do think that if, for a broad range of purposes, we can access these stored beliefs, it would be odd if we couldn't do so for the purpose of reaching conclusions about our own minds. We'd then need a pretty good theory of why we have this special disability with respect to mindreading.”

Well, I and others (especially Nichols & Stich in their mindreading book) had provided that theory. The separation between thought-attribution and behavioral prediction is now widely accepted in the literature, with the latter utilizing the subject’s own planning systems, which can in turn access the subject’s beliefs. There is also an increasing body of work suggesting that on-line, unreflective, forms of mental-state attribution are encapsulated from background beliefs. (I make this point at various places in The Opacity of Mind. But more recently, see Ian Apperly’s book Mindreaders, and my own “Mindreading in Infancy”, shortly to appear in Mind & Language.) The claim also makes good theoretical sense seen in evolutionary and functional terms, if the mindreading system evolved to track the mental states of others and generate predictions therefrom. From this perspective one might predict that thought-attribution could access a domain-specific database of acquired information (e.g. “person files” containing previously acquired information about the mental states of others), without being able to conduct free-wheeling searches of memory more generally.

Eric writes: “these highly sophisticated P-accessing processes can drive the judgment or the representation or the conclusion that I believe that P, just as they can drive many other judgments, representations, or conclusions. Again, I think the burden of proof should be squarely on Peter to show why this wouldn't be possible.”

First, I concede that it is possible. I merely claim that it isn’t actual. As for the evidence that supports such a claim, there are multiple strands. The most important is evidence that people confabulate about their beliefs and other mental states in just the sorts of circumstances that the ISA theory predicts that they would. (Big chunks of The Opacity of Mind are devoted to substantiating this claim.) Now, Eric can claim that he, too, can allow for confabulation, since he holds a pluralist account of self-knowledge. But this theory is too underspecified to be capable of explaining the data. Saying “sometimes we have direct access to our beliefs and sometimes we self-interpret” issues in no predictions about when we will self-interpret. In contrast, other mixed-method theorists such as Nichols & Stich and Alvin Goldman have attempted to specify when one or another method will be employed. But none of these accounts is consistent with the totality of the evidence. The only theory currently on the market that does explain the data is the ISA theory. And this entails that the only access that we have to our own beliefs is sensory-based and interpretive.

I agree that people can certainly make "source monitoring" and related errors in which genuine memories of external events are confused with merely imagined events. But it sounds to me like Peter is saying that a stored belief, in order to fulfill its function as a memory rather than a fantasy or supposition, must be "interpreted" -- and, given his earlier remarks, presumably interpreted in a way that requires activation of that content in the "global workspace". (Otherwise, his main argument doesn't seem to go through.) I feel like I must be missing something. I don't see how spontaneous, skillful action that draws together many influences -- for example, in conversational wit -- could realistically be construed as working this way. Lots of pieces of background knowledge flow together in guiding such responsiveness; they can't all be mediated in the "global workspace", which is normally thought to have a very limited capacity. (See also Terry Horgan's recent work on jokes.)

Whether we are looking at visual judgments, memory judgments, social judgments about other people, or judgments about ourselves, the general rule seems to be that the sources are manifold and the mechanisms complex. "P, therefore I believe that P" is far too simple to be the whole story; but so also I think is any single-mechanism story, including Peter's.

I guess Peter and I will have a chance to hammer this out a bit more in person during our SSPP session tomorrow!

_______________________________________________

[note 1]: Usually philosophers believe that it's raining. Failing that, they believe that snow is white. I just wanted a change, okay?

[note 2]: Is it really "inference" if the solidity of the conclusion doesn't require the solidity of the premises? I don't see why that should be an essential feature of inferences. But if you instead want to call it (following Byrne) just an "epistemic rule" that you follow, that's okay by me.

8 comments:

Anonymous said...

Is it right to understand Carruthers as proposing that we have testimonial evidence of our own mental states, where we are the source of that testimony?

Eric Schwitzgebel said...

Bearistotle: That seems to me basically right. Just as I will ascribe Joe the belief that P when he says "P" and there are no countervailing considerations, so also for very similar reasons in my own case.

Gary Williams said...

Hi Eric,

You say:

"I can go straightaway, in normal cases, from pulling up P to the conclusion "I believe that P". If that's how it works, no dedicated self-scanning mechanism or self-interpretation is required"


I don't really see how this is an argument against Carruthers's view. After all, he wouldn't deny that it'd be possible to do some theoretical creature construction and design a cognitive system that works without heavily relying on a self-interpretation mechanism. But Carruthers's view is not that cognition necessarily has to involve a self-interpretation mechanism but that we *in fact* have such a mechanism, and he uses a variety of evidence to support the view (mainly from studies on confabulation). At the very least, one would have to challenge his empirical case in order to argue against his position, which is a factual claim about how human cognition actually works. That is, it doesn't seem enough to say "I can imagine human cognition works like X" when Carruthers's claim is "As a matter of contingent fact, cognition works like Y". Carruthers seems to acknowledge that we'd be more rational/efficient cognizers if we didn't always have to go through the hassle of self-interpretation of sensory images. But for contingent evolutionary reasons, this is what humans are stuck with for better or worse.

Eric Schwitzgebel said...

Thanks for the thoughtful comment, Gary! I agree with Carruthers that we can do interpretation, and often do in fact do interpretation as an important part of the process. And you're right that, strictly speaking, the fact that we could hypothetically do what I describe is no knock on Carruthers's view if it can be shown that we don't in fact do what I describe.

But I do think that the considerations I raise make a strong prima facie empirical case that Carruthersian interpretation isn't the whole picture. As a matter of empirical fact, we can deploy the belief that P to ground all sorts of inferences and actions. If he is going to say, "but not this one particular form of inference!" he's going to need a darn compelling empirical case, which I don't think he has.

Carruthers does nicely review empirical evidence that we make mistakes in patterns suggesting that self-interpretation is an important part of the self-attributive story. I think he deserves a lot of credit for highlighting how big a part of the story it is, in contrast with almost all other recent philosophers. But it's a big leap from that to the anti-pluralist conclusion that processes like the ones described in the post can play no role.

Anonymous said...

Hi Eric,
I'm a big fan of the pluralism paper. Yet given your pluralism, how seriously are we to take the view you offer here? Pluralism means that self-knowing does not always work this way, but it's not very interesting to say that there is at least one instance of such self-knowing. Would you say that this is how we most often self-know?

Also, I think you have a point against Carruthers, though I'm less sure about the dedicated mechanism view. With the belief-box metaphor, there is talk of belief as "ready to be accessed," and of "calling up" a desire... Without the belief-box metaphor, there is still the claim that "Whatever _mechanisms_ allow me to reach conclusions and act, based on my beliefs and desires, should also allow me to reach conclusions about my beliefs and desires" (my underline).

If a mechanism allows me to reach conclusions/act "on the basis" of what I believe/desire, isn't that a mechanism which at some level tracks what I believe/desire? And isn't that essentially what the dedicated mechanism view says?

Eric Schwitzgebel said...

Ted: Thanks for the thoughtful comment!

I'm not sure about "most often". In fact, the whole thing is a bit hypothetical, given my concerns about the representational-storage/belief-box architecture. That's part of why it's framed more as an objection to Carruthers and Nichols & Stich than as a positive account. They're more Belief-Boxy in their cognitive architectural commitments than I am. The extension to the non-Belief-Box architecture is pretty hand-wavy!

Your defense of the dedicated mechanism view might work better for someone like Hill than for Nichols & Stich. Nichols & Stich are committed to its being a substantial piece of cognitive architecture, different in kind from ordinary reasoning, that can break or double-dissociate from other types of mechanisms. Some other views, like maybe Hill's, seem open to having a lighter touch about how independent the "mechanism" is from other types of thinking.

Anibal Monasterio Astobiza said...

To my lights, Carruthers's model of introspection and his division (introspection for perceptual states but not for cognitive states like judgements and decisions) is counterintuitive.

I can't explain why evolution would shape organisms that can introspect only their affective states, but not their cognitive ones. Monitoring and assessing cognitive states seems to me adaptive, because it allows organisms to change behaviour to attain a goal.

Furthermore, I believe he is confusing consciousness with introspection. That confusion leads him to suggest that, because we can't introspect judgements and decisions, judgements and decisions are unconscious.

With respect to applying "mindreading" mechanisms to know about oneself, that sounds odd to me. I know introspection as classically conceived fails (Schwitzgebel 2008), but we are not so strange to ourselves that we need to "mindread" our own minds as if we were "the other within us".

Eric Schwitzgebel said...

Thanks for the thoughtful comment, Anibal! I'm also updating the post with a reply from Carruthers himself.