
Thursday, November 04, 2010

Not By Argument Alone (by Guest Blogger G. Randolph Mayes)

I just gave a talk at Gonzaga University called “Not by Argument Alone” in which I tried to show how explanatory reasoning figures into the resolution of philosophical problems. It begins with the observation that we sometimes have equally good reasons for believing contradictory claims. This is the defining characteristic of philosophical antinomies, but it is a common feature of everyday reasoning as well.

For example, Frank told me to meet him at his office at 3 PM if I wanted a ride home. But I’ve been waiting for 15 minutes now and still no Frank. This problem can be represented as a contradiction of practical significance: Frank both will and will not be giving me a ride home. One of these claims must go. The problem is that I have very good reasons for believing both. Frank is a very reliable friend, as is my memory for promises made. On the other hand, my ability to observe the time of day and the absence of Frank at that time and location is quite reliable as well.

So how do I decide which claim to toss? I consider the possibility that Frank is not coming, but this immediately raises the following question: Why not? (He forgot; he lied; he was mugged; I am late?) I consider the possibility that Frank will still show. This immediately raises another question: Why isn't he here? (He was delayed; I am early; he is here but I don't see him?) Both of these questions are requests for explanations, and producing good answers to them is essential to the rational resolution of the contradiction. Put differently, I should deny the claim whose associated explanation questions I am best capable of answering.
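For readers who like to see the bookkeeping made explicit, here is a minimal sketch of that heuristic in Python. The claims, candidate answers, and numerical scores are all invented for illustration; nothing here is meant as a serious model of explanatory coherence, only a toy rendering of "deny the claim whose explanation questions you can best answer."

# A toy rendering of the heuristic: deny the claim whose associated
# explanation question you are best able to answer.
# Claims, candidate answers, and scores are illustrative placeholders only.

candidate_denials = {
    # Denying this claim means Frank is not coming, which raises: "Why not?"
    "Frank will give me a ride home": {
        "he forgot": 0.4, "he lied": 0.05, "he was mugged": 0.02, "I am late": 0.3,
    },
    # Denying this claim means he will still show, which raises: "Why isn't he here yet?"
    "Frank will not give me a ride home": {
        "he was delayed": 0.5, "I am early": 0.1, "he is here but I don't see him": 0.2,
    },
}

def answerability(candidate_answers):
    # How good is my best available answer to the question this denial raises?
    return max(candidate_answers.values())

claim_to_deny = max(candidate_denials, key=lambda claim: answerability(candidate_denials[claim]))
print("Deny:", claim_to_deny)  # with these toy scores: "Frank will not give me a ride home"

With the made-up numbers above, the best answer I can give concerns the delay, so I keep believing that Frank is coming and deny the claim that he is not.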

This is one way of explicating the view that rational belief revision depends on considerations of ‘explanatory coherence.’ The idea is typically traced to Wilfrid Sellars, and it has since been developed along epistemological, psychological, and computational lines. Oddly, however, it has not been explored much as a model for the resolution of philosophical questions. I don’t know why, but I speculate that it is because philosophers don’t naturally represent philosophical thinking in explanatory terms. Typically, a philosophical ‘theory’ is represented not so much as a proposed explanation of some interesting fact as it is a proposed analysis of some problematic concept.

In my view, though, philosophers engage in the creation of explanatory hypotheses all the time. Consider the traditional problem of perception. Just about everyone agrees that we perceive objects. But whereas the physicalist argues that we perceive independently existing physical objects, the phenomenalist argues, just as persuasively, that the objects of perception are mind-dependent. Again, one claim must go. Suppose we deny the phenomenalist's claim. But then how do we explain illusions and hallucinations, which are phenomenologically indistinguishable from veridical perceptions of physical objects? Suppose we deny the physicalist's claim. But then how do we explain the origin of experience itself?

When we explicitly acknowledge that explanation is a necessary step in philosophical inquiry, we thereby acknowledge the responsibility to identify criteria for evaluating the explanations that we propose. Too often philosophical theories are defended simply on the basis of their intuitive appeal. But why would we expect this to reflect anything more than our intuitive preference for believing the claims that they preserve? In science, the ability of a theory to explain things we already know is a paltry achievement. A good explanation must successfully predict novel phenomena or unify familiar phenomena not previously known to be related. Are philosophical explanations subject to the same criteria? If so, then let’s explicitly apply them. If not, well, then I think we’ve got some explaining to do.

This is my last post! Thanks very much for reading and thanks especially to Eric for giving me this opportunity to float some of my thoughts on The Splintered Mind.

Friday, October 29, 2010

The Convincing Explanation (by Guest Blogger G. Randolph Mayes)

The Stone is the new section of the New York Times devoted to philosophy and this week it contains an interesting piece called “Stories vs. Statistics” by John Allen Paulos. It is worth reading in its entirety, but for my money the most important point he makes is this:

The more details there are about them in a story, the more plausible the account often seems. More plausible, but less probable. In fact, the more details there are in a story, the less likely it is that the conjunction of all of them is true.

Our tendency to confuse plausibility with probability is also at the heart of a short essay of mine (forthcoming in the journal Think), called “Beware the Convincing Explanation.” Paulos clarifies the excerpt above by reference to the ‘conjunction fallacy,’ which I discussed in an earlier post. In my essay I try to get at it from a different angle, by distinguishing the respective functions of argument and explanation.

Here is the basic idea: Normally, when we ask for an argument we are asking for evidence, which is to say the grounds for believing some claim to be true. An explanation, on the other hand, is not meant to provide grounds for belief; rather it tells us why something we already believe is so. Almost everyone understands this distinction at an intuitive level. For example, suppose you and I were to have this conversation about our mutual friend Toni.

Me: Boy, Toni is seriously upset.

You: Really? Why?

Me: She’s out in the street screaming and throwing things at Jake.

You can tell immediately that we aren’t communicating. You asked for an explanation, the reason Toni is upset. What I gave you is an argument, my reasons for believing she is upset. But now consider a conversation in which the converse error occurs:
Me: Boy, Toni is seriously upset.

You: Really? How do you know that?

Me: Jake forgot their date tonight and went drinking with his pals.

This time my response actually begs the question. Jake blowing off the date would certainly explain why Toni is upset, but an explanation is only appropriate if we agree that she is. Since your question was a request for evidence, it is clear that you are not yet convinced of this and I’ve jumped the gun by explaining what caused it.

What’s interesting is that people do not notice this so readily. In other words, we often let clearly explanatory locutions pass for arguments. This little fact turns out to be extremely important, as it makes us vulnerable to people who know how to exploit it. For example, chiropractic medicine, homeopathy, faith healing -- not to mention lots of mainstream diagnostic techniques and treatments -- are well known to provide little or no benefit to the consumer. Yet their practitioners produce legions of loyal customers on the strength of their ability to provide convincing explanations of how their methods work. If we were optimally designed for detecting nonsense, we would be highly sensitive to people explaining non-existent facts. We aren’t.

Now, to be fair, there is a sense in which causes can satisfy evidential requirements. After all, Jake blowing off the date can be construed as evidence that Toni will be upset when she finds out. However, it is quite weak evidence compared to actually watching Toni go off on him. So, we can put the point a bit more carefully by saying that what people don’t typically understand is how weak the evidence often is when an explanation gets repurposed as an argument.

Following Paulos, we can say that convincing explanations succeed in spite of their evidential impotence because they are good stories that give us a satisfying feeling of understanding a complex situation. Importantly, this is a feeling that could not be sustained if we were to remain skeptical of the claim in question, as that claim is now integral to the story.

Belief in the absence of evidence is not the only epistemic mischief that explanations can produce. The presence or absence of an explanation can also inhibit belief formation in spite of strong supporting evidence. The inhibitory effect of explanation was demonstrated in a classic study by Anderson, Lepper, and Ross, which showed that people are more likely to persist in believing discredited information if they have previously produced hypotheses attempting to explain that information. Robyn Dawes has documented a substantial body of evidence for the claim that most of us are unmoved by statistical evidence unless it is accompanied by a good causal story. Of particular note are studies by Nancy Pennington and Reid Hastie which demonstrate a preference for stories over statistics in the decisions of juries.

Sherlock Holmes once warned Watson of the danger of the convincing explanation: “It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.” Damn good advice from one of the greatest story-tellers of all.

Sunday, October 24, 2010

Why We Procrastinate (by guest blogger G. Randolph Mayes)

James Surowiecki recently wrote a nice full-length review of The Thief of Time for The New Yorker magazine. It sounds like a fantasy novel by Terry Pratchett, but is actually a collection of mostly pointy-headed philosophical essays about procrastination edited by Chrisoula Andreou and Mark White. Procrastination is a great topic if you are interested in the nature of irrationality, as philosophers and psychologists tend to think of procrastination as something that is irrational by definition. For example, in the lead article of this volume George Ainslie defines procrastination as “generally meaning to put off something burdensome or unpleasant, and to do so in a way that leaves you worse off.”

I recently published an article about cruelty in which I argued that it is a mistake for scientists to characterize the phenomenon of cruelty in a way that respects our basic sense that it is inherently evil. I find myself wondering whether the same sort of point might be raised against the scientific study of procrastination.

Most researchers appear to accept Ainslie’s characterization of procrastination as an instance of "hyperbolic discounting," which is an exaggeration of an otherwise defensible tendency to value temporally proximate goods over more distant ones. Everyone understands that there are situations (like a time-sensitive debt or investment opportunity) when it is rational to prefer to receive 100 dollars today rather than 110 dollars next week. But Ainslie and many others have demonstrated that we typically exhibit this preference even when it makes far more sense to wait for the larger sum.
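To see how hyperbolic discounting produces this pattern, it helps to run the numbers with the standard hyperbolic form V = A / (1 + kD), where A is the amount, D the delay, and k a discount rate. The amounts, delays, and the value k = 0.02 per day below are illustrative assumptions, not figures from Ainslie; the point is only the reversal of preference as the choice draws near.

# Hyperbolic discounting: V = A / (1 + k * D), with D the delay in days.
# Amounts, delays, and the rate k are illustrative assumptions only.

def hyperbolic_value(amount, delay_days, k=0.02):
    return amount / (1 + k * delay_days)

# The choice: $100 at time t, or $110 seven days after t.
for days_until_t in (30, 0):
    smaller_sooner = hyperbolic_value(100, days_until_t)
    larger_later = hyperbolic_value(110, days_until_t + 7)
    choice = "$100 sooner" if smaller_sooner > larger_later else "$110 later"
    print(f"{days_until_t:2d} days out: $100 feels worth {smaller_sooner:.1f}, "
          f"$110 feels worth {larger_later:.1f} -> choose {choice}")

# Thirty days out the larger, later reward looks better (62.5 vs. 63.2);
# at the moment of choice the smaller, sooner one wins (100.0 vs. 96.5).

On these made-up numbers, a plan made a month in advance favors waiting for the larger reward, while the same chooser defects to the immediate one when the moment arrives, which is exactly the structure Ainslie sees in procrastination.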

Hyperbolic discounting subsumes procrastination in a straightforward way. According to Ainslie, whenever we procrastinate we are choosing a more immediately gratifying activity over one whose value can only be appreciated in the long run. When making plans a week in advance, few would schedule the evening before a big exam for catching up on neglected correspondence or deleting old computer files. But when the decision is left until then, that’s exactly the sort of thing we find ourselves doing.

One interesting result of defining procrastination as Ainslie does is that whether we are procrastinating at any given time depends on what happens later, not how we feel about it now. For example, reading this blog is something you might describe as procrastinating on cleaning your filthy apartment. But, according to Ainslie’s definition, you are only procrastinating now if you subsequently fail to get the apartment clean before your guests arrive for dinner (because otherwise you aren’t “worse off”). There is nothing absurd about this, and science certainly has no obligation to be faithful to ordinary usage. But this disparity does highlight an interesting possibility, namely that what Ainslie and his colleagues call procrastination is really just the downside of a generally rational tendency to avoid beginning onerous tasks much before they really, really need to be done.

Why would this be rational? Well, you could start cleaning your apartment right now. But wait! There is a good chance that if you do, you will become the victim of Parkinson’s Law: Work expands so as to fill the time available for its completion. Putting it off until the last minute can be beneficial because you work much more energetically and efficiently when you are under the gun. (And if you don’t, then you will learn to, which is an important life skill.) Of course, this strategy occasionally backfires. We sometimes underestimate the time we need to meet our goals; unanticipated events, like a computer crashing or guests arriving early, can torpedo the deadlining strategy. But these exceptions, which are often uncritically taken as proof of the irrationality of procrastination, may simply be a reasonable price to pay for the value it delivers when it works.

Most of us think of procrastination as a bad thing and we tell researchers that we do it too much. But should this kind of self-reporting be trusted? Do we just know intuitively that we would be better off if we generally procrastinated less? Scientists can define procrastination as harmful if they want to, but they also might want to reconsider the wisdom of a definition that makes beneficial procrastination a logical absurdity. In doing so, they may discover that the powerful notion of hyperbolic discounting has made them too quick to accept a universal human tendency as a fundamentally irrational one.

Friday, October 15, 2010

The Illusion of Understanding (by guest blogger G. Randolph Mayes)

Every teacher knows that magic moment when the light snaps on in a student’s head and bitter confusion gives way to the warm glow of understanding. We live for those moments. Subsequent moments can be slightly less magical, however. There is, for example, the moment we begin to grade said student’s exam, and realize that we’ve been had yet again by the faux glow of illusory understanding.

The reliability and significance of our sense of understanding (SOU) have been the subject of research in recent years. I indicated in the previous post that philosophers of science generally agree that there is a tight connection between explanation and understanding. Specifically, they agree that the basic function of explanation is to increase our understanding of the world. But this agreement is predicated on an objective sense of the term ‘understanding,’ typically referring to a more unified belief system or a more complete grasp of causal relations. There is no similar consensus concerning how our subjective SOU relates to ‘real’ understanding, or indeed whether it is of any philosophical interest at all.

One leading thinker who has argued for the relevance of the SOU to the theory of explanation is the developmental psychologist Alison Gopnik. Gopnik is a leading proponent of the view that the developing brains of children employ learning mechanisms that closely mirror the process of scientific inquiry. As the owner of this blog has aptly put it, Gopnik believes that children are invested with ‘a drive to explain,’ a drive she compares to the drive for sex.

For Gopnik, the SOU is functionally similar to an orgasm. It is a rewarding experience that occurs in conjunction with an activity that tends to enhance our reproductive fitness. So just as a full theory of reproductive behavior will show how orgasm contributes to our reproductive success, a full theory of explanatory cognition will show how the SOU contributes to our explanatory success.

Part of the reason Gopnik compares the SOU to the experience of orgasm is that they can both be detached from their respective biological purposes. Genital and theoretical masturbation are both pleasurable yet non-(re)productive human activities. Gopnik thinks that just as no one would consider the high proportion of non-reproductive orgasms as evidence that orgasm is unrelated to reproduction, no one should take a high frequency of illusory SOU’s as evidence that the SOU is unrelated to real understanding.

But the analogy between orgasm and the SOU has its limits. The SOU cannot really be detached from acts of theorizing as easily as orgasm can be detached from acts of reproduction. One might achieve a free-floating SOU as a result of meditation, mortification or drug use, but this will be relatively unusual in comparison to the ease and frequency with which orgasms can be achieved without reproductive sex. For the most part SOU’s come about as a result of unprotected intercourse with the world. If illusory SOU’s are common, and this cannot be explained by reference to their detachability, it is reasonable to remain skeptical about the importance of the SOU in producing real understanding.

One such skeptic is the philosopher of science J. D. Trout. Trout does not deny that our SOU may sometimes result from real understanding, but he thinks it is the exception rather than the rule. Moreover, Trout thinks that illusory SOU’s are typically the result of two well-established cognitive biases: overconfidence and hindsight. (Overconfidence bias is the tendency to overestimate the likelihood that our judgments are correct. Hindsight bias is the tendency to believe that past events were more predictable than they really were.) Far from being a reliable indicator of real understanding, the SOU, Trout holds, mostly reinforces a positive illusion we have about our own explanatory abilities. (This view also finds support in the empirical research of Frank Keil, who has documented an ‘illusion of explanatory depth.’)

Is it true that illusory SOU’s are more common than veridical ones? I’m not sure about this. I’m inclined to think most of our daily explanatory episodes occur below the radar of philosophers of science. Consider explanations that occur simply as the result of the limits of memory. My dog is whining and it occurs to me that I haven’t fed her. The mail hasn’t been delivered, and then I recall it is a holiday. I see a ticket on my windshield and I remember that I didn’t feed the meter. I have a dull afternoon headache and realize I’ve only had three cups of coffee. These kinds of explanatory episodes occur multiple times every day. The resulting SOU’s are powerful and only rarely misleading.

But when choosing between competing hypotheses or evaluating explanations supplied by others, Trout is surely correct that the intensity of an SOU has little to do with our degree of understanding. We experience very powerful SOU’s from just-so stories and folk explanations that have virtually no predictive value. Often a strong SOU arises simply because the explanation allays our fears or resolves cognitive dissonance in an emotionally satisfying way.

In the end, I’m not sure that Trout and Gopnik have a serious disagreement. For one thing, Gopnik’s focus is on the value of the SOU for the developing mind of a child. It may be that the unreflective minds of infants are uncorrupted by overconfidence, hindsight, or the need to believe. It may also be that a pre-linguistic child’s SOU is particularly well-calibrated for the kind of learning it is required to do.

Trout does not argue that the SOU is completely unreliable, and Gopnik only needs it to be reliable enough to have conferred a selective advantage on those whose beliefs are reinforced by it. There are different ways that this can happen. As Trout himself points out, the SOU may contribute to fitness simply by reinforcing the drive to explain. But even if our SOU is only a little better than chance at selecting the best available hypothesis at any given time, it could still be tremendously valuable as part of an iterated process that remains sensitive to negative feedback. As I indicated in the previous post, our mistake may be to think of the SOU as something that justifies us in believing our hypotheses. It may simply help us to generate or select hypotheses that are slightly more likely to be true than their competitors.
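The last point, that a selector only slightly better than chance can still be tremendously valuable when iterated with negative feedback, can be made vivid with a toy simulation. Everything below is a made-up model rather than anything from Trout or Gopnik: an agent repeatedly tests one of several live hypotheses, falsified picks are eliminated, and a small bias toward the true hypothesis is compared with pure chance.

# Toy model: how much does a small edge in hypothesis selection buy you
# once falsified hypotheses are eliminated by negative feedback?
# The setup and numbers are illustrative assumptions only.
import random

def rounds_to_find_truth(n_hypotheses, edge, rng):
    """Test hypotheses until the true one is chosen; false picks are discarded."""
    live = list(range(n_hypotheses))
    true_h = rng.choice(live)
    rounds = 0
    while True:
        rounds += 1
        p_true = min(1.0, 1 / len(live) + edge)  # the SOU gives a small nudge toward the truth
        if rng.random() < p_true:
            pick = true_h
        else:
            pick = rng.choice([h for h in live if h != true_h])
        if pick == true_h:
            return rounds
        live.remove(pick)  # negative feedback: the falsified hypothesis is gone for good

rng = random.Random(0)
trials = 10_000
for edge in (0.0, 0.05):  # pure chance vs. a five-point edge per round
    avg = sum(rounds_to_find_truth(20, edge, rng) for _ in range(trials)) / trials
    print(f"edge {edge:.2f}: about {avg:.1f} rounds to land on the true hypothesis")

With twenty candidates, pure chance needs about ten and a half rounds on average to land on the true hypothesis, while even the small per-round edge needs noticeably fewer; the advantage compounds precisely because the process is iterated and sensitive to negative feedback.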

Wednesday, October 06, 2010

Is Explanation the Foundation? (by guest blogger G. Randolph Mayes)

One of my main interests is explanation. I think there may be no other concept that philosophers lean on so heavily, yet understand so poorly. Here are some examples of how critical the concept of explanation has become to contemporary philosophical debates.

1. A popular defense of scientific realism is that the existence of theoretical entities provides the best explanation of the success of the scientific enterprise.

2. A popular view concerning the nature of inductive rationality is that it rests on an inference to the best explanation.

3. A popular argument for the existence of other minds is that other minds provide the best explanation of the behavior of other bodies.

4. A popular argument for the existence of God is that a divine intelligence is the best explanation of the observed order in the universe.

This is a short list. The concept of explanation has been invoked in similar ways to analyze the nature of knowledge, theories, reduction, belief revision, and abstract entities. Interestingly, few of the very smart people who defend these views tell us what explanation is. The reason is simple: we don’t really know. The dirty secret is that explanation is just no better understood than any of the things that explanation is invoked to explain. In fact, it is actually worse than that. If you spend some time studying the many different theories of explanation that have been developed during the last 60 years or so, you’ll find that most of them give little explicit support to these arguments.

The reason for this is worth knowing. Most philosophical theories of explanation have been developed in an attempt to identify the essential features of a good scientific explanation. The good-making features of explanation were generally agreed to be those that would account for how explanation produces (and expresses) scientific understanding. There are many different views about this, but an assumption common to most of them is that a good scientific explanation must be based on true theories and observations. That sounds pretty reasonable, but here’s the rub: If truth is a necessary condition of explanatory goodness, then it makes no sense at all to claim that a theory’s explanatory goodness is our basis for thinking it is true.

All of the arguments noted above do just this, invoking a principle commonly known as “inference to the best explanation” (IBE, aka ‘abduction’). This idea, first articulated by Charles Peirce, has been the hope of philosophy ever since W.V.O. Quine pounded the last nail into the coffin of classical empiricism. This latter tradition had sought in vain to demonstrate that inductive rationality could ultimately be reduced to logic. For many, IBE is a principle that, while not purely logical, might serve as a new ‘naturalized’ foundation of inductive rationality.

Bas van Fraassen, the great neo-positivist, has blown the whistle on IBE most loudly, arguing that it is actually irrational. One of his criticisms is quite simple: It is literally impossible to infer the best explanation; all we can infer is the best explanation we have come up with so far. It may just be the best of a bad lot.

One way to understand the disconnect between traditional theories of explanation and IBE is to note that there are two fundamentally different ways of thinking about explanation. In one, basically transactional sense, explanations are the sorts of things we seek from pre-existing reserves of expert knowledge. When we ask scientists why the night sky is dark or why it feels good to scratch an itch, we typically accept as true whatever empirical claims they make in answering our question. Our sense of the quality of the explanation is limited to how well we think this information has answered the question we’ve posed. This, I think, is the model implicit in most traditional theories of explanation. The aim is to show in what sense, beyond the mere truth of the claims, science can be said to provide the best answers.

In my view, IBE has more to do with a second sense of explanation, belonging to the context of discovery rather than communication of expert knowledge. In this sense, explaining is a creative process of hypothesis formation in response to novel or otherwise surprising information. It can occur within a single individual, or within a group, but in either case it occurs because of the absence of authoritative answers. It is in this sense of the term that it can make sense to adopt a claim on the basis of its explanatory power.

Interestingly, much of the work done on transactional accounts of explanation is highly relevant to the discovery sense of the term. Many of the salient features of good explanations are the same in both, notably: increased predictive power, simplicity, and consilience. (This point is made especially clearly in the work of philosophically trained cognitive psychologists like Tania Lombrozo.) What is not at all clear, however, is that any of the IBE arguments noted above will have the intended impact when the relevant sense of explanation belongs more to what Reichenbach called “the context of discovery” than to the “context of justification.”

Thursday, September 30, 2010

Explaining Irrationality (by guest blogger G. Randolph Mayes)

In one of the last papers he wrote before dying almost exactly one year ago, John Pollock posed what he called “the puzzle of irrationality”:

Philosophers seek rules for avoiding irrationality, but they rarely stop to ask a more fundamental question ... [Assuming] rationality is desirable, why is irrationality possible? If we have built-in rules for how to cognize, why aren’t we built to always cognize rationally?

Consider just one example, taken from Philip Johnson-Laird’s recent book How We Reason: Paolo went to get the car, a task that should take about five minutes, yet ten minutes have passed and Paolo has not returned. What is more likely to have happened?
1. Paolo had to drive out of town.

2. Paolo ran into a system of one-way streets and had to drive out of town.

The typical reader of this blog probably knows that the answer is 1. After all (we reason), 2 can’t be more likely, since 1 is true whenever 2 is. But I’ll bet you felt the tug of 2 and may still feel it. (This human tendency to commit the ‘conjunction fallacy’ was famously documented by the Israeli psychologists Daniel Kahneman and Amos Tversky.)
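The underlying rule is just the conjunction rule of probability: for any propositions A and B, P(A and B) can never exceed P(A), no matter how much more plausible the added detail makes the story feel. A tiny numerical sketch, with probabilities invented purely for illustration, makes the point concrete.

# The conjunction rule: P(A and B) can never exceed P(A).
# The probabilities below are invented purely for illustration.
p_out_of_town = 0.25       # P(Paolo had to drive out of town)
p_one_way_given_out = 0.5  # P(one-way streets were the reason, given he drove out of town)

p_conjunction = p_out_of_town * p_one_way_given_out  # P(out of town AND one-way streets)
print(p_out_of_town, p_conjunction)                  # 0.25 vs 0.125: the richer story is less probable

assert p_conjunction <= p_out_of_town                # holds whatever the two numbers are

Adding the vivid detail about the one-way streets can only shrink the probability, never raise it, which is exactly why option 2 cannot be the better bet.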

So we feel the pull of wrong answers, yet are (sometimes) capable of reasoning toward the correct ones.

Pollock wanted to know why we are built this way. Given that we can use the rules that lead us to the correct answers, why didn’t evolution just design us to do so all the time? Part of his answer, well supported by the last 50 years of psychological research, is that most of our beliefs and decisions are the result of ‘quick and inflexible’ (Q&I) inference modules, rather than explicit reasoning. Quickness is an obvious fitness-conferring property, but the inflexibility of Q&I modules means that they are prone to errors as well. (They will, for example, make you overgeneralize, avoiding all spiders, snakes, and fungi rather than just the dangerous ones.)

Interestingly, though, Pollock does not think human irrationality is simply a matter of the error-proneness of our Q&I modules. In fact, he would not see a cognitive system composed only of Q&I modules as capable of irrationality at all. For Pollock, to be irrational, an agent must be capable of both monitoring the outputs of her Q&I modules and overriding them on the basis of explicit reasoning (just as you may have done above). Irrationality, then, turns out to be any failure to override these outputs when we have the time and information needed to do so. Why we are built to often fail at this task is not entirely clear. Pollock speculates that it is a design flaw resulting from the fact that our Q&I modules are phylogenetically older than our reasoning mechanisms.
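Pollock's picture invites a simple control-flow sketch: a fast Q&I module supplies a default answer, and explicit reasoning overrides it only when time and information permit. The code below is my own schematic rendering of that idea, not Pollock's formalism, and the problem representation is entirely invented.

# A schematic, invented rendering of the Pollock-style architecture: quick and
# inflexible (Q&I) modules produce default answers, and explicit reasoning
# overrides them only when there is time and information enough to do so.

def q_and_i_module(problem):
    # Fast, inflexible heuristic: e.g. "the more detailed story feels likelier."
    return problem["intuitive_answer"]

def explicit_reasoning(problem):
    # Slow, flexible reasoning: e.g. applying the conjunction rule.
    return problem["reasoned_answer"]

def cognize(problem, enough_time, enough_info):
    default = q_and_i_module(problem)  # the Q&I output is always produced and monitored
    if enough_time and enough_info:
        return explicit_reasoning(problem)  # the override Pollock describes
    return default

paolo = {"intuitive_answer": 2, "reasoned_answer": 1}
print(cognize(paolo, enough_time=True, enough_info=True))   # 1: the override succeeds
print(cognize(paolo, enough_time=False, enough_info=True))  # 2: the Q&I output stands

# On this account, irrationality is not the Q&I error itself but the failure
# to override it when the time and information for an override were available.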

I think on the surface this is actually a very intuitive account of irrationality, so much so that it is easy to miss the deeper implications of what Pollock has proposed here. Most people think of rationality as a very special human capacity, the ‘normativity’ of which may elude scientific understanding altogether. But for Pollock, rationality is something that every cognitive system has simply by virtue of being driven by a set of rules. Human rationality is certainly interesting in that it is driven by a system of Q&I modules that can be defeated by explicit reasoning. What really makes us different, though, is not that we are rational, but that we sometimes fail to be.