In one of the last papers he wrote before dying almost exactly one year ago, John Pollock posed what he called “the puzzle of irrationality”:
Philosophers seek rules for avoiding irrationality, but they rarely stop to ask a more fundamental question ... [Assuming] rationality is desirable, why is irrationality possible? If we have built-in rules for how to cognize, why aren’t we built to always cognize rationally?

Consider just one example, taken from Philip Johnson-Laird’s recent book How We Reason: Paolo went to get the car, a task that should take about five minutes, yet 10 minutes have passed and Paolo has not returned. What is more likely to have happened?

1. Paolo had to drive out of town.

2. Paolo ran into a system of one-way streets and had to drive out of town.

The typical reader of this blog probably knows that the answer is 1. After all (we reason) 2 can’t be more likely, since 1 is true whenever 2 is. But I’ll bet you felt the tug of 2 and may still feel it. (This human tendency to commit the ‘conjunction fallacy’ was famously documented by the Israeli psychologists Daniel Kahneman and Amos Tversky.)
So we feel the pull of wrong answers, yet are (sometimes) capable of reasoning toward the correct ones.
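The logical point behind answer 1 - a conjunction can never be more probable than either of its conjuncts - can be checked mechanically. Here is a minimal Python sketch; the list of scenarios and their equal likelihoods are made-up numbers purely for illustration, not anything from Johnson-Laird’s example:

```python
# The conjunction rule: P(A and B) <= P(A), because every outcome
# where "A and B" holds is also an outcome where A alone holds.
# We enumerate a toy space of equally likely scenarios for Paolo.

scenarios = [
    # (drove_out_of_town, hit_one_way_streets)
    (True,  True),    # one-way streets forced him out of town
    (True,  False),   # out of town for some other reason
    (False, False),   # never left town (e.g. a long queue at the garage)
    (False, False),
]

def prob(event):
    """Fraction of equally likely scenarios in which the event holds."""
    return sum(1 for s in scenarios if event(s)) / len(scenarios)

p_out_of_town = prob(lambda s: s[0])           # statement 1
p_conjunction = prob(lambda s: s[0] and s[1])  # statement 2

print(p_out_of_town)   # 0.5
print(p_conjunction)   # 0.25

# Holds no matter what scenarios are listed above:
assert p_conjunction <= p_out_of_town
```

Whatever scenarios go in the list, statement 2 can pick up no probability that statement 1 lacks - which is just the “1 is true whenever 2 is” reasoning made explicit.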
Pollock wanted to know why we are built this way. Given that we can use the rules that lead us to the correct answers, why didn’t evolution just design us to do so all the time? Part of his answer - well supported by the last 50 years of psychological research - is that most of our beliefs and decisions are the result of ‘quick and inflexible’ (Q&I) inference modules, rather than explicit reasoning. Quickness is an obvious fitness-conferring property, but the inflexibility of Q&I modules means that they are prone to errors as well. (They will, for example, make you overgeneralize, avoiding all spiders, snakes, and fungi rather than just the dangerous ones.)
Interestingly, though, Pollock does not think human irrationality is simply a matter of the error-proneness of our Q&I modules. In fact, he would not see a cognitive system composed only of Q&I modules as capable of irrationality at all. For Pollock, to be irrational, an agent must be capable of both monitoring the outputs of her Q&I modules and overriding them on the basis of explicit reasoning (just as you may have done above). Irrationality, then, turns out to be any failure to override these outputs when we have the time and information needed to do so. Why we are built to often fail at this task is not entirely clear. Pollock speculates that it is a design flaw resulting from the fact that our Q&I modules are phylogenetically older than our reasoning mechanisms.
I think on the surface this is actually a very intuitive account of irrationality, so much so that it is easy to miss the deeper implications of what Pollock has proposed here. Most people think of rationality as a very special human capacity, the ‘normativity’ of which may elude scientific understanding altogether. But for Pollock, rationality is something that every cognitive system has simply by virtue of being driven by a set of rules. Human rationality is certainly interesting in that it is driven by a system of Q&I modules that can be defeated by explicit reasoning. What really makes us different, though, is not that we are rational, but that we sometimes fail to be.