Friday, October 31, 2008

Can Inferences Sometimes Lead to Knowledge Even if Their Premises Are False?

It seems a little strange to think so, and the philosophers I've asked about this in the last few days tend to say no. But here are three possible examples:
Inferential Ascent:
Consider the following rule: If P is true, then conclude that I believe that P is true. Of course, it's not generally true that for all P I believe that P. (Sadly, I'm not omniscient. Or happily?) However, if I apply this rule in my thinking, I will almost always be right, since employing the rule will require judging, in fact, that P is true. And if I judge that P is true, normally it is also true that I believe that P is true. So if by employing the rule I generate the belief or judgment that I believe that P is true, that belief or judgment is generally correct. The rule is, in a way, self-fulfilling. (Gareth Evans, Fred Dretske, Richard Moran, and Alex Byrne have all advocated rules something like this.)
And of course the conclusion "I believe that P is true" (the conclusion I now believe, having applied the rule) will itself generally be true even if P is false. I'm inclined to think it's usually knowledge.
One question is: Is this really inference? Well, it looks a bit like inference. It seems to play a psychological role like that of inference. What else would it be?
Instrumentalism in Science:
It's a common view in science and in philosophy of science that some scientific theories may not be, strictly speaking, true (or even approximately true) and yet can be used as "calculating devices" or the like to arrive at truths. For example, on Bas van Fraassen's view, we shouldn't believe that unobservably small entities like atoms exist, and yet we can use the equations and models of atomic physics to predict events that happen among the things we can observe (such as tracks in a cloud chamber or clicks in a Geiger counter). Let's further suppose that atoms do not in fact exist. Would this be a case of scientific inference in which false premises (about atoms) generate conclusions (about Geiger counters) that count as knowledge?
Perhaps the relevant premise is not "atoms behave [suchly]" but "a model in which atoms are posited as fictions that behave [suchly] generates true claims about observables". But this seems to me needlessly complex, and perhaps not accurate to psychological reality for all the scientists whom I'd be inclined to say derive knowledge about observables using atomic models, even if some of the crucial statements in those models are false.
Tautologous Conclusions:
In standard two-valued logic, I can derive "P or Q" from P. What if Q, in some particular case, is just "not P"? Perhaps, then, I can derive (and know) "P or not P" from P, even if P is false?
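For the record, this one-step derivation can be checked mechanically. Here is a minimal sketch in Lean 4 (an illustrative formalization; nothing in the argument hangs on the choice of proof assistant):

```lean
-- From a premise h : P, conclude P ∨ ¬P by disjunction introduction.
-- The derivation consumes only the premise as given; it goes through
-- the same way whether or not P is actually true.
example (P : Prop) (h : P) : P ∨ ¬P := Or.inl h
```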
What's the problem here? Why do philosophers seem to be reluctant to say we can sometimes gain knowledge through inference from false premises?
Posted by Eric Schwitzgebel at 9:55 PM
Labels: epistemology
19 comments:
Warfield has a good paper on this arguing that you can get knowledge from falsehoods.
He's got several good examples in that paper.
http://www3.interscience.wiley.com/journal/118716081/abstract?CRETRY=1&SRETRY=0
Here are some reasons to say that these are not cases of knowledge from false premises.
(1) I am inclined to treat present-tense belief self-ascription as transparent; i.e., saying 'I believe that P' is just to report 'P.'
(2) The instrumentalist has an implicit inference from the premises that the theory is empirically adequate and that the theory predicts observable O, to the conclusion O. But those premises are supposed to be literally true.
(3) If I don't recognize that the conclusion is a tautology, then I am tempted to say that I don't really know it; if I noticed that the premise was false I would withdraw belief. On the other hand: If I recognize that it is a tautology, then the false premise isn't actually doing any work.
"P.D." seems to be onto a few things here... so here's my take:
Re (1): It would seem to me that inferential ascent is a case of knowledge, but only in a fairly trivial sense. One thing that springs to mind here is something I read in Robert Audi's Architecture of Reason regarding psychological economy. Having the "belief that P" may provide justification (perhaps indefeasible justification) for a claim that "I know that I believe that P" - but we don't seem to draw this conclusion automatically, or else you'd have an infinite 'stack' of meta-level beliefs. In the basic case of uttering "I believe that P" it seems logical to assume that one is just reporting "P" - belief self-ascription as self-referential disquotation, in a sense (not sure exactly how to express this, but the Tarski parallel is obvious).
Re (2): I'm not entirely sure that this counts as knowledge - but I'm dubious as to whether inference to the best explanation counts toward knowledge in general, without some form of empirical grounding. Plus, we do have indirect empirical evidence for atoms and subatomic particles based on energy-output readings. Just because some things are unobservable by unaided humans doesn't render them necessarily unobservable by other means.
Re (3): I think this is another case of psychological economy. We don't necessarily draw the conclusion, but it is permissible deductive knowledge, as an instance of a logical truth.
I didn't read the comments above closely, so someone might have said this. It seems obvious that we can get from false premises to knowledge. Take any proposition that you know, P0. Take any valid inference P1, P2 |- C such that P1 and P2 are false. P1 and P2 also entail (C v P0), which you know. Is that cheating?
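The entailment behind that trick is easy to verify; a minimal sketch in Lean 4 (illustrative only - note that the added disjunct P₀ never needs a proof of its own):

```lean
-- Given any valid inference from P₁ and P₂ to C, the same premises
-- also yield C ∨ P₀ for an arbitrary P₀, by disjunction introduction.
example (P₁ P₂ C P₀ : Prop) (h : P₁ → P₂ → C)
    (h₁ : P₁) (h₂ : P₂) : C ∨ P₀ :=
  Or.inl (h h₁ h₂)
```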
I think yes: sometimes you need a temporary trick to get across a chasm ... even if, in retrospect, you were walking on air.
There might be a few others. Let P be false but not impossible. Every false but possible proposition P has some positive probability, n. So, infer from the false proposition P the true proposition that the probability of ~P is (1 - n). You can also infer from the false proposition P that ~P; so if you know that P is false, you know inferentially that ~P.
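In symbols, this is just the complement rule:

$\Pr(\lnot P) = 1 - \Pr(P) = 1 - n.$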
A clearer science example is the ether. 19th-century physics developed with the assumption of the ether and made huge progress (Maxwell). In the 20th century, physicists dispensed with the ether but kept most of the science that had developed under that assumption.
Well, knowledge that comes from it may not be in the same direction, or serve the same intent, as the inferences themselves. Awareness and discovery of the wrong assumption can lead to knowledge of how to avoid or prevent similar wrong assumptions. We can discover that the human mind is prone to pitfalls in some conditions more than others, is more easily tricked in some situations, and is less discriminating in others, because of biases.
Wow, thanks for all the great comments folks!
Andrew: Thanks for the tip on the Warfield essay. I agree with him, but his examples cover a much narrower range than the proposed examples here (if these examples work).
PD: (1.) That's a pretty strong view of "transparency". Would it render false a second-person ascription "He said he thinks that P" or a past-tense first-person ascription "I said I believed that P"? That seems unintuitive to me. (2.) Here I wonder if it's useful to distinguish psychological from epistemic bases. At least psychologically speaking, I think what you say here can't be right, since not all scientists are instrumentalists, even if their conclusions are justified by an instrumentalist interpretation of their practices. Epistemically, though, maybe their conclusions are still *grounded* in this literally true claim? I think that's an available move, but I'm not sure quite what to make of it. (3.) I think I'd better take the second horn here, fearing Gettier examples on the first horn. I wonder what it means, in this context, though, that the false premise is doing no "work". How about the more complicated case: I know Q. I falsely believe P. I infer: P; Q; P & Q; (P & Q) v ~P. Now we aren't introducing a tautology. We could get to the same conclusion by starting with Q and then introducing a tautology, but that's not how the inference I'm positing actually goes.
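For concreteness, both routes to that conclusion can be checked formally; a minimal sketch in Lean 4 (with ∧, ∨, ¬ standing in for &, v, ~):

```lean
-- Route 1: the inference as posited, running through the (false) premise P.
example (P Q : Prop) (hp : P) (hq : Q) : (P ∧ Q) ∨ ¬P :=
  Or.inl ⟨hp, hq⟩

-- Route 2: the same conclusion from Q alone, by introducing the
-- tautology P ∨ ¬P (classical excluded middle) and reasoning by cases.
example (P Q : Prop) (hq : Q) : (P ∧ Q) ∨ ¬P :=
  match Classical.em P with
  | Or.inl hp  => Or.inl ⟨hp, hq⟩
  | Or.inr hnp => Or.inr hnp
```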
Sean: On your (1) and (3), I agree that we don't *necessarily* draw those inferences. But suppose that in some particular case a person does. I'm unsure what your interpretation would be then. On your (2): If you don't buy instrumentalism for atoms, maybe we could use the ether or internal representations or something else. Would you generally deny that we can have knowledge via a scientific practice that has an instrumentalist grounding?
Mike: Thanks for the cute examples. They seem just about as much cheating (maybe a little less?) as my example (3). I'll take them, until convinced otherwise! On your second example, though, I see people saying that you're not inferring from the false proposition P but rather from the true proposition that P is false. Hm, what's the difference...?
Robert: Nice metaphor!
Anon 7:44: Thanks, that's a helpful example.
Gola: I certainly agree that our minds are fallible and biased and that we need to deal with that in thinking about thinking.
hi Eric,
Interesting post. In the comments, re: case 3, you say: I wonder what it means, in this context, though, that the false premise is doing no "work". How about the more complicated case: I know Q. I falsely believe P. I infer: P; Q; P & Q; (P & Q) v ~P. Now we aren't introducing a tautology. We could get to the same conclusion by starting with Q and then introducing a tautology, but that's not how the inference I'm positing actually goes.
Call this the complex case and the original case the easy case. In the original case, I too (like one of the previous commenters) have the intuition that the false premise is doing no work. Here's one way to flesh that out: if you tried to derive the conclusion in a natural deduction system, it wouldn't depend on that premise. (In fact, it wouldn't depend on any premise, since it's a tautology.)
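That dependence claim can be made vivid formally; a minimal sketch in Lean 4 (illustrative), deriving the tautology from no premises at all:

```lean
-- P ∨ ¬P is provable outright (classically), with no premises.
-- A derivation of it "from" the false premise P therefore never
-- really uses P: the premise can be discharged without loss.
example (P : Prop) : P ∨ ¬P := Classical.em P
```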
In the complex case, I do have the intuition that the false premise is doing some work. But here I am having some trouble agreeing that you know the conclusion. I wish I could explain more clearly why, but I have a suspicion that there's something wrong with your justification. Maybe this kind of inference is unreliable? (Not sure how to spell out why though.)
Eric:
Re (1) and (3) - If we happen to form the beliefs, then I'd say these count as cases of knowledge:
I'd definitely say this is the case in (1), because it's a meta-belief about an internal mental state (insert preferred Phil of Mind terminology here), so the referent is one's mental state, not the (potential/hypothetical/postulated?) facts corresponding to "P".
In the case of (3), the false premise does very little, if any, work. This would seem to be more of a simple instantiation/property ascription of the logical truth of any case Pv~P. Still knowledge, but not epistemically dependent on P for its status as knowledge. It might still be psychologically dependent, because thinking about P gave rise to the belief that Pv~P in this particular case. So, knowledge yes, but a trivial a priori case (or some suitable approximation, if one doesn't believe in strict a priori cases - I withhold judgment) that is not dependent on P for any type of justification.
Re (2) - this is the trickiest and most important case here. Roughly, I think the instrumental model specified can provide justification about causality related to the observables, but not about the atoms themselves. However, I also think that strict instrumentalism makes very little sense. In clear physical cases, I think the entities postulated are in principle observable (even if methods are indirect). I think most "hard scientists" would agree that they are describing something that does exist. The internal representations (or ether) case is trickier. I think I want to hold out here for some type of "groundedness" relation - an accurate and coherent instrumentalist model might provide 100% lucky guesses, but to count it as knowledge seems to imply a regularity theory regarding causation - you wouldn't have any way of knowing if, how, or why the model could break down. I think internal representations are likely to supervene on physical properties (say a complex neural network). Even if you got lucky and had an instrumentalist model based on internal representations for prior decisions, if something physical happened to shift the way representations were "processed" in the network, the model would stop working. At this point, without a way to explain that shift, do we think we really had "knowledge" of what was going on?
Phil of Mind wasn't a huge thing at my undergrad, and I'm not in a grad program yet (applying after a hiatus), so please let me know if I just said something monumentally dumb.
Amy: I agree there's something a little funny about the complex deduction case, but it's hard to put a finger on. My inclination is to think that at least it should make us think more about whether knowledge by inference really does require true premises.
Sean: On (3), I'll grant that it's trivial if you'll grant that it's knowledge by inference from a false premise! On (2): I don't think instrumentalism requires luck. Here's one kind of case where it might work. Suppose that we posited that behavior was caused by little men in our heads flipping levers. We know there aren't little men, of course, but maybe a theory that worked that way could do just a dandy job of getting it right in all sorts of behavior predictions. Presumably it would do so because there is something in our heads in an important way structurally isomorphic to what's going on in our stories about the little men. A well-confirmed little-men theory might allow us accurately to predict (and thus know) some outward behavior, despite the falsity of premises like "little man #821 flips lever 6 into the down position". That, at least, is the thought.
Hi Eric - (3) sounds like a good compromise.
As far as (2) is concerned - the little men theory is the type of "lucky coherence" I was trying to get at. It conveniently predicts all of the past cases of behavior, and may even predict most future cases. But it doesn't correspond to any real state of affairs; it's just (for lack of a better phrase) a useful explanatory myth. Because there's no correspondence, and thus no causal link between little men and behaviors, I think this type of case is prey to Humean skepticism in a way that cases of directly observed causation aren't.
I'm not 100% sure how to phrase why I think the Humean case applies to little men but not to actually observed causation - probably some quasi-Kantian line about how the world behaves according to a set of laws, but without actual observation of the components transmitting the law we can't know what the law really is, so our predictions based on unobserved "little men with levers" are just speculative myths.
To come back to a case in reality - Ptolemy certainly had plenty of justifying evidence for his geocentric model, and certainly had a justified belief in a geocentric scheme with epicycles, but given that we now have direct contradictory evidence supporting heliocentricity, do we really want to say he had "knowledge"? (Perhaps the same could be said for Copernicus - until we observed actual positions and gravitational forces, they were just competing theories.) If we had knowledge in the first place, why continue with scientific endeavor?
It would seem to be a consequence of your view that a Ptolemaic astronomer would not know that the solstice will be on December 21, despite the fact that the model has correctly predicted that many times in the past and correctly predicts it in the present case. Perhaps, also, you'll be committed to thinking that no one could have scientific knowledge of any sort with a model that errs about the fundamental constituents. Suppose it turns out there really are no quarks. Do people making accurate predictions in subatomic physics and building huge, successful devices in accord with their predictions still not know that [fill in here an accurate, reliably true prediction based on subatomic physics]? That's a more stringent criterion for knowledge than seems desirable to me.
Of course science can still proceed even if we call these cases knowledge, since there will be strains and tensions and inaccuracies and blank spots in the existing models; and eventually enough anomalies may generate a whole new set of fundamental models.
Hi Eric - Don't think we're going to agree on this one (of course, that's half the fun!).
I do want to be clear on one thing though - I do think that instrumentalism provides justification for predictions in the absence of countervailing evidence, and thus reason for taking actions in accordance with that evidence. I just think that calling such cases "knowledge" seems to be going a bit far. The whole strength of skeptical intuitions seems to be that there is always (or very nearly always) the possibility of countervailing evidence to defeat our knowledge claims.
Incidentally, I'm actually 100% OK with Ptolemy not knowing that the Winter Solstice is 12/21... but maybe my intuitions are just spoiled by skeptics!
Sean: Yeah, I think we're going to have to agree to disagree on that one!
I believe it can. I follow a path of thinking that is apparent in the nature of things as we perceive them on a daily basis. To use an adage, "as above, so below" - a gnostic term, but also a sort of common sense about consciousness. Areas and shapes in painting are often not directly painted or drawn; they are defined by the colors and shapes of other instances around them. Like looking into a tree: the gaps in the branches create no real image, but our minds can create a shape and say "hey, that gap (of nothing) looks like a rabbit." Granted, that's perception, but the negative element of space invites a new perception.
Eric, you should also look at Peter Klein's paper, "Useful Falsehoods," in Quentin Smith (ed.), *Epistemology: New Essays*. It is quite a bit longer than Warfield's paper.