Friday, March 23, 2007

"Reliability" in Epistemology

There are two ways of being unreliable. Something, or someone, might be unreliable because it often goes wrong or yields the wrong result, or it might be unreliable because it fails to do anything or yield any result at all. A secretary is unreliable in one way if he fouls up the job, unreliable in another if he simply doesn't do it. A program for delivering stock prices is unreliable in one way if it tends to misquote, unreliable in another if it crashes. Either way, they can't be depended on to do what they ought.

Contemporary epistemologists tend to classify only the first sort of failure as a failure in reliability. Here's Alvin Goldman, probably the world's leading "reliabilist":

An object (a process, method, system, or what have you) is reliable if and only if (1) it is a sort of thing that tends to produce beliefs, and (2) the proportion of true beliefs among the beliefs it produces meets some threshold, or criterion, value (1986, p. 26).

We can easily liberalize this definition to accommodate the stock quote program: The stock quote program is "reliable" on this definition if most of its quotes are right, no matter how much it crashes or how rarely it successfully delivers a quote when asked.
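To make the liberalized definition concrete, here is a toy sketch (my own illustration, not from Goldman; the function and outcome labels are hypothetical) of how Goldman-style reliability counts only the accuracy of delivered results, ignoring how often the process crashes or fails to answer:

```python
# Toy illustration: Goldman-style "reliability" is judged only over
# the results a process actually delivers; crashes don't count against it.

def goldman_reliable(attempts, threshold=0.9):
    """attempts: list of outcomes, each 'correct', 'wrong', or 'crash'.

    Returns True if the proportion of correct results among the
    delivered (non-crash) results meets the threshold."""
    delivered = [a for a in attempts if a != 'crash']
    if not delivered:
        return False  # produces no results at all
    accuracy = sum(a == 'correct' for a in delivered) / len(delivered)
    return accuracy >= threshold

# A glitchy quote program: crashes 7 times out of 10, but every quote
# it does deliver is right -- "reliable" on the liberalized definition.
glitchy = ['crash'] * 7 + ['correct'] * 3
print(goldman_reliable(glitchy))   # True

# A program that always answers but often misquotes -- unreliable.
chatty = ['correct'] * 6 + ['wrong'] * 4
print(goldman_reliable(chatty))    # False
```

On the ordinary notion of reliability, of course, the first program is the *less* dependable of the two ways described at the top of the post, which is exactly the gap the definition opens up.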

This peculiarity serves a purpose: For reliabilists, knowledge and justification require (something like) reliability -- and what matters in knowing or being justified, it seems, is that you're not likely to err. Regardless of how glitchy the stock quote program is, if whenever it does happen to give a quote it gives the right quote, you can have knowledge and justification from it (setting aside some complexities).

Yet I wonder if epistemologists haven't lost something valuable in giving up on the ordinary notion of reliability. In cognition -- for example in introspection -- the difference between failing to reach a judgment about whether you have (e.g.) very detailed current imagery or not and reaching the wrong judgment about that is sometimes vague and cognitively minor. Either way, introspection (like the secretary or stock quote program) has failed to deliver what one might reasonably hope it should. There's often no firm line between guesses, conjectures, impressions, and definite opinions, and whether one expresses oneself hesitantly, unhesitantly, or not at all -- to oneself or aloud -- may depend more on context and temperament than anything else. These different ways of failing must of course be distinguished -- yet drawing too sharp a distinction between them, and giving them vastly different roles in our epistemology, is artificial and misses something important.

So: When I say introspection is unreliable, I mean that in the broad, ordinary sense of "unreliable". It is no objection to my pessimism, but rather supports it, if the reader or general population fails to reach introspective judgments about their experience -- as long as it's a case where it seems like introspection should be able to deliver results, like a basic and pervasive aspect of currently ongoing conscious experience patiently considered.


Clark Goble said...

That's a really interesting take on one problem of reliabilism that I'd never considered. I wonder, though, in the context of epistemological reliabilism, what a "system crash" would consist of. After all, running an old, buggy version of Windows ME and Excel with tons of cruft might be unreliable, yet I can trust the results Excel gives me.

So what is the mental equivalent of this situation? Epilepsy? And does it really have any epistemological ramifications?

It seems like you're framing this in terms of introspection and suggesting that the failure isn't a failure in process (i.e., the process only works 70% of the time) but rather a failure in the sense that the equipment isn't working. But while this may be a critique of Cartesian internalists, I'm not sure it really gets at what is both right and wrong in reliabilism.

(I'm sympathetic to the general approach of reliabilism -- and especially how this form captures our need to have trust in our judgments -- but reliabilism overall also misses what is, to my mind, essential in the question about knowledge: providing justifications.)

Jonathan Ichikawa said...

Hi Eric, yes, what you say sounds basically right. I wonder whether we can make sense of it under the traditional understanding, though. The Goldman criteria do include "tends to produce belief"; this will admit of degrees, and maybe it's plausible to say that if it doesn't produce belief often enough, it fails to be reliable.

Eric Schwitzgebel said...

Thanks for the interesting comments, Clark and Jonathan!

Clark: Epilepsy would be a general crash, but I think individual systems can crash or fail without everything crashing -- as when someone attempts to remember what they had for dinner two hours ago and simply doesn't come up with anything. That's a failure of the reliability of memory (as a failure to remember what you had for dinner Aug 16 is not) because it's something memory *should* be able to deliver. This kind of crash is common in introspection, I think, if one is sufficiently skeptical about it to let go of our general tendency to irresponsible glibness in matters introspective!

I don't pretend that this is a, or the, fundamental flaw in reliabilism (if there is a fundamental flaw) -- but I do think it's at least an infelicity.

(I should also mention Goldman's concept of "power" -- the ability of a system to generate beliefs. He can say at least some obvious things, in his vocabulary, about tradeoffs between power and reliability....)

Jonathan -- Yes, maybe Goldman could go that way, though my impression is that he wouldn't call that a failure of reliability, though he might call it a matter of having poor power. (See the parenthetic remark above.)

Clark Goble said...

Eric, the problem with memory is that we're expecting memory to be something it was never intended to be -- an absolute recording in some sense. Whereas it is at best rough notes out of which we then have to interpret a whole, with the notes being rewritten both when we re-remember and probably when we dream.

i.e. I don't think the memory example is a good one since it is less an issue of reliabilism than simply making a demand that memory be something it's not. It's akin to saying a car is unreliable because it can't cross the ocean.

Now my short-term memory is very unreliable, whereas my long-term memory is surprisingly reliable. But I don't see that as a "system crash" kind of fault, but merely as a matter of the applicability of a function to a given task. In the way that perhaps a kitchen towel isn't reliable as a paint brush but can be used as one.

The problem is our expectations of memory and the perhaps unrealistic telos we put it to. (And I'd argue most of the unrealistic expectations were ushered in by Descartes)

Eric Schwitzgebel said...

I agree with you, Clark, that we often expect too much of memory. But I don't think it's unreasonable to expect *something* of it. Then, on my view, we get unreliability-type-1 if it yields nothing instead of what's properly expected. We get unreliability-type-2 (Goldman-type) if it yields the wrong answer (whether it should properly be expected to yield an answer or not).

I was thinking that what-I-had-for-dinner-two-hours-ago would be in the category of what can legitimately be expected of memory without holding it to too high a standard; but I'm not attached to that particular example.

The Irreverent Seraph said...

There is a relativist quality to memory, just as there is a relativist quality to the knowledge base of humanity. Custom may be the great guide to life, but at some point we need to stand up and acknowledge the variances among individuals. Memory is a form of reasoning, one based on previous experience. When we turn on a light switch, we reason that the bulb will turn on because our memory tells us that is what happened last time we hit the switch. We trust our memory implicitly until we come to the conscious realization that we cannot.