Might there be excellent reasons to embrace radical skepticism, of which we are entirely unaware?
You know brain-in-a-vat skepticism -- the view that maybe last night while I was sleeping, alien superscientists removed my brain, envatted it, and are now stimulating it to create the false impression that I'm still living a normal life. I see no reason to regard that scenario as at all likely. Somewhat more likely, I argue -- not very likely, but I think reasonably drawing a wee smidgen of doubt -- are dream skepticism (might I now be asleep and dreaming?), simulation skepticism (might I be an artificial intelligence living in a small, simulated world?), and cosmological skepticism (might the cosmos in general, or my position in it, be radically different than I think, e.g., might I be a Boltzmann brain?).
"1% skepticism", as I define it, is the view that it's reasonable for me to assign about a 1% credence to the possibility that I am actually now enduring some radically skeptical scenario of this sort (and thus about a 99% credence in non-skeptical realism, the view that the world is more or less how I think it is).
Now, how do I arrive at this "about 1%" skeptical credence? Although the only skeptical possibilities to which I am inclined to assign non-trivial credence are the three just mentioned (dream, simulation, and cosmological), it also seems reasonable for me to reserve a bit of my credence space, a bit of room for doubt, for the possibility that there is some skeptical scenario that I haven't yet considered, or that I've considered but dismissed and should take more seriously than I do. I'll call this wildcard skepticism. It's a kind of meta-level doubt. It's a recognition of the possibility that I might be underappreciating the skeptical possibilities. This recognition, this wildcard skepticism, should slightly increase my credence that I am currently in a radically skeptical scenario.
You might object that I could equally well be over-estimating the skeptical possibilities, and that in recognition of that possibility, I should slightly decrease my credence that I am currently in a radically skeptical scenario; and thus the possibilities of over- and underestimation should cancel out. I do grant that I might as easily be overestimating as underestimating the skeptical possibilities. But over- and underestimation do not normally cancel out in the way this objection supposes. Near confidence ceilings (my 99% credence in non-skeptical realism), meta-level doubt should tend overall to shift one's credence down.
To see this, consider a cartoon case. Suppose I would ordinarily have a 99% credence that it won't rain tomorrow afternoon (hey, it's July in southern California), but I also know one further thing about my situation: There's a 50% chance that God has set things up so that from now on the weather will always be whatever I think is most likely, and there's a 50% chance that God has set things up so that whenever I have an opinion about the weather he'll flip a coin to make it only 50% likely that I'm right. In other words, there's a meta-level reason to think that my 99% credence might be an underestimation of the conformity of my opinions to reality or equally well might be an overestimation. What should my final credence in sunshine tomorrow be? Well, 50% times 100% (God will make it sunny for me) plus 50% times 50% (God will flip the coin) = 75%. In meta-level doubt, the down weighs more than the up.
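To put the arithmetic slightly more generally (a rough gloss on the cartoon, not a precise model; the symbols c and m are mine): let c be my first-order credence and m the probability that my opinion only tracks the truth at chance level (50%). Then my adjusted credence is

(1 - m) × c + m × 0.5,

which falls below c whenever c is above 0.5. The cartoon is the special case in which the reliable branch pushes the credence all the way up to 100%: 0.5 × 1.00 + 0.5 × 0.50 = 0.75. A milder, wildcard-sized dose of meta-doubt behaves the same way near the ceiling: with c = 0.99 and m = 0.02, the adjusted credence is 0.98 × 0.99 + 0.02 × 0.50 ≈ 0.98, still a shift downward. The possible "up" is capped by the 100% ceiling, while the possible "down" is not, so the net effect of meta-level doubt is downward.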
Consider the history of skepticism. In Descartes's day, a red-blooded skeptic might have reasonably invested a smidgen more doubt in the possibility that she was being deceived by a demon than it would be reasonable to invest in that possibility today, given the advance of a science that leaves little room for demons. On the other hand, a skeptic in that era could not even have conceived of the possibility that she might be an artificial intelligence inside a computer simulation. It would be epistemically unfair to such a skeptic to call her irrational for not considering specific scenarios beyond her society's conceptual ken, but it would not be epistemically unfair to think she should recognize that given her limited conceptual resources and limited understanding of the universe, she might be underestimating the range of possible skeptical scenarios.
So too for us now. That's wildcard skepticism.
6 comments:
Does the specific form of skepticism matter less than the general thrust of the argument, namely the irreality of what we take as real? Just as math in base ten or base two is essentially the same math, is the essential argument the same?
To what degree does the skeptical argument rely on the reality that is assumed to be the case initially?
Howard, I'm inclined to think it very much depends on the initial assumptions. That's part of what makes my "1% skepticism" view different from, say, "brain-in-a-vat" skepticism: It depends on there being grounds for doubt that are reasonable given the reasoner's intellectual starting point.
Yes, but all the scenarios tend to sing with one voice: some force, external or internal, making our experiences illusory.
Descartes threw in every doubt imaginable in his First Meditation.
Still, your approach makes sense.
Could there be some kind of theorem, like Gödel's incompleteness theorem, that says any worldview is subject to radical doubt?
I just like how Goofy looks like an idiot in that picture -- but at the same time, that'd be bloody hard to do!
Sorry, too much subtext -- blame Scott!
I don't know about the proof, Howie, unless you think the "regress argument" works. (Search "regress argument" skepticism and you'll see what I mean.)
In a recent debate I heard an interesting approach to this problem.
The suggestion was that you don't have logical access to determining the probability of these scenarios, so adding them into your decision matrix is as much an act of defying logic as defying logic in general. The logical answer, then, is simply not to do it.
This was proposed as a way out of the decision paralysis that comes from possibly non-zero probabilities of infinities, which are potentially impossible to keep out of decision matrices.