The skeptical scenarios that I argue deserve a wee smidgen of doubt -- not very likely, but not reasonably dismissible either -- are dream skepticism (might I now be asleep and dreaming?), simulation skepticism (might I be an artificial intelligence living in a small, simulated world?), and cosmological skepticism (might the cosmos in general, or my position in it, be radically different from what I think, e.g., might I be a Boltzmann brain?).
"1% skepticism", as I define it, is the view that it's reasonable for me to assign about a 1% credence to the possibility that I am actually now enduring some radically skeptical scenario of this sort (and thus about a 99% credence in non-skeptical realism, the view that the world is more or less how I think it is).
Now, how do I arrive at this "about 1%" skeptical credence? Although the only skeptical possibilities to which I am inclined to assign non-trivial credence are the three just mentioned (dream, simulation, and cosmological), it also seems reasonable for me to reserve a bit of my credence space, a bit of room for doubt, for the possibility that there is some skeptical scenario that I haven't yet considered, or that I've considered but dismissed and should take more seriously than I do. I'll call this wildcard skepticism. It's a kind of meta-level doubt. It's a recognition of the possibility that I might be underappreciating the skeptical possibilities. This recognition, this wildcard skepticism, should slightly increase my credence that I am currently in a radically skeptical scenario.
You might object that I could equally well be overestimating the skeptical possibilities, and that in recognition of that possibility, I should slightly decrease my credence that I am currently in a radically skeptical scenario; and thus the possibilities of over- and underestimation should cancel out. I do grant that I might as easily be overestimating as underestimating the skeptical possibilities. But over- and underestimation do not normally cancel out in the way this objection supposes. Near a confidence ceiling -- such as my 99% credence in non-skeptical realism -- meta-level doubt should tend, overall, to shift one's credence down.
To see this, consider a cartoon case. Suppose I would ordinarily have a 99% credence that it won't rain tomorrow afternoon (hey, it's July in southern California), but I also know one further thing about my situation: There's a 50% chance that God has set things up so that from now on the weather will always be whatever I think is most likely, and there's a 50% chance that God has set things up so that whenever I have an opinion about the weather he'll flip a coin to make it only 50% likely that I'm right. In other words, there's a meta-level reason to think that my 99% credence might be an underestimation of the conformity of my opinions to reality or equally well might be an overestimation. What should my final credence in sunshine tomorrow be? Well, 50% times 100% (God will make it sunny for me) plus 50% times 50% (God will flip the coin) = 75%. In meta-level doubt, the down weighs more than the up.
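The arithmetic of the cartoon case can be laid out explicitly in a few lines of Python -- a purely illustrative sketch using the numbers from the example, which also makes the ceiling asymmetry visible:

```python
# Toy check of the cartoon weather case from the text.
base = 0.99        # my ordinary credence that it won't rain tomorrow
best_case = 1.0    # God conforms the weather to my opinion: I'm right for sure
worst_case = 0.5   # God flips a coin: I'm right only half the time

# 50/50 meta-level chance of each setup:
final = 0.5 * best_case + 0.5 * worst_case
print(f"final credence in sunshine: {final}")  # 0.75

# Why the down weighs more than the up: near the ceiling,
# the possible gain is tiny while the possible loss is large.
upside = best_case - base     # at most +0.01
downside = base - worst_case  # as much as -0.49
print(f"upside {upside:.2f} vs downside {downside:.2f}")
```

The same structure holds for any credence near 100%: symmetric meta-level doubt has almost no room to push the credence up, but plenty of room to push it down.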
Consider the history of skepticism. In Descartes's day, a red-blooded skeptic might have reasonably invested a smidgen more doubt in the possibility that she was being deceived by a demon than it would be reasonable to invest in that possibility today, given the advance of a science that leaves little room for demons. On the other hand, a skeptic in that era could not even have conceived of the possibility that she might be an artificial intelligence inside a computer simulation. It would be epistemically unfair to such a skeptic to call her irrational for not considering specific scenarios beyond her society's conceptual ken, but it would not be epistemically unfair to think she should recognize that given her limited conceptual resources and limited understanding of the universe, she might be underestimating the range of possible skeptical scenarios.
So too for us, now. That's wildcard skepticism.