Thursday, April 16, 2015

How to Disregard Extremely Remote Possibilities

In 1% Skepticism, I suggest that it's reasonable to have about a 1% credence that some radically skeptical scenario holds (e.g., that this is a dream or that we're in a short-term sim), and that it's sometimes reasonable to make decisions we wouldn't otherwise make based upon those small possibilities (e.g., deciding to try to fly, or choosing to read a book rather than weed when one is otherwise right on the cusp).

But what about extremely remote possibilities with extremely large payouts? Maybe it's reasonable to have a one in 10^50 credence in the existence of a deity who would give me at least 10^50 lifetimes' worth of pleasure if I decided to raise my arms above my head right now. One in 10^50 is a very low credence, after all! But given the huge payout, if I then straightforwardly apply the expected value calculus, such remote possibilities might generally drive my decision making. That doesn't seem right!
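To make the worry concrete, here's a minimal sketch of the expected-value calculation (the payoff units and the stakes of the mundane comparison are, of course, stipulations):

```python
from fractions import Fraction

credence = Fraction(1, 10**50)   # one in 10^50
payoff = 10**50                  # lifetimes' worth of pleasure, treated as linear utility

ev_deity = credence * payoff     # = 1: a full lifetime's worth of pleasure in expectation
ev_mundane = Fraction(1, 1000)   # stipulated stakes of an ordinary everyday choice

print(ev_deity > ev_mundane)     # True -- the remote possibility dominates the decision
```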

I see three ways to insulate my decisions from such remote possibilities without having to zero out those possibilities.

First, symmetry:
My credences about extremely remote possibilities appear to be approximately symmetrical and canceling. In general, I'm not inclined to think that my prospects are particularly better or worse, considering the influence of extremely unlikely deities as a group, if I raise my arms than if I do not. More specifically, I can imagine a variety of unlikely deities who punish and reward actions in complementary ways -- one punishing what the other rewards, and vice versa. (Similarly for other remote possibilities of huge benefit or suffering, e.g., happening to rise to an infinite Elysium if I step right rather than left.) This indifference among the specifics is partly guided by my general sense that extremely remote possibilities of this sort don't greatly diminish or enhance the expected value of such actions. I see no reason not to be guided by that general sense -- no argumentative pressure to take such asymmetries seriously in the way that there is some argumentative pressure to take dream doubt seriously.
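A toy sketch of the cancellation, with stipulated credences and payoffs:

```python
from fractions import Fraction

tiny = Fraction(1, 10**50)

# Complementary deities: one rewards arm-raising, the other punishes it.
remote_possibilities = [
    (tiny, +10**50),   # deity who rewards raising my arms
    (tiny, -10**50),   # deity who punishes raising my arms
]

net_ev = sum(credence * payoff for credence, payoff in remote_possibilities)
print(net_ev)  # 0 -- considered as a group, the remote deities cancel
```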

Second, diminishing returns:
Bernard Williams famously thought that extreme longevity would be a tedious thing. I tend to agree instead with John Fischer that extreme longevity needn't be so bad. But it's by no means clear that 10^20 years of bliss is 10^20 times more choiceworthy than a single year of bliss. (One issue: If I achieve that bliss by repeating similar experiences over and over, forgetting that I have done so, then this is a goldfish-pool case, and it seems reasonable not to think of goldfish-pool cases as additively choiceworthy; alternatively, if I remember all 10^20 years, then I seem to have become something radically different in cognitive function than I presently am, so I might be choosing my extinction.) Similarly for bad outcomes and for extreme but instantaneous outcomes. Choiceworthiness might be very far from linear with temporal bliss-extension for such magnitudes. And as long as one's credence in remote outcomes declines sharply enough to offset increasing choiceworthiness in the outcomes, then extremely remote possibilities will not be action-guiding: a one in 10^50 credence of a utility of +/- 10^30 is negligible.
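As a minimal sketch of how sharply declining credence can offset growing choiceworthiness -- assuming a logarithmic utility function purely as a stand-in for some strongly diminishing function:

```python
import math

def choiceworthiness(bliss_years):
    # An assumed, strongly diminishing utility function -- far from linear
    # at these magnitudes (log is just a stand-in).
    return math.log10(1 + bliss_years)

for k in (10, 20, 50):
    credence = 10.0 ** -(k + 20)   # credence falling faster than the payoff grows
    print(k, credence * choiceworthiness(10.0 ** k))   # shrinks toward zero

# The post's example directly: a one in 10^50 credence of a utility of 10^30.
print(10.0 ** -50 * 10.0 ** 30)    # 1e-20 -- negligible
```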

Third, loss aversion:
I'm loss averse rather than risk neutral. I'll take a bit of a risk to avoid a sure or almost-sure loss. And my life as I think it is, given non-skeptical realism, is the reference point from which I determine what counts as a loss. If I somehow arrived at a one in 10^50 credence in a deity who would give me 10^50 lifetimes of pleasure if I avoided chocolate for the rest of my life (or alternatively, a deity who would give me 10^50 units of pain if I didn't avoid chocolate for the rest of my life), and if there were no countervailing considerations or symmetrical chocolate-rewarding deities, then on a risk-neutral utility function, it might be rational for me to forego chocolate evermore. But foregoing chocolate would be a loss relative to my reference point; and since I'm loss averse rather than risk neutral, I might be willing to forego the possible gain (or risk the further loss) so as to avoid the almost-certain loss of life-long chocolate pleasure. Similarly, I might reasonably decline a gamble with a 99.99999% chance of death and a 0.00001% chance of 10^100 lifetimes' worth of pleasure, even bracketing diminishing returns. I might even reasonably decide that at some level of improbability -- one in 10^50? -- no finite positive or negative outcome could lead me to take a substantial almost-certain loss. And if the time and cognitive effort of sweating over decisions of this sort itself counts as a sufficient loss, then I can simply disregard any possibility where my credence is below that threshold.
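Here's a rough sketch of that decision rule, in the spirit of prospect theory -- the loss-aversion coefficient is Kahneman and Tversky's well-known estimate, and the disregard threshold and payoff numbers are stipulations:

```python
LAMBDA = 2.25            # losses loom larger than gains (Kahneman & Tversky's estimate)
THRESHOLD = 10.0 ** -40  # stipulated: below this credence, a possibility is disregarded

def value(outcome):
    # Outcomes are measured relative to the reference point: life as I think it is.
    return outcome if outcome >= 0 else LAMBDA * outcome

def weight(credence):
    return 0.0 if credence < THRESHOLD else credence

def decision_value(prospects):
    # prospects: list of (credence, outcome relative to the reference point)
    return sum(weight(c) * value(x) for c, x in prospects)

# Forgo chocolate forever: an almost-sure loss now (approximating its
# credence as 1.0) against a one-in-10^50 shot at 10^50 lifetimes of pleasure.
abstain = [(1.0, -0.01), (10.0 ** -50, 10.0 ** 50)]
keep_chocolate = [(1.0, 0.0)]   # the status quo reference point

print(decision_value(abstain))         # negative: the remote deity is disregarded
print(decision_value(keep_chocolate))  # 0.0 -- so I keep the chocolate
```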

These considerations synergize: the more symmetry and the more diminishing returns, the easier it is for loss aversion to inspire disregard. Decisions at credence one in 10^50 are one thing, decisions at credence 0.1% quite another.

9 comments:

  1. I think a more important point to bear in mind at all times is that humans can't do big numbers. I have no intuitive conception of what 10^50 is, and in general I would submit that once we get above 1000, everything's just big; and once we get smaller than 1/1000, everything's just small. So one reason to disregard small possibilities is that unless you've worked them out in a rigorous way, you don't know what they are.

    In connection with this - I went through the calculations for the million monkeys, million typewriters and million years thing a while ago. What's interesting is that they wouldn't just fail to produce Shakespeare, they would fall short of even one line, by more than a dozen orders of magnitude. The numbers you offer in the post are out by an unimaginably long way: the probability of one line of Shakespeare is one in 10^50. The probability of many lifetimes of pleasure (which might include reading a lot of Shakespeare) is probably more like one in 10^10^10^50. The relative sizes of these numbers matter.
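    A rough back-of-the-envelope version of that calculation -- the keyboard size, line length, and typing speed are all assumptions, but they give the order of magnitude:

    ```python
    import math

    KEYS = 30                      # assumed effective keyboard size
    LINE_LENGTH = 34               # assumed characters in a line of verse
    p_line = KEYS ** -LINE_LENGTH  # chance one random 34-char string is the line

    monkeys = 10 ** 6
    years = 10 ** 6
    chars_per_second = 10          # assumed typing speed per monkey
    attempts = monkeys * years * 365 * 24 * 3600 * chars_per_second

    print(math.log10(p_line))             # about -50: one in 10^50 per attempt
    print(math.log10(attempts * p_line))  # about -30: still hopelessly short
    ```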

    I also think there's a difference between motivated small possibilities and unmotivated possibilities. There doesn't seem to be any reason to believe in that deity. So it's hard to imagine why a probability should be assigned to it. That would seem to be giving too much weight to my imagination - after all, I can only assign a probability to something I've thought of, but I can't see any principled difference between the set of things I've thought of and the set of all things which could be thought of (but haven't been).

  2. chinaphil: Thanks for that thoughtful comment! I agree that we are bad with very small and very large numbers, and assessments like 10^20 vs. 10^50 vs. 10^10^100 are very difficult to make -- so that's another argument to add to my quiver here. On unmotivated possibilities: Well, in connection with your first point, zero is also a very small number in a certain sense! My hunch is that although we can often simplify to 0s and 1s in practical reasoning, in another sense assigning a 0 credence is like saying that the event is vastly less likely than 1 in 10^10^100 -- and maybe what one needs is not so much a positive motivation to assign a tiny sliver as a large negative motivation to refuse to do so. So then it's nice to have the arguments in the post to justify ignoring such remote chances anyway.

  3. I think a challenge arises during the seemingly innocuous act of conceding a number.
    Using tools like Bayes - if you can fasten on a number to start with - you can arrive at the most magnificent and ridiculous conclusions.
    As an engineer - I would never allow a bootstrap assumption unless it is grounded in something - and then only maybe.

  4. If only decision theory weren't so cool and useful!

  5. I can't decide what happened to my post - did it get lost? (I did this once before on another forum - kept posting, thinking the system swallowed it. Took me some time to figure out the blog owner was deleting me over and over.)

    Anyway, here it is again (it's fairly uncontroversial so I'm guessing the system ate it)

    Took me a moment or three, but I realised I had a skepticism failure - I mean, why am I taking it that "10^50 lifetimes' worth of pleasure" is available? What is the likelihood of that? It is its own extremely remote chance - or one could estimate it that way if one applies skepticism to it.

    Somehow it avoided my skepticism radar, initially. Interesting. Perhaps I'm mostly aligned to generally only deal with one probability at a time? Maybe because I stayed up late playing a FPS? Quite a hiccup and a curious one at that.

  6. Callan -- Sorry about your previous comment being lost! The Splintered Mind is now on pre-approval for comments on an experimental basis.

    Yes, the chance is extremely remote! But the possible problem (which I am trying to avoid) is that if the benefits are enough, standard application of decision theory would recommend that you act on the basis of it. For an engaging discussion of one example, check out "Pascal's Mugging": http://www.nickbostrom.com/papers/pascal.pdf

    I do think we tend to think mostly in terms of one (or two) possibilities at a time -- but of course that's a simplification of the options.

  7. Hi Eric,

    But don't the benefits get less of a chance of existing the bigger they are? Thus countering the size of the return? Or if they don't, I have this thing called a ponzi scheme I'd like to talk about...

    On the Pascal example - I'd say it really depends on whether you can afford to speculate. Every day people give up money (sometimes the contents of their wallet) on lotteries they have a better chance of being hit by lightning than of winning.

    It's not just a matter of return, but current need. A guy dying of thirst in the desert needs to keep that bottle of water rather than give it up for a 100% chance of a return of three...in three days' time. As he'll be dead by then.

    But if you're fat with assets, then the crazy mugger might be an amusing speculation to take.

  8. Yes, I agree with all that, Callan! In the diminishing returns argument, I meant to rely on declining credences as the payoffs become larger, though looking again at it now, that assumption was somewhat cryptically buried in there!

  9. Came across this and it might be of interest for the whole possibility theory thing: http://www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.html
