Thursday, May 09, 2024

Formal Decision Theory Is an Optional Tool That Breaks When Values Are Huge

Formal decision theory is a tool -- a tool that breaks, a tool we can do without, a tool we optionally deploy and whose verdicts we can sometimes reject without irrationality.  If it leads to paradox or bad results, we can say "so much the worse for formal decision theory" and move on without it, as of course humans have done for almost all of their history.

I am inspired to these thoughts after reading Nick Beckstead and Teruji Thomas's recent paper in Noûs, "A Paradox for Tiny Probabilities and Enormous Values".

Beckstead and Thomas lay out the following scenario:

On your deathbed, God brings good news. Although, as you already knew, there's no afterlife in store, he'll give you a ticket that can be handed to the reaper, good for an additional year of happy life on Earth. As you celebrate, the devil appears and asks, ‘Won't you accept a small risk to get something vastly better? Trade that ticket for this one: it's good for 10 years of happy life, with probability 0.999.’ You accept, and the devil hands you a new ticket. But then the devil asks again, ‘Won't you accept a small risk to get something vastly better? Trade that ticket for this one: it is good for 100 years of happy life—10 times as long—with probability 0.999^2—just 0.1% lower.’ An hour later, you've made 50,000 trades. (The devil is a fast talker.) You find yourself with a ticket for 10^50,000 years of happy life that only works with probability .999^50,000, less than one chance in 10^21. Predictably, you die that very night.

Here are the deals you could have had along the way:

Deal 0: 1 year of happy life, with probability 1
Deal 1: 10 years, with probability 0.999
Deal 2: 100 years, with probability 0.999^2
Deal 3: 1,000 years, with probability 0.999^3
...
Deal 50,000: 10^50,000 years, with probability 0.999^50,000 (less than one chance in 10^21)

On the one hand, each deal seems better than the one before. Accepting each deal immensely increases the payoff that's on the table (increasing the number of happy years by a factor of 10) while decreasing its probability by a mere 0.1%. It seems unreasonably timid to reject such a deal. On the other hand, it seems unreasonably reckless to take all of the deals—that would mean trading the certainty of a really valuable payoff for all but certainly no payoff at all. So even though it seems each deal is better than the one before, it does not seem that the last deal is better than the first.
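
For concreteness, here's a minimal sketch in Python of the numbers in the quoted passage (my framing, not Beckstead and Thomas's): deal n offers 10^n happy years with probability 0.999^n, so each trade multiplies the expected number of happy years by 9.99 even as the chance of any payoff at all collapses.

```python
# Deal n: 10^n happy years with probability 0.999^n (deal 0 is the sure year).

def years(n):
    return 10 ** n

def probability(n):
    return 0.999 ** n

def expected_years(n):
    # years(n) * probability(n) = (10 * 0.999)^n = 9.99^n: strictly increasing in n
    return years(n) * probability(n)

for n in (0, 1, 2, 10, 100):
    print(n, probability(n), expected_years(n))

# By n = 50,000 the expected value is astronomical (9.99^50,000), yet the
# probability of receiving anything at all is below one chance in 10^21.
```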

Beckstead and Thomas aren't the first to notice that standard decision theory yields strange results when faced with tiny probabilities of huge benefits: See the literature on Pascal's Wager, Pascal's Mugging, and Nicolausian Discounting.

The basic problem is straightforward: Standard expected utility decision theory suggests that given a huge enough benefit, you should risk almost certainly destroying everything.  If the entire value of the observable universe is a googol (10^100) utils, then you should push a button that has a 99.999999999999999999999% chance of destroying everything as long as there is (or you believe that there is) a 0.000000000000000000001% chance (one chance in 10^23) that it will generate more than 10^123 utils.
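
Here's that arithmetic in exact form, as a minimal sketch (the numbers come from the paragraph above; "utils" is just an abstract unit):

```python
from fractions import Fraction  # exact arithmetic; floats would round at this scale

status_quo = 10 ** 100              # the observable universe: one googol utils
p_success = Fraction(1, 10 ** 23)   # one chance in 10^23 that the button pays off
payoff = 10 ** 123 + 1              # anything strictly greater than 10^123 utils

# Standard expected utility: the destruction outcome contributes 0 utils,
# so just compare p_success * payoff against the certain status quo.
print(p_success * payoff > status_quo)   # True: the theory says push the button
```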

As Beckstead and Thomas make clear, you can either accept this counterintuitive conclusion (they call this recklessness) or reject standard decision theory.  However, the nonstandard theories that result are either timid (sometimes advising us to pass up an arbitrarily large potential gain to prevent a tiny increase in risk) or non-transitive (denying the principle that, if A is better than B and B is better than C, then A must be better than C).  Nicolausian Discounting, for example, which holds that below some threshold of improbability (e.g., 1/10^30), any gain no matter how large should be ignored, appears to be timid.  If a tiny decrease in probability would push some event below the Nicolausian threshold, then no potential gain could justify taking a risk or paying a cost for the sake of that event.
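
Here's a sketch of that timidity, using the example threshold of 1/10^30 mentioned above (the scenario numbers are made up for illustration):

```python
from fractions import Fraction

THRESHOLD = Fraction(1, 10 ** 30)   # Nicolausian cutoff: ignore anything less probable

def discounted_expected_utility(outcomes):
    """outcomes: iterable of (probability, utility) pairs; drop sub-threshold ones."""
    return sum(p * u for p, u in outcomes if p >= THRESHOLD)

# Timidity: a 0.1% drop in probability across the threshold erases an
# arbitrarily large potential gain.
just_above = [(Fraction(1, 10 ** 30), 10 ** 100)]    # counted: contributes 10^70
just_below = [(Fraction(999, 10 ** 33), 10 ** 110)]  # vastly better payoff, slightly less likely

print(discounted_expected_utility(just_above))   # 10^70
print(discounted_expected_utility(just_below))   # 0 -- ignored entirely
```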

Beckstead and Thomas present the situation as a trilemma between recklessness, timidity, and non-transitivity.  But there is a fourth horn they neglect.  It's actually a quadrilemma among recklessness, timidity, non-transitivity, and rejecting formal approaches to decision.

I recommend the last horn.  Formal decision theory is a limited tool, designed to help with a certain type of decision.  It is not, and should not be construed to be, a criterion of rationality.

Some considerations that support treating formal decision theory as a tool of limited applicability:

  • If any one particular approach to formal decision theory were a criterion of rationality such that defying its verdicts were always irrational, then applying any other formal approach to decision theory (e.g., alternative approaches to risk) would be irrational.  But it's reasonable to be a pluralist about formal approaches to decision.
  • Formal theories in other domains break outside of their domain of application.  For example, physicists still haven't reconciled quantum mechanics and general relativity.  These are terrific, well-confirmed theories that seem perfectly general in their surface content, but it's reasonable not to apply both of them to every physical predictive or explanatory problem.
  • Beckstead and Thomas nicely describe the problems with recklessness (aka "fanaticism") and timidity -- and denying transitivity also seems very troubling in a formal context.  The problems with each of those three horns of the quadrilemma are pressure toward the fourth horn.
  • People have behaved rationally (and irrationally) for hundreds of thousands of years.  Formal decision theory can be seen as a model of rational choice.  Models are tools employed for a range of purposes; and like any model, it's reasonable to expect that formal decision theory would distort and simplify the target phenomenon.
  • Enthusiasts of formal decision theory often already acknowledge that it can break down in cases of infinite expectation, such as the St. Petersburg Game -- a game in which a fair coin is flipped until it lands heads for the first time, paying 2^n, where n is the number of flips: 2 if H, 4 if TH, 8 if TTH, 16 if TTTH, etc. (the units could be dollars or, maybe better, utils).  The expectation of this game is infinite, suggesting, unintuitively, that people should be willing to pay any finite cost to play it and that a variant paying $1000 plus 2^n would be of equal value to the standard version that just pays 2^n.  (See the simulation sketch just after this list.)  Such enthusiasts are thus already committed to the view that formal decision theory isn't a universally applicable criterion of rationality.
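
Here's the promised simulation sketch of the St. Petersburg Game (my illustration; units left abstract):

```python
import random

def st_petersburg():
    """One play: flip a fair coin until it lands heads; pay 2^n for n total flips."""
    n = 1
    while random.random() < 0.5:   # tails: flip again
        n += 1
    return 2 ** n

# The expectation is (1/2)*2 + (1/4)*4 + (1/8)*8 + ... = 1 + 1 + 1 + ...,
# which diverges -- and adding a flat $1000 leaves it just as infinite.
# Yet simulated averages over any feasible number of plays stay modest.
plays = [st_petersburg() for _ in range(100_000)]
print(sum(plays) / len(plays))   # typically a small double-digit number
```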

In a 2017 paper and my 2024 book (only $16 hardback this month with Princeton's 50% discount!), I advocate a version of Nicolausian discounting.  My idea there -- though I probably could have been clearer about this -- was (or should have been?) not to advocate a precise, formal threshold of low probability below which all values are treated as zero while otherwise continuing to apply formal decision theory as usual.  (I agree with Monton and with Beckstead and Thomas that this can lead to highly unintuitive results.)  Instead, the idea is that below some vague-boundaried level of improbability, decision theory breaks down and we can rationally disregard its deliverances.

As suggested by my final bullet point above, infinite cases cause at least as much trouble.  As I've argued with Jacob Barandes (ch. 7 of Weirdness, also here), standard physical theory suggests that there are probably infinitely many good and bad consequences of almost every action you perform, and thus the infinite case is likely to be the actual case: If there's no temporal discounting, the expectation of every action is ∞ + (-∞), which is undefined.  We can and should discount the extreme long-term future in our decision making, much as we can and should discount extremely tiny probabilities.  Such applications take formal decision-theoretic models beyond the bounds of their useful application.  In such cases, it's rational to ignore what the formal models tell us.
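
A toy illustration of why "∞ + (-∞)" has no determinate value (my example, not from the chapter): give an action infinitely many +1 and infinitely many -1 consequences, and the running total depends entirely on the order in which you tally them.

```python
def running_total(pattern, n_terms):
    """Repeat a finite +1/-1 pattern and sum its first n_terms terms."""
    return sum(pattern[i % len(pattern)] for i in range(n_terms))

# The same infinite stock of consequences (countably many +1s and -1s),
# tallied in two different orders:
print(running_total([+1, -1], 999_999))       # stays near 0
print(running_total([+1, +1, -1], 999_999))   # 333,333 and growing without bound
```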

Ah, but then you want a precise description of the discounting regime, the thresholds, the boundaries of applicability of formal decision theory?  Nope!  That's part of what I'm saying you can't have.

8 comments:

Richard Y Chappell said...

Are you suggesting that some *informal* decision theory yields better verdicts in B&T's weird cases? Or do you instead mean to suggest that it's simply unknowable (or, stronger still, that there is no fact of the matter) what one ought to do in such a case?

I don't see how the former would work. Any complete theory of decision whatsoever - formal or otherwise - will end up being either reckless, timid, or non-transitive, by B&T's argument.

So I take it you instead mean the latter. But then isn't it misleading to single out "formal" decision theory as the culprit? The problem is with trying to specify what it's rational to do in these cases. Your "solution", it seems, is to give up on even trying to answer the normative question *at all* in these cases. (Though I wonder if rejecting completeness in this way is also, in effect, a form of non-transitivity?)

Eric Schwitzgebel said...

Thanks, Richard! Well, a sufficiently informal decision theory might be something like "do the reasonable thing". Maybe that's sufficiently vague as to be unobjectionable; so I don't want to commit to the claim that all informal decision theories will fail to be perfectly general. But maybe you would say that such a decision theory isn't "complete" or isn't a "theory". I'd prefer not to have to stake out a view on exactly those issues, so I've restricted my claims to the formal models that people actually publish about.

One comparison point is vagueness, where I also think formal models break down.

I think it's reasonable to say I shouldn't take the 10^500 option and I should take the 10^1 option, and of course transitivity is generally reasonable. If committing to transitivity entails committing to *always* adhering to it, then by virtue of not aiming for a complete theory, I am in a sense rejecting transitivity. But I don't think that constitutes actually rejecting transitivity in an appropriately watered-down sense. There might be no logical principle (even non-contradiction) that I think we ought to commit ex ante to adhering to with perfect generality; so if committing with perfect generality is built into accepting a logical principle, then I accept no logical principles. But that conclusion proves too much, so it's a reductio: I don't actually count as rejecting transitivity, in an appropriately watered-down sense, by virtue of resisting adhering to it with perfect generality.

Richard Y Chappell said...

Something I'm a bit unclear on: Do you think that there *is* a reasonable thing to do in all these cases? If you have any dispositions whatsoever that cover all these cases, your dispositions will either be objectionably timid, reckless, or non-transitive, right?

I think these cases are really puzzling. I worry that taking "formal models break down" as the lesson is a cop-out. If you're inclined to go that way, the proper lesson is surely that *normativity itself* breaks down. You can't avoid this just by rejecting formalisms (and I worry that your framing seems to suggest otherwise).

Eric Schwitzgebel said...

Committing to a formal account that applies to all cases means committing in advance to an answer to the devil's deals. I think it's reasonable to resist committing in advance. It doesn't follow that, in the unlikely event of my actually being confronted with such a series of choices, there wouldn't be a most reasonable thing for me to do. There might be a most reasonable thing to do, but if so, it's reasonable for me *now* to say that I can't know in advance what it would be. Or there might not be a uniquely most reasonable thing to do. The problem with formal models of universal applicability is that they entail that commitment in advance. Informal principles and formal principles of vague-boundaried, limited application needn't commit us in advance.

The possibility of such a deal is so remote from everything that I've experienced that if I actually were to experience it and to judge the options to be plausibly as described, that would so rock my presuppositions about the world that I don't know how I would react. Just for starters, 10^500 years of life is almost inconceivable. Would it still be "me" in the sense relevant to prudential decision-making? Are years of life additively good? There might be no determinate answer to these questions; or no determinate answer that I can a priori know; or multiple, conflicting equally reasonable answers. To rationally evaluate the deal, I would need to understand the options, but I'm not sure I do (or even could?) understand them....

Arnold said...

I've come to see that what ever thought, feeling or experience I process...
...has been done before and is being done right now...

0 to infinity provides place for my decisions...
...learning to stay in place luckily is a valuable option to work with...

Another way to put it, again and again...
...'everything I need is right in front of me'...

Paul D. Van Pelt said...

Never bargain with the Devil. I know little about things like probability and chaos theory, but I subscribe to *anything that can go wrong will---at the worst possible time*---Murphy, again. Sure, it can be OK to hedge one's bets, but I don't recommend that, where lots of money or something of greater value is at stake. With or without the Devil'$ intervention, there remains an element of chance---better, where possible then, to reject the bet a priori. There aren't too many 'sure things'. My wife of nearly thirty-four years suffered three falls (that I knew of) in the last two months of her life. Ironically, the third fall propelled her into a night stand in her bedroom. Perhaps more ironic still, the stand is part of a bedroom set I built for her about a year ago. The fall resulted in an irreparable brain hemorrhage. She died within days. She had cheated the Devil before and lived a dozen more years. You cannot cover all bets. Seems to me.

J. C. Lester said...

When people refer to what is "rational" or "reasonable" they appear to mean what is prudent, or efficient, or logical, etc. If that is the case, then it might help to clarify matters by using one of the relevant latter terms instead.

https://jclester.substack.com/p/rationality-a-libertarian-viewpoint?utm_source=publication-search

Paul D. Van Pelt said...

Chance does not favor the prepared mind. It favors nothing. When armed forces people adhere to "never apologize; it is a sign of weakness", there is reason for that: indecision on the battlefield results in loss. Yes.