Friday, February 10, 2023

How Not to Calculate Utilities in an Infinite Universe

Everything you do causes almost everything -- or so I have argued (blog post version here; a more detailed and careful version, co-authored with Jacob Barandes, appears in my forthcoming book).  On some plausible cosmological assumptions, each of your actions ripples unendingly through the cosmos (including post-heat-death), causing infinitely many good and bad effects.

Assume that our actions do have infinitely many good and bad effects.  My thought today is that this would appear to ruin some standard approaches to action evaluation.  According to some vanilla versions of consequentialist ethics and ordinary decision theory, the goodness or badness of your actions depends on their total long-term consequences.  But since almost all of your actions have infinitely many good consequences and infinitely many bad consequences, the sum total value of almost all of your actions will be ∞ + -∞, a sum which is normally considered to be mathematically undefined.

Suppose you are considering two possible actions with short-term expected values m and n.  Suppose, further, that m is intuitively much larger than n.  Maybe Action 1, with short-term expected value m, is donating a large sum of money to a worthwhile charity, while Action 2, with short-term expected value n, is setting fire to that money to burn down the house of a neighbor with an annoying dog.  Infinitude breaks the mathematical apparatus for comparing the long-term total value of those actions: The total expected value of Action 1 will be m + ∞ + -∞, while the total expected value of Action 2 will be n + ∞ + -∞.  Both values are undefined.
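
To make the breakdown concrete, here is a minimal sketch, using Python's floating-point infinities as a stand-in for the infinite totals (the particular values of m and n are just placeholders):

```python
# Minimal sketch: IEEE floats treat inf + (-inf) as NaN ("not a number"),
# mirroring the claim that each action's total value is undefined.
import math

m, n = 1000.0, -1000.0                 # placeholder short-term expected values
total_1 = m + math.inf + (-math.inf)   # Action 1 plus its infinite good and bad effects
total_2 = n + math.inf + (-math.inf)   # Action 2 plus its infinite good and bad effects

print(total_1, total_2)          # nan nan
print(total_1 > total_2)         # False -- comparisons with NaN carry no information
print(math.isnan(total_1) and math.isnan(total_2))   # True: both totals are undefined
```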

Can we wiggle out of this?  An Optimist might try to escape thus: Suppose that overall in the universe, at large enough spatiotemporal scales, the good outweighs the bad.  We can now consider the relative values of Action 1 and Action 2 by dividing them into three components: the short-term effects (m and n, respectively), the medium-term effects k -- the effects through, say, the heat death of our region of the universe -- and the infinitary effects (∞, by stipulation).  Stipulate that k is unknown but expected to be finite and similar for Actions 1 and 2.  The expected value of Action 1 is thus m + k + ∞.  The expected value of Action 2 is n + k + ∞.  These values are not undefined; so that particular problem is avoided.  The values are, however, equal: simple positive infinitude in both cases.  As the saying goes, infinity plus one just equals infinity.  A parallel Pessimistic solution -- assuming that at large enough time scales the bad outweighs the good -- runs into the same problem, only with negative infinitude.
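
A similarly minimal sketch (again with placeholder finite values) shows the Optimist's predicament: once the infinite tail is added, the two actions come out exactly equal.

```python
# Sketch of the Optimist's solution: the finite terms wash out entirely.
import math

m, n, k = 1000.0, -1000.0, 42.0    # placeholder short- and medium-term values
optimist_1 = m + k + math.inf
optimist_2 = n + k + math.inf

print(optimist_1, optimist_2)      # inf inf
print(optimist_1 == optimist_2)    # True -- infinity plus one just equals infinity
```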

Perhaps a solution is available for someone who holds that at large enough time scales the good will exactly balance the bad, so that we can compare m + k + 0 to n + k + 0?  We might call this the Knife's Edge solution.  The problem with the Knife's Edge solution is delivering that zero.  Even if we assume that the expected value of any spatiotemporal region is exactly zero, the Law of Large Numbers only establishes that as the size of the region under consideration goes to infinity, the average value is very likely to be near zero.  The sum, however, will presumably be divergent – that is, will not converge upon a single value.  If good and bad effects are randomly distributed and do not systematically decrease in absolute value over time, then the relevant series would be a + b + c + d + ... where each variable can take a different positive or negative value and where there is no finite limit to the value of positive or negative runs within the series -- seemingly the very archetype of a poorly behaved divergent series whose sum cannot be calculated (even by clever tools like Cesàro summation).  Thus, mathematically definable sums still elude us.  (Dominance reasoning also probably fails, since Actions 1 and 2 will have different rather than identical infinite effects.)
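
Here is a rough simulation of that worry, treating each region's value as an independent, equally sized good or bad effect (the unit effect size and the number of steps are purely illustrative): the running average settles near zero, but the running sum keeps wandering.

```python
# Sketch: mean-zero random effects. The average obeys the Law of Large
# Numbers, but the sum behaves like a random walk and never settles.
import random

random.seed(0)
running_sum = 0.0
for step in range(1, 1_000_001):
    running_sum += random.choice([-1.0, 1.0])   # one good or bad effect
    if step in (10**3, 10**4, 10**5, 10**6):
        print(f"n={step:>8}  average={running_sum / step:+.4f}  sum={running_sum:+.0f}")
# The average column shrinks toward 0; the sum column keeps drifting,
# with typical magnitude growing roughly like the square root of n.
```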

This generates a dilemma for believers in infinite causation, if they hope to evaluate actions by their total expected value.  Either accept the conclusion that there is no difference in total expected value between donating to charity and burning down your neighbor's house (the Optimist's or Pessimist's solution), or accept that there is no mathematically definable total expected value for any action, rendering proper evaluation impossible.

The solution, I suggest, is to reject certain standard approaches to action evaluation.  We should not evaluate actions based on their total expected value over the lifetime of the cosmos!  We must have some sort of discounting with spatiotemporal distance, or some limitation of the range of consequences we are willing to consider, or some other policy to expunge the infinitudes from our equations.  Unfortunately, as Bostrom (2011) persuasively argues, no such solution is likely to be entirely elegant and intuitive from a formal point of view.  (So much the worse, perhaps, for elegance and intuition?)

The infinite expectation problem is robust in two ways.

First, it affects not only simple consequentialists.  After all, you needn't be a simple consequentialist to think that long-term expected outcomes matter.  Virtually everyone thinks that long-term expected outcomes matter somewhat.  As long as they matter enough that an infinitely positive long-term outcome, over the course of the entire history of the universe, would be relevant to your evaluation of an action, you risk being caught by this problem.

Second, the problem affects even people who think that infinite causation is unlikely.  Even if you are 99.99% certain that infinite causation doesn't occur, your remaining 0.01% credence in infinite causation will destroy your expected value calculations if you don't do something to sequester the infinitudes.  Suppose you're 99.99% sure that your action will have the value k, while allowing a 0.01% chance that its value will be ∞ + -∞.  If you now apply the expected value formula in the standard way, you will crash straightaway into the problem.  After all, .9999 * k + .0001 * (∞ + -∞) is just as undefined as ∞ + -∞ itself.  Similarly, .9999 * k + .0001 * ∞ is simply ∞.  As soon as you let those infinitudes influence your decision, you fall back into the dilemma.
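
A quick sketch, with k and the credences as placeholder numbers, of how the leftover infinitudes contaminate the standard expectation:

```python
# Sketch: a 0.01% credence in an infinite (or undefined) outcome is enough
# to wreck or swamp the expected-value calculation.
import math

k = 100.0                                  # placeholder finite value
undefined_tail = math.inf + (-math.inf)    # nan

print(0.9999 * k + 0.0001 * undefined_tail)   # nan -- the whole expectation is undefined
print(0.9999 * k + 0.0001 * math.inf)         # inf -- the finite term k no longer matters
```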

13 comments:

  1. Something to think about. If we count 1, 2, 3, 4, 5 etc we can continue to infinity. Let's call that "infinity 1." We can also count 2, 4, 6, 8, 10 etc to infinity. Let's call that "Infinity 2." No problem. But notice at any point infinity 1 has twice as many numbers as infinity 2. So if we divide infinity 1 by infinity 2 we will get 2. Conclusion, two infinities are not always equal and (+) infinity does not necessarily negate (-) infinity.

    ReplyDelete
  2. If everything causes everything, then everything was caused by everything. Any effect that occurs after a short time has a large number of causes, so the fraction attributable to any one cause in particular is tiny. Couldn't this lead to a convergent sum? At some point the effects of my actions become random background noise.

    ReplyDelete
  3. Paul David Van Pelt, Sat Feb 11, 05:54:00 AM PST

    I guess my view is similar in content, or context(?), to the one prior. In that respect, it is Panpsychist, even though I don't subscribe to the universal consciousness notion. There are many causes, many effects. Humans account for much, not all of that.

    ReplyDelete
  4. Thanks for the comments, folks!

    Groov: That’s not mathematically standard, and paradox likely awaits!

    D: As long as the effects are there, even if a tiny fraction, we get the infinitude problem over infinite time, yes?


    ReplyDelete
  5. Paul: adding in panpsychism puts a whole new spin on it, if we’re summing the positive or negative experiences of protons, etc! I’m inclined to think infinitude still generates puzzles though!

    ReplyDelete
  6. Life seems easier when you're not a moral realist. It allows us to just focus on what works for us and our foreseeable descendants. Certainly no form of discounting will be perfect, but it doesn't have to be for pragmatic considerations. I think we should be okay with ultimate consequences not weighing on our decisions.

    Mike

    ReplyDelete
  7. Forced to research the 'utility of thought' again, thanks...
    ...is thought a finite sense for infinite attitudes...

    Are all substances employed by evolution's ethics for life...
    ...the forces of a cosmos towards balancing...

    That zero to infinity is the search for where in 'Process philosophy'...
    ..and maybe in Splintered Mind...

    https://plato.stanford.edu/search/r?entry=/entries/process-philosophy/&page=1&total_hits=2206&pagesize=10&archive=None&rank=0&query=process%20philosophy

    ReplyDelete
  8. Paul David Van Pelt, Sun Feb 12, 07:26:00 AM PST

    One last remark, likening modern complexity to, well...:

    Complexity is collapsing under its own weight. Is IT a figurative black hole, subsuming everything else?

    ReplyDelete
  9. "The sum, however, will presumably be divergent – that is, will not converge upon a single value."
    Here's a reason to think that the sum will be convergent.
    The further into the future you go, the larger the light cone of possible causes that could have contributed to a given outcome will be. If we're saying that everything causes everything else, then all of these events in the light cone are, on average, the causes of the event. And our action has only a small share of this total causality. The total size of the causality light cone will go up with the square of the distance from my action. Therefore my action's total share in the causation will decrease in proportion to the square of time, for events in the more and more distant future. Coefficients will vary, but it should be a series of the form:
    (1/1)+(1/4)+(1/9)+(1/16)...
    And that's convergent.
    I dunno if this argument really holds water, though.

    ReplyDelete
  10. Thanks for the continuing comments, folks!

    Mike: Fair enough -- certain things are easier if you're not a moral realist!

    Chinaphil: I think that might work if we're comparing the *ratio* of good to bad effects or the *average* of the effect sizes. But if we're considering the *sum*, we won't get that convergence. Compare having two coins that you independently flip. The ratio of heads on coin 1 to heads on coin 2 will converge toward 1:1, but the difference between the total number of heads on coin 1 and the total number on coin 2 will not converge. Standard utility theory looks at sums, not ratios.
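
    A quick simulation of what I have in mind (the number of flips is an arbitrary choice): the running ratio heads toward 1, while the running difference keeps wandering.

    ```python
    # Sketch: two fair coins flipped independently. The ratio of head counts
    # converges toward 1, but the difference between head counts does not converge.
    import random

    random.seed(1)
    heads_1 = heads_2 = 0
    for flip in range(1, 1_000_001):
        heads_1 += random.randint(0, 1)
        heads_2 += random.randint(0, 1)
        if flip in (10**3, 10**4, 10**5, 10**6):
            print(f"n={flip:>8}  ratio={heads_1 / heads_2:.4f}  diff={heads_1 - heads_2:+d}")
    # The ratio column approaches 1.0; the difference column keeps drifting,
    # typically on the order of the square root of n.
    ```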

    ReplyDelete
  11. I think this is why I became a Bayesian - there is no point-in-time reckoning - only an initial hypothesis and constant updates as additional information arrives. Where you choose to terminate the Bayesian reckoning is strictly up to you and really a very individual choice.

    ReplyDelete
  12. An intuitive solution might be something like: "You only calculate the net utility from your action until it collides with a causal chain traceable to another agentic act".

    So, for example, if I trolley problem a 5:1 split in favor of the 5, I wouldn't then count any of the things that the survivors do against me, even if all 5 of them turn out to be Hitler-like.

    The discounting I'm proposing here isn't based on time, but on computational complexity (which tends to increase with time). As soon as the chain of events you unleash collides with another agent (typically a human person, although things are starting to get weird), your ability to computationally extrapolate further consequences basically becomes 0, so you are exculpated from any further accounting.

    The upshot is that you're basically exculpated from MOST of the consequences of anything you do, since most of your actions collide with other free agents immediately. But it would crucially NOT exculpate you from things with obvious immediate negative consequences such as murder, stealing etc.

    ReplyDelete
  13. Oof, seems the spam has already started on this one... Well, I'll try to move it back to productive discussion. I'm not entirely sure (it's been already discussed, so I might be missing something important), but I'm still attracted to the idea that even though ∞ + -∞ is undefined, we should still treat the expected value of the infinitarian consequences as 0 simply because we have no way of determining whether they will contain more good or more bad.

    After all, something similar has been used to criticize Pascal's Wager - that the possible positive and negative consequences of believing in a religion should effectively cancel out - even though it also doesn't seem likely to me that they exactly do in the most strictly mathematical, Bayesian sense. (I'm not religious, but it seems the existence of a lot of people who believe in some religions is technically evidence, no matter how incredibly weak, for the truth of those religions. And that reasoning means it's very slightly more likely to get an infinite reward and avoid an infinite punishment if you're religious than if you aren't, but it hasn't convinced me to become religious!)

    Yes, there are other possible counterarguments to Pascal's Wager, but in my limited experience, I've heard the "canceling out" version more often than the idea that it leads to insincere belief or that people don't actually have an infinite preference for heaven. (Although there's the caveat that not everyone arguing against religion is philosophically sophisticated).

    Seems I've talked more about Pascal's Wager than the actual topic, but my point is that I think somewhat similar reasoning might still work for the infinite universe case. Come to think of it, this might be sort of similar to Jorge A.'s idea above. I personally doubt the stopping point should always be "collides with a causal chain traceable to another agentic act," because sometimes you can predict the actions of another agent to a reasonable degree. But the general idea of ignoring stuff you cannot predict at all might be useful.

    ReplyDelete