Tuesday, May 17, 2022

Our Infinite Predecessors: Flipping the Doomsday Argument on Its Head

The Doomsday Argument purports to show, probabilistically, that humanity will not endure for much longer: Likely, at least 5% of the humans who will ever live have already lived. If 60 billion have lived so far, then probably no more than 1.2 trillion humans will live, ever. (This gives us a maximum of about eight more millennia at the current birth rate of 140 million per year.) According to this argument, the odds that humanity colonizes the galaxy with many trillions of inhabitants are vanishingly small.
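
For readers who like to see the arithmetic spelled out, here is a minimal back-of-the-envelope sketch (in Python, purely for concreteness) of the numbers in the paragraph above; nothing in the argument turns on the exact birth-rate figure:

    # Back-of-the-envelope Doomsday arithmetic, using the figures quoted above.
    past_humans = 60e9            # humans who have lived so far (rough standard estimate)
    threshold = 0.05              # "at least 5% of all humans have already lived"
    births_per_year = 140e6       # current global birth rate

    max_total_humans = past_humans / threshold            # 1.2 trillion
    max_future_humans = max_total_humans - past_humans    # 1.14 trillion
    years_remaining = max_future_humans / births_per_year

    print(f"max total humans: {max_total_humans:.2e}")    # ~1.2e12
    print(f"years remaining:  {years_remaining:,.0f}")    # ~8,143 -- about eight millennia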

Why think we are doomed? The core idea, as developed by Brandon Carter (see p. 143), John Leslie, Richard Gott, and Nick Bostrom, is this. It would be statistically surprising if we -- you and I and our currently living friends and relatives -- were very nearly the first human beings ever to live. Therefore, it's unlikely that we are in fact very nearly the first human beings ever to live. But if humanity continues on for many thousands of years, with many trillions of future humans, then we would in fact be very nearly the first human beings ever to live. Thus, we can infer, with high probability, that humanity is doomed before too much longer.

Consider two hypotheses: On one hypothesis, call it Endurance, humanity survives for many millions more years, and many, many trillions of people live and die. On the other, call it Doom, humanity survives for only a few more centuries or millennia. On Endurance, we find ourselves in a surprising and unusual position in the cosmos -- very near the beginning of a very long run! This, arguably, would be as strange and un-Copernican as finding ourselves in some highly unusual spatial position, such as very near the center of the cosmos. The longer the run, the more surprisingly unusual our position. In contrast, Doom suggests that we are in a rather ordinary temporal position, roughly the middle of the pack. Thus, the reasoning goes, unless there's some independent reason to think Endurance much more plausible than Doom, we ought to conclude that Doom is likely.

Let me clarify by showing how Doomsday-style reasoning would work in a few more intuitive cases. But first, here's an inverted mushroom cloud to symbolize that I'll soon be flipping the argument over.

Imagine two lotteries. One has ten numbers, the other a hundred numbers. You don't know which one you've entered into, but you go ahead and draw a number. You discover that you have ticket #6. Upon finding this out, you ought to guess that you probably drew from the ten-number lottery rather than the hundred-number lottery, since #6 would be a surprisingly low draw in a hundred-number lottery. Not impossible, of course, just relatively unlikely. If your prior credence was split 50-50 between the two lotteries, you can use Bayesian inference to derive a posterior credence of about 91% that you are in the ten-number lottery, given that you see a number among the first ten. (Of course, if you have other evidence that makes it very likely that you were in the hundred-number lottery, then you can reasonably retain that belief even after drawing a relatively low number.)
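
If it helps to see that update spelled out, here is a small sketch (again in Python, just for concreteness) of the Bayesian calculation behind the 91% figure, assuming as above a 50-50 prior and a ticket drawn uniformly at random from whichever lottery you're in:

    # Two-lottery Bayesian update: you hold ticket #6 and start with a 50-50 prior.
    prior_small, prior_large = 0.5, 0.5   # ten-number vs. hundred-number lottery

    # Likelihood of drawing any particular ticket (such as #6) from each lottery:
    like_small = 1 / 10
    like_large = 1 / 100

    posterior_small = (prior_small * like_small) / (
        prior_small * like_small + prior_large * like_large
    )
    print(round(posterior_small, 3))      # 0.909 -- about 91% that it's the ten-number lottery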

Alternatively, imagine that you're one of a hundred people who have been blindfolded and imprisoned. You know that 90% of the prison cells are on the west side of town and 10% are on the east side. Your blindfold is removed, but you don't see anything that reveals which side of town you're on. Nonetheless, you ought to think it's likely you're on the west side of town.

Or imagine that you know that 10,000 people, including you, have been assigned in some order to view a newly discovered painting by Picasso, but you don't know in what order people actually viewed the painting. Exiting the museum, you should think it unlikely that you were either among the very first or very last.

The reasoning of the Doomsday argument is intended to be analogous: If you don't know where you're temporally located in the run of humans, you ought to assume it's unlikely that you're in the unusual position of being among the first 5% (or 1% or, amazingly, .001%).

Now various disputes and seeming paradoxes arise with respect to such probabilistic approaches to "self-location" (e.g., Sleeping Beauty), and a variety of objections have been raised to Doomsday Argument reasoning in particular (Leslie's book has a good discussion). But let's bracket those objections. Grant that the reasoning is sensible. Today I want to add a pair of observations that have the potential to flip the Doomsday Argument on its head, even if we accept the general style of reasoning.

Observation 1: The argument assumes that only about 60 billion humans have existed so far, rather than vastly many more. Of course this seems plausible, but as we will see there might be reason to reject it.

Observation 2: Standard physical theory appears to suggest that the universe will endure infinitely long, giving rise to infinitely many future people like us.

There isn't room here to get into depth on Observation 2. I am collaborating with a physicist on this issue now; draft hopefully available soon. But the main idea is this. There's no particular reason to think that the universe has a future temporal edge, i.e., that it will entirely cease. Instead, standard physical theory suggests that it will enter permanent "heat death", a state of thin, high-entropy chaos. However, there will from time to time be low-probability events in which people, or even much larger systems, spontaneously congeal from the chaos, by freak quantum or thermodynamical chance. There's no known cap on the size of such spontaneous fluctuations, which could even include whole galaxies full of evolving species, eventually containing all non-zero-probability life forms. (See the literature on Boltzmann brains.) Perhaps there will even be new cosmic inflations, for example, caused by black holes or spontaneous fluctuations. Vanilla cosmology thus appears to imply an infinite future containing infinitely many people like us, to any arbitrarily specified degree of similarity, perhaps in very large chance fluctuations or perhaps in newly nucleated "pocket universes".

Now if we accept this, then by reasoning similar to that of the Doomsday Argument, we ought to be very surprised to find ourselves among the first 60 billion people like us, or living in the first 14 billion years of an infinitely existing cosmos. We'd be among the first 60 billion out of infinity. A tiny chance indeed! On Doomsday-style reasoning, it would be much more reasonable, if we think the future is infinite, to think that the past must be infinite too. Something existed before the Big Bang, and that something contained observers like us. That would make us appropriately mediocre. Then, in accordance with the Copernican Principle, we'd be in an ordinary location in the cosmos, rather than the very special location of being within 14 billion years of the beginning of an infinite duration.

The situation can be expressed as follows. Doomsday reasoning implies the following conditional statement:

Conditional Doom: If only 60 billion humans, or alternatively humanlike creatures, have existed so far, then it's unlikely that many trillions more will exist in the future.

If we take as a given that only 60 billion have existed so far, we can apply modus ponens (concluding Q from P and if P then Q) and conclude Doom.

But alternatively, if we take as a given that (at least) many trillions will exist in the future, we can apply modus tollens (concluding not-P from not-Q and if P then Q) and conclude that many more than 60 billion have already existed.

The modus ponens version is perhaps more plausible if we think in terms of our species, considered as a local group of genetically related animals on Earth. But if we think in terms of humanlike creatures rather than specifically our local species, and if we accept an infinite future likely containing many humanlike creatures, then the modus tollens version becomes more plausible, and we can conclude a long past as well as a long future, full of humanlike creatures extending infinitely forward and back.

Call this the Infinite Predecessors argument. From infinite successors and Doomsday-style self-location reasoning, we can conclude infinite predecessors.

---------------------------------------------

Related:

Almost Everything You Do Causes Almost Everything (Mar 18, 2021).

My Boltzmann Continuants (Jun 6, 2013).

How Everything You Do Might Have Huge Cosmic Significance (Nov 29, 2016).

And Part 4 of A Theory of Jerks and Other Philosophical Misadventures.

[image adapted from here]

32 comments:

  1. Isn't this a motif of science fiction?
    In the eighties in an airport I read a story with the theme of earlier doomed civilizations.
    But you assume intelligent life can pop up in the ether like a quark

  2. "This gives us a maximum of about eight more millennia at the current birth rate of 140 million per year"

    I think I see a problem with their argument right here.

  3. In philosophy of psychology is infinity in descendance ascendance transcendence...
    ...free of times arrow for Life's predecessors...

  4. Slight aside question:

    The article on sleeping beauty leaves the final state of the discussion at the point where Bostrom argues from the million-day version that she should lean towards thinking the coin flip came up tails.

    That feels mostly convincing to me as I reason that if she bets a dollar it came up tails on 1:1.01 odds, she's going to come out winning about five thousand dollars on average. If she bets a dollar it came up heads on the same odds, she's going to lose a million dollars minus some small number of cents I haven't thought through.

    (I think it's just minus one cent.)

    But is that the current state of the game? Has no one responded to Bostrom's point about the million day version?

  5. The Farnham mouse experiment returns to mind. My brother likes this one. The experimenters grossly underestimated the needs of the colony. Social structure broke down more rapidly and in ways either unanticipated or underanticipated. Diamond's book, Collapse, was instructive for what he found or theorized, and was perhaps as significant for conjectures he made, in lieu of convincing evidence. The parables of probability, Murphy's Laws, were headed up by the capstone, which proclaimed: anything that can go wrong will. On my thirty-seventh birthday, a number of years ago, a trip into space went horribly wrong. Richard Feynman showed conference-goers why this happened. So, my contention is the demise of humankind is likely to be of our own making, among other things---a so-called 'perfect storm'...in other colloquial terms, a totality of the circumstances: Murphy's admonition, ANYTHING that can go wrong. Thanks!

  6. Thanks, all!

    Garret: I'm not sure I understand your objection. Current demographic projections are that population growth will stabilize at approximately that rate over the next century. Of course, it's anyone's guess about the farther future, but the argument doesn't turn on the particulars of population growth rates.

    Kris: On Sleeping Beauty, I too am a 1/3-er. But opinion remains divided:
    https://survey2020.philpeople.org/survey/results/5170

    Paul: You might like Toby Ord's recent book The Precipice, which carefully explores the various ways in which we might exterminate ourselves in the next few centuries, considering what steps we might take to reduce the likelihood of this.

  7. I've always felt like doomsday arguments and the fine-tuning argument and things like that are misusing statistics. Everything that has ever happened had a vanishingly small chance of happening if looked at in a certain way.

  8. I do like the logic associated with “the doomsday” argument, even if I don’t like the name much. It doesn’t say that it’s impossible to randomly draw a 1 in a set of whole numbers between 0 and 1,000,000,000. It just says that it’s far more likely for such a number to result in far lower sized drawings. So if you draw a 1 and have no information about the size of the drawing, then suspecting larger and larger numbered drawings should be less and less rational. It’s a perfectly logical position that ought to give some of the silliest future humanity predictors pause, but probably does not.

    I consider there to be plenty of reason to believe that an animal like us would not tend to kill itself off before it became technological, though would increasingly tend to do so afterwards. Furthermore it could be that the age of sci-fi has obscured to many that the only spaceship which we should ever have that’s reasonably sustainable, is the one that created us.

    If we do kill ourselves off in millennia sooner rather than later, and many other species here follow our pattern of technology to extinction (possibly even with scientists able to detect various terrestrial technology ages), wouldn’t it be strange that we were the first? Yes, but it is what it is. And we might even be the last to do so here to thus invalidate that strangeness.

    Regarding this inversion of the doomsday (or “rational”?) argument, yes I’ll go along with it. Of course the big caveat is that we’re discussing creatures that would merely function somewhat like us. But given infinite time I think we can indeed conclude that there will be infinite such predecessors.

  9. Will these hypothetical people in other solar systems and galaxies really be like us?
    Evolution of human intelligence depends on the local circumstances of the savannah in which we sprouted and spurted into the world- we weren't built, like computers- would other 'earths' really be like us creating other 'humans' like us?
    Until you prove that I'd assume the opposite

  10. Thanks for the continuing comments, folks!

    Unknown May 18: You aren't alone in this suspicion. I'm inclined to think that doomsday-style reasoning is sound in general but it remains contentious among experts.

    Phil E: Of course this also connects to the "Fermi paradox". Why don't we see others? Could we be first? That seems unlikely unless technological intelligence is extremely rare. But maybe technological intelligence *is* extremely rare. Or maybe we're not the first and the others on other planets quickly died off, or are hiding, or....

    Howard: Pretty much all you need is infinite time, a finite chance, and not to get stuck in a looping subset of the possibility space. See "Poincare recurrence" or my old posts on Boltzmann brains and/or eternal recurrence.

  11. Yes but

    The laws of physics might differ in the oases rising from this quantum soup, and if various constants and laws differ, that might, up the great stream of the great chain of being, change the type of living organisms that result
    Think of Asimov's The Gods Themselves

  12. Christopher Devlin Brown (Fri May 20, 10:12:00 PM PDT):

    Michael Huemer provides a similar argument in https://onlinelibrary.wiley.com/doi/epdf/10.1111/nous.12295. I believe he has called it his best published paper.

  13. This seems like a fairly straightforward abuse of Bayes. If you don't know the position of X, but know that it falls randomly in a range of 100 different positions, then yes, you know that it's unlikely that it falls in the first position.
    But if you know X is in the first position, that doesn't tell you anything about the size of the distribution. You can't use the fact that X is in position one to inform your estimate of how large the range of possible positions is. I guess in formal terms, it's because you can't build a prior (your knowledge that X is in 1) into your conclusion. Or perhaps the problem is that the doomsday argument tries a weird combination of Bayesian and frequentist approaches. First it asks us to look at what we know (a Bayesian move); then it asks us to imagine that we already know the distribution of humans throughout history (a frequentist move).
    Either way, it seems pretty suss.

  14. Should a Hippocratic Oath, in an ongoing, broadly construed philosophy of psychology, be utilized constantly...
    ...for goodness sake ya all...

  15. Right professor, I did also allude to an answer for the Fermi paradox. For certain Sci-Fi lovers however, going further might effectively be like explaining why it is that Santa doesn’t exist. But what the hell, we should all be adults here...

    I have two explanations which suggest that we shouldn’t have any evidence of technological species beyond us, and even if countless tend to exist as I suspect. The first is really just a mathematical implication of the doomsday argument. It makes perfect sense that something like the human would tend not to kill itself off without technology, though would with it. Thus perhaps we’ve now begun a 10,000 year run of this before our end. Our planet however is currently 6.4 billion years old. Therefore if some extraterrestrial species were looking for technological life specifically on our planet, so far the math says that they’d have a 1 in 640,000 chance of doing so at the right time! Of course when we expire it’s possible that we won’t kill off certain reasonably intelligent birds, rodents, or something else that will evolve to become technological. And perhaps when their technology leads to their extinction they’ll leave some reasonable successors… and on and on. If 10,000 years of technology generally becomes separated by 10,000,000 years of evolution, then the chance of finding technology here during this cycle would still be pretty bad at 1 in 1,000. And note that in order to detect associated electromagnetic radiation before it becomes noise they’d probably need to be in our solar system. (That is unless we were to aim focused beams to enough good spots in the sky that harbored other technological creatures (under doomsday restraints themselves probably) that had detectors able to discern this from noise.) In any case my point is that it seems like pretty tough odds for others to even find technological life on Earth, let alone us trying to find it on other planets.

    My other Fermi answer was alluded to above, or that we evolved for the conditions on our planet rather than other places. Even if there are certain planets out there that could essentially sustain us as well, they should all be too far away for us to ever make such a voyage. Some acknowledge this but then fall back on human made machines. Clearly we can build them to travel through space for example. So couldn’t we build some of them to reach a reasonable planet under the domain of another star, and then have them go on to build a second great voyage for another such expansion, and so on?

    Even if we were able to get enough machines to a reasonable planet, no I don’t think that they would ever build something like themselves for a next succession, and even if we were able to provide them with consciousness. We are self sustaining (for now anyway) because we sit atop a vast ecosystem that evolved niches that fit in with the whole. Our robots however would somehow need to build what we did without having anything like what we do on Earth. So if neither the human nor its machines will ever become self sustaining elsewhere, the same should apply for the countless other technological species that I presume have existed and will exist.

  16. When you say 'infinite' do you mean Aleph One or Aleph Null? It is more germane to Huemer's argument than yours, but we have to be precise in our use of the word 'infinity'
    I mean regular or vulgar infinity is a blink in the eye of God- no?

  17. Reasoning from SSA (self-sampling assumption) is tricky because it will always be relative to one's reference class.

    So, for example, even if the infinite predecessors argument works, it has no real application to the doomsday argument because the reference classes are different. The doomsday argument just takes all of humanity as its reference class, whereas the infinite predecessors argument uses all actual conscious observers. Even if it's true that we are not in a special place among the class of all actual observers, this wouldn't change the fact that we should expect the particular human species to go extinct soon.

    That said, the arbitrariness that is the human reference class can cut both ways. One could argue that humans will indeed go extinct soon, but that a more post-human advanced species will take their place. This is hardly 'bad' news. On the other hand, we can try to construct a new reference class which might encapsulate any such post-human creatures.

    For example, take the reference class of all intelligent (at least human intelligence) observers that originally descended from earth. This necessarily captures any post-human remnants. If we apply the doomsday argument to this class of observers, then we'll reach the conclusion that our reference class (including the post-humans) will shortly go extinct.

    Of course, this all seems somewhat arbitrary, how does one even go about objectively constructing a reference class? This really just shows the problems with SSA, which are already well acknowledged in the literature. However, if we abandon SSA, we also have to abandon the impetus behind the infinite predecessors argument.

    Not to mention that there are a lot of assumptions going into this argument, like the heat death assumption. If the value of the cosmological constant deviated even slightly from -1, then our universe might end 'relatively' soon (e.g. big rip), and we have no current way of knowing the value of lambda with such precision.

  18. Thanks for the continuing comments, folks!

    Howie: Yes, but that doesn't change the fundamental argument, as long as there are infinitely many regions with laws similar enough to our own -- right?

    Christopher: Thanks for the suggestion! I noticed that paper when it came out. Huemer's argument is related but crucially different in its support for an infinite past. His argument relies not on a self-location argument but rather on an a priori claim about the nature of time and a claim about the unlikelihood of a low-entropy beginning of the universe.

    Chinaphil: I'm not sure exactly what the mistake is here. Would you deny the reasonableness of the following inference? I know I received ticket #1 in some lottery, but I don't know how big the lottery was. If we want to be concrete, maybe I have a 50% credence that it's a 100-ticket lottery and a 50% credence that it's a 1000-ticket lottery. But those particular credences aren't essential, nor need we even think in terms of specific credences. Here's the inference: It's more likely that the lottery was relatively small than it was relatively large.

    Arnold: An interesting question, how you weigh "do no harm" against "follow your argument wherever it leads". I hope that's not really relevant here, though, since I doubt this argument is harmful!

    Phil E: The first part of your reply is quite a reasonable (and as I'm sure you know standard) piece of the Fermi puzzle -- though whether it's a sufficient explanation by itself is contentious. On the second: Shouldn't it depend on how robust we make them and with what prioritization of survival and reproduction vs. other goals?

    Howard: Aleph-null should do the trick, though I'm willing to entertain arguments for higher cardinalities.

    Alex: I pretty much agree with all of that, with a couple of caveats. First, while reference class is somewhat arbitrary, I think it's reasonable to choose fairly natural or a priori attractive ones (in some informal sense of "natural" or "a priori attractive"), including both "humanity" and "humanlike entities". The reasoning of the post supports applying the Doomsday argument to both groups, thus concluding that humanity is Doomed but humanlike entities are not. As for the heat death assumption, yes. All of this is premised on standard, "vanilla" cosmology. As I understand it, heat death and no big rip is the more vanilla view.

  19. Having been reminded today of John Stuart Mill and 1960 high-school readings of On Liberty...

    Is it then, we cannot expand fiction into non fiction, beyond our earth's atmosphere...

    For the Sake of utility today, we have AI satellites around our small planet...

    And posed, as we are here, with lower and higher values needing tending...

    ...thank you Splintered Mind for the good leads and reads

  20. Hey Eric,

    In reply to your post. If I understand you correctly, are you saying that we should discard reasoning based on reference class construction which seems arbitrary? That seems rather arbitrary... given that deciding what counts as a 'natural' construction or not is entirely subjective to our brain's conceptual schema. In any case, would you say that the reference class of (intelligent observers who originated from earth) was non-natural? Because if you accept it, then you have to accept that not only are humans doomed, but any human-like post-humans are also equally doomed.

    As for heat death, I would say that it's really an open question at this point, because we know so little (basically nothing) about dark energy. It's easy to say that the lambda value is -1, because it makes everything symmetrical, and that's the impetus behind the heat death scenario. However, I think most physicists would admit that they are very unsure about this.

    We simply don't have the experimental evidence to confirm this, and more importantly there is no theoretical justification for assuming this (other than the fact that the symmetry makes the math easier). There were plenty of neat looking symmetry theories in the past (like SUSY) which ended up in the dustbin. Extrapolating this to dark energy seems like wishful thinking. And there exist alternative theories, like quintessence, which don't assume a static cosmological constant.

  21. Thanks, folks!

    Alex: Yes, I think "intelligent observers originating from Earth" is also a reasonable natural category, so that if we accept Doomsday-style reasoning it probably dooms that class too. On arbitrariness: We need some sort of line between what's a natural or a priori attractive category, so that we don't see randomly generated license plate 2XNE488 and say, "Wow, how surprising! What are the odds of that?!" in the same way we would if our (supposedly) randomly generated license plate said "HELL000". But it's going to be difficult to formalize that line. In terms of Doomsday, "humans", "intelligent observers originating from Earth", and "humanlike observers" are all reasonable reference categories such that it would be surprising to be among the first 1%, but "humans living in 2020 or later" seems designed ex post facto to make us among the first, so it's not then surprising that we should be among the first. Similarly, it would be surprising to be near the exact center of the universe but not in the same way surprising to be about 93 million miles from a star of such-and-such mass. On heat death: I'm happy enough, if you grant the rest of the argument, to just say that having infinite predecessors would be a surprising consequence of one fairly standard cosmological position that might not on first glance be thought to have that consequence.

  22. Eric: "Would you deny the reasonableness of..ticket #1 in some lottery...more likely that the lottery was relatively small than it was relatively large."
    Yes, I definitely would deny that. I don't know the maths of this at all, but I wonder if your mini-options can help to work it out.
    If there is a 50% chance that the lottery has 100 tickets, and a 50% chance that it has 1000 tickets, then:
    My total chance of receiving ticket 1 (or any other number up to 100) is (50% * 1/100) + (50% * 1/1000) = 0.55%
    And the probability of the lottery being a 100-ticket lottery given that I received ticket 1 is 0.5% / 0.55% = 10/11.
    So worked out like that, yes, having a smaller number does seem to imply a greater probability that it is a small lottery.
    But I don't think that model holds, because one of the assumptions of that model is that I was handed a lottery ticket with a random number on it. It relies on the size of the lottery being decided, and the tickets being printed and randomized, before I ever get my ticket.
    But that's not what the situation is like. In reality, we're in a queue for lottery tickets, and they're not being handed out at random. They're being handed out sequentially.
    Our situation is more like this: There are two lottery windows, and you line up each day at whichever window you please. One window will sell 100 tickets a day, the other 1,000, but you have no way of knowing which is which on any given day. One day you get there early, choose a window, and nab first place in the queue. As a result, we get ticket number 1. But the fact that we got first place in the queue doesn't tell us which window we picked. The number 1 ticket doesn't give us information in this model, unlike the random ticket assignment model above, because the ticket number is dependent on our sequential position, which is external to the question of which window we're at.

  23. Interesting alternative analogy, chinaphil. I agree with your assessment of that case. So the question is which is the better model for thinking about the situation of finding yourself among the first 60 billion humans. To me, it seems like the original analogy is closer. In reality, there's only one lottery, but you don't know how long it is. For simple numbers, consider it a 50% chance of its being a lottery of 200 billion, and 50% chance of its being a lottery of 200 trillion. That's analogous to my two-lottery case.

    In your setup, maybe you know that you woke up early to be sure to be early in line; but of course we don't know that. So let's tweak your setup so that you don't know when the line started or when it stopped; you know only that it's either a line that runs long enough to have 10 people or long enough to have a billion, starting at some unknown time and ending at some unknown time. (I'm choosing extreme numbers to make the intuitions clearer.) If you wander over and find yourself 6th in line, you could think about it in one of two ways. If, antecedently, you thought it equally probable that you're in any of the one billion and ten available *slots*, you should find yourself surprised to discover either that you're by unlikely chance in the short line or by unlikely chance near the front of the billion-person line. But if, antecedently, you thought it equally probable that you're in either of the two *lines*, you should conclude it's very likely you're in the short line.

    So then the question is whether we should be indifferent between slots or lines. I think lines. If we knew antecedently that there would be 100 trillion humans, 100 billion on planet 1 and the rest on planet 2, then we should be surprised either to be in the middle of the run on the less populated planet or near the beginning of the run on the more populated planet. That's like the slot-indifference case. But we don't know that antecedently. Antecedently, before knowing our position in the series, we should think it reasonably likely (approximately indifferent) *either* that humanity is a species that dies out soon or one that lives a long time. That's like the line-indifference case. Yes?
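
    A quick numerical gloss on that contrast, offered only as a sketch (using the 10-person vs. billion-person lines from above, with a 50-50 prior for the line-indifference reading):

        # You find yourself 6th in a line that is either short (10 people) or long (1 billion).
        short_len, long_len = 10, 1_000_000_000
        like_short = 1 / short_len   # chance of occupying slot 6, given the short line
        like_long = 1 / long_len     # chance of occupying slot 6, given the long line

        # Line-indifference: a 50-50 prior over the two lines, updated on "I'm 6th in line".
        prior = 0.5
        post_line = prior * like_short / (prior * like_short + prior * like_long)
        print(post_line)             # ~0.99999999 -- almost certainly the short line

        # Slot-indifference: an equal prior over all 1,000,000,010 slots, same update.
        total = short_len + long_len
        prior_short, prior_long = short_len / total, long_len / total
        post_slot = prior_short * like_short / (prior_short * like_short + prior_long * like_long)
        print(post_slot)             # ~0.5 -- being 6th doesn't favor either line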

  24. What if the universe were infinitely old already and stretched out infinitely to the future?
    The numbers would change from 100 trillion humans to an infinity of humans
    How does one count probability with transfinite cardinals?
    Huemer's blog post raises that prospect.
    I think since there would be a finite number of intelligent beings alive at any time the infinity would be just plain infinity
    But still

  25. @Eric
    I *think* I agree with that. The key part is in this sentence:

    "In your setup, maybe you know that you woke up early to be sure to be early in line; but of course we don't know that."

    What I'm thinking is that "we are early in the line" is exactly what we do know. There are two separate senses of "early" to tease apart here: early as in near the beginning; and early as in a long way from the end. We don't know where the end is, so we can't know if we're early in that sense. But we do know that we're near the beginning, that is, we know ordinally where we are relative to the beginning. That's the premise of the whole question! We know that we're in the first 60 billion. And seeing as the lottery numbers we're being handed are nothing other than our ordinal position in the line, once we've accepted that we do know our position, our lottery number cannot give us any more information.

  26. Of course once we know our position, our lottery number doesn't give us more information. The question is about what we know in virtue of the first part, knowing our position. We know we are in position number approximately sixty billion; we don't know whether position sixty billion is in the middle, very near the beginning, or very near the end. Despite not having that knowledge, we can make an educated guess that it's more likely to be near the middle than to be very far out toward one extreme (unless we have further information, such as that a huge asteroid is about to strike).

    One way of imagining this is imagining that you have no information about either the length of the line or your position and then updating with knowledge of your position. If you update with knowledge that your position is 62,223,457,108, then you should think it unlikely that the length of the line is exactly 62,223,457,108. That would be so surprising, to have the very last position in such a long line! Would you agree? Doomsday reasoning is just an extension of that way of thinking.

    Another way of imagining this is via the Copernican Principle in cosmology: We should think we're in an average, mediocre place in the cosmos, rather than a super special place like the exact center -- unless there's some positive evidence that we're in a super special place. Being very near the beginning of a very long run would be un-Copernican in this sense -- too special a place.

  27. The word "doomsday" would seem to be built upon a widely held assumption that life is better than death. Thus, we typically see the death of an individual, society or species as being a negative, a form of doom.

    The highest service philosophers may offer is to examine and challenge unexamined assumptions at the heart of any group consensus. This seems an important contribution because if any perspective, activity or decision is built upon the foundation of a false assumption, problems are likely if not certain.

    As one quick example, in the 19th century the group consensus almost universally assumed that people of color were of less value than whites, and this false assumption led to unspeakable suffering for millions of people of color, plus a half million whites in the Civil War. False assumption = vast suffering.

    Today we still typically assume, and usually without much if any questioning, that life is better than death. It seems interesting to note that there is no proof of this at all. Many theories, some deeply held, but no proof, or anything close.

    One way to flip the doomsday concept on its head would be to question why we think we know that what we call doomsday is actually bad.

    What if the assumption that life is better than death is wrong, and the widespread promotion of this assumption by society's leading authority figures has also led to vast suffering?

  28. Well said Phil! As it happens I don’t consider “life”, or “conscious life”, or even “existence”, to be inherently good. So yes I do roll my eyes a bit when I hear people, and even prominent philosophers, presume such truth. People around here in general probably know my answer to this question, though before I provide that answer I wonder if you have one that you’d like to provide? What do you consider supremely valuable?

  29. Hi PE,

    Well, if we consider good and bad to be human inventions, then nothing would be inherently either.

    What do I consider supremely valuable is a good question, and a big one. Hmm...

    I write a lot about doomsday type topics, and when I do it seems I'm operating from the "life is good, death is bad" assumption, and am thus a card carrying member of the group consensus.

    When I'm not writing I'm in the North Florida woods where I see the life vs. death question far more holistically. It's from that perspective that I'm questioning our doomsday assumptions.

    Perhaps what I personally see as being supremely valuable is my wife, because if it weren't for the influence of her extremely grounded and practical nature, I just might overthink my way into Nurse Ratched's care, if you catch that reference. :-)

    Being new here I don't yet know your own answer, so feel free to share it.

  30. @Phil:
    It's an interesting question, I don't necessarily have strong arguments in favour of the "life is good" camp (although I do have strong intuitions in support). What I will say is that it is hard to see how something can be good without that something existing. By definition, a thing needs to exist in order to instantiate the property of "good", whatever that good thing happens to be. That's not quite the same as the argument that life itself is inherently valuable and/or good, but I feel it's a short step from the premise that value requires existence to the conclusion that observers are a necessary precondition for value. It's not much of a universe if it's stuffed full of 'good' things but completely bereft of observers.

    The alternative is to hold that value, whether aesthetic, moral, or some other kind, exists as part of an abstract platonic realm, or as some kind of intrinsic property of the natural world. I find neither option to be particularly appealing. If valuable things exist, then they must surely require a conscious entity to appreciate that value. This is enough in my view to show that life (or at least consciousness) is a necessary precondition for the existence of good and/or worthwhile value. Not of course a sufficient condition, as we can imagine all sorts of lives which are not worth living, but necessary, nonetheless.

    Of course, none of this tackles the individual question of whether a particular life is worth living, or of whether it is a bad thing for the human species to go extinct.

  31. Actually Phil I meant supreme value in a supremely fundamental way. If you factor some of my recent comments here together, I mean an axiological premise from which to potentially found the work of psychologists, psychiatrists, sociologists, and soft scientists in general. Consider the following scenario. (I’d be interested in your thoughts on this as well Alex.)

    Events should have happened causally before life existed, though without any goodness or badness for what existed. Neither life nor even brain-based life should have changed this. At some point the algorithms associated with brain-based life should have reached critical constraints regarding what they could effectively do, however. Then imagine an irrelevant subjective component serendipitously emerging from brain function as well (and probably in the form of neuron-based electromagnetic fields). For this sort of entity alone existence could be good/bad, though it should have simply been along for the ride, or at least initially. Given that algorithms alone couldn't provide life with various important abilities however, imagine that the otherwise irrelevant experiencer began to be given opportunities to decide certain things, and over the course of evolution did so somewhat better than straight algorithm function alone. Here successive iterations of the experiencer would tend to evolve to be given greater and greater resources for its inputs (like vision), processing (thought), and output (muscle operation). And note that feeling good/bad to such an entity would theoretically constitute all that's good/bad to anything, anywhere. This is to say, I consider there to be a special kind of brain physics which evolution has harnessed to create purpose-based entities like you and me, with that purpose or fuel being to feel as good as possible for as long as possible.

    There are several more components to this scenario that I could get into if interested, but does this scenario seem plausible so far?

  32. @Alex, thanks for your engagement, much appreciated.

    As a start, I think my response would be that, given we have no idea what death actually is, we don't seem to be in a position to make claims about what does or doesn't exist after death, whether an observer is still present etc.

    I would agree that, as best I can tell, the concepts of good and bad do seem to be human inventions, which if true would depend therefore upon human existence.

    It's more difficult to agree that physical death represents the end of human existence, given what seems a total lack of information regarding what may or may not happen after our current state of being ends.

    Whatever the truth of that matter may be, there is a separate issue of what story we use in the attempt to explain the unknown. I'm more interested in this, as this is an issue we can do something about.

    I was listening to a story on NPR awhile back where they interviewed some very well intentioned folks working to help people avoid suicide. I have no complaint at all with such a project, but did wonder about the following...

    Consider the message being (unintentionally) sent in efforts to prevent suicide. We seem to be saying to the suffering that as bad as their life is now, the alternative is even worse.

    That may indeed be true, I obviously have no idea. But it's a pretty dark message, and it's not clear to me how it succeeds in cheering people up.