Tuesday, May 17, 2022

Our Infinite Predecessors: Flipping the Doomsday Argument on Its Head

The Doomsday Argument purports to show, probabilistically, that humanity will not endure for much longer: Likely, at least 5% of the humans who will ever live have already lived. If 60 billion have lived so far, then probably no more than 1.2 trillion humans will live, ever. (This gives us a maximum of about eight more millennia at the current birth rate of 140 million per year.) According to this argument, the odds that humanity colonizes the galaxy with many trillions of inhabitants are vanishingly small.
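The figures in this paragraph can be checked with a bit of arithmetic. Here's a minimal sketch using the post's own numbers (60 billion past humans, a 5% floor, 140 million births per year):

```python
humans_so_far = 60e9      # humans who have lived so far (post's figure)
min_fraction = 0.05       # "at least 5% ... have already lived"
births_per_year = 140e6   # current annual births (post's figure)

max_total = humans_so_far / min_fraction    # cap on humans ever: 1.2 trillion
remaining = max_total - humans_so_far       # at most ~1.14 trillion still to come
years_left = remaining / births_per_year    # ~8,143 years: "about eight more millennia"

print(f"{max_total:.2e} humans at most, {years_left:,.0f} years remaining")
```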

Why think we are doomed? The core idea, as developed by Brandon Carter (see p. 143), John Leslie, Richard Gott, and Nick Bostrom, is this. It would be statistically surprising if we -- you and I and our currently living friends and relatives -- were very nearly the first human beings ever to live. Therefore, it's unlikely that we are in fact very nearly the first human beings ever to live. But if humanity continues on for many thousands of years, with many trillions of future humans, then we would in fact be very nearly the first human beings ever to live. Thus, we can infer, with high probability, that humanity is doomed before too much longer.

Consider two hypotheses: On one hypothesis, call it Endurance, humanity survives for many millions more years, and many, many trillions of people live and die. On the other, call it Doom, humanity survives for only another few centuries or millennia. On Endurance, we find ourselves in a surprising and unusual position in the cosmos -- very near the beginning of a very long run! This, arguably, would be as strange and un-Copernican as finding ourselves in some highly unusual spatial position, such as very near the center of the cosmos. The longer the run, the more surprisingly unusual our position. In contrast, Doom suggests that we are in a rather ordinary temporal position, roughly the middle of the pack. Thus, the reasoning goes, unless there's some independent reason to think Endurance to be much more plausible than Doom, we ought to conclude that Doom is likely.

Let me clarify by showing how Doomsday-style reasoning would work in a few more intuitive cases. But first, here's an inverted mushroom cloud to symbolize that I'll soon be flipping the argument over.

Imagine two lotteries. One has ten numbers, the other a hundred numbers. You don't know which one you've entered into, but you go ahead and draw a number. You discover that you have ticket #6. Upon finding this out, you ought to guess that you probably drew from the ten-number lottery rather than the hundred-number lottery, since #6 would be a surprisingly low draw in a hundred-number lottery. Not impossible, of course, just relatively unlikely. If your prior credence was split 50-50 between the two lotteries, you can use Bayesian inference to derive a posterior credence of about 91% that you are in the ten-number lottery, given that you see a number among the top ten. (Of course, if you have other evidence that makes it very likely that you were in the hundred-number lottery, then you can reasonably retain that belief even after drawing a relatively low number.)
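For concreteness, the 91% figure follows from a two-line Bayes computation. A sketch, with the 50-50 prior and the per-ticket likelihoods from the example above:

```python
# Posterior that the draw came from the ten-number lottery, given ticket #6.
prior_small, prior_large = 0.5, 0.5       # 50-50 prior over the two lotteries
like_small, like_large = 1 / 10, 1 / 100  # chance of drawing ticket #6 in each

posterior_small = (prior_small * like_small) / (
    prior_small * like_small + prior_large * like_large
)
print(round(posterior_small, 3))  # 0.909, i.e. roughly 91%
```

The posterior is exactly 10/11; the likelihood ratio is the same whether we condition on the specific ticket #6 or merely on drawing a number among the top ten.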

Alternatively, imagine that you're one of a hundred people who have been blindfolded and imprisoned. You know that 90% of the prison cells are on the west side of town and 10% are on the east side. Your blindfold is removed, but you don't see anything that reveals which side of town you're on. Nonetheless, you ought to think it's likely you're on the west side of town.

Or imagine that you know that 10,000 people, including you, have been assigned in some order to view a newly discovered painting by Picasso, but you don't know in what order people actually viewed the painting. Exiting the museum, you should think it unlikely that you were either among the very first or very last.

The reasoning of the Doomsday argument is intended to be analogous: If you don't know where you're temporally located in the run of humans, you ought to assume it's unlikely that you're in the unusual position of being among the first 5% (or 1% or, amazingly, .001%).

Now various disputes and seeming paradoxes arise with respect to such probabilistic approaches to "self-location" (e.g., Sleeping Beauty), and a variety of objections have been raised to Doomsday Argument reasoning in particular (Leslie's book has a good discussion; see also here and here). But let's bracket those objections. Grant that the reasoning is sensible. Today I want to add a pair of observations that have the potential to flip the Doomsday Argument on its head, even if we accept the general style of reasoning.

Observation 1: The argument assumes that only about 60 billion humans have existed so far, rather than vastly many more. Of course this seems plausible, but as we will see there might be reason to reject it.

Observation 2: Standard physical theory appears to suggest that the universe will endure infinitely long, giving rise to infinitely many future people like us.

There isn't room here to get into depth on Observation 2. I am collaborating with a physicist on this issue now; draft hopefully available soon. But the main idea is this. There's no particular reason to think that the universe has a future temporal edge, i.e., that it will entirely cease. Instead, standard physical theory suggests that it will enter permanent "heat death", a state of thin, high-entropy chaos. However, there will from time to time be low-probability events in which people, or even much larger systems, spontaneously congeal from the chaos, by freak quantum or thermodynamic chance. There's no known cap on the size of such spontaneous fluctuations, which could even include whole galaxies full of evolving species, eventually containing all non-zero-probability life forms. (See the literature on Boltzmann brains.) Perhaps there will even be new cosmic inflations, for example, caused by black holes or spontaneous fluctuations. Vanilla cosmology thus appears to imply an infinite future containing infinitely many people like us, to any arbitrarily specified degree of similarity, perhaps in very large chance fluctuations or perhaps in newly nucleated "pocket universes".

Now if we accept this, then by reasoning similar to that of the Doomsday Argument, we ought to be very surprised to find ourselves among the first 60 billion people like us, or living in the first 14 billion years of an infinitely existing cosmos. We'd be among the first 60 billion out of infinity. A tiny chance indeed! On Doomsday-style reasoning, it would be much more reasonable, if we think the future is infinite, to think that the past must be infinite too. Something existed before the Big Bang, and that something contained observers like us. That would make us appropriately mediocre. Then, in accordance with the Copernican Principle, we'd be in an ordinary location in the cosmos, rather than the very special location of being within 14 billion years of the beginning of an infinite duration.

The situation can be expressed as follows. Doomsday reasoning implies the following conditional statement:

Conditional Doom: If only 60 billion humans, or alternatively human-like creatures, have existed so far, then it's unlikely that many trillions more will exist in the future.

If we take as a given that only 60 billion have existed so far, we can apply modus ponens (concluding Q from P and if P then Q) and conclude Doom.

But alternatively, if we take as a given that (at least) many trillions will exist in the future, we can apply modus tollens (concluding not-P from not-Q and if P then Q) and conclude that many more than 60 billion have already existed.

The modus ponens version is perhaps more plausible if we think in terms of our species, considered as a local group of genetically related animals on Earth. But if we think in terms of humanlike creatures rather than specifically our local species, and if we accept an infinite future likely containing many humanlike creatures, then the modus tollens version becomes more plausible, and we can conclude a long past as well as a long future, full of humanlike creatures extending infinitely forward and back.

Call this the Infinite Predecessors argument. From infinite successors and Doomsday-style self-location reasoning, we can conclude infinite predecessors.



Almost Everything You Do Causes Almost Everything (Mar 18, 2021)

My Boltzmann Continuants (Jun 6, 2013).

How Everything You Do Might Have Huge Cosmic Significance (Nov 29, 2016).

And Part 4 of A Theory of Jerks and Other Philosophical Misadventures.

[image adapted from here]


Howard said...

Isn't this a motif of science fiction?
In the eighties in an airport I read a story with the theme of earlier doomed civilizations.
But you assume intelligent life can pop up in the ether like a quark

Garret Merriam said...

"This gives us a maximum of about eight more millennia at the current birth rate of 140 million per year"

I think I see a problem with their argument right here.

Arnold said...

In philosophy of psychology is infinity in descendance ascendance transcendence...
...free of times arrow for Life's predecessors...

Kris Rhodes said...

Slight aside question:

The article on sleeping beauty leaves the final state of the discussion at the point where Bostrom argues from the million-day version that she should lean towards thinking the coin flip came up tails.

That feels mostly convincing to me as I reason that if she bets a dollar it came up tails on 1:1.01 odds, she's going to come out winning about five thousand dollars on average. If she bets a dollar it came up heads on the same odds, she's going to lose a million dollars minus some small number of cents I haven't thought through.

(I think it's just minus one cent.)

But is that the current state of the game? Has no one responded to Bostrom's point about the million day version?

Paul D. Van Pelt said...

The Farnham mouse experiment returns to mind. My brother likes this one. The experimenters grossly underestimated the needs of the colony. Social structure broke down more rapidly and in ways either unanticipated or underanticipated. Diamond's book, Collapse, was instructive for what he found or theorized, and was perhaps as significant for conjectures he made, in lieu of convincing evidence. The parables of probability, Murphy's Laws, were headed up by the capstone, which proclaimed: anything that can go wrong will. On my thirty-seventh birthday, a number of years ago, a trip into space went horribly wrong. Richard Feynman showed conference-goers why this happened. So,my contention is the demise of humankind is likely to be of our own making, among other things---a so-called 'perfect storm'...in other colloquial terms, a totality of the circumstances: Murphy's admonition, ANYTHING that can go wrong. Thanks!

Eric Schwitzgebel said...

Thanks, all!

Garret: I'm not sure I understand your objection. Current demographic projections are that population growth will stabilize at approximately that rate over the next century. Of course, it's anyone's guess about the farther future, but the argument doesn't turn on the particulars of population growth rates.

Kris: On Sleeping Beauty, I too am a 1/3-er. But opinion remains divided.

Paul: You might like Toby Ord's recent book The Precipice, which carefully explores the various ways in which we might exterminate ourselves in the next few centuries, considering what steps we might take to reduce the likelihood of this.

Unknown said...

I've always felt like doomsday arguments and the fine-tuning argument and things like that are misusing statistics. Everything that has ever happened had a vanishingly small chance of happening if looked at in a certain way.

Philosopher Eric said...

I do like the logic associated with “the doomsday” argument, even if I don’t like the name much. It doesn’t say that it’s impossible to randomly draw a 1 in a set of whole numbers between 0 and 1,000,000,000. It just says that it’s far more likely for such a number to result in far lower sized drawings. So if you draw a 1 and have no information about the size of the drawing, then suspecting larger and larger numbered drawings should be less and less rational. It’s a perfectly logical position that ought to give some of the silliest future humanity predictors pause, but probably does not.

I consider there to be plenty of reason to believe that an animal like us would not tend to kill itself off before it became technological, though would increasingly tend to do so afterwards. Furthermore it could be that the age of sci-fi has obscured to many that the only spaceship which we should ever have that’s reasonably sustainable, is the one that created us.

If we do kill ourselves off in millennia sooner rather than later, and many other species here follow our pattern of technology to extinction (possibly even with scientists able to detect various terrestrial technology ages), wouldn’t it be strange that we were the first? Yes, but it is what it is. And we might even be the last to do so here to thus invalidate that strangeness.

Regarding this inversion of the doomsday (or “rational”?) argument, yes I’ll go along with it. Of course the big caveat is that we’re discussing creatures that would merely function somewhat like us. But given infinite time I think we can indeed conclude that there will be infinite such predecessors.

Howard said...

Will these hypothetical people in other solar systems and galaxies really be like us?
Evolution of human intelligence depends on the local circumstances of the savannah in which we sprouted and spurted into the world- we weren't built, like computers- would other 'earths' really be like us creating other 'humans' like us?
Until you prove that I'd assume the opposite

Eric Schwitzgebel said...

Thanks for the continuing comments, folks!

Unknown May 18: You aren't alone in this suspicion. I'm inclined to think that doomsday-style reasoning is sound in general but it remains contentious among experts.

Phil E: Of course this also connects to the "Fermi paradox". Why don't we see others? Could we be first? That seems unlikely unless technological intelligence is extremely rare. But maybe technological intelligence *is* extremely rare. Or maybe we're not the first and the others on other planets quickly died off, or are hiding, or....

Howard: Pretty much all you need is infinite time, a finite chance, and not to get stuck in a looping subset of the possibility space. See "Poincare recurrence" or my old posts on Boltzmann brains and/or eternal recurrence.

Howard said...

Yes but

The laws of physics might differ in the oases rising from this quantum soup, and if various constants and laws differ, that might, up the great stream of the great chain of being, change the type of living organisms that result
Think of Asimov's The Gods Themselves

Christopher Devlin Brown said...

Michael Huemer provides a similar argument in https://onlinelibrary.wiley.com/doi/epdf/10.1111/nous.12295. I believe he has called it his best published paper.

chinaphil said...

This seems like a fairly straightforward abuse of Bayes. If you don't know the position of X, but know that it falls randomly in a range of 100 different positions, then yes, you know that it's unlikely that it falls in the first position.
But if you know X is in the first position, that doesn't tell you anything about the size of the distribution. You can't use the fact that X is in position one to inform your estimate of how large the range of possible positions is. I guess in formal terms, it's because you can't build a prior (your knowledge that X is in 1) into your conclusion. Or perhaps the problem is that the doomsday argument tries a weird combination of Bayesian and frequentist approaches. First it asks us to look at what we know (a Bayesian move); then it asks us to imagine that we already know the distribution of humans throughout history (a frequentist move).
Either way, it seems pretty suss.

Arnold said...

Should a Hippocratic Oath, in a on going broadly construed philosophy of psychology, be utilized constantly...
...for goodness sake ya all...

Philosopher Eric said...

Right professor, I did also allude to an answer for the Fermi paradox. For certain Sci-Fi lovers however, going further might effectively be like explaining why it is that Santa doesn’t exist. But what the hell, we should all be adults here...

I have two explanations which suggest that we shouldn’t have any evidence of technological species beyond us, and even if countless tend to exist as I suspect. The first is really just a mathematical implication of the doomsday argument. It makes perfect sense that something like the human would tend not to kill itself off without technology, though would with it. Thus perhaps we’ve now begun a 10,000 year run of this before our end. Our planet however is currently 6.4 billion years old. Therefore if some extraterrestrial species were looking for technological life specifically on our planet, so far the math says that they’d have a 1 in 640,000 chance of doing so at the right time! Of course when we expire it’s possible that we won’t kill off certain reasonably intelligent birds, rodents, or something else that will evolve to become technological. And perhaps when their technology leads to their extinction they’ll leave some reasonable successors… and on and on. If 10,000 years of technology generally becomes separated by 10,000,000 years of evolution, then the chance of finding technology here during this cycle would still be pretty bad at 1 in 1,000. And note that in order to detect associated electromagnetic radiation before it becomes noise they’d probably need to be in our solar system. (That is unless we were to aim focused beams to enough good spots in the sky that harbored other technological creatures (under doomsday restraints themselves probably) that had detectors able to discern this from noise.) In any case my point is that it seems like pretty tough odds for others to even find technological life on Earth, let alone us trying to find it on other planets.
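The detection-odds arithmetic above can be checked directly. A sketch using the commenter's own figures (a 10,000-year technological window, a 6.4-billion-year planetary age -- the comment's number, though Earth's age is standardly put at about 4.5 billion years -- and 10,000,000-year gaps between technological runs):

```python
tech_window = 10_000          # years a technological species persists (comment's figure)
planet_age = 6_400_000_000    # planetary age in years (comment's figure)

# One technological run in the planet's whole history so far:
odds_single = planet_age / tech_window
print(f"1 in {odds_single:,.0f}")  # 1 in 640,000, as in the comment

# Technological runs repeating every 10,000,000 years:
cycle_len = 10_000_000 + tech_window
odds_repeating = cycle_len / tech_window
print(f"about 1 in {odds_repeating:,.0f}")  # 1 in 1,001, the comment's "1 in 1,000"
```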

My other Fermi answer was alluded to above, or that we evolved for the conditions on our planet rather than other places. Even if there are certain planets out there that could essentially sustain us as well, they should all be too far away for us to ever make such a voyage. Some acknowledge this but then fall back on human made machines. Clearly we can build them to travel through space for example. So couldn’t we build some of them to reach a reasonable planet under the domain of another star, and then have them go on to build a second great voyage for another such expansion, and so on?

Even if we were able to get enough machines to a reasonable planet, no I don’t think that they would ever build something like themselves for a next succession, and even if we were able to provide them with consciousness. We are self sustaining (for now anyway) because we sit atop a vast ecosystem that evolved niches that fit in with the whole. Our robots however would somehow need to build what we did without having anything like what we do on Earth. So if neither the human nor its machines will ever become self sustaining elsewhere, the same should apply for the countless other technological species that I presume have existed and will exist.

Howard said...

When you say 'infinite' do you mean Aleph One or Aleph Null? It is more germane to Huemer's argument than yours, but we have to be precise in our use of the word 'infinity'
I mean regular or vulgar infinity is a blink in the eye of God- no?

Alex Popescu said...

Reasoning from SSA (self-sampling assumption) is tricky because it will always be relative to one's reference class.

So, for example, even if the infinite predecessors argument works, it has no real application to the doomsday argument because the reference classes are different. The doomsday argument just takes all of humanity as its reference class, whereas the infinite predecessors argument uses all actual conscious observers. Even if it's true that we are not in a special place among the class of all actual observers, this wouldn't change the fact that we should expect the particular human species to go extinct soon.

That said, the arbitrariness that is the human reference class can cut both ways. One could argue that humans will indeed go extinct soon, but that a more post-human advanced species will take their place. This is hardly 'bad' news. On the other hand, we can try to construct a new reference class which might encapsulate any such post-human creatures.

For example, take the reference class of all intelligent (at least human intelligence) observers that originally descended from earth. This necessarily captures any post-human remnants. If we apply the doomsday argument to this class of observers, then we'll reach the conclusion that our reference class (including the post-humans) will shortly go extinct.

Of course, this all seems somewhat arbitrary, how does one even go about objectively constructing a reference class? This really just shows the problems with SSA, which are already well acknowledged in the literature. However, if we abandon SSA, we also have to abandon the impetus behind the infinite predecessors argument.

Not to mention that there's a lot of assumptions going into this argument, like the heat death assumption. If the value of the cosmological constant was even slightly deviated from -1, then our universe might end 'relatively' soon (e.g. big rip), and we have no current way of knowing the value of lambda with such precision.

Eric Schwitzgebel said...

Thanks for the continuing comments, folks!

Howie: Yes, but that doesn't change the fundamental argument, as long as there are infinitely many regions with laws similar enough to our own -- right?

Christopher: Thanks for the suggestion! I noticed that paper when it came out. Huemer's argument is related but crucially different in its support for an infinite past. His argument relies not on a self-location argument but rather on an a priori claim about the nature of time and a claim about the unlikelihood of a low-entropy beginning of the universe.

Chinaphil: I'm not sure exactly what the mistake is here. Would you deny the reasonableness of the following inference? I know I received ticket #1 in some lottery, but I don't know how big the lottery was. If we want to be concrete, maybe I have a 50% credence that it's a 100-ticket lottery and a 50% credence that it's a 1000-ticket lottery. But those particular credences aren't essential, nor need we even think in terms of specific credences. Here's the inference: It's more likely that the lottery was relatively small than it was relatively large.

Arnold: An interesting question, how you weigh "do no harm" against "follow your argument wherever it leads". I hope that's not really relevant here, though, since I doubt this argument is harmful!

Phil E: The first part of your reply is quite a reasonable (and as I'm sure you know standard) piece of the Fermi puzzle -- though whether it's a sufficient explanation by itself is contentious. On the second: Shouldn't it depend on how robust we make them and with what prioritization of survival and reproduction vs. other goals?

Howard: Aleph-null should do the trick, though I'm willing to entertain arguments for higher cardinalities.

Alex: I pretty much agree with all of that, with a couple of caveats. First, while reference class is somewhat arbitrary, I think it's reasonable to choose fairly natural or a priori attractive ones (in some informal sense of "natural" or "a priori attractive"), including both "humanity" and "humanlike entities". The reasoning of the post supports applying the Doomsday argument to both groups, thus concluding that humanity is Doomed but humanlike entities are not. As for the heat death assumption, yes. All of this is premised on standard, "vanilla" cosmology. As I understand it, heat death and no big rip is the more vanilla view.

Arnold said...

Having been reminded today of John Mills and 1960 high-school readings On Liberty...

Is it then, we cannot expand fiction into non fiction, beyond our earth's atmosphere...

For the Sake of utility today, we have AI satellites around our small planet...

And posed, as we are here, with lower and higher values needing tending...

...thank you Splintered Mind for the good leads and reads

Alex Popescu said...

Hey Eric,

In reply to your post. If I understand you correctly, are you saying that we should discard reasoning based on reference class construction which seems arbitrary? That seems rather arbitrary... given that deciding what counts as a 'natural' construction or not is entirely subjective to our brain's conceptual schema. In any case, would you say that the reference class of (intelligent observers who originated from earth) was non-natural? Because if you accept it, then you have to accept that not only are humans doomed, but any human-like post-humans are also equally doomed.

As for heat death. I would say that it's really an open question at this point, because we know so little (basically nothing) about dark energy. It's easy to say that the lambda value is -1, because it makes everything symmetrical, and that's the impetus behind the heat death scenario. However, I think most physicists would admit that they are very unsure about this.

We simply don't have the experimental evidence to confirm this, and more importantly there is no theoretical justification for assuming this (other than the fact that the symmetry makes the math easier). There were plenty of neat looking symmetry theories in the past (like SUSY) which ended up in the dustbin. Extrapolating this to dark energy seems like wishful thinking. And there exist alternative theories, like quintessence, which don't assume a static cosmological constant.

Eric Schwitzgebel said...

Thanks, folks!

Alex: Yes, I think "intelligent observers originating from Earth" is also a reasonable natural category, so that if we accept Doomsday-style reasoning it probably dooms that class too. On arbitrariness: We need some sort of line between what's a natural or a priori attractive category, so that we don't see randomly generated license plate 2XNE488 and say, "Wow, how surprising! What are the odds of that?!" in the same way we would if our (supposedly) randomly generated license plate said "HELL000". But it's going to be difficult to formalize that line. In terms of Doomsday, "humans", "intelligent observers originating from Earth", and "humanlike observers" are all reasonable reference categories such that it would be surprising to be among the first 1%, but "humans living in 2020 or later" seems designed ex post facto to make us among the first, so it's not then surprising that we should be among the first. Similarly, it would be surprising to be near the exact center of the universe but not in the same way surprising to be about 93 million miles from a star of such-and-such mass. On heat death: I'm happy enough, if you grant the rest of the argument, to just say that having infinite predecessors would be a surprising consequence of one fairly standard cosmological position that might not on first glance be thought to have that consequence.

chinaphil said...

Eric: "Would you deny the reasonableness of..ticket #1 in some lottery...more likely that the lottery was relatively small than it was relatively large."
Yes, I definitely would deny that. I don't know the maths of this at all, but I wonder if your mini-options can help to work it out.
If there is a 50% chance that the lottery has 100 tickets, and a 50% chance that it has 1000 tickets, then:
My total chance of receiving ticket 1 (or any other number up to 100) is (50% * 1/100) + (50% * 1/1000) = 0.55%
And the probability of the lottery being a 100 ticket lottery given than I received ticket 1 is 0.5% / 0.55% = 10/11.
So worked out like that, yes, having a smaller number does seem to imply a greater probability that it is a small lottery.
But I don't think that model holds, because one of the assumptions of that model is that I was handed a lottery ticket with a random number on it. It relies on the size of the lottery being decided, and the tickets being printed and randomized, before I ever get my ticket.
But that's not what the situation is like. In reality, we're in a queue for lottery tickets, and they're not being handed out at random. They're being handed out sequentially.
Our situation is more like this: There are two lottery windows, and you line up each day at whichever window you please. One window will sell 100 tickets a day, the other 1,000, but you have no way of knowing which is which on any given day. One day you get there early, choose a window, and nab first place in the queue. As a result, you get ticket number 1. But the fact that you got first place in the queue doesn't tell you which window you picked. The number 1 ticket doesn't give you information in this model, unlike the random ticket assignment model above, because the ticket number is dependent on your sequential position, which is external to the question of which window you're at.

Eric Schwitzgebel said...

Interesting alternative analogy, chinaphil. I agree with your assessment of that case. So the question is which is the better model for thinking about the situation of finding yourself among the first 60 billion humans. To me, it seems like the original analogy is closer. In reality, there's only one lottery, but you don't know how long it is. For simple numbers, consider it a 50% chance of its being a lottery of 200 billion, and 50% chance of its being a lottery of 200 trillion. That's analogous to my two-lottery case.

In your setup, maybe you know that you woke up early to be sure to be early in line; but of course we don't know that. So let's tweak your setup so that you don't know when the line started or when it stopped; you know only that it's either a line that runs long enough to have 10 people or long enough to have a billion, starting at some unknown time and ending at some unknown time. (I'm choosing extreme numbers to make the intuitions clearer.) If you wander over and find yourself 6th in line, you could think about it in one of two ways. If, antecedently, you thought it equally probable that you're in any of the one billion and ten available *slots*, you should find yourself surprised to discover either that you're by unlikely chance in the short line or by unlikely chance near the front of the billion-person line. But if, antecedently, you thought it equally probable that you're in either of the two *lines*, you should conclude it's very likely you're in the short line.

So then the question is whether we should be indifferent between slots or lines. I think lines. If we knew antecedently that there would be 100 trillion humans, 100 billion on planet 1 and the rest on planet 2, then we should be surprised either to be in the middle of the run on the less populated planet or near the beginning of the run on the more populated planet. That's like the slot-indifference case. But we don't know that antecedently. Antecedently, before knowing our position in the series, we should think it reasonably likely (approximately indifferent) *either* that humanity is a species that dies out soon or one that lives a long time. That's like the line-indifference case. Yes?
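The two indifference assumptions come apart numerically. Here is a sketch of one way to formalize them, using the extreme numbers from the tweaked line case (a 10-person line vs. a billion-person line, finding yourself 6th):

```python
# Which line am I in, given that I'm 6th?
short_len, long_len = 10, 10**9

# Line-indifference: 50-50 prior over the two lines, then uniform over
# positions within whichever line you're in.
p_short_line = (0.5 / short_len) / (0.5 / short_len + 0.5 / long_len)

# Slot-indifference: uniform prior over all short_len + long_len slots.
# Each line contains exactly one "position 6" slot, so conditioning on
# being 6th leaves the two lines equally likely.
total = short_len + long_len
p_short_slot = (1 / total) / (1 / total + 1 / total)

print(f"line-indifference: {p_short_line:.8f}")  # 0.99999999: short line nearly certain
print(f"slot-indifference: {p_short_slot}")      # 0.5: position 6 carries no information
```

Under line-indifference, discovering you're 6th makes the short line nearly certain; under slot-indifference it tells you nothing about which line you're in, which is the crux of the disagreement above.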

Howard said...

What if the universe were infinitely old already and stretched out infinitely to the future?
The numbers would change from 100 trillion humans to an infinity of humans
How does one count probability with transfinite cardinals?
Huemer's blog post raises that prospect.
I think since there would be a finite number of intelligent beings alive at any time the infinity would be just plain infinity
But still

chinaphil said...

I *think* I agree with that. The key part is in this sentence:

"In your setup, maybe you know that you woke up early to be sure to be early in line; but of course we don't know that."

What I'm thinking is that "we are early in the line" is exactly what we do know. There are two separate senses of "early" to tease apart here: early as in near the beginning; and early as in a long way from the end. We don't know where the end is, so we can't know if we're early in that sense. But we do know that we're near the beginning, that is, we know ordinally where we are relative to the beginning. That's the premise of the whole question! We know that we're in the first 60 billion. And seeing as the lottery numbers we're being handed are nothing other than our ordinal position in the line, once we've accepted that we do know our position, our lottery number cannot give us any more information.

Eric Schwitzgebel said...

Of course once we know our position, our lottery number doesn't give us more information. The question is about what we know in virtue of the first part, knowing our position. We know we are in position number approximately sixty billion; we don't know whether position sixty billion is in the middle, very near the beginning, or very near the end. Despite not having that knowledge, we can make an educated guess that it's more likely to be near the middle than to be very far out toward one extreme (unless we have further information, such as that a huge asteroid is about to strike).

One way of imagining this is imagining that you have no information about either the length of the line or your position and then updating with knowledge of your position. If you update with knowledge that your position is 62,223,457,108, then you should think it unlikely that the length of the line is exactly 62,223,457,108. That would be so surprising, to have the very last position in such a long line! Would you agree? Doomsday reasoning is just an extension of that way of thinking.

Another way of imagining this is via the Copernican Principle in cosmology: We should think we're in an average, mediocre place in the cosmos, rather than a super special place like the exact center -- unless there's some positive evidence that we're in a super special place. Being very near the beginning of a very long run would be un-Copernican in this sense -- too special a place.