Friday, September 02, 2011

Bostrom's Response to My Discussion of the Simulation Argument

A few days ago I posted a discussion of Nick Bostrom's Simulation Argument, which aims to show that there is a substantial chance -- perhaps about one in three -- that we are living in a computer simulation. I raised three concerns about the argument but ultimately concluded that although simulationism is crazy (in my technical sense of "crazy"), it's a cosmological possibility I don't feel I can dismiss.

Bostrom and I had an email exchange about that post, and he has agreed to let me share it on the blog.

So, first, you'll want to read my discussion, if you haven't already, and maybe also Bostrom's article (though I hope the summary in my discussion does justice enough to the main idea for the casual reader).

Bostrom's Reply:

Dear Eric,

Thanks for your thoughtful comments and for posting on your blog.

A few brief remarks. Regarding (A) and deriving an objection from externalism: There are a few things said in the original paper about this (admittedly quickly). For example, what do you think of the point that if we consider a case where we knew that 100% of everybody with our observations were in sims, we could logically deduce that we are in sims; and therefore if we consider a series of cases in which the fraction gradually approaches one, 90%, 98%, 99%, 99.9%, … , it is plausible that our credence should similarly gradually approach one? (There is also the whole area of observation selection theory, which uses stronger principles from which the “bland indifference principle” needed for the simulation argument drops out as an almost trivial special case.)

Regarding (B), I think it’s a somewhat open question how many conscious fellow travelers the average simulated conscious being has. Note that there need be only a few ancestor simulations (big ones with billions of people) to make up for billions of “me-simulations” (simulations with only one conscious person). Another issue is whether it would be feasible to create realistic replicas of human beings without simulating them in enough detail that they are conscious - if not, we have observational evidence against the me-sim hypothesis.

Regarding possible interaction between (B) and (C): no. 4 in the FAQ might be relevant?

With best wishes,

My Follow-Up:

Dear Nick,

Thanks for the thoughtful reply! ...

[Regarding the response to A] I don’t accept the slippery slope. As long as there is one non-sim, that person’s grounds for believing she is a non-sim might be substantially different from the grounds of all simulated persons, no matter how many sims there are, especially if we accept an externalist epistemology. Compare the Napoleon case. No matter how many crazy people think they are Napoleon, Napoleon’s own grounds for thinking he is Napoleon are different, and (arguably) the existence of those crazy people shouldn’t undercut his self-confidence. It would be very controversial, for example, to accept an indifference principle suggesting that if 10,000 crazy people have thought they are Napoleon, and if (hypothetically) Napoleon does or should know this about the world, then Napoleon himself should only believe he is Napoleon with a credence of 1/10,000. Of course, there are important disanalogies between the sims case and the Napoleon case. My point is only that your argument has a larger gap here than you seem to be granting.

[Regarding the response to B] Agreed. It’s an open question. I wouldn’t underplay the likelihood that many future sims might exist in short-term entertainments or single-mind AIs. I like your suggestion, though, that it might be impractical or impossible to have a solo sim with realistic non-conscious replicas as the solo’s apparent interactive partners – but that point will interact at least with the issue of sim duration. I’ve been solo in my office for half an hour. If I’m a short-duration sim, then probably non-conscious AI will have been sufficient to sustain my illusion of a half-hour’s worth of internet interactions with people. How the openness of the duration/solo question plays out argumentatively might depend on one’s argumentative purposes. For purposes of establishing the sims possibility, the non-trivial likelihood of the existence of massive ancestor simulations is sufficient. But for purposes of evaluating the practical consequences of accepting the sims possibility, it might be important to bear in mind that many sims may not exist in long-duration, high-population sims. It seems to me that unless you can establish that most sims do live in long-duration, high-population sims, the sims possibility has more skeptical consequences, and perhaps more of a normative impact on practical reflection, than you suggest.

[Regarding the possible interaction between (B) and (C)] I agree with your reasoning in FAQ no. 4. Just to be clear, what I was suggesting in my comment on the interaction between (B) and (C) was not intended as an objection of the sort posed in FAQ no. 4. In fact, the second of the two connections I remark on seems to increase the probability that I am a sim (and secondarily, to a lesser extent, that we are sims) by reducing the probability of DOOM/PLATEAU.


Bostrom's Response to My Follow-Up:


[On Issue A] That was intended as a continuity argument rather than a slippery slope argument. I’m not saying one point should be regarded as equivalent to another point because there is a smooth slope between them, but rather that credence should vary continuously as the underlying situation varies continuously. Is the alternative that I should assign credence 1 to being in a sim conditional on everybody being in a sim, but credence 0 to being in a sim conditional on there being at least one person like me who is not in a sim? Would an analogous principle also hold if we imagine that a single individual could be taken in and out of a simulation on alternating days without noticing the transfer? Folks willing to bet according to those obstinate odds would then soon deplete their kids’ college funds.

(For more background, see e.g. …)

[On Issue B] I broadly agree with that. Pending further information, it seems the simulation hypothesis should lead us to assign somewhat greater probability than we otherwise would to a range of outlier possibilities (including variations of solipsism, creationism, impending extinction, among others). To go beyond that, I think one needs to start to think specifically about what motives advanced civilizations might have for creating simulations (and one should then be diffident but not necessarily completely despairing about our ability to figure out the motives of such presumably posthuman people). The practical import of the simulation hypothesis is thus perhaps initially relatively slight, but it might well grow as our analysis advances, enabling us to see things more clearly.



Jeremy Goodman said...

Bostrom writes: "Folks willing to bet according to those obstinate odds would then soon deplete their kids’ college funds."

Here's a boilerplate externalist reply. Let's stick with the Napoleon example. It's true that if all self-professed Napoleons put their money where their mouths are, then all but one of them will go broke. But the point of externalism is to have the flexibility to offer different counsel to different self-professed Napoleons: namely, that Napoleon bet on his being Napoleon and that the crazies not. If all self-professed Napoleons bet as that sort of externalism advises, most of them won't go broke.

Eric Schwitzgebel said...

@ Jeremy: Agreed. Also: going broke can result from being ignorant relative to one's betting partner, even if one is (by certain standards) rational.

Anonymous said...

The fraction of individuals in simulated rather than real human-type civilizations depends on four parameters, not two as in Bostrom's original paper (see his recent paper on the Simulation Argument): N, the average number of human-type simulations run by real civilizations; F, the fraction of real civilizations that run simulations of human-type civilizations; R, the average number of individuals in real human-type civilizations; and H, the average number of individuals in simulated human-type civilizations. I don't see how we can ever have knowledge of the 'true' values of these parameters, or even hope to approximate them, which would be required to assign a probability that our civilization is itself simulated; even if we were to go on to run simulations of human-type civilizations ourselves, that wouldn't tell us anything about the actual values. His original disjunction only works if we assume certain relative values of R and H, and assume that these are representative of their actual values. If we can't say with any certainty that any estimate we might make of these parameters equals or approximates their 'actual' values, then the probability represented by the formula isn't the probability that we are a simulation; rather, it is the probability that a randomly selected individual from a subset of the hypothetical simulation hierarchy for which these values hold inhabits a simulation. The Simulation Argument as Bostrom presents it doesn't actually tell us how likely it is that we are a simulation; all that can be taken from it is that we might be.

Eric Schwitzgebel said...

@ Anon: Granted, Bostrom is making assumptions about those other variables, or more specifically about the ratio R/H, which he assumes will be very low. I think he's pretty clear about this, and he can shuffle non-low R/H ratios either into not-really-a-posthuman-civilization or not-interested-enough-in-running-sims.

Anonymous said...

The point is that the assumptions about the values of these parameters invalidate calling the fraction f = N/(N + R/(FH)) the probability that we are simulated. It doesn't matter whether he is upfront about his assumptions (if you call only noting, some 10 years after the original paper, that there was an implicit assumption of R=H "upfront"); it is contradictory to make assumptions about the values of N, F, R and H and then say that the fraction so derived represents the probability that we are simulated, unless you are sure that those assumptions mirror reality.

Anonymous said...

Not to pester you, but I wonder if you have a response to the flaw I note in the Simulation Argument, or was the silence tacit agreement?

Simply put, we cannot rule out large values of the ratio R/H, so arbitrarily assuming R~H is erroneous; and since the disjunction is invalid for large values of R/H, the argument is fallacious.

E.g., if F=0.1, R/H=100, and N=1000, then f=0.5 and all three propositions of the disjunction are false. If we were to arbitrarily impose R/H=1, then f≈0.99. By this we can see that assuming R/H~1 makes a nonsense of the bland indifference principle Cred(SIM | f=x) = x.
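The arithmetic in that counterexample can be checked directly. A minimal sketch, using the fraction f = N/(N + R/(FH)) as defined in the earlier comment; the specific values of R and H below are illustrative choices that fix only the ratio R/H, since f depends on R and H solely through that ratio:

```python
def sim_fraction(N, F, R, H):
    # f = N / (N + R/(F*H)): fraction of human-type individuals who are
    # simulated, given N sims per simulating civilization, fraction F of
    # real civilizations simulating, R individuals per real civilization,
    # and H individuals per simulated civilization.
    return N / (N + R / (F * H))

# The commenter's counterexample: F = 0.1, R/H = 100 (take R = 100, H = 1), N = 1000
f_counter = sim_fraction(N=1000, F=0.1, R=100, H=1)   # 0.5

# Imposing R/H = 1 instead (take R = H = 1):
f_imposed = sim_fraction(N=1000, F=0.1, R=1, H=1)     # 1000/1010 ≈ 0.99
```

With the large-R/H values, f sits at 0.5 even though 10% of civilizations run a thousand sims each, which is the commenter's point that none of the three disjuncts need hold.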

Eric Schwitzgebel said...

I thought my previous reply to your comment was adequate. For example, Bostrom could say that if R > H then that is either because there aren't the resources for lots of H (and thus not really a sufficiently posthuman civilization) or there's not sufficient interest in making sims. Thus R>H cases can be shuffled into his first two disjuncts.

Anonymous said...

What? No Eric, it simply can't be shuffled elsewhere, that makes no sense at all, and I've just given you an example where all three propositions are false.

How is 10% of civilizations becoming posthuman, an average of 1000 sims per posthuman civilization, and a resulting fraction of simulated individuals of 0.5 in agreement with any of the three propositions: F->0, N->0, and f->1?

Eric Schwitzgebel said...

Say that the low # of sims is due to technological constraints. Then it's not really a post-human civ. So really, in Bostrom's terms, you're arguing for something approaching a 100% chance of "doom". Bostrom thinks 1/3 is a better assessment of that probability.

Anonymous said...

No: F is 0.1, i.e. 10% of civilizations go on to run simulations of human-type civilizations. I am not arguing for P(DOOM)=1, not at all, and how is N=1000 low? In any case, N can be much larger if R/H is also larger.

Also, you're trying to use a special definition of a simulating civilization to make the argument work. You can't do this; the argument has to be general if you wish it to speak to the probability that we are simulated.

Ben said...

I can't say I understand Bostrom's rationale for including pre-posthuman individuals of human-type civilizations in the reference class over which the fraction of simulated individuals is calculated, but not posthuman individuals of those civilizations (by the definition of H in the core of the argument). I'm not even sure what qualities constitute the human-type criterion that individuals must satisfy before they are included in the reference class; it seems arbitrary and deeply suspect.

Under the bland indifference principle Bostrom suggests that if we don't have any information indicating that our own particular experiences are any more or less likely than other human-type experiences to have been implemented in vivo rather than in machina, then these experiences should be dismissed from consideration in determining our credence that we are simulated. This sentiment seems contrary to the way the core of the argument is formulated, in that we explicitly take experience into account by discarding individuals who don't belong to human-type civilizations and by excluding posthuman observers. Surely the same reasoning as in the bland indifference principle applies? After all, we don't know whether posthuman (or even human-type) observers are more likely to be implemented in vivo rather than in machina.

Regarding his response to B: a few billion-mind sims wouldn't make up for a few billion me-sims; they would drastically alter the average population per sim among the minds we could be, and since we're interested in the fraction of simulated people, not the fraction of simulated civilizations, that's kind of a big deal.

Eric Schwitzgebel said...

Thanks for the comment, Ben. The reason to exclude posthumans from the calculation is, I assume, that we know we're not posthumans: we are either humans or sims of humans. I agree that the number of sims vs. humans in the different possibilities is crucial.

Ben said...

@ Eric: Hmm, but I could equally say that I know I'm not an individual from the dawn of civilization, and likewise I know I'm not an individual living in a period in which a posthuman age is imminent, so why should I include all such individuals in my reference class when determining the probability that I am simulated, but exclude others that I know I'm not?

Eric Schwitzgebel said...

I agree that the reference class needs to be made clear, and the issue gets tricky. Maybe the easiest version would just be to have real 2011 people and Sim 2011 people in the reference class, but that doesn't do full justice to the sim possibility because the class of beings relevantly like us who are sims thinking they're real is probably best conceptualized as substantially larger than that -- "relevantly" being a very tricky word here!

Ben said...

@ Eric: I think that the same criticism applies to a reference class consisting of 2011 people. There are innumerable ways that others in this reference class aren't like me, so why should I dismiss such characteristics and reason as if I am indistinguishable from these people, but not also dismiss characteristics that pertain to time? Why give preference to time and not hair color? Neither has a known causal relation to whether you or I are simulated.

In defining a reference class we're really just guessing at the motivations of potential simulators, unless we choose the universal reference class. Depending on what characteristics we select to define our reference class, the size of the reference class might be vastly different and the corresponding probability of simulation would be arbitrary (although presumably the probability has a limit in the universal reference class case).

Clive said...

What about just thinking in basic practical operational simulation terms?

'IF' we are in a simulation, then it is LIKELY that our future selves will have built it. It is also likely that, if we are in a simulation, it is a basic simulation project with ourselves as copied people, because this is the natural development out of the simulations we have now.

If we are being simulated as copied people living out someone else's lives, then, 'ethics' aside as to whether anyone 'should' simulate self-aware, free-thinking people, if you think about it for a while you will eventually realize that there will be deducible and observable behavioural differences between a copied simulated population and a hypothetical real population.

These differences turn out to be exactly what we have identified as cognitive dissonance and confirmation bias. This is explained in great detail on the linked page below and the next page in that series:

Another factor that I've not seen mentioned is that any designer of a simulation PROJECT attempting to simulate free-thinking people would absolutely directly MANAGE their simulated population's awareness, thinking, and evaluating capacities to make sure that they DON'T think of realistic 'earth as a simulation' possibilities. This is because if your copied population becomes aware that they absolutely are in a simulation project, this will likely sabotage the aims of the project: it will change their behaviour and likely have them deviating from being an accurately simulated copied population. This is explained in detail, with evidence that this is operational, here:

Unfortunately, 'IF' you do start thinking in realistic terms, then it turns out that there is an abundance of macro observable evidence that we are in a simulation, including plenty of evidence that we are being managed not to think of OBVIOUS possibilities: