Nick Bostrom argues, in a 2003 article, that there's a substantial probability that we're living in a computer simulation. One way to characterize Bostrom's argument is as follows:
First, let's define a "post-human civilization" as a civilization with enough computing power to run, extremely cheaply, large-scale simulations containing beings with the same general types of cognitive powers and experiences that we have.
The argument, then, is this: If a non-trivial percentage of civilizations at our current technological stage evolve into post-human civilizations, and if a non-trivial percentage of post-human civilizations have people with the interest and power to run simulations of beings like us (given that it's very cheap to do so), then most of the beings like us in the universe are simulated beings. Therefore, we ourselves are very likely simulated beings. We are, basically, Sims with very good AI.
Bostrom emphasizes that he doesn't accept the conclusion of this argument (that we are probably sims), but rather a three-way disjunction: Either (1.) only a trivial percentage of civilizations at our current technological stage evolve into post-human civilizations, or (2.) only a trivial percentage of post-human civilizations are interested in running simulations of beings like us, or (3.) we are probably living in a computer simulation. He considers each of these disjuncts about equally likely. (See, for example, his recent Philosophy Bites interview.)
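Bostrom's article makes this reasoning quantitative. As a rough sketch of that bookkeeping (the notation here approximates the 2003 paper's), write $f_p$ for the fraction of human-level civilizations that reach a post-human stage, $f_I$ for the fraction of post-human civilizations interested in running ancestor-simulations, and $\bar{N}_I$ for the average number of such simulations run by an interested civilization. The fraction of all human-type experiences that are simulated is then:

```latex
f_{\mathrm{sim}} \;=\; \frac{f_p \, f_I \, \bar{N}_I}{f_p \, f_I \, \bar{N}_I + 1}
```

Since $\bar{N}_I$ would be astronomically large for any civilization meeting the definition above, $f_{\mathrm{sim}}$ is close to 1 unless $f_p$ or $f_I$ is close to 0 -- which is just the trilemma: disjunct (1), disjunct (2), or disjunct (3).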
Bostrom's argument seems a good example of disjunctive metaphysics and perhaps also a kind of crazyism. I applaud it. But let me mention three concerns:
(A.) It's not as straightforward as Bostrom makes it seem to conclude that we are likely living in a computer simulation from the fact (if it is a fact) that most beings like us are living in computer simulations (as Brian Weatherson, for example, argues). One way to get the conclusion about us from the putative fact about beings like us would be to argue that the epistemic situation of simulated and unsimulated beings is very similar -- e.g., that unsimulated beings don't have good evidence that they are unsimulated -- and then to argue that, given this epistemic similarity, it's irrational to assign low probability to the possibility that we are sims. Compare: Most people who have thought they were Napoleon were not Napoleon. Does it follow that Napoleon didn't know he was Napoleon? Presumably not, because the epistemic situation is not relevantly similar. A little closer to the mark, perhaps: It may be the case that 10% of the time when you think you are awake you are actually dreaming. Does it follow that you should assign only a 90% credence to being awake now? These cases aren't entirely parallel to the sims case, of course; they're only illustrative. Perhaps Bostrom is on firmer ground. My point is that this is tricky epistemic terrain, across which Bostrom glides too quickly, especially given "externalism" in epistemology, which holds that there can be important epistemic differences between cases that seem identical from the inside. (See Bostrom's brief discussion of externalism in Section 3 of this essay.)
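To make the worry concrete, here is the naive base-rate calculation the dreaming example gestures at (a sketch of my own, not Bostrom's or Weatherson's formulation): if 10% of seeming-awake experiences are actually dreams, and you treat your current experience as a random draw from that reference class, then

```latex
P(\text{awake} \mid \text{seems awake}) \;=\; 1 - 0.10 \;=\; 0.90
```

The externalist challenge targets the "random draw" step: if waking experience carries epistemically relevant features that dreams lack, even features undetectable from the inside, then the 90% figure need not be your rational credence -- and the parallel inference from "most beings like us are sims" to "we are probably sims" faces the same question.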
(B.) Bostrom substantially underplays the skeptical implications of his conclusion, I think. This is evident even in his framing of the question in the first-person plural: are we living in a computer simulation? If I am living in a computer simulation, who is this "we"? Bostrom seems to assume that the normal case is that we would be simulated in groups, as enduring societies. But why assume that? Most simulations in contemporary AI research are simulations of a single being over a short run of time; and most "sims" operating today (presumably not conscious) are instantiated in games whose running time is measured in hours, not years. If we get to disjunct (3), then it seems I might have to accept it as likely that I will be turned off at any moment, or that Godzilla will suddenly appear in the town in which I am a minor figure, or that all my apparent memories and relationships are pre-installed or otherwise fake.
(C.) Bostrom calls the possibility that only a trivial portion of civilizations makes it to the post-human stage "DOOM", and he usually characterizes that possibility as involving civilization destroying itself. However, he also gives passing acknowledgement to the possibility that this so-called DOOM hypothesis could be realized merely by technologically stalling out, as it were. By calling the possibility DOOM and emphasizing its self-destructive versions, Bostrom seems to me illegitimately to reduce its credibility. After all, a post-human civilization in the defined sense -- one in which it would be incredibly cheap to run massive simulations of genuinely conscious beings -- would be an extremely grand achievement! Why not call the possibility that only a trivial percentage of civilizations achieve such technology PLATEAU? It doesn't seem unreasonable -- in fact, it seems fairly commonsensical (which doesn't mean correct) -- to suppose that we are in a computational boom time right now and that the limits on cheap computation fall short of what is envisioned by transhumanists like Kurzweil. In the middle of the 20th century, science-fiction futurists, projecting the then-booming increase in travel speeds indefinitely into the future, saw high-speed space travel as a 21st-century commonplace; increases in computational power may similarly flatten in our future. Also: Bostrom accepts as a background assumption that consciousness can be realized in computing machines -- a view that has been challenged by Searle, for example -- and we could build the rejection of this possibility into DOOM/PLATEAU. If we define a post-human civilization as one with enough computing power to run, extremely cheaply, vast numbers of computationally simulated conscious beings, then if Searle is right we will never get there.
Thoughts (B) and (C) aren't entirely independent: The more skeptically we read the simulation possibility, the less credence, it seems, we should give to our projections of the technological future (though whether this increases or decreases the likelihood of DOOM/PLATEAU is an open question). Also, the more we envision the simulation possibility as the simulation of single individuals for brief periods, the less computational power would be necessary to avoid DOOM or to rise above PLATEAU (thus decreasing the likelihood of DOOM/PLATEAU).
All that said: I am a cosmological skeptic and crazyist (at least I am today), and I would count simulationism among the crazy disjuncts that I can't dismiss. Maybe that's enough for me to count as a convinced reader of Bostrom.
You were one of the first people I thought of when I was listening to the Philosophy Bites interview on my run the other day. I was delighted to see in my RSS feeds today that you decided it worthy of writing about. I think your points are important, especially the bits in (C) about making very shaky assumptions about future computing power and consciousness. Thanks for sharing.
Thanks for the generous comment, Nick!
Interesting post. The point that the sort of cheap computational power necessary to create such a simulation could be out of our reach seems quite possible. However, I'm not sure that it's safe to assume that the laws of reality which limit our computational ability and the laws of reality of potential simulation creators are the same.
Not to mention that it assumes that methods of computation similar to our own computers are used. Even under our own laws of reality, a living or organic-based computational system might be able to mimic the human experience without much difficulty, given that we are ourselves organic computational systems.
I agree with that, Jernau. I don't mean to assert that such developments are impossible or even particularly unlikely, simply that we should be cautious about projecting current rates of increase in computational power into the future.
Hi Eric. At the end of your article you mention that you consider yourself to be a "cosmological skeptic". My question is: what do you mean by that? Does it have something to do with physics, or does it also encompass work done by philosophers, i.e., metaphysicians?
- Juan
Eric
Great article. I was wondering about the problem of assuming a "we" if it turns out that I am living in a computer simulation. If I am, then perhaps so is everyone else, in a kind of multi-player scenario. Also, if the Searlean position is refuted by this scenario -- I'm not saying I think it is, just supposing it would have to be for Bostrom's idea to work -- then why not have spontaneous multi-subjectivities forming within the simulation? Or perhaps a manifestation-constraint argument a la Wittgenstein might suggest that even if solipsism is metaphysically true of the simulation, the vividness of the simulation would require that I assume other minds, inferring them (wrongly, in this case) from my limited (false) understanding that the world is not a simulation.
@ Juan: By cosmological crazyism, I mean the view that something "crazy" (i.e., both bizarre and insufficiently justified to merit rational belief) must be true about the general structure of the universe. The various bizarre options here derive both from philosophy (with simulationism being one option, idealism another, recent religious creation scenarios another, etc.) and from physical cosmology (with the pluriverse being one option, divinely orchestrated big bang another, etc., crisscrossing with the various bizarre possible interpretations of quantum mechanics and the insufficiently appreciated weirdness of the relativistic account of distance).
@ Richard: I agree with your first point about nested and/or emerging simulated beings. Your second point depends on what you mean by "assume": If you mean something like "spontaneously act as if," then perhaps psychologically it's unavoidable and pragmatically justified (although I'm not sure of that); but if you mean "believe as my best philosophical judgment about what is probably the case," then I would disagree. I think that by starting from our existing understanding of the world and carefully pursuing its implications, we can find ourselves forced to acknowledge that various bizarre possibilities might well be the case, contra Wittgenstein.
ReplyDeleteEric
Neat. I go with your anti-Wittgensteinian crazyism.
I would add a possibility: A post-human civilization has lots of resource-efficient computing, but it has even more demand for non-ancestor-simulation computing power, so it uses almost none for ancestor simulations.
The most plausible scenario of this type is a civilization whose inhabitants are themselves computer programs with a desire to have more runtime for themselves and their copies (and we know they are not us because they must be aware of their civilization). This would quickly drive demand for computing power arbitrarily high, no matter how much there is of it (as long as it is finite). An analogy in our own world would be if every Sim in The Sims needed real food while real people are starving.
Right, nice point! A lot of different scenarios can be loaded into possibility (2) above.
I just posted my response to the Bostrom argument on my literary blog at redheiferpress.com. I find it ironic that, after proving (at least to my own satisfaction) that I am not a computer simulation, your blogger is now asking me to prove that I am not a robot.
Yes, funny! I don't think you're right, though, that no one has noticed Bostrom's anti-Searlean assumption about the possibility of a computationally instantiated mind. Bostrom flags this assumption under the label "substrate independence".