Nick Bostrom argues, in a 2003 article, that there's a substantial probability that we're living in a computer simulation. One way to characterize Bostrom's argument is as follows:
First, let's define a "post-human civilization" as a civilization with enough computing power to run, extremely cheaply, large-scale simulations containing beings with the same general types of cognitive powers and experiences that we have.
The argument, then, is this: If a non-trivial percentage of civilizations at our current technological stage evolve into post-human civilizations, and if a non-trivial percentage of post-human civilizations have people with the interest and power to run simulations of beings like us (given that it's very cheap to do so), then most of the beings like us in the universe are simulated beings. Therefore, we ourselves are very likely simulated beings. We are basically Sims with very good AI.
Bostrom emphasizes that he doesn't accept the conclusion of this argument (that we are probably sims), but rather a three-way disjunction: Either (1.) only a trivial percentage of civilizations at our current technological stage evolve into post-human civilizations, or (2.) only a trivial percentage of post-human civilizations are interested in running simulations of beings like us, or (3.) we are probably living in a computer simulation. He considers each of these disjuncts about equally likely. (See, for example, his recent Philosophy Bites interview.)
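It may help to make the quantitative core explicit. The following is a sketch of the fraction-of-observers calculation from Bostrom's paper, with the notation lightly simplified from his:

```latex
% f_p : fraction of human-level civilizations that reach a post-human stage
% N   : average number of ancestor-simulations run by a post-human civilization
% H   : average number of individuals who live before a civilization is post-human
\[
  f_{\mathrm{sim}} \;=\; \frac{f_p \, N \, H}{f_p \, N \, H + H}
                   \;=\; \frac{f_p \, N}{f_p \, N + 1}
\]
```

Unless f_p is negligible (disjunct 1) or N is negligible (disjunct 2), f_sim approaches 1. The further step from a high f_sim to a high credence that we ourselves are simulated is the indifference reasoning queried in (A) below.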
Bostrom's argument seems a good example of disjunctive metaphysics and perhaps also a kind of crazyism. I applaud it. But let me mention three concerns:
(A.) It's not as straightforward as Bostrom makes it seem to conclude that we are likely living in a computer simulation from the fact (if it is a fact) that most beings like us are living in computer simulations (as Brian Weatherson, for example, argues). One way to get the conclusion about us from the putative fact about beings like us would be to argue that the epistemic situation of simulated and unsimulated beings is very similar -- e.g., that unsimulated beings don't have good evidence that they are unsimulated -- and then to argue that, given the epistemic similarity, it's irrational to assign low probability to the possibility that we are sims. Compare: Most people who have thought they were Napoleon were not Napoleon. Does it follow that Napoleon didn't know he was Napoleon? Presumably not, because the epistemic situations are not relevantly similar. A little closer to the mark, perhaps: It may be that 10% of the time when you think you are awake you are actually dreaming. Does it follow that you should assign only a 90% credence to being awake now? These cases aren't entirely parallel to the sims case, of course; they're only illustrative. Perhaps Bostrom is on firmer ground. My point is that this is tricky epistemic terrain, across which Bostrom glides too quickly -- especially given "externalism" in epistemology, which holds that there can be important epistemic differences between cases that seem identical from the inside. (See Bostrom's brief discussion of externalism in Section 3 of this essay.)
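To see concretely how an epistemic asymmetry can break the inference from base rates to credences, here's a toy Bayesian calculation (my illustration, not Bostrom's or Weatherson's, and the numbers are invented):

```python
def credence_dreaming(base_rate, p_seem_awake_if_dreaming,
                      p_seem_awake_if_awake=1.0):
    """Bayes' rule: rational credence that I'm dreaming, given that I seem awake."""
    dreaming = base_rate * p_seem_awake_if_dreaming
    awake = (1 - base_rate) * p_seem_awake_if_awake
    return dreaming / (dreaming + awake)

# If dreaming and waking are evidentially identical from the inside,
# the 10% base rate carries straight over into my credence:
print(credence_dreaming(0.10, 1.0))  # 0.10

# But if dreamers usually fail some check that waking subjects pass --
# the sort of difference the externalist insists can matter -- the
# rational credence that I'm dreaming falls well below the base rate:
print(credence_dreaming(0.10, 0.2))  # ~0.022
```

Whether the sim case is more like the first call or the second is exactly what's at issue.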
(B.) Bostrom substantially underplays the skeptical implications of his conclusion, I think. This is evident even in the title of his article, where he uses the first-person plural: "Are We Living in a Computer Simulation?" If I am living in a computer simulation, who is this "we"? Bostrom seems to assume that the normal case is that we would be simulated in groups, as enduring societies. But why assume that? Most simulations in contemporary AI research are simulations of a single being over a short run of time; and most "sims" operating today (presumably not conscious) are instantiated in games whose running time is measured in hours, not years. If we get to disjunct (3), then it seems I should take seriously the possibility that I will be turned off at any moment, or that Godzilla will suddenly appear in the town in which I am a minor figure, or that all my apparent memories and relationships are pre-installed or otherwise fake.
(C.) Bostrom calls the possibility that only a trivial portion of civilizations make it to the post-human stage "DOOM", and he usually characterizes that possibility as involving civilization destroying itself. However, he also gives passing acknowledgement to the possibility that this so-called DOOM hypothesis could be realized merely by technologically stalling out, as it were. By calling the possibility DOOM and emphasizing its self-destructive versions, Bostrom seems to me illegitimately to reduce its credibility. After all, a post-human civilization in the defined sense -- one in which it would be incredibly cheap to run massive simulations of genuinely conscious beings -- is an extremely grand achievement! Why not call the possibility that only a trivial percentage of civilizations achieve such technology PLATEAU? It doesn't seem unreasonable -- in fact, it seems fairly commonsensical (which doesn't mean correct) -- to suppose that we are in a computational boomtime right now and that the limits on cheap computation fall short of what is envisioned by transhumanists like Kurzweil. In the middle of the 20th century, science-fiction futurists, projecting the then-booming increase in travel speeds indefinitely into the future, saw high-speed space travel as a 21st-century commonplace; increases in computational power may similarly flatten in our future. Also: Bostrom accepts as a background assumption that consciousness can be realized in computing machines -- a view that has been challenged by Searle, for example -- and we could build the rejection of this possibility into DOOM/PLATEAU. If we define a post-human civilization as one with enough computing power to run, extremely cheaply, vast simulations of genuinely conscious beings, then if Searle is right we will never get there.
Thoughts (B) and (C) aren't entirely independent: The more skeptically we read the simulation possibility, the less credence, it seems, we should give to our projections of the technological future (though whether this increases or decreases the likelihood of DOOM/PLATEAU is an open question). Also, the more we envision the simulation possibility as the simulation of single individuals for brief periods, the less computational power would be necessary to avoid DOOM or to rise above PLATEAU (thus decreasing the likelihood of DOOM/PLATEAU).
All that said: I am a cosmological skeptic and crazyist (at least I am today), and I would count simulationism among the crazy disjuncts that I can't dismiss. Maybe that's enough for me to count as a convinced reader of Bostrom.
Monday, August 29, 2011
New: Paperback release of Hurlburt & Schwitzgebel (2007), Describing Inner Experience? Proponent Meets Skeptic
Wednesday, August 24, 2011
On Containers and Content, with a Cautionary Note to Philosophers of Mind
I was recently reminded of a short paper I abandoned work on in 2001, but for which I still have a certain amount of affection. Here it is in its entirety (slightly amended and minus a few references).
On Containers and Content, with a Cautionary Note to Philosophers of Mind
The prototypical container relation is a relation between a single item or group of countable items, and some distinct item with the approximate shape of a cylinder, box, or bag, open on no more than one side, such that the volume of the container substantially exceeds and has as a subset the volume of the items contained. (However, see below for a couple of variations.) For concreteness, we may consider the prototypical container to be a bucket and its contents to be balls.
Consider some potentially interesting features of this system:
(1.) A bucket contains a ball just in case the ball is physically inside the bucket. It does not matter how things stand outside of the bucket.
(2.) In the normal (upright, gravitational) case, it takes a certain amount of effort to get a ball into a bucket and a certain amount of effort to get it back out again.
(3.) Balls take up space. A finite bucket can only contain a limited number of non-infinitesimal balls.
(4.) Balls are clearly individuated, countable entities.
(5.) It is rarely a vague matter whether a bucket contains a ball or not.
(6.) There is typically no reason why any two balls can’t go in the same bucket or why a ball can’t be removed from one bucket and put into another without changing any of the other contents.
(7.) A bucket can contain many balls, or only one ball, or no balls.
(8.) The ball and the bucket are distinct. The ball is not, for example, a state or configuration of the bucket.
(9.) A ball in a bucket is observable only from a privileged position inside or above the bucket.
Some modifications, particularly to (4) and (5), are required if we take as prototypical the relation between such a container and a certain amount of stuff, such as water or sand, characterized non-countably by means of a mass noun. If only one kind of stuff is to fill the bucket, there will be no multiple, discrete contents, but one content in varying amounts. Alternatively, if one fills the bucket with wholly distinct kinds of stuff, (4) may be preserved; if we consider semi-distinct fluids, such as orange juice and apple juice, (4) must be discarded.
If one takes the relationship between a packaging box and the packaged item as prototypical, the volume of the item contained will approach the volume of the container, requiring the modification of (6) and (7).
Cautionary note to philosophers of mind:
Often it is said that beliefs, desires, etc. -- the "propositional attitudes" -- are “contents” of minds. Also, and quite differently, propositional attitudes are said to have contents, propositional, conceptual, or otherwise. It is infelicitous to invoke the container metaphor in this way (or, alternatively, to extend literal usage of ‘content’ to cover these cases), if there are divergences between the features described above and features of the mind-propositional attitude relation or the propositional attitude-proposition relation, and if incautious use of the metaphor might draw the reader (or the writer) mistakenly to attribute to the latter relations features of the former. Similar remarks apply to visual (and other) images, which are sometimes described as contained in the mind and sometimes described as themselves having contents.
I will leave the explicit comparisons to the reader; they should be obvious enough (e.g., (1) makes “content externalism” an oxymoron; (4)-(7) are atomistic). If we give ourselves completely over to the container metaphor, we end up with a position that looks something like a caricature (not an exact portrait) of Jerry Fodor’s views. The metaphor thus pulls in Fodor’s direction, and Fodor, perhaps sensing this, delightedly embellishes it with his talk of “belief boxes.” Conversely, those among us who wish to resist the approach to the mind that the container metaphor suggests may do well to be wary of the word ‘content’ in its current philosophical uses. Would we lapse so easily into atomistic habits if, instead of saying that someone has (in her mind) a particular belief with the content P, we said that she matches (to some extent) the profile for believing that P?
Admittedly, avoiding the word ‘content’ would make some things harder to say – but maybe those things should be harder to say.
Monday, August 08, 2011
Stanley Fish in the New York Times: Philosophy Doesn't Matter
... by which he means something like: Philosophical reflection has no bearing on the real, practical decisions of life. See here and here.
My guess: Most philosophers will react negatively to Fish. When faced with an outsider's attack of this sort, our impulse (my impulse too) is to insist that philosophy does matter to the ordinary decisions of life -- or at least to insist that portions of philosophy, perhaps especially ethics and political philosophy, can matter, maybe should matter.
But that brings us straightaway to the ethics professors problem. If philosophy "matters" in Fish's sense, then it seems that people who frequently and skillfully engage in philosophical thinking should, at least on average and to a modest degree, make better decisions on topics that philosophy touches. They should behave a bit more ethically, perhaps, or show a bit more wisdom. And yet it seems as though people with philosophical training (e.g., ethics professors) are not better behaved or wiser than others of similar social background, even in areas on which there is extensive philosophical literature.
My thought here can be formulated as a trilemma on the conditional A -> C, where A is that philosophy matters, in Fish's sense, to certain areas of practical life, and C is that philosophical training improves practical wisdom in those areas of life. We can deny the antecedent and say philosophy is irrelevant to ordinary life; accept the consequent and say that people with philosophical expertise have more practical wisdom in those areas of life as a result of their philosophical training; or somehow reject the conditional itself, denying that the antecedent leads to the consequent. All three horns of the trilemma are, I think, a bit uncomfortable. At least I find them uncomfortable. Oddly, most philosophers I speak to seem to be eerily comfortable with one horn -- unreflectively, un-self-critically comfortable, I'm tempted to say -- though they don't always choose the same horn.
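That the trilemma leaves no fourth option is just propositional logic; here is a quick brute-force check (my illustration):

```python
from itertools import product

# A = "philosophy matters to practical life"; C = "philosophical training
# improves practical wisdom there". Whatever the truth values, at least
# one horn holds: not-A, C, or the failure of the conditional (A and not-C).
for A, C in product([True, False], repeat=2):
    assert (not A) or C or (A and not C)
print("The three horns are jointly exhaustive.")
```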