Wednesday, December 18, 2024

Reply to Chalmers: If I'm Living in a Simulation, It Might Be Brief or Small

Suppose we take the "simulation hypothesis" seriously: We might be living not in the "base level" of reality but instead inside of a computer simulation.

I've argued that if we are living in a computer simulation, it might easily be only city-sized or have a short past of a few minutes, days, or years. The world might then be much smaller than we ordinarily think it is.

David Chalmers argues otherwise in a response published on Monday. Today I'll summarize his argument and present my first thoughts toward a rebuttal.

The Seeding Challenge: Can a Simulation Contain Coherent, Detailed Memories and Records but Only a Short Past?

Suppose an Earth-sized simulation was launched last night at midnight Pacific Standard Time. The world was created new, exactly then, with an apparent long past -- fake memories already in place, fake history books, fake fossil records, and all the rest. I wake up and seem to recall a promise I made to my wife yesterday. I greet her, and she seems to recall the same promise. We read the newspaper, full of fake news about the unreal events of yesterday -- and everyone else on the planet reads their own news of the same events, and related events, all tied together in an apparently coherent web.

Chalmers suggests that the obvious way to make this work would be to run a detailed simulation of the past, including a simulation of my conversation with my wife yesterday, and our previous past interactions, and other people's past conversations and actions, and all the newsworthy world events, and so on. The simulators create today's coherent web of detailed memories and records by running a simulated past leading up to the "start time" of midnight. But if that's the simulators' approach, the simulation didn't start at midnight after all. It started earlier! So it's not the short simulation hypothesized.

This reasoning iterates back in time. If we wanted a simulation that started on Jan 1, 2024, we'd need a detailed web of memories, records, news, and artifacts recently built or in various stages of completion, all coherently linked so that no one detects any inconsistencies. The obvious way to generate a detailed, coherent web of memories and records would be to run a realistic simulation of earlier times, creating those memories and records. Therefore, Chalmers argues, no simulation containing detailed memories and records can have only a short past. Whatever start date in the recent past you choose, in order for the memories and records to be coherent, a simulation would already need to be running before that date.

Now, as I think Chalmers would acknowledge, although generating a simulated past might be the most obvious way to create a coherent web of memories and records, it's not the only way. The simulators could instead attempt to directly seed a plausible network of memories and records. The challenge would lie in seeding them coherently. If the simulators just create a random set of humanlike memories and newspaper stories, there will be immediately noticeable conflicts. My wife and I won't remember the same promise from yesterday. The news article dated November 1 will contradict the article dated October 31.

Call this the Seeding Challenge. If the Seeding Challenge can be addressed, the simulators can generate a coherent set of memories and records without running a full simulation of the past.

To start, consider geological seeding. Computer games like SimCity and Civilization can autogenerate plausible, coherent terrain that looks like it has a geological history. Rivers run from mountains to the sea. Coastlines are plausible. Plains, grasslands, deserts, and hills aren't checkered randomly on the map but cluster with plausible transitions. Of course, this is simple, befitting simple games with players who care little about strict geological plausibility. But it's easy to imagine more careful programming by more powerful designers that does a better job, including integrating fossil records and geological layers. If done well enough, there might be no inconsistency or incoherence. Potentially, before finalizing, a sophisticated plausibility and coherence checker could look for and repair any mistakes.

I see no reason in principle that human memories, newspaper stories, and the rest couldn't be coherently seeded in a similar way. If my memory is seeded first, then my wife's memory will be constrained to match. If the November 1 news stories are seeded first, then the October 31 stories will be constrained to match. Big features might be seeded first -- like a geological simulation might start with "mountain range here" -- and then details articulated to match.
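To make the constraint idea concrete, here's a minimal toy sketch in Python (entirely my own illustration -- seed_fact, seed_memory, and the candidate promises are invented for the example, not anything Chalmers or the imagined simulators are committed to). A fact fixed once becomes canonical, and every memory seeded afterward is constrained to match it:

    import random

    canonical_facts = {}  # facts already seeded; later seeds must match them

    def seed_fact(key, candidates):
        # Fix a fact the first time it's needed; reuse it ever after.
        if key not in canonical_facts:
            canonical_facts[key] = random.choice(candidates)
        return canonical_facts[key]

    def seed_memory(person, key, candidates):
        # A person's memory is constrained by the canonical record.
        return {"person": person, "content": seed_fact(key, candidates)}

    options = ["dinner at six", "a walk after work", "calling the plumber"]
    mine = seed_memory("me", "yesterday's promise", options)
    hers = seed_memory("wife", "yesterday's promise", options)

    # Coherence check: memories of the same event must agree.
    assert mine["content"] == hers["content"]

Scaled up absurdly far, the same pattern -- seed once, constrain everything downstream, verify at the end -- is all the Seeding Challenge strictly requires.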

Naturally, this would be extremely complicated and expensive! But we are imagining a society of simulators who can simulate an entire planet of eight billion conscious humans, and all of the many, many physical interactions those humans have with the simulated environment, so we are already imagining the deployment of huge computational power. Let's not underestimate their capacity to meet the Seeding Challenge by rendering the memories and records coherent.

This approach to the Seeding Challenge gains plausibility, I think, by considering the resource-intensiveness of the alternative strategy of creating a deep history. Suppose the simulators want a start date of midnight last night. Option 1 would be to run a detailed simulation of the entire Earth from at least the beginning of human history. Option 2 would be to randomly generate a coherent seed, checking and rechecking for any detectable inconsistencies. Even though generating a coherent seed might be expensive and resource intensive, it's by no means clear that it would be more expensive and resource intensive than running a fully detailed simulated Earth for thousands of years.

I conclude that Chalmers' argument against short-historied simulations does not succeed.


The Boundaries Challenge: Can a Simulation Be City-Sized in an Apparently Large World?

I have also suggested that a simulation could easily just be you and your city. Stipulate a city that has existed for a hundred years. Its inhabitants falsely believe they are situated on a large planet containing many other cities. Everyone and everything in the city exists, but everything stops at the city's edge. Anyone who looks beyond the edge sees some false screen. Anyone who travels out of the city disappears from existence -- and when they return, they pop back into existence with false memories of having been elsewhere. News from afar is all fake.

Chalmers' objection is similar to his objection to short-past simulations. How are the returning travelers' memories generated? If someone in the city has a video conversation with someone far away, how is that conversation generated? The most obvious solution again seems to be to simulate the distant city the traveler visited and to simulate the distant conversation partner. But now we no longer have only a city-sized simulation. If the city is populous with many travelers and many people who interact with others outside the city, to keep everything coherent, Chalmers argues, you probably need to simulate all of Earth. Thus, a city-sized simulation faces a Boundaries Challenge structurally similar to the short-past simulation's Seeding Challenge.

The challenge can be addressed in a similar way.

Rendering travelers' memories coherent is a task structurally similar to rendering the memories of newly-created people coherent. The simulators could presumably start with some random, plausible seeds, then constrain future memories by those first seeds. This would of course be difficult and computationally expensive, but it's not clear that it would be more difficult or more expensive than simulating a whole planet of interacting people just so that a few hundred thousand or a few million people in a city don't notice any inconsistencies.

If the city's inhabitants have real-time conversations with others elsewhere, that creates a slightly different engineering challenge. As recent advances in AI technology have vividly shown, even with our very limited early 21st century tools, relatively plausible conversation partners can easily be constructed. With more advanced technology, presumably even more convincing conversation partners would be possible -- though their observations and memories would need to be constantly monitored and seeded for coherence with inputs from returning travelers, other conversation partners, incoming news, and so on.

Chalmers suggests that such conversation partners would be simulations -- and thus that the simulation wouldn't stop at the city's edge after all. He's clearly right about this, at least in a weak sense. Distant conversation partners would need voices and faces resembling the voices and faces of real people. In the same limited sense of "simulation", a video display at the city's edge, showing trees and fields beyond, simulates trees and fields. So yes, the borders of the city will need to be simulated, as well as the city itself. Seeming-people in active conversation with real citizens will in the relevant sense count as part of the borders of the city.

But just as trees on a video screen need not have their backsides simulated, so also needn't the conversation partners continue to exist after the conversation ends. And just as trees on a video screen needn't be as richly simulated as trees in the center of the city, so also distant conversation partners needn't be richly simulated. They can be temporary shells, with just enough detail to be convincing, and with new features seeded only on demand as necessary.
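As a toy sketch of such shells (again my own illustration, with invented names like Shell and feature; no claim about how actual simulators would engineer it): features come into existence only when first queried, are cached so repeat queries stay coherent, and can be discarded when the conversation ends.

    import random

    class Shell:
        # A "temporary shell": no feature exists until someone asks about it.
        def __init__(self, name):
            self.name = name
            self._features = {}

        def feature(self, key, candidates):
            # Generate on demand; cache so later answers stay coherent.
            if key not in self._features:
                self._features[key] = random.choice(candidates)
            return self._features[key]

    partner = Shell("distant cousin")
    print(partner.feature("hometown", ["Oslo", "Leeds", "Tucson"]))
    print(partner.feature("hometown", ["Oslo", "Leeds", "Tucson"]))  # same city both times
    del partner  # nothing need persist once the call ends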

The Boundaries Challenge for simulated cities introduces one engineering challenge not faced by short-history whole-Earth simulations: New elements need to be introduced coherently in real time. A historical seed can be made slowly and checked over patiently as many times as necessary before launch. But the city boundaries will need to be updated constantly. If generating coherent conversation partners, memories, and the like is resource intensive, it might be challenging to do it fast enough to keep up with all the trips, conversations, and news reports streaming in.

Here, however, the simulators can potentially take advantage of the fact that the city's inhabitants are themselves simulations running on a computer. If real-time updating of the boundary is a challenge, the simulators can slow down the clock speed or pause as necessary, while the boundaries update. And if some minor incoherence is noticed, it might be possible to rewrite citizens' memories so it is quickly forgotten.
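Here's a minimal sketch of the clock trick (my own toy, with a made-up 70% chance that boundary content is ready on any given external tick): the inhabitants' clock advances only when coherent boundary content is available, and from the inside the stalls are undetectable.

    import random

    internal_time = 0
    for external_tick in range(10):
        boundary_ready = random.random() < 0.7  # stand-in for slow, expensive seeding
        if boundary_ready:
            internal_time += 1  # the inhabitants' clock ticks forward
        # else: pause; from inside, no time passes while the seeders catch up
    print("external ticks: 10; internal time:", internal_time)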

So although embedding a city-sized simulation in a fake world is probably more complicated than generating a short-past simulation with a fake history, ultimately my response to Chalmers' objections is the same for both cases: There's no reason to suppose that generating plausible, coherent inputs to the city would be beyond the simulators' capacities, and doing so on the fly might be much less computationally expensive than running a fully detailed simulation of a whole planet with a deep history.

Related:

"1% Skepticism" (2017), Nous, 51, 271-290.

"Let’s Hope We’re Not Living in a Simulation" (2024), Philosophy & Phenomenological Research, online first: https://onlinelibrary.wiley.com/doi/10.1111/phpr.13125.

Chalmers, David J. (2024) "Taking the Simulation Hypothesis Seriously", Philosophy & Phenomenological Research, online first: https://onlinelibrary.wiley.com/doi/10.1111/phpr.13122.

Thursday, March 09, 2023

New Paper in Draft: Let's Hope We're Not Living in a Simulation

I'll be presenting an abbreviated version of this at the Pacific APA in April, as a commentary on David Chalmers' book Reality+.

According to the simulation hypothesis, we might be artificial intelligences living in a virtual reality.  Advocates of this hypothesis, such as Chalmers, Bostrom, and Steinhart, tend to argue that the skeptical consequences aren’t as severe as they might appear.  In Reality+, Chalmers acknowledges that although he can’t be certain that the simulation we inhabit, if we inhabit a simulation, is larger than city-sized and has a long past, simplicity considerations speak against those possibilities.  I argue, in contrast, that cost considerations might easily outweigh considerations of simplicity, favoring simulations that are catastrophically small or brief – small or brief enough that a substantial proportion of our everyday beliefs would be false or lack reference in virtue of the nonexistence of things or events whose existence we ordinarily take for granted.  More generally, we can’t justifiably have high confidence that if we live in a simulation it’s a large and stable one.  Furthermore, if we live in a simulation, we are likely at the mercy of ethically abhorrent gods, which makes our deaths and suffering morally worse than they would be if there were no such gods.  There are reasons both epistemic and axiological to hope that we aren’t living in a simulation.

Paper here.

As always, comments welcome!

Friday, November 11, 2022

Credence-First Skepticism

Philosophers usually treat skepticism as a thesis about knowledge. The skeptic about X holds that people who claim to know X don't in fact know X. Religious skeptics think that people who say they know that God exists don't in fact know that. Skeptics about climate change hold that we don't know that the planet is warming. Radical philosophical skepticism asserts broad failures of knowledge. According to dream skepticism, we don't know we're not dreaming. According to external world skepticism, we lack knowledge about the world beyond our own minds.

Treating skepticism as a thesis about knowledge makes the concept or phenomenon of knowledge crucially important to the evaluation of skeptical claims. The higher the bar for knowledge, the easier it is to justify skepticism. For example, if knowledge requires perfect certainty, then we can establish skepticism about a domain by establishing that perfect certainty is unwarranted in that domain. (Imagine here the person who objects to an atheist by extracting from the atheist the admission that they can't be certain that God doesn't exist and therefore they should admit that they don't really know.) Similarly, if knowledge requires knowing that you know, then we could establish skepticism about X by establishing that you can't know that you know about X. If knowledge requires being able to rule out all relevant alternatives, then we can establish skepticism by establishing that there are relevant alternatives that can't be ruled out. Conversely, if knowledge is cheaper and easier to attain -- if knowledge doesn't require, for example, perfect certainty, or knowledge that you know, or being able to rule out every single relevant alternative -- then skepticism is harder to defend.

But we don't have to conceptualize skepticism as a thesis about knowledge. We can separate the two concepts. Doing so has some advantages. The concept of knowledge is so vexed and contentious that it can become a distraction if our interests in skepticism are not driven by an interest in the concept of knowledge. You might be interested in religious skepticism, or climate change skepticism, or dream skepticism, or external world skepticism because you're interested in the question of whether God exists, whether the climate is changing, whether you might now be dreaming, or whether it's plausible that you could be radically mistaken about the external world. If your interest lies in those substantive questions, then conceptual debates about the nature of knowledge are beside the point. You don't want abstract disputes about the KK principle to crowd out discussion about what kinds of evidence we have or don't have for the existence of God, or climate change, or a stable external reality, and how relatively confident or unconfident we should be in our opinions about such matters.

To avoid distractions concerning knowledge, I recommend that we think about skepticism instead in terms of credence -- that is, degree of belief or confidence. We can contrast skeptics and believers. A believer in X is someone with a relatively high credence in X, while a skeptic is someone with a relatively low credence in X. A believer thinks X is relatively likely to be the case, while a skeptic regards X as relatively less likely. Believers in God find the existence of God likely. Skeptics find it less likely. Believers in the external world find the existence of an external world (with roughly the properties we ordinarily think it has) relatively likely while skeptics find it relatively less likely.

"Relatively" is an important word here. Given that most readers of this blog will be virtually certain that they are not currently dreaming, a reader who thinks it even 1% likely that they're dreaming has a relatively low credence -- 99% instead of 99.999999% or 100%. We can describe this as a moderately skeptical stance, though of course not as skeptical as the stance of someone who thinks it's 50/50.

[Dall-E image of a man flying in a dream]

Discussions of radical skepticism in epistemology tend to lose sight of what is really gripping about radically skeptical scenarios: the fact that, if the skeptic is right, there's a reasonable chance that you're in one. It's not unreasonable, the skeptic asserts, to attribute a non-trivial credence to the possibility that you are currently dreaming or currently living in a small or unstable computer simulation. Whoa! Such possibilities are potentially Earth-shaking if true, since many of the beliefs we ordinarily take for granted as obviously true (that Luxembourg exists, that I'm in my office looking at a computer screen) would be false.

To really assess such wild-seeming claims, we should address the nature and epistemology of dreaming and the nature and epistemology of computer simulations. Can dream experiences really be as sensorily rich and realistic as the experiences that I'm having right now? Or are dream experiences somehow different? If dream experiences can be as rich and realistic as what I'm now experiencing, then that seems to make it relatively more reasonable to assign a non-trivial credence to this being a dream. Is it realistic to think that future societies could create vastly many genuinely conscious AI entities who think that they live in worlds like this one? If so, then the simulation possibility starts to look relatively more plausible; if not, then it starts to look relatively less plausible.

In other words, to assess the likelihood of radically skeptical scenarios, like the dream or simulation scenario, we need to delve into the details of those scenarios. But that's not typically what epistemologists do when considering radical skepticism. More typically, they stipulate some far-fetched scenario with no plausibility, such as the brain-in-a-vat scenario, and then ask questions about the nature of knowledge. That's worth doing. But to put that at the heart of skeptical epistemology is to miss skepticism's pull.

A credence-first approach to skepticism makes skepticism behaviorally and emotionally relevant. Suppose I arrive at a small but non-trivial credence that I'm dreaming -- a 0.1% credence for example. Then I might try some things I wouldn't try if I had a 0% or 0.000000000001% credence I was dreaming. I might ask myself what I would do if this were a dream -- and if doing that thing were nearly cost-free, I might try it. For example, I might spread my arms to see if I can fly. I might see if I can turn this into a lucid dream by magically lifting a pen through telekinesis. I'd probably only try these things if I had nothing better to do at the moment and no one was around to think I'm a weirdo. And when those attempts fail, I might reduce my credence that this is a dream.

If I take seriously the possibility that this is a simulation, I can wonder about the creators. I become, so to speak, a conditional theist. Whoever is running the simulation is in some sense a god: They created the world and presumably can end it. They exist outside of time and space as I know them, and maybe they have "miraculous" powers to intervene in events around me. Perhaps I have no idea what I could do that might please or displease them, or whether they're even paying attention, but still, it's somewhat awe-inspiring to consider the possibility that my world, our world, is nested in some larger reality, launched by some creator for some purpose we don't understand. If I regard the simulation possibility as a live possibility with some non-trivial chance of being true, then the world might be quite a bit weirder than I would otherwise have thought, and very differently constituted. Skepticism gives me material uncertainty and opens up genuine doubt. The cosmos seems richer with possibility and more mysterious.

We lose all of this weirdness, awe, mystery, and material uncertainty if we focus on extremely implausible scenarios to which we assign zero or virtually zero credence, like the brain-in-a-vat scenario, and focus our argumentative attention only on whether or not it's appropriate to say that we "know" we're not in those admittedly extremely implausible scenarios.

Friday, April 22, 2022

Let's Hope We Don't Live in a Simulation

reposting from the Los Angeles Times, where it appears under a different title[1]

------------------------------------------

There’s a new creation story going around. In the beginning, someone booted up a computer. Everything we see around us reflects states of that computer. We are artificial intelligences living in an artificial reality — a “simulation.”

It’s a fun idea, and one worth taking seriously, as people increasingly do. But we should very much hope that we’re not living in a simulation.

Although the standard argument for the simulation hypothesis traces back to a 2003 article from Oxford philosopher Nick Bostrom, 2022 is shaping up to be the year of the sim. In January, David Chalmers, one of the world’s most famous philosophers, published a defense of the simulation hypothesis in his widely discussed new book, Reality+. Essays in mainstream publications have declared that we could be living in virtual reality, and that tech efforts like Facebook’s quest to build out the metaverse will help prove that immersive simulated life is not just possible but likely — maybe even desirable.

Scientists and philosophers have long argued that consciousness should eventually be possible in computer systems. With the right programming, computers could be functionally capable of independent thought and experience. They just have to process enough information in the right way, or have the right kind of self-representational systems that make them experience the world as something happening to them as individuals.

In that case, the argument goes, advanced engineers should someday be able to create artificially intelligent, conscious entities: “sims” living entirely in simulated environments. These engineers might create vastly many sims, for entertainment or science. And the universe might have far more of these sims than it does biologically embodied, or “real,” people. If so, then we ourselves might well be among the sims.

The argument requires some caveats. It’s possible that no technological society ever can produce sims. Even if sims are manufactured, they may be rare — too expensive for mass manufacture, or forbidden by their makers’ law.

Still, the reasoning goes, the simulation hypothesis might be true. It’s possible enough that we have to take it seriously. Bostrom estimates a 1-in-3 chance that we are sims. Chalmers estimates about 25%. Even if you’re more doubtful than that, can you rule it out entirely? Any putative evidence that we aren’t in a sim — such as cosmic background radiation “proving” that the universe originated in a Big Bang — could, presumably, be simulated.

Suppose we accept this. How should we react?

Chalmers seems unconcerned: “Being in an artificial universe seems no worse than being in a universe created by a god” (p. 328). He compares the value of life in a simulation to the value of life on a planet newly made inhabitable. Bostrom acknowledges that humanity faces an “existential risk” that the simulation will shut down — but that risk, he thinks, is much lower than the risk of extinction by a more ordinary disaster. We might even relish the thought that the cosmos hosts societies advanced enough to create sims like us.

In simulated reality, we’d still have real conversations, real achievements, real suffering. We’d still fall in and out of love, hear beautiful music, climb majestic “mountains” and solve the daily Wordle. Indeed, even if definitive evidence proved that we are sims, what — if anything — would we do differently?

But before we adopt too relaxed an attitude, consider who has the God-like power to create and destroy worlds in a simulated universe. Not a benevolent deity. Not timeless, stable laws of physics. Instead, basically gamers.

Most of the simulations we run on our computers are games or scientific studies. They run only briefly before being shut down. Our low-tech sims live partial lives in tiny worlds, with no real history or future. The cities of Sim City are not embedded in fully detailed continents. The simulated soldiers dying in war games fight for causes that don’t exist. They are mere entertainments to be observed, played with, shot at, surprised with disasters. Delete the file, uninstall the program, or recycle your computer and you erase their reality.

But I’m different, you say: I remember history and have been to Wisconsin. Of course, it seems that way. The ordinary citizens of Sim City, if they were somehow made conscious, would probably be just as smug. Simulated people could be programmed to think they live on a huge planet with a rich past, remembering childhood travels to faraway places. Their having these beliefs in fact makes for a richer simulation.

If the simulations that we humans are familiar with reveal the typical fate of simulated beings, long-term sims are rare. Alternatively, if we can’t rely on the current limited range of simulations as a guide, our ignorance about simulated life runs even deeper. Either way, there are no good grounds for confidence that we live in a large, stable simulation.

Taking the simulation hypothesis seriously means accepting that the creator might be a sadistic adolescent gamer about to unleash Godzilla. It means taking seriously the possibility that you are alone in your room with no world beyond, reading a fake blog post, existing only as a short-lived subject or experiment. You might know almost nothing about reality beyond and beneath the simulation. The cosmos might be radically different from anything you could imagine.

The simulation hypothesis is wild and wonderful to contemplate. It’s also radically skeptical. If we take it seriously, it should undermine our confidence about the past, the future and the existence of Milwaukee. What or whom can we trust? Maybe nothing, maybe no one. We can only hope our simulation god is benevolent enough to permit our lives to continue awhile.

Really, we ought to hope the theory is false. A large, stable planetary rock is a much more secure foundation for reality than bits of a computer program that can be deleted at a whim.

Postscript:

In Reality+, Chalmers argues against the possibility that we live in a local or a temporary simulation on grounds of simplicity (p. 442-447). I am not optimistic that this response succeeds. In general, simplicity arguments against skepticism tend to be underdeveloped and unconvincing -- in part because simplicity itself is complex to evaluate (see my paper with Alan T. Moore, "Experimental Evidence for the Existence of an External World"). And more specifically, it's not clear why it would be easier or simpler to create a giant, simulated world than to create a small simulation with fake indicators of a giant world -- perhaps only enough indicators to effectively fool us for the brief time we exist or on the relatively few tests we run. (And plausibly, our creators might be able to control or predict what thoughts we have or tests we will run and thus only create exactly the portions of reality that they know we will examine.) Continuing the analogy from Sim City, our current sims are more easily constructed if they are small, local, and brief, or if they are duplicated off a template, than if each is giant, a unique run of a whole universe from the beginning. I see no reason why this fact wouldn't generalize to more sophisticated simulations containing genuinely conscious artificial intelligences.

------------------------------------------

[1] The Los Angeles Times titled the piece "Is life a simulation? If so, be very afraid". While I see how one might draw that conclusion from the piece, my own view is that we probably should react emotionally as we react to other small but uncontrollable risks -- not with panic, but rather with a slight shift toward favoring short-term outcomes over long-term ones. See my discussion in "1% Skepticism" and Chapter 4 of my book in draft, The Weirdness of the World. I have also added links, a page reference, and altered the wording for clarity in a few places.

[image generated from inputting the title of this piece into wombo.art's steampunk generator]

Tuesday, March 08, 2022

How to Defeat Higher-Order Regress Arguments for Skepticism

In arguing for radical skepticism about arithmetic knowledge, David Hume uses what I'll call a higher-order regress argument. I was reminded of this style of argument when I read Francois Kammerer's similarly structured (and similarly radical) argument for skepticism about the existence of conscious experiences, forthcoming in Philosophical Studies. In my view, Hume's and Kammerer's arguments fail for similar reasons.

Hume begins by arguing that you should have at least a tiny bit of doubt even about simple addition:

In accompts of any length or importance, Merchants seldom trust to the infallible certainty of numbers for their security.... Now as none will maintain, that our assurance in a long numeration exceeds probability, I may safely affirm, that there scarce is any proposition concerning numbers, of which we can have a fuller security. For 'tis easily possible, by gradually diminishing the numbers, to reduce the longest series of addition to the most simple question, which can be form'd, to an addition of two single numbers.... Besides, if any single addition were certain, every one wou'd be so, and consequently the whole or total sum (Treatise of Human Nature 1740/1978, I.IV.i, p. 181)

In other words, since you can be mistaken in adding long lists of numbers, even when each step is the simple addition of two single-digit numbers, it follows that you can be mistaken in the simple addition of two single-digit numbers. Therefore, you should conclude that you know only with "probability", not with absolute certainty, that, say, 7 + 5 = 12.

I'm not a fan of absolute 100% flat utter certainty about anything, so I'm happy to concede this to Hume. (However, I can imagine someone -- Descartes, maybe -- objecting that contemplating 7 + 5 = 12 patiently outside of the context of a long row of numbers might give you a clear and distinct idea of its truth that we don't normally consistently maintain when adding long rows of numbers.)

So far, what Hume has said is consistent with a justifiable 99.99999999999% degree of confidence in the truth of 7 + 5 = 12, which isn't yet radical skepticism. Radical skepticism comes only via a regress argument.

Here's the first step of the regress:

In every judgment, which we can form concerning probability, as well as concerning knowledge, we ought always to correct that first judgment, deriv'd from the nature of the object, by another judgment, deriv'd from the nature of the understanding. 'Tis certain a man of solid sense and long experience... must be conscious of many errors in the past, and must still dread the like for the future. Here then arises a new species of probability to correct and regulate the first, and fix its just standard and proportion. As demonstration is subject to the controul of probability, so is probability liable to a new correction by a reflex act of the mind, wherein the nature of our understanding, and our reasoning from the first probability become our objects.

Having thus found in every probability, beside the original uncertainty inherent in the subject, a new uncertainty deriv'd from the weakness of that faculty, which judges, and having adjusted these two together, we are oblig'd by our reason to add a new doubt deriv'd from the possibility of error in the estimation we make of the truth and fidelity of our faculties (p. 181-182).

In other words, whatever high probability we assign to 7 + 5 = 12, we should feel some doubt about that probability assessment. That doubt, coupled with our original doubt, produces more doubt, thus justifying a somewhat lower -- but still possibly extremely high! -- probability assessment. Maybe 99.9999999999% instead of 99.99999999999%.

But now we're down the path toward an infinite regress:

But this decision, tho' it shou'd be favourable to our preceeding judgment, being founded only on probability, must weaken still further our first evidence, and must itself be weaken'd by a fourth doubt of the same kind, and so on in infinitum; till at last there remain nothing of the original probability, however great we may suppose it to have been, and however small the diminution by every new uncertainty. No finite object can subsist under a decrease repeated in infinitum; and even the vastest quantity, which can enter into human imagination, must in this manner be reduc'd to nothing (p. 182).

We should doubt, Hume says, our doubt about our doubts, adding still more doubt. And we should then doubt our doubt about our doubt about our doubt, and so on infinitely, until nothing remains but doubt. With each higher-order doubt, we should decrease our confidence that 7 + 5 = 12, until at the end we recognize that the only rational thing to do is shrug our shoulders and admit we are utterly uncertain about the sum of 7 and 5.

If this seems absurd... well, probably it is. I'm sympathetic with skeptical arguments generally, but this seems to be one of the weaker ones, and there's a reason it's not the most famous part of the Treatise.

There are at least three moves available to the anti-skeptic.

First, one can dig in against the regress. Maybe the best place to do so is the third step. One can say that it's reasonable to have a tiny initial doubt, and then it's reasonable to add a bit more doubt on grounds that it's doubtful how much doubt one should have, but maybe third-order doubt is unwarranted unless there's some positive reason for it. Unless something about you or something about the situation seems to demand third-order doubt, maybe it's reasonable to just stick with your assessment.

That kind of move is common in externalist approaches to justification, according to which people can sometimes reasonably believe things if the situation is right and their faculties are working well, even if they can't provide full, explicit justifications for those beliefs.

But this move isn't really in the spirit of Hume, and it's liable to abuse by anti-skeptics, so let's set it aside.

Second, one can follow the infinite regress to a convergent limit. The mathematical structure of this move should be familiar from pre-calculus. It's more readily seen with simpler numbers. Suppose that I'm highly confident of something. My first impulse is to assign 100% credence. But then I add a 5% doubt to it, reducing my credence to 95%. But then I have doubts about my doubt, and this second-order doubt leads me to reduce my credence another 2.5%, to 92.5%. I then have a third-order doubt, reducing my credence by 1.25% to 91.25%. And so on. As long as each higher-order doubt reduces the credence by half as much as the previous lower-order doubt, we will have a convergent sum of doubt. In this case, the limit as we approach infinitely many layers of doubt is 10%, so my rational credence need never fall below 90%.
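Spelled out with the toy numbers above, the doubts form a convergent geometric series:

\[
\text{total doubt} = 0.05 + 0.025 + 0.0125 + \cdots = 0.05 \sum_{n=0}^{\infty} \left(\tfrac{1}{2}\right)^{n} = 0.05 \cdot 2 = 0.10,
\]

so however many orders of doubt are piled on, credence never falls below 1 - 0.10 = 0.90.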

This response concedes a lot to Hume -- that it's reasonable to regress infinitely upward with doubt, and that each step upward should reduce our confidence by some finite amount -- and yet it avoids the radically skeptical conclusion.

Interestingly, Hume himself arguably could not have availed himself of this move, given his skepticism about the infinitesimal (in I.II.i-ii). We can have no adequate conception of the infinitesimal, Hume says, and space and time cannot be infinitely divided. Therefore, when Hume concludes the quoted passage above by saying "No finite object can subsist under a decrease repeated in infinitum; and even the vastest quantity, which can enter into human imagination, must in this manner be reduc'd to nothing", he is arguably relying on his earlier skepticism about infinite division. For that reason, Hume might be unable to accept the convergent limit solution to his puzzle -- though we ourselves, rightly more tolerant of the infinitesimal, shouldn't be so reluctant.

Third, higher-order doubts can take the form of reversing lower-order doubts. Your third-order thought might be that your second-order doubt was too uncertain, and thus on reflection your confidence might rise again. If my first inclination is 100% credence, and my second thought knocks it down to 95%, my next thought might be that 95% is too low rather than too high. Maybe I kick it back up to 97.5%. My fourth thought might then involve tweaking it up or down from there. Thus, even without accepting convergence toward a limit, we might reasonably suspect that ever-higher orders of reflection will always yield a degree of confidence that bounces around within a manageable range, say 90% to 99%. And even if this is only a surmise rather than something I know for certain, it's a surmise that could be either too high or too low, yielding no reason to conclude that infinite reflection would tend toward low degrees of confidence.

* - * - *

Well, that was longer than intended on Hume! But I think I can home in quickly on the core idea from Kammerer that precipitated this line of reflection.

Kammerer is a "strong illusionist". He thinks that conscious experiences don't exist. If this sounds like such a radical claim as to be almost unbelievable, then I think you understand why it's worth calling a radically skeptical position.[1]

David Chalmers offers a "Moorean" reply to this claim (similarly, Bryan Frances): It's just obvious that conscious experience exists. It's more obvious that conscious experience exists than any philosophical or scientific argument to the contrary could ever be, so we can reject strong illusionism out of hand, without bothering ourselves about the details of the illusionist arguments. We know in advance that whatever the details are, the argument shouldn't win us over.

Kammerer's reply is to ask whether it's obvious that it's obvious.[2] Sometimes, of course, we think something is obvious, but we're wrong. Some things we think are obvious are not only non-obvious but actually false. Furthermore, the illusionist suspects we can construct a good explanation of why false claims about consciousness might seem obvious despite their falsity. So, according to Kammerer, we shouldn't accept the Moorean reply unless we think it's obvious that it's obvious.

Kammerer acknowledges that the anti-illusionist might reasonably hold that it is obvious that it's obvious that conscious experience exists. But now the argument repeats: The illusionist might anticipate an explanation of why, even if conscious experience doesn't exist, it seems obvious that it's obvious that conscious experience exists. So it looks like the anti-illusionist needs to go third order, holding that it's obvious that it's obvious that it's obvious. The issue repeats again at the fourth level, and so on, up into a regress. At some point high enough up, it will either no longer be obvious that it's obvious that it's [repeat X times] obvious; or if it's never non-obvious at any finite order of inquiry, there will still always be a higher level at which the question can be raised, so that a demand for obviousness all the way up will never be satisfied.

Despite some important differences from Hume's argument -- especially the emphasis on obviousness rather than probability -- versions of the same three types of reply are available.

Dig in against the regress. The anti-illusionist can hold that it's enough that the claim is obvious; or that it's obvious that it's obvious; or that it's obvious that it's obvious that it's obvious -- for some finite order of obviousness. If the claim that conscious experience exists has enough orders of obviousness, and is furthermore also true, and perhaps has some other virtues, perhaps one can be fully justified in believing it even without infinite orders of obviousness all the way up.

Follow the regress to a convergent limit. Obviousness appears to come in degrees. Some things are obvious. Others are extremely obvious. Still others are utterly, jaw-droppingly, head-smackingly, fall-to-your-knees obvious. Maybe, before we engage in higher-order reflection, we reasonably think that the existence of conscious experience is in the last, jaw-dropping category, which we can call obviousness level 1. And maybe, also, it's reasonable, following Kammerer and Hume, to insist on some higher-order reflection: How obvious is it that it's obvious? Well, maybe it's extremely obvious but not utterly, level 1 obvious, and maybe that's enough to reduce our total epistemic assessment to overall obviousness level .95. Reflecting again, we might add still a bit more doubt, reducing the obviousness level to .925, and so on, converging toward obviousness level .9. And obviousness level .9 might be good enough for the Moorean argument. Obviously (?), these are fake numbers, but the idea should be clear enough. The Moorean argument doesn't require that the existence of conscious experience be utterly, jaw-droppingly, head-smackingly, fall-to-your-knees, level 1 obvious. Maybe the existence of consciousness is that obvious. But all the Moorean argument requires is that the existence of consciousness be obvious enough that we reasonably judge in advance that no scientific or philosophical argument against it should justifiably win us over.
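For what it's worth, these fake numbers have the same convergent structure as in the Hume case: the partial sums run 1, .95, .925, .9125, ..., with limit

\[
1 - 0.05 \sum_{n=0}^{\infty} \left(\tfrac{1}{2}\right)^{n} = 1 - 0.10 = 0.90.
\]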

Reverse lower-order doubts with some of the higher-order doubts. Overall obviousness might sometimes increase as one proceeds upward to higher orders of reflection. For example, maybe after thinking about whether it's obvious that it's obvious that [eight times] it's obvious, our summary assessment of the total obviousness of the proposition should be higher than our summary assessment after thinking about whether it's obvious that it's obvious that [seven times] it's obvious. There's no guarantee that with each higher level of consideration the total amount of doubt should increase. We might find as we go up that the total amount of obviousness fluctuates around some very high degree of obviousness. We might then reasonably surmise that further higher levels will stay within that range, which might be high enough for the Moorean argument to succeed.

---------------------------------

[1] Actually, I think there's some ambiguity about what strong illusionism amounts to, since what Kammerer denies the existence of is "phenomenal consciousness", and it's unclear whether this really is the radical thesis that it is sometimes held to be or whether it's instead really just the rejection of a philosopher's dubious notion. For present purposes, I'm interpreting Kammerer as holding the radical view. See my discussions here and here.

[2] Kammerer uses "uniquely obvious" here, and "super-Moorean", asking whether it's uniquely obvious that it's uniquely obvious. But I don't think uniqueness is essential to the argument. For example, that I exist might also be obvious with the required strength.

Friday, December 11, 2020

On Self-Defeating Skeptical Arguments

Usually, self-defeating arguments are bad. If I say "Trust me, you shouldn't trust anyone", my claim (you shouldn't trust anyone), if true, undermines the basis I've offered in support (that you should trust me). Whoops!

In skeptical arguments, however, self-defeat can sometimes be a feature rather than a bug. Michel de Montaigne compared skeptical arguments to laxatives. Self-defeating skeptical arguments are like rhubarb. They flush out your other opinions first and themselves last.

Let's consider two types of self-defeat:

In propositional self-defeat, the argument for proposition P relies on a premise inconsistent with P.

In methodological self-defeat, one relies on a certain method to reach the conclusion P, but that very conclusion implies that the method employed shouldn't be relied upon.

My opening example is most naturally read as methodologically self-defeating: the conclusion P ("you shouldn't trust anyone") implies that the method employed (trusting my advice) shouldn't be relied upon.

Since methods (other than logical deduction itself) can typically be characterized propositionally and then loaded into a deduction, we can model most types of methodological self-defeat propositionally. In the first paragraph, maybe, I invited my interlocutor to accept the following argument (with P1 as shared background knowledge):

P1 (Trust Principle). If x is trustworthy and if x says P, then P.
P2. I am trustworthy.
P3. I say no one is trustworthy.
C. Therefore, no one is trustworthy.

C implies the falsity of P2, on which the reasoning essentially relies. (There are surely versions of the Trust Principle which better capture what is involved in trust, but you get the idea.)

Of course, there is one species of argument in which a contradiction between the premises and the conclusion is exactly what you're aiming for: reductio ad absurdum. In a reductio, you aim to prove P by temporarily assuming not-P and then showing how a contradiction follows from that assumption. Since any proposition that implies a contradiction must be false, you can then conclude that it's not the case that not-P, i.e., that it is the case that P.

We can treat self-defeating skeptical arguments as reductios. In Farewell to Reason, Paul Feyerabend is clear that he intends a structure of this sort.[1] His critics, he says, complain that there's something self-defeating in using philosophical reasoning to show that philosophical reasoning shouldn't be relied upon. Not at all, he replies! It's a reductio. If philosophical reasoning can be relied upon, then [according to Feyerabend's various arguments] it can't be relied upon. We must conclude, then, that philosophical reasoning can't be relied upon. (Note that although "philosophical reasoning can't be relied upon" is the P at the end of the reductio, we don't accept it because it follows from the assumptions but rather because it is the negation of the opening assumption.) The ancient skeptic Sextus Empiricus (who inspired Montaigne) appears sometimes to take basically the same approach.

Similarly, in my skeptical work on introspection, I have relied on introspective reports to argue that introspective reports are untrustworthy. Like Feyerabend's argument, it's a methodological self-defeat argument that can be formulated as a reductio. If introspection is a reliable method, then various contradictions follow. Therefore, introspection is not a reliable method.

You know who drives me bananas sometimes? G.E. Moore. It's annoyance at him (and some others) that inspires this post.

Here is a crucial turn in one of Moore's arguments against dream skepticism. (According to dream skepticism, for all you know you might be dreaming right now.)

So far as I can see, one premiss which [the dream skeptic] would certainly use would be this: "Some at least of the sensory experiences which you are having now are similar in important respects to dream-images which actually have occurred in dreams." This seems a very harmless premiss, and I am quite willing to admit that it is true. But I think there is a very serious objection to the procedure of using it as a premiss in favour of the derived conclusion. For a philosopher who does use it as a premiss, is, I think, in fact implying, though he does not expressly say, that he himself knows it to be true. He is implying therefore that he himself knows that dreams have occurred.... But can he consistently combine this proposition that he knows that dreams have occurred, with his conclusion that he does not know that he is not dreaming?... If he is dreaming, it may be that he is only dreaming that dreams have occurred... ("Certainty", p. 270 in the linked reprint).

Moore is of course complaining here of self-defeat. But if the dream skeptic's argument is a reductio, self-contradiction is the aim and the intermediate claims needn't be known.

----------------------------------

ETA 11:57 a.m.: I see from various comments in social media that that last sentence was too cryptic. Two clarifications.

First, although the intermediate claims needn't be known, everything in the reductio needs to be solid except insofar as it depends on not-P. Otherwise, it's not necessarily not-P to blame for the contradiction.

Second, here's a schematic example of one possible dream-skeptical reductio: Assume for the reductio that I know I'm not currently dreaming. If so, then I know X and Y about dreams. But if X and Y are true about dreams, then I don't know I'm not currently dreaming. Contradiction; so the opening assumption fails, and I don't know I'm not currently dreaming.

----------------------------------

[1] I'm relying on my memory of Feyerabend from years ago. Due to the COVID shutdowns, I don't currently have access to the books in my office.

Thursday, September 27, 2018

Philosophical Skepticism Is, or Should Be, about Credence Rather Than Knowledge

Philosophical skepticism is usually regarded as primarily a thesis about knowledge -- the thesis that we don't know some of the things that people ordinarily take themselves to know (such as that they are awake rather than dreaming or that the future will resemble the past). I prefer to think about skepticism without considering the question of "knowledge" at all.

Let me explain.

I know some things about which I don't have perfect confidence. I know, for example, that my car is parked in Lot 1. Of course it is! I just parked it there ninety minutes ago, in the same part of the parking lot where I've parked for over a year. I have no reason to think anything unusual is going on. Now of course it might have been stolen or towed, for some inexplicable reason, in the past ninety minutes, or I might be having some strange failure of memory. I wouldn't lay 100,000:1 odds on it -- my retirement funds gone if I'm wrong, $10 more in my pocket if I'm right. My confidence or credence isn't 1.00000. Of course there's a small chance it's not where I think it is. Acknowledging all of this, it's still, I think, reasonable for me to say that I know where my car is parked.
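For concreteness (plugging in a made-up retirement-fund value of $1,000,000 against the $10 gain): laying 100,000:1 odds has positive expected value only at a credence p with

\[
p \cdot \$10 > (1 - p) \cdot \$1{,}000{,}000 \quad\Longleftrightarrow\quad p > \tfrac{100{,}000}{100{,}001} \approx 0.99999,
\]

a confidence higher than I can honestly muster about the location of my car.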

Now we could argue about this; and philosophers will. If I'm not completely certain that my car is in Lot 1, if I can entertain some reasonable doubts about it, if I'm not willing to just entirely take it for granted, then maybe it's best to say I don't really know my car is there. There is something admittedly odd about saying, "Yes, I know my car is there, but of course it might have recently been towed." Admittedly, explicitly allowing that possibility stands in tension, somehow, with simultaneously asserting the knowledge.

In a not-entirely-dissimilar way, I know that I am not currently dreaming. I am almost entirely certain that I am not dreaming, and I believe I have excellent grounds for that high level of confidence. And yet I think it's reasonable to allow myself a smidgen of doubt on the question. Maybe dreams can be (though I don't think so) this detailed and realistic; and if so, maybe this is one such super-realistic dream.

Now let's imagine two sorts of debates that we could have about these questions:

Debate 1: same credences but disagreement about knowledge. Philosopher A and Philosopher B both have 99.9% credence that their car is in Lot 1 and 99.99% credence that they are awake. Their degrees of confidence in these propositions are identical. But they disagree about whether it is correct to say, in light of their reasonable smidgens of doubt, that they know. [ETA 10:11 a.m.: Assume these philosophers also regard their own degrees of credence as reasonable. HT Dan Kervick.]

Debate 2: different credences but agreement about knowledge. Philosopher C and Philosopher D differ in their credences: Philosopher C thinks it is 100% certain (alternatively, 99.99999% certain) that she is awake, and Philosopher D has only a 95% credence; but both agree that they know that they are awake. Alternatively, Philosopher E is 99.99% confident that her car is in Lot 1 and Philosopher F is 99% confident; but they agree that, given their small amounts of reasonable doubt, they don't strictly speaking know.

I suggest that in the most useful and interesting sense of "skeptical", Philosophers A and B are similarly skeptical or unskeptical, despite the fact that they would say something different about knowledge. They have the same degrees of confidence and doubt; they would make (if rational) the same wagers; their disagreement seems to be mostly about a word or the proper application of a concept.

Conversely, Philosophers C and E are much less skeptical than Philosophers D and F, despite their agreement about the presence or absence of knowledge. They would behave and wager differently (for instance, Philosopher D might attempt a test to see whether he is dreaming). They will argue, too, about the types of evidence available or the quality of that evidence.

The extent of one's philosophical skepticism has more to do with how much doubt one thinks is reasonable than with whether, given a fixed credence or degree of doubt, one thinks it's right to say that one genuinely knows.

How much doubt is reasonable about whether you're awake? In considering this issue, there's no need to use the word "knowledge" at all! Should you just have 100% credence, taking it as an absolute certainty foundational to your cognition? Should you allow a tiny sliver of doubt, but only a tiny sliver? Or should you be in some state of serious indecision, giving the alternatives approximately equal weight? Similarly for the possibility that you're a brain in a vat, or that the sun will rise tomorrow. Philosophers in the first group are radically anti-skeptical (Moore, Wittgenstein, Descartes by the end of the Meditations); philosophers in the third group are radically skeptical (Sextus, Zhuangzi in Inner Chapter 2, Hume by the end of Book 1 of the Treatise); philosophers in the middle group admit a smidgen of skeptical doubt. Within that middle group, one might think the amount of reasonable doubt is trivially small (e.g., 0.00000000001%), or one might think that the amount of reasonable doubt is small but not trivially small (e.g., 0.001%). Debate about which of these four attitudes is the most reasonable (for various possible forms of skeptical doubt) is closer to the heart of the issue of skepticism than are debates about the application of the word "knowledge" among those who agree about the appropriate degree of credence.

[Note: In saying this, I do not mean to commit to the view that we can or should always have precise numerical credences in the propositions we consider.]

--------------------------------------------

Related: 1% Skepticism (Nous 2017).

Should I Try to Fly, on the Off-Chance This Might Be a Dream Body? (Dec 18, 2013).

Tuesday, January 30, 2018

The Philosophical Overton Window?

It seems like I've been hearing a lot recently about the "Overton Window" in politics. The idea is that there's a range of normal policy positions (within the window), which a politician can adopt without being regarded as radical or extreme; and then there are radical or extreme positions, outside of the window. Over time, what is within the window can change. Gay marriage, for example, was outside of the window in U.S. politics in the 1980s, then entered the window in the 1990s or early 2000s.

A common thought is that one way to move the window is to prominently voice a position so extreme that a somewhat less extreme position seems moderate in comparison, and perhaps enters the window. After Bernie Sanders starts saying "free college education for everyone!", maybe "only" offering $10,000 toward every student's tuition no longer seems extreme.

Before going further, a big heaping caveat. I figured I'd go back to the original Overton article to confirm that the picture in the popular press conforms to the scholarship. (Reports of the Dunning-Kruger effect, for example, which is also recently hot in blogs and op-eds, often do not.)

And... whoops. There is no Overton article! There is no scholarship. Not unless you count Glenn Beck. This Joseph P. Overton was a not-very-well-known libertarian think-tank guy who died in a plane crash before writing the idea up. As far as I can tell, this is as close as we get to the root scholarly source. (See also Laura Marsh's discussion.)

Still, the idea has some theoretical appeal. Might it capture some of the dynamics in philosophy?

For it to work, first we'd need some sense of what positions qualify as extreme and what positions qualify as moderate in a philosophical cultural context. Then we'd need some way of measuring (through citations?) the increasing visibility of an extreme position, so that we could see whether that increased visibility opens up "moderate" philosophers to positions they might previously have regarded as too extreme.

Here's one possibility: Panpsychism is the view that everything in the universe is conscious, even elementary particles. Generally, it's regarded as an extreme position. However, it has recently been gaining visibility. If the Overton Window idea is correct, then we might expect some formerly "extreme" positions in that direction, but not as extreme as panpsychism, to come to seem less extreme or maybe even moderate.

Hmmmmm. I'm not sure it's so. A couple of obvious candidates are group consciousness and plant cognition. These would seem to be less extreme positions in the same direction as panpsychism, since instead of ascribing mind or consciousness to everything, they extend it to only a limited range of things that aren't usually regarded as having mental lives. If the Overton Window idea is right, then, given the increasing visibility of radical panpsychism, group consciousness and plant cognition will come to seem less extreme than they previously were.

Hard to tell if that's true. Both positions are probably more popular now than they were 15 years ago (in academic Anglophone philosophy), but they'd still probably be considered extreme.

Eh. You know what? My heart isn't in it. I'm too bummed about the Glenn Beck thing. I wanted this to be an idea with a more solid scholarly foundation.

[image source]

Friday, January 12, 2018

Grounds for a Sliver of Skepticism

Yesterday, Philosophy Bites released a brief podcast interview of me on skepticism. Listening to the interview now, I feel that I didn't frame my project as well as I might have, so I'll add a few remarks here.

I want to think about what grounds we might have for a non-trivial sliver of radically skeptical doubt.

There is, in my mind, an important difference between, for example, "brain-in-a-vat" skepticism and dream skepticism. Brain-in-a-vat skepticism asks you how you know that genius alien neuroscientists didn't remove your brain last night while you were sleeping, drop it into a vat, and start feeding it stimuli as though you were having a normal day. Dream skepticism asks how you know that you are not currently dreaming. The difference is this: There are no grounds for thinking that there's any but an extremely remote chance that you have been envatted, while there are some reasonable grounds for thinking there's a non-trivial sliver of a chance that you are presently dreaming.

It's crucial here to recognize the role played by theories that are probably wrong. It is, I think, probably wrong that people often have sensory experiences just like waking experiences when they sleep. Dreams are, in my view, always sketchy or even imagistic, rather than quasi-sensory with rich realistic detail. However, I'm hardly certain of this theory, and some prominent dream theorists argue that dream experiences are often highly realistic or even phenomenologically indistinguishable from waking life (e.g., Revonsuo 1995; Hobson, Pace-Schott & Stickgold 2000; Windt 2010). Conditionally on accepting that latter view, it seems to me that I ought reasonably to have some doubt about my current state. Maybe this now is one of those highly realistic dreams.

The idea here is that there are grounds for accepting, as a live possibility, a theoretical view from which it seems to follow that I might be radically wrong about my current situation. I don't prefer that theoretical view; but neither can I reject it with high certainty. It is thus reasonable for me to reserve a non-trivial sliver of doubt about my current wakefulness.
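To make the structure of that reasoning explicit, here is one minimal way to put it in numbers -- the figures are invented purely for illustration:

$$
P(\text{dreaming now}) \;\approx\; P(\text{realistic-dream theory}) \times P(\text{dreaming at this moment} \mid \text{theory})
$$

So, for instance, a 0.2 credence in the realistic-dream theory times a 0.05 chance of dreaming at an arbitrary moment (conditional on that theory) yields a credence of about 0.01 that this is a dream -- a small but non-trivial sliver.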

I would argue similarly with respect to two other skeptical possibilities: the idea that we are Artificial Intelligences living in a simulated world, and a somewhat less familiar form of skepticism I call "cosmological skepticism". In both cases, there are grounds, I think, for treating as a live possibility theories that, while probably not correct, would, if correct, imply that you might easily be radically wrong in many of your ordinary beliefs.

In concluding the interview, I also make an empirical conjecture: that seriously entertaining radically skeptical possibilities has the psychological effect of reducing dogmatic self-confidence and increasing tolerance, even regarding non-skeptical possibilities. I hope to more fully explore this in a future post.

Full interview here.

Related papers:

  • 1% Skepticism
  • The Crazyist Metaphysics of Mind
  • Experimental Evidence for the Existence of an External World
  • Zhuangzi's Attitude Toward Language and His Skepticism
Wednesday, July 05, 2017

    What's the Likelihood That Your Mind Is Constituted by a Rabbit Reading and Writing on Long Strips of Turing Tape?

Your first guess is probably "not very likely."

    But consider this argument:

    (1) A computationalist-functionalist philosophy of mind is correct. That is, mentality is just a matter of transitioning between computationally definable states in response to environmental inputs, in a way that hypothetically could be implemented by a computer.

(2) As Alan Turing famously showed, it's possible to implement any finitely computable function on a strip of tape containing alphanumeric characters, given a read-write head that implements simple rules for writing and erasing characters and moving itself back and forth along the tape. (A toy implementation of such a machine appears after the argument below.)

    (3) Given 1 and 2, one way to implement a mind is by means of a rabbit reading and writing characters on a long strip of tape that is properly responsive, in an organized way, to its environment. (The rabbit will need to adhere to simple rules and may need to live a very long time, so it won't be exactly a normal rabbit. Environmental influence could be implemented by alteration of the characters on segments of the tape.)

    (4) The universe is infinite.

    (5) Given 3 and 4, the cardinality of "normally" implemented minds is the same as the cardinality of minds implemented by rabbits reading and writing on Turing tape. (Given that such Turing-rabbit minds are finitely probable, we can create a one-to-one mapping or bijection between Turing-rabbit minds and normally implemented minds, for example by starting at an arbitrary point in space and then pairing the closest normal mind with the closest Turing-rabbit mind, then pairing the second-closest of each, then pairing the third-closest....)

    (6) Given 5, you cannot justifiably assume that most minds in the universe are "normal" minds rather than Turing-rabbit implemented minds. (This might seem unintuitive, but comparing infinities often yields such unintuitive results. ETA: One way out of this would be to look at the ratios in limits of sequences. But then we need to figure out a non-question-begging way to construct those sequences. See the helpful comments by Eric Steinhart on my public FB feed.)

    (7) Given 6, you cannot justifiably assume that you yourself are very likely to be a normal mind rather than a Turing-rabbit mind. (If 1-3 are true, Turing-rabbit minds can be perfectly similar to normally implemented minds.)
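For concreteness, here is a minimal Turing-machine simulator in Python. The machine and its rule table are my own toy example (a binary incrementer), sketched only to illustrate the kind of tape-and-head device premise 2 invokes:

```python
# A minimal one-tape Turing machine simulator (illustrative toy only).
# rules maps (state, symbol) -> (new_symbol, move, new_state),
# where move is -1 (left) or +1 (right); '_' is the blank symbol.

def run_turing_machine(tape, rules, state="start", head=0, max_steps=10_000):
    tape = dict(enumerate(tape))        # sparse tape; unwritten cells are blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    cells = [tape.get(i, "_") for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip("_")

# Rule table for binary increment: scan to the rightmost bit, then carry left.
rules = {
    ("start", "0"): ("0", +1, "start"),
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),   # past the right end: begin carrying
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry = 0, carry continues left
    ("carry", "0"): ("1", -1, "halt"),    # absorb the carry
    ("carry", "_"): ("1", -1, "halt"),    # overflow: write a new leading 1
}

print(run_turing_machine("1011", rules))  # 1011 (11) + 1 -> "1100" (12)
```

Premise 3 then asks us to imagine the read-write head replaced by a (very long-lived, very rule-bound) rabbit.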

I explore this possibility in "The Turing Machines of Babel", a story in this month's issue of Apex Magazine. I'll link to the story once it's available online, but also consider supporting Apex by purchasing the issue now.

    The conclusion is of course "crazy" in my technical sense of the term: It's highly contrary to common sense and we aren't epistemically compelled to believe it.

    Among the ways out: You could reject the computational-functional theory of mind, or you could reject the infinitude of the universe (though these are both fairly common positions in philosophy and cosmology these days). Or you could reject my hypothesized rabbit implementation (maybe slowness is a problem even with perfect computational similarity). Or you could hold a view which allows a low ratio of Turing rabbits to normal minds despite the infinitude of both. Or you could insist that we (?) normally implemented minds have some epistemic access to our normality even if Turing-rabbit minds are perfectly similar and no less abundant. But none of those moves is entirely cost-free, philosophically.

    Notice that this argument, though skeptical in a way, does not include any prima facie highly unlikely claims among its premises (such as that aliens envatted your brain last night or that there is a demon bent upon deceiving you). The premises are contentious, and there are various ways to resist my combination of them to draw the conclusion, but I hope that each element and move, considered individually, is broadly plausible on a fairly standard 21st-century academic worldview.

    The basic idea is this: If minds can be implemented in strange ways, and if the universe is infinite, then there will be infinitely many strangely implemented minds alongside infinitely many "normally" implemented minds; and given standard rules for comparing infinities, it seems likely that these infinities will be of the same cardinality. In an infinite universe that contains infinitely many strangely implemented minds, it's unclear how you could know you are not among the strange ones.
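To make the cardinality step explicit (this formalization is mine, under the assumption that minds can be enumerated by distance from an arbitrary origin): let $n_k$ be the $k$-th closest normally implemented mind and $r_k$ the $k$-th closest Turing-rabbit mind. Then the pairing

$$
f(n_k) = r_k, \qquad k = 1, 2, 3, \ldots
$$

is one-to-one; if both sequences are infinite, $f$ is a bijection, and the two sets have the same cardinality.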

    Tuesday, December 06, 2016

    A Philosophical Critique of the Big Bang Theory, in Four Minutes

    I've been invited to be one of four humanities panelists after a public lecture on the early history of the universe. (Come by if you're in the UCR area. ETA: Or watch it live-streamed.) The speaker, Bahram Mobasher, has told me he likes to keep it tightly scientific -- no far-out speculations about the multiverse, no discussion of possible alien intelligences. Instead, we'll hear about H/He ratios, galactic formation, that sort of stuff. I have nothing to say about H/He ratios.

    So here's what I'll say instead:

    Alternatively, here’s a different way our universe might have begun: Someone might have designed a computer program. They might have put simulated agents in that computer program, and those simulated agents might be us. That is, we might be artificial intelligences inside an artificial environment created by some being who exists outside of our visible world. And this computer program that we are living in might have started ten years ago or ten million years ago or ten minutes ago.

This is called the Simulation Hypothesis. Maybe you’ve heard that Elon Musk, the famous tycoon of PayPal, Tesla, and SpaceX, believes that the Simulation Hypothesis is probably true.

    Most of you probably think that Musk is wrong. Probably you think it vastly more likely that Professor Mobasher’s story is correct than that the Simulation Hypothesis is correct. Or maybe you think it’s somewhat more likely that Mobasher is correct.

    My question is: What grounds this sense of relative likelihood? It’s doubtful that we can get definite scientific proof that we are not in a simulation. But does that mean that there are no rational constraints on what it’s more or less reasonable to guess about such matters? Are we left only with hard science on the one hand and rationally groundless faith on the other?

    No, I think we can at least try to be rational about such things and let ourselves be moved to some extent by indirect or partial scientific evidence or plausibility considerations.

    For example, we can study artificial intelligence. How easy or difficult is it to create artificial consciousness in simulated environments, at least in our universe? If it’s easy, that might tend to nudge up the reasonableness of the Simulation Hypothesis. If it’s hard, that might nudge it down.
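As a toy illustration of this kind of nudging (every number below is invented; nothing depends on the particular values), one could think of such indirect evidence as supplying a Bayes factor:

```python
# A toy Bayes-factor update for the Simulation Hypothesis (invented numbers).

prior_p = 0.01                               # made-up prior credence in the hypothesis
prior_odds = prior_p / (1 - prior_p)

bayes_factor = 5                             # suppose artificial consciousness proves easy
posterior_odds = prior_odds * bayes_factor

posterior_p = posterior_odds / (1 + posterior_odds)
print(round(posterior_p, 3))                 # ~0.048: nudged up, still far from certain
```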

    Or we can look for direct evidence that we are in a designed computer program. For example, we can look for software glitches or programming notes from the designer. So far, this hasn’t panned out.

    Here’s my bigger point. We all start with framework assumptions. Science starts with framework assumptions. Those assumptions might be reasonable, but they can also be questioned. And one place where cosmology intersects with philosophy and the other humanities and sciences is in trying to assess those framework assumptions, rather than simply leaving them unexamined or taking them on faith.

    [image source]

    Related:

    "1% Skepticism" (Nous, forthcoming)

    "Reinstalling Eden" (with R. Scott Bakker; Nature, 2013)

    Monday, June 06, 2016

    If You/I/We Live in a Sim, It Might Well Be a Short-Lived One

    Last week, the famous Tesla and SpaceX CEO and PayPal cofounder Elon Musk said that he is almost certain that we are living in a sim -- that is, that we are basically just artificial intelligences living in a fictional environment in someone else's computer.

    The basic argument, adapted from philosopher Nick Bostrom, is this:

    1. Probably the universe contains vastly many more artificially intelligent conscious beings, living in simulated environments inside of computers ("sims"), than flesh-and-blood beings living at the "base level of reality" ("non-sims", i.e., not living inside anyone else's computer).

    2. If so, we are much more likely to be sims than non-sims.
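Premise 2 is just an indifference calculation. A minimal sketch, with an invented ratio:

```python
# If sims vastly outnumber non-sims and I can't tell which I am,
# indifference says my chance of being a sim tracks the ratio.
sims, non_sims = 10**6, 1        # invented: a million sims per non-sim
print(sims / (sims + non_sims))  # ~0.999999
```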

    One might object in a variety of ways: Can AIs really be conscious? Even if so, how many conscious sims would there likely be? Even if there are lots, maybe somehow we can tell we're not them, etc. Even Bostrom only thinks it 1/3 likely that we're sims. But let's run with the argument. One natural next question is: Why think we are in a large, stable sim?

Advocates of versions of the Sim Argument (e.g., Bostrom, Chalmers, Steinhart) tend to downplay the skeptical consequences: The reader is implicitly or explicitly invited to think or assume that the whole planet Earth (at least) is (probably) all in the same giant sim, and that the sim has (probably) endured for a long time and will endure for a long time to come. But if the Sim Argument relies on some version of Premise 1 above, it's not clear that we can help ourselves to such a non-skeptical view. We need to ask: what proportion of the conscious AIs (at least the ones relevantly epistemically indistinguishable from us) live in large, stable sims, and what proportion live in small or unstable sims?

    I see no reason here for high levels of optimism. Maybe the best way for the beings at the base level of reality to create a sim is to evolve up billions or quadrillions of conscious entities in giant stable universes. But maybe it's just as easy, just as scientifically useful or fun, to cut and paste, splice and spawn, to run tiny sims of people in little offices reading and writing philosophy for thirty minutes, to run little sims of individual cities for a couple of hours before surprising everyone with Godzilla. It's highly speculative either way, of course! That speculativeness should undermine our confidence about which way it might be.

    If we're in a sim, we probably can't know a whole lot about the motivations and computational constraints of the gods at the base level of reality. (Yes, "gods".) Maybe we should guess 50/50 large vs. small? 90/10? 99/1? (One reason to skew toward 99/1 is that if there are very large simulated universes, it will only take a few of them to have the sims inside them vastly outnumber the ones in billions of small universes. On the other hand, they might be very much more expensive to run!)
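To see how sensitive the question is, here's a back-of-the-envelope calculation with entirely invented counts -- the point of the post is precisely that we don't know these numbers:

```python
# Fraction of sims who live in large, stable sims, under made-up assumptions.
large_sims = 5                    # a few big, expensive world-simulations
people_per_large = 10**10
small_sims = 10**9                # many cheap city- or room-sized sims
people_per_small = 100

in_large = large_sims * people_per_large
in_small = small_sims * people_per_small
print(in_large / (in_large + in_small))   # ~0.33 -- and a factor of ten either
                                          # way swings it toward 0 or toward 1
```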

    If you/I/we are in a small sim, then some version of radical skepticism seems to be warranted. The world might be only ten minutes old. The world might end in ten minutes. Only you and your city might exist, or only you in your room.

    Musk and others who think we might be in a simulated universe should take their reasoning to the natural next step, and assign some non-trivial credence to the radically skeptical possibility that this is a small or unstable sim.

    -----------------------------------------

    Related:

    "Skepticism, Godzilla, and the Artificial Computerized Many-Branching You" (Nov 15, 2013).

    "Our Possible Imminent Divinity" (Jan 2, 2014).

    "1% Skepticism" (forthcoming, Nous).

    [image source]

    Thursday, March 03, 2016

    From God to Skepticism

    Maybe God created the world. But what kind of god?

    It seems reasonable to have doubts about God's moral character. Some religions claim that if God exists, he/she/it is morally perfect. Other religions, especially polytheistic religions, make no such claims. Even the Old Testament, if read at face value, does not appear to portray a morally perfect God.

    And of course there's the "problem of evil": The fact that the world is -- or appears to be -- full of needless suffering and wickedness that one might hope a morally good God would work to prevent. God could, it seems, have given Hitler a heart attack. God could, it seems, prevent people from dying young of painful diseases. One possible explanation for God's failure to prevent evil and suffering is that evil and suffering really don't bother God so much. Maybe God even enjoys watching us suffer. That would be one reason to create a world -- as a kind of LiveLeak voyeurism on human misery.

    Similarly, it seems reasonable to have doubts about the extent of God's power. Maybe God really wanted to stop the Holocaust, but just couldn't get there in time, or was constrained by non-interference regulations enforced by the Council of Worldbuilders, or was so busy stopping other bad things that this one slipped through the net. Maybe God would have liked to create human teeth sufficiently robust that they did not decay, but had to compromise given the resources at hand.

    Here's one way gods might work: by creating simulated worlds inside their computers, populated by conscious AIs who experience those simulated worlds as real. (Imagine the computer game "The Sims", but with conscious people inside.) I've argued that any manager of such a world would literally be a god to the beings inside that world. But of course those sorts of gods might be highly limited in their abilities. Maybe we too are in a Sim. (Personally, this strikes me as a more plausible version of theism than orthodox Catholic theology.) There's no guarantee that if some god launched our world, that god is all-powerful.

    So maybe God (if there is a god) is all-powerful and morally perfect, and maybe not. I think it's reasonable to have an open mind about that question. But now radically skeptical doubts seem to arise.

An imperfect god might, for example, create millions of brief universes, one after the next, as trial runs -- beta versions, or quick practice sketches. An imperfect god might require multiple attempts to get things right. If so, then maybe we're in one of the betas or sketches, without much past or future, rather than in the final product.

    An imperfect god, once it/she/he has the knack of things, might just create favorite moments, or interesting moments, in multiple copies -- might create you, or your city, or your planet with a fake past, then suddenly introduce a change of laws, or a disaster, or a highly unlikely stroke of good fortune, just to see what happens. Why not? If you're going to create a world, you might as well play around with it.

    An imperfect god might create a universe as a project that runs for a while, but which will be shut down the moment God gets bored or receives a passing grade from the other gods or fails to pay the utility bill.

    It was crucial to Descartes' famous (and famously unsuccessful) argument against skepticism that he establish that God is perfect and, specifically, not a deceiver. Descartes was right to emphasize this for his anti-skeptical aims. If you admit that God might have created the world but then don't put substantial constraints on God's behavior, then you are imagining a being with the power and motive to create worlds who really kind of might do anything -- and who (if we use human psychology as our best-guess model) seems reasonably likely to do something other than create a boring, stable, predictable, one-shot universe of the sort we normally think we inhabit.

    ***************************************

    Related:

  • Reinstalling Eden (with R. Scott Bakker; Nature 503: 562, Nov. 28, 2013)
  • Our Possible Imminent Divinity (Jan. 2, 2014)
  • What Kelp Remembers (Weird Tales, Apr. 14, 2014)
  • Out of the Jar (Magazine of Fantasy & Science Fiction, Jan/Feb 2015)
  • 1% Skepticism (Nous, forthcoming)


[image source]

    Thursday, October 01, 2015

    Against the "Still Here" Reply to the Boltzmann Brains Problem

    I find the Boltzmann Brain skeptical scenario interesting. I've discussed it in past posts, as well as in this paper, which I'll be presenting in Chapel Hill on Saturday.

A Boltzmann Brain, or "freak observer," is a hypothetical self-aware entity that arises from a low-likelihood fluctuation in a disorganized system. Suddenly, from a chaos of gases, say, 10^27 atoms just happen to converge in exactly the right way to form a human brain thinking to itself, "I wonder if I'm a Boltzmann Brain". Extremely unlikely. But, on many physical theories, not entirely impossible. Given infinite time, perhaps inevitable! Some cosmological theories seem to imply that Boltzmann Brains vastly outnumber ordinary observers.

This invites the question: might I be a Boltzmann Brain?

The idea started getting attention in the physics community in the late 2000s. One early response, which seems to me superficially appealing but unable to withstand scrutiny, is what I'll call the Still Here response. Here's how J. Richard Gott III put it in 2008:

    How do I know that I am an ordinary observer, rather than just a BB [Boltzmann Brain] with the same experiences up to now? Here is how: I will wait 10 seconds and see if I am still here. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ... Yes I am still here. If I were a random BB with all the perceptions I had had up to the point where I said "I will wait 10 seconds and see if I am still here," which the Copernican Principle would require -- as I should not be special among those BB's -- then I would not be answering that next question or lasting those 10 extra seconds.

    There's also a version of the Still Here response in Max Tegmark's influential 2014 book:

    Before you get too worried about the ontological status of your body, here's a simple test you can do to determine whether you're a Boltzmann brain. Pause. Introspect. Examine your memories. In the Boltzmann-brain scenario, it's indeed more likely that any particular memories that you have are false rather than real. However, for every set of false memories that could pass as having been real, very similar sets of memories with a few random crazy bits tossed in (say, you remembering Beethoven's Fifth Symphony sounding like pure static) are vastly more likely, because there are vastly more disembodied brains with such memories. This is because there are vastly more ways of getting things almost right than of getting them exactly right. Which means that if you really are a Boltzmann brain who at first thinks you're not, then when you start jogging your memory, you should discover more and more utter absurdities. And after that you'll feel your reality dissolving, as your constituent particles drift back into the cold and almost empty space from which they came.

In other words, if you're still reading this, you're not a Boltzmann brain (pp. 307-308).

    I see two problems with the Still Here response.

    First, we can reset the clock. While after ten seconds I could ask the question "am I a Boltzmann Brain who has already lasted ten seconds?", that question is not the sharpest form of the skeptical worry. A sharper question would be this, "Am I a Boltzmann Brain who came into existence just now with a false memory of having counted out ten seconds?" In other words, there seems to be nothing that prevents the Boltzmann Brain skeptic from restarting the clock at will. Similarly, a Boltzmann Brain might come into existence thinking that it had just finished introspecting its memories Tegmark-style, having found them coherent. That's the possibility that the Boltzmann Brain skeptic will be worried about, after having completed (or seeming to have completed) Tegmark's test. The Still Here response begs the question, or argues in a circle, by assuming that we can have veridical memories of implementing such tests over the course of tens of seconds; but it is exactly the veridicality of such memories, even over short durations, that the Boltzmann Brain hypothesis calls into doubt.

Second, this response ignores the base rate of Boltzmann Brains. It's widely assumed that if there are Boltzmann Brains, they might be vastly more numerous than normally embodied observers. For example, a universe might produce a finite number of normal observers and then settle into an infinitely enduring high-entropy state that gives rise, at extremely long intervals, to an infinite number of Boltzmann Brains. Since infinitude is hard to deal with, let's hypothesize a cosmos with a googolplex (10^(10^100)) of Boltzmann Brains for every normal observer. Given some sort of indifference principle, the Boltzmann Brain argument goes, I should initially assign a 1-in-a-googolplex chance of being a normal observer instead of a Boltzmann Brain. Not good. But now, what are the odds that a Boltzmann Brain can hold it together for ten seconds without lapsing into incoherence? Tiny! Let's assume one in a googol (10^100). The exact number doesn't matter. Setting aside worries about resetting the clock, let's assume that I now find that I have indeed endured coherently for ten seconds. What should be my new odds that I am a Boltzmann Brain? Much lower than googolplex-to-one. Yay! Only about a googolth of a googolplex to one! Let's see, how much is that? Instead of a one followed by a googol of zeros, it's only a one followed by a googol-minus-100 zeros. So... still virtual certainty that I am a Boltzmann Brain.
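That arithmetic in Python, working in log10 since a googolplex won't fit in a float (the survival likelihoods are the invented figures from the paragraph above):

```python
# Posterior odds (Boltzmann Brain : normal observer) after surviving
# ten coherent seconds, computed in log10.

log10_googolplex = 10**100           # log10 of a googolplex

log10_prior_odds = log10_googolplex  # prior odds BB : normal = googolplex : 1
log10_bb_survival = -100             # BB stays coherent for 10s: 1 in a googol
log10_normal_survival = 0            # a normal observer survives with probability ~1

log10_posterior_odds = log10_prior_odds + log10_bb_survival - log10_normal_survival
# = 10**100 - 100: a one followed by (googol - 100) zeros.
# Still overwhelming odds of being a Boltzmann Brain.
print(log10_posterior_odds == 10**100 - 100)  # True
```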

    So how should we respond to the Boltzmann Brain hypothesis, then? Sean Carroll has a two-pronged answer that I think makes a lot of sense.

    First, one can consider whether physical theories can be independently justified which imply a low ratio of Boltzmann Brains to normal observers. Boddy, Carroll, and Pollack 2015 offer such a theory. If it turns out that the best physical theories imply that there are zero or very few Boltzmann Brains, then we lose some of our grounds for worry.

    Second, one can point to the cognitive instability of the Boltzmann Brain hypothesis (Carroll 2010, p. 223, drawing on earlier work by David Albert). Here's how I'd put it: To the extent I think it likely that I am a Boltzmann Brain, I think it likely that evidence I have in favor of that hypothesis is delusional -- which should undercut my credence in that evidence and thus my credence in the hypothesis itself. If I think it 99% likely that I'm a Boltzmann Brain, for example, then I should think it 99% likely that my evidence in favor of the Boltzmann Brain hypothesis is in fact bogus evidence -- false memories, not reflecting real evidence from the world outside -- and that should in turn reduce my credence in the Boltzmann Brain hypothesis.

An interesting feature of Carroll's responses, which distinguishes them from the Still Here response, is this: Carroll's responses appear to be compatible with still assigning a small but non-trivial subjective probability to being a Boltzmann Brain. Maybe the best cosmological theory turns out not to allow for (many) Boltzmann Brains. But we shouldn't have 100% confidence in any such theory -- certainly not at this point in the history of cosmological science -- and if there are still some contender cosmologies that allow for many Boltzmann Brains, we (you? I?) might want to assign a small probability to being a Boltzmann Brain, in view of the acknowledged possibility that the cosmos might, however unlikely it seems, have a non-trivial ratio of Boltzmann Brains to normal observers. And although a greater than 50% credence in the Boltzmann Brain hypothesis seems cognitively unstable in Carroll's sense, it's not clear that, say, an approximately 0.1% credence in the Boltzmann Brain hypothesis would be similarly unstable, since in that case one might still have quite a high degree of confidence in the physical theories that lead one to speculate about the small-but-not-minuscule possibility of being a Boltzmann Brain.

    [image source]