Thursday, March 09, 2023

New Paper in Draft: Let's Hope We're Not Living in a Simulation

I'll be presenting an abbreviated version of this at the Pacific APA in April, as a commentary on David Chalmers' book Reality+.

According to the simulation hypothesis, we might be artificial intelligences living in a virtual reality.  Advocates of this hypothesis, such as Chalmers, Bostrom, and Steinhart, tend to argue that the skeptical consequences aren’t as severe as they might appear.  In Reality+, Chalmers acknowledges that although he can’t be certain that the simulation we inhabit, if we inhabit a simulation, is larger than city-sized and has a long past, simplicity considerations speak against those possibilities.  I argue, in contrast, that cost considerations might easily outweigh considerations of simplicity, favoring simulations that are catastrophically small or brief – small or brief enough that a substantial proportion of our everyday beliefs would be false or lack reference in virtue of the nonexistence of things or events whose existence we ordinarily take for granted.  More generally, we can’t justifiably have high confidence that if we live in a simulation it’s a large and stable one.  Furthermore, if we live in a simulation, we are likely at the mercy of ethically abhorrent gods, which makes our deaths and suffering morally worse than they would be if there were no such gods.  There are reasons both epistemic and axiological to hope that we aren’t living in a simulation.

Paper here.

As always, comments welcome!

10 comments:

  1. Hi Eric,

    Haven't had time to read your paper in depth (sorry!), but one claim you made struck me as unconvincing: "If we live in a simulation, we are likely at the mercy of ethically abhorrent gods, which makes our deaths and suffering morally worse than they would be if there were no such gods."

    I'm not trying to argue for moral relativism or anything like that; I'm simply wondering whether there might be a situation that, by our own standards, would justify the existence of evil in a simulation. For example, suppose that pain and pleasure exist on an axis from -100 to 100, such that the worst possible painful state rates a -100 and the best possible pleasurable state a 100. Now suppose that the worst possible human agonies (e.g., being burned alive, or the mental anguish of losing your child) count as only a -0.1 in the total landscape of pain and pleasure. We can't imagine what veritable hell a -1 experience would constitute, much less a -100 experience.

    Now suppose that our creators live in a world with experiential peaks of +10 and troughs of -10. They decide to create a simulation of conscious beings, but naturally some object that it would be unconscionable to replicate painful experiences in their simulated entities. So, in their merciful wisdom, they cap the painful experiences at -0.1 (they also cap the pleasurable experiences, to avoid the objection that simulated beings live meaningful lives worthy of the free choice to decide their own simulated fates). From their point of view, we are like phenomenal zombies, living lives of hedonic mediocrity.

    Imagine their surprise to hear that there exist simulated beings who complain ruefully about the evil and suffering in their worlds. From our point of view, it would probably be comparable to hearing the complaints of beings who experience nothing worse than an itch. The point is that suffering is relative, and if we could experience a -10 sensation, we would probably think our simulator overlords merciful angels in comparison.

    I'm not trying to extend this into a general argument against the problem of evil, since it could be argued that a truly omnibenevolent God would never let any suffering pass, no matter how minor. But for morally mediocre beings (and is there any reason to think that our simulators wouldn't be morally mediocre?), I think the above would get them a pass.

  2. I would not presume to write a book on reality; the topic just isn't all that vast. But Mr. Chalmers ought to know what he is getting into. My own take on reality is covered in an essay and focuses on contextual facets of the term: those interests, preferences, and motives (IPMs) that may come and go with changes in trends, memes, and mass and popular culture. Changing laws of physics does not enter the discussion. Whether something like antimatter exists, or can or could exist, is not my concern. Other man-made realities emerge when we make them up as we go. They are realities if, and only if, they meet the IPMs of their propagators. I don't know if anyone still clings to "alternate facts." Those were a contextual reality to their adherents. We don't hear much about them right now.

  3. The AI challenge is...it will have hope one day...
    ...Go Eric...make them cry with joy and wonder...

    The strength of a skeptic is from simple hope...being virtuous...
    ...Reality too...without virtue is without hope...

    I am going to invent a video game with my grandson...
    ...to search and see hope is everything in cosmology...

  4. Well and tersely put, Arnold! Sceptics and pragmatists have kinship, I think. Cosmology is not my bowl of porridge, but I don't despair of it for anyone else. We need to try harder, think better, and do the best we can with what we have and know...learning more as we go. Is virtuosity reality? I do not know. But I guess we need to start somewhere. Or end nowhere---hmmmm?

  5. Thanks for the comments, folks!

    Alex: I agree that’s possible. There are lots of possibilities! But is that the most natural reading of the evidence? Is there anything that suggests that it is so?

    Paul & Arnold: Reality is certainly a big question! We’ve all got to take a stab at it though. I don’t disapprove of hope!

  6. Perhaps the Designers are long since dead and we are in a runaway simulation generated inside a black hole (a requirement for the computing power the simulation demands). Would they be ethically abhorrent if they couldn't bring themselves to shut it down after they had achieved whatever purpose they intended?

  7. This comment has been removed by the author.

  8. @Eric: I think there is a middle-ground category between a pathetic (impotent) simulator overlord and an evil one. The middle-ground category is apathetic: they simply don't care enough to "fix" suffering and evil in our world. The example I gave was meant to demonstrate the possibility of justifiable apathy, where our simulator overlords are right not to care about our suffering, given that it is so negligible in the total state of affairs.

    The relevant question is whether justifiable apathy is the more likely state of affairs. I think it is, and I'll give three arguments in support:

    1) The argument via extrapolation: If we organized all life on planet Earth according to a taxonomy of scaling intelligence/hedonic capacity, we would see a correlation between the justifiability of human apathy towards a particular lifeform and the degree of intelligence/hedonic capacity (IHC) that the lifeform exhibits. For instance, we don't really care about killing something like a fly, or about any 'suffering' it might undergo, because the pain response of a fly is so basic that it can't really encompass all the varieties of pain a chicken might experience, much less human pain. In turn, a chicken doesn't seem to have the additional emotional capacity for suffering that we do.

    Importantly, it's not just that we're apathetic towards a fly's 'pain'; we are actually justified in our apathy. Extrapolating from this, it is reasonable to believe that sentient creatures with much greater IHC than ours would also be justified in their apathy towards our suffering.

    2) We have every physical reason to think that the total landscape of IHC extends far beyond the human capacity for it, given our biological constraints. Even if we had already hit a physical limit on the possible range of hedonic qualities (which seems very unlikely), so that no hedonic qualities existed beyond human experience (i.e., no varieties of pain or pleasure beyond the ways humans can feel them), there would still be a matter of scale. A Matrioshka brain that can undergo 100 quintillion human-equivalent "being burned alive" experiences at once will still suffer proportionately far more than any human can, and therefore will still have a claim to justifiable apathy.

    3) It's reasonable to think that our simulator overlords would have to exhibit a capacity for at least as much IHC as humans do in order to simulate us; therefore, if we're being simulated, we should guess that their IHC ranges anywhere from human-level to the maximum possible. Given that we have good physical reasons to believe the maximum IHC could be astronomical compared to human-level experience, the default assumption would seem to be justifiable apathy on the part of our simulator overlords.

  9. No insult intended, but the reasoning and content of such musings is worthy of Heinlein, Bradbury, Farmer, or Herbert: all exemplary science fiction writers who, incidentally, were also unintentional(?) philosophers. I only use the (?) characterization because those thinkers were wise enough to know that philosophy is fun, even interesting, but only a tentative way to earn a living. So, there THAT is. If I had ever thought differently, I suppose I might have tried to do better. This is something like the cult figure Howard the Duck: trapped in a world he never made. Precisely. Philosophy never made the world. It only tries to figure it out.

  10. Eric

    How do the issues around sims dovetail with those of AI?
    Plus, Harari is extremely vigilant about the destructive capacities of AI, but not about the moral quandaries.
    A lot of this comes down to how we can extrapolate about such a novel threat. Is the danger as clear as that of nuclear weapons?
    Finally, can philosophers, "perfectly" analyzed people, and so on have resistance to the bewildering capacities of AI?
