Thursday, August 24, 2017

Am I a Type or a Token? (guest post by Henry Shevlin)

Eric has previously argued that almost any answer to the problem of consciousness involves “crazyism” – that is, a commitment to one or another hypothesis that might reasonably be considered bizarre. So it’s in this spirit of openness to wild ideas that I’d like to throw out one of my own longstanding “crazy” ideas concerning our identity as conscious subjects.

To set the scene, imagine that we have one hundred supercomputers, each separately running a conscious simulation of the same human life. We’re also going to assume that these simulations are all causally coupled together so that they’re in identical functional states at any one time – if a particular mental state type is being realized in one at a given time, it’s also being realized in all the others.
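
(For the computationally inclined, here’s a toy sketch of the setup in Python, with each simulation’s entire functional state compressed to a single number. It’s purely illustrative – nothing in the argument hangs on these details.)

```python
# Toy model of the scenario: one hundred simulations kept causally
# coupled, so that every instance is in the same functional state at
# every tick. The integer "state" stands in for a whole mental state.

class Sim:
    def __init__(self):
        self.state = 0

sims = [Sim() for _ in range(100)]

for tick in range(10):
    shared_update = tick * tick + 1   # one state transition per tick
    for sim in sims:
        sim.state = shared_update     # coupling: every sim gets the same update
    # At every moment, all one hundred realizations agree exactly:
    assert len({sim.state for sim in sims}) == 1
```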

The question I want to ask now is: how many conscious subjects – subjective points of view – exist in this setup? A natural response is “one hundred, obviously!” After all, there are one hundred computers all running their own simulations. But the alternate crazy hypothesis I’d like to suggest is that there’s just one subject in this scenario. Specifically, I want to claim that insofar as two physical realizations of consciousness give rise to a qualitatively identical sequence of experiences, they give rise to a single numerically identical subject of experience.

Call this hypothesis the Identity of Phenomenal Duplicates, or IPD for short. Why would anyone think such a crazy thing? In short, I’m attracted by the idea that the only factors relevant to the identity and individuation of a conscious subject are subjective: crudely, what makes me me is just the way the world seems to me and my conscious reactions to it. As a subject of phenomenal experience, in other words, my numerical identity is fixed just by those factors that are part of my experience, and factors that lie outside my phenomenal awareness (for example, which of many possible computers are running the simulation that underpins my consciousness) are thus irrelevant to my identity.

Putting things another way, I’d suggest that maybe my identity qua conscious subject is more like a type than a token, meaning that a single conscious subject could be multiply instantiated. As a helpful analogy, think about the ontology of something like a song, a novel, or a movie. The Empire Strikes Back has been screened billions of times over the years, but all of these were instantiations of one individual thing, namely the movie itself. If the IPD thesis is correct, then the same might be true for a conscious subject – that I myself (not merely duplicates of me!) could be multiply instantiated across a host of different biological or artificial bodies, even at a single moment. What *I* am, then, on this view, is a kind of subjective pattern or concatenation of such patterns, rather than a single spatiotemporally located object.
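
(Programmers may recognize the type/token distinction as roughly the difference between equality of value and identity of object. A minimal Python illustration, offered only as an analogy:)

```python
# Two list objects with the same contents: one "type", two "tokens".
a = [1, 2, 3]
b = [1, 2, 3]

print(a == b)  # True: qualitatively identical (same type)
print(a is b)  # False: numerically distinct objects (two tokens)

# IPD, very loosely, says conscious subjects individuate like `==`,
# not like `is`: sameness of phenomenal content is sameness of subject.
```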

Here’s an example that might make the view seem (marginally!) more plausible. Thinking back to the one hundred simulations scenario above, imagine that we pick one simulation at random to be hooked up to a robot body, so that it can send motor outputs to the body and receive its sensory inputs. (Note that because we’re keeping all the simulations coupled together, they’ll remain in ‘phenomenal sync’ with whichever sim we choose to embody as a robot.) The robot wakes up, looks around, and is fascinated to learn that it’s suddenly in the real world, having previously spent its life in a simulation. But now it asks us: which of the Sims am I? Am I the Sim running on the mainframe in Tokyo, or the one in London, or the one in São Paulo?

One natural response would be that it was identical to whichever Sim we uploaded the relevant data from. But I think this neglects the fact that all one hundred Sims are causally coupled with one another, so in a sense, we uploaded the data from all of them – we just used one specific access point to get to it. To illustrate this, note that in transferring the relevant information from our Sims to the robot, we might wish (perhaps for reasons of efficiency) to grab the data from all over the place – there’s no reason we’d have to confine ourselves to copying the data over from just one Sim. So here’s an alternate hypothesis: the robot was identical to all of them, because they were all identical to one another – there was just one conscious subject all along! (Readers familiar with Dennett’s Where Am I? may see clear parallels here.)
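
(Again, purely as a toy illustration: because the coupled sims hold identical state, the robot’s data can be assembled from any mixture of access points with the same result. The ‘shard’ framing below is my own hypothetical detail, not part of the scenario:)

```python
import random

# One hundred coupled sims, all holding the same (stand-in) state.
replica_states = [42] * 100

def assemble_robot_state(states, n_shards=10):
    # Pull each shard of the data from a randomly chosen access point;
    # since all replicas agree, there is no fact of the matter about
    # "which sim" the robot's mind came from.
    return [random.choice(states) for _ in range(n_shards)]

assert assemble_robot_state(replica_states) == [42] * 10
```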

I find something very intuitive about the response IPD provides in this case. I realize, though, that what I’ve provided here isn’t much of an argument, and it invites a slew of questions and objections. For example, even if you’re sympathetic to my reading of the example above, I haven’t established the stronger claim of IPD, which makes no reference to causal coupling. This leaves it open to say, for example, that had the simulations been qualitatively identical by coincidence (say, by being a cluster of Boltzmann brains) rather than causally coupled, their subjects wouldn’t have been numerically identical. We might also wonder about beings whose lives are subjectively identical up to a particular point in time, and afterwards diverge. Are they the same conscious subject up until the point of divergence, or were they distinct all along? Finally, there are some tricky issues concerning what it means for me to survive in this framework: if I’m a phenomenal type rather than a particular token instantiation of that type, it might seem that I could still exist in some sense even if all my token instances were destroyed (although would Star Wars still exist in some relevant sense if every copy of it were lost?).

Setting aside these worries for now, I’d like to quickly explore how the truth or falsity of IPD might actually matter – in fact, might matter a great deal! Consider a scenario in which some future utilitarian society decides that the best way to maximize happiness in the universe is by running a bunch of simulations of perfectly happy lives. Further, let’s imagine that their strategy for doing this involves simulating the same single exquisitely happy life a billion times over.

If IPD is right, then they’ve just made a terrible mistake: rather than creating a billion happy conscious subjects, they’ve just made one exquisitely happy subject with a billion (hedonically redundant) instantiations! To rectify this situation, however, all they’d need to do would be to introduce an element of variation into their Sims – some small phenomenal or psychological difference that meant that each of the billion simulations was subjectively unique. If IPD is right, this simple change would increase the happiness in the universe a billion-fold.
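
(Here’s the back-of-envelope version of the utilitarians’ accounting under IPD, where subjects are counted by distinct phenomenal types rather than by token runs. The counting rule is just IPD restated, not an established measure:)

```python
N = 1_000  # stand-in for the post's billion; the point is the same

def count_subjects(phenomenal_histories):
    # Under IPD, subjects individuate by phenomenal type, not token.
    return len(set(phenomenal_histories))

identical_runs = ["exquisitely happy life"] * N
varied_runs = [f"exquisitely happy life, variant {i}" for i in range(N)]

print(count_subjects(identical_runs))  # 1: one subject, many redundant tokens
print(count_subjects(varied_runs))     # 1000: small variations, N-fold subjects
```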

There are other potentially interesting applications of IPD. For example, coupled with a multiverse theory, it might have the consequence that you currently inhabit multiple distinct worlds, namely all those in which there exist entities that realize subjectively and psychologically identical mental states. Similarly, it might mean that you straddle multiple non-continuous regions of space and time: if the very same simulation is run at time t1 and at time t2, a billion years apart, then IPD would suggest that a single subject inhabits both instantiations.

Anyway, while I doubt I’ve convinced anyone (yet!) of this particular crazyism of mine, I hope at least it might provide the basis for some interesting metaphysical arguments and speculations.

[Image credit: Paolo Tonon]

6 comments:

  1. Wouldn't each instance immediately diverge experientially, given different inputs (environments)?

    Unless all the instances are networked into some kind of superorganism or meta-conscious being, I think you'd quickly be dealing with 100 unique (if very similar) instances. The nature of the body or substrate (physical or virtual) would change the consciousness of the instance quickly, as would social ties, power/ability, etc.

    For example, if I were able to make a copy of myself within a machine or simulation, the simulated "me" would have a very different experience of life, and identity. Consciousness is closely tied to relationships, responsibilities, social roles, etc.

    Even if I made an instance in an identical physical body, which instance would "own" my life (family ties, occupation, bank accounts, etc.)? The answers to those questions would inform identity.

    Fun post, enjoyed it!

  2. I haven't thought hard enough about it, but it strikes me that, at least in the case of human beings, no two of us can be synchronically token-identical in our phenomenology, if only because of perspectival differences. It may be possible diachronically, but I'm not as clear on this. On this basis, the thought experiment seems a bit bizarre, because I don't quite understand what it would mean for a machine to be conscious in the absence of a first-person perspective (even in the rudimentary, non-self-representational sense), and especially for each machine to have exactly the same FPP at any given time. Have I just lost myself somewhere or missed something?

  3. Could there be anything to the thought that there is no me which is me? If me is just an experience and a reaction to it, where is the me?

  4. A fun read. But surely personal identity requires token identity. Type identity is just not good enough.

    If 100 computers are forever in the same functional state, then there is really only one token computer—since function is what we care about—in 100 boxes.

    A Star Trek thought experiment can explore what kind of identity matters and for whom. Imagine that Mary the Super-scientist has a second career as designer of the transporter aboard the Starship Enterprise, and that you are Captain James T. Kirk. You are honored to have Mary pay you a visit as you prepare to beam down to the surface of a planet. Mary has something surprising to say about the transporter:

    "Actually Captain, the body that comes out the other end and arrives on the planet is only type identical to you, not token identical. You—the guy standing right here—are destroyed by the process. But don’t worry. The copy of you will be just like you, perfectly type identical. He will be your functional equivalent. The crew will notice no difference at all. Of course, when the copy of you beams back up to the Enterprise, that copy will be destroyed. What appears back on the Enterprise will be a copy of a copy, ready to assume command and fully capable. That second copy will think he is the Kirk that has been around all along. The very first Kirk—the one whose childhood you remember— was destroyed a long time ago, the first time he stepped into the transporter. You— the guy I’m talking to right now— is a 133rd copy of the original, who is long gone.

    "Now, Captain, if you’ll just step into the transporter, we can get on with the mission. Oh, and… well… good-bye."

    Are you stepping into the transporter? Surely you are going to hesitate, at least. This is the most important decision of your life and possibly your last. For your crew, it makes no difference whether the transporter spits out the token identical guy, or merely someone type identical. But for you, it means everything. From the outside, who cares? From the inside, it’s the only thing that matters.

    More at https://better-questions-than-answers.blog. Three posts deal with the Star Trek Transporter problem: Pt 1, Original Atoms or Local Materials; Pt 2, King of Planet X; Pt 3, Mary the Super-scientist.

  5. This is the idea that any physicalist should arrive at if pressed hard on the issue of mind copying.

    Actually, I wouldn't call it any crazier than physicalism (although I, of course, did think it was crazy to actually believe it when I first heard it); the latter already implies that there is no such thing as phenomenological consciousness. Thus many physicalists (those who do not reject the word altogether) start using the term "consciousness" in various "functional" meanings, and your definition is a very adequate one.

    > We might also wonder about beings whose lives are subjectively identical up to a particular point in time, and afterwards diverge. Are they the same conscious subject up until the point of divergence, or were they distinct all along?

    If you watch two versions of a movie that differ only in their endings, are they the same movie up to that point, or are they distinct all along? What if you're still watching the second version and haven't yet reached the point of divergence: how would you answer the question "is this the same movie?" If you watch two versions of a movie, one of which has a few pixels different, is it still the same movie? What if it has a different resolution? Or compression artifacts?

    Really, if we embrace physicalism, we should stop caring so much about this identity problem. If physicalism is true, then there is literally no relevant difference between "consciousness" and a movie. The only difference is that we have learned to store (and thus rewatch and manipulate) movies, whereas we can only observe "consciousness" in the moment.

    So there would be nothing wrong in saying "these two consciousnesses are mostly the same" or "these two have slight variations in the last few years".

    And on the contrary, if we reject physicalism for the sake of having phenomenological consciousness, this idea seems pretty absurd. Why would you call "consciousness" something other than what we already postulated?

  6. It doesn't seem that you are a type then, but just a really complex token.
