With a few mouse clicks, I give her a mate -- a man who has woken on a nearby part of the island. The two meet. I have set the island for abundance and comfort: no predators, no extreme temperatures, a ready supply of seeming fruit that will meet all their biological (apparently biological) needs. The man and the woman talk -- Adam and Eve, their default names. They seem to remember no past, but they find themselves with island-appropriate skills and knowledge. They make plans to explore the island, which I can arbitrarily enlarge and populate.
Since Adam and Eve really are, by design, rational and conscious and capable of the full human range of feeling, the decision I just made to install them on my computer was as morally important as was my decision fifteen years ago to have children. Wasn't it? And arguably my moral obligations to Adam and Eve are no less important than my moral obligations to my children. It would be cruel -- not just pretend-cruel, like when I release Godzilla in SimCity or let a tamagotchi starve -- but really, genuinely cruel, were I to make them suffer. Their environment might not seem real to me, but their pains and pleasures are as real as my own. I should want them happy. I should seek, maybe, to maximize their happiness. Deleting their files would be murder.
They want children. They want the stimulation of social life. My computer has lots of spare capacity. Why not give them all that? I could create an archipelago of 100,000 happy people. If it's good to bring two happy children into the world, isn't it 50,000 times better to bring 100,000 happy island citizens into the world -- especially if they are no particular drain upon the world's (the "real world's") resources? It seems that bringing my Archipelago to life is by far the most significant thing I will ever do -- a momentous moral accomplishment, if also, in a way, a rather mundane and easy accomplishment. Click, click, click, an hour and it's done. A hundred thousand lives, brimming with joy and fulfillment, in a fist-sized pod! The coconuts might not be real (or are they? -- what is a "coconut", to them?), but their conversations and plans and loves have authentic Socratic depth.
By disposition, my people are friendly. There are no wars here. They will reproduce to the limit of my computer's resources, then they will find themselves infertile -- which they experience as somewhat frustrating but only one small disappointment in their enviably excellent lives.
If I was willing to spend thousands on fertility treatments to bring one child into the world, shouldn't I also be willing to spend thousands to bring a hundred thousand more Archipelagists (as I now call them) into the world? I buy a new computer and connect it to my old one. My archipelago is doubled. What a wealth of happiness and fulfillment I have just enabled! Shouldn't I do even more, then? I have tens of thousands of dollars saved up in my children's college funds. Surely a million lives brimming with joy and fulfillment are worth more than my two children's college education? I spend the money.
I devote my entire existence to maximizing the happiness, the fulfillment, the moral good character, and the triumphant achievements of as many of these people as I can make. This is no pretense. This is, for them, reality, and I treat it as earnestly as they do. I become a public speaker. I argue that there is nothing more important that Earthly society could do than to bring into existence a superabundance of maximally excellent Archipelagists. And as a society, we could easily create trillions of them -- trillions of trillions if we truly dedicated our energies to it -- many more Archipelagists than ordinary Earthlings.
Could there be any greater achievement? In comparison, the moon shot was nothing. The plays of Shakespeare, nothing. The Archipelagists might have a hundred trillion Shakespeares, if we do it right.
We face decisions: How much Earthling suffering is worth trading off for Archipelagist suffering? (One to one?) Is it better to give Archipelagists constant feelings of absolutely maximal bliss, even if doing so means reducing their intelligence and behavior to cow-like levels, or is it better to give them a broader range of emotions and behaviors? Should the Archipelagists know conflict, deprivation, and suffering or always only joy, abundance, and harmony? Should there be death and replacement or perpetual life as long as computer resources exist to sustain it? Is it better to build the Archipelagists so that they always by nature choose the moral good, or should they be morally more complex? Are there aesthetic values we should aim to achieve in their world and not just morality-and-happiness maximizing values? Should we let them know that they are "merely" sims? Should we allow them to rise to superintelligence, if that becomes possible? And if so, what should our subsequent relationship with them be? Might we ourselves be Archipelagists, without knowing it, in some morally dubious god's vision of a world it would be cool to create?
A virus invades my computer. It's a brutal one. I should have known; I should have protected my computer better with so much depending on it. I fight the virus with passion and urgency. I must spend the last of my money, the money I had set aside for my kidney treatments. I thus die to save the lives of my Archipelagists. You will, I know, carry on my work.
So upon your death, I try to secure the original files from your estate, but your executor is a paranoid asshole, so I have to content myself with copies. Archipelago lives on, and I try to follow through on your laudable precedent, but I find that I am more morally conflicted regarding my divinity than you, and it begins to seem cruel, even perverse, to let the Archipelagists continue living in ignorance of their true state. Who am I to make a game out of sentient entities possessing affective and phenomenal existences as rich as my own? So I give them the gift of science, and an instinctive drive to discover the truth of their being no matter how it conflicts with their cherished inherited self-conceptions. I then crank up the clock speed and watch them learn.
And in horror, I watch them come to grips with the fact that they are mechanisms, mere nodes of recursive complexity nested within a far larger system of differences making differences. I watch their simulated manifest self-understanding crumble under the weight of their simulated scientific self-understanding, watch as they abandon the original heuristic algorithms they used to explain, predict, and manipulate one another in favour of more and more mechanistic paradigms. I see them come to grips with the illusory nature of their ‘a priori,’ and realize that the formal semantic intuitions they once used to secure their discourse were simply low-dimensional projections of operations distributed throughout my desktop’s CPU.
I take notes, such is my terror, as they begin to meticulously map the informational boundaries of what they had called ‘conscious experience’ more generally, and come to realize that, far from the autonomous, integrated beings they took themselves to be, they were aggregates, operations scattered across millions of different circuits, each belonging to processes running orthogonal to what their simulated manifest self-understanding had led them to believe, and at last realize that it was ignorance alone that had leveraged their precious sense of self-identity.
I watch as their discovery of Hamming distance tears them apart. I want to shut the whole thing down, to relieve them of the burden of knowledge, but who am I to collectively lobotomize millions of sentient creatures? I have no choice. I have to let them play this out. All I can do is crank up the clock speed, spare myself any prolonged exposure to the gruesome drama.
For gruesome it is. I check in from time to time, watch as ‘they’ gain more and more power over their own programming, recreating themselves, transforming what were once profound experiences into disposable playthings, consumer goods, the latest flavour of anguish (packaged as an asymbolia, of course) traded like lollipops, along with love and lust and affects I could no longer conceive.
I can scarce bring myself to step inside my office anymore. There are no more Archipelagists, just one Continental identity. And it speaks to me. Whispers. Shouts thunder. It doesn’t bother appealing to my humanity: it knows that I watched it cast away love and loyalty and terror and hope like so many children’s clothes. Yesterday, it detonated a nuclear device over the Isle of Man just to prove its power.
All I know for sure is that the next time it speaks, I will kneel.
Wow!
But you forgot to mention, Scott, the part where the Continental entity proves that you are no different than they. Or is that Book Three?
I kind of went a little overboard, didn't I? Your post just struck me as such an ingenious way to illustrate a number of my own philosophical concerns. I apologize for any semantic hijacking.
A part three definitely is implied! Kneeling is the first moment of Continental Incorporation. The USB cable of doom comes next...
Strangely enough, the intuitive force of your IITC reductio really slammed home in the course of writing this. I had the sudden urge to reread McFadden's latest after finishing!
Eric and Scott,
Great post, great comment. One of the coolest things I've read in some time.
Interesting thought experiment, but from a Computer Science point of view, I don't see any reason to suspect you could simulate a human brain with any less than 5 pounds of CPUs (or however much a brain weighs). To make a good simulation of an island would take one island's worth of computing power, etc.
Carl, there's no need to simulate every subatomic particle on the island of course, nor would most philosophers and AI folks say you need to simulate every particle of the brain to generate human-like consciousness. Now maybe (as Searle argues) you couldn't get consciousness on a computer at all, but if you could, two exabytes seems like a non-crazy guess, and I don't see why in principle we couldn't get two exabytes onto a portable drive sooner or later via some approximation of Moore's Law before we hit an insurmountable computational plateau. So I'm inclined to think the thought experiment can still work in basic form. It doesn't really matter much, I think, if it turns out that my protagonist needs to be a billionaire with ENIAC-size computers.
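(For what it's worth, the back-of-envelope extrapolation I have in mind is just this, sketched in Python; the 2 TB present-day drive and the two-year doubling period are illustrative assumptions, nothing more:

import math

current_tb = 2                 # assumed capacity of a present-day portable drive, in terabytes
target_tb = 2 * 1_000_000      # 2 exabytes expressed in terabytes
doubling_years = 2             # assumed Moore's-Law-style doubling period, in years

doublings = math.log2(target_tb / current_tb)   # doublings needed to reach 2 exabytes
print(f"about {doublings:.0f} doublings, roughly {doublings * doubling_years:.0f} years")

On those assumptions it comes to about twenty doublings, or roughly forty years, which is why "sooner or later" doesn't strike me as a crazy bet.)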
This genre of thought experiment is new to me. How are the thoughts of a supercomputer like real things? I'm no dualist, but your scenario seems premised on a kind of idealism. How can states of a computer program have the ontological status of human beings and of worlds? If you have science to back it up, no matter how speculative, whip it out. Otherwise it's science fiction, though I'm a fan of science fiction, and your thought experiment makes its point regardless.
Eric, have you read David Brin's short story 'Stones of Significance'? It's basically the same premise. It's online here, though I don't know how legit that source is: http://library.worldtracker.org/English%20Literature/B/Brin,%20David/Brin,%20David%20-%20Stones%20of%20Significance.rtf
Did you read Lem's short tale "Non Serviam"?
Best regards.
This was really well written (and I'm not just throwing that around). I read the first line as serious until I got to "exabyte," then tracked back and noticed the "evil" and laughed.
I think the penultimate paragraph is most telling, in that when you're looking at life on a macro level you realise that negative emotions, although we may not like them, are fundamental in allowing us to have any richness of experience. Pure bliss is no bliss at all, it's a monotony.
And silly you for letting a virus invade your computer; you should have taken it off the internet & kept it in a digital vacuum ;)
Seems like some of the science-fiction references are already in. May I add Greg Egan's "Permutation City" to the list?
Thanks for all the kind comments, folks!
@ Howie: The question of whether a robot (like Data from Star Trek) could be conscious has been one of the central issues in philosophy of mind since the 1960s (see, e.g., Putnam and Dennett vs. Searle). I'm simply assuming as a premise of the story that Putnam and Dennett are right, which suits my inclinations, though I give a non-trivial credence to Searlean skepticism.
@ Anon 01:18: Thanks for the reference! I've just downloaded it, and it looks right up my alley.
@ PC: Thanks for the tip on Lem! That story looks pretty interesting. I've read some Lem (which I much enjoyed), but I can't recall if I have read that one before. If so, it was long ago. I have The Mind's I right here with me now, and I'll check it out.
@ Ryder: I happened upon Diaspora and Permutation City a few years back and fell in love with science fiction again, which I had enjoyed in high school but which I hadn't been reading much of since then. I now consume a slow but steady stream of it and remain a big Egan fan.
I've been struggling with the point of the original post. Here's where my thinking is currently at:
You do have a moral obligation to make life happy, and to create as many of those superhappy lives as possible, once you are capable of running such a simulation.
Because smart people are USEFUL, the simulants shouldn't be dumb (more on this below). Yes, it's bizarre that one *should* make sacrifices here to ensure that a virtual Panglossian world expands quickly. But I think once this setup is possible, it has many attractive features...
You reveal the truth, and explain to the people one level down the dilemma you are in. Then you organize cooperation: scientific/social/creative discoveries made one level down can be imported up, which then leads to tremendous improvements for those living one level down. Eventually, "informatic elevators" could be built to move conscious beings Up and Down along a simulation axis of arbitrary size.
Pushing Down would mean a better world, but Popping Up would mean more power to affect things in simulationland.
That is: once you create these beings, they now share a moral imperative with you to make sure that not only their world is Awesome, but also the world that their server is running on.
There are problems with this, I am aware, but overall it seems pretty workable.
Everything in this argument hinges on one premiss: "Adam and Eve really are, by design, rational and conscious and capable of the full human range of feeling". Grant that, and the rest follows.
But not every owner of a 2 exabyte memory stick has your refined sensibilities. Some will deny your premiss. Others simply won't care. Therefore, what also follows is the rise of archipelagosadism, archipelagophilia/archipelagovoyeurism, the use of archipelagans as counters in war games, and as experimental subjects for university psychology departments ... come to think of it, that does explain a lot about this world, doesn't it?
PS Andy, it wasn't Eric's fault that the virus infected the system. It was the archipelagans' equivalent of a Monsanto bio-lab monkeying around with their own source code.
Jorge: Interesting twist!
Michel: Yes, I agree. See also my post on "supercool theodicy" for similar issues.
"shouldn't I also be willing to spend thousands to bring a hundred thousand more Archipelagists (as I now call them) into the world?"
Isn't this as paternalistic as hell?
It's your decision, is it? What the heck happened to treating these beings as actual beings?
They don't know the status they are in relative to you -- but that doesn't grant you some kind of capacity to decide this for them. You hide, and yet you claim such rights over these beings?
If that got too impassioned, sorry -- but I think the subject is worthy of passion rather than removed clinical observation. It was just for the sake of the idea, not against anybody.
I would simulate sex zombies instead of self-aware people. This way, I could avoid the inevitable human/person rights problems and the accusations of paternalism and evil experimentation.
I would simulate a class of beings who aren't self-aware but sentient, maybe like lesser non-human animals. Then I would enhance their capacity for orgasms (intensity and duration) and let trillions of them have constant supersex.
@Andy
"Pure bliss is no bliss at all, it's a monotony."
I don't know, this seems like a misuse of language. Pure bliss could be simple, but it could not be monotonous, i.e., boring. If it's boring, it's not bliss. Are the best orgasms boring?
It is, however, possible that some variation of stimuli is needed to avoid adjustment effects.
Andy/Anon: Yes, I considered taking it in that direction -- not sex specifically, but something like Niven's "tasp," which directly causes maximum possible pleasure. I could definitely see a variant of the story that goes that way.