R. Scott Bakker's story "Crash Space", which I solicited for a special issue of Midwest Studies in Philosophy last year, is stuck in my head.
Without giving away too much of Bakker's story, here's the guiding thought: A world in which you have almost complete voluntary control over your emotions is a world in which your emotions can't effectively do their job of regulating your behavior. It's crucial to the operation of an emotion like guilt or ecstatically forgetful bliss that it be at least partly outside your direct control. Otherwise, why not just turn off the guilt, amp up the bliss, and go wild?
In fact, once you even start to dampen your moral emotions or long-term thinking, you might create a situation in which you'll then dampen those emotions further -- since there will be less guilt, shame, etc., to prevent you from continuing to downregulate. It's easy to see how a vicious cycle could start and be hard to escape. If you could, in a moment of recklessness, say to yourself "let's crank up the carpe diem and forget tomorrow!" and then (through emotion-regulating technology) actually do it, it might be hard to find your way back to moderation. A bit of normal short-term thinking might lead you to temporarily dampen your concern for the future; but once concern for the future is dampened, your new short-term-thinking self might naturally be inclined to say, "what the heck, let's dampen it even more!"
In a postscript to his story, Bakker defines a crash space as "a problem solving domain where our tools seem to fit the description, but cannot seem to get the job done" (p. 203). Bakker argues, plausibly, that the cognitive and emotional structures that give meaning to our lives and constrain us ethically can be expected to work only in a limited range of environments -- roughly, environments similar in their basic structure to those in our evolutionary and cultural history. Break far enough away, and our ancestrally familiar approaches will cease to function effectively.
For a very different set of cases in the same vein, consider utility monster and fission-fusion monster cases that might well become possible if we can someday create human-like consciousness in computers or robots. (Utility monsters are capable of vastly superhuman amounts of pleasure. Fission-fusion monsters are individuals who can merge and subdivide at will.) The human notion of individual rights, for example, only makes sense in a context in which targets of moral concern are individuals who remain discrete from each other for long periods of time -- who don't merge and divide and blend into each other. Break this assumption, and much of our ordinary moral thinking seems to break along with it. (See also Briggs and Nolan 2015.) What becomes of "one person, one vote", for example, when people can divide into a million individuals the day before the election and then merge back together again -- or not -- the day after, or when you have a huge entity with many semi-independent subparts?
Part of the potential philosophical power of science fiction, I think, is in imagining possible crash spaces for our ancestral, historical, or socially familiar ways of finding personal and moral meaning in the world. Pushing imaginatively against existing boundaries, we can begin to see possible risks in the future. But discovering our crash spaces is also of intrinsic philosophical interest, potentially revealing previously unnoticed implicit background assumptions behind our ordinary patterns of evaluation.
Other interesting science fiction on the hazards of technologically assisted self-control of emotions includes Larry Niven's pleasure-center-stimulating tasps and ecstasy plugs (turn one on and forget even to eat!), and Greg Egan's exoselves in Permutation City and Diaspora (shell programs that regulate one's personality and preferences).
The flip side of Bakker's "Crash Space" is Egan's short story "Reasons to Be Cheerful", about the challenge of choosing a new set of desires and preferences from scratch, in a self-conscious, hyper-rational way.
7 comments:
I live to bedevil, so this makes my day.
I began responding, then stopped when I realized my comment was approaching the word count of your post (thus breaking a cardinal rule of commenting). So I ended up turning it into a post of my own at: https://rsbakker.wordpress.com/2016/03/17/human-enhancement-as-paradigmatic-crash-space/ .
It turns out that Allen Buchanan has some very well thought out arguments that seem to cut directly against the crash space thesis. I'd be very interested to hear your take on his position, Eric.
Yes -- very helpful post, Scott! I think we're pretty much in full agreement here.
"In fact, once you even start to dampen your moral emotions or long-term thinking, you might create a situation in which you'll then dampen those emotions further -- since there will be less guilt, shame, etc., to prevent you from continuing to downregulate. It's easy to see how a vicious cycle could start and be hard to escape."
It's worth considering that the opposite direction has just as much capacity for a vicious cycle - where a lack of dampening allows emotion to dial down the capacity for dampening even more, and so on, escalating into a full-on emotional outburst.
I think with regular, vanilla brains we can dampen our emotions as well. It's why we put up with the dentist's needle and drill rather than run away - we suppress our emotional response to some degree. That's why, at least to me, the tweaking in the story has some attraction - it's what we already do every day. We try to modify our response to various situations. Only this tweaking is far more effective. That effectiveness, at something we do already, is attractive. That's part of the story's bite.
So I think what makes things harder is that we already do self-tweak - indeed, it may be a main distinguishing element from other animals (though orangutans can use a mirror to groom rather than seeing another ape - they may have a capacity to tweak their original emotional response as well). And a lack of intellectual downregulation can lead to emotional spirals - possibly some fraction of suicide and self-harm results from this lack of self-regulation.
So we have a naturally occurring desire to self-tweak. But there's no cap on that desire - no satiation point, because none was needed. The cap was a hardware limit. And the story explores what happens when that hardware limit is removed.
ES & RSB this might be of interest:
http://www.newyorker.com/magazine/2016/03/21/the-internet-of-us-and-the-end-of-facts
via http://www.ufblog.net/quotable-148/#comment-12824
-dmf
Thanks for the link, Anon! Lynch is doing good stuff. That's a nice piece on it.
Could have sworn I left a comment - is Blogger's spam filter eating my posts now as well?
Well, that one got through, so I just wanted to post the idea of a vicious cycle running the other way - where intellect fails to control guilt or bliss, and that emotion then turns off more of the intellect, which then controls those emotions even less, and so on. Probably often seen in zealots.
I'd say part of what gives the story bite is that we do tweak ourselves, even in our vanilla state. That's why the tweaking in the story is attractive to some degree (or was it just me?). It's probably one of our main traits distinguishing us from other animals (though orangutans probably tweak their initial impression of a mirror in order to use it for preening rather than seeing an opponent). So self-tweaking is part of us. It just has had no satiation point built into it, since mechanically we could only self-tweak so much. Hope this post gets through, for whatever it's worth :)