Without giving away too much of Bakker's story, here's the guiding thought: A world in which you have almost complete voluntary control over your emotions is a world in which your emotions won't effectively do their job of regulating your behavior. It's crucial to the operation of an emotion like guilt or ecstatically forgetful bliss that it be at least partly outside your direct control. Otherwise, why not just turn off the guilt, amp up the bliss, and go wild?
In fact, once you even start to dampen down your moral emotions or long-term thinking, that might create a situation in which you'll then dampen those emotions further -- since there will be less guilt, shame, etc., to prevent you from continuing to downregulate. It's easy to see how a vicious cycle could start and be hard to escape. If you could, in a moment of recklessness, say to yourself "let's crank up the carpe diem and forget tomorrow!" and then (through emotion-regulating technology) actually do that, it might be hard to find your way back to moderation. A bit of normal short-term thinking might lead you to temporarily dampen your concern for the future; but once concern for the future is dampened, your new short-term-thinking self might naturally be inclined to say, "what the heck, let's dampen it even more!"
In a postscript to his story, Bakker defines a crash space as "a problem solving domain where our tools seem to fit the description, but cannot seem to get the job done" (p. 203). Bakker argues, plausibly, that the cognitive and emotional structures that give meaning to our lives and constrain us ethically can be expected to work only in a limited range of environments -- roughly, environments similar in their basic structure to those in our evolutionary and cultural history. Break far enough away, and our ancestrally familiar approaches will cease to function effectively.
For a very different set of cases in the same vein, consider utility monster and fission-fusion monster cases, which might well become possible if we can someday create human-like consciousness in computers or robots. (Utility monsters are capable of vastly superhuman amounts of pleasure. Fission-fusion monsters are individuals who can merge and subdivide at will.) The human notion of individual rights, for example, only makes sense in a context in which the targets of moral concern are individuals who remain discrete from each other for long periods of time -- who don't merge and divide and blend into each other. Break this assumption, and much of our ordinary moral thinking seems to break along with it. (See also Briggs and Nolan 2015.) What becomes of "one person, one vote", for example, when people can divide into a million individuals the day before an election and then merge back together again -- or not -- the day after, or when you have a huge entity with many semi-independent subparts?
Part of the potential philosophical power of science fiction, I think, is in imagining possible crash spaces for our ancestral, historical, or socially familiar ways of finding personal and moral meaning in the world. Pushing imaginatively against existing boundaries, we can begin to see possible risks in the future. But discovering our crash spaces is also of intrinsic philosophical interest, potentially revealing previously unnoticed implicit background assumptions behind our ordinary patterns of evaluation.
Other interesting science fiction on the hazards of technological control over one's own emotions includes Larry Niven's pleasure-center-stimulating tasps and ecstasy plugs (turn one on and forget even to eat!), and Greg Egan's exoselves in Permutation City and Diaspora (shell programs that regulate one's personality and preferences).
A flip side of Bakker's "Crash Space" is Egan's short story "Reasons to Be Cheerful", about the challenge of choosing a new set of desires and preferences from scratch, in a self-conscious, hyper-rational way.