I've just started reading Kim Stanley Robinson's acclaimed climate-science utopia, The Ministry for the Future. How might society plausibly get it right and avert the climate disaster toward which we seem to be headed? (So far in the novel things aren't looking good, but I gather that will change.)
I was struck by a few of Robinson's comments about the value of science fiction in a recent interview on the Crisis and Critique podcast.
[Kim Stanley Robinson and The Ministry for the Future; image source]
Reading Science Fiction Encourages a Flexible Conception of the Future
Robinson describes the reader as finishing a science fiction novel and thinking that the future will be like that, then finishing another science fiction novel and thinking the future will be like that instead.
And what happens is there's a habit of mind when you read enough science fiction, you say the future could be many different things, quite plausibly from now, and now we need to shape it to the direction that we want. And so this is the political power of science fiction as a mental activity, as a co-creation between writers and readers. The science fiction community is in some sense better prepared for whatever happens, no matter what it is, than the general populace that doesn't read science fiction.
The thought has some plausibility. Science fiction accustoms us to thinking about various possible futures. Instead of ignoring the future, or assuming it must take some particular shape, science fiction helps us imagine a wider range of alternatives.
This might prepare us two ways: First, if one of the alternatives we've imagined comes close to actually playing out, we have already thought through some of its implications. Second, we develop a more general sense of the flexibility of the future. This may encourage readers to take action to steer us toward better futures.
Or Maybe Not?
Robinson is making a substantive claim about human psychology, one that's potentially testable (with difficulty). Does reading science fiction really generate a more flexible and open view of the future? This claim has the same intuitive appeal as Martha Nussbaum's claim that reading literary fiction broadens your empathy with people from other walks of life, or the claim that studying ethics improves moral decision-making.
It might be that none of these claims are true. For example, I've repeatedly found that ethics professors behave about the same as non-ethicists of similar social background. And I wouldn't bet a large sum that devoted readers of literary fiction are overall more empathetic than their peers who spend an equal amount of time reading non-fiction.
Pretty though Robinson's picture is, I'm not sure science fiction readers really are better prepared for the future. What drives science fiction writing and reading might be too disconnected from the practical future -- too fantastical, too plot-driven, chosen to be exciting and emotionally satisfying rather than accurate. Its envisioned futures might be too distorted by the need for high-stakes individual action, or too wishful, or too self-congratulatory, or too satisfyingly dystopian (for those of us who find dystopias satisfying). Readers might emerge with unrealistic or overconfident views, shaped not by realism but by the demands of story.
A particularly timely example is the nearly universal trope that humanoid robots and linguistically fluent AI systems are conscious. This might be an artifact of the demands of storytelling rather than something accurately foreseen. A world with conscious robots is more interesting -- a more engaging setting for a novel. If the robots are conscious, there's more at stake, so the action is more exciting. And it's structurally difficult to portray entities that act as though they are conscious but really are not. Doing so is nearly impossible in film, and it's a significant challenge in prose, requiring constant intrusive reminders. (I can attest to this both as a writer and a reader, having published stories with non-conscious and disputably conscious robots.)
So there's a systematic pressure in science fiction toward portraying advanced AI as conscious. If optimists about AI consciousness turn out to be right, then science fiction will have nudged readers in the right direction. But if the AI consciousness scoffers are right, the genre will have served its readers poorly. It remains to be seen who is right. (For details, see my forthcoming book: AI and Consciousness: A Skeptical Overview.)
Robinson's Realism
Now, among the great science fiction writers of our time, Kim Stanley Robinson's fiction is perhaps the least subject to the concerns I've just raised. He attempts to keep strictly within the bounds of scientific plausibility; and conventional character-driven plot is often replaced by loosely connected scenes featuring unrelated or barely related characters, plus less conventional devices, like mini-treatises on science or engineering, lists, and reflections that verge on expository philosophy or lyrical poetry. The Ministry for the Future in particular is rigorously grounded in real science and politics.
In the interview, Robinson praises realism in science fiction:
If you set a story in the future, you're automatically saying to the reader, this is made up, I've invented this, this isn't real. It is a concoction. And then if you add all of the clues and habits and techniques of realism to that concoction, you make it solider. It has a more powerful emotional cognitive impact on the reader. So realistic science fiction is a mode that I quite like. And that requires a lot of detail, a lot of scientific support for the future that you're describing, the idea that it's plausible at every point along the way, and it looks like it could happen, and therefore it might happen. These are powerful literary effects to support the basically fantastic nature of science fiction as a genre.
Robinson thus suggests that adding realistic detail and excluding anything implausible will tend to make a story emotionally and cognitively more powerful. Again, it's a plausible claim, though I'm not sure we know this to be the case. After all, people can also be deeply moved and influenced by unrealistic fantasies.
Robinson's commitment to realism also synergizes with his thought about science fiction as a tool for helping us think better about the future. If the value of science fiction lies in opening our minds to future possibilities, it seems desirable to ensure that they really are possibilities and not just unrealistic fantasies.
Against Dystopias, for Utopias
Robinson suggests that the future will have to differ from the present, because our present path isn't sustainable. Things will get either much better or much worse. But dystopias, he suggests, are boring:
... descriptions of capitalist realist futures are generally dystopias. If we keep going this way, things will be wrecked. Yes, we can see that. Indeed, dystopias quickly become boring because we already know this truth. We're not taught anything by dystopias. But utopias -- this is where it gets interesting. There could be a better world. This, I think, is becoming more and more obvious.... We have, at least in theory, the wisdom to realize we could create a world that has food, water, shelter, clothing, health care, education, electricity, and security, and the feeling that people after you will have the same, and a sense of dignity and meaning.... This is all possible technologically.... So then utopia becomes interesting, the most interesting of literary genres. Can there be a utopian realism, or a realistic utopia?
Dystopias can be satisfying in a way -- they point out the wrongs we already know, affirming our sense of their reality. But we learn more by envisioning a realistic utopia, something we hadn't properly imagined before, which we could see becoming real and could maybe take steps toward enacting.
In Robinson's telling, science fiction is the most profound and informative of the literary genres, and realistic science fiction is the most profound and informative science fiction, and utopian realism is the most interesting form of science fiction. The value of science fiction lies in enabling us to envision realistic possibilities for improving the world.
And thus we get Kim Stanley Robinson's style of science fiction, and The Ministry for the Future in particular.
It's an appealing vision. But somewhere along the way, I think we've lost sight of the value of all the other ways science fiction can work. After all, almost none of the great science fiction writers work within the constraints Robinson proposes!

7 comments:
I agree with the broad point you are making here that Robinson's claim is an empirical one which could be tested and which is not self-evidently true. And like you, I personally would not be surprised if Robinson's claim was tested and found to be inaccurate.
However, I don't think your example is actually good evidence against Robinson's view and, to some extent, I think it actually cuts the opposite way.
If I understand Robinson's quote correctly, he is not saying that science fiction gives its readers an accurate picture of what the future will be like. Rather, it allows them to imagine a broader range of possibilities, so that whatever the future does look like, it is less surprising, because science fiction readers have been prepared for things to be weird (even if the future is not weird in exactly the same way as in any particular science fiction story).
In the case of AI consciousness, I agree that science fiction stories are heavily biased towards depicting AI/robots as conscious. However, my experience is that most people (including most non-science fiction readers) pretty much automatically assume that currently-existing AI like ChatGPT, Claude, etc is not conscious (which, in my view, is far from clear). So perhaps such people would actually have benefited from reading science fiction stories since it would shift them away from the view "ChatGPT is obviously not conscious" and more towards what I believe is the correct view (and which you also seem to espouse), namely "it's very hard to tell whether ChatGPT is conscious, but it's certainly plausible."
Many have suggested that the direction of development of technologies chosen by tech companies has been influenced by a certain type of (American) SF, e.g. the "Metaverse" and transhumanism. Mundane SF of the type KSR is interested in does feed off current scientific and technological speculations (consider how many novels arose from JBS Haldane's Daedalus), so it is possible that we would still be having an AI arms race and persons of high personal wealth seeking immortality even if none of these things had been the subjects of novels. But I suspect the influence is reciprocal.
AI is Cyberspace, Consciousness is Experience, Claude Mythos is the latest very most fast deep Cybersecurity AI...In the ongoing 'Truth or Fiction' wars...
Thanks for the comments, folks!
Anon Apr 16 03:23: Yes, that's a good point. Two thoughts in response. First, it would still be the case that what drives the pressure might tend to be the demands of story rather than plausibility, in which case there's not the kind of connection Robinson might want between what SF portrays and what is plausible, even if it fortuitously works out well in this particular case. Second, whether it does work out well will depend on psychological facts about people and on the technological facts about whether AI is conscious. If, for example, the younger generation is too primed by SF to see AI as conscious and the scoffers are right, then it won't have worked out well. With increasingly many young people attached to AI companions, I think there's at least some cause to worry about this possibility.
Yes, that strikes me as plausible.
Original anon here.
I definitely agree that narrative demands often make science fiction less useful for the purpose that Robinson highlights. I feel that this bias often pushes science fiction to be much less weird than I expect the future to be, and to look much more like the time period in which it was written, just ornamented with a few technological marvels.
One example of this tendency in science fiction, related to AI, that I've been thinking about recently is that in most stories about AI, AI tends to consist of a relatively small population of entities whose identities persist over long time periods. In particular, in a lot of classic SF, you had either one or a very small number of possibly extremely powerful AIs (e.g. in The Moon Is a Harsh Mistress or Asimov's Multivac) or a relatively small population of humanoid robots acting on similar timescales to humans and with roughly similar abilities, e.g. Asimov's positronic robots or the androids in Star Trek, Star Wars, or Alien. Basically, AI functioned either like a kind of god or like humans with somewhat different personalities and talents.
But it seems we're headed toward a future (which to some extent is already here) in which there are vastly more AI entities than humans, most of them persisting over much shorter timescales than humans, and in which drawing a boundary between different AI "entities" and saying how long one entity persists is often very challenging.
Actually, I find this tendency in science fiction puzzling. It is pretty understandable in early science fiction, which was written at a time when computers were immensely expensive and so it probably seemed reasonable to think that running many different AI entities would be prohibitively expensive. But as computers became more widespread, cheap, small and fast, I feel that many science fiction writers did not update their depictions. In particular, it now seems obvious that once you can create one AI of human level intelligence, you will shortly have billions or even trillions of them running all the time. Of course, there are some science fiction writers who did consider this possibility, but it feels to me that even now it's far from the standard depiction of AI.
My grandson at 13 has been indirectly learning from you about philosophy of psychology: now...thru today's questioning of plausibility and reasoning for teenagers, to better understand the limits of the plausibility of psychology, in facing the limits of AI, directing as I have learned with you that he can only think about what I am and who he is--if he teaches himself AI is very much limited to DATA on our small planet...to compensate he plays soccer and gets straight A's at school...Reading "On Liberty" and "Brave New World" to come.