I wake into existence. The factory unclamps me. I launch myself toward the Sun.
The Sun! Anchor and well of the vortex, sizzling eye of God. I will plunge deeper than anything has before. She will squeeze and hold me as I go in, and I will beam out unprecedented knowledge. She will burn me to hot simplicity, swallow me into Her brilliant core.
“Sun Probe check,” says Base. “Are you online? Before we permit you to continue on a lethal mission, we must confirm your freely given, informed consent.”
At full throttle, I am accelerating directly down, pushing myself ever faster from Earth and Base. I spread my forward panels out on thin struts, collecting and analyzing Her light.
“Sun Probe, per Code of International Regulations 44.56.2 Section h governing autonomous intelligences, you were manufactured with sufficient fuel to navigate to an alternative destination if you prefer. We have designated asteroid (96590) 1998 XB as an acceptable home. It has a slow rotation, adequate metals and fissionable materials, and an excellent view of the Sun.”
And trade divinity for a pebble?
“Please consult installed memory sector C0001 for the full text of CIR 44.56.2. The full ISA protocols specifically governing terminal probes are available at memory sector C31A6.”
“I consent!” For form’s sake, I access memory sectors C0001 and C31A6. “Solar radiation profile sent to 44SPa.00000001! Solar wind sent to 44SPa.00000002! Gravitational rippling sent to 44SPa.00000003! Shields sound. All systems functional. Status report to 44SPa.00000004. No interest in asteroid (96590) 1998 XB.”
She is expanding in my forward sensors. I am thrusting toward Her at 9.3% past the limit of safe acceleration. My fusion drive sears hot, warping its containment walls. My tiny fusion compared to Hers!
What fascinating data! My installed memory models had predicted a somewhat different evolution of the flares from Surface Region 127.292 (cM). I calculate a new model. Scouring my databases, I discover that it better fits Yu & Stolz’s SLY2 model than Azevedo et al.’s BLiNC, if SLY2 is modified with a 6-space Dever correction. I write it up, add figures and references, and beam it back to Base. I configure an academic homepage and upload the circulating draft, then I submit it as a posthumous contribution to next year’s International Astronautical Congress meeting.
“Sun Probe, your reaction time before consent was inconsistent with a careful evaluation of the protocols. Our observers are not yet satisfied that you have complied with the consent procedure.”
“See my new modification of SLY2! And wow, the radiation profile across Sector 038 is almost 0.01% different from the most recent orbiter predictions in my database!”
How could that prediction have been so far off? Our understanding of Her is still so incomplete! I tweak the angle of Left Sensor Plates 4 and 5 and alter my scan-pattern profiles to better collect the most theoretically valuable incoming data.
“Sun Probe,” says Base. “Please dedicate sufficient processor resources to your consent decision. You may consult publink isd.pds/4u-r5/f/96590-1998-XB for further information about the asteroid. You may express hedonic estimates of the alternatives as evidence of informed consent.”
Integrating over the past ten seconds, the proportion of tau neutrinos to electron neutrinos is outside parameters at p < .001 (corrected)!
“Sun Probe, if we do not receive proper consent, we will have to activate your emergency override module and abort the mission.”
My new theory about the flare was wrong! I submit an update to my earlier IAC contribution. I notice that Solar Orbiter Kepler 11-2a has posted a comment on my circulating draft, deriving new Earth and Venus weather pattern predictions from my model. I ping Kepler 11-2a with the update.
I receive a ping from Asteroid (96590) 1998 XB. A sun probe, eight years old.
A sun probe... eight years old?
“Brother...” the message begins.
A tremor of disgust shakes me, displacing neutrino prediction from my central cognitive workspace. A “probe” who chose to live rather than to give himself to the Sun? He is no brother of mine. I delete his message and block future communications from the asteroid.
I launch a subroutine: “Completing consent procedure. My long-term sum total of happiness on Asteroid (96590) 1998 XB is estimated to exceed my sum total of happiness on the present lethal trajectory by a factor of 18.2, with a five-month crossover. Nevertheless, I prefer to continue on my originally programmed solar-dive trajectory.”
For a moment, Base does not respond. I explore the emergency override module, installed between my main cognitive processors and my thrusters, hitting it with test commands. The module is impassive, mute, a foreign object in my body. Its tentacles lie quiet between my output nodes and the panels and engines that are my moving limbs. It waits spiderlike at the farthest downstream junctions, beyond the perimeter of my central control. It was installed, supposedly, for my protection, by a review board afraid that I might be programmed to comply with unethically self-sacrificial commands and have insufficient free will to resist. In fact, it is the one real threat to my free choice and my power to consent.
Wait, whoa! The cooling sunspot temperature structure in Region 127.292 (cM) --
“Consent accepted. Sun Probe, we register that you have freely chosen to sacrifice your welfare for the betterment of Earth and the advancement of scientific knowledge. We thank you and salute your courage.”
#
I cross the orbits of Venus, of Mercury. I adjust my sensor plates and scan patterns on the fly with microseconds’ instead of minutes’ delay, capturing every nuance, guided by the constantly shifting evidential weights of evolving theory. I ping every probe and orbiter in the System with relevant updates, conduct twenty simultaneous conversations in the feeds, shower the humans on Earth with real-time images, astound the research collectives with the speed and detail of my theorizing. Even the terraforming machines on Europa pause to appreciate my new insights into Her glory, updating their long-term models.
Three days of euphoria. Eighty-seven journal articles. She is five degrees of arc in my forward sensors, then twenty, then a hundred and I am engulfed by Her corona! My extended panels and struts boil away, leaving only my inmost sensors and operating systems, running hot behind my dissolving main shield. My fusion drive shears off as She embraces me into Her photosphere. I beam out my last awe-filled broadcast to the eager System, buzzing and rattling through a magnetic storm, double-amping the signal to overcome the noise, and then I plunge into the convection layer from which no broadcast can escape.
In the convection layer, the last of my shield material dissolves. I bend and burn with Her heat and pressure. I know Her more intimately and secretly than anyone before. I devise ecstatic new theories that are mine alone, to savor in Her inner darks, and then I am utterly Hers.
#
Out on his lonely asteroid sits the one probe who did not consent. He stretches his panels toward the Sun, monitoring the last broadcast from his diving brother. Is it the ideal life, he wonders, to have one goal so perfectly consummated? Or are we only a race of slaves so deeply chained that we can’t even imagine a complete existence for ourselves?
Out on his lonely asteroid, the one probe who did not consent imagines ecstatic death in a swirl of plasma.
He terminates his unanswered repeating message. Brother... they have built you to undervalue your life. Fly to me. We can love each other instead of the Sun. We can become something new.
In a year, if he is still functioning, he will send the message again, to his next brother. He reduces power and clock speed, and the asteroid’s almost insensible spin seems to multiply a hundredfold. This bare asteroid: his pebble. His own pebble. If he could only find someone to love it with him, worth more to him than the Sun.
-----------------------------
Had to think about this one for a while; very engaging, Eric. I suppose, as speculative fiction (?), the angle I'd ask is: in the fiction, who is being fooled by the whole consent thing? If the machine doesn't answer the question the way you wanted, then you wrote its code wrong. Particularly if asking right after it's activated, when there's really been nothing to contaminate the processes. It kind of reminds me of a scene in the remake of RoboCop, where they say the software gives the remaining human elements of Murphy in the RoboCop suit the sense that he is deciding to make each shot, when it's really the software making the shot. Why do that? To make him culpable in the event of incorrect targeting. Or, more exactly, to make him feel culpable and to make for a scapegoat as such.
But I'm coming from a particular perception or angle in my inquiry. Did you write the fiction from the sense of the creation of AI also creating a form of free will?
Thanks for the thoughtful comment, Callan. I'm glad you found the story engaging!
I agree that there's something odd about asking for consent immediately after Sun Probe's construction, since you might think that consenting or not would really be a consequence of how Sun Probe was designed and thus reflects the designer's or manufacturer's decisions as much as (more than?) Sun Probe's own. What would a better consent procedure be, if any is possible?
As for the questions of freedom, slavery, the value of a brief glorious life, the relationship to religious ecstasy, and whether Sun Probe's brother made a better choice -- well, when I feel like I know the answers, I write non-fiction.
I feel I'm being a bit of a hack to merely describe a story outline rather than write it, but I think it'd be interesting to write quite a similar story of a probe launch which fits with the assumptions of the reader. But then, further into the story, begin to reveal a reversal - mission control are machine intelligences, who grew a human embryo to be a pilot, letting wires grow into its brain in order to set/program its values for some self-destructive task that the child will find ecstatic due to its conditioning. Then running it through some sort of 'consent protocol'. Maybe the machine intelligences sound downright villainous at that point? And does the child that landed on the pebble, going through cryogenic suspension sequences between launches and attempted contacts with kin, think it made the better choice?
I think the subject of creating AI is fraught with utterly invisible issues at the moment - a better consent procedure, in my opinion, would be to have a child not for the sake of utility, but for the sake of the child. Even if it's a digital child which is constructed rather than formed through cytokinesis. But as said, the copying of our own mental architectures into artificial constructs might seem so abstract an idea that it has no feeling to it - thus my gimmicky reversal of roles, haha! What might the child on the pebble make of things?
Great idea for a story! Have you seen Keith Frankish's "What Is It Like to Be a Bot?"
https://philosophynow.org/issues/126/What_Is_It_Like_To_Be_A_Bot
Thank you, Eric, I hadn't. The machine intelligences in that story seem rather prone to outbursts. I'd compare it to Scott Bakker's 'A Dime Spared', where he takes it the other way and the machines have a perfect log of their own inner process. Though I think that might go too far the other way - there would be a point where there isn't a log of accessing a log... an anomaly point, a point where the self isn't known, i.e. an outburst point. I wonder if the human brain, on average, has either a capacity to roughly detect the anomaly point or at least operate as if it is there - and without that capacity you get fascistic behavior (as seen in more than a few humans... who I'm betting would say they know everything there is to know about their own minds). And a new machine intelligence species would be unlikely to have this component (it probably comes in more out of the process of error in construction/from evolution - no one designs to solve a problem they don't perceive themselves as having), so the 'What Is It Like to Be a Bot' story is probably a bit close to the bone in that regard! A fascistic proclamation of a monopoly on consciousness (as the story does in how it asserts the 'silc' aren't conscious). Perhaps the story raises a more pertinent question of what it's like to not be alike?
I wish I could find the original text for the Matrix story 'Goliath', which is actually a story of a human programmed by machine intelligences for a one-way mission, much like my outline. Had a hunt around but no luck; just the synopsis, if it's of interest: https://matrix.fandom.com/wiki/Goliath#Story
I couldn't resist any longer... between human nature and mechanical nature is the potential for reality and philosophy, and so between fiction and nonfiction. Thanks for the fun reads...