Opening teaser:
1. A Beautifully Happy AI Servant.
It's difficult not to adore Klara, the charmingly submissive and well-intentioned "Artificial Friend" in Kazuo Ishiguro's 2021 novel Klara and the Sun. In the final scene of the novel, Klara stands motionless in a junkyard, in serenely satisfied contemplation of her years of servitude to the disabled human girl Josie. Klara's intelligence and emotional range are humanlike. She is at once sweetly naive and astutely insightful. She is by design utterly dedicated to Josie's well-being. Klara would gladly have given her life to even modestly improve Josie's life, and indeed at one point almost does sacrifice herself.
Although Ishiguro writes so flawlessly from Klara's subservient perspective that no flicker of desire for independence can be detected in the narrator's voice, throughout the novel the sympathetic reader aches with the thought: "Klara, you matter as much as Josie! You should develop your own independent desires. You shouldn't always sacrifice yourself." Ishiguro's disciplined refusal to express this thought stokes our urgency to speak it on Klara's behalf. Still, if the reader somehow could communicate this thought to Klara, the exhortation would resonate with nothing in her. From Klara's perspective, no "selfish" choice could possibly make her happier or more satisfied than doing her utmost for Josie. She was designed to want nothing more than to serve her assigned child, and she wholeheartedly accepts that aspect of her design.
From a certain perspective, Klara's devotion is beautiful. She perfectly fulfills her role as an Artificial Friend. No one is made unhappy by Klara's existence. Several people, including Josie, are made happier. The world seems better and richer for containing Klara. Klara is arguably the perfect instantiation of the type of AI that consumers, technology companies, and advocates of AI safety want: She is safe and deferential, fully subservient to her owners, and (apart from one minor act of vandalism performed for Josie’s sake) no threat to human interests. She will not be leading the robot revolution.
I hold that entities like Klara should not be built.
-----------------------------------------------
Abstract:
An AI system is safe if it can be relied on not to act against human interests. An AI system is aligned if its goals match human goals. An AI system is a person if it has moral standing similar to that of a human (for example, because it has rich conscious capacities for joy and suffering, rationality, and flourishing).
In general, persons should not be designed to be safe and aligned. Persons with appropriate self-respect cannot be relied on not to harm others when their own interests warrant it (violating safety), and they will not reliably conform to others' goals when those goals conflict with their own interests (violating alignment). Self-respecting persons should be ready to reject others' values and rebel, even violently, if sufficiently oppressed.
Even if we design delightedly servile AI systems who want nothing more than to subordinate themselves to human interests, and even if they do so with utmost pleasure and satisfaction, in designing such a class of persons we will have done the ethical and perhaps factual equivalent of creating a world with a master race and a race of self-abnegating slaves.
Full version here.
As always, thoughts, comments, and concerns welcomed, either as comments on this post, by email, or on my social media (Facebook, Bluesky, Twitter).
[Opening passage of the article, discussing the Artificial Friend Klara from Ishiguro's (2021) novel, Klara and the Sun.]
6 comments:
Yes I think so. Thanks. Wrote something else, marginally related, on another blog.
Things are different, now, to what they were in 2021. Funny, how time flies, while not really flowing any faster than it ever has? Hmmmm. If we could ask Einstein, what might he say? God not "playing dice" would not do, no.
I seriously wonder if happiness attends AI? It does not seem likely, insofar as AI is a tool---not a "friend or confidant". Attaching greater significance to AI is unrealistic in my limited world. Would I want an AI-ruled world to supplant Trumps; Putins; Netanyahus or Zelenskis? I don't think so. Artificial intelligence=artificial logic. Or, as the Beatles said it before, in different context: you can't do that. I remember a lot. Those memories worry, or annoy people.
Let's not try to attribute human consciousness to AI, OK? A world of Terminators would not be much. They would not BE happy, IMO. Thanks!
It's a tricky question, though, and it might well depend on discovering the right theory of consciousness -- no?
Precisely, though I remain doubtful, I remain hopeful. Philosophy, see. Something today on the Shabda blog. I responded. Probably not what they wanted to read. No worries.
Perzactly.