New paper in draft!
In 2015, Mara Garza and I briefly proposed what we called the Emotional Alignment Design Policy -- the idea that AI systems should be designed to induce emotional responses in ordinary users that are appropriate to the AI systems' genuine moral status, or lack thereof. Since last fall, I've been working with Jeff Sebo to express and defend this idea more rigorously and explore its hazards and consequences. The result is today's new paper: The Emotional Alignment Design Policy.
Abstract:
According to what we call the Emotional Alignment Design Policy, artificial entities should be designed to elicit emotional reactions from users that appropriately reflect the entities’ capacities and moral status, or lack thereof. This principle can be violated in two ways: by designing an artificial system that elicits stronger or weaker emotional reactions than its capacities and moral status warrant (overshooting or undershooting), or by designing a system that elicits the wrong type of emotional reaction (hitting the wrong target). Although the policy is intuitively attractive, its practical implementation faces several challenges, including: How can we respect user autonomy while promoting appropriate responses? How should we navigate expert and public disagreement and uncertainty about facts and values? What if emotional alignment seems to require creating or destroying entities with moral status? To what extent should designs conform to, versus attempt to alter, user assumptions and attitudes?
As always, comments, corrections, suggestions, and objections welcome by email, as comments on this post, or via social media (Facebook, Bluesky, X).
2 comments:
I guess I don't get the utility of this proposal, or perhaps more accurately, I am not comfortable with the proposal a priori. AI is not our buddy or friend, IMHO. AI is a tool. Humans possess something called empathy which, properly understood, is directed toward other humans. Emotional alignment design seems artificial to me. Not sorry: I guess I just do not have the right attitude here. Another philosophy blog has featured a series of posts on AI. My view, I think, is more aligned with his.
Went to my heart doctor, today. Got good news: my heart is still beating. Was glad to hear that. Very comforting...