Suppose that we someday create genuinely conscious Artificial Intelligence, with all the intellectual and emotional capacity of human beings. Never mind how we do it -- possibly via computers, possibly via biotech ("uplifted" animals).
Here are two things we humans might want, which appear to conflict:
(1.) We might want them to serve us subordinately and die for us.
(2.) We might want to treat them ethically, as beings who deserve rights equal to the rights of natural human beings.
A possible fix suggests itself: design the A.I.s so that they want to serve us and die for us. In other words, make a race of cheerfully suicidal A.I. slaves. This was Asimov's solution with the Three Laws of Robotics -- a solution that slowly falls apart across the arc of his robot stories (finally collapsing in "The Bicentennial Man").
What to make of this idea?
Douglas Adams parodies the cheerily suicidal A.I. with an animal uplift case in The Restaurant at the End of the Universe:
A large dairy animal approached Zaphod Beeblebrox's table, a large fat meaty quadruped of the bovine type with large watery eyes, small horns and what might almost have been an ingratiating smile on its lips.
"Good evening," it lowed and sat back heavily on its haunches. "I am the main Dish of the Day. May I interest you in parts of my body?" It harrumphed and gurgled a bit, wriggled its hind quarters into a comfortable position and gazed peacefully at them.
Zaphod's naive Earthling companion, Arthur Dent, is predictably shocked and disgusted, and when he suggests a green salad instead, the suggestion is brushed off. Zaphod and the animal argue that it's better to eat an animal that wants to be eaten, and can say so clearly and explicitly, than one that does not want to be eaten. Zaphod orders four rare steaks.
"A very wise choice, sir, if I may say so. Very good," it said. "I'll just nip off and shoot myself."
He turned and gave a friendly wink to Arthur.
"Don't worry, sir," he said. "I'll be very humane."
Adams, I think, nails the peculiarity of the idea. There's something ethically jarring about creating an entity with human-like intelligence and emotion that will completely subject its own interests to ours, even to the point of killing itself at our whim. This appears to be so even if the being wants to be subjected in that way.
The three major classes of ethical theory -- consequentialism, deontology, and virtue ethics -- can each be read in a way that delivers this verdict. The consequentialist can object that the small pleasure a human gains does not outweigh the loss of a potential lifetime of pleasure for the uplifted steer, even if the steer doesn't appreciate that fact. The Kantian deontologist can object that the steer treats itself as a "mere means" rather than as an end in itself -- an agent whose life should not be sacrificed, by itself or by others, merely to further others' goals. The Aristotelian virtue ethicist can say that the steer is cutting its life short rather than flourishing into its full potential of creativity, joy, friendship, and thought.
If we can use Adams' steer as an anchor of moral absurdity at one end of the ethical continuum, the question arises how far such reasoning transfers to less obvious intermediate cases. Consider Asimov's robots: they don't sacrifice themselves as foodstuffs (though presumably they would if so commanded, under the Second Law), but in the stories they do appear perfectly willing to sacrifice themselves to save human lives.
When a human sacrifices her life to save someone else's, it can be, at least sometimes, a morally beautiful thing. But a robot designed from the start always to subordinate its interests to those of humans -- that, I'm inclined to think, ought to be ruled out in the general case by any reasonable egalitarian principle, that is, any principle on which A.I.s deserve equal moral status with humans if they have broadly human-like cognitive and emotional capacities. Such a principle would be a natural extension of the consequentialist, deontological, and virtue-ethical reasoning that rules out Adams' steer.
Thus, I think, we can't use the "cheerfully suicidal slave" fix to escape the dilemma posed by (1) and (2) above. If we somehow create genuinely conscious, general-intelligence A.I. with a range of emotional capacities like our own, then we must create it as our moral equal, not as our subordinate.
Related article: A Defense of the Rights of Artificial Intelligences (Schwitzgebel & Garza, Midwest Studies in Philosophy 2015).