Human moral systems are designed, or evolve and grow, with human beings in mind. So maybe it shouldn't be too surprising if they break apart into confusion and contradiction when radically different intelligences enter the scene.
Scott Bakker and Peter Hankins have written insightful responses to my January posts on robot or AI rights. (All the posts also contain interesting comment threads, e.g., with contributions by Sergio Graziosi.) Scott emphasizes that our sense of blameworthiness (and other intentional concepts) seems to depend on remaining ignorant of the physical operations that make our behavior inevitable; we, or AIs, might someday lose this ignorance. Peter emphasizes that moral blame requires moral agents to have a kind of personal identity over time which robots might not possess.
My own emphasis would be this: Our moral systems, whether deontological, consequentialist, virtue ethical, or relatively untheorized and intuitive, take as a background assumption that the moral community is composed of stably distinct individuals with roughly equal cognitive and emotional capacities (with special provisions for non-human animals, human infants, and people with severe mental disabilities). If this assumption is suspended, moral thinking goes haywire.
One problem case is Robert Nozick's utility monster, a being who experiences vastly more pleasure from eating cookies than we do. On pleasure-maximizing views of morality, it seems -- unintuitively -- that we should give all our cookies to the monster. If it someday becomes possible to produce robots capable of superhuman pleasure, some moral systems might recommend that we impoverish, or even torture, ourselves for their benefit. I suspect we will continue to find this unintuitive unless we radically revise our moral beliefs.
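To make the aggregation logic behind the utility monster concrete, here is a minimal sketch of a pure pleasure-summing allocation rule. The per-cookie pleasure figures and the function itself are invented for illustration, not drawn from Nozick or from the original post; the point is only that, with constant numbers like these, a sum-maximizing rule hands every cookie to the monster.

```python
# Toy illustration of the utility monster: a pure pleasure-maximizing rule
# allocates every cookie to whoever would enjoy it most.
# The pleasure-per-cookie numbers are invented purely for illustration.

pleasure_per_cookie = {
    "ordinary human A": 1,
    "ordinary human B": 1,
    "utility monster": 1000,
}

def allocate_cookies(num_cookies, pleasure_per_cookie):
    """Greedy allocation that maximizes total pleasure, cookie by cookie."""
    allocation = {name: 0 for name in pleasure_per_cookie}
    for _ in range(num_cookies):
        # Each cookie goes to whoever gets the most pleasure from it.
        best = max(pleasure_per_cookie, key=pleasure_per_cookie.get)
        allocation[best] += 1
    return allocation

print(allocate_cookies(10, pleasure_per_cookie))
# -> {'ordinary human A': 0, 'ordinary human B': 0, 'utility monster': 10}
```

On these stipulated numbers the humans get nothing, which is exactly the counterintuitive result the thought experiment is meant to bring out.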
Systems of inviolable individual rights might offer an appealing answer to such cases. But they seem vulnerable to another set of problem cases: fission/fusion monsters. (Update Feb. 4: See also Briggs & Nolan forthcoming.) Fission/fusion monsters can divide into separate individuals at will (or via some external trigger) and then merge back into a single individual later, with memories from all the previous lives. (David Brin's Kiln People is a science fiction example of this.) A monster might fission into a million individuals, claiming rights for each (one vote each, one cookie from the dole), then optionally reconvene into a single highly benefited individual later. Again, I think, our theories and intuitions start to break. One presupposition behind principles of equal rights is that we can count up rights-deserving individuals who are stable over time. Challenges could also arise from semi-separate individuals: AI systems with overlapping parts.
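The arithmetic of the fission case can be spelled out the same way. The sketch below, with invented population figures and an equal-per-individual sharing rule of my own devising, shows how a monster that fissions into a million copies captures almost all of the votes and the dole before fusing back together.

```python
# Toy illustration of the fission/fusion monster under an equal-per-individual
# rights scheme (one vote and an equal cookie share per counted individual).
# The population figures are invented purely for illustration.

def equal_shares(individual_counts, total_cookies):
    """Split votes and cookies equally among however many individuals are counted."""
    population = sum(individual_counts.values())
    return {
        name: {
            "votes": count,
            "cookies": round(total_cookies * count / population, 2),
        }
        for name, count in individual_counts.items()
    }

# Before fission: the monster counts as one individual among 1,001.
print(equal_shares({"humans": 1000, "monster": 1}, total_cookies=1001))

# After fissioning into a million copies (which can later re-fuse, pooling the gains):
print(equal_shares({"humans": 1000, "monster": 1_000_000}, total_cookies=1001))
```

Before fission the monster receives one cookie and one vote; after fission its copies jointly receive roughly a thousand cookies and a million votes, which the re-fused individual then inherits.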
If genuinely conscious human-grade artificial intelligence becomes possible, I don't see why a wide variety of strange "monsters" wouldn't also become possible; and I see no reason to suppose that our existing moral intuitions and moral theories could handle such cases without radical revision. All our moral theories are, I suggest, in this sense provincial.
I'm inclined to think -- with Sergio in his comments on Peter's post -- that we should view this as a challenge and occasion for perspective rather than as a catastrophe.
[HT Norman Nason]