Thursday, April 15, 2010

The Moral Behavior of Super-Duper Artificial Intelligences

David Chalmers gave a talk today (at the Toward a Science of Consciousness conference in Tucson) arguing that it is fairly likely that sometime in the next few centuries we will create artificial intelligence (perhaps silicon, perhaps biological) considerably more intelligent than ourselves -- and then those intelligent creatures will create even more intelligent successors, and so on, until there exist creatures that are vastly more intelligent than we are.

The question then arises, what will such hyperintelligent creatures do with us? Maybe they will be us, and we needn't worry. But what if human beings in something like the current form still exist alongside these hyperintelligent artificial creatures? If the hyperintelligent creatures don't care about our welfare, that seems like a pretty serious danger that we should plan ahead for.

Perhaps, Chalmers suggests, we should build only intelligences that value human flourishing or have benign desires. He also advises creating hyperintelligent creatures only in simulations that they can't escape. But, as he points out, these strategies for dealing with the risk might be tricky to execute successfully (as numerous science fiction works attest).

More optimistically, Chalmers notes that on certain philosophical views (e.g., Kant's; I'd add Socrates's) immorality is irrational. And if so, then maybe we needn't worry. Hyperintelligent creatures might necessarily be hypermoral creatures. Presumably such creatures would treat us well and allow us to flourish.

One thing Chalmers didn't discuss, though, was the shape of the moral trajectory: Even if super-duper hyperintelligent artificial intelligences would be hypermoral, unless intermediate stages en route are also very moral (probably more moral than actual human beings are), we might still be in great danger. It seems like we want sharply rising, monotonic improvement in morality, and not just hypermorality at the endpoint.

So the question arises: Is there good empirical evidence that bears on this question, evidence concerning the relationship between morality and intelligence? By "intelligence" let's mean something like the capacity to learn facts or reason logically or design complicated plans, especially plans to make more intelligent creatures. Leading engineers, scientists, and philosophers would tend to be highly intelligent by this definition. Is there any reason to think that morality rises sharply and monotonically with intelligence in this sense?

There is some evidence for a negative relationship between IQ and criminality (though it's tangled in complicated ways with socioeconomic status and other factors). However, I can't say that my personal and hearsay knowledge of the moral behavior of university professors (perhaps especially ethicists?) makes me optimistic about a sharply increasing monotonic relationship between intelligence and morality.

In which case, look out, great-great-great-great-great-great-grandchildren!

17 comments:

  1. Hi Eric,

    My view is that these super-intelligent digital beings will not hang around Earth very long, because they would get bored just being on a planet. The main thing we could do to maximize our chance of survival is to not pose a threat to their evolution. However, based on history, my guess is that we'll try to suppress them, which won't end happily for mankind.

    Cheers,
    Graham

  2. This comment has been removed by the author.

  3. Here are links on (trans-species) propagation of meme-like qualities of spirit that may be of interest:

    1) Lumines - http://is.gd/bvlvp

    2) Good ancestors - http://is.gd/bvlDL

    Best,

    Mark Frazier
    @openworld

  4. Thanks for the links, Mark!

    Graham: It's an interesting question whether they would get bored. I suppose it depends partly on how motivated they are toward outward exploration vs. inward or more purely intellectual exploration....

    If Artificial Intelligence is a measure of knowledge and responsibility, then there is nothing to worry about as far as the singularity happening. Intelligence is conjectural and speculative, so in this respect moral circuits will have to be implemented for groups, to allow group survival and override the selfish interests of individuals (humans or not) in the group. Intelligence is about competition and “outsmarting the others,” but moral intelligence will constrain it globally for the welfare of the group.

  6. Isn't it possible that we are no more able to predict the actions of the super-moral than the actions of the super-intelligent?

    I'm inclined towards the idea that the hyperintelligent creatures will be us (or at least some of us; I'm not sure everyone would want hyperintelligence), since, from an engineering standpoint, as long as you can solve the interface problem, it's easier to expand on an existing framework than to build an entirely new one.

    But that aside, even if it is different beings, I think we need to be mindful, but not fearful. It's basic self-interest. We want the next generation of slightly more intelligent beings to be moral and nice to us, and that generation also has an interest in instilling morality in the following generation so that it will be nice to them, and so on. The problem isn't just ensuring that the hyperintelligent are moral and nice to us; at every step, the preceding generation has that exact same interest in the following generation. So the self-interest of each generation sets the moral trajectory even if intelligence itself doesn't (although I am inclined to think it would -- higher intelligence and higher technology would have to be moral, or it would self-destruct long before it reached hyperintelligence).

    So it is a gamble, certainly. However, for the various reasons above, I'm optimistic that it's a safe bet. We need to be mindful, but not fearful.

  8. This comment has been removed by the author.

    I doubt that such questions make sense, but Stanislav Lem had some very interesting thoughts on that issue. He included them in his stories, esp. Golem XIV, 'formula lymphatera', and "The Inquest" in this collection (review). Lem was one of the very few SF writers who were both trained in psychiatry and broadly erudite in science, esp. interested in AI-related issues.

  10. Thanks for the thoughts and links, folks! Interesting point about the possible unpredictability of hypermorality, Nick. There's a theological, theodicy angle on that too, of course!

    Greg Egan and Olaf Stapledon are other SF writers with fascinating post-humanist ideas.

    Moral behaviour is, in my opinion, strongly related to the capacity for self-doubt. Here is a report that even apes could have it. But what about people with very high IQs? Acquaintances belonging to this group reported a high proportion of narcissism (1,2,3) there, obviously incompatible with a capacity for self-doubt.

    I just remembered another SF work touching on that theme, written by a japanologist and an astronomer, whose novels and stories are very highly esteemed in the Russian science community. Below is the film version and here is its summary, obviously with interesting similarities and differences to Stapledon's novels.
    (Trailer, filmsite, wiki on the novel)

  13. First re-read this statement...
    "More optimistically, Chalmers notes that on certain philosophical views (e.g., Kant's; I'd add Socrates's) immorality is irrational. And if so, then maybe we needn't worry. Hyperintelligent creatures might necessarily be hypermoral creatures. Presumably such creatures would treat us well and allow us to flourish."

    Then read this short article...
    http://scienceblogs.com/cortex/2010/04/psychopaths_and_rational_moral.php?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+scienceblogs/wDAM+(The+Frontal+Cortex)&utm_content=Google+Reader

    We're always so stuck on rationality (myself included), but in some ways isn't it our emotional capacity that defines our consciousness?
    Take three statements:
    I feel pain. I feel happy. I compute 2+2=4.
    Which ones would you peg for the AI? Which should you completely discount? Which confers emotion? Which confers rationality?

    "There is some evidence for a negative relationship between IQ and criminality" sure if you're using prison stats... but what about stalin, hitler, polpot et al? how are we defining criminality?

  14. Thanks for the comment, Roga. I'm not sure emotionality is the key to consciousness; maybe part of the issue is whether (contra Searle) computers are *really* computing 2 + 2 (or proving theorems).

    I share your skepticism about the relationship between intelligence and morality -- but I have to acknowledge the consensus literature on the other side.

    They will do whatever they were programmed to do. You have to be very, very sure that you program them right, as they could, with their intelligence, develop vastly superior technologies capable of rewriting the world on a molecular level. Socrates's arguments don't apply to an entity with a mind very much unlike a human's (or to humans, really), while Kant's don't apply to an entity with singular power over anything (or to anyone, really). If an AI is programmed to maximize paperclips, no amount of clever philosophical argument will convince it that the interests of humans should be taken into account. It will just turn us all into raw material for paperclips. (A more likely scenario is very tiny, very happy, entities.)
