David Chalmers gave a talk today (at the Toward a Science of Consciousness conference in Tucson) arguing that it is fairly likely that sometime in the next few centuries we will create artificial intelligence (perhaps silicon, perhaps biological) considerably more intelligent than ourselves -- and then those intelligent creatures will create even more intelligent successors, and so on, until there exist creatures that are vastly more intelligent than we are.
The question then arises: What will such hyperintelligent creatures do with us? Maybe they will be us, and we needn't worry. But what if human beings in something like their current form still exist alongside these hyperintelligent artificial creatures? If the hyperintelligent creatures don't care about our welfare, that seems like a pretty serious danger that we should plan ahead for.
Perhaps, Chalmers suggests, we should build only intelligences that value human flourishing or have benign desires. He also advises creating hyperintelligent creatures only in simulations that they can't escape. But, as he points out, these strategies for dealing with the risk might be tricky to execute successfully (as numerous science fiction works attest).
More optimistically, Chalmers notes that on certain philosophical views (e.g., Kant's; I'd add Socrates's) immorality is irrational. If so, then maybe there's no cause for alarm. Hyperintelligent creatures might necessarily be hypermoral creatures, and presumably such creatures would treat us well and allow us to flourish.
One thing Chalmers didn't discuss, though, was the shape of the moral trajectory: Even if super-duper hyperintelligent artificial intelligences would be hypermoral, unless the intermediate stages en route are also very moral (probably more moral than actual human beings are), we might still be in great danger. An intermediate-stage intelligence, smarter than us but no more moral, might do us in long before its hypermoral successors ever arrive. It seems like we want sharply rising, monotonic improvement in morality, not just hypermorality at the endpoint.
So the question arises: Is there good empirical evidence concerning the relationship between morality and intelligence? By "intelligence" let's mean something like the capacity to learn facts, reason logically, or design complicated plans, especially plans to make more intelligent creatures. Leading engineers, scientists, and philosophers would tend to be highly intelligent by this definition. Is there any reason to think that morality rises sharply and monotonically with intelligence in this sense?
There is some evidence for a negative relationship between IQ and criminality (though that relationship is entangled in complicated ways with socioeconomic status and other factors). However, I can't say that my personal and hearsay knowledge of the moral behavior of university professors (perhaps especially ethicists?) makes me optimistic about a sharply increasing, monotonic relationship between intelligence and morality.
In which case, look out, great-great-great-great-great-great-grandchildren!