Abstract:
We propose four policies of ethical design of human-grade Artificial Intelligence. Two of our policies are precautionary. Given substantial uncertainty both about ethical theory and about the conditions under which AI would have conscious experiences, we should be cautious in our handling of cases where different moral theories or different theories of consciousness would produce very different ethical recommendations. Two of our policies concern respect and freedom. If we design AI that deserves moral consideration equivalent to that of human beings, that AI should be designed with self-respect and with the freedom to explore values other than those we might selfishly impose. We are especially concerned about the temptation to create human-grade AI pre-installed with the desire to cheerfully sacrifice itself for its creators’ benefit.
Full version here. As always, comments and critical discussion welcome, either by email to me or as comments on this post.
do we have anything like technical (as in practicable by engineers) definitions of any of these desiderata?
-dmf
There are of course some attempts, especially for consciousness. But I am skeptical about all such attempts. I don't think we currently have good epistemic grounds for knowing when consciousness would be present in a sophisticated AI system. (I defend my general skepticism about the epistemology of consciousness in "The Crazyist Metaphysics of Mind" [2014].)
Regarding the Ethical Precautionary Principle, I wonder what's to be done in cases of contradiction between two theories. Moral theory A forbids creating a morally relevant being that experiences pain when it could be avoided. Moral theory B forbids creating a morally relevant being that doesn't experience pain at all (because this would deprive it of an experience that's important). The two don't really admit of a compromise, since giving just a little pain when no pain at all is possible would be forbidden by A.
Another wonder: Are there, or could there be, moral theories that forbid creating morally relevant AI in the first place? If, for example, AI can be made that is morally relevant and conscious, but it cannot be made apparently free, this seems in violation of theories that place freedom as the highest or only true value. Alternatively, some anti-natalist arguments could probably apply to AI? (I don't know anti-natalist arguments very well, so I'm asking as much as I'm suggesting.) This of course isn't an objection, but if the answer is yes, it seems the EPP forbids making this kind of AI anyway?
The use of "grossly noxious" confuses me. Is this just from the perspective of the morally judging subject? Or is there a higher order theory that lets the egalitarian call the Nazis' ethical principles noxious but not vice versa?
Also, on p. 14, I assume "Sub Probe" should be "Sun Probe"?
thanks ES I'll give that a look, I have little general use for Sam Harris but this is a good interview with Anil Seth, who does valuable work:
https://samharris.org/podcasts/113-consciousness-and-the-self/#.WlSbjXcha6c.twitter
Thanks for the helpful continuing comments, folks!
Nicole: Yes, there will be difficulties in applying the precautionary principle in cases of such sharp conflict. On "grossly noxious", I meant that in an objective sense. At some point, as moral realists, we need to tack things down in moral facts independent of people's judgments about those facts. We mean to exclude moral systems like Nazism but not sensible versions of utilitarianism or deontology. On forbidding any human-grade AI creation by anti-natalist standards... maybe. I could see that playing out either way. One risk of the precautionary principle as we formulate it is excessive deference to extreme views that are in fact incorrect. I need to think more about how to constrain that kind of risk.
Even trickier is where the AI starts to write its own set of rights. How much are we paternalistic (which is about the most positive position to start at) vs. actually letting the digital children grow up and determine their own species of lives?
What about something like this re: your final principle regarding the values of an AI --
Second Probe in particular got me thinking about what might inspire our worry about an AI like us not being able to determine or decide on its own values. Here's an argument that tries to tell against the impact the story of Second Probe growing up is supposed to have.
First, we can distinguish between values in general and something like purpose, those values that animate or orient an entire life.
Someone growing up and going through the awkward teenage years must indeed figure out what they value, what their purpose is in some sense, and so on, but this is simply because of the messy situation of being a human. We might think about existential angst here - we are condemned to be free, to choose, to be responsible for our lives, WE must decide -- why should we condemn an AI to the same situation?
The thing that we want is something we lack only because of our condition as humans - a purpose. Humans have to find it. Suppose that the events that lead up to it, the exploration, the trial and error, have only instrumental value with respect to finding our purpose. An AI, unlike a human, is in the advantageous position of being the kind of thing that can come installed with a specific purpose. It can decide on lots of other things - what the best course of action is to take, what's significant in a given domain, e.g., what's worth pursuing in science in the case of a scientific AI, and so on. But why force it to be puzzled about its purpose? It's surely paternalistic and wrong to force a purpose on a child, but that's because a child is a human, and humans need to find their own purpose.
That's just the sketch of an argument, and I'm not sure myself how convinced I am by it, but I thought it might be interesting to consider.
Interesting thought, Anon 12:55! I feel some of the pull of that. And yet I can't shake the thought that there is something more appropriately humble and respectful in the Value Openness Design Policy, or something closer to treating the being as an equal. I acknowledge that what we say there (as in most of the paper) is closer to an argument sketch, or to a rough pointing toward considerations, than it is to a tightly developed argument.
There's something about responsibility over time, something about seeing whether you can hold on to your values in the face of hard life experience and in the face of others who disagree, and then possibly shifting those values sometimes, that seems to me to be missing in synchronic approaches to autonomy and authenticity, and to fixed approaches like the one you are considering. *What* exactly it is about the test of time that I like, I can't quite put my finger on (and figuring it out precisely would exceed the scope of the present paper).
Eric, I continue to believe that AI will never become conscious. I fear we have all succumbed to a Hollywood special-effects vision of what AI actually is and does – The Terminator’s Skynet becoming aware for instance. I wish the technology had been named otherwise, without the unfortunately suggestive term “intelligence”. I propose Massive Scale Pattern Recognition. Will MSPR become self-aware? Would papers considering the freedom and self-respect of MSPR flourish? Does the phrase “Human-grade MSPR” have a referent?
The potential dangers of AI that get so much publicity and deserve our attention are the obvious dangers of letting any computer system’s pattern-matching conclusions directly connect with dangerous systems like nuclear missile launch control systems. Additional hazards lie in the implementation of AI neural networks because of their “black box” nature – we don’t know what they’re doing in processing terms. The danger of AI systems is not that they will become conscious, and I’m willing to make a Stephen Hawking-like wager that AI will never become self-aware.
Please consider the following “Wired” article:
https://www.wired.com/story/greedy-brittle-opaque-and-shallow-the-downsides-to-deep-learning/
As I understand the term, consciousness is fundamentally characterized as sentience – feeling – in organic systems. Consciousness is not intelligence and is not integrated information processing and is not demonstrably an emergent property of either. If there’s credible evidence to the contrary, please advise.
AI, or more precisely, MSPR is implemented as conventional computer programs operating on conventional computer hardware. From my perspective of 30 years as a computer programmer, I’d like to see an outline of the subroutines self(), respect(), and freedom(). I’d be amazed to see even a crude flowchart.
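To make the point concrete, here is a minimal, deliberately empty Python sketch of the kind of outline I have in mind. To be clear, these stubs are purely hypothetical – the names self(), respect(), and freedom() come from my own rhetorical question above and correspond to no real library, API, or system – and the striking thing is that nobody can say what their parameters, return values, or bodies should even be:

    # Hypothetical stubs only: names taken from my question above, not from any real code.

    def self(system_state):
        # What data structure could possibly represent a first-person point of view?
        raise NotImplementedError("no specification exists")

    def respect(agent, other):
        # Respectful *behavior* can be scripted; felt respect, so far as anyone knows, cannot.
        raise NotImplementedError("no specification exists")

    def freedom(agent):
        # Would returning True demonstrate freedom, or merely report a stored flag?
        raise NotImplementedError("no specification exists")

(Defining a function named "self" is legal Python, if unusual; the emptiness of all three bodies is precisely the point.)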
In my view, understanding issues of self and ethical behavior from a God-like perspective might someday be appropriate if and when we create Artificial Consciousness … AC. “Designing AC with Rights, Self-respect and Freedom” will then be an important contribution to AC Studies, but I suspect AC is a long way off, since no one has even the foggiest notion about how to create it.
Hi Stephen! Thanks for the interesting comment.
You write: As I understand the term, consciousness is fundamentally characterized as sentience – feeling – in organic systems. Consciousness is not intelligence and is not integrated information processing and is not demonstrably an emergent property of either. If there’s credible evidence to the contrary, please advise.
Reply: I'm okay with "sentience", but why insist on "organic"? I know you have a theory of consciousness, but I hope you will forgive me for being a principled skeptic about all general theories of consciousness, in the medium-term, given the current state of the evidence (see, e.g., my 2014 article in AJP). Because of this skepticism, I am also unable to provide decisive evidence *against* your preferred approach.
You write: AI, or more precisely, MSPR is implemented as conventional computer programs operating on conventional computer hardware. From my perspective of 30 years as a computer programmer, I’d like to see an outline of the subroutines self(), respect(), and freedom(). I’d be amazed to see even a crude flowchart.
Reply: Wait, why limit AI to "conventional computer programs operating on conventional computer hardware"? In our 2015 article, Garza and I are quite clear that we construe "AI" much more broadly than that. This also shows up in our discussion of Searle in the present draft.
Regarding your first question, Eric, I insist on “organic” or “biological” because every instance of consciousness we know of, whether self-reported or inferred, is produced by the operation of neural structures in an animal brain. That’s 100%. All known consciousness is indisputably biologically embodied and I venture to propose that, as such, biological embodiment be included in the word’s very definition.
We credibly infer consciousness in closely related organisms whose DNA is upwards of 90% the same as our own (e.g., mouse 92%, chimpanzee 98%) and whose brains are identically structured (cortex, brainstem, …) and responsive to the same set of neurotransmitters, and so on. Additionally supported by observations of behavior, I believe few would disagree that, absent a pathology, all mammals are conscious.
Let’s examine an imagined counter instance: If you encountered a non-biological construction like Commander Data of Star Trek that claimed to be conscious, what would that claim mean? How would you infer the truth of the claim? How would you test for it?
Clearly, Data passes any conceivable Turing test, but that success only demonstrates intelligence, which is Data’s Positronic-based Massive Scale Pattern Recognition. All of Data’s electromechanical behavior is computationally actuated based on predictions computed by its MSPR.
But Data’s claim to consciousness is suspect precisely because Data doesn't “feel” anything, as he routinely affirms. Recall that several of the Star Trek plots involve a "feeling chip" that could be plugged into Data’s Positronics. While the show’s writers shallowly conceive of the chip as enabling emotional behaviors like laughing and crying, the chip must be doing one of two things: 1) it extends Data’s human emulation capabilities to include the emulation of human emotional behaviors, or 2) it creates at least a core consciousness, which is a feeling of being embodied and centered in a world. In the latter case, Data would feel its non-biologically embodied self and, for instance, feel its arm moving as opposed to non-consciously making the computation-driven arm motions as pure mechanism. That core consciousness, most suggestively also called “creature consciousness,” could theoretically be elaborated into an extended, human-like consciousness via the MSPR.
But how would we distinguish the two cases? Completely absent biological similarity, as exists in the case of mammals, and left with only behavioral observations, on what grounds would we be able to infer that Data is conscious?
Thanks for the continuing discussion, Stephen! I agree that it would be difficult to know whether Data is conscious for the reasons you are saying. Of course it doesn't follow that we know that Data is not conscious.
On the argument at the beginning of your comment, it seems to me that it implicitly relies on a principle that would prove too much. Every instance of consciousness we know of has also been within a million miles of Earth, but it would not follow that consciousness is impossible on Mars, if for example an astronaut were to travel there. From the fact that every instance of property X that we know of has property Y, it doesn't follow that nothing lacking Y could have property X.
Regarding your second question, Eric, I wrote that “MSPR is implemented as conventional computer programs operating on conventional computer hardware” … and I should have qualified that as “is currently implemented as”. I didn’t intend that as a limitation or restriction. MSPR systems could be implemented on any appropriate computational substrate, like Data’s Positronic brain. Severely limited in speed by comparison, Original Scale Pattern Recognition is implemented in living neural systems, a biological computational system.
If you’re able, I’d appreciate a link to your 2015 article and I’ll need time to revisit your present draft, following which I can comment relative to your broader interpretation of AI.
Sure, here's the link:
http://www.faculty.ucr.edu/~eschwitz/SchwitzAbs/AIRights.htm
For a more general defense of the dubiety of all general theories of consciousness (no obligation of course!):
http://www.faculty.ucr.edu/~eschwitz/SchwitzAbs/CrazyMind.htm
Thanks for the links, Eric … much appreciated.
My impression, and please correct me if I’m wrong, is that you view consciousness as either a “thing” or a “property”, and that this Thing X or Property Y is created or produced by an animal brain. In that view, the consciousness “thing” or the consciousness “property” could be conceptualized as independent of its biological instantiation, allowing us to conceive of other instances in non-biological systems, one possibility being a complex computer system.
I don’t think of consciousness as a thing-in-itself, as it were, and I’m reluctant to refer formally to consciousness as a “property” of a neural structure, since the idea of a property has much contentious philosophical history I’d rather avoid. Admittedly, I’ve used the “property of the brain” phrasing myself, but in an ordinary conversational sense, to convey the association of consciousness with brain structures while also avoiding any suggestion of an independent “thing-ness”.
In contrast, consider the possibility that consciousness is not a “thing” created by the brain and not a “property” of the brain, but is instead a configuration of neural tissue in some brain structure, such that a conscious feeling IS a particular configuration of neurons. In that case, consciousness cannot possibly emerge in or be implemented in another substrate and all instances of consciousness must be neuronal, and therefore biological and embodied.
I’m very leery of making comparisons to this unique situation, because of the hazards of extending the comparison too far, but one that comes to mind is a muscle contraction, which clearly exists but cannot be implemented in non-muscle. That’s not to say that we cannot make a device that similarly “contracts”, but a muscle contraction is a configuration of muscle tissue and muscle tissue only, just as consciousness is a configuration of neural tissue which requires neural tissue.
I think I see where you are coming from, and I'm not committed to its falsity. But I think that such claims remain conjectural. At this point, it makes sense to me not to be so committal about these sorts of metaphysical questions about consciousness!
However, Eric, isn’t it the case, at this point, that *all* claims about the physical origins of consciousness are conjectural? I’m not asserting a reality claim, but, rather, a hypothesis that a conscious feeling *IS* a particular Neural Tissue Configuration, which I’ll acronymize as NTC for convenience. As an uninvestigated hypothesis, no one can be committed to either its truth or falsity, but I believe NTC is a sensible and credible proposal that merits consideration, if for no other reason than its potential to invalidate existing theories of the physical “causes” of consciousness. Were NTC true, then AI consciousness would be a science fictional trope, most interesting, to be sure, for its examination of our relationships to “The Other”. Were NTC true, the only way we could create Artificial Consciousness would be to incorporate neural tissue itself in an artificial, i.e., non-biological substrate. By the way, that’s the premise of several scifi stories I’ve read where human brains are “transplanted” into a starship, whose instrumentations are sensory inputs that become the feelings of the biological pilot-brain.
NTC makes sense from an evolutionary perspective too. We can posit an initial NTC being a feeling of, for instance, pressure or temperature or proprioception, tied to fundamental sensory tracts. The earliest repertoire of feelings would evolve to feel additional sensory tracts, such that the collection of feelings so generated comes to constitute the internal simulation that we call core, or creature, consciousness: a feeling of being embodied and centered in a world. The subsequent evolution of cortical pattern recognition and predictive computation results in extended consciousness, which feels like our mammalian one. In this perspective, all modes of consciousness are feelings: touch, hearing, vision and, yes, thought are feelings. There is something it *feels* like to be a bat.
As a substantial bonus, NTC directly addresses Chalmers’ Hard Problem of Consciousness – how and why we have qualia or phenomenal experiences – how sensations acquire characteristics, such as colors and tastes.
Were NTC the reality of consciousness, though, the biological existence of consciousness would become definitional, so that it would be impossible by definition for AI implemented in any non-biological substrate to become conscious, and we would know that Data is not conscious because Data’s Positronic AI substrate has no neurons. Just as a rock is not conscious because it has no neurons. And we might reasonably infer that a biological creature exiting the flying saucer in your backyard is conscious if it proves to have neural tissue or some neuro-similar meat electronics, but when Data and C3PO follow it down the ramp, we cannot know and cannot infer the same without, it seems to me, a faith-based commitment to dualism and the existence of consciousness as “soul-stuff”.
Hello again, Eric! You replied to an earlier post of mine with the comment:
“... why limit AI to "conventional computer programs operating on conventional computer hardware"? In our 2015 article, Garza and I are quite clear that we construe ‘AI’ much more broadly than that. This also shows up in our discussion of Searle in the present draft.”
When I took a look, I noticed that you hadn’t carried over into the 2018 version the definition of “Human-grade AI” that you supplied in your 2015 article:
“Human-grade artificial intelligence – hereafter, just AI, leaving human-grade implicit – in our intended sense of the term, requires both intellectual and emotional similarity to human beings, that is, both human-like general theoretical and practical reasoning and a human-like capacity for joy and suffering.”
With the caveat that current AI doesn't seem to be heading towards “general theoretical and practical reasoning,” I do wonder about your “leaving human-grade implicit,” since doing so yields statements without appropriate context, as in the 2018 article’s Abstract:
"... both about ethical theory and about the conditions under which AI would have conscious experiences ..."
The unadorned AI acronym appears in the 2018 title and many times over in the article without the “human-grade” qualifier, which led me to employ a more conventional definition of AI in trying to understand your claims and which directly influenced my postings regarding the realities and potentials of current AI.
Regarding your use of terminology, why not retain the “human-grade” in your definition and acronymize it as HGAI, or perhaps adopt Artificial Consciousness (AC) as useful terminology and state that you’re hypothesizing a conjunction that is AI+AC? I believe this additional terminological precision would be helpful in clearly understanding the points you’re making in the correct context.
I also believe that we’re all completely ignoring another critical and necessary artificial creation, Artificial Embodiment (AE). Largely, if not completely, unaddressed in our discussions is the reality that our own consciousness is largely consumed with feelings of embodiment and our intelligence is largely rooted in embodied metaphors. Conscious human thought is a mere froth on the surface - our own pattern recognition and predictive capabilities are almost entirely unconscious processes. Perhaps we should also consider Artificial Unconsciousness.
If sensory deprivation experiences are any indication, importantly noting that all embodied conscious and unconscious proprioceptions remain, I would suggest that creating AC without an accompanying AE might be immorally creating another AI: Artificial Insanity.
By the way, as regards AE, I believe that not considering Artificial Embodiment in the case of "uploading a brain" would present an ethical hazard. In Greg Egan's "Permutation City" the design of an artificial world to support the Artificial Embodiment of the uploaded is a prominent topic.