tag:blogger.com,1999:blog-26951738.post6941114151111006117..comments2024-03-25T11:49:21.281-07:00Comments on The Splintered Mind: How Robots and Monsters Might Break Human Moral SystemsEric Schwitzgebelhttp://www.blogger.com/profile/11541402189204286449noreply@blogger.comBlogger37125tag:blogger.com,1999:blog-26951738.post-3748394148308348432015-02-18T17:36:52.542-08:002015-02-18T17:36:52.542-08:00LOL, sis. You just wait and see!LOL, sis. You just wait and see!Eric Schwitzgebelhttps://www.blogger.com/profile/11541402189204286449noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-36881345152257714002015-02-16T19:50:35.913-08:002015-02-16T19:50:35.913-08:00"That is all fine and dandy but my daughter i..."That is all fine and dandy but my daughter is NOT going to prom with a robot!" -Some dad, some day<br />Sandy Ryannoreply@blogger.comtag:blogger.com,1999:blog-26951738.post-90660326230430282802015-02-10T12:43:40.235-08:002015-02-10T12:43:40.235-08:00Katherine: Thanks for that interesting comment! I...Katherine: Thanks for that interesting comment! I think that is one among a range of reasonable approaches. For purposes of the argument, I was trying to assume that the AI would be similar in all relevant psychological respects, including capacity for suffering. To the extent the question turns on suffering, the special provisions would apply to beings in those categories *not* capable of suffering, if any. 
You might think that no being incapable of suffering deserves moral status -- maybe that's true (and more plausible than an equivalent thesis about higher cognition) but some people would make an exception for early-stage fetuses or people (if any) with such severe brain damage that neither pleasure nor suffering is possible or (hypothetically) a being capable only of pleasure but no suffering.<br /><br />Now if you mean by "suffering" "dukkha", maybe that takes the conversation in a different direction.Eric Schwitzgebelhttps://www.blogger.com/profile/11541402189204286449noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-39035873213214400062015-02-10T12:37:59.077-08:002015-02-10T12:37:59.077-08:00Neat post, Sergio. I've left a comment over t...Neat post, Sergio. I've left a comment over there.Eric Schwitzgebelhttps://www.blogger.com/profile/11541402189204286449noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-80351162272845570122015-02-10T12:29:50.625-08:002015-02-10T12:29:50.625-08:00Simon: I will put Red Dwarf on my list. It might ...Simon: I will put Red Dwarf on my list. It might depend a lot on how it is implemented, which might not be revealed. I'm working on a group organism sci-fi story of my own too, right now -- will put a bit more energy into showing the details of implementation, to squeeze some of the philosophical juice.Eric Schwitzgebelhttps://www.blogger.com/profile/11541402189204286449noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-55659996275722609032015-02-10T12:27:06.964-08:002015-02-10T12:27:06.964-08:00Fun link, Simon! I've been playing around wit...Fun link, Simon! I've been playing around with concepts of AI divinity, in some of my stories and in posts like "Our Possible Imminent Divinity" and "Super-Cool Theodicy". Highly unorthodox, though. 
Or check out Eric Steinhart's recent book!Eric Schwitzgebelhttps://www.blogger.com/profile/11541402189204286449noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-81418883053921456902015-02-10T12:24:38.280-08:002015-02-10T12:24:38.280-08:00Thanks for the link, John! Yes, very interesting ...Thanks for the link, John! Yes, very interesting and lively thread over there.Eric Schwitzgebelhttps://www.blogger.com/profile/11541402189204286449noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-66675328504036539342015-02-09T01:29:24.913-08:002015-02-09T01:29:24.913-08:00Hello Eric,
Funny coincidence, I used to be obses...Hello Eric,<br /><br />Funny coincidence, I used to be obsessed in college with the idea that all future AIs would deserve rights or at least moral consideration.<br /><br />What strikes me as odd in this whole debate is encapsulated in the following quote: "Our moral systems, whether deontological, consequentialist, virtue ethical, or relatively untheorized and intuitive, take as a background assumption that the moral community is composed of stably distinct individuals with roughly equal cognitive and emotional capacities (with special provisions for non-human animals, human infants, and people with severe mental disabilities)."<br /><br />I don't think you can maintain that this "background assumption" is even <i>relevant</i> to intrinsic morality, unless you ditch the "special provisions." Either morality is dependent on such a background assumption; or else our "special provisions" are problematic, because they allow moral consideration that goes against our assumptions. You can't have it both ways.<br /><br />I personally subscribe to a position influenced by the animal rights movement, antinatalism, and Buddhism. The <b>only</b> moral question is not "can they think" or "how do they think" or "are they individuals, fragments or wholes," but <i>can they suffer.</i> All other questions are not moral questions; they are only technical. So, in order to answer your questions about AI, fission/fusion monsters (btw, a Hindu might maintain that we are already fission/fusion monsters in a sense), we would have to consider each entity, in its particular space-time coordinates, for its ability to suffer. Pleasure, it seems to me, would not be relevant.KThttps://www.blogger.com/profile/07044696046548374340noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-81716317291340977582015-02-08T06:48:43.640-08:002015-02-08T06:48:43.640-08:00Eric,
Thank you so much, I am (late and) delighted...Eric,<br />Thank you so much, I am (late and) delighted by your comments. Your post, along with Hankins' and Bakker's contributions, has prompted me to write an attempt at making the positive case.<br />It's here:<br /><a href="http://sergiograziosi.wordpress.com/2015/02/08/strong-ai-utilitarianism-and-our-limited-minds/" rel="nofollow">Strong AI, Utilitarianism and our limited minds</a>. All feedback, especially criticism, is more than welcome.Sergio Graziosihttps://www.blogger.com/profile/07571218856690513933noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-86147503781742376442015-02-06T16:29:21.978-08:002015-02-06T16:29:21.978-08:00And Rimmer's idea that aliens would give him a new ...And Rimmer's idea that aliens would give him a new body was...essentially correct in the end!<br /><br />I'll get me coat...lol!Callan S.https://www.blogger.com/profile/15373053356095440571noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-11435333160743045282015-02-06T15:29:27.692-08:002015-02-06T15:29:27.692-08:00Eric, here is the episode if you are interested
Re...Eric, here is the episode if you are interested<br /><br />Red Dwarf 32 "Legion" <br />Chasing the vapour trail of Red Dwarf into a gas nebula, Starbug is taken over by a tractor beam which takes it to a space station. There the crew discover Legion, a highly intelligent, sophisticated and cultured lifeform conceived out of an experiment by a group of famous scientists. It is Legion who modifies Rimmer's holo-projection unit, enabling him to become a "hardlight" hologram; as a result he is able to touch, feel, eat, and experience pain – but, still being made of light, he cannot be physically harmed. They learn that Legion is composed of the minds of each member of the crew, combined and magnified, and as such they are sustaining his very existence with their presence. Legion will not allow them to leave and continue the search for Red Dwarf.<br /><br />I'd be interested to know how you would characterize this entity.<br /><br />Lauren Seiler describes humans as “poly-super-organisms” and says that the system boundaries between life and non-life are very indistinct.Simonhttps://www.blogger.com/profile/00540668068672572303noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-70599695252254362122015-02-05T15:38:52.635-08:002015-02-05T15:38:52.635-08:00If they have different morals, can they be saved? h...If they have different morals, can they be saved? http://gizmodo.com/when-superintelligent-ai-arrives-will-religions-try-t-1682837922/+charliejaneSimonhttps://www.blogger.com/profile/00540668068672572303noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-12495124635286806062015-02-05T13:31:20.874-08:002015-02-05T13:31:20.874-08:00Hi, Eric! There's a lively conversation based...Hi, Eric! 
There's a lively conversation based on this article happening here:<br /><br />https://plus.google.com/u/0/117663015413546257905/posts/Kmpf8JsdxKCJohn Baezhttps://www.blogger.com/profile/11573268162105600948noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-79099891819051506612015-02-05T11:38:17.182-08:002015-02-05T11:38:17.182-08:00David, I think I agree with most of that, especial...David, I think I agree with most of that, especially your concluding thought:<br /><br />"So all we are left with is a suite of interpretative tricks whose limits of applicability are unknown. Far from being a transcendental condition on agency as such, it’s just a hack that might work for posthumans or aliens, or might not. And if this is right, then there is no future-proof moral framework for dealing with feral Robots, Cthulhoid Monsters or the like. Following First Contact, we would be forced to revise our frameworks in ways that we cannot possibly have a handle on now. Posthuman ethics must proceed by way of experiment."<br /><br />I do think we can anticipate that enduring creatures (like the "Matrioshka Brain" of one of my earlier posts) are likely to have a structure of self-preserving or at least species-preserving goals that will make them partially interpretable; and if they can manipulate elements generatively, in communication with others of their kind, according to rules or quasi-rulish regularities, then, well, maybe we have language there. 
But I think these might be empirical likelihoods rather than strict necessities.<br /><br />So the possibility of some kind of rational engagement with aliens seems not far-fetched to me, contra the darkest pessimists (like Stanislaw Lem?); but I think reflection on these types of examples suggests that we might not get as far as we might have thought without some serious rethinking.<br /><br />All this I take to be a similar path to a similar conclusion to what you express above.Eric Schwitzgebelhttps://www.blogger.com/profile/11541402189204286449noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-11517831829254031102015-02-05T11:11:53.895-08:002015-02-05T11:11:53.895-08:00Mobius: The comment is a bit too densely packed and me...Mobius: The comment is a bit too densely packed and metaphorical for me to understand! I do agree that the past is always prologue; nothing is born in philosophy without substantial precedent.Eric Schwitzgebelhttps://www.blogger.com/profile/11541402189204286449noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-28044573482534960412015-02-05T11:09:50.716-08:002015-02-05T11:09:50.716-08:00Chinaphil: "As with children, it's the le...Chinaphil: "As with children, it's the letting go of control that allows them to become complete people." Interesting thought!<br /><br />I'll have to look again at the Bostrom; I don't recall that oversight in the text (and I agree it would be an oversight). I do agree that, if a fully-functional general-intelligence AI is created, we should (both normatively and empirically) expect to engage it on its own ground and expect its values (as well as ours) to change over time in a way that eludes our full control.Eric Schwitzgebelhttps://www.blogger.com/profile/11541402189204286449noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-22204716254202433912015-02-05T10:44:50.745-08:002015-02-05T10:44:50.745-08:00Hi Eric,
I think you’re asking all the right que...Hi Eric,<br /><br />I think you’re asking all the right questions here!<br /><br />Some (not me) might object that our conception of a rational agent is maximally substrate neutral. It's the idea of a creature we can only understand "voluminously" by treating it as responsive to reasons. According to some (Davidson/Brandom) this requires the agent to be social and linguistic – placing such serious constraints on "posthuman possibility space" as to render your discourse moot.<br />Even if we demur on this, it could be argued that the idea of a rational subject as such gives us a moral handle on any agent - no matter how grotesque or squishy. This seems true of the genus "utility monster". We can acknowledge that UM’s have goods and that consequentialism allows us to cavil about the merits of sacrificing our welfare for them. Likewise, agents with nebulous boundaries will still be agents and, so the story goes, rational subjects whose ideas of the good can be addressed by any other rational subject.<br />So according to this Kantian/interpretationist line, there is a universal moral framework that can grok any conceivable agent, even if we have to settle details about specific values via radical interpretation or telepathy. And this just flows from the idea of a rational being.<br />I think the Kantian/interpretationist response is wrong-headed. But showing why is pretty hard. A line of attack I pursue concedes to Brandom-Davidson that we have the craft to understand the agents we know about. But we have no non-normative understanding of the conditions something must satisfy to be an interpreting intentional system or an apt subject of interpretation (beyond commonplaces like heads not being full of sawdust).<br />So all we are left with is a suite of interpretative tricks whose limits of applicability are unknown. 
Far from being a transcendental condition on agency as such, it’s just a hack that might work for posthumans or aliens, or might not.<br />And if this is right, then there is no future-proof moral framework for dealing with feral Robots, Cthulhoid Monsters or the like. Following First Contact, we would be forced to revise our frameworks in ways that we cannot possibly have a handle on now. Posthuman ethics must proceed by way of experiment.<br />Or they might eat our brainz first.<br />Davidhttps://www.blogger.com/profile/04359327661778032716noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-91631055361412703052015-02-05T10:34:42.256-08:002015-02-05T10:34:42.256-08:00This comment has been removed by the author.Davidhttps://www.blogger.com/profile/04359327661778032716noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-63042127109234195022015-02-05T10:29:23.003-08:002015-02-05T10:29:23.003-08:00Simon: I think I agree with everything you said --...Simon: I think I agree with everything you said -- and thanks for the references! 
I think the necessities of survival/reproduction are going to make some types of value systems much less likely than others -- a thought I start to explore in last year's post on Matrioshka Brains, and which I'm thinking of exploring a bit more in a coming post.Eric Schwitzgebelhttps://www.blogger.com/profile/11541402189204286449noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-60325110129813449502015-02-05T10:27:18.171-08:002015-02-05T10:27:18.171-08:00This comment has been removed by the author.Davidhttps://www.blogger.com/profile/04359327661778032716noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-19725936316063346312015-02-05T10:21:47.156-08:002015-02-05T10:21:47.156-08:00This comment has been removed by the author.Davidhttps://www.blogger.com/profile/04359327661778032716noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-32348278029975408932015-02-05T10:19:21.084-08:002015-02-05T10:19:21.084-08:00This comment has been removed by the author.Davidhttps://www.blogger.com/profile/04359327661778032716noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-70278532686446856372015-02-05T10:11:31.667-08:002015-02-05T10:11:31.667-08:00This comment has been removed by the author.Davidhttps://www.blogger.com/profile/04359327661778032716noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-48114887111087247662015-02-05T08:48:56.575-08:002015-02-05T08:48:56.575-08:00It seems the past is prologue. Could Plato have al...It seems the past is prologue. Could Plato have already given birth to the monster ethic by presupposing a foundation to mathematics (the Sun itself in the Allegory of the Cave), thus giving rise to the intuitions of calculators? Not until Gödel, the analogue of the Greek Prometheus, were the possibilities of freedom from the universe of necessity rekindled. Even worse than the human calculators are the Noble Liars with no ontological commitment.Mobius Triphttps://www.blogger.com/profile/11620423740245738406noreply@blogger.comtag:blogger.com,1999:blog-26951738.post-2189284244604226372015-02-04T21:02:07.048-08:002015-02-04T21:02:07.048-08:00Sorry, double comment again, but I've just rea...Sorry, double comment again, but I've just realised that I can invert the argument I just made. If a being can't assess and change its own goals, then it's not conscious, and it's not legitimate to call those goals "the being's goals" - they're givens, not goals.<br /><br />Perhaps this is something like a Kantian argument: in order to be a being, it has to have existence separate from its goals, so that we can see *it* as an end in itself. A computer program designed to do my taxes has no identity; its identity is its purpose. It's only when we can separate the identity from the purpose that our AI can become a thing unto itself. So maybe that's the worry with Bostrom's argument. If these AIs are (clever, meta-) programs for producing our values, then they will never be moral objects, things in themselves. As with children, it's the letting go of control that allows them to become complete people.chinaphilhttps://www.blogger.com/profile/14572591745611690731noreply@blogger.com