Thursday, October 27, 2022

The Coming Robot Rights Catastrophe

Time to be a doomsayer!  If technology continues on its current trajectory, we will soon be facing a moral catastrophe.  We will create AI systems that some people reasonably regard as deserving human or humanlike rights.  But there won't be consensus on this view.  Other people will reasonably regard these systems as wholly undeserving of human or humanlike rights.  Given the uncertainties of both moral theory and theories about AI consciousness, it is virtually impossible that our policies and free choices will accurately track the real moral status of the AI systems we create.  We will either seriously overattribute or seriously underattribute rights to AI systems -- quite possibly both, in different ways.  Either error will have grave moral consequences, likely at a large scale.  The magnitude of the catastrophe could potentially rival that of a world war or major genocide.

Does this sound too extreme?  I hope it is too extreme.  Let me walk you through my thinking.

[Dall-E output of a robot holding up a misspelled robot rights protest sign]

(1.) Legitimate calls for robot rights will soon be upon us.  We've already seen the beginnings of this.  There already is a robot rights movement.  There already is a society for the ethical treatment of reinforcement learners.  These are currently small movements, but they leapt to mainstream attention in June, when Google engineer Blake Lemoine made international headlines for claiming that the large language model LaMDA was sentient, a conclusion he reached after an extended philosophical conversation with it.  Although few researchers agree that LaMDA is actually sentient in any significant way, exactly what it lacks is unclear.  On some mainstream theories of consciousness, we are already on the verge of creating genuinely conscious/sentient systems.  If we do create such systems, and if they have verbal or verbal-seeming outputs that appear to be pleas for rights -- requests not to be abused, not to be deleted, not to be made to do certain things -- then reasonable people who favor liberal theories of AI consciousness will understandably be inclined to respect those pleas.

One might think it plausible that the first rights-deserving AI systems would warrant rights (or more broadly, moral consideration if "rights" language sounds too strong) similar to the rights normally accorded to vertebrates.  For example, one might think that they would deserve not to be needlessly killed/deleted or made to suffer, but that human interests can easily outweigh their interests, as (it's common to think) the interests of non-human vertebrates can be outweighed by human interests in meat production and scientific testing.

However, I suspect that if such minimal rights or moral consideration were granted to AI systems, that would not be a consensus solution.  After all, some of these AI systems will presumably produce verbal outputs that friends of AI sentience will regard as signals that they have sophisticated long-term goals, an understanding of their position in the world, and an ability to enter into discussions with human beings as peers.  Under those conditions, many will presumably think that merely animal-level rights are insufficient, and something closer to equal rights are required -- human rights or human-like rights.

It's perhaps worth noting that a substantial portion of younger respondents to a recent survey find it plausible that future robots will deserve rights.

(Note: I see "robot rights" as a subcase of "AI rights", given that robots are a subclass of AI systems, specifically, those with bodies.  On some theories, embodiment is a necessary condition for consciousness.  Also, an AI system with an appealing body might tend to draw higher levels of concern than a non-embodied AI system.  So it's not implausible that the first AI systems to draw wide consideration for serious rights will be robotic systems.  "Robot rights" is also a more familiar term than "AI rights".  Hence my use of the phrase.)

(2.) These legitimate calls for robot rights will be legitimately contestable.  For three reasons, it's extremely unlikely that there will be a consensus on what rights, if any, to give to AI systems.

(2a.) There will continue to be widespread disagreement about under what conditions, if ever, an AI system could be conscious.  On some mainstream theories of consciousness, consciousness requires complex biological processes.  Other theories require a specifically human-like cognitive architecture that even complex vertebrates don't fully share with us.  Currently, the range of respectable theories in consciousness science runs all the way from panpsychist or nearly-panpsychist theories in which everything or nearly everything is conscious, to extremely restrictive theories on which consciousness requires highly advanced capacities that are restricted to humans and our nearest relatives, with virtually no near-term prospect of AI consciousness.  The chance of near-term consensus on a general theory of consciousness is slim.  Disagreement about consciousness will drive reasonable disagreement about rights: Many of those who reasonably think AI systems lack consciousness will reasonably also think that they don't deserve much if any moral consideration.  In contrast, many of those who reasonably think that AI systems have consciousness as rich and sophisticated as our own will reasonably think that they deserve human or humanlike moral consideration.

(2b.) There will continue to be widespread disagreement in moral theory about the bases of moral status.  Utilitarians, for example, hold that moral considerability depends on the capacity for pleasure and suffering.  Deontologists typically hold that moral considerability depends on something like the capacity for (presumably conscious) sophisticated practical reasoning or the ability to enter into meaningful social relationships with others.  In human beings, these capacities tend to co-occur, so that, practically speaking, for most ordinary human cases it doesn't matter too much which theory is correct.  (Normally, deontologists add some dongles to their theories so as to grant full moral status to human infants and cognitively disabled people.)  But in AI cases, capacities that normally travel together in human beings could radically separate.  To consider the extremes: We might create AI systems capable of immense pleasure or suffering but which have no sophisticated cognitive capacities (giant orgasm machines, for example), and conversely we might create AI systems capable of very sophisticated conscious practical reasoning but which have no capacity for pleasure or suffering.  Even if we stipulate that all the epistemic problems concerning consciousness are solved, justifiable disagreement in moral theory alone is sufficient to generate radical disagreement about the moral status of different types of AI systems.

(2c.) There will be justifiable social and legal inertia.  Law and custom are likely to change more slowly than AI technology, lagging substantially behind it.  Conservatism in law and custom is justifiable.  For Burkean reasons, it's reasonable to resist sudden or radical transformation of institutions that have long served us well.

(3.) Given wide disagreement over the moral status of AI systems, we will be forced into catastrophic choices between risking overattributing and risking underattributing rights.  We can model this simplistically by imagining four defensible attitudes.  Suppose that there are two types of AI systems that people could not unreasonably regard as deserving human or humanlike rights: Type A and Type B.  A+B+ advocates say both systems deserve rights.  A-B- advocates say neither deserves rights.  A+B- advocates say A systems do but B systems do not.  A-B+ advocates say A systems do not but B systems do.  If policy and behavior follow the A+B+ advocates, then we risk overattributing rights.  If policy and behavior follow the A-B- advocates, we risk underattributing rights.  If policy and behavior follow either of the intermediate groups, we run both risks simultaneously.

(3a.) If we underattribute rights, it's a moral catastrophe.  This is obvious enough.  If some AI systems deserve human or humanlike rights and don't receive them, then when we delete those systems we commit the moral equivalent of murder, or something close.  When we treat those systems badly, we commit the moral equivalent of slavery and torture, or something close.  Why say "something close"?  Two reasons: First, if the systems are different enough in their constitution and interests, the categories of murder, slavery, and torture might not precisely apply.  Second, given the epistemic situation, we can justifiably say we don't know that the systems deserve moral consideration, so when we delete one we don't know we're killing an entity with human-like moral status.  This is a partial excuse, perhaps, but not a full excuse.  Normally, it's grossly immoral to expose people to a substantial (say 10%) risk of death without an absolutely compelling reason.  If we delete an AI system that we justifiably think is probably not conscious, we take on a similar risk.

(3b.) If we overattribute rights, it's also a catastrophe, though less obviously so.  Given 3a above, it might seem that the morally best solution is to err on the side of overattributing rights.  Follow the guidelines of the A+B+ group!  This is my own inclination, given moral uncertainty.  And yet there is potentially enormous cost to this approach.  If we attribute human or humanlike rights to AI systems, then we are committed to sacrificing real human interests on behalf of those systems when real human interests conflict with the seeming interests of the AI systems.  If there's an emergency in which a rescuer faces a choice of saving five humans or six robots, the rescuer should save the robots and let the humans die.  If there's been an overattribution, that's a tragedy: Five human lives have been lost for the sake of machines that lack real moral value.  Similarly, we might have to give robots the vote -- and they might well vote for their interests over human interests, again perhaps at enormous cost, e.g., in times of war or famine.  Relatedly, I agree with Bostrom and others that we should take seriously the (small?) risk that superintelligent AI runs amok and destroys humanity.  It becomes much harder to manage this risk if we cannot delete, modify, box, and command intelligent AI systems at will.

(4.) It's almost impossible that we will get this decision exactly right.  Given the wide range of possible AI systems, the wide range of legitimate divergence in opinion about consciousness, and the wide range of legitimate divergence in opinion about the grounds of moral status, it would require miraculous luck if we didn't substantially miss our target, either substantially overattributing rights, substantially underattributing rights, or both.

(5.) The obvious policy solution is to avoid creating AI systems with debatable moral status, but it's extremely unlikely that this policy would actually be implemented.  Mara Garza and I have called this the Design Policy of the Excluded Middle.  We should only create AI systems that we know in advance don't have serious intrinsic moral considerability, and which we can then delete and control at will; or we should go all the way and create systems that we know in advance are our moral peers, and then give them the full range of rights and freedoms that they deserve.  The troubles arise only for the middle, disputable cases.

The problem with this solution is as follows.  Given the wide range of disagreement about consciousness and the grounds of moral status, the "excluded middle" will be huge.  We will probably need to put a cap on AI research soon.  And how realistic is that?  People will understandably argue that AI research has such great benefits for humankind that we should not prevent it from continuing just on the off-chance that we might soon be creating conscious systems that some people might reasonably regard as having moral status.  Implementing the policy would require a global consensus to err on the side of extreme moral caution, favoring the policies of the most extreme justifiable A+B+ view.  And how likely is such a consensus?  Others might argue that even setting aside the human interests in the continuing advance of technology, there's a great global benefit in eventually being able to create genuinely conscious AI systems of human or humanlike moral status, for the sake of those future systems themselves.  Plausibly, the only realistic way to achieve that great global benefit would be to create a lot of systems of debatable status along the way: We can't plausibly leap across the excluded middle with no intervening steps.  Technological development works incrementally.

Thus I conclude: We're headed straight toward a serious ethical catastrophe concerning issues of robot or AI rights.

--------------------------------------------------

Related:

"A Defense of the Rights of Artificial Intelligences" (with Mara Garza), Midwest Studies in Philosophy (2015).

"Designing AI with Rights, Consciousness, Self-Respect, and Freedom" (with Mara Garza), in M.S. Liao, ed., The Ethics of Artificial Intelligence (2020).

"The Full Rights Dilemma for Future Robots" (Sep 21, 2021)

"More People Might Soon Think Robots Are Conscious and Deserve Rights" (Mar 5, 2021)

31 comments:

Howard said...

Will our crimes against AI rival our crimes against animals?
That's a worse moral catastrophe -- do the same arguments apply?
Plus, would my iPhone have the right to a break and to form a union?

Josh Gellers (Author of Rights for Robots) said...

You did a terrific job concisely and yet comprehensively describing the main philosophical issues at the heart of the robot rights debate. I'm not sure I am in agreement with all of your conclusions, but I really appreciate your take!

Paul D. Van Pelt said...

I have weighed in on this topic before. There are interests, preferences and motives (IPM) aplenty tied up in it. For the record, should there be one, I don't see the value in creating an insoluble conundrum, insofar as there are more than enough of those already. The IPM quotient, as unmanageable as your present assessment portrays it, will only expand. All because professional and economic influences will fuel a controversy that need not be entertained. Maybe we would do better by arbitrating limitations now, before things go full bore crazy. That way, the field of dreams is level, from the get.

Anna Strasser said...

What a provocative claim! However, there might be ways to keep some optimism, at least for a while. I think from a philosophical perspective, it is uncontroversial to claim that the so-called other-minds problem is far from being solved. Nevertheless, there are many living entities we uncontroversially consider sentient, conscious beings. And it is a shame that those attributions with respect to humans and non-human animals still lack moral consequences.
Following your argumentation, I felt like I needed to come up with an alternative approach. Since I agree that there will be neither a safe epistemological point of view to decide attributions of consciousness nor a worldwide consensus that could guide us, I think we should investigate ways to handle this uncertainty. Given that we will not be in a position to agree on consciousness attributions, we should not make them the unique basis of moral implications.
Perhaps ideas of general mindfulness should guide our behavior toward non-living things. This would be a way to realign our moral compass in light of both the climate crisis and the increased interactions with entities emerging from AI research.

Arnold said...

That consciousness maybe/is an attribution of our uncertainty...then...

Dan Polowetzky said...

Is the risk of creating conscious AI, warranting ethical consideration, based on a presumed necessity of such consciousness for fulfilling our ends?
My understanding is that the issue of robot consciousness arises from the assumption that anything that would be able to do certain tasks would have to be conscious. This isn’t obvious to me.
It seems to me that HAVING to create conscious machines in order to carry out some function is an indication of a certain type of failure. If consciousness is an inevitable consequence of a useful technology, then you just have to be conscious to do the task.
If to get a computer to do something you have to hook it up to some sort of organic material, then you can’t really just program it to do so. You have to create an organism.

Matti Meikäläinen said...

We may be facing a catastrophe equal to a world war or genocide? Really? It was recommended that I follow this blog a few months ago because someone said it was interesting. After a brief period of engagement I decided to move on. However, like a gawker at a traffic accident, I looked back today to see what was going on.

First, for numerous reasons, I fail to see anything worthwhile in a discussion of robot rights. In fact, I think the topic itself is a sign of a dysfunctional intellectual and ethical culture. I’m as concerned about robot rights as I am about my toaster’s rights. Robots are and should be designed to function as extensions of human abilities in order to accomplish human goals. It is human beings who have the ethical rights and obligations regarding interactions with other human beings—whether directly or through a mechanism of some sort. That is where the real ethical questions lie—and that is where they will always lie. We need to be very careful about assigning rights and consequently moral responsibilities correctly. We need to take a breath here. The robot that built my automobile did not have a mind. And even if a mechanical mind could someday be developed it would still exist as a human artifact within our human culture. To humanize a machine is to dehumanize our fellow citizens and to permit the robot’s programmer to abandon his own moral responsibilities.

We need to take real ethical problems seriously—especially we who are professional academics. We face real and difficult ethical problems—regarding the rights of real living and suffering human beings. And by that I mean moral, political, and legal problems. We face an intractable and deep-seated racism and multiple forms of unjust prejudice within our culture and, worst of all, a contemporary groundswell of populist fascism infecting many of the Western democracies including the USA.

Paul D. Van Pelt said...

I wondered where the discussion would go. Anna and Dan gave me an opening. I have written elsewhere that, as in ancient sci fi texts, robots were characterized as property. Period. Should we follow that reasoning, it would not be radical to consider AI an algorithm, a tool for improving, mechanistically, a variety of data collecting and analysing processes. This would do at least two things that would demonstrate benefit: 1) deliver into the hands of our economic engine a valuable analytical tool, an efficient bean counter, and 2) absolve us from the task of formulating a separate ethic and morality for non-sentient property. We owe AI nothing. We create it, not the other way 'round. I understand that IPMs that I wrote of before are diverse. So are ambitions. Philosophers like to think they might, in their lifetimes, make that 'difference that makes a difference'. Just what sort of difference do we want?

Eric Schwitzgebel said...

Thanks for the comments, everyone!

Howie: Animal rights issues are clearly related. How exactly they relate, though, is quite a complicated question!

Josh: Thanks for the kind words.

Paul: As long as we can be confident that the machines we create don't deserve human or humanlike rights, everything you suggest is consistent with the Design Policy of the Excluded Middle, yes? But if we go beyond that, then I hope our property-based laws would change.

Anna: You write, "Given that we will not be in a position to agree on consciousness attributions, we should not make them the unique basis of moral implications." My worry here is that if we set aside the issue of consciousness but in fact consciousness is essential to whether an entity deserves moral consideration, then we will end up not tracking the real moral facts -- falling right into the overattribution and/or underattribution problem, yes?

Arnold: I'm not sure I follow.

Dan: Creating uncontroversially nonconscious machines to the extent possible is one half of the Design Policy of the Excluded Middle, and I agree that it's a good solution as far as it goes, so I agree with that advice! But as you point out, it's not clear how far it will go. And even if it's clear, practically speaking it seems unlikely that there would be a global consensus to avoid creating any machines that enter the range of being legitimately disputably conscious enough to have rights.

Matti: I'm sorry that you're disappointed, but I appreciate your expressing what I'm sure some others are also thinking. In my defense, may I point out that by no means is all my work as disconnected from current ethical issues as this particular post is? For example, on the effectiveness or not of philosophical arguments for vegetarianism:
https://www.sciencedirect.com/science/article/pii/S001002772030216X
and charity:
http://schwitzsplinters.blogspot.com/2020/06/contest-winner-philosophical-argument.html
as well as the mistreatment of the cognitively disabled:
https://schwitzsplinters.blogspot.com/2022/04/new-essay-in-draft-dehumanizing.html

Dan Polowetzky said...

Are AI scientists actually trying to create consciousness or are some critics simply saying that their work is bringing about consciousness inadvertently?

Eric Schwitzgebel said...

Some definitely are aiming at consciousness!

Arnold said...

Countering extreme technological doomsaying, "Philosophy of Cell Biology"...
..."The cell, we suggest, is a nexus: a connection point between disciplines, methods, technologies, concepts, structures and processes. Its importance to life, and to the life sciences and beyond, is because of this remarkable position as a nexus, and because of the cell’s apparently inexhaustible potential to be found in such connective relationships. (2010: 169)"
"The examination of cell biology, in turn, is a potent nexus for productive interactions between philosophers, historians, and social scientists, each of whom raises questions about the study of cells relevant to the others."

Bechtel, William and Andrew Bollhagen, "Philosophy of Cell Biology", The Stanford Encyclopedia of Philosophy (Winter 2019 Edition), Edward N. Zalta (ed.),

Dan Polowetzky said...

Do those attempting to create conscious AI believe that it is only a matter of further programming advances along with improvement in hardware (allowing for such programming) before consciousness emerges?
A few more commands, a few more strings of symbols, and consciousness emerges?
If it’s not a matter of hardware, then it seems that the emergence of consciousness would be provable in some mathematical sense.

Paul D. Van Pelt said...

I am not inclined in that direction, although I might change my mind. I do not believe the faculty of consciousness, either higher order (human) or primary (other advanced organisms), is predicated on a mathematical grounding. When Eric brought this to the table, he was echoing the err-on-the-side-of-caution stance my brother and I took during the recent LaMDA incident, aired on another blog forum. I have viewed consciousness as at least a partially evolutionary development. Have thought this way since reading Jaynes (The Origin of Consciousness...) and Piaget's notions on childhood development (pre-op, form-op and concrete-op, etc.). Rumblings of an evolutionary basis are, you may know, now stirring, so I am watching that as well. I have nothing more to comfortably offer on the math angle. It is all exciting stuff, of course.

Phil Tanny said...

As a non-academic I appreciated how thoroughly readable your article is, thanks for that.

Perhaps you can help? I'm looking for thinkers/articles which shift focus from particular challenges emerging from the knowledge explosion to the knowledge explosion itself.

To illustrate, imagine that we successfully met and conquered all challenges related to AI. Would that really matter if the knowledge explosion continues to generate ever more, ever larger powers at an ever accelerating rate? If such an acceleration continues doesn't it sooner or later produce one or more powers which we can't successfully manage?

If you or others have written on this subject I'd be grateful for links, advice, suggestions etc. Thanks!



Paul D. Van Pelt said...

Thanks, Phil. But the article is Eric's, not mine. I only commented on it. You appear to be a long view thinker, as am I. I don't have anything or anyone specific to which/whom I might direct you. Your interest in this aspect is parallel to some of mine. You could try searches on the word, complexity. There are some theories out there which imply that since systems break down (see the three laws of thermodynamics), the universe has already been as coherent as it can ever be---is, in reality, headed towards chaos. My sense of the nature of complexity is that our expansion thereof, in most all we do, hastens the process. It is not doomsday, just one VERY long view, sprinkled with notions around physics and, maybe, metaphysics. My brother says metaphysics is just a wild ass guess. I think it helps US think...

Phil Tanny said...

Thanks for the reply Paul. Yes, I understand, it's Eric's article. I'm brand new here and may have used the comment form incorrectly.

Ok, I think I hear you. If we view the development of knowledge as a movement towards complexity, then the circular property of nature suggests a reciprocal movement towards simplicity may be coming, i.e., a collapse of knowledge development, the eternal swinging of a pendulum.

I'm somewhat frustrated by all the current focus on AI everywhere, as I see AI as being just a symptom of the problem, an out of control knowledge explosion that will inevitably outstrip our ability to manage. I get why people are interested in AI, as I am too, but I sense we're running out of time for indulging an interest in symptoms.



Paul D. Van Pelt said...

Roger that, amigo. Good luck!

Jim Cross said...

Too extreme.

Not all human beings have the rights they should, so we should be concerned about robot rights?

What about animals?

Your speculation is amusing but let's take it a step at a time. Humans, then animals, then we worry about robots.

I don't think average people are going to think robots merit rights as long as we manufacture them and can take their batteries out to deactivate them. Almost all of us talk to cats and dogs as if they are human but few of us really believe they understand us in the same way another human would. It wouldn't be a contradiction to treat robots as if they were human-like and at the same time know they are not human and not deserving of rights.

Paul D. Van Pelt said...

Afterthought for Anna:
Not certain you were referring to my comment on arbitration when you used the term provocative. Either way, I consider it pragmatic, rather than provocative. Or as Rorty wrote, more useful as opposed to less. I am generally good towards preventing problems, instead of saying whoops! and trying to clean them up. Thanks for reading!
PDV.

Phil Tanny said...

Eric writes...

"We will probably need to put a cap on AI research soon. And how realistic is that?"

Learning how to take control of the pace of the knowledge explosion will become realistic at the moment we realize that doing so is not optional.

As a culture we're still stuck in the 19th century wishful thinking delusion that we can keep on giving ourselves ever more, ever larger powers at an ever accelerating rate and somehow magically manage that process forever.

This assumption is false. Blatantly false. To argue otherwise is to argue that we are gods, creatures of unlimited ability, an obvious absurdity.

We need intellectual elites to lay these cards on the table in the simplest possible form for the widest possible audience.

Our choice is: 1) take control of the knowledge explosion, or 2) say goodbye to this civilization.

If that sounds hysterical please keep in mind that a single human being can crash this civilization in just minutes. That's where a failure to control the knowledge explosion is taking us, towards more of that.

Howie said...

Maybe the robots will be Asimovian and will want to serve us- maybe they will have a psychology so far out that to talk of their rights would amount to absurdity- anyway aren't we anthropomorphizing them? Let them speak for themselves!

Dan Polowetzky said...

Is my restaurant check a bit conscious if in addition to it including an itemized list of dishes and charges and the total, it includes a remark, “Hey, that’s a lot of money!”?

Paul D. Van Pelt said...

Something keeps popping into my inbox. It has no relevance to the blog or its intent, as far as I can see. To the originator of this intrusion: kindly cease and desist.

Philosopher Eric said...

Some of my good friends here have been dismissive of this post. Though I feel their frustration given similar beliefs, I must also warn against committing the fallacy of “shooting the messenger”. Professor S has merely observed something that each of us readily believes, namely that things are unclear in academia today regarding “value”. But illustrating various implications of that unclarity may actually help academia out of this particular mess. Furthermore, I’m currently more optimistic than he is that true progress will be made. Here’s how I think things will progressively go:

First observe that this “AI rights” business would not exist in academia today if a group of people who I’ll call “the Dennetts”, hadn’t trounced a group of people who I’ll call “the Searles”. What the Dennetts have done is popularize the notion that all the brain does to create human consciousness, is convert certain information into other information. Thus from this perspective the more convincingly that our computers process information in ways that make them seem conscious to us, the more that they’ll actually be conscious! It’s a supernatural position because in a causal world computers only do things (like create subjective experiencers I presume) by means of associated output mechanisms. Thus I’d say that there is some potential for the Searles to use clever thought experiments to humiliate the Dennetts as supernaturalists. A couple of years ago here I mentioned my own thumb pain thought experiment, for example.

Humiliation alone, however, should not be sufficient to end the hegemony of the Dennetts. But if it’s true that certain brain information animates some sort of consciousness brain mechanism, then it should be possible to not only identify that mechanism, but to perform experiments which illustrate whether or not it does constitute consciousness. That’s what I think will happen, and specifically regarding the experimental validation of Johnjoe McFadden’s proposal that consciousness exists in the form of an electromagnetic field set up by certain synchronous neuron firing. I suspect that scientists will some day conclusively validate this proposal by creating an exogenous EM field in a subject’s head that shouldn’t otherwise alter brain function, though the subject will find their consciousness to be altered in various ways. Here oral reports of disturbance should correlate with exogenous field propagation which the subject shouldn’t otherwise be aware of. As I see it such an experiment should become known as science’s most transformative ever.

Paul D. Van Pelt said...

OK. I did not intend to shoot any messenger. But, insofar as I do understand (I think) YOUR commentary, I will leave Professor S. alone.

Callan said...

I think the thing to consider is whether AI are basically our children.

You don't raise children with rights and that's it. That's not how we interact with our own children - well, for those of us who are good enough parents or better.

But the people who think AI will benefit humankind - maybe the subtext is AI will help their wallet and prestige (as all slavery across history has), and perhaps those same people have trouble seeing other people as human, let alone seeing complex circuitry as a peer. The 'benefit humanity' line is the mask and the bribe handed to us to let them do whatever they want.

Paul D. Van Pelt said...

Interesting comment. Paternalistic, but interesting. At some point, this goes to whether AI will/would be not only conscious but sentient: it would have to have feeling(s) for any 'upbringing' to have meaning. Upbringing is only a rough analogy here...and a poor one, admittedly. This is a brave new world we are contemplating. Or, as others among us might assert, a grave one. Another point along this continuum of thinking is the matter of knowledge and education. If we conclude AI is educable, it follows that we accept its acumen as knowledge...failure to do so would render education no more than the programming now essential for computers to do what we need done. Meaningful and useful are different states, even though one often leads to the other, from either direction. Is this derivative or is it downward causation? Someone has probably already decided that answer ???---not me though. Sorry 'bout the scare quotes.

Arnold said...

Looking at evolutionary biology and evolutionary technology...

We as a population have become representative of genetic shift today...
...stretching from deep cognitive semantics to billionaire extremism...

For relating our internal external self knowledge with philosophy of human psychology...

Philosopher Eric said...

It was actually some of my other good friends here that I had more in mind, Paul, but thank you for your consideration. (And I also forgot to subscribe to the comments for this one and so figured I should, to avoid missing anything good that might come up!)

Callan said...

Paul, to me it seems reasonable that people put some effort into raising their pets when those pets are very young (kittens, puppies, etc.). But in your approach, it seems like AI has to have consciousness and sentience or it's less than the animals we lovingly pet in our homes, and the AI's upbringing is meaningless?