How about intuitive ethics?
I incline toward moral realism. I think that there are moral facts that people can get right or wrong. Hitler's moral attitudes were not just different from ours but actually mistaken. The twentieth century "rights revolutions" weren't just change but real progress. I worry that if artificial intelligence research continues to progress, intuitive ethics might encounter a range of cases for which it is as ill prepared as intuitive physics was for quantum entanglement and relativistic time dilation.
Intuitive ethics was shaped in a context where the only species capable of human-grade practical and theoretical reasoning was humanity itself, and where human variation tended to stay within certain boundaries. It would be unsurprising if intuitive ethics were unprepared for utility monsters (capable of superhuman degrees of pleasure or pain), fission-fusion monsters (who can merge and divide at will), AIs of vastly superhuman intelligence, cheerfully suicidal AI slaves, conscious toys with features specifically designed to capture children's affection, giant virtual sim-worlds containing genuinely conscious beings over which we have godlike power, or entities with radically different value systems. We might expect human moral judgment to be baffled by such cases and to deliver wrong or contradictory or unstable verdicts.
For physics and biology, we have pretty good scientific theories by which to correct our intuitive judgments, so it's no problem if we leave ordinary judgment behind in such matters. However, it's not clear that we have, or will have, such a replacement in ethics. There are, of course, ambitious ethical theories -- "maximize happiness", "act on that maxim that you can at the same time will to be a universal law" -- but the development and adjudication of such theories depends, and might inevitably depend, on our intuitive judgments about such cases. It's because we intuitively or pre-theoretically think we shouldn't give all our cookies to the utility monster or kill ourselves to tile the solar system with hedonium that we reject the straightforward extension of utilitarian happiness-maximizing theory to such cases and reach for a different solution. But if our commonplace ethical judgments about such cases are not to be trusted, because these cases are too far beyond what we can reasonably expect human moral intuition to handle well, what then? Maybe we should kill ourselves to tile the solar system with hedonium (the minimal collection of atoms capable of feeling pleasure), and we're just unable to appreciate this fact with moral theories shaped for our limited ancestral environments?
Or maybe morality is constructed from our judgments and folkways, so that whatever moral facts there are, they are just the moral facts that we (or idealized versions of ourselves) think there are? Much like an object's being red, on a certain view of the nature of color, consists in its being such that ordinary human perceivers in normal conditions would experience it as red, maybe an action's being morally right just consists in its being such that ordinary human beings who considered the matter carefully would regard it as right? (This is a huge, complicated topic in metaethics.) If we take this approach, then morality might change as our sense of the world changes -- and as who counts as "we" changes. Maybe we could decide to give fission-fusion monsters some rights but not other rights, and shape future institutions accordingly. The unsettled nature of our intuitions about such cases, then, might present an opportunity for us to shape morality -- real morality, the real (or real enough) moral facts -- in one direction rather than another, by shaping our future reactions and habits.
Maybe different social groups would make different choices with different consequences for group survival, introducing cultural evolution into the mix. Moral confusion might open into a range of choices for moral architecture.
However, the range of legitimate choices is, I'm inclined to think, constrained by certain immovable moral facts, such as that it would be a moral disaster if the most successful future society constructed human-grade AIs, as self-aware as we are, as anxious about their future, and as capable of joy and suffering, simply to torture, enslave, and kill them for no good reason.
---------------------------------------------- Thanks to Ever Eigengrau for extensive discussion.
Have you thought about how morality is tied to the kind of progress which would be required for such beings to come into existence? For example, it seems like science requires some minimum amount of honesty and other moral virtues—or at least, nations with 'better' virtues (according to some optimizing-for-science 'better') out-compete nations with 'worse' virtues. We could call these 'science virtues' to be clear that they aren't what your common person would call 'moral virtue'. For example, giving charity to the poor doesn't seem like a 'science virtue'.
Very cool stuff, Eric.
There seem to me to be two questions here. The first is: might there be more moral problems than current modes of moral theorizing are equipped to grapple with? The second is: Is moral realism true? Thinking about strange posthumans really raises the plausibility of a “yes” answer to the first question in my mind. However, I’m not seeing how any progress is thereby made one way or another on the second question.
Suppose that some humans who share your views come into conflict with the super sociopaths you worry about in your final paragraph. The two groups agree to try to talk things out before a full-blown war breaks out. They are each well-armed enough that diplomacy seems to each to be prudent. Suppose that both groups speak English, and both use words like “ought,” “moral,” etc. Suppose further that the super sociopaths also couch their views, as you do, in terms of moral realism. They are happy to announce that there are “certain immovable moral facts;” however, they just radically disagree with you and your ilk about what those facts are. What at all is added if both parties to the dispute declare their allegiance to moral realism?
Alternately, couldn’t there be a similar scenario that differed in that both groups were the same flavor of moral-ANTIrealist? They would agree in their meta-ethics, agreeing that moral facts are response-dependent secondary properties or whatever, but still disagree over whether it was morally permissible for the Super Sociopaths to torture the hell out of a trillion self-conscious sims. Again, what’s the ultra-weirdness of the posthuman telling us about the realism/antirealism debate?
Thanks for the interesting comments, Luke and Pete!
Luke: I'm not sure about how charity to the poor would play out in particular, but I think there's a complex relationship between what moral propositions enable the flourishing of a group and what moral propositions are true in the weak sort of moral realism I favor. The questions aren't independent, but neither is group flourishing enough to ensure that the moral propositions are true. I guess I'm saying this partly as a way of dodging your main question about how to distinguish "science virtues" from other sorts of virtues!
Pete: Right, I was assuming moral realism as a background framework rather than trying to argue for it. On your super-sociopaths, I'll go roughly secondary-quality with rigid designation about the term "moral": It's what we (rigidly, now) react to as moral violations (when we're well informed, unbiased, sympathetic, etc.) that defines what is "moral" as we use the term. A set of beings might use the term with a sufficiently different range that we would want to say "moral" in their usage is really a term with a different meaning from the same-sounding word in our usage. There's some room for flexibility and construction here, I think, but not unlimited flexibility.
Dr. Schwitzgebel: I don't think I needed to take a stance on realism vs. anti-realism in what I said. Instead, I wonder whether certain virtues being developed are a prerequisite for each of your weird situations to manifest or be 'constructable'. Suppose, for example, that humans always sucked at honesty for some reason. Would modern science even be possible? What this opens up is the possibility that the 'resources' required to arrive at problematic situations themselves are sufficient to deal with those situations. But perhaps such a possibility would only obtain if something like a mind were to ensure (or have ensured) it obtains.
Right, Luke -- I didn't mean to be saddling you with moral realism. But I do think the issues connect. What you say seems to be possible, but I wouldn't count on it as a full solution (not that I hear you suggesting that). For a cartoonishly strong version, suppose that developing conscious AI required that we become Perfect Reasoners, and Perfect Reasoners always accept Kant's categorical imperative and know how to apply it to every case. Problem solved!
I think you vastly overestimate our progress to date in ethics. I couldn't agree with the word "destabilize" because there's nothing stable yet! In the physics analogy you were using, I feel like people are ethically about at the stage of the first ape-man banging bones at the beginning of 2001, not an athletically practiced berry picker.
As always, it seems to me that most "far out" sci-fi ethics questions have real, historical analogues:
"...it would be a moral disaster if the most successful future society constructed human-grade AIs, as self-aware as we are, as anxious about their future, and as capable of joy and suffering, simply to torture, enslave, and kill them for no good reason."
Indeed it was a moral disaster, and the shame is that it wasn't really feminism that stopped us. It was wealth. We stopped abusing women (mostly) when we found other things to amuse ourselves, not because we worked out it was wrong.
Utility monsters? - Global warming. We are the utility monsters. And it turns out that it doesn't matter how much anyone else wants to give the monster, the monster will just keep on taking the cookies.
The outer reaches of utilitarianism, the repugnant conclusion? - We do it now to farm animals.
On weird minds, though, it's a pet theory of mine that there is much, much more disagreement on fundamental issues in, for example, the UK (which outside London looks like a very homogeneous society) than people realise. I think that if you take two random Brits and ask them about things like the value of life, the value of pleasure, metaphysics, the nature of the mind, what education is for, etc., etc., you would get wildly divergent answers. The societies we forge are masterful acts of papering over these chasms with institutions and conventions. So I don't think you have to go far to find your "weird" mind - your next door neighbour probably believes things that you find inconceivable.
One that exercises me sometimes is this: some people really believe that criminals should be punished; others really don't. I would think that whichever camp you fall into, the other camp must seem like a very weird, antagonizing place.
Or utility - given how diverse humanity is, it seems likely that utility monsters do in fact live among us. I have a nasty feeling that utility monsterhood is highly correlated with education and wealth, and so the exploitation of the poor by the rich is just another example of the utility monsters taking all the cookies.
I would love to see some more thoroughly worked out consequences of philosophical theories, though. Singer gestures that way, but he is constrained by his desire to make his utilitarianism palatable, so he won't completely impoverish himself. I like the effective altruism movement for taking seriously the question of how we should really do the most for other people. Despite being pro-choice, I have a sneaking affection for the really crazy anti-abortion protesters, because they seem to be thinking very clearly: if you do believe that a fertilized egg is a human being (and I don't think that's a crazy belief), then there is a holocaust going on around you, and craziness is the correct response. Some of the more extreme religious orders seem to take things seriously, too. But on the whole I think we're strikingly lacking in sustained, committed workings out of where any particular moral theory would take us.
Meta ethics assumes questions have become intent and, depending on length of service, appeals now need to be verified for attitude-found in what is in front of one (myself)...
Does speaking to humanities place on our earth in our solar system...still rejuvenate the individual and the community towards a not-obsolete-Conscientious-state of Being value...
Thanks for the comments, folks!
Unknown: I'm having a little trouble understanding yours. Maybe you could expand a bit on the main thought?
chinaphil: Lots there, but I suspect that we start from pretty fundamentally different places here, in terms of our understanding of ethical diversity. My perspective is that there's more diversity in the words that come out of people's mouths than in the reactions in their hearts, once you account for the different interests and group allegiances people have. Taking "criminals should be punished" as an example: Some people are fine with no punishment for certain classes of criminal conduct, others fine with no punishment for other classes of criminal conduct, and people differ in who they regard as legitimate authorities, but it's going to be an extreme outlier who does not feel that there are important classes of misconduct that warrant punishment by some legitimate type of authority. In fact, I think there's going to be a lot of overlap here among people not too far from the mainstream, even between hugely separated cultures like the 21st century US and ancient China, which is why the mainstream moral systems of those cultures are so readily comprehensible to us, though they also do have a tinge of foreignness. So... huge issue! Obviously, we're not going to resolve it here. But I do agree that it would be interesting to see Singer, Rawls, etc., applied to weird-mind cases to see where that takes us -- could be illuminating both because of the potential future of AI and just in terms of evaluating those views now for current purposes (as Nozick was doing with the utility monster).
I'm not sure you need to go to future technologies to find situations where intuitive ethics fail. I think Hannah Arendt did a good job of explaining why that was the case in Nazi Germany and Stalinist Russia. And most of the standard refutations of Utilitarianism do the same. In my view, Trolley Problems don't really show a flaw with Utilitarianism, rather they show that no one ethical theory can apply to all things all the time. Utilitarianism is Newtonian and Trolley Problems are Special Relativity, so to speak. I think our ethical intuitions will continue to work well for most people most of the time. And, just like our ethics have always changed for new and special situations (nuclear capabilities, prosthetics, etc.), they will continue to do so.
> chinaphil: But on the whole I think we're strikingly lacking in sustained, committed workings out of where any particular moral theory would take us.
I think you are 100% correct. You might like Richard Posner's Public Intellectuals: A Study of Decline. One of his evidence-based conclusions is that public intellectuals function more as entertainment than trustworthy, actionable information and wisdom. I might add, as a slightly-better-than-guess, that the lusting after 'revolution' pursued by so many intellectuals may also help explain this state of affairs: the attempt to thrust off tradition seems to inject much unpredictability.
Dr. Schwitzgebel, have you written on this topic? It is perhaps not enough on-topic for further discussion here, although I might say that the kinds of entities you imagine—if they do not exist, pace chinaphil—might require the kind of systematic adherence to theory which chinaphil describes. If so, then my opening remarks in this thread might make an interesting connection.
> Dr. Schwitzgebel: My perspective is that there's more diversity in the words that come out of people's mouths than in the reactions in their hearts, once you account for the different interests and group allegiances people have.
What reasoning or evidence do you have for this? (I realize that you might be going largely off of intuition, and don't mean to derogate that. I'm simply interested in investigating this matter further.) I was struck by a discussion by Steven D. Smith in The Disenchantment of Secular Discourse where he articulates John Rawls' idea of public reason: modern political liberalism, according to Rawls, must prohibit people from using "their hearts" (their "comprehensive doctrines") to influence politics and law (14–15). Lack of use in solidarity-seeking procedures would seem to lead to atrophy and/or divergence.
This is probably also too far off-topic, so I would consider some mere references supererogatory. :-)
I know approximately nothing about metaethics, so apologies if this is old news, but I'm interested in the suggestion of the last few paragraphs of what might be called "dynamic moral realism": there are moral facts that are not constituted by what moral agents take to be moral facts, yet the objective moral facts are in some way related to the attitudes of moral agents and hence can change if those attitudes change. So the emergence of new types of moral agents with previously unknown moral attitudes would change what moral facts there objectively are. (There would presumably be some path-dependency in the development of moral facts.)
With regard to the main worry, I think though that institutions are probably at least as important as intuitions in determining moral attitudes. The emergence of states, i.e. societies of strangers, c. 10 000 years ago certainly changed the inventory of moral attitudes. So institutions governing interactions with AIs would likely also change our moral attitudes. (The problem being, of course, that it is people with today's moral attitudes that must design and implement the institutions; but this was also the case with states.)
I am interested in your presentations for philosophy, as of late--concepts without origin or origins...
My 'main thought' would be--remembering the origins of one's philosophies does help in critical thinking...example: the origin of AI is Human behavior not Self...
Are ethics morals virtues values becoming obsolete, like conscience, in that they are no longer an Affect for reason, but have become an Effect in speech from behaviors...
Eric: "I suspect that we start from pretty fundamentally different places here, in terms of our understanding of ethical diversity."
Yes, and quite reasonably so. I'll have one go at persuading you.
On this punishment thing, I was talking not so much about different crimes, but about the three types of response to criminal behaviour: deterrence, protection and punishment. The reason prison is such a universal criminal sanction is because it covers all three bases. It deters because no-one wants to go to prison; it protects because the criminal can't do any more damage while he's inside; and it punishes because being inside is unpleasant. (Corporal punishment deters and punishes, but fails to protect, for example; corrupt or arbitrary systems do not effectively deter, because the potential criminal thinks there is a good chance of escaping any punishment.) Whichever combination of deterrence, protection and punishment you believe in, prison fits the bill, and that's why I think you get such widespread buy-in for the justice system as it exists today - regardless of whether an individual thinks that using marijuana, for example, should be a crime or not.
But there are plenty of people (I think!) who don't believe in punishment per se. In real terms, that's reflected in prison reform movements, campaigns for better education in prisons, opposition to the death penalty (though not all people who are involved in those political movements disavow any form of punishment). It can be related to kids - we might threaten kids with punishment, but generally we don't actually wish to carry the punishment out (and this connects to the accusation of paternalism often thrown at lefty politics). And in the sci-fi world, it's been explored in terms of reprogramming criminals' brains.
There is an obvious problem with disavowing punishment, which is that it's hard to generate deterrence without punishment. But there are plenty of responses (my personal response is that law-abiding behaviour is not for the most part brought about through deterrence, but through social cues, modelling, the desire for conformity, etc.). So I don't think it's an off-the-wall position by any means. And, as I said, I think you'd find that a surprising number of people hold it.
(NB. being off-the-wall is not necessarily negatively correlated with how many people hold an idea. Crystal energy. Anti-vaxers. Chinese medicine. Republicans. Etc.)
So I take your point about the similarity of cultures and mass agreement on certain issues. But I think that that agreement is a function of certain structural/political problems that people face, not a reflection of underlying agreement on fundamental principles. In fact, where you get almost universal use of a single, specific solution (like prison), that is precisely because it lies in the overlap part of the Venn diagram of all the different kinds of beliefs out there.
For me, school is another example: it serves the triple purpose of teaching stuff, socialising and day care. Ask anyone what school is for, and they'll give you different answers (for me, it's day care - I teach my kids, and I'm far from sure that I want them socialised into the Chinese mindset). But whichever you want, school does it. Hence it is our universal solution.
On the utility monsters who live among us:
A friend of mine (Chinese, lives in the UK) just went to the US and posted a very common comment on Facebook: the portions are large in restaurants. This seems to me to be a way in which US restaurants have changed their business practices to cater to the utility monsters in the US. I assume that not everyone in the US always finishes these big portions. The big portions therefore exist to satisfy the desires of those monsters (sorry, people with big appetites! I'm using monster in the philosophical sense!) who can extract additional utility from the last, say, 1/3 of the portion.
This again supports the idea that monsters tend to win. The market economy *wants* them to win.
Both very interesting points, chinaphil!
One further thought on prisons and schools: I agree that they have those convergent functions that make them appealing (in a way!), but supporting prison reform is pretty different from feeling that no crime should ever be punished. People do sometimes avow strong positions, but I suspect that in most (not all) cases, those positions exaggerate what they really dispositionally believe (as would be revealed by their real-world reactions across a range of possible cases) -- no punishment of any sort for any crime by any authority? The problem is often that the punishment is too much or the "crime" is not a crime or the authority is illegitimate in their view.
Bringing it back to the main thought of this post -- maybe the convergent solutions start to break apart when we talk about radically different types of minds -- e.g., where a bit of imagination can generate minds odd enough that reform, punishment, deterrence, and protection functions might come radically apart in a variety of different ways.
This runs along lines similar to my thesis in "Artificial Intelligence as Socio-Cognitive Pollution," the difference being I adapt the ABC Research Group's formulation of 'simple heuristics' as a way to understand how and why this is a problem. I think this is the direction you're moving toward here (though I think you’ll need to suspend commitment to moral realism to get as far as you need to go). The primary problem with simple heuristics is also their source of power: the degree to which they neglect otherwise relevant background information. This means their efficacy turns on *taking backgrounds for granted,* which in turn renders them vulnerable to background transformations.
This is why I think you underestimate the pickle we find ourselves in in the AI age. They don't need to be all that weird, simply ubiquitous and upgradable.
Thanks, Scott! I agree about the risks that accompany the benefits of heuristics. Not ready to let go of a modest form of moral realism yet, though!
I got some of my terminology and concepts wrong in that bit on punishment - I should have used the words retribution and rehabilitation to be a bit more precise.
ReplyDeleteBut more than that, I still have a quibble with the procedural way you're thinking about "what people want":
"what they really dispositionally believe (as would be revealed by their real-world reactions across a range of possible cases)"
Depending on how big your range of possible cases is, this is exactly what I'm trying to look beyond. Not what they choose, but what's driving what they choose. Because, as I said above, I think that what they choose is conditioned by circumstances, circumstances which create the illusion of agreement, when in fact it is a coincidence of solutions.
On a different topic, I had a nice real-world example on Monday. I was talking with a friend of mine who has been thinking about end-of-life issues a bit, and he said to me, "I don't believe in the value of life per se. I think it's only good to extend life if there is quality of life." I don't think there's any reason to discount his assertion - it was just a chat over a beer. But more than that, there are whole medical frameworks which follow this view (QALYs? can't remember the acronym), plus the whole Dignitas/Kevorkian stories. So it's not like this guy is a lone voice. And on the other side is, for example, the Catholic church - hardly a minority, extremist organisation.
So that seems to me to be a crystal clear example of disagreement over a really, really fundamental question: Is there moral value attached to human life itself? And as our robot doctors and nurses get better, this disagreement might become much more salient.
I'm not sure how you can be a 'moral realist' and have these questions? Maybe I don't get moral realism, but it seems to me like there's a circle which is all of creation - then there's a bigger circle around that which is morality? While these questions suggest a point where such morality runs out - making it a smaller circle that is inside the circle of creation. Perhaps a very small circle.
Is it still moral realism when it's not all of reality? Just a circled part of it?
"Maybe we should kill ourselves to tile the solar system with hedonium (the minimal collection of atoms capable of feeling pleasure), and we're just unable to appreciate this fact with moral theories shaped for our limited ancestral environments?"
I think this really shows how moral impetus is this floating thing that can drive extreme action. I do agree it's a good example for the subject - that although hewing to moral intuitions feels right (as you'd expect it would), with no grasp of what this thing is you are hewing to...if it happens to be in a batshit direction, then you're off flying planes into buildings or feeling how the answer is suddenly so clear that we all die for hedonium tiling.
Many people have explored how some ethical "disasters" might seem less terrible where duplication and restarts are possible (I'm thinking of the state of the fast folk in Ken MacLeod, for example, where civilisations are continually being snuffed out when a computational run is finished). Awfulness-of-death arguments and maybe eternal-return arguments seem applicable.
For the longest time I belonged to the 'Stompin Tom' school of moral realism, but now... I argue heartbroken, I guess.
Here's the question (and it has everything to do with weird AI) that poked me down the plank: If you agree that intentional cognition is heuristic cognition and you agree that moral cognition is a form of intentional cognition, then you agree that moral cognition leverages solutions to complex problems on the basis of scant data, that it consists of simple heuristics. So the question is, What could the question, 'What is goodness?' possibly mean? What is it we're 'reflecting' on? What renders simple heuristics effective is precisely the ability to solve despite scant information regarding what's going on. It simply follows that 'moral talk' will be largely impenetrable to theoretical cognition short of empirical science. The information is formatted to the solution of practical contexts--what sense is theoretical reflection alone to make of this? But of course, empirical science will never answer 'What is goodness?' because it does not exist in any high-dimensional sense.
And at the same time, computer assisted decision-making will drive more and more of our behaviour!
Not to sound like a bummer or anything.
Scott, my inclination is to go one of two ways:
(1.) Compare goodness to brownness. It's a fact that this coconut is brown, but that fact is partly constituted by the fact that people in ordinary circumstances would view it as brown. So: It's a fact that killing babies for fun is wrong, but that is partly constituted by the fact that people in ordinary circumstances would view it as wrong. Of course, "ordinary circumstances" is one major issue here!
(2.) Compare goodness to water. Our folk concept of water isn't much like H2O, but H2O is a natural kind (supposedly) that is nearby in extension to our folk concept of water. Maybe same for goodness. A simple candidate for a nearby natural kind might be "tending to cause more suffering than pleasure". Of course, what the natural kind would be is complicated. I'm not sure there needs to be one, actually, instead of a family.
If either of these can be made to work, heuristics allowed and no magic underneath!
David: Yeah, that's a fascinating issue! I have a few blog posts, and a short story in draft, about some of those issues, e.g.:
http://schwitzsplinters.blogspot.com/2013/06/my-boltzmann-continuants.html
Callan: Yeah, could be a circled part -- per my Option 1, in answer to Scott above. I hear you on the risk at the end of your remark! In a way, that's what I'm trying to get to in this post.
chinaphil: I think there can be a lot of disagreement, within certain familiar ranges, within cultures. In most major cultures across history, there will be versions of a recognizable debate between continuing life as valuable per se vs. surrendering life when the quality is too low (and also, differently but relatedly, for one's ideals). I think non-relativism is characterized too simply if that sort of debate is seen as a problem for it. Rather, there are ranges of familiar positions, with variants, and ranges of considerations (with variants) that will tend to be recruited for those positions, so that the debate will be cross-culturally comprehensible once one gets past some of the differences in local practice and non-moral belief.
ReplyDeleteThe risk for weird minds, I think, is opening up issues that are quite far removed from this usual spectrum of familiar debates, and thus to which our intuitions and folk theories might be very poorly suited.
...Scott and Eric, today some of us have observed and been developing attitudes for... "our thoughts may be our own but they are from this planet and, then, for this planet"... akin to our reflection's, nature and family inheritances...
This is like 3,000 years of philosophical heuristics finally hitting the ground running...not really so weird...
In terms of 'what is goodness' I tend to think backwards, toward tribal living - a lot of stuff makes sense in that context (in terms of survival) that tends not to in the modern era. So I tend to go pragmatic as well, albeit pragmatic in a very different set of circumstances, in figuring the grounding of goodness or such. That's why it doesn't so much head toward hedonium for me - that sort of tribal anchor. But then again tribes can be very male chauvinistic, so I choose against that model to some degree, which perhaps swings me toward hedonium somewhat, once again (though again I think that chauvinism came from raw survivalism and having a 'slave gender' tended to enable survival (or so it appeared - maybe it wasn't needed but maybe no one wanted to risk death over simply experimenting with a principle. Or they were just so hungry there was no thought beyond the raw instinct of enacting slavery))
ReplyDeleteEric: I actually think you're illustrating my point... 'Ordinary circumstances' translates into 'adaptive problem-ecologies' well enough, and applies to 'brownness' as well as goodness. Brownness also belongs to those things that should expect to be impervious to theoretical reflection! H2O is a different story, of course, but your inadvertent swapping of 'pleasure' and 'suffering' actually illustrates the radical underdetermination that is the bane all proposed moral facts of the matter. Many do believe in the goodness of suffering.
The 'moral' being, this is the very underdetermination we should expect, given a heuristic view of morality.
Are you sure you're not actually fixating on the *reality of moral cognition* (wherein the term 'goodness' plays a systematic, problem-solving function vis a vis a jungle of 'shallow-information' contexts) and the predictability of its determinations ('This is good!') across certain contexts?
So we might not be *that* far apart Scott, as long as you agree that some things really are brown (not really-really-magically-really, but "really" enough).
On underdetermination: Here my inclinations against strong descriptive psychological relativism will save the day! If normal (admittedly, a tricky word) people really did differ radically in their sense of right and wrong, then there would be radical underdetermination. But as I think the comprehensibility of ancient Chinese ethics suggests, there really is a lot of commonality under the variations, and variations and debates follow certain types of familiar patterns -- so that's the psychological anchor. It might still be mushy enough that there's substantial indeterminacy and blurry edges, though.
Callan: Yes, grounding it historically in that way, or in "typical"/"normal" reactions, risks ending up with some results that 21st century Western liberals tend not to want, like gender roles, approval of aggressive warfare, condemnation of harmless sexual practices -- so I want to put a big old tweak or caveat on that type of approach for those reasons, to the extent I'm attracted to it.
ReplyDeleteNow we get to understand Earth's entities AI and Being are subject to locality in voluntary and involuntary Attachment, from what is in front of us, for observation...The objective reality of observation...
ReplyDeleteEric,
Yeah - I think though that generally one can see various resource pressures driving stuff like warfare and the gender roles. One can see it from the luxury of currently having a certain abundance of resources/food.
So I'm not sure drawing from a tribal background is absolutely antithetical to a liberal agenda. Possibly the last survivalism is to recognise that civilised behaviour is rented, not owned - that with resources we can move on from the aggressive warfare and entrenched gender roles.
Otherwise it seems a bit denying of one's demons. We are tribal - it's what we are. By looking back to a tribal model then imagining forward with resources that can fund compassion and with it a sense of equality (more tribal qualities, really, despite being depreciated) we can start to work out a compass by which we can figure basic human psychologies that work toward what we think of as good.
By not looking back at the aggressive, sexist tribes we were and not finding ourselves there - by denying those demons - we never rebuild from that base (that thing we are) to something better (i.e. perhaps promoting the psychologies that we share and the psychologies that can actually get along with each other). I think that'd undermine a liberal agenda, in the end. Liberal people would just think they are good - even as they start the hedonium tiling program. Because they aren't grounded/anchored and then working out a compass - they just float in 'good' land.
So I think looking back and going 'ah yeah, gender slaves, I can see how that was a benefit in scarce resource times...an ugly benefit, but a benefit.' is useful to build forward from our tribal notions. Have to sympathise with the demon so as to stave it off.
Eric: I suspected as much! ;) You do realize that 'real moral patterns' moral realism is not what people generally mean when they describe themselves as 'moral realists'--at least back when I was still in the game. (My complaint with Dennett's redefinitional strategy, for instance, is that it amounts to urging us to rename the family dog 'Mildred' at Grandma Mildred's funeral). The fact that we agree on the same basic shape, yet you describe yourself as a moral realist and I describe myself as a self-loathing nihilist says something. It could be that I'm wrong, or it could be that underdetermination has chased you to this level of moral discourse as well.
I tell you what, though, I find your way much, much more attractive than my own. I just think the ugly answers, all things considered, are typically those that move our understanding forward.