Hi Eric,

We might think more about science and the scientific method, and so improve our personal application of it. But that presupposes that we want to improve our personal application in the first place.
I read stuff on brain science in order to better understand it. I'm not proposing to become a practitioner.
So there's no reason that anyone studying ethics, or even proposing theories about ethics, needs to actually be more morally good, or to apply what they learn to their own behaviour. Once they determine what is morally good, they still have a choice whether to be morally good or not.
In the Philosophy Bites episode G.A. Cohen on Inequality of Wealth (http://philosophybites.com/2007/12/ga-cohen-on-ine.html), Cohen is questioned on how he reconciles his egalitarianism with his own relatively wealthy position...
"The basic question is that if you have a salary maybe two or three times the average wage in the society, and you don't believe you ought to get all that, which I don't; then you believe you ought to sacrifice a lot of it, which I don't; I give away some but not very much; and the explanation is that I'm a less good person than I would be if I was as good as I could be. I just think that I'm not a morally exemplary person, that's all. That's the reconciliation."
When we get down to the details of what morality is, what drives it within our brains and wider nervous systems, and our cultural history, it's all a rather arbitrary process historically, one that leaves us all in an arbitrary but now somewhat consistent state of 'knowing' what is morally good, with some degree of agreement. Given that ethics professors are in this same environment, there's no inherent reason they should be any more moral than the rest of us.
However, if they do choose to be 'morally good', then they may be in a better position to think through some of the more complex questions of morality, once they have decided what 'morally good' means. That latter step remains the tricky bit, because we have this notion of morality being ineffable in some way.
But then, contrary to this, if they have an appreciation of evolution, psychology, anthropology, etc., they might figure that they should be moral on the important questions - e.g. do not kill - but on some of the lesser ones, such as lying, decide that they actually have no serious moral objection, given the arbitrary historical outcome that established lying as morally bad. Though not lying might be of general benefit to our species, and may be a good general rule, and so quite a good moral code, individually it's okay to lie if the risk of getting caught isn't too great and the personal benefit is sufficient. This might be a perfectly rational choice - one I'm sure is made in business quite often; in sales and marketing it's almost inevitable. The power to walk away from moral convention, as suggested in the video?
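To make that trade-off concrete, here is a minimal sketch of the expected-value calculation implied above; the simple payoff model and all the numbers are invented for illustration, not taken from the video or from any study:

```python
# Illustrative sketch only: a crude expected-value model of the
# "rational liar" trade-off described above. All values are invented.

def lying_pays(benefit: float, penalty: float, p_caught: float) -> bool:
    """Return True if the expected value of lying is positive.

    benefit  -- personal gain if the lie goes undetected
    penalty  -- personal cost if the lie is discovered
    p_caught -- probability of being found out (0.0 to 1.0)
    """
    expected_value = (1 - p_caught) * benefit - p_caught * penalty
    return expected_value > 0

# Low risk of getting caught and a decent payoff: the lie "pays".
print(lying_pays(benefit=100, penalty=500, p_caught=0.05))  # True
# The same lie under close scrutiny no longer pays.
print(lying_pays(benefit=100, penalty=500, p_caught=0.50))  # False
```

Note what the model deliberately leaves out: any intrinsic moral cost to lying. Whether such a term belongs in the calculation at all is exactly the point at issue here.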
Of course this is what scares the crap out of those who wishfully think that morality is a universal absolute waiting to be discovered, or revealed by God. You can't be good without God? Given the data discussed in this video, maybe that's right. But when considering theists or philosophers in Nazi Germany, another example covered in the video, the problem comes from the motivations and presuppositions that drive the morality - the same drives that allow suicide bombers to justify their acts as being morally good.
All this seems to be consistent with the view that morality is arbitrary, shaped by development and environment. Or maybe causally inert, as you put it.
Ron: I don't think it's obvious that, as you put it: "When we get down to the details of what morality is, what drives it within our brains and wider nervous systems, and our cultural history, it's all a rather arbitrary process historically". One at least *might* have thought or hoped that there is a more rational component to it, and that rational appreciation of morality promotes moral behavior (rather than promoting a walk away from it). Maybe you're right that that's not so. People seem to have all kinds of divergent opinions about the issue, so it's worth trying to explore it, I think, with a bit more empirical rigor, and without assuming that we know what the outcome of that research will be -- and my research is just one way of exploring this nest of questions.
I myself would find it disappointing if it turned out that philosophical moral reflection were either motivationally inert or merely productive of a random walk away from convention. But I think that is the simplest explanation of the data I have collected.
Hi Eric,

The problem with the rational approach alone is that it assumes we know what we should be looking for. It's like asking ourselves why we want pleasure rather than pain: because the pleasurable sensations are, well, pleasurable, and the painful ones are not. Then we ask what it is about pleasure and pain that makes pleasure preferable. And so on. Eventually we get round to some empirical stuff.
In terms of the evolution and cultural development of the human species we need to take account of the probable sequence. It's difficult to know the details, but we needed a certain level of brain development, and accompanying physiological development, to come up with spoken language - so what drove spoken language? It would seem pretty useless if conceptual thinking wasn't there already; some form of curiosity, wonder and intention must have been driving it. And written language developed long after that. So it's hard to say how rational thought developed alongside our emotions, innate feelings, empathy and so on - there must have been a co-development that is hard to retrace now. It seems plausible, if not likely given the paucity of other explanations, that morals emerged as part of our complex behaviour over time, and that our later rationality rationalised those feelings and formed them into the human construct that is morality and ethics.
So rationalising our morality occurs as we individually explain our learned and innate behaviour, and it has also occurred over time for the species as a whole.
And therein lies the real difficulty for empirical rigour: separating out the components. Reductionism is a valuable tool, but looking just at the here and now, which came first, the chicken or the egg? Simply looking at the problem of ethics as it exists now doesn't get us very far, since we end up going round in circles. It's all buried in the complex long-term interaction of evolved innate feelings with language and cultural development.
There is a way to figure this out empirically, but, ironically, it's considered unethical: to use genetic engineering to develop and examine human-ape chimeras, engineering various brain features, learning capacities, innate emotional responses, etc. Not that we could do it yet, of course, even if we approved of it.
Maybe the alternative is machine intelligence, and similar development of emotions in machines. But if we succeed in this, what's the ethical difference between the treatment of this intelligence and, say, that of a chimera?
Another chicken and egg conundrum: will we develop our understanding of ethics enough to overcome some of the more innately objectionable practices, say, if we also develop the science to alleviate the bad effects we assume such study has on its subjects? We have, after all, at least to some extent, and even among those who are firmly heterosexual, managed to control our innate distaste for homosexuality. Perhaps we can examine many of our innate moral instincts, recognise them for what they are, and dismiss the ones that are unhelpful to progress - once we decide what particular progress we want to make; more chicken and egg stuff. But we've got this far; we've overcome nature red in tooth and claw, so let's press on.
Great issues, Ron -- what a tangle!