Friday, March 15, 2019

Should You Defer to Ethical Experts?

Ernest Sosa gave a lovely and fascinating talk yesterday at UC Riverside on the importance of "firsthand intuitive insight" in philosophy. It has me thinking about the extent to which we ought, or ought not, defer to ethical experts when we are otherwise inclined to disagree with their conclusions.

To illustrate the idea of firsthand intuitive insight, Sosa gives two examples. One concerns mathematics. Consider a student who learns that the Pythagorean theorem is true without learning its proof. This student knows that a^2 + b^2 = c^2 but doesn't have any insight into why it's true. Contrast this student with one who masters the proof and consequently does understand why it's true. The second student, but not the first, has firsthand intuitive insight. Sosa's other example is in ethics. One child bullies another. Her mother, seeing the act and seeing the grief in the face of the other child, tells the bullying child that she should apologize. The child might defer to her mother's ethical judgment, sincerely concluding she really should apologize, but without understanding why what she has done is bad enough to require apology. Alternatively, she might come to genuinely notice the other child's grief and more fully understand how her bullying was inappropriate, and thus gain firsthand intuitive insight into the need for apology. (I worry that firsthand intuitive insight is a bit of a slippery concept, but I don't think I can do more with it here.)
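(To make the mathematical half of the contrast concrete, here is a sketch of one classic proof -- not a proof Sosa walked through, just an illustration of the kind of insight at stake. Take a square of side a + b and place four copies of a right triangle with legs a and b inside it. Arranged one way, the uncovered region is a single tilted square on the hypotenuse, with area c^2; rearranged, it is two squares with areas a^2 and b^2. The uncovered area is the same either way, so a^2 + b^2 = c^2. A student who works through this doesn't merely know that the formula holds; she sees why it must.)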

Sosa argues that a central aim of much of philosophy is firsthand intuitive insight of this sort. In the sciences and in history, it's often enough just to know that some fact is true (that helium has two protons, that the Qin Dynasty fell in 206 BCE). On such matters, we happily defer to experts. In philosophy, we're less likely to accept a truth without having our own personal, firsthand intuitive insight. Expert metaphysicians might almost universally agree that barstool-shaped-aggregates but not barstools themselves supervene on collections of particles arranged barstoolwise. Expert ethicists might almost universally agree that a straightforward pleasure-maximizing utilitarian ethics would require radical revision of ordinary moral judgments. But we're not inclined to just take them at their word. We want to understand for ourselves how it is so.

This seems right. And yet, there's a bit of a puzzle in it, if we think that it's important that our ethical opinions be correct. (Yes, I'm assuming that ethics is a domain in which there are correct and incorrect opinions.) What should you do when the majority of philosophical experts think P, but your own (apparent) firsthand intuitive insight suggests not-P? If you care about correctness above all, maybe you should defer to the experts, despite your lack of understanding. But Sosa appears to think, as I suspect many of us do, that often the right course instead is to stand steadfast, continuing to judge according to your own best independent reasoning.

Borrowing an example from Sarah McGrath's work on moral deference, consider the case of vegetarianism. Based on some of my work, I think that probably the majority of professional ethicists in the United States believe that it is normally morally wrong to eat the meat of factory-farmed animals. This might also be true in German-speaking countries. Impressionistically, most of the philosophers I know who have given the issue serious and detailed consideration come to endorse vegetarianism, including two of the most prominent ethicists currently alive, Peter Singer and Christine Korsgaard. Now suppose that you haven't given the matter nearly as much thought as they have, but you have given it some thought. You're inclined still to think that eating meat is okay, and you can maybe mount one or two plausible-seeming defenses of your view. Should you defer to their ethical expertise?

Sosa compares philosophical reasoning with archery. You not only want to hit the target (the truth), you want to do so by the exercise of your own skill (your own intuitive insight), rather than by having an expert guide your hand (deference to experts). I agree that ideally this is so. It's nice when you have both truth and intuitive insight! But when the aim of hitting the target conflicts with the aim of doing so by your own intuitive insight, your preference should depend on the stakes. If it's an archery contest, you don't want the coach's help: The most important thing is the test of your own skill. But if you're a subsistence hunter who needs dinner, then you probably ought to take any help you can get, if the target looks like it's about to escape. And isn't ethics (outside the classroom, at least) more like subsistence hunting than like an archery contest? What should matter most is whether you actually come to the right moral conclusion about eating meat (or whatever), not whether you get there by your own insight. Excessive emphasis on the individual's need for intuitive insight, at the cost of truth or correctness, risks turning ethics into a kind of sport.

So maybe, then, you should defer to the majority of ethical experts, and conclude that it is normally wrong to eat factory-farmed meat, even if that conclusion doesn't accord with your own best attempts at insight?

While I'm tempted to say this, I simultaneously feel pulled in Sosa's direction -- and perhaps I should defer to his expertise as one of the world's leading epistemologists! There's something I like about non-deference in philosophy, and our prizing of people's standing fast in their own best judgments, even in the teeth of disagreement by better-informed experts. So here are four brief defenses of non-deference. I fear none of them is quite sufficient. But maybe in combination they will take us somewhere?

(1.) The "experts" might not be experts. This is McGrath's defense of non-deference in ethics. Despite their seeming expertise, great ethicists have often been horribly wrong in the past. See Aristotle on slavery, Kant on bastards, masturbation, homosexuality, wives, and servants, the consensus of philosophers in favor of World War I, and ethicists' seeming inability to reason better even about trolley problems than non-ethicists.

(2.) Firsthand intuitive insight might be highly intrinsically valuable. I'm a big believer in the intrinsic value of knowledge (including self-knowledge). One of the most amazing and important things about life on Earth is that sometimes we bags of mostly water can stop and reflect on some of the biggest, most puzzling questions that there are. An important component of the intrinsic value of philosophical reflection is the real understanding that comes with firsthand intuitive insight, or seeming insight, or partial insight -- our ability to reach our own philosophical judgments instead of simply deferring to experts. This might be valuable enough to merit some substantial loss of ethical correctness to preserve it.

(3.) The philosophical community might profit from diversity of moral opinion, even if individuals with unusual views are likely to be incorrect. The philosophical community as a whole might, over time, be more likely to converge upon correct ethical views if it fosters diversity of opinion. If we all defer to whoever seems to be most expert, we might reach consensus too fast on a wrong, or at least a narrow and partial, ethical view. Compare Kuhn and Longino on the value of diversity in scientific opinion: Intellectual communities need stubborn defenders of unlikely views, even if those stubborn folks are probably wrong -- since sometimes they have an important piece of the truth that others are missing.

(4.) Proper moral motivation might require acting from one's own insight rather than from moral deference. The bully who apologizes out of deference gives, I think, a less perfect apology than the bully who has firsthand intuitive insight into the need to apologize. Maybe in some cases, being motivated by one's own intuitive insight is so morally important that it's better to do the ethically wrong thing on the basis of your imperfect but non-deferential insight than to do the ethically right thing deferentially.

As I said, none of these defenses of non-deference seems quite enough on its own. Even if the experts might be wrong (Point 1), from a bird's-eye perspective it seems like our best guess should be that they're not. And the considerations in Points 2-4 seem plausibly to be only secondary from the perspective of the person who wants really to have ethically correct views by which to guide her behavior.


18 comments:

Andrew Sepielli said...

Eric (if I may) -- At the end, you write: "Even if the experts might be wrong (Point 1), from a bird's-eye perspective it seems like our best guess should be that they're not." Perhaps you could say more about what you mean by a "bird's eye perspective". If you have in mind a perspective from which we might sort people into "peers", "experts", and so on without relying on first-order ethical claims -- say, by appealing to domain-general claims about the nature of truth, or knowledge, or cognitive significance, or what-have-you -- then a quietist like me will want to say that there is no such perspective. I'd want to say of a case in which, e.g., I disagree with an excellent moral philosopher either that: (a) she is my epistemic inferior on ethical matters, even if she's at least as smart as I am in the ordinary sense (i.e. give van Inwagen's response re: disagreements with Lewis); or else: (b) there are not enough first-order ethical truths independent of our dispute to enable me to sort her into any of the "peer", "superior", or "inferior" piles. In the latter case, the more plausible versions of conciliationism will not require me to defer.

Andrew Sepielli said...

Sorry, that was a little too quick. Obviously, I'd want to make room for the possibility that my holding some view is due to some logical error or something that my cleverer opponent managed to avoid. What I meant was that I'd want to say either (a) or (b) sometimes -- and so not always to classify some excellent moral philosopher as my peer or superior on the grounds that lead me to label them "excellent".

Anonymous said...

Hi Eric
It may be that a child, I.E. ANYONE under (about) 24 years of age, CANNOT reliably make fully ethical judgements because they lack ‘executive control’. After 24, when their frontal lobes have finished forming and connecting, THEN they CAN develop proper Intuitive Insight. Until then you as parent must simply demand that “in your house” they must fake it (until — you tacitly hope — someday they are able to make ethically supportable choices on their own). They may only be able to fully reach the desired insight when they see the apology as doing one’s best to heal the damage done to the wronged child — i.e. real empathy and not the self-centered version of it.
Some children are perhaps preternaturally wise and understand at younger ages - perhaps this cognitive development happens earlier in some (Hopefully many?) young people. Until each child (hopefully, eventually) reaches this developed capacity they are simply incapable of the insight towards the most ethical behavior set.
Just A Thot,
stu

Drake Thomas said...

Given the diversity of opinion among moral philosophers on a huge range of topics, it doesn't seem clear to me that there is much of a consensus to follow; are there specific topics where moral philosophers have pretty uniform agreement that laypeople don't share?

Though I'm not sure it follows from the above argument that one ought to rely on one's own moral judgments: if historians hotly debated whether the Qin dynasty fell in 206 BCE or 207 BCE, I wouldn't take that as a sign I could just make up my favorite year and believe that. So maybe the only conclusion is that one ought to have lots of moral uncertainty about such topics.

I think my tentative conclusion here might be something like "In theory, one ought to defer to moral experts were there such a group, but anyone well-equipped enough to be pondering questions of moral uncertainty like this is probably around as good at competent moral reasoning and coming to ethical conclusions as the average moral philosopher, and so shouldn't necessarily discount their own reasoning very much." Deferring to expert moral consensus might actually be a good idea for the median person, but the median person doesn't think about these issues, so it's a bit of a moot point.

All that said, though, I think there are cases where one can do this at least a little. There are specific people who I trust to be genuinely kind and truth-seeking and acting in the ways they believe to be the best way of doing good - Kelsey Piper and Peter Singer come to mind - where I would be hesitant about endorsing a moral claim they had denounced, and would at least examine any counterarguments they had offered in opposition to my reasoning. If Kelsey Piper wrote "I strongly believe that the best course of action morally is X on topic Y", I would, ceteris paribus, update to believing that one should probably take action X, just as I would if a trustworthy expert economist had endorsed an opinion on monetary policy.

Eric Schwitzgebel said...

Thanks for the interesting comments, folks!

Stu: I’m inclined to agree that wisdom requires some ageing — some combo of life experiences and brain maturity.

Andrew: I’ve never really understood hardline steadfast views in the face of seeming-peers — though pure deference or conciliationism seems odd too. There’s something odd about the self-confidence involved. What warrants it?

Drake, I think I agree with most of that — though there are certainly some consensus views, e.g., anti-racism, and maybe a pretty solid majority that it’s morally better to avoid factory-farmed meat even if it’s not morally required that one do so. And I agree that there are people it would worry me to disagree with (though maybe I wouldn’t choose the same two).

Andrew Sepielli said...

Eric -- I'm not a steadfast-er. I'm a conciliationist here as elsewhere. It's just that I don't feel the pressure to regard, e.g., Frances Kamm (just to pick someone who's wrong about ethics but clearly smarter than I am) as my superior. My suspicion is that those who do feel this pressure tend to do so because they hold views about ethics from which it follows that ethics is not autonomous. As a quietist, I think such views are mistaken. As I wrote earlier, I am certainly open to the possibility that there is nothing independent of our dispute that I can use to rank Kamm and me in terms of our abilities to grasp the ethical truth. But then again, the more plausible versions of conciliationism don't tell me to "split the difference" in such a case; they only say that when someone's an epistemic peer. (I've found some of Katia Vavova's and David Christensen's recent work useful on this point.)

Since we're talking specifically about super-famous moral philosophers, it's worth noting an asymmetry between me, say, and them. I know most of their arguments for their views, b/c I work in the area, and it's de rigueur that one who works in their area is familiar with their arguments. However, most of them don't know my arguments for my views -- because, well, need I say it? So these are akin to cases in which all pertinent evidence is not shared; they're cases in which all pertinent arguments are not shared.

John said...

I used to be obsessed with the problem of moral deference (for my thesis), but have become less and less impressed by it as a standalone problem. I used to do my best to avoid making substantive metaethical or normative commitments to avoid begging the question, but looking back I've come to view the problem as a different flavor of classic ethical stalemates.

Like you said, we care a lot about people acting ethically on their own, until we ratchet up the stakes of the consequences. Similarly, we care a lot about people doing things for the right reasons, but we care a lot less when the utilitarian gets to set up the scenarios. And while many Kantians still continue to confuse me, their respect for autonomous decision-making seems to capture a lot of the uneasiness about wholesale moral deferrers. The deference debate seems to be largely about process, and a lot of normative ethics seems to clash between consequences and the right process in the same way.

I'm curious how philosophical deference fits in with all of this. I think we might dislike deference in our classrooms and essays for different reasons.

In the examples of deferring about chemical makeup or historical facts, we are fine with your blind deference, but only to a point. When your area of expertise depends on it, we would consider it an intellectual vice. Maybe it's okay to defer on the Qin dynasty dates when you're just a historian, but we're less tolerant when your expertise is Chinese history, and even less so when your expertise is of that era specifically. So why are we less tolerant of philosophical deference? My guess is that so much of what is on the line in these debates concerns fundamental aspects of human experience: how we ought to act and what our relationship with reality is. I don't know how to continue this line of thought to say concretely what people ought to know or care about (surely the virtues will be context-specific), but that seems to me probably a different problem than what is at the core of intuitions about moral deference.


Eric Schwitzgebel said...

Thanks for the continuing comments!

Andrew: I apologize for misunderstanding the thrust of your earlier remark. To start with your final point, in real cases of disagreement rarely is all relevant information shared, and rarely is anyone exactly your peer in all relevant respects: Usually people are more skilled and more informed in some ways and less skilled and less informed in others. The disadvantage of the check-splitting case is that it somewhat obscures the complexity. It is true that I think that ethics is not autonomous in the sense that I think you mean that claim (i.e., not independent of empirical facts on which empirical expertise is possible), so maybe that's mixed up in the story. But setting that aside, how about this proposal: If I start trying to argue trolley problems with Kamm, there's a high likelihood that she will be aware of some nuances between some near-seeming cases that it would take me some work to finally understand. If I say, "Anyone who accepts killing the one in Case A ought to accept killing the one in Case B" and she disagrees, saying that there's a defensible principle for distinguishing the two, I'll probably defer. But it's less clear how much conciliationism I should have on more practical moral issues.

John: Interesting thoughts. On your last point, I agree that deference for the historian becomes more of a vice as it approaches one's area of specialization -- and of course that seems to lead to the thought that when it comes to philosophy we are in some sense all already specialists in our own values. Or something like that!

Unknown said...

Hey Eric, interesting post. Really made me think of this Nicholas D. Smith article, "Peer Disagreement, Testimony, and Personal Justification."

Winter Wallaby said...

"Even if the experts might be wrong (Point 1), from a bird's-eye perspective it seems like our best guess should be that they're not."

Why should this be our best guess? It seems to me that before I defer to a self-proclaimed expert, I want some evidence that their self-proclaimed expertise is genuine. So what is that evidence for self-proclaimed ethical experts? (While adding the adjective "self-proclaimed" might seem insulting, I think it's important to put that qualification in there, since simply calling them experts assumes the conclusion.)

In many fields, I assume self-proclaimed experts are actually experts, and happily defer to them, but it's not just because I think they're smarter than me. It's because I know that their expertise is judged as part of a system that I have evidence regularly produces correct results. If a mathematician states the Pythagorean theorem without proof, I know that that mathematician is working within a framework that has produced many correct results in the past - not 100% of the time, but enough that we regularly use its results to successfully build bridges or launch spacecraft. The historian who tells me that the Qin Dynasty fell in 206 BCE is working within an academic framework that I know will punish her if she regularly gets basic historical facts wrong, and I have reasonable confidence that the historical community gets many historical facts correct because its methodology seems basically sound, even if I don't know a lot about that methodology, or anything at all about the details of how that methodology was applied to arrive at the answer of 206 BCE. But in both cases, my deference isn't just based on them telling me that they're experts, or them seeming smarter than me, or even that they've thought about math or history a lot.

In contrast, someone who tells me that they're an expert in astrology, both in its practice and in the scientific research validating astrology, is not going to receive my deference on their judgment about whether tomorrow is a good day for risk-taking.

So what are the reasons to think that self-proclaimed experts in ethics are more likely to be correct regarding ethics? If the reasons are simply that they're smart, that they've thought about ethics a lot, and that they're good at reasoning, then these wouldn't be enough to trust their expertise in other fields.

(Of course, one practical test would be to see if their self-proclaimed expertise translated into more correct actions on a ground truth set, where we could agree on what the ethical action was. e.g. whether they fulfilled their duty to respond to student requests for information. If they performed no better than non-self-proclaimed experts, that would be another reason to be skeptical of the field overall. Perhaps someone should perform this experiment!)

Callan said...

To me it seems that the right moral conclusion generally isn't universal (so essentially it isn't right). Someone who doesn't know why they came to that conclusion then crashes upon facing a situation where the supposedly right moral conclusion isn't right. They can't make up a new approach, and the result is dysfunctional behavior. So just having the right conclusion...doesn't seem that ethical a thing to advocate for.

chinaphil said...

I was a bit confused about whether your question is directed toward philosophers or non-philosophers. If philosophers are society's experts on ethics, then it would be reasonable to say: let there be disagreement within the expert group, but society at large will follow the consensus that emerges. That's how lots of things seem to work - from technical subjects like accounting, to politics, to ethics in the past: (some) debate within the church was fine, but outsiders lacked standing to debate against the church.

So one reasonable answer would be: Philosophers/ethicists should insist on firsthand intuition, and the rest of us should be content with secondhand intuition.

I think one question that relates closely to this is the social value of shared or consistent ethics. If it is valuable to have everyone share at least some part of their ethics, then that gives us an additional reason to suggest that it is often a good idea to defer to authority.

Eric Schwitzgebel said...

Thanks for the continuing comments, folks!

Winter: I wasn't thinking that "self-proclamation" was the basis, but rather institutional status (e.g., appointment as a full professor of ethics) and factual knowledge of the literature on these topics as published in recognized academic presses and journals. That could still all be empty smoke, but the case is at least a little harder to make!

Callan: If you're a particularist, for example, you might think that you do need that firsthand intuition to navigate tricky situations, since second-hand rules won't work -- is that what you're thinking?

Chinaphil: I meant my question to be directed toward those of us who are learning the philosophical literature in an area, whether as students or as philosophers who aren't themselves experts in the subfield. What you say makes sense -- and yet there's something attractive to me in the idea that people should not defer but rather act on their own conscience, except maybe in cases that rely on very specific factual knowledge (like details of organ-transplant policy maybe).

Callan said...

Eric: Leaving people with no idea what to do once their rote 'the right way to do things' is clearly shown as not right (because it doesn't apply in certain situations), that...doesn't seem right.

That's based off the idea that the right moral conclusion never really seems to be universal - people's plane crashes on a mountain and the idea of eating people who died in an accident seems morally wrong. And yet in that (fairly rare) circumstance, is it wrong for them to eat people who died in the accident so as to survive toward rescue? If not, then it'd seem cruel/not ethical to just tell them it's wrong to eat accident victims and that's that, without any grounding for that statement/condemnation.

I'm not familiar with the idea of a particularist - it sounds like saying that people who say 'edge cases make certain moral condemnations inapplicable' are being particular? I'd google the term, but I thought I'd give an old-fashioned conversation on it a go :)

Winter Wallaby said...

Eric: "Self-proclamation" in my comment referred to the self-proclamation of the institutions of the academic community, not to self-proclamation of individuals.

When you say "the case is at least a little harder to make" you seem to be saying that it makes it harder to make the case that the academic community of ethics professors are not experts whom we should trust. But what I'm saying is that that seems to put the burden in the wrong place. I think the burden is on ethics professors to make the case that they are experts who can be trusted to give correct ethics decisions. Unlike mathematicians or historians, I don't see what they've done to meet that burden.

Winter Wallaby said...

"I meant my question to be directed toward those of us who are learning the philosophical literature in an area."

Oh, I missed this. Then, yeah, my comments aren't really applicable.

The focus of my comment is really that it's not clear why to defer to professors of philosophy rather than, say, Catholic theologians. But if you've already implicitly accepted that the field as a whole has expertise, then my comments aren't relevant. Whoops!

Eric Schwitzgebel said...

Thanks for the continuing comments, folks!

Callan: Particularism, as I understand it, is a negative claim about what doesn't work: a fixed set of rules. It doesn't commit on what should replace rules -- but maybe something like well-trained intuitions. So a particularist might resist saying that we can apply some finite set of rules to the plane crash case, but that a wise person could reach a wise decision nonetheless.

Wallaby: I agree that it's reasonable to doubt that professional ethicists have more general-purpose wisdom about what's good and bad, right and wrong, in most ordinary life situations or in situations outside their expertise -- perhaps especially compared to clergy and grandparents. However, for other sorts of cases it does seem reasonable to default to thinking they might have some better knowledge that it is worth being *partly* deferential to, e.g., a hospital ethicist on an applied medical ethics case on a topic on which she has training and experience. I chose vegetarianism as a kind of in-between case that I thought was interestingly unclear between the two.

Callan said...

Hi Eric,

I'm not sure I was thinking of any wise or correct decision, just not advocating for people being held down by an inapplicable dogma and instead going 'I don't know what to do here, the old rules don't seem to apply'. The person may be doomed and there is no real solution, but at least they won't be weighed down by a failed dogma as well.

On the other hand, I have to wonder about the psychology of the matter - perhaps from a certain stable economic position it would seem a matter of going from one stability to another stability/from rules that don't work to a wise decision. My own psychology on the matter isn't going from stability to stability, but instead going from a non-stability/failed system to a non-stability, though at least without being weighed down by that failed system. Jump off the burning boat and into the wild sea. There's not necessarily any wise decision to be made once in that sea, if you get the perspective.