Friday, February 24, 2017

Call for Papers: Introspection Sucks!

Centre for Philosophical Psychology and European Network for Sensory Research

Introspection sucks!

Conference with Eric Schwitzgebel, May 30, 2017, in Antwerp

This is a call for papers on any aspect of introspection (not just papers critical of introspection, but also papers defending it).

There are no parallel sections. Only blinded submissions are accepted.

Length: 3000 words. Single spaced!

Deadline: March 30, 2017. Papers should be sent to nanay@berkeley.edu

[from Brains Blog]

Thursday, February 23, 2017

Belief Is Not a Norm of Assertion (but Knowledge Might Be)

Many philosophers have argued that you should only assert what you know to be the case (e.g. Williamson 1996). If you don't know that P is true, you shouldn't go around saying that P is true. Furthermore, to assert what you don't know isn't just bad manners; it violates a constitutive norm, fundamental to what assertion is. To accept this view is to accept what's sometimes called the Knowledge Norm of Assertion.

Most philosophers also accept the view, standard in epistemology, that you cannot know something that you don't believe. Knowing that P implies believing that P. This is sometimes called the Entailment Thesis. From the Knowledge Norm of Assertion and the Entailment Thesis, the Belief Norm of Assertion follows: You shouldn't go around asserting what you don't believe. Asserting what you don't believe violates one of the fundamental rules of the practice of assertion.

However, I reject the Entailment Thesis. This leaves me room to accept the Knowledge Norm of Assertion while rejecting the Belief Norm of Assertion.

Here's a plausible case, I think.

Juliet the implicit racist. Many White people in academia profess that all races are of equal intelligence. Juliet is one such person, a White philosophy professor. She has studied the matter more than most: She has critically examined the literature on racial differences in intelligence, and she finds the case for racial equality compelling. She is prepared to argue coherently, sincerely, and vehemently for equality of intelligence and has argued the point repeatedly in the past. When she considers the matter she feels entirely unambivalent. And yet Juliet is systematically racist in most of her spontaneous reactions, her unguarded behavior, and her judgments about particular cases. When she gazes out on class the first day of each term, she can’t help but think that some students look brighter than others – and to her, the Black students never look bright. When a Black student makes an insightful comment or submits an excellent essay, she feels more surprise than she would were a White or Asian student to do so, even though her Black students make insightful comments and submit excellent essays at the same rate as the others. This bias affects her grading and the way she guides class discussion. She is similarly biased against Black non-students. When Juliet is on the hiring committee for a new office manager, it won’t seem to her that the Black applicants are the most intellectually capable, even if they are; or if she does become convinced of the intelligence of a Black applicant, it will have taken more evidence than if the applicant had been White (adapted from Schwitzgebel 2010, p. 532).

Does Juliet believe that all the races are equally intelligent? On my walk-the-walk view of belief, Juliet is at best an in-between case -- not quite accurately describable as believing it, not quite accurately describable as failing to believe it. (Compare: someone who is extraverted in most ways but introverted in a few ways might be not quite accurately describable as an extravert nor quite accurately describable as failing to be an extravert.) Juliet judges the races to be equally intelligent, but that type of intellectual assent or affirmation is only one piece of what it is to believe, and not the most important piece. More important is how you actually live your life, what you spontaneously assume, how you think and reason on the whole, including in your less reflective, unguarded moments. Imagine two Black students talking about Juliet behind her back: "For all her fine talk, she doesn't really believe that Black people are just as intelligent."

But I do think that Juliet can and should assert that all the races are intellectually equal. She has ample justification for believing it, and indeed I'd say she knows it to be the case. If Timothy utters some racist nonsense, Juliet violates no important norm of assertion if she corrects Timothy by saying, "No, the races are intellectually equal. Here's the evidence...."

Suppose Tim responds by saying something like, "Hey, I know you don't really or fully believe that. I've seen how you react to your Black students and others." Juliet can rightly answer: "Those details of my particular psychology are irrelevant to the question. It is still the case that all the races are intellectually equal." Juliet has failed to shape herself into someone who generally lives and thinks and reasons, on the whole, as someone who believes it, but this shouldn't compel her to silence or compel her always to add a self-undermining confessional qualification to such statements ("P, but admittedly I don't live that way myself"). If she wants, she can just baldly assert it without violating any norm constitutive of good assertion practice. Her assertion has not gone wrong in the way that an assertion goes wrong if it is false or unjustified or intentionally misleading.

Jennifer Lackey (2007) presents some related cases. One is her well-known creationist teacher case: a fourth-grade teacher who knows the good scientific evidence for human evolution and teaches it to her students, despite accepting the truth of creationism personally as a matter of religious faith. Lackey uses this case to argue against the Knowledge Norm of Assertion, as well as (in passing) against a Belief Norm of Assertion, in favor of a Reasonable-To-Believe Norm of Assertion.

I like the creationist teacher case, but it's importantly different from the case of Juliet. Juliet feels unambivalently committed to the truth of what she asserts; she feels no doubt; she confidently judges it to be so. Lackey's creationist teacher is not naturally read as unambivalently committed to the evolutionary theory she asserts. (Similarly for Lackey's other related examples.)

Also, in presenting the case, Lackey appears to commit to the Entailment Thesis (p. 598: "he does not believe, and hence does not know"). Although it is a minority opinion in the field, I think it's not outrageous to suggest that both Juliet and the creationist teacher do know the truth of what they assert (cf. the geocentrist in Murray, Sytsma & Livengood 2013). If the creationist teacher knows but does not believe, then her case is not a counterexample to the Knowledge Norm of Assertion.

A related set of cases -- not quite the same, I think, and introducing further complications -- are ethicists who espouse ethical views without being much motivated to try to govern their own behavior accordingly.

[image from Helen De Cruz]

Wednesday, February 15, 2017

Human Nature Is Good: A Sketch of the Argument

The ancient Chinese philosopher Mengzi and the early modern French philosopher Rousseau both argued that human nature is good. The ancient Chinese philosopher Xunzi and the early modern English philosopher Hobbes argued that human nature is not good.

I interpret this as an empirical disagreement about human moral psychology. We can ask, who is closer to right?

1. Clarifying the Question.

First we need to clarify the question. What do Mengzi and Rousseau mean by the slogan that is normally translated into English as "human nature is good"? There are, I think, two main claims.

One is a claim about ordinary moral reactions: Normal people, if they haven't been too corrupted by a bad environment, will tend to be revolted by clear cases of morally bad behavior and pleased by clear cases of morally good behavior.

The other is a claim about moral development: If people reflect carefully on those reactions, their moral understanding will mature, and they will find themselves increasingly wanting to do what's morally right.

The contrasting view -- the Xunzi/Hobbes view -- is that morality is an artificial human construction. Unless the right moral system has specifically been inculcated in them, ordinary people will not normally find themselves revolted by evil and pleased by the good. At least to start, people need to be told what is right and wrong by others who are wiser than them. There is no innate moral compass to get you started in the right developmental direction.

2. Mixed Evidence?

One might think the truth is somewhere in the middle.

On the side of good: Anyone who suddenly sees a child crawling toward a well, about to fall in, would have an impulse to save the child, suggesting that everyone has some basic, non-selfish concern for the welfare of others, even without specific training (Mengzi 2A6). This concern appears to be present early in development. For example, even very young children show spontaneous compassion toward those who are hurt. Also, people of different origins and upbringings admire moral heroes who make sacrifices for the greater good, even when they aren't themselves directly benefited. Non-human primates show sympathy for each other and seem to understand the basics of reciprocity, exchange, and rule-following, suggesting that such norms aren't entirely a human invention. (On non-human primates, see especially Frans de Waal's 1996 book Good Natured.)

On the other hand: Toddlers (and adults!) can of course be selfish and greedy; they don't like to share or to wait their turn. In the southern U.S. about a century ago, crowds of ordinary White people frequently lynched Blacks for minor or invented offenses, proudly taking pictures and inviting their children along, without apparently seeing anything wrong in it. (See especially James Allen et al., Without Sanctuary.) The great "heroes" of the past include not only those who sacrificed for the greater good but also people famous mainly for conquest and genocide. We still barely seem to notice the horribleness of naming our boys "Alexander" and "Joshua".

3. Human Nature Is Nonetheless Good.

Some cases can be handled by emphasizing that only "normal" people who haven't been too corrupted by a bad environment will be attracted to good and revolted by evil. But a better general defense of the goodness of human nature involves adopting an idea that runs through both the Confucian and Buddhist traditions and, in the West, from Socrates through the Enlightenment to Habermas and Scanlon. It's this: If you stop and think, in an epistemically responsible way (perhaps especially in dialogue with others), you will tend to find yourself drawn toward what's morally good and repelled by what's evil.

Example A. Extreme ingroup bias. One of the primary sources of evil that doesn't feel like evil -- and can in fact feel like doing something morally good -- is ingroup/outgroup thinking. Early 20th century Southern Whites saw Blacks as an outgroup, a "them" that needed to be controlled; the Nazis similarly viewed the Jews as alien; in celebrating wars of conquest, the suffering of the conquered group is either disregarded or treated as much less important than the benefits to the conquering group. Ingroup/outgroup thinking of this sort typically requires either ignoring others' suffering or accepting dubious theories that can't withstand objective scrutiny. (This is one function of propaganda.) The type of extreme ingroup bias that promotes evil behavior tends to be undermined by epistemically responsible reflection.

Example B. Selfishness and jerkitude. Similarly, selfish or jerkish behavior tends to be supported by rationalizations and excuses that prove flimsy when carefully examined. ("It's fine for me to cheat on the test because of X", "Our interns ought to expect to be hassled and harassed; it's just part of their job", etc.) If you were simply comfortable being selfish, you wouldn't need to concoct those poor justifications. If and when critical reflection finally reveals the flimsiness of those justifications, that normally creates some psychological pressure for you to change.

It's crucial not to overstate this point. We can be unshakable in our biases and rationalizations despite overwhelming evidence. And even when we do come to realize that something we eagerly want for ourselves or our group is immoral, we can still choose that thing. Evil might still be commonplace: Just as most plants don't survive to maturity, many people fall far short of their moral potential, often due to hard circumstances or negative outside influences.

Still, if we think well enough, we all can see the basic outlines of moral right and wrong; and something in us doesn't like to choose the wrong. This is true of pretty much everyone who isn't seriously socially deprived, regardless of the specifics of their cultural training. Furthermore, this inclination toward what's good -- I hope and believe -- is powerful enough to place at the center of moral education.

That is the empirical content of the claim that human nature is good.

I do have some qualms and hesitations, and I think it only works to a certain extent and within certain limits.

Perhaps oddly, the strikingly poor quality of the reasoning in recent U.S. politics has actually firmed up my opinion that careful reflection can indeed fairly easily reveal the lies behind evil.

-----------------------------------

Related: Human Nature and Moral Education in Mencius, Xunzi, Hobbes, and Rousseau (History of Philosophy Quarterly 2007).

[image source]

Monday, February 06, 2017

Should Ethics Professors Be Held to Higher Ethical Standards in Their Personal Behavior?

I've been waffling about this for years (e.g., here and here). Today, I'll try out a multi-dimensional answer.

1. My first thought is that it would be unfair for us to hold ethics professors to higher standards of personal behavior because of their career choice. Ethics professors are hired based on their academic skills as philosophers -- their ability to interpret texts, evaluate arguments, and write and teach effectively about a topic of philosophical discourse. If we demand that they also behave according to higher ethical standards than other professors, we put an additional burden on them that they don't deserve and isn't written into their work contracts. They signed up to be scholars, not moral exemplars. (In this way, ethics professors differ from clergy, whose role is partly that of exemplar.)

2. Nonetheless, it might be reasonable for ethicists to hold themselves to higher moral standards. Consider my "cheeseburger ethicist" thought experiment. An ethicist reads Peter Singer on vegetarianism, considers the available counterarguments, and ultimately concludes that Singer is correct. Eating meat is seriously morally wrong, and we ought to stop. She publishes a couple of articles, and she teaches the arguments to her classes. But she just keeps eating meat at the same rate she always did, with no effort to change her ways. If challenged by a surprised student, maybe she defends herself with something like Thought 1 above: "I'm just paid to evaluate the arguments. Don't demand that I also live that way. I'm off duty!"

[Socrates: always on duty.]

There's something strange and disappointing, I think, about a response that depends on treating the study of ethics as just another job. Our cheeseburger ethicist knows a large range of literature, and she has given the matter extensive thought. If she insulates her philosophical thinking entirely from her personal behavior, she seems to be casting away a major resource for moral self-improvement. All of us, even if we don't aim to be saints, ought to take some advantage of the resources we have that can help us to be better people -- whether those resources are community, church, meditation, thoughtful reading, or the advice of friends we know to be wise. As I've imagined her, the cheeseburger ethicist shows a disconcerting lack of interest in becoming a better person.

We can run similar examples with political activism, charitable giving, environmentalism, sexual ethics, honesty, kindness, racism and sexism, etc. -- any issue with practical implications for one's life, to which an ethicist might give serious thought, leading to what she takes to be a discovery that she would be much morally better if she started doing X. Almost all ethicists have thought seriously about some issues with practical implications for their lives.

Combining 1 and 2. Despite the considerations of fairness raised in point 1, I think we can reasonably expect ethicists to shape and improve their personal behavior in a way that is informed by their professional ethical reasoning. This is not because ethicists have a special burden as exemplars but rather because it's reasonable to expect everyone to use the tools at their disposal toward moral self-improvement, at least to some moderate degree, or at least toward the avoidance of serious moral wrongdoing. We should similarly expect people who regularly attend religious services to try to use, rather than ignore, what they regard as the best moral insights of their religion. We should also expect secular non-ethicists to explore and improve their moral worldviews, in some way that suits their abilities and life circumstances, and apply some of the results.

3. My third thought is to be cautious with charges of hypocrisy. Part of the philosopher's job is to challenge widely held assumptions. This can mean embracing unusual or radical views, if that's where the arguments seem to lead. If we expect high consistency between a professional ethicist's espoused positions and her real-world choices, then we disincentivize highly demanding or self-sacrificial conclusions. But it seems, epistemically, like a good thing if professional ethicists have the liberty to consider, on their argumentative merits alone, the strength of the arguments for highly demanding ethical conclusions (e.g., the relatively wealthy should give most of their money to charity, or if you are attacked you should "turn the other cheek") alongside the arguments for much less demanding ethical conclusions (e.g., there's no obligation to give to charity, revenge against wrongdoing is just fine). If our ethicist knows that as soon as she reaches a demanding moral conclusion she risks charges of hypocrisy, then she might understandably be tempted to draw the more lenient conclusion instead. If we demand that ethicists live according to the norms they endorse, we effectively pressure them to favor lenient moral systems compatible with their existing lifestyles.

(ETA: Based on personal experience, and my sense of the sociology of the field, and one empirical study, it does seem that professional reflection on ethical issues, in contemporary Anglophone academia, coincides with a tendency to embrace more stringent moral norms and to see our lives as permeated with moral choices.)

4. And yet there's a complementary epistemic cost to insulating one's philosophical positions too much from one's life. To gain insight into an ethical position, especially a demanding one, it helps to try to live that way. When Gandhi and Martin Luther King Jr. talk about peaceful resistance, we rightly expect them to have some real understanding, since they have tried to put it to work. Similarly for Christian compassion, Buddhist detachment, strict Kantian honesty, or even egoistic hedonism: We ought to expect people who have attempted to put these things into practice to have, on average, a richer understanding of the issues than those who have not. If an ethicist aspires to write and teach about a topic, it seems almost intellectually irresponsible for them not to try to gain direct personal experience if they can.

(ETA 2: Also, to understand vice, it's probably useful to try it out! Or better, to have lived through it in the past.)

Combining 1, 2, 3, and 4. I don't think all of this fits neatly together. The four considerations are to some extent competing. Should we hold ethics professors to higher ethical standards? Should we expect them to live according to the moral opinions they espouse? Neither "yes" nor "no" does justice to the complexity of the issue.

At least, that's where I'm stuck today. I guess "multi-dimensional" is a polite word for "still confused and waffling".

[image source]

Friday, February 03, 2017

The Unskilled Zhuangzi: Big and Useless and Not So Good at Catching Rats

New essay in draft:

The Unskilled Zhuangzi: Big and Useless and Not So Good at Catching Rats

Abstract: The mainstream tradition in recent Anglophone Zhuangzi interpretation treats spontaneous skillful responsiveness -- similar to the spontaneous responsiveness of a skilled artisan, athlete, or musician -- as a, or the, Zhuangzian ideal. However, this interpretation is poorly grounded in the Inner Chapters. On the contrary, in the Inner Chapters, this sort of skillfulness is at least as commonly criticized as celebrated. Even the famous passage about the ox-carving cook might be interpreted more as a celebration of the knife’s passivity than as a celebration of the cook’s skillfulness.

--------------------------------------

This is a short essay at only 3500 words (about 10 double-spaced pages excluding abstract and references) -- just in and out with the textual evidence. Skill-centered interpretations of Zhuangzi are so widely accepted (e.g., despite important differences, Graham, Hansen, and Ivanhoe) that people interested in Zhuangzi might find it interesting to see the contrarian case.

Available here.

As always, comments welcome either by email or in the comments section of this post. (I'd be especially interested in references to other scholars with a similar anti-skill reading, whom I may have missed.)

[image source]