Thursday, March 05, 2015

Zhuangzi's Delightful Inconsistency about Death

I've been working on a new paper on ancient Chinese philosophy, "Death and Self in the Incomprehensible Zhuangzi" (come hear it Saturday at Pitzer College, if you like). In it, I argue that Zhuangzi has inconsistent views about death, but that that inconsistency is a good thing that fits nicely with his overall philosophical approach.

Most commentators, understandably, try to give Zhuangzi -- the Zhuangzi of the authentic "Inner Chapters" at least -- a self-consistent view. Of course! This is only charitable, you might think. And this is what we almost always try to do with philosophers we respect.

There are two reasons not to take this approach to Zhuangzi.

First, Zhuangzi seems to think that philosophical theorizing is always defective, that language always fails us when we try to force rigid distinctions upon it, and that logical reasoning collapses into paradox when pushed to its natural end (see especially Ch. 2). Thus, you might think that Zhuangzi should want to resist committing to any final, self-consistent philosophical theory.

Second, Zhuangzi employs a variety of devices that seem intended to frustrate the reader's natural desire to make consistent sense of his work, including: stating patent absurdities with a seeming straight face; putting his words in the mouths of various dubious-seeming sources; using humor, parable, and parody; and immediately challenging or contradicting his own assertions.

Thus, I think we can't interpret Zhuangzi in the way we'd interpret most other philosophers: He is not, I think, offering us the One Correct Theory or the Philosophical Truth. His task is different, more subtle, more about jostling us out of our usual habits and complacent confidence, while pushing us in certain broad directions.

Given the brevity of the text, his comments about longevity and death are strikingly frequent. In my view, they exemplify his self-inconsistency in a fun and striking way. I see three strands:

(1.) Living out your full span of years is better than dying young. For example, Zhuangzi appears to advocate that you "live out all your natural years without being cut down halfway" (Ziporyn trans., p. 39). He celebrates trees that are big and useless and thus never chopped down (p. 8, 30-31). He seems to prefer the useless yak who can't catch rats to the weasel who can and who therefore hurries about, dying in a snare (p. 8). He seems to think it a bad outcome to be killed by a tyrant (p. 25, 29-30) or to die because well-meaning friends have drilled holes in your head (p. 54). A butcher so skillful in carving oxen that his blade is still as sharp as if straight from the whetstone is described as knowing "how to nourish life" (p. 23).

(2.) Living out your full span of years is not better than dying young. In seemingly more radical moments, Zhuangzi says that although the sage likes growing old, the sage also likes dying young (p. 43), that the "Genuine Human Beings of old understood nothing about delighting in being alive or hating death. They emerged without delight, submerged again without resistance" (p. 40). He seems to admire groups of friends who are not at all distressed by each other's deaths, who "look upon life as a dangling wart or a swollen pimple, and on death as its dropping off, its bursting and draining" (p. 46-47). Of "early death, old age, the beginning, the end", the sage sees "each of them as good" (p. 43).

(3.) We don't know whether living out your full span of years is better than dying young. This view fits with the general skepticism Zhuangzi expresses in Chapter 2. It doesn't have as broad a base of direct textual support, but there is one striking passage to this effect:

How, then, do I know that delighting in life is not a delusion? How do I know that in hating death I am not like an orphan who left home in youth and no longer knows the way back? Lady Li was a daughter of the border guard of Ai. When she was first captured and brought to Jin, she wept until tears drenched her collar. But when she got to the palace, sharing the king's luxurious bed and feasting on the finest meats, she regretted her tears. How do I know the dead don't regret the way they used to cling to life? (p. 19)

You could try to reconcile these various strands into a consistent view. For example, you could say that they are targeted to readers of different levels of enlightenment (Allinson), or that they reflect different phases of Zhuangzi's intellectual development (possibly Graham). Or you might try to explain away one or the other strand: Maybe he really values death as much as he values life, as part of the infinite series of changes that is life-and-death (possibly Ames or Fraser), or maybe it's only remote "sages", who are lacking something important, who are unmoved by death (Olberding). But each of these interpretations has substantial weaknesses, if intended as a means of reconciling the text into a self-consistent unity.

[revision 6:40 pm: These statements are too compressed to be entirely accurate to these scholars' views, and Olberding in particular suggests that in the course of personal mourning (outside the Inner Chapters) Zhuangzi seems to have a shifting attitude.]

My own approach is to allow Zhuangzi to be inconsistent, since there's textual evidence that Zhuangzi is not trying to present a single, self-consistent philosophical theory. If Zhuangzi thinks that philosophical theorizing is always inadequate in our small human hands, then he might prefer to philosophize in a fragmented, shard-like way, expressing a variety of different, conflicting perspectives on the world. He might wish to frustrate, rather than encourage, our attempts to make neat sense of him, inviting us to mature as philosophers not by discovering the proper set of right and wrong views, but rather by offering his hand as he takes his smiling plunge into confusion and doubt.

That delightfully inconsistent Zhuangzi is the one I love -- the Zhuangzi who openly shares his shifting ideas and confusions, rather than the Zhuangzi that most others seem to see, who has some stable, consistent theory underneath that for some reason he chooses not to display in plain language on the surface of the text.

Related posts:
Skill and Disability in Zhuangzi (Sep. 10, 2014)
Zhuangzi, Big and Useless -- and Not So Good at Catching Rats (Dec. 19, 2008)
The Humor of Zhuangzi; the Self-Seriousness of Laozi (Apr. 8, 2013)
[image source]

Wednesday, February 25, 2015

Depressive Thinking Styles and Philosophy

Recently I read two interesting pieces that I'd like to connect with each other. One is Peter Railton's Dewey Lecture to the American Philosophical Association, in which he describes his history of depression. The other is Oliver Sacks's New York Times column about facing his own imminent death.

One of the inspiring things about Sacks's work is that he shows how people with (usually neurological) disabilities can lead productive, interesting, happy lives incorporating their disabilities and often even turning aspects of those disabilities into assets. (In his recent column, Sacks relates how imminent death has helped give him focus and perspective.) It has also always struck me that depression -- not only major, clinical depression but perhaps even more so subclinical depressive thinking styles -- is common among philosophers. (For an informal poll, see Leiter's latest.) I wonder if this prevalence of depression among philosophers is non-accidental. I wonder whether perhaps the thinking styles characteristic of mild depression can become, Sacks-style, an asset for one's work as a philosopher.

Here's the thought (suggested to me first by John Fischer): Among the non-depressed, there's a tendency toward glib self-confidence in one's theoretical views. (On positive illusions in general among the non-depressed see this classic article.) Normally, conscious human reasoning works like this: First, you find yourself intuitively drawn to Position A. Second, you rummage around for some seemingly good argument or consideration in favor of Position A. Finally, you relax into the comfortable feeling that you've got it figured out. No need to think more about it! (See Kahneman, Haidt, etc.)

Depressive thinking styles are, perhaps, the opposite of this blithe and easy self-confidence. People with mild depression will tend, I suspect, to be less easily satisfied with their first thought, at least on matters of importance to them. Before taking a public stand, they might spend more time imagining critics attacking Position A, and how they might respond. Inclined toward self-doubt, they might be more likely to check and recheck their arguments with anxious care, more carefully weigh up the pros and cons, worry that their initial impressions are off-base or too simple, discard the less-than-perfect, worry that there are important objections that they haven't yet considered. Although one needn't be inclined toward depression to reflect in this manner, I suspect that this self-doubting style will tend to come more naturally to those with mild to moderate depressive tendencies, deepening their thought about the topic at hand.

I don't want to downplay the seriousness of depression, its often negative consequences for one's life including often for one's academic career, and the counterproductive nature of repetitive dysphoric rumination (see here and here), which is probably a different cognitive process than the kind of self-critical reflection that I'm hypothesizing here to be its correlate and cousin. [Update, Feb. 26: I want to emphasize the qualifications of that previous sentence. I am not endorsing the counterproductive thinking styles of severe, acute depression. See also Dirk Koppelberg's comment below and my reply.] However, I do suspect that mildly depressive thinking styles can be recruited toward philosophical goals and, if managed correctly, can fit into, and even benefit, one's philosophical work. And among academic disciplines, philosophy in particular might be well-suited for people who tend toward this style of thought, since philosophy seems to be proportionately less demanding than many other disciplines in tasks that benefit from confident, high-energy extraversion (such as laboratory management and people skills) and proportionately more demanding of careful consideration of the pros and cons of complex, abstract arguments and of precise ways of formulating positions to shield them from critique.

Related posts:
Depression and Philosophy (July 28, 2006)
SEP Citation Analysis Continued: Jewish, Non-Anglophone, Queer, and Disabled Philosophers (August 14, 2014)

Thursday, February 19, 2015

Why I Deny (Strong Versions of) Descriptive Cultural Moral Relativism

Cultural moral relativism is the view that what is morally right and wrong varies between cultures. According to normative cultural moral relativism, what varies between cultures is what really is morally right and wrong (e.g., in some cultures, slavery is genuinely permissible, in other cultures it isn't). According to descriptive cultural moral relativism, what varies is what people in different cultures think is right and wrong (e.g., in some cultures people think slavery is fine, in others they don't; but the position is neutral on whether slavery really is fine in the cultures that think it is). A strong version of descriptive cultural moral relativism holds that cultures vary radically in what they regard as morally right and wrong.

A case can be made for strong descriptive cultural moral relativism. Some cultures appear to regard aggressive warfare and genocide as among the highest moral accomplishments (consider the book of Joshua in the Old Testament); others (ours) think aggressive warfare and genocide are possibly the greatest moral wrongs of all. Some cultures celebrate slavery and revenge killing; others reject those things. Some cultures think blasphemy punishable by death; others take a more liberal attitude. Cultures vary enormously on women's rights and obligations.

However, I reject this view. My experience with ancient Chinese philosophy is the central reason.

Here are the first passages of the Analects of Confucius (Slingerland trans., 2003):

1.1. The Master said, "To learn and then have occasion to practice what you have learned -- is this not satisfying? To have friends arrive from afar -- is this not a joy? To be patient even when others do not understand -- is this not the mark of the gentleman?"
1.2. Master You said, "A young person who is filial and respectful of his elders rarely becomes the kind of person who is inclined to defy his superiors, and there has never been a case of one who is disinclined to defy his superiors stirring up rebellion. The gentleman applies himself to the roots. 'Once the roots are firmly established, the Way will grow.' Might we not say that filial piety and respect for elders constitute the root of Goodness?"
1.3. The Master said, "A clever tongue and fine appearance are rarely signs of Goodness."
1.4. Master Zeng said, "Every day I examine myself on three counts: in my dealings with others, have I in any way failed to be dutiful? In my interactions with friends and associates, have I in any way failed to be trustworthy? Finally, have I in any way failed to repeatedly put into practice what I teach?"

No substantial written philosophical tradition is culturally farther from the 21st century United States than is ancient China. And yet, while we might not personally endorse these particular doctrines, they are not alien. It is not difficult to enter into the moral perspective of the Analects, finding it familiar, comprehensible, different in detail and emphasis, but at the same time homey. Some people react to the text as a kind of "fortune cookie": full of boring and trite -- that is, familiar! -- moral advice. (I think this underestimates the text, but the commonness of the reaction is what interests me.) Confucius does not advocate the slaughter of babies for fun, nor being honest only when the wind is from the east, nor severing limbs based on the roll of dice. 21st century U.S. undergraduates might not understand the text's depths but they are not baffled by it as they would be by a moral system that was just a random assortment of recommendations and prohibitions.

You might think, "of course there would be some similarities!" The ancient Confucians were human beings, after all, with certain natural reactions and who needed to live in a not-totally-chaotic social system. Right! But then, of course, this is already to step away from the most radical form of descriptive cultural moral relativism.

Still, you might say, the Analects is pretty morally different -- the Confucian emphasis on being "filial", for example -- that's not really a big piece of U.S. culture. It's an important way in which the moral stance of the ancient Chinese differs from ours.

This response, I think, underestimates two things.

First, it underestimates the extent to which people in the U.S. do regard it as a moral ideal to care for and respect their parents. The word "filial" is not a prominent part of our vocabulary, but this doesn't imply that attachment to and concern for our parents is minor.

Second, and more importantly, it underestimates the diversity of opinion in ancient China. The Analects is generally regarded as the first full-length philosophical text. The second full-length text is the Mozi. Mozi argues vehemently against the Confucian ideal of treating one's parents with special concern. Mozi argues that we should have equal concern for all people, and no more concern for one's parents than for anyone else's parents. Loyalty to one's state and prince he also rejects, as objectionably "partial". One's moral emphasis should be on ensuring that everyone has their basic necessities met -- food, shelter, clothing, and the like. Whereas Confucius is a traditionalist who sees the social hierarchy as central to moral life, Mozi is a radical, cosmopolitan, populist consequentialist!

And of course, Daoism is another famous moral outlook that traces back to ancient China -- one that downplays social obligation to others and celebrates harmonious responsiveness to nature -- quite different again from Confucianism and Mohism.

Comparing ancient China and the 21st century U.S., I see greater differences in moral outlook within each culture than I see between the cultures. With some differences in emphasis and in culturally specific manifestations, a similar range of outlooks flourishes in both places. (This would probably be even more evident if we had more than seven full-length philosophical texts from ancient China.)

So what about slavery, aggressive warfare, women's rights, and the rest? Here's my wager: If you look closely at cultures that seem to differ from ours in those respects, you will see a variety of opinions on those issues, not a monolithic foreignness. Some slaves (and non-slaves) presumably abhor slavery; some women (and non-women) presumably reject traditional gender roles; every culture will have pacifists who despise military conquest; etc. And within the U.S., probably with the exception of slavery traditionally defined, there still is a pretty wide range of opinion about such matters, especially outside mainstream academic circles.

[image source]

Wednesday, February 11, 2015

The Intrinsic Value of Self-Knowledge

April 2, I'll be a critic at a Pacific APA author-meets-critics session on Quassim Cassam's book Self-Knowledge for Humans. (Come!) In the last chapter of the book, Cassam argues that self-knowledge is not intrinsically valuable. It's only, he says, derivatively or instrumentally valuable -- valuable to the extent it helps deliver something else, like happiness.

I disagree. Self-knowledge is intrinsically valuable! It's valuable even if it doesn't advance some other project, valuable even if it doesn't increase our happiness. Cassam defends his view by objecting to three possible arguments for the intrinsic value of self-knowledge. I'll spot him those objections. Here are three other ways to argue for the intrinsic value of self-knowledge.


1. The Argument from Addition and Subtraction.

Here's what I want you to do: Imaginatively subtract our self-knowledge from the world while keeping everything else as constant as possible, especially our happiness or subjective sense of well-being. Now ask yourself: Is something valuable missing?

Now imaginatively add lots of self-knowledge to the world while keeping everything else as constant as possible. Now ask: Has something valuable been gained?

Okay, I see two big problems with this method of philosophical discovery. Both problems are real, but they can be partly addressed.

Problem 1: The subtraction and addition are too vague to imagine. To do it right, you need to get into details, and the details are going to be tricky.

Reply 1: Fair enough! But still: We can give it a try and take our best guess where it's leading. Suppose I suddenly knew more about why I'm drawn to philosophy. Wouldn't that be good, independent of further consequences? Or subtract: I think of myself as a middling extravert. Suppose I lose this knowledge. Stipulate again: To the extent possible, no practical consequences. Wouldn't something valuable be lost?

Alternatively, consider an alien culture on the far side of the galaxy. What would I wish for it? Would I wish for a culture of happy beings with no self-knowledge? Or, if I imaginatively added substantial self-knowledge to this culture, would I be imagining a better state of affairs in the universe? I think the latter.

Contrast with a case where addition and subtraction leave us cold: seas of iron in the planet's core. Unless there are effects on the planetary inhabitants, I don't care. Add or subtract away, whatever.

Problem 2: What these exercises reveal is only that I regard self-knowledge as something that has intrinsic value. You might differ. You might think: happy aliens, no self-knowledge, great! They're not missing anything important. You might think that unless some practical purpose is served by knowing your personality, you might as well not know.

Reply 2: This is just the methodological problem that's at the root of all value inquiries. I can't rationally compel you to share my view, if you start far enough away in value space. I can just invite you to consider how your own values fit together, suggest that if you think about it, you'll find you already do share these values with me, more or less.

2. The Argument from Nearby Cases.

Suppose you agree that knowledge in general is intrinsically valuable. A world of unreflective bliss would lack something important that a world of bliss plus knowledge would possess. I want my alien world to be a world with inhabitants who know things, not just a bunch of ecstatic oysters.

Might self-knowledge be an exception to the general rule? Here's one reason to think not: Knowledge of the motivations and values and attitudes of your friends and family, specifically, is intrinsically good. Set this up with an Argument from Subtraction: Subtracting from the world people's psychological knowledge of people intimate to them would make the world a worse place. Now do the Nearby Cases step: You yourself are one of those people intimate to you! It would be weird if psychological knowledge of your friends were valuable but psychological knowledge of yourself were not.

Unless you're a hedonist -- and few people, when they really think about it, are -- you probably think that there's some intrinsic value in the rich flourishing of human intellectual and artistic capacities. It seems natural to suppose that self-knowledge would be an important part of that general flourishing.

3. The Argument from Identity.

Another way to argue that something has intrinsic value is to argue that it is in fact identical to something that we already agree has intrinsic value.

So what is self-knowledge? On my dispositional view (see here and here), to know some psychological fact about yourself is to possess a suite of dispositions or capacities with respect to your own psychology. An example:

What is it to know you're an extravert? It's in part the capacity to say, truly and sincerely, "I'm an extravert". It's in part the capacity to respond appropriately to party invitations, by accepting them in anticipation of the good time you'll have. It's in part to be unsurprised to find yourself smiling and laughing in the crowd. It's in part to be disposed to conclude that someone in the room is an extravert. Etc.

My thought is: Those kinds of dispositions or capacities are intrinsically valuable, central to living a rich, meaningful life. If we subtract them away, we impoverish ourselves. Human life wouldn't be the same without this kind of self-attunement or structured responsiveness to psychological facts about ourselves, even if we might experience as much pleasure. And self-knowledge is not just some further thing floating free of those dispositional patterns that could be subtracted without taking them away too. Self-knowledge isn't some independent representational entity contingently connected to those patterns; it is those patterns.

You might notice that this third argument creates some problems for the straightforward application of the Argument from Addition and Subtraction. Maybe in trying to imagine subtracting self-knowledge from the world while holding all else constant to the extent possible, you were imagining or trying to imagine holding constant all those dispositions I just mentioned, like the capacity to say yes appropriately to party invitations. If my view of knowledge is correct, you can't do that. What this shows is that the Argument from Addition and Subtraction isn't as straightforward as it might at first seem. It needs careful handling. But that doesn't mean it's a bad argument.

Conclusion.

I'd go so far as to say this: Self-knowledge, when we have it (which, I agree with Cassam, is less commonly than we tend to think), is one of the most intrinsically valuable things in human life. The world is a richer place because pieces of it can gaze knowledgeably upon themselves and the others around them.

Tuesday, February 03, 2015

How Robots and Monsters Might Break Human Moral Systems

Human moral systems are designed, or evolve and grow, with human beings in mind. So maybe it shouldn't be too surprising if they would break apart into confusion and contradiction if radically different intelligences enter the scene.

This, I think, is the common element in Scott Bakker's and Peter Hankins's insightful responses to my January posts on robot or AI rights. (All the posts also contain interesting comments threads, e.g., by Sergio Graziosi.) Scott emphasizes that our sense of blameworthiness (and other intentional concepts) seems to depend on remaining ignorant of the physical operations that make our behavior inevitable; we, or AIs, might someday lose this ignorance. Peter emphasizes that moral blame requires moral agents to have a kind of personal identity over time which robots might not possess.

My own emphasis would be this: Our moral systems, whether deontological, consequentialist, virtue ethical, or relatively untheorized and intuitive, take as a background assumption that the moral community is composed of stably distinct individuals with roughly equal cognitive and emotional capacities (with special provisions for non-human animals, human infants, and people with severe mental disabilities). If this assumption is suspended, moral thinking goes haywire.

One problem case is Robert Nozick's utility monster, a being who experiences vastly more pleasure from eating cookies than we do. On pleasure-maximizing views of morality, it seems -- unintuitively -- that we should give all our cookies to the monster. If it someday becomes possible to produce robots capable of superhuman pleasure, some moral systems might recommend that we impoverish, or even torture, ourselves for their benefit. I suspect we will continue to find this unintuitive unless we radically revise our moral beliefs.

Systems of inviolable individual rights might offer an appealing answer to such cases. But they seem vulnerable to another set of problem cases: fission/fusion monsters. (Update Feb. 4: See also Briggs & Nolan forthcoming). Fission/fusion monsters can divide into separate individuals at will (or via some external trigger) and then merge back into a single individual later, with memories from all the previous lives. (David Brin's Kiln People is a science fiction example of this.) A monster might fission into a million individuals, claiming rights for each (one vote each, one cookie from the dole), then optionally reconvene into a single highly-benefited individual later. Again, I think, our theories and intuitions start to break. One presupposition behind principles of equal rights is that we can count up rights-deserving individuals who are stable over time. Challenges could also arise from semi-separate individuals: AI systems with overlapping parts.

If genuinely conscious human-grade artificial intelligence becomes possible, I don't see why a wide variety of strange "monsters" wouldn't also become possible; and I see no reason to suppose that our existing moral intuitions and moral theories could handle such cases without radical revision. All our moral theories are, I suggest, in this sense provincial.

I'm inclined to think -- with Sergio in his comments on Peter's post -- that we should view this as a challenge and occasion for perspective rather than as a catastrophe.

[HT Norman Nason; image source]

Monday, February 02, 2015

Brief Interview at The Magazine of Fantasy and Science Fiction

... here, about my story "Out of the Jar", which features a philosophy professor who discovers he's a sim running in the computer of a sadistic teenager.

Friday, January 30, 2015

Not quite up to doing a blog post this week, after the death of my father on the 18th. Instead, I post this picture of a highly energy efficient device in outer space:

Related post: Memories of my father.

Friday, January 23, 2015

Memories of My Father

My father, Kirkland R. Gable (born Ralph Schwitzgebel) died Sunday. Here are some things I want you to know about him.

Of teaching, he said that authentic education is less about textbooks, exams, and technical skills than about moving students "toward a bolder comprehension of what the world and themselves might become." He was a beloved psychology professor at California Lutheran University.

I have never known anyone, I think, who brought as much creative fun to teaching as he did. He gave out goofy prizes to students who scored well on his exams (e.g., a wind-up robot nun who breathed sparks of static electricity: "nunzilla"). Teaching about alcoholism, he would start by pouring himself a glass of wine (actually, water with food coloring), pouring more wine and acting drunker, arguing with himself, as the class proceeded. Teaching about child development, he would bring in my sister or me, and we would move our mouths like ventriloquist dummies as he stood behind us, talking about Piaget or parenting styles (and then he'd ask our opinion about parenting styles). Teaching about neuroanatomy, he brought in a brain jello mold, which he sliced up and passed around class for the students to eat ("yum! occipital cortex!"). Etc.

As a graduate student and then assistant professor at Harvard in the 1960s and 1970s, he shared the idealism of his mentors Timothy Leary and B.F. Skinner, who thought that through understanding the human mind we can transform and radically improve the human condition -- a vision he carried through his entire life.

His comments about education captured his ideal for thinking in general: that we should always aim toward a bolder comprehension of what the world and we ourselves, and the people around us, might become.

He was always imagining the potential of the young people he met, seeing things in them that they often did not see in themselves. He especially loved juvenile delinquents, whom he encouraged to think expansively and boldly. He recruited them from street corners, paying them to speak their hopes and stories into reel-to-reel tapes, and he recorded their declining rates of recidivism as they did this, week after week. His book about this work, Streetcorner Research (1964), was a classic in its day. As a prospective graduate student in the 1990s, I proudly searched the research libraries at the schools I was admitted to, always finding multiple copies with lots of date stamps in the 1960s and 1970s.

With his twin brother Robert, he invented the electronic monitoring ankle bracelet, now used as an alternative to prison for non-violent offenders.

He wanted to set teenage boys free from prison, rewarding them for going to churches and libraries instead of street corners and pool halls. He had a positive vision rather than a penal one, and he imagined everyone someday using location monitors to share rides and to meet nearby strangers with mutual interests -- ideas which, in 1960, seem to have been about fifty years before their time.

With degrees in both law and psychology, he helped to reform institutional practice in insane asylums -- which were often terrible places in the 1960s, whose inmates had no effective legal rights. He helped force these institutions to become more humane and to release harmless inmates held against their will. I recall his stories about inmates who were often, he said, "as sane as could be expected, given their current environment", and maybe saner than their jailors -- for example an old man who decades earlier had painted his neighbor's horse as an angry prank, and thought he'd "get off easy" if he convinced the court he was insane.

As a father, he modeled and rewarded unconventional thinking. We never had an ordinary Christmas tree that I recall -- always instead a cardboard Christmas Buddha (with blue lights poking through his eyes), or a stepladder painted green, or a wild-found tumbleweed carefully flocked and tinseled -- and why does it have to be on December 25th? I remember a few Saturdays when we got hamburgers from different restaurants and ate them in a neutral location -- I believe it was the parking lot of a Korean church -- to see which burger we really preferred. (As I recall, my sister and he settled on the Burger King Whopper, while I could never confidently reach a preference, because it seemed like we never got the methodology quite right.)

He loved to speak with strangers, spreading his warm silliness and unconventionality out into the world. If we ordered chicken at a restaurant, he might politely ask the server to "hold the feathers". Near the end of his life, if we went to a bank together he might gently make fun of himself, saying something like "I brought along my brain," here gesturing toward me with open hands, "since my other brain is sometimes forgetting things now". For years, though we lived nowhere near any farm, we had a sign from the Department of Agriculture on our refrigerator sternly warning us never to feed table scraps to hogs.

I miss him painfully, and I hope that I can live up to some of the potential he so generously saw in me, carrying forward some of his spirit.

-----------------------------------------------

I am eager to hear stories about his life from people he knew, so please, if you knew him, add one story (or more!) as a comment below. (Future visitors from 2018 or whenever, still post!) Stories are also being collected on his Facebook wall.

We are planning a memorial celebration for him in July to which anyone who knew him would be welcome to come. Please email me for details if you're interested.

Friday, January 16, 2015

Two Arguments for AI (or Robot) Rights: The No-Relevant-Difference Argument and the Simulation Argument

Wednesday, I argued that artificial intelligences created by us might deserve more moral consideration from us than do arbitrarily-chosen human strangers (assuming that the AIs are conscious and have human-like general intelligence and emotional range), since we will be partly responsible for their existence and character.

In that post, I assumed that such artificial intelligences would deserve at least some moral consideration (maybe more, maybe less, but at least some). Eric Steinhart has pressed me to defend that assumption. Why think that such AIs would have any rights?

First, two clarifications:

  • (1.) I speak of "rights", but the language can be weakened to accommodate views on which beings can deserve moral consideration without having rights.
  • (2.) AI rights is probably a better phrase than robot rights, since similar issues arise for non-robotic AIs, including oracles (who can speak but have no bodily robotic features like arms) and sims (who have simulated bodies that interact with artificial, simulated environments).

Now, two arguments.

-----------------------------------------

The No-Relevant-Difference Argument

Assume that all normal human beings have rights. Assume that both bacteria and ordinary personal computers in 2015 lack rights. Presumably, the reason bacteria and ordinary PCs lack rights is that there is some important difference between them and us. For example, bacteria and ordinary PCs (presumably) lack the capacity for pleasure or pain, and maybe rights only attach to beings with the capacity for pleasure or pain. Also, bacteria and PCs lack cognitive sophistication, and maybe rights only attach to beings with sufficient cognitive sophistication (or with the potential to develop such sophistication, or belonging to a group whose normal members are sophisticated). The challenge, for someone who would deny AI rights, would be to find a relevant difference which grounds the denial of rights.

The defender of AI rights has some flexibility here. Offered a putative relevant difference, the defender of AI rights can either argue that that difference is irrelevant, or she can concede that it is relevant but argue that some AIs could have it and thus that at least those AIs would have rights.

What are some candidate relevant differences?

(A.) AIs are not human, one might argue; and only human beings have rights. If we regard "human" as a biological category term, then indeed AIs would not be human (excepting, maybe, artificially grown humans), but it's not clear why humanity in the biological sense should be required for rights. Many people think that non-human animals (apes, dogs) have rights. Even if you don't think that, you might think that friendly, intelligent space aliens, if they existed, could have rights. Or consider a variant of Blade Runner: There are non-humans among the humans, indistinguishable from outside, and almost indistinguishable in their internal psychology as well. You don't know which of your neighbors are human; you don't even know if you are human. We run a DNA test. You fail. It seems odious, now, to deny you all your rights on those grounds. It's not clear why biological humanity should be required for the possession of rights.

(B.) AIs are created by us for our purposes, and somehow this fact about their creation deprives them of rights. It's unclear, though, why being created would deprive a being of rights. Children are (in a very different way!) created by us for our purposes -- maybe even sometimes created mainly with their potential as cheap farm labor in mind -- but that doesn't deprive them of rights. Maybe God created us, with some purpose in mind; that wouldn't deprive us of rights. A created being owes a debt to its creator, perhaps, but owing a debt is not the same as lacking rights. (In Wednesday's post, I argued that in fact as creators we might have greater moral obligations to our creations than we would to strangers.)

(C.) AIs are not members of our moral community, and only members of our moral community have rights. I find this to be the most interesting argument. On some contractarian views of morality, we only owe moral consideration to beings with whom we share an implicit social contract. In a state of all-out war, for example, one owes no moral consideration at all to one's enemies. Arguably, were we to meet a hostile alien intelligence, we would owe it no moral consideration unless and until it began to engage with us in a socially constructive way. If we stood in that sort of warlike relation to AIs, then we might owe them no moral consideration even if they had human-level intelligence and emotional range. Two caveats on this: (1.) It requires a particular variety of contractarian moral theory, which many would dispute. And (2.) even if it succeeds, it will only exclude a certain range of possible AIs from moral consideration. Other AIs, presumably, if sufficiently human-like in their cognition and values, could enter into social contracts with us.

Other possibly relevant differences might be proposed, but that's enough for now. Let me conclude by noting that mainstream versions of the two most dominant moral theories -- consequentialism and deontology -- don't seem to contain provisions on which it would be natural to exclude AIs from moral consideration. Many consequentialists think that morality is about maximizing pleasure, or happiness, or desire satisfaction. If AIs have normal human cognitive abilities, they will have the capacity for all these things, and so should presumably figure in the consequentialist calculus. Many deontologists think that morality involves respecting other rational beings, especially beings who are themselves capable of moral reasoning. AIs would seem to be rational beings in the relevant sense. If it proves possible to create AIs who are psychologically similar to us, those AIs wouldn't seem to differ from natural human beings in the dimensions of moral agency and patiency emphasized by these mainstream moral theories.

-----------------------------------------

The Simulation Argument

Nick Bostrom has argued that we might be sims. That is, he has argued that we ourselves might be artificial intelligences acting in a simulated environment that is run on the computers of higher-level beings. If we allow that we might be sims, and if we know we have rights regardless of whether or not we are sims, then it follows that being a sim can't, by itself, be sufficient grounds for lacking rights. There would be at least some conceivable AIs who have rights: the sim counterparts of ourselves.

This whole post assumes optimistic technological projections -- assumes that it is possible to create human-like AIs whose rights, or lack of rights, are worth considering. Still, you might think that robots are possible but sims are not; or you might think that although sims are possible, we can know for sure that we ourselves aren't sims. The Simulation Argument would then fail. But it's unclear what would justify either of these moves. (For more on my version of sim skepticism, see here.)

Another reaction to the Simulation Argument might be to allow that sims have rights relative to each other, but no rights relative to the "higher level" beings who are running the sim. Thus, if we are sims, we have no rights relative to our creators -- they can treat us in any way they like without risking moral transgression -- and similarly any sims we create have no rights relative to us. This would be a version of argument (B) above, and it seems weak for the same reasons.

One might hold that human-like sims would have rights, but not other sorts of artificial beings -- not robots or oracles. But why not? This puts us back into the No-Relevant-Difference Argument, unless we can find grounds to morally privilege sims over robots.

-----------------------------------------

I conclude that at least some artificial intelligences, if they have human-like experience, cognition, and emotion, would have at least some rights, or deserve at least some moral consideration. What range of AIs deserve moral consideration, and how much moral consideration they deserve, and under what conditions, I leave for another day.

-----------------------------------------

(image source)

Wednesday, January 14, 2015

Our Moral Duties to Artificial Intelligences

Suppose that we someday create artificial beings similar to us in their conscious experience, in their intelligence, in their range of emotions. What moral duties would we have to them?

You might think: Our moral duties to them would be similar to our moral duties to natural human beings. A reasonable default view, perhaps. If morality is about maximizing happiness (a common consequentialist view), these beings would deserve consideration as loci of happiness. If morality is about respecting the autonomy of rational agents (a common deontological view), these beings would deserve consideration as fellow rational agents.

One might argue that our moral duties to such beings would be less. For example, you might support the traditional Confucian ideal of "graded love", in which our moral duties are greatest for those closest to us (our immediate family) and decline with distance, in some sense of "distance": You owe less moral consideration to neighbors than to family, less to fellow-citizens than to neighbors, less to citizens of another country than to citizens of your own country -- and still less, presumably, to beings who are not even of your own species. On this view, if we encountered space aliens who were objectively comparable to us in moral worth from some neutral point of view, we might still be justified in favoring our own species, just because it is our own species. And artificial intelligences might properly be considered a different species in this respect. Showing equal concern for an alien or artificial species, including possibly sacrificing humanity for the good of that other species, might constitute a morally odious disloyalty to one's kind. Go, Team Human?

Another reason to think our moral duties might be less, or more, involves emphasizing that we would be the creators of these beings. Our god-like relationship to them might be especially vivid if the AIs exist in simulated environments controlled by us rather than as ordinarily embodied robots, but even in the robot case we would presumably be responsible for their existence and design parameters.

One might think that if these beings owe their existence and natures to us, they should be thankful to us as long as they have lives worth living, even if we don't treat them especially well. Suppose I create a Heaven and a Hell, with AIs I can transfer between the two locations. In Heaven, they experience intense pleasure (perhaps from playing harps, which I have designed them to intensely enjoy). In Hell, I torture them. As I transfer Job, say, from Heaven to Hell, he complains: "What kind of cruel god are you? You have no right to torture me!" Suppose I reply: "You have been in Heaven, and you will be in Heaven again, and your pleasures there are sufficient to make your life as a whole worth living. In every moment, you owe your very life to me -- to my choice to expend my valuable resources instantiating you as an artificial being -- so you have no grounds for complaint!" Maybe, even, I wouldn't have bothered to create such beings unless I could play around with them in the Torture Chamber, so their very existence is contingent upon their being tortured. All I owe such beings, perhaps, is that their lives as a whole be better than non-existence. (My science fiction story Out of the Jar features a sadistic teenage God who reasons approximately like this.)

Alternatively (and the first narrator in R. Scott Bakker's and my story Reinstalling Eden reasons approximately like this), you might think that our duties to the artificial intelligences we create are something like the duties a parent has to a child. Since we created them, and since we have godlike control over them (either controlling their environments, their psychological parameters, or both), we have a special duty to ensure their well being, which exceeds the duty we would have to an arbitrary human stranger of equal cognitive and emotional capacity. If I create an Adam and Eve, I should put them in an Eden, protect them from unnecessary dangers, ensure that they flourish.

I tend to favor the latter view. But it's worth clarifying that our relationship isn't quite the same as parent-child. A young child is not capable of fully mature practical reasoning; that's one reason to take a paternalistic attitude to the child, including overriding the child's desires (for ice cream instead of broccoli) for the child's own good. It's less clear that I can justify being paternalistic in exactly that way in the AI case. And in the case of an AI, I might have much more capacity to control what they desire than I have in the case of my children -- for example, I might be able to cause the AI to desire nothing more than to sit on a cloud playing a harp, or I might cause the AI to desire its own slavery or death. To the extent this is true, this complicates my moral obligations to the AI. Respecting a human peer involves giving them a lot of latitude to form and act on their own desires. Respecting an AI whose desires I have shaped, either directly or indirectly through my early parameterizations of its program, might involve a more active evaluation of whether its desires are appropriate. If the AI's desires are not appropriate -- for example, if it desires things contrary to its flourishing -- I'm probably at least partly to blame, and I am obliged to do some mitigation that I would probably not be obliged to do in the case of a fellow human being.

However, to simply tweak around an AI's desire parameters, in a way the AI might not wish them to be tweaked, seems to be a morally problematic cancellation of its autonomy. If my human-intelligence-level AI wants nothing more than to spend every waking hour pressing a button that toggles its background environment between blue and red, and it does so because of how I programmed it early on, then (assuming we reject a simple hedonism on which this would count as flourishing), it seems I should do something to repair the situation. But to respect the AI as an individual, I might have to find a way to persuade it to change its values, rather than simply reaching my hand in, as it were, and directly altering its values. This persuasion might be difficult and time-consuming, and yet incumbent upon me because of the situation I've created.

Other shortcomings of the AI might create analogous demands: We might easily create problematic environmental situations or cognitive structures for our AIs, which we are morally required to address because of our role as creators, and yet which are difficult to address without creating other moral violations. And even on a Confucian graded-love view, if species membership is only one factor among several, we might still end up with special obligations to our AIs: In some morally relevant sense of "distance" creator and created might be very close indeed.

On the general principle that one has a special obligation to clean up messes that one has had a hand in creating, I would argue that we have a special obligation to ensure the well being of any artificial intelligences we create. And if genuinely conscious human-grade AI somehow becomes cheap and plentiful, surely there will be messes, giant messes -- whole holocausts worth, perhaps. With god-like power comes god-like responsibility.

(image source)

[Thanks to Carlos Narziss for discussion.]

[Updated 6:03 pm]

Monday, January 12, 2015

The 10 Worst Things About Listicles

10. Listicles destroy the narrative imagination and subtract the sublimity from your gaze.

9. The numerosities of nature never equal the numerosity of human fingers.

8. The spherical universe becomes pretzel sticks upon a brief conveyor.

7. In every listicle, opinion subverts fact, riding upon it as upon a sad pony. (Since you momentarily accept everything you hear, you already know this.)

6. The human mind naturally aspires to unifying harmonies that the listicle instead squirts into yogurt cups.

5. Those ten yogurt pretzels spoiled your dinner. That is precisely the relation between a listicle and life.

4. In their eagerness to consume the whole load, everyone skips numbers 4 and 3, thereby paradoxically failing to consume the whole load. This little-known fact might surprise you!

3. Why bother, really. La la.

2. Near the end of eating a listicle you begin to realize, once again, that if you were going to be doing something this pointless, you might as well have begun filing that report. Plus, whatever became of that guy you knew in college? Is this really your life? Existential despair squats atop your screen, a fat brown google-eyed demon.

1. Your melancholy climax is already completed. The #1 thing is never the #1 thing. Your hope that this listicle would defy that inevitable law was only absurd, forlorn, Kierkegaardian faith.

(image source)

Wednesday, January 07, 2015

Psychology Research in the Age of Social Media

Popular summaries of fun psychology articles regularly float to the top of my Facebook feed. I was particularly struck by this fact on Monday when two links popped up: one reporting a study on whether the rich are jerks, the other a study tying Chinese farmers' rice versus wheat agriculture to "collectivistic" versus "individualistic" thinking.

The links were striking to me because both of the articles reported studies that I regarded as methodologically rather weak, though interesting and fun. It occurred to me to wonder whether social media might be seriously aggravating the perennial plague of sexy but dubious psychological research.

Below, I will present some dubious data on this very issue!

But first, why think these two studies are methodologically dubious? [Skip the next two paragraphs if you like.]

The first article, on whether the rich are jerks, is based largely on a study I critiqued here. Among that study's methodological oddities: The researchers set up a jar of candy in their lab, ostensibly for children in another laboratory, and then measured whether wealthy participants took more candy from the jar than less-wealthy participants. Cute! Clever! But here's the weird thing. Despite the fact that the jar was in their own lab, they measured candy-stealing by asking participants whether they had taken candy rather than by directly observing how much candy was taken. What could possibly justify this methodological decision, which puts the researchers at a needless remove from the behavior being measured and conflates honesty with theft? The linked news article also mentions a study suggesting that expensive cars are more likely to be double-parked. Therefore, see, the rich really are jerks! Of course, another possibility is that the wealthy are just more willing to risk the cost of a parking ticket.

The second article highlights a study that examines a few recently-popular measures of "individualistic" vs. "collectivistic" thinking, such as the "triad" task (e.g., whether the participant pairs trains with buses [because of their category membership, supposedly individualistic] or with tracks [because of their functional relation, supposedly collectivistic] when given the three and asked to group two together). According to the study, the northern Chinese, from wheat-farming regions, are more likely to score as individualistic than are the southern Chinese, from rice-farming regions. A clever theory is advanced: wheat farming is individualistic, rice farming communal! (I admit, this is a cool theory.) How do we know that difference is the source of the different performance on the cognitive tasks? Well, two alternative hypotheses are tested and found to be less predictive of "individualistic" performance: pathogen prevalence and regional GDP per capita. Now, the wheat vs. rice difference is almost a perfect north-south split. Other things also differ between northern and southern China -- other aspects of cultural history, even the spoken language. So although the data fit nicely with the wheat-rice theory, many other possible explanations of the data remain unexplored. A natural starting place might be to look at rice vs. wheat regions in other countries to see if they show the same pattern. At best, the conclusion is premature.

I see the appeal of this type of work: It's fun to think that the rich are jerks, or that there are major social and cognitive differences between people based on the agricultural methods of their ancestors. Maybe, even, the theories are true. But it's a problem if the process by which these kinds of studies trickle into social media has much more to do with how fun the results are than with the quality of the work. I suspect the problem is especially serious if academic researchers who are not specialists in the area take the reports at face value, and if these reports then become a major part of their background sense of what psychological research has recently revealed.

Hypothetically, suppose a researcher measured whether poor people are jerks by judging whether people in more or less expensive clothing were more or less likely to walk into a fast-food restaurant with a used cup and steal soda. This would not survive peer review, and if it did get published, objections would be swift and angry. It wouldn't propagate through Facebook, except perhaps as the butt of critical comments. It's methodologically similar, but the social filters would be against it. I conjecture that we should expect to find studies arguing that the rich are morally worse, or finding no difference between rich and poor, but not studies arguing that the poor are morally worse (though they might be found to have more criminal convictions or other "bad outcomes"). (For evidence of such a filtering effect on studies of the relationship between religion and morality, see here.)

Now I suspect that in the bad old days before Facebook and Twitter, popular media reports about psychology had less influence on philosophers' and psychologists' thinking about areas outside their speciality than they do now. I don't know how to prove this, but I thought it would be interesting to look at the usage statistics on the 25 most-downloaded Psychological Science articles in December 2014 (excluding seven brief articles without links to their summary abstracts).

The article with the most views of its abstracted summary was The Pen Is Mightier Than the Keyboard: Advantages of Longhand over Laptop Note Taking. Fun! Useful! The article had 22,389 abstract views in December, 2014. It also had 1,320 full text or PDF downloads. Thus, it looks like at most 6% of the abstract viewers bothered to glance at the methodology. (I say "at most" because some viewers might go straight through to the full text without viewing the abstract separately. See below for evidence that this happens with other articles.) Thirty-seven news outlets picked up the article, according to Psychological Science, tied for highest with Sleep Deprivation and False Memories, which had 4,786 abstract views and 645 full text or PDF downloads (at most 13% clicking through).

Contrast these articles with the "boring" articles (not boring to specialists!). The article with the most full-text downloads was The New Statistics: Why and How: 1,870 abstract views and 4,717 full-text and PDF views -- more than twice as many full views as abstract views. Psychological Science reports no media outlets picking this one up. I guess people interested in statistical methods want to see the details of articles about statistical methods. One other article had more full views than abstract views: the ponderously titled Retraining Automatic Action Tendencies Changes Alcoholic Patients’ Approach Bias for Alcohol and Improves Treatment Outcome: 164 abstract views and 274 full views (67% more full views than abstract views). That article was picked up by only one media outlet. Overall, I found an r = -.49 (p = .01) correlation between the number of news-media pickups and the log of the ratio of full-text views to abstract views.
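
For anyone who wants to check the arithmetic, here's a sketch of that correlation computation in Python, using just the four articles whose numbers I've quoted above (the r = -.49 was computed over the full set of articles, whose data I haven't reproduced here):

```python
# Sketch of the pickups-vs-readership correlation, using only the four
# articles quoted in this post. The r = -.49 reported above used the
# full set of articles, so don't expect these four points to match it.
import numpy as np
from scipy import stats

pickups        = [37, 37, 0, 1]            # news-media pickups
abstract_views = [22389, 4786, 1870, 164]
full_views     = [1320, 645, 4717, 274]    # full-text/PDF views

log_ratio = np.log(np.array(full_views) / np.array(abstract_views))
r, p = stats.pearsonr(pickups, log_ratio)
print(f"r = {r:.2f} (p = {p:.2f})")  # with only four points, the p-value
                                     # is not to be taken seriously
```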

I suppose it's not surprising that articles picked up by the media attract more casual readers who will view the abstract only. I have no way of knowing whether many of these readers are fellow academics in philosophy, psychology, and the other humanities and social sciences. But if so, and if my hypothesis is correct that academic researchers are increasingly exposed to psychological research based on Tweetability rather than methodological quality, that's bad news for the humanities and social sciences. Even if the rich really are jerks.

Thursday, January 01, 2015

Writings of 2014

I guess it's a tradition for me now, posting a retrospective of the past year's writings on New Year's Day. (Here are the retrospectives for 2012 and 2013.)

Two notable things this past year were (a.) several essays on a topic that is new to me: skeptical epistemology of metaphysics ("The crazyist metaphysics of mind", "If materialism is true, the United States is probably conscious", "1% skepticism", "Experimental evidence for the existence of an external world"); and (b.) a few pieces of philosophical science fiction, a genre in which I had only published one co-authored piece before 2014. These two new projects are related: one important philosophical function of science fiction is to enliven metaphysical possibilities that one might not otherwise have taken seriously, thus opening up more things to be undecided among.

Of course I also continued to work on some of my other favorite topics: self-knowledge, moral psychology, the nature of attitudes, the moral behavior of ethicists. Spreading myself a bit thin, perhaps!

Non-fiction appearing in print in 2014:

Non-fiction finished and forthcoming:
Non-fiction in draft and circulating:
Some favorite blog posts:
Fiction:
  • “What kelp remembers”, Weird Tales: Flashes of Weirdness series, #1, April 14, 2014.
  • “The tyrant’s headache”, Sci Phi Journal, issue 3. [appeared Dec. 2014, dated Jan. 2015]
  • “Out of the jar”, The Magazine of Fantasy and Science Fiction [forthcoming].
  • “Momentary sage”, The Dark [forthcoming].
  • (reprint) “Reinstalling Eden” (first author, with R. Scott Bakker), in S. Schneider, ed., Science Fiction and Philosophy, 2nd ed. (Wiley-Blackwell) [forthcoming].
(I also published a couple of poems, and I have several science fiction stories in draft. If you're interested to see any of these, feel free to email me.)

Monday, December 29, 2014

"The Tyrant's Headache" in Sci Phi Journal

According to a broad class of materialist views, conscious experiences -- such as the experience of pain -- do not supervene on the local physical state of the being who is having those conscious experiences. Rather, they depend in part on the past evolutionary or learning history of the organism (Fred Dretske) or on what is "normal" for members of its group (David Lewis). These dependencies are not just causal but metaphysical: The very same (locally defined) brain state might be experienced as pain by one organism and as non-pain by another, in virtue of differences in the organisms' past history or group membership, even if the two organisms are molecule-for-molecule identical at the moment in question.

Donald Davidson's Swampman example is typically used to make this point vivid: You visit a swamp. Lightning strikes, killing you. Simultaneously, through incredibly-low-odds freak quantum chance, a being who is molecule-for-molecule identical to you emerges from the swamp. Does this randomly congealed Swampman, who lacks any learning history or evolutionary history, experience pain when it stubs its toe? Many people seem to have the hunch or intuition that, yes, it would; but any externalist who thinks that consciousness requires a history will have to say no. Dretske makes clear in his 1995 book that he is quite willing to accept this consequence: Swampman feels no pain.

But Swampman cases are only the start of it! If pain depends, for example, on what is normal for your species, then you ought to be able to relieve a headache by altering your conspecifics -- for example, by killing enough of them to change what is "normal" for your species: anaesthesia by genocide. And in general, any view that denies local supervenience while allowing the presence or absence of pain to depend on other currently ongoing events (rather than only on events in the past) should allow that there are conditions under which one can end one's own pain by changing other people, even without any change in one's own locally defined material configuration.

To explore this issue further, I invented a tyrant with a headache, who will do anything to other people to end it, without changing any of his own relevant internally defined brain states.

"The Tyrant's Headache" is a hybrid between a science fiction story and an extended philosophical thought experiment. It has just come out in Sci Phi Journal -- a new journal that publishes both science fiction stories and philosophical essays about science fiction. The story/essay is behind a paywall for now ($3.99 at Amazon or Castalia House). But consider buying! Your $3.99 will support a very cool new journal, and it will get you, in addition to my chronicle of the Tyrant's efforts to end his headache (also featuring David K. Lewis in magician's robes), three philosophical essays about science fiction, eight science fiction stories that explore other philosophical themes, part of a continuing serial, and a review. $3.99 well spent, I hope, and dedicated to strengthening the bridge between science fiction and philosophy.

[See also Anaesthesia by Genocide, David Lewis, and a Materialistic Trilemma]

(image source)

Sunday, December 28, 2014

The Moral World of Dreidel

I used to think dreidel was a poorly designed game of luck. Now I realize that its "bugs" are really features! Dreidel is the moral world in miniature.

Primer for goys: You sit in a circle with friends or relatives and take turns spinning a wobbly top (the dreidel). In the center of the circle is a pot of several foil-wrapped chocolate coins. If the four-sided top lands on the Hebrew letter gimmel, you take the whole pot and everyone needs to contribute to the pot again. If it lands on hey, you take half the pot. If it lands on nun, nothing happens. If it lands on shin, you put a coin in. Then the next player takes a turn.
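
For the algorithmically minded, here's a minimal sketch of one turn under these "official" rules -- assuming, optimistically, a fair dreidel, equal-size coins, and rounding down on hey, which is exactly the sort of assumption the next paragraphs will complicate:

```python
# A minimal sketch of one dreidel turn under the "official" rules --
# assuming a fair dreidel, equal-size coins, and rounding down on hey.
import random

def take_turn(player_coins, pot):
    """One spin for one player; returns updated (player_coins, pot)."""
    side = random.choice(['gimmel', 'hey', 'nun', 'shin'])
    if side == 'gimmel':            # take the whole pot; everyone
        player_coins += pot         # then re-antes (not modeled here)
        pot = 0
    elif side == 'hey':             # take half the pot (rounding: contested!)
        player_coins += pot // 2
        pot -= pot // 2
    elif side == 'shin':            # put one coin in
        player_coins -= 1
        pot += 1
    return player_coins, pot        # 'nun': nothing happens
```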

It all sounds very straightforward, until you actually start to play the game.

First off: Some coins are big, others little. If the game were fair, all the coins would be the same size, or at least there would be clear rules about tradeoffs or about when you're supposed to contribute your big coins and little coins. Also, there's never just one dreidel, and the dreidels all seem to be uneven and biased. (This past Hanukkah, my daughter Kate and I spun a sample of dreidels 40 times each. One in particular landed on shin an incredible 27/40 spins. [Yes, p < .001, highly significant, even with a Bonferroni correction.]) No one agrees whether you should round up or round down with hey; no one agrees when the game should end or how low the pot should be before you all have to contribute again. (You could look at various alleged authorities on the internet, but people prefer to argue and employ varying house rules.) No one agrees whether you should let someone borrow coins if they run out, or how many coins to start with. Some people hoard their coins; others slowly unwrap and eat them while playing, then beg and borrow from their wealthy neighbors.
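
If you'd like to check that p-value yourself, here's the test I have in mind: a one-sided binomial test against the fair-dreidel chance of 1/4 (the function name assumes a recent version of SciPy):

```python
# Is 27 shins in 40 spins consistent with a fair dreidel (P(shin) = 1/4)?
from scipy import stats

result = stats.binomtest(27, n=40, p=0.25, alternative='greater')
print(result.pvalue)  # on the order of 1e-8 -- far below .001, even after
                      # a Bonferroni correction for testing several dreidels
```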

You can, if you want, always push things to your advantage: Always contribute the smallest coins you can, always withdraw the biggest coins you can, insist on using what seems to be the "best" dreidel, always argue for rule-interpretations in your favor, eat your big coins and use that as a further excuse to only contribute little ones, etc. You could do all this without ever once breaking the rules, and you'd probably end up with the most chocolate as a result.

But here's the brilliant part: The chocolate isn't very good. After eating a few coins, the pleasure gained from further coins is pretty minimal. As a result, almost all the children learn that they enjoy being kind and generous more than they would enjoy hoarding up more coins. The pleasure of the chocolate doesn't outweigh the yucky feeling of being a stingy, argumentative jerk. After a few turns of maybe pushing only small coins into the pot, you decide you should put a big coin in next time, even though the rules don't demand it -- just to be fair to others, and to be perceived as fair by them.

Of course, it also feels bad always to be the most generous one -- always to put in big, take out small, always to let others win the rules-arguments, etc., to play the sucker or self-sacrificing saint. Dreidel is a practical lesson in discovering the value of fairness both to oneself and others, in a context where proper interpretation of the rules is unclear, and where there are norm violations that aren't rule violations, and where both norms and rules are negotiable, varying from occasion to occasion -- just like life itself, but with only mediocre chocolate at stake.

(image source)

Tuesday, December 23, 2014

Nussbaum on the Moral Bright Side of Literature

In Poetic Justice, her classic defense of the moral value of the "literary imagination", Martha Nussbaum writes about the children's song "Twinkle, twinkle little star" that:

the fact is that the nursery song itself, like other such songs, nourishes the ascription of humanity, and the prospect of friendship, rather than paranoid sentiments of being persecuted by a hateful being in the sky. It tells the child to regard the star as "like a diamond," not like a missile of destruction, and also not like a machine good only for production and consumption. In this sense, the birth of fancy is non-neutral and does, as Dickens indicates, nourish a generous construction of the seen (p. 39).
Nussbaum also argues that the literary imagination favors the oppressed over the aristocracy:
Whitman calls his poet-judge an "equalizer." What does he mean? Why should the literary imagination be any more connected with equality than with inequality, or with democratic rather than aristocratic ideals?... When we read [the Dickens novel] Hard Times as sympathetic participants, our attention has a special focus. Since the sufferings and anxieties of the characters are among the central bonds between reader and work, our attention is drawn in particular to those characters who suffer and fear. Characters who are not facing any adversity simply do not hook us in as readers (p. 90).
Does listening to nursery rhymes and reading literature cultivate generous and sympathetic friendship, across class and ethnic divides, as Nussbaum seems to think it does? Maybe so! But the evidence isn't really in yet. Nursery rhymes can also be dark and unsympathetic -- "Rock-a-Bye Baby", "Jack and Jill" -- and I must say that aristocrats seem to me over-represented in literature, more often the targets of our sympathies than the poor are. We sympathize with Odysseus, with Hamlet, with the brave knight, with the wealthy characters in Eliot, James, and Fitzgerald, and we tend to overlook the servants around them, except in works intentionally written (as Hard Times was) to turn our eyes toward the working class. True, if these characters had no adversities, they wouldn't engage us; but Hamlet suffers adversity enough to capture our sympathy despite his ample wealth.

Children's literature (especially pre-Disney) mocks and chuckles and laughs callously at suffering as much as it expresses the ideals of wonder and friendship. Children's literature represents the full moral range of human impulses, for good and bad; it would be surprising if that were not so. The same with movies, novels, television, every medium. And "fancy" -- that is, the metaphorical imagination (p. 38) -- can be quite dark and paranoid (especially at night), and sadistic, and sexual, and vengeful, and narcissistic. Fancy is as morally mixed as those who do the fancying.

One might even argue, contra Nussbaum, that there is an aristocratic impulse in literature, a default tendency to present as its focal figures people of great social power, since the socially powerful are typically the ones who do the most exciting things on which the future of their worlds depends. The literary eye is drawn to Lincoln and Caesar and their equivalents, more than to the ordinary farmer who never leaves his land. It takes an egalitarian effort to excite the reader equally about the non-great. And although we are sympathetic with focal figures, the death of non-focal figures (e.g., foes in battle) might tend to excite less sympathy in literature than in real life.

Nussbaum has cherry-picked her sample. She might be right that, on balance, we are morally improved by a broad consumption of literature. (Or at least by "good" literature? But let's be careful about what we build into "good" here, lest we argue in a circle.) But if so, I don't think the case can be made on the grounds that literature tends, overall, to be anti-aristocratic and broadly sympathetic. Nor do I think there is much direct empirical evidence on this question, such as longitudinal studies comparing the moral behavior and attitudes of those extensively exposed to literature with those of people not so exposed. (Impressionistically, I'd say literature professors don't seem much morally better, for all their exposure, than others of similar education and social background with less exposure; but the study has never been done.)

It's an interesting and important issue, what the moral effects of reading literature are -- but to my mind, it remains wide open.

[Image source]

Tuesday, December 16, 2014

Moral Order and Immanent Justice

Let's say the world is morally ordered if good things come to those who act morally well and bad things come to those who act morally badly.

Moral order admits of degrees. We might say that the world is perfectly morally ordered if everyone gets exactly what they morally deserve, perfectly immorally ordered if everyone gets the opposite of what they morally deserve, and has no moral order if there's no relationship between what one deserves and what one gets.
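
One rough way to picture these degrees -- a toy formalization of my own, nothing the concept requires -- is as the correlation between desert and outcome: +1 for a perfectly morally ordered world, -1 for a perfectly immorally ordered one, 0 for no moral order at all. A sketch with invented numbers:

```python
# Toy sketch: degree of moral order as the correlation between desert and
# outcome (+1 = perfect moral order, -1 = perfect immoral order, 0 = none).
# The numbers are invented purely for illustration.
import numpy as np

desert  = np.array([2.0, 1.0, 0.0, -1.0, -2.0])  # how well each person acts
outcome = np.array([1.5, 0.5, 0.5, -0.5, -1.0])  # how well things go for them

print(np.corrcoef(desert, outcome)[0, 1])  # ~0.97: a well-ordered toy world
```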

Moral order might vary by subgroup of individuals considered. Perhaps the world is better morally ordered in 21st-century Sweden than it was in 1930s Russia. Perhaps the world is better morally ordered among some ethnicities or social classes than among others. Class differences highlight the different ways in which moral order can fail: Moral order can fail among the privileged if they do not suffer for acting badly, and among the disadvantaged if they do not benefit from acting well.

Moral order might vary by action type. Sexual immorality might more regularly invite disaster than financial immorality, or vice versa. Kindness to those you know well might precipitate deserved benefits or undeserved losses more dependably than kindness to strangers.

Moral order can be immanent or transcendent. Transcendent moral order is ensured by an afterlife. Immanent moral order eschews the afterlife and is either magical (mystical attraction of good or bad fortune) or natural.

Some possible natural mechanisms of immanent moral order:

* A just society. Obviously.

* A natural attraction to morality of the sort Mencius finds in us. Our hearts are delighted, Mencius says, when we see people do what's plainly good and revolted when we see people do what's plainly wrong. Even if this impulse is weak, it might create a constant pressure to reward people for doing the right and revile them for doing the wrong; and it might add pleasure to one's own personal choices of the right over the wrong.

* The Dostoyevskian and Shakespearian psychological reactions to crime. Crime might generate fear of punishment or exposure, including exaggerated fear; it might lead to a loss of intimacy with others if one must hide one's criminal side from them; and it might encourage further crimes, accumulating risk.

* Shaping our preferences toward noncompetitive goods over competitive ones. If you aim to be richer than your neighbors, or more famous, or triumphant in physical, intellectual, or social battle, then you put your happiness at competitive risk. The competition might encourage morally bad choices; and maybe success in such aims is poorly morally ordered or even negatively morally ordered. Desires for non-competitive goods -- the pleasures of shared friendship and a good book -- seem less of a threat to the moral order (though books and leisure time are not free, and so subject to some competitive pressures). And if it's the case that we can find as much or more happiness in easily obtainable non-competitive goods, then even if wealth goes to the jerks, the world might be better morally ordered than it at first seems.

How morally ordered is the world? Do we live in a world where the knaves flourish while the sweethearts are crushed underfoot? Or do people's moral choices tend to come back around to them in the long run? No question, I think, is more central to one's general vision of the world, that is, to one's philosophy in the broad and proper sense of "philosophy". All thoughtful people have at least implicit opinions about the matter, I think -- probably explicit opinions, too.

Yet few contemporary philosophers address the issue in print. We seem happy to leave the question to writers of fiction.

Tuesday, December 09, 2014

Knowing Something That You Think Is Probably False

I know where my car is parked. It's in the student lot on the other side of the freeway, Lot 30. How confident am I that my car is parked there? Well, bracketing radically skeptical doubts, I'd say about 99.9% confident. I seem to have a specific memory of parking this morning, but maybe that specific memory is wrong; or maybe the car has been stolen or towed or borrowed by my wife due to some weird emergency. Maybe about once in every three years of parking, something like that will happen. Let's assume (from a god's-eye perspective) that no such thing has happened. I know, but I'm not 100% confident.

Justified degree of confidence doesn't align neatly with the presence or absence of knowledge, at least if we assume that it's true that I know where my car is parked (with 99.9% confidence) but false that I know that my lottery ticket will lose (despite 99.9999% confidence it will lose). (For puzzles about such cases, see Hawthorne 2004 and subsequent discussion.) My question for this post is, how far can this go? In particular, can I know something about which I'm less than 50% confident?

"I know that my car is parked in Lot 30; I'm 99.9% confident it's there." -- although that might sound a little jarring to some ears (if I'm only 99.9% confident, maybe I don't really know?), it sounds fine to me, perhaps partly because I've soaked so long in fallibilist epistemology. "I know that my car is parked in Lot 30; I'm 80% confident it's there." -- this sounds a bit odder, though perhaps not intolerably peculiar. Maybe "I'm pretty sure" would be better than "I know"? But "I know that my car is parked in Lot 30; I'm 40% confident it's there." -- that just sounds like a bizarre mistake.

On the other hand, Blake Myers-Schulz and I have argued that we can know things that we don't believe (or about which we are in an indeterminate state between believing and failing to believe). Maybe some of our cases constitute knowledge of some proposition simultaneously with < 50% confidence in that proposition?

I see at least three types of cases that might fit: self-deception cases, temporary doubt cases, and mistaken dogma cases.

Self-deception. Gernot knows that 250 pounds is an unhealthy weight for him. He's unhappy about his weight; he starts half-hearted programs to lose weight; he is disposed to agree when the doctor tells him that he's too heavy. He has seen and regretted the effects of excessive weight on his health. Nonetheless he is disposed, in most circumstances, to say to himself that he's approximately on the fence about whether 250 pounds is too heavy, that he's 60% confident that 250 is a healthy weight for him and 40% confident he's too heavy.

Temporary doubt. Kate studied hard for her test. She knows that Queen Elizabeth died in 1603, and that's what she writes on her exam. But in the moment of writing, due to anxiety, she feels like she's only guessing, and she thinks it's probably false that Elizabeth died in 1603. 1603 is just her best guess -- a guess about which she feels only 40% confident (more confident than about any other year).

Mistaken dogma. Kaipeng knows (as do we all) that death is bad. But he has read some Stoic works arguing that death is not bad. He feels somewhat convinced by the Stoic arguments. He'd (right now, if asked) sincerely say that he has only a 40% credence that death is bad; and yet he'd (right now, if transported) tremble on the battlefield, regret a friend's death, etc. Alternatively: Karen was raised a religious geocentrist. She takes an astronomy class in college and learns that the Earth goes around the sun, answering correctly (and in detail) when tested about the material. She now knows that the Earth goes around the sun, though she feels only 40% confident that it does and retains 60% confidence in her religious geocentrism.

The examples -- mostly adapted from Schwitzgebel 2010, Myers-Schulz and Schwitzgebel 2013, and Murray, Sytsma, and Livengood 2013 -- require fleshing out and perhaps also a bit of theory to be convincing. I offer a variety because I suspect different examples will resonate with different readers. I aim only for an existence claim: As long as there is a way of fleshing out one of these examples so that the subject knows a proposition toward which she has only 40% confidence, I'll consider it success.

As I just said, it might help to have a bit of theory here. So consider this model of knowledge and confidence:

You know some proposition P if you have it -- metaphorically! -- stored in your memory and available for retrieval in such a way that we can rightly hold you responsible for acting or not acting on account of it (and P is true, justified, etc.).

You're confident about some proposition P just in case you'd wager on it, and endorse it, and have a certain feeling of confidence in doing so. (If the wagering, expressing, and feeling come apart, it's a non-canonical, in-between case.)

There will be cases where a known proposition -- because it is unpleasant, or momentarily doubted, or in conflict with something else one wants to endorse -- does not effectively guide how you would wager or govern how you feel. But we can accuse you. We can say, "You know that! Come on!"

So why won't you say "I know that P but I'm only 40% confident in P"? Because such utterances, as explicit endorsements, reflect one's feelings of confidence -- exactly what comes apart from knowledge in these types of cases.