Friday, July 31, 2015

Against Intellectualism about Belief

Sometimes what we sincerely say -- aloud or even just silently to ourselves -- doesn't fit with the rest of our cognition, reactions, and behavior. Someone might sincerely say, for example, that women and men are equally intelligent, but be consistently sexist in his assessments of intelligence. (See the literature on implicit bias.) Someone might sincerely say that her dear friend has gone to Heaven, while her emotional reactions don't at all fit with that.

On intellectualist views of belief, what we really believe is the thing we sincerely endorse, despite any other seemingly contrary aspects of our psychology. On the more broad-based view I prefer, what you believe depends, instead, on how you act and react in a broad range of ways, and sincere endorsements are only one small part of the picture.

Intellectualism might be defended on four grounds.

(1.) Intellectualism might be intuitive. Maybe the most natural or intuitive thing to say about the implicit sexism case is that the person really believes that women are just as smart; he just has trouble putting that belief into action. The person really believes that her friend is in Heaven, but it's hard to avoid reacting emotionally as if her friend is ineradicably dead rather than just "departed".

Reply: Sometimes we do seem to want to say that people believe what they intellectually endorse in cases like this, but I don't think our intuitions are univocal. It can also seem natural or intuitive to say that the implicit sexist doesn't really or wholly or deep-down believe that the sexes are equal, and that the mourner maybe has more doubt about Heaven than she is willing to admit to herself. So the intuitive case could go either way.

(2.) Intellectualism might fit well with our theoretical conceptualization of belief. Maybe it's in the nature of belief to be responsive to evidence and deployable in reasoning. And maybe only intellectually endorsed or endorsable states can play that cognitive role. The implicit sexist's bias might be insufficiently responsive to evidence and insufficiently apt to be deployed in reasoning for it to qualify as belief, while his intellectual endorsement is responsive to evidence and deployable in reasoning.

Reply: Zimmerman and Gendler, in influential essays, have nicely articulated versions of this defense of intellectualism [caveat: see Zimmerman's comment below]. I raised some objections here, and Jack Marley-Payne has objected in more explicit detail, so I won't elaborate in this post. Marley-Payne's and my point is that people's implicit reactions are often sensitive to evidence and deployable in what looks like reasoning, while our intellectual endorsements are often resistant to evidence and rationally inert -- so at least it doesn't seem that there's a sharp difference in kind.

(It was Marley-Payne's essay that got me thinking about this post, I should say. We'll be discussing it, also with Keith Frankish, in September for Minds Online 2015.)

(3.) Intellectualism about belief might cohere well with the conception of "belief" generally used in current Anglophone philosophy. Epistemologists commonly regard knowledge as a type of belief. Philosophers of action commonly think of beliefs coupling with desires to form intentions. Philosophers of language discuss the weird semantics of "belief reports" (such as "Lois believes that Superman is strong" and "Lois believes that Clark Kent is not strong"). Possibly, an intellectualist approach to belief fits best with existing work in these other areas of philosophy.

Reply: I concede that something like intellectualism seems to be presupposed in much of the epistemological literature on knowledge and much of the philosophy-of-language literature on belief reports. However, it's not clear that philosophy of action and moral psychology are intellectualistic. Philosophy of action uses belief mainly to explain what people do, not what they say. For example: Why did Ralph, the implicit sexist, reject Linda for the job? Well, maybe because he wants to hire someone smart for the job and he doesn't think women are smart. Why does the mourner feel sorry for the deceased? Maybe because she doesn't completely accept that the deceased is in Heaven.

Furthermore, maybe coherence with intellectualist views of belief in epistemology and philosophy of language is a mistaken ideal and not in the best interest of the discipline as a whole. For example, it could be that a less intellectualist philosophy of mind, imported into philosophy of language, would help us better see our way through some famous puzzles about belief reports.

(4.) Intellectualism might be the best practical choice because of its effects on people's self-understanding. For example, it might be more effective, in reducing unjustified sexism, to say to an implicit sexist, "I know you believe that women are just as smart, but look at all these spontaneous responses you have" than to say "I know you are sincere when you say women are just as smart, but it appears that you don't through-and-through believe it". Tamar Gendler, Aaron Zimmerman, and Karen Jones have all defended attribution of egalitarian beliefs partly on these grounds, in conversation with me.

Reply: I don't doubt that Gendler, Zimmerman, and Jones are right that many people will react negatively to being told they don't entirely or fully possess all the handsome-sounding egalitarian and spiritual beliefs they think they have. (Nor, I would say, do they entirely lack the handsome beliefs; these are "in-between" cases.) They'll react more positively, and perhaps be more open to rigorous self-examination, if you start on a positive note and coddle them a bit. But I don't know if I want to coddle people in this way. I'm not sure it's really the best thing in the long term. There's something painfully salutary in thinking to yourself, "Maybe deep down I don't entirely or thoroughly believe that women (or racial minorities, or...) are very smart. Similarly, maybe my spiritual attitudes are also mixed up and multivocal." This is a more profound kind of self-challenge, a fuller refusal to indulge in self-flattery. It highlights the uncomfortable truth that our self-image is often ill-tuned to reality.

------------------------------------------

Although all four defenses of intellectualism have some merit, none is decisive. This tangle of reasons leaves us in approximately a tie so far. But we haven't yet come to...

The most important reason to reject intellectualism about belief:

Given the central role of the term "belief" in philosophy of mind, philosophy of action, epistemology, and philosophy of language, we should reserve the term for the most important thing in the vicinity.

Both intellectualism and broad-based views have some grounding in ordinary and philosophical usage. We are at liberty to choose between them. Given that choice, we should prefer the account that picks out the aspect of our psychology that most deserves the central role that "belief" plays in philosophy and folk psychology.

What we sincerely say, what we intellectually endorse, is important. But it is not as important as how we live our way through the world generally. What I say about the intellectual equality of the sexes is important, but not as important as how I actually treat people. My sincere endorsements of religious or atheistic attitudes are important, but they are only a small slice of my overall religiosity or lack of religiosity.

On a broad-based view of belief, to believe that the sexes are equal, or that Heaven exists, or that snow is white, is to steer one's way through the world, in general, as though these propositions are true, not only to be disposed to say they are true. It is this overall pattern of self-steering that we should care most about, and to which we should, if we can do so without violence, attach the philosophically important term "belief".

[image source]

Tuesday, July 28, 2015

Podcast Interview of Me, about Ethicists' Moral Behavior

... other topics included rationalization and confronting one's moral imperfection,

at Rationally Speaking.

Thanks, Julia, for your terrific, probing questions!

Friday, July 24, 2015

Cute AI and the ASIMO Problem

A couple of years ago, I saw the ASIMO show at Disneyland. ASIMO is a robot designed by Honda to walk bipedally with something like the human gait. I'd entered the auditorium with a somewhat negative attitude about ASIMO, having read Andy Clark's critique of Honda's computationally-heavy approach to robotic locomotion (fuller treatment here); and the animatronic Mr. Lincoln is no great shakes.

But ASIMO is cute! He's about four feet tall, humanoid, with big round dark eyes inside what looks a bit like an astronaut's helmet. He talks, he dances, he kicks soccer balls, he makes funny hand gestures. On the Disneyland stage, he keeps up a fun patter with a human actor. ASIMO's gait isn't quite human, but his nervous-looking crouching run only makes him that much cuter. By the end of the show I thought that if you gave me a shotgun and told me to blow off ASIMO's head, I'd be very reluctant to do so. (In contrast, I might quite enjoy taking a shotgun to my darn glitchy laptop.)

Another case: ELIZA was a simple computer program written in the 1960s that would chat with a user, using a small template of pre-programmed responses to imitate a non-directive psychotherapist (“Are such questions on your mind often?”, “Tell me more about your mother.”). Apparently, some users mistook it for a human and spent long periods chatting with it.

I assume that ASIMO and ELIZA are not proper targets of substantial moral concern. They have no more consciousness than a laptop computer, no more capacity for genuine joy and suffering. However, because they share some of the superficial features of human beings, people might come improperly to regard them as targets of moral concern. And future engineers could presumably create entities with an even better repertoire of superficial tricks. When I discussed this issue with my sister, she mentioned a friend who had been designing a laptop that would scream and cry when its battery ran low. Imagine that!

Conversely, suppose that it's someday possible to create an Artificial Intelligence so advanced that it has genuine consciousness, a genuine sense of self, real joy, and real suffering. If that AI also happens to be ugly or boxy or poorly interfaced, it might tend to attract less moral concern than is warranted.

Thus, our emotional responses to AIs might be misaligned with the moral status of those AIs, due to superficial features that are out of step with the AI's real cognitive and emotional capacities.

In the Star Trek episode "The Measure of a Man", a scientist who wants to disassemble the humanoid robot Data (sympathetically portrayed by a human actor) says of the robot, "If it were a box on wheels, I would not be facing this opposition." He also points out that people normally think nothing of upgrading the computer systems of a starship, though that means discarding a highly intelligent AI.

I have a cute stuffed teddy bear I bring to my philosophy of mind class on the day devoted to animal minds. Students scream in shock when, without warning in the middle of class, I suddenly punch the teddy bear in the face.

Evidence from developmental and social psychology suggests that we are swift to attribute mental states to entities with eyes and movement patterns that look goal directed, much slower to attribute mentality to eyeless entities with inertial movement patterns. But of course such superficial features needn’t track underlying mentality very well in AI cases.

Call this the ASIMO Problem.

I draw two main lessons from the ASIMO Problem.

First is a methodological lesson: In thinking about the moral status of AI, we should be careful not to overweight emotional reactions and intuitive judgments that might be driven by such superficial features. Low-quality science fiction -- especially low-quality science fiction films and television -- does often rely on audience reaction to such superficial features. However, thoughtful science fiction sometimes challenges or even inverts these reactions.

The second lesson is a bit of AI design advice. As responsible creators of artificial entities, we should want people to neither over- nor under-attribute moral status to the entities with which they interact. Thus, we should generally try to avoid designing entities that don’t deserve moral consideration but to which normal users are nonetheless inclined to give substantial moral consideration. This might be especially important in the design of children’s toys: Manufacturers might understandably be tempted to create artificial pets or friends that children will love and attach to -- but we presumably don’t want children to attach to a non-conscious toy instead of to parents or siblings. Nor do we presumably want to invite situations in which users might choose to save an endangered toy over an endangered human being!

On the other hand, if we do someday create genuinely human-grade AIs who merit substantial moral concern, it would probably be advisable to design them in a way that would evoke the proper range of moral emotional responses from normal users.

We should embrace an Emotional Alignment Design Policy: Design the superficial features of AIs in such a way that they evoke the moral emotional reactions that are appropriate to the real moral status of the AI, whatever it is, neither more nor less.

(What is the real moral status of AIs? More soon! In the meantime, see here and here.)

[image source]

Sunday, July 19, 2015

Philosophy Via Facebook? Why Not?

An adaptation of my June blog post What Philosophical Work Could Be, in today's LA Times.

--------------------------------------

Academic philosophers tend to have a narrow view of what counts as valuable philosophical work. Hiring, tenure, promotion and prestige depend mainly on one's ability to produce journal articles in a particular theoretical, abstract style, mostly in reaction to a small group of canonical historical and 20th-century figures, for a small readership of specialists. We should broaden our vision.

Consider the historical contingency of the journal article, a late-19th century invention. Even as recently as the middle of the 20th century, leading philosophers in Western Europe and North America did important work in a much broader range of genres: the fictions and difficult-to-classify reflections of Sartre, Camus and Unamuno; Wittgenstein's cryptic fragments; the peace activism and popular writings of Bertrand Russell; John Dewey's work on educational reform.

Popular essays, fictions, aphorisms, dialogues, autobiographical reflections and personal letters have historically played a central role in philosophy. So also have public acts of direct confrontation with the structures of one's society: Socrates' trial and acceptance of the hemlock; Confucius' inspiring personal correctness.

It was really only with the generation hired to teach the baby boomers in the 1960s and '70s that academic philosophers' conception of philosophical work became narrowly focused on the technical journal article.

continued here.

Tuesday, July 14, 2015

The Moral Lives of Ethicists

[published today in Aeon Magazine]

None of the classic questions of philosophy are beyond a seven-year-old's understanding. If God exists, why do bad things happen? How do you know there's still a world on the other side of that closed door? Are we just made of material stuff that will turn into mud when we die? If you could get away with killing and robbing people just for fun, would you? The questions are natural. It's the answers that are hard.

Eight years ago, I'd just begun a series of empirical studies on the moral behavior of professional ethicists. My son Davy, then seven years old, was in his booster seat in the back of my car. "What do you think, Davy?" I asked. "People who think a lot about what's fair and about being nice – do they behave any better than other people? Are they more likely to be fair? Are they more likely to be nice?"

Davy didn’t respond right away. I caught his eye in the rearview mirror.

"The kids who always talk about being fair and sharing," I recall him saying, "mostly just want you to be fair to them and share with them."

When I meet an ethicist for the first time – by "ethicist", I mean a professor of philosophy who specializes in teaching and researching ethics – it's my habit to ask whether ethicists behave any differently to other types of professor. Most say no.

I'll probe further: Why not? Shouldn't regularly thinking about ethics have some sort of influence on one’s own behavior? Doesn't it seem that it would?

To my surprise, few professional ethicists seem to have given the question much thought. They'll toss out responses that strike me as flip or are easily rebutted, and then they'll have little to add when asked to clarify. They'll say that academic ethics is all about abstract problems and bizarre puzzle cases, with no bearing on day-to-day life – a claim easily shown to be false by a few examples: Aristotle on virtue, Kant on lying, Singer on charitable donation. They'll say: "What, do you expect epistemologists to have more knowledge? Do you expect doctors to be less likely to smoke?" I'll reply that the empirical evidence does suggest that doctors are less likely to smoke than non-doctors of similar social and economic background. Maybe epistemologists don’t have more knowledge, but I'd hope that specialists in feminism would exhibit less sexist behavior – and if they didn't, that would be an interesting finding. I'll suggest that relationships between professional specialization and personal life might play out differently for different cases.

It seems odd to me that our profession has so little to say about this matter. We criticize Martin Heidegger for his Nazism, and we wonder how deeply connected his Nazism was to his other philosophical views. But we don’t feel the need to turn the mirror on ourselves.

The same issues arise with clergy. In 2010, I was presenting some of my work at the Confucius Institute for Scotland. Afterward, I was approached by not one but two bishops. I asked them whether they thought that clergy, on average, behaved better than, the same as, or worse than laypeople.

"About the same," said one.

"Worse!" said the other.

No clergyperson has ever expressed to me the view that clergy behave on average morally better than laypeople, despite all their immersion in religious teaching and ethical conversation. Maybe in part this is modesty on behalf of their profession. But in most of their voices, I also hear something that sounds like genuine disappointment, some remnant of the young adult who had headed off to seminary hoping it would be otherwise.

In a series of empirical studies – mostly in collaboration with the philosopher Joshua Rust of Stetson University – I have explored the moral behavior of ethics professors. As far as I'm aware, Josh and I are the only people ever to have done so in a systematic way.

Here are the measures we looked at: voting in public elections, calling one's mother, eating the meat of mammals, donating to charity, littering, disruptive chatting and door-slamming during philosophy presentations, responding to student emails, attending conferences without paying registration fees, organ donation, blood donation, theft of library books, overall moral evaluation by one's departmental peers based on personal impressions, honesty in responding to survey questions, and joining the Nazi party in 1930s Germany.

[continued in the full article here]

Wednesday, July 08, 2015

Profanity Inflation, Profanity Migration, and the Paradox of Prohibition

As a fan of profane language judiciously employed, I fear that the best profanities of English are cheapening from overuse -- or worse, that our impulses to offend through profane language are beginning to shift away from harmless terms toward more harmful ones.

I am inspired to these thoughts by Rebecca Roache's recent Philosophy Bites podcast on swearing.

Roache distinguishes between objectionable slurs (especially racial slurs) and presumably harmless swear words like "fuck". The latter words, she suggests, should not be forbidden, although she acknowledges that in some contexts it might be inappropriate to use them. Roache also suggests that it's silly to forbid "fuck" while allowing obvious replacements like "f**k" or "the f-word". Roache says, "We should swear more, and we shouldn't use asterisks, and that's fine." (31:20).

I disagree. Overstating somewhat, I disagree because of this:

"Fuck" is a treasure of the English language. Speakers of other languages will sometimes even reach across the linguistic divide to relish its profanity. "Fuck" is a treasure precisely because it is forbidden. Its being forbidden is the source of its profane power and emotional vivacity.

When I was growing up in California in the 1970s, "fuck" was considered the worst of the seven words you can't say on TV. You would never hear it in the media, or indeed -- in my posh little suburb -- from any adults, except maybe, very rarely, from some wild man from somewhere else. I don't think I heard my parents or any of their friends say the word even once, ever. It wasn't until fourth grade that I learned that the word existed. What a powerful word, then, for a child to relish in the quiet of his room, or to suddenly drop on a friend!

"Fuck" is in danger. Its power is subsiding from its increased usage in the public sphere. Much as the overprinting of money devalues it, profanity inflation risks turning "fuck" into another "damn". The hundred-dollar-bill of swear words doesn't buy as much shock as it used to. (Yes, I sound like an old curmudgeon -- but it's true!)

Okay, a qualification: I'm pretty sure what I've just said is true for the suburban California dialect; but I'm also pretty sure "fuck" was never so powerful in some other dialects. Some evidence of its increased usage overall, and its approach toward "damn", is this Google NGram of "fuck", "shit", and "damn" in "lots of books", 1960-2008:

[Google NGram chart]

A further risk: As "fuck" loses its sting and emotional vivacity, people who wish to use more vividly offensive language will find themselves forced to other options. The most offensive alternative options currently available in English are racial slurs. But unlike "fuck", racial slurs are plausibly harmful in ordinary use. The cheapening of "fuck" thus risks forcing the migration of profanity to more harmful linguistic locations.

The paradox of prohibition, then: If the woman in the eCard above wishes to preserve the power of her favorite word, she should cheer for it to remain forbidden. She should celebrate, not bemoan, the existence of standards against the use of "fuck" on major networks, the awarding of demerits for its use in school, and its almost complete avoidance by responsible adults in public contexts. Conversely, some preachers might wish to encourage the regular recitation of "fuck" in the preschool curriculum. (Okay, that last remark was tongue in cheek. But still, wouldn't it work?)

Despite the substantial public interest in retaining the forbidden deliciousness of our best swear word, I do think that since the word is in fact (pretty close to) harmless, severe restrictions would be unjust. We should condemn it only by the forgiving standards we usually apply to etiquette violations, even if this results in the term's not being quite as potent as it otherwise would be.

Finally, let me defend usages like "f**k" and "the f-word". Rather than being silly avoidances because we all know what we're talking about, such decipherable maskings communicate and reinforce the forbiddenness of "fuck". Thus, they help to sustain its power as an obscenity.

[image source]

Thursday, July 02, 2015

How In-Between Cases of Belief Differ Normatively from In-Between Cases of Extraversion

For twenty years, I've been advocating a dispositional account of belief, according to which to believe that P is to match, to an appropriate degree and in appropriate respects, a "dispositional stereotype" characteristic of the belief that P. In other words: All there is to believing that P is being disposed, ceteris paribus (all else equal or normal or right), to act and react, internally and externally, like a stereotypical belief-that-P-er.

Since the beginning, two concerns have continually nagged at me.

One concern is the metaphysical relation between belief and outward behavior. It seems that beliefs cause behavior and are metaphysically independent of behavior. But it's not clear that my dispositional account allows this -- a topic for a future post.

The other concern, my focus today, is this: My account struggles to explain what has gone normatively wrong in many "in-between" cases of belief.

The Concern

To see the worry, consider personality traits, which I regard as metaphysically similar to beliefs. What is it to be extraverted? It is just to match, closely enough, the dispositional stereotype that we tend to associate with being extraverted -- that is, to be disposed to enjoy parties, to be talkative, to like meeting new people, etc. Analogously, on my view, to believe there is beer in the fridge is, ceteris paribus, to be disposed to go to the fridge if one wants a beer, to be disposed to feel surprise if one were to open the fridge and find no beer, to answer "yes" when asked if there is beer in the fridge, etc.

One interesting thing about personality traits is that people are rarely 100% extravert or 100% introvert, rarely 100% high-strung or 100% mellow. Rather, people tend to be between the extremes, extraverted in some respects but not in others, or in some types of contexts but not in others. One feature of my account of belief which I have emphasized from the beginning is that it easily allows for the analogous in-betweenness: We often match only imperfectly, and in some respects, the stereotype of the believer in racial equality, or of the believer in God, or of the believer that the 19th Street Bridge is closed for repairs. ("The Splintered Mind"!)

The worry, then, is this: There seems to be nothing at all normatively wrong -- no confusion, no failing -- with being an in-between extravert who has some extraverted dispositions and other introverted ones; in contrast, it does seem that typically something has gone wrong in structurally similar cases of in-between believing. If some days I feel excited about parties and other days I loathe the thought, with no particular excuse or explanation for my different reactions, no problem: I'm just an in-between extravert. In contrast, if some days I am disposed to act and react as if Earth is the third planet from the Sun and other days I am disposed to act and react as if it is the fourth, with no excuse or explanation, then something has gone wrong. Being an in-between extravert is typically not irrational; being an in-between believer typically is irrational. Why the difference?

My Answer

First, it's important not to exaggerate the difference. Too arbitrary an arrangement of, or fluctuation in, one's personality dispositions does seem at least a bit normatively problematic. If I'm disposed to relish the thought of a party when the wall to my left is beige and to detest the thought of a party when the wall to my left is truer white, without any explanatory story beneath, there's something weird about that -- especially if one accepts, as I do, following McGeer and Zawidzki, that shaping oneself to be comprehensible to others is a central feature of mental self-regulation. And on the other hand, some ways of being an in-between believer are entirely rational: for example, having an intermediate degree of confidence or having procedural "how to" knowledge without verbalizable semantic knowledge. But this so far is not a full answer. Wild, inexplicable patterns still seem more forgivable for traits like extraversion than attitudes like belief.

A second, fuller reply might be this: There is a pragmatic or instrumental reason to avoid wild splintering of one's belief dispositions that does not apply to the case of personality traits. It's good (at least instrumentally good, maybe also intrinsically good?) to be a believer of things, roughly, because it's good to keep track of what's going on in one's environment and to act and react in ways that are consonant with that. Per impossibile, if one faced the choice between being a creature with the capacity to form dispositional structures that respond to evidence, stay mostly stable except under the influence of new evidence, and guide one's behavior accordingly, and being a creature without the capacity to form such evidentially stable dispositional structures, it would be pragmatically wise to choose the former. On average, plausibly, one would live longer and attain more of one's goals. So perhaps the extra normative failing in wildly splintering belief dispositions derives from that. An important part of the value of having stable belief-like dispositional sets is to guide behavior in response to evidence. In normatively defective in-between cases, that value isn't realized. And if one explicitly embraces wild in-betweenness in belief, one goes the extra step of thumbing one's nose at such structures, when one could, instead, try to employ them toward one's ends.

Whether these two answers are jointly sufficient to address the concern, I haven't decided.

[Thanks to Sarah Paul and Matthew Lee for discussion.]

[image source]

Monday, June 29, 2015

A New Podcast Interview of Me

here.

Thanks to Daniel Bensen for the fun interview! We discuss the rights of artificial intelligences, whether our moral intuitions break down in far-out SF cases, the relationship between science fiction and philosophy, and my recent story "Momentary Sage".

Thursday, June 25, 2015

Celebrate the Nerd!

Here's my definition of a nerd:

A nerd is someone who loves an intellectual topic, for its own sake, to an unreasonable degree.

The nerd might be unreasonably passionate about Leibnizian metaphysics, for example -- she studies Latin, French, and German so she can master the original texts, she stays up late reading neglected passages, argues intensely about obscure details with anyone who has the patience to listen. Or she loves twin primes in that same way, or the details of Napoleonic warfare, or the biology of squids. How could anyone care so much about such things?

It's not that the nerd sees some great practical potential in studying twin primes (though she might half-heartedly try to defend herself in that way), or is responding in the normal way to something that sensible people might study carefully because of its importance (such as a cure for leukemia). Rather the nerd is compelled by an intellectual topic and builds a substantial portion of her life around it, with no justification that would make sense to anyone who is not similarly consumed by that topic. All passions drift free of reasonable justification to some extent, but still there's a difference between moderate passions and passions so extreme and compelling that one is somewhat unbalanced as a result of them. The nerd will sacrifice a lot -- time, money, opportunities -- to learn just a little bit more about her favored topic.

The secondary features of nerdiness are side effects: The nerd might not care about dressing nicely. She's too busy worrying about the Leibniz Nachlass. The nerd might fail at being cool -- she's not invested in developing the social skills that would be required. The nerd might be introverted: Maybe she really was introverted all along and that's part of why she found herself with her nerdy passions; or maybe she's an introvert partly in reaction to other people's failure to care about squid. Oh, but now squid have come up in the conversation? Her knowledge is finally relevant! The nerd becomes now too eager to deploy her vast knowledge. She won't stop talking. She'll correct all your minor errors. She'll nerdsplain tirelessly at you.

The nerd needn't possess any of these secondary features: Caring intensely about the Leibniz Nachlass needn't consume one entirely, and so there can still be room for the nerd to care also, in a normal, non-intellectual way, about ordinary things. But the tendency on average will be for nerdy passion to push away other interests and projects, with the result that uncool, shlumpy introverts will be overrepresented among nerds.

Innate genius might exist. But I don't find the empirical evidence very compelling. What I think passes for innate genius is often just nerdy passion. Meeting the nerd on her own turf, she can appear to be a natural-born genius or talent because she has already thought the topic through so thoroughly that she operates two moves ahead of you and has a chess-master-like recognition of the patterns of intellectual back-and-forth in the area. She has thought repetitively, and from many angles, of the various ways in which pieces of Leibniz might possibly connect, or about the wide range of techniques in prime-number mathematics, or about the four competing theories of squid neural architecture and their relative empirical weaknesses. She dreams them at night. How could you hope to keep up? She will also master related domains so that she exceeds you there, too -- early modern philosophy generally and abstract metaphysics, say, for the Leibniz nerd. Other aspects of her mind might not be so great -- just ask her to fix a faucet or find her way around downtown -- but meet her anywhere near her turf and she'll scorch right past you. If she is good enough also at exuding an aura of intelligence (not all nerds are, but it's a social technique that pairs well with nerdiness), then you might attribute her overperformance on Leibniz to her innate brilliance, her underperformance in plumbing to her not giving a whit.

Movies like Good Will Hunting drive me nuts, because they feed the impression that intellectual accomplishment is the result of an innate gift, rather than the result of nerdy passion. In this way, they are antithetical to the vision of nerdiness that I want to celebrate. A janitor who doesn't care (much?) about math but is innately great at it -- and somehow also knows better than history graduate students what's going on in obscure texts in their field? Such innate-genius movies rely on the fixed mindset that Carol Dweck has criticized. What I think I see in the nerdy eminences I have met is not so much innate genius as years of thought inspired by passion for stuff that no one sensible would care so much about.

Society needs nerds. If we want to know as much as a society ought to know about Leibniz and about squids, we benefit from having people around who are so unreasonably passionate about these things that they will master them to an amazing degree. There's also just something glorious about a world that contains people who care as passionately about obscure intellectual topics as the nerd does.

**** Celebrate the nerd! ****

[image source, image source]

Thursday, June 18, 2015

Why Do We Care about Discovering Life, Exactly?

It would be exciting to discover life on another planet -- no doubt about that! But why would it be exciting?

Let's start with a contrast: the possibility of finding intelligence that is not alive -- a robot or a god, without means of reproduction. (Standard textbook definitions, philosophy of biology, and NASA-sponsored discussions all tend to define "life" partly in terms of reproduction.) I'm inclined to think that the search for extra-terrestrial life would have been successful in its aims if we discovered a manufactured robot or a non-reproducing god, even if such beings are not technically alive or are only borderline cases of living things. So maybe what we call the "search for life" is better conceptualized as the search for... well, what exactly?

(Could we discover evidence of a god -- a creator being who exists outside of our space and time? I don't see why not, at least hypothetically. Maybe we find a message in the stars: "Hey, God here! Ask me for a miracle and I will produce one!")

The robot and god cases might suggest that what we really care about is finding intelligence. SETI, for example, takes that as its explicit goal: the Search for Extra-Terrestrial Intelligence. But an emphasis on intelligence appears to underestimate our target. We'd be excited to find microbes on Mars or Europa -- and the search for extra-terrestrial life would rightly be regarded as having met with success (though not the most exciting form of success) -- despite microbes' lack of intelligence.

Or do microbes possess some sort of minimal intelligence? They engage in behaviors that sustain their homeostasis, repelling some substances and consuming others, for example, in a way that preserves their internal order. This type of "intelligence" is also part of standard definitions of life. Maybe, then, order-preserving homeostasis is what excites us? But then, Jupiter's Great Red Spot does something similar, and we don't seem to think of it as the kind of thing we're looking for in searching for life.

Are we looking, then, for complexity? Maybe a microbe is more complex than the Great Red Spot. (I don't know. Measuring complexity is a vexed issue.) But sheer complexity doesn't seem like what we're after. Galaxies are complex, and the canyons of Mars are complex, and there are subtle, complex variations in cosmic background radiation -- all very interesting, but the search for life appears to be something different, not just a search for complexity.

Maybe discovering life would be interesting because it would give us a glimpse of our potential past? Life on Earth evolved up from microbes, but it's still obscure how. Seeing microbial life elsewhere might illuminate our own origins. Maybe, if it's very different from us, it will also illuminate the contingency of our origins.

Maybe discovering life would be interesting because it would complete the Copernican revolution, which knocked human beings out of the center of the cosmos? Earth is still special in being the only planet known to have life, and maybe that sense of specialness is still implicit in our thinking. Finding life elsewhere might knock us more fully from the center of the cosmos.

Maybe discovering life would be interesting because it would be a discovery of something with awesome potential? Reproduction might work its way back into our considerations here. Microbes can reproduce and thus evolve, and maybe their awesomeness lies partly in the possibility that in a billion years they could give rise to multicellular entities very different from us -- capable of very different forms of consciousness, self-awareness, pleasure and pain, creativity, art.

Maybe discovering life would be interesting because terrifying -- either because of the threat alternative life forms might directly pose to life on Earth or, more subtly, because if non-technological life is common enough in the universe for us to discover it, then the Great Filter of Fermi's Paradox is more likely to be before us than behind us. (That is, it might be evidence that biological life is common while technological intelligence is rare, and thus that technological civilizations tend to destroy themselves in short order.)

On the flip side, maybe it would be interesting for its potential use: intelligences with technology to share, non-technological organisms with interesting biologies from which we could learn to construct new medicines or other technologies.

Would it be interesting in the same way to find remnants of life? I'm inclined to think it would have some of the same interest. If so, and if we're inclined to think, for whatever reason, that technological societies tend to be short-lived, then we might dedicate some resources toward detecting possible signs of dead civilizations. Such signs might include solar collectors that interfere with stellar output, or stable compounds in a planet's atmosphere that are unlikely to have arisen except by technological means.

I see no reason we need to insist on a single answer to questions about what ambitions we do or should have in our search for extra-terrestrial company of some sort. But in the context of space policy it seems worth more extended thought. I'd like to see philosophers more involved in this, since the issues go right to the heart of philosophical questions about what we do and should value in general.

------------------------------

Acknowledgement: This is one of two main issues that struck me during my recent trip to an event on the search for extraterrestrial life, funded by NASA and the Library of Congress. Thanks to LOC, NASA, and the other participants. I discuss the other issue, about our duties to extraterrestrial microbes, here.

[image source, image source]

Thursday, June 11, 2015

What Philosophical Work Could Be

Academic philosophers in Anglophone Ph.D.-granting departments tend to have a narrow conception of what counts as valuable philosophical work. Hiring, tenure, promotion, and prestige turn mainly on one's ability to write an essay in a particular theoretical, abstract style, normally in reaction to the work of a small group of canonical historical and 20th century figures, on a fairly constrained range of topics, published in a limited range of journals and presses. This is too narrow a view.

I won't discuss cultural diversity here, which I have addressed elsewhere. Today I'll focus on genre and medium.

Consider the recency and historical contingency of the philosophical journal article. It's a late 19th century invention. Even as late as the mid-20th century, leading philosophers in Western Europe and North America were doing important work in a much broader range of styles than is typical now. Think of the fictions and difficult-to-classify reflections of Sartre, Camus, and Unamuno, the activism and popular writings of Russell, Dewey's work on educational reform, Wittgenstein's fragments. It's really only with the generation hired to teach the baby boomers that our conception of philosophical work became narrowly focused on the academic journal article, and on books written in that same style.

(Miguel de Unamuno)

Consider the future of media. The magazine is a printing-press invention and carries with it the history and limitations of that medium. With the rise of the internet, other possibilities emerge: videos, interactive demonstrations, blogs, multi-party conversations on social media, etc. Is there something about the journal article that makes it uniquely better for philosophical reflection than these other media? (Hint: no.)

Nor need we think that philosophical work must consist of expository argumentation targeted toward disciplinary experts and students in the classroom. This, too, is a narrow and historically recent conception of philosophical work. Popular essays, fictions, aphorisms, dialogues, autobiographical reflections, and personal letters have historically played a central role in philosophy. We could potentially add, too, public performances, movies, video games, political activism, and interactions with the judicial system and governmental agencies.

Philosophers are paid to develop expertise in philosophy, to bring that expertise in philosophy into the classroom, and to contribute that expertise to society in part by further advancing philosophical knowledge. A wide range of activities fit within that job description. I am inclined to be especially liberal here for two reasons: First, I have a liberal conception of philosophy as inquiry into big-picture ontological, normative, conceptual, and broadly theoretical issues about anything (including, e.g., hair and football as well as more traditionally philosophical topics). I favor treating a wide range of inquiries as philosophical, only a small minority of which happen in philosophy departments. And second, I have a liberal conception of "inquiry" on which sitting at one's desk reading and writing expository arguments is only one sort of inquiry. Engaging with the world, trying out one's ideas in action, seeing the reactions of non-academics, exploring ideas in fiction and meditation -- these are also valuable modes of inquiry that advance our philosophical knowledge, activities in which we not only deploy our expertise but cultivate and expand it, influencing society and, in a small or a large way, the future of both academic philosophy and non-academic philosophical inquiry.

Research-oriented philosophy departments tend to regard writing for popular media or consulting with governmental agencies as "service", which is typically held in less esteem than "research". I'm not sure service should be held in less esteem; but I would also challenge the idea that such work is not also partly research. If one approaches popular writing as a means of "dumbing down" pre-existing philosophical ideas for an audience of non-experts whose reactions one does not plan to take seriously, then, yes, that popular writing is not really research. But if the popular essay is itself a locus of philosophical creativity, where philosophical ideas are explored in hopes of discovering new possibilities, advancing (and not just marketing) one's own thinking, furthering the community's philosophical dialogue in a way that might strike professional philosophers, too, as interesting rather than merely familiar re-hashing, and if it's done in a way that is properly intellectually responsive to the work of others, then it is every bit as much "research" as is a standard journal article. Analogously with consulting -- and with Twitter feeds, TED videos, and poetry.

I urge our discipline to conceptualize philosophical work more broadly than we typically do. A Philosophical Review article can be an amazing, awesome thing. Yes! But we should see journal articles of that style, in that type of venue, as only one of many possible forms of important, field-shaping philosophical work.

Thursday, June 04, 2015

Space Agencies Need, but Don't Appear to Have, Policies Governing Contact with Microbial Life on Mars

NASA and other leading space agencies do not appear to have formal policies about how to treat microbial life if it's found elsewhere in the solar system. I find this surprising.

I still need to do a more thorough search to be confident of this. However, last week when I went to an event jointly sponsored by NASA and the Library of Congress, the people I spoke to there seemed to think that there's no worked-out formal policy; nor have I found such a policy in subsequent internet searches. (Please correct me by email or in the comments below if I'm wrong!)

NASA and other space agencies do have rigorous and detailed protocols regarding the cross-contamination of microbial life between planets. If you want to send a lander to Mars, it must be thoroughly sterilized. Likewise, extensive protocols are being developed to protect Earth from possible extra-terrestrial microbes in returned samples. NASA has an Office of Planetary Protection that focuses on these issues. However, contact with microbial life raises ethical issues besides cross-contamination.

Suppose NASA discovers a patch of microbes on Mars.

Presumably, NASA scientists will want to test it -- to see how similar Martian life is to Earthly life, for example. Testing it might involve touching it. Maybe NASA scientists will want a rover to scoop up a sample for chemical analysis. But that would mean interfering with the organisms, exposing them to risk. Even just shining light on microbes to examine them more closely is a form of interference that presents some risk -- even the shadow of a parked rover creates a small degree of interference and risk. How much interference with extraterrestrial microbial life is acceptable? How much risk? These questions will arise acutely as soon as we discover extraterrestrial life. In fact, proving that we have actually discovered life might already involve some interference, especially if the sample is ambiguous or subsurface. These questions are quite independent of existing regulations about sterilization and contamination. We need to consider them now, in advance, before we discover life. Otherwise, NASA leaders might be in the position of making these decisions on the fly, without sufficient public input or oversight.

Here's another question in the ethics of contact: Suppose we discover a species of microbe that appears to be under threat of extinction due to local environmental conditions. Should we employ something like a "Prime Directive" policy, on the microbial level: no interference, even if that means extinction? Or should we take positive steps toward alien species protection?

Planetary protection policies that focus on contamination risk seem to rely on standard top-down regulatory models requiring compliance with a fixed set of detailed rules, but I wonder if a better model might be university Institutional Review Boards for the protection of human participants (IRBs) and Animal Care and Use Committees (ACUCs). Such committees have three appealing features:

First, rather than a rigid set of rules, IRBs and ACUCs employ a flexible set of general guidelines. The guidelines governing research on human participants tend to be very conservative about risk in general; but the committee is also charged with weighing risks against benefits. In the context of extraterrestrial microbiology, a reasonable standard might be extreme caution about interference, but one that allows, for example, a small sample to be very carefully taken from a large, healthy microbial colony, for experimentation and then careful disposal without re-release into the planetary environment. As reflection on this example suggests, people might have very different ethical opinions about how much risk and interference is appropriate, and of what sort. Also, expert scientists will want to think in advance about assessing the sources of risk and what feasible steps can be taken to minimize those risks, contingent on various types of possible preliminary information about the microbe's structure and habitat. I do not see evidence that these issues are being given the serious thought, with public input, that they need to be given.

Second, IRBs and ACUCs are normally constituted by a mix of scientist and non-scientist members, the latter typically drawn from the general public (often lawyers and schoolteachers). The scientists bring their scientific expertise, which is essential to evaluating the risks and possible benefits, while the non-scientist members play an important role in expressing general community values and in keeping the scientists from going too easy on their scientist friends; the non-scientists also sometimes bring specific expertise on related non-scientific issues. In the context of the treatment of extraterrestrial microbial life, a mixed committee also seems important. It shouldn't only be the folks at the space agencies who are making these calls.

Third, IRBs and ACUCs assess specific protocols in advance of the implementation of those protocols. This should be done where feasible, while also recognizing that some decisions may need to be made urgently without pre-approval when unexpected events occur.

I think we should begin to establish moderately specific national and international guidelines governing human interaction with microbial life elsewhere in the solar system, in which contamination is regarded as only one issue among several; that we should formulate these guidelines after broad input not only from scientists but also from the general public and from people with expertise in risk and research ethics; and that we should form committees, modeled on IRBs and ACUCs, of people who understand these guidelines and stand ready to evaluate proposals at the very moment we discover extraterrestrial life.

NASA, ESA, etc., what do you think?

[image source]

Friday, May 29, 2015

The Immortal's Dilemma

Most of the philosophical literature on immortality and death -- at least that I've read -- doesn't very thoroughly explore the consequences of temporal infinitude. Bernard Williams, for example, suggests that 342 years might be a tediously long life. Well, of course 342 years is peanuts compared to infinitude!

It seems to me that true temporal infinitude forces a dilemma between two options:
(a.) infinite repetition of the same things, without memory, or
(b.) an ever-expanding range of experiences that eventually diverges so far from your present range of experiences that it becomes questionable whether you should regard that future being as "you" in any meaningful sense.

Call this choice The Immortal's Dilemma.

Given infinite time, a closed system will eventually cycle back through its states, within any finite error tolerance. (One way of thinking about this is the Poincaré recurrence theorem.) There are only so many relevantly distinguishable states a closed system can occupy. Once it has occupied them all, it has to start repeating at least some of them. If memory belongs to the system's structure of states, then memory too is among the things that must start afresh and repeat. But it seems legitimate to wonder whether the forgetful repetition of the same experiences, infinitely again and again, is something worth aspiring toward -- whether it's what we can or should want, or what we thought we might want, in immortality.
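For the mathematically inclined, here is a standard formulation of the recurrence theorem (the informal "finite error tolerance" above corresponds to the choice of the set A): if T is a measure-preserving transformation of a probability space (X, Σ, μ) and A is a measurable set with μ(A) > 0, then almost every point of A returns to A infinitely often:

\[
\mu\big(\{ x \in A : T^{n}(x) \in A \text{ for infinitely many } n \ge 1 \}\big) = \mu(A).
\]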

It might seem better, then, or more interesting, or more worthwhile, to have an open system. Unless the system is ever-expanding, though, or includes an ever-expanding population of unprecedented elements, eventually it will loop back around. Thus, to avoid repetition within any finite error tolerance, events will eventually have to get more and more remote from the original run of events you lived through -- with no end to the increasing remoteness.

Suppose that conscious experience is what matters. (Parallel arguments can be made for other ways of thinking about what matters.) First, one might cycle through every possible human experience. Suppose, for example, that human experience depends on a brain of no more than a hundred trillion neurons (currently we have a hundred billion, but that might change), and that each neuron is capable of one of a hundred trillion relevantly distinguishable states, and that any difference in even one neuron in the course of a ten-second "specious present" results in a relevantly distinguishable experience. A liberal view of the relationship between different neural states and different possible experiences!
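To make the finitude vivid, run the numbers from those assumptions (the exact exponents don't matter; what matters is that the bound is finite):

\[
(10^{14})^{10^{14}} \;=\; 10^{14 \times 10^{14}} \;=\; 10^{1.4 \times 10^{15}}
\]

That is the maximum number of relevantly distinguishable brain states, and hence an upper bound on the number of relevantly distinguishable ten-second experiences. An infinite life contains infinitely many ten-second stretches, so by the pigeonhole principle some experiences must eventually repeat.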

Of course such numbers, though large, are still finite. So once you're done living through all the experiences of seeming-Aristotle, seeming-Gandhi, seeming-Hitler, seeming-Hitler-seeming-to-remember-having-earlier-been-Gandhi, seeming-future-super-genius, and seeming-every-possible-person-else and many, many more experiences that probably wouldn't coherently belong to anyone's life, well, you've either got to settle in for some repetition or find some new range of experiences that include experiences that are no longer human. [Clarification June 1: Not all these states need occur, but that only shortens the path to looping or alien weirdness.] Go through the mammals. Then go through hypothetical aliens. Expand, expand -- eventually you'll have run through all possible smallish creatures with a neural or similar basis and you'll need to go to experiences that are either radically alien or vastly superhuman or both. At some point -- maybe not so far along in this process -- it seems reasonable to wonder, is the being who is doing all this really "you"? Even if there is some continuous causal thread reaching back to you as you are now, should you, as you are now, care about that being's future any more than you care about the future of some being unrelated to you?

Either amnesic infinite repetition or a limitless range of unfathomable alien weirdness. Those appear to be the choices.

References to good discussions of this in the existing literature welcome in the comments section!

[Thanks particularly to Benjamin Mitchell-Yellin for discussion.]

Related posts:
Nietzsche's Eternal Recurrence, Scrambled Sideways (Oct. 31, 2012)
My Boltzmann Continuants (Jun. 6, 2013)
Goldfish-Pool Immortality (May 30, 2014)
Duplicating the Universe (Apr. 29, 2015)

[image source]

Thursday, May 21, 2015

Leading SF Novels: Academic Library Holdings and Citation Rates

A substantial portion of the most culturally influential English-language fiction writers of the 20th century wrote science fiction or fantasy -- "speculative fiction" (SF) broadly construed: H.G. Wells, J.R.R. Tolkien, George Orwell, Isaac Asimov, Philip K. Dick, and Ursula K. Le Guin, for starters. In the 21st century so far, speculative fiction remains culturally important. There's sometimes a feeling among speculative fiction writers, though, that even the best recent work in the genre isn't taken seriously by academic scholars. I thought I'd look at a couple of possible (imperfect!) measures of this.

(I'm doing this partly just for fun, 'cause I'm a dork and I find this kind of thing relaxing, if you'll believe it.)

Holdings of recent SF in academic libraries

I generated a list of critically acclaimed SF novels by considering Hugo, Nebula, and World Fantasy award winners from 2009-2013 plus any non-winning novels that were among the 5-6 finalists for at least two of the three awards. Nineteen novels met the criteria.

Then I looked at two of the largest Anglophone academic library holdings databases, COPAC and Melvyl, and counted how many different campuses (max 30-ish) had a print copy of each book [see endnote for details].
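The bookkeeping behind those counts is simple; here's a minimal sketch of it (with hypothetical holdings data, since I did the actual catalog lookups by hand):

```python
# Hypothetical sketch of the tally. Each catalog lookup yields the set of
# campuses holding at least one print copy of a book; the reported figure
# is the size of the union of those sets across both catalogs.

holdings = {
    "The Graveyard Book": {
        "COPAC": {"Oxford", "Cambridge", "Edinburgh"},      # invented entries
        "Melvyl": {"UC Berkeley", "UCLA", "UC Riverside"},  # invented entries
    },
    # ... the other eighteen novels ...
}

for title, by_catalog in sorted(holdings.items()):
    campuses = set().union(*by_catalog.values())
    print(f"{len(campuses)} campuses: {title}")
```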

H = Hugo finalist, N = Nebula finalist, W = World Fantasy finalist; stars indicate winners.

The results, listed from most held to least:

16 campuses: Neil Gaiman, The Graveyard Book (H*W)
15: George R.R. Martin, A Dance with Dragons (HW)
15: China Mieville, The City & the City (H*NW*)
12: Cory Doctorow, Little Brother (HN)
12: Ursula K. Le Guin, Powers (N*)
12: China Mieville, Embassytown (HN)
12: Connie Willis, Blackout / All Clear (H*N*)
11: Paolo Bacigalupi, The Windup Girl (HN*)
11: G. Willow Wilson, Alif the Unseen (W*)
10: Kim Stanley Robinson, 2312 (HN*)
8: N.K. Jemisin, The Hundred Thousand Kingdoms (HNW)
8: N.K. Jemisin, The Killing Moon (NW)
8: John Scalzi, Redshirts (H*)
8: Jeff VanderMeer, Finch (NW)
8: Jo Walton, Among Others (H*N*W)
7: Cherie Priest, Boneshaker (HN)
7: Caitlin Kiernan, The Drowning Girl (NW)
5: Nnedi Okorafor, Who Fears Death (NW*)
3: Saladin Ahmed, Throne of the Crescent Moon (HN)

As a reference point, I did a similar analysis of PEN/Faulkner award winners and finalists over the same period.

Of the 25 PEN winners and finalists, 7 were held by more campuses than was any book on my SF list, though the difference was not extreme: two were at 24 campuses (Jennifer Egan, A Visit from the Goon Squad; Joseph O'Neill, Netherland) and five ranged from 18 to 21. In the PEN group, just as in the SF group, nine books were held by fewer than ten campuses (3, 5, 6, 7, 7, 7, 9, 9, 9) -- so the lower parts of the two lists look pretty similar.

References in Google Scholar

Citation patterns in Google Scholar tell a similar story. Although citation rates are generally low by philosophy and psychology standards (assuming as a comparison group the most-praised philosophy and psychology books of the period), they are not very different between the SF and PEN lists. The SF books for which I could find five or more Google Scholar citations:

53 citations: Gaiman, The Graveyard Book
52: Doctorow, Little Brother
27: Martin, A Dance with Dragons
26: Bacigalupi, The Windup Girl
9: Priest, Boneshaker
8: Robinson, 2312
5: Okorafor, Who Fears Death

The top-cited PEN books were at 70 (O'Neill, Netherland) and 59 (Egan, A Visit from the Goon Squad). After those two, there's a gap down to 17, 15, 12, 11, 10.

I continue to suspect that there is a perception difference between "highbrow" literary fiction and "middlebrow" SF, disadvantaging SF studies in some quarters of the university; but if so, perhaps it is offset by recognition of SF's broader visibility in popular culture, so that in terms of overall scholarly attention the result appears to be approximately a tie.

---------------------------------

Bestsellers:

So... hey! That makes me wonder about bestsellers. I've taken the four best-selling fiction books from each year 2009-2013 (according to USA Today for 2009-2012 and Nielsen BookScan for 2013) and run the same analysis. (The catalogs are a bit messier for these books, since they tend to have multiple editions, so the numbers are a little rougher.)

Top five by citations (# of campuses in parens):

431: Suzanne Collins, The Hunger Games (23)
333: Stephenie Meyer, Twilight (26)
162: Stephenie Meyer, Breaking Dawn (17)
132: Stephenie Meyer, New Moon (15)
130: Stieg Larsson, The Girl with the Dragon Tattoo (12)

Only 4 of the 19 had fewer than 10 citations, and all were held by at least six campuses.

So by both of these measures, bestsellers are receiving more academic attention than either the top critically acclaimed SF or the PEN winners and finalists. Notable: by my count, 8 of the 19 bestsellers are SF, including all four of the most-cited books.

Maybe that's as it should be: The Hunger Games and Twilight are major cultural phenomena, worthy of serious discussion for that reason alone, in addition to whatever merits they might have as literature.

---------------------------------

Endnote:
COPAC covers the major British and Irish academic libraries, Melvyl the ten University of California campuses. I counted up the total number of campuses in the two systems with at least one holding of each book, limiting myself to print holdings (electronic and audio holdings were a bit disorganized in the databases, and spot checking suggested they didn't add much to the overall results since most campuses with electronic or audio also had print of the same work).

As always, corrections welcome!

Thursday, May 14, 2015

Moral Duties to Flawed Gods

Suppose that God exists and is morally imperfect. (I'm inclined to think that if a god exists, that god is not perfect.) If God has created me and sustains the world, I owe a pretty big debt to her/him/it. Now suppose that this morally imperfect God tells me to wear a blue shirt today instead of a brown one. No greater good would be served; it's just God's preference, for no particular reason. God tells me to do it, but doesn't threaten me with punishment if I don't -- she (let's say "she") just appeals to my sense of moral obligation: "I am your creator," she says, "and I work to sustain your whole universe. I'd like you to do it. You owe me!"

One way we might conceptualize a morally flawed god is this: We might be sims, or model playthings, in a world that is subject to the whims of some larger being with the power to radically manipulate or destroy it, and who therefore has sufficient powers to be properly conceptualized as a god by us. Alternatively, if technology advances sufficiently, we ourselves might create genuinely conscious rational beings who live as sims or playthings, and then we would be gods relative to them.

It is helpful, I think, to consider these issues simultaneously bottom-up and top-down -- both in terms of what we ourselves would owe to such a hypothetical god and in terms of what we, if we hypothetically gained divine levels of power over created beings, could legitimately demand of those beings. It seems a reasonable desideratum of a theory that the constraints be symmetrical: Whatever a flawed god could legitimately demand of us, we, if we had similar attributes in relation to beings we created, could legitimately demand of them; and contrapositively, whatever we could not legitimately demand of beings we created, we should not recognize as demands a flawed god could make upon us, barring some relevant asymmetry between the situations.

Here are three possible approaches to God's authority to command:

(1.) Love of God and/or the good. Divine command theory is the view that we are obliged to do whatever God commands. Christian articulations of this view have typically assumed a morally perfect God, whom we obey out of love for him, or love of the good, or both (e.g., Adams 1999). A version of this view might be adapted to the case where God is morally flawed: We might still love her, and obey her from love (as one might obey another human out of love); or one might obey because one admires and respects the goodness of God and her commands, even if God is not perfectly good and this particular command is flawed.

(2.) Acknowledgement of debt. Other approaches to divine command theory emphasize God's power and our debt as God's creations (for example, Augustine: "Unless you turn to Him and repay the existence that He gave you... you will be wretched. All things owe to God, first of all, what they are insofar as they are natures" [cited here] and the conclusion of the Book of Job). A secular comparison might be the debt children owe to their parents for their creation and sustenance, for example as emphasized in the Confucian tradition.

(3.) Social contract theory. According to social contract theory, what gives (morally flawed) governmental representatives legitimate authority to command us is something like the fact that, hypothetically, the overall social arrangement is fair, and we would agree to it if it were offered from the right kind of neutral position. God might say: Universes require gods to create, command, and sustain them -- or at least your universe has required one -- and I am the god in that role, executing my powers in a manner that would be antecedently recognizable as fair. Surely you would agree, hypothetically, to the justice of the creation of your world under this general arrangement?

Now when I consider these possible justifications of a morally imperfect God's authority to command, what strikes me is that all three seem to justify only rather limited power. To see this, consider three types of command: (a.) the trivial and arbitrary, (b.) the non-trivial and arbitrary, and (c.) the non-arbitrary and non-trivial.

It is perhaps legitimate for a god to make trivial, arbitrary demands -- like wearing a blue shirt today rather than a brown one -- and for a created being to satisfy them, in recognition of a personal relationship or a debt. Similarly legitimate, it seems, are non-arbitrary demands that God makes for excellent reasons, justifiable either interpersonally or through social contract theory.

My own sense, however -- does yours differ? -- is that arbitrary but non-trivial demands should be sharply limited. Suppose, for example, that God says she wants me to go out to the student commons and do a chicken dance -- not for any good reason but just as a passing minor whim, because she wants me to. I'd be embarrassed, but no serious consequences would ensue. My feeling is that God would not be in the right to make this sort of demand of me; nor would I be in the right to demand it of my creations, were I ever to create genuinely conscious beings over whom I had divine degrees of power.

It seems to me that this would be wrong in the same way that it would be wrong for my mother or wife to ask it of me for no good reason: It would be a matter of someone's treating her own whims as of greater importance than my legitimate desires and interests. It would violate the principle of equality. But if that's correct -- if an imperfect god's whims don't trump my interests for that type of reason -- then in the relevant moral sense, we are God's equals.

You might say: If a god really did create us, our debt is enormous. Indeed it would be! But what follows? My parents created me, and they raised me through childhood, so my debt to them is also enormous; and my government paid for my education and my roads and my protection, so in a sense my government has also created and sustained me, and my debt to it is also enormous. However, once I have been created, I have a dignity and interests that even those who have created and sustained me cannot legitimately disregard to satisfy their whims. And I see no reason to suppose this limitation on the morally legitimate exercise of power is any less for gods than for fellow humans.

A morally perfect god might be different. Necessarily, such a god would not demand anything morally illegitimate. But I think a sober look at the world suggests that if there is any creating or sustaining god of substantial power, that god is far from morally perfect. If that god tells me never to mix clothing fibers or never to work on the sabbath, she had better also supply a good reason.

Related posts:

Our Possible Imminent Divinity (Jan. 2, 2014)
Our Moral Duties to Artificial Intelligences (Jan. 14, 2015)

[image source]

Monday, May 11, 2015

Network Map of Philosophical SF Authors

Andrew Higgins has done one of his beautiful network maps for my Philosophical SF authors list:

[click to see full size]

Andrew writes:

This graph represents a network of science fiction authors and philosophers, with the authors linked to philosophers just in case the philosopher listed that author as philosophically interesting. Authors are labeled, and label size corresponds to the number of philosophers mentioning them. Label colors and positions are rough indicators of similarity. Colors represent groups of authors; as an intuitive gloss, if authors A1-An are the same color, that means the connections among the As are ≥ their connections to authors in other groups. Author positions are determined by a combination of three forces -- gravity, attraction, and repulsion -- applied to the network until it has settled into a stable position (a local peak in the space of possible positions). All nodes gravitate to the center and repulse one another, and nodes are attracted just in case they are connected. So, positions and colors can be seen as weak indicators of similarity, whatever kind of similarity is highlighted by philosophers' choices.

But, given the relatively small sample size and lack of strong modularity in the network, we should be cautious in inferring anything about these authors (or philosophers) based on their relative positions or colors.
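What Andrew describes is a standard force-directed layout. For the curious, here's a bare-bones sketch of that kind of update loop -- my own simplification with invented constants, not Andrew's actual code or parameters:

```python
# Minimal force-directed layout sketch: gravity toward the center,
# repulsion between every pair of nodes, attraction along edges.
# (Illustrative constants only -- not the parameters behind Andrew's map.)
import random

def layout(nodes, edges, steps=2000, gravity=0.01, repulse=0.1, attract=0.05):
    pos = {n: [random.uniform(-1, 1), random.uniform(-1, 1)] for n in nodes}
    for _ in range(steps):
        force = {n: [0.0, 0.0] for n in nodes}
        for n in nodes:  # gravity: pull every node toward the origin
            force[n][0] -= gravity * pos[n][0]
            force[n][1] -= gravity * pos[n][1]
        for a in nodes:  # repulsion: push every pair of nodes apart
            for b in nodes:
                if a == b:
                    continue
                dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
                d2 = dx * dx + dy * dy + 1e-9
                force[a][0] += repulse * dx / d2
                force[a][1] += repulse * dy / d2
        for a, b in edges:  # attraction: pull connected nodes together
            dx, dy = pos[b][0] - pos[a][0], pos[b][1] - pos[a][1]
            force[a][0] += attract * dx
            force[a][1] += attract * dy
            force[b][0] -= attract * dx
            force[b][1] -= attract * dy
        for n in nodes:  # take a small step along the net force
            pos[n][0] += 0.1 * force[n][0]
            pos[n][1] += 0.1 * force[n][1]
    return pos

print(layout(["Le Guin", "Dick", "Borges"], [("Le Guin", "Dick")]))
```

Iterating until the net forces roughly cancel leaves the network in the kind of stable configuration (local optimum) Andrew mentions.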

Friday, May 08, 2015

Competing Perspectives on the Significance of One's Final, Dying Thought

Here's a particularly unsentimental view about last, dying thoughts: Your dying thought will be your least important thought. After all (assuming no afterlife), it is the one thought guaranteed to have no influence on any of your future thoughts, or on any other aspect of your psychology.

Now maybe if you express the thought aloud -- "I did not get my Spaghetti Os. I got spaghetti. I want the press to know this." -- or if your last thought is otherwise detectable by others, it will have an effect; but for this post let's assume a private last thought that influences no one else.

A narrative approach to the meaning of life seems to recommend a different attitude toward last thoughts. If a life is like a story, you want it to end well! The ending of a story colors all that has gone before. If the hero dies resentful or if the hero dies content, that rightly changes our understanding of earlier events. It does so not only because we might now understand that all along the hero felt subtly resentful, but also because private deathbed thoughts, on this view, have a retrospective transformative power: An earlier betrayal, for example, now becomes a betrayal that was forgiven by the end (or it becomes one that was never forgiven). The ghost's appearance to Hamlet has one type of significance if Hamlet ends badly and quite a different significance if Hamlet ends well. On the narrative view, the significance of events depends partly on the future. Maybe this is part of what Solon had in mind when he told King Croesus not to call anyone happy until they die: A horrible enough disaster at the end, maybe, can retrospectively poison what your marriage and seeming successes had really amounted to. Thus, maybe the last thought is like the final sentence of a book: Ending on a thought of love and happiness makes your life a very different story than does ending on a thought of resentment and regret.

The unsentimental view seems to give too little significance to one's last thought -- I, at least, would want to die on a positive note! -- but the narrative view seems to give one's last thought too much. I doubt we're deprived of the significance of someone's life by not knowing their last thought, in the way we'd be deprived of the significance of a story by not knowing its last sentence. Also, the last sentence of a story is a contrived feature of a type of work of art, a sentence the work is designed to render highly significant; a last thought, in contrast, might be trivially unimportant by accident (if you're thinking about what to have for lunch, then hit by a truck you didn't see coming), or it might not reflect a stable attitude (if you're grumpy from pain).

Maybe the right answer is just a compromise: The last thought is not totally trivial, because it has some narrative power, but life isn't so much like a narrative that the last thought has the power of a story's final sentence. Life has narrative elements, but the independent pieces also have a power and value that isn't hostage to future outcomes.

Here's another possibility, which interacts with the first two: Maybe one's last thought is an opportunity. But what kind of opportunity it is will depend on whether last thoughts can retrospectively change the significance of earlier events.

On the narrative view, it is an opportunity to -- secretly! with an almost magical time-piercing power -- make it the case that Person A was forgiven by you or never forgiven, that Action B was regretted or never regretted, etc.

On the unsentimental view, in contrast, it is an opportunity to think things that, had you thought them earlier, would have been too terrible to think because of their possible impact on your future thoughts. (Compare: It's also an opportunity to explore the neuroscience of decapitation.) I don't know that we have such a reservoir of unthinkable thoughts that we refuse to make conscious for fear of the effects of thinking them. That sounds pretty Freudian! But if we do, here's the perfect opportunity, perhaps, to finally admit to yourself that you never really loved Person A or that your life was a failure. Maybe if you thought such things and then remembered those thoughts the next day, bad consequences would follow. But now there can be no bad consequences the next day; and if you reject the narrative view, there are no retrospective bad consequences for earlier events either. So it's your chance, if you can grab it, to drop your self-illusions and glare at the truth.

Writing this now, though, that last view seems too dark. I'd rather die under illusion, I think, than dispel the illusion at the last moment, when it's too late to do anything about it. Maybe that's the better narrative. Or maybe truth is not the most important thing on the deathbed.

[image source]

Thursday, May 07, 2015

List of Philosophical Science Fiction / Speculative Fiction

I've just updated my list of "philosophically interesting" SF -- about 400 total recommendations from 40 contributors, along with brief "pitches" for each work that point toward the work's philosophical interest. All of the contributors are either professional philosophers or professional SF writers with graduate training in philosophy.

The version sorted by author (or director, for movies) is organized so that the most frequently recommended authors appear first on the list. What SF authors are the biggest hits with the philosophy crowd? Now you know! (Or you will know, shortly after you click.)

There's also a version sorted by recommender. If you scan through to find works you love, then you can see which contributors recommended those works. Since those contributors' tastes evidently overlap with yours, you might want to especially check out their other recommendations.

Tuesday, May 05, 2015

Momentary Sage

My newest piece of short speculative fiction, Momentary Sage, has just come out in The Dark. I wanted to do two things with the story.

First: I wanted to envision the aftermath of A Midsummer Night's Dream. In the main plot of Shakespeare's play, Lysander and Hermia want to marry, but Hermia has been promised to Demetrius, whom she loathes. The problem is resolved with a fairy love spell: Demetrius is tricked into loving Helena, to whom he had previously been engaged and who still loves him. All ends happily, with Lysander marrying Hermia and Demetrius marrying Helena. But dear poet Willy, that's too cheap a fix! Demetrius can't just stay permanently tricked into love, happily ever after, can he? Midnight fairy magic always causes more problems than it solves, for that is the unbreakable law of fairies. (Just ask Susanna Clarke.)

Second: I wanted to explore a certain simplistic parody of Buddhism. Demetrius's love spell ends the next day. But his revenge is this: Hermia's child, Sage, is a philosopher baby who believes that non-existence is preferable to suffering. Since he disbelieves in the reality of an extended self, to determine whether life is worth living at any moment, Sage simply weighs up his total joy and suffering at that moment. As soon as his current suffering outweighs his current joy, he attempts suicide, employing a sharp magic tusk he was born with for just that purpose. Hermia and Lysander must thus keep constant watch on Sage, physically pinning him down the moment he starts feeling frustrated or colicky.

Though drawn in starker colors, this is just the predicament confronting all parents whose children would rather cast away future interests than accept a little short-term suffering. Is there a rational argument that can convince someone to value the future, if they don't already? Sage and Lysander have a go at it, but Sage always wins. He is the better philosopher.

It's a piece of dark fantasy, verging on horror -- so if you don't enjoy that genre, stand warned.

[image source]

Wednesday, April 29, 2015

Duplicating the Universe

I've been thinking about two forms of duplication. One is duplication of the entire universe from beginning to end, as envisioned in Nietzsche's eternal return (cf. Poincare's recurrence theorem on a grand scale). The other is duplication within an eternal (or very long) individual life (goldfish-pool immortality). In both cases, I find myself torn among four different evaluative perspectives.

For color, imagine a god watching our universe from Big Bang to heat death. At the end, this god says, "In total, that was good. Replay!" Or imagine an immortal life in which you loop repeatedly (without remembering) through the same pleasures over and over.

Consider four ways of thinking about the value of duplication:

1. The summative view: Duplicating a good thing doubles the world's goodness, all else being equal; and in particular duplicating the universe doubles the total sum of goodness. There's twice as much total happiness overall, for example. Although Nietzsche rejected the ethics of happiness-summing, something in the general direction of the summative view seems to be implicit in his suggestion that if we knew that the universe repeats infinitely, that would add infinite weight to every decision.

2. The indifference view: Repetition adds no value or disvalue, if it is a true repetition (no memory, no development, no audience-god watching saying "oh, I remember this... here comes the good part!"). You might even think, if the duplication is perfect enough, that there aren't even two metaphysically distinct things (Leibniz's identity of indiscernibles).

3. The diminishing returns view: A second run-through is good, but it doesn't double the goodness of the first run-through. For example, the total subjectively experienced happiness might be double, but there's something special about being the first person on the (or "a"?) moon, which is something that never happens in the second run -- and likewise something special about being the last episode of Seinfeld (or "Seinfeld"?) and about being the only copy of a Van Gogh painting (or a "Van Gogh" painting?), which the first run loses if a second run is added.

4. The precious uniqueness view: Expanding the last thought from the diminishing returns view, one might think that duplication somehow cheapens both runs, and that it's better to do things exactly once and be done.

Which of these four views is the best way of thinking about cosmic value (or the value of an extended life)?

You might think that this kind of question isn't amenable to rational argumentation -- that there is no discoverable fact of the matter about whether doubling is better. And maybe that's right. But consider this: Universe A is just like our universe. Universe B is just like our universe, but life on Earth never advances past microbial levels of complexity. If you think Universe A is overall better, or more creation-worthy (or, if you're enough of a pessimist, overall worse) than Universe B, then you think there are facts about the relative value of universes -- in which case, plausibly, there should also be some fact about whether a duplicative universe is a lot better, a little better, the same, or worse than a single-run universe. Yes?

There is, I think, at least a chance that this question, or a relative of it, will become a question of practical ethics in the future -- if we ever become "gods" who create universes of genuinely conscious people running inside of simulated environments (as I discuss here and here), or if we ever have the chance to "upload" into paradises of repetitive bliss.

[image source]

Monday, April 27, 2015

How to Make Van Gogh's "Starry Night" Undulate

Not sure of the original source of this one (maybe notbecauseitsironic on Reddit?).

First, look at the center of the image below for about 30 seconds.

[animated image: look at the center for 30 seconds, then watch Van Gogh's "Starry Night" come to life]

Then look at Van Gogh's "The Starry Night".

The technique also achieves interesting results when applied to Kinkade:

[HT Mariano Aski]