Wednesday, September 12, 2018

One-Point-Five Cheers for a Hugo Award for a TV Show about Ethicists’ Moral Expertise

[cross-posted at Kittywumpus]

When The Good Place episode “The Trolley Problem” won one of science fiction’s most prestigious awards, the Hugo, in the category of best dramatic presentation, short form, I celebrated. I celebrated not because I loved the episode (in fact, I had so far only seen a couple of The Good Place’s earlier episodes) but because, as a philosophy professor aiming to build bridges between academic philosophy and popular science fiction, the awarding of a Hugo to a show starring a professor of philosophy discussing a famous philosophical problem seemed to confirm that science fiction fans see some of the same synergies I see between science fiction and philosophy.

I do think the synergies are there and that the fans see and value them – as also revealed by the enduring popularity of The Matrix, and by Westworld, Her, and Black Mirror, among others – but “The Trolley Problem”, considered as a free-standing episode, fumbles the job. (Below, I will suggest a twist by which The Good Place could redeem itself in later episodes.)

Yeah, I’m going to be fussy when maybe I should just cheer and praise. And I’m going to take the episode more philosophically seriously than maybe I should, treating it as not just light humor. But taking good science fiction philosophically seriously is important to me – and that means engaging critically. So here we go.

The Philosophical Trolley Problem

The trolley problem – the classic academic philosophy version of the trolley problem – concerns a pair of scenarios.

In one scenario, the Switch case, you are standing beside a railroad track watching a runaway railcar (or “trolley”) headed toward five people it will surely kill if you do nothing. You are standing by a switch, however, and you can flip the switch to divert the trolley onto a side track, saving the five people. Unfortunately, there is one person on the side track who will be killed if you divert the trolley. Question: Should you flip the switch?

In another scenario, the Push case, you are standing on a footbridge when you see the runaway railcar headed toward the five people. In this case, there is no switch. You do, however, happen to be standing beside a hiker with a heavy backpack, who you could push off the bridge into the path of the trolley, which will then grind to a halt on his body, killing him and saving the five. (You are too light to stop the trolley with your own body.) He is leaning over the railing, heedless of you, so you could just push him over. Question: Should you push the hiker?

The interesting thing about these problems is that most people say it’s okay to flip the switch in Switch but not okay to push the hiker in Push, despite the fact that in both cases you appear to be killing one person to save five. Is there really a meaningful difference between the cases? If so, what is it? Or are our ordinary intuitions about one or the other case wrong?

It’s a lovely puzzle, much, much debated in academic philosophy, often with intricate variations on the cases. (Here’s one of my papers about it.)

The Problem with “The Trolley Problem”

“The Trolley Problem” episode nicely sets up some basic trolley scenarios, adding also a medical case of killing one to save five (an involuntary organ donor). The philosophy professor character, Chidi, is teaching the material to the other characters.

Spoilers coming.

The episode stumbles by trying to do two conflicting things.

First, it seizes on the trope of the philosophy professor who can’t put his theories into practice. The demon Michael sets up a simulated trolley, headed toward five victims, with Chidi at the helm. Chidi is called on to make a fast decision. He hesitates, agonizing, and crashes into the five. Michael reruns the scenario with several variations, and it’s clear that Chidi, faced with a practical decision requiring swift action, can’t actually figure out what’s best. (However, Chidi is clear that he wouldn’t cut up a healthy patient in an involuntary organ donor case.)

Second, incompatibly, the episode wants to affirm Chidi’s moral expertise. Michael, the demon who enjoys torturing humans, can’t seem to take Chidi’s philosophy lessons seriously, despite Chidi’s great knowledge of ethics. Michael tries to win Chidi’s favor by giving him a previously unseen notebook of Kant’s, but Chidi, with integrity that I suppose the viewer is expected to find admirable, casts the notebook aside, seeing it as a bribe. What Chidi really wants is for Michael to recognize his moral expertise. At the climax of the episode, Michael seems to do just this, saying:

Oh, Chidi, I am so sorry. I didn’t understand human ethics, and you do. And it made me feel insecure, and I lashed out. And I really need your help because I feel so lost and vulnerable.

It’s unclear from within the episode whether we are supposed to regard Michael as sincere. Maybe not. Regardless, the viewer is invited to think that it’s what Michael should say, what his attitude should be – and Chidi accepts the apology.

But this resolution hardly fits with Chidi’s failure in actual ethical decision making in the moment (a vice he also reveals in other episodes). Chidi has abstract, theoretical knowledge about ethical quandaries such as the trolley problem, and he is in some ways the most morally admirable of the lead characters, but his failure in vividly simulated trolley cases casts his practical ethical expertise into doubt. Nothing in the episode satisfactorily resolves that practical challenge to Chidi’s expertise, pro or con.

Ethical Expertise?

Now, as it happens, I am the world’s leading expert on the ethical behavior of professional ethicists. (Yes, really. Admittedly, the competition is limited.)

The thing that emerges most clearly from my and others’ work on this topic, and which is anyway pretty evident if you spend much time around professional ethicists, is that ethicists, on average, behave more or less similarly to other people of similar social background – not especially better, not especially worse. From the fact that Chidi is a professor of ethics, nothing in particular follows about his moral behavior. Often, indeed, expertise in philosophical ethics appears to become expertise in constructing post-hoc intellectual rationales for what you were inclined to do anyway.

I hope you will agree with me about the following, concerning the philosophy of philosophy: Real ethical understanding is not a matter of what words you speak in classroom moments. It’s a matter of what you choose and what you do habitually, regardless of whether you can tell your friends a handsome story about it, grounded in your knowledge of Kant. It’s not clear that Chidi does have especially good ethical understanding in this practical sense. Moreover, to the extent Chidi does have some such practical ethical understanding, as a somewhat morally admirable person, it is not in virtue of his knowledge of Kant.

Michael should not be so deferential to Chidi’s expertise, and especially he should not be deferential on the basis of Chidi’s training as a philosopher. If, over the seasons, the characters improve morally, it is, or should be, because they learn from the practical situations they find themselves in, not because of Chidi’s theoretical lessons.

How to Partly Redeem “The Trolley Problem”

Thus, the episode, as a stand-alone work, is flawed both in plot (the resolution at the climax failing to answer the problem posed by Chidi’s earlier practical indecisiveness) and in philosophy (being too deferential to the expertise of theoretical ethicists, in contrast with the episode’s implicit criticism of the practical, on-the-trolley value of Chidi’s theoretical ethics).

When the whole multi-season arc of The Good Place finally resolves, here’s what I hope happens, which in my judgment would partly redeem “The Trolley Problem”: Michael turns out, all along, to have been the most ethically insightful character, becoming Chidi’s teacher rather than the other way around.

[image source]

Thursday, September 06, 2018

Inflate and Explode

Here's a way to deny the existence of things of Type X. Assume that things of Type X must have Property A, and then argue that nothing has Property A.

If that assumption is wrong -- if things of Type X needn't have Property A -- then you've given what I'll pejoratively call an inflate-and-explode argument. This is what I think is going on in eliminativism and "illusionism" about (phenomenal) consciousness. The eliminativist or illusionist wrongly treats one or another dubious property as essential to "consciousness" (or "qualia" or "what-it's-like-ness" or...), argues perhaps rightly that nothing in fact has that dubious property, and then falsely concludes that consciousness does not exist or is an illusion.

I am motivated to write this post in part due to influential recent work by Keith Frankish and Jay Garfield, who I think make this mistake.

-----------------------------------------

Some earlier examples of the inflate-and-explode strategy include:

Paul Feyerabend (1965) denies that mental processes of any sort exist. He does so on the grounds that "mental processes", understood in the ordinary sense, are necessarily nonmaterial, and only material things exist.

Patricia Churchland (1983) argues that the concept of consciousness may "fall apart" or be rendered obsolete (or at least require "transmutation") because the idea of consciousness is deeply, perhaps inseparably, connected with false empirical views about the transparency of our mental lives and the centrality of linguistic expression.

Daniel Dennett (1991) argues that "qualia" do not exist, on the grounds that qualia are supposed by their nature to be ineffable and irreducible to scientifically discoverable mental mechanisms.

Unfortunately, philosophical enthusiasts for the importance of conscious experience tend to set themselves up for the inflate-and-explode move, making Feyerabend's, Churchland's, and Dennett's criticisms understandable.

The problem on the enthusiasts' side, as I see it, is that they tend to want to do two things simultaneously:

(1.) They want to use the word "consciousness" or "phenomenology" or "qualia" or whatever to refer to that undeniable stream of experience that we all have.

(2.) In characterizing that stream, or for the sake of some other philosophical project, they typically make some dubious assertions about its nature. They might claim that we know it infallibly well, or that it forms the basis of our understanding of the outside world, or that it's irreducible to merely functional or physical processes, or....

Now if the additional claims that the enthusiasts make in (2) were correct, the double purpose would be approximately harmless. However, I'm inclined to think that these types of claims are generally not correct, or at least are quite legitimately disputable. Thus, the enthusiasts unfortunately invite inflate-and-explode. They invite critics to think that those dubious claims are essential to the existence of consciousness in the intended sense, such that if those dubious claims prove false, that's sufficient to show that consciousness doesn't exist.

The reason I think that Feyerabend, Churchland, and Dennett are inflating the target, rather than just correctly interpreting the target, is that I believe the enthusiasts would much more readily abandon the dubious claims, if required to do so by force of argument, than they would deny the existence of consciousness. Those claims aren't really ineliminably, foundationally important to their concept of consciousness. It's not like the relation between magical powers and witches on some medieval European conceptions of witches, such that if magical powers were shown not to exist, the right conclusion would be that witches don't exist. Even if we must jettison thoughts of infallibility or immateriality, consciousness in our communally shared sense of the term still exists. The core conception of phenomenal consciousness in philosophy of mind is, I think or suspect or at least hope, the conception of the stream of experience that it is almost impossible to deny the existence of -- not that stream-of-experience-plus-such-and-such-a-dubious-property.

-----------------------------------------

Frankish's and Garfield's more recent illusionist arguments, as I see them, employ the same mistaken inflate-and-explode strategy. Keith Frankish (2016) argues that phenomenal consciousness is an "illusion" because there are no phenomenal properties that are "private", ineffable, or irreducible to physical or functional processes. Jay Garfield (2015) denies the existence of phenomenal consciousness on the broadly Buddhist grounds that there is no "subject" of experience of the sort required and that we don't have the kind of infallibility about experience that friends of phenomenal consciousness assume.

Now it is true that many recent philosophers think that consciousness involves privacy, ineffability, irreducibility, infallibility, or a subject of experience of the sort not countenanced by (some) Buddhists; and maybe they are wrong to think so. On these matters, Frankish's and Garfield's (and Feyerabend's and Churchland's and Dennett's) criticisms have substantial merit. But it does not follow that consciousness is a mere illusion or does not exist. We can, and I think normally do, conceptualize consciousness more innocently. We need not commit to such dubious theses; our shared conception can survive without them.

To avoid commitment to dubious theses, we can and do define consciousness primarily by example. We gesture, so to speak, toward our sense experiences, our imagery experiences, our vividly felt emotions, our inner speech. We notice that there is something extremely obvious that all of these examples vividly share. Consciousness is that obviously shared thing. Maybe it's reducible; maybe not. Maybe there's a "subject" in a Cartesian sense; maybe not. Why commit on such matters, right out of the gate? Keep it theoretically innocent! Consciousness, in this innocent sense, is almost undeniably real. (I say "almost" because the clever philosopher can find a way to deny anything.)

Now admittedly, this sort of theoretically innocent definition by example is not quite as simple as I've just portrayed it. For a more careful attempt see Schwitzgebel 2016.

-----------------------------------------

I've tried this argument on both Frankish and Garfield, in critical commentaries (contra Frankish; contra Garfield). They remain unconvinced. (Well, this is philosophy!) Let me summarize their replies and share my reaction.

Frankish says that he agrees that consciousness, defined innocently by example as I have done, does indeed exist. He graciously allows that I have executed the important task of identifying a "neutral explanandum" for theories of consciousness that both realists and illusionists can accept (p. 227). However, Frankish also asserts that my definition is "not substantive" "in the substantive sense created by the phenomenality language game" (ibid.), and thus he feels licensed to continue to embrace illusionism about phenomenal consciousness.

I remain unsure why my definition by example is insufficiently substantive. Surely some definitions by example are substantive, or substantive enough. For instance, I might define "furniture" by reference to a diversity of positive and negative examples. That seems to pick out a substantive target of things that exist, and done well, it's good enough to let us start counting pieces of furniture (maybe with some disputable cases), evaluating the quality and function of different types of furniture, etc. Why wouldn't definition-by-example of consciousness work similarly? What is missing?

Garfield responds differently, doubling down, as I see it, on the inflation move:

I argue that if by 'qualitative states' we mean states that are the objects of immediate awareness, the foundation of our empirical knowledge, inner states that we introspect, with qualitative properties that are properties of those states and not of the objects we perceive, there are no such states (Garfield 2018).

Whoa! I don't think I meant all that! My whole aim in definition by example is to avoid such commitments.

Maybe Garfield takes himself to be denying the existence only of properties that most 21st century Anglophone philosophers don't actually endorse? No, I don't think so. It is clear from context that in denying the existence of qualitative properties, Garfield takes himself to be in conflict with the mainstream view in philosophy of mind, the view of people like me who accept the existence of phenomenal consciousness. But I don't see why Nagel, Block, Searle, Chalmers, Strawson, Carruthers, Kriegel, Siegel, Siewert, Thompson, etc. need to be committed to the dubious package of views Garfield lists in the blockquote above, simply by virtue of accepting the existence of consciousness. Of course they may also make other, further claims about consciousness, besides merely asserting that it exists, and those further claims might commit some of them to the dubious theses that Garfield wisely rejects.

-----------------------------------------

[image source]

Thursday, August 30, 2018

Rebecca Kukla on Diversifying Philosophy

I'm on a mission to help diversify philosophy journals. The journals that are seen as elite in philosophy (but not only them) tend to draw on a somewhat narrow range of authors, addressing a somewhat narrow range of topics, using a somewhat narrow range of tools. It's not as bad as it could be, and not as bad (I think) as it once was, but there is a long way to go.

Philosophy is the broadest of all disciplines, with at least a bird's-eye view of everything important. For all X, there's a philosophy of X. I like to think that my discipline could become the broadest-minded too, welcoming of all methods and viewpoints and cultural backgrounds.

Alarmingly, elite Anglophone philosophy journals are even more demographically narrow than the famously demographically narrow philosophy departments of the large Anglophone countries. For example, only about 13% of authors in elite Anglophone journals are women, less than 1% are Black, and only 3% of citations are to books or articles originally written in a language other than English.

At the Pacific Division meeting of the American Philosophical Association last spring, Nicole Hassoun, Sherri Conklin, and I organized a session on Diversity in Philosophy Journals, in which over 20 journal editors participated, as well as seven experts on the demographics of philosophy, and a large, engaged audience. Following up on that session, we recruited five of those journal editors to write guest posts for the Blog of the APA, concerning their experiences with trying to improve the diversity of their journals.

After a brief introductory piece last week by Nicole Hassoun, Subrena Smith, and me, the first editor's post is finally up, and it's terrific! Rebecca Kukla describes the editorial policies she has used to substantially expand the diversity of contributors and viewpoints in the Kennedy Institute of Ethics Journal. As always, Kukla is vivid, practical, and bold.

I hope that you will read her post now!

Still to come over the next four weeks: Stephen Hetherington from the Australasian Journal of Philosophy, Lucy O'Brien from Mind, Purushottama Bilimoria from Sophia, and Sven Ove Hansson from Theoria.

[image from the Blog of the APA]

Thursday, August 23, 2018

The Four-Implicature Theory of Fortune Cookies

(your guide to properly understanding the dire messages from Panda Express)

Fortune cookies explicitly state the good and silently pass over the bad. In this way, they are like letters of recommendation. The wise reader understands the Gricean implicatures.

Gricean implicature involves implying one thing by saying something else, typically exploiting the hearer's or reader's knowledge of the context and of the norms of cooperative communication. Probably the most famous example, from Grice's classic "Logic and Conversation" (1967), is this:

A is writing a testimonial about a pupil who is a candidate for a philosophy job, and his letter reads as follows: 'Dear Sir, Mr. X's command of English is excellent, and his attendance at tutorials has been regular. Yours, etc.'

Although A does not explicitly say that Mr. X is an unimpressive student, the letter implicates it. For if Mr. X were an impressive student, the letter writer, as a cooperative conversation partner, would surely have said that. The reader knows that A knows that letters of recommendation should praise the quality of students who deserve academic praise. A thereby intentionally communicates to the reader that in his view Mr. X does not deserve academic praise. The best that can be said about X concerns his attendance and command of English.

With this in mind, consider these two principles governing the proper interpretation of fortune cookies:

(1.) Fortune cookies, like letters of recommendation, (a.) say only good things, and (b.) say the best that they can about those things.

(2.) All fortune cookies address the following four topics: health, success, social relationships, and happiness.

When a fortune cookie silently omits any of the four topics listed in Principle 2, it implicates that the news on that topic is bad. Furthermore, when a fortune cookie says something limited about health, success, social relationships, or happiness, it implicates that nothing better can be said. This is the Four-Implicature Theory of Fortune Cookies.

Consider, for example, my most recent fortune: "You have the ability to overcome obstacles on the way to success."

What a disastrous fortune! Although it may seem good to the naive reader -- like saying of a philosophy student that he speaks good English and attends regularly -- properly understood, the implicatures are catastrophic. Since only success is mentioned, we must infer that it is passing silently over bad news concerning my health, happiness, and social relationships. Worse, the cookie tells me only that I have the ability to overcome obstacles, not that I will overcome those obstacles. By Principle 1b, the fortune would have said that I will overcome those obstacles if in fact I will. It follows that I will not in fact overcome. Disaster on all four fronts!
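
For the algorithmically inclined, here are the theory's interpretive rules as a minimal Python sketch -- my own illustration, with hypothetical function and data-structure names, not anything official:

```python
# Toy implementation of the Four-Implicature Theory (illustrative only).
# A fortune is modeled as a dict mapping each topic it mentions to what it says.

TOPICS = {"health", "success", "social relationships", "happiness"}

def interpret(fortune):
    """Return a fortune's implicatures, per Principles 1 and 2."""
    implicatures = []
    for topic in sorted(TOPICS - fortune.keys()):
        # Principles 2 and 1a: silence on one of the four topics implicates bad news.
        implicatures.append(f"{topic}: bad news, passed over in silence")
    for topic, claim in fortune.items():
        # Principle 1b: what is said is the best that can truly be said.
        implicatures.append(f"{topic}: nothing better than '{claim}' is true")
    return implicatures

# My recent fortune mentions only success, and only an ability:
for line in interpret({"success": "ability to overcome obstacles"}):
    print(line)
```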

[a dire fortune from Panda Express]

Let's try another fortune: "You are kind-hearted and hospitable, cheerful and well-liked." This fortune concerns both social relationships and happiness, two of the four topics that all cookies address. We can therefore infer that the recipient will suffer ill-health and poverty. Concerning happiness, the news is good: The recipient is cheerful! However, the implicature concerning social relationships is mixed: If the best that can be said is that the recipient is kind, hospitable, and well-liked, and not that she finds love, or that people admire her, or that she has other such social goods, the implicature is that she is a bit of a doormat. To the wise reader of cookies, the message is clear: Other people appreciate how cheerful the recipient remains as they take unfair advantage of her kind-hearted hospitality.

I leave the fortunes below as an exercise for the reader.

ETA Aug 24:

OMG, today's fortune is even worse!

[printable fortune cookie sheet from Red Castle]

Thursday, August 16, 2018

To Reduce the Risk of Moral Catastrophes, Should Society Hire Lots of Philosophers?

In June, I wrote a post arguing that future generations might find our generation especially morally loathsome, even if we don't ourselves feel like we are morally that bad. (By "we" I mean typical highly educated, middle-class people in Western democracies.) We might be committing morally grievous wrongs -- atrocities on par with the wrong that we now see in race-based slavery or the Holocaust or bloody wars of conquest -- without (most of us) recognizing how morally terrible we're being.

In Facebook discussion, Kian MW pointed me to a fascinating article by Evan G. Williams, which makes a similar point and adds the further thought, bound to be attractive to many philosophers, that the proper response to such a concern is to hire lots of philosophers.

Okay, hiring lots of philosophers isn't the only remedy Williams suggests, and he doesn't phrase his recommendation in quite that way. What he says is that we need to dedicate substantial societal resources to (1) identifying our moral wrongdoing and to (2) creating social structures to implement major changes in light of those moral discoveries. Identifying our moral wrongdoing will require progress, Williams says, both in moral theory and in related applied fields. (For example, progress in animal ethics requires progress both in moral theory and in relevant parts of biology.) Williams' call for dedicating substantial resources toward making progress in moral theory seems like a call for society to hire many more philosophers, though I suppose there are a variety of ways that he could disavow that implication if he cared to do so.

The annual U.S. military budget is about $700 billion. Suppose that President Trump and his allies in Congress, inspired by Williams' article, decided to divert 2% of U.S military spending toward identifying our society's moral wrongdoing, with half of that 2% going to ethicists and the other half to other relevant disciplines. Assuming that the annual cost of employing a philosopher is $150,000 (about half salary, about half benefits and indirect costs), the resulting $7 billion could hire about 50,000 ethicists.
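
If you'd like to check the arithmetic, here's the back-of-the-envelope calculation as a quick Python sketch (all figures are the rough assumptions from the paragraph above):

```python
# Back-of-the-envelope: bombers into ethicists.
military_budget = 700e9             # annual U.S. military budget, in dollars
diverted = 0.02 * military_budget   # 2% diverted: $14 billion
to_ethicists = diverted / 2         # half to ethicists: $7 billion
cost_per_ethicist = 150_000         # salary plus benefits and indirect costs

print(f"{to_ethicists / cost_per_ethicist:,.0f} new ethicists")
# 46,667 new ethicists -- call it roughly 50,000
```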

[With 50,000 more ethicists, these empty chairs could be filled!]

Two percent of the military budget seems like a small expenditure to substantially reduce the risk that we unwittingly perpetrate the moral equivalent of institutionalized slavery or the Holocaust, don't you think? A B-2 bomber costs about $1-$2 billion. The U.S. government might want to consider a few bomber-for-philosopher swaps.

I write this partly in jest of course, but also partly seriously. If society invested more in moral philosophy -- and it needn't be a whole lot more, compared to the size of military budgets -- and if society took the results of that investment seriously, giving its philosophers prestige, attention, and policy influence, we might be morally far better off as a people.

We might. But I also think about the ancient Athenians, the ancient Chinese, and the early 20th-century Germans. Despite the flourishing of philosophy in these times and places, the cultures did not appear to avoid moral catastrophe: The ancient Athenians were slave-owners who engaged in military conquest and genocide (perhaps even more than their neighbors, if we're grading on a curve), the flourishing of philosophy in ancient China coincided with the moral catastrophe of the period of the Warring States, and the Germans perpetrated the Holocaust and helped initiate World War II (with some of the greatest philosophers, including Heidegger and Frege, on the nationalistic, anti-Semitic political right).

Now maybe these societies would have produced even worse moral catastrophes if philosophers had not also been flourishing in them, but I see no particular reason to think so. If there's a correlation between the flourishing of philosophy and the perpetration of social evil, the relationship appears to be, if anything, positive. This observation fits with my general concerns about the not-very-moral behavior of professional ethicists and philosophers' apparent skill at post-hoc rationalization.

I'm not sure how skeptical to be. I hesitate to suggest that a massive infusion of social capital into philosophical ethics couldn't have a large positive impact on the moral choices we as a society make. It might be truly awesome and transformative, if done in the right way. But what would be the right way?

[photo credit: Bryan Van Norden]

Tuesday, August 07, 2018

Top Science Fiction and Fantasy Magazines 2018

In 2014, as a beginning writer of science fiction or speculative fiction, with no idea what magazines were well regarded in the industry, I decided to compile a ranked list of magazines based on awards and "best of" placements in the previous ten years. Since people seemed to find it useful or interesting, I've been updating it annually. Below is my list for 2018.

Method and Caveats:

(1.) Only magazines are included (online or in print), not anthologies or standalones.

(2.) I gave each magazine one point for each story nominated for a Hugo, Nebula, Eugie, or World Fantasy Award in the past ten years; one point for each story appearance in any of the Dozois, Horton, Strahan, Clarke, or Adams "Year's Best" anthologies; and half a point for each story appearing in the short story or novelette category of the annual Locus Recommended list. (For this scoring rule in code form, see the sketch after this list.)

(3.) I am not attempting to include the horror / dark fantasy genre, except as it appears incidentally on the list.

(4.) Prose only, not poetry.

(5.) I'm not attempting to correct for frequency of publication or length of table of contents.

(6.) I'm also not correcting for a magazine's only having published during part of the ten-year period. Reputations of defunct magazines slowly fade, and sometimes they are restarted. Reputations of new magazines take time to build.

(7.) Lists of this sort do tend to reinforce the prestige hierarchy. I have mixed feelings about that. But since the prestige hierarchy is socially real, I think it's in people's best interest -- especially the best interest of outsiders and newcomers -- if it is common knowledge.

(8.) I take the list down to 1.5 points.

(9.) I welcome corrections.
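
Here is the scoring rule from item (2) as a small Python sketch -- purely illustrative, with hypothetical story records, not the actual tallying I did:

```python
# Schematic of the point system in item (2) above (illustration only).

AWARD_POINTS = 1.0       # Hugo, Nebula, Eugie, or World Fantasy nomination
YEARS_BEST_POINTS = 1.0  # Dozois, Horton, Strahan, Clarke, or Adams anthology
LOCUS_POINTS = 0.5       # Locus Recommended list, short story or novelette

def score(stories):
    """Total a magazine's points over its stories' records."""
    total = 0.0
    for story in stories:
        total += AWARD_POINTS * story.get("award_noms", 0)
        total += YEARS_BEST_POINTS * story.get("years_best", 0)
        total += LOCUS_POINTS * story.get("locus_recommended", 0)
    return total

# Hypothetical magazine with two qualifying stories:
print(score([
    {"award_noms": 2, "years_best": 1},  # 3.0 points
    {"locus_recommended": 1},            # 0.5 points
]))  # -> 3.5
```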

Results:

1. Asimov's (229.5 points)
2. Fantasy & Science Fiction (162.5)
3. Clarkesworld (151.5)
4. Tor.com (147.5)
5. Lightspeed (101) (started 2010)
6. Subterranean (75) (ceased 2014)
7. Analog (53.5)
8. Strange Horizons (46.5)
9. Interzone (43.5)
10. Uncanny (41.5) (started 2014)
11. Beneath Ceaseless Skies (38)
12. Fantasy Magazine (25.5) (merged into Lightspeed 2012, occasional special issues thereafter)
13. Apex (19.5)
14. Nightmare (13.5) (started 2012)
15. Postscripts (11.5) (ceased short fiction in 2014)
16. The New Yorker (8)
17. Realms of Fantasy (7.5) (ceased 2011)
18. Black Static (7)
19. McSweeney's (6)
20t. Electric Velocipede (5.5) (ceased 2013)
20t. Intergalactic Medicine Show (5.5)
20t. Sirenia Digest (5.5)
23t. Conjunctions (5)
23t. Jim Baen's Universe (5) (ceased 2010)
25t. Omni (4.5) (classic science/SF magazine, restarted 2017)
25t. The Dark (4.5) (started 2013)
25t. Tin House (4.5)
28. Helix SF (4) (ceased 2008)
29t. Cosmos (3)
29t. GigaNotoSaurus (3) (started 2010)
29t. Shimmer (3)
29t. Terraform (3) (started 2014)
33t. Beloit Fiction Journal (2.5)
33t. Black Gate (2.5)
33t. Buzzfeed (2.5)
33t. Harper's (2.5)
33t. Lady Churchill's Rosebud Wristlet (2.5)
33t. Lone Star Stories (2.5) (ceased 2009)
33t. Matter (2.5) (started 2011)
33t. Slate (2.5)
33t. Weird Tales (2.5) (ceased 2014)
42t. Boston Review (2)
42t. Fireside (2) (started 2012)
42t. Mothership Zeta (2) (started 2015)
45t. Abyss & Apex (1.5)
45t. Daily Science Fiction (1.5) (started 2010)
45t. e-flux journal (1.5)
45t. Flurb (1.5) (ceased 2012)
45t. MIT Technology Review (1.5)
--------------------------------------------------

Comments:

(1.) The New Yorker, Tin House, McSweeney's, Conjunctions, Harper's, Beloit Fiction Journal, and Boston Review are literary magazines that occasionally publish science fiction or fantasy. Cosmos, Slate, Buzzfeed, and MIT Technology Review are popular magazines that have published a little bit of science fiction on the side. e-flux is a wide-ranging arts journal. The remaining magazines focus on the F/SF genre.

(2.) It's also interesting to consider a three-year window. Here are those results, down to six points:

1. Clarkesworld (74)
2. Tor.com (69.5)
3. Asimov's (65)
4. Lightspeed (56.5)
5. Uncanny (41.5)
6. F&SF (39)
7. Beneath Ceaseless Skies (23)
8. Analog (20)
9. Strange Horizons (14)
10. Nightmare (12.5)
11. Interzone (9.5)
12. Apex (6.5)

(3.) Left out of these numbers are some terrific podcast venues such as the Escape Artists' podcasts (Escape Pod, Podcastle, Pseudopod, and Cast of Wonders), Drabblecast, and StarShipSofa. None of these qualify for my list by existing criteria, but podcasts are also important venues.

(4.) Check out Nelson Kingfisher's recent analysis of acceptance rates and response times for most of the magazines above.

(5.) Other lists: The SFWA qualifying markets list is a list of "pro" science fiction and fantasy venues based on pay rates and track records of strong circulation. Ralan.com is a regularly updated list of markets, divided into categories based on pay rate.

[image source; admittedly, it's not the latest issue!]

Monday, July 30, 2018

On What We Tell Pollsters

Barry Lam’s podcast Hi-Phi Nation has a new episode on “information silos” and what we tell pollsters. Partway through the episode, I am briefly interviewed about the nature of belief.

Lam is always fun, and the episode has a few twists you might not expect. One theme throughout the episode is a critique of the view generally accepted as implicit background in polling and in popular reports of poll results: that people tell pollsters what they actually believe. Lam explores an empirical challenge to this and a more philosophical challenge.

Empirical challenge: People who feel uncertain might answer by "cheerleading" for their side: Republicans, for example, simply saying whatever they think will make Trump look good, Democrats saying whatever they think makes Trump look bad. If this is going on, then when the incentives are changed (for example, by paying respondents for correct answers, with a smaller payment for admitting that they don't know), they might instead reveal their true opinion. Even if they are not uncertain, they might simply lie to the pollster, saying what they plainly know to be false, to help or express support for their side.

A more philosophical challenge explores the question of what it is, really, to have a political, or politically loaded, belief. On some questions, there might not be a single straightforward fact about what you believe, hidden in a “secret compartment”, which you choose either to reveal or not reveal to the pollster. On climate change, or racial equality, or on what accommodations society owes to people with disabilities, you might be inclined to answer one way in one context or to one audience, and in quite a different way in another context or to another audience; you might wager thus-and-so when X is at stake, but quite differently when Y is at stake; your spontaneous reactions and your more guarded reactions might splinter in different directions; and so on. Among all these various thoughts and reactions, there needn’t be some privileged set that reflects your true belief while others are somehow misleading or inauthentic.

That, at least, is my view of belief. If you are sufficiently splintered, fragmented, or in-betweenish in your dispositional profile, then what you tell pollsters, even sincerely, will be only one element of a complicated picture. If what you say is misaligned with some other aspects of your speech and behavior, you might be merely cheerleading or lying, but you needn’t necessarily be. You might be answering as sincerely as you can, with the fragment of you that is called forth at the moment.

Full episode here.

[image source]

Tuesday, July 24, 2018

My New Book in Draft

Working title:

Jerks, Zombie Robots, and Other Philosophical Misadventures

[former working title: How to Be a Crazy Philosopher]

The book is composed of several dozen blog posts and popular articles, on philosophy, psychology, culture, and technology, updated and revised, selected from the eleven hundred I published between 2006 and 2018.

The full draft is available here.

I will be revising it for the rest of the summer and into the fall, so feedback is appreciated! In addition to the usual content-level feedback, I also welcome feedback on: (a) alternative possible titles, (b) posts or articles that I should have included but didn't, (c) posts or articles that aren't up to the quality of the others and should be cut.

The book is divided into 61 chapters in 5 parts. Every chapter is free-standing. No need to read them in order.

[a haphazard sample of the stacks of books in my office, consulted during revision]

Table of Contents:

Part One: Moral Psychology

1. A Theory of Jerks
2. Forgetting as an Unwitting Confession of Your Values
3. The Happy Coincidence Defense and The-Most-I-Can-Do Sweet Spot
4. Cheeseburger Ethics (or How Often Do Ethicists Call Their Mothers?)
5. On Not Seeking Pleasure Much
6. How Much Should You Care about How You Feel in Your Dreams?
7. Imagining Yourself in Another’s Shoes vs. Extending Your Love
8. Aiming for Moral Mediocrity
9. A Theory of Hypocrisy
10. On Not Distinguishing Too Finely Among Your Motivations
11. The Mush of Normativity
12. A Moral Dunning-Kruger Effect?
13. The Moral Compass and the Liberal Ideal in Moral Education

Part Two: Technology

14. Should Your Driverless Car Kill You So Others May Live?
15. Cute AI and the ASIMO Problem
16. My Daughter’s Rented Eyes
17. Someday, Your Employer Will Technologically Control Your Moods
18. Cheerfully Suicidal AI Slaves
19. We Have Greater Moral Obligations to Robots Than to (Otherwise Similar) Humans
20. Our Moral Duties to Monsters
21. Our Possible Imminent Divinity
22. Skepticism, Godzilla, and the Artificial Computerized Many-Branching You
23. How to Accidentally Become a Zombie Robot

Part Three: Culture

24. Dreidel: A Seemingly Foolish Game That Contains the Moral World in Miniature
25. Does It Matter If the Passover Story Is Literally True?
26. Memories of My Father
27. Flying Free of the Deathbed, with Technological Help
28. Thoughts on Conjugal Love
29. Knowing What You Love
30. The Epistemic Status of Deathbed Regrets
31. Competing Perspectives on One’s Final, Dying Thought
32. Profanity Inflation, Profanity Migration, and the Paradox of Prohibition (or I Love You, “Fuck”)
33. The Legend of the Leaning Behaviorist
34. What Happens to Democracy When the Experts Can’t Be Both Factual and Balanced?
35. On the Morality of Hypotenuse Walking
36. Birthday Cake and a Chapel

Part Four: Consciousness and Cosmology

37. Possible Psychology of a Matrioshka Brain
38. A Two-Seater Homunculus
39. Is the United States Literally Conscious?
40. Might You Be a Cosmic Freak?
41. Penelope’s Guide to Defeating Time, Space, and Causation
42. Choosing to Be That Fellow Back Then: Voluntarism about Personal Identity
43. How Everything You Do Might Have Huge Cosmic Significance
44. Goldfish-Pool Immortality
45. How Big the Moon Is, According to One Three-Year-Old
46. Tononi’s Exclusion Postulate Would Make Consciousness (Nearly) Irrelevant
47. What’s in People’s Stream of Experience During Philosophy Talks?
48. The Paranoid Jeweler and the Sphere-Eye God
49. The Tyrant’s Headache

Part Five: The Psychology and Sociology of Philosophy

50. Truth, Dare, and Wonder
51. Trusting Your Sense of Fun
52. Why Metaphysics Is Always Bizarre
53. The Philosopher of Hair
54. Kant on Killing Bastards, Masturbation, Organ Donation, Homosexuality, Tyrants, Wives, and Servants
55. Obfuscatory Philosophy as Intellectual Authoritarianism and Cowardice
56. Nazi Philosophers, World War I, and the Grand Wisdom Hypothesis
57. Against Charity in the History of Philosophy
58. Invisible Revisions
59. On Being Good at Seeming Smart
60. Blogging and Philosophical Cognition, or Why Blogging Is the Ideal Form of Philosophy!!! :-)
61. Will Future Generations Find Us Morally Loathsome?

Thursday, July 19, 2018

Think of Your Dissertation as Your Longest Work, Not Your Best Work

You* know how to write a decent seminar paper. You've done it at least a dozen times. So why is it so hard to get going on that dissertation?

[* for values of "you" of approximately 3rd-5th year in a philosophy PhD program]

Your advisor might stink. That's rough. It can definitely slow you down. But that's probably not the main reason.

You've got to teach or TA. Yes, that consumes oodles of time if you're conscientious about it. But I doubt that's the main reason either.

I suspect that the main reason, for most students, is excessive expectations. You want your dissertation to be the best thing you've ever written. You want to finally address a big, ambitious topic. You want work that will wow your advisor and everyone else.

Sure, of course you do! Those are good things to want. The problem is that the wanting interferes with the getting.

It interferes in two ways: by leading you to take responsibility for a larger literature than you can digest in the time allotted, and by encouraging perfectionism.

Against taking on too big a literature:

How do you write a seminar paper? You read the ten or thirty things assigned for the seminar, plus maybe a few other things. You especially master one or two of them, and then you develop your critique or alternative position. Now you've got a fine little criticism, say, of Gendler's view of "alief" in light of a couple of recent review papers on implicit bias. Seminar grade: A. Lovely!

But now you're on your dissertation. Time for a big, ambitious topic. Time to throw yourself deep into something important. You want to defend the existence of moral facts. Or you want to show how it's possible to have infallible introspective knowledge of your own experience. Great! To do this responsibly, what do you need to read? Um...

... the whole entire literature on moral facts or introspection?

That might take a while. Could you possibly do it in a semester or a year (while teaching, with life happening, etc.)? How do you even get started? Meanwhile, the clock is ticking. You feel like you've got to start writing. You can't not write for two and a half years while you read every book and article that has been written about these topics. But neither can you really write in an informed way without having read all of that stuff. So are you supposed to write in an uninformed way? If you're like most of us and you think in part by writing, how are you going to even organize it all in your mind and remember it if you aren't writing?

That's a nasty little pickle.

Here's the way to duck the pickle: Choose a much smaller topic that you really can master in a dedicated couple of months. Not just any topic, of course. One relevant to the big picture you have in mind.

Suppose your big topic is the (in-)fallibility of introspection of conscious experience. You might read everything pro and con in the recent literature on "containment" models of introspection, according to which the judgment I am in conscious state S literally contains conscious state S within it as a part (and is arguably thus infallibly correct). This topic is small enough to master and write about in a semester, if you are already a well-trained graduate student in philosophy of mind who has given it some preliminary thought. Though it's a small literature, really getting it right is an ample task for a big, long chapter. And the topic is centrally related to the big-picture issue of introspective infallibility in the recent literature.

Then, for the next chapter, find something nearby that is similarly small and tractable, to which you can bring some of the insights and tools from your previous chapter. Continuing the example, maybe neo-expressivist views of the nature of introspection.

Repeat a few times, and you'll find you've actually covered a fairly large territory in sum. In the process, you'll have mastered much of the literature that seemed so impossibly large at the start.

Find the tiniest topic you can that is relevant to your big question while also being of significant interest to specialists in the area, and become the absolute bleeding world expert on it. Then bore your advisor with sixty-plus pages of a tediously thorough treatment. This is how to duck the pickle.

Against perfectionism:

If I said, "Michelangelo, next month create the best art you've ever made," could he do it? Not likely!

Don't think of your dissertation as your best work ever. Don't hold your writing to that difficult a standard. Maybe at some point down the road, after a bunch of revision, some favorite piece of it will turn out to be your best work ever. (More on this below.) But having that kind of expectation is not the way to start writing. Reconcile yourself to the likely fact that you're not going to produce the best work of your life in the next few months.

Here's a better way to think of it: Your dissertation will be the longest and most thorough thing you've ever written. That's all. Instead of aiming for brilliant prose, aim for length. Choose that little thing and cover it so exhaustively that no one who reads your chapter on the topic could doubt that you know that tiny corner of academia as well as anyone else in the world.

Send your advisor something long and hairy. It's okay. You're not being graded on prose style. Get it out to your advisor for feedback, and to others you trust, even though it's not the most beautiful of things. You can revise it later! That's the point of getting feedback, right?

Settle. Your target should be: rough, covers the bases (except for that one thing you forgot which you'll put in later), good enough to move on to the next chapter.

After the feedback, you'll see some things that definitely need changing. But unless some radical rethinking is required, don't make those changes yet! Instead, move on. Write the next chapter. Set the feedback aside to return to later, after you've rough drafted your other chapters. In the course of writing those other chapters, you'll probably have insights that influence your thinking about the earlier chapters, so they'll need significant revising for that reason anyway.

Do this several times and you'll find that you have a few hundred pages of long, ugly chapters that take you from the beginning to the end. Then revise. Rewrite the whole thing top to bottom. Now, at the end -- not in the early drafting stage -- is the time to make it... well I won't say perfect, but better. Polished. This is the last thing you do, simultaneously with hitting the job market.

Likely you will find, at the end, once you're done polishing, that your dissertation, or at least your favorite piece of it, is the best thing you've ever written. That's not because any chapter was fantastic in its initial draft, but rather because you've never before in your life spent a few years thinking about a single topic, and as a result you now have that topic down so well that you really see to the core of it. You know everyone else's views, and you can lay out the strengths and weaknesses of those views, and your great command of the material will show in the final, polished version of your work.

Start today:

So if you're sitting there stalled out on your dissertation, take Doctor Eric's three-step remedy:

(1.) Let go of the thought that you are about to produce the best work you've ever written. Lower your ambitions.

(2.) Choose a topic so narrow that you can read everything relevant in short order, and then read that stuff.

(3.) Write out that narrow thing, long and boring and ugly, blow by blow.

At the end, you'll have several dozen long, ugly, boring, thorough pages -- exactly the kind of material from which excellent dissertations are eventually built.

[image source]

Thursday, July 12, 2018

Two Ways of Being a Group Mind: Synchronic vs. Diachronic

Based on last week's post, I am now seeing ads for sunglasses everywhere, as if to say "Welcome, Eric, to the internet hypermind! Did you say SUNGLASSES?!"

Speaking of hyperminds....

I want to distinguish two ways of being a group mind, since I know you care immensely about the cognitive architecture of group minds. (Okay, I'm a dork.) My thought is that the philosophical issues of group consciousness and personal identity play out differently in the two types of case.

Synchronic:

Examples: Star Trek's Borg (mostly), Ann Leckie's ancillaries, highly interconnected nations (according to me), Vernor Vinge's Tines.

Synchronic group minds are probably the default conception. A bunch of independent, or semi-independent, or independent-enough entities remain in constant or constant-enough communication with each other. Their communication is sufficiently rich or sufficiently well structured that it gives rise to group-level mental states in the whole. In the most interesting case, the individual entities are distinct enough to have rich mental lives but also the group is well enough coordinated that it also, simultaneously, has a distinctive, rich mental life above and beyond that of its members.

Diachronic:

Examples: David Brin's Kiln People, Linda Nagata's ghosts, Benjamin Kinney's forks.

In a diachronic group mind, communication is relatively rare but high bandwidth and transformative. Post-communication, the individuals inherit mental states from the others. In the most interesting case, the inheritance process is very different from just listening credulously to someone's testimony; rather it's more like a direct transfer of memories, plans, and opinions, maybe even values or personality. Imagine "forking" into three versions of yourself, going your separate ways for the day, and then at the end of the day merging back into a single individual, integrating the memories and plans of each, and averaging any general changes in value, opinion, or personality. Tomorrow and every day thereafter, you will fork and merge in the same way.

[Cerberus might not be integrated enough to be a good example of a group mind, but I didn't want to attach another darn picture of the Borg.]

Tradeoffs Between Group-Level and Individual-Level Personhood and Autonomy:

As I have described it here, delay between information-transfer episodes is the fundamental difference between these types of group minds: whether the minds are in constant or constant-enough communication, or whether instead they communicate only at long intervals. Obviously, temporal distance admits of degree, but this difference in degree creates structural pressures. If communication is infrequent, its effects have to be radical if it is to give rise to an entity sufficiently integrated to be worth calling a "group mind". If a group of friends meets every day to exchange information and plan the next day's activities, in the ordinary way people sometimes do, I suppose that in some weak sense they have formed a group mind. But they haven't done so in the radical science-fictional sense I'm imagining. For example, if there were five friends who did this, there would still be exactly five persons -- entities with serious rights whose destruction would be worth calling murder. For the emergence of something more metaphysically and morally interesting, the exchange has to be radical enough to challenge the boundaries of personal identity.

Conversely, if communication is constant and its effects are radical, it's not clear that we have a group of individuals in any interesting sense: We might just have a single non-group entity that happens to be spatially scattered (as in my Martian Smartspiders).

In other words, to be a philosophically-interesting group entity there must be some sort of interestingly autonomous mentality both at the individual level and at the group level. Massive transformative communication (as in diachronic merging of memories and values) radically reduces autonomy: If communication is both massively transformative and very frequent, there's no chance for interesting person-like autonomy at the individual level. If communication is neither massively transformative nor very frequent, there's no chance for interesting person-like autonomy at the group level.

Consciousness:

Our intuitive judgments about group-level consciousness are probably pretty crappy (as I've argued here and here). But our general theories about consciousness as they apply to the group level are probably even crappier (as I've argued here and here). At the same time, whether the group as a whole has a stream of conscious experience over and above the consciousness of its individual members seems like a very important question if we're interested in its mentality and whether it deserves moral status as a person. So we're kind of stuck. We'll have to guess.

Plausibly, in the diachronic case there is no stream of consciousness beyond that of the merging individuals. When there's one body at night, there's one stream of consciousness (at most, if it's dreaming). When there are three bodies off doing their thing, there are three streams of consciousness. We might be able to create some problematic boundary cases during the merge, but maybe that's marginal enough to dismiss with a bit of hand waving.

The synchronic case is, I think, more conceptually challenging with respect to consciousness. If we allow that minimally interactive groups do not give rise to group-level consciousness and we also allow that a fully informationally integrated but spatially distributed entity does give rise to consciousness, it seems that we can create a slippery slope from one case to the other by adding more integration and communication (for example here). At some point, if there is enough coherent behavior, self-representation, and information exchange at the group level, most standard functionalist views of consciousness (unless they accept an anti-nesting principle) should allow that each individual member of the group would have a stream of experience and also that there would be a further, different stream of experience at the group level. But it's a tricky question how much integration and information exchange, and with what kind of structural properties, is necessary for group-level consciousness to arise.

Personhood:

One interesting issue that arises is the extent to which an individual's beliefs about what counts as "self-interest" and "death" define the boundaries of their personhood. Consider a diachronic case: You are walking back home after your day out and about town, with a wallet full of money and interesting new information about a job opportunity tomorrow, and you are about to merge back together with the two other entities you forked off from this morning. Is this death? Are "you" going to be gone after the merge, your memories absorbed into some entity who is not you (but who you might care about even more than you care about yourself)? In walking back, are you magnanimously sacrificing your life to give your money and information to the entity who will exist tomorrow? Would it be more in your self-interest to run away and blow your wad on something fun for this current body? Or, instead, will it still be "you" tomorrow, post-merge, with that information and that money? To some extent, in unclear cases of this sort, I think it might depend on how you think and feel about it: It's to some extent up to you whether to conceptualize the merging together as death or not.

A parallel issue might arise with synchronic groups, though my hunch is that it would play out differently. Synchronic groups, as I'm imagining them, don't have identity-threatening splits and merges. The individual members of synchronic groups would seem to have the same types of rights that otherwise similar individuals who aren't members of synchronic group minds would have -- rights depending on (for example, but it's not this simple) their capacity to suffer and think and choose as individuals. They might choose, as individuals, to view the group welfare as much more important than their own welfare (as a soldier might choose to die for the sake of country); but unless there's some real loss of autonomy or consciousness, this doesn't threaten their status as persons or redefine the boundaries of what counts as death.

Related:

Possible Architectures of Group Minds: Perception (May 4, 2016)

Possible Architectures of Group Minds: Memory (Jun 14, 2016)

Group Minds on Ringworld (Oct 24, 2012)

If Materialism Is True, the United States Is Probably Conscious (academic essay in Philosophical Studies, 2015)

Our Moral Duties to Monsters (Mar 8, 2014)

Choosing to Be That Fellow Back Then: Voluntarism about Personal Identity (Aug 20, 2016).

[image source]

Friday, July 06, 2018

How to Create Immensely Valuable New Worlds by Donning Your Sunglasses

[A satisfied REI customer, creating whole gobs of new worlds.]

One world is good, so two is better, right?

(If you think one world is bad and two are worse, just invert the reflections below.)

Here's one way to look at it: Run a universe, or a world, from beginning to end, sum up all the good stuff, subtract all the bad stuff, and note the (hopefully positive) total. Now consider, from your end-of-the-universe God's-eye point of view: Should you launch another world similar to the previous one? Well, of course you should! There would be even more good stuff, and a higher positive total, after that second world has been run. Similar considerations suggest that two good worlds running in parallel would also be better than a single good world.

(To avoid problems with summing infinitudes, let's assume finite worlds with finite value. For some complexities regarding the value or disvalue of repetition specifically see this earlier post.)

Now on some interpretations of quantum mechanics, every time there's a quantum event with different possible outcomes, all of the outcomes occur, each in a different world. Such "many worlds" interpretations often describe the world as "splitting" into two worlds -- one world in which Outcome A occurs and one in which Outcome B occurs. You too, the observer, will split: one copy of you goes into World A, observing Outcome A, and the other goes into World B, observing Outcome B.

In a classic article on the many-worlds interpretation of quantum mechanics, Bryce S. DeWitt writes:

This universe is constantly splitting into a stupendous number of branches, all resulting from the measurementlike interactions between its myriads of components. Moreover, every quantum transition taking place on every star, in every galaxy, in every remote corner of the universe is splitting our local world on earth into myriads of copies of itself.... I still recall vividly the shock I experienced on first encountering this multiworld concept. The idea of 10^100+ slightly imperfect copies of oneself all constantly splitting into further copies, which ultimately become unrecognizable.... (DeWitt 1970, p. 33).

Notice that on DeWitt's portrayal, there are a finite but large number of worlds: 10^100+. Now here's the normative question I want to consider: Is there positive value -- would it be ethically or prudentially or aesthetically good -- to increase the amount of splitting, so that there are more worlds rather than fewer?

On the face of it, it seems that, yes, it would be good if there were more worlds. If each world independently considered is good, then plausibly more worlds is better. Suppose World W has positive value V. Suppose now that it splits into two worlds that are very similar but not identical: W1 with value V1 and W2 with value V2. If we assume V1 ~= V2 ~= V and that the value of worlds is additive, then after the split the whole W1+W2 is approximately twice the value of W. We have doubled the amount of value that the cosmos as a whole contains!
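Spelled out in symbols (notation I'm adding here just for clarity; it encodes nothing beyond the two assumptions just stated):

\[ V_1 \approx V_2 \approx V, \qquad V(W_1 \text{ and } W_2) \;=\; V_1 + V_2 \;\approx\; 2V. \]

The doubling conclusion is exactly as secure as those two assumptions -- which is where the objections below press.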

Now you might object in one of three ways:

(1.) You might reject the whole splitting-worlds interpretation. Fair enough! But then you're not really playing the game. I'm interested in thinking about the normative consequences assuming that the world does split as DeWitt describes.

(2.) You might reject the assumption that V1 ~= V. For example, you might think that after splitting, each world has only approximately half the value that the world before the split had, so that V1 + V2 ~= V. Then no value would be gained by splitting. A tempting thought, perhaps. But why would splitting make a world half as valuable? It's not clear. One consequence of this view is that our world, constantly splitting, would be constantly halving in value, so that its value is plummeting by orders of magnitude every second (I put a rough number on this just after objection 3 below). Hmmm, that seems no less bizarre than the idea that splitting doubles the value of the cosmos. Now if you thought that the other worlds already existed before the split, then you might reject V1 ~= V, since the worlds whose value sums to V1 would be only about half (or whatever fraction) of the worlds whose value sums to V. But the whole idea of splitting worlds is not that the worlds already existed. It's that they are created. So we're talking about whether creating a new valuable world adds value to the cosmos as a whole. Pending a good argument otherwise, it seems like it should.

(3.) You might reject additivity. You might say that although V1 ~= V2 ~= V, you can't simply sum V1 and V2 to get ~= 2V. I can feel the pull of this idea. If the worlds were strictly identical, you might say "well, there's no point in duplicating everything again"! But (a.) the worlds are not strictly identical, and in fact (given chaos theory) they might diverge increasingly over time. And (b.) duplication plausibly adds value when worlds are temporally separated (at the end of the universe, declining to run the whole thing again would mean forgoing the pleasures that the re-run world's future inhabitants would have enjoyed), and side-by-side splitting wouldn't seem to be very normatively different in principle.
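As promised under objection (2), here's the "plummeting" made quantitative, with s and t as placeholders I'm introducing (DeWitt gives no precise split rate): if value halves at each split and the world splits s times per second, then after t seconds each branch retains

\[ V_{\text{branch}}(t) \;=\; \frac{V}{2^{st}}, \qquad \text{while the total over all } 2^{st} \text{ branches is } 2^{st} \cdot \frac{V}{2^{st}} \;=\; V. \]

The cosmos-wide total is tidily conserved, but each branch's value crashes at whatever stupendous rate the world actually splits -- the bizarre-seeming consequence noted above.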

Okay, let's pretend you're convinced. When new worlds arise through quantum mechanical splitting, that's terrific. Whole realms of value are added! The value of the cosmos as a whole approximately doubles. Now, accepting this, is there anything you should do differently?

Consider the following possible principle:

Conservation of Splitting. Over any time interval t, the world will split N times, no matter what you do.

As far as I can see, there's no reason to accept Conservation of Splitting. It seems like you can create situations in which there would be relatively more splitting or relatively less splitting. You can run more quantum mechanical experiments or fewer. You can make more quantum mechanical measurements or fewer. And if you run more experiments and make more measurements, there will be more splitting, and thus more worlds.

For example, that pair of polarized sunglasses you have, sitting in that dark sunglass case in that dark drawer? When a photon passes or fails to pass through a polarized lens, that's a quantum mechanical event -- a chance event, which, on this interpretation, results in a splitting of worlds. In some worlds the photon goes through; in others it does not. You could take those sunglasses out of the drawer. You could go to the beach and wear them in the sun. Many more photons will pass through those lenses! Maybe about 10^18 more photons per second. If each of those photons has an independent quantum chance of passing or not passing through those lenses, splitting the world, then you're creating 2^10^18 new worlds per second just by sitting on the beach -- worlds that would not have been created had the sunglasses remained cased in the drawer. Think of all the value you're adding to the cosmos!
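Just to make the magnitudes vivid, here's a back-of-the-envelope calculation -- a minimal sketch in Python, treating the 10^18 photons-per-second figure as a loose assumption rather than a measured fact. Since 2^(10^18) is far too large to compute directly, the sketch works in logarithms:

    import math

    # Loose assumption: photons passing through a sunglass lens in bright sunlight.
    photons_per_second = 1e18

    # On the splitting interpretation, each photon's pass-or-absorb event doubles
    # the number of worlds, so one second at the beach multiplies the world count
    # by 2 ** photons_per_second. Measure that multiplier's size in log10:
    log10_multiplier = photons_per_second * math.log10(2)

    print(f"log10 of the per-second world multiplier: {log10_multiplier:.2e}")
    # Prints about 3.01e+17: a multiplier whose decimal expansion has roughly
    # 3 x 10^17 digits. (DeWitt's 10^100+ worlds has a mere hundred-odd digits.)

The exact photon count hardly matters: be off by ten orders of magnitude and the conclusion looks just the same.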

Effective altruism is a movement that recommends using reason and evidence to do the most good you can do. Normally, effective altruists recommend doing things like donating money to charities that effectively help to alleviate suffering due to poverty. But the value of saving one life has to be tiny compared to the value of creating 2^10^18 new universes per second! So instead of staying indoors in the shade, writing a check to the Against Malaria Foundation, maybe you'd do better to spend the day at the beach.

Even if you have only a 0.001% credence that the splitting worlds interpretation of quantum mechanics is true, the expected utility of sitting on the beach might far exceed that of donating to poverty relief.
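Continuing the sketch, with frankly made-up numbers (the 0.001% credence from above, one arbitrary unit of value per world, and a deliberately generous benchmark for charity), the expected-utility comparison runs like this:

    import math

    credence = 1e-5                      # 0.001% credence in the splitting interpretation
    photons_per_second = 1e18            # same loose assumption as before
    log10_worlds = photons_per_second * math.log10(2)  # ~3.01e17

    # Expected value of one beach-second, in log10 units:
    # log10(credence * 2**photons_per_second * 1 unit per world)
    log10_ev_beach = math.log10(credence) + log10_worlds

    # Generous benchmark: saving ten billion lives at one unit each.
    log10_ev_charity = math.log10(1e10)

    print(f"beach, log10 expected units: {log10_ev_beach:.2e}")   # ~3.01e+17
    print(f"charity, log10 units:        {log10_ev_charity:.0f}") # 10

The tiny credence subtracts a measly 5 from an exponent of about 3 x 10^17 -- which is why, on these assumptions, the beach wins by an absurd margin.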

I know that it doesn't seem intuitively very plausible that going to the beach is far morally better than donating money to effective, life-saving charities, but consequentialist philosophers are often willing to admit that what's ethically best might not match our intuitions about what's ethically best.

PS: I feel so bad about making this argument that I just donated to Oxfam.

--------------------------------------------------

Related Posts:

Duplicating the Universe (Apr 29, 2015)

Goldfish Pool Immortality (May 30, 2014)

How Robots and Monsters Might Break Human Moral Systems (Feb 3, 2015)

[image source]

Friday, June 29, 2018

Sorry about the Lingering Unapproved Comments!

... It looks like Blogger has stopped giving me notifications of comments to approve. I found a huge backlog, most of which I have approved. Let me see if I can turn notifications back on.

It will take me until next week to catch up on responding to a selection of the comments that have been lingering. Sorry to have neglected you!

Will Future Generations Find Us Especially Morally Loathsome?

Ethical norms change. Although reading Confucius doesn't feel like encountering some wholly bizarre, alien moral system, some ethical ideas do differ dramatically over time and between cultures. Genocide and civilian-slaughtering aggressive warfare are now widely considered to be among the most evil things people can do, yet they appear to be celebrated in the Bible (especially Deuteronomy and Joshua), and we still name children after Alexander "the Great". Many seemingly careful thinkers, including, notoriously, Aristotle and Locke, wrote justifications of slavery. Much of the world has only recently opened its eyes to the historically common oppression of women, homosexuals, low-status workers, people with disabilities, and ethnic minorities.

We probably haven't reached the end of moral change. In a few centuries, people might look back on our current norms with the same mix of appreciation and condemnation that we now look back on ethical norms common in Warring States China and Early Modern Europe.

Indeed, future generations might find our generation to be especially vividly loathsome, since we are the first generation creating an extensive video record of our day-to-day activities.

It’s one thing to know, in the abstract, that Rousseau fathered five children with a lover he regarded as too dull-witted to be worth attempting to formally educate, and that he demanded against her protests that their children be sent to (possibly very high mortality) orphanages [see esp. Confessions, Book VII]. It would be quite another if we had baby pictures and video of Rousseau's interactions with Thérèse. It's one thing to know, in the abstract, that Aristotle had a wife and a life of privilege. It would be quite another to watch video of him proudly enacting sexist and classist values we now find vile. Future generations that detest our sexual practices, or our consumerism, or our casual destruction of the environment, or our neglect of the sick and elderly, might be especially horrified to view these practices in vivid detail.

By "we" and "our" practices and values, I mean the typical practices and values of highly educated readers from early 21st-century democracies -- the notional readership of this blog. Maybe climate change proves to be catastrophic: Crops fail, low-lying cities are flooded, a billion desperate people are displaced or malnourished and tossed into war. Looking back on video of a philosopher of our era proudly stepping out of his shiny, privately-owned minivan, across his beautiful irrigated lawn in the summer heat, into his large chilly air-conditioned house, maybe wearing a leather hat, maybe sharing McDonald's ice-cream cones with his kids -- looking back, that is, on what I (of course this is me) think of as a lovely family moment -- might this seem to some future Bangladeshi philosopher as vividly disgusting as I suspect I would find Aristotle's treatment of Greek slaves?

#

If we are currently at the moral pinnacle, any change in future values will be a change for the worse. Future generations might condemn our mixing of the races, for example. They might be disgusted to see pictures of interracial couples walking together in public and raising their mixed-race children. Or they might condemn us for clothing customs that they come to view as obscene. However, I feel comfortable saying that they'd be wrong to condemn us, if those were the reasons why.

But it seems unlikely that we are at the pinnacle; and thus it seems likely that future generations might have some excellent moral reason to condemn us. More likely than our being at the moral pinnacle, it seems to me, is that either (a.) there has been a slow trajectory toward better values over the centuries (as argued by Steven Pinker), a trajectory that will presumably continue, or (b.) shifts in value are more or less a random walk up, down, and sideways, in which case it would be an unlikely chance if we happened to be at the peak right now. I am assuming here the same kind of non-relativism that most people assume in condemning Nazism and in thinking that it constitutes genuine moral progress to recognize the equal moral status of women and men.

(To someone who endorses most of the widely shared values of their group, it is almost analytically the case that they will see their group's values as the peak. Suppose you endorse the mainstream values in your group -- values A, B, C, D, E, and F. Elsewhere, the mainstream values might instead be A, not-B, D, E, F, and G, or A, C, not-D, not-E, H, and I. Of course it will seem to you that yours is the group that got it right -- exactly A, B, C, D, E, and F! It will seem to you that changes from past values have been good and that the likely future rejection of your values will be mistaken. This is basically the old man's "kids these days!" complaint, writ large.)

I worry, then, that we might be in a situation similar to Aristotle's: horribly wrong (most of us) on some really important moral issues, though it doesn't feel like we're wrong, and though we think we are applying our excellent minds excellently to the matter, with wisdom and good sense. I worry that we, or I, might be using philosophy to justify the 21st-century college-educated North American's moral equivalent of keeping slaves, oppressing women, and launching genocidal war.

Is there some way of gaining insight into this possibility? Some way to get a temperature reading, so to speak, on our unrecognized evil?

Here's one thing I don't think will work: Rely on the ethical reasoning of the highest status philosophers in our society. If you've read any of my work on Kant's applied ethics, German philosophers' failure to reject Nazism, and the morality of ethics professors, you'll know why I say this.

#

I'd suggest, or at least I'd hope, that if future generations rightly condemn us, it won't be for something we'd find incomprehensible. It won't be because we sometimes chose blue shirts over red ones or because we like to smile at children. It will be for things that we already have an inkling might be wrong, and which some people do already condemn as wrong. As Michele Moody-Adams emphasizes in her discussion of slavery and cultural relativism (Moody-Adams 1997, ch. 2), in every slave culture there were always some voices condemning the injustice of slavery -- among them, typically, the slaves themselves -- and it required a kind of affected ignorance to disregard those voices. As a clue to our own evil, we might look to minority moral opinions in our own culture.

I tend to disagree with those minority opinions. I tend to think that the behavior of my social group is more or less fine, or at least forgivably mediocre. If someone advances a minority ethical view I disagree with, I'm philosopher enough to concoct some superficially plausible defenses. What I worry is that a properly situated observer might recognize those defenses to be no better than Hans Heyse's defense of Nazism or Kant's critique of masturbation.

Moody-Adams suggests that we can begin to transcend our cultural and historical moral boundaries through moral reflection and moral imagination. In the epilogue of her 1997 book, she finds hope in the kind of moral reflection that involves self-scrutiny, vivid imagination, wide-ranging contact with other disciplines and traditions, a recognition of minority voices, and serious engagement with the concrete details of everyday moral inquiry.

Hey, that sounds pretty good! I'll put, or try to put, my hopes there too.

Wednesday, June 20, 2018

The Perceived Importance of Kant, as Measured by Advertisements for Specialists in His Work

I'm revising a couple of my old posts on Kant for my next book, and I wanted some quantitative data on the importance of Kant in Anglophone philosophy departments.

There's a Leiter poll, where Kant ranks as the third "most important" philosopher of all time, after Plato and Aristotle. That's pretty high! But a couple of measures suggest he might be even more important than number three. In terms of appearances in philosophy abstracts, he might be number one. Kant* appears 4370 times since 2010 in Philosopher's Index abstracts, compared to 2756 for Plato*, 3349 for Aristot*, 1096 for Hume*, 1545 for Nietzsch*, and 1110 for Marx*. I've tried a bunch of names and found no one higher.

But maybe the most striking measure of a philosopher's perceived importance is when philosophy departments advertise for specialists specifically in that person's work. By this measure, Kant is the winner, hands-down. Not even close!

Here's what I did: I searched PhilJobs -- currently the main resource for philosophy jobs in the Anglophone world -- for permanent or tenure-track positions posted from June 1, 2015 to June 18, 2018. "Kant*" yields 30 ads (of 910 in the database), among which 17 contained "Kant" or "Kantian" in the line for "Area of Specialization". One said "excluding Kant", so let's toss that one out, leaving 29 and 16. Four were specifically asking for "post-Kantian" philosophy (which presumably excludes Kant, though it's testament to his influence that a historical period is referred to in this way), but most were advertising either for a Kant specialist (e.g., UNC Chapel Hill searched in AOS "Kant's theoretical philosophy") or for Kant among other things (e.g., Notre Dame: "Kant and/or early modern"). Where "Kant" was not in the AOS line, his name was either in the Area of Competence line or somewhere in the body of the ad [note 1].

In sum, the method above yields:
Kant: 29 total PhilJobs hits, 16 in AOS (12 if you exclude "post-Kantian").

Here are some others:

Plato*: 3, 0.
Aristot*: 2, 0.
Hume*: 1, 0.
Confuc*: 1, 0.
Aquin*: 3, 1 (all Catholic universities).
Nietzsch*: 0, 0.
Marx*: 5, 1 (4 of the 5 from Chinese universities).

As I said, hands down. Kant runs away with the title, Plato and Confucius shading their eyes in awe as they watch him zoom toward the horizon.

Note 1: If "Kant" was in the body of the ad, it was sometimes because the university was mentioning its department's strength in Kant rather than searching for someone working on Kant. But for my purposes, if a department self-describes its strengths that way, that's also a good signal of Kant's perceived importance, so I haven't excluded those cases.

[image source]

Thursday, June 14, 2018

Slippery Slope Arguments and Discretely Countable Subjects of Experience

I've become increasingly worried about slippery slope arguments concerning the presence or absence of (phenomenal) consciousness. Partly this is in response to Peter Carruthers' new draft article on animal consciousness, partly it's because I'm revisiting some of my thought experiments about group minds, and partly it's just something I've been worrying about for a while.

To build a slippery slope argument concerning the presence of consciousness, do this:

* First, take some obviously conscious [or non-conscious] system as an anchor point -- such as an ordinary adult human being (clearly conscious) or an ordinary proton (obviously(?) non-conscious).

* Second, imagine a series of small changes at the far end of which is a case that some people might view as a case of the opposite sort. For example, subtract one molecule at a time from the human until you have only one proton left. (Note: This is a toy example; for more attractive versions of the argument, see below.)

* Third, highlight the implausibility of the idea that consciousness suddenly winks out [winks in] at any one of these little steps.

* Finally, conclude that the disputable system at the end of the series is also conscious [non-conscious].

Now slippery slope arguments are generally misleading for vague predicates like "red". Even if we can't finger an exact point of transition from red to non-red in a series of shades from red to blue, it doesn't follow that blue is red. Red is a vague predicate, so it ought to admit of vague, in-betweenish cases. (There are some fun logical puzzles about vague predicates, of course, but I trust that our community of capable logicians will eventually sort that stuff out.)

However, unlike redness, the presence or absence of consciousness seems to be a discrete all-or-nothing affair, which makes slippery-slope arguments more tempting. As John Searle says somewhere (hm... where?), having consciousness is like having money: You can have a little of it or a lot of it -- a penny or a million bucks -- but there's a discrete difference between having only a little and having not a single cent's worth. Consider sensory experience, for example. You can have a richly detailed visual field, or you can have an impoverished visual field, but there is, or at least seems to be, a discrete difference between having a tiny wisp of sensory experience (e.g., a brief gray dot, the sensory equivalent of a penny) and having no sensory experience at all. We normally think of subjects of experience as discrete, countable entities. Except as a joke, most of us wouldn't say that there are two-and-a-half conscious entities in the room or that an entity has 3/8 of a stream of experience. An entity either is a subject of conscious experience (however limited their experience is) or has no conscious experience at all.

Consider these three familiar slippery slopes.

(1.) Across the animal kingdom. We normally assume that humans, dogs, and apes are genuinely, richly phenomenally conscious. We can imagine a series of less and less sophisticated animals all the way down to the simplest animals or even down into unicellular life. It doesn't seem that there's a plausible place to draw a bright line, on one side of which the animals are conscious and on the other side of which they are not. (I did once hear an ethologist suggest that the line was exactly between toads (conscious) and frogs (non-conscious); but even if you accept that, we can construct a fine-grained toad-frog series.)

(2.) Across human development. The fertilized egg is presumably not conscious; the cute baby presumably is conscious. The moment of birth is important -- but it's not clear that it's so neurologically important that it is the bright line between an entirely non-conscious fetus and a conscious baby. Nor does there seem to be any other obvious sharp transition point.

(3.) Neural replacement. Tom Cuda and David Chalmers imagine replacing someone's biological neurons one by one with functionally equivalent artificial neurons. A sudden wink-out between N and N+1 replaced neurons doesn't seem intuitively plausible. (Nor does it seem intuitively plausible that there's a gradual fading away of consciousness while outward behavior, such as verbal reports, stays the same.) Cuda and Chalmers conclude that swapping out biological neurons for functionally similar artificial neurons would preserve consciousness.

Less familiar, but potentially just as troubling, are group consciousness cases. I've argued, for example, that Giulio Tononi's influential Integrated Information Theory of consciousness runs into trouble in employing a threshold across a slippery slope (e.g. here and Section 2 here). Here the slippery slope isn't between zero and one conscious subjects, but rather between one and N subjects (N > 1).

(4.) Group consciousness. At one end, anchor with N discretely distinct conscious entities and presumably no additional stream of consciousness at the group level. At the other end, anchor with a single conscious entity with parts none of which, presumably, is an individual subject of experience. Any particular way of making this more concrete will have some tricky assumptions, but we might suppose an Ann Leckie "ancillary" case with a hundred humanoid AIs in contact with a central computer on a ship. As the "distinct entities" anchor, imagine that the AIs are as independent as ordinary human beings are, and the central computer is just a communications relay. Intermediate steps involve more and more information transfer and central influence or control. The anchor case on the other end is one in which the humanoid AIs are just individually nonconscious limbs of a single fully integrated system (though spatially discontinuous). Alternatively, if you like your thought experiments brainy, anchor on one end with normally brained humans, then construct a series in which these brains are slowly neurally wired together and perhaps shrunk, until there's a single integrated brain again as the anchor on the other end.

Although the group consciousness cases are pretty high-flying as thought experiments, they render the countability issue wonderfully stark. If streams of consciousness really are countably discrete, then either you must:

(a.) Deny one of the anchors. There was group consciousness all along, perhaps!

(b.) Affirm that there's a sharp transition point at which adding just a single bit's worth of integration suddenly shifts the whole system from N distinct conscious entities to only one conscious entity, despite the seemingly very minor structural difference (as on Tononi's view).

(c.) Try to wiggle out of the sharp transition with some intermediate number between N and 1. Maybe this humanoid winks out first while this other virtually identical humanoid still has a stream of consciousness -- though that's also rather strange and doesn't fully escape the problem.

(d.) Deny that conscious subjects, or streams of conscious experience, really must come in discretely countable packages.

I'm increasingly drawn to (d), though I'm not sure I can quite wrap my head around that possibility yet or fully appreciate its consequences.

[image adapted from Pixabay]