Monday, July 04, 2022

Political Conservatives and Political Liberals Have Similar Views about the Goodness of Human Nature

with Nika Chegenizadeh

Back in 2007, I hypothesized that political liberals would tend to have more positive views about the goodness of human nature than political conservatives. My thinking was grounded in a particular conception of what it is to say that "human nature is good". Drawing on Mengzi and Rousseau (and informed especially by P.J. Ivanhoe's reading of Mengzi), I argued that those who say human nature is good have a different conception of moral development than do those who say it is bad.

On my interpretation, those who say human nature is good have an inward-out model of moral development, according to which all ordinary people have something like an inner moral compass: an innate tendency to be attracted by what is morally good and revolted by what is morally evil, at least when it's up close and extreme. This tendency doesn't require any particular upbringing or specific cultural background. It's universal to all normally developing humans. Of course it can be overridden by any of a number of factors -- self-interest, cultural learning, situational pressures -- and sometimes it speaks only with a quiet voice. But somewhere in the secret heart of every Nazi killer of Jews, every White supremacist lyncher, every evil tyrant, every rapist and abuser and vile jerk is something that understands and rebels against their horrid actions. Moral development then proceeds by noticing that quiet voice of conscience and building upon it.

Emblematic of this view, picture the pre-school teacher who confronts a child who has just punched another child. "Don't you feel bad about what you did to her?" the teacher asks, hoping that this provokes reflection and a feeling of sympathy from which better moral behavior will grow in the future.

Those who say human nature is bad have, in contrast, an outward-in model of moral development. On this view, what is universal to humans is self-interest. Morality is an artificial social construction. Any quiet voice of conscience we might have is the result of cultural learning. People regularly commit evil and feel perfectly fine about it. Moral development proceeds by being instructed to follow norms that at first feel alien and unpleasant -- being required to share your toys, for example. Eventually you can learn to conform whole-heartedly to socially constructed moral norms, but this is more a matter of coming to value what society values than building on any innate attraction to moral goodness.

Thus, a liberal style of caregiving, which emphasizes children exploring their own values, fits nicely with the view that human nature is good, while a conservative style of caregiving, which emphasizes conformity to externally imposed rules, fits nicely with the view that human nature is bad.

At least, that has been my thought. Some political scientists have endorsed related views. For example, John Duckitt and Kirsten Fisher argue that believing that people are ruthless and the world is dangerous tends to correlate with having more authoritarian politics.

For her undergraduate honors thesis, Nika Chegenizadeh decided to put these ideas to an empirical test. She recruited 200 U.S. participants through Prolific, an online platform commonly used to recruit research subjects.

Participants first answered eleven questions about the morality of "most people" -- for example, "Most people will return a lost wallet" and "For most people it is easier to do evil than good" (6-point response scale from "strongly agree" [5] to "strongly disagree" [0]). Next, they answered five questions about their own helpful or unhelpful behavior in hypothetical situations. For example:

While walking in a park, you notice someone struggling to carry a box of water bottles. Which of the following are you most likely to do? 
o Continue walking your path. 
o Help them carry their box.

Next, participants were explicitly asked about human nature:

Human nature can be defined in terms of what is characteristic or normal for most human beings. It describes the way humans are inclined to be if they mature and develop normally from when they are first born. 
Based on the definition given, which of the following two statements better represents your view? 
o Human nature is inherently bad. 
o Human nature is inherently good.

Now one could quibble that this definition of human nature doesn't map exactly onto philosophical conceptions in Mengzi, Xunzi, Hobbes, or Rousseau. And it's certainly the case that Mengzi and Rousseau can allow that human nature is good despite most people acting badly most of the time. But those issues are probably too nuanced to convey accurately in a short amount of time to ordinary online research participants. It's interesting enough to work with Nika's approximation for this first-pass research.

Next, participants were asked their political opinions on some representative issues. For example: "The federal government should make sure everyone has an equal opportunity to succeed" (6-point agree/disagree scale), "Do you favor or oppose requiring background checks for gun purchases at gun shows or other private sales?" (favor, neither favor nor oppose, oppose), and "Where would you place yourself on this political scale?" (Liberal, Leaning Liberal, Leaning Conservative, Conservative). The questionnaire concluded with some demographic questions.

To Nika's and my surprise, we found no evidence of the hypothesized relationship.

The simplest test is to consider whether participants who describe themselves as politically liberal are more likely than those who describe themselves as politically conservative to say "human nature is inherently good". In all, 79% (118/150) of participants who described themselves as liberal or leaning liberal said that human nature is inherently good, compared to 74% (37/50) of participants who described themselves as conservative or leaning conservative -- a difference that is well within statistical chance (two-proportion z = 0.66, p = .51).
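(If you want to check that statistic, the test is simple enough to compute by hand. Here's a minimal sketch in Python -- an illustration, not Nika's actual analysis script -- using the unpooled-standard-error variant of the two-proportion z-test, which reproduces the reported value:)

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test using the unpooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) / se

# 118/150 liberal or leaning-liberal vs. 37/50 conservative or
# leaning-conservative participants saying "inherently good"
z = two_proportion_z(118, 150, 37, 50)
print(round(z, 2))  # 0.66
```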

Here is the breakdown by political leaning:

[click to enlarge and clarify; error bars are +/- 1 standard error]

For a possibly more sensitive measure, we created a composite "people are good" score by averaging the eleven questions in the first part of the survey (e.g., "most people will return a lost wallet"), reverse-scoring the negatively worded items. As expected, people who said that "human nature is inherently good" scored higher, on average, on the people-are-good composite scale (2.5) than respondents who said that "human nature is inherently bad" (1.9) (pooled SD = .52, t[198] = 7.29, p < .001). We then coded the political leaning answers on a 0-3 scale: "liberal" as 3, "leaning liberal" as 2, "leaning conservative" as 1, and "conservative" as 0, and checked for a correlation. If political liberals have more positive views about the moral behavior of the average person, we should find a positive correlation between these two measures.

Again, and contrary to our hypothesis, we found no evidence of a positive correlation. The measured correlation between the "people are good" composite score and political leaning was almost exactly zero (r = .00, p = .95).

How about using our indirect measure of political liberalism? To test this, we created a composite political liberalism score by scoring the most liberal response to each political question as 1, the most conservative response as 0, and intermediate responses as intermediate, then averaging. As expected, this correlated very highly with self-described political leaning (r = .78, p < .001). Again, there was no statistically detectable correlation with the "people are good" score (r = -.07, p = .35).
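(For concreteness, here is roughly how the composite scoring and correlation checks could be done. A minimal sketch, not our actual analysis script: the file name, column names, and the list of negatively worded items are all hypothetical placeholders.)

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical file: one row per participant, items good_1 ... good_11
# on the 0-5 agree/disagree scale, pol_leaning coded 0 (conservative)
# through 3 (liberal).
df = pd.read_csv("survey_responses.csv")

# Reverse-score the negatively worded items (hypothetical list).
NEGATIVE_ITEMS = ["good_2", "good_5", "good_9"]
items = df[[f"good_{i}" for i in range(1, 12)]].copy()
items[NEGATIVE_ITEMS] = 5 - items[NEGATIVE_ITEMS]

# Composite "people are good" score: mean of the eleven items.
df["people_good"] = items.mean(axis=1)

# Pearson correlation with political leaning.
r, p = pearsonr(df["people_good"], df["pol_leaning"])
print(f"r = {r:.2f}, p = {p:.2f}")
```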

Looking post-hoc at individual items, we do find two items concerning human nature and human goodness that correlate with political leaning. Agreement with "Children need to be taught right from wrong through strict rules and harsh punishments" correlated negatively with self-described political liberalism at r = -.40 (p < .001) and composite political liberalism at r = -.41 (p < .001). And political liberals were more likely to opt for "natural consequences" to the prompt:

Your child has purposefully disobeyed the rules you set for them. Which of the following are you most likely to do?
o Let them live with the natural consequences that they have made. 
o Opt for hands-on punishment by grounding them (taking their phone/technology away and not leaving the house).

In all, 8% (4/50) of respondents who were conservative or leaning conservative chose "natural consequences", compared to 46% of respondents who were liberal or leaning liberal (two-proportion z = 6.79, p < .001).

In retrospect, these two questions were outliers. They directly concern parenting styles, rather than, more generally, whether people are inherently good or how respondents would behave in hypothetical helping situations. Parenting styles and beliefs about human nature are closely connected on my theory, but the surface content of these questions differs from that of the others, and my theory might well be wrong.

As the numbers above suggest, liberals and conservatives do differ on these two parenting-related questions in the direction my theory would predict. Furthermore, also as my theory would predict, "liberal" answers on these questions correlate with agreement that "human nature is inherently good" (r = .27, r = .31, both p's < .001). However, when we get away specifically from questions about parenting to more general questions about the goodness or helpfulness of people, we don't see the relationship Nika and I expected. In general, political liberals seem to have no more optimistic a view of human nature than do political conservatives.

Full stimulus materials and raw data available here.

Monday, June 27, 2022

If We're Living in a Simulation, The Gods Might Be Crazy

[A comment on David Iserson's new short story, "This, but Again", in Slate's Future Tense]

That we’re living in a computer simulation—it sounds like a paranoid fantasy. But it’s a possibility that futurists, philosophers, and scientific cosmologists treat increasingly seriously. Oxford philosopher and noted futurist Nick Bostrom estimates there’s about a 1 in 3 chance that we’re living in a computer simulation. Prominent New York University philosopher David J. Chalmers, in his recent book, estimates at least a 25 percent chance. Billionaire Elon Musk says it’s a near-certainty. And it’s the premise of this month’s Future Tense Fiction story by David Iserson, “This, but Again.”

Let’s consider the unnerving cosmological and theological implications of this idea. If it’s true that we’re living in a computer simulation, the world might be weirder, smaller, and more unstable than we ordinarily suppose.

Full story here.

----------------------------------------

Related:

"Skepticism, Godzilla, and the Artificial Computerized Many-Branching You" (Nov. 15, 2013).

"Our Possible Imminent Divinity" (Jan. 2, 2014).

"1% Skepticism" (Nous (2017) 51, 271-290).

Related "Is Life a Simulation? If So, Be Very Afraid" (Los Angeles Times, Apr. 22, 2022).

Wednesday, June 22, 2022

Your Summer Reading, Sorted!

I've just finished a new version of my book in draft, The Weirdness of the World. This one includes a new chapter co-written with Jacob Barandes, on some of the bizarre consequences of spatiotemporal infinitude.

Draft available here.

I'm looking for comments and suggestions. Here's your chance to improve my book before it goes into print! Isn't that better than emailing me your insightful idea after it's too late for me to change anything?

Table of Contents

1. In Praise of Weirdness


Part One: Bizarreness and Dubiety

2. If Materialism Is True, the United States Is Probably Conscious

• Chapter Two Appendix: Six Objections

3. Universal Bizarreness and Universal Dubiety

4. 1% Skepticism

5. Kant Meets Cyberpunk


Part Two: The Size of the Universe

6. Experimental Evidence for the Existence of an External World

7. Almost Everything You Do Causes Almost Everything (Under Certain Not Wholly Implausible Assumptions); or Infinite Puppetry


Part Three: More Perplexities of Consciousness

8. An Innocent and Wonderful Definition of Consciousness

9. The Loose Friendship of Visual Experience and Reality

10. Is There Something It’s Like to Be a Garden Snail? Or: How Sparse or Abundant Is Consciousness in the Universe?

11. The Moral Status of Future Artificial Intelligence: Doubts and a Dilemma

12. Weirdness and Wonder

Friday, June 17, 2022

Dispositionalism vs. Representationalism -- What's the Core Disagreement?

I'm just back from a workshop at Princeton on the nature of belief. As usual, I defended my dispositional approach to belief (see here, here, and here), according to which to believe some proposition P (such as that there is beer in the fridge) is just to be disposed to act and react in the manner characteristic of a believer that P, as defined by a folk-psychologically available (alternatively, scientifically constructed) stereotype or dispositional profile for believers-that-P. The relevant dispositions can be behavioral (e.g., being disposed to go to the fridge if one wants a beer), phenomenal/experiential (e.g., being disposed to feel surprise should one open the fridge and find no beer), and cognitive (e.g., being ready to conclude that there is beer in the house). To believe that P is to be prone to act and think as a P-believer would.

To believe that P, on my dispositional account, is not just to be ready to sincerely say that P. It is, broadly speaking, to have a particular behavioral, experiential, and cognitive posture toward the world. To believe, for example, that all the races are intellectually equal is not just to be disposed to say so, but to actually live that way. This view is grounded in the pragmatist tradition of thinking about belief, going back to Bain, Peirce, and James.

A prominent alternative account -- maybe the dominant approach among philosophers and cognitive scientists -- is representationalism. According to representationalism, to believe that P is to have a representation with the content "P" stored in one's cognitive architecture, ready to be retrieved and deployed in relevant practical and theoretical reasoning. If asked whether there's beer in the fridge, you pull from your memory stores the representation "there's beer in the fridge", do a little bit of cognitive processing, and answer "yes".

These are somewhat simplified descriptions of the competing accounts. Dispositionalism, for example, typically treats the relevant dispositions as ceteris paribus (that is, all else being equal, or normal, or right) to deal with cases of faking, acting under duress, etc. Representationalism typically allows for tacit belief, where P itself isn't explicitly stored but instead is quickly derivable from some neighboring proposition that is stored (so that you don't need separately stored representations for "there's beer in the fridge", "there's Lucky Lager in the fridge", "there's Dan's favorite beer in the kitchen", etc.).

To get a sense of what this difference amounts to and why it matters, let me mention the two main reasons I prefer dispositionalism.

First, dispositionalism better captures what we care about in thinking about what people believe. A thought experiment: Space aliens arrive at Earth. We know nothing about their internal cognitive architecture, but they nonetheless act and react exactly as creatures with beliefs do. Alpha-1 is disposed to say there's beer in the fridge, to go to the fridge if they want a beer, to experience surprise if they open the fridge and find no beer, to think to themself in inner speech "there's beer in the fridge, yes!", etc., etc. This should be sufficient for us to regard Alpha-1 as a beer-in-the-fridge believer, regardless of what might or might not be true about the underlying cognitive architecture.

Second, I suspect that the representationalist architectural story is overly simple. The idea that we store and retrieve representations with simple ordinary-language contents like "there's beer in the fridge" seems to me likely to be merely a cartoon sketch of a radically more complicated architecture (compare the complex, almost uninterpretable internal architectures of deep learning Artificial Intelligence systems). The explicit/tacit distinction mentioned above is likely the tip of an iceberg of dubious architectural commitments that follow from taking literally the idea that acting on our beliefs requires storing and retrieving contents like "there's beer in the fridge".

Now, these two objections to representationalism operate at two different levels and therefore create two different types of contrast with representationalism. The first objection constitutes a commitment to what I call superficialism. What matters, or should matter, to our conception of whether someone has a belief is what is happening at the dispositional "surface" rather than in the deep architecture. By the surface, here, I don't just mean the behavioral surface but also the phenomenological/experiential surface and the cognitive surface -- such as what experiences the putative believer is disposed to have and what conclusions they are prone to draw.

Superficialism is compatible with thinking that there's a representational architecture underneath. You could be a superficialist and still hold that what explains why you act and react like a beer-in-the-fridge believer is that you have a stored representation with the content "there's beer in the fridge" that you're ready to deploy in your practical and theoretical reasoning. If so, there can be a partial reconciliation between dispositionalism and representationalism. There would still be a metaphysical difference: On dispositionalism, you're a beer-in-the-fridge believer in virtue of your dispositional structure, not in virtue of the cognitive architecture that underwrites that structure. On representationalism, the reverse would be true. But maybe this dispute is minor if we're primarily concerned with ordinary, real-world human cases where the dispositional and representational structures co-occur.

So it's possible to partially reconcile dispositionalism and representationalism. But that partial reconciliation concerns only the first of the two objections -- the a priori philosophical argument in favor of superficialism.

My second objection is more architectural and also more empirical. It's a guess or bet against a representationalist architecture according to which we literally store representational contents like "there's a beer in the fridge" and retrieve those contents when they are relevant to our reasoning.

Actually, there's a possibility of a partial reconciliation here, of a different sort. After all, what is it to literally have a stored representation with the content "there's a beer in the fridge"? Obviously, there's no literal slip of paper with that sentence written anywhere in the brain. Maybe a complex distributed process can count as literally storing that representation if certain other conditions are met.

Here, I think the representationalist faces a dilemma. On the one hand, the representationalist can be super liberal and say that there's a stored representation that "there's a beer in the fridge" whenever the system is such that it has the dispositional structure characteristic of a beer-in-the-fridge believer. In that case, there's no substantive empirical dispute between dispositionalism and representationalism: The so-called representationalist is really just a dispositionalist employing misleading language. On the other hand, the representationalist can make specific architectural commitments regarding how the cognitive system must be designed. The more specific, the riskier empirically, of course, and the closer to the simplistic cartoon sketch, and the less relevant to our superficialist interests as belief ascribers.

As participants in the Princeton workshop noticed, and as readers of the debate between Jake Quilty-Dunn, Eric Mandelbaum, and me (e.g., here) sometimes notice, although there's a bald top-line disagreement between dispositionalism and representationalism, a closer look suggests some paths toward reconciling the two views. However, I think a still closer look suggests that such apparent reconciliations can really only be partial. Understanding this back-and-forth helps us better understand the philosophical terrain and the real nature of the dispute.

Friday, June 10, 2022

The Continental/Analytic Divide Is Alive and Well in Philosophy: A Quantitative Analysis

A major sociological divide in recent Anglophone philosophy is the divide between philosophers who see themselves as working in the tradition of Nietzsche, Heidegger, Foucault, and Derrida -- so-called "Continental" philosophers -- and those working in the tradition of Frege, Russell, Wittgenstein, and Quine -- so-called "analytic" philosophers. This division is reflected, in part, in journal citation patterns. You might wonder about the history of this. Was Philosophical Review always allergic to Nietzsche? Or is that a relatively recent phenomenon? And how extreme is the phenomenon? Do leading Continental figures get at least some play in the top analytic journals, or are they almost entirely excluded?

I first looked at this issue quantitatively ten years ago. Today's post is a reanalysis, with new and updated data.

The "big three" Anglophone philosophy journals -- all of which have been leading journals since the first decade of the 20th century -- are Philosophical Review, Mind, and Journal of Philosophy (formerly Journal of Philosophy, Psychology, and Scientific Methods). All currently lean heavily "analytic". Recent journal rankings also tend to classify Nous (founded 1967) as similarly prestigious. All are also indexed in JStor, along with a diverse group of 135 other philosophy journals, many of which are not as sociologically aligned with the analytic tradition.

What I've done is to look, decade by decade, at the rates at which the names of central analytic and Continental philosophers appear in these "big four" journals compared to other journals.

Compare, first, Nietzsche and Frege -- foundational figures for both traditions, both born in the 1840s. The crucial measure is the percentage of articles in which each figure is mentioned in the big four vs. the percentage in which each is mentioned in the remainder. For methodological details see this note.[1] For a clearer view of the charts below, click them.

As you can see, through the 1940s or 1950s the Nietzsche lines stay more or less together and the Frege lines stay more or less together. This means that Nietzsche and Frege are mentioned about equally frequently in the big four (actually big three, back then) journals as in other journals -- Nietzsche in about 5% of articles throughout the period and Frege in about 2%.

Starting in the 1960s or 1970s, the dashed and solid lines are clearly separated -- a separation that increases through the 1990s. Since the 1990s, the separation has remained quite large. Note how often Frege is mentioned in the big four journals in the 1980s through 2010s -- in about 20% of all articles. Outside the big four, he's mentioned in about 6% of all articles. Also outside the big four, Nietzsche is mentioned about as often as Frege -- in about 5% of articles. But in the big four, he is mentioned in only about 1% of articles by the 2000s and 2010s.

In the period from 2000 to 2019, Nietzsche is mentioned only 20 times among 1760 articles in the big four. If you were to read every article published in the big four journals, you would see his name mentioned on average in only one article per year. That's remarkably infrequent for such a historically important figure!

A similar story holds for Heidegger and Wittgenstein -- leading early figures in the Continental and analytic traditions, respectively -- and both born in 1889. (Again, click chart for clarity.)

Starting in the 1950s, Wittgenstein is notably more favored in the big four than in the others, though the difference isn't extreme. Starting in the 1940s, Heidegger is slightly disfavored by the big four relative to the others, with the difference getting large by the 1980s and continuing to increase up to the present. In the 2010s, Heidegger is mentioned in 0.7% of articles in the big four (5 times total in 764 articles) and in 6% of articles in the remaining journals (1474/26084).

Okay, how about the Continentals Sartre (b. 1905), Foucault (b. 1926), and Derrida (b. 1930) vs. the analytics Quine (b. 1908), Chisholm (b. 1916), and Putnam (b. 1926)? The graph is a little crowded, but the following should be evident: The muted-color analytics show higher in the big four (solid lines) than in the remaining journals (dashed lines), while the bright-color Continentals show the reverse pattern -- and the spread is much more evident in the past few decades than it was mid-century. (There's a bit of false-positive noise for Foucault and Putnam, but not enough to mask the general trend. I have excluded Russell entirely due to false positives.)

Here's one way of thinking about it. In the bulk of JStor philosophy journals, representing a mix of Continental, analytic, and eclectic perspectives, these six figures are all broadly similar in importance in recent decades, as shown by the fact that the dashed lines end up bunched in the middle range of about 2-7%. But if we look at the big four analytic journals, the analytic figures loom large, especially Quine, while the Continentals are near the floor -- Derrida and Foucault in particular being almost at 0% (each has 4 mentions total in the period from 2000-2019, i.e., about once in each journal across the whole period).

The effect is even clearer if we take averages of trend lines for the five Continentals versus the five analytics:

In the 2000s and the portion of the 2010s that has so far been indexed in JStor, the words Nietzsche, Heidegger, Sartre, Foucault, and Derrida appeared in 48 articles total among 1760 big four journal articles (2.7%). Thus, the big four journals have included, on average, less than one article per journal per year that even passingly mentions any of these five authors.

Maybe an analysis of leading Continental journals would reveal a similar trend, with Nietzsche and Heidegger mentioned in a substantial percentage of articles and Frege and Wittgenstein hardly mentioned at all -- or maybe not. But even if not, the exclusion of leading Continental figures from the top analytic journals shows that the Continental/analytic divide remains sociologically important.

ETA, 12:05 p.m.

Could it be a result of the fact that the big four journals don't publish much ethics, while some of the Continental figures mainly had their impact in ethics? I don't think so. If we look for ethics-specific journals among the top journals of philosophy "without regard to area" in Leiter's most recent poll (a bit dated), only Ethics and Philosophy and Public Affairs rank among the top 25. A disjunctive search for Nietzsche or Heidegger or Sartre or Foucault or Derrida shows only 38 hits total for the 17-year period from 2000-2016 -- again, about one mention of any of these five authors per journal per year.

------------------------------------------

[1] Names include a truncation symbol, e.g., "Nietzsche*", which captures "Nietzsche's" and "Nietzschean" -- except for "Putnam", to exclude false alarms for the publisher "Putnam's Sons", and except for the disjunctive search described in the penultimate paragraph. Percentages are computed relative to a representative universe of articles, operationalized as all articles containing the search term "the". Only English-language articles are included. Reviews and minor documents are excluded by limiting the results to "articles" in JStor. Although the search terms run through 2019, JStor only covers through the mid-2010s for many journals, including the big four.
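(If you'd like to replicate this kind of analysis, the computation in note [1] reduces to simple division over hit counts. A minimal sketch, assuming the JStor hit counts have been hand-tallied into a CSV -- the file and column names are hypothetical, and the counts come from manual searches, not from any API:)

```python
import pandas as pd

# Hypothetical file, one row per (figure, decade, journal group):
#   name       e.g. "Nietzsche"
#   decade     e.g. 1990
#   group      "big4" or "other"
#   name_hits  articles matching the truncated name search ("Nietzsche*")
#   the_hits   articles containing "the" (the proxy universe of articles)
hits = pd.read_csv("jstor_hits.csv")

# Percentage of articles mentioning each figure, per decade and group.
hits["pct"] = 100 * hits["name_hits"] / hits["the_hits"]
table = hits.pivot_table(index="decade", columns=["name", "group"], values="pct")
print(table.round(1))
```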

Wednesday, June 01, 2022

Infinite Puppetry

About a year ago, I argued in a blog post that "Everything You Do Causes Almost Everything". If the universe is temporally infinite (as suggested by the current default theories in cosmology) and supports random fluctuations post-heat-death (as also suggested by the current default theories in cosmology), then every action you take will perturb some particles, which will perturb more particles, which will perturb more particles, in an infinite causal chain (a "ripple"), eventually perturbing some post-heat-death system in a way that results in any type of non-unique, non-zero-probability event that you care to specify. Wave your hand, and you will trigger a ripple of perturbances through the cosmos that will eventually cause a far-distant-future duplicate of Shakespeare to decide that Hamlet needs a happy ending.

Philosopher of physics Jacob Barandes and I have collaborated on a draft chapter developing the idea in more detail, for my forthcoming book with Princeton, The Weirdness of the World. I think you'll agree that the idea fits nicely with the book title!

The draft chapter more carefully articulates the physical assumptions required for the almost-everything-causes-almost-everything idea to work -- and then it adds some new thoughts, specifically the infinite puppetry idea.

Below I share the three final sections of the draft chapter, on infinite puppetry. (For more on almost-everything-causes-almost-everything see last year's blog post or the full chapter draft.) Thoughts and comments welcome as always!


Signaling Across the Vastness

The following will also almost certainly occur, given our assumptions so far: On some far distant post-heat-death counterpart of Earth will exist a counterpart of you – let’s call that person you-prime – with the following properties: You-prime will think “right hand” after the ripple from the act of your raising your right hand arrives at their world, and you-prime will not have thought “right hand” had that ripple not arrived at their world. Maybe the ripple initiates a process that affects the weather which causes a slightly different growing season for grapes, which causes small nutritional differences in you-prime’s diet, which causes one set of neurons to fire rather than another at some particular moment when you-prime happens to be thinking about their hands. Likewise, there’s a future you-prime who would have thought “A” if you, here on our Earth, had held up a sheet with that letter and not otherwise. Indeed, infinitely many future counterparts of you have that property. You can specify the message as precisely as you wish, within the bounds of what a counterpart of you could possibly think. Some you-prime will think, “Whoa! Infinite causation!” as a result of your having raised your hand and would not have done so otherwise.

These message recipients will mostly not believe that they have been signaled to. However, we can dispel their disbelief by choosing the fraction who, for whatever reason, are such that they believe they are receiving a signal if and only if they do in fact receive a signal. We can stipulate that we’re interested in you-primes who share the property that when your signal arrives they think not only the content of the signal but also “Ah, finally that signal I’ve been waiting for from my earlier counterpart.”[1]

There’s a question of whether one of your future counterparts could rationally think such a thought. But maybe they could, if they had the right network of surrounding beliefs, and if those beliefs were themselves reasonably arrived at. We’ll consider one such set of beliefs in the final section of this chapter.


Infinite Puppetry

You needn’t limit yourself to ordinary communicative signals. You can also control your future counterparts’ actions. Consider future counterparts with the following property: They will raise their right hand if you raise your right hand, and they will not raise their right hand if you do not. Exactly which counterparts have this feature will depend on exactly when you raise your hand and how, since that will affect which particles follow which trajectories when they are disturbed by your hand. But no matter. Whenever and however you raise your hand, such future counterparts exist.

Your counterparts’ actions can be arbitrarily complex. There is a future you-prime who will, if you raise your hand, write an essay word-for-word identical with the chapter you are now reading and who will otherwise write nothing at all. Maybe that you-prime is considering whether to write some fanciful philosophy of cosmology, as their last hurrah in a failing career as a philosopher. They’re leaning against it. However, the arriving particle triggers a series of events that causes an internet outage that prevents them from pursuing an alternate plan, so they do write the essay after all. (A much greater proportion[2] of such future counterparts, of course, will write very different essays from this one, but we can focus on the tiny fraction of them who create word-for-word duplicates of this essay.)

Let’s call someone a puppet if they perform action A as a consequence of your having performed an action (such as raising your hand) with the intention of eventually causing a future person to perform action A. (Admittedly, you might need to agree with the assumptions of this chapter to be able to successfully form such an intention.) You can now wave your hand around with any of a variety of intentions for your future counterparts’ actions, and an infinite number of these future counterparts will act accordingly – puppets, in the just-defined sense.

We recommend that you intend for good things to happen. This might seem silly, since if the assumptions of this chapter are correct, almost every type of finitely probable, non-unique future event occurs, regardless of your benevolent or malevolent intent right now. Still, there is a type of good event that can occur as a result of your good intentions, which could not otherwise occur. That’s the event of a good thing happening in the far distant future as a consequence of your raising your hand with the intention of causing that future good event. So let’s choose benevolence, letting good future events be intentionally caused while bad future events are merely foreseen side effects.

A deeper kind of puppet mastery would involve influencing a person’s actions through a sequence of moves over time and with some robustness to variations in the details of execution. This might not be possible on the current set of assumptions. Raising your right hand, you can trigger arbitrarily long sequences of actions in some future you-prime. But if you then raise your left hand, there’s no guarantee that a ripple of particles from your left hand will also hit the same you-prime. Maybe all the ripples from your right hand head off toward regions A, B, and C of the future universe and all the ripples from your left hand head off toward regions D, E, and F. Similarly, if you raise your right hand like this, the ripples might head toward regions A, B, and C, and if you raise it instead like that, they head toward regions G, H, and I. So there might be no future counterparts of you who do what you intend if you raise your right hand now and then do what you intend when you raise your left hand later; and there might be no future counterparts who will do what you intend if you raise your right hand now, insensitively to the particular manner in which you raise it. In this way, there might be no sequencing and no implementational robustness to your puppetry.

Sequential and robust puppetry might only be reliably possible if we change one of the assumptions in this chapter. Suppose that although the universe endures infinitely in time, spatially it repeats – that is, it has a closed topology in the sense we described in Section 1 – so that any particle that travels far enough in one direction eventually returns to the spatial region from which it originated, as if traveling on the surface of a sphere. Suppose, further, that in this finite space, every ripple eventually intersects every other ripple infinitely often. Over the course of infinite time each ripple eventually traverses the whole of space infinitely many times; none get permanently stuck in regions or rhythms that prevent them from all repeatedly meeting each other. (If a few do get stuck, we can deal with them using the n^m strategy of Section 4. Also the rate of ripple stoppage would presumably increase with so much intersection, but hopefully again in a way that’s manageable with the n^m strategy.) When you raise your right hand, the ripples initially head toward regions A, B, and C; when you raise your left hand, they initially head toward regions D, E, and F; but eventually those ripples meet.

With these changed assumptions, we can now find future counterparts who raise their right hands as a result of your raising your right hand and who then afterward raise their left hand as a result of your afterward raising your left hand. We simply look at the infinite series of systems that are perturbed by both ripples. Eventually some will contain counterparts of you who raise their right hands, then their left, as a result of that joint perturbation. In a similar way, we can find implementationally robust puppets: counterparts living in systems that are perturbed by your actual raising of your right hand (via the ripple that initially traversed regions A, B, and C) and which are also such that they would have been perturbed had you, counterfactually, raised your hand in a somewhat different way (via the ripple that would have initially traversed regions G, H, and I). Multiplying the minuscule-but-finite upon the minuscule-but-finite, we can now find puppets whose behavioral matching to yours is long and implementationally robust, within reasonable error tolerances.


We Might All Be Puppets

So far, we have not assumed that anything existed before the Big Bang. But if the universe is infinite in duration, with infinitely many future sibling galaxies, it would be in a sense surprising if the Big Bang were the beginning. It would be surprising because it would make us amazingly special, in violation of the Copernican Principle of cosmology, which holds that our position in the cosmos is not special or unusual. We would be special in being so close to the beginning of the infinite cosmos. Within the first 14 billion years, out of infinity! It’s as though you had a lotto jar with infinitely many balls numbered 1, 2, 3… and you somehow managed to pull out a ball with the low, low number of 14 billion. If you don’t like a strictly infinite lotto, consider instead a Vast one. The odds of pulling a number as low as 14 billion in a fair lottery from one to a Vastness are far less than one in a googolplex.[3]

Cosmologists don’t ordinarily deny that there might have been something before the Big Bang. Plenty of theories posit that the Big Bang originated from something prior, though there’s no consensus on these theories.[4] If we assume that somehow the Big Bang was brought into existence by a prior process, and that process in turn had something prior to it, and so on, then the Copernican lottery problem disappears. We’re in the middle of a series, not at the beginning of one. Maybe Big Bangs can be seeded in one way or another. Heck, maybe the whole observable universe is a simulation nested in a whole different spatial reality (Chapters 4 and 5) or is itself a very large fluctuation from a prior heat-death state.

Suppose, then, that we are in the middle of an infinite series rather than at the beginning of one, the consequence of accepting both Copernican mediocrity and an infinite future. If so, and if we can trace chains of causation or contingency infinitely backward up the line, and if a few other assumptions hold, then eventually we ought to find our puppeteers – entities who act with the intention of causing people to do what we are now doing and whose intentions are effective in the sense that had they not performed those actions, we would not be here doing those things. Suppose you are knitting your brow right now. Somewhere in the infinite past, there is a near-duplicate counterpart of you with the following properties: They are knitting their brow. They are doing so with the intention of initiating ripples that cause later counterparts of them to knit their brows. And you are just such a later counterpart, because among the events that led up to your knitting your brow, absent which you wouldn’t have knit your brow, was a ripple from that past counterpart.

We the authors of this chapter – Eric and Jacob – can work ourselves into the mood of finding this probable. An infinite cosmos is simpler, more elegant, and more consistent with standard cosmological theory; if it’s infinite, it’s probably infinite in all directions; and if it’s truly infinite in all directions, there will be bizarre consequences of that infinitude. Puppetry is one such consequence. We would not be so special as to be only puppeteers and never puppets. It seems only fair to our future puppets to acknowledge this.

--------------------------------------


[1] Compare this procedure with Sinhababu’s 2008 procedure for writing love letters between possible worlds. One advantage of our method over Sinhababu’s is that there actually is a causal connection.

[2] Here and throughout we bracket quibbles about ratios of infinitude by considering the limit of the ratio of counterparts with property A to counterparts with property B as the region of spacetime defined by your forward lightcone goes to infinity.

[3] Our reasoning here resembles the reasoning in the “Doomsday argument”, e.g., Gott 1993, according to which it’s highly unlikely that we’re very near the beginning of a huge run of cosmological observers. For a bit more detail, see Schwitzgebel 2022b. For another related perspective, see Huemer 2021.

[4] See notes 12 and 13 (in the full draft) for references. A note on terminology: “Prior” sounds kind of like “earlier” but is also more general, in that there’s a sense in which one thing can be ontologically prior to another, or ground it, or give rise to it, even if the one doesn’t temporally precede the other (e.g., an object is prior to its features, or noumena are prior to phenomena [see Chapter 5 of the book draft]). Possibly, temporal priority is a relationship that holds only among events within our post-Big Bang universe, while whatever gave rise to the Big Bang stands in some broader priority relationship to us.

[image source]

Thursday, May 26, 2022

After a Taste-Bud Hiatus, Experiencing Candy Like a Six-Year-Old

I used to blog quite a bit about weird aspects of sensory experience, back when my central research interest concerned the vagaries of introspection and the strange things people say about their streams of experience. (See a few sample posts; some articles; two books.) I thought I'd share another today -- something striking to me -- though actually not that weird, I suppose.

About a month ago, I accidentally bit down hard on an unpopped popcorn kernel, "bruising" the teeth on the left side of my mouth. (Yes, that's a thing. My dentist tells me nothing is broken or cracked; it just needs time to heal.) It was remarkably painful to chew on that side, and for weeks I chewed entirely on the right side of my mouth, barely even letting food drift to the left side. Last week, I resumed gently chewing on the left again -- just soft things, carefully, experimentally. Having dessert one night, I was suddenly struck by how much sweeter the dessert tasted on the left side than on the right side. Remarkably sweeter. Different enough that the fact really jumped out at me, though I wasn't at all expecting or looking for it.

I was eating an "orange slice" candy. You know, one of these guys:

On the right side of my mouth, the candy was blandly sweet with a simple citrus flavor. On the left side, I experienced the candy as vividly sweet, zinging with orange. The contrast persisted as I moved the mass of candy around in my mouth. When I shifted the bulk to the right, it seemed to instantly lose flavor, like a piece of gum chewed too long. When I shifted it back to the left, the flavor brightened again.

I experimented with other candies over the next few days: lemon and lime slices, chocolate, peppermint sticks. I consistently found the left side sweeter than the right -- and not only sweeter, but also more vividly flavored in other ways. However, I found no similarly noticeable difference for savory flavors, tea, pure salt, straight lemon juice, or many of the other things I have eaten since. The effect was mostly or entirely limited to sweetness and the flavors associated with sweet things.

I remember loving fruit slice candies when I was six. I would savor them for fifteen minutes, driving my parents nuts as they waited for me at the end of meals. (Now I tend to wolf down desserts: See my defense of dessert-wolfing.) The flavor of the orange slice resonated with my memories of youth. It was like my taste buds -- or the related sensory regions in my brain -- were six years old again. It seemed that the orange slice tasted to me now, on the left side of my mouth, in that amazing way it had tasted to me as a child, and then when I shifted it to the right side, it fell back into the blandness to which I have since become accustomed.

I'm not sure why the effect was limited to sweetness. In general, taste sensitivity declines with age, but the decline seems to be as strong for salty, savory, and bitter tastes as for sweet ones. My taste experience has probably dulled in multiple respects. Why sweetness alone should rejuvenate, I have no idea -- and even more confusingly, it wasn't simple sweetness only but also the more complex flavors tangled up with sweetness, such as chocolate and sweet orange.

A week later, I find that the effect is still present, though diminishing. I want the vivid sweetness back! The experience acutely reminds me of how much of what we vividly experience recedes into a fog with aging. A comparison point: I remember getting glasses as a teenager after years of slightly blurry vision and loving how sharp the world became. Now even the best prescription I can find will never make the world that sharp. I also feel that when I read fiction I don't imagine the scenes quite as vividly as I used to.

Middle age has its compensating advantages. I'm mellower, more settled. Even sensorily, things are open to me that weren't before: Presumably because of my diminished taste sensitivity, I can enjoy bitter coffee and sharp cheese. But the hiatus from left-side chewing, followed by some fleeting new candy raptures, has given a sharp new tang to my thoughts about sensory loss with age.

[image source]

Tuesday, May 17, 2022

Our Infinite Predecessors: Flipping the Doomsday Argument on Its Head

The Doomsday Argument purports to show, probabilistically, that humanity will not endure for much longer: Likely, at least 5% of the humans who will ever live have already lived. If 60 billion have lived so far, then probably no more than 1.2 trillion humans will live, ever. (This gives us a maximum of about eight more millennia at the current birth rate of 140 million per year.) According to this argument, the odds that humanity colonizes the galaxy with many trillions of inhabitants are vanishingly small.
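(The arithmetic behind those figures, spelled out as a quick back-of-the-envelope check:)

```python
past_humans = 60e9        # humans who have lived so far
early_fraction = 0.05     # suppose we're among at least the first 5%
births_per_year = 140e6   # current global birth rate

max_total = past_humans / early_fraction  # 1.2 trillion humans, ever
remaining = max_total - past_humans       # 1.14 trillion still to come
print(remaining / births_per_year)        # ~8,143 years: about eight millennia
```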

Why think we are doomed? The core idea, as developed by Brandon Carter (see p. 143), John Leslie, Richard Gott, and Nick Bostrom, is this. It would be statistically surprising if we -- you and I and our currently living friends and relatives -- were very nearly the first human beings ever to live. Therefore, it's unlikely that we are in fact very nearly the first human beings ever to live. But if humanity continues on for many thousands of years, with many trillions of future humans, then we would in fact be very nearly the first human beings ever to live. Thus, we can infer, with high probability, that humanity is doomed before too much longer.

Consider two hypotheses: On one hypothesis, call it Endurance, humanity survives for many millions more years, and many, many trillions of people live and die. On the other, call it Doom, humanity survives for only a few more centuries or millennia. On Endurance, we find ourselves in a surprising and unusual position in the cosmos -- very near the beginning of a very long run! This, arguably, would be as strange and un-Copernican as finding ourselves in some highly unusual spatial position, such as very near the center of the cosmos. The longer the run, the more surprisingly unusual our position. In contrast, Doom suggests that we are in a rather ordinary temporal position, roughly the middle of the pack. Thus, the reasoning goes, unless there's some independent reason to think Endurance much more plausible than Doom, we ought to conclude that Doom is likely.

Let me clarify by showing how Doomsday-style reasoning would work in a few more intuitive cases. But first, here's an inverted mushroom cloud to symbolize that I'll soon be flipping the argument over.

Imagine two lotteries. One has ten numbers, the other a hundred numbers. You don't know which one you've entered, but you go ahead and draw a number. You discover that you have ticket #6. Upon finding this out, you ought to guess that you probably drew from the ten-number lottery rather than the hundred-number lottery, since #6 would be a surprisingly low draw in a hundred-number lottery. Not impossible, of course, just relatively unlikely. If your prior credence was split 50-50 between the two lotteries, you can use Bayesian inference to derive a posterior credence of about 91% that you are in the ten-number lottery, given that you drew a number in the one-to-ten range. (Of course, if you have other evidence that makes it very likely that you were in the hundred-number lottery, then you can reasonably retain that belief even after drawing a relatively low number.)
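(The 91% figure is a one-line Bayesian update on that 50-50 prior, as this worked check shows:)

```python
prior_ten, prior_hundred = 0.5, 0.5        # 50-50 prior over the two lotteries
like_ten, like_hundred = 1 / 10, 1 / 100   # P(ticket #6 | each lottery)

posterior_ten = (prior_ten * like_ten) / (
    prior_ten * like_ten + prior_hundred * like_hundred
)
print(round(posterior_ten, 3))  # 0.909, i.e., about 91%
```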

Alternatively, imagine that you're one of a hundred people who have been blindfolded and imprisoned. You know that 90% of the prison cells are on the west side of town and 10% are on the east side. Your blindfold is removed, but you don't see anything that reveals which side of town you're on. Nonetheless, you ought to think it's likely you're on the west side of town.

Or imagine that you know that 10,000 people, including you, have been assigned in some order to view a newly discovered painting by Picasso, but you don't know in what order people actually viewed the painting. Exiting the museum, you should think it unlikely that you were either among the very first or very last.

The reasoning of the Doomsday argument is intended to be analogous: If you don't know where you're temporally located in the run of humans, you ought to assume it's unlikely that you're in the unusual position of being among the first 5% (or 1% or, amazingly, .001%).

Now various disputes and seeming paradoxes arise with respect to such probabilistic approaches to "self-location" (e.g., Sleeping Beauty), and a variety of objections have been raised to Doomsday Argument reasoning in particular (Leslie's book has a good discussion; see also here and here). But let's bracket those objections. Grant that the reasoning is sensible. Today I want to add a pair of observations that have the potential to flip the Doomsday Argument on its head, even if we accept the general style of reasoning.

Observation 1: The argument assumes that only about 60 billion humans have existed so far, rather than vastly many more. Of course this seems plausible, but as we will see there might be reason to reject it.

Observation 2: Standard physical theory appears to suggest that the universe will endure infinitely long, giving rise to infinitely many future people like us.

There isn't room here to get into depth on Observation 2. I am collaborating with a physicist on this issue now; draft hopefully available soon. But the main idea is this. There's no particular reason to think that the universe has a future temporal edge, i.e., that it will entirely cease. Instead, standard physical theory suggests that it will enter permanent "heat death", a state of thin, high-entropy chaos. However, there will from time to time be low-probability events in which people, or even much larger systems, spontaneously congeal from the chaos, by freak quantum or thermodynamical chance. There's no known cap on the size of such spontaneous fluctuations, which could even include whole galaxies full of evolving species, eventually containing all non-zero-probability life forms. (See the literature on Boltzmann brains.) Perhaps there will even be new cosmic inflations, for example, caused by black holes or spontaneous fluctuations. Vanilla cosmology thus appears to imply an infinite future containing infinitely many people like us, to any arbitrarily specified degree of similarity, perhaps in very large chance fluctuations or perhaps in newly nucleated "pocket universes".

Now if we accept this, then by reasoning similar to that of the Doomsday Argument, we ought to be very surprised to find ourselves among the first 60 billion people like us, or living in the first 14 billion years of an infinitely existing cosmos. We'd be among the first 60 billion out of infinity. A tiny chance indeed! On Doomsday-style reasoning, it would be much more reasonable, if we think the future is infinite, to think that the past must be infinite too. Something existed before the Big Bang, and that something contained observers like us. That would make us appropriately mediocre. Then, in accordance with the Copernican Principle, we'd be in an ordinary location in the cosmos, rather than the very special location of being within 14 billion years of the beginning of an infinite duration.

The situation can be expressed as follows. Doomsday reasoning implies the following conditional statement:

Conditional Doom: If only 60 billion humans, or alternatively human-like creatures, have existed so far, then it's unlikely that many trillions more will exist in the future.

If we take as a given that only 60 billion have existed so far, we can apply modus ponens (concluding Q from P and if P then Q) and conclude Doom.

But alternatively, if we take as a given that (at least) many trillions will exist in the future, we can apply modus tollens (concluding not-P from not-Q and if P then Q) and conclude that many more than 60 billion have already existed.

The modus ponens version is perhaps more plausible if we think in terms of our species, considered as a local group of genetically related animals on Earth. But if we think in terms of humanlike creatures rather than specifically our local species, and if we accept an infinite future likely containing many humanlike creatures, then the modus tollens version becomes more plausible, and we can conclude a long past as well as a long future, full of humanlike creatures extending infinitely forward and back.

Call this the Infinite Predecessors argument. From infinite successors and Doomsday-style self-location reasoning, we can conclude infinite predecessors.

---------------------------------------------

Related:

Almost Everything You Do Causes Almost Everything (Mar 18, 2021).

My Boltzmann Continuants (Jun 6, 2013).

How Everything You Do Might Have Huge Cosmic Significance (Nov 29, 2016).

And Part 4 of A Theory of Jerks and Other Philosophical Misadventures.

[image adapted from here]

Thursday, May 12, 2022

Draft Good Practice Guide: Sexual Harassment, Caregivers, and Student-Staff Relationships

The Demographics in Philosophy project is seeking feedback on a proposed "Good Practice" guide. Help us make this document better!

[cross-posted at Daily Nous]

This is part one of several.

--------------------------------------------------------

Good Practice Policy: Sexual Harassment

Sexual harassment can be carried out by persons of any gender, and persons of any gender may be victims. Although harassment of students by staff is often the focus of discussions, departments need to be aware that power differentials of this sort are not essential to sexual harassment. Sexual harassment may occur between any members of the department. Departments should attend equally seriously to harassment committed both by students and by staff, as both can have dramatically negative effects on particular individuals and on departmental culture. Departments should also be aware that sexual harassment may interact with and be modified by issues of race, ethnicity, religion, class and disability status.

There is good evidence that the proportion of incidents of sexual harassment that get reported, even informally, in philosophy departments is very low, and that this has created serious problems for some staff and students. We therefore urge even those staff who do not believe that harassment is a problem in their own departments to give serious consideration to the recommendations below.

US law defines ‘sexual harassment’ as unwanted sexual advances, requests for sexual favors, and other verbal or physical conduct of a sexual nature when:

Submission to such conduct is made either explicitly or implicitly a term or condition of an individual’s employment;

Submission to or rejection of such conduct by an individual is used as a basis for employment decisions affecting such individual; or

Such conduct has the purpose or effect of unreasonably interfering with an individual’s work performance or creating an intimidating, hostile, or offensive working environment.

Institutional definitions of ‘sexual harassment’ differ greatly from one another. Some focus solely on sexual conduct, while others also include non-sexual harassment related to sex.

While departments need to attend to their institution’s definition of ‘sexual harassment’, and to make use of institutional procedures where appropriate, this is not the end of their responsibilities. Where sexist or sexual behavior is taking place that contributes to an unwelcoming environment for underrepresented groups, departments should act whether or not formal procedures are possible or appropriate.

We note that sexual harassment in philosophy can be present even when it does not meet the formal definitions above. Sexual harassment involves conduct of a sexual nature with the purpose or effect of violating the dignity of a person, or creating an intimidating, hostile, degrading, humiliating or offensive environment. This includes both harassment related to sex, sexual orientation, or gender identity (e.g. hostile and dismissive though not sexual comments about women, gay, lesbian, transgender, or nonbinary people) and harassment of a sexual nature. Note that sexual harassment is not limited to one-to-one interactions but may include, for example, general comments made in lectures or seminars that are not aimed at an individual.

General Suggestions

1. All members of the department—undergraduates, graduate students, academic and non-academic staff—should be made aware of the regulations that govern sexual harassment in their university.

a. In particular, they should know the university’s definition of ‘sexual harassment’ and whom to contact in possible cases of sexual harassment.

b. They should also know who has standing to file a complaint (in general, and contrary to widespread belief, the complainant need not be the victim).

c. They should be made aware of both formal and informal measures available at their university.

d. Departments may wish to consider including this information in induction sessions for both students and staff, and in training for teaching assistants.

Where the University or Faculty has a list of Harassment Contacts (see e.g. www.southampton.ac.uk/diversity/how_we_support_diversity/harassment_contacts.page), all staff—including non-academic staff—and students should be made aware of it. If no such list exists, the department should consider suggesting this approach to the university. It is very important for department members to be able to seek advice outside their department.

2. All members of staff should read the advice given at www.oed.wisc.edu/sexualharassment/guide.html on how to deal with individuals who approach them to discuss a particular incident.

3. All of the information listed above should be made permanently available to staff (including non-academic staff) and students, e.g. through a stable URL and/or staff and student handbooks, rather than only in the form of a one-off email communication.

4. The department head and others with managerial responsibilities (such as Directors of Graduate and Undergraduate Studies) should ensure that they have full knowledge of university procedures regarding sexual harassment.

Departmental Culture

1. Seriously consider the harms of an atmosphere rife with dismissive or sexualizing comments and behavior, and address these should they arise. (It is worth noting, however, that the right way to deal with this may vary.)

2. Cultivate—from the top down—an atmosphere in which maintaining a healthy climate for all department members, especially those from under-represented groups and including non-academic staff, is considered everyone’s responsibility. What this entails will vary from person to person and situation to situation. But at a minimum it includes a responsibility to reflect on the consequences (including unintended consequences) of one’s own behavior towards individuals from underrepresented groups. It may also include a responsibility to intervene, either formally or informally. (For more on the range of responses available, see Saul, op. cit.)

3. Ensure, as far as possible, that those raising concerns about sexual harassment are protected against retaliation.

4. Offer bystander training either to staff, or to staff and graduate students, if this is available or can be made available by the institution. This can help bystanders to feel comfortable intervening when they witness harassing behavior. (See the Good Practice website for more information.)

--------------------------------------------------------

Good Practice Policy: Caregivers

Staff members and students with caregiving responsibilities—whether parental or other—face constraints on their time that others often do not. There are simple measures that departments can take to minimize the extent to which caregivers are disadvantaged.

General Suggestions

Departments should adopt an explicit policy concerning caregivers, which covers as many of the following points as is practically possible:

1. Schedule important events, as far as possible, between 9 and 5 (the hours when childcare is more readily available). When an event has to be scheduled outside of these hours, give plenty of advance notice so that caregivers can make the necessary arrangements. Consider using online scheduling polls to find times that work for as many as possible.

2. Seriously consider requests from staff of any background for part-time and flexible working. (This is largely, but not exclusively, an issue for caregivers—requests from non-caregivers should also be taken seriously.) Also be receptive, as far as possible, to requests for unpaid leave.

3. As far as possible, account for caregiving commitments when scheduling teaching responsibilities.

4. Be aware that students, not just staff, may have caregiving responsibilities. Have a staff contact person for students who are caregivers. Take student requests for caregiving accommodations seriously.

5. Ensure that students and staff are made fully aware of any university services for caregivers.

6. Ensure that staff have an adequate understanding of what caregiving involves. (E.g., don’t expect a PhD student to make lots of progress on their dissertation while on parental leave.)

7. Ensure that parental leave funds provided by the university are actually used to cover for parental leave, rather than being absorbed into department or faculty budgets.

8. Those involved in performance evaluations should be fully informed about current policies regarding output reduction for caregivers and take caregiving responsibilities into account where possible.

--------------------------------------------------------

Good Practice Policy: Staff-Student Relationships

Romantic or sexual relationships that occur in the student-teacher context or in the context of supervision, line management and evaluation present special problems. The difference in power, and the respect and trust that are often present between a teacher and student, supervisor and subordinate, or senior and junior colleague in the same department or unit, make these relationships especially vulnerable to exploitation. They can also have unfortunate unintended consequences.

Such relationships can also generate perceived, and sometimes real, inequalities that affect other members of the department, whether students or staff. For example, a relationship between a senior and junior member of staff may raise issues concerning promotion, granting of sabbatical leave, and allocation of teaching. This may happen even if no preferential treatment actually occurs, and even if the senior staff member in question is not directly responsible for such decisions. In the case of staff-student relationships, questions may arise concerning preferential treatment in seminar discussions, marking, decisions concerning graduate student funding, and so on. Again, these questions may well emerge and be of serious concern to other students even if no preferential treatment actually occurs.

At the same time, we recognize that such relationships do indeed occur, that they need not be damaging, and that they may be both significant and long-lasting.

We suggest that departments adopt the following policy with respect to the behavior of members of staff at all levels, including graduate student instructors.

Please note that the recommendations below are not intended to be read legalistically. Individual institutions may have their own policies, and these will constitute formal requirements on staff and student behavior. The recommendations below are intended merely as departmental norms, and to be adopted only where not in conflict with institutional regulations.

General Suggestions

The department’s policy on relationships between staff and students (and between staff) should be clearly advertised to all staff and students in a permanent form, e.g. intranet or staff/student handbooks. The policy should include clear guidance about whom students or staff might consult in the first instance if problems (real or perceived) arise.

Undergraduate Students

1. Staff and graduate student teaching assistants should be informed that relationships between teaching staff and undergraduates are very strongly discouraged, for the reasons given above.

2. If such a relationship does occur, the member of staff in question should:

a. inform a senior member of the department—where possible, the department head—as soon as possible;

b. withdraw from all small-group teaching involving that student (in the case of teaching assistants, this may involve swapping tutorial groups with another TA), unless practically impossible;

c. withdraw from the assessment of that student, even if anonymous marking is used;

d. withdraw from writing references and recommendations for the student in question.

3. It should be made clear to staff and students that if an undergraduate student has entered into a relationship with a member of staff (including a TA), while the responsibility for taking the above steps lies with the member of staff concerned, the student is equally entitled to report the relationship to another member of staff (e.g. the Head of Department, if appropriate), and to request that the above steps be taken.

Graduate Students

1. Staff and graduate students should be informed that relationships between academic members of teaching staff and graduate students are very strongly discouraged, especially between a supervisor and a graduate supervisee.

2. If such a relationship occurs between a member of staff and a graduate student, the member of staff should:

a. inform a senior member of staff—where possible, the department head—as soon as possible;

b. withdraw from supervising the student, writing letters of recommendation for them, and making any decisions (e.g. distribution of funding) where preferential treatment of the student could in principle occur;

c. withdraw from all small-group teaching involving that student, unless practically impossible;

d. withdraw from the assessment of that student, even if anonymous marking is used.

3. The department should, as much as possible, encourage a practice of full disclosure where such relationships continue. This avoids real or perceived conflicts of interest, as well as embarrassment for others.

Academic Staff

Between members of academic staff where there is a large disparity in seniority (e.g. Associate Professor/Lecturer; Head of Department/Assistant Professor):

1. Disclosure of any such relationship should be strongly encouraged, in order to avoid real or perceived conflicts of interest.

2. Any potential for real or perceived conflicts of interest should be removed by, e.g., removal of the senior member of staff from relevant decision-making (e.g. promotions, appointment to permanent positions).

Friday, May 06, 2022

Everything Is Valuable

A couple of weeks ago, I was listening to a talk by Henry Shevlin titled "Which Animals Matter?" The apparent assumption behind the title is that some animals don't matter -- not intrinsically, at least. Not in their own right. Maybe jellyfish (with neurons but no brains) or sponges (without even neurons) matter to some extent, but if so it is only derivatively, for example because of what they contribute to ecosystems on which we rely. You have no direct moral obligation to a sponge.

Hearing this, I was reminded of a contrasting view expressed in a famous passage by the 16th century Confucian philosopher Wang Yangming:

[W]hen they see a child [about to] fall into a well, they cannot avoid having a mind of alarm and compassion for the child. This is because their benevolence forms one body with the child. Someone might object that this response is because the child belongs to the same species. But when they hear the anguished cries or see the frightened appearance of birds or beasts, they cannot avoid a sense of being unable to bear it. This is because their benevolence forms one body with birds and beasts. Someone might object that this response is because birds and beasts are sentient creatures. But when they see grass or trees uprooted and torn apart, they cannot avoid feeling a sense of sympathy and distress. This is because their benevolence forms one body with grass and trees. Someone might object that this response is because grass and trees have life and vitality. But when they see tiles and stones broken and destroyed, they cannot avoid feeling a sense of concern and regret. This is because their benevolence forms one body with tiles and stones (in Tiwald and Van Norden, eds., 2014, pp. 241-242).

My aim here isn't to discuss Wang Yangming interpretation, nor to critique Shevlin (whose view is more subtle than his title suggests), but rather to express a thought broadly in line with Wang Yangming and with which I find myself sympathetic: Everything is valuable. Nothing exists to which we don't owe some sort of moral consideration.

When thinking about value, one of my favorite exercises is to consider what I would hope for on a distant planet -- one on the far side of the galaxy, for example, blocked by the galactic core, which we will never see and never have any interaction with. What would be good to have going on over there?

What I'd hope for, and what I'd invite you to join me in hoping for, is that it not just be a sterile rock. I'd hope that it has life. That would be, in my view, a better planet -- richer, more interesting, more valuable. Microbial life would be cool, but even better would be multicellular life, weird little worms swimming in oceans. And even better than that would be social life -- honeybees and wolves and apes. And even better would be linguistic, technological, philosophical, artistic life, societies full of alien poets and singers, scientists and athletes, philosophers and cosmologists. Awesome!

This is part of my case for thinking that human beings are pretty special. We're central to what makes Earth an amazing planet, a planet as amazing as that other one I've just imagined. The world would be missing something important, something that makes it rich and wonderful, if we suddenly vanished.

Usually I build the thought experiment up to us at the pinnacle (that is, the pinnacle so far; maybe we'll have even more awesome descendants); but also I can strip it down, in the pattern of Wang Yangming. A distant planet without us but with wolves and honeybees would still be valuable. Without the wolves and honeybees but with the worms, it also would still be valuable. With only microbes, it would still have substantial value -- after all, it would have life. Let's not forget how intricately amazing life is.

But even if there's no life -- even if it's a sterile rock after all -- well, in my mind, that's better than pure vacuum. A rock can be beautiful, and beauty has value even if there's no one to see it. Alternatively, even if we're stingy about beauty and regard the rock as a neutral or even ugly thing, well, mere existence is something. It's better that there's something rather than nothing. A universe of things is better than mere void. Or so I'd say, and so I invite you also to think. (It's hard to know how to argue for this other than simply to state it with the right garden path of other ideas around it, hoping that some sympathetic readers agree.)

I now bring this thinking back to Earth. Looking at the pebbles on the roof below my office window, I find myself feeling that they matter. Earth is richer for their existence. The universe is richer for their existence. If they were replaced with vacuum, that would be a loss. (Not that there isn't something cool about vacuums, too, in their place.) Stones aren't high on my list of valuable things that I must treat with care, but neither do I feel that I should be utterly indifferent to their destruction. I'm not sure my "benevolence forms one body" with the stones, but I can get into the mood.

[image source]

Thursday, April 28, 2022

Will Today's Philosophical Work Still Be Discussed in 200 Years?

I'm a couple days late to this party. Evidently, prominent Yale philosopher Jason Stanley precipitated a firestorm of criticism on Twitter by writing:

I would regard myself as an abject failure if people are still not reading my philosophical work in 200 years. I have zero intention of being just another Ivy League professor whose work lasts as long as they are alive.

(Stanley has since deleted the tweet, but he favorably retweeted a critique that discusses him specifically, so I assume he wouldn't object to my also doing so.)

Now "abject failure" is too strong -- Stanley has a tendency toward hyperbole on Twitter -- but I think it is entirely reasonable for him to aspire to create philosophical work that will still be read in 200 years and to be somewhat disheartened by the prospect that he will be entirely forgotten. Big-picture philosophy needn't aim only at current audiences. It can aspire to speak to future generations.

How realistic is such an aim? Well, first, we need to evaluate how likely it is that history of philosophy will be an active discipline in 200 years. The work of our era -- Stanley and others -- will of course be regarded as historical by then. Maybe there will be no history of philosophy. Humanity might go extinct or collapse into a post-apocalyptic dystopia with little room for recondite historical scholarship. Alternatively, humanity or our successors might be so cognitively advanced that they regard us early 21st century philosophers as the monkey-brained advocates of simplistic views that are correct only by dumb luck if they are correct at all.

But I don't think we need to embrace dystopian pessimism; and I suspect that even if our descendants are super-geniuses, there will remain among them some scholars who appreciate the history of 21st century thought, at least in an antiquarian spirit. ("How fascinating that our monkey-brained ancestors were able to come up with all of this!") And of course another possibility is that society proceeds more or less on its current trajectory. Economic growth continues, perhaps at a more modest rate, and with it a thriving global academic culture, hosting ever more researchers of all stripes, with historians in India, Indonesia, Illinois, and Iran specializing in ever more recondite subfields. It's not unreasonable, then, to guess that there will be historians of philosophy in 200 years.

What will they think of our era? Will they study it at all? It seems likely they will. After all, historians of philosophy currently study every era with a substantial body of written philosophy, and as academia has grown, scholars have been filling in the gaps between our favorite eras. I have argued elsewhere that the second half of the 20th century might well be viewed as a golden age of philosophy -- a flourishing of materialism, naturalism, and secularism, as 19th- and early 20th-century dualism and idealism were mostly jettisoned in favor of approaches more straightforwardly grounded in physics and biology. You might not agree with that conjecture. But I think you should still agree that at least in terms of the quantity of work, the variety of topics explored, and the range of views considered, the past fifty years compares favorably with, say, the early medieval era, and indeed probably pretty much any relatively brief era.

So I don't think historians will entirely ignore us. And given that English is now basically the lingua franca of global academia (for better or worse), historians of our era will not neglect English-language philosophers.

Who will be read? The historical fortunes of philosophers rise and fall. Gottlob Frege and Friedrich Nietzsche didn't receive much attention in their day, but are now viewed as historical giants. Christian Wolff and Henri Bergson were titans in their lifetimes but are little read now. On the other hand, the general tendency is for influential figures to continue to be seen as influential, and we haven't entirely forgotten Wolff and Bergson. A good historian will recognize at least that a full understanding of the eras in which Wolff and Bergson flourished requires appreciating the impact of Wolff and Bergson.

Given the vast number of philosophers writing today and in recent decades, an understanding of our era will probably focus less on understanding the systems of a few great figures and more on understanding the contributions of many scholars to prominent topics of debate -- for example, the rise of materialism, functionalism, and representationalism in philosophy of mind (alongside the major critiques of those views); or the division of normative ethics into consequentialist, deontological, and virtue-ethical approaches. A historian of our era will want to understand these things. And that will require reading David Lewis, Bernard Williams, and other leading figures of the late 20th century as well as, probably, David Chalmers and Peter Singer among others writing now.

As I imagine it, scholars of the 23rd century will still have archival access to our major books and journals. Specialists, then, will thumb through old issues of Nous and Philosophical Review. Some will be drawn to minor scholars who are in dialogue with the leading figures of our era. They might find some of this work insightful -- a valuable critique, perhaps, of the views of the leading figures, maybe prefiguring positions that are more prominently and thoroughly developed by better-known subsequent scholars.

It is not unreasonable, I think, for Stanley to aspire to be among the leading political philosophers and philosophers of language of our era, who will still be read by some historians and students, and perhaps still viewed as having some good ideas worth continued discussion and debate.

For my own part, I doubt I will be viewed that way. But I still fantasize that some 23rd-century specialist in the history of philosophy of our era will stumble across one of my books or articles and think, "Hey, some of the work of this mostly-forgotten philosopher is pretty interesting! I think I'll cite it in one of my footnotes." I don't write mainly with that future philosopher in mind, but it still pleases me to think that my work might someday provoke that reaction.

[image generated by wombo.art]

Friday, April 22, 2022

Let's Hope We Don't Live in a Simulation

reposting from the Los Angeles Times, where it appears under a different title[1]

------------------------------------------

There’s a new creation story going around. In the beginning, someone booted up a computer. Everything we see around us reflects states of that computer. We are artificial intelligences living in an artificial reality — a “simulation.”

It’s a fun idea, and one worth taking seriously, as people increasingly do. But we should very much hope that we’re not living in a simulation.

Although the standard argument for the simulation hypothesis traces back to a 2003 article from Oxford philosopher Nick Bostrom, 2022 is shaping up to be the year of the sim. In January, David Chalmers, one of the world’s most famous philosophers, published a defense of the simulation hypothesis in his widely discussed new book, Reality+. Essays in mainstream publications have declared that we could be living in virtual reality, and that tech efforts like Facebook’s quest to build out the metaverse will help prove that immersive simulated life is not just possible but likely — maybe even desirable.

Scientists and philosophers have long argued that consciousness should eventually be possible in computer systems. With the right programming, computers could be functionally capable of independent thought and experience. They just have to process enough information in the right way, or have the right kind of self-representational systems that make them experience the world as something happening to them as individuals.

In that case, the argument goes, advanced engineers should someday be able to create artificially intelligent, conscious entities: “sims” living entirely in simulated environments. These engineers might create vastly many sims, for entertainment or science. And the universe might have far more of these sims than it does biologically embodied, or “real,” people. If so, then we ourselves might well be among the sims.
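The core of that reasoning can be put as a simple ratio -- a sketch; fuller versions, such as Bostrom's, add assumptions about treating observers indifferently:

P(we are sims) ≈ (number of sims) / (number of sims + number of biologically embodied people).

If sims vastly outnumber biologically embodied people, this ratio approaches 1.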

The argument requires some caveats. It’s possible that no technological society ever can produce sims. Even if sims are manufactured, they may be rare — too expensive for mass manufacture, or forbidden by their makers’ law.

Still, the reasoning goes, the simulation hypothesis might be true. It’s possible enough that we have to take it seriously. Bostrom estimates a 1-in-3 chance that we are sims. Chalmers estimates about 25%. Even if you’re more doubtful than that, can you rule it out entirely? Any putative evidence that we aren’t in a sim — such as cosmic background radiation “proving” that the universe originated in a Big Bang — could, presumably, be simulated.

Suppose we accept this. How should we react?

Chalmers seems unconcerned: “Being in an artificial universe seems no worse than being in a universe created by a god” (p. 328). He compares the value of life in a simulation to the value of life on a planet newly made inhabitable. Bostrom acknowledges that humanity faces an “existential risk” that the simulation will shut down — but that risk, he thinks, is much lower than the risk of extinction by a more ordinary disaster. We might even relish the thought that the cosmos hosts societies advanced enough to create sims like us.

In simulated reality, we’d still have real conversations, real achievements, real suffering. We’d still fall in and out of love, hear beautiful music, climb majestic “mountains” and solve the daily Wordle. Indeed, even if definitive evidence proved that we are sims, what — if anything — would we do differently?

But before we adopt too relaxed an attitude, consider who has the God-like power to create and destroy worlds in a simulated universe. Not a benevolent deity. Not timeless, stable laws of physics. Instead, basically gamers.

Most of the simulations we run on our computers are games or scientific studies. They run only briefly before being shut down. Our low-tech sims live partial lives in tiny worlds, with no real history or future. The cities of Sim City are not embedded in fully detailed continents. The simulated soldiers dying in war games fight for causes that don’t exist. They are mere entertainments to be observed, played with, shot at, surprised with disasters. Delete the file, uninstall the program, or recycle your computer and you erase their reality.

But I’m different, you say: I remember history and have been to Wisconsin. Of course, it seems that way. The ordinary citizens of Sim City, if they were somehow made conscious, would probably be just as smug. Simulated people could be programmed to think they live on a huge planet with a rich past, remembering childhood travels to faraway places. Their having these beliefs in fact makes for a richer simulation.

If the simulations that we humans are familiar with reveal the typical fate of simulated beings, long-term sims are rare. Alternatively, if we can’t rely on the current limited range of simulations as a guide, our ignorance about simulated life runs even deeper. Either way, there are no good grounds for confidence that we live in a large, stable simulation.

Taking the simulation hypothesis seriously means accepting that the creator might be a sadistic adolescent gamer about to unleash Godzilla. It means taking seriously the possibility that you are alone in your room with no world beyond, reading a fake blog post, existing only as a short-lived subject or experiment. You might know almost nothing about reality beyond and beneath the simulation. The cosmos might be radically different from anything you could imagine.

The simulation hypothesis is wild and wonderful to contemplate. It’s also radically skeptical. If we take it seriously, it should undermine our confidence about the past, the future and the existence of Milwaukee. What or whom can we trust? Maybe nothing, maybe no one. We can only hope our simulation god is benevolent enough to permit our lives to continue awhile.

Really, we ought to hope the theory is false. A large, stable planetary rock is a much more secure foundation for reality than bits of a computer program that can be deleted at a whim.

Postscript:

In Reality+, Chalmers argues against the possibility that we live in a local or a temporary simulation on grounds of simplicity (pp. 442-447). I am not optimistic that this response succeeds. In general, simplicity arguments against skepticism tend to be underdeveloped and unconvincing -- in part because simplicity itself is complex to evaluate (see my paper with Alan T. Moore, "Experimental Evidence for the Existence of an External World"). And more specifically, it's not clear why it would be easier or simpler to create a giant simulated world than to create a small simulation with fake indicators of a giant world -- perhaps only enough indicators to effectively fool us for the brief time we exist or on the relatively few tests we run. (And plausibly, our creators might be able to control or predict what thoughts we have or tests we will run and thus create exactly the portions of reality that they know we will examine.) Continuing the analogy from Sim City, our current sims are more easily constructed if they are small, local, and brief, or if they are duplicated off a template, than if each is a giant, unique run of a whole universe from the beginning. I see no reason why this fact wouldn't generalize to more sophisticated simulations containing genuinely conscious artificial intelligences.

------------------------------------------

[1] The Los Angeles Times titled the piece "Is life a simulation? If so, be very afraid". While I see how one might draw that conclusion from the piece, my own view is that we probably should react emotionally as we react to other small but uncontrollable risks -- not with panic, but rather with a slight shift toward favoring short-term outcomes over long-term ones. See my discussion in "1% Skepticism" and Chapter 4 of my book in draft, The Weirdness of the World. I have also added links, a page reference, and altered the wording for clarity in a few places.

[image generated from inputting the title of this piece into wombo.art's steampunk generator]