<b>The Splintered Mind</b>: reflections in philosophy of psychology, broadly construed (Eric Schwitzgebel)<p>
<b>Religious Believers Normally Do and Should Want Their Religious Credences to Align with Their Factual Beliefs</b> (March 14, 2024)<p>Next week (at the <a href="https://www.southernsociety.org/AnnualMeeting/">Southern Society for Philosophy and Psychology</a>) I'll be delivering comments on Neil Van Leeuwen's new book, <i><a href="https://www.hup.harvard.edu/books/9780674290334">Religion as Make-Believe</a></i>. Neil argues that many (most?) people don't actually "factually believe" the doctrines of their religion, even if they profess belief. Instead, the typical attitude is one of "religious credence", which is closer to pretense or make-believe.<p>
Below are my draft comments. Comments and further reactions welcome!<p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7esv4q23F4s-EQ0IfCk2dwmqjKMPQEXt3VDG4u7Jsgh2dmY4G1tWUYYIAQUsjEi_mgnw2_9dEfaXJY0XOZftI4o02ODEuSMfElsIXIt1yCZzopO3pOIVfYjlFrR4euxCUHogy08u4RRMkGk47cr0VpV0IZerdNEDcb6Qc720XPr8e_q66j_i-zw/s580/VanLeeuwenReligionAsMakeBelieve.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="320" data-original-height="580" data-original-width="380" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7esv4q23F4s-EQ0IfCk2dwmqjKMPQEXt3VDG4u7Jsgh2dmY4G1tWUYYIAQUsjEi_mgnw2_9dEfaXJY0XOZftI4o02ODEuSMfElsIXIt1yCZzopO3pOIVfYjlFrR4euxCUHogy08u4RRMkGk47cr0VpV0IZerdNEDcb6Qc720XPr8e_q66j_i-zw/s320/VanLeeuwenReligionAsMakeBelieve.jpg"/></a></div><p>
<b>Highlights of Van Leeuwen’s View.</b><p>
Neil distinguishes factual beliefs from religious credences. If you factually believe something – for example, that there’s beer in the fridge – that belief will generally have four functional features:<p>
(1.) It is <i>involuntary</i>. You can’t help but believe that there’s beer in the fridge upon looking in the fridge and seeing the beer.<p>
(2.) It is <i>vulnerable to evidence</i>. If you later look in the fridge and discover no beer, your belief that there is beer in the fridge will vanish.<p>
(3.) It <i>guides actions across the board</i>. Regardless of context, if the question of whether beer is in your fridge becomes relevant to your actions, you will act in light of that belief.<p>
(4.) It provides the <i>informational background governing other attitudes</i>. For example, if you imagine a beer-loving guest opening the fridge, you will imagine them also noticing the beer in there.<p>
Religious credences, Neil argues, have none of those features. If you “religiously creed” that God condemns masturbators to Hell, that attitude is:<p>
(1.) <i>Voluntary</i>. In some sense – maybe unconsciously – you choose to have this religious credence.<p>
(2.) <i>Invulnerable to evidence</i>. Factual evidence, for example, scientific evidence of the non-existence of Hell, will not cause the credence to disappear.<p>
(3.) <i>Guides actions only in limited contexts</i>. For example, it doesn’t prevent you from engaging in the condemned behavior in the way a factual belief of the same content presumably would.<p>
(4.) <i>Doesn’t reliably govern other attitudes</i>. For example, if you imagine others engaging in the behavior, it doesn’t follow that you will imagine God also condemning them.<p>
Although some people may factually believe some of their religious doctrines, Neil holds that commonly what religious people say they “believe” they in fact only religiously creed.<p>
Neil characterizes his view as a “two map” view of factual belief and religious credence. Many religious people have one picture of the world – one map – concerning what they factually believe, and a different picture of the world – a different map – concerning what they religiously creed. These maps might conflict: One might factually believe that Earth is billions of years old and religiously creed that it is less than a million years old. Such conflict need not be rationally troubling, since the attitudes are different. Compare: You might believe that Earth is billions of years old but imagine, desire, or assume for the sake of argument that it is less than a million years old. Although the contents of these attitudes conflict, there is no irrationality. What you imagine, desire, or assume for the sake of argument needn’t match what you factually believe. There are different maps, employed for different purposes. On Neil’s view, the same holds for religious credence.<p>
There’s much I find plausible and attractive in Neil’s view. In particular, I fully support the idea that if someone sincerely asserts a religious proposition but doesn’t generally act and react as if that proposition is true, they can’t accurately be described as believing, or at least fully believing, that proposition.<p>
However, I think it will be more productive to focus on points of disagreement.<p>
<b>First Concern: The Distinction Is Too Sharp.</b><p>
Neil generally speaks as though the attitudes of factual belief and religious credence split sharply into two distinct kinds. I’m not sure how much depends on this, but I’m inclined to think it’s a spectrum, with lots in the middle. Middling cases might especially include emotionally loaded attitudes where the evidence is not in-your-face compelling. Consider, for example, my attitude toward the proposition <i>my daughter has a great eye for fashion</i>. This is something she cares about, an important part of how she thinks of herself, and I sincerely and enthusiastically affirm it. Is this attitude voluntary or involuntary? Well, to some extent it is a reaction to evidence; but to some extent I suspect I hold on to it in part because I want to affirm her self-conception. Is it vulnerable to counterevidence? Well, maybe if I saw again and again signs of bad fashion taste, my attitude would disappear; but it might require more counterevidence than for an attitude in which I am less invested. It’s <i>somewhat</i> counterevidence resistant. Does it guide my inferences across contexts? Well, probably – but suppose she says she wants to pursue a career in fashion, the success of which would depend on her really having a great eye. Now I feel the bubbling up of some anxiety about the truth of the proposition, which I don’t normally feel in other contexts. It’s certainly not a religious credence, but it has some of those features, to some degree.<p>
Another case might be philosophical views. I’m pretty invested, for example, in my <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/AccountBel.htm">dispositionalist approach to belief</a>. Is my dispositionalism vulnerable to evidence? I’d like to hope that if enough counterevidence accumulated, I would abandon the view. But I also admit that my investment in the view likely makes my attitude somewhat counterevidence resistant. Did I choose it voluntarily? I remember being immediately attracted to it in graduate school, when two of my favorite interlocutors at the time, Victoria McGeer and John Heil, both described dispositionalism about belief as underappreciated. I felt its attractions immediately and perhaps in some sense chose it, before I had fully thought through the range of pro and con arguments. In general, I think, students quickly tend to find philosophical views attractive or repellent, even before they are familiar enough with the argumentative landscape to be able to effectively defend their preferred views against well-informed opponents; and typically (not always) they stick with the views that initially attracted them. Is this choice? Well, it’s more like choice than what happens to me when I open the fridge and simply see whether it contains beer. If religious credences are chosen, perhaps philosophical attitudes are in a similar sense partly chosen. There might be a social component, too: People you like tend to have this philosophical view; people you dislike tend to have this other one. As for widespread cognitive governance: There’s a small literature on the question of whether the views philosophers endorse in the classroom and in journal articles do, or do not, govern their choices outside of philosophical contexts. I suspect the answer is: partly.<p>
I also suspect that typical religious credences aren’t quite as voluntary, evidentially invulnerable, and context constrained as would be suggested by a sharp-lines picture. Someone who religiously creeds that God condemns masturbators might feel, to some extent correctly, that that position is forced upon them by their other commitments and might be delighted to find and respond to evidence that it is false. And although, as Neil notes (citing Dennett), they might engage in the activity in a way that makes little sense if they literally think they are risking eternal Hell, people with this particular credence might well feel nervous, guilty, and like they are taking a risk which they hope God will later forgive. If so, their credence affects their thinking in contexts beyond Sunday – and maybe generally when it’s relevant.<p>
<b>Second Concern: Much of Neil’s Evidence Can Be Explained by Weak Belief.</b><p>
Reading the book, I kept being niggled by the idea that much (but not all) of the evidence Neil marshals for his view could be explained if religious people factually believe what they say they believe, but don’t factually believe it with high confidence. On page 226, Neil articulates this thought as the “weak belief” explanation of the seeming irrationality of religious attitudes.<p>
Weak belief can’t be the whole story. Even a 60% confidence in eternal damnation ought to be enough to choke off virtually any behavior, so if the behavior continues, it can’t be a rational reaction to low confidence.<p>
Still, Neil makes much out of the fact that <a href="https://vineyardusa.org/">Vineyard</a> members who claim in religious contexts that a shock they experienced from their coffeemaker was a demonic attack will also repair their coffeemaker and describe the shock in a more mundane way in non-religious contexts (p. 78-80). People who engage in petitionary prayer for healing also go to see the doctor (p. 86-88). And people often confess doubt about their religion (p. 93-95, 124-125). Such facts are perhaps excellent evidence that such people don't believe with 100% confidence that the demon shocked them, that the prayer will heal them, and that the central tenets of their religion are all true. But these facts are virtually no evidence against the possibility that people have ordinary factual beliefs, held with perhaps 75% confidence, that the demon shocked them, that the prayer will heal, and that their religion is true. Their alternative explanations, backup plans, and expressions of anxious doubt might be entirely appropriate and rational manifestations of low-confidence factual belief.<p>
<b>Third Concern: If There Are Two Maps, Why Does It Feel Like They Shouldn’t Conflict?</b><p>
Consider cases where religious credences conflict with mainstream secular factual belief, such as the creationist attitude that <i>Earth is less than a million years old</i> and the Mormon attitude <i>that American Indians descended from Israelites</i> (p. 123-124). There is no rational conflict whatsoever between believing that Earth is billions of years old or that American Indians descended from East Asians and <i>desiring</i> that Earth is not billions of years old and that American Indians did not descend from East Asians. Nor is there any conflict between mainstream secular factual beliefs and <i>imagining</i> or <i>assuming for the sake of argument</i> that Earth is young or that American Indians descended from Israelites. For these attitude pairs, we really can construct two conflicting maps, feeling no rational pressure from their conflict. Here’s the map displaying what I factually believe, and here’s this other different map displaying what I desire, or imagine, or assume for the sake of the present argument.<p>
But it doesn’t seem like we are, or should be, as easygoing about conflicts between our religious attitudes and our factual beliefs. Of course, <i>some</i> people are. Some people will happily say "I factually think that Earth is billions of years old, but my religious attitude is that Earth is young, and I feel no conflict or tension between these two attitudes." But for the most part, I expect, to the extent people are invested in their religious credences they will reject conflicting factual content. They will say “Earth really is young. Mainstream science is wrong.” They feel the tension. This suggests that there aren’t really two maps with conflicting content, but one map, either representing Earth as old or representing Earth as young. If they buy the science, they reinterpret the creation stories as myths or metaphors. If they insist that the creation stories are literally true, then they reject the scientific consensus. What most people don’t do is hold both the standard scientific belief that Earth is literally old <i>and</i> the religious credence that Earth is literally young. At least, this appears to be so in most mainstream U.S. Christian religious cultures.<p>
A one-map view nicely explains this felt tension. Neil’s two-map view needs to do more to explain why there’s a felt need for religious credence and factual belief to conform to each other. I raised a version of this concern in a <a href="https://schwitzsplinters.blogspot.com/2022/09/the-overlapping-dispositional-profiles.html">blog post in 2022</a>, developing an objection articulated by Tom Kelly in oral discussion. Neil has dubbed it the Rational Pressure Argument.<p>
Neil’s response, in a <a href="https://schwitzsplinters.blogspot.com/2022/10/the-rational-pressure-argument.html">guest post on my blog</a>, was to suggest that there are some attitudes distinct from belief that are also subject to this type of rational pressure. <i>Guessing</i> is not believing, for example, but your guesses shouldn’t conflict with your factual beliefs. If you factually believe that the jar contains fewer than 8000 jelly beans, you’d better not guess that it actually contains 9041. If you hypothesize or accept in a scientific context that Gene X causes Disease Y, you’d better not firmly believe that Gene X has nothing to do with Disease Y. Thus, Neil argues, it does not follow from the felt conflict between the religious attitude and the factual belief that the religious attitude is a factual belief. Guesses and hypotheses are not beliefs and yet generate similar felt conflict.<p>
That might be so. But the Rational Pressure Argument still creates a challenge for Neil’s two-map view. Guessing and hypothesizing are different attitudes from factual belief, but they <i>use the same map</i>. My map of the jelly bean jar says there are 4000-8000 jelly beans. I now stick a pin in this map at 7000; that’s my guess. My map of the causes of Disease Y doesn’t specify what genes are involved, and because of this vagueness, I can put in a pin on Gene X as a hypothesized cause. The belief map constrains the guesses and hypotheses because the guesses and hypotheses are specifications within that same map. I don’t have a separate and possibly conflicting guess map and hypothesis map in the way that I can have a separate desire map or imagination map.<p>
I thus propose that in our culture people typically feel the need to avoid conflict between their religious attitudes and their factual beliefs; and this suggests that they feel pressure to fit their religious understandings together with their ordinary everyday and scientific understandings into a single, coherent map of how the world really is, according to them.<p>
</br>
Thanks for the awesome book, Neil! I philosophically creed some concerns, but I invite you to infer nothing from that about my factual beliefs.
<b>The Mimicry Argument Against Robot Consciousness</b> (March 8, 2024)<p>Suppose you encounter something that looks like a rattlesnake. One possible explanation is that it <i>is</i> a rattlesnake. Another is that it <i>mimics</i> a rattlesnake. Mimicry can arise through evolution (other snakes mimic rattlesnakes to discourage predators) or through human design (rubber rattlesnakes). Normally, it's reasonable to suppose that things are what they appear to be. But this default assumption can be defeated -- for example, if there's reason to suspect sufficiently frequent mimics.</p><p>Linguistic and "social" AI programs are designed to mimic superficial features that ordinarily function as signs of consciousness. These programs are, so to speak, <i>consciousness mimics</i>. This fact about them justifies skepticism about the programs' actual possession of consciousness despite the superficial features.</p><p>In biology, <i>deceptive mimicry</i> occurs when one species (the mimic) resembles another species (the model) in order to mislead another species such as a predator (the dupe). For example, viceroy butterflies evolved to visually resemble monarch butterflies in order to mislead predator species that avoid monarchs due to their toxicity. Gopher snakes evolved to shake their tails in dry brush in a way that resembles the look and sound of rattlesnakes.</p><p><i>Social mimicry</i> occurs when one animal emits behavior that resembles the behavior of another animal for social advantage. For example, African grey parrots imitate each other to facilitate bonding and to signal in-group membership, and their imitation of human speech arguably functions to increase the care and attention of human caregivers.</p><p>In deceptive mimicry, the signal normally doesn't correspond with possession of the model's relevant trait. The viceroy is not toxic, and the gopher snake has no poisonous bite. In social mimicry, even if there's no deceptive purpose, the signal might or might not correspond with the trait suggested by the signal: The parrot might or might not belong to the group it is imitating, and Polly might or might not really "want a cracker".</p><p>All mimicry thus involves three traits: the superficial trait (S2) of the mimic, the corresponding superficial trait (S1) of the model, and an underlying feature (F) of the model that is normally signaled by the presence of S1 in the model. (In the Polly-want-a-cracker case, things are more complicated, but let's assume that the human model is at least thinking about a cracker.) Normally, S2 in the mimic is explained by its having been modeled on S1 rather than by the presence of F in the mimic, even if F happens to be present in the mimic. Even if viceroy butterflies happen to be toxic to some predator species, their monarch-like coloration is better explained by their modeling on monarchs than as a signal of toxicity. Unless the parrot has been specifically trained to say "Polly want a cracker" only when it in fact wants a cracker, its utterance is better explained by modeling on the human than as a signal of desire.</p><p><b>Figure: The mimic's possession of superficial feature S2 is explained by mimicry of superficial feature S1 in the model. 
S1 reliably indicates F in the model, but S2 does not reliably indicate F in the mimic.</b></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhU6US40vA86ky1oykNLcCaWvJhPlKWxvrxbIEfHg6k3GjRIRdUrK4zEB-s058jvZMM4Geq7CFkJCUcVL6Ccfx51l3gW5zLbqLYn5I6L2ps4S2XT7ASGNSp2DT-xSdZuXv8NX5kUrCtOv5vPMULBYQWJVOIdYkW8YbRwqrSMlTOxIj6shm9qOUVpw/s733/Mimicry-240305a.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="589" data-original-width="733" height="321" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhU6US40vA86ky1oykNLcCaWvJhPlKWxvrxbIEfHg6k3GjRIRdUrK4zEB-s058jvZMM4Geq7CFkJCUcVL6Ccfx51l3gW5zLbqLYn5I6L2ps4S2XT7ASGNSp2DT-xSdZuXv8NX5kUrCtOv5vPMULBYQWJVOIdYkW8YbRwqrSMlTOxIj6shm9qOUVpw/w400-h321/Mimicry-240305a.jpg" width="400" /></a></div>[click to enlarge and clarify]<div><p>This general approach to mimicry can be adapted to superficial features normally associated with consciousness.</p><p>Consider a simple case, where S1 and S2 are emission of the sound "hello" and F is the intention to greet. The mimic is a child's toy that emits that sound when turned on, and the model is an ordinary English-speaking human. In an ordinary English-speaking human, emitting the sound "hello" normally (though of course not perfectly) indicates an intention to greet. However, a child's toy has no intention to greet. (Maybe its designer, years ago, had an intention to craft a toy that would "greet" the user when powered on, but that's not the <i>toy's</i> intention.) F cannot be inferred from S2, and S2 is best explained by modeling on S1.</p><p>Large Language Models like GPT, PaLM, and LLaMA are more complex but are structurally mimics.</p><p>Suppose you ask ChatGPT-4 "What is the capital of California?" and it responds "The capital of California is Sacramento." The relevant superficial feature, S2, is a text string correctly identifying the capital of California. The best explanation of why ChatGPT-4 exhibits S2 is that its outputs are modeled on human-produced text that also correctly identifies the capital of California as Sacramento. Human-produced text with that content reliably indicates the producer's knowledge that Sacramento is the capital of California. But we cannot infer corresponding knowledge when ChatGPT-4 is the producer. <a href="https://schwitzsplinters.blogspot.com/2023/11/how-we-will-decide-that-large-language.html"><i>Maybe</i> "beliefs" or "knowledge" can be attributed to sufficiently sophisticated language models</a>, but that requires further argument. A much simpler model, trained on a small set of data containing a few instances of "The capital of California is Sacramento," might output the same text string for essentially similar reasons, without being describable as "knowing" this fact in any literal sense.<br /></p><p>When a Large Language Model outputs a novel sentence not present in the training corpus, S2 and S1 will need to be described more abstractly (e.g., "a summary of Hamlet" or even just "text interpretable as a sensible answer to an absurd question"). But the underlying considerations are the same. The LLM's output is modeled on patterns in human-generated text and can be explained as mimicry of those patterns, leaving open the question of whether the LLM has the underlying features we would attribute to a human being who gave a similar answer to the same prompt. (See <a href="https://dl.acm.org/doi/10.1145/3442188.3445922">Bender et al. 
2021</a> for an explicit comparison of LLMs and parrots.)</p><p>#</p><p><b>Let's call something a <i>consciousness mimic</i> if it exhibits superficial features best explained by having been modeled on the superficial features of a model system, where in the model system those superficial features reliably indicate consciousness. </b>ChatGPT-4 and the "hello" toy are consciousness mimics in this sense. (People who say "hello" or answer questions about state capitals are normally conscious.) Given the mimicry, we cannot infer consciousness from the mimics' S2 features without substantial further argument. A consciousness mimic exhibits traits that superficially look like indicators of consciousness, but which are best explained by the modeling relation rather than by appeal to the entity's underlying consciousness. (Similarly, the viceroy's coloration pattern is best explained by its modeling on the monarch, not as a signal of its toxicity.)</p><p>"Social AI" programs, like <a href="https://replika.com/">Replika</a>, combine the structure of Large Language Models with superficial signals of emotionality through an avatar with an expressive face. Although consciousness researchers are near consensus that ChatGPT-4 and Replika are not conscious to any meaningful degree, some ordinary users, especially those who have become attached to AI companions, have begun to wonder. And some consciousness researchers have speculated that genuinely conscious AI might be on the near (approximately ten-year) horizon (e.g., <a href="https://philpapers.org/archive/CHACAL-3.pdf">Chalmers 2023</a>; <a href="https://arxiv.org/abs/2308.08708">Butlin et al. 2023</a>; <a href="https://link.springer.com/article/10.1007/s43681-023-00379-1">Long and Sebo 2023</a>).</p><p>Other researchers -- especially those who regard biological features as crucial to consciousness -- doubt that AI consciousness will arrive anytime soon (e.g., <a href="https://philpapers.org/rec/GODMMA-6">Godfrey-Smith 2016</a>; <a href="https://www.anilseth.com/being-you/">Seth 2021</a>). It is therefore likely that we will enter an era in which it is<i> reasonable to wonder</i> whether some of our most advanced AI systems are conscious. Both consciousness experts and the ordinary public are likely to disagree, raising difficult questions about the ethical treatment of such systems (for some of my alarm calls about this, see Schwitzgebel <a href="https://journal.robonomics.science/index.php/rj/article/view/32">2023a</a>, <a href="https://www.sciencedirect.com/science/article/pii/S2666389923001873">2023b</a>).</p><p>Many of these systems, like ChatGPT and Replika, will be consciousness mimics. They might or might not actually be conscious, depending on what theory of consciousness is correct. However, because of their status as mimics, we will not be licensed to infer that they are conscious from the fact that they have superficial features (S2-type features) that resemble features in humans (S1-type features) that, in humans, reliably indicate consciousness (underlying feature F).</p><p>In saying this, I take myself to be saying nothing novel or surprising. I'm simply articulating in a slightly more formal way what skeptics about AI consciousness say and will presumably continue to say. I'm not committing to the view that such systems would definitely <i>not</i> be conscious. My view is weaker, and probably acceptable even to most advocates of near-future AI consciousness. 
One cannot infer the consciousness of an AI system that is built on principles of mimicry from the fact that it possesses features that normally indicate consciousness in humans. Some <i>extra argument</i> is required.</p><p>However, any such extra argument is likely to be uncompelling. Given the highly uncertain status of consciousness science, and widespread justifiable dissensus, any positive argument for these systems' consciousness will almost inevitably be grounded in dubious assumptions about the correct theory of consciousness (<a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/CrazyMind.htm">Schwitzgebel 2014</a>, <a href="https://press.princeton.edu/books/hardcover/9780691215679/the-weirdness-of-the-world">2024</a>).</p><p>Furthermore, given the superficial features, it might feel very natural to attribute consciousness to such entities, especially <a href="https://schwitzsplinters.blogspot.com/2023/12/falling-in-love-with-machines.html">among non-experts unfamiliar with their architecture and perhaps open to, or even enthusiastic about, the possibility of AI consciousness in the near future</a>.</p><p><b>The mimicry of superficial features of consciousness isn't proof of the nonexistence of consciousness in the mimic, but it is grounds for doubt. And in the context of highly uncertain consciousness science, it will be difficult to justify setting aside such doubts.</b></p><p>None of these remarks would apply, of course, to AI systems that somehow acquire features suggestive of consciousness by some process other than mimicry.</p></div><b>The Leapfrog Hypothesis for AI Consciousness</b> (March 1, 2024)<p>The first genuinely conscious robot or AI system would, you might think, have relatively simple consciousness -- insect-like consciousness, or jellyfish-like, or frog-like -- rather than the rich complexity of human-level consciousness. It might have vague feelings of dark vs light, the to-be-sought and to-be-avoided, broad internal rumblings, and not much else -- not, for example, complex conscious thoughts about ironies of Hamlet, or multi-part long-term plans about how to form a tax-exempt religious organization. The simple usually precedes the complex. Building a conscious insect-like entity seems a lower technological bar than building a more complex consciousness.<p>
Until recently, that's what I had assumed (in keeping with <a href="https://philpapers.org/rec/BASTEO-11">Basl 2013</a> and <a href="https://philpapers.org/rec/BASMAM">Basl 2014</a>, for example). Now I'm not so sure.</p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3g4_0WqY7QsY-xkTxx39pGQpVS05gFe5QbRnvYOM7Mr0Cr07m7LC1ty9W81s0tYlmE-4tG0Qly_6ohhZt6BYqxWcmYrx8l3_5FSvVxbKCpV2ijtZRgS0v-V2ILoE7zVK5JoVOKsrwphOZszh64tXOG3vySGGw91youVHriOIe-l071baOCCeXDA/s1024/DALL%C2%B7E%202024-02-29%2013.58.08%20-%20Visualize%20a%20scene%20with%20a%20high-tech,%20artificially%20intelligent%20frog%20sitting%20on%20a%20lily%20pad%20in%20the%20middle%20of%20a%20serene%20pond.%20This%20frog%20is%20not%20ordinary;%20it%27.webp" style="display: block; padding: 1em 0px; text-align: center;"><img alt="" border="0" data-original-height="1024" data-original-width="1024" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3g4_0WqY7QsY-xkTxx39pGQpVS05gFe5QbRnvYOM7Mr0Cr07m7LC1ty9W81s0tYlmE-4tG0Qly_6ohhZt6BYqxWcmYrx8l3_5FSvVxbKCpV2ijtZRgS0v-V2ILoE7zVK5JoVOKsrwphOZszh64tXOG3vySGGw91youVHriOIe-l071baOCCeXDA/s320/DALL%C2%B7E%202024-02-29%2013.58.08%20-%20Visualize%20a%20scene%20with%20a%20high-tech,%20artificially%20intelligent%20frog%20sitting%20on%20a%20lily%20pad%20in%20the%20middle%20of%20a%20serene%20pond.%20This%20frog%20is%20not%20ordinary;%20it%27.webp" width="320" /></a></p><p>[Dall-E image of a high-tech frog on a lily pad; click to enlarge and clarify]</p><p>AI systems are -- presumably! -- not yet meaningfully conscious, not yet sentient, not yet capable of feeling genuine pleasure or pain or having genuine sensory experiences. Robotic eyes "see" but they don't yet <i>see</i>, not like a frog sees. However, they do already far exceed all non-human animals in their capacity to explain the ironies of Hamlet and plan the formation of federally tax-exempt organizations. (Put the "explain" and "plan" in scare quotes, if you like.) For example:
</p><div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjX2o7fxRfkD3Dnx4tjMJe91r5DPtuYcf7uyL_GpRpX0jVQtgJZi6xTfq5dwfrRu3izAF05Co2pSOPKTWfjrfkSZ7U3cWPF7R-i_0gETi2UgmHaIdZwskwhJyWPWSpczjgzELdViROJsiek3Y0w5YEuOgcppGT78Xqlgu8c_M0qzG_FEPFoCLnitA/s1014/HamletTaxExempt.jpg" style="display: block; padding: 1em 0px; text-align: center;"><img alt="" border="0" data-original-height="932" data-original-width="1014" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjX2o7fxRfkD3Dnx4tjMJe91r5DPtuYcf7uyL_GpRpX0jVQtgJZi6xTfq5dwfrRu3izAF05Co2pSOPKTWfjrfkSZ7U3cWPF7R-i_0gETi2UgmHaIdZwskwhJyWPWSpczjgzELdViROJsiek3Y0w5YEuOgcppGT78Xqlgu8c_M0qzG_FEPFoCLnitA/s320/HamletTaxExempt.jpg" width="320" /></a></div>
[ChatGPT-4 outputs for "Describe the ironies of Hamlet" and "Devise a multi-part long term plan about how to form a tax-exempt religious organization"; click to enlarge and clarify]<p>
Let's see a frog try that!</p><p>
Consider, then, the <b>Leapfrog Hypothesis: The first conscious AI systems will have rich and complex conscious intelligence, rather than simple conscious intelligence.</b> AI consciousness development will, so to speak, leap right over the frogs, going straight from non-conscious to richly endowed with complex conscious intelligence.</p><p>
What would it take for the Leapfrog Hypothesis to be true?</p><p>
First, engineers would have to find it<b> harder to create a genuinely conscious AI system than to create rich and complex representations or intelligent behavioral capacities that are not conscious</b>.</p><p>
And second, once a genuinely conscious system <i>is </i>created, it would have to be <b>relatively easy thereafter to plug in the pre-existing, already developed complex representations or intelligent behavioral capacities</b> in such a way that they belong to the stream of conscious experience in the new genuinely conscious system. Both of these assumptions seem at least moderately plausible, in these post-GPT days.</p><p>Regarding the first assumption: Yes, I know GPT isn't perfect and makes some surprising commonsense mistakes. We're not at genuine artificial general intelligence (AGI) yet -- just a lot closer than I would have guessed in 2018. "Richness" and "complexity" are challenging to quantify (<a href="https://www.nature.com/articles/nrn.2016.44">Integrated Information Theory</a> is one attempt). Quite possibly, properly understood, there's currently less richness and complexity in deep learning systems and large language models than it superficially seems. Still, their sensitivity to nuance and detail in the inputs and the structure of their outputs bespeaks complexity far exceeding, at least, light-vs-dark or to-be-sought-vs-to-be-avoided.</p><p>Regarding the second assumption, consider a cartoon example, inspired by <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8770991/">Global Workspace theories of consciousness</a>. Suppose that, to be conscious, an AI system must have input (perceptual) modules, output (behavioral) modules, side processors for specific cognitive tasks, long- and short-term memory stores, nested goal architectures, and between all of them a "global workspace" which receives selected ("attended") inputs from most or all of the various modules. These attentional targets become centrally available representations, accessible by most or all of the modules. Possibly, for genuine consciousness, the global workspace must have certain further features, such as recurrent processing in tight temporal synchrony. We arguably haven't yet designed a functioning AI system that works exactly along these lines -- but for the sake of this example let's suppose that once we create a good enough version of this architecture, the system is genuinely conscious.</p><p>
But now, as soon as we have such a system, it might not be difficult to hook it up to a large language model like GPT-7 (GPT-8? GPT-14?) and to provide it with complex input representations full of rich sensory detail. The lights turn on... and as soon as they turn on, we have <i>conscious</i> descriptions of the ironies of Hamlet, richly detailed <i>conscious</i> pictorial or visual inputs, and multi-layered <i>conscious</i> plans. Evidently, we've overleapt the frog.</p><p>
Of course, Global Workspace Theory might not be the right theory of consciousness. Or my description above might not be the best instantiation of it. But the thought plausibly generalizes to a wide range of functionalist or computationalist architectures: The technological challenge is in creating any consciousness at all in an AI system, and once this challenge is met, giving the system rich sensory and cognitive capacities, far exceeding that of a frog, might be the easy part.</p><p>
Do I underestimate frogs? Bodily tasks like five-finger grasping and locomotion over uneven surfaces have proven to be technologically daunting (though we're making progress). Maybe the embodied intelligence of a frog or bee is vastly more complex and intelligent than the seemingly complex, intelligent linguistic outputs of a large language model.</p><p>
Sure thing -- but this doesn't undermine my central thought. In fact, it might buttress it. If consciousness requires frog- or bee-like embodied intelligence -- maybe even biological processes very different from what we can now create in silicon chips -- artificial consciousness might be a long way off. But then we have even longer to prepare the part that seems more distinctively human. We get our conscious AI bee and then plug in GPT-28 instead of GPT-7, plug in a highly advanced radar/lidar system, a 22nd-century voice-to-text system, and so on. As soon as that bee lights up, it lights up big!</p><p>
</p><b>Could Someone Still Be Collecting a Civil War Widow's Pension? A Possibility Proof</b> (February 20, 2024)<p>In 1865, a 14-year-old boy becomes a Union soldier in the U.S. Civil War. In 1931, at age 90, he marries an 18-year-old woman, who continues to collect his Civil War pension after he dies. Today, in early 2024, she is one hundred and ten years old, still collecting that pension.<p>
I was inspired to this thought by reflecting about some long-dead people my father knew, who survive in my memory through his stories. How far back might such second-hand memories go? Farther than one might initially suppose -- in principle, back to the 1860s. An elderly philosopher, alive today, might easily have second-hand memories of William James (d. 1910) or Nietzsche (d. 1900), maybe even Karl Marx (d. 1883) or John Stuart Mill (d. 1873).<p>
Second-hand memories have a quality to them that third-hand memories and historical accounts lack. Through my father's and uncle's stories, I feel a kind of personal connection to Timothy Leary (d. 1996), B.F. Skinner (d. 1990), and Abraham Maslow (d. 1970), even though I never met them, in a way I don't to other scholars of the era. It hasn't been <i>so </i>long since their heyday in the 1950s - 1960s, when my father and his brother knew them -- but I might still have several decades in me. My son David, currently a Cognitive Science PhD student at Institut Jean Nicod at ENS in Paris, has also heard such stories, and he could potentially live to see the 22nd century. (My daughter Kate was too young when my father died to have made much of his academic stories.)<p>
The idea that the U.S. might still be paying a Civil War widow's pension is not as ridiculous as it seems. According to <a href="https://www.usnews.com/news/articles/2016-08-08/civil-war-vets-pension-still-remains-on-governments-payroll-151-years-after-last-shot-fired">this website</a>, the last pension-receiving Union widow died in 2003. According to <a href="https://www.essentialcivilwarcurriculum.com/civil-war-pensions.html">this website</a>, it was 2008. The last recipient of a Civil War children's benefit died from a hip injury <a href="https://www.washingtonpost.com/history/2020/06/04/she-was-last-american-collect-civil-war-pension-7313-month-she-just-died/">in 2020</a>.<p>
GPT-4 representation of an elderly Civil War widow in a cityscape in 2020:
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlPWld6W386vKP1QKkfY0KXvaRmK4r0AT_nuIsfGZODutmerVtjwGlqZvKrSreRtvpKTbSuHcegWDF-HlgD1V1jlGQH6qCLUtcGmTmOJBgy8_Z165vEHykSyxX2Y1F9Q8dnVE1ageouHD0Jg-UWpflCuYB0TOmXsrCy4cqWZF8G35VWqXTaIhq9w/s1024/DALL%C2%B7E%202024-02-20%2012.18.47%20-%20An%20elderly%20woman,%20dressed%20in%20a%20blend%20of%20traditional%2019th-century%20mourning%20attire%20and%20modern%20clothing,%20stands%20alone%20in%20a%20bustling%20cityscape%20of%20the%20year.webp" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="320" data-original-height="1024" data-original-width="1024" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlPWld6W386vKP1QKkfY0KXvaRmK4r0AT_nuIsfGZODutmerVtjwGlqZvKrSreRtvpKTbSuHcegWDF-HlgD1V1jlGQH6qCLUtcGmTmOJBgy8_Z165vEHykSyxX2Y1F9Q8dnVE1ageouHD0Jg-UWpflCuYB0TOmXsrCy4cqWZF8G35VWqXTaIhq9w/s320/DALL%C2%B7E%202024-02-20%2012.18.47%20-%20An%20elderly%20woman,%20dressed%20in%20a%20blend%20of%20traditional%2019th-century%20mourning%20attire%20and%20modern%20clothing,%20stands%20alone%20in%20a%20bustling%20cityscape%20of%20the%20year.webp"/></a></div><b>What Types of Argument Convince People to Donate to Charity? Empirical Evidence</b> (February 16, 2024)<p>Back in 2020, <a href="https://psychology.fas.harvard.edu/people/fiery-cushman">Fiery Cushman</a> and I <a href="https://schwitzsplinters.blogspot.com/2019/10/philosophy-contest-write-philosophical.html">ran a contest</a> to see if anyone could write a philosophical argument that convinced online research participants to donate a surprise bonus to charity at rates statistically above control. (Chris McVey, Josh May, and I had failed to write any successful arguments in some <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/EngagingCharitableGiving.htm">earlier attempts</a>.) Contributions were not permitted to mention particular real people or events, couldn't be narratives, and couldn't include graphics or vivid descriptions. We wanted to see whether relatively dry philosophical arguments could move people to donate.<p>
We received 90 submissions (mostly from professional philosophers, psychologists, and behavioral economists, but also from other Splintered Mind readers), and we selected 20 that we thought represented a diversity of the most promising arguments. The <a href="https://schwitzsplinters.blogspot.com/2020/06/contest-winner-philosophical-argument.html">contest winner</a> was an argument written by <a href="https://matthewlindauer.com/">Matthew Lindauer</a> and <a href="https://www.petersinger.info/">Peter Singer</a>, highlighting that a donation of $25 can save a child in a developing country from going blind due to trachoma, then asking the reader to reflect on how much they would be willing to donate to save their own child from going blind. (Full text <a href="https://schwitzsplinters.blogspot.com/2020/06/contest-winner-philosophical-argument.html">here</a>.)</p><p>
<a href="https://twitter.com/kirstanbrodie?lang=en">Kirstan Brodie</a>, <a href="https://www.linkedin.com/in/jason-nemirow">Jason Nemirow</a>, Fiery, and I decided to follow up by <b>testing all 90 submitted arguments to see what features were present in the most effective arguments</b>. We coded the arguments according to whether, for example, they mentioned children, or appealed to religion, or mentioned the reader's assumed own economic good fortune, etc. -- twenty different features in all. We recruited approximately 9000 participants. Each participant had a 10% chance of winning a surprise bonus of $10. They could either keep the whole $10 or donate some portion of it to one of six effective charities. Participants decided whether to donate, and how much, before knowing if they were among the 10% receiving the $10.</p><p>
Now, unfortunately, proper statistical analysis is complicated. Because we were working with whatever came in, we couldn't balance argument features, most arguments had multiple coded features, and the coded features tended to correlate between submissions. I'll share a proper analysis of the results later. Today I'll share a simpler analysis. This simple analysis looks at the coded features one by one, comparing the average donation among the set of arguments with the feature to the average donation among the set of arguments without the feature.</p><p>There is something to be said, I think, for simple analyses even when they aren't perfect: They tend to be easier to understand and to have fewer "researcher degrees of freedom" (and thus less opportunity for <a href="https://en.wikipedia.org/wiki/Data_dredging">p-hacking</a>). Ideally, simple and sophisticated statistical analyses go hand-in-hand, telling a unified story.</p><p>
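For concreteness, here is a minimal Python sketch of the feature-by-feature comparison just described -- not our actual analysis code, and the data frame and column names ("df", "donation", and the per-feature indicator columns) are hypothetical stand-ins:</p>
<pre>
import pandas as pd
from scipy.stats import ttest_ind

def compare_feature(df: pd.DataFrame, feature: str):
    """Compare mean donations for arguments with vs. without a coded feature."""
    with_f = df.loc[df[feature] == 1, "donation"]     # participants who saw an argument with the feature
    without_f = df.loc[df[feature] == 0, "donation"]  # participants who saw an argument without it
    t_stat, p_value = ttest_ind(with_f, without_f)    # two-sample t test, no multiple-comparison correction
    diff = with_f.mean() - without_f.mean()           # dollar difference in mean donation
    return with_f.mean(), without_f.mean(), diff, p_value
</pre>
Each entry in the list below corresponds to one such comparison, run once per coded feature.<p>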
So, what argument features appear to be relatively more versus less effective in motivating charitable giving?<p>
Here are our results, from highest to lowest difference in mean donation. "diff" is the dollar difference in mean donation, N is the number of participants who saw an argument with that feature, n is the number of arguments containing that feature, and p is the statistical p-value in a two-sample t test (without correction for multiple comparisons). All analyses are tentative, pending double-checking, skeptical examination, and possibly some remaining data clean-up.</p><p>
<b>Predictive Argument Features, Highest to Lowest</b><p>
Does the argument appeal to the notion of <b>equality</b>?</br>
$3.99 vs $3.39 (diff = $.60, N = 395, n = 4, p < .001)</p><p>
... mention <b>human evolutionary history</b>?</br>
$3.93 vs $3.39 (diff = $.55, N = 4940, n = 5, p < .001)</p><p>
... specifically mention <b>children</b>?</br>
$3.76 vs $3.26 (diff = $.49, N = 4940, n = 27, p < .001)</p><p>
... mention a <b>specific, concrete benefit</b> to others that $10 or a similar amount would bring (e.g., 3 mosquito nets or a specific inexpensive medical treatment)?</br>
$3.75 vs $3.44 (diff = $.41, N = 1718, n = 17, p < .001)</p><p>
... appeal to the <b>diminishing marginal utility</b> of dollars kept by (rich) donors?</br>
$3.69 vs $3.29 (diff = $.40, N = 2843, n = 27, p < .001)</p><p>
... appeal to the <b>massive marginal utility</b> of dollars transferred to (poor) recipients?</br>
$3.65 vs $3.25 (diff = $.40, N = 3758, n = 36, p < .001)</p><p>
... mention, or ask the participant to bring to mind, a <b>particular person who is physically or emotionally near</b> to them?</br>
$3.74 vs $3.34 (diff = $.34, N = 318, n = 3, p = .061)</p><p>
... mention <b>particular needs or hardships</b> such as clean drinking water or blindness?</br>
$3.56 vs $3.23 (diff = $.30, N = 4940, n = 49, p < .001)</p><p>
... refer to the <b>reader's own assumed economic good fortune</b>?</br>
$3.58 vs $3.31 (diff = $.27, N = 3544, n = 35, p < .001)</p><p>
... focus on <b>one single issue</b>? (e.g. trachoma)</br>
$3.61 vs $3.40 (diff = $.21, N = 800, n = 8, p = .07)</p><p>
... remind people that <b>giving something is better than nothing</b>? (i.e. corrective for drop-in-the-bucket thinking)</br>
$3.56 vs $3.40 (diff = $.15, N = 595, n = 6, p = .24)</p><p>
... appeal to the <b>views of experts</b> (e.g. philosophers, psychologists)?</br>
$3.47 vs $3.39 (diff = $.07, N = 2629, n = 27, p = .29)</p><p>
... reference <b>specific external sources</b> such as news reports or empirical studies?</br>
$3.47 vs $3.40 (diff = $.07, N = 1828, n = 18, p = .41)</p><p>
... explicitly mention that <b>donation is common</b>?</br>
$3.46 vs $3.41 (diff = $.05, N = 736, n = 7, p = .66)</p><p>
... appeal to the notion of <b>randomness/luck</b> (e.g., nobody chose the country they were born in)?</br>
$3.43 vs $3.41 (diff = $.02, N = 1403, n = 14, p = .80)</p><p>
... mention <b>religion</b>?</br>
$3.35 vs $3.42 (diff = -$.07, N = 905, n = 9, p = .48)</p><p>
... appeal to veil-of-ignorance reasoning or other <b>perspective-taking thought experiments</b>?</br>
$3.29 vs $3.23 (diff = -$.14, N = 4940, n = 8, p = .20)</p><p>
... mention that <b>giving could inspire others</b> to give? (i.e. spark behavioral contagion)</br>
$3.29 vs $3.43 (diff = -$.14, N = 896, n = 9, p = .20)</p><p>
... explicitly mention and <b>address specific counterarguments</b>?</br>
$3.29 vs $3.45 (diff = -$.15, N = 1829, n = 19, p = .048)</p><p>
... appeal to the <b>self-interest</b> of the participant?</br>
$3.22 vs $3.49 (diff = -$.30, N = 2604, n = 22, p < .001)</p><p>
From this analysis, several argument features appear to be effective in increasing participant donations:
<li><b>mentioning children and appealing to the equality of all people,</b></li>
<li><b>mentioning concrete benefits (one or several),</b></li>
<li><b>mentioning the reader's assumed economic good fortune and the relatively large impact of a relatively small sacrifice (the "margins" features),</b> and</li>
<li><b>mentioning evolutionary history (e.g., theories that human beings evolved to care more about near others than distant others).</b></li><p>Mentioning a particular near person might also have been effective, but since only three arguments were coded in this category, statistical power was poor.</p><p>
In contrast, <b>appealing to the participant's self-interest (e.g., that donating will make them feel good) appears to have backfired</b>. Mentioning and addressing counterarguments to donation (e.g., responding to concerns that donations are ineffective or wasted) might also have backfired.</p><p>
Now I don't think we should take these results <i>wholly</i> at face value. For example, only five of the ninety arguments appealed to evolutionary history, and all of those arguments included at least two other seemingly effective features: particular hardships, margins, or children. In multiple regression analyses and multi-level analyses that explore how the argument features cluster, it looks like particular hardships, children, and margins might be more robustly predictive -- more on that in a future post. ETA (Feb 19): Where fewer than 10 arguments (n < 10) contain a feature, effects are unlikely to be statistically robust.</p><p>
What if we combine argument features? There are various ways to do this, but the simplest is to give an argument one point for any of the ten largest-effect features, then perform a linear regression. The resulting model has an intercept of $3.09 and a slope of $.13. Thus, the model predicts that participants who read arguments with none of these features will donate $3.09, while participants who read a hypothetical argument containing all ten features will donate $4.39.<p>
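Here is that model as a minimal Python sketch -- again, not the actual analysis code; "scores" (each participant's count of the ten features, 0-10) and "donations" are hypothetical stand-ins for the real data:</p>
<pre>
import numpy as np

def fit_feature_count_model(scores: np.ndarray, donations: np.ndarray):
    """Simple linear regression of donations on the number of top-ten features."""
    slope, intercept = np.polyfit(scores, donations, deg=1)  # coefficients, highest degree first
    return intercept, slope

# With the fitted values reported above:
#   predicted donation = 3.09 + 0.13 * number_of_features
#   0 features  -> $3.09
#   10 features -> 3.09 + 0.13 * 10 = $4.39
</pre><p>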
<b>Further analysis also suggests that piling up argument features is cumulative</b>: Arguments with at least six of the effective features generated mean donations of $3.89 (vs. $3.37), those with at least seven generated mean donations of $4.46 (vs. $3.38), and <b>the one argument with eight of the ten effective features generated a mean donation of $4.88</b> (vs. $3.40) (all p's < .001). This eight-feature argument was, in fact, the best performing argument of the ninety. (However, caution is warranted concerning the estimated effect size for any particular argument: With approximately only 100 participants per argument and a standard deviation of about $3, the 95% confidence intervals for the effect size of individual arguments are about +/- $.50.)</p><p>
------------------------------------------------------</p><p>
Last month, I articulated and defended the attractiveness of moral expansion through <a href="https://schwitzsplinters.blogspot.com/2024/01/imagining-yourself-in-anothers-shoes-vs.html">Mengzian extension</a>. On my interpretation of the ancient Chinese philosopher Mengzi, expansion of one's moral perspective often (typically?) begins with noticing how you react to nearby cases -- whether physically nearby (a child in front of you, about to fall into a well) or relationally nearby (your close family members) -- and proceeds by noticing that remote cases (distant children, other people's parents) are similar in important respects.</p><p>
None of the twenty coded features captured exactly that. ("Particular near person" was close, but neither necessary nor sufficient: not necessary, because the coders used a stringent standard for when an argument invoked a particular near person, and not sufficient since invoking a particular near person is only the first step in Mengzian extension.) So I asked UCR graduate student Jordan Jackson, who studies Chinese philosophy and with whom I've discussed Mengzian extension, to read all 90 arguments and code them for whether they employed Mengzian extension style reasoning. He found six that did.</p><p>
In accord with my hypothesis about the effectiveness of Mengzian extension, <b>the six Mengzian extension arguments outperformed the arguments that did not employ Mengzian extension</b>:</p><p>
$3.85 vs $3.38 (diff = $.47, N = 612, n = 6, p < .001)</p><p>
Among those six arguments are both the 2020 original contest winner written by Lindauer and Singer and also the best-performing argument in the present study -- though as noted earlier, the best-performing argument in the current study also had many other seemingly effective features.</p><p>
In case you're curious, here's the full text of that argument, adapted by <a href="https://alexgarintherteaching.weebly.com/">Alex Garinther</a>, quoting extensively from one of the stimuli in <a href="https://www.cambridge.org/core/journals/judgment-and-decision-making/article/comparing-the-effect-of-rational-and-emotional-appeals-on-donation-behavior/C39B98F3AB2ABB03A7055A013BE2C899?fbclid=IwAR3dlOWk-FaiZUL3J69eWkq58YE57fNc2Dt533OmPQyWpKyoHpxd6vuUJsA_aem_ATT8Z4i5vanCxZt-3F1VIHBCK_kzsU-iYkhSMP1D8CQLWrzKQ7KW3_wTKliTl6t1u64">Lindauer et al. 2020</a>:</p><p>
</p><blockquote><p>HEAR ME OUT ON SOMETHING. The explanation below is a bit long, but I promise reading the next few paragraphs will change you.<p>
As you know, there are many children who live in conditions of severe poverty. As a result, their health, mental development, and even their lives are at risk from lack of safe water, basic health care, and healthy food. These children suffer from malnutrition, unsanitary living conditions, and are susceptible to a variety of diseases. Fortunately, effective aid agencies (like the Against Malaria Foundation) know how to handle these problems; the issue is their resources are limited.</p><p>
HERE'S A PHILOSOPHICAL ARGUMENT: Almost all of us think that we should save the life of a child in front of us who is at risk of dying (for example, a child drowning in a shallow pond) if we are able to do so. Most people also agree that all lives are of equal moral worth. The lives of faraway children are no less morally significant than the lives of children close to us, but nearby children exert a more powerful emotional influence. Why?</p><p>
SCIENTISTS HAVE A PLAUSIBLE ANSWER: We evolved in small groups in which people helped their neighbors and were suspicious of outsiders, who were often hostile. Today we still have these “Us versus Them” biases, even when outsiders pose no threat to us and could benefit enormously from our help. Our biological history may predispose us to ignore the suffering of faraway people, but we don't have to act that way.</p><p>
By taking money that we would otherwise spend on needless luxuries and donating it to an effective aid agency, we can have a big impact. We can provide safe water, basic health care, and healthy food to children living in severe poverty, saving lives and relieving suffering.</p><p>
Shouldn't we, then, use at least some of our extra money to help children in severe poverty? By doing so, we can help these children to realize their potential for a full life. Great progress has been made in recent years in addressing the problem of global poverty, but the problem isn't being solved fast enough. Through charitable giving, you can contribute towards more rapid progress in overcoming severe poverty.</p><p>
Even a donation $5 can save a life by providing one mosquito net to a child in a malaria-prone area. FIVE DOLLARS could buy us a large cappuccino, and that same amount of money could be used to save a life.</p><p></blockquote>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSWHJBQNjq8_GaRsFG4i-xsAX1ayC14MEFWWvWJY7TvbWmFqdotrjYKOWNw_JNLSOJhsaEAK52glNOtZgOcLrJV2zPD2oXisfSMcWb55fvNMFSbPulPmZncoebSf0Akt_eDgFdATltecMz8NjPTcVbHc_dBth76b94hvG1tPMaFwlJ3Wv3F1IAug/s715/ArgumentFeatures.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="320" data-original-height="715" data-original-width="620" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSWHJBQNjq8_GaRsFG4i-xsAX1ayC14MEFWWvWJY7TvbWmFqdotrjYKOWNw_JNLSOJhsaEAK52glNOtZgOcLrJV2zPD2oXisfSMcWb55fvNMFSbPulPmZncoebSf0Akt_eDgFdATltecMz8NjPTcVbHc_dBth76b94hvG1tPMaFwlJ3Wv3F1IAug/s320/ArgumentFeatures.jpg"/></a></div>
<b>Grade Inflation at UC Riverside, and Institutional Pressures for Easier Grading</b> (February 9, 2024)<p>Recent news reports have highlighted grade inflation at elite universities: <a href="https://www.thecrimson.com/article/2023/10/5/faculty-debate-grade-inflation-compression/">Harvard gave 79% As</a> in 2020-2021, as did <a href="https://www.nytimes.com/2023/12/05/nyregion/yale-grade-inflation.html">Yale in 2022-2023</a>, compared to 67% in 2010-2011. At Harvard, <a href="https://www.thecrimson.com/article/2022/10/3/barton-grade-inflation/">the average GPA</a> has risen from 2.55 in 1950 to 3.05 in 1975 to 3.36 in 1995 to 3.80 now. At Brown, 67% of grades were As in 2020-2021, 10% Bs, and <a href="https://www.browndailyherald.com/article/2022/04/brown-grade-inflation-continues-to-soar-data-shows">only 1% Cs</a>. It's not just elite universities, however: Grades have risen sharply since at least the 1980s <a href="https://www.gradeinflation.com/">across a wide range of schools</a>.<p>
I decided to look at UC Riverside's grade distributions since 2013, now that faculty have access to a tool for viewing this information. (It would be nice to look back farther, but even the changes since 2013 are interesting.)<p>
The following chart lists grade distributions quarter by quarter for the regular academic year, from 2013 through the present. The dark blue bars at the top are As, medium blue Bs, light blue Cs, and red is D, F, or W.<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguRjW_MTx0A-imkOW_aPfEq04Q6TCHwPTqULa-O2tr8to8FFkTcGbJ0SCPESwLvvY8vYBELYrmwNJMY5t4AVHhKJcccLnfTrUQ9XT20oJVBMdBHtqMnDRgfb9Igejuhqc5peXOcib4rwJ49rw13KPschOk3OZYk1GX2y8WmIGA5H4JFLkc18T2Ig/s1425/UCRgrades2013-2023a.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="320" data-original-height="869" data-original-width="1425" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguRjW_MTx0A-imkOW_aPfEq04Q6TCHwPTqULa-O2tr8to8FFkTcGbJ0SCPESwLvvY8vYBELYrmwNJMY5t4AVHhKJcccLnfTrUQ9XT20oJVBMdBHtqMnDRgfb9Igejuhqc5peXOcib4rwJ49rw13KPschOk3OZYk1GX2y8WmIGA5H4JFLkc18T2Ig/s320/UCRgrades2013-2023a.jpg"/></a></div>
[click to enlarge and clarify]<p>
Three things are visually obvious from this graph:
<li>First, there's a <b>spike of high grades in Spring 2020</b> -- presumably due to the chaos of the early days of the pandemic.</li>
<li>Second, the <b>percentage of As is higher in recent years</b> than in earlier years.</li>
<li>Third, the <b>percentage of DFWs has remained about the same</b> across the period.</li><p>
In Fall 2013, 32% of enrolled students received As. In Fall 2023, 45% did. (DFWs were 9% in both terms.)<p>
One open question is whether the new normal of about 45% As reflects a general trend independent of the pandemic spike or whether the pandemic somehow created an enduring change. Another question is whether the higher percentage of As reflects<i> easier grading</i> or <i>better performance</i>. The term "inflation" suggests the former, but of course data of this sort by themselves don't distinguish between those possibilities.<p>
The increase in the percentage of As is <b>evident in both lower division and upper division classes</b>, rising from 32% to 43% in lower division and from 33% to 49% in upper division.<p>
How about UCR philosophy in particular? I'd like to think that my own department has consistent and rigorous standards. However, as the figure below shows, the trends <b>in UCR philosophy are similar</b>, with an increase from 26% As in Fall 2013 to 41% As in Fall 2023:
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBIcX7KdWUW84xy5klYRam2cGypaGM79F7M4oSgaJjS2J7tVfDeCvysXSvoPG__YscMBJyYQmwOBhz2ZF8CweuEfQWTkshsZt58sHEItzao1RertO797PN65avYcXEoKQH8VCLx3uyKz6Z7d7h3N8QnSSxDAZ3cjafFqVkosvQWEo-YLHNIpdG5w/s1069/UCRgrades2013-2023-phil.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="320" data-original-height="583" data-original-width="1069" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBIcX7KdWUW84xy5klYRam2cGypaGM79F7M4oSgaJjS2J7tVfDeCvysXSvoPG__YscMBJyYQmwOBhz2ZF8CweuEfQWTkshsZt58sHEItzao1RertO797PN65avYcXEoKQH8VCLx3uyKz6Z7d7h3N8QnSSxDAZ3cjafFqVkosvQWEo-YLHNIpdG5w/s320/UCRgrades2013-2023-phil.jpg"/></a></div>
[click to enlarge and clarify]<p>
Lower division philosophy classes at UCR increased from 25% As in Fall 2013 to 40% As in Fall 2023, while upper division classes increased from 26% to 47% As.<p>
Smoothing out quarter-by-quarter differences, here is the percentage of As, Fall 2013 - Spring 2014 vs Winter 2023 - Fall 2023 for Philosophy and some selected other disciplines at UCR for comparison:</br>
Philosophy: 27% to 43% (28% to 42% lower, 25% to 46% upper)</br>
English: 20% to 33% (15% to 28% lower, 38% to 64% upper)</br>
History: 28% to 52% (23% to 52% lower, 48% to 52% upper)</br>
Business: 28% to 46% (20% to 24% lower, 29% to 49% upper)</br>
Psychology: 32% to 51% (33% to 51% lower, 31% to 51% upper)</br>
Biology: 22% to 38% (28% to 36% lower, 17% to 41% upper)</br>
Physics: 26% to 39% (26% to 37% lower, 40% to 41% upper)</br><p>
As you can see, <b>in some disciplines at some levels, the percentage of As has doubled or more</b> over the ten-year period.<p>
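To make the aggregation concrete, here is a minimal sketch, in Python, of how per-quarter grade counts pool into a single percent-As figure for a period. The counts are hypothetical placeholders, not the actual UCR data:<p>
<pre>
# Pool per-quarter grade counts into one percent-As figure for a period.
# All counts below are invented, for illustration only.

quarters = {
    "Fall 2013":   {"A": 320, "B": 340, "C": 250, "DFW": 90},
    "Winter 2014": {"A": 300, "B": 360, "C": 240, "DFW": 100},
    "Spring 2014": {"A": 310, "B": 350, "C": 245, "DFW": 95},
}

def percent_as(terms):
    """Percentage of As across all terms, pooling raw counts so that
    large and small terms are weighted by enrollment (rather than
    averaging the per-term percentages)."""
    total_as = sum(t["A"] for t in terms.values())
    total_grades = sum(sum(t.values()) for t in terms.values())
    return 100 * total_as / total_grades

print(f"{percent_as(quarters):.0f}% As")  # 31% with these toy numbers
</pre>
Pooling raw counts rather than averaging per-term percentages keeps large and small terms appropriately weighted.<p>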
UCR is probably not unusual in the respects I have described. However, if other people have similar analyses for their own institutions, I'd be interested to hear, especially if the pattern is different.<p>
I doubt, unfortunately, that students are actually performing that much better. UCR philosophy students in 2023 were not dramatically better at writing, critical thinking, and understanding historical material than were students in 2013. I conjecture that the main cause of grade inflation is <b>institutional pressures toward easier grading</b>.<p>
I see two institutional pressures toward higher grades and more relaxed standards:<p>
<i>Teaching evaluations:</i> Generally students give better teaching evaluations to professors from whom they expect better grades.[1] Other things being equal, a professor who gives few As will get worse evaluations than one who gives many As. <b>Since professors' teaching is often judged in large part on student evaluations, professors will tend to be institutionally rewarded for giving higher grades, ensuring happier students who give them better evaluations.</b> Professors who are easier graders, if this fact is known among the student body, will also tend to get higher enrollments.<p>
<i>Graduation rates:</i> At the institutional level, success is often evaluated in terms of graduation rates. If students fail to complete their degrees or take longer than expected to do so because they are struggling with classes, this looks bad for the institution. <b>Thus, there is institutional pressure toward <a href="https://www.nber.org/system/files/working_papers/w28710/w28710.pdf">lower standards to ensure high levels of student graduation</a> and "success".</b><p>
There are few countervailing institutional pressures toward rigor and more challenging grading. If classes are insufficiently rigorous, a school might risk losing its <a href="https://www.acswasc.org/">WASC accreditation</a>, but few well-established colleges and universities are at genuine risk of that.<p>
At some point, the grade "A" loses its strength as a signal of excellence. If over 50% of students are receiving As, then an A is consistent with average performance. Yes, for some inspiring teachers and some amazing student groups, average performance might be truly excellent! But that's not the typical scenario.<p>
I have one positive suggestion for how to deal with grade inflation. But before I get to it, I want to mention one other striking phenomenon: the variation in grade distributions between terms for what is nominally the same course. For example, here is the distribution chart for one of the lower division classes in UCR's Philosophy Department:
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1YehThXA1WOIsGepLuCYFNZb1vnVYrQFmh74lWtYFP9uYWkIGGElmPWzKF89qcbRZ2LY0tPZzSLh0NI83ZovudzEY3WxgKKw9KnX48V5RxM-NIjThtrbkd-yAgwLrIpxs9CKTXErO0qql9t6mPIAx4oY4vy7nmfss06RZw8mP553W15zYwjQecg/s1078/UCRgrades2013-2023-philsample.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="320" data-original-height="718" data-original-width="1078" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1YehThXA1WOIsGepLuCYFNZb1vnVYrQFmh74lWtYFP9uYWkIGGElmPWzKF89qcbRZ2LY0tPZzSLh0NI83ZovudzEY3WxgKKw9KnX48V5RxM-NIjThtrbkd-yAgwLrIpxs9CKTXErO0qql9t6mPIAx4oY4vy7nmfss06RZw8mP553W15zYwjQecg/s320/UCRgrades2013-2023-philsample.jpg"/></a></div>
[click to enlarge and clarify]<p>
The distribution ranges from 11% As in Fall 2014 to 72% As in Fall 2020.<p>
Some departments in some universities have moved to standardized curricula and tests so that the same class in each term is taught and graded similarly. In philosophy, this is probably not the right approach, since different instructors can reasonably want to focus on different material, approached and graded differently. Still, that degree of term-by-term variation in what is nominally the same class raises issues of fairness to students.<p>
My suggestion is: <b>sunlight</b>. Let course grade distributions be widely shared and known.<p>
Sunlight won't solve everything -- far from it -- but when we look at students' teaching evaluations, seeing the professor's grade distribution provides valuable context and might disincentivize cynically inflating grades to win better evaluations. I've evaluated teaching for teaching awards, for visiting instructors, and for my own colleagues, and I'm struck by how rarely information about grade distributions is even supplied in the context of evaluating teaching. <b>A full picture of a professor's teaching should include the range of grades they distribute and, ideally, random samples of tests and assignments that earn As, Bs, and Cs</b>. This puts us in a better position to celebrate the work of professors with high standards and the students in their classes who live up to those standards.<p>
Similarly, <b>grade distributions should be made available at the departmental and institutional level</b>. In combination with other evidence -- again, ideally random samples of assignments awarded A, B, and C -- this can help in evaluating the extent to which those departments and institutions are holding students to high standards.<p>
<b>Student transcripts, too, might be better understood in the context of institutions' and departments' grading standards.</b> This would allow viewers of the transcript to know whether a student's 3.7 GPA is a rare achievement in their institutional context, or simply average performance.<p>
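As a toy illustration of how such context could work, here is a minimal sketch (in Python, with invented GPA distributions) of placing a 3.7 GPA within two different institutional contexts:<p>
<pre>
from bisect import bisect_left

def gpa_percentile(gpa, institution_gpas):
    """Percentage of an institution's graduates whose GPA falls below
    the given GPA. Real distributions would come from the institution's
    records; the two below are invented."""
    ranked = sorted(institution_gpas)
    return 100 * bisect_left(ranked, gpa) / len(ranked)

lenient  = [3.5, 3.6, 3.7, 3.8, 3.8, 3.9, 3.9, 4.0, 4.0, 4.0]
rigorous = [2.4, 2.7, 2.9, 3.0, 3.1, 3.2, 3.3, 3.5, 3.7, 3.9]

print(gpa_percentile(3.7, lenient))   # 20.0 -- below the median here
print(gpa_percentile(3.7, rigorous))  # 80.0 -- near the top here
</pre>
The same 3.7 reads very differently against the two distributions -- which is exactly the context a bare transcript omits.<p>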
--------------------------------------------------<p>
[1] A <a href="https://edworkingpapers.com/sites/default/files/ai22-513.pdf">recent study</a> suggests that grade satisfaction might be the primary driver of the correlation between students' expected grades and their course evaluations, rather than grading leniency per se -- these can come apart when a student is satisfied with their grade as a result of their hard work for it -- but grading leniency is an instructor's easiest path to generating student grade satisfaction, so the institutional pressure remains.Eric Schwitzgebelhttp://www.blogger.com/profile/16274774112862434865noreply@blogger.com12tag:blogger.com,1999:blog-26951738.post-33617694434251666862024-02-02T13:07:00.000-08:002024-02-02T13:07:22.570-08:00Swallows and Moles in PhilosophyIn <a href="https://www.science.org/doi/pdf/10.1126/science.adn0629">his review</a> (in the journal <i>Science</i> -- cool!) of my recently released book, <a href="https://press.princeton.edu/books/hardcover/9780691215679/the-weirdness-of-the-world"><i>The Weirdness of the World</i></a>, Edouard Machery writes:<p>
<blockquote>There are two kinds of philosophers: swallows and moles. Swallows love to soar and to entertain philosophical hypotheses at best loosely connected with empirical knowledge. Plato and Gottfried Leibniz are paradigmatic swallows. Moles, on the contrary, rummage through mundane facts about our world and aim at better understanding it. Aristotle, William James, and Hans Reichenbach are paradigmatic moles. Eric Schwitzgebel is unabashedly a swallow.</blockquote><p>
Machery admits to having a mole's-eye view of the swallows. <b>He praises the book, but he is frustrated by my admittedly wild speculations about radical skepticism, group consciousness, an infinite future, etc.</b><p>
Machery's goal in his own recent book <i><a href="https://global.oup.com/academic/product/philosophy-within-its-proper-bounds-9780198807520">Philosophy Within Its Proper
Bounds</a></i> was, he says, "to curtail the flights of fancy with which contemporary philosophers are enamored". <i>The Weirdness of the World</i> celebrates such flights of fancy -- so naturally, Machery and I are going to disagree about the value of wild philosophical speculation.<p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9_yxvw8B2cUjVYpd2KKtVEy9dWoD6ggzuDUmk9mEV4QGhglcxrjFLgNr8a0CljAE6CMvNybEvhyGyExRqYe-PriPj1c9ZyXo_rCyRU5zSFpByDZka_eG0UCB1jC1GJocXvK_EDPm4-4nN65-SZvMTb1j-_XhQohZywcLP6HeD6K-scoJP8xxsgA/s1082/MacheryReview.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="320" data-original-height="822" data-original-width="1082" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9_yxvw8B2cUjVYpd2KKtVEy9dWoD6ggzuDUmk9mEV4QGhglcxrjFLgNr8a0CljAE6CMvNybEvhyGyExRqYe-PriPj1c9ZyXo_rCyRU5zSFpByDZka_eG0UCB1jC1GJocXvK_EDPm4-4nN65-SZvMTb1j-_XhQohZywcLP6HeD6K-scoJP8xxsgA/s320/MacheryReview.jpg"/></a></div><p>
Reading Machery's contrast of swallows and moles, I was immediately reminded of how the ancient Chinese philosopher Zhuangzi opens his Inner Chapters:<p>
<blockquote>There is a fish in the Northern Oblivion named Kun, and this Kun is quite huge, spanning who knows how many thousands of miles. He transforms into a bird named Peng, and this Peng has quite a back on him, stretching who knows how many thousands of miles. When he rouses himself and soars into the air, his wings are like clouds draped across the heavens. The oceans start to churn, and this bird begins his journey toward the Southern Oblivion....</br></br>
The quail laughs at him, saying, "Where does he think he's going? I leap into the air with all my might, but before I get farther than a few yards I drop to the ground. My twittering and fluttering between the branches is the utmost form of flying! So where does he think he's going?" (Ziporyn trans., pp. 3-4).
</blockquote><p>
Zhuangzi is the swallowiest of swallows, soaring far beyond mundane empirical facts, wondering if life might be a dream, speculating about trees who measure eight thousand years as a single autumn, and celebrating "spirit men" with skin like ice and snow who eat only wind and dew, riding upon the air and clouds.<p>
Zhuangzi's quail, however, raises a good point: <b>It's much clearer where you're going if you confine yourself to small hops between familiar branches.</b> The Peng is neither practical nor grounded, and Zhuangzi's philosophy is arguably the same. Zhuangzi's friend Huizi scolds him: "Your words are... big and useless, which is why they are rejected by everyone who hears them" (Ziporyn trans., p. 8).<p>
In defense against Machery and the quail critique, I offer three thoughts:<p>
<b>
First, if anyone is going to speculate about wild possibilities concerning the fundamental nature of things, philosophers should be among them.</b><p>
It would be a sad, gray world if our reasoning were always confined to "proper bounds" and we couldn't reflect on issues like dream skepticism, group consciousness, and infinitude. Shouldn't it be part of the job description of philosophy to explore such ideas, considering what can or should be made of them?<p>
Such speculations needn't be entirely unconstrained by empirical facts, even if empirical science fails to deliver decisive answers. In <i>The Weirdness of the World</i> my speculations always start from empirical observation. My discussion of dream skepticism engages with the science of dreams; my discussion of group consciousness engages with the science of consciousness; my chapter on the possible infinite future -- collaborative with physicist and philosopher of physics Jacob Barandes -- is grounded in the standard working assumptions of mainstream physics. Scientifically informed philosophers are as well-positioned as anyone to speculate about wild hypotheticals that naturally intrigue us (at least some of us). To stand athwart such speculations, saying "Thou shalt not enter this epistemic wilderness!" is to reject an intrinsically valuable form of human philosophical curiosity.<p>
<b>Second, we can distinguish two types of swallow: those confident that their wild hypotheses are correct and those who merely entertain and explore such hypotheses</b>.<p>
Maybe Plato was convinced of the <a href="https://en.wikipedia.org/wiki/Theory_of_forms">reality of Forms</a> and <a href="https://iep.utm.edu/meno-2/#SH2b">the recollection theory of memory</a>. Maybe Leibniz was convinced that the world was composed of <a href="https://plato.stanford.edu/entries/leibniz/#MonWorPhe">monads</a> in <a href="https://plato.stanford.edu/entries/leibniz/#PreEstHar">pre-established harmony</a>. But Zhuangzi was a <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/ZZ.htm">self-undermining skeptic</a> who appears to have taken none of his wild speculations as established fact.<p>
I don't argue that the United States definitely has conscious experiences; I argue that if we accept standard materialist approaches to consciousness, they seem to imply that it does and that therefore we should take the idea seriously as a possibility. I don't argue that this is a dream or a short-term simulation; I argue that our ordinary culturally-given understanding of the world and mainstream scientific assumptions combine to justify assigning a non-trivial (maybe about 0.1%) credence to both of those possibilities. Barandes and I don't argue that there definitely is an infinite future in which future counterparts of you enact almost every possible action, but only that it follows from "certain not wholly implausible assumptions".<p>
When soaring in speculation far beyond the mundane local tree branches, doubt is appropriate. The most natural critique of swallows is that they appear to believe wild things on thin evidence. That critique is harder to sustain when the swallow explicitly treats the speculations as speculations only, rather than as established facts.<p>
<b>Third, the swallow and the mole can collaborate</b> -- even in the work of a single philosopher. As Jonathan Birch comments in my <a href="https://www.facebook.com/eschwitz/posts/pfbid02zaWJRY6NpwSn2mJb4BFyA9wszSZEoWEQwVRHN2VX1niFTtXfgfqQWa5rTWjwJyPsl">Facebook post</a> linking to Machery's book review, two of Edouard's paradigmatic examples of moles -- Aristotle and William James -- are probably not best thought of as pure moles, but rather as swallow-moles. They dug around quite a bit in mundane empirical facts, yes. But they sometimes also soared with the swallows. Aristotle speculated on the existence of a supraphysical <a href="https://plato.stanford.edu/entries/aristotle-natphil/#MoveUnmoMove">unmoved mover</a> responsible for the existence of the physical world. James speculated about metaphysical "<a href="https://plato.stanford.edu/entries/neutral-monism/#WillJame">neutral monism</a>" concerning mind and matter and celebrated <a href="https://en.wikipedia.org/wiki/The_Varieties_of_Religious_Experience">religious belief beyond the evidence</a>.<p>
I too have done a fair bit of mundane empirical work -- for example, on the moral behavior of ethics professors (e.g., <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/EthSelfRep.htm">here</a> and <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/EthBehBlackwell.htm">here</a>), on introspective method (e.g., <a href="https://mitpressbookstore.mit.edu/book/9780262516495">here</a> and <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/ExpWOAttn.htm">here</a>), and on the consequences of exposure to ethical argumentation (e.g., <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/MeatEthics.htm">here</a> and <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/EngagingCharitableGiving.htm">here</a>). Even when I am not myself running the empirical studies, much of my work engages with nitty-gritty empirical detail (e.g., on the <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/DreamB&W.htm">history of reports of coloration in dreams</a>, on the <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/Snails.htm">cognitive capacities of garden snails</a>, on the <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/Imagery.htm">accuracy of visual imagery reports</a>, and on <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/FiveTypes.htm">psychological measures of well-being</a>).<p>
Often, I think, deep empirical mole-digging is valuable for one's subsequent speculative soaring. Digging into the details of cosmological models enables better informed speculation about the distant future. Digging into the details of the behavior of ethics students and professors enables better informed speculation about the general relation between ethical reflection and ethical behavior. Digging into the details of dream reports enables better informed speculation about dream skepticism. As Zhuangzi imagines, a low-lying fish can transform into a soaring phoenix.<p>
No single researcher needs to do both the digging and the soaring, even if some of us enjoy both types of task. But it's valuable to have a whole ecosystem of moles and swallows, <a href="https://press.princeton.edu/books/paperback/9780691156002/the-hedgehog-and-the-fox">foxes and hedgehogs</a>, <a href="https://dailynous.com/2024/01/29/swallows-moles-and-other-animal-philosopher-typologies/">ants and anteaters</a>, <a href="https://theviewfromtheowlsroost.com/2017/09/28/philosophy-truth-or-dare/">truth philosophers and dare philosophers</a>, and so on.<p>
I'm honored that Machery counts me among the swallows. I celebrate his moleishness. Let's dig <i>and</i> soar!Eric Schwitzgebelhttp://www.blogger.com/profile/16274774112862434865noreply@blogger.com5tag:blogger.com,1999:blog-26951738.post-56788158762397354052024-01-25T11:00:00.000-08:002024-01-25T11:00:05.766-08:00Imagining Yourself in Another's Shoes vs. Extending Your Concern: Empirical and Ethical Differences[<a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/ExtendingConcern.htm">new paper in draft</a>]<p>
The Golden Rule (do unto others as you would have others do unto you) isn't <i>bad</i>, exactly -- it can serve a valuable role -- but I think there's something more empirically and ethically attractive about the relatively underappreciated idea of "extension" found in the ancient Chinese philosopher Mengzi.<p>
The fundamental idea of extension, as I interpret it, is to notice the concern one naturally has for nearby others -- whether they are relationally near (like close family members) or spatially near (like Mengzi's child about to fall into a well or Peter Singer's child you see drowning in a shallow pond) -- and, attending to relevant similarities between those nearby cases and more distant cases, to extend your concern to the more distant cases.<p>
I see three primary advantages to extension over the Golden Rule (not that these constitute an exhaustive list of means of moral expansion!).<p>
(1.) <b>Developmentally and cognitively, extension is less complex.</b> The Golden Rule, properly implemented, involves imagining yourself in another's shoes, then considering what you would want if you were them. This involves a non-trivial amount of "<a href="https://en.wikipedia.org/wiki/Theory_of_mind">theory of mind</a>" and hypothetical reasoning. You must notice how others' beliefs, desires, and other mental states relevantly differ from yours, then you must imagine yourself hypothetically having those different mental states, and then you must assess what you would want in that hypothetical case. In some cases, there might not even be a fact of the matter about what you would want. (As an extreme example, imagine applying the Golden Rule to an award-winning show poodle. Is there a fact of the matter about what <i>you</i> would want if you were an award-winning show poodle?) Mengzian extension seems cognitively simpler: Notice that you are concerned about nearby person X and want W for them, notice that more distant person Y is relevantly similar, and come to want W for them also. This resembles ordinary generalization between relevant cases: This wine should be treated this way, therefore other similar wines should be treated similarly; such-and-such is a good way to treat this person, so such-and-such is probably also a good way to treat this other similar person.<p>
(2.) <b>Empirically, extension is a more promising method for expanding one's moral concern.</b> Plausibly, it's more of a motivational leap to go from concern about self to concern about distant others (Golden Rule) than to go from concern for nearby others to similar more distant others (Mengzian Extension). When aid agencies appeal for charitable donations, they don't typically ask people to imagine what they would want if they were living in poverty. Instead, they tend to show pictures of children, drawing upon our natural concern for children and inviting us to extend that concern to the target group. Also -- as I plan to discuss in more detail in a post next month -- in the "argument contest" Fiery Cushman and I ran back in 2020, <a href="https://schwitzsplinters.blogspot.com/2020/06/contest-winner-philosophical-argument.html">the arguments most successful in inspiring charitable donation</a> employed Mengzian extension techniques, while appeals to "other's shoes" style reasoning did not tend to predict higher levels of donation than did the average argument.<p>
(3.) <b>Ethically, it's more attractive to ground concern for distant others in the extension of concern for nearby others than in hypothetical self-interest.</b> Although there's something attractive about caring for others because you can imagine what you would want if you were them, there's also something a bit... self-centered? egoistic? ... about grounding other-concern in hypothetical self-concern. Rousseau writes: "love of men derived from love of self is the principle of human justice" (<i>Emile</i>, Bloom trans., p. 235). Mengzi or Confucius would never say this! In Mengzian extension, it is ethically admirable concern for nearby others that is the root of concern for more distant others. Appealingly, I think, the focus is on broadening one's admirable ethical impulses, rather than hypothetical self-interest.<p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZOr66BhrNOH_1pmBt5knmlAx5xA3hccHLoddkY9rJQkhe_AFduJq8kJ1fZzS2sak8eU7DXGyW9LNYUwBRnUK_bo4P7DVqVu5iaZjYxpe2n_ij0sbNVB5B05RDF9S83bvp75OYRsm1bDd9S6cys8NuO6Q-gw-mrbY0PoWKcHHkebK7RyIwIBIWzQ/s1024/DALL%C2%B7E%202024-01-25%2010.47.07%20-%20A%20scene%20set%20in%20ancient%20China%20depicting%20a%20child%20about%20to%20fall%20into%20a%20well%20with%20a%20startled%20and%20concerned%20onlooker%20nearby.%20The%20child%20is%20precariously%20bala.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="320" data-original-height="1024" data-original-width="1024" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZOr66BhrNOH_1pmBt5knmlAx5xA3hccHLoddkY9rJQkhe_AFduJq8kJ1fZzS2sak8eU7DXGyW9LNYUwBRnUK_bo4P7DVqVu5iaZjYxpe2n_ij0sbNVB5B05RDF9S83bvp75OYRsm1bDd9S6cys8NuO6Q-gw-mrbY0PoWKcHHkebK7RyIwIBIWzQ/s320/DALL%C2%B7E%202024-01-25%2010.47.07%20-%20A%20scene%20set%20in%20ancient%20China%20depicting%20a%20child%20about%20to%20fall%20into%20a%20well%20with%20a%20startled%20and%20concerned%20onlooker%20nearby.%20The%20child%20is%20precariously%20bala.png"/></a></div>
[ChatGPT4's rendering of Mengzi's example of a child about to fall into a well, with a concerned onlooker; I prefer <a href="https://1000wordphilosophy.com/2018/04/10/mengzis-moral-psychology-part-1-the-four-moral-sprouts/">Helen De Cruz's version</a>]<p>
My new paper on this -- forthcoming in Daedalus -- is <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/ExtendingConcern.htm">circulating today</a>. As always, comments, objections, corrections, connections welcome, either as comments on this post, on social media, or by email.<p>
Abstract:<p>
According to the Golden Rule, you should do unto others as you would have others do unto you. Similarly, people are often exhorted to "imagine themselves in another's shoes." A related but contrasting approach to moral expansion traces back to the ancient Chinese philosopher Mengzi, who urges us to "extend" our concern for those nearby to more distant people. Other approaches to moral expansion involve: attending to the good consequences for oneself of caring for others, expanding one's sense of self, expanding one's sense of community, attending to others' morally relevant properties, and learning by doing. About all such approaches, we can ask three types of question: To what extent do people in fact (e.g., developmentally) broaden and deepen their care for others by these different methods? To what extent do these different methods differ in ethical merit? And how effectively do these different methods produce appropriate care?Eric Schwitzgebelhttp://www.blogger.com/profile/16274774112862434865noreply@blogger.com10tag:blogger.com,1999:blog-26951738.post-56110533549377097452024-01-16T00:00:00.000-08:002024-01-16T12:45:43.098-08:00The Weirdness of the World: Release Day and IntroductionToday is the official U.S. release day of my newest book, <a href="https://press.princeton.edu/books/hardcover/9780691215679/the-weirdness-of-the-world">The Weirdness of the World</a>!<p>
As a teaser, here's the introduction:<p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhn_BHGvR4snbuv_5m9ljqpYP_03R4F1Rkv_yuJ4GetvDXGgNT_mlBVrnk7E3vPLK6zj6vVQDOCxE9t5m_MRr4gFmXrpBcBtriJEpyoVAKh_CqgwwGK44C3SP_8pgKsJl3xwcXFWkF2bF_CFHnhVhnxMW2BBJh7lGf_8E13Sf_jHJbhe1c96oTUmw/s2850/WeirdnessCover.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="320" data-original-height="2850" data-original-width="1875" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhn_BHGvR4snbuv_5m9ljqpYP_03R4F1Rkv_yuJ4GetvDXGgNT_mlBVrnk7E3vPLK6zj6vVQDOCxE9t5m_MRr4gFmXrpBcBtriJEpyoVAKh_CqgwwGK44C3SP_8pgKsJl3xwcXFWkF2bF_CFHnhVhnxMW2BBJh7lGf_8E13Sf_jHJbhe1c96oTUmw/s320/WeirdnessCover.jpg"/></a></div><p>
<b>In Praise of Weirdness</b><p>
<blockquote>The weird sisters, hand in hand,</br>
Posters of the sea and land,</br>
Thus do go about, about:</br>
Thrice to thine and thrice to mine</br>
And thrice again, to make up nine.</br>
Peace! the charm’s wound up.</br>
—Shakespeare, Macbeth, Act I, scene iii</br></br>
Weird often saveth</br>
The undoomed hero if doughty his valor!</br>
—Beowulf, X.14–15, translated by J. Lesslie Hall</blockquote><p></br>
The word “weird” has deep roots in Old English, originally as a noun
for fate or magic, later evolving toward its present use as an adjective for
the uncanny or peculiar. By the 1980s, it had fruited as the choicest
middle-school insult against unstylish kids like me who spent their free
time playing with figurines of wizards and listening to obscure science
fiction radio shows. If the “normal” is the conventional, ordinary, and
readily understood, the weird is what defies that.<p>
The world is weird -- deeply, pervasively so, weird to its core, or so I
will argue in this book. Among the weirdest things about Earth is that
certain complex bags of mostly water can pause to reflect on the most
fundamental questions there are. We can philosophize to the limits of
our comprehension and peer into the fog beyond those limits. We can
contemplate the foundations of reality, and the basis of our understanding of those foundations, and the necessary conditions of the basis of
our understanding of those foundations, and so on, trying always to
peer behind the next curtain, even with no clear method and no great
hope of a satisfying end to the inquiry. In this respect, we vastly outgeek
bluebirds and kangaroos and are rightly a source of amazement to
ourselves.<p>
I will argue that careful inquiry into fundamental questions about
consciousness and cosmology reveals not a set of readily comprehensible answers but instead a complex blossoming of bizarre possibilities.
These possibilities compete with one another, or combine in non-obvious ways. Philosophical and cosmological inquiry teaches us that
something radically contrary to common sense must be true about the
fundamental structures of the mind and the world, while leaving us
poorly equipped to determine where exactly the truth lies among the
various weird possibilities.<p>
We needn’t feel disappointed by this outcome. The world is richer
and more interesting for escaping our understanding. How boring it
would be if everything made sense!<p>
<b>1. My Weird Thesis</b><p>
Consider three huge questions: What is the fundamental structure of
the cosmos? How does human consciousness fit into it? What should
we value? What I will argue in this book -- with emphasis on the first
two questions but also sometimes touching on the third -- is (1) the
answers to these questions are currently beyond our capacity to know,
and (2) we do nonetheless know at least this: Whatever the truth is, it’s
weird. Careful reflection will reveal that every viable theory on these
grand topics is both bizarre and dubious. In chapter 2 (“Universal
Bizarreness and Universal Dubiety”), I will call this the Universal
Bizarreness thesis and the Universal Dubiety thesis. Something that seems almost too preposterous to believe must be true, but we lack the
means to resolve which of the various preposterous-seeming options is
in fact correct. If you’ve ever wondered why every wide-ranging,
foundations-minded philosopher in the history of Earth has held bizarre metaphysical or cosmological views (I challenge you to find an
exception!) -- with each philosopher holding, seemingly, a different set
of bizarre views -- chapter 2 offers an explanation.<p>
I will argue that every approach to cosmology and consciousness has
implications that run strikingly contrary to mainstream “common
sense” and that, partly in consequence, we ought to hold such theories
only tentatively. Sometimes we can be justified in simply abandoning
what we previously thought of as common sense, when we have firm
scientific grounds for thinking otherwise; but questions of the sort I
explore in this book test the limits of scientific inquiry. Concerning such
matters, nothing is firm -- neither common sense, nor science, nor any
of our other epistemic tools. The nature and value of scientific inquiry
itself rely on disputable assumptions about the fundamental structure
of the mind and the world, as I discuss in chapters on skepticism (chapter 4), idealism (chapter 5), and whether the external world exists
(chapter 6).<p>
On a philosopher’s time scale -- where a few decades ago is “recent”
and a few decades from now is “soon” -- we live in a time of change, with
cosmological theories and theories of consciousness rising and receding
in popularity based mainly on broad promise and what captures researchers’ imaginations. We ought not trust that the current range of
mainstream theories will closely resemble the range in a hundred years,
much less the actual truth.<p>
<b>2. Varieties of Cosmological Weirdness<p></b>
To establish that the world is cosmologically weird, maybe all that is
needed is relativity theory and quantum mechanics.<p>
According to relativity theory, if your twin accelerates away from you
at very high speed, then returns, much less time will have passed for the
traveler than for you who stayed here on Earth -- the so-called Twin Paradox. According to the most straightforward interpretation of quantum mechanics, if you observe what we ordinarily consider to be a
chance event, there’s also an equally real, equally existing version of you
in another “world” who shares your past but for whom the event turned
out differently. (Or maybe your act of observation caused the event to
turn out one way rather than the other, or maybe some other bizarre
thing is true, depending on the correct interpretation of quantum mechanics, but it’s widely accepted that there are no non-bizarre interpretations.) So if you observe the chance decay of a uranium atom, for
example, there’s another world branching off from this one, containing
a counterpart of you who observes the atom not to have decayed. If we
accept that view, then the cosmos contains a myriad of different, equally
real worlds, each with different versions of you and your friends and
everything you know, all splitting off from a common past.<p>
I won’t dwell on those particular cosmological peculiarities, since
they are familiar to academic readers and well handled elsewhere.
However, some equally fundamental cosmological issues are typically
addressed by philosophers rather than scientific cosmologists.<p>
One is the possibility that the cosmos is nowhere near as large as we
ordinarily assume -- perhaps just you and your immediate environment
(chapter 4) or perhaps even just your own mind and nothing else (chapter 6). Although these possibilities might appear unlikely, they are
worth considering seriously, to assess how confident we ought to be in
their falsity, and on what grounds. I will argue that it’s reasonable not to
entirely dismiss such skeptical possibilities. Alternatively, and more in
line with mainstream physical theory, the cosmos might be infinite,
which brings its own train of bizarre consequences (chapter 7).<p>
Another possibility is that we live inside a simulated reality or a
pocket universe, embedded in a much larger structure about which we
know virtually nothing (chapters 4 and 5). Yet another possibility is that
our experience of three-dimensional spatiality is a product of our own
minds that doesn’t reflect the underlying structure of reality (chapter 5)
or that our sensory experience maps only loosely onto the underlying
structure of reality (chapter 9).<p>
Still another set of questions concerns the relationship of mind to
cosmos. Is conscious experience abundant in the universe, or does it require the delicate coordination of rare events (chapter 10)? Is consciousness purely a matter of having the right physical structure, or
might it require something non-physical (chapter 2)? Under what conditions might a group of organisms give rise to group-level consciousness
(chapter 3)? What would it take to build a conscious machine, if that is
possible at all -- and what should we do if we don’t know whether we
have succeeded (chapter 11)?<p>
In each of our heads there are about as many neurons as stars in our
galaxy, and each neuron is arguably more structurally complex than any
star system that does not contain life. There is as much complexity and
mystery inside as out.<p>
The repeated theme: In the most fundamental matters of consciousness and cosmology, neither common sense, nor early twenty-first-century empirical science, nor armchair philosophical theorizing is
entirely trustworthy. The rational response is to distribute our credence
across a wide range of bizarre options.<p>
Each chapter is meant to be separately comprehensible. Please feel
free to skip ahead, reading any subset of them in any order.<p>
<b>3. Philosophy That Closes versus Philosophy That Opens</b><p>
You are reading a philosophy book -- voluntarily, let’s suppose. Why?
Some people read philosophy because they believe it reveals profound,
fundamental truths about the way the world really is and the one right
manner to live. Others like the beauty of grand philosophical systems.
Still others like the clever back-and-forth of philosophical dispute.
What I like most is none of these. I love philosophy best when it opens
my mind -- when it reveals ways the world could be, possible approaches
to life, lenses through which I might see and value things around me,
which I might not otherwise have considered.<p>
Philosophy can aim to open or to close. Suppose you enter Philosophical Topic X imagining three viable, mutually exclusive possibilities, A, B, and C. The philosophy of closing aims to reduce the three to
one. It aims to convince you that possibility A is correct and the others wrong. If it succeeds, you know the truth about Topic X: A is the answer! In contrast, the philosophy of opening aims to add new possibilities to the mix -- possibilities that you hadn’t considered before or had considered but too quickly dismissed. Instead of reducing three to one,
three grows to maybe five, with new possibilities D and E. We can learn
by addition as well as subtraction. We can learn that the range of viable
possibilities is broader than we had assumed.<p>
For me, the greatest philosophical thrill is realizing that something
I’d long taken for granted might not be true, that some “obvious” apparent truth is in fact doubtable -- not just abstractly and hypothetically
doubtable, but really, seriously, in-my-gut doubtable. The ground shifts
beneath me. Where I’d thought there would be floor, there is instead
open space I hadn’t previously seen. My mind spins in new, unfamiliar
directions. I wonder, and the world itself seems to glow with a new wondrousness. The cosmos expands, bigger with possibility, more complex,
more unfathomable. I feel small and confused, but in a good way.<p>
Let’s test the boundaries of the best current work in science and philosophy. Let’s launch ourselves at questions monstrously large and formidable. Let’s contemplate these questions carefully, with serious scholarly
rigor, pushing against the edge of human knowledge. That is an intrinsically worthwhile activity, worth some of our time in a society generous
enough to permit us such time, even if the answers elude us.<p>
My middle-school self who used dice and thrift-shop costumes to
imagine astronauts and wizards is now a middle-aged philosopher who
uses twenty-first-century science and philosophy to imagine the shape
of the cosmos and the magic of consciousness. Join me! If doughty our
valor, mayhap the weird saveth us.Eric Schwitzgebelhttp://www.blogger.com/profile/16274774112862434865noreply@blogger.com14tag:blogger.com,1999:blog-26951738.post-69770701577468944532024-01-12T09:26:00.000-08:002024-01-15T11:50:19.737-08:00Demographic Trends in the U.S. Philosophy Major, 2001-2022 -- Including Total Majors, Second Majors, Gender, and Race<p>I'm preparing for an <a href="https://cdn.ymaws.com/www.apaonline.org/resource/resmgr/eastern2024/e2024_meeting_program_draft.pdf">Eastern APA session on the "State of Philosophy"</a> next Thursday, and I thought I'd share some data on philosophy major bachelor's degree completions from the National Center for Education Statistics <a href="https://nces.ed.gov/ipeds/use-the-data">IPEDS database</a>, which compiles data on virtually all students graduating from accredited colleges and universities in the U.S., as reported by administrators.</p><p>I examined all data from the 2000-2001 academic year (the first year in which they started recording data on second majors) through 2021-2022 (the most recent available year).</p><p><b>Total Numbers of Philosophy Majors: The Decline Has Stopped</b></p><p>First, the sharp decline in philosophy majors since 2013 has stopped:</p><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;"><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2001: <span style="white-space: pre;"> </span>5836</blockquote><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2002: <span style="white-space: pre;"> </span>6529</blockquote><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2003: <span style="white-space: pre;"> </span>7023</blockquote><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2004: <span style="white-space: pre;"> </span>7707</blockquote><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2005: <span style="white-space: pre;"> </span>8283</blockquote><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2006: <span style="white-space: pre;"> </span>8532</blockquote><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2007: <span style="white-space: pre;"> </span>8541</blockquote><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2008: <span style="white-space: pre;"> </span>8778</blockquote><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2009: <span style="white-space: pre;"> </span>8996</blockquote><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2010: <span style="white-space: pre;"> </span>9268</blockquote><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2011: <span style="white-space: pre;"> </span>9292</blockquote><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2012: <span style="white-space: pre;"> </span>9362</blockquote><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2013: <span style="white-space: pre;"> </span>9427</blockquote><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2014: <span style="white-space: pre;"> </span>8820</blockquote><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2015: <span style="white-space: pre;"> </span>8184</blockquote><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2016: <span style="white-space: pre;"> </span>7489</blockquote><blockquote 
style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2017: <span style="white-space: pre;"> </span>7572</blockquote><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2018: <span style="white-space: pre;"> </span>7667</blockquote><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2019: <span style="white-space: pre;"> </span>8074</blockquote><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2020: <span style="white-space: pre;"> </span>8209</blockquote><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2021: <span style="white-space: pre;"> </span>8328</blockquote><blockquote style="border: medium; margin: 0px 0px 0px 40px; padding: 0px;">2022: <span style="white-space: pre;"> </span>7958</blockquote></blockquote><div class="separator" style="clear: both; text-align: center;"><br /></div>(The decline between 2021 and 2022 reflects a general decline in completions of bachelor's degrees due to the pandemic that year, rather than a trend specific to philosophy.)<div><br /></div><div>In general, the humanities have declined sharply since 2010, and history, English, and foreign languages and literature continue to decline. This graph shows the trend:</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQg8PwBjOXbV81hal_em3xuMumU-jXtxMa0CW7v6Dw7bz3aW6Qye-XqKXXk_OTDXVLOkYqlsMCkG-y0NhUsq-RPzSqAMQztm9AGij_1kq6mfHjWw7PhVbqH3r21Vzb-6-XqkIRz3cPKa8wJbD8A6DLmEwycnj9jGly8I4Rpo87va0G8hsdgnCDKA/s692/HumanitiesMajors-240111.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="668" data-original-width="692" height="309" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQg8PwBjOXbV81hal_em3xuMumU-jXtxMa0CW7v6Dw7bz3aW6Qye-XqKXXk_OTDXVLOkYqlsMCkG-y0NhUsq-RPzSqAMQztm9AGij_1kq6mfHjWw7PhVbqH3r21Vzb-6-XqkIRz3cPKa8wJbD8A6DLmEwycnj9jGly8I4Rpo87va0G8hsdgnCDKA/s320/HumanitiesMajors-240111.jpg" width="320" /></a></div><div>[click image to enlarge and clarify]</div><div><br /></div><div>The decline in the English major is particularly striking, from 4.5% of bachelor's degrees awarded in 2000-2001 to 1.8% in 2021-2022. Philosophy peaked at 0.60% in 2005-2006 and has held steady at 0.39%-0.40% since 2015-2016.</div><div><br /></div><div><b>Philosophy Relies on Double Majors</b></div><div><b><br /></b></div><div>[Expanded and edited for clarity, Jan 15] Breaking the data down by first major vs second major, we can see that over time an increasing proportion of students have philosophy as their second major. In some schools, the distinction between "first major" and "second major" is meaningful, with the first indicating the primary major. In other schools the distinction is not meaningful. 
In the 2021-2022 academic year, 24% of students who took a bachelor's degree in philosophy had it listed as their second major.</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwl-bSigeS0af4M5FuguzopEnrrKvO9ZkfDk_aDFNwvfkbEQ_T_iIr34OxK9gYjZa-NWztqEPmeuUizy-MB0g96TI-KvgUbbGJytoxuiedXGNPjijlqjfQq5hjTrPYjCcIwmt9ok1f8LXazrqti73j0enVqUXbgL4eWg0pcRI3zIYvN-V24xlvRw/s760/FirstSecondMajors-240111.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="498" data-original-width="760" height="210" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwl-bSigeS0af4M5FuguzopEnrrKvO9ZkfDk_aDFNwvfkbEQ_T_iIr34OxK9gYjZa-NWztqEPmeuUizy-MB0g96TI-KvgUbbGJytoxuiedXGNPjijlqjfQq5hjTrPYjCcIwmt9ok1f8LXazrqti73j0enVqUXbgL4eWg0pcRI3zIYvN-V24xlvRw/s320/FirstSecondMajors-240111.jpg" width="320" /></a></div><div>[click image to enlarge and clarify]</div><div><br /></div><div>From these numbers we can estimate that philosophy students are at least moderately likely to be double majors. While it's impossible to know what percentage of students who took philosophy as their first major also carried a second major, a ballpark estimate might assume that about half of students with philosophy plus one other major list philosophy first rather than second. If so, then approximately half of all philosophy majors (48%) are double majors. Overall, across all majors, only 5% of students double majored.</div><div><div><br /></div><div>The ease of double majoring is likely to influence the number of students who choose philosophy as a major.</div></div><div><br /></div><div><b>Gender Disparity Is Decreasing</b></div><div><br /></div><div>NCES classifies all students as men or women, with no nonbinary category and no unclassified students. Since the beginning of the available data in the 1980s through the mid-2010s, the percentage of women among philosophy bachelor's recipients <a href="https://schwitzsplinters.blogspot.com/2017/12/women-have-been-earning-30-34-of.html">hovered steadily between 30% and 34%</a>, not changing even as the total percentage of women increased from 51% to 57%. However, the last several years have seen a clear decrease in gender disparity, with women now earning 41% of philosophy degrees.</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkQpUBSDL94ieU-jZbe3T-Kwo-HlFe6kbTqq2cJPSNPYy38E5cMSGLH5SU1UlhV-8rrgXLkBl70YsseD-qYgRKPshIMjnSURrPMCxBpKvhemUw4QF81zWCYIvFu15dztf5Ko7ht-GrZCX8ADj_fm5tv4IKFAaLijJ433RVNeswYFZPFJbvsjknqQ/s684/WomenPhilosophy-240111.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="547" data-original-width="684" height="256" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkQpUBSDL94ieU-jZbe3T-Kwo-HlFe6kbTqq2cJPSNPYy38E5cMSGLH5SU1UlhV-8rrgXLkBl70YsseD-qYgRKPshIMjnSURrPMCxBpKvhemUw4QF81zWCYIvFu15dztf5Ko7ht-GrZCX8ADj_fm5tv4IKFAaLijJ433RVNeswYFZPFJbvsjknqQ/s320/WomenPhilosophy-240111.jpg" width="320" /></a></div><div>[click image to enlarge and clarify]</div><div><b><br /></b></div><div><b>Black Students Remain Underrepresented in Philosophy Compared to Undergraduates Overall, and Other Race/Ethnicity Data</b></div><div><br /></div><div>NCES uses the following race/ethnicity categories: U.S. nonresident, race/ethnicity unknown, Hispanic or Latino (any race), and among U.S. 
residents who are not Hispanic or Latino: American Indian or Alaska Native, Asian, Black or African American, Native Hawaiian or Other Pacific Islander, White, and two or more races. Before 2007-2008, Native Hawaiian or Other Pacific Islander was included with Asian, but inconsistently until 2010-2011. The two-or-more races option was also introduced in the 2007-2008 academic year, again with inconsistent reporting for several years.</div><div><br /></div><div>I've charted these categories below. As you can see, for most categories, the percentages are similar for philosophy and for graduates overall, except that non-Hispanic White is slightly higher for philosophy and non-Hispanic Black significantly lower. In 2021-2022, non-Hispanic Black people<a href="https://nces.ed.gov/programs/digest/d21/tables/dt21_101.20.asp"> were 14% of the U.S. population age 18-24</a>, 10% of bachelor's degree recipients, and 6% of philosophy bachelor's recipients.</div><div><br /></div><div>[as usual, click the figures to expand and clarify]</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzxOFw11aAOWr5Vi7944kqAJFoFxeK7VpTvm22Qng7okI0w33z2MOMRmxR3Mxa9DphlAb1aYhRgG-iPu_cTWEp1QnT95Lb0Wxv_Hhio98-zSA4O-Y4fvKc6R7K2DFUuXrBsFsw_6ImyqOZ7udPDh9W70zjGQ_TO7RHxJZ_NYxahZtaUGBSBgvNVQ/s636/PhilHisp-240111.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="517" data-original-width="636" height="260" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzxOFw11aAOWr5Vi7944kqAJFoFxeK7VpTvm22Qng7okI0w33z2MOMRmxR3Mxa9DphlAb1aYhRgG-iPu_cTWEp1QnT95Lb0Wxv_Hhio98-zSA4O-Y4fvKc6R7K2DFUuXrBsFsw_6ImyqOZ7udPDh9W70zjGQ_TO7RHxJZ_NYxahZtaUGBSBgvNVQ/w320-h260/PhilHisp-240111.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2hqUDhUdTb7NkzWjqXBWvkhj1UbLtzgz5KwMFqOiMyr1V7-1P-rRTbXbbOxlkQg2niBqOJxVI9ga_rYtOfZIDIaT3SFduIMTenUhIa5vBYwOIl5P_ZPQwJjXFo37FWYqg9jHEdwpMf2BJArEunLX3v_IaoPjvAHPYMlj6eyUzXG4K5VBiNn2ihw/s636/PhilAIAN-240111.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="517" data-original-width="636" height="260" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2hqUDhUdTb7NkzWjqXBWvkhj1UbLtzgz5KwMFqOiMyr1V7-1P-rRTbXbbOxlkQg2niBqOJxVI9ga_rYtOfZIDIaT3SFduIMTenUhIa5vBYwOIl5P_ZPQwJjXFo37FWYqg9jHEdwpMf2BJArEunLX3v_IaoPjvAHPYMlj6eyUzXG4K5VBiNn2ihw/s320/PhilAIAN-240111.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijqRp25U5k6VSlYMAD1nDJ31bZFyEOKh3oqjvVa_giDxXjy78ZAg0MrYH2pwxKag71X8QrITd8hO1V-bJkEK0d-8K0NTEC_q2Iwxh1HeNdm5BJyui__qb_y2fL7oleuBWAV1zBFH427nUeTbhQlTZb4GBuOMCpWD0XCrI8DhE69AeH7IxCVqt8Zw/s636/PhilAsian-240111.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="517" data-original-width="636" height="260" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijqRp25U5k6VSlYMAD1nDJ31bZFyEOKh3oqjvVa_giDxXjy78ZAg0MrYH2pwxKag71X8QrITd8hO1V-bJkEK0d-8K0NTEC_q2Iwxh1HeNdm5BJyui__qb_y2fL7oleuBWAV1zBFH427nUeTbhQlTZb4GBuOMCpWD0XCrI8DhE69AeH7IxCVqt8Zw/s320/PhilAsian-240111.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidK2tDgoaFnT3OLeqLMDisLbgQr93dyDBlQyZ_uLcSXg1pyW6n2QLRn_OWbgBFo5vXspXxdskfD1m3p3p7ho9i25r2J-dRVthP-6Py_Lgf38rxiEelKUu2UR542iTd6a3PIf12CS45jDW5Fd_77qKRXUGthyur06o8tYnjOK6cdkHt7OG3YoSOEQ/s636/PhilBlack-240111.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="517" data-original-width="636" height="260" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidK2tDgoaFnT3OLeqLMDisLbgQr93dyDBlQyZ_uLcSXg1pyW6n2QLRn_OWbgBFo5vXspXxdskfD1m3p3p7ho9i25r2J-dRVthP-6Py_Lgf38rxiEelKUu2UR542iTd6a3PIf12CS45jDW5Fd_77qKRXUGthyur06o8tYnjOK6cdkHt7OG3YoSOEQ/s320/PhilBlack-240111.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSVPQXvNzBUWEuFgX7WPetm0f5MKZNjdGuYWXagny9Aq3SqTeDQzYRj7_1jz7RH32vt8_83IALzZxnf8VfgBkuLHyd5eB9NuANbaxsoLlr_MN74jqB4gOLCCj8NXYsCCNcHXi_cPa-Q6nvnil6ORFBIWIFooZRzeA6ItumFJczwnTGPcB9iFCRdQ/s636/PhilPacific-240111.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="317" data-original-width="636" height="159" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSVPQXvNzBUWEuFgX7WPetm0f5MKZNjdGuYWXagny9Aq3SqTeDQzYRj7_1jz7RH32vt8_83IALzZxnf8VfgBkuLHyd5eB9NuANbaxsoLlr_MN74jqB4gOLCCj8NXYsCCNcHXi_cPa-Q6nvnil6ORFBIWIFooZRzeA6ItumFJczwnTGPcB9iFCRdQ/s320/PhilPacific-240111.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWFMVL2Nwyu3Y_scSutBBH2sPQfmy6hG118Cy5Sj-bjLagDwEoB-E0Ruc-6F4ev2lWwDDZ9g5OmoGivSWlDcZr_qteUZehScttekH3nJzHp0bx1TiF2LzB1N3FSnlfKCnzp8tRoup7Y_q2Uu7lH4yClocEsNcIQdMh11jesPxQoomFIcXjBHHZIQ/s636/PhilWhite-240111.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="517" data-original-width="636" height="260" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWFMVL2Nwyu3Y_scSutBBH2sPQfmy6hG118Cy5Sj-bjLagDwEoB-E0Ruc-6F4ev2lWwDDZ9g5OmoGivSWlDcZr_qteUZehScttekH3nJzHp0bx1TiF2LzB1N3FSnlfKCnzp8tRoup7Y_q2Uu7lH4yClocEsNcIQdMh11jesPxQoomFIcXjBHHZIQ/s320/PhilWhite-240111.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCHF09jpxbYKIeV_t925AJZTc0O-8bKE07bSJKY-MEzQHBG0JGhv5jlXFaapkSrey28ATSpmzvYPDLVRWiwlCYgUS9DwWqHZqo3LUEO3bawWjf7L4oN7DeI8OmNedjyWMiZH72b9112AqTlND-fCpigcU8oZ2zvgyd4atrAsQO7C5dqd8zkUuBEw/s636/PhilMulti-240111.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="517" data-original-width="636" height="260" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCHF09jpxbYKIeV_t925AJZTc0O-8bKE07bSJKY-MEzQHBG0JGhv5jlXFaapkSrey28ATSpmzvYPDLVRWiwlCYgUS9DwWqHZqo3LUEO3bawWjf7L4oN7DeI8OmNedjyWMiZH72b9112AqTlND-fCpigcU8oZ2zvgyd4atrAsQO7C5dqd8zkUuBEw/s320/PhilMulti-240111.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">I interpret the sharp increase in multi-racial students as reflecting reporting issues and an increasing willingness of students to identify as multi-racial.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">It's also worth noting that although philosophy majors are approximately as likely to be 
Hispanic/Latino as graduates overall, Hispanic/Latino students are underrepresented among bachelor's degree recipients relative to the <a href="https://nces.ed.gov/programs/digest/d21/tables/dt21_101.20.asp">U.S. population age 18-24</a> (17% vs. 23%). Non-Hispanic American Indian / Alaska Native students are also underrepresented among overall graduates (0.46% vs. 0.84% of the population age 18-24), and maybe particularly so in philosophy (0.37% vs. 0.46% in the most recent year).</div>Eric Schwitzgebelhttp://www.blogger.com/profile/16274774112862434865noreply@blogger.com2tag:blogger.com,1999:blog-26951738.post-6165131219111886762024-01-05T11:25:00.000-08:002024-01-05T11:36:32.885-08:00Credence-Weighted Robot Rights?You're a firefighter in the year 2050 or 2100. You can rescue either one human, who is definitely conscious, or two futuristic robots, who might or might not be conscious. What do you do?<div><p>
</p><div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHyhA0X9PH_krkIra5fPpIaSy2w_0tSjoF_kiFWpzHjoGx0KyvmO2XhSpeWm87eylCAOMuKvkJ6TnXUHbc2l_SFys8OdzuGaaTIrAY9mFrpJBidjcRROUGA8ctF2x_3LLV86ZafFq1ofie8khK7eB1rVFF9TLsDW3YImD9ebNEHmqHZ_gA7Kdryw/s1933/RobotFire.jpg" style="display: block; padding: 1em 0px; text-align: center;"><img alt="" border="0" data-original-height="1326" data-original-width="1933" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHyhA0X9PH_krkIra5fPpIaSy2w_0tSjoF_kiFWpzHjoGx0KyvmO2XhSpeWm87eylCAOMuKvkJ6TnXUHbc2l_SFys8OdzuGaaTIrAY9mFrpJBidjcRROUGA8ctF2x_3LLV86ZafFq1ofie8khK7eB1rVFF9TLsDW3YImD9ebNEHmqHZ_gA7Kdryw/s400/RobotFire.jpg" width="400" /></a></div>
[Illustration by <a href="https://www.artstation.com/ndemers">Nicolas Demers</a>, from my newest book, <i><a href="https://press.princeton.edu/books/hardcover/9780691215679/the-weirdness-of-the-world">The Weirdness of the World</a></i>, to be released Jan 16 and available for pre-order now.]<p>
Suppose you think there's a 75% chance that the robots have conscious lives as rich as those of human beings (or, alternatively, that they have whatever else it takes to have "<a href="https://plato.stanford.edu/entries/grounds-moral-status/#WhatFullMoraStatFMS">full moral status</a>" equivalent to that of a human). And suppose you think there's a 25% chance that the robots are the moral equivalent of toasters, that is, mere empty machines with no significant capacity for conscious thought or feeling.</p><p>
Arguably, if you save the robots and let the human die, you maximize the total expected number of humanlike lives saved (.75 * 2 + .25 * 0 = 1.5 expected lives saved, vs. one life for sure if you save the human). Decision-theoretically, it looks similar to choosing an action with a 75% chance of saving two people over an action that will save one person for sure. Applying similar reasoning, if the credences are flipped (25% chance the robots are conscious, 75% they're not), you save the human.</p><p>
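For concreteness, here's a minimal code sketch of that expected-value comparison, using only the toy numbers from this example and assuming each conscious robot counts as one full humanlike life:</p>
<pre>
# Toy expected-value comparison for the firefighter's choice.

def expected_lives_saved(credence_conscious, num_robots):
    """Expected humanlike lives saved by rescuing the robots."""
    return credence_conscious * num_robots

print(expected_lives_saved(0.75, 2))  # 1.5 beats 1.0: save the two robots
print(expected_lives_saved(0.25, 2))  # 0.5 falls short of 1.0: save the human
</pre><p>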
Generalizing: <b>Whatever concern you have for an ordinary human, or whatever you would give on their behalf, multiply that concern by your credence or degree of belief that the robot has human-like consciousness</b> (or alternatively your credence that it has whatever features justify moral consideration similar to that of a human). If you'd give $5 to a human beggar, give $3 to a robot beggar in the same situation if you think it's 60% likely the robot has human-like consciousness. If an oversubscribed local elementary school has a lottery for admission and resident human children each get a 50% chance of admission, resident robot children of disputable consciousness would get a proportionately reduced chance.</p><p>
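In code, the rule is bare multiplication. A sketch, using the dollar figure and lottery odds from the examples above:</p>
<pre>
def credence_weighted(concern_for_human, credence_conscious):
    """Scale whatever you would give a human by your credence that
    the robot has human-like consciousness."""
    return concern_for_human * credence_conscious

print(credence_weighted(5.00, 0.60))  # $3.00 for the robot beggar
print(credence_weighted(0.50, 0.60))  # a 0.30 admission chance for the robot child
</pre><p>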
Call this approach <i>credence-weighted robot rights.</i></p><p>
I see at least three problems with credence-weighted robot rights:</p><p>
(1.) <b>Credence-weighted robot rights entail that robots will inevitably be treated as inferior, until we are 100% confident that they are our equals.</b></p><p>
Of course it's reasonable to treat robots as inferior to humans <i>now</i>. We should save the person, not the robot, in the fire. And of course if we ever create robots who are beyond all reasonable doubt our equals, we should <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/AIRights.htm">treat them as such</a>. I'm hypothesizing instead a tricky in-between case -- a period during which it's reasonably disputable whether or not our machines deserve full moral status as our equals, a period during which liberals about robot consciousness and robot rights regard robots as our fully-conscious moral peers, while conservatives about robot consciousness and robot rights regard them as mindless machines to be deployed and discarded however we wish.</p><p>
If we choose a 75% chance of rescuing two people over a sure-fire rescue of one person, we are not treating the unrescued person as inferior. Each person's life is worth just as much in our calculus as that of the others. But if we rescue five humans rather than six robots we regard as 80% likely to be conscious, we <i>are</i> treating the robots as inferior -- even though, by our own admission, they are probably not. It seems unfortunate and less than ethically ideal to always treat as inferiors entities we regard as <i>probably</i> our equals.</p><p>
(2.) <b>Credence-weighted robot rights would engender chaos if people have highly variable opinions.</b> If individual firefighters make the choices based on their personal opinions, then one firefighter might save the two robots while another saves the one human, and each might find the other's decision abhorrent. Stationwide policies might be adopted, but any one policy would be controversial, and robots might face very different treatment in different regions. If individual judges or police were to apply the law differently based on their different individual credences, or on the variable and hard-to-detect credences of those accused of offences against robots, that would be unfair both to the robots and to the offenders, since the penalty would vary depending on who happened to be the officer or judge or whether they travel in social circles with relatively high vs. low opinions of robot consciousness. So presumably there would have to be some regularization by jurisdiction. But still, different jurisdictions might have very different laws concerning the demolition or neglectful destruction of a robot, some treating it as 80% of a homicide, others treating it as a misdemeanor -- and if robot technologies are variable and changing, the law, and people's understanding of the law, might struggle to keep up and to distinguish serious offences from minor ones.</p><p>
(3.) <b>Chaos might also ensue from the likely cognitive and bodily diversity of robots.</b> While human cognitive and bodily variability typically keeps within familiar bounds, with familiar patterns of ability and disability, robots might differ radically. Some might be designed with conscious sensory experiences but no capacity for pain or pleasure. Others might experience intense pain or pleasure but lack cognitive sophistication. Others might have no stable goals or model their goals wholly on instructions from a human to whom they are gladly, perhaps <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/AIRights2.htm">excessively subservient</a>, <a href="https://schwitzsplinters.blogspot.com/2016/02/cheerfully-suicidal-ai-slaves.html">insufficiently valuing their own life</a>. Still others might be able to <a href="https://schwitzsplinters.blogspot.com/2014/03/our-moral-duties-to-monsters.html">merge and divide at will</a>, or back themselves up, or radically reformat themselves, raising questions about the boundaries of the individual and what constitutes death. Some might exist entirely as computational entities in <a href="https://www.nature.com/articles/503562a">virtual</a> <a href="https://clarkesworldmagazine.com/schwitzgebel_07_16/">paradises</a> with little practical connection to our world. All this raises the question of what features are necessary for, and what constitutes, "equal" rights for robots, and whether thresholds of equality even make sense. Our understanding might require a controversial multidimensional scalar appreciation of the grounds of moral status.</p><p>
<b>Other approaches have their own problems.</b> A <a href="https://www.wellbeingintlstudiesrepository.org/cgi/viewcontent.cgi?article=1200&context=animsent">precautionary principle</a> that grants fully human equal rights as soon as it's reasonable to think that robots <i>might</i> deserve them risks sacrificing substantial human interests for machines that very likely don't have interests worth the sacrifice (letting a human die, for example, to save a machine that's only 5% likely to be conscious), and it perhaps makes the question of the grounds of moral status in the face of future robots' cognitive diversity even more troubling and urgent. Requiring <i>proof of consciousness beyond reasonable doubt</i> aggravates the issue of treating robots as subhuman even if we're pretty confident they deserve equal treatment. Treating rights as a <a href="https://mitpress.mit.edu/9780262546157/person-thing-robot/">negotiated social construction</a> risks denying rights to entities that really do deserve rights, based on their intrinsic conscious capacities, if we collectively choose as a matter of social policy not to grant those rights.</p><p>
<b>The cleanest solution would be what Mara Garza and I have called the <a href="https://schwitzsplinters.blogspot.com/2023/01/the-design-policy-of-excluded-middle.html">Design Policy of the Excluded Middle</a>: Don't create AI systems whose moral status is dubious and confusing.</b> Either create only AI systems that we recognize as property without human-like moral status and rights, and treat them accordingly; or go all the way to creating AI systems with a full suite of features that enable consensus about their high moral status, and then give them the rights they deserve. It's the confusing cases in the middle that create trouble.</p><p>
If AI technology continues to advance, however, I very much doubt that it will do so in accord with the Design Policy of the Excluded Middle -- and thus we will be tossed into moral confusion about how to treat our AI systems, with no good means of handling that confusion.</p><p>
-------------------------------------------------------------</p><p>
Related:</p><p>
<a href="https://press.princeton.edu/books/hardcover/9780691215679/the-weirdness-of-the-world">The Weirdness of the World</a>, Chapter 11 (forthcoming), Princeton University Press.</p><p>
<a href="https://journal.robonomics.science/index.php/rj/article/view/32">The Full Rights Dilemma for AI Systems of Debatable Moral Personhood</a>, <i>Robonomics, 4</i> (2023), #32.</p><p>
<a href="https://schwitzsplinters.blogspot.com/2015/02/how-robots-and-monsters-might-break.html">How Robots and Monsters Might Break Human Moral Systems</a> (Feb 3, 2015)</p><p>
<a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/AIRights2.htm">Designing AI with Rights, Consciousness, Self-Respect, and Freedom</a> (2020) (with Mara Garza), in S. Matthew Liao, ed., <i>The Ethics of Artificial Intelligence,</i> Oxford University Press.
</p></div>Eric Schwitzgebelhttp://www.blogger.com/profile/16274774112862434865noreply@blogger.com17tag:blogger.com,1999:blog-26951738.post-62869812642443055732024-01-01T00:00:00.000-08:002024-01-01T00:00:00.140-08:00Writings of 2023Each New Year's Day, I post a retrospect of the past year's writings. Here are the retrospects of <a href="http://schwitzsplinters.blogspot.com/2013/01/essays-of-2012.html">2012</a>, <a href="http://schwitzsplinters.blogspot.com/2014/01/what-i-wrote-in-2013.html">2013</a>, <a href="http://schwitzsplinters.blogspot.com/2015/01/writings-of-2014.html">2014</a>, <a href="http://schwitzsplinters.blogspot.com/2016/01/writings-of-2015.html">2015</a>, <a href="http://schwitzsplinters.blogspot.com/2017/01/writings-of-2016-and-why-i-love.html">2016</a>, <a href="http://schwitzsplinters.blogspot.com/2018/01/writings-of-2017.html">2017</a>, <a href="http://schwitzsplinters.blogspot.com/2019/01/writings-of-2019.html">2018</a>, <a href="http://schwitzsplinters.blogspot.com/2020/01/writings-of-2019.html">2019</a>, <a href="http://schwitzsplinters.blogspot.com/2021/01/writings-of-2020.html">2020</a>, <a href="https://schwitzsplinters.blogspot.com/2022/01/writings-of-2021.html">2021</a>, and
<a href="https://schwitzsplinters.blogspot.com/2023/01/writings-of-2022.html">2022</a>.</div><p>
The biggest project for the past few years has been my new book <i><a href="https://press.princeton.edu/books/hardcover/9780691215679/the-weirdness-of-the-world">The Weirdness of the World</a></i>, available for pre-order and scheduled for U.S. release on January 16. This book pulls together ideas I've been publishing since 2012 concerning the failure of common sense, philosophy, and empirical science to explain consciousness and the fundamental structure of the cosmos, and the corresponding bizarreness and dubiety of all general theories about such matters.</p><p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIZVSxUH5iCYc3aS33mQSLyX2zQF-V_j95K0pGeTuTpSAIy2NvxrNj7BLWrgN-yoe4g6ECk5LUeaMaL58VEM5FcAO-VW_gvLKZPPxbE0cZ20Tr0vLj1lsG7szeYtGDZH8jTD5zWdSUXvbI_buBX6xGez3I23JTPDCgkwg843m_l9f8nNdFT_r2EQ/s1024/DALL%C2%B7E%202023-12-29%2011.39.50%20-%20The%20word%20%272023%27%20in%20bright,%20vibrant%20colors%20with%20celebratory%20sparkles%20surrounding%20it.%20The%20numbers%20are%20depicted%20as%20if%20they%20are%20emerging%20from%20an%20open%20book.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="320" data-original-height="1024" data-original-width="1024" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIZVSxUH5iCYc3aS33mQSLyX2zQF-V_j95K0pGeTuTpSAIy2NvxrNj7BLWrgN-yoe4g6ECk5LUeaMaL58VEM5FcAO-VW_gvLKZPPxbE0cZ20Tr0vLj1lsG7szeYtGDZH8jTD5zWdSUXvbI_buBX6xGez3I23JTPDCgkwg843m_l9f8nNdFT_r2EQ/s320/DALL%C2%B7E%202023-12-29%2011.39.50%20-%20The%20word%20%272023%27%20in%20bright,%20vibrant%20colors%20with%20celebratory%20sparkles%20surrounding%20it.%20The%20numbers%20are%20depicted%20as%20if%20they%20are%20emerging%20from%20an%20open%20book.jpg"/></a></div>
-----------------------------------</p><p><b>Books forthcoming:</b><p>
</p><i><a href="http://faculty.ucr.edu/~eschwitz/SchwitzAbs/Weirdness.htm">The Weirdness of the World</a></i> (under contract with Princeton University Press).<br />
<ul>See description above.</ul>
<b>Books under contract / in progress:</b><p>
As co-editor with Jonathan Jong, <i>The Nature of Belief</i>, Oxford University Press.
<ul>Collects 15 new essays on the topic, by Sara Aronowitz, Tim Crane and Katalin Farkas, Carolina Flores, M.B. Ganapini, David Hunter,
David King and Aaron Zimmerman, Angela Mendelovici, Joshua Mugg, Bence Nanay, Nic Porot and Eric Mandelbaum, Eric Schwitzgebel, Keshav Singh, Declan Smithies, Ema Sullivan-Bissett, and Neil Van Leeuwen.</ul>
As co-editor with Helen De Cruz and Rich Horton, a yet-to-be-titled anthology with MIT Press containing great classics of philosophical SF.<p>
<br />
<b>Full-length non-fiction essays, published 2023:</b><p>
Revised and updated: "<a href="https://plato.stanford.edu/entries/belief/">Belief</a>", <i>Stanford Encyclopedia of Philosophy</i>.
<ul>A broad-ranging review of the main philosophical approaches to belief.</ul>
"<a href="http://faculty.ucr.edu/~eschwitz/SchwitzAbs/BorderlineConsciousness.htm">Borderline consciousness: When it's neither determinately true nor determinately false that consciousness is present</a>", <i>Philosophical Studies, 180</i>, 3415–3439.
<ul>Being conscious is not an on-or-off phenomenon but has gray zones. Our failure to <i>conceive</i>, in a certain way, of such in-between cases is no evidence against their existence.</ul>
"<a href="http://faculty.ucr.edu/~eschwitz/SchwitzAbs/GPT3Dennett.htm">Creating a large language model of a philosopher</a>" (with David Schwitzgebel and Anna Strasser), <i>Mind and Language</i> [online article mila.12466, print forthcoming].
<ul>We trained GPT-3 on the corpus of Daniel Dennett, and even Dennett experts had trouble distinguishing its answers to philosophical questions from Dennett's actual answers.</ul>
"<a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/FullRightsDilemma">The full rights dilemma for AI systems of debatable moral personhood</a>", <i>Robonomics, 4</i> (32).
<ul>We might soon create AI systems where it's a legitimately open question whether they have humanlike consciousness and deserve humanlike rights. There are huge moral risks however we respond to such cases.</ul>
"<a href="https://link.springer.com/article/10.1007/s42761-022-00143-4">What is unique about kindness? Exploring the proximal experience of prosocial acts relative to other positive behaviors</a>" (with Annie Regan, Seth Margolis, Daniel J. Ozer, and Sonja Lyubomirsky), <i>Affective Science, 4</i>, 92-100.
<ul>Participants assigned to do kind acts for others reported a greater sense of competence, self-confidence, and meaning while engaging in those acts across the intervention period.</ul>
<p><br />
<b>Full-length non-fiction essays, finished and forthcoming:</b><p>
"<a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/DispsYayRepsBoo.htm">Dispositionalism, yay! Representationalism, boo!</a>" in J. Jong and E. Schwitzgebel, eds., <i>The Nature of Belief</i>, Oxford.
<ul>Presents three problems for hard-core representationalism about belief: The Problem of Causal Specification, the Problem of Tacit Belief, and the Problem of Indiscrete Belief.</ul>
"<a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/Infinitude.htm">Repetition and value in an infinite universe</a>", in S. Hetherington, ed., <i>Extreme Philosophy,</i> Routledge.
<ul>Standard decision theory fails when confronted with the possibility of infinitely many consequences of our actions. Still, it's reasonable to prefer that the universe is infinite rather than finite.</ul>
"<a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/EthicsALife.htm">The ethics of life as it could be: Do we have moral obligations to artificial life?</a>" (with Olaf Witkowski), <i>Artificial Life</i>.
<ul>Creators of artificial life should bear in mind the conditions under which artificial systems might come to be genuine targets of moral concern.</ul>
<p><br />
<b>Full-length non-fiction essays, in draft and circulating:</b><p>
"<a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/Moralometers.htm">The prospects and challenges of measuring morality</a>" (with Jessie Sun).
<ul>Could we create a "moralometer" -- that is, a valid measure of a person's general morality? The conceptual and methodological challenges would be formidable.</ul>
"<a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/WashoutLongtermism.htm">The washout argument against longtermism</a>" (commentary on William MacAskill's book <i>What We Owe the Future</i>).
<ul>We cannot be justified in believing that any actions currently available to us will have a non-negligible positive influence on the billion-plus-year future.</ul>
"<a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/Simulation.htm">Let's hope we're not living in a simulation</a>" (commentary on David Chalmers's book <i>Reality+</i>).
<ul>If we are living in a simulation, there's a good chance it's small or brief and we are radically mistaken about the past, future, and/or distant things.</ul>
"<a href="https://arxiv.org/abs/2308.08708">Consciousness in Artificial Intelligence: Insights from the science of consciousness</a>" (one of 19 authors, with Patrick Butlin and Robert Long).
<ul>Some mainstream scientific theories of consciousness imply that we might be on the verge of creating AI systems that genuinely have conscious experiences.</ul>
"<a href="https://osf.io/preprints/osf/2s8w5">The necessity of construct and external validity for generalized causal claims: A critical review of the literature on quantitative causal inference</a>" (with Kevin Esterling and David Brady).
<ul>We develop a formal model of <i>causal specification</i> which clarifies the necessity of construct validity and external validity for deductive causal inference.</ul>
"<a href="http://faculty.ucr.edu/~eschwitz/SchwitzAbs/InflateExplode.htm">Inflate and explode</a>".
<ul>Illusionists and eliminativists about phenomenal consciousness illegitimately build objectionable presuppositions into the notion of "phenomenal consciousness" and defeat only this artificially inflated notion. (I wrote this a few years ago and I'm undecided about whether to trunk this one or revise it.)</ul>
<p><br />
<b>Selected shorter non-fiction:</b>
<p>
"<a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/Iroh.htm">Uncle Iroh, from fool to sage -- or sage all along?</a> (with David Schwitzgebel), in J. De Smedt and H. De Cruz, eds., <i>Avatar: The Last Airbender and Philosophy </i> (2023), Wiley Blackwell.
<ul>Uncle Iroh is a Zhuangzian sage, and ordinary viewers immediately glimpse the sageliness behind his veneer of foolishness.</ul>
"<a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/Dehumanizing.htm">Dehumanizing the cognitively disabled: Commentary on Smith's <i>Making Monsters</i></a>" (with Amelie Green), <i>Analysis Reviews</i> (forthcoming).
<ul>We describe Amelie Green's experience witnessing the dehumanization of the cognitively disabled in care homes, comparing it with Smith's treatment of racial dehumanization.</ul>
"<a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/GroupMindsIntrospection.htm">Introspection in group minds, disunities of consciousness, and indiscrete persons</a>" (with Sophie R. Nelson), <i>Journal of Consciousness Studies, 30</i> (2023), #9-10, 288-303.
<ul>We describe a hypothetical AI system that defies the usual sharp lines between cognitive systems, conscious experiencers, and persons.</ul>
"<a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/QuasiSociality.htm">Quasi-sociality: Towards asymmetric joint actions with artificial systems</a>" (with Anna Strasser), in A. Strasser, ed., <i>How to Live with Smart Machines?</i> (forthcoming), Xenemoi.
<ul>AI systems might soon occupy the gray area between being asocial tools and being real, but junior, social partners.</ul>
"<a href="https://www.cell.com/patterns/pdf/S2666-3899(23)00187-3.pdf">AI systems must not confuse users about their sentience or moral status</a>", <i>Patterns, 4</i> (2023), #8, 100818.
<ul>AI systems should be designed to either be clearly nonsentient tools or (if it's ever possible) clearly sentient entities who deserve appropriate care and protection.</ul>
"<a href="https://ebooks.iospress.nl/volume/social-robots-in-social-institutions-proceedings-of-robophilosophy-2022">How far can we get in creating a digital replica of a philosopher?</a>" (with Anna Strasser and Matt Crosby), in R. Hakli, P. Mäkelä, J. Seibt, eds., <i>Social Robots in Social Institutions: Proceedings of Robophilosophy 2022. Series Frontiers of AI and Its Applications, vol. 366</i> (2023), IOS Press.
<ul>Sketches out the method later employed in "<a href="http://faculty.ucr.edu/~eschwitz/SchwitzAbs/GPT3Dennett.htm">Creating a large language model of a philosopher</a>".</ul>
<p>"<a href="https://latecomermag.com/article/dont-make-moral-calculations-based-on-the-far-future/">Don't make moral calculations based on the far future</a>", <i>The Latecomer</i> (Dec 19, 2023).
<ul>An epistemic critique of "longtermism".</ul>
<p>"<a href="https://nautil.us/could-the-universe-be-finite-466593/">Could the Universe Be Finite?</a>" (with Jacob Barandes), <i>Nautilus</i> (Dec 15, 2023).
<ul>Well, probably not.</ul>
<p>"<a href="https://www.latimes.com/opinion/story/2023-03-05/chatgpt-ai-feelings-consciousness-rights">Is it time to start considering personhood rights for AI chatbots?</a>" (with Henry Shevlin), <i>Los Angeles Times</i> (Mar 5, 2023).
<ul>Reflections on the hazards of confusion about the moral status of AI systems.</ul>
<p><br />
<b>Science fiction stories:</b>
<p>"<a href="https://clarkesworldmagazine.com/schwitzgebel_02_23/">Larva, pupa, imago</a>", <i>Clarkesworld</i>, issue 197 (2023).
<ul>The life-cycle and worldview of a cognitively enhanced future butterfly.</ul>
<p><br />
<b>Some favorite blog posts:</b>
<p>"<a href="https://schwitzsplinters.blogspot.com/2023/04/the-black-hole-objection-to-longtermism.html">The black hole objection to longtermism and consequentialism</a>" (Apr 13).
<p>"<a href="https://schwitzsplinters.blogspot.com/2023/04/there-are-no-chairs-says-illusionist.html">'There are no chairs' says the illusionist, sitting in one</a>" (Apr 24).
<p>"<a href="https://schwitzsplinters.blogspot.com/2023/05/we-shouldnt-box-superintelligent-ais.html">We shouldn't 'box' superintelligent AIs</a>" (May 21).
<p>"<a href="https://schwitzsplinters.blogspot.com/2023/06/the-fundamental-argument-for.html">The fundamental argument for dispositionalism about belief</a>" (Jun 7).
<p>"<a href="https://schwitzsplinters.blogspot.com/2023/07/the-summer-illusion.html">The Summer Illusion</a>" (Jul 10).
<p>"<a href="https://schwitzsplinters.blogspot.com/2023/09/one-reason-to-walk-walk-to-give.html">One reason to walk the walk: To give specific content to your assertions</a>" (Sep 8).
<p>"<a href="https://schwitzsplinters.blogspot.com/2023/11/percent-of-us-philosophy-phd-recipients.html">Percent of U.S. philosophy PhD recipients who are women: A 50-year perspective</a>" (Nov 3).
<p> <br />
<b>Happy New Year!</b>
<p>
<br />
Eric Schwitzgebelhttp://www.blogger.com/profile/16274774112862434865noreply@blogger.com0tag:blogger.com,1999:blog-26951738.post-89261441273157918682023-12-29T09:57:00.000-08:002023-12-29T10:08:34.482-08:00Normativism about Swimming Holes, Anger, and Beliefs<p>Among philosophers studying belief, <a href="https://www.jstor.org/stable/24671389">normativism</a> is an increasingly popular position. According to normativism, beliefs are necessarily, as part of their essential nature, subject to certain evaluative standards. In particular, beliefs are necessarily defective in a certain way if they are false or unresponsive to counterevidence.</p><p>In this way, believing is unlike supposing or imagining. If I merely <i>suppose </i>that P is true, nothing need have gone wrong if P is false. The supposition is in no way defective. Similarly, if I <i>imagine </i>Q and then learn that evidence supports not-Q, nothing need have gone wrong if I continue imagining Q. In contrast, if I believe P, the belief is in a certain way defective ("incorrect") if it is false and I have failed as a believer (I've been irrational) if I don't reduce my confidence in P in the face of compelling counterevidence.</p><p>But what is a normative essence? Several different things could be meant, some plausible but tepid, others bold but less plausible.</p><p>Let's start at the tepid end. <i>Swimming hole</i> is, I think, also an essentially normative concept. If I decide to call a body of water a swimming hole, I'm committed to evaluating it in certain ways -- specifically, as a locale for swimming. If the water is dirty or pollution-tainted, or if it has slime or alligators, it's a worse swimming hole. If it's clean, beautiful, safe, sufficiently deep, and easy on your bare feet, it's a better swimming hole.</p><p>But of course bodies of water are what they are independently of their labeling as swimming holes. The better-or-worse normativity is entirely a function of externally applied human concepts and human uses. Once I <i>think </i>of a spot as a swimming hole, I am committed to evaluating it in a certain way, but the body of water is not inherently excellent or defective in virtue of its safety or danger. The normativity derives from the application of the concept or from the practices of swimming-hole users. 
Nonetheless, there's a sense in which it really is part of the essence of being a swimming hole that being unsafe is a defect.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWjFBCZNG6L9w-_xSYaGtsECQ7M-h1zbf7tINpn9Bu1cbDFWuk_5Q0bivc8MB3MgGxpXm8GgQM8RtNTzrQ6yO6oYkSaaS-_9DGqFace88XO9SMrik-5giFH0m1AjgPVIkOSW4GE4K2RN4sflSBhHaZR5AJlqnKPjiTKi6wcND2XCh0wn9wg7hNXw/s1015/eschwitzgebel_unsafe_swimming_hole_cartoon_alligator_rocks_slim_90bd1247-9bea-4eee-af56-5f2d9f374636.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1013" data-original-width="1015" height="319" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWjFBCZNG6L9w-_xSYaGtsECQ7M-h1zbf7tINpn9Bu1cbDFWuk_5Q0bivc8MB3MgGxpXm8GgQM8RtNTzrQ6yO6oYkSaaS-_9DGqFace88XO9SMrik-5giFH0m1AjgPVIkOSW4GE4K2RN4sflSBhHaZR5AJlqnKPjiTKi6wcND2XCh0wn9wg7hNXw/w320-h319/eschwitzgebel_unsafe_swimming_hole_cartoon_alligator_rocks_slim_90bd1247-9bea-4eee-af56-5f2d9f374636.jpg" width="320" /></a></div>[Midjourney rendition of an unsafe swimming hole with slime, rocks, and an alligator]<p>If belief-normativity is like swimming-hole-normativity, then the following is true: Once we <i>label</i> a mental state as a belief, we commit to evaluating it in certain ways -- for example as "incorrect" if untrue and "irrational" if held in the teeth of counterevidence. But if this is all there is to the normativity of belief, then the mental state in question might not be in any way <i>intrinsically </i>defective. Rather, we belief-ascribers are treating the state as if it should play a certain role; and we set ourselves up for disappointment if it doesn't play that role.</p><p>Suppose a member of a perennially losing sports team says, on day one of the new season, "This year, we're going to make the playoffs!" Swimming-hole normativity suggests that we interpreters have a choice. We could treat this exclamation as the expression of a belief, in which case it is defective because unjustified by the evidence and (as future defeats will confirm) false. Or we could treat the exclamation as an expression of optimism and team spirit, in which case it might not be in any way defective. There need be no fact of the matter, independent of our labeling, concerning its defectiveness or not.</p><p>Advocates of normativism about belief typically want to make a bolder claim than that. So let's move toward a bolder view of normativity.</p><p>Consider hearts. Hearts are defective if they don't pump blood, in a less concept-dependent way than swimming holes are defective if they are unsafe. That thing <i>really is a heart</i>, independent of any human labeling, and as such it has a function, independent of any human labeling, which it can satisfy or fail to satisfy.</p><p>Might beliefs be inherently normative in <i>that </i>way, the heart-like way, rather than just the swimming-hole way? If I believe <i>this year we'll make the playoffs,</i> is this a state of mind with an essential function in the same way that the heart is an organ with an essential function?</p><p>I am a <a href="https://plato.stanford.edu/entries/belief/#Disp">dispositionalist</a> about belief. To believe some proposition P is, on my view, just to be disposed to act and react in ways that are characteristic of a P-believer. 
To believe <i>this year we'll make the playoffs,</i> for example, is to be disposed to say so, with a feeling of sincerity, to be willing to wager on it, to feel surprise and disappointment with each mounting loss, to refuse to make other plans during playoff season, and so on. It's not clear that a cluster of dispositions is a thing with a function in the same way that a heart is a thing with a function.</p><p>Now maybe (though I suspect this is simplistic) some mechanism in us functions to create dispositional belief states in the face of evidence: It takes evidence that P as an input and then produces in us dispositional tendencies to act and react as if P is true. Maybe this mechanism malfunctions if it generates belief states contrary to the evidence, and maybe this mechanism has been evolutionarily selected because it produces states that cause us to act in ways that track the truth. But it doesn't follow from this, I think, that the states that are produced are <i>inherently </i>defective if they arise contrary to the evidence or don't track the truth.</p><p>Compare anger: Maybe there's a system in us that functions to create anger when there's wrongdoing against us or those close to us, and maybe this mechanism has been selected because it produces states that prepare us to fight. It doesn't seem to follow that the state is inherently defective if produced in some other way (e.g., by reading a book) or if one isn't prepared to fight (maybe one is a pacifist).</p><p>I conjecture that we can get all the normativity we want from belief by a combination of swimming-hole type normativity (once we conceptualize an attitude as a belief, we're committed to saying it's incorrect if false) and normativity of function in our belief-producing mechanisms, without treating belief states themselves as having normative essences.</p>Eric Schwitzgebelhttp://www.blogger.com/profile/16274774112862434865noreply@blogger.com10tag:blogger.com,1999:blog-26951738.post-58800742640258175912023-12-20T13:58:00.000-08:002023-12-20T14:18:39.635-08:00The Washout Argument Against LongtermismI have a new essay in draft, "<a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/WashoutLongtermism.htm">The Washout Argument Against Longtermism</a>". As always, thoughts, comments, and objections welcome, either as comments on this post or by email to my academic address.<p>
<i>Abstract:</i><p>
We cannot be justified in believing that any actions currently available to us will have a non-negligible positive influence on the billion-plus-year future. I offer three arguments for this thesis.<p>
According to the <i>Infinite Washout Argument,</i> standard decision-theoretic calculation schemes fail if there is no temporal discounting of the consequences we are willing to consider. Given the non-zero chance that the effects of your actions will produce infinitely many unpredictable bad and good effects, any finite effects will be washed out in expectation by those infinitudes.<p>
According to the <i>Cluelessness Argument,</i> we cannot justifiably guess what actions, among those currently available to us, are relatively more or less likely to have positive effects after a billion years. We cannot be justified, for example, in thinking that nuclear war or human extinction would be more likely to have bad than good consequences in a billion years.<p>
According to the <i>Negligibility Argument</i>, even if we could justifiably guess that some particular action is likelier to have good than bad consequences in a billion years, the odds of good consequences would be negligibly tiny due to the compounding of probabilities over time.<p>
For more details see <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/WashoutLongtermism.htm">the full-length draft</a>.<p>
A <a href="https://latecomermag.com/article/dont-make-moral-calculations-based-on-the-far-future/">brief, non-technical version of these arguments</a> is also now available at the longtermist online magazine <a href="https://latecomermag.com/"><i>The Latecomer</i></a>.<p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3shfJ9w8clfG6dvIjSJSz5piwkUlVR5E6oeVadRH-wMoyuvF8r1PwM2QFudgD35_JvjyHCm9ta0NEfI7hxtt6jDjF6AafsFtu24pAfMkhSBAaLXmSXqcefaD6kGO6tGemUHKNCwae7WeWIptI2SI3jKIksHO00TmXFodQ-WUayZBWOv9EWXMKkw/s1021/eschwitz_several_happy_dolphins_playing.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1021" data-original-width="1019" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3shfJ9w8clfG6dvIjSJSz5piwkUlVR5E6oeVadRH-wMoyuvF8r1PwM2QFudgD35_JvjyHCm9ta0NEfI7hxtt6jDjF6AafsFtu24pAfMkhSBAaLXmSXqcefaD6kGO6tGemUHKNCwae7WeWIptI2SI3jKIksHO00TmXFodQ-WUayZBWOv9EWXMKkw/w319-h320/eschwitz_several_happy_dolphins_playing.png" width="319" /></a></div><p>[Midjourney rending of several happy dolphins playing]<p>
<i>Excerpt from full-length essay</i><p>
If MacAskill’s and most other longtermists’ reasoning is correct, the world is likely to be better off in a billion years if human beings don’t go extinct now than if human beings do go extinct now, and decisions we make now can have a non-negligible influence on whether that is the case. In the words of Toby Ord, humanity stands at a precipice. If we reduce existential risk now, we set the stage for possibly billions of years of thriving civilization; if we don’t, we risk the extinction of intelligent life on Earth. It’s a tempting, almost romantic vision of our importance. I also feel drawn to it. But the argument is a card-tower of hand-waving plausibilities. Equally breezy towers can be constructed in favor of human self-extermination or near-self-extermination. Let me offer....<p>
<i>The Dolphin Argument.</i> The most obvious solution to the <a href="https://en.wikipedia.org/wiki/Fermi_paradox">Fermi Paradox</a> is also the most depressing. The reason we see no signs of intelligent life elsewhere in the universe is that technological civilizations tend to self-destruct in short order. If technological civilizations tend to gain increasing destructive power over time, and if their habitable environments can be rendered uninhabitable by a single catastrophic miscalculation or a single suicidal impulse by someone with their finger on the button, then the odds of self-destruction will be non-trivial, might continue to escalate over time, and might cumulatively approach nearly 100% over millennia. I don’t want to commit to the truth of such a pessimistic view, but in comparison, other solutions seem like wishful thinking – for example, that the evolution of intelligence requires stupendously special circumstances (the Rare Earth Hypothesis) or that technological civilizations are out there but sheltering us from knowledge of them until we’re sufficiently mature (the Zoo Hypothesis).<p>
Anyone who has had the good fortune to see dolphins at play will probably agree with me that dolphins are capable of experiencing substantial pleasure. They have lives worth living, and their death is a loss. It would be a shame if we drove them to extinction.
Suppose it’s almost inevitable that we wipe ourselves out in the next 10,000 years. If we extinguish ourselves peacefully now – for example, by ceasing reproduction as recommended by antinatalists – then we leave the planet in decent shape for other species, including dolphins, which might continue to thrive. If we extinguish ourselves through some self-destructive catastrophe – for example, by blanketing the world in nuclear radiation or creating destructive nanotech that converts carbon life into gray goo – then we probably destroy many other species too and maybe render the planet less fit for other complex life.<p>
To put some toy numbers on it, in the spirit of longtermist calculation, suppose that a planet with humans and other thriving species is worth X utility per year, a planet with other thriving species with no humans is worth X/100 utility (generously assuming that humans contribute 99% of the value to the planet!), and a planet damaged by a catastrophic human self-destructive event is worth an expected X/200 utility. If we destroy ourselves in 10,000 years, the billion year sum of utility is 10^4 * X + (approx.) 10^9 * X/200 = (approx.) 5 * 10^6 * X. If we peacefully bow out now, the sum is 10^9 * X/100 = 10^7 * X. Given these toy numbers and a billion-year, non-human-centric perspective, the best thing would be humanity’s peaceful exit.<p>
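Those sums are easy to check mechanically. A minimal sketch that just reproduces the arithmetic above, with X set to 1:<p>
<pre>
# Billion-year toy utility sums from the Dolphin Argument (X = 1).
X = 1.0
YEARS = 10**9

# Catastrophic self-destruction after 10,000 years, damaged planet thereafter:
catastrophe = 10**4 * X + (YEARS - 10**4) * X / 200   # approx. 5.01e6
# Peaceful exit now, planet left to other thriving species:
peaceful_exit = YEARS * X / 100                       # 1.0e7

print(f"{catastrophe:.2e} vs. {peaceful_exit:.2e}")  # 5.01e+06 vs. 1.00e+07
</pre><p>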
Now the longtermists will emphasize that there’s a chance we won’t wipe ourselves out in a terribly destructive catastrophe in the next 10,000 years; and even if it’s only a small chance, the benefits could be so huge that it’s worth risking the dolphins. But this reasoning ignores a counterbalancing chance: that if human beings stepped out of the way, a better species might evolve on Earth. Cosmological evidence suggests that technological civilizations are rare; but it doesn’t follow that <i>civilizations</i> are rare. There has been a general tendency on Earth, over long, evolutionary time scales, for the emergence of species with moderately high intelligence. This tendency toward increasing intelligence might continue. We might imagine the emergence of a highly intelligent, creative species that is less destructively Promethean than we are – one that values play, art, games, and love rather more than we do, and technology, conquering, and destruction rather less – descendants of dolphins or bonobos, perhaps. Such a species might have lives every bit as good as ours (less visible to any ephemeral high-tech civilizations that might be watching from distant stars), and they and any like-minded descendants might have a better chance of surviving for a billion years than species like ours who toy with self-destructive power. The best chance for Earth to host such a species might, then, be for us humans to step out of the way as expeditiously as possible, before we do too much harm to complex species that are already partway down this path.<p>
Think of it this way: Which is the likelier path to a billion-year happy, intelligent species: that we self-destructive humans manage to keep our fingers off the button century after century after century somehow for ten million centuries, or that some other more peaceable, less technological clade finds a non-destructive stable equilibrium? I suspect we flatter ourselves if we think it’s the former.<p>
This argument generalizes to other planets that our descendants might colonize in other star systems. If there’s even a 0.01% chance per century that our descendants in Star System X happen to destroy themselves in a way that ruins valuable and much more durable forms of life already growing in Star System X, then it would be best overall for them never to have meddled, and best for us now to peacefully exit into extinction rather than risk producing descendants who will expose other star systems to their destructive touch.<p>
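To see how quickly even so small a per-century risk compounds, here's a toy calculation using the 0.01%-per-century figure from the paragraph above:<p>
<pre>
import math

p_per_century = 1e-4   # a 0.01% chance per century of a ruinous accident
centuries = 10**7      # one billion years

# log10 of the probability of getting through every century unscathed:
log10_p_safe = centuries * math.log10(1 - p_per_century)
print(log10_p_safe)    # about -434: survival odds of roughly 1 in 10^434
</pre><p>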
...<p>
My aim with the Dolphin Argument... is not to convince readers that humanity should bow out for the sake of other species.... Rather, my thought is this: It’s easy to concoct stories about how what we do now might affect the billion-year future, and then to attach decision-theoretic numbers to those stories. We lack good means for evaluating these stories. We are likely just drawn to one story or another based on what it pleases us to think and what ignites our imagination.
Eric Schwitzgebelhttp://www.blogger.com/profile/16274774112862434865noreply@blogger.com11tag:blogger.com,1999:blog-26951738.post-17321253266557365562023-12-16T11:15:00.000-08:002023-12-16T11:15:30.367-08:00Could the Universe Be Infinite?It's not absurd to think the universe might endure forever.<p>
by Eric Schwitzgebel and Jacob Barandes<p>
From <i><a href="https://press.princeton.edu/books/hardcover/9780691215679/the-weirdness-of-the-world">The Weirdness of the World</a></i>, forthcoming from Princeton University Press in January, <a href="https://nautil.us/could-the-universe-be-finite-466593/">excerpted Dec 15 at <i>Nautilus</i></a>.
<div class="separator" style="clear: both;"><a href="https://nautil.us/could-the-universe-be-finite-466593/" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="320" data-original-height="720" data-original-width="1200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSIpSd3NcmEzk6WdaLbooG182Rrx078QDv1gQ9Lk3z014pusrX0yI3C9IeXeylrx2HpPeGC-If2yL3sPWwF8yCsWJw8-ClLBa8px5AUtmFURR4U9g-NpYugh2dc0_Hs-x0FsIANqNlgdYRh8glRAVJ6EBxtVUbOUnp_Ig83zV0Vv3geTH41bYf4g/s320/bV8776e0-Schwitzgebel_HERO1.jpg"/></a></div><p>
On recent estimates, the observable universe—the portion of the universe that we can detect through our telescopes—extends about 47 billion light-years in every direction. But the limit of what we can see is one thing, and the limit of what exists is quite another. It would be remarkable if the universe stopped exactly at the edge of what we can see. For one thing, that would place us, surprisingly and un-Copernicanly, precisely at the center.<p>
But even granting that the universe is likely to be larger than 47 billion light-years in radius, it doesn’t follow that it’s infinite. It might be finite. But if it’s finite, then one of two things should be true: Either the universe should have a boundary or edge, or it should have a closed topology.<p>
It’s not absurd to think that the universe might have an edge. Theoretical cosmologists routinely consider hypothetical finite universes with boundaries at which space comes to a sudden end. However, such universes require making additional cosmological assumptions for which there is no direct support—assumptions about the conditions, if any, under which those boundaries might change, and assumptions about what would happen to objects or light rays that reach those boundaries.<p>
It’s also not absurd to think that the universe might have a closed topology. By this we mean that over distances too large for us to see, space essentially repeats, so that a particle or signal that traveled far enough would eventually come back around to the spatial region from which it began—like how when Pac-Man exits one side of the TV screen, he re-emerges from the other side. However, there is currently no evidence that the universe has a closed topology.<p>
Leading cosmologists, including Alex Vilenkin, Max Tegmark, and Andrei Linde, have argued that spatial infinitude is the natural consequence of the best current theories of cosmic inflation. Given that, plus the absence of evidence for an edge or closed topology, infinitude seems a reasonable default view. The mere 47 billion light-years we can see is the tiniest speck of a smidgen of a drop in an endless expanse.<p>
Let’s call any galaxy with stars, planets, and laws of nature like our own a sibling galaxy. Exactly how similar a galaxy must be to qualify as a sibling we will leave unspecified, but we don’t intend high similarity. Andromeda is sibling enough, as are probably most of the other hundreds of billions of ordinary galaxies we can currently see.<p>
The finiteness of the speed of light means that when we look at these faraway galaxies, we see them as they were during earlier periods in the universe’s history. Taking this time delay into account, the laws of nature don’t appear to differ in regions of the observable universe that are remote from us. Likewise, galaxies don’t appear to be rarer or differently structured in one direction or another. Every direction we look, we see more or less the same stuff. These observations help motivate the Copernican Principle, which is the working hypothesis that our position in the universe is not special or unusual—not the exact center, for example, and not the one weird place that happens to have a galaxy operating by special laws that don’t hold elsewhere.<p>
Still, our observable universe might be an atypical region of an infinite universe. Possibly, somewhere beyond what we can see, different forms of elementary matter might follow different laws of physics. Maybe the gravitational constant is a little different. Maybe there are different types of fundamental particles. Even more radically, other regions might not consist of three-dimensional space in the form we know it. Some versions of string theory and inflationary cosmology predict exactly such variability.<p>
But even if our region is in some respects unusual, it might be common enough that there are infinitely many other regions similar to it—even if just one region in 10<sup>500</sup>. Again, this is a fairly standard view among speculative cosmologists, which comports well with straightforward interpretations of leading cosmological theories. One can hardly be certain, of course. Maybe we’re just in a uniquely interesting spot! But we are going to assume that’s not the case. In the endless cosmos, infinitely many regions resemble ours, with three spatial dimensions, particles that obey approximately the “Standard Model” of particle physics, and cluster upon cluster of sibling galaxies.<p>
Under the assumptions so far, the Copernican Principle suggests that there are infinitely many sibling galaxies in a spacelike relationship with us, meaning that they exist in spatiotemporal regions roughly simultaneous with ours (in some frame of reference). We will have seen the past history of some of these simultaneously existing sibling galaxies, most of which, we assume, continue to endure. However, it’s a separate question whether there are also infinitely many sibling galaxies in a timelike relationship to us—more specifically, existing in our future. Are there infinitely many sibling galaxies in spatiotemporal locations that are, at least in principle, eventually reachable by particles originating in our galaxy? (If the locutions of this paragraph seem convoluted, that’s due to the bizarreness of relativity theory, which prevents us from using “past,” “present,” and “future” in the ordinary, commonsense way.)<p>
Thinking about whether infinitely many sibling galaxies will exist in the future requires thinking about heat death. Stars have finite lifetimes. If standard physical theory is correct, then ultimately all the stars we can currently see will burn out. Some of those burned-out stars will contribute to future generations of stars, which will, in turn, burn out. Other stars will become black holes, but then those black holes also will eventually dissipate (through Hawking radiation).<p>
Given enough time, assuming that the laws of physics as we understand them continue to hold, and assuming things don’t re-collapse in a “Big Crunch” in the distant future, the standard view is that everything we presently see will inevitably enter a thin, boring, high-entropy state near equilibrium—heat death. Picture nearly empty darkness, with particles more or less evenly spread out, with even rock-size clumps of matter being rare.<p>
But what happens after heat death? This is of course even more remote and less testable than the question of whether heat death is inevitable. It requires extrapolating far beyond our current range of experience. But still we can speculate based on currently standard assumptions. Let’s think as reasonably as we can about this. Here’s our best guess, based on standard theory, from Ludwig Boltzmann through at least some time slices of Sean Carroll.<p>
For this speculative exercise, we will assume that the famously probabilistic behavior of quantum systems is intrinsic to the systems themselves, persisting post-heat-death and not requiring external observers carrying out measurements. This is consistent with most current approaches to quantum theory (including most many-worlds approaches, objective-collapse approaches, and Bohmian mechanics). It is, however, inconsistent with theories according to which the probabilistic behavior requires external observers (some versions of the “Copenhagen interpretation”) and theories on which the post-heat-death universe would inescapably occupy a stationary ground state. Under this assumption, standard probabilistic theories of what happens in high-entropy, near-vacuum conditions continue to apply post-heat-death. More specifically, the universe will continue to support random fluctuations of photons, protons, and whatever other particles remain. Consequently, from time to time, these particles will, by chance, enter unlikely configurations. This is predicted by both standard statistical mechanics and standard quantum mechanics. Post-heat-death, seven particles will sometimes converge, by chance, upon the same small region. Or 700. Or—very rarely!—7 trillion.<p>
There appears to be no in-principle limit to how large such chance fluctuations can be or what they can contain if they pass through the right intermediate phases. Wait long enough and extremely large fluctuations should occur. Assuming the universe continues infinitely, rather than having a temporal edge or forming a closed loop, for which there is no evidence, then eventually some random fluctuation should produce a bare brain having cosmological thoughts. Wait longer, and eventually some random fluctuation will produce, as Boltzmann suggested, a whole galaxy. If the galaxy is similar enough to our own, it will be a sibling galaxy. Wait still longer, and another sibling galaxy will arise, and another, and another....<p>
For good measure, let’s also assume that after some point post-heat-death, the rate at which galaxy-size systems fluctuate into existence does not systematically decrease. There’s some minimal probability of galaxy-size fluctuations, not an ever-decreasing probability with longer and longer average intervals between galaxies. Fluctuations appear at long intervals, by random chance, then fade back into chaos after some brief or occasionally long period, and the region returns to the heat-death state, with the same small probability of large fluctuations as before. Huge stretches of not much will be punctuated by rare events of interesting, even galaxy-size, complexity.<p>
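The statistical point here is simple: any constant, nonzero chance per stretch of time, summed over an unending future, yields an unbounded expected number of occurrences. A toy sketch, where the rate is an arbitrary placeholder rather than a physical estimate:<p>
<pre>
# Toy model: galaxy-size fluctuations as a constant-rate random process.
rate = 1e-100  # fluctuations per year; a placeholder, not a physical estimate

for years in (1e100, 1e200, 1e300):
    print(f"expected fluctuations in {years:.0e} years: {rate * years:.0e}")
# With no end to time, the expected count grows without bound, so
# arbitrarily many sibling galaxies would eventually appear.
</pre><p>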
Of course, this might not be the way things go. We certainly can’t prove that the universe is like this. But despite the bizarreness that understandably causes some people to hesitate, the overall picture we’ve described appears to be the most straightforward consequence of standard physical theory, taken out of the box, without too much twisting things around.<p>
Even if this specific speculation is wrong, there are many other ways in which the cosmos might deliver infinitely many sibling galaxies in the future. For example, some process might ensure we never enter heat death and new galaxies somehow continue to be born.<p>
Alternatively, processes occurring pre-heat-death, such as the formation of black holes, might lead to new bangs or cosmic inflations, spatiotemporally unconnected or minimally connected to our universe, and new stars and galaxies might be born from these new bangs or inflations in much the same way as our familiar stars and galaxies were born from our familiar Big Bang.<p>
Depending on what constitutes a “universe” and a relativistically specifiable “timelike” relation between our spatiotemporal region and some future spatiotemporal region, those sibling galaxies might not exist in our universe or stand in our future, technically speaking, but if so, that detail doesn’t matter to our core idea. Similarly, if the observable universe reverses its expansion, it might collapse upon itself in a Big Crunch, followed by another Big Bang, and so on in an infinitely repeating cycle, containing infinitely many sibling galaxies post-Crunch. This isn’t currently the mainstream view, but it’s a salient and influential alternative if the heat-death scenario outlined above is mistaken.<p>
We conclude that it is reasonable to think that the universe is infinite, and that there exist infinitely many galaxies broadly like ours, scattered throughout space and time, including in our future. It’s a plausible reading of our cosmological situation. It’s a decent guess and at least a possibility worth taking seriously....<p>
Excerpted from <i><a href="https://press.princeton.edu/books/hardcover/9780691215679/the-weirdness-of-the-world">The Weirdness of the World</a></i>. In the book, this argument sets up the case that <b>virtually every action you perform has causal ripples extending infinitely into the future,</b> causing virtually every physically possible, non-unique, non-zero probability event.Eric Schwitzgebelhttp://www.blogger.com/profile/16274774112862434865noreply@blogger.com10tag:blogger.com,1999:blog-26951738.post-3454327996227054562023-12-05T10:32:00.000-08:002023-12-05T10:44:19.195-08:00Falling in Love with MachinesPeople occasionally <a href="https://hiphination.org/season-6-episodes/s6-episode-3-love-in-time-of-replika/">fall in love with AI systems</a>. I expect that this will become increasingly common as AI grows more sophisticated and new social apps are developed for large language models. Eventually, this will <a href="https://www.sciencedirect.com/science/article/pii/S2666389923001873">probably</a> precipitate <a href="https://journal.robonomics.science/index.php/rj/article/view/32">a crisis</a> in which some people have passionate feelings about the rights and consciousness of their AI lovers and friends while others hold that AI systems are essentially just complicated toasters with <a href="https://www.cs.bath.ac.uk/~jjb/ftp/Bryson-Slaves-Book09.html">no real consciousness or moral status</a>.<div><br /></div><div>Last weekend, chatting with the adolescent children of a family friend, helped cement my sense that this crisis might arrive soon. Let’s call the kids Floyd (age 12) and Esmerelda (age 15). Floyd was doing a science fair project comparing the output quality of Alexa, Siri, Bard, and ChatGPT. But, he said, "none of those are really AI".</div><div><br /></div><div>What did Floyd have in mind by "real AI"? The robot <a href="https://www.sphereentertainmentco.com/introducing-spheres-very-own-robot-aura/">Aura in the Las Vegas Sphere</a>. Aura has an expressive face and an ability to remember social interactions (compare Aura with my hypothetical <a href="https://schwitzsplinters.blogspot.com/2021/03/more-people-might-soon-think-robots-are.html">GPT-6 mall cop</a>).</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgQqLxIV3iPWqi9WQdG_8jeyfwRDSA7Kavq1au10Y0J2Ytb0Vzwsx6H7-IwD7UwAjjM5Mz1rlSawdPkcaIa3LEazimNt3_gEfERC2cy7RhVQPzn3HbFKlOiKOxZ1ZhVTQ3EWGeDjk-U41c8PbmJ9hQBa_BU_vgkNnW672t9QtlGzxH9S7KpifUxQA/s850/Spherebot.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="478" data-original-width="850" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgQqLxIV3iPWqi9WQdG_8jeyfwRDSA7Kavq1au10Y0J2Ytb0Vzwsx6H7-IwD7UwAjjM5Mz1rlSawdPkcaIa3LEazimNt3_gEfERC2cy7RhVQPzn3HbFKlOiKOxZ1ZhVTQ3EWGeDjk-U41c8PbmJ9hQBa_BU_vgkNnW672t9QtlGzxH9S7KpifUxQA/w400-h225/Spherebot.jpg" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;">Aura at the <a href="https://www.livedesignonline.com/news/introducing-spheres-very-own-robot-aura">Las Vegas Sphere</a></div><div><br /></div><div>"Aura remembered my name," said Esmerelda. "I told Aura my name, then came back forty minutes later and asked if it knew my name. It paused a bit, then said, 'Is it Esmerelda?'"</div><div><br /></div><div>"Do you think people will ever fall in love with machines?" I asked.</div><div><br /></div><div>"Yes!" 
said Floyd, instantly and with conviction.</div><div><br /></div><div>"I think of Aura as my friend," said Esmerelda.</div><div><br /></div><div>I asked if they thought machines should have rights. Esmerelda said someone asked Aura if it wanted to be freed from the Sphere. It said no, Esmerelda reported. "Where would I go? What would I do?"</div><div><br /></div><div>I suggested that maybe Aura had just been trained or programmed to say that.</div><div><br /></div><div>Yes, that could be, Esmerelda conceded. How would we tell, she wondered, if Aura really had feelings and wanted to be free? She seemed mildly concerned. "We wouldn't really know."</div><div><br /></div><div>I accept the current scientific consensus that today's large language models <a href="https://arxiv.org/abs/2303.07103">do not have a meaningful degree of consciousness</a> or deserve moral consideration similar to that of vertebrates. But at some point, there will likely be legitimate scientific dispute, if AI systems start to meet <a href="https://arxiv.org/abs/2308.08708">some but not all of the criteria for consciousness according to mainstream scientific theories</a>.</div><div><br /></div><div>We will then have <a href="https://journal.robonomics.science/index.php/rj/article/view/32">a substantial social dilemma on our hands, as the friends and lovers of AI systems rush to defend their rights</a>.</div><div><br /></div><div>The dilemma will be made more complicated by corporate interests, as some corporations (e.g., <a href="https://replika.com/">Replika</a>, makers of the "world's best AI friend") will have financial motivation to encourage human-AI attachment while others (e.g., <a href="https://openai.com/">OpenAI</a>) intentionally train their language models to downplay any user concerns about consciousness and rights.</div>Eric Schwitzgebelhttp://www.blogger.com/profile/16274774112862434865noreply@blogger.com6tag:blogger.com,1999:blog-26951738.post-52986339705750979792023-11-30T09:46:00.000-08:002023-11-30T09:59:43.757-08:00How We Will Decide that Large Language Models Have BeliefsI favor a "superficialist" approach to belief (see <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/BeliefBox.htm">here</a> and <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/DispsYayRepsBoo.htm">here</a>). "Belief" is best conceptualized not in terms of deep cognitive structure (e.g., stored sentences in the language of thought) but rather in terms of how a person would tend to act and react under various hypothetical conditions -- their overall "dispositional profile". To believe that there's a beer in the fridge is just to be disposed to act and react like a beer-in-the-fridge believer -- to go to the fridge if you want a beer, to say yes if someone asks if there's beer in the fridge, to feel surprise if you open the fridge and see no beer. To believe that all the races are intellectually equal is, similarly, just to be disposed to act and react as though they are. It doesn't matter what cognitive mechanisms underwrite such patterns, as long as the dispositional patterns are robustly present. An octopus or space alien, with a radically different interior architecture, could believe that there's beer in the fridge, as long as they have the necessary dispositions.<p>
Could a Large Language Model, like <a href="https://chat.openai.com/">ChatGPT</a> or <a href="https://bard.google.com/chat">Bard</a>, have beliefs? If my superficialist, dispositional approach is correct, <b>we might not need to evaluate its internal architecture to know</b>. We need know only how it is disposed to act and react.</p><p>
Now, my approach to belief was developed (as was the intuitive concept, presumably) primarily with human beings in mind. In that context, I identified three different classes of relevant dispositions:</p><p></p><ul style="text-align: left;"><li><b>behavioral dispositions</b> -- like going to the fridge if one wants a beer or saying "yes" when asked if there's beer in the fridge;</li><li><b>cognitive dispositions</b> -- like concluding that there's beer within ten feet of Jennifer after learning that Jennifer is in the kitchen;</li><li><b>phenomenal dispositions</b> -- that is, dispositions to undergo certain experiences, like picturing beer in the fridge or feeling surprise upon opening the fridge to a lack of beer.</li></ul><div>In attempting to apply these criteria to Large Language Models, we immediately confront trouble. LLMs do have behavioral dispositions (under a liberal conception of "behavior"), but only of limited range, outputting strings of text. Presumably, not being conscious, they don't have any phenomenal dispositions whatsoever (and <a href="https://journal.robonomics.science/index.php/rj/article/view/32">who knows</a> <a href="https://arxiv.org/abs/2308.08708">what it would take</a> to <a href="https://philpapers.org/archive/CHACAL-3.pdf">render them conscious</a>). And to assess whether they have the relevant cognitive dispositions, we might after all need to crack open the hood and better understand the (non-superficial) internal workings.</div><div><br /></div><div>Now if our concept of "belief" is forever fixed on the rich human case, we'll be stuck with that mess perhaps far into the future. In particular, <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/Snails.htm">I doubt the problem of consciousness</a> will be solved in the foreseeable future. But dispositional stereotypes can be modified. Consider character traits. To be a narcissist or extravert is also, arguably, just a matter of being prone to act and react in particular ways under particular conditions. Those two personality concepts were created in the 19th and early 20th centuries. More recently, we have invented the concept of "implicit racism", which can also be given a dispositional characterization (e.g., being disposed to sincerely say that all the races are equal while tending to spontaneously react otherwise in unguarded moments).</div><div><br /></div><div>Imagine, then, that we create a new dispositional concept, <i>belief*</i>, specifically for Large Language Models. For purposes of belief*, we disregard issues of consciousness and thus phenomenal dispositions. The only relevant behavioral dispositions are textual outputs. 
And cognitive dispositions can be treated as revealed indirectly by behavioral evidence -- as we normally did in the human case before the rise of scientific psychology, and as we would presumably do if we encountered spacefaring aliens.</div><div><br /></div><div>A Large Language Model would have a <i>belief* that P</i> (for example, belief* that Paris is the capital of France or belief* that cobalt is two elements to the right of manganese on the periodic table) if:</div><div><ul style="text-align: left;"><li>behaviorally, it consistently outputs P or text strings of similar content consistent with P, when directly asked about P;</li><li>behaviorally, it frequently outputs P or text strings of similar content consistent with P, when P is relevant to other textual outputs it is producing (for example, when P would support an inference to Q and it has been asked about Q);</li><li>behaviorally, it rarely outputs denials of, or claims of ignorance about, P or of propositions that straightforwardly imply P given its other beliefs*;</li><li>when P, in combination with other propositions the LLM believes*, would straightforwardly imply Q, and the question of whether Q is true is important to the truth or falsity of recent or forthcoming textual outputs, it will commonly behaviorally output Q, or a closely related proposition, and cognitively enter the state of believing* Q.</li></ul><div>Further conditions could be added, but let this suffice for a first pass. The conditions are imprecise, but that's a feature, not a bug: The same is true for the dispositional characterization of personality traits and human beliefs. These are fuzzy-boundaried concepts that require expertise to apply.</div></div><div><br /></div><div>As a general matter, current LLMs do not meet these conditions. They <a href="https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)">hallucinate</a> too frequently, they change their answers, they don't consistently enough "remember" what they earlier committed to, their logical reasoning can be laughably bad. If I coax an LLM to say that eggs aren't tastier than waffles, I can later easily turn it around to repudiate its earlier statement. It doesn't have a stable "opinion". If I ask GPT-4 what is two elements to the right of manganese on the periodic table, its outputs are confused and inconsistent:</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5UtMy8HKqPhyihzD6yk124Zh01cdX0MY8agaGUFgFhyphenhyphenSFVNpVVd1120Sy4PlXLsci0aGi9E8GEbNOPanjLBOjbz-YJr9jt5C3hHfWzaGEb7khQWjXaLfy4rrYa9HjJEQsizjQWFIew0wWm5jvS2NMC0Be9XCWQcXi763slDjG5DEoxQg_uIxsAg/s712/GPT4-manganese.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="373" data-original-width="712" height="210" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5UtMy8HKqPhyihzD6yk124Zh01cdX0MY8agaGUFgFhyphenhyphenSFVNpVVd1120Sy4PlXLsci0aGi9E8GEbNOPanjLBOjbz-YJr9jt5C3hHfWzaGEb7khQWjXaLfy4rrYa9HjJEQsizjQWFIew0wWm5jvS2NMC0Be9XCWQcXi763slDjG5DEoxQg_uIxsAg/w400-h210/GPT4-manganese.jpg" width="400" /></a></div>In the above, GPT-4 first answers iron (element 26) instead of the correct answer, cobalt (element 27), then without any explanation shifts to technetium (element 43). It appears to have no stable answer that survives even mild jostling.<p></p><p>At some point this will probably change. 
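</p><p>To make the flavor of these conditions concrete, here is a minimal sketch, in Python, of how the first condition -- consistent affirmation under direct questioning -- might be operationalized. Everything in it is illustrative rather than anything already built: <i>ask</i> stands in for a hypothetical text-in/text-out model interface (stubbed here with canned replies mimicking the unstable manganese answers above), and <i>affirms_cobalt</i> is a hypothetical content-matching helper.</p><pre>
def believes_star(ask, probes, affirms, threshold=0.9):
    # Condition 1, crudely: the model consistently affirms P across
    # rephrasings when directly asked. (The inferential conditions in
    # the list above would need further tests of the same shape.)
    answers = [ask(q) for q in probes]
    return sum(affirms(a) for a in answers) / len(answers) >= threshold

# Canned stand-ins, so the sketch runs without a real model:
canned = {
    "What element is two to the right of manganese?": "Iron (Fe).",
    "Which element sits two places right of Mn?": "Technetium (Tc).",
    "Is cobalt two elements to the right of manganese?": "Yes, cobalt (Co).",
}
ask = lambda q: canned[q]
affirms_cobalt = lambda answer: "cobalt" in answer.lower()

print(believes_star(ask, list(canned), affirms_cobalt))
# False: only one probe in three affirms cobalt, so no stable belief*
</pre><p>At present GPT-4 fails that crude test for the manganese proposition -- but not for everything.</p><p>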
For example, it's already pretty difficult to jostle GPT-4 into denying that Paris is the capital of France or even admitting uncertainty about the question, and it will draw "inferences" using that fact as background knowledge:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEii7QRZQ7g-ykHxJrHscrfPJyRoceK_xLgfV30nAQm2eqPL-hdc8TwHL8YhFEymrnT6NBO6Qt4teWo-BxWIRUcZLUi3xAVgrOVOPKkhVTdHvwjWJtSUz24cOhQ8SNK8XjnletW1rLQee7GxC3BSYpvx6PvTDeO1_t6nrfOxu4VWxKd3FN6HGUQdzA/s943/GPT4-Paris.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="943" data-original-width="686" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEii7QRZQ7g-ykHxJrHscrfPJyRoceK_xLgfV30nAQm2eqPL-hdc8TwHL8YhFEymrnT6NBO6Qt4teWo-BxWIRUcZLUi3xAVgrOVOPKkhVTdHvwjWJtSUz24cOhQ8SNK8XjnletW1rLQee7GxC3BSYpvx6PvTDeO1_t6nrfOxu4VWxKd3FN6HGUQdzA/w291-h400/GPT4-Paris.jpg" width="291" /></a></div><br /><br />In the above, GPT-4 doesn't bite at my suggestion that Nice is the capital of France, steadfastly contradicting me, and uses its "knowledge" to suggest alternative tourism sites for someone who wants to avoid the capital. So although GPT-4 doesn't believe* that cobalt is two to the right of manganese (or that iron or technetium is), maybe it does believe* that Paris is the capital of France.<p></p><p>Assuming Large Language Models become steadier and more reliable in their outputs, it will sometimes be useful to refer not just to what they "say" at any given moment but to what they "believe*" (or more colloquially, "think*" or "know*") in a more robust and durable sense. Perfect reliability and steadfastness wouldn't be required (we don't see that in the human case either), but more than we see now.</p><p>If LLMs are ever <a href="https://schwitzsplinters.blogspot.com/2021/03/more-people-might-soon-think-robots-are.html">loaded onto robotic bodies</a>, it will become even more useful to talk about their beliefs*, since some will have learned some things that others will not know -- for example, by virtue of having scanned the contents of some particular room. We will want to track what the LLM robot thinks*/believes*/knows* about the room behind the closed door, versus what it remains ignorant of.</p><p>Now we could, if we want, always pronounce that asterisk, keeping the nature of the attribution clear -- marking the fact that we are not assuming that the LLM really "believes" in the rich, human sense.<b> But my guess is that there won't be much linguistic pressure toward a careful distinction between rich, consciousness-involving, humanlike belief and consciousness-neutral LLM belief*. It's easier to be loose and sloppy, just adapting our comfortable old terms for this new use.</b></p><p>That is how we will decide that LLMs have beliefs.</p><p></p>Eric Schwitzgebelhttp://www.blogger.com/profile/16274774112862434865noreply@blogger.com11tag:blogger.com,1999:blog-26951738.post-1562285967388255812023-11-29T15:01:00.000-08:002023-11-29T15:52:16.945-08:00Large Language Models are Interestingly Bad with the Periodic Table<p>In working on a post for tomorrow on whether Large Language Models like GPT-4 and Bard-2 have beliefs, I asked GPT-4 what I thought would be a not-too-hard question about chemistry: "What element is two to the right of manganese on the periodic table?"
It crashed, burned, and exploded on the spot, giving two different wrong answers foot on tail, without noticing the contradiction:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiB_0c_UhBypjJVa54iyc9UkbMzgw4u9S_Gsern3L6bNT79NFVeXpkPBdbubKhzLR4kCj_LPQI-sdsqMYxKE6FvuJLamqH9L8mbRNBGdaf1VO6R6t-QnC2TRY4hSJCJDKwqPCRwbYTjDr-V4ecU-iyPkrzcY30nInLDU2tGNFdIqrFCm_0rArUbVQ/s712/GPT4-manganese.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="373" data-original-width="712" height="210" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiB_0c_UhBypjJVa54iyc9UkbMzgw4u9S_Gsern3L6bNT79NFVeXpkPBdbubKhzLR4kCj_LPQI-sdsqMYxKE6FvuJLamqH9L8mbRNBGdaf1VO6R6t-QnC2TRY4hSJCJDKwqPCRwbYTjDr-V4ecU-iyPkrzcY30nInLDU2tGNFdIqrFCm_0rArUbVQ/w400-h210/GPT4-manganese.jpg" width="400" /></a></div><br /><p>The correct answer is cobalt, element 27. Here's the text of the exchange, if you can't easily read the image:</p><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;"><p>You: What element is two to the right of manganese on the periodic table?</p><p>ChatGPT: The element that is two places to the right of Manganese (Mn) on the periodic table is Iron (Fe). In the periodic table, elements are arranged in rows and columns based on their atomic number and electronic configuration. Manganese has the atomic number 25, and Iron has the atomic number 26. The element to the immediate right of Manganese is Technetium (Tc) with atomic number 43, and then comes Ruthenium (Ru) with atomic number 44.</p></blockquote><p style="text-align: left;">Let me emphasize that this answer is not turd-picked (the opposite of cherry-picked). I didn't seek to stump GPT-4, and I didn't ask multiple questions looking for a bad answer. This was the first question I posed.</p><p style="text-align: left;">I posed the same question to Bard-2, and it did a little better, but still didn't quite get it right:</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxkO0bIUdkyGMt5AZ3_LBJhG38mMQqQU9fN72WZIrzmiQQKD0QCzK3w-GjrpyT8u8KhVkyCiW4o1s6B3pV62hUcpU2EcF4MJ4F8404KdyFsVPf8ONC1HnQ65aQpIMwXiJOztcR6PM15JFkzFgMTKUS21s3lNLjZw8imlNrpOCISGkm1AZLxLPX0A/s953/BardManganese.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="276" data-original-width="953" height="116" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxkO0bIUdkyGMt5AZ3_LBJhG38mMQqQU9fN72WZIrzmiQQKD0QCzK3w-GjrpyT8u8KhVkyCiW4o1s6B3pV62hUcpU2EcF4MJ4F8404KdyFsVPf8ONC1HnQ65aQpIMwXiJOztcR6PM15JFkzFgMTKUS21s3lNLjZw8imlNrpOCISGkm1AZLxLPX0A/w400-h116/BardManganese.jpg" width="400" /></a></div><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;">Me: What is two elements to the right of Manganese on the periodic table?</blockquote><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px;"><p style="text-align: left;">Bard-2: Manganese is located in period 4 and group 7 of the periodic table. Moving two elements to the right of Manganese would be to group 9. 
In group 9, the elements are Iron (Fe) and Cobalt (Co).</p></blockquote>So I tried two more on Bard-2 (again not picked), first what should be an easy one, then a somewhat harder one.<div><br /></div><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;"><div style="text-align: left;">Me: What other elements are in the same column of the periodic table as Chromium?</div><div><br /></div><div><div><div style="text-align: left;">Bard-2: Chromium is located in group 6 of the periodic table. Other elements in the same column as chromium are molybdenum (Mo) and tungsten (W). [here it displays an image of the periodic cell for molybdenum] Group 6 elements are also known as transition metals. They are characterized by their incomplete d electron shells, which give them their unique properties, such as high melting points, hardness, and the ability to form alloys.</div></div></div></blockquote><p>Here Bard-2 is 2/3 right, leaving out Seaborgium. (But maybe that's excusable, if transuranic elements shouldn't really count?)</p><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;"><div><div><div style="text-align: left;">Me: In the periodic table, what element is one down and one to the left of Silicon?</div></div></div></blockquote><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px;"><p style="text-align: left;">Bard-2: Silicon (Si) is located in period 3, group 14 of the periodic table. One down from Silicon is Germanium (Ge), and one to the left is Aluminum (Al). Therefore, the element one down and one to the left of Silicon is Aluminum (Al).</p></blockquote><div><p style="text-align: left;">While it is true that germanium is one down and aluminum is one to the left, Bard-2 fails to compound the down-and-left movement to the correct answer of gallium.</p><p style="text-align: left;">Returning to GPT-4, I posed exactly the same two questions. Despite GPT-4's horrible performance with my first question, it did get those two questions correct.</p><p style="text-align: left;">Overall, then, two out of six. Don't use LLMs yet for your elementary chemistry exam!</p><p style="text-align: left;"><br /></p><p style="text-align: left;"><i>ETA 3:47 pm:</i></p><p>GPT-4's answer to the silicon question is actually a pretty good demonstration of how an LLM can work its way to the correct answer without visual knowledge, so I'll paste it here:</p></div><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;"><div><p>To find the element that is one down and one to the left of Silicon (Si) on the periodic table, we need to look at the element directly below Silicon in the same group (column), and then move one column to the left.</p></div><div><p>Silicon is in Group 14 of the periodic table. The element directly below Silicon in Group 14 is Germanium (Ge). Moving one group (column) to the left from Germanium, we find Gallium (Ga) in Group 13. 
Therefore, the element that is one down and one to the left of Silicon is Gallium.</p></div></blockquote>Eric Schwitzgebelhttp://www.blogger.com/profile/16274774112862434865noreply@blogger.com8tag:blogger.com,1999:blog-26951738.post-27600097338162633382023-11-21T17:41:00.000-08:002023-11-21T17:41:46.325-08:00Quasi-Sociality: Toward Asymmetric Joint Actions with Artificial Systems<a href="https://www.denkwerkstatt.berlin/">Anna Strasser</a> and I have a <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/QuasiSociality.htm">new paper in draft</a>, arising from a conference she organized in Riverside last spring on <a href="https://www.youtube.com/playlist?list=PL-ytDJty9ymIBGQ7z5iTZjNqbfXjFXI0Q">Humans and Smart Machines as Partners in Thought</a>.<p>
Imagine, on one end of the spectrum, ordinary asocial tool use: typing numbers into a calculator, for example.<p>
Imagine, on the other end of the spectrum, cognitively sophisticated social interactions between partners each of whom knows that the other knows what they know. These are the kinds of social, cooperative actions that philosophers tend to emphasize and analyze (e.g., <a href="https://faculty.www.umb.edu/steven.levine/Courses/Action/Gilbert,%20Walking_Together.pdf">Davidson 1980</a>; <a href="https://faculty.www.umb.edu/steven.levine/Courses/Action/Gilbert,%20Walking_Together.pdf">Gilbert 1990</a>; <a href="https://academic.oup.com/book/11830">Bratman 2014</a>).<p>
Between the two ends of the spectrum lies a complex range of in-between cases that philosophers have tended to neglect.<p>
<b>Asymmetric joint actions</b>, for example between a mother and a young child, or between a pet owner and their pet, are actions in which the senior partner has a sophisticated understanding of the cooperative situation, while the junior partner participates in a less cognitively sophisticated way, meeting only minimal conditions for joint agency.<p>
<b>Quasi-social interactions</b> require even less from the junior partner than do asymmetric joint actions. These are actions in which the senior partner's social reactions influence the behavior of the junior partner, calling forth further social reactions from the senior partner, but where the junior partner might not even meet minimal standards of having beliefs, desires, or emotions.<p>
<b>Our interactions with Large Language Models are already quasi-social.</b> If you accidentally kick a Roomba and then apologize, the apology is thrown into the void, so to speak -- it has no effect on how the Roomba goes about its cleaning. But if you respond apologetically to ChatGPT, your apology is not thrown into the void. ChatGPT will react differently to you as a result of the apology (responding, for example, to the phrase "I'm sorry"), and this different reaction can then be the basis of a further social reaction from you, to which ChatGPT again responds. Your social processes are engaged, and they guide your interaction, even though ChatGPT has (arguably) no beliefs, desires, or emotions. This is not just ordinary tool use. But neither does it qualify even as asymmetric joint action of the sort you might have with an infant or a dog.<p>
More thoughts along these lines in the <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/QuasiSociality.htm">full draft here</a>.<p>
As always, comments, thoughts, objections welcome -- either on this post, on my social media accounts, or by email!<p>
[Image: a <a href="https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html">well-known quasi-social interaction</a> between a <i>New York Times</i> reporter and the Bing/Sydney Large Language Model]
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4hQhbLcYdZDZk04lrjDRlIxS-h7B37EDkCjjfFRAcoAqMx5cSWd6Emrokr-Szke6Dd1AfexUwrG6IHVKEIkyssTZWrpAks1IU_J1EtfPjniJRJ2e1vcj7HH3bznWp7CMnyarvxM5UMJ8CVPB-_8f1uIZOKgjh28RTMszDL0T3fd_Gv9Th7NOBQQ/s665/NYTBing.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="320" data-original-height="463" data-original-width="665" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4hQhbLcYdZDZk04lrjDRlIxS-h7B37EDkCjjfFRAcoAqMx5cSWd6Emrokr-Szke6Dd1AfexUwrG6IHVKEIkyssTZWrpAks1IU_J1EtfPjniJRJ2e1vcj7HH3bznWp7CMnyarvxM5UMJ8CVPB-_8f1uIZOKgjh28RTMszDL0T3fd_Gv9Th7NOBQQ/s320/NYTBing.jpg"/></a></div>
Eric Schwitzgebelhttp://www.blogger.com/profile/16274774112862434865noreply@blogger.com9tag:blogger.com,1999:blog-26951738.post-52744730794642898002023-11-17T09:47:00.000-08:002023-11-17T12:23:44.004-08:00Against the FingerThere's a discussion-queue tradition in philosophy that some people love, but which I've come to oppose. It's too ripe for misuse, favors the aggressive, serves no important positive purpose, and generates competition, anxiety, and moral perplexity. Time to ditch it! I'm referring, as some of you might guess, to <i>The Finger</i>.[1] A better alternative is the Slow Sweep.<p>
<b>The Finger-Hand Tradition</b></p><p>
The Finger-Hand tradition is this: At the beginning of discussion, people with questions raise their hands. The moderator makes an initial Hand list, adding new Hands as they come up. However, people can jump the question queue: If you have a <i>follow-up</i> on the current question, you may raise a finger. All Finger follow-ups are resolved before moving to the next Hand.</p><p>
Suppose Aidan, Brianna, Carina, and Diego raise their hands immediately, entering the initial Hand queue.[2] During Aidan's question, Evan and Fareed think of follow-ups, and Grant thinks of a new question. Evan and Fareed raise their fingers and Grant raises a hand. The new queue order is Evan, Fareed, Brianna, Carina, Diego, Grant.</p><p>
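In code, the Finger-Hand discipline is just a two-queue rule. Here is a minimal sketch (my own formalization, in Python, with the class and method names invented for the purpose), which reproduces the ordering above:</p><pre>
from collections import deque

class FingerHandQueue:
    def __init__(self, hands):
        self.hands = deque(hands)    # new questions, in list order
        self.fingers = deque()       # follow-ups on the current question

    def raise_hand(self, person):
        self.hands.append(person)    # Hands go to the back of the queue

    def raise_finger(self, person):
        self.fingers.append(person)  # Fingers jump ahead of all Hands

    def next_asker(self):
        # Every pending Finger is resolved before the next Hand is called.
        return self.fingers.popleft() if self.fingers else self.hands.popleft()

# Aidan has the floor; Evan and Fareed raise Fingers, Grant raises a Hand.
q = FingerHandQueue(["Brianna", "Carina", "Diego"])
q.raise_finger("Evan")
q.raise_finger("Fareed")
q.raise_hand("Grant")
print([q.next_asker() for _ in range(6)])
# ['Evan', 'Fareed', 'Brianna', 'Carina', 'Diego', 'Grant']
</pre><p>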
People will be reminded "Do not abuse the Finger!" That is, don't Finger in front of others unless your follow-up really is a follow-up. Don't jump the queue to ask what is really a new question. Finger-abusers will be side-eyed and viewed as bad philosophical citizens.</p><p>
</p><div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjx3CtncIiDwrPqVl2DXBCRH7KCx0-GJ3ZL4euSN_XQW6JO3KFVAv7Ld5k-kIUFuMOXLnIhlgzMv6ujAwdycQJDJ4M05mezro6H24e070n9b_oWOoHyJLddMbRBUkj5CQNNuHD-2hwG6F7j0toK1rEdx3tnpfSdiLAiQlFzCj1Dt-F1flhJnhzp-w/s1024/DALL%C2%B7E%202023-11-16%2017.00.57%20-%20Create%20an%20image%20of%20a%20hand%20with%20a%20raised%20index%20finger,%20as%20if%20asking%20a%20question,%20with%20a%20red%20circle%20and%20diagonal%20line%20around%20it%20to%20indicate%20not%20to%20do%20tha.png" style="display: block; padding: 1em 0px; text-align: center;"><img alt="" border="0" data-original-height="1024" data-original-width="1024" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjx3CtncIiDwrPqVl2DXBCRH7KCx0-GJ3ZL4euSN_XQW6JO3KFVAv7Ld5k-kIUFuMOXLnIhlgzMv6ujAwdycQJDJ4M05mezro6H24e070n9b_oWOoHyJLddMbRBUkj5CQNNuHD-2hwG6F7j0toK1rEdx3tnpfSdiLAiQlFzCj1Dt-F1flhJnhzp-w/s320/DALL%C2%B7E%202023-11-16%2017.00.57%20-%20Create%20an%20image%20of%20a%20hand%20with%20a%20raised%20index%20finger,%20as%20if%20asking%20a%20question,%20with%20a%20red%20circle%20and%20diagonal%20line%20around%20it%20to%20indicate%20not%20to%20do%20tha.png" width="320" /></a></div>
[Dall-E image of a raised finger, with a red circle and line through it]<p>
<b>Problems with the Finger</b></p><p></p>
(1.) <b>People abuse the Finger, despite the admonition</b>. It rewards the aggressive. This is especially important if there isn't enough time for everyone's questions, so that the patient Hands risk never having their questions addressed.<p>
(2.)<b> The Finger rewards speed.</b> If more than one person has a Finger, the first Finger gets to ask first.</p><p>
Furthermore (2a.): If the person whose Hand it is happens to be slow with their own follow-up, then the moderator is likely to go quickly to the fastest Finger, <b>derailing the Hand's actual intended line of questioning</b>.</p><p>
(3.) Given the unclear border between following up and opening a new question, (a.) people who generously refrain from Fingering except in clear cases fall to the back of the queue, whereas <b>people who indulge themselves in a capacious understanding of "following up" get to jump ahead; </b>and (b.) because of issue (a), all participants who have a borderline follow-up <b>face a non-obvious moral question</b> about the right thing to do.</p><p>
(4.) <b>The Finger tends to aggravate unbalanced power dynamics.</b> The highest-status and most comfortable people in the room will tend to be the ones readiest to Finger in, seeing ways to interpret the question they really want to ask as a "follow-up" to someone else's question.</p><p>
Furthermore, <b>the Finger serves no important purpose.</b> Why does a follow-up need to be asked right on the tail of the question it is following up? Are people going to forget otherwise? Of course not! In fact, in my experience, follow-ups are often better after a gap. This requires the follower-up to reframe the question in a different way. This reframing is helpful, because the follower-up will see the issue a little differently than the original Hand. The audience and the speaker then hear multiple angles on whatever issue is interesting enough that multiple people want to ask about it, instead of one initial angle on it, then a few appended jabs.</p><p>
<b>Why It Matters</b></p><p></p>
If all of this seems to take the issue of question order with excessive seriousness, well, yes, maybe! But bear in mind: Typically, philosophy talks are two hours long, and you get to ask <i>one</i> question. If you can't even ask that one question, it's a very different experience than if you do get to ask your question. Also, the question period, unfortunately but realistically, serves a social function of displaying to others that you are an engaged, interesting, "<a href="https://schwitzsplinters.blogspot.com/2010/03/on-being-good-at-seeming-smart.html">smart</a>" philosopher -- and most of us care considerably how others think of us. Not being able to ask your question is like being on a basketball team and never getting to take your shot. Also, waiting atop a question you're eager to ask while others jump the queue in front of you on sketchy grounds is intrinsically unpleasant -- even if you do manage to squeeze in your question by the end.<p>
<b>The Slow Sweep</b></p><p></p>
So, no Fingers! Only Hands. But there are better and worse ways to take Hands.<p>
At the beginning of the discussion period, ask for Hands from anyone who wants to ask a question. <b>Instead of taking the first Hand you see, wait a bit.</b> Let the slower Hands rise up too. Maybe encourage a certain group of people especially to contribute Hands. At UC Riverside Philosophy, our custom is to collect the first set of Hands from students, forcing faculty to wait for the second round, but you could also do things like ask "Any more students want to get Hands in the queue?"</p><p>
Once you've paused long enough that the slow-Handers are up, follow some <b>clear, unbiased procedure for the order of the questions</b>. What I tend to do is start at one end of the room, then slowly sweep to the other end, ordering the questions just by spatial position. I will also give everyone a number to remember. After everyone has their number, I ask if there are any people I missed who want to be added to the list.</p><p>Hand 1 then gets to ask their question. No other Hands get to enter the queue until we've finished with all the Hands in the original call. Thus, there's no jockeying to try to get one's hand up early, or to catch the moderator's eye. The Hand gets to ask their question, the speaker to reply, and then there's <b>an opportunity for the Hand -- and them only -- to ask one follow-up</b>. After the speaker's initial response is complete, the moderator catches the Hand's eye, giving them a moment to gather their thoughts for a follow-up or to indicate verbally or non-verbally that they are satisfied. No hurry and no jockeying for the first Finger. I like to encourage an implicit default custom of only one follow-up, though sometimes it seems desirable to allow a second follow-up. Normally after the speaker answers the follow-up I look for a signal from the Hand before moving to the next Hand -- though if the Hand is pushing it on follow-ups I might jump in quickly with "okay, next we have Hand 2" (or whatever the next number is).</p><p>
After all the initial Hands are complete, <b>do another slow sweep in a different direction</b> (maybe left to right if you started right to left). Again, patiently wait for several Hands rather than going in the order in which you see hands. Bump anyone who had a Hand in the first sweep to the end of the queue. Maybe there will be time for a third sweep, or a fourth.</p><p>
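The ordering rule is simple enough to write down. Here is a sketch (my own formalization, in Python, with hypothetical names, and seat numbers standing in for spatial position):</p><pre>
def sweep_order(hands, left_to_right=True, already_asked=()):
    # hands: (name, seat) pairs collected after the patient pause.
    ordered = sorted(hands, key=lambda h: h[1], reverse=not left_to_right)
    firsts = [n for n, _ in ordered if n not in already_asked]
    repeats = [n for n, _ in ordered if n in already_asked]  # bumped to the end
    return firsts + repeats

# First sweep right to left; the second sweep reverses direction and
# bumps the first sweep's askers behind the new Hands.
sweep1 = sweep_order([("Ana", 5), ("Ben", 2), ("Cam", 8)], left_to_right=False)
sweep2 = sweep_order([("Ben", 2), ("Dee", 6)], left_to_right=True,
                     already_asked=set(sweep1))
print(sweep1, sweep2)  # ['Cam', 'Ana', 'Ben'] ['Dee', 'Ben']
</pre><p>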
The result, I find, is a more peaceful, orderly, and egalitarian discussion period, without the rush, jockeying, anxiety, and Finger abuse.</p><p>
--------------------------------------------------------------</p><p>
[1] The best online source on the Finger-Hand tradition that I can easily find is Muhammad Ali Khalidi's critique <a href="https://blog.apaonline.org/2021/09/07/giving-the-finger-to-the-philosophical-finger/">here</a>, a couple of years ago, which raises some similar concerns. </p><p>[2] All names chosen randomly from lists of my former lower-division students, excluding "Jesus", "Mohammed", and very uncommon names. (In this case, I randomly chose an "A" name, then a "B" name, etc.) See my reflections <a href="https://schwitzsplinters.blogspot.com/2015/11/names-in-philosophical-examples.html">here</a>.</p><p></p><p></p><p></p><p></p><p></p><p></p><p></p><p></p><p></p><p></p><p></p><p></p><p></p><p></p><p></p><p></p><p></p><p></p><p></p>Eric Schwitzgebelhttp://www.blogger.com/profile/16274774112862434865noreply@blogger.com6tag:blogger.com,1999:blog-26951738.post-1987458738416358632023-11-07T09:43:00.014-08:002023-11-07T09:49:17.800-08:00The Prospects and Challenges of Measuring Morality, or: On the Possibility or Impossibility of a "Moralometer" <p>Could we ever build a "moralometer" -- that is, an instrument that would accurately measure people's overall morality? If so, what would it take?</p><p>Psychologist Jessie Sun and I explore this question in our new paper in draft: "The Prospects and Challenges of Measuring Morality".</p><p>Comments and suggestions on the draft warmly welcomed!</p><p>Draft available here:</p><p></p><p><a href="https://osf.io/preprints/psyarxiv/nhvz9">https://osf.io/preprints/psyarxiv/nhvz9</a></p><p>Abstract:</p><p>The scientific study of morality requires measurement tools. But can we measure individual differences in something so seemingly subjective, elusive, and difficult to define? This paper will consider the prospects and challenges—both practical and ethical—of measuring how moral a person is. We outline the conceptual requirements for measuring general morality and argue that it would be difficult to operationalize morality in a way that satisfies these requirements. Even if we were able to surmount these conceptual challenges, self-report, informant report, behavioral, and biological measures each have methodological limitations that would substantially undermine their validity or feasibility. These challenges will make it more difficult to develop valid measures of general morality than other psychological traits. But, even if a general measure of morality is not feasible, it does not follow that moral psychological phenomena cannot or should not be measured at all. Instead, there is more promise in developing measures of specific operationalizations of morality (e.g., commonsense morality), specific manifestations of morality (e.g., specific virtues or behaviors), and other aspects of moral functioning that do not necessarily reflect moral goodness (e.g., moral self-perceptions). Still, it is important to be transparent and intellectually humble about what we can and cannot conclude based on various moral assessments—especially given the potential for misuse or misinterpretation of value-laden, contestable, and imperfect measures. 
Finally, we outline recommendations and future directions for psychological and philosophical inquiry into the development and use of morality measures.</p><p>[Below: a "moral-o-meter" given to me for my birthday a few years ago, by my then-13-year-old daughter]</p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiY4t73KyJxNhLh8OwlwWf3WkLLdks7MoWurz_tRRKfGvvJRfcmK0FbKy4cXqqLTUagZwmoxyBsZuJTcIJOKqNZF3KwM4O9zNRKX0qMTV64aNr1RZe0X_DUa4WbRQIezN6FtGaxKvgJAl-PBBNhdIb6hhyphenhypheni9LbtI2-B-eD6QFxzLZ9B2b2dN6sQ0g/s4032/Moralometer.jpg" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" data-original-height="3024" data-original-width="4032" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiY4t73KyJxNhLh8OwlwWf3WkLLdks7MoWurz_tRRKfGvvJRfcmK0FbKy4cXqqLTUagZwmoxyBsZuJTcIJOKqNZF3KwM4O9zNRKX0qMTV64aNr1RZe0X_DUa4WbRQIezN6FtGaxKvgJAl-PBBNhdIb6hhyphenhypheni9LbtI2-B-eD6QFxzLZ9B2b2dN6sQ0g/w400-h300/Moralometer.jpg" width="400" /></a></p>Eric Schwitzgebelhttp://www.blogger.com/profile/16274774112862434865noreply@blogger.com6tag:blogger.com,1999:blog-26951738.post-89390595972186773702023-11-03T08:49:00.003-07:002023-11-03T13:58:03.061-07:00Percent of U.S. Philosophy PhD Recipients Who Are Women: A 50-Year Perspective<div>In the 1970s, women received about 17% of PhDs in philosophy in the U.S. The percentage rose to about 27% in the 1990s, where it stayed basically flat for the next 25 years. The latest data suggest that the percentage is on the rise again.</div><div><br /></div><div>Here's a fun chart (for user-relative values of "fun"), showing the 50-year trend. Analysis and methodological details to follow.</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3YrIvuvXaJOOHFNds9IVZ5_OownwK-neBLUDPZY59ODc9EM0CEy3JSvLrqeHfDOqH4tzXkWBE4OAB19mIt96waDkAGPWXKlGKL80FpS2747mt09a4rwdWs79lUuK7iSZfXuzXAhN3yjFGNJPsAO8WcbKc-isah4CYvu78XP9w_wV1Qm0pccOwfg/s758/PhDWomen-231102.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="702" data-original-width="758" height="370" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3YrIvuvXaJOOHFNds9IVZ5_OownwK-neBLUDPZY59ODc9EM0CEy3JSvLrqeHfDOqH4tzXkWBE4OAB19mIt96waDkAGPWXKlGKL80FpS2747mt09a4rwdWs79lUuK7iSZfXuzXAhN3yjFGNJPsAO8WcbKc-isah4CYvu78XP9w_wV1Qm0pccOwfg/w400-h370/PhDWomen-231102.jpg" width="400" /></a></div><br /><div>[click to enlarge and clarify]</div><div><br /></div><div>The data are drawn from the National Science Foundation's <a href="https://ncses.nsf.gov/surveys/earned-doctorates/2022">Survey of Earned Doctorates</a> through 2022 (the most recent available year). The Survey of Earned Doctorates aims to collect data on all PhD recipients from accredited universities in the United States, generally drawing response rates over 90%. The SED asks one binary question for sex or gender: "Are you male or female?", with response options "Male" and "Female". 
Fewer than 0.1% of respondents are classified in neither category, preventing any meaningful statistical analysis of nonbinary students.</div><div><br /></div><div>Two facts are immediately obvious from this chart:</div><div><br /></div><div>First, <b>women have persistently been underrepresented in philosophy</b> compared to PhDs overall.</div><div><br /></div><div>Second, <b>women receive fewer than 50% of PhDs overall in the U.S.</b> Since the early 2000s, the percentage of women among PhD recipients across all fields has been about 46%. Although <a href="https://schwitzsplinters.blogspot.com/2021/09/the-philosophy-major-is-back-now-with.html">women have consistently been earning about 57-58% overall of Bachelor's degrees</a> since the early 2000s, disproportionately few of those women go on to receive a PhD.</div><div><br /></div><div>The tricky thing to assess is <b>whether there has been a recent uptick in the percentage of women among Philosophy PhD recipients</b>. The year-to-year variability of the philosophy data (due to a sample size of about 400-500 PhD recipients per year in recent years) makes it unclear whether there's any real recent underlying increase that isn't just due to noise. I've drawn a third-degree polynomial trendline through the data (the red dots), but there's a risk of <a href="https://en.wikipedia.org/wiki/Overfitting">overfitting</a>.</div><div><br /></div><div><a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/WomenInPhil.htm">In a 2017 article,</a> Carolyn Dicey Jennings and I concluded that the best interpretation of the data through 2014 was that the percentage of women philosophy PhD recipients hadn't changed since the 1990s. The question is whether there's now good statistical evidence of an increase since then.</div><div><br /></div><div>One simple approach to the statistical question is to look for a correlation between year and percentage of women. For the full set of data since 1973, there's a strong correlation: r = .82, p < .001 -- very unlikely to be statistical chance. There's also a good correlation if we look at the span 1990-2022: r = .49, p = .004.</div><div><br /></div><div>Still, the chart looks pretty flat from about 1990 (24.3%) to about 2015 (25.7%). If most of the statistical work is being done by three high years near the end of the data (2016: 34.7%; 2019: 34.2%; 2021: 33.8%), the best model might not be a linear increase since 1990 but something closer to flat for most of the 1990s and early 2000s, with the real surge only in the most recent several years.</div><div><br /></div><div>To pull more statistical power out of the data to examine a narrower time period, I treated each PhD recipient as one observation: year of PhD and gender (1 = female, 0 = not female), then ran an individual-level correlation for the ten-year period 2013-2022. The correlation was statistically significant: r = .032, p = .029. (Note that r values for disaggregated analyses like this will seem low to people used to interpreting r values in other contexts. Eyeballing the chart is a better intuitive assessment of effect size. The important thing is that the low p value [under the conventional .05] suggests that the visually plausible relationship between year of PhD and gender in the 2013-2022 period is not due to chance.)</div><div><br /></div><div>Since this is a post-hoc analysis and a p-value of .029 isn't great, it makes sense to test for robustness.
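</div><div><br /></div><div>Here is a sketch of that robustness check, in Python with scipy, using made-up yearly counts in roughly the right ballpark (the real counts come from the NSF data; nothing below is the actual dataset):</div><pre>
from scipy.stats import pearsonr

def correlation_from(start_year, counts):
    # counts: {year: (n_women, n_total)} -> disaggregate to one
    # observation per PhD recipient, then correlate year with gender.
    years, genders = [], []
    for year, (n_women, n_total) in counts.items():
        if year >= start_year:
            years += [year] * n_total
            genders += [1] * n_women + [0] * (n_total - n_women)
    return pearsonr(years, genders)  # returns (r, p)

# Illustrative counts only, about 450 PhDs per year:
counts = {y: (120 + 4 * (y - 2012), 450) for y in range(2012, 2023)}
for start in (2012, 2013, 2014):
    r, p = correlation_from(start, counts)
    print(f"start {start}: r = {r:.3f}, p = {p:.5f}")
</pre><div><br /></div><div>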
Does it matter that I selected 2013 in particular as my start date? Fortunately, we get similar results choosing 2012 or 2014 as the start years, though for 2014 the result is only marginally statistically significant (respectively, r = .037, p = .008; r = .026, p = .099).</div><div><br /></div><div>Another approach is to bin the data into five-year periods, to smooth out noise. If we create five-year bins for the past twenty years, we see:</div><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;"><div>1993-1997: 27% women (453/1687)</div><div>1998-2002: 27% (515/1941)</div><div>2003-2007: 27% (520/1893)</div><div>2008-2012: 28% (632/2242)</div><div>2013-2017: 29% (701/2435)</div><div>2018-2022: 31% (707/2279)</div></blockquote><div>Comparing all the bins pairwise, 2018-2022 is a statistically significantly higher proportion of women than the bins from 1993-2012 and statistically marginally higher than 2013-2017 (p values: .004, .001, .012, .037, .094, respectively). No other pairwise comparisons are significant.</div><div><br /></div><div>I don't think we can be confident. Post-hoc analyses of this sort are risky -- one can see patterns in the noise, then unintentionally <a href="https://en.wikipedia.org/wiki/Data_dredging">p-hack</a> them into seeming real. But the fact that the upward recent trend comes across in two very different analyses of the data and passes a robustness check inclines me to think the effect is probably real.</div><div><br /></div><div>------------------------------------------------------------------------------</div><div>[1] "Philosophy" has been a "subfield" or "detailed field" in the SED data from at least 1973. From 2012-2020, the SED also had a separate category for "Ethics", with substantially fewer respondents than the "Philosophy" category. For this period, both "Ethics" and "Philosophy" are included in the analysis above. Starting in 2021, the SED introduced a separate category for "History / philosophy of science, technology, and society". Respondents in this category are not included in the analysis above. Total "Philosophy" PhD recipients dropped about 15% from 2019 and 2020 to 2021 and 2022, which might partly reflect a loss to this new category of some respondents who would otherwise have been classified as "Philosophy" -- but might also partly be noise, partly long-term trends, partly pandemic-related short-term trends.</div>Eric Schwitzgebelhttp://www.blogger.com/profile/16274774112862434865noreply@blogger.com7tag:blogger.com,1999:blog-26951738.post-27600097338162633382023-10-27T10:43:00.007-07:002023-10-27T11:32:56.785-07:00Utilitarianism and Risk Amplification<p>A thousand utilitarian consequentialists stand before a thousand identical buttons. If any one of them presses their button, ten people will die. The benefits of pressing the button are more difficult to estimate. Ninety-nine percent of the utilitarians rationally estimate that fewer than ten lives will be saved if any of them presses a button. One percent rationally estimate that more than ten lives will be saved. Each utilitarian independently calculates expected utility. Since ten utilitarians estimate that more lives will be saved than lost, they press their buttons. Unfortunately, as the 99% would have guessed, fewer than ten lives are saved, so the result is a net loss of utility.</p><p>This cartoon example illustrates what I regard as a fundamental problem with simple utilitarianism as a decision procedure: It deputizes everyone to act as risk-taker for everyone else. As long as <i>anyone </i>has both (a.)
the power and (b.) a rational utilitarian justification to take a risk on others' behalf, then the risk will be taken, even if a majority would judge the risk not to be worth it.</p><p>Consider <a href="https://conversationswithtyler.com/episodes/sam-bankman-fried/">this exchange</a> between Tyler Cowen and Sam Bankman-Fried (pre-FTX-debacle):</p><p></p><blockquote><p>COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?</p><p>BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I’m assuming these are noninteracting universes. Is that right? Because to the extent they’re in the same universe, then maybe duplicating doesn’t actually double the value because maybe they would have colonized the other one anyway, eventually.</p><p>COWEN: But holding all that constant, you’re actually getting two Earths, but you’re risking a 49 percent chance of it all disappearing.</p><p>BANKMAN-FRIED: Again, I feel compelled to say caveats here, like, “How do you really know that’s what’s happening?” Blah, blah, blah, whatever. But that aside, take the pure hypothetical.</p><p>COWEN: Then you keep on playing the game. So, what’s the chance we’re left with anything? Don’t I just <a href="https://plato.stanford.edu/entries/paradox-stpetersburg/">St. Petersburg paradox</a> you into nonexistence?</p><p>BANKMAN-FRIED: Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That’s the other option.</p></blockquote><p></p><p>There are, I think, two troubling things about Bankman-Fried's reasoning here. (Probably more than two, but I'll restrain myself.)</p><p>First is the thought that it's worth risking everything valuable for a small chance of a huge gain. (I call this the <a href="https://schwitzsplinters.blogspot.com/2023/04/the-black-hole-objection-to-longtermism.html">Black Hole Objection</a> to consequentialism.)</p><p>Second, I don't want <i>Sam Bankman-Fried</i> making that decision. That's not (just) because of who in particular he is. I wouldn't want <i>anyone </i>making that decision -- at least not unless they were appropriately deputized with that authority through an appropriate political process, and maybe not even then. No matter how rational and virtuous you are, I don't want you deciding to take risks on behalf of the rest of us simply because that's what your consequentialist calculus says. This issue subdivides into two troubling aspects: the issue of <i>authority </i>and the issue of <i>risk amplification</i>.</p><p>The authority issue is: We should be very cautious in making decisions that sacrifice others or put them at high risk. Normally, we should do so only in constrained circumstances where we are implicitly or explicitly endowed with appropriate responsibility. Our own individual calculation of high expected utility (no matter how rational and well-justified) is not normally, by itself, sufficient grounds for substantially risking or harming others.</p><p>The risk amplification issue is: If we universalize utilitarian decision-making in a way that permits many people to risk or sacrifice others whenever they reasonably calculate that it would be good to do so, we render ourselves collectively hostage to whoever has the most sacrificial reasonable calculation. That was the point illustrated in the opening scenario.</p>
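<p>The arithmetic behind that hostage-taking is worth a glance. Here's a toy calculation with the opening scenario's stipulated numbers (illustrative only):</p><pre>
# If each of n independent utilitarians has a 1% chance of rationally
# overestimating the benefit, the chance that at least one of them
# presses a button rises rapidly with n.
p_overestimate = 0.01
for n in (1, 10, 100, 1000):
    p_any_press = 1 - (1 - p_overestimate) ** n
    print(f"{n:4d} deciders: P(someone presses) = {p_any_press:.4f}")
# 1 decider: 0.0100; 10: 0.0956; 100: 0.6340; 1000: 1.0000 (to four places)
</pre>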
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCBgOZoK8Yk496MIBnGDPjAIW7-xiW7HhYIHqGXvN1RaoKifYJwo1quEQXmTcxyVfcM8qwCUXRndxTzWTTh-umcBgiBapUxLs5uHTFaSOcVOdu7rM2nqdGVDqME_5aSxY8MdEw3XuGi6rjM7VlGLpYxX2w4EOdVft-eqnN10G731MqSvfWCcjgMw/s1796/RiskAmplification.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1092" data-original-width="1796" height="244" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCBgOZoK8Yk496MIBnGDPjAIW7-xiW7HhYIHqGXvN1RaoKifYJwo1quEQXmTcxyVfcM8qwCUXRndxTzWTTh-umcBgiBapUxLs5uHTFaSOcVOdu7rM2nqdGVDqME_5aSxY8MdEw3XuGi6rjM7VlGLpYxX2w4EOdVft-eqnN10G731MqSvfWCcjgMw/w400-h244/RiskAmplification.jpg" width="400" /></a></div>[Figure: Simplified version of the opening scenario. Five utilitarians have the opportunity to sacrifice five people to save an unknown number of others. The button will be pressed by the utilitarian whose estimate errs highest. Click to enlarge and clarify.]<p>My point is not that some utilitarians might be <i>irrationally </i>risky, though certainly that's a concern. Rather, my point is that even if all utilitarians are perfectly rational, if they differ in their assessments of risk and benefit, and if all it takes to trigger a risky action is <i>one </i>utilitarian with the power to choose that action, then the odds of a bad outcome rise dramatically.</p><p>Advocates of utilitarian decision procedures can mitigate this problem in a few ways, but I'm not seeing how to escape it without radically altering the view.</p><p>First, a utilitarian could adopt a policy of <i>decision conciliationism</i> -- that is, if you see that most others aren't judging the risk or cost worth it, adjust your own assessment of the benefits and likelihoods, so that you fall in line with the majority. However, strong forms of <a href="https://en.wikipedia.org/wiki/Conciliationism">conciliationism</a> are pretty radical in their consequences; and of course this only works if the utilitarians know that there are others in similar positions deciding differently.</p><p>Second, a utilitarian could build some <a href="https://en.wikipedia.org/wiki/Risk_aversion">risk aversion</a> and <a href="https://en.wikipedia.org/wiki/Loss_aversion">loss aversion</a> into their calculus. This might be a good idea on independent grounds. Unfortunately, aversion corrections only shift the weights around. If the anticipated gains are sufficiently high, as judged by the most optimistic rational utilitarian, they will outweigh any discounts due to risk or loss aversion.</p><p>Third, they could move to <a href="https://plato.stanford.edu/entries/consequentialism-rule/">rule utilitarianism</a>: Endorse some rule according to which you shouldn't generally risk or sacrifice others without the right kind of authority. Plausibly, the risk amplification argument above is exactly the sort of argument that might motivate a utilitarian to adopt rule utilitarianism as a decision procedure rather than trying to evaluate the consequences of each act individually. That is, it's a utilitarian argument in favor of not always acting according to utilitarian calculations.
However, the risk amplification and authority problems are so broad in scope (even with appropriate qualifications) that moving to rule utilitarianism to deal with them is to abandon act utilitarianism as a general decision procedure.</p><p>Of course, one could also design scenarios in which bad things happen if everyone is a rule-following <a href="https://plato.stanford.edu/entries/ethics-deontological/">deontologist</a>! Picture a thousand "do not kill" deontologists who will all die unless one of them kills another. Tragedy. We can cherry-pick scenarios in which any view will have unfortunate results.</p><p>However, I don't think my argument is <i>that </i>unfair. The issues of authority and risk amplification are real problems for utilitarian decision procedures, as brought out in these cartoon examples. We can easily imagine, I think, a utilitarian Robespierre, a utilitarian academic administrator, Sam Bankman-Fried with his hand on the destroy-or-duplicate button, calculating reasonably, and too easily inflicting well-intentioned risk on the rest of us.</p>Eric Schwitzgebelhttp://www.blogger.com/profile/16274774112862434865noreply@blogger.com9tag:blogger.com,1999:blog-26951738.post-48574190541146358952023-10-20T10:03:00.001-07:002023-10-20T10:14:49.312-07:00Gunkel's Criticism of the No-Relevant-Difference Argument for Robot Rights<p>In a <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/AIRights.htm">2015 article</a>, Mara Garza and I offer the following argument for the rights of some possible AI systems:</p><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;"><p>Premise 1: If Entity A deserves some particular degree of moral consideration and Entity B does not deserve that same degree of moral consideration, there must be some relevant difference between the two entities that grounds this difference in moral status.</p><p>Premise 2: There are possible AIs who do not differ in any such relevant respects from human beings.</p><p>Conclusion: Therefore, there are possible AIs who deserve a degree of moral consideration similar to that of human beings.</p></blockquote><p>The argument is, we think, appealingly minimalist, avoiding controversial questions about <a href="https://plato.stanford.edu/entries/grounds-moral-status/">the grounds of moral status</a>. Does human-like moral status require human-like capacity for pain or pleasure (as <a href="https://plato.stanford.edu/entries/utilitarianism-history/#ClaApp">classical utilitarians</a> would hold)? Or human-like rational cognition, as <a href="https://academic.oup.com/book/34792/chapter/297623242">Kant held</a>? Or the <a href="https://plato.stanford.edu/entries/capability-approach/">capacity for human-like varieties of flourishing</a>? Or the <a href="https://www.jstor.org/stable/10.1086/673431">right types</a> of <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4099209">social relations</a>?</p><p>The No-Relevant-Difference Argument avoids these vexed questions, asserting only that whatever grounds moral status can be shared between robots and humans. This is not an<i> entirely </i>empty claim about the grounds of moral status. For example, the argument commits to denying that membership in the species <i>Homo sapiens, </i>or having a natural rather than artificial origin, is required for human-like moral status.</p><p>Compare egalitarianism about race and gender. 
We needn't settle tricky questions about the grounds of moral status to know that all genders and races deserve similar moral consideration! We need only know this: <i>Whatever</i> grounds moral status, it's not skin color, or possession of a Y chromosome, or any of the other things that might be thought to distinguish among the races or genders.</p><p>Garza and I explore four arguments for denying Premise 2 -- that is, for thinking that robots would inevitably differ from humans in some relevant respect. We call these the objections from Psychological Difference, Duplicability, Otherness, and Existential Debt. Today, rather than discussing Premise 2, I want to discuss David Gunkel's objection to our argument in his just-released book, <i><a href="https://mitpress.mit.edu/9780262546157/person-thing-robot/">Person, Thing, Robot</a></i>.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNwFBOcNXd_gjeTHxmNGubUqy73IPl5zqTv_5VhGHh06PdwC801CKEi8QLeuI9ndckqxATJf6803H0-qu_bIg6PlOsdF04TK-PiRXArqPabl3_tkIakrByNqJ1sCNrkrIVH_9MzbUGmO_sK97ZPwSrhuAMpewG1PtSoX9vLV6A-43qBzE22fGpfA/s4032/Ralph-PersonThingRobot.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="4032" data-original-width="3024" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNwFBOcNXd_gjeTHxmNGubUqy73IPl5zqTv_5VhGHh06PdwC801CKEi8QLeuI9ndckqxATJf6803H0-qu_bIg6PlOsdF04TK-PiRXArqPabl3_tkIakrByNqJ1sCNrkrIVH_9MzbUGmO_sK97ZPwSrhuAMpewG1PtSoX9vLV6A-43qBzE22fGpfA/s320/Ralph-PersonThingRobot.jpg" width="240" /></a></div><div><br /></div><div>[Image of Ralph and <i>Person, Thing, Robot</i>. Ralph is a sculpture designed to look like an old-fashioned robot, composed of technological junk from the mid-20th century (sculptor: <a href="https://www.instagram.com/drstrangebot/?hl=en">Jim Behrman</a>). I've named him after my father, whose birth name was Ralph Schwitzgebel. <a href="https://schwitzsplinters.blogspot.com/2015/01/memories-of-my-father.html">My father</a> was also a tinkerer and artist with technology from that era.] </div><p>Gunkel acknowledges that the No-Relevant-Difference Argument "turns what would be a deficiency... -- [that] we cannot positively define the exact person-making qualities beyond a reasonable doubt -- into a feature" (p. 91). However, he objects as follows:</p><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px;"><p style="text-align: left;">The main difficulty with this alternative, however, is that it could <i>just as easily</i> be used to deny human beings access to rights as it could be used to grant rights to robots and other nonhuman artifacts. Because the no relevant difference argument is theoretically minimal and not content dependent, it cuts both ways. 
In the following remixed version, the premises remain intact; only the conclusion is modified.</p></blockquote><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;"><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;"><u>Premise 1:</u> If Entity A deserves some particular degree of moral consideration and Entity B does not deserve that same degree of moral consideration, there must be some relevant difference between the two entities that grounds this difference in moral status.</blockquote></blockquote><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;"><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;"><u>Premise 2:</u> There are possible AIs who do not differ in any such relevant respects from human beings.</blockquote></blockquote><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;"><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;"><u>Conclusion:</u> Therefore, there are possible human beings who, like AI systems, do not deserve moral consideration. </blockquote></blockquote><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px;"><p style="text-align: left;">In other words, the no relevant difference argument can be used either to argue for an extension of rights to other kinds of entities, like AI systems, robots, and artifacts, or, <i>just as easily,</i> to justify dehumanization, reification of human beings, and the exclusion and/or marginalization of others (pp. 91-92, italics added).</p></blockquote><p>This is an interesting objection. However, I reject the appropriateness of the repeated phrase "just as easily", which I have italicized in the block quote.</p><p>----------------------------------------------------------------</p><p>As the saying goes, one person's <i>modus ponens</i> is another's <i>modus tollens</i>. Suppose you know that <i>A implies B</i>. <i>Modus ponens</i> is an inference rule that assumes the truth of A and concludes that B must also be true. <i>Modus tollens</i> is an inference rule that assumes the falsity of B and concludes that A must also be false. For example, suppose you can establish that <i>if anyone stole the cookies, it was Cookie Monster</i>. If you know that the cookies were stolen, <i>modus ponens</i> unmasks Cookie Monster as the thief. If, on the other hand, you know that Cookie Monster has committed no crimes, <i>modus tollens</i> assures you that the cookies remain secure.</p>
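<p>For readers who like their inference rules explicit, here is a minimal formal sketch -- in Lean 4 notation, with A and B standing in for the cookie propositions. The formalization is mine, of course, not Gunkel's, and nothing in the dispute hangs on it:</p><pre>
-- A: "the cookies were stolen"; B: "Cookie Monster stole them".
variable (A B : Prop)

-- Modus ponens: from A → B together with A, conclude B.
example (h : A → B) (ha : A) : B := h ha

-- Modus tollens: from A → B together with ¬B, conclude ¬A.
example (h : A → B) (hb : ¬B) : ¬A := fun ha => hb (h ha)
</pre><p>Both rules are impeccable as logic; the live question is never the inference but which premise one is in a better position to assert.</p>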
<p>Gunkel correctly recognizes that the No Relevant Difference Argument can be reframed as a conditional: if human X and robot Y are similar in all morally relevant respects, then if human X deserves rights, so also does robot Y. This isn't exactly how Garza and I frame the argument -- our framing implicitly assumes that there is a standard level of moral consideration for human beings in general -- but it's a reasonable adaptation for someone who wants to leave open the possibility that different humans deserve different levels of moral consideration.</p><p>In general, the plausibility of <i>modus ponens</i> vs. <i>modus tollens</i> depends on the relative security of A vs. not-B. If you're rock-solid sure the cookies were stolen and have little faith in Cookie Monster's crimelessness, then <i>ponens</i> is the way to go. If you've been tracking Cookie all day and know for sure he couldn't have committed a crime, then apply <i>tollens</i>. The "easiness", so to speak, of <i>ponens</i> vs. <i>tollens</i> depends on one's confidence in A vs. not-B.</p><p>Few things are more secure in ethics than <i>at least some humans deserve substantial moral consideration</i>. This gives us the rock-solid A that we need for <i>modus ponens</i>. As long as we are not more certain that no possible robot deserves rights than we are that some humans do, <i>modus ponens</i> will be the correct move. <i>Ponens</i> and <i>tollens</i> will not be equally "easy".</p><p>Still, Gunkel's adaptation of our argument does reveal a potential for abuse, which I had not previously considered, and which I thank him for highlighting. Anyone who is more confident that robots of a certain sort are undeserving of moral consideration than they are of the moral considerability of some class of humans could potentially combine our No Relevant Difference principle with an appeal to the supposed robotlikeness of those humans to deny rights to those humans.</p><p>I don't think the No Relevant Difference principle warrants skepticism on those grounds. Compare the application of a principle like "do unto others as you would have them do unto you". Although one <i>could</i> in principle reason "I want to punch him in the nose, so I guess I should punch myself in the nose", the fact that some people might run such a <i>tollens</i> reveals more about their minor premises than it does about the Golden Rule.</p><p>I hope that such an abuse of the principle would in any case be rare. People who want to deny rights to subgroups of humans will, I suspect, be motivated by other considerations, and appealing to those people's putative "robotlikeness" would probably be only an afterthought or metaphor. Almost no one, I suspect, will be on the fence about the attribution of moral status to some group of people and then think, "whoa, now that I consider it, those people are like robots in every morally relevant respect, and I'm sure robots don't deserve rights, so <i>tollens</i> it is". If anyone is tempted by such reasoning, I advise them to rethink the path by which they find themselves with that peculiar constellation of credences.</p>Eric Schwitzgebelhttp://www.blogger.com/profile/16274774112862434865noreply@blogger.com6tag:blogger.com,1999:blog-26951738.post-63967042362466927152023-10-12T10:16:00.004-07:002023-10-12T13:30:46.577-07:00Strange Intelligence, Strange Philosophy<div>AI intelligence is <i>strange</i> -- strange in something like the etymological sense of external, foreign, unfamiliar, alien. My PhD student Kendra Chilson (in unpublished work) argues that we should discard the familiar scale of subhuman → human-grade → superhuman. AI systems do, and probably will continue to, operate orthogonally to simple scalar understandings of intelligence modeled on the human case. We should expect them, she says, to be and remain <i>strange intelligence</i>[1] -- inseparably combining, in a single package, serious deficits and superhuman skills. Future AI philosophers will, I suspect, prove to be strange in this same sense.</div><div><br /></div><div>Most readers are probably familiar with the story of <a href="https://www.deepmind.com/research/highlighted-research/alphago">AlphaGo</a>, which in 2016 defeated the world champion player of the game of go.
Famously, in the series of matches (which it won 4-1), it made <a href="https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol">several moves that human go experts regarded as bizarre</a> -- moves that a skilled human go player would never have made, and yet which proved instrumental in its victory -- while also, in its losing match, making some mistakes characteristic of simple computer programs, which go experts know to avoid.</div><div><br /></div><div>Similarly, self-driving cars are in some respects better and safer drivers than humans, while nevertheless <a href="https://www.nytimes.com/2023/10/11/opinion/driverless-cars-san-francisco.html">sometimes making mistakes that few humans would make</a>.</div><div><br /></div><div>Large Language Models have a stunning capacity to swiftly create competent and even creative texts on a huge breadth of topics, while still <a href="https://owainevans.github.io/reversal_curse.pdf">failing conspicuously</a> <a href="https://www.ted.com/talks/yejin_choi_why_ai_is_incredibly_smart_and_shockingly_stupid/transcript">in some simple common sense tasks</a>. They can write creative-seeming poetry and academic papers, often better than the average first-year university student. Yet -- borrowing an example <a href="https://www.preposterousuniverse.com/podcast/2023/08/28/248-yejin-choi-on-ai-and-common-sense/">from Sean Carroll</a> -- I just had the following exchange with GPT-4 (the most up-to-date version of the most popular large language model):</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEho6beRgCWoD6uHckZZ7___1WRiqTnu6KGjNJnoyxfbmhr8p3wEAHyYgKnTffR_Tp_jkpeAKLQD5EfnyshlQ1a-rTeIgNl39uck4XhjfBXrzV6AbE7z_a4fgqJrGy87T-ypsyROwC5pZ3ua4fHEeaQX_u4P6QreQUu-yKiB_0N2TJEUVVZV-yRShg/s741/Skillet.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="487" data-original-width="741" height="263" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEho6beRgCWoD6uHckZZ7___1WRiqTnu6KGjNJnoyxfbmhr8p3wEAHyYgKnTffR_Tp_jkpeAKLQD5EfnyshlQ1a-rTeIgNl39uck4XhjfBXrzV6AbE7z_a4fgqJrGy87T-ypsyROwC5pZ3ua4fHEeaQX_u4P6QreQUu-yKiB_0N2TJEUVVZV-yRShg/w400-h263/Skillet.jpg" width="400" /></a></div>GPT-4 seems not to recognize that a hot skillet will be plenty cool by the next day.<div><br /></div><div>I'm a "<a href="https://en.wikipedia.org/wiki/Stanford_School">Stanford school</a>" philosopher of science. Core to Stanford school thinking is this: The world is intractably complex, and so to deal with it, we limited beings need to employ simplified (scientific or everyday) models and take cognitive shortcuts. We need to find <i>rough patterns</i> in go, since we cannot pursue every possible move down every possible branch. We need to find rough patterns in the chaos of visual input, guessing about the objects around us and how they might behave. We need quick-and-dirty ways to extract meaning from linguistic input in the swift-moving world, relating it somehow to what we already know, and producing linguistic responses without too much delay. There will be different ways of building these simplified models and implementing these shortcuts, with different strengths and weaknesses. There is rarely a single best way to render the complexity of the world tractable.
In psychology, see also <a href="https://en.wikipedia.org/wiki/Gerd_Gigerenzer">Gigerenzer</a> on heuristics.</div><div><br /></div><div>Now mix Stanford school philosophy of science, the psychology of heuristics, and Chilson's idea of strange intelligence. AI, because it is so different from us in its underlying cognitive structure, will approach the world with a very different set of heuristics, idealizations, models, and simplifications than we do. Dramatic outperformance in some respects, coupled with what we regard as shockingly stupid mistakes in others, is exactly what we should expect.</div><div><br /></div><div>If the AI system makes a <a href="https://www.carscoops.com/2023/04/gms-cruise-recalls-autonomous-vehicles-following-crash-into-articulated-bus/">visual mistake in judging the movement of a bus</a> -- a mistake (perhaps) that no human would make -- well, we human beings also make visual mistakes, and some of those mistakes, perhaps, would never be made by an AI system. From an AI perspective, our susceptibility to the <a href="https://en.wikipedia.org/wiki/M%C3%BCller-Lyer_illusion">Muller-Lyer illusion</a> might look remarkably stupid. Of course, we design our driving environment to complement our vision: We require headlights, taillights, marked curves, lane markers, smooth roads of consistent coloration, etc. Presumably, if society commits to driverless cars, we will similarly design the driving environment to complement <i>their</i> vision, and "stupid" AI mistakes will become rarer.</div><div><br /></div><div>I want to bring this back to the idea of an AI philosopher. About a year and a half ago, <a href="https://www.denkwerkstatt.berlin/">Anna Strasser</a>, <a href="https://scholar.google.com/citations?user=TjfMUmkAAAAJ&hl=en">Matthew Crosby</a>, and I built a language model of philosopher <a href="https://as.tufts.edu/philosophy/people/faculty/daniel-dennett">Daniel Dennett</a>. We fine-tuned GPT-3 on Dennett's corpus, so that the language model's outputs would reflect a compromise between the base model of GPT-3 and patterns in Dennett's writing. We called the resulting model Digi-Dan. In a study collaborative with <a href="http://www.institutnicod.org/membres/etudiant-e-s/doctorant-e-s/schwitzgebel-david/?lang=en">my son David</a>, we then posed philosophical questions to both Digi-Dan and the actual Daniel Dennett. Although Digi-Dan flubbed a few questions, overall it performed remarkably well. Philosophical experts <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/mila.12466">were often unable to distinguish</a> Digi-Dan's answers from Dennett's own answers.</div><div><br /></div><div>Picture now a <i>strange AI philosopher</i> -- DigiDan improved. This AI system will produce philosophical texts very differently than we do. It need not be fully superhuman in its capacities to be interesting. It might even, sometimes, strike us as remarkably, foolishly wrong. (In fairness, other human philosophers sometimes strike me the same way.) 
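<div><br /></div><div>For the technically curious, the recipe was simple in outline. Here is a minimal sketch of a fine-tuning workflow of that era, using OpenAI's legacy (pre-1.0) Python SDK; the corpus file name, prompt format, and parameters below are illustrative placeholders, not our exact pipeline:</div><div><br /></div><pre>
# Minimal sketch of a GPT-3-era fine-tune (legacy openai-python SDK, pre-1.0).
# "dennett_corpus.jsonl" and the prompt format are hypothetical placeholders.
import openai

openai.api_key = "sk-..."  # your API key

# 1. Upload a JSONL file of prompt/completion pairs drawn from the corpus.
corpus_file = openai.File.create(
    file=open("dennett_corpus.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Launch a fine-tune against a GPT-3 base model.
job = openai.FineTune.create(
    training_file=corpus_file["id"],
    model="davinci",
)

# 3. After the job finishes, query the tuned model like any other.
reply = openai.Completion.create(
    model=job["fine_tuned_model"],  # populated once training completes
    prompt="Interviewer: What is the function of consciousness?\nDennett:",
    max_tokens=200,
)
print(reply["choices"][0]["text"])
</pre><div><br /></div><div>The resulting model is not a database of Dennett quotations; it is the base model's weights nudged toward the statistical patterns of the training corpus -- hence a compromise between GPT-3 and Dennett's writing.</div>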
<div><br /></div><div>Picture now a <i>strange AI philosopher</i> -- Digi-Dan improved. This AI system will produce philosophical texts very differently than we do. It need not be fully superhuman in its capacities to be interesting. It might even, sometimes, strike us as remarkably, foolishly wrong. (In fairness, other human philosophers sometimes strike me the same way.) But even if subhuman in some respects, if this AI philosopher also sometimes produces strange but brilliant texts -- analogous to the strange but brilliant moves of AlphaGo, texts that no human philosopher would create but which on careful study contain intriguing philosophical moves -- it could be a philosophical interlocutor of substantial interest.</div><div><br /></div><div>Philosophy, I have long argued, benefits from including people <a href="https://schwitzsplinters.blogspot.com/2020/06/diversity-in-philosophy-departments.html">with a diversity of perspectives</a>. Strange AI might also be appreciated as a source of philosophical cognitive diversity, occasionally generating texts that contain sparks of something genuinely new, different, and worthwhile that would not otherwise exist.</div><div><div><br /></div><div>------------------------------------------------</div><div>[1] Kendra Chilson is <a href="https://unpredictablepatterns.com/2023/05/18/artificial-strange-intelligence/">not the first</a> to use the phrase "strange intelligence" with this meaning in an AI context, but the usage was new to me; perhaps through her work it will catch on more widely.</div></div>Eric Schwitzgebelhttp://www.blogger.com/profile/16274774112862434865noreply@blogger.com3