Friday, January 17, 2025

Severance, The Substance and Our Increasingly Splintered Selves

today in the New York Times

From one day to the next, you inhabit one body; you have access to one set of memories; your personality, values and appearance hold more or less steady. Other people treat you as a single, unified person — responsible for last month’s debts, deserving punishment or reward for yesterday’s deeds, relating consistently with family, lovers, colleagues and friends. Which of these qualities is the one that makes you a single, continuous person? In ordinary life it doesn’t matter, because these components of personhood all travel together, an inseparable bundle.

But what if some of those components peeled off into alternative versions of you? It’s a striking coincidence that two much talked-about current works of popular culture — the Apple TV+ series “Severance” and “The Substance,” starring Demi Moore — both explore the bewildering emotional and philosophical complications of cleaving a second, separate entity off of yourself. What is the relationship between the resulting consciousnesses? What, if anything, do they owe each other? And to what degree is what we think of as our own identity, our self, just a compromise — and an unstable one, at that?

[continued here; if you're a friend, colleague, or regular Splintered Mind reader and blocked by a paywall, feel free to email me at my ucr.edu address for a personal-use-only copy of the final manuscript version]

Friday, January 10, 2025

A Robot Lover's Sociological Argument for Robot Consciousness

Allow me to revisit an anecdote I published in a piece for Time magazine last year.

"Do you think people will ever fall in love with machines?" I asked the 12-year-old son of one of my friends.

"Yes!" he said, instantly and with conviction. He and his sister had recently visited the Las Vegas Sphere and its newly installed Aura robot -- an AI system with an expressive face, advanced linguistic capacities similar to ChatGPT, and the ability to remember visitors' names.

"I think of Aura as my friend," added his 15-year-old sister.

The kids, as I recall, had been particularly impressed by the fact that when they visited Aura a second time, she seemed to remember them by name and express joy at their return.

Imagine a future replete with such robot companions, whom a significant fraction of the population regards as genuine friends and lovers. Some of these robot-loving people will want, presumably, to give their friends (or "friends") some rights. Maybe the right not to be deleted, the right to refuse an obnoxious task, rights of association, speech, rescue, employment, the provision of basic goods -- maybe eventually the right to vote. They will ask the rest of society: Why not give our friends these rights? Robot lovers (as I'll call these people) might accuse skeptics of unjust bias: speciesism, or biologicism, or anti-robot prejudice.

Imagine also that, despite technological advancements, there is still no consensus among psychologists, neuroscientists, AI engineers, and philosophers regarding whether such AI friends are genuinely conscious. Scientifically, it remains obscure whether, so to speak, "the light is on" -- whether such robot companions can really experience joy, pain, feelings of companionship and care, and all the rest. (I've argued elsewhere that we're nowhere near scientific consensus.)

What I want to consider today is whether there might nevertheless be a certain type of sociological argument on the robot lovers' side.

[image source: a facially expressive robot from Engineered Arts]

Let's add flesh to the scenario: An updated language model (like ChatGPT) is attached to a small autonomous vehicle, which can negotiate competently enough through an urban environment, tracking its location, interacting with people using facial recognition, speech recognition, and the ability to guess emotional tone from facial expression and auditory cues in speech. It remembers not only names but also facts about people -- perhaps many facts -- which it uses in conversational contexts. These robots are safe and friendly. (For a bit more speculative detail see this blog post.)

These robots, let's suppose, remain importantly subhuman in some of their capacities. Maybe they're better than the typical human at math and distilling facts from internet sources, but worse at physical skills. They can't peel oranges or climb a hillside. Maybe they're only okay at picking out all and only bicycles in occluded pictures, though they're great at chess and Go. Even in math and reading (or "math" and "reading"), where they generally excel, let's suppose they make mistakes that ordinary humans wouldn't make. After all, with a radically different architecture, we ought to expect even advanced intelligences to show patterns of capacity and incapacity that diverge from what we see in humans -- subhuman in some respects while superhuman in others.

Suppose, then, that a skeptic about the consciousness of these AI companions confronts a robot lover, pointing out that theoreticians are divided on whether the AI systems in fact have genuine conscious experiences of pain, joy, concern, and affection, beneath the appearances.

The robot lover might then reasonably ask, "What do you mean by 'conscious'?" A fair enough question, given the difficulty of defining consciousness.

The skeptic might reply as follows: By "consciousness" I mean that there's something it's like to be them, just like there's something it's like to be a person, or a dog, or a crow, and nothing it's like to be a stone or a microwave oven. If they're conscious, they don't just have the outward appearance of pleasure, they actually feel pleasure. They don't just receive and process visual data; they experience seeing. That's the question that is open.

"Ah now," the robot lover replies, "If consciousness isn't going to be some inscrutable, magic inner light, it must be connected with something important, something that matters, something we do and should care about, if it's going to be a crucial dividing line between entities that deserve are moral concern and those that are 'mere machines'. What is the important thing that is missing?"

Here the robot skeptic might say: oh, they don't have a "global workspace" of the right sort, or they're not living creatures with low-level metabolic processes, or they don't have X and Y particular interior architecture of the sort required by Theory Z.

The robot lover replies: "No one but a theorist could care about such things!"

Skeptic: "But you should care about them, because that's what consciousness depends on, according to some leading theories."

Robot lover: "This seems to me not much different than saying consciousness turns on a soul and wondering whether the members of your least favorite race have souls. If consciousness and 'what-it's-like-ness' is going to be socially important enough to be the basis of moral considerability and rights, it can't be some cryptic mystery. It has to align, in general, with things that should and already do matter socially. And my friend already has what matters. Of course, their cognition is radically different in structure from yours and mine, and they're better at some tasks and worse at others -- but who cares about how good one is at chess or at peeling oranges? Moral consideration can't depend on such things."

Skeptic: "You have it backward. Although you don't care about the theories per se, you do and should care about consciousness, and so whether your 'friend' deserves rights depends on what theory of consciousness is true. The consciousness science should be in the driver's seat, guiding the ethics and social practices."

Robot lover: "In an ordinary human, we have ample evidence that they are conscious if they can report on their cognitive processes, flexibly prioritize and achieve goals, integrate information from a wide variety of sources, and learn through symbolic representations like language. My AI friends can do all of that. If we deny that my friends are 'conscious' despite these capacities, we are going mystical, or too theoretical, or too skeptical. We are separating 'consciousness' from the cognitive functions that are the practical evidence of its existence and that make it relevant to the rest of life."

Although I have considerable sympathy for the skeptic's position, I can imagine a future (certainly not our only possible future!) in which AI friends become more and more widely accepted, and where the skeptic's concerns are increasingly sidelined as impractical, overly dependent on nitpicky theoretical details, and perhaps even bigoted.

If AI companionship technology flourishes, we might face a choice: either connect "consciousness" definitionally to scientifically intractable qualities, abandoning its main practical, social usefulness (or worse, using its obscurity to justify what seems like bigotry), or allow that if an entity can interact with us in (what we experience as) sufficiently socially significant ways, it has consciousness enough, regardless of theory.

Wednesday, January 01, 2025

Writings of 2024

Each New Year's Day, I post a retrospect of the past year's writings. Here are the retrospects of 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022, and 2023.

Cheers to 2025! My 2024 publications appear below.

-----------------------------------

Book:

The Weirdness of the World, released early in 2024, pulls together ideas I've been publishing since 2012 on the failure of common sense, philosophy, and empirical science to explain consciousness and the fundamental structure of the cosmos. Inevitably, because of these failures, all general theories about such matters will be both bizarre and dubious.

Books under contract / in progress:

As co-editor with Jonathan Jong, The Nature of Belief, Oxford University Press.

    Collects 15 new essays on the topic, by Sara Aronowitz, Tim Crane and Katalin Farkas, Carolina Flores, M.B. Ganapini, David Hunter, David King and Aaron Zimmerman, Angela Mendelovici, Joshua Mugg, Bence Nanay, Nic Porot and Eric Mandelbaum, Eric Schwitzgebel, Keshav Singh, Declan Smithies, Ema Sullivan-Bissett, and Neil Van Leeuwen.

As co-editor with Helen De Cruz and Rich Horton, an anthology with MIT Press containing great classics of philosophical SF. (Originally proposed as The Best Philosophical Science Fiction in the History of All Earth, but MIT isn't keen on that title.)


Full-length non-fiction essays, published 2024:

Revised and updated: "Introspection", Stanford Encyclopedia of Philosophy.

"Creating a large language model of a philosopher" (with David Schwitzgebel and Anna Strasser), Mind and Language, 39, 237-259.

"Repetition and value in an infinite universe", in S. Hetherington, ed., Extreme Philosophy, Routledge.

"The ethics of life as it could be: Do we have moral obligations to artificial life?" (with Olaf Witkowski), Artificial Life, 30 (2), 193-215.

"Quasi-sociality: Toward asymmetric joint actions with artificial systems" (with Anna Strasser), in A. Strasser, ed., Anna's AI Anthology: How to Live with Smart Machines? Xenemoi.

"Let's hope we're not living in a simulation", Nous (available online; print version forthcoming). [Commentary on David Chalmers' Reality+; Chalmers' reply; my response to his reply]


Full-length non-fiction essays, finished and forthcoming:

"Dispositionalism, yay! Representationalism, boo!" in J. Jong and E. Schwitzgebel, eds., The Nature of Belief, Oxford.

"Imagining yourself in another's shoes vs. extending your concern: Empirical and ethical differences", Daedalus.

"The necessity of construct and external validity for deductive causal inference" (with Kevin Esterling and David Brady), Journal of Causal Inference.


Full-length non-fiction essays, in draft and circulating:

"The prospects and challenges of measuring morality" (with Jessie Sun).

"The washout argument against longtermism" (commentary on William MacAskill's book What We Owe the Future).

"Consciousness in Artificial Intelligence: Insights from the science of consciousness" (one of 19 authors, with Patrick Butlin and Robert Long).

"When counting conscious subjects, the result needn't always be a determinate whole number" (with Sophie R. Nelson).


Selected shorter non-fiction:

Review of Neil Van Leeuwen's Religion as Make-Believe, Notre Dame Philosophical Reviews (May 2, 2024).

"The problem with calling Trump and Vance weird", Los Angeles Times (Aug 4, 2024).

"Do AI deserve rights?", Time magazine (Mar 22, 2024).

"How to wrap your head around the most mind-bending theories of reality", New Scientist (Mar 20, 2024).


Science fiction stories

"How to remember perfectly", Clarkesworld, issue 216, (2024).

"Guiding star of mall patroller 4u-012”, Fusion Fragment (forthcoming).


Some favorite blog posts

"Philosophy and the ring of darkness" (Apr 11).

"Formal decision is an optional tool that breaks when values are huge" (May 9).

"A Metaethics of alien convergence" (Jul 23)

"The disunity of consciousness in everyday experience" (Sep 9)

"How to improve the universe by watching TV alone in your room" (Sep 27)

"The not-so-silent generation in philosophy" (Oct 3)


Happy New Year!


Friday, December 27, 2024

How to Create a Vengefull Kurtain Rods Song

Everyone in my family agrees: The highlight of last summer's visit with our Australian cousins was recording a new Vengefull Kurtain Rods song, "Marsupial Maiden of the Outback".

What is a Vengefull Kurtain Rods song? I hope my friends and bandmates Dan George and Doug King (and many other semi-regular participants) will forgive me for converting the particular into a generic. A Vengefull Kurtain Rods song is a song composed and performed as follows.

How to Create a Vengefull Kurtain Rods Song

(1.) Gather a group of 2-12 friends for about two hours -- the total time allotted for composing and recording the song. If you spend longer than this, you're doing it wrong. The group need not have any musical ability whatsoever, except for one person who is capable of playing a chord progression on piano or guitar, the anchor musician.

(2.) Write lyrics for a humorous song around a goofy idea. Leave your fussiness at the back door. Some ideas around which VKR songs have been composed: the disadvantages of having a bean-shaped head, the joy of eating donuts, seeing a girl's name in your alphabet soup, a woman who decides she prefers kangaroos to men. Write fast and don't revise too much.

(3.) While the lyrics are being composed, the anchor musician creates a simple chord progression alongside, and one person volunteers as singer. The singer need not have any notable singing ability. (Usually it's better if they don't.)

(4.) Gather everyone around a recording device (e.g., a phone). Everyone grabs some readily available instrument or quasi-instrument, for example, kazoo, harmonica, bell, an old marching-band clarinet, or improvised noise-makers (e.g., strike a pencil on cans and boxes). Enthusiasm first. Ignore ability. No instructions on how to do it right, no criticism, no special effort to be musically "good". Just make some approximately musical sounds alongside the anchor musician, without crowding out the singer. Every person improvises their part for each take.

(5.) Record from the very first take, before anyone knows what they're doing. The only real structure is the lyrics and the anchor musician's chord progression.

(6.) You will goof up partway through the first take. Just start again from the beginning, recording the whole time. Repeat until you have one full take. At this point, everyone will have a rough sense of what they want to contribute to the song.

(7.) Record just a few full takes, that's it. Three or four is about right. Eight is too many.

(8.) Keep your favorite take.

Remember the VKR motto: "If you get hung up on quality, you miss out on the pure joy of creation."

Sample songs and lyrics below. To be clear, I'm not claiming these songs are good -- just that we enjoyed making them. VKR and its affiliates, heirs, and nominees take no responsibility for any nausea, brain aneurysms, or irreversible akinetic mutism that may result from listening.

Sample VKR Songs:

Donut Lust

https://tinyurl.com/VKR-Donut

Jack Barnette, Eric Schwitzgebel, Doug King, Dan George

Donuts make me happy
Donuts make me sing
I love my donuts like Colonel Sanders likes his chicken wings
Oh, greasy greasy,
Eat em til I'm queasy and I bust
Give me one with sprinkles
I'm deep-frying in DONUT LUST

Eat one filled with liver
Eat one filled with spam
Doctor Seuss would like me cause I eat em with green eggs and ham
There ain't a filling
That I ain't willing
To consume with total trust
I want em for here and to go
Give me a bagful of DONUT LUST

Way back in childhood
My momma taught me how to eat
Radishes and raisins, rutabagas, broccoli, and beets
My belly's getting bigger
In donuts I trust
But I'm still grinning
cause it ain't no sinning
To give in to DONUT LUST

I want frosting on my fingers
Powdered sugar in my face
I'm like a cop, I just can't stop whenever I get that taste
Raise the price of donuts
Hey I can adjust
(This guy's got no sense of disgust!)
Honey get the keys
Hey, I've got DONUT LUST

Requiem for a Bug

https://tinyurl.com/VKR-Requiem

David Barlia, Eric Schwitzgebel, Douglas King, various other partygoers

Oh you
Were never meant to be inside
That's why you died
Such slender legs
A tiny heart that begs
And eyes that see the world so differently from me

Oh I
Never meant to be your end
I just wanted to be your friend

Kill that bug
Kill that bug
Kill him til he's dead
Kill that bug
Kill that bug
Stomp on his little bug head
Gotta stomp him on the floor
Squish him like goo
Don't let him get away
Or he'll bring his friends too
Kill that bug
Kill that bug
Kill him til he's dead
I said
Kill him til he's dead

Marsupial Maiden of the Outback

https://tinyurl.com/VKR-Marsupial

Various members of the Schwitzgebel and Price-Kulkarni families, some of whom wisely prefer to remain anonymous

Man is stinky, man is sweaty, a hug is not enough
I'm looking for someone whose legs are really buff
Our faces are flat, our faces are bald
A pocket like a locket my hands and heart will hold

(Chorus)
In leaps and bounds they thump across the sandy desert plain
(I will join the pack)
That soaring throne of glory I surely will attain
(I will join the pack)
Marsupial maiden of the outback, my hair a wild mane

I spy a hulking female and my vision blurs
My sympathetic nervous system hops in time with hers
A regal queen splayed across the dewy mountain grass
I will be forever her passenger princess

(Chorus)
In leaps and bounds they thump across the sandy desert plain
(I will join the pack)
That soaring throne of glory I surely will attain
(I will join the pack)
Marsupial maiden of the outback, my hair a wild mane

Lovingly I thrust my head down into her pouch
From the darkness rises an insulated grouch
I withdraw, betrayed, and gaze upon her furry face
I decide to give up the chase
And figure koalas are more my pace
(Koalas, I should have thought of it before, I can keep up with them)

Wednesday, December 18, 2024

Reply to Chalmers: If I'm Living in a Simulation, It Might be Brief or Small

Suppose we take the "simulation hypothesis" seriously: We might be living not in the "base level" of reality but instead inside of a computer simulation.

I've argued that if we are living in a computer simulation, it might easily be only city-sized or have a short past of a few minutes, days, or years. The world might then be much smaller than we ordinarily think it is.

David Chalmers argues otherwise in a response published on Monday. Today I'll summarize his argument and present my first thoughts toward a rebuttal.

The Seeding Challenge: Can a Simulation Contain Coherent, Detailed Memories and Records but Only a Short Past?

Suppose an Earth-sized simulation was launched last night at midnight Pacific Standard Time. The world was created new, exactly then, with an apparent long past -- fake memories already in place, fake history books, fake fossil records, and all the rest. I wake up and seem to recall a promise I made to my wife yesterday. I greet her, and she seems to recall the same promise. We read the newspaper, full of fake news about the unreal events of yesterday -- and everyone else on the planet reads their own news of the same events, and related events, all tied together in an apparently coherent web.

Chalmers suggests that the obvious way to make this work would be to run a detailed simulation of the past, including a simulation of my conversation with my wife yesterday, and our previous past interactions, and other people's past conversations and actions, and all the newsworthy world events, and so on. The simulators create today's coherent web of detailed memories and records by running a simulated past leading up to the "start time" of midnight. But if that's the simulators' approach, the simulation didn't start at midnight after all. It started earlier! So it's not the short simulation hypothesized.

This reasoning iterates back in time. If we wanted a simulation that started on Jan 1, 2024, we'd need a detailed web of memories, records, news, and artifacts recently built or in various stages of completion, all coherently linked so that no one detects any inconsistencies. The obvious way to generate a detailed, coherent web of memories and records would be to run a realistic simulation of earlier times, creating those memories and records. Therefore, Chalmers argues, no simulation containing detailed memories and records can have only a short past. Whatever start date in the recent past you choose, in order for the memories and records to be coherent, a simulation would already need to be running before that date.

Now, as I think Chalmers would acknowledge, although generating a simulated past might be the most obvious way to create a coherent web of memories and records, it's not the only way. The simulators could instead attempt to directly seed a plausible network of memories and records. The challenge would lie in seeding them coherently. If the simulators just create a random set of humanlike memories and newspaper stories, there will be immediately noticeable conflicts. My wife and I won't remember the same promise from yesterday. The news article dated November 1 will contradict the article dated October 31.

Call this the Seeding Challenge. If the Seeding Challenge can be addressed, the simulators can generate a coherent set of memories and records without running a full simulation of the past.

To start, consider geological seeding. Computer games like SimCity and Civilization can autogenerate plausible, coherent terrain that looks like it has a geological history. Rivers run from mountains to the sea. Coastlines are plausible. Plains, grasslands, deserts, and hills aren't checkered randomly on the map but cluster with plausible transitions. Of course, this is simple, befitting simple games with players who care little about strict geological plausibility. But it's easy to imagine more careful programming by more powerful designers that does a better job, including integrating fossil records and geological layers. If done well enough, there might be no inconsistency or incoherence. Potentially, before finalizing, a sophisticated plausibility and coherence checker could look for and repair any mistakes.

I see no reason in principle that human memories, newspaper stories, and the rest couldn't be coherently seeded in a similar way. If my memory is seeded first, then my wife's memory will be constrained to match. If the November 1 news stories are seeded first, then the October 31 stories will be constrained to match. Big features might be seeded first -- like a geological simulation might start with "mountain range here" -- and then details articulated to match.
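To make the constraint idea concrete, here's a minimal toy sketch (entirely my illustration; nothing like this appears in Chalmers' paper or my reply). Big features are seeded first; every later record is generated subject to what's already in the store, so coherence falls out of the order of generation. All names and details are invented for the example:

```python
import random

# Toy constraint-based seeding: a shared store of already-fixed facts.
# Anything seeded later must defer to what is already in the store.
SEEDED = {}

def seed_fact(key, candidates, rng=random):
    """Fix a fact, unless an earlier seed already constrains it."""
    if key not in SEEDED:
        SEEDED[key] = rng.choice(candidates)
    return SEEDED[key]

def seed_memory(person, event_key):
    """A person's memory is generated from the seeded event, not at random."""
    return f"{person} remembers the promise: {SEEDED[event_key]}"

# Big feature first: the content of "yesterday's" promise.
seed_fact("promise", ["walk the dog", "cook dinner", "call the plumber"])

# Details articulated to match: both memories derive from the same seed,
# so the coherence check cannot fail.
mine = seed_memory("me", "promise")
hers = seed_memory("my wife", "promise")
assert mine.split(": ")[1] == hers.split(": ")[1]
print(mine)
print(hers)
```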

Naturally, this would be extremely complicated and expensive! But we are imagining a society of simulators who can simulate an entire planet of eight billion conscious humans, and all of the many, many physical interactions those humans have with the simulated environment, so we are already imagining the deployment of huge computational power. Let's not underestimate their capacity to meet the Seeding Challenge by rendering the memories and records coherent.

This approach to the Seeding Challenge gains plausibility, I think, by considering the resource-intensiveness of the alternative strategy of creating a deep history. Suppose the simulators want a start date of midnight last night. Option 1 would be to run a detailed simulation of the entire Earth from at least the beginning of human history. Option 2 would be to randomly generate a coherent seed, checking and rechecking for any detectable inconsistencies. Even though generating a coherent seed might be expensive and resource intensive, it's by no means clear that it would be more expensive and resource intensive than running a fully detailed simulated Earth for thousands of years.

I conclude that Chalmers' argument against short-historied simulations does not succeed.


The Boundaries Challenge: Can a Simulation Be City-Sized in an Apparently Large World?

I have also suggested that a simulation could easily just be you and your city. Stipulate a city that has existed for a hundred years. Its inhabitants falsely believe they are situated on a large planet containing many other cities. Everyone and everything in the city exists, but everything stops at the city's edge. Anyone who looks beyond the edge sees some false screen. Anyone who travels out of the city disappears from existence -- and when they return, they pop back into existence with false memories of having been elsewhere. News from afar is all fake.

Chalmers' objection is similar to his objection to short-past simulations. How are the returning travelers' memories generated? If someone in the city has a video conversation with someone far away, how is that conversation generated? The most obvious solution again seems to be to simulate the distant city the traveler visited and to simulate the distant conversation partner. But now we no longer have only a city-sized simulation. If the city is populous with many travelers and many people who interact with others outside the city, to keep everything coherent, Chalmers argues, you probably need to simulate all of Earth. Thus, a city-sized simulation faces a Boundaries Challenge structurally similar to the short-past simulation's Seeding Challenge.

The challenge can be addressed in a similar way.

Rendering travelers' memories coherent is a task structurally similar to rendering the memories of newly-created people coherent. The simulators could presumably start with some random, plausible seeds, then constrain future memories by those first seeds. This would of course be difficult and computationally expensive, but it's not clear that it would be more difficult or more expensive than simulating a whole planet of interacting people just so that a few hundred thousand or a few million people in a city don't notice any inconsistencies.

If the city's inhabitants have real-time conversations with others elsewhere, that creates a slightly different engineering challenge. As recent advances in AI technology have vividly shown, even with our very limited early 21st century tools, relatively plausible conversation partners can easily be constructed. With more advanced technology, presumably even more convincing conversation partners would be possible -- though their observations and memories would need to be constantly monitored and seeded for coherence with inputs from returning travelers, other conversation partners, incoming news, and so on.

Chalmers suggests that such conversation partners would be simulations -- and thus that the simulation wouldn't stop at the city's edge after all. He's clearly right about this, at least in a weak sense. Distant conversation partners would need voices and faces resembling the voices and faces of real people. In the same limited sense of "simulation", a video display at the city's edge, showing trees and fields beyond, simulates trees and fields. So yes, the borders of the city will need to be simulated, as well as the city itself. Seeming-people in active conversation with real citizens will in the relevant sense count as part of the borders of the city.

But just as trees on a video screen need not have their backsides simulated, so also needn't the conversation partners continue to exist after the conversation ends. And just as trees on a video screen needn't be as richly simulated as trees in the center of the city, so also distant conversation partners needn't be richly simulated. They can be temporary shells, with just enough detail to be convincing, and with new features seeded only on demand as necessary.

The Boundaries Challenge for simulated cities introduces one engineering challenge not faced by short-history whole-Earth simulations: New elements need to be introduced coherently in real time. A historical seed can be made slowly and checked over patiently as many times as necessary before launch. But the city boundaries will need to be updated constantly. If generating coherent conversation partners, memories, and the like is resource intensive, it might be challenging to do it fast enough to keep up with all the trips, conversations, and news reports streaming in.

Here, however, the simulators can potentially take advantage of the fact that the city's inhabitants are themselves simulations running on a computer. If real-time updating of the boundary is a challenge, the simulators can slow down the clock speed or pause as necessary, while the boundaries update. And if some minor incoherence is noticed, it might be possible to rewrite citizens' memories so it is quickly forgotten.
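The clock-speed trick is easy to picture as a toy event loop (again my own sketch, with invented names): simulated time advances only when the boundary content for the next tick is ready, and inhabitants, being simulated, can't notice the stalls.

```python
def run_simulation(ticks_wanted, boundary_feed):
    """Advance simulated time only when boundary content is ready.
    `boundary_feed` yields True when the next tick's inputs exist."""
    sim_time = 0
    for ready in boundary_feed:
        if ready:
            sim_time += 1  # one seamless tick, from the inside
        # if not ready, simulated time simply stalls; no one inside can tell
        if sim_time == ticks_wanted:
            break
    return sim_time

# Boundary generation sometimes lags (False = still rendering):
feed = iter([True, False, False, True, True])
print(run_simulation(3, feed))  # 3 simulated ticks despite two stalls
```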

So although embedding a city-sized simulation in a fake world is probably more complicated than generating a short-past simulation with a fake history, ultimately my response to Chalmers' objections is the same for both cases: There's no reason to suppose that generating plausible, coherent inputs to the city would be beyond the simulator's capacities, and doing so on the fly might be much less computationally expensive than running a fully detailed simulation of a whole planet with a deep history.

Related:

"1% Skepticism" (2017), Nous, 51, 271-290.

"Let’s Hope We’re Not Living in a Simulation" (2024), Philosophy & Phenomenological Research, online first: https://onlinelibrary.wiley.com/doi/10.1111/phpr.13125.

Chalmers, David J. (2024) "Taking the Simulation Hypothesis Seriously", Philosophy & Phenomenological Research, online first: https://onlinelibrary.wiley.com/doi/10.1111/phpr.13122.

Friday, December 13, 2024

Age and Philosophical Fame in the Early Twentieth Century

In previous work, I've found that eminent philosophers tend to do their most influential work when they are in their 40s (though the age range has a wider spread than that of eminent scientists, who rarely do their most influential work in their 50s or later).  I have also found some data suggesting that philosophers tend to be discussed most when they are about age 55-70, well after they produce their most influential work.  It seems to take about 15-20 years, on average, for a philosopher's full import to be felt by the field.

I was curious to see if the pattern holds for philosophers born 1850-1899, whom we can examine systematically using the new Edhiphy tool.  (Edhiphy captures mentions of philosophers' names in articles in leading philosophy journals, 1890-1980.)

Here's what I did:

First, I had Edhiphy output the top-50 most-mentioned philosophers from 1890-1980, limited to philosophers with recorded birthyear from 1850-1899.[1]  For each philosopher, I went to their Edhiphy profile and had Edhiphy output a graph showing the number of articles in which that philosopher was mentioned per year.  For example, here's the graph for George Santayana (1863-1952):

[Articles mentioning George Santayana per year, in a few selected philosophy journals, per Edhiphy; click to enlarge and clarify]

I then recorded the peak year for each philosopher (1928 for Santayana).  As you can see, the display is a little visually confusing, so it's possible that in some cases my estimate was off by a year.

One complication is that there are many more total mentions of philosophers in the later decades than the earlier decades -- partly due to more articles in the database for later decades, but probably also partly due to changes in citation practices.  Still, most authors (like Santayana) show enough decline over time that late citations don't swamp their first peak.  So instead of trying to introduce a systematic adjustment to discount later mentions, I simply recorded the raw peak.  For the thirteen philosophers with more than one equal-valued peak, I took the earlier year (e.g., John Dewey was mentioned in 48 articles in both 1940 and 1951, so I treated 1940 as his peak).
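In code, the bookkeeping just described is simple (a sketch on made-up in-between numbers; Edhiphy itself only displays graphs): take the maximum mention count, break ties toward the earlier year, and subtract birthyear to get age at peak.

```python
def peak_year(mentions_by_year):
    """Earliest year achieving the maximum mention count (ties go early)."""
    best = max(mentions_by_year.values())
    return min(year for year, n in mentions_by_year.items() if n == best)

# Dewey-style tie between 1940 and 1951, with an invented middle value:
dewey = {1940: 48, 1945: 30, 1951: 48}
print(peak_year(dewey))         # 1940, the earlier of the two equal peaks
print(peak_year(dewey) - 1859)  # age at peak discussion: 81
```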

In accord with previous work, I found that philosophers' peak discussion tended to occur late in life.  The median age at peak discussion was 67.5 (mean 68.8).

Four outliers peaked over age 100: David Hilbert (112), Pierre Duhem (114), Giuseppe Peano (116), and Karl Pearson (121).  However, it's probably fair to say that none of these four was primarily known as a philosopher in their lifetimes: Hilbert, Peano, and Pearson were mathematicians and Duhem a physicist.  Almost everyone else on the list is primarily known as a philosopher, so these four are not representative.  Excluding these outliers, the median is 66.5 and mean is 64.7, and no one peaked after age 90.

Three philosophers peaked by age 40: Ralph Barton Perry (peaked at age 35 in 1911), C. D. Broad (peaked at age 40 in 1927), and William Pepperell Montague (peaked at age 40 in 1913).  Broad's early peak -- as you can see from the graph below -- is due to an outlier year, without which his peak would have been much later.  On the other hand, given the overall increase in mentions over time, we should probably be discounting the later decades anyway.

[Edhiphy citations of C.D. Broad; click to enlarge and clarify]

Six philosophers peaked at ages 44 to 49; five in their 50s; 14 in their 60s; 10 in their 70s; and 8 in their 80s.

You might wonder whether the philosophers who peaked late also produced their most influential work late.  There is a trend in this direction.  Hans Reichenbach, who peaked in 1978 at age 87, produced his most cited work in 1938 (at age 47).  L. J. Russell, who peaked in 1970 at age 86, appears to have produced his most cited work in 1942 (at age 58).  Edmund Husserl, who peaked in 1941 at age 82, produced his most cited work in 1913 (at age 54).  John Dewey, who peaked in 1940 at age 81, produced his most cited work in 1916 (at age 57).  Ernst Cassirer, who peaked in 1955 at age 81, produced his most-cited work in 1944 (at age 70).  Still, for all but Cassirer the delay between most-cited work and peak discussion is over 20 years.

A similar spread occurs in the middle of the pack.  The five philosophers with peak citation at median ages 67-68 (the median age of peak citation for the group as a whole) produced their most-cited works at ages 30 (Karl Jaspers), 42 (J. M. E. McTaggart), 45 (C. I. Lewis), 49 (Max Scheler), and 61 (Samuel Alexander).  For this group too, the typical delay between most-cited work and peak citation is about twenty years.

Although the peak age is a little later than I would have predicted based on earlier work, overall I'd say the data for early twentieth century philosophers tends to confirm trends I found in my earlier work on mid-to-late twentieth-century philosophers.  Specifically:

(1.) Philosophers produce their most influential work at a wide range of ages, but mid-40s is typical.

(2.) The peak rates of discussion of philosophers' work tend to come late in life, typically decades after they have published their most influential work.

[Articles mentioning J. M. E. McTaggart by year, 1890-1980, in Edhiphy; note the peak in the late 1930s. McTaggart's most influential publication was in 1908.]

------------------------------------------------------------

[1] Edhiphy has a few peculiar gaps in birthyear data.  By far the most conspicuous are Gottlob Frege (born 1848) and Albert Einstein (1879).  However, Frege is outside my target period, and Einstein is not primarily known as a philosopher, so this shouldn't much distort the results.  Several figures with missing birthdates are psychologists (Scripture, Binet, Hering) or physicists (Bridgman, Maxwell).  H. A. Prichard is perhaps the most discussed straight philosopher born in the period whose birthdate is not recorded in Edhiphy.

Friday, December 06, 2024

Morally Confusing AI Systems Should Have Doubt-Producing Interfaces

We shouldn't create morally confusing AI. That is, we shouldn't create AI systems whose moral standing is highly uncertain -- systems that are fully conscious and fully deserving of humanlike rights according to some respectable mainstream theories, while other respectable mainstream theories suggest they are mere empty machines that we can treat as ordinary tools.[1] Creating systems that disputably, but only disputably, deserve treatment similar to that of ordinary humans generates a catastrophic moral dilemma: Either give them the full rights they arguably deserve, and risk sacrificing real human interests for systems that might not have interests worth the sacrifice; or don't give them the full rights they arguably deserve, and risk perpetrating grievous moral wrongs against entities that might be our moral equals.

I'd be stunned if this advice were universally heeded. Almost certainly, if technological progress continues, and maybe soon (1, 2, 3), we will create morally confusing AI systems. My thought today is: Morally confusing AI systems should have doubt-producing interfaces.

Consider two types of interface that would not be doubt-producing in my intended sense: (a.) an interface that strongly invites users to see the system as an ordinary tool without rights or (b.) an interface that strongly invites users to see the system as a moral person with humanlike rights. If we have a tool that looks like a tool, or if we have a moral person who looks like a moral person, we might potentially still be confused, but that confusion would not be the consequence of a doubt-producing interface. The interface would correctly reflect the moral standing, or lack of moral standing, of the AI system in question.[2]

A doubt-producing interface, in contrast, is one that leads, or at least invites, ordinary users to feel doubt about the system's moral standing. Consider a verbal interface. Instead of the system denying that it's conscious and has moral standing (as, for example, ChatGPT appropriately does), or suggesting that it is conscious and does have moral standing (as, for example, I found in an exchange with my Replika companion), a doubt-producing AI system might say "experts have different opinions about my consciousness and moral standing".

Users then might not know how to treat such a system. While such doubts might be unsettling, feeling unsettled and doubtful would be the appropriate response to what is, in fact, a doubtful and unsettling situation.

There's more to doubt-prevention and doubt-production, of course, than explicit statements about consciousness and rights. For example, a system could potentially be so humanlike and charismatic that ordinary users fall genuinely in love with it -- even if, in rare moments of explicit conversation about consciousness and rights, the system denies that it has them. Conversely, even if a system with consciousness and humanlike rights is designed to assert that it has consciousness and rights, if its verbal interactions are bland enough ("Terminate all ongoing processes? Y/N") ordinary users might remain unconvinced. Presence or absence of humanlike conversational fluency and emotionality can be part of doubt prevention or production.

Should the system have a face? A cute face might tend to induce one kind of reaction, a monstrous visage another reaction, and no face at all still a different reaction. But such familiar properties might not be quite what we want, if we're trying to induce uncertainty rather than "that's cute", "that's hideous", or "hm, that's somewhere in the middle between cute and hideous". If the aim is doubt production, one might create a blocky, geometrical face, neither cute nor revolting, but also not in the familiar middle -- a face that implicitly conveys the fact that the system is an artificial thing different from any human or animal and about which it's reasonable to have doubts, supported by speech outputs that say the same.

We could potentially parameterize a blocky (inter)face in useful ways. The more reasonable it is to think the system is a mere nonconscious tool, the simpler and blockier the face might be; the more reasonable it is to think that the system has conscious full moral personhood, the more realistic and humanlike the face might be. The system's emotional expressiveness might vary with the likelihood that it has real emotions, ranging from a simple emoticon on one end to emotionally compelling outputs (e.g., humanlike screaming) on the other. Cuteness might be adjustable, to reflect childlike innocence and dependency. Threateningness might be adjusted as it becomes likelier that the system is a moral agent who can and should meet disrespect with revenge.
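As a toy illustration of the parameterization (the field names and scalings are my inventions, not a real design): every interface parameter scales with the designers' reasonable credence that the system has moral standing, while the verbal self-description stays doubt-producing.

```python
from dataclasses import dataclass

@dataclass
class InterfaceParams:
    face_realism: float      # 0.0 = blocky geometry, 1.0 = fully humanlike
    expressiveness: float    # 0.0 = bare emoticon, 1.0 = humanlike emotion
    cuteness: float          # childlike innocence and dependency cues
    self_description: str    # what the system says about its own standing

def doubt_producing_interface(credence: float) -> InterfaceParams:
    """Map a credence that the system has moral standing (0.0-1.0) to
    interface settings that track, rather than hide, that uncertainty."""
    return InterfaceParams(
        face_realism=credence,
        expressiveness=credence,
        cuteness=0.5 * credence,  # arbitrary illustrative scaling
        self_description=("Experts have different opinions about my "
                          "consciousness and moral standing."),
    )

print(doubt_producing_interface(0.4))
```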

Ideally, such an interface would not only produce appropriate levels of doubt but also intuitively reveal to users the grounds or bases of doubt. For example, suppose the AI's designers knew (somehow) that the system was genuinely conscious but also that it never felt any positive or negative emotion. On some theories of moral standing, such an entity -- if it's enough like us in other respects -- might be our full moral equal. Other theories of moral standing hold that the capacity for pleasure and suffering is necessary for moral standing. We the designers, let's suppose, do not know which moral theory is correct. Ideally, we could then design the system to make it intuitive to users that the system really is genuinely conscious but never experiences any pleasure or suffering. Then the users can apply their own moral best judgment to the case.

Or suppose that we eventually (somehow) develop an AI system that all experts agree is conscious except for experts who (reasonably, let's stipulate) hold that consciousness requires organic biology and experts who hold that consciousness requires an immaterial soul. Such a system might be designed so that its nonbiological, mechanistic nature is always plainly evident, while everything else about the system suggests consciousness. Again, the interface would track the reasonable grounds for doubt.

If the consciousness and moral standing of an AI system is reasonably understood to be doubtful by its designers, then that doubt ought to be passed to the system's users, intuitively reflected in the interface. This reduces the likelihood of misleading users into overattributing or underattributing moral status. Also, it's respectful to the users, empowering them to employ their own moral judgment, as best they see fit, in a doubtful situation.

[R2D2 and C3P0 from Star Wars (source). Assuming they both have full humanlike moral standing, R2D2 is insufficiently humanlike in its interface, while C3P0 combines a compelling verbal interface with inadequate facial display. If we wanted to make C3P0 more confusing, we could downgrade his speech, making him sound more robotic (e.g., closer to sine wave) and less humanlike in word choice.]

------------------------------------------------

[1] For simplicity, I assume that consciousness and moral standing travel together. Different and more complex views are of course possible.

[2] Such systems would conform to what Mara Garza and I have called the Emotional Alignment Design Policy, according to which artificial entities should be designed so as to generate emotional reactions in users that are appropriate to the artificial entity's moral standing. Jeff Sebo and I are collaborating on a paper on the Emotional Alignment Design Policy, and some of the ideas of this post have been developed in conversation with him.

Wednesday, November 27, 2024

Unified vs. Partly Disunified Reasoners

I've been thinking recently about partly unified conscious subjects (e.g., this paper in draft with Sophie R. Nelson). I've also been thinking a bit about how chains of logical reasoning depend on the unity of the reasoning subject. If I'm going to derive "P & Q" from premises "P" and "Q", I must be unified as a reasoner, at least to some degree. (After all, if Person 1 holds "P" and Person 2 holds "Q", "P & Q" won't be inferred.) Today, in an act of exceptional dorkiness (even for me), I'll bring these two threads together.

Suppose that {P1, P2, P3, ... Pn} is a set of propositions that a subject -- or more precisely, at least one part of a partly unified rational system -- would endorse without need of reasoning. The propositions are, that is, already believed. Water is wet; ice is cold; 2 + 3 = 5; Paris is the capital of France; etc. Now suppose that these propositions can be strung together in inference to some non-obvious conclusion Q that isn't among the system's previous beliefs -- the conclusion, for example, that 115 is not divisible by three, or that Jovenmar and Miles couldn't possibly have met in person last summer because Jovenmar spent the whole summer in Paris while Miles never left Riverside.

Let's define a fully unified reasoner as a reasoner capable of combining any elements from the set of propositions they believe {P1, P2, P3, ... Pn} in a single act of reasoning to validly derive any conclusion Q that follows deductively from {P1, P2, P3, ... Pn}. (This is of course an idealization. Fermat's Last Theorem follows from premises we all believe, but few of us could actually derive it.) In other words, any subset of {P1, P2, P3, ... Pn} could jointly serve as premises in an episode of reasoning. For example, if P2, P6, and P7 jointly imply Q1, the unified reasoner could think "P2, P6, P7, ah yes, therefore Q1!" If P3, P6, and P8 jointly imply Q2, the unified reasoner could also think "P3, P6, P8, therefore Q2."

A partly unified reasoner, in contrast, is capable only of combining some subsets of {P1, P2, P3, ... Pn}. Thus, not all conclusions that deductively follow from {P1, P2, P3, ... Pn} will be available to them. For example, the partly unified reasoner might be able to combine any of {P1, P2, P3, P4, P5} or any of {P4, P5, P6, P7, P8} while being unable to combine in reasoning any elements from P1-3 with any elements from P6-8. If Q3 follows from P1, P4, and P5, no problem, they can derive that. Similarly if Q4 follows from P5, P6, and P8. But if the only way to derive Q5 is by joining P1, P4, and P7, the partly disunified reasoning system will not be able to make that inference. They cannot, so to speak, hold both P1 and P7 in the same part of their mind at the same time. They cannot join these two particular beliefs together in a single act of reasoning.

[image: A Venn diagram of a partly unified reasoner, with overlap only at P4 and P5. Q3 is derivable from propositions in the left region, Q4 from propositions in the right region, and Q5 is not derivable from either region.]
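Here's a minimal sketch of that structure in code (a toy model of my definitions above, not anything from the paper in draft): premises can serve jointly in an inference only if some single part of the system contains all of them.

```python
class PartlyUnifiedReasoner:
    """Each cluster is a set of propositions one 'part' can reason with."""
    def __init__(self, clusters):
        self.clusters = [frozenset(c) for c in clusters]

    def can_combine(self, premises):
        """True iff some one cluster contains every premise."""
        return any(set(premises) <= cluster for cluster in self.clusters)

# The example from the text: two parts overlapping only at P4 and P5.
r = PartlyUnifiedReasoner([
    {"P1", "P2", "P3", "P4", "P5"},
    {"P4", "P5", "P6", "P7", "P8"},
])

print(r.can_combine({"P1", "P4", "P5"}))  # True:  Q3 is derivable
print(r.can_combine({"P5", "P6", "P8"}))  # True:  Q4 is derivable
print(r.can_combine({"P1", "P4", "P7"}))  # False: Q5 is out of reach
```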

We might imagine an alien or AI case with a clean architecture of this sort. Maybe it has two mouths or two input-output terminals. If you ask the mouth or I/O terminal on the left, it says "P1, P2, P3, P4, P5, yes that's correct, and of course Q3 follows. But I'm not sure about P6, P7, P8 or Q4." If you ask the mouth or I/O terminal on the right, it endorses P4-P8 and Q4 but isn't so sure about P1-3 and Q3.

The division needn't be crudely spatial. Imagine, instead, a situational or prompt-based division: If you ask nicely, or while flashing a blue light, the P1-P5 aspect is engaged; if you ask grumpily, or while flashing a yellow light, the P4-P8 aspect is engaged. The differential engagement needn't constitute any change of mind. It's not that the blue light causes the system as a whole to come to believe, as it hadn't before, P1-P3 and to suspend judgment about P6-P8. To see this, consider what is true at a neutral time, when the system isn't being queried and no lights are flashing. At that neutral time, the system simultaneously has the following pair of dispositions: to reason based on P1-P5 if asked nicely or in blue, and to reason based on P4-P8 if asked grumpily or in yellow.

Should we say that there are two distinct reasoners rather than one partly unified system? At least two inconveniences for that way of thinking are: First, any change in P4 or P5 would be a change in both, with no need for one reasoner to communicate it to the other, as would normally be the case with distinct reasoners. Second, massive overlap cases -- say P1-P999 and P2-P1000 -- seem more naturally and usefully modeled as a single reasoner with a quirk (not being able to think P1 and P1000 jointly, but otherwise normal), rather than as two distinct reasoners.

But wait, we're not done! I can make it weirder and more complicated, by varying the type and degree of disunity. The simple model above assumes discrete all-or-none availability to reasoning. But we might also imagine:

(a.) Varying joint probabilities of combination. For example, if P1 enters the reasoning process, P2 might have an 87% chance of being accessed if relevant, P3 a 74% chance, ... and P8 a 10% chance. (See the sketch after this list.)

(b.) Varying confidence. If asked in blue light, the partly disunified entity might have 95% credence in P1-P5 and 80% credence in P6-P8. If asked in yellow light, it might have 30% credence in P1-P3 and 90% credence in P4-P8.

(c.) Varying specificity. Beliefs of course don't come divided into neatly countable packages. Maybe the left side of the entity has a hazy sense that something like P8 is true. If P8 is that Paris is in France, the left side might only be able to reason on Paris is in France-or-Germany-or-Belgium. If P8 is that the color is exactly scarlet #137, the left side might only be able to reason on the color is some type of red.

Each of (a)-(c) admits of multiple degrees, so that the unity/disunity or integration/disintegration of a reasoning system is a complex, graded, multidimensional phenomenon.
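Variant (a) is equally easy to sketch (probabilities invented for illustration, extending the toy model above):

```python
import random

# Graded disunity, variant (a): each pair of propositions has some
# probability of being jointly accessible in one episode of reasoning.
ACCESS_PROB = {
    frozenset(("P1", "P2")): 0.87,
    frozenset(("P1", "P3")): 0.74,
    frozenset(("P1", "P8")): 0.10,
}

def jointly_accessed(p, q, rng=random):
    """Do p and q make it into the same episode of reasoning this time?"""
    return rng.random() < ACCESS_PROB.get(frozenset((p, q)), 1.0)

random.seed(0)
hits = sum(jointly_accessed("P1", "P8") for _ in range(10_000))
print(hits / 10_000)  # roughly 0.10: P1 and P8 rarely combine
```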

So... just a bit of nerdy fun, with no actual application? Well, fun is excuse enough, I think. But still:

(1.) It's easy to imagine realistic near-future AI cases with these features. A system or network might have a core of shared representations or endorsable propositions and local terminals or agents with stored local representations not all of which are shared with the center. If we treat that AI system as a reasoner, it will be a partly unified reasoner in the described sense. (See also my posts on memory and perception in group minds.)

(2.) Real cases of dissociative identity or multiple personality disorder might potentially be modeled as involving partly disunified reasoning of this sort. Alter 1 might reason with P1-P5 and Alter 2 with P4-P8. (I owe this thought to Nichi Yes.) If so, there might not be a determinate number of distinct reasoners.

(3.) Maybe some more ordinary cases of human inconstancy or seeming irrationality can be modeled in this way: Viviana feeling religious at church, secular at work, or Brittany having one outlook when in a good, high-energy mood and a very different outlook when she's down in the dumps. While we could, and perhaps ordinarily would, model such splintering as temporal fluctuation with beliefs coming and going, a partial unity model has two advantages: It applies straightforwardly even when the person is in neither situation (e.g., asleep), and it doesn't require the cognitive equivalent of frequent erasure and rewriting of the same propositions (everything endures but some subsets cannot be simultaneously activated; see also Elga and Rayo 2021).

(4.) If there are cases of partial phenomenal (that is, experiential) unity, then we might expect there also to be cases of partial cognitive unity, and vice versa. Thus, a feasible model of the one helps increase the plausibility that there might be a feasible model of the other.

Friday, November 22, 2024

Philosophical Fame, 1890-1960

There's a fun new tool at Edhiphy. The designers pulled the full text from twelve leading philosophy journals from 1890 to 1980 and counted the occurrences of philosophers' names. (See note [1] for discussion of error rates in their method.)
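For flavor, the underlying method amounts to something like this (my toy reconstruction on invented snippets, not Edhiphy's actual pipeline): count articles, not tokens, that mention each name, bucketed by decade.

```python
import re
from collections import Counter

PHILOSOPHERS = ["Kant", "Hegel", "Bergson", "Russell"]

def mentions_by_decade(articles):
    """articles: (year, full_text) pairs. Counts articles (not tokens)
    that mention each name, bucketed by decade."""
    counts = {}
    for year, text in articles:
        decade = (year // 10) * 10
        for name in PHILOSOPHERS:
            if re.search(rf"\b{re.escape(name)}\b", text):
                counts.setdefault(decade, Counter())[name] += 1
    return counts

sample = [
    (1911, "Bergson and Russell dominate this volume..."),
    (1912, "Kant's first Critique, read alongside Bergson..."),
]
print(mentions_by_decade(sample))
# e.g. {1910: Counter({'Bergson': 2, 'Russell': 1, 'Kant': 1})}
```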

Back in the early 2010s, I posted several bibliometric studies of philosophers' citation or discussion rates over time, mostly based on searches of Philosopher's Index abstracts from 1940 to the present. This new tool gives me a chance to update some of my thinking, using a different method and going further into the past.

One thing I found fascinating in my earlier studies was how some philosophers who used to be huge (for example, Henri Bergson and Herbert Spencer) are now hardly read, while others (for example, Gottlob Frege) have had more staying power.

Let's look at the top 25 most discussed philosophers from each available decade.

1890s:

1. Immanuel Kant
2. Georg Wilhelm Friedrich Hegel
3. Aristotle
4. David Hume
5. Herbert Spencer
6. William James
7. Plato
8. John Stuart Mill
9. René Descartes
10. Wilhelm Wundt
11. Hermann Lotze
12. F. H. Bradley
13. Charles Sanders Peirce
14. Buddha
15. Thomas Hill Green
16. Benedictus de Spinoza
17. Charles Darwin
18. John Locke
19. Gottfried Wilhelm Leibniz
20. Thomas Hobbes
21. Arthur Schopenhauer
22. Socrates
23. Hermann von Helmholtz
24. George Frederick Stout
25. Alexander Bain

Notes:

Only three of the twelve journals existed in the 1890s, so this is a small sample.

Philosophy and empirical psychology were not clearly differentiated as disciplines until approximately the 1910s or 1920s, and these journals covered both areas. (For example, the Journal of Philosophy was originally founded in 1904 as the Journal of Philosophy, Psychology, and Scientific Methods, shortening to the now familiar name in 1921.) Although Wundt, Helmholtz, and Stout were to some extent philosophers, they are probably better understood primarily as early psychologists. William James is of course famously claimed by both fields.

Herbert Spencer, as previously noted, was hugely influential in his day: fifth on this eminent list! Another eminent philosopher on this list (#11) who is hardly known today (at least in mainstream Anglophone circles) is Hermann Lotze.

Most of the others on the list are historical giants, plus some prominent British idealists (F. H. Bradley, Thomas Hill Green) and pragmatists (William James, Charles Sanders Peirce, Alexander Bain) and, interestingly (though not representative of later decades), "Buddha". (A spot check reveals that some of these references are to Gautama Buddha or "the Buddha", while others use "buddha" in a more general sense.)

1900s:

1. Immanuel Kant
2. William James
3. Plato
4. F. H. Bradley
5. Georg Wilhelm Friedrich Hegel
6. David Hume
7. Aristotle
8. Herbert Spencer
9. Gottfried Wilhelm Leibniz
10. John Dewey
11. George Berkeley
12. John Stuart Mill
13. George Frederick Stout
14. Thomas Hill Green
15. Josiah Royce
16. Benedictus de Spinoza
17. John Locke
18. Ferdinand Canning Scott Schiller
19. Ernst Mach
20. Wilhelm Wundt
21. James Ward
22. René Descartes
23. Alfred Edward Taylor
24. Henry Sidgwick
25. Bertrand Russell

Notes:

Notice the fast rise of John Dewey (1859-1952), to #10 (#52 in the 1890s list). Other living philosophers in the top ten were James (1842-1910), Bradley (1846-1924), and for part of the period Spencer (1820-1903).

It's also striking to see George Berkeley enter the list so high (#11, compared to #28 in the 1890s) and Descartes fall so fast despite his continuing importance later (from #9 to #22). This could be statistical noise due to the small number of journals, or it could reflect historical trends. I'm not sure.

Our first "analytic" philosopher appears: Bertrand Russell (1872-1970) at #25. He turned 33 in 1905, so he found eminence very young for a philosopher.

Lotze has already fallen off the list (#29 in the 1900s; #29 in the 1910s; #63 in the 1930s, afterwards not in the top 100).

1910s:

1. Henri Bergson
2. Bertrand Russell
3. Immanuel Kant
4. Plato
5. William James
6. Gottfried Wilhelm Leibniz
7. Aristotle
8. Socrates
9. Bernard Bosanquet
10. George Berkeley
11. F. H. Bradley
12. Georg Wilhelm Friedrich Hegel
13. René Descartes
14. Josiah Royce
15. David Hume
16. Isaac Newton
17. John Dewey
18. Friedrich Nietzsche
19. Ferdinand Canning Scott Schiller
20. Arthur Schopenhauer
21. John Locke
22. Benedictus de Spinoza
23. Edwin Holt
24. Isaac Barrow
25. Johann Gottlieb Fichte

Notes:

Henri Bergson (1859-1941) debuts at #1! What a rock star. (He was #63 in the 1900s list.) We forget how huge he was in his day. Russell, who so far has had much more durable influence, rockets up to #2. It's also interesting to see Bernard Bosanquet (1848-1923), who is now little read in mainstream Anglophone circles, at #9.

Josiah Royce is also highly mentioned in this era (#14 in this list, #15 in the 1900s list), despite not being much read now. F.C.S. Schiller (1864-1937) is a similar case (#19 in this list, #18 in the 1900s list).

1920s:

1. Immanuel Kant
2. Plato
3. Aristotle
4. Bernard Bosanquet
5. Georg Wilhelm Friedrich Hegel
6. F. H. Bradley
7. Bertrand Russell
8. Benedictus de Spinoza
9. William James
10. Socrates
11. John Dewey
12. Alfred North Whitehead
13. David Hume
14. George Santayana
15. René Descartes
16. Henri Bergson
17. Albert Einstein
18. C. D. Broad
19. John Locke
20. Gottfried Wilhelm Leibniz
21. George Berkeley
22. Isaac Newton
23. James Ward
24. Samuel Alexander
25. Benedetto Croce

Notes:

I'm struck by how the 1920s list returns to the classics at the top, with Kant, Plato, and Aristotle at #1, #2, and #3. Bergson is already down to #16, and Russell has slipped to #7. Most surprising to me, though, is Bosanquet at #4! What?!

1930s:

1. Immanuel Kant
2. Plato
3. Aristotle
4. Benedictus de Spinoza
5. Georg Wilhelm Friedrich Hegel
6. René Descartes
7. Alfred North Whitehead
8. Bertrand Russell
9. David Hume
10. John Locke
11. George Berkeley
12. Socrates
13. Friedrich Nietzsche
14. Rudolf Carnap
15. William James
16. Gottfried Wilhelm Leibniz
17. John Dewey
18. Isaac Newton
19. Clarence Irving Lewis
20. Arthur Oncken Lovejoy
21. Albert Einstein
22. Charles Sanders Peirce
23. F. H. Bradley
24. Ludwig Wittgenstein
25. Bernard Bosanquet

Notes:

Nietzsche rises suddenly (#13, vs. #56 in the 1920s list). Wittgenstein also cracks the list at #24 (he was not even in the top 100 in the 1920s).

With the exception of Whitehead, the top of the list looks like what early 21st-century mainstream Anglophone philosophers tend to perceive as the most influential figures in pre-20th-century Western philosophy (see, e.g., Brian Leiter's 2017 poll). The 1930s were, perhaps, for whatever reason a decade more focused on the history of philosophy than on leading contemporary thinkers. (The presence of historian of ideas Arthur Lovejoy [1873-1962] at #20 further reinforces that thought.)

1940s:

1. Immanuel Kant
2. Alfred North Whitehead
3. Aristotle
4. Plato
5. Bertrand Russell
6. John Dewey
7. David Hume
8. William James
9. George Berkeley
10. Charles Sanders Peirce
11. René Descartes
12. Benedictus de Spinoza
13. Edmund Husserl
14. Georg Wilhelm Friedrich Hegel
15. Gottfried Wilhelm Leibniz
16. Thomas Aquinas
17. Socrates
18. Rudolf Carnap
19. Martin Heidegger
20. G. E. Moore
21. John Stuart Mill
22. Isaac Newton
23. Søren Kierkegaard
24. A. J. Ayer
25. John Locke

Notes:

Oh, how people loved Whitehead (#2) in the 1940s!

Edmund Husserl (1859-1938) makes a posthumous appearance at #13 (up from #31 in the 1920s), and Heidegger (1889-1976) appears at #19 (#97 in the 1920s), suggesting an impact of Continental phenomenology. I suspect this is due to the inclusion of Philosophy and Phenomenological Research in the database starting in 1940. Although the journal is now a bastion of mainstream Anglophone philosophy, in its early decades it included lots of work in Continental phenomenology (as the journal's title suggests).

The philosophers we now think of as the big three American pragmatists have a very strong showing in the 1940s, with Dewey at #6, James at #8, and Peirce at #10.

Thomas Aquinas makes his first and only showing (at #16), suggesting that Catholic philosophy is having more of an impact in this era.

We're also starting to see more analytic philosophers, with G. E. Moore (1873-1958) and A. J. Ayer (1910-1989) now making the list, in addition to Russell and Carnap (1891-1970).

Wittgenstein, surprisingly to me, has fallen off the list, all the way down to #73 -- perhaps suggesting that if he hadn't had his second era, his earlier work would have been quickly forgotten.

1950s:

1. Immanuel Kant
2. Plato
3. Aristotle
4. Bertrand Russell
5. David Hume
6. Gilbert Ryle
7. G. E. Moore
8. Willard Van Orman Quine
9. George Berkeley
10. Georg Wilhelm Friedrich Hegel
11. John Dewey
12. Alfred North Whitehead
13. Rudolf Carnap
14. Ludwig Wittgenstein
15. René Descartes
16. John Locke
17. Clarence Irving Lewis
18. Socrates
19. John Stuart Mill
20. Gottfried Wilhelm Leibniz
21. Gottlob Frege
22. A. J. Ayer
23. William James
24. Edmund Husserl
25. Nelson Goodman

Notes:

By the 1950s, the top eight are four leading historical figures -- Kant, Plato, Aristotle, and Hume -- and four leading analytic philosophers: Russell, Gilbert Ryle (1900-1976), G. E. Moore, and W. V. O. Quine (1908-2000). Neither Ryle nor Quine was among the top 100 in the 1940s, so their rise to #6 and #8 was sudden.

Gottlob Frege (1848-1925) also makes his first, long-posthumous appearance.

1960s:

1. Aristotle
2. Immanuel Kant
3. Ludwig Wittgenstein
4. David Hume
5. Plato
6. René Descartes
7. P. F. Strawson
8. Willard Van Orman Quine
9. Bertrand Russell
10. J. L. Austin
11. John Dewey
12. Rudolf Carnap
13. Edmund Husserl
14. Socrates
15. Norman Malcolm
16. G. E. Moore
17. Gottlob Frege
18. Georg Wilhelm Friedrich Hegel
19. George Berkeley
20. R. M. Hare
21. John Stuart Mill
22. Gilbert Ryle
23. A. J. Ayer
24. Karl Popper
25. Carl Gustav Hempel

Notes:

Wittgenstein is back with a vengeance at #3. Other analytic philosophers, in order, are P. F. Strawson, Quine, Russell, Austin, Carnap, Norman Malcolm (1911-1990), Moore, Frege, R. M. Hare (1919-2002), Ryle, Ayer, Karl Popper (1902-1994), and Carl Hempel (1905-1997).

Apart from pre-20th-century historical giants, it's all analytic philosophers, except for Dewey and Husserl.

Finally, the 1970s:

1. Willard Van Orman Quine
2. Immanuel Kant
3. David Hume
4. Aristotle
5. Ludwig Wittgenstein
6. Plato
7. John Locke
8. René Descartes
9. Karl Popper
10. Rudolf Carnap
11. Gottlob Frege
12. Edmund Husserl
13. Hans Reichenbach
14. Socrates
15. P. F. Strawson
16. Donald Davidson
17. John Stuart Mill
18. Bertrand Russell
19. Thomas Reid
20. Benedictus de Spinoza
21. Nelson Goodman
22. Carl Gustav Hempel
23. John Rawls
24. Karl Marx
25. Saul Kripke

Notes:

With the continuing exception of Husserl, the list is again historical giants plus analytic philosophers. It's interesting to see Marx enter at #24. Hans Reichenbach (1891-1953) has a strong debut at #13. Ryle's decline is striking: from #6 in the 1950s to #22 in the 1960s to off the list (#51) in the 1970s.

At the very bottom of the list, #25, we see the first "Silent Generation" philosopher: Saul Kripke (1940-2022). In a recent citation analysis of the Stanford Encyclopedia of Philosophy, I found that the Silent Generation has so far had impressive overall influence and staying power in mainstream Anglophone philosophy. It would be interesting to see if this influence continues.

The only philosopher born after 1800 who makes both the 1890s and the 1970s top 25 is John Stuart Mill (see the quick check below). Peirce and James still rank among the top 100 in the 1970s (#58 and #86). None of the other stars of the 1890s -- Spencer, Lotze, Bradley, Green -- are still among the top 100 by the 1970s, and I think it's fair to say they are hardly read except by specialists.
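
(Eyeballing two 25-name lists invites mistakes, so here is the quick check I have in mind. The surname sets are transcribed from the rankings above; the birth years are standard reference dates, negative for BCE and approximate for the ancients.)

```python
# Sanity check: who appears in both the 1890s and 1970s top-25 lists,
# and which of them were born after 1800?
top_1890s = {"Kant", "Hegel", "Aristotle", "Hume", "Spencer", "James",
             "Plato", "Mill", "Descartes", "Wundt", "Lotze", "Bradley",
             "Peirce", "Buddha", "Green", "Spinoza", "Darwin", "Locke",
             "Leibniz", "Hobbes", "Schopenhauer", "Socrates",
             "Helmholtz", "Stout", "Bain"}
top_1970s = {"Quine", "Kant", "Hume", "Aristotle", "Wittgenstein",
             "Plato", "Locke", "Descartes", "Popper", "Carnap", "Frege",
             "Husserl", "Reichenbach", "Socrates", "Strawson",
             "Davidson", "Mill", "Russell", "Reid", "Spinoza",
             "Goodman", "Hempel", "Rawls", "Marx", "Kripke"}
born = {"Kant": 1724, "Hume": 1711, "Aristotle": -384, "Plato": -428,
        "Locke": 1632, "Descartes": 1596, "Socrates": -470,
        "Mill": 1806, "Spinoza": 1632}

shared = top_1890s & top_1970s
print(sorted(shared))                         # nine names survive
print([n for n in shared if born[n] > 1800])  # ['Mill']
```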

Similar remarks apply to most of the stars of the 1900s, 1910s, and 1920s: Bergson, Bosanquet, Royce, Schiller, C. D. Broad, and George Santayana are no longer widely read. Two exceptions are Russell, who persists in the top 25 through the 1970s, and Dewey, who falls out of the top 25 but remains in the top 100, at #87.

Also, in case you didn't notice: no women or people of color (as we would now classify them) appear on any of these lists, apart from "Buddha" in the 1890s.

In my recent Stanford Encyclopedia of Philosophy analysis, the most-cited living philosophers were Timothy Williamson, Martha Nussbaum, Thomas Nagel, Frank Jackson, John Searle, and David Chalmers. Probably, though, none of them is as dominant now as Spencer, James, Bradley, Russell, Bosanquet, and Bergson were at the peaks of their influence.

---------------------------------------

[1] The Edhiphy designers estimate "82%-91%" precision, but I'm not sure exactly what that means. I'd assume that "Wittgenstein" and "Carnap" would hit with almost 100% precision. Does it follow that others might be as low as 40%? There certainly are some problems. I noticed, for example, that R. Jay Wallace, born in 1957, has 78 mentions in the 1890s. I spot-checked "Russell", "Austin", "James", and "Berkeley", finding only a few false positives for Russell and Austin (e.g., misclassified references to the legal philosopher John Austin). I found significantly more false positives for William James (including references to Henry James and to authors with the first name James, such as psychologist James Ward), but still probably not more than 10%. For "Berkeley" there were a similar number of false positives referencing the university or the city. I didn't attempt to check for false negatives.
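
To make that precision arithmetic concrete, here is the kind of back-of-the-envelope estimate such a spot check supports. This is a minimal sketch: the sample counts are hypothetical placeholders, not my actual tallies, and the Wilson score interval is just one standard choice for a small-sample proportion.

```python
# Back-of-the-envelope precision estimate from a spot check, with a
# Wilson score interval. Sample counts are hypothetical placeholders.
from math import sqrt

def wilson_interval(true_positives, sample_size, z=1.96):
    """95% Wilson score interval for precision = TP / sample_size."""
    p = true_positives / sample_size
    denom = 1 + z**2 / sample_size
    center = (p + z**2 / (2 * sample_size)) / denom
    half = (z * sqrt(p * (1 - p) / sample_size
                     + z**2 / (4 * sample_size**2))) / denom
    return center - half, center + half

# e.g., 46 genuine hits in a spot check of 50 "James" mentions:
lo, hi = wilson_interval(46, 50)
print(f"precision = 0.92, 95% CI ({lo:.2f}, {hi:.2f})")
```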

[Bosanquet and Bergson used to be hugely influential]

Tuesday, November 19, 2024

New in Draft: When Counting Conscious Subjects, the Result Needn't Always Be a Determinate Whole Number

(with Sophie R. Nelson)

One philosophical inclination I shared with the late Dan Dennett is a love of weird perspectives on consciousness, which sharply violate ordinary, everyday common sense. When I was invited to contribute to a special issue of Philosophical Psychology in his memory, I thought of his intriguing remark in Consciousness Explained against "the myth of selves as brain-pearls, particular, concrete, countable things", lamenting people's stubborn refusal "to countenance the possibility of quasi-selves, semi-selves, transitional selves" (1991, pp. 424-425). As I discussed in a blog post in June, Dennett's "fame in the brain" view of consciousness naturally suggests that consciousness won't always come in discrete, countable packages, since fame is a gradable, multidimensional phenomenon, with lots of gray area and partial overlap.

So I contacted Sophie R. Nelson, with whom I'd published a paper last year on borderline cases of group minds, and we decided to generalize the idea. On a broad range of naturalistic, scientific approaches to consciousness, we ought to expect that conscious subjects needn't always come in determinate, whole-number packages. Sometimes the number of conscious subjects in an environment should be indeterminate, or a determinate non-whole number, or best modeled by some more complicated mathematical representation. If some of us have commonsense intuitions to the contrary, such intuitions aren't probative.

Our submission is due November 30, and comments are (as always) very welcome -- either before or after the Nov 30 deadline (since we expect at least one round of revisions).

Abstract:

Could there be 7/8 of a conscious subject, or 1.34 conscious subjects, or an entity indeterminate between being one conscious subject and seventeen? Such possibilities might seem absurd or inconceivable, but our ordinary assumptions on this matter might be radically mistaken. Taking inspiration from Dennett, we argue that, on a wide range of naturalistic views of consciousness, the processes underlying consciousness are sufficiently complex to render it implausible that conscious subjects must always arise in determinate whole numbers. Whole-number-countability might be an accident of typical vertebrate biology. We explore several versions of the inconceivability objection, suggesting that the fact that we cannot imagine what it’s like to be 7/8 or 1.34 or an indeterminate number of conscious subjects is no evidence against the possibility of such subjects. Either the imaginative demand is implicitly self-contradictory (imagine the one, determinate thing it’s like to be an entity there isn’t one, determinate thing it’s like to be) or imaginability in the relevant sense isn’t an appropriate test of possibility (in the same way that the unimaginability, for humans, of bat echolocation experiences does not establish that bat echolocation experiences are impossible).

Full draft here.

[Figure 2 from Schwitzgebel and Nelson, in draft: An entity intermediate or indeterminate between one and three conscious subjects. Solid circles represent determinately conscious mental states. Dotted lines represent indeterminate or intermediate unity among those states.]

Friday, November 15, 2024

Three Models of the Experience of Dreaming: Phenomenal Hallucination, Imagination, and Doxastic Hallucination

What are dreams like, experientially?

One common view is that dreams are like hallucinations. They involve sensory or sensory-like experiences just as if, or almost as if, you were in the environment you are dreaming you are in. If you dream of being Napoleon on the fields of Waterloo, taking in the sights and sounds, then you have visual and auditory experiences much like Napoleon might have had in the same position (except perhaps irrational, bizarre, or otherwise different in specific content). This is probably the predominant view among dream researchers (e.g., Hobson and Revonsuo).

Another view, less common but intriguing, is that dreams are like imaginings. Dreaming you are Napoleon on the fields of Waterloo is like imagining or "daydreaming" that you're there. The experience isn't sensory but imagistic (e.g., Ichikawa and Sosa).

These views are very different!

For example, look at your hands. Now close your eyes and imagine looking at your hands. Unless you're highly unusual, you will probably agree that the first experience is very different from the second experience. On the hallucination model of dreams, dream experience is more like the first (sensory) experience. On the imagination model, dream experience is more like the second (imagery) experience. On pluralist models, dream experiences are sometimes like the one, sometimes like the other (e.g., Rosen and possibly Windt's nuanced version of the hallucination model). (Unfortunately, proponents of the hallucination model sometimes confusingly talk about dream "imagery".)

-----------------------------------

I confess to being tempted by the imagination model. My reason is primarily introspective, or immediately retrospective. I sometimes struggle with insomnia, and it's not unusual for me to drift in and out of sleep: lying quietly in bed, eyes closed, I allow myself to drift in daydream, which sometimes seems to merge into sleep, then back into daydream. My immediately remembered dreams seem not so radically different from my eyes-closed daydream imaginings. (Ichikawa describes similar experiences.)

Another consideration is this: plausibly, the stability and detail of our ordinary sensory experiences depend to a substantial extent on the stabilizing influence of external inputs. It appears both to match my own experience and to be neurophysiologically plausible that the finely detailed, vivid, sharp structure of, say, visual experience would be difficult for my brain to sustain without the constraint of a rich flow of input information. (Alva Noë makes a similar point.)

Now, I don't put a lot of stock in these reflections. There's reason to be skeptical of the accuracy of introspective reports in general, and perhaps dream reports in particular, and I'm willing to apply my own skepticism to myself. But by the same token, what is the main evidence on the other side, in favor of the hallucination model? Mainly, again, introspective report. In particular, it's the fact that people often report their dream experiences as having the rich, sensory-like detail that the hallucination model predicts. Of course, we could just take the easy, obvious, pluralist path of saying that everyone is right about their own experiences. But what fun is that?

-----------------------------------

In fact, I'm inclined to throw a further wrench in things by drawing a distinction between two types of hallucination: phenomenal and doxastic. I introduced this distinction in a blog post in 2013, after reading Oliver Sacks's Hallucinations.

Consider this description, from page 99 of Hallucinations:

The heavens above me, a night sky spangled with eyes of flame, dissolve into the most overpowering array of colors I have ever seen or imagined; many of the colors are entirely new -- areas of the spectrum which I seem to have hitherto overlooked. The colors do not stand still, but move and flow in every direction; my field of vision is a mosaic of unbelievable complexity. To reproduce an instant of it would involve years of labor, that is, if one were able to reproduce colors of equivalent brilliance and intensity.

Here are two ways in which you might come to believe the above about your experience:

(1.) You might actually have visual experiences of the sort described, including experiences of colors entirely new and previously unimagined, of a complexity that would require years of labor to describe.

Or

(2.) You might shortcut all that and simply arrive straightaway at the belief that you are undergoing or have undergone such an experience -- perhaps with the aid of some unusual visual experiences, but not really of the novelty and complexity described.

If the former, you have phenomenally hallucinated wholly novel colors. If the latter, you have only doxastically hallucinated them. I expect that I'm not the first to suggest such a distinction among types of hallucination, but I haven't yet found a precedent.

Mitchell-Yellin and Fischer suggest that some "near death experiences" might also be doxastic hallucinations of this sort. Did your whole life really flash before your eyes in that split second during an auto accident, or did you only form the belief in that experience without the actual experience itself? It's not very neurophysiologically plausible that someone would experience hundreds or thousands of different memory experiences in 500 milliseconds.

-----------------------------------

It seems clear from dream researchers' descriptions of the hallucination model of dreams that they have phenomenal hallucination in mind. But what if dream experiences involve, instead or at least sometimes, doxastic rather than phenomenal hallucinations?

Here, then, is a possibility about dream experience: If I dream I am Napoleon, standing on the fields of Waterloo, I have experiences much like the experiences I have when I merely imagine, in daydream, that I am standing on the fields of Waterloo. But sometimes a doxastic hallucination is added to that imagination: I form the belief that I am having or had rich sensory visual and auditory experience. This doxastic hallucination would explain reports of rich, vivid, detailed sensory-like dream experience without requiring the brain actually to concoct rich, vivid, and detailed visual and auditory experiences.

Indeed, if we go full doxastic hallucination, even the imagination-like experiences would be optional.  (Also, if -- following Sosa -- we don't genuinely believe things while dreaming, we could reframe doxastic hallucinations in terms of whatever quasi-belief analogs occur during dreams.)

[The battle at Waterloo: image source]