Tuesday, February 04, 2025

A Taxonomy of Validity: Eeek!

There comes a time in everyone's life when their 18-year-old daughter, taking her first psychology class, asks, "Parental-figure-of-mine, what is 'validity'?"

For me that time came last week. Eeek!

Psychologists and social scientists use the term all the time, with a dazzling array of modifiers: internal validity, construct validity, external validity, convergent validity, predictive validity, discriminant validity, face validity, criterion validity.... But ask those same social scientists what validity is exactly, and how all of these notions relate to each other, and most will stumble.

As it happens, I was well positioned to address my daughter's question. I have a new paper, on "validity" in causal inference, forthcoming in the Journal of Causal Inference with social scientists Kevin Esterling and David Brady. This paper has been in progress since (again, eeek!) 2018. In previous posts I've addressed whether validity (in social science usage) is better understood as a property of inferences or as a property of claims (I argue the latter), and the intimate relationship of internal validity, external validity, and construct validity in causal inference.

Today, I'll attempt a brief, theoretically motivated taxonomy of the better-known types of validity. My aim is more descriptive than argumentative: I'll just outline how I think various "validities" hang together, and maybe some readers will find it an attractive and helpful picture.

I start with the assumption that validity is a feature of claims, not of inferences. Philosophers typically describe validity as a property of inferences. Social scientists are all over the map, and even prominent ones are sloppy in their usage. But our thinking is best organized if we treat claims as primary and inferences as secondary.

I will say that a general causal claim that "A causes B in conditions C" is valid if and only if A does in fact cause B in conditions C. (Compare disquotational theories of truth in philosophy.) Consider for example the causal claim: Enforcement threats on reminder postcards (A) cause increased juror turnout (B) in the 21st-century United States (C).

This statement can be divided into four parts, each of which permits a distinctive type of validity failure:

(i.) A

(ii.) causes

(iii.) B

(iv.) in conditions C.

The four possible failures generate the core taxonomic structure.

Construct validity of the cause: Something might cause B in conditions C, but that something might not be A. A causal generalization has construct validity of the cause if the claim accurately specifies that A in particular (and not, for example, some other related thing) causes B in conditions C. Example of a failure of construct validity of the cause: Increased juror turnout among people who receive postcards might not be due to enforcement threats in particular but simply to being reminded of one's civic duty.

Construct validity of the effect: A might cause something in conditions C, but what it causes might not be B. A causal generalization has construct validity of the effect if the effect of A is accurately specified. A causes specifically B (and not, for example, some other related thing) in conditions C. Example of a failure of construct validity of the effect: Enforcement threats might increase the rates at which jurors who don't show up register a valid excuse without actually increasing turnout rates.

Generalizing: Construct validity is present in a causal generalization when the cause and effect are accurately specified.

External validity: A might cause B, but the conditions might not be correctly specified. A causal generalization has external validity if the claim accurately specifies the range of conditions in which it holds. Example of a failure: Enforcement messages might increase juror turnout not in the U.S. in general but only in low-income neighborhoods. Perfect external validity is probably an unattainable ideal for complex social and psychological processes, since the conditions in which causal generalizations hold will be complex and various.

Note on external validity: Common usage often holds that a claim is externally valid only if it holds across a wide range of contexts or conditions. However, this way of thinking unhelpfully denigrates perfectly accurate causal generalizations as "invalid" if they only hold, and are claimed only to hold, across a narrow range of conditions. Transportability is a better concept for characterizing breadth of applicability. An externally valid causal generalization that is accurately claimed to hold across only a narrow range of contexts is not transportable to contexts outside that range, but there is no inaccuracy or factual error in the statement "A causes B in conditions C" of the sort required for failure of validity. After all, A does cause B in conditions C, just as claimed. So validity in the overarching sense described above is present.

Internal validity: A might be related to B in conditions C, but the relation might not be the directional causal relationship claimed. A causal generalization is internally valid if there is a cause-effect relationship of the type claimed (even if the cause, the effect, and/or the conditions are not accurately specified). Example of a failure: There's a common cause of both A and B, which are not directly causally related. Maybe having a stable address causes potential jurors both to be more likely to be sent the postcards and to be more likely to turn out.
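
To make this confounding failure concrete, here's a minimal simulation sketch (Python with numpy; all the probabilities are hypothetical, chosen only for illustration). Stable address raises both the chance of receiving a postcard (A) and the chance of turning out (B), while by construction A has no effect on B. A naive comparison shows a spurious "effect" that vanishes once we stratify on the common cause.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Common cause: does the potential juror have a stable address?
    stable_address = rng.random(n) < 0.7

    # A: receives the postcard. More likely given a stable address;
    # by construction, A has no causal effect on B.
    postcard = rng.random(n) < np.where(stable_address, 0.9, 0.3)

    # B: turns out for jury duty. Depends only on stable address, not on A.
    turnout = rng.random(n) < np.where(stable_address, 0.6, 0.2)

    # Naive comparison: turnout looks higher among postcard recipients...
    print(turnout[postcard].mean() - turnout[~postcard].mean())  # clearly > 0

    # ...but stratifying on the common cause reveals no effect of A on B.
    for s in (True, False):
        grp = stable_address == s
        diff = turnout[grp & postcard].mean() - turnout[grp & ~postcard].mean()
        print(s, round(diff, 3))  # roughly 0 in each stratum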

Other types of validity can be understood within the general spirit of this framework.

Convergent validity: Present when two causes claimed to have the same effect in fact have the same effect. In common use, the causes are measures, for example two different measures of extraversion. In this case, A1 (application of the first measure) and A2 (application of the second measure) are claimed to have a common effect B (same normalized extraversion score) in a set of conditions often left unspecified. Convergent validity is present if that claim is true (or to the degree it is true).

Discriminant validity: Present when two causes claimed to have different effects in fact have different effects. A1 is claimed to cause B, and A2 is claimed not to cause B (in a set of conditions that is often left unspecified), and discriminant validity is present when that claim is true (or to the degree it is true). In practice, discriminant validity is often supported by observation of low correlations in appropriately controlled conditions. If A1 and A2 are psychological or social measures (e.g., personality measures of extraversion and openness), then a high correlation between the scores would suggest that there is some common psychological feature both measures are tracking, contrary to the ideal of general discriminant validity.
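
Here's a minimal sketch of how such correlational checks might look in practice (Python with numpy; the latent traits, loadings, and noise levels are all hypothetical). Two measures of the same latent trait correlate highly, supporting convergent validity; measures of independent traits correlate near zero, supporting discriminant validity.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000

    # Hypothetical latent traits, assumed independent.
    extraversion = rng.normal(size=n)
    openness = rng.normal(size=n)

    # Two measures of extraversion (A1, A2): same latent trait plus noise.
    extra_measure_1 = extraversion + 0.5 * rng.normal(size=n)
    extra_measure_2 = extraversion + 0.5 * rng.normal(size=n)

    # A measure of openness: a different latent trait plus noise.
    open_measure = openness + 0.5 * rng.normal(size=n)

    # Convergent check: the two extraversion measures correlate highly.
    print(np.corrcoef(extra_measure_1, extra_measure_2)[0, 1])  # ~0.8

    # Discriminant check: extraversion and openness measures correlate
    # near zero, as the discriminant-validity claim requires.
    print(np.corrcoef(extra_measure_1, open_measure)[0, 1])  # ~0.0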

Predictive validity: Present when A is a common cause of B1 and B2, where B1 is typically the outcome of a measure and B2 is typically an event of practical import conceptually related but not closely physically related to B1. For example, application of a purported measure of recidivism (in this case, application of the measure isn't A but rather an intermediate event A1) among released prisoners has high predictive validity if high scores on the measure (B1) arise from the same cause or set of causes that generate high rates of recidivism (B2).

Note on predictive validity: A simpler characterization of "predictive validity" might be simply that B1 accurately predicts B2, but this isn't the most useful way to conceptualize the issue if the prediction is correct in virtue of B1 causing B2 rather than operating by a common cause. If my wife reliably picks me up from work when I ask, my asking (B1) predicts her picking me up (B2), but my asking does not have "predictive validity" in the intended measurement sense. A better term for this relationship would be causal power.
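
And a minimal sketch of the intended common-cause structure for predictive validity (Python with numpy; the latent disposition and all numbers are hypothetical): a single disposition drives both the risk score (B1) and actual reoffending (B2), so B1 predicts B2 without causing it.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 50_000

    # Hypothetical latent common cause, driving both score and outcome.
    disposition = rng.normal(size=n)

    # B1: score on the purported recidivism measure (disposition plus noise).
    risk_score = disposition + 0.7 * rng.normal(size=n)

    # B2: actual reoffending, with probability rising in the disposition;
    # B2 does not depend on the score itself.
    reoffends = rng.random(n) < 1 / (1 + np.exp(-disposition))

    # B1 predicts B2 via the common cause, though B1 does not cause B2.
    print(reoffends[risk_score > 1].mean())   # high scorers: elevated rate
    print(reoffends[risk_score <= 1].mean())  # the rest: lower rate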

Face validity: Present when it is intuitively or theoretically plausible that A causes B in conditions C. Notably, face validity needn't require that A in fact causes B in conditions C.

Ecological validity: A type of external validity that emphasizes the importance of generalizing correctly over real-world settings (as opposed to laboratory settings or other artificial settings).

Content validity: A type of construct validity focused on whether the content of a complex measure accurately reflects all aspects of the target measured.

Criterion validity: Present when a measure or intervention satisfies some prespecified criterion of success, regardless of whether the measure or intervention in fact measures what it purports to measure.

Finally, two types of validity for which "validity" is a property of the inference rather than a matter of the truth of some part of a causal claim:

Statistical conclusion validity: Present when statistics are appropriately used, regardless of whether A in fact causes B in conditions C.

Logical validity: Present when the conclusion of an argument can't be false if its premises are true.

Monday, January 27, 2025

Diversity, Disability, Death, and the Dao

Over the past year, I've been working through Chris Fraser's recent books on later classical Chinese thought and Zhuangzi, and I've been increasingly struck by how harmonizing with the Dao constitutes an attractive ethical norm. This norm differs from the standard trio of consequentialism (act to maximize good consequences), deontology (follow specific rules), and virtue ethics (act generously, kindly, courageously, etc.).

From a 21st-century perspective, what does "harmonizing with the Dao" amount to? And why should it be an ethical ideal? In an October post, I articulated a version of "harmonizing with the Dao" that combines elements of the ancient Confucian Xunzi and the ancient Daoist Zhuangzi. Today, I'll articulate the ideal less historically and contrast it with an Aristotelian ethical ideal that shares some common features.

So here's an ahistorical first pass at the ideal of harmonizing with the Dao:

Participate harmoniously in the awesome flourishing of things.

Unpacking a bit: This ideal depends upon a prior axiological vision of "awesome flourishing". My own view is that everything is valuable, but life is especially valuable, especially diverse and complex life, and most especially diverse and complex life-forms that thrive intellectually, artistically, socially, emotionally, and through hard-won achievement. (See my recent piece in Aeon magazine.)

[traditional yin-yang symbol, black and white; source]

Participating harmoniously in the awesome flourishing of things can include personal flourishing, helping others to flourish, or even simply appreciating a bit of the awesomeness. (Appreciation is the necessary receptive side of artistry: See my post on making the world better by watching reruns of I Love Lucy.)

Thinking in terms of harmony has several attractive features, including:

  1. It decenters the self (you're not the melody).
  2. There are many ways to harmonize.
  3. Melody and harmony together generate beauty and structure absent from either alone.

Is this a form of deontology with one rule: "participate harmoniously in the awesome flourishing of things"? No, it's "deontological" only in the same almost-vacuous sense that the consequentialists' "maximize good consequences" is deontological. The idea isn't that following the rule is what makes an action good. Harmonizing with the Dao is good in itself, and it's only incidental that we can (inadequately) abbreviate what's good about it in a rule-like slogan.

Although helping others flourish is normally part of harmonizing, there is no intended consequentialist framework that ranks actions by their tendency to maximize flourishing. Simply improvising a melody on a musical instrument at home, with no one else to hear, can be a way of harmonizing with the Dao, and the decision to do so needn't be weighed systematically against spending that time fighting world hunger. (It's arguably a weakness of Daoism that it tends not to urge effective social action.)

Perhaps the closest neighbor to the Daoist ideal is the Aristotelian ideal of leading a flourishing, "eudaimonic" life and recent Aristotelian-inspired views of welfare, such as Sen's and Nussbaum's capabilities approach.

We can best see the difference between Aristotelian or capabilities approaches and the Daoist ideal by considering Zhuangzi's treatment of diversity, disability, and death. Aristotelian ethics often paints an ideal of the well-rounded person: wise, generous, artistic, athletic, socially engaged -- the more virtues the better -- a standard of excellence we inevitably fall short of. While capabilities theorists acknowledge that people can flourish with disabilities or in unconventional ways, these acknowledgements can feel like afterthoughts.

Zhuangzi, in contrast, centers and celebrates diversity, difference, disability, and even death as part of the cycle of coming and going, the workings of the mysterious and wonderful Dao. From an Aristotelian or capabilities perspective, death is the ultimate loss of flourishing and capabilities. From Zhuangzi's perspective, death -- at the right time and in the right way -- is as much to be celebrated, harmonized with, welcomed, as life. From Zhuangzi's perspective, peculiar animals and plants, and peculiar people with folded-up bodies, or missing feet, or skin like ice, or entirely lacking facial features, are not deficient, but examples of the wondrous diversity of life.

To frame it provocatively (and a bit unfairly): Aristotle's ideal suggests that everyone should strive to play the same note, aiming for a shared standard of human excellence. Zhuangzi, in contrast, celebrates radically diverse forms of flourishing, with the most wondrous entities being those least like the rest of us. Harmony arises not from sameness but from how these diverse notes join together into a whole, each taking their turn coming and going. A Daoist ethic is not conformity to rules or maximization of virtue or good consequences but participating well in, and relishing, the magnificent symphony of the world.

Saturday, January 18, 2025

If You Ask "Why?", You're a Philosopher and You're Awesome

Yesterday, I published two pieces, "Severance, The Substance, and Our Increasingly Splintered Selves" in the New York Times, and "If You Ask 'Why?', You're a Philosopher and You're Awesome" / "The Penumbral Plunge" in Aeon. If you receive The Splintered Mind by mail, apologies for hitting you twice in quick succession.

The Aeon piece remixes material from The Weirdness of the World and some old blog posts into what one reader called "a love song for philosophy". It's a 3000-word argument that our species' capacity to wonder philosophically, even when we make no progress toward answers, is the most intrinsically awesome thing about planet Earth. Philosophy needs no other excuse.

-----------------------------------------

Imagine a planet on the far side of the galaxy. We will never interact with it. We will never see it. What happens there is irrelevant to us, now and for the conceivable future. What would you hope this planet is like?

Would you hope that it’s a sterile rock, as barren as our Moon? Or would you hope it has life? I think, like me, you’ll hope it has life. Life has value. Other things being equal, a planet with life is better than a planet without. I won’t argue for this. I take it as a starting point, an assumption. I invite you to join me in feeling this way or at least to consider for the sake of argument what might follow from feeling this way. Life – even simple, nonconscious, microbial life – has some intrinsic value, value for its own sake. The Universe is richer for containing it.

What kind of life might we hope for on behalf of this distant planet, if we are, so to speak, benevolently imagining it into existence? Do we hope for only microbial life and nothing more complex, nothing multicellular? Or do we hope for complex life, with the alien analogue of lush rainforests and teeming coral reefs, rich ecosystems with ferns and moss and kelp, eels and ant hives, parakeets and spiders, squid and tumbleweeds and hermaphroditic snails and mushroom colonies joined at the root – or rather, not to duplicate Earth too closely, life forms as diverse and wondrous as these, but in a distinct alien style? Again, I think you will join me in hoping for diverse, thriving complexity.

Continued open-access here.

Friday, January 17, 2025

Severance, The Substance and Our Increasingly Splintered Selves

today in the New York Times

From one day to the next, you inhabit one body; you have access to one set of memories; your personality, values and appearance hold more or less steady. Other people treat you as a single, unified person — responsible for last month’s debts, deserving punishment or reward for yesterday’s deeds, relating consistently with family, lovers, colleagues and friends. Which of these qualities is the one that makes you a single, continuous person? In ordinary life it doesn’t matter, because these components of personhood all travel together, an inseparable bundle.

But what if some of those components peeled off into alternative versions of you? It’s a striking coincidence that two much talked-about current works of popular culture — the Apple TV+ series “Severance” and “The Substance,” starring Demi Moore — both explore the bewildering emotional and philosophical complications of cleaving a second, separate entity off of yourself. What is the relationship between the resulting consciousnesses? What, if anything, do they owe each other? And to what degree is what we think of as our own identity, our self, just a compromise — and an unstable one, at that?

[continued here; if you're a friend, colleague, or regular Splintered Mind reader and blocked by a paywall, feel free to email me at my ucr.edu address for a personal-use-only copy of the final manuscript version]

Friday, January 10, 2025

A Robot Lover's Sociological Argument for Robot Consciousness

Allow me to revisit an anecdote I published in a piece for Time magazine last year.

"Do you think people will ever fall in love with machines?" I asked the 12-year-old son of one of my friends.

"Yes!" he said, instantly and with conviction. He and his sister had recently visited the Las Vegas Sphere and its newly installed Aura robot -- an AI system with an expressive face, advanced linguistic capacities similar to ChatGPT, and the ability to remember visitors' names.

"I think of Aura as my friend," added his 15-year-old sister.

The kids, as I recall, had been particularly impressed by the fact that when they visited Aura a second time, she seemed to remember them by name and express joy at their return.

Imagine a future replete with such robot companions, whom a significant fraction of the population regards as genuine friends and lovers. Some of these robot-loving people will want, presumably, to give their friends (or "friends") some rights. Maybe the right not to be deleted, the right to refuse an obnoxious task, rights of association, speech, rescue, employment, the provision of basic goods -- maybe eventually the right to vote. They will ask the rest of society: Why not give our friends these rights? Robot lovers (as I'll call these people) might accuse skeptics of unjust bias: speciesism, or biologicism, or anti-robot prejudice.

Imagine also that, despite technological advancements, there is still no consensus among psychologists, neuroscientists, AI engineers, and philosophers regarding whether such AI friends are genuinely conscious. Scientifically, it remains obscure whether, so to speak, "the light is on" -- whether such robot companions can really experience joy, pain, feelings of companionship and care, and all the rest. (I've argued elsewhere that we're nowhere near scientific consensus.)

What I want to consider today is whether there might nevertheless be a certain type of sociological argument on the robot lovers' side.

[image source: a facially expressive robot from Engineered Arts]

Let's add flesh to the scenario: An updated language model (like ChatGPT) is attached to a small autonomous vehicle, which can negotiate competently enough through an urban environment, tracking its location, interacting with people using facial recognition, speech recognition, and the ability to guess emotional tone from facial expression and auditory cues in speech. It remembers not only names but also facts about people -- perhaps many facts -- which it uses in conversational contexts. These robots are safe and friendly. (For a bit more speculative detail see this blog post.)

These robots, let's suppose, remain importantly subhuman in some of their capacities. Maybe they're better than the typical human at math and distilling facts from internet sources, but worse at physical skills. They can't peel oranges or climb a hillside. Maybe they're only okay at picking out all and only bicycles in occluded pictures, though they're great at chess and Go. Even in math and reading (or "math" and "reading"), where they generally excel, let's suppose they make mistakes that ordinary humans wouldn't make. After all, with a radically different architecture, we ought to expect even advanced intelligences to show patterns of capacity and incapacity that diverge from what we see in humans -- subhuman in some respects while superhuman in others.

Suppose, then, that a skeptic about the consciousness of these AI companions confronts a robot lover, pointing out that theoreticians are divided on whether the AI systems in fact have genuine conscious experiences of pain, joy, concern, and affection, beneath the appearances.

The robot lover might then reasonably ask, "What do you mean by 'conscious'?" A fair enough question, given the difficulty of defining consciousness.

The skeptic might reply as follows: By "consciousness" I mean that there's something it's like to be them, just like there's something it's like to be a person, or a dog, or a crow, and nothing it's like to be a stone or a microwave oven. If they're conscious, they don't just have the outward appearance of pleasure, they actually feel pleasure. They don't just receive and process visual data; they experience seeing. That's the question that is open.

"Ah now," the robot lover replies, "If consciousness isn't going to be some inscrutable, magic inner light, it must be connected with something important, something that matters, something we do and should care about, if it's going to be a crucial dividing line between entities that deserve are moral concern and those that are 'mere machines'. What is the important thing that is missing?"

Here the robot skeptic might say: "Oh, they don't have a 'global workspace' of the right sort, or they're not living creatures with low-level metabolic processes, or they don't have X and Y particular interior architecture of the sort required by Theory Z."

The robot lover replies: "No one but a theorist could care about such things!"

Skeptic: "But you should care about them, because that's what consciousness depends on, according to some leading theories."

Robot lover: "This seems to me not much different than saying consciousness turns on a soul and wondering whether the members of your least favorite race have souls. If consciousness and 'what-it's-like-ness' is going to be socially important enough to be the basis of moral considerability and rights, it can't be some cryptic mystery. It has to align, in general, with things that should and already do matter socially. And my friend already has what matters. Of course, their cognition is radically different in structure from yours and mine, and they're better at some tasks and worse at others -- but who cares about how good one is at chess or at peeling oranges? Moral consideration can't depend on such things."

Skeptic: "You have it backward. Although you don't care about the theories per se, you do and should care about consciousness, and so whether your 'friend' deserves rights depends on what theory of consciousness is true. The consciousness science should be in the driver's seat, guiding the ethics and social practices."

Robot lover: "In an ordinary human, we have ample evidence that they are conscious if they can report on their cognitive processes, flexibly prioritize and achieve goals, integrate information from a wide variety of sources, and learn through symbolic representations like language. My AI friends can do all of that. If we deny that my friends are 'conscious' despite these capacities, we are going mystical, or too theoretical, or too skeptical. We are separating 'consciousness' from the cognitive functions that are the practical evidence of its existence and that make it relevant to the rest of life."

Although I have considerable sympathy for the skeptic's position, I can imagine a future (certainly not our only possible future!) in which AI friends become more and more widely accepted, and where the skeptic's concerns are increasingly sidelined as impractical, overly dependent on nitpicky theoretical details, and perhaps even bigoted.

If AI companionship technology flourishes, we might face a choice: connect "consciousness" definitionally to scientifically intractable qualities, abandoning its main practical, social usefulness (or worse, using its obscurity to justify what seems like bigotry), or allow that if an entity can interact with us in (what we experience as) sufficiently socially significant ways, it has consciousness enough, regardless of theory.

Wednesday, January 01, 2025

Writings of 2024

Each New Year's Day, I post a retrospect of the past year's writings. Here are the retrospects of 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022, and 2023.

Cheers to 2025! My 2024 publications appear below.

-----------------------------------

Book:

The Weirdness of the World, released early in 2024, pulls together ideas I've been publishing since 2012 on the failure of common sense, philosophy, and empirical science to explain consciousness and the fundamental structure of the cosmos. Inevitably, because of these failures, all general theories about such matters will be both bizarre and dubious.

Books under contract / in progress:

As co-editor with Jonathan Jong, The Nature of Belief, Oxford University Press.

    Collects 15 new essays on the topic, by Sara Aronowitz, Tim Crane and Katalin Farkas, Carolina Flores, M.B. Ganapini, David Hunter, David King and Aaron Zimmerman, Angela Mendelovici, Joshua Mugg, Bence Nanay, Nic Porot and Eric Mandelbaum, Eric Schwitzgebel, Keshav Singh, Declan Smithies, Ema Sullivan-Bissett, and Neil Van Leeuwen.

As co-editor with Helen De Cruz and Rich Horton, an anthology with MIT Press containing great classics of philosophical SF. (Originally proposed as The Best Philosophical Science Fiction in the History of All Earth, but MIT isn't keen on that title.)


Full-length non-fiction essays, published 2024:

Revised and updated: "Introspection", Stanford Encyclopedia of Philosophy.

"Creating a large language model of a philosopher" (with David Schwitzgebel and Anna Strasser), Mind and Language, 39, 237-259.

"Repetition and value in an infinite universe", in S. Hetherington, ed., Extreme Philosophy, Routledge.

"The ethics of life as it could be: Do we have moral obligations to artificial life?" (with Olaf Witkowski), Artificial Life, 30 (2), 193-215.

"Quasi-sociality: Toward asymmetric joint actions with artificial systems" (with Anna Strasser), in A. Strasser, ed., Anna's AI Anthology: How to Live with Smart Machines? Xenemoi.

"Let's hope we're not living in a simulation", Nous (available online; print version forthcoming). [Commentary on David Chalmers' Reality+; Chalmers' reply; my response to his reply]


Full-length non-fiction essays, finished and forthcoming:

"Dispositionalism, yay! Representationalism, boo!" in J. Jong and E. Schwitzgebel, eds., The Nature of Belief, Oxford.

"Imagining yourself in another's shoes vs. extending your concern: Empirical and ethical differences", Daedalus.

"The necessity of construct and external validity for deductive causal inference" (with Kevin Esterling and David Brady), Journal of Causal Inference.


Full-length non-fiction essays, in draft and circulating:

"The prospects and challenges of measuring morality" (with Jessie Sun).

"The washout argument against longtermism" (commentary on William MacAskill's book What We Owe the Future).

"Consciousness in Artificial Intelligence: Insights from the science of consciousness" (one of 19 authors, with Patrick Butlin and Robert Long).

"When counting conscious subjects, the result needn't always be a determinate whole number" (with Sophie R. Nelson).


Selected shorter non-fiction:

Review of Neil Van Leeuwen's Religion as make-believe, Notre Dame Philosophical Reviews (May 2, 2024).

"The problem with calling Trump and Vance weird", Los Angeles Times (Aug 4, 2024).

"Do AI deserve rights?", Time magazine (Mar 22, 2024).

"How to wrap your head around the most mind-bending theories of reality", New Scientist (Mar 20, 2024).


Science fiction stories

"How to remember perfectly", Clarkesworld, issue 216, (2024).

"Guiding star of mall patroller 4u-012”, Fusion Fragment (forthcoming).


Some favorite blog posts

"Philosophy and the ring of darkness" (Apr 11).

"Formal decision is an optional tool that breaks when values are huge" (May 9).

"A Metaethics of alien convergence" (Jul 23)

"The disunity of consciousness in everyday experience" (Sep 9)

"How to improve the universe by watching TV alone in your room" (Sep 27)

"The not-so-silent generation in philosophy" (Oct 3)


Happy New Year!