Friday, June 28, 2024

Is the World Morally Well Ordered? Secular Versions of the "Problem of Evil"

Since 2003, I've regularly taught a large lower-division class called "Evil", focusing primarily on the moral psychology of evil (recent syllabus here).  We conclude by discussing the theological "problem of evil" -- the question of whether and how evil and suffering are possible given an omnipotent, omniscient, benevolent God.  Over the years I've been increasingly intrigued by a secular version of this question.

I see the secular "problem of evil" as this: Although no individual or collective has anything close to the knowledge or power of God as envisioned in mainstream theological treatments, the world is not wholly beyond our control; so there's at least some possibility that individuals and collectives can work toward making the world morally well ordered in the sense that the good thrive, the evil suffer, justice is done, and people get what they deserve.  So, how and to what extent is the world morally well ordered?  My aim today is to add structure to this question, rather than answer it.

(1.) We might first ask whether it would in fact be good if the world were morally well-ordered.  One theological response to the problem of evil is to argue no.  A world in which God ensured perfect moral order would be a world in which people lacked the freedom to make unwise choices, and freedom is so central to the value of human existence that it's overall better that we're free and suffer than that we're unfree but happy.

A secular analogue might be: A morally well-ordered world would, or might, require such severe impingements on our freedom as to not be worth the tradeoff.  It might, for example, require an authoritarian state that rewards, punishes, monitors, and controls in a manner that -- even if it could accurately sort the good from the bad -- fundamentally violates essential liberties.  Or it might require oppressively high levels of informal social control by peers and high-status individuals, detecting and calling out everyone's moral strengths and weaknesses.

(2.) Drawing from the literature on "immanent justice" -- with literary roots in, for example, Shakespeare and Dostoyevsky -- we might consider plausible social and psychological mechanisms of moral order.  In Shakespeare's Macbeth, one foul deed breeds another and another -- partly to follow through on and cover up the first and partly because one grows accustomed to evil -- until the evil is so extreme and pervasive that the revulsion and condemnation of others becomes inevitable.  In Dostoyevsky's Crime and Punishment, Raskolnikov torments himself with fear, guilt, and loss of intimacy (since he has a life-altering secret he cannot share with most others in his life), until he unburdens himself with confession.

We can ask to what extent it's true that such social and psychological mechanisms cause the guilty to suffer.  Is it actually empirically correct that those who commit moral wrongs end up unhappy as a result of guilt, fear, social isolation, and the condemnation of others?  I read Woody Allen's Crimes and Misdemeanors as arguing the contrary, portraying Judah as overall happier and better off as a result of murdering his mistress.

(3.) Drawing from the literature on the goodness or badness of "human nature", we can ask to what extent people are naturally pleased by their own and others' good acts and revolted by their own and others' evil.  I find the ancient Chinese philosopher Mengzi especially interesting on this point.  Although Mengzi acknowledges that the world isn't perfectly morally ordered ("an intent noble does not forget he may end up in a ditch; a courageous noble does not forget he may lose his head"; 3B1), he generally portrays the morally good person as happy, pleased by their own choices, and admired by others -- and he argues that our inborn natures inevitably tend this direction if we are not exposed to bad environmental pressures.

(4.) We can explore the extent to which moral order is socially and culturally contingent.  It is plausible that in toxic regimes (e.g., Stalinist Russia) the moral order is to some extent inverted, the wicked thriving and the good suffering.  We can aspire to live in a society where, in general -- not perfectly, of course! -- moral goodness pays off, perhaps through ordinary informal social mechanisms: "What goes around comes around."  We can consider what structures tend to ensure, and what structures tend to pervert, moral order.

Then, knowing this -- within the constraints of freedom and given legitimate diversity of moral opinion (and the lack of any prospect for a reliable moralometer) -- we can explore what we as individuals, or as a group, might do to help create a morally better ordered world.

[Dall-E interpretation of a moralometer sorting angels and devils, punishing the devils and rewarding the angels]

Wednesday, June 19, 2024

Conscious Subjects Needn't Be Determinately Countable: Generalizing Dennett's Fame in the Brain

It is, I suspect, an accident of vertebrate biology that conscious subjects typically come in neat, determinate bundles -- one per vertebrate body, with no overlap.  Things might be very different with less neurophysiologically unified octopuses, garden snails, split-brain patients, craniopagus twins, hypothetical conscious computer systems, and maybe some people with "multiple personality" or dissociative identity.


Consider whether the following two principles are true:

Transitivity of Unity: If experience A and experience B are each part of the conscious experience of a single subject at a single time, and if experience B and experience C are each part of the conscious experience of a single subject at a single time, then experience A and experience C are each part of the conscious experience of a single subject at a single time.

Discrete Countability: Except in marginal cases at spatial and temporal boundaries (e.g., someone crossing a threshold into a room), in any spatiotemporal region the number of conscious subjects is always a whole number (0, 1, 2, 3, 4...) -- never a fraction, a negative number, an imaginary number, an indeterminate number, etc.

Leading scientific theories of consciousness, such as Global Workspace Theory and Integrated Information Theory, are architecturally committed to neat bundles satisfying transitivity of unity and discrete countability.  Global Workspace Theories treat processes as conscious if they are available to, or represented in, "the" global workspace (one per conscious animal).  Integrated Information Theory contains an "exclusion postulate" according to which conscious systems cannot nest or overlap, and has no way to model partial subjects or indiscrete systems.  Most philosophical accounts of the "unity of consciousness" (e.g., Bayne 2010) also invite commitment to these two theses.

In contrast, Dennett's "fame in the brain" model of consciousness -- though a close kin to global workspace views -- is compatible with denying transitivity of unity and discrete countability.  In Dennett's model, a cognitive process or content is conscious if it is sufficiently "famous" or influential among other cognitive processes.  For example, if you're paying close attention to a sharp pain in your toe, the pain process will influence your verbal reports ("that hurts!"), your practical reasoning ("I'd better not kick the wall again"), your planned movements (you'll hobble to protect it), and so on; and conversely, if a slight movement in peripheral vision causes a bit of a response in your visual areas, but you don't and wouldn't report it, act on it, think about it, or do anything differently as a result, it is nonconscious.  Fame comes in degrees.  Something can be famous to different extents among different groups.  And there needn't even be a determinately best way of clustering and counting groups.

[Dall-E's interpretation of a "brain with many processes, some of which are famous"]

Here's a simple model of degrees of fame:

Imagine a million people.  Each person has a unique identifier (a number 1-1,000,000), a current state (say, a "temperature" from -10 to +10), and the capacity to represent the states of ten other people (ten ordered pairs, each containing the identifier and temperature of one other person).

If there is one person whose state is represented in every other person, then that person is maximally famous (a fame score of 999,999).  If there is one person whose state is represented in no other person, then that person has zero fame.  Between these extremes is of course a smooth gradation of cases.
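
For concreteness, here is a minimal Python sketch of this setup -- scaled down to a thousand people so it runs quickly, with the representation slots wired at random.  The scaling, the random wiring, and the variable names are my own illustrative choices, not part of the model itself.

```python
import random

# Scaled-down sketch of the fame model: each person has an identifier, a
# "temperature" state from -10 to +10, and slots for representing the states
# of ten other people. Population size and random wiring are illustrative.
N_PEOPLE = 1_000   # stand-in for the one million above
N_SLOTS = 10

random.seed(0)
temperature = {i: random.randint(-10, 10) for i in range(1, N_PEOPLE + 1)}

# Each person represents ten *other* people as (identifier, temperature) pairs.
represents = {}
for i in range(1, N_PEOPLE + 1):
    others = [j for j in range(1, N_PEOPLE + 1) if j != i]
    targets = random.sample(others, N_SLOTS)
    represents[i] = [(j, temperature[j]) for j in targets]

# Fame score: how many other people currently represent your state.
fame = {i: 0 for i in range(1, N_PEOPLE + 1)}
for slots in represents.values():
    for j, _temp in slots:
        fame[j] += 1

most = max(fame, key=fame.get)
print(f"Most famous person: {most}, fame score {fame[most]} (max possible {N_PEOPLE - 1})")
print(f"People with zero fame: {sum(1 for f in fame.values() if f == 0)}")
```

With random wiring, most people end up with a fame score near ten -- far from both zero and maximal fame -- which is just the smooth gradation of partial fame described above.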

If we analogize to cognitive processes, we might imagine the pain in the toe or the flicker in the periphery being just a little famous: Maybe the pain can affect motor planning but not speech, or cause a facial expression without influencing the stream of thought you're having about lunch.  Maybe the flicker guides a glance and causes a spike of anxiety but has no further downstream effects.  Maybe they're briefly reportable but not actually reported, and they have no impact on medium- or long-term memory, or they affect some sorts of memory but not others.

The "ignition" claim of global workspace theory is the empirically defensible (but not decisively established) assertion that there are few such cases of partial fame: Either a cognitive process has very limited effects outside of its functional region or it "ignites" across the whole brain, becoming widely accessible to the full range of influenceable processes.  The fame-in-the-brain model enables a different way of thinking that might apply to a wider range of cognitive architectures.

#

We might also extend the fame model to issues of unity and the individuation of conscious subjects.

Start with a simple case: the same setup as before, but with two million people and the following constraint: Processes numbered 1 to 1,000,000 can only represent the states of other processes in that same group of 1 to 1,000,000; and processes numbered 1,000,001 to 2,000,000 can only represent the states of other processes in that group.  The fame groups are disjoint, as if on different planets.  Adapted to the case of experiences: Only you can feel your pain and see your peripheral flicker (if anyone does), and only I can feel my pain and see my peripheral flicker (if anyone does).

This disjointedness is what makes the two conscious subjects distinct from each other.  But of course, we can imagine less disjointedness.  If we eliminate disjointedness entirely, so that processes numbered 1 to 2,000,000 can each represent the states of any process from 1 to 2,000,000, then our two subjects become one.  The planets are entirely networked together.  But partial disjointedness is also possible: Maybe processes can represent the states of anyone within 1,000,000 of their own number (call this the Within a Million case).  Or maybe processes numbered 950,001 to 1,050,000 can be represented by any process from 1 to 2,000,000 but every process below 950,001 can only be represented by processes 1 to 1,050,000 and every process above 1,050,000 can only be represented by processes 950,001 to 2,000,000 (call this the Overlap case).
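
These connectivity regimes are easy to state as explicit rules.  Here is a sketch scaled down by a factor of a thousand (2,000 processes, so "within a million" becomes "within a thousand"); the function names and the scaling are invented for illustration.

```python
# Scaled-down connectivity rules: 2,000 processes instead of 2,000,000, so
# "within a million" becomes "within a thousand". Function names and scaling
# are illustrative.
N = 2_000          # total processes, standing in for 2,000,000
HALF = N // 2      # 1,000, standing in for 1,000,000

def disjoint(m, n):
    """m can represent n's state only if both are on the same 'planet'."""
    return (m <= HALF) == (n <= HALF)

def merged(m, n):
    """Any process can represent any other: the planets are fully networked."""
    return True

def within_a_thousand(m, n):
    """m can represent n's state only if their numbers differ by at most HALF."""
    return abs(m - n) <= HALF

def overlap(m, n):
    """Processes 951-1,050 (standing in for 950,001-1,050,000) can be
    represented by anyone; lower-numbered processes only by 1-1,050;
    higher-numbered processes only by 951-2,000."""
    low, high = HALF - 50, HALF + 50   # 950 and 1,050 in the scaled model
    if low < n <= high:
        return True
    if n <= low:
        return m <= high
    return m > low

def audience(n, can_represent):
    """All the processes that could represent process n's state."""
    return {m for m in range(1, N + 1) if m != n and can_represent(m, n)}

print(len(audience(1, disjoint)))           # 999: only its own planet
print(len(audience(1, merged)))             # 1,999: everyone else
print(len(audience(1, within_a_thousand)))  # 1,000: processes 2 through 1,001
print(len(audience(1, overlap)))            # 1,049: processes 2 through 1,050
```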

The Overlap case might be thought of as two discrete subjects with an overlapping part.  Subject A (1 to 1,050,000) and Subject B (950,001 to 2,000,000) each have their private experiences, but there are also some shared experiences (whenever processes 950,001 to 1,050,000 become sufficiently famous in the range constituting each subject).  Transitivity of Unity thus fails: Subject A experiences, say, a taste of a cookie (process 312,421 becoming famous across processes 1 - 1,050,000) and simultaneously a sound of a bell (process 1,000,020 becoming famous across processes 1 - 1,050,000); while Subject B experiences that same sound of a bell alongside the sight of an airplane (both of those processes being famous across processes 950,001 - 2,000,000).  Cookie and bell are unified in A.  Bell and airplane are unified in B.  But no subject experiences the cookie and airplane simultaneously.
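
The failure of Transitivity of Unity here can be checked mechanically.  Below is a small sketch using the numbers above; identifying a subject with the range of processes constituting it, and tagging each content with the subjects across whose ranges it is famous, is my own illustrative framing.

```python
# Sketch of the Overlap case. A subject is identified with the range of
# processes constituting it; each experiential content is tagged with the
# subjects across whose ranges it is famous. This framing (and the `unified`
# helper) is an illustrative assumption.
SUBJECT_A = "processes 1 to 1,050,000"
SUBJECT_B = "processes 950,001 to 2,000,000"

famous_across = {
    "cookie":   {SUBJECT_A},             # process 312,421: famous only in A's range
    "bell":     {SUBJECT_A, SUBJECT_B},  # process 1,000,020: famous in both ranges
    "airplane": {SUBJECT_B},             # famous only in B's range
}

def unified(x, y):
    """True if some single subject experiences both x and y."""
    return bool(famous_across[x] & famous_across[y])

print(unified("cookie", "bell"))      # True:  unified in Subject A
print(unified("bell", "airplane"))    # True:  unified in Subject B
print(unified("cookie", "airplane"))  # False: no subject experiences both
```

Unity, so modeled, is just shared fame within a subject-constituting range -- and nothing guarantees that relation is transitive.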

In the Overlap case, Discrete Countability is arguably preserved, since it's plausible to say there are exactly two subjects of experience.  But it's more difficult to retain Discrete Countability in the Within a Million case.  There, if we want to count each distinct fame group as a separate subject, we will end up with two million different subjects: Subject 1 (1 to 1,000,001), Subject 2 (1 to 1,000,002), Subject 3 (1 to 1,000,003) ... Subject 1,000,001 (1 to 2,000,000), Subject 1,000,002 (2 to 2,000,000), ... Subject 1,999,999 (999,999 to 2,000,000), Subject 2,000,000 (1,000,000 to 2,000,000).  (There needn't be a middle subject with access to every process: Simply extend the case up to 3,000,000 processes.)  While we could say there would be two million discrete subjects in such an architecture, I see at least three infelicities:

First, person 1,000,002 might never be famous -- maybe even could never be famous, being just a low-level consumer whose destiny is only to make others famous.  If so, Subject 1 and Subject 2 would always have, perhaps even necessarily would always have, exactly the same experiences in almost exactly the same physical substrate, despite being, supposedly, discrete subjects.  That is, at least, a bit of an odd result.

Second, it becomes too easy to multiply subjects.  You might have thought, based on the other cases, that a million processes is what it takes to generate a human subject, and that with two million processes you get either two human subjects or one large subject.  But now it seems that, simply by linking those two million processes together by a different principle (with about 1.5 times as many total connections), you can generate not just two but a full two million human subjects.  It turns out to be surprisingly cheap to create a plethora of discretely different subjective centers of experience.

Third, the model I've presented is simplified in a certain way: It assumes that there are two million discrete, countable processes that could potentially be famous or create fame in others by representing them.  But cognitive processes might not in fact be discrete and countable in this way.  They might be more like swirls and eddies in a turbulent stream, and every attempt to give them sharp boundaries and distinct labels might to some extent be only a simplified model of a messy continuum.  If so, then our two million discrete subjects would themselves be a simplified model of a messy continuum of overlapping subjectivities.

The Within a Million case, then, might be best conceptualized not as a case of one subject of experience, nor two, nor two million, but rather as a case that defies any such simple numerical description, contra Discrete Countability.
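
If one did insist on the naive counting scheme, the number of distinct potential fame groups in a scaled-down analogue is easy to compute -- nearly one candidate "subject" per process, which is just the cheap multiplication of subjects complained about above.  The scale and the counting scheme here are, again, purely illustrative.

```python
# Counting distinct potential fame groups in a scaled-down Within-a-Thousand
# case: 2,000 processes, each representable by anyone within 1,000 of its
# number. Treating each process's potential audience as a candidate "subject"
# is the naive scheme discussed above.
N = 2_000
HALF = N // 2

def group(n):
    """Everyone within HALF of process n (as an inclusive range of numbers)."""
    return (max(1, n - HALF), min(N, n + HALF))

distinct_groups = {group(n) for n in range(1, N + 1)}
print(len(distinct_groups))   # 1,999: nearly one candidate "subject" per process
```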

#

This is abstract and far-fetched, of course.  But once we have stretched our minds in this way, it becomes, I think, easier to conceive of the possibility that some real cases (cognitively partly disunified mollusks, for example, or people with unusual conditions or brain structures, or future conscious computer systems) might defy transitivity of unity and discrete countability.

What would it be like to be such an entity / pair of entities / diffuse-bordered-uncountable-groupish thing?  Unsurprisingly, we might find such forms of consciousness difficult to imagine with our ordinary vertebrate concepts and philosophical tools derived from our particular psychology.

Tuesday, June 11, 2024

The Meta-Epistemic Objection to Longtermism

According to "longtermism" (as I'll use the term), our thinking should be significantly influenced by our expectations for the billion-plus-year future.  In a paper in draft, I argue, to the contrary, that our thinking should be not at all influenced by our expectations for the billion-year-plus future.  Every action has so vastly many possible positive and negative future consequences that it's impossible to be justified in expecting that any action currently available to us will have a non-negligible positive impact that far into the future.  Last week, I presented my arguments against longtermism to Oxford's Global Priorities Institute, the main academic center of longtermist thinking.

The GPI folks were delightfully welcoming and tolerant of my critiques, and they are collectively nuanced enough in their thinking to already have anticipated the general form of all of my main concerns in various places in print.  What became vivid for me in discussion was the extent to which my final negative conclusion about longtermist reasoning depends on a certain meta-epistemic objection -- that is, the idea that our guesses about the good or bad consequences of our actions for the billion-year-plus future are so poorly grounded that we are epistemically better off not even guessing.

The contrary position, seemingly held by several of the audience members at GPI, is that, sure, we should be very epistemically humble when trying to estimate good or bad consequences for the billion-year-plus future -- but we might nonetheless find ourselves, after discounting for our likely overconfidence and lack of imagination and modestly weighing up all the uncertainties, judging that action A really would be better than action B in the billion-plus-year future; and then it's perfectly reasonable to act on this appropriately humble, appropriately discounted judgment.

They are saying, essentially, play the game carefully.  I'm saying don't play the game.  So why do I think it's better not even to play the game?  I see three main reasons.

[image of A minus delta < B plus delta, overlaid with a red circle and crossout line]

First, longtermist reasoning requires cognitive effort.  If the expected benefit of longtermist reasoning is (as I suggest in one estimate in the full-length essay draft) one quadrillionth of a life, it might not be worth a millisecond of cognitive effort to try to get the decision right.  One's cognitive resources would be better expended elsewhere.
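
One crude way to make that comparison concrete -- under the admittedly rough assumption, which is my own back-of-the-envelope framing rather than anything in the essay draft, that a slice of cognitive effort "costs" roughly the corresponding fraction of a life:

```python
# Back-of-the-envelope comparison. The assumption that a millisecond of
# thought costs roughly a millisecond's worth of a life is a crude,
# illustrative framing, not an estimate from the essay draft.
seconds_per_life = 80 * 365.25 * 24 * 3600    # ~2.5 billion seconds in an 80-year life
expected_benefit_in_lives = 1e-15             # one quadrillionth of a life
effort_in_lives = 1e-3 / seconds_per_life     # one millisecond, as a fraction of a life

print(f"Expected benefit: {expected_benefit_in_lives:.0e} lives")
print(f"Cost of a millisecond of thought: ~{effort_in_lives:.0e} lives")
print("Benefit exceeds cost?", expected_benefit_in_lives > effort_in_lives)
```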

Now one response to this -- a good response, I think, at least for a certain range of thinkers in a certain range of contexts -- is that reflecting about the very distant future is fun, engaging, mind-opening, and even just intrinsically worthwhile.  I certainly wouldn't want to shame anyone just for thinking about it!  (Among other things, I'm a fan of science fiction.)  But I do think we should bear the cost argument in mind to the extent our aim is the efficient use of cognitive labor for improving the world.  It's likely that there are many more productive topics to throw your cognitive energies at, if you want to positively influence the future -- including, I suspect, just being more thoughtful about the well-being of people in your immediate neighborhood.

Second, longtermist reasoning adds noise and error into the decision-making process.  Suppose that when considering the consequences of action A vs. action B for "only" the next thousand years, you come to the conclusion that action A would be better.  But then, upon further reflection, you conclude -- cautiously, with substantial epistemic discounting -- that looking over the full extent of the next billion-plus years, B in fact has better expected consequences.  The "play carefully" and "don't play" approaches now yield different verdicts about what you should do.  "Play carefully" says B.  "Don't play" says A.  Which is likely to be the better policy, as a general matter?  Which is likely to lead to better decisions and consequences overall, across time?

Let's start with an unflattering analogy.  I'll temper it in a minute.  Suppose that you favor action A over action B on ordinary evidential grounds, but then after summing all those up, you make the further move of consulting a horoscope.  Assuming horoscopes have zero evidential value, the horoscope adds only noise and error to the process.  If the horoscope "confirms" A, then your decision is the same as it would have been.  If the horoscope ends up tilting you toward B, then it has led you away from your best estimate.  It's better, of course, for the horoscope to play no role.

Now what if the horoscope -- or, to put it more neutrally, Process X -- adds a tiny bit of evidential value -- say, one trillionth of the value of a happy human life?  That is, suppose Process X says "Action B will increase good outcomes by trillions of lives".  You then discount the output of Process X a lot -- maybe by 24 orders of magnitude.  After this discounting of its evidential value, you consequently increase your estimate of the value of Action B by one trillionth of a life.  In that case, almost no practical decisions that involve the complex weighing up of competing costs and benefits should be such that the tiny Process X difference is sufficient to rationally shift you from choosing A to choosing B.  You would probably be better off thinking a bit more carefully about other pros and cons.  Furthermore, I suspect that -- as a contingent matter of human psychology -- it will be very difficult to give Process X only the tiny, tiny weight it deserves.  Once you've paid the cognitive costs of thinking through Process X, that factor will loom salient for you and be more than a minuscule influence on your reasoning.  As a result, incorporating Process X into your reasoning will add noise and error, of a similar sort to the pure horoscope case, even if it has some tiny predictive value.
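
Here is a small simulation sketch of that noise-and-error worry.  The distributional assumptions -- how far apart A and B typically look on ordinary evidence, and how much psychological weight a salient-but-nearly-worthless Process X actually ends up getting -- are invented purely for illustration.

```python
import random

# Illustrative simulation: how often does Process X flip a decision between A
# and B? The distributions below are invented for illustration only.
random.seed(0)
TRIALS = 100_000

flips_properly_discounted = 0
flips_psychologically_salient = 0

for _ in range(TRIALS):
    # Estimated advantage of A over B on ordinary evidential grounds,
    # in units of lives (positive means A looks better).
    ordinary_margin = random.gauss(0, 1)

    # Process X's output, properly discounted, is worth ~a trillionth of a life.
    proper_weight = random.choice([-1, 1]) * 1e-12
    # But once you've paid the cost of thinking it through, it looms larger.
    salient_weight = random.choice([-1, 1]) * 0.5

    if (ordinary_margin > 0) != (ordinary_margin + proper_weight > 0):
        flips_properly_discounted += 1
    if (ordinary_margin > 0) != (ordinary_margin + salient_weight > 0):
        flips_psychologically_salient += 1

print(f"Decisions flipped by a properly discounted Process X: "
      f"{flips_properly_discounted} of {TRIALS}")
print(f"Decisions flipped by an over-weighted Process X: "
      f"{flips_psychologically_salient} of {TRIALS}")
```

On these made-up numbers, the properly discounted Process X flips essentially no decisions (so it wasn't worth computing), while the psychologically salient version flips a substantial fraction -- flips that, given Process X's near-zero evidential value, are almost all noise.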

Third, longtermist reasoning might have negative effects on other aspects of one’s cognitive life, for example, by encouraging inegalitarian or authoritarian fantasies, or a harmful neo-liberal quantification of goods, or self-indulgent rationalization, or a style of consequentialist thinking that undervalues social relationships or already suffering people.  This is of course highly speculative, contingent both on the psychology of the longtermists in question and on the value or disvalue of, for example, "neo-liberal quantification of goods".  But in general, cognitive complexity is fertile soil for cognitive vice.  Perhaps rationalization is the most straightforward version of this objection: If you are emotionally attracted to Action B -- if you want it to be the case that B is the best choice -- and if billion-year-plus thinking seems to favor Action B, it's plausible that you'll give that fact more than the minuscule-upon-minuscule weight it deserves (if my other arguments concerning longtermism are correct).

Now it might be the case that longtermist reasoning also has positive effects on the longtermist thinker -- for example, by encouraging sympathy for distant others, or by fruitfully encouraging more two-hundred-year thinking, or by encouraging a flexible openness of mind.  This is, I think, pretty hard to know; but longtermists' self-evaluations and peer evaluations are probably not a reliable source here.

[Thanks to all the GPI folks for comments and discussion, especially Christian Tarsney.]

Saturday, June 01, 2024

Two-Layer Metaphysics: Reconciling Dispositionalism about Belief with Underlying Representational Architecture

During the question period following UCR visiting scholar Brice Bantegnie's colloquium talk on dispositional approaches to the mind, one of my colleagues remarked -- teasingly, but also with some seriousness -- "one thing I don't like about you dispositionalists is that you deny cognitive science".  Quilty-Dunn and Mandelbaum express a similar thought in their 2018 critique of dispositionalism: Cognitive science works in the medium of representations.  Therefore (?), belief must be a representational state.  Therefore (?), defining belief wholly in terms of dispositional structures conflicts with the best cognitive science.

None of this is correct.  We can understand why not through what I'll call two-layer metaphysics.  Jeremy Pober's 2022 dissertation under my direction was in part about two-layer metaphysics.  Bantegnie also supports a type of two-layer metaphysics, though he and Pober otherwise have very different metaphysical pictures.  (Pober is a biological reductionist and Bantegnie a Ryle-inspired dispositionalist.)  Mandelbaum and I in conversation have also converged on this, recognizing that we can partly reconcile our views in this way.

Two-layer metaphysics emphasizes the distinction between (to somewhat misappropriate David Marr) the algorithmic and the implementational level, or alternatively between conceptual and nomological necessities, or between role and realizer, or between what makes it correct to say that someone believes some particular proposition and in virtue of what particular structures they actually do believe that proposition.  (These aren't equivalent formulations, but species of a genre.)

To get a clearer sense of this, it's helpful to consider space aliens.

Rudolfo, let's say, is an alien visitor from Alpha Centauri.  He arrives in a space ship, quickly learns English, Chinese, and Esperanto, tells amusing stories about his home world illustrated with slide shows and artifacts, enjoys eating oak trees whole and taking naps at the bottom of lakes, becomes a 49ers football fan, and finds employment as a tax attorney.  To all outward appearances, he integrates seamlessly into U.S. society; and although he's a bit strange in some ways, Rudolfo is perfectly comprehensible to us.  Let's also stipulate (though it's a separate issue) that he has all the kinds of conscious experiences you would expect: Feelings of joy and sadness, sensory images, conscious thoughts, and so on.

[Dall-E image of an alien filling out tax forms]

Does Rudolfo believe that taxes are due on April 15?  On a dispositionalist account of the sort I favor, as long as he stably possesses all the right sorts of dispositions, he does.  He is disposed to correctly file tax forms by that deadline.  He utters sentences like "Taxes are due on April 15", and he feels sincere when he says this.  He feels anxiety if a client risks missing that deadline.  If he learns that someone submitted their taxes on April 1, he concludes that they did not miss the deadline, etc.  He has the full suite of appropriate behavioral, experiential, and cognitive dispositions.  (By "cognitive dispositions" I mean dispositions to enter into other related mental states, like the disposition to draw relevant conclusions.)

Knowing all this, we know Rudolfo believes.  Do we also need to dissect him, or put him in some kind of scanner, or submit him to subtle behavioral tests concerning details of reaction time and such, to figure out whether he has the right kind of underlying architecture?  Here, someone committed to identifying belief in a strict way with the possession of a certain underlying architecture faces a dilemma.  Either they say no, no dissection, scan, or subtle cognitive testing is needed, or they say yes, a dissection, scan, or series of subtle cognitive tests is needed.

If no, then the architectural commitment is vacuous: It turns out that having the right set of dispositions is sufficient for having the right architecture.  So one might as well be a dispositionalist after all.

If yes, then we don't really know whether Rudolfo believes despite the behavioral and experiential patterns that would seem to be sufficient for believing.  This conclusion (1.) violates common sense and ordinary usage, and (2.) misses what we do and should care about in belief ascription.  If a hardcore cognitive realist were to say "nope, wrong architecture! that species has no beliefs!", we'd just have to invent a new word for what Rudolfo and his kind share in common with humans when we act and react in such patterns -- maybe belief*.  Rudolfo believes* that taxes are due on April 15.  That's why he's working so hard and reminding his clients.  But then "belief*" is the more useful term, as well as the more commonsensical, and it's probably what we meant, or should have meant, by "belief" all along.

Now it might be that in humans, or in Alpha Centaurians, or in some other Earthly or alien species, belief works by means of manipulating internal representations written in the language of thought.  That could be!  (I have my doubts, unless the view is given a very weak formulation.)  But even if we allow that possibility, the reason having that architecture counts as believing is that this architecture, in that species, happens to be the architecture that underwrites the dispositional pattern.

There are, then, so to speak, two layers here.  There's the dispositional characterization, which, if an entity matches it well enough, makes it true to describe them as someone who believes.  And then there's the underlying subpersonal architecture, which is how the belief is implemented in them at the detailed cognitive level.

Thus, my dissatisfied colleague, and Quilty-Dunn and Mandelbaum, are wrong: There is no conflict between a dispositional approach to belief and representationalist realism in cognitive science.  The metaphysical dispositionalist and the psychological representationalist are engaged in different tasks, and both can be correct -- or rather, both can be correct unless the representationalist also attempts the dispositionalist's broader metaphysical task (in which case they face the Rudolfo dilemma).

Does this make dispositionalism unscientific?  Not at all!  Two comparisons.  Personality traits: These can be defined dispositionally.  To be an extravert is nothing more or less than to have a certain dispositional profile -- that is, to tend to act and react in characteristically extraverted ways.  There can still be a science of dispositional profiles (most of personality psychology, I'd say); and there can also be a science of implementation (e.g., what subpersonal brain or lower-level cognitive structures explain the extravert's energy and sociality?).  Evolution: At a broad theoretical level, evolution just requires heritable traits with different rates of reproductive success.  At a lower level, we can look at genes as the architectural implementation.  One can work on the science of evolution at either layer or with an eye on both layers at once.