Wednesday, November 27, 2024

Unified vs. Partly Disunified Reasoners

I've been thinking recently about partly unified conscious subjects (e.g., this paper in draft with Sophie R. Nelson). I've also been thinking a bit about how chains of logical reasoning depend on the unity of the reasoning subject. If I'm going to derive "P & Q" from premises "P" and "Q", I must be unified as a reasoner, at least to some degree. (After all, if Person 1 holds "P" and Person 2 holds "Q", "P & Q" won't be inferred.) Today, in an act of exceptional dorkiness (even for me), I'll bring these two threads together.

Suppose that {P1, P2, P3, ... Pn} is a set of propositions that a subject -- or more precisely, at least one part of a partly unified rational system -- would endorse without need of reasoning. The propositions are, that is, already believed. Water is wet; ice is cold; 2 + 3 = 5; Paris is the capital of France; etc. Now suppose that these propositions can be strung together in inference to some non-obvious conclusion Q that isn't among the system's previous beliefs -- the conclusion, for example, that 115 is not divisible by three, or that Jovenmar and Miles couldn't possibly have met in person last summer because Jovenmar spent the whole summer in Paris while Miles never left Riverside.

Let's define a fully unified reasoner as a reasoner capable of combining any elements from the set of propositions they believe {P1, P2, P3, ... Pn} in a single act of reasoning to validly derive any conclusion Q that follows deductively from {P1, P2, P3, ... Pn}. (This is of course an idealization. Fermat's Last Theorem follows from premises we all believe, but few of us could actually derive it.) In other words, any subset of {P1, P2, P3, ... Pn} could jointly serve as premises in an episode of reasoning. For example, if P2, P6, and P7 jointly imply Q1, the unified reasoner could think "P2, P6, P7, ah yes, therefore Q1!" If P3, P6, and P8 jointly imply Q2, the unified reasoner could also think "P3, P6, P8, therefore Q2."

A partly unified reasoner, in contrast, is capable only of combining some subsets of {P1, P2, P3, ... Pn}. Thus, not all conclusions that deductively follow from {P1, P2, P3, ... Pn} will be available to them. For example, the partly unified reasoner might be able to combine any of {P1, P2, P3, P4, P5} or any of {P4, P5, P6, P7, P8} while being unable to combine in reasoning any elements from P1-3 with any elements from P6-8. If Q3 follows from P1, P4, and P5, no problem, they can derive that. Similarly if Q4 follows from P5, P6, and P8. But if the only way to derive Q5 is by joining P1, P4, and P7, the partly disunified reasoning system will not be able to make that inference. They cannot, so to speak, hold both P1 and P7 in the same part of their mind at the same time. They cannot join these two particular beliefs together in a single act of reasoning.

[image: A Venn diagram of a partly unified reasoner, with overlap only at P4 and P5. Q3 is derivable from propositions in the left region, Q4 from propositions in the right region, and Q5 is not derivable from either region.]
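For the formally inclined, here's one way to make the contrast concrete: a minimal Python sketch (my own toy formalization, not anything from the paper in draft) in which jointly usable beliefs are grouped into clusters, and a conclusion counts as available only if some sufficient premise set fits inside a single cluster. The DERIVATIONS map and can_derive function are invented stand-ins for a real inference engine.

```python
# Toy model: propositions are strings; each candidate conclusion is paired
# with the premise sets that would suffice to derive it.

BELIEFS = {f"P{i}" for i in range(1, 9)}           # P1 ... P8

DERIVATIONS = {
    "Q3": [{"P1", "P4", "P5"}],
    "Q4": [{"P5", "P6", "P8"}],
    "Q5": [{"P1", "P4", "P7"}],   # derivable only by joining the two sides
}

def can_derive(conclusion, clusters):
    """A conclusion is available iff some sufficient premise set fits
    entirely within at least one cluster of jointly usable beliefs."""
    return any(premises <= cluster
               for premises in DERIVATIONS[conclusion]
               for cluster in clusters)

fully_unified = [BELIEFS]                          # one cluster: anything combines
partly_unified = [{"P1", "P2", "P3", "P4", "P5"},  # two clusters, overlapping
                  {"P4", "P5", "P6", "P7", "P8"}]  # only at P4 and P5

for q in ("Q3", "Q4", "Q5"):
    print(q, can_derive(q, fully_unified), can_derive(q, partly_unified))
# Q3: True True    Q4: True True    Q5: True False
```

The fully unified reasoner gets all three conclusions; the partly unified reasoner misses Q5, since no single cluster contains P1, P4, and P7 together.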

We might imagine an alien or AI case with a clean architecture of this sort. Maybe it has two mouths or two input-output terminals. If you ask the mouth or I/O terminal on the left, it says "P1, P2, P3, P4, P5, yes that's correct, and of course Q3 follows. But I'm not sure about P6, P7, P8 or Q4." If you ask the mouth or I/O terminal on the right, it endorses P4-P8 and Q4 but isn't so sure about P1-P3 and Q3.

The division needn't be crudely spatial. Imagine, instead, a situational or prompt-based division: If you ask nicely, or while flashing a blue light, the P1-P5 aspect is engaged; if you ask grumpily, or while flashing a yellow light, the P4-P8 aspect is engaged. The differential engagement needn't constitute any change of mind. It's not that the blue light causes the system as a whole to come to believe, as it hadn't before, P1-P3 and to suspend judgment about P6-P8. To see this, consider what is true at a neutral time, when the system isn't being queried and no lights are flashing. At that neutral time, the system simultaneously has the following pair of dispositions: to reason based on P1-P5 if asked nicely or in blue, and to reason based on P4-P8 if asked grumpily or in yellow.
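The same toy format (again, just an illustration, with contexts made up to match the example) makes the dispositional point vivid: the standing structure is a fixed mapping from querying context to engaged cluster, and nothing about that mapping changes when the light changes.

```python
LEFT  = {"P1", "P2", "P3", "P4", "P5"}
RIGHT = {"P4", "P5", "P6", "P7", "P8"}

def engaged_cluster(context):
    """Which beliefs can be combined in one act of reasoning,
    given how the system happens to be queried."""
    return LEFT if context in ("nice", "blue") else RIGHT

# At a neutral time no query is in play, but both dispositions are
# simply there in the standing definition, unexercised.
print(engaged_cluster("blue"))    # the P1-P5 aspect
print(engaged_cluster("yellow"))  # the P4-P8 aspect
```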

Should we say that there are two discrete, distinct reasoners rather than one partly unified system? That way of thinking faces at least two inconveniences. First, any change in P4 or P5 would be a change in both reasoners, with no need for one to communicate it to the other, as would normally be the case with distinct reasoners. Second, massive overlap cases -- say P1-P999 and P2-P1000 -- seem more naturally and usefully modeled as a single reasoner with a quirk (not being able to think P1 and P1000 jointly, but otherwise normal), rather than as two distinct reasoners.

But wait, we're not done! I can make it weirder and more complicated, by varying the type and degree of disunity. The simple model above assumes discrete all-or-none availability to reasoning. But we might also imagine:

(a.) Varying joint probabilities of combination. For example, if P1 enters the reasoning process, P2 might have an 87% chance of being accessed if relevant, P3 a 74% chance, ... and P8 a 10% chance.

(b.) Varying confidence. If asked in blue light, the partly disunified entity might have 95% credence in P1-P5 and 80% credence in P6-P8. If asked in yellow light, it might have 30% credence in P1-P3 and 90% credence in P4-P8.

(c.) Varying specificity. Beliefs of course don't come divided into neatly countable packages. Maybe the left side of the entity has only a hazy sense that something like P8 is true. If P8 is that Paris is in France, the left side might only be able to reason on the coarser proposition that Paris is in France-or-Germany-or-Belgium. If P8 is that the color is exactly scarlet #137, the left side might only be able to reason on the proposition that the color is some type of red.

Each of (a)-(c) admits of multiple degrees, so that the unity/disunity or integration/disintegration of a reasoning system is a complex, graded, multidimensional phenomenon.
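In the same toy spirit, here's how (a)-(c) might look (the numbers are just the made-up ones from above, and the sampling function is my own illustration):

```python
import random

# (a) Probability that each further belief gets accessed once P1 is in play.
ACCESS_GIVEN_P1 = {"P2": 0.87, "P3": 0.74, "P8": 0.10}

# (b) Credence in a belief depends on how the system is engaged.
CREDENCE = {
    "blue":   {"P1": 0.95, "P8": 0.80},
    "yellow": {"P1": 0.30, "P8": 0.90},
}

# (c) The left side has only a coarsened version of P8's content.
P8_FULL      = "Paris is in France"
P8_COARSENED = "Paris is in France, Germany, or Belgium"

def accessible_premises(seed=None):
    """Sample which beliefs join P1 in a single episode of reasoning."""
    rng = random.Random(seed)
    return {"P1"} | {p for p, prob in ACCESS_GIVEN_P1.items()
                     if rng.random() < prob}

print(accessible_premises())                        # varies from run to run
print(CREDENCE["blue"]["P8"], CREDENCE["yellow"]["P8"])
```

Run repeatedly, accessible_premises() will usually include P2, often P3, and only occasionally P8 -- a graded analogue of the hard cluster boundary above.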

So... just a bit of nerdy fun, with no actual application? Well, fun is excuse enough, I think. But still:

(1.) It's easy to imagine realistic near-future AI cases with these features. A system or network might have a core of shared representations or endorsable propositions, plus local terminals or agents with stored local representations, not all of which are shared with the center. If we treat that AI system as a reasoner, it will be a partly unified reasoner in the described sense -- see the sketch after this list. (See also my posts on memory and perception in group minds.)

(2.) Real cases of dissociative identity or multiple personality disorder might be modeled as involving partly disunified reasoning of this sort. Alter 1 might reason with P1-P5 and Alter 2 with P4-P8. (I owe this thought to Nichi Yes.) If so, there might not be a determinate number of distinct reasoners.

(3.) Maybe some more ordinary cases of human inconstancy or seeming irrationality can be modeled in this way: Viviana feeling religious at church but secular at work, or Brittany having one outlook when she's in a good, high-energy mood and a very different outlook when she's down in the dumps. While we could, and perhaps ordinarily would, model such splintering as temporal fluctuation, with beliefs coming and going, a partial unity model has two advantages: It applies straightforwardly even when the person is in neither situation (e.g., asleep), and it doesn't require the cognitive equivalent of frequent erasure and rewriting of the same propositions (everything endures, but some subsets cannot be simultaneously activated; see also Elga and Rayo 2021).

(4.) If there are cases of partial phenomenal (that is, experiential) unity, then we might expect there also to be cases of partial cognitive unity, and vice versa. Thus, a feasible model of the one helps increase the plausibility that there might be a feasible model of the other.
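Returning to (1.), here's a rough sketch of how a central-plus-local architecture maps onto the partial-unity model (the store names are invented for illustration, not any particular system's design):

```python
CENTRAL = {"P4", "P5"}                       # representations shared with the center
LOCAL = {
    "terminal_left":  {"P1", "P2", "P3"},    # stored locally, not shared
    "terminal_right": {"P6", "P7", "P8"},
}

def cluster_for(terminal):
    """Beliefs this terminal can combine in a single act of reasoning:
    the shared core plus its own local store."""
    return CENTRAL | LOCAL[terminal]

clusters = [cluster_for(t) for t in LOCAL]   # same structure as partly_unified above
```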
