Wednesday, February 11, 2015

The Intrinsic Value of Self-Knowledge

April 2, I'll be a critic at a Pacific APA author-meets-critics session on Quassim Cassam's book Self-Knowledge for Humans. (Come!) In the last chapter of the book, Cassam argues that self-knowledge is not intrinsically valuable. It's only, he says, derivatively or instrumentally valuable -- valuable to the extent it helps deliver something else, like happiness.

I disagree. Self-knowledge is intrinsically valuable! It's valuable even if it doesn't advance some other project, valuable even if it doesn't increase our happiness. Cassam defends his view by objecting to three possible arguments for the intrinsic value of self-knowledge. I'll spot him those objections. Here are three other ways to argue for the intrinsic value of self-knowledge.

1. The Argument from Addition and Subtraction.

Here's what I want you to do: Imaginatively subtract our self-knowledge from the world while keeping everything else as constant as possible, especially our happiness or subjective sense of well-being. Now ask yourself: Is something valuable missing?

Now imaginatively add lots of self-knowledge to the world while keeping everything else as constant as possible. Now ask: Has something valuable been gained?

Okay, I see two big problems with this method of philosophical discovery. Both problems are real, but they can be partly addressed.

Problem 1: The subtraction and addition are too vague to imagine. To do it right, you need to get into details, and the details are going to be tricky.

Reply 1: Fair enough! But still: We can give it a try and take our best guess where it's leading. Suppose I suddenly knew more about why I'm drawn to philosophy. Wouldn't that be good, independent of further consequences? Or subtract: I think of myself as a middling extravert. Suppose I lose this knowledge. Stipulate again: To the extent possible, no practical consequences. Wouldn't something valuable be lost?

Alternatively, consider an alien culture on the far side of the galaxy. What would I wish for it? Would I wish for a culture of happy beings with no self-knowledge? Or, if I imaginatively added substantial self-knowledge to this culture, would I be imagining a better state of affairs in the universe? I think the latter.

Contrast with a case where addition and subtraction leave us cold: seas of iron in the planet's core. Unless there are effects on the planetary inhabitants, I don't care. Add or subtract away, whatever.

Problem 2: What these exercises reveal is only that I regard self-knowledge as something that has intrinsic value. You might differ. You might think: happy aliens, no self-knowledge, great! They're not missing anything important. You might think that unless some practical purpose is served by knowing your personality, you might as well not know.

Reply 2: This is just the methodological problem that's at the root of all value inquiries. I can't rationally compel you to share my view, if you start far enough away in value space. I can just invite you to consider how your own values fit together, suggest that if you think about it, you'll find you already do share these values with me, more or less.

2. The Argument from Nearby Cases.

Suppose you agree that knowledge in general is intrinsically valuable. A world of unreflective bliss would lack something important that a world of bliss plus knowledge would possess. I want my alien world to be a world with inhabitants who know things, not just a bunch of ecstatic oysters.

Might self-knowledge be an exception to the general rule? Here's one reason to think not: Knowledge of the motivations and values and attitudes of your friends and family, specifically, is intrinsically good. Set this up with an Argument from Subtraction: Subtracting from the world people's psychological knowledge of people intimate to them would make the world a worse place. Now do the Nearby Cases step: You yourself are one of those people intimate to you! It would be weird if psychological knowledge of your friends were valuable but psychological knowledge of yourself were not.

Unless you're a hedonist -- and few people, when they really think about it, are -- you probably think that there's some intrinsic value in the rich flourishing of human intellectual and artistic capacities. It seems natural to suppose that self-knowledge would be an important part of that general flourishing.

3. The Argument from Identity.

Another way to argue that something has intrinsic value is to argue that it is in fact identical to something that we already agree has intrinsic value.

So what is self-knowledge? On my dispositional view (see here and here), to know some psychological fact about yourself is to possess a suite of dispositions or capacities with respect to your own psychology. An example:

What is it to know you're an extravert? It's in part the capacity to say, truly and sincerely, "I'm an extravert". It's in part the capacity to respond appropriately to party invitations, by accepting them in anticipation of the good time you'll have. It's in part to be unsurprised to find yourself smiling and laughing in the crowd. It's in part to be disposed to conclude that someone in the room is an extravert. Etc.

My thought is: Those kinds of dispositions or capacities are intrinsically valuable, central to living a rich, meaningful life. If we subtract them away, we impoverish ourselves. Human life wouldn't be the same without this kind of self-attunement or structured responsiveness to psychological facts about ourselves, even if we might experience as much pleasure. And self-knowledge is not just some further thing floating free of those dispositional patterns that could be subtracted without taking them away too. Self-knowledge isn't some independent representational entity contingently connected to those patterns; it is those patterns.

You might notice that this third argument creates some problems for the straightforward application of the Argument from Addition and Subtraction. Maybe in trying to imagine subtracting self-knowledge from the world while holding all else constant to the extent possible, you were imagining or trying to imagine holding constant all those dispositions I just mentioned, like the capacity to say yes appropriately to party invitations. If my view of knowledge is correct, you can't do that. What this shows is that the Argument from Addition and Subtraction isn't as straightforward as it might at first seem. It needs careful handling. But that doesn't mean it's a bad argument.


I'd go so far as to say this: Self-knowledge, when we have it (which, I agree with Cassam, is less commonly than we tend to think), is one of the most intrinsically valuable things in human life. The world is a richer place because pieces of it can gaze knowledgeably upon themselves and the others around them.


Josh Shepherd said...

What kind of intrinsic value are we talking about? Does it give us reasons to treat beings with self-knowledge differently than beings without? Is this akin to the value of a work of art? Is it primarily practical - we have practical reason to pursue self-knowledge if we can? etc.

Eric Schwitzgebel said...

Why not all of the above, Josh? I guess in the first instance I'm thinking just something like "the universe is a better place with it than without it, all else held equal to the extent possible".

Then I suppose if you remove a being that has both X amount of pleasure and Y amount of self-knowledge from the universe you make the universe worse by something like X+Y rather than just X. (Artworks might have intrinsic value and the *power* to produce pleasure as their Y and X.) In terms of practicality, maybe I'll just stick with a Humean hypothetical imperative: IF among your aims is making the universe (specifically the part of the universe that is you) a better place, then by that token self-knowledge should be among your aims. But I think it's also possible simply to value self-knowledge for its own sake, if that's different. (I'm pretty liberal about what you can value for its own sake.)

Does this make sense?

Josh Shepherd said...

It all makes sense. I'm inclined to assent to the more-value-in-the-universe claim.

I suppose one place of importance for me would be the harms we might/might not impose on two kinds of entity by killing either: an entity with no self-knowledge (or no capacity for it), and an entity with self-knowledge (or capacity for it). Maybe self-knowledge is intrinsically valuable, but those with it do not deserve better treatment in virtue of this. (not sure)

Since this probably shades into degrees, relevant here could be what amounts/types of self-knowledge might influence views on treatment of entities with(out) self-knowledge.

Angra Mainyu said...

Hi, Eric,

The name 'intrinsic value' isn't entirely clear to me, but are we talking about something related to human evaluative attitudes? I.e., even if it's valuable for its own sake and not as a means to an end, the conception of 'valuable' seems to be dependent on human language, and through it, human minds.

For example, you say "if you think about it, you'll find you already do share these values with me, more or less." That seems plausible to me (though I'm not sure it yields the result that self-knowledge is intrinsically valuable; I'll test that below), as long as we're talking about human beings. But aliens, AI, etc., may well value things in radically different ways.
While I personally don't think this should be a problem (the truth-conditions of 'valuable' are given by human language, regardless of what aliens, AI, etc., do; maybe something is intrinsically valuable but not intrinsically alien-valuable, for some alien species), I just wanted to ask for clarification: is the claim limited to human beings, language, etc.?

That aside, I'd like to test the theory by considering some weird cases:

1. Let's say that Bob is a paperclip maximizer. Would you say that increasing Bob's self-knowledge results in a better state of affairs in the universe, all other things equal?

2. What if the aliens are not happy, but are being horribly tortured by some other, more powerful aliens? Would greater self-knowledge in either the torturers or the victims result in a better overall state of affairs, assuming all other things (including the amount of torture, etc.) equal?

3. In the case of the Archipelagists (Continental entity or identity variant), is the world better, all other things equal, if the Continental entity gains more self-knowledge?

Or do you think that these are cases of robots and monsters that might break human moral systems?

Michel Clasquin-Johnson said...

" I can just invite you to consider how your own values fit together, suggest that if you think about it, you'll find you already do share these values with me, more or less."

OK. Stipulated. Now there are two of us.

But even if all seven billion of us agreed on this, we could still be wrong. Truth is not determined by democratic vote. Descartes' evil demon might be deluding us all.

The problem, as my Zoroastrian friend (?) above intimates, lies in the word "intrinsic". I would take that to mean "baked into the very fabric of the universe, as fundamental to its existence as time, space and fairy dust". OK, forget the fairy dust.

I take it you use the word in a less absolutist sense.

Michel Clasquin-Johnson said...

PS. the Earth's molten iron core creates a magnetic field, which deflects high-energy radiation. It is the only reason we are able to live this close to the sun. I suggest you start caring. ;-)

Eric Schwitzgebel said...

Thanks for the thoughtful comments, everyone!

Josh: Yes, I share the worry that is (I assume) implicit in this, that this thinking might somehow lead to a noxiously intellectualist ethics and undermine egalitarian attitudes toward the cognitively disabled. There should be ways to reconcile my view with egalitarian views of the intrinsic value of people; or if not, it's going to be a problem for hedonists as well (since some people seem capable of more pleasure than others) and probably for virtually any theory of value. So quite a can of worms there!

Eric Schwitzgebel said...

Angra: I hope that this reasoning can work on either subjective or objective accounts of value. I incline toward a theory of value that's on the knife's edge between subjective and objective, something like a "secondary quality" view, where we take some facts about people's (possibly idealized) moral reactions and rigidify them Kripke-style, then identify what is valuable with that rigidified class of objective event types.

On your 1, 2, and 3, I'll go with yes, yes, and yes. On 3 in particular, the crucial move in the story is the second narrator's recognition that despite their happiness, the Archipelegans lack something important in lacking self-knowledge. So he gives them that. This has the effect of creating the unhappy, violent Continental entity, but the second narrator's urge to give them self-knowledge is, I think, a proper recognition of the intrinsic value of self-knowledge.

Eric Schwitzgebel said...

Michel: I do care about the Earth's core! But only instrumentally because of those effects on us. On your bigger question, as I mentioned in my response to Angra -- and maybe should have been clearer about in the post -- I intend this reasoning to apply even for those who accept subjective views of value. If what's "valuable" is just a matter of what I (or we) happen to care about, there are still things that I regard as valuable for their own sake (happiness, knowledge) and others I care about only instrumentally (iron at the Earth's core). Maybe "intrinsically valuable" is a misleading phrase in that case -- so my objectivish-value tendencies are leaking out -- but the contrast with the view that self-knowledge is only instrumentally valuable should still be clear. Or so I hope!

Angra Mainyu said...


On the "secondary quality" rigidified alternative, would it be correct to understand that it would be rigidified based on human assessments? (which I would count as "objective", but philosophical terminology varies).
I'm thinking something like color: alien visual systems may vary widely, but any rigidification should be based on the human visual system (after all, humans use color language and we're using human language).

With regard to the Archipelagists, maybe it's better if they have more self-knowledge, all other things equal (AOTE), but my assessment is that it's not better if the Continental entity gains more self-knowledge, AOTE.
Similarly, I don't see greater self-knowledge in the torturers as better overall in 2, and the same goes for Bob's self-knowledge in 1.

So, back to your invitation: " I can just invite you to consider how your own values fit together, suggest that if you think about it, you'll find you already do share these values with me, more or less."

It seems to me that after thinking about it, I share some of those values with you (I would rather say some beliefs about what is overall better), but there are some others I don't.

Moreover, I seem to intuitively reckon that AOTE, self-knowledge in some sorts of entities makes the world overall better, but not in other sorts of entities, and the difference seems to be based on some properties of the minds of the entities in question (other than self-knowledge).

In particular, if the entity in question has a preference structure such that, upon reflection, all things considered given its values, [as a means to the end of furthering its goals] it ought to torture, kill, etc., innocent persons, cause horrible suffering, etc., for no moral reason but to make paperclips, or for fun, etc., I reckon its having greater self-knowledge AOTE (in particular, assuming the self-knowledge won't change the amount of suffering, death, etc., it causes) wouldn't make the world a better place.

I'm not sure how to break what looks like divergent intuitions. It may well be that, after ideal reflection, our intuitions would converge to the same conclusion. But how do we manage to go in that direction?

Eric Schwitzgebel said...

Angra: Thanks for pushing a bit more on this.

"rigidified based on human assessments? (which I would count as "objective", but philosophical terminology varies)" Yes, that was my thought.

I'm not sure why self-knowledge in such creatures wouldn't AOTE be better. There's something I'm missing so that I'm not getting the pull of the intuition. Take more ordinary cases: imagine someone vapid and/or evil. If we think aesthetic beauty has intrinsic value, AOTE (I'm inclined to think) the world is better if they play a beautiful piece by Chopin that no one else hears than if they don't. If we think intellectual accomplishment has intrinsic value, AOTE (I'm inclined to think), the world is better if they do some great formal mathematics. Same with self-knowledge? Now maybe it's even more better (if you'll excuse the phrase) if someone more admirable does those things, creating a kind of synergy of value or something.

Maybe my thinking here is driven mostly by the general case, and then I fail to see why these cases would qualify as an exception.

Angra Mainyu said...

Eric, thanks for your reply, and the interesting examples, which seem to indicate where some of our differences may lie. Perhaps, we're not using "good" and "better" in the same sense!

For example, you say that "if we think aesthetic beauty has intrinsic value...". But I think that aesthetic beauty (trivially) has intrinsic aesthetic value, which is to say it's aesthetically good for its own sake.
But I don't see a moral dimension to it.

For example, let's say that possible world W3 contains no minds (I hold theism is false, but I don't think that's a problem), and the world is such that there will be no minds in the future, either. But it's full of beautiful paintings (e.g., there is a planet like ours but which lost its atmosphere, but for whatever reason contains the paintings; say, they just popped into existence in a quantum freak event).

W4 is like W3, but only with seriously ugly paintings instead.
Is W3 better than W4?
In my assessment - at least, in the way I'm using 'better' - they're equal, and both neutral - neither good nor bad. On the other hand, W3 is aesthetically better, even if no one ever appreciates it, or has the chance to appreciate it.

But I'm getting from your assessment that you'd be inclined to say W3 is better. Is that assessment correct?

Angra Mainyu said...


Here's another example: let's say that on a very distant planet beyond the observable universe, Earth 2, there are beings (say, 2-humans) who are psychologically very much like us, but have a different auditory system. Now, if normal 2-humans were to experience what we experience when we listen to a beautiful piece by Chopin, they would find it as pleasant as we find it.
However, if 2-humans were to actually listen to a beautiful piece by Chopin - played by humans, with human instruments -, they would experience something very different - something slightly unpleasant to them, and having that particular experience would be slightly unpleasant to us too.
But 2-humans have their 2-Chopin, and their 2-musical instruments, and they listen to pieces that they experience in the way we experience beautiful pieces by Chopin - but their pieces are slightly ugly, not beautiful.
The difference stems from the fact that their hearing is sensitive to very different frequencies, so many of the frequencies produced by our instruments that we can hear are inaudible to them, and vice versa (one can construct a funnier scenario in the case of paintings, I think).
So, they make slightly ugly pieces, whereas we make beautiful pieces.
Would you say that Earth 2 is overall worse than Earth, AOTE?
I would say neither of them is better or worse than the other one, intuitively and in the way I'm using the words, though Earth is aesthetically better than Earth 2 (and Earth 2 is 2-aesthetically better than Earth).

I guess an alternative might be to say that Earth 2 is not aesthetically worse than Earth, because aesthetic beauty depends on how it's experienced. Please let me know if that's your view, but in that case, whether the vapid person is playing a beautiful or an ugly piece would depend on his own experiences, and in the case of W3 and W4, it seems nothing would be beautiful or ugly (and there are other consequences, but I'd like to ask what your take on Earth 2 is, before I go on).

chinaphil said...

Just to take the most contrary possible position:-

This discussion mostly assumes that self-knowledge has some value, either instrumental or intrinsic. But what if it has disvalue? The obvious framework would be Buddhism, where concentration on the self is a negative.

One might object that you could have Buddhist knowledge of the self - i.e. knowing that the self is an illusion. But that seems like a bit of a cheat to me, and not what we usually mean by self-knowledge. Certainly the example you gave of knowing whether you're the kind of person who has fun at parties seems at odds with Buddhist insight.

There's an assumption in here that knowledge requires attention, and I think that holds. I would say we can never know that to which we never direct our attention; and in general, coming to know (as opposed to just thinking about or imagining) requires some quite concentrated attention. So knowing yourself requires investing some time and energy in thinking about yourself, and that would not be attractive in a Buddhist ethic.

OTOH, there is also something about knowing which stops us paying attention. Once I know that Lima is the capital of Peru, I stop thinking about it. I don't spend any time thinking about basic arithmetic (unlike my small children). So perhaps that cancels out the argument above: perhaps the best thing from a Buddhist perspective would just be to know yourself and then never think about yourself again.

Or, to take a quite opposite perspective, what if the Buddhist idea is true, and self-knowledge involves knowing that our selves are illusions? Achieving nirvana? In utilitarian terms, wouldn't that be disastrous?

And speaking of utilitarianism... in a utilitarian ethic, knowledge of any person is valuable because it would help us work out the best value for that person. My knowledge of you is exactly as valuable as my knowledge of myself, because it would be exactly as good to give you more utility as it is to give me more utility. Now, in general, it is a fact that people know themselves better than others (even though we don't know ourselves well), and we spend more time thinking about ourselves than others. So in terms of making the world a better place, adding more self-attention and self-knowledge is an inefficient, inferior choice.

However, in practice, I think our use of time is so inefficient that there is no crowding out of the type I posit above, so I'm not sure this argument really holds.

Callan S. said...

Hi Eric,

Here's what I want you to do: Imaginatively subtract our self-knowledge from the world while keeping everything else as constant as possible, especially our happiness or subjective sense of well-being. Now ask yourself: Is something valuable missing?

I get the feeling, I think. But it also feels like asking the used car salesman about the value of used car salesmen.

I think perhaps some reasoning for not completely accepting pure practicality in regard to thoughts about the self lies way further out - as in: why is there a universe at all?

I'm not thinking that something will definitely come up that makes self-knowledge intrinsically valuable. I think that'd be an edge case.

But the reason why the universe exists at all could be so oddball weird that you wouldn't necessarily be getting practical with your self knowledge even if you went that way. The idea of derivatively or instrumentally valuable might turn out pure superstition, due to whatever wacked out reason there is a universe to begin with.

So I don't think there's any reason to be a zealot for the idea of intrinsically valuable self-knowledge. But at the same time I don't think there's anything (concrete at the moment) that forces the following of a 'derivatively or instrumentally valuable' paradigm, either. Perhaps it will come down to a particular type of 'derivatively or instrumentally valuable' in the end, but not the type various people think of right now. So don't be a zealot for 'intrinsically valuable' - but you don't need to be a zealot for 'derivatively or instrumentally valuable' either.

Subtracting from the world people's psychological knowledge of people intimate to them would make the world a worse place. Now do the Nearby Cases step: You yourself are one of those people intimate to you! It would be weird if psychological knowledge of your friends were valuable but psychological knowledge of yourself were not.

In regard to this and identity - ignoring practical concerns of losing such knowledge, I feel I'm just a creature. An animal living along through time. I feel my intellect is the rider of the horse of emotions - but in the end intellect guides the horse for the benefit of the horse or the horse's concerns. Some augmentation of the horse's concerns comes from intellect (trying to eat less sugary food than one would without intellect, for example, because of intellectual knowledge of health issues), but that complicates the removal/practicality issue.

I've certainly had the thought of human extinction in the past and feared the idea of there being no one who ever thinks like me ever again. That thinking, gone. But if it came down to humans still being there that feel things like me, then I'm okay with them going on, feeling as animals do, without self knowledge. The thought is just derived from feelings, in the end.

Fleur Jongepier said...

Interesting thoughts! I wonder how much the two of you really disagree (or: where the disagreement really lies). First of all the suggestion seems to be that the relation between self-knowledge and well-being is such that self-knowledge falls out as intrinsically valuable. The way I read Cassam’s chapter, he concludes precisely the opposite, namely that self-knowledge is extrinsically/instrumentally valuable in so far as it promotes well-being (and isn’t always even all that valuable in that respect, as Bob the paperclip maximiser and the unhappy aliens bring out.) So I wonder whether you accept his definition of what ‘intrinsically valuable’ comes down to? ('Something is intrinsically valuable just if it is valuable or desirable for its own sake rather than because it promotes some other good. If X is valuable only because it leads to Y, then X is extrinsically valuable as long as Y is valuable.')

But maybe cashing these questions out in terms of whether self-knowledge is intrinsically valuable or not is not helping to get a better grip on where the two of you disagree. It appears to me that the question of the value of self-knowledge plays out on at least two different levels, roughly conceptual and psychological. Your point, if I'm right, is that self-knowledge is intrinsically valuable because it is constitutive of well-being, and not just a way of (possibly) increasing it (as you put it: self-knowledge *just is* those patterns). So there's a kind of conceptual relation between self-knowledge and well-being such that we cannot have the latter without presupposing the former. I think I agree. But I wonder, couldn't an instrumentalist about the value of SK agree? Maybe what we need is a distinction between having the capacity for self-knowledge (which we cannot imagine a world without) and concrete applications of such a capacity. Self-knowledge in the last (roughly psychological) sense is not 'desirable for its own sake', since often enough one can be happy, autonomous, authentic, and so on, without self-knowledge. It isn't valuable in and of itself because half of the time it's just no good to know yourself. Which is not to deny that we cannot define or understand what it means to be happy/autonomous without helping ourselves to some *notion* of (possibility of; disposition for) self-knowledge. So couldn't one concede that self-knowledge is intrinsically valuable in the first sense (e.g. after having considered some other possible worlds and ecstatic oysters) but nonetheless hold that self-knowledge isn't intrinsically valuable in the second sense, the one concerning our actual world?

Another response would be to distinguish necessary conditions from claims about intrinsic value. E.g. one might say that it doesn't follow from the constitutive argument (conceptual connection between self-knowledge and well-being) that self-knowledge is intrinsically valuable—at most it follows that we need to presuppose some capacity for self-knowledge for the notion of well-being to be intelligible. Perhaps intelligibility is a weaker claim than being valuable. But again, maybe it’s better to just drop the notion of intrinsic value?

Eric Schwitzgebel said...

Shoot, lost track of comments for a few days (lots going on) -- apologies and replies coming!

Anonymous said...

What about the kind of self-knowledge that is rooted in, or can lead to, suffering - i.e., the kind found in depressed patients? I remember a discussion a while back on here where you pointed me towards research that suggested that people with depression have in many ways a more realistic grasp on reality than the mentally healthy. And the paper I read made the point that while we generally assume that a good grasp on reality is essential to wellbeing, positive illusions are in fact just as important, if not more so. Doesn't that suggest that there is some tension between self-knowledge and wellbeing?
Also, I think you're a bit premature in stating we value knowledge of other people's mental states. That might be true to a superficial degree, but just like in the depression vs. wellbeing case, what we might actually value are positive illusions that allow us to interact with people without really knowing them in the first place. Actually gaining true knowledge in this case might lead to a fall in wellbeing, especially if that knowledge gave us a worse view of ourselves. Your research often points to our poor self-knowledge skills; I don't see why our cognitive empathy would be any better.


Eric Schwitzgebel said...

Angra: Complicated questions, but good! I'm not sure what to make of aesthetic judgments in universes with no one to appreciate the artwork, but my inclination is to think that if the situation were parallel between Earth and Earth-2 (I think that was your setup) the worlds would be equally intrinsically good and equally aesthetically good. I think I'm more willing to relativize aesthetic than moral value. But that doesn't mean I wouldn't recognize more informed and more tasteful perceivers within a group.

Eric Schwitzgebel said...

Chinaphil: Fun, interesting questions! I could imagine a utilitarian saying that learning about others would be a more efficient use of resources, for more marginal gain -- an interesting thought -- though another consideration might be that self-knowledge has leveraged value, since some instances of it affect many of your actions; e.g., knowing that you are better at helping people with X than with Y might be very valuable in getting you to focus on doing X rather than Y.

On Buddhism: I've always had a bit of trouble understanding the "no self" view -- partly because Buddhism is such a large and varied tradition that there are many versions of that view -- but I agree that since knowledge can be implicit, it is consistent with turning your attention away from yourself and just acting skillfully in the world, in a compassionate way, with an implicit appreciation of what "you" can effectively do and not do, what tends to lead to "your" suffering vs. equanimity, etc.; so I think at least some forms of Buddhism ought to allow the value of "self"-knowledge as long as the "self" isn't taken as a metaphysically robust thing of the wrong sort. Yes?

Eric Schwitzgebel said...

Callan: Ah, the classic question of why there is something rather than nothing! Morgenbesser's answer, "If there was nothing, still you'd complain!" I don't feel much pull to zealotry here either. And on the possible future loss of human beings, or any beings, with self-knowledge -- I guess I do feel that would be a big loss! I'm not sure how to argue for that though, if that's not already part of your worldview (back to the point about the problem of resolving value disputes between parties who start far apart).

Eric Schwitzgebel said...

Fleur: It will be interesting to see whether Quassim pushes back in a way similar to yours, trying to find points of agreement, or whether he resists more strongly. It is possible that we are talking past each other a bit regarding constitution and necessary conditions; he's a little sketchy on some of that stuff (and I am even sketchier in the post) -- but I do think at least that I would disagree with the compromise position you suggest at the end of your second paragraph. I'm inclined to think that all else being equal, even in normal human cases, it's intrinsically better to have self-knowledge, even with no further benefits to your well-being (apart from whatever benefits to well-being straightaway follow from self-knowledge being partly constitutive of well-being); I'm *pretty* sure that will remain a point of disagreement between Quassim and me.

Angra Mainyu said...


I'm not sure I understand your reply, but to simplify, perhaps we could leave aside for now worlds without minds.

In the Earth and Earth-2 scenario, I didn't mean to posit two different possible worlds, one with Earth, the other with Earth-2. Rather, I was considering the following hypothetical scenario:
Let's assume that our actual universe is very large (maybe infinite, maybe not), and beyond the observable universe - but still actual - there is this planet, Earth-2.
Then, in the sense in which I understand these arguments about value, Earth-2 is not worse than Earth, even though it's aesthetically worse.
If you prefer, we may consider another possible world W5 (not actual) in which both the Earth (or an exact replica) and Earth-2 exist.

Your reply, however, suggests that Earth-2 is not aesthetically worse than Earth. But I'm not sure how you're relativizing aesthetic value here. If Earth-2 is not aesthetically worse than Earth, let's consider another world W6, in which all of the art that is appealing to 2-humans is replaced by art that is aesthetically appealing to humans (but slightly unpleasant to the 2-humans who live on Earth-2).
Then, it would seem to me - if I read your assessment right - that W6 is aesthetically worse than W5. But if so, then it seems to me that the aesthetic value of an object depends on the perceptions of the beings that observe or might observe the object. A and B may be identical in all respects other than the fact that A is on Earth (in W6) and B is on Earth-2 (in W6), and A is beautiful, but B is slightly ugly (that's not my view, but it's what I seem to get from your assessment regarding Earth and Earth-2; please let me know if I got it wrong).

However, in that case, there is no such thing as intrinsic aesthetic value. But you said earlier that you're inclined to think aesthetic beauty has intrinsic value. Since there is no intrinsic aesthetic value (I'm concluding from the analysis above), I reckon your position is that aesthetic beauty - which is not intrinsic - has intrinsic non-aesthetic (moral, perhaps) value?

In that case, that does not look like a more ordinary case to me. Actually, I'm inclined to think it's not true.

But perhaps I misunderstood your assessment with regard to Earth and Earth-2?

Fleur Jongepier said...

Eric, thanks for your response. I think you're probably right that a disagreement between you and Quassim will remain. I was mostly interested in trying to figure out where exactly the intrinsic/extrinsic distinction is operating. My own intuition is that, if the distinction is to make sense at all, the intrinsic value of SK has got to involve more than a conceptual connection between well-being and self-knowledge (i.e., comparing worlds with and without self-knowledge, irrespective of the question whether self-knowledge would lead to greater happiness or depression). It does seem right that if the entire human population were to go extinct, and the capacity for self-knowledge with it, something would go missing. I wonder, though, whether what has gone missing is the intrinsic value of SK, or rather some of its (extrinsic) applications, such as philosophy, politics, human rights, the arts, etc. I don't pity the cows in the field that do not have a capacity for self-knowledge, and I don't see why it should be intrinsically valuable, for them, to have such a capacity if such a capacity would not "help deliver something else" like the arts and such (we may assume these cows are suddenly gifted with second-order beliefs about their desire for grass and water). What this brings out, I suppose, is that the question of the value of self-knowledge depends on one's conception of what self-knowledge is. Presumably, the relation between self-knowledge and these prudential/moral practices is much more intimate (not just second-order states added on top of first-order ones), but these more intimate relations would require further spelling out, which I think the current debate could really benefit from. Maybe that (the relation SK bears to such practices) is what you meant by 'intrinsic'; that's what I was trying to figure out.

Eric Schwitzgebel said...

Angra: I'm sorry I misunderstood your example! I'm inclined to think that aesthetic beauty has intrinsic value in the sense that the beautiful does not have to bring happiness (or any other non-aesthetic good) to anyone, or maybe (?) even be known to exist by anyone, to be a valuable part of the world. Possibly, my hunches about this could be pushed around a bit by further examples and argument. But what *makes* something a thing with aesthetic value might depend upon actual, or possible, or idealized perceivers who could appreciate its aesthetic merit.

So... I wonder if the issue here is that we are hearing the implications of "intrinsic" differently. I think secondary qualities might be slightly misleading, but as a partial analogy: Something can be intrinsically blue, in the sense that its blueness depends on its paint job (rather than, say, the lighting), even if one accepts something like a secondary-quality account of color.

Eric Schwitzgebel said...

Fleur: Thanks for pressing me further on this. I have some tendency to think that the cows would be better off in some sense with more self-knowledge, because the world would be richer in a certain way. But I can feel the pull of the other side on that question; I'm not sure what I'd say if I wasn't already committed or captured by a theory.

Where I feel more strongly (maybe just more committed and captured!) is on the intimacy of the connection between self-knowledge and social/moral practices that are a valuable part of the human condition. Here is where my dispositional, almost behaviorist, approach to attitudes becomes important. What it *is* to know that you value family over work is to show a kind of sensitivity to that fact about yourself -- not just in self-ascription, but in planning for the future, when you imagine, for example, how you would react to having to spend multiple weeks away from the family to advance your career.

Callan S. said...


And on the possible future loss of human beings, or any beings, with self-knowledge

Oh, that's not fair, is it? You just said a lack of self-knowledge in humans, not a lack of humans!

Angra Mainyu said...


No problem, but I think I've not yet been clear. Perhaps I can clarify the issue I'm trying to raise with an example based on your reply:

You say that you're inclined to think that aesthetic beauty has intrinsic value in the sense that the beautiful does not have to bring happiness (or any other non-aesthetic good) to anyone, or maybe (?) even be known to exist by anyone, to be a valuable part of the world.
Okay, but that seems to imply that whether something is beautiful does not depend on the perceptions of the agents who contemplate it.

Now, let's go back to Earth-2. On Earth-2, there are 2-humans with audition different from ours. We may also add that they have a different visual system. They have our experiences of color, but associated with different wavelengths of light.
If 2-humans were to listen to Chopin's pieces, they would not hear what we hear, and in fact what they would hear would be slightly unpleasant to them. Moreover, if they were to look at the best paintings on our Earth, they would not see what we see. They would see something that they would not find particularly appealing.
On the other hand, their works of art are appealing to them, but we would judge them slightly ugly.

Given that our judgments of beauty are at least generally reliable, it turns out that our human art is usually beautiful, whereas their 2-human art is usually slightly ugly. Not that they mind, though. Plausibly, instead of generally false judgments of beauty, 2-humans make generally true judgments of 2-beauty, and their art is generally 2-beautiful - ours is generally slightly 2-ugly.

So, Earth contains a lot more beauty than Earth-2, and Earth-2 contains a lot more ugliness (otoh, Earth-2 contains a lot more 2-beauty, whereas Earth contains a lot more 2-ugliness).
So, if the beautiful were a [positively] valuable part of the world regardless of whether it brings happiness to anyone, is observed, etc., then Earth would be more valuable than Earth-2.

But I don't see why this is so - and if I'm getting this right, you agree that Earth and Earth-2 are equally good, or equally valuable if you like, in the sense of value that is relevant here.

Granted, you said earlier that Earth and Earth-2 are also equal with respect to beauty. But that seems to me to be in conflict with the assumption that whether an object is beautiful is independent of the perceptions of the agents who contemplate it, at least as long as we reject an epistemic error theory of judgments of beauty, because if we were to listen to/look at the art on Earth-2, we would be able to tell it's ugly.

With regard to your color example, that might also be useful to illustrate the issue I'm trying to raise. Let's say that O1 is intrinsically all blue, but also intrinsically 2-red, 2-white and 2-orange. That's not a problem.
But let's say that there is also intrinsic beauty. So, the Mona Lisa is (let's say) intrinsically beautiful, but intrinsically 2-ugly, but the 2-Mona Lisa on Earth-2 is intrinsically ugly, but 2-beautiful.
Yet, Earth-2 is not less valuable than Earth, in the relevant sense of value.
It follows, then, that intrinsic beauty is not valuable.
The alternative is that 2-art is as beautiful as art. But if art on Earth were replaced by 2-art, the Mona Lisa with its 2-counterpart, etc., then 2-art would be ugly. It would follow then that whether a piece of art is beautiful depends on the perceptions of the agents that are looking at it, which would imply that beauty is not intrinsic, and also would contradict the assumption I mentioned earlier in this post.
I don't see a way around that, but I'm not sure I'm being clear.