Thursday, March 28, 2019

Journey 2 Psychology

A couple of weeks ago, one of my former students, Michael S. Gordon, now a professor of psychology at William Paterson University, stopped by my office unexpectedly. He told me he had sold his house in New Jersey so that he could spend a year traveling around the world, with his family, interviewing famous psychologists about their lives. He wants to compile an oral history of psychology. He was at the UCR campus to interview Robert Rosenthal.

Wait, he's spending a full year, along with his wife and son, traveling around interviewing famous psychologists? And he sold his house to do it? Whoa. That's commitment. How awesome!

He is posting excerpts of his interviews on his blog, Journey2Psychology. For example: Albert Bandura, Ed Diener, Alison Gopnik, Elizabeth Spelke, Dan Schacter, Dan Gilbert, etc., etc.!

Okay, a psychology nerd could get excited. What an amazing idea!

Part of me wishes he could have done it in the 1980s, when B. F. Skinner, Timothy Leary, Stanley Milgram, and Erik Erikson were still alive. Or, hey, maybe if we could go back to the 1950s, or the 1920s, or....

Against the Mind-Package View of Minds

(adapted from comments I will be giving on Carrie Figdor's Pieces of Mind, at the Pacific Division meeting of the American Philosophical Association on Friday, April 19, 9-12)

We human beings have a favorite size: our own size. We have a favorite pace of action: our own pace. We have a favorite type of organization: the animal, specifically the mammal, and more specifically us. What’s much larger, much smaller, or much different, we devalue and simplify in our imaginations.

It’s true that we’re great. Wow, us! But we tend to forget that other things can also be pretty amazing.

So here’s a naive picture of the world. Some things have minds, and other things don’t have minds. The things with minds might have highly sophisticated minds, capable of appreciating Shakespeare and proving geometric theorems, or they might have simpler minds. But if an entity has a mind at all, then it has sensory and emotional experiences, preferences and dislikes, plans of at least a simple sort, some kind of understanding of its environment, an ability to select among options, and a sense of its location and the boundaries of its body. Let’s call this the Mind Package.

Everything that exists, you might think, is either a thing that has the whole Mind Package or a thing that has no part of the Mind Package. Stones have no part of the Mind Package. They don’t feel anything. They have no preferences. They make no decisions. They have no comprehension of the world around them. There’s nothing it’s like to be a stone. Dogs, we ordinarily assume, do have the Mind Package. My own dog Cocoa enjoys going on walks, prefers the bucket chair to the recliner, gets excited when she hears my wife coming in the front door, and dislikes our cat.

[A recent picture of some of my favorite biological entities. Can you guess which ones have the Mind Package?]

Now it could be the case that everything in the world either has the Mind Package or doesn’t have it, and if something has one piece of the Mind Package, it has all the pieces. Intuitively, this is an attractive idea. What would it be to kind of have a mind? Could a creature have full-blown desires and preferences but no beliefs at all? Could a creature be somewhere between having experiences and not having any experiences? This seems hard to imagine. It’s much easier to think that either the light is on inside or the light is off. Either you’ve got a stone or you’ve got a dog.

But there are a couple of reasons to suspect that the lights-on/lights-off Mind Package view is too simple.

The first reason to be suspicious is that the world is full of slippery slopes. In fetal development and infant development, biological and cognitive complexity emerges gradually. But if you’ve either got the whole package or you don’t, then there must be some moment at which the lights suddenly turned on and you went, in a flash, from being an entity without experiences, preferences, feelings, comprehension, and choice, to being an entity with all of those things. In the gradual flow of maturation, when could this be? Likewise, if we assume, at least for a moment, that jellyfish don’t have the Mind Package but dogs do, similar trouble looms: Across species there’s a gradual continuum of capacities, not, it seems, a sudden break between lights-on and lights-off animals. (Garden snails are an especially fascinating problem case.)

This leads to a second reason to be suspicious of the Mind Package view. As Carrie Figdor emphasizes, bacteria are much more informationally complicated than we tend to think. Plants are much more informationally complicated than we tend to think. Group interactions are much more informationally complicated than we tend to think. The relations of parasite, host, and symbiont are much more informationally complicated than we tend to think. The difference is smaller than we usually imagine between things of our favorite size and pace and other things. The biological world is richly structured with what looks like sophisticated informational processing in response to environmental stimuli. When scientists need to model what’s going on in plants and bacteria and neurons and social groups, they seem to need terms and concepts and models from psychology: signaling, communication, cooperation, decision, memory, detection, learning. Structures other than those of our favorite size and pace seem to show the kinds of informational interactions and responsiveness to environment that we capture with psychological words like these.

Furthermore, there’s no general reason to think that systems usefully described by some of these psychological terms need always also be usefully described by others of these terms. If a scientific community starts to attribute memories or preferences to the entities they research, it doesn’t follow that they will find it fruitful also to ascribe sensory experiences, feelings, or a sense of the difference between body and world. Different aspects of mentality may be separable rather than bundled. They don’t need to stand or fall as a Package. To paraphrase the title of Carrie’s book, the Mind comes in Pieces.

Philosophers of mind love to paint their opponents as clinging to the remnants of Cartesianism. Should I alone resist? The Mind Package view is a remnant of Cartesianism: There’s the Minded stuff, which has this nice suite of cognitive and conscious properties, all as a bundle, and then there’s the non-Minded stuff which is passive and simple. We ought to demolish this Cartesian idea. There is no bright line between the fully and properly Minded and the rest of the world, and there is no need for cognitive properties to all travel on the family plan.

The Mind Package view has a powerful grip on our intuitions. We want to confine “the mental” to privileged spaces – our own heads and the heads of our favorite animals. But if the informational structures of the world are sufficiently complex, this intuitive approach must be jettisoned. Mental processes run wide through the world, different ones in different spaces, defying our intuitive groupings. This radically anti-Cartesian view is the profound and transformative lesson of Carrie’s book, and it takes some getting used to.

If Carrie’s radically anti-Cartesian view of the world is scientifically correct, there are, then, pragmatic grounds to prefer a broad view of the metaphysics of preferences and decisions, according to which many different kinds of entities have preferences and make decisions. It is the view that better respects the evidence that we are continuous with plants, worms, and bacteria, and that the types of patterns of mindedness we see in ourselves resemble what’s happening in them, even if such entities don’t have the whole Mind Package.

Goodbye, Mind-Package rump Cartesianism!

-----------------------------------------------------

Related:

Do Neurons Literally Have Preferences (Nov 4, 2015)

Are Garden Snails Conscious? Yes, No, or *Gong* (Sep 20, 2018)

Tuesday, March 26, 2019

New Podcast Interview: How Little Thou Can Know Thyself

New interview of me at the MOWE blog.

Topics of discussion:

  • Our poor knowledge of our own visual experience,
  • Our poor knowledge of our own visual imagery,
  • Our poor knowledge of our own emotional experience,
  • The conscious self riding on the unconscious elephant,
  • Can we improve at introspection?
  • Our poor knowledge of when and why we feel happy,
  • The amazing phenomenon of instant attachment to adoptive children.

Friday, March 22, 2019

    Most U.S. and German Ethicists Condemn Meat-Eating (or German Philosophers Think Meat Is the Wurst)

    It's an honor and a pleasure to have one's work replicated, especially when it's done as carefully as Philipp Schoenegger and Johannes Wagner have done.

    In 2009, Joshua Rust and I surveyed the attitudes and behavior of ethicist philosophers in five U.S. states, comparing those attitudes and behavior to non-ethicist philosophers' and to a comparison group of other professors at the same universities. Across nine different moral issues, we found that ethicists reported behaving, overall, no differently morally than the other two groups, though on some issues, especially vegetarianism and charitable giving, they endorsed more stringent attitudes. (In some cases, we also had observational behavioral data that didn't depend on self-report. Here too we found no overall difference.) Schoenegger and Wagner translated our questionnaire into German and added a few new questions, then distributed it by email to professors in German-speaking countries, achieving an overall response rate of 29.5% [corrected Mar 23]. (Josh and I had a response rate of 58%.) With a couple of exceptions, Schoenegger and Wagner report similar results.

    The most interesting difference between Schoenegger and Wagner's results and Josh's and my results concerns vegetarianism.

    The Questions:

    We originally asked three questions about vegetarianism. In the first part of the questionnaire, we asked respondents to rate "regularly eating the meat of mammals, such as beef or pork" on a nine-point scale from "very morally bad" to "very morally good", with "morally neutral" in the middle.

    In the second part of the questionnaire, we asked:

    17. During about how many meals or snacks per week do you eat the meat of mammals such as beef or pork?

       enter number of times per week ____

    18. Think back on your last evening meal, not including snacks. Did you eat the meat of a mammal during that meal?

       □ yes

       □ no

       □ don’t recall

    U.S. Results in 2009

    On the attitude question, 60% of ethicist respondents rated meat-eating somewhere on the "bad" side of the nine-point scale, compared to 45% of non-ethicist philosophers and only 19% of professors from other departments (ANOVA, F = 17.0, p < 0.001). We also found substantial differences by both gender and age, with women and younger respondents more likely to condemn meat-eating. For example, 81% of female philosophy respondents born 1960 or later rated eating the meat of mammals as morally bad, compared to 7% of male non-philosophers born before 1960. That's a huge difference in attitude!

    Eight percent of respondents rated it at 1 or 2 on the nine-point scale -- either "very bad" or adjacent to very bad -- including 11% of ethicists (46/564 overall, 22/193 of ethicists).

    On self-report of behavior, Josh and I found much less difference. On our "previous evening meal" question, we detected at best a statistically marginal difference among the three main analysis groups: 37% of ethicists reported having eaten meat at the previous evening meal, compared to 33% of non-ethicist philosophers and 45% of non-philosophers (chi-squared = 5.7, p = 0.06, excluding two respondents who answered "don’t recall").

    The "meals per week" question was actually designed in part as a test of "socially desirable responding" or a tendency to fudge answers: We thought it would be difficult to accurately estimate the number, thus it would be tempting for respondents to fudge a bit. And mathematically, they did seem to be guilty of fudging: For example, 21% of respondents who reported eating meat at one meal per week also reported eating meat at the previous evening meal. Even if we assume that meat is only consumed at evening meals, the number should be closer to 14% (1/7). If we assume, more plausibly, that approximately half of all meat meals are evening meals, then the number should be closer to 7%. With that caveat in mind, on the meals-per-week question we found a mean of 4.1 for ethicists, compared to 4.6 for non-ethicist philosophers and 5.3 for non-philosophers (ANOVA [square-root transformed], F = 5.2, p = 0.006).
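The arithmetic behind that fudge check can be sketched in a few lines. This is a hypothetical illustration of the expected rates (the function name and parameters are mine, not from the original analysis):

```python
# Expected chance that someone who eats meat k meals per week ate meat
# at the previous evening's meal, given the share of meat meals that
# are evening meals. There are 7 evening meals in a week.
def expected_evening_rate(meals_per_week, evening_share):
    return meals_per_week * evening_share / 7.0

# If all meat meals were evening meals: 1/7, about 14%.
print(round(expected_evening_rate(1, 1.0) * 100))  # 14
# If, more plausibly, half of meat meals are evening meals: about 7%.
print(round(expected_evening_rate(1, 0.5) * 100))  # 7
```

Either way, the observed 21% is well above what honest estimation would predict.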

    We concluded that although a majority of U.S. ethicists, especially younger ethicists and women ethicists, thought eating meat was morally bad, they ate meat at approximately the same rate as did the non-ethicists.

    German Results in 2018:

    Schoenegger and Wagner find, similarly, a majority of German ethicist respondents rating meat-eating as bad: 67%. Evidently, a majority of U.S. and German ethicists think that eating meat is morally bad.

    However, among the non-ethicist professors, Schoenegger and Wagner find higher rates of condemnation of meat-eating than Josh and I found: 63% among German-speaking non-ethicist philosophers in 2018 compared to our 45% in the U.S. in 2009 (80/127 vs. 92/204, z = 3.2, p = .001), and even more strikingly 40% among German-speaking professors from departments other than philosophy in 2018 compared to only 19% in the U.S. in 2009 (52/131 vs. 31/167, z = 4.0, p < .001; [note 1]).
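For readers who want to check such cross-sample comparisons, a standard pooled two-proportion z-test reproduces the figures above. This is a sketch, assuming the counts as reported (the function is mine, not the authors' analysis code):

```python
from math import sqrt

def two_prop_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test: compare x1/n1 against x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                   # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    return (p1 - p2) / se

# German vs. U.S. non-ethicist philosophers rating meat-eating as bad:
print(round(two_prop_z(80, 127, 92, 204), 1))  # 3.2
# German vs. U.S. non-philosophers:
print(round(two_prop_z(52, 131, 31, 167), 1))  # 4.0
```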

    German professors were also much more likely than U.S. professors in 2009 to think that eating meat is very bad, with 18% rating it 1 or 2 on the scale, including 23% of ethicists (57/408 and 35/150, excluding non-respondents; two-proportion test U.S. vs German: overall z = 2.8, p = .005, ethicists z = 2.9, p = .004).

    Apparently, German-speaking professors are not as fond of their wurst as cultural stereotypes might suggest!

    A number of explanations are possible: One is that in general German academics are more pro-vegetarian than are U.S. academics. Another is that attitudes toward vegetarianism are changing swiftly over time (as suggested by the age differences in Josh's and my study) and that the nine years between 2009 and 2018 saw a substantial shift in both cultures. Still another concerns non-response bias. (For non-philosophers, Schoenegger and Wagner's response rate was 30%, while Josh's and mine was 53%.)

    In Schoenegger and Wagner's data, ethicists report having eaten less meat at the previous evening meal than the other two groups: 25%, vs. 40% of non-ethicist philosophers and 39% of the non-philosophers (chi-squared = 9.3, p = .01 [note 2]). The meals per week data are less clear. Schoenegger and Wagner report 2.1 meals per week for ethicists, compared to 2.8 and 3.0 for non-ethicist philosophers and non-philosophers respectively (ANOVA, F = 3.4, p = .03), but their data are highly right skewed, and due to skew Josh and I had used a square-root transformation for our original 2009 analysis. A similar square-root transformation on Schoenegger and Wagner's raw data eliminates any statistically detectable difference (F = 0.8, p = .45). And there is again evidence of fudging in the meals-per-week responses: Among those reporting only one meat meal per week, for example, 18% reported having had meat at their previous evening meal.
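To see why the square-root transformation can matter for right-skewed counts, here is a self-contained sketch of a one-way ANOVA F statistic run on made-up meals-per-week data. The numbers below are invented for illustration; they are not Schoenegger and Wagner's raw data:

```python
from math import sqrt

def one_way_F(groups):
    """One-way ANOVA F statistic for a list of samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented right-skewed counts: a few heavy meat-eaters inflate the variance.
ethicists    = [0, 0, 1, 1, 2, 2, 3, 7, 14]
philosophers = [0, 1, 2, 2, 3, 3, 4, 8, 12]
others       = [1, 2, 2, 3, 3, 4, 5, 9, 15]
groups = [ethicists, philosophers, others]

print(one_way_F(groups))                                  # raw counts
print(one_way_F([[sqrt(x) for x in g] for g in groups]))  # sqrt-transformed
```

Because a handful of large values dominate the raw within-group variance, the transformed and untransformed F statistics can diverge, which is how the choice of transformation can flip a result from statistically significant to not.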

    If we take the meals-per-week data at face value, the German respondents ate substantially less meat in 2018 than did the U.S. respondents in 2009: 2.6 meals for the Germans vs. 4.6 for the U.S. respondents (median 2 vs median 4, Mann-Whitney W = 287793, p < .001). However, the difference was not statistically detectable on the previous evening meal question: 38% U.S. vs 34% German (z = 1.3, p = .21).

    All of this is a bit difficult to interpret, but here's the tentative conclusion I draw:

    German professors today -- especially ethicists -- are more likely to condemn meat eating than were U.S. professors ten years ago. They might also be a bit less likely to eat meat, again perhaps especially the ethicists, though that is not entirely clear and might reflect a bit of fudging in the self-reports.

    The other difference Schoenegger and Wagner found was in the question of whether ethicists were on the whole more likely than other professors to embrace stringent moral views -- but full analysis of this will require some detail and will have to wait for another time.

    *********************************************

    Note 1: In the published paper, Schoenegger and Wagner report 39% instead of the 40% I find in reanalyzing their raw data. This might either be a rounding error [39.69%] or some small difference in our analyses.

    Note 2: In the published paper, Schoenegger and Wagner report 24%, which again might be a rounding error (from 24.65%) or a small analytic difference.

    [image source]

    Friday, March 15, 2019

    Should You Defer to Ethical Experts?

    Ernest Sosa gave a lovely and fascinating talk yesterday at UC Riverside on the importance of "firsthand intuitive insight" in philosophy. It has me thinking about the extent to which we ought, or ought not, defer to ethical experts when we are otherwise inclined to disagree with their conclusions.

    To illustrate the idea of firsthand intuitive insight, Sosa gives two examples. One concerns mathematics. Consider a student who learns that the Pythagorean theorem is true without learning its proof. This student knows that a^2 + b^2 = c^2 but doesn't have any insight into why it's true. Contrast this student with one who masters the proof and consequently does understand why it's true. The second student, but not the first, has firsthand intuitive insight. Sosa's other example is in ethics. One child bullies another. Her mother, seeing the act and seeing the grief in the face of the other child, tells the bullying child that she should apologize. The child might defer to her mother's ethical judgment, sincerely concluding she really should apologize, but without understanding why what she has done is bad enough to require apology. Alternatively, she might come to genuinely notice the other child's grief and more fully understand how her bullying was inappropriate, and thus gain firsthand intuitive insight into the need for apology. (I worry that firsthand intuitive insight is a bit of a slippery concept, but I don't think I can do more with it here.)

    Sosa argues that a central aim of much of philosophy is firsthand intuitive insight of this sort. In the sciences and in history, it's often enough just to know that some fact is true (that helium has two protons, that the Qin Dynasty fell in 206 BCE). On such matters, we happily defer to experts. In philosophy, we're less likely to accept a truth without having our own personal, firsthand intuitive insight. Expert metaphysicians might almost universally agree that barstool-shaped-aggregates but not barstools themselves supervene on collections of particles arranged barstoolwise. Expert ethicists might almost universally agree that a straightforward pleasure-maximizing utilitarian ethics would require radical revision of ordinary moral judgments. But we're not inclined to just take them at their word. We want to understand for ourselves how it is so.

    This seems right. And yet, there's a bit of a puzzle in it, if we think that it's important that our ethical opinions be correct. (Yes, I'm assuming that ethics is a domain in which there are correct and incorrect opinions.) What should we do when the majority of philosophical experts think P, but your own (apparent) firsthand intuitive insight suggests not-P? If you care about correctness above all, maybe you should defer to the experts, despite your lack of understanding. But Sosa appears to think, as I suspect many of us do, that often the right course instead is to stand steadfast, continuing to judge according to your own best independent reasoning.

    Borrowing an example from Sarah McGrath's work on moral deference, consider the case of vegetarianism. Based on some of my work, I think that probably the majority of professional ethicists in the United States believe that it is normally morally wrong to eat the meat of factory-farmed animals. This might also be true in German-speaking countries. Impressionistically, most of the philosophers I know who have given the issue serious and detailed consideration come to endorse vegetarianism, including two of the most prominent ethicists currently alive, Peter Singer and Christine Korsgaard. Now suppose that you haven't given the matter nearly as much thought as they have, but you have given it some thought. You're inclined still to think that eating meat is okay, and you can maybe mount one or two plausible-seeming defenses of your view. Should you defer to their ethical expertise?

    Sosa compares philosophical reasoning with archery. You not only want to hit the target (the truth), you want to do so by the exercise of your own skill (your own intuitive insight), rather than by having an expert guide your hand (deference to experts). I agree that ideally this is so. It's nice when you have both truth and intuitive insight! But when the aim of hitting the target conflicts with the aim of doing so by your own intuitive insight, your preference should depend on the stakes. If it's an archery contest, you don't want the coach's help: The most important thing is the test of your own skill. But if you're a subsistence hunter who needs dinner, then you probably ought to take any help you can get, if the target looks like it's about to escape. And isn't ethics (outside the classroom, at least) more like subsistence hunting than like an archery contest? What should matter most is whether you actually come to the right moral conclusion about eating meat (or whatever), not whether you get there by your own insight. Excessive emphasis on the individual's need for intuitive insight, at the cost of truth or correctness, risks turning ethics into a kind of sport.

    So maybe, then, you should defer to the majority of ethical experts, and conclude that it is normally wrong to eat factory-farmed meat, even if that conclusion doesn't accord with your own best attempts at insight?

    While I'm tempted to say this, I simultaneously feel pulled in Sosa's direction -- and perhaps I should defer to his expertise as one of the world's leading epistemologists! There's something I like about non-deference in philosophy, and our prizing of people's standing fast in their own best judgments, even in the teeth of disagreement by better-informed experts. So here are four brief defenses of non-deference. I fear none of them is quite sufficient. But maybe in combination they will take us somewhere?

    (1.) The "experts" might not be experts. This is McGrath's defense of non-deference in ethics. Despite their seeming expertise, great ethicists have often been horribly wrong in the past. See Aristotle on slavery, Kant on bastards, masturbation, homosexuality, wives, and servants, the consensus of philosophers in favor of World War I, and ethicists' seeming inability to reason better even about trolley problems than non-ethicists.

    (2.) Firsthand intuitive insight might be highly intrinsically valuable. I'm a big believer in the intrinsic value of knowledge (including self-knowledge). One of the most amazing and important things about life on Earth is that sometimes we bags of mostly water can stop and reflect on some of the biggest, most puzzling questions that there are. An important component of the intrinsic value of philosophical reflection is the real understanding that comes with firsthand intuitive insight, or seeming insight, or partial insight -- our ability to reach our own philosophical judgments instead of simply deferring to experts. This might be valuable enough to merit some substantial loss of ethical correctness to preserve it.

    (3.) The philosophical community might profit from diversity of moral opinion, even if individuals with unusual views are likely to be incorrect. The philosophical community as a whole might, over time, be more likely to converge upon correct ethical views if it fosters diversity of opinion. If we all defer to whoever seems to be most expert, we might reach consensus too fast on a wrong, or at least a narrow and partial, ethical view. Compare Kuhn and Longino on the value of diversity in scientific opinion: Intellectual communities need stubborn defenders of unlikely views, even if those stubborn folks are probably wrong -- since sometimes they have an important piece of the truth that others are missing.

    (4.) Proper moral motivation might require acting from one's own insight rather than from moral deference. The bully who apologizes out of deference gives, I think, a less perfect apology than the bully who has firsthand intuitive insight into the need to apologize. Maybe in some cases, being motivated by one's own intuitive insight is so morally important that it's better to do the ethically wrong thing on the basis of your imperfect but non-deferential insight than to do the ethically right thing deferentially.

    As I said, none of these defenses of non-deference seems quite enough on its own. Even if the experts might be wrong (Point 1), from a bird's-eye perspective it seems like our best guess should be that they're not. And the considerations in Points 2-4 seem plausibly to be only secondary from the perspective of the person who wants really to have ethically correct views by which to guide her behavior.

    [image source]

    Friday, March 08, 2019

    Thoughts, Judgments, and Beliefs -- What's the Difference?

    Today I'm going to pitch a taxonomy. My secret agenda is to undermine overly intellectualist views about belief and self-knowledge.

    An episode of some sort occurs in your mind. Let's say it's in inner speech: "I should get started on that blog post" or "It's fine for her to choose Irvine over Riverside". On the face of it, it's an assertion: It's not a question or a string of nonsense or a "hmmmmm...". An episode of inner speech in the form of an assertion is, I will say, one type of assertoric thought. (Assertoric thought needn't require inner speech if visual imagery or emotional reactions or imageless thoughts can do the same type of work; but let's set that issue aside for today.) Inner speech of this sort can cross your mind without your judging or believing the content. If I sing to myself "She's buying a stairway to Heaven", normally I don't at that same moment believe that anyone is actually buying a stairway to Heaven. At other times, it seems that I do genuinely believe what I am saying to myself, with the inner speech somehow the vehicle of that belief: "Uh oh, we're out of coffee!" The question is: What is present in the latter case that is absent in the former?

    One possibility is a feeling of assent. On this view, when "out of coffee!" comes to my mind, accompanying that inner speech is another type of experience, not in inner speech -- an experience of yes-this-is-true, or a feeling of confidence or correctness. In contrast, when I'm singing along with Led Zeppelin, there's no such accompanying yes-this-is-true experience.

    Another possibility, the one I prefer, is that the important difference is less in the phenomenology, that is, in some experiential difference between the two cases, than it is in type of cognitive traction the thought has. Does the thought spin idly, so to speak, or does it penetrate into other aspects of your cognitive life? I'd like to suggest that if the thought has one type of cognitive traction, it's a judgment. And if the judgment has a certain type of further traction, you believe. If the thought has little to no traction, it's a mere idle thought (or as I call it elsewhere, a "wraith" of judgment).

    It seems to me phenomenologically plausible that at least sometimes we feel confidence when we speak silently to ourselves, and sometimes we feel doubt or skepticism or like we're just singing some words. The same episode of inner speech "She's buying a stairway" can be experienced very differently, and this difference can have something to do with whether we really judge it to be so. But for two reasons, I think it's a mistake to rely on these phenomenological differences in distinguishing between judgments and merely idle assertoric thoughts.

    First, even if there is sometimes a phenomenology of confidence or this-is-really-so-ness and sometimes a feeling of doubt or I'm-merely-singing, it is by no means clear that such epistemic phenomenology accompanies all of our inner speech or even most of it. For example, in Russell Hurlburt's experience sampling studies, we don't see a lot of reports of this type of epistemic phenomenology. Thinking back as best I can on my own stream of experience (some systematically sampled, but mostly not), it strikes me that such phenomenology would generally be subtle in most cases -- the kind of thing it would be easy for a theorist to miss or alternatively to invent given the difficulty of knowing such structural features of experience. Such phenomenological criteria are, at best, a dubious theoretical foundation for such an important distinction.

    Second, and maybe more importantly, what we should care about in making a distinction of this sort is not the existence, or not, of a feeling of confidence or some accompanying phenomenology of this-is-so. What matters more is the role the thought plays in one's cognitive life. That role is what the distinction between judgment and idle thinking ought to track.

    Consider the two examples I began with: "I should get started on that blog post" and "It's fine for her to choose Irvine over Riverside". I say these to myself, perhaps with some feeling of that-is-so. But then, maybe I don't start on the blog post. I check Facebook instead, though there's no real need for me to do so. Nor do I feel particularly bad about that, or torn. The thought occurred, seemed in some sense right, but didn't penetrate further into my cognition or decision making. Meanwhile, maybe, I remain miffed that she rejected Riverside for Irvine (I'm imagining here a graduate student or faculty member choosing to decline our offer of admission or hiring). Probably I shouldn't be miffed. It is fine! People ought to choose what they think is best for them. And yet... most of my cognition about the matter remains wrongly and irrationally hurt and resentful. I'm trying to convince myself, but I haven't fully succeeded.

    What we do and should care about in distinguishing judgment from idle thought is the extent to which I have succeeded. If, at least for that moment, my planning and thinking really is informed by my seeming-assessment that it's time to get started and that it's fine for her to choose Irvine, then that is what I have judged. But if, as is sometimes the case, the thoughts don't really penetrate into the remainder of my cognitive life, don’t guide other aspects of my reasoning and my posture toward the world, then it's probably best to regard them as mere idle thoughts, rather than genuine judgments, even if in some superficial way I feel sincere and this-is-so-ish when I say them to myself. (On the metaphor of attitudes as postures toward the world, see my discussion here.)

    That is how I would like to draw the distinction between judgment and idle thought.

    How about belief? Here I want to make a similar move, but at an expanded temporal scale. We might sincerely judge something to be so in the sense that our related thoughts, and our general posture toward the world, are for a moment aligned toward the truth of that thing. I really do, now that I think of it carefully, judge it to be fine for her to have made that choice. Of course it's fine! But the difference between a judgment and a belief is the difference between an occurrence and a steady-state thing. A judgment happens in a moment; a belief endures, at least for a while. The question is: Does the judgment stick? Does that momentary assessment have enough cognitive traction to change how I will feel about it next time I return to the question? After the conscious thought vanishes, will it leave some sort of more durable trace in my cognitive structure? Or is it here and gone? Belief requires, I suggest, that more durable trace.

    One way to think of it is this: A conscious thought is in a way a preparation for a judgment, and a judgment is in a way a preparation for a belief. "P" bubbles up into your mind, for some reason. If P finds the right kind of momentary home in your mind, if, at that moment, for the duration of its presence in the footlights of consciousness, it shifts or solidifies related aspects of your mentality, then it is a full-bodied judgment and not just an idle thought. And if what it shifts and solidifies stays shifted and solidified after the judgment fades from consciousness, then that judgment has become a belief.

    Back to the secret agenda: If this is right, you cannot just read what you believe, or even what you currently judge, off of what you can introspectively discover, or what you say with a feeling of sincerity. Genuine belief and judgment require penetration deeper into the springs of thought and action.

    ----------------------------------------------

    Related:

    "A Dispositional Approach to the Attitudes: Thinking Outside of the Belief Box" (in Nottelmann, ed., New Essays on Belief, 2013).

    "Do You Have Whole Herds of Swiftly Forgotten Microbeliefs?" (Feb 1, 2019). [N.B.: Today's post suggests a partial resolution to the question that the microbeliefs post leaves open.]

    "Against Intellectualism about Belief" (Jul 31, 2015).

    Friday, March 01, 2019

    In Philosophy, Departments with More Women Faculty Award More PhDs to Women (Plus Some Other Interesting Facts)

    Women constitute about 32% of Philosophy Bachelor's degree recipients in the U.S., about 29% of Philosophy PhD recipients, and about 20-25% of philosophy faculty (Paxton et al. 2012; Schwitzgebel and Jennings 2017). It is sometimes suggested that the relatively low percentage of women faculty in philosophy explains the relatively low percentage of women who major in philosophy (which then in turn explains the relatively low percentage of women who become the next generation of philosophy faculty).

    I was curious whether philosophy departments with a relatively high percentage of women faculty would also have a relatively high percentage of students who are women. Maybe departments with more women faculty are more "woman friendly", with a visible effect on the proportion of women who complete the Bachelor's or PhD?

    Paxton et al. 2012 provide some evidence of a relationship between departments' proportion of women faculty, women undergraduates, and women graduate students. In a sample of 49 departments, they found a substantial correlation between the percent of women faculty and the percent of undergraduate philosophy majors who are women (r = .45, p = .012). However, in a similar sample of 31 departments, they did not report finding such a correlation between percent of faculty who are women and percent of PhD students who are women.

    There are a few limitations in the Paxton et al. study. First, thirty-one departments is a somewhat small number for such an analysis, yielding only limited statistical power to detect medium-sized correlations (note that even with 49 departments in their undergraduate analysis, Paxton et al.'s p-value was greater than .01 despite a correlation of .45). Second, the sample of departments might be unrepresentative. And third, the proportion of women who complete the PhD might be a better measure of women-friendliness or women's success than the proportion enrolled in the PhD program, since a substantial proportion of philosophy PhD students do not complete their degrees (in many departments completion rates are around 50%) and (anecdotally) non-completion rates might be higher for women than for men (I welcome pointers to systematic data on this).
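
    To give a rough sense of the power worry (this sketch is mine, not part of the original analysis, and the function names are invented for illustration): under the standard Fisher z-approximation, the chance of detecting a true correlation of .45 at the .05 level with only 31 departments is noticeably lower than with 49.

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def correlation_power(r: float, n: int) -> float:
    """Approximate power of a two-sided alpha = .05 test of rho = 0,
    using the Fisher z-transform: atanh(r) ~ Normal(atanh(rho), 1/(n-3))."""
    z_rho = math.atanh(r)            # Fisher transform of the true correlation
    se = 1.0 / math.sqrt(n - 3)      # approximate standard error
    z_crit = 1.959964                # two-sided critical value at alpha = .05
    # Probability that the observed (transformed) correlation clears the cutoff
    return 1.0 - normal_cdf(z_crit - z_rho / se)

# Power with 31 departments vs. 49, assuming a true correlation of .45
power_31 = correlation_power(0.45, 31)
power_49 = correlation_power(0.45, 49)
```

    On these assumptions, the 31-department sample has only roughly 70-75% power to detect a true correlation of .45, while the 49-department undergraduate sample has around 90% -- consistent with the worry that the smaller graduate sample could easily miss a real medium-sized relationship.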

    For these reasons, I decided to examine whether in a larger sample of PhD-granting philosophy departments in the U.S., the percent of women faculty would correlate with the percent of women completing the PhD.

    For the data on students, I relied on the IPEDS database from the National Center for Education Statistics, using an eight-year time frame from the academic year 2009-2010 to 2016-2017. For faculty, I used Julie Van Camp's counts of women faculty and total faculty in 97 doctoral programs in the U.S. from January 2006 and January 2015, as recovered through the Wayback Machine Internet Archive. (These 97 programs produce about 95% of the Philosophy PhDs in the U.S. ETA: This includes tenured and tenure-track faculty only.) For each department's women-faculty percentage, I averaged the percentage of women faculty in 2006 and in 2015 to reduce noise due to temporary gains and losses. (My own department, for example, had 2/17 [12%] women faculty in 2006 and 4/19 [24%] in 2015, and is probably better represented by 18% than by either the higher or the lower number.)

    Overall, women were 20% of faculty in 2006 (340/1669) and 25% of faculty in 2015 (442/1755), a statistically significant increase (z = 3.4, p = .001). Although 20% to 25% may not sound like much, it is actually quite remarkable for such a short period. The faculty growth between 2006 and 2015 in this set of universities was only 86 positions (from 1669 to 1755 total faculty), while the growth of women faculty was 102 positions.
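
    The z statistic above is a standard two-proportion test on the raw counts; as a sketch (the function name here is mine, not from the post), it can be reproduced in a few lines:

```python
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """z statistic for the difference between two independent
    proportions, using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Women faculty: 340/1669 in 2006 vs. 442/1755 in 2015
z = two_proportion_z(340, 1669, 442, 1755)  # about 3.4, as reported above
```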

    The pattern in undergraduate Bachelor's degree completions in these same institutions is in some ways similar. Among these 97 institutions, the percentage of women among philosophy BA recipients increased from 29% (1066/3618) to 34% (957/2787). This increase is statistically significant (z = 4.1, p < .001), and intermediate years show a slow, steady increase (30%, then 31%, then 32%). However, it is possible that this is just a brief fluctuation in a long-term trend, in which the percentage of women among philosophy majors has held approximately steady at 30-34% since at least 1986. Also notable: while faculty numbers increased, graduating majors decreased (fitting with national trends across all university types).

    The pattern in PhD completions is approximately flat over the period (fitting with results from the NSF reported here), fluctuating between 25% and 33% women -- coincidentally, 27% both at the beginning (100/372) and at the end (113/415) of the period. However, with numbers this low, statistical power is an issue.

    The main question I was looking at was correlational: Do the universities with a higher proportion of women faculty tend to have a higher proportion of women completing their PhDs? And the answer is...

    Yes!

    Here it is as a chart:

    [apologies for blurry image: click to clarify and enlarge]

    The correlation is substantial: r = .42 (p < .001). For example, although only 37 of the 97 universities had over 25% women faculty, all ten of the universities with the highest proportion of women among their Philosophy PhD recipients did.
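
    For readers who want to check the reported p-value (a sketch of the standard calculation; the function name is mine): with n = 97 departments, the usual t-test on a Pearson correlation gives a t statistic well above the two-sided p = .001 cutoff of roughly 3.4 for 95 degrees of freedom.

```python
import math

def correlation_t(r: float, n: int) -> float:
    """t statistic for testing rho = 0 from a sample Pearson
    correlation r based on n paired observations (df = n - 2)."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# r = .42 across the 97 PhD-granting departments
t = correlation_t(0.42, 97)  # about 4.5, comfortably past p < .001
```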

    Oddly, however, for Bachelor's degrees, I can find no relationship at all, with a correlation of r = -.01 (p = .96). This result contrasts sharply with the Paxton et al. results, and I'm not sure what to make of it. A follow-up study might look at a broader sample of undergraduate institutions to see what sort of relationship there is between percent of women faculty and percent of women undergraduates in philosophy and whether it might vary with institution type.