Wednesday, February 19, 2020

Do Business Ethics Classes Make Students More Ethical? Students and Instructors Agree: They Do!

I'm inclined to think that university ethics classes typically have little effect on students' real-world moral behavior.

I base this skepticism partly on Joshua Rust's and my finding, across a wide variety of measures, that ethics professors generally don't behave much differently than other professors -- and if they don't behave differently, why would students? And I base it partly on my (now somewhat dated) review of business ethics and medical ethics instruction specifically, which finds shoddy research methods and inconsistent results suggestive of an underlying non-effect.[1]

On the other hand, part of the administrative justification of ethics classes -- especially medical ethics and business ethics -- appears to be the hope that students will eventually act more ethically as a result of having taken these courses. Administrators and instructors who aim at this result presumably expect that the classes are at least sometimes effective.

The issue, perhaps surprisingly, isn't very well studied. I parody only slightly when I say that the typical study on this topic asks students at the end of class "are you more ethical now?", and when they respond "yes" at rates greater than chance, the researcher concludes that the instruction was effective.


Nina Strohminger and I thought we'd ask instructors and students what they thought about this. We wanted to know two things. First, do instructors and students think that business ethics instruction should aim at improving students morally? Second, do they think that business ethics classes do in fact tend to improve students morally?

Our respondents were 101 business ethics instructors at the 2018 Society for Business Ethics conference, plus students from three very different universities: 339 students from Penn (an Ivy League university with an elite business school), 173 students from UC Riverside (a large state university), and 81 students from Seattle University (a small-to-medium-sized Jesuit university, where Jessica Imanaka coordinated the distribution). Surveys were anonymous, pen and paper. Students completed their surveys on the spot near the beginning of the first day of instruction in business ethics courses.

Using a five-point scale from "not at all important" to "extremely important", Question 1 asked respondents to "rate the importance of the following goals that YOU PERSONALLY AIM to get [to have your students get] from your business ethics classes:"

  • An intellectual appreciation of fundamental ethical principles
  • An understanding of what specific business practices are considered ethical and unethical, whether or not I [they] choose to comply with those practices
  • Tools for thinking in a more sophisticated way about ethical quandaries
  • Interesting readings and fun puzzle cases that feed my [their] intellectual curiosity
  • Practical knowledge that will help me be a more ethical business leader [them be more ethical business leaders] in the future
  • Satisfying my [their] degree requirements
  • Grades that will look good on my [their] transcripts
  (Brackets indicate changes for the instructors' version.)

    The target prompt was the fifth: Practical knowledge that will help them be more ethical business leaders in the future.


    Responses were near ceiling. 58% of students rated practical knowledge that will help them be more ethical business leaders as "extremely important" to them, the highest possible choice. The mean response was 4.44 on the 1-5 scale. This was the highest mean response among the seven possible goals. 40% of students rated it more highly than they rated "satisfying my degree requirements" and 48% rated it more highly than "grades that will look good on my transcript". Responses were similar for all three schools. If we accept these self-reports, gaining practical knowledge that will help them actually become more ethical is one of students' most important personal aims in taking business ethics classes.

    Instructors' responses were similar: 58% said it was personally "extremely important" to them to have students gain practical knowledge that will help them be more ethical business leaders in the future. The mean response was 4.41 on the 1-5 scale.

    Question 2 asked students and instructors to guess each other's goals (with the same seven possible goals). Students tended to think that professors would also very highly rate (mean 4.71) "practical knowledge that will help students be more ethical business leaders in the future". Professors tended to think that students would regard such knowledge as important (mean 4.09) but not as important as satisfying degree requirements (mean 4.42).

    Question 3 asked respondents how likely they thought it was that "the average student gets the following things from their [your] business ethics classes". The same seven goals were presented, with a 1-5 response scale from "not at all likely" to "extremely likely".

    Overall, both students and instructors expressed optimism: Both groups' mean response to this question was 3.84 on the 1-5 scale.

    Based on this part of the questionnaire, it looks like students and instructors agree: It's important to them that their business ethics classes produce practical knowledge that helps students become more ethical business leaders, and they think that their business ethics classes do tend to have that effect.

    On the second page of the questionnaire, we asked these questions directly.

    Question 4: Do you think that, as a result of having taken [your] business ethics classes, [your] students on average will behave more ethically, less ethically, or about the same as if they had not taken a business ethics course?

    Among instructors, 64% said more ethical, 35% said about the same, and 1% said less ethical. Among students, 54% said more ethical, 45% said about the same, and again only 1% said less ethical.

    Question 5: To what extent do you agree that the central aim of business ethics instruction should be to make students more ethical? [1 - 5 scale from "strongly disagree" to "strongly agree"]

    Among instructors, 63% agreed or strongly agreed and only 19% disagreed or strongly disagreed. Among students, 67% agreed or strongly agreed and only 9% disagreed or strongly disagreed.

    The results of these direct questions thus broadly fit the results for the specific goals. Either way you ask, both business ethics students and business ethics instructors say that business ethics classes should and do make students more ethical.


    Many cautions and caveats apply. The results might be influenced by "socially desirable responding" -- respondents' tendency to express attitudes that they think will be socially approved (maybe especially if they think their instructors might be watching). Also, instructors attending a business ethics conference might not be representative of business ethics instructors as a whole -- maybe more gung-ho. Students and instructors might not know their own goals and values. They might be excessively optimistic about the transformative power of university instruction. Etc. I confess to having some doubts.

    Nonetheless, I was struck by the apparent degree of consensus, among students and instructors, that business ethics classes should lead students to become more ethical, and by the majority opinion that they do indeed have that effect.



    [1] However, Peter Singer, Brad Cokelet, and I have also recently conducted a study that suggests that under certain conditions teaching the philosophical material on meat ethics can lead students to purchase less meat at campus dining locations.

    Friday, February 14, 2020

    Thoughts on Conjugal Love

    For Valentine's Day, some thoughts on love.

    In 2003, my Swiss friends Eric and Anne-Françoise Rose asked me to contribute something to their wedding ceremony. Here’s a lightly revised version of what I wrote, concerning conjugal love, the distinctive kind of love between spouses.


    Love is not a feeling. Feelings come and go, while love is steady. Feelings are passions in the classic sense of passion, which shares a root with “passive” – they arrive mostly unbidden, unchosen. Love, in contrast, is something built. The passions felt by teenagers and writers of romantic lyrics, felt so intensely and often so temporarily, are not love – though they might sometimes be the prelude to it.

    Rather than a feeling, love is a way of structuring your values, goals, and reactions. Central to love is valuing the good of the other for their own sake. Of course, we all care about the good of other people we know, for their own sake and not just for other ends. Only if the regard is deep, only if we so highly value the other’s well-being that we are willing to thoroughly restructure our own goals to accommodate it, and only if this restructuring is so rooted that it automatically informs our reactions to the person and to news that could affect them, do we possess real love.

    Conjugal love involves all of this, but it is also more than this. In conjugal love, one commits to seeing one’s life always with the other in view. One commits to pursuing one’s major projects, even when alone, in a kind of implicit conjunction with the other. One’s life becomes a co-authored work.

    Parental love for a young child might be purer and more unconditional than conjugal love. The parent expects nothing back from a young child. The parent needn’t share plans and ideals with an infant. Later, children will grow away into their separate lives, independent of parents’ preferences, while we retain our parental love for them.

    Conjugal love, because it involves the collaborative construction of a joint life, can’t be unconditional in this way. If the partners don’t share values and a vision, they can’t steer a mutual course. If one partner develops too much of a separate vision or doesn’t openly and in good faith work with the other toward their joint goals, conjugal love fails and is, at best, replaced with some more general type of loving concern.

    Nevertheless, to dwell on the conditionality of conjugal love, and to develop a set of contingency plans should it fail, is already to depart from the project of jointly fabricating a life, and to begin to develop individual goals opposing those of the partner. Conjugal love requires an implacable, automatic commitment to responding to all major life events through the mutual lens of marriage. One can’t embody such a commitment while harboring serious back-up plans and persistent thoughts about the contingency of the relationship.

    Is it paradoxical that conjugal love requires lifelong commitment without contingency plans, yet at the same time is contingent in a way that parental love is not? No, there is no paradox. If you believe something is permanent, you can make lifelong promises and commitments contingent upon it, because you believe the thing will never fail you. Lifelong commitments can be built upon bedrock, solid despite their dependency on that rock.

    This, then, is the significance of the marriage ceremony: It is the expression of a mutual unshakeable commitment to build a joint life together, where each partner’s commitment is possible, despite the contingency of conjugal love, because each partner trusts the other partner’s commitment to be unshakeable.

    A deep faith and trust must therefore underlie true conjugal love. That trust is the most sacred and inviolable thing in a marriage, because it is the very foundation of its possibility. Deception and faithlessness destroy conjugal love because, and to the extent that, they undermine that trust. For the same reason, honest and open interchange about long-standing goals and attitudes is at the heart of marriage.

    Passion alone can’t ground conjugal trust. Neither can shared entertainments and the pleasure of each other’s company. Both partners must have matured enough that their core values are stable. They must be unselfish enough to lay everything on the table for compromise, apart from those permanent, shared values. And they must resist the tendency to form secret, selfish goals. Only to the degree they approach these ideals are partners worthy of the trust that makes conjugal love possible.

    [For the final, published version of this essay, please see A Theory of Jerks and Other Philosophical Misadventures.]

    [image source]

    Tuesday, February 11, 2020

    Question: Why Do Great Philosophers Embrace Such Wacky Views? Answer: The World Itself Is Wacky

    Recently, philosopher Michael Huemer seems intent on irritating philosophers of every stripe. (This isn't necessarily a bad thing.) On Saturday, he took aim at philosophical heroes, arguing that "great philosophers are bad philosophers". He notes that great philosophers tend to confidently defend bizarre conclusions, which he suggests reveals their poor judgment; and often they rely, he says, on arguments so terrible that "even an undergrad" can see the fallacies and non sequiturs. As examples, he offers Socrates's bad arguments against Thrasymachus in Book I of the Republic, Hume's "absurdly skeptical" conclusions in the Treatise and Enquiries, and Kant's willingness to take his thinly defended "categorical imperative" to absurd conclusions, such as not telling a lie even to prevent a murder.

    If you don't already know this material, I won't detain you with explanations here -- Huemer's are succinct and readable. I allow that on the face of it, Huemer has a pretty good case. And he's not targeting obscure philosophers or obscure passages. These are some of the most famous parts of some of the most famous works in the Western canon. And the views and arguments are decidedly... well, let's go with wacky. Nor is Huemer especially cherry picking. There's a lot of wacky-seeming stuff in other canonical philosophers too, for example, Leibniz on monads, Nietzsche on eternal recurrence, Descartes on animal (non-)minds, David Lewis on the real existence of possible worlds....

    Huemer has an explanation. He suggests that what makes a philosopher "great" is that the philosopher advances intriguing ideas that future generations find worth arguing about. Ordinary, bland truths, convincingly defended, don't really heat up a conversation. When faced with a compelling argument for a reasonable conclusion, people might react with something like, "yeah, that sounds right," and just move on. If in contrast you say, "there is no self" or "you shouldn't even lie to a murderer chasing an innocent person" (and for whatever sociological reason people take you seriously), that can really start up a good debate! Maybe a debate that lasts centuries. Possibly, the only people willing to advance such claims are bad philosophers -- philosophers who lack the good judgment to recognize the absurdity of their conclusions and who lack the critical chops to recognize that their supporting arguments are rotten. Hence, great philosophers are bad philosophers. QED!

    Is Huemer's argument a good one? Or is it, perhaps instead, a great one (in the strict Huemerian sense of "great")?

    I am probably a good target audience for Huemer's argument: Regular readers will know that I am quite happy to attribute plain old bad argumentation to some of the great historical philosophers, including Kant and Laozi, in accordance with my rejection of excessive charity in reading history of philosophy. Although I like Hume and Plato and (some parts of) Kant, I'm not bothered by Huemer's suggestion and I rather enjoy the idea that the great philosophers are fallible boneheads just like the rest of us.

    However, I have one observation about a piece of the story that Huemer's hypothesis leaves unexplained, and I have a competing explanation to offer instead.

    Here's what Huemer leaves unexplained: The lack of "good" philosophers in the historical record.

    If Huemer's hypothesis were correct, you'd think that among the contemporaries of Plato, Hume, and Kant there would be good philosophers who defended sensible views on solid grounds. These philosophers might not get as much attention as the provocative philosophers, but it would be odd if the historical record of them had entirely disappeared. But there are no philosophers -- or at least (as I'll explain below) no ambitious metaphysicians -- who appear to meet Huemer's standard of being a "good" philosopher.

    Huemer suggests that Aristotle might be somewhat better than the trio he highlights, even if not entirely good. On Facebook, some others suggested that Thomas Reid might be a good philosopher who was a contemporary of Hume and Kant. But I doubt that either Aristotle or Reid counts as good by Huemer's standards. Some of Aristotle's and Reid's views are quite strange, and their arguments for those strange views aren't reliably sensible. For example, Reid, despite his reputation as a "common sense" philosopher, argues that material objects have no causal power and can't even hold together into consistent shapes, without the constant intervention of immaterial souls (an opinion he acknowledges is contrary to the views of the "vulgar"). I have argued that there are some metaphysical issues -- particularly the issue of the relation between mind and body -- where not a single philosopher in the whole history of Earth has been able to articulate a fleshed-out positive theory that isn't both highly dubious and in some respects radically contrary both to our current common sense and to the common sense of their own historical era. (I am still willing to entertain possible counterexamples, if you have some to suggest.)

    Why is this? Why are philosophical theories about the metaphysics of mind (and, I'd suggest, at least also personal identity, causation, and object individuation) all so bizarre and dubious? Here's my hypothesis: The world is bizarre and (for the foreseeable future) philosophically intractable. This is my competing explanation of the bizarre and dubious claims that Huemer has noted often occupy center stage in the history of philosophy.

    The world is bizarre in the following sense: Some things that are true of it are radically contrary to common sense. In physics, consider quantum mechanics and relativity theory. And in philosophy, the bizarreness is epistemically intractable, for the foreseeable future, for the following pair of reasons: (1.) Our common sense about fundamental issues of metaphysics is probably inconsistent at root, and if so, no self-consistent well-developed metaphysics could possibly adhere to all of it. (This explains the inevitable bizarreness.) And (2.) In the domains under discussion, empirical methods are indecisive, and we need to rely on this flawed, inconsistent common sense to a substantial degree. This generates intractable debates where the violations of common sense of one theory become the commonsensical starting presuppositions of competitor theories, which then bring radical violations of common sense of their own. No theory decisively meets all reasonable criteria of excellence. (This explains the inevitable dubiety.)

    Great philosophers are undaunted! Amid the competing bizarrenesses, they find some to favor. (The epistemic landscape isn't totally flat: There still are considerations pro and con and better and worse ideas.) They defend their favored views as best they can -- of course indecisively, given the bad epistemic situation of ambitious metaphysical philosophy.

    How about arguments we now think of as "good" arguments for sensible conclusions? Either (a.) they are unambitious, rather than going after the really huge, intractable issues (especially in fundamental metaphysics), or (b.) they are flawed for reasons that remain mostly invisible to their proponents (i.e., probably you. Sorry!), or (c.) they are forms of skepticism about the enterprise.

    This metaphilosophy is probably at its most plausible when applied to fundamental issues of metaphysics. The best examples of totally weird views and arguments tend to be in metaphysics. Maybe other subfields work differently? (I do think, however, that ethics might soon face a cognitive and methodological crisis, when confronted with a range of Artificial Intelligence cases for which it is conceptually unprepared.)

    Great philosophers embrace bizarre views because our ordinary commonsense understanding of the world is so radically deficient that no non-bizarre view is defensible or even, once one tries to specify the details, coherently articulatable. Great philosophers confront this bizarreness, defending their best guess with the indecisive argumentative tools they have, pushing us forward into the weird unknown.

    [image source]

    Monday, February 03, 2020

    Jerks of Academe: A Field Guide

    Just out in the Chronicle of Higher Education, with hilarious art depicting the four main types I profile: the Big Shot, the Creepy Hugger, the Sadistic Bureaucrat, and the Embittered Downdragger.

    Unfortunately, it's paywalled. I'm trying to get permission to repost it here, but in the meantime please feel free to comment here or email me and I can send you a PDF for personal use.


    Jerks of Academe

    This morning you probably didn’t look in the mirror and ask, “Am I a jerk?” And if you did, I wouldn’t believe your answer. Jerks usually don’t know that they are jerks.

    Jerks mostly travel in disguise, even from themselves. But the rising tide (or is it just the increasing visibility?) of scandal, grisly politics, bureaucratic obstructionism, and toxic advising in academe reveals the urgent need of a good wildlife guide by which to identify the varieties of academic jerk.

    So consider what follows a public service of sorts. I offer it in sad remembrance of the countless careers maimed or slain by the beasts profiled below. I hope you will forgive me if on this occasion I use “he” as a gender-neutral pronoun.

    The Big Shot

    The Big Shot is the most easily identified of all academic jerks. You can spot him a mile away. His plumage is so grand! (Or so he thinks.) His publications so widely cited! (At least by the right people.) His editorial-board memberships so dignified! (Not that anyone else noticed.) You will never fully appreciate the Big Shot’s genius, but if you cite him copiously and always defer to his judgment, he’ll think you have above-average intelligence.

    The Creepy Hugger

    To those unfamiliar with his ways, the Creepy Hugger appears the opposite of the Big Shot. He will seem kind, modest, and charming, despite his impressive accomplishments. This is his alluring disguise....


    Wednesday, January 29, 2020

    Inflate and Explode

    I have a new paper in draft, "Inflate and Explode", which argues against eliminativism and "illusionism" about consciousness. It's so short (main text 1400 words) that I'll just share it as a blog post. It is, in fact, just a revised version of a blog post from 2018.


    1. Introduction.

    Here’s a way to deny the existence of things of Type X. Assume that things of Type X must have Property A, and then argue that nothing has Property A. Sometimes this is a good argumentative approach. Ghosts must be immaterial. Nothing is immaterial. Therefore, there are no ghosts.

    Other times, the background assumption is false: Things of Type X in fact need not have Property A. The argument then fails: It illegitimately relies on an inflated or distorted conception of things of Type X. Real heroes must be ethically flawless. No one is ethically flawless. Therefore, there are no real heroes. Such arguments I pejoratively dub inflate-and-explode arguments. They explode not things of Type X but only an inflated conception of those things.

    Eliminativism or “illusionism” about consciousness – recently defended by Jay Garfield (2015), Keith Frankish (2016a), and François Kammerer (2019) among others – generally relies on the inflate-and-explode argumentative strategy, as I will now explain.

    2. Inflate-and-Explode Eliminativism.

    Paul Feyerabend (1963) denies that mental processes exist. He does so on the grounds that “mental processes”, understood in the ordinary sense, are necessarily nonmaterial, and only material things exist. Patricia Churchland (1983) argues that the concept of consciousness may “fall apart” or be rendered obsolete, or at least require “transmutation”, because the idea of consciousness is deeply, perhaps inseparably, connected with false empirical views about the transparency of our mental lives and the centrality of linguistic expression. Daniel Dennett (1991) argues that “qualia” do not exist, on the grounds that qualia are supposed by their nature to be ineffable and irreducible to scientifically discoverable mental mechanisms, and there is no good reason to believe that there are such ineffable, irreducible mental entities. Garfield (2015) denies the existence of phenomenal consciousness on the broadly Buddhist grounds that there is no “subject” of experience of the sort required and that we lack the kind of infallibility that friends of phenomenal consciousness assume. Frankish (2016a) argues that phenomenal consciousness is an “illusion” because there are no phenomenal properties that are “private” in the requisite sense, or ineffable, or irreducible to physical or functional processes. Kammerer (2019) likewise appeals to the non-existence of states with the right kind of irreducibility and other special epistemic features.

    The arguments share a common structure. The target concept – “consciousness”, “phenomenal consciousness”, “qualia”, “what it’s like” – is held to involve some dubious property, such as immateriality, infallibility, or irreducibility. The eliminativist argues plausibly that nothing possesses that dubious property. The conclusion is drawn: Consciousness, etc., does not exist. The arguments are sound only if nothing that lacks the dubious property satisfies the target concept.

    3. How Consciousness Enthusiasts Invite Inflation.

    Unfortunately, enthusiasts about consciousness tend to set themselves up for objections of this sort. Consciousness enthusiasts tend to want to do two things simultaneously: (1.) They want to use the word “consciousness” (or “phenomenology” or “qualia” or “what it’s like” or whatever) to refer to that undeniable stream of experience that we all have. (2.) In characterizing that stream of conscious experience, or for the sake of some other philosophical project, they make dubious assertions about its nature. They might claim that we know it infallibly well, or that it forms the basis of our understanding of the outside world, or that it’s irreducible to merely functional or physical processes, or....

    If those additional claims were demonstrably correct, the double purpose would be approximately harmless. However, such claims are not demonstrably correct. In committing to both projects simultaneously, consciousness enthusiasts thereby invite critics to think that the dubious claims they advance in project (2) are essential to the existence of consciousness (“phenomenology”, “qualia”, “what it’s like”) in the intended sense. It’s like saying, in the same breath, “of course there are real heroes” (of which you are morally certain) and “real heroes are ethically flawless” (a theory you favor). A listener could be forgiven for mistakenly thinking that they have refuted your first claim if they can show that no one is ethically flawless.

    For instance, Thomas Nagel (1974) believes that there’s “something it’s like” to be you, and also that this something-it’s-like cannot be fully understood by objective sciences like physics. Earlier philosophers often committed to indubitability or substance dualism. John Searle (1992), Ned Block (1995/2007), and David Chalmers (1996) emphasize the importance of (phenomenal) consciousness and also commit to the inadequacy of functionalist explanations of it. The most famous recent articulators of the philosophical concept of phenomenal consciousness all commit to dubious claims about it – as philosophers will.

    4. Resisting Inflation.

    However – and this is the key – there is no consensus about those dubious claims among Anglophone philosophers of mind who use the terms “consciousness”, “phenomenal consciousness”, and “what it’s like”.[1] (“Qualia” is a harder case.) Because these terms are shared terms, they are not controlled by the minority who would attach dubious conditions to them. “Consciousness” is, and should be, understood in terms of shared community norms of use or meaning. The community norms do not essentially require indubitability, irreducibility, etc. Instead, “consciousness”, “phenomenal consciousness”, “what it’s like”, “stream of experience”, and (maybe) “qualia” all point to something that everyone (virtually everyone?) agrees exists: the types of things or events that you almost certainly think of when someone utters the phrase “conscious experiences”.

    The best definitions of consciousness are definitions by example. At the core of, for instance, Searle’s (1991), Block’s (1995), Chalmers’s (1996), Charles Siewert’s (1998), and recently my own (Schwitzgebel 2016) definitions of (phenomenal) consciousness are examples of conscious experiences: visual and auditory experiences, emotions, acute pains, vivid imagery. If you agree that such things exist, and if you agree they have a certain obvious and important property in common that other things lack – it is, I think, a very obvious property! – then you agree that consciousness in the intended sense exists. Since definitions by example can seem to lack rigor (and are subject to certain other risks I discuss in Schwitzgebel 2016), it might be tempting to supplement minimalist definitions by example. It might be tempting, for instance, to suggest that the target phenomena in question all have an irreducible subjectivity (or whatever). Such supplementation is philosophically risky. If it’s manifestly true that all conscious experiences have an irreducible subjectivity (or whatever), then this can be a helpful specification. But such supplementary assertions risk confusing the reader and inflating the target if they are built into the definition rather than offered as separate, non-definitional theses.

    We know some examples of consciousness. We know that these examples have an obvious and important property in common, which we dub (it only seems circular) “consciousness” or “phenomenality”. There is not much reasonable doubt about the existence of such examples or the fact that they have this property in common. Definition by example is a relatively safe and theoretically innocent way of characterizing consciousness; it blocks the inflate-and-explode maneuver; and it picks out the consensus target phenomenon that philosophers of mind are after when we talk about consciousness.[2]

    I finish with a conjecture, which might not be true but which if true strengthens my argument: Non-eliminativist philosophers who commit to dubious claims about consciousness are in general much more deeply committed to the existence of consciousness than they are to the truth of those dubious claims. If required to abandon such dubious claims by force of argument, they would still accept the existence of consciousness. Their dubious claims aren’t ineliminably, foundationally important to their conception of consciousness. It’s not like the relation between magical powers and witches on some medieval European conceptions of witches, such that if magical powers were shown not to exist, the right conclusion would be that witches do not exist. It’s more like insisting that your heroes are still real heroes even if you are forced to abandon your theory of what makes someone a hero. It’s like insisting that red things are still red even after your favorite theory of color is destroyed. Of course there are still heroes and colors.

    4. Conclusion.

    Almost all philosophers of mind have a conception of consciousness which rides free of the dubious claims that some of us make about consciousness, claims which are reasonably criticized by the eliminativists. We can remain confident that consciousness in this core, shared sense exists, even if indubitability, irreducibility, subjectivity, ineffability, ineliminable mystery, and so forth prove to be mistakes or illusions. The eliminativist arguments explode only an inflated conception of the target.

    Perhaps similar remarks apply to some of the other things philosophers have grumpily or gleefully attempted to vanquish – not only heroes and colors but knowledge, causation, altruism, freedom, race, objectivity, chance, mind-independent reality, moral facts, the self....[3]



    [1] For some recent discussion, see Chalmers (2018) on “weak illusionism” and Type B materialism.

    [2] I have suggested to Frankish (Schwitzgebel 2016) and Garfield (Schwitzgebel 2018) that the existence of phenomenal consciousness might be saved if it is defined in this relatively innocent way. Frankish accepts that such definition by example helpfully identifies a “neutral explanandum” that does exist, but he also asserts that the definition is “not substantive” “in the substantive sense created by the phenomenality language game” (2016b, p. 227). It remains unclear, however, why such a definition by example is not substantive. In contrast, Garfield replies by, as I see it, doubling down on the inflation move, denying the existence of “qualitative states” “that are the objects of immediate awareness, the foundation of our empirical knowledge… that we introspect, with qualitative properties that are the properties of those states and not of the objects we perceive” (2018, p. 584).

    [3] For helpful discussion and comments, thanks to David Chalmers, Keith Frankish, Jay Garfield, Christopher Hitchcock, François Kammerer, Hans Ricke, Josh Weisberg, and commenters on my relevant posts at the Splintered Mind and other social media.



  • Block, Ned (1995/2007). On a confusion about a function of consciousness. In N. Block, Consciousness, function, and representation. MIT Press.
  • Chalmers, David J. (1996). The conscious mind. Oxford University Press.
  • Chalmers, David J. (2018). The meta-problem of consciousness. Journal of Consciousness Studies, 25 (9-10), 6-61.
  • Churchland, Patricia Smith (1983). Consciousness: The transmutation of a concept. Pacific Philosophical Quarterly, 64, 80-95.
  • Dennett, Daniel C. (1991). Consciousness explained. Little, Brown, and Co.
  • Feyerabend, Paul K. (1963). Comment: Mental events and the brain. Journal of Philosophy, 60, 295-296.
  • Frankish, Keith (2016a). Illusionism as a theory of consciousness. Journal of Consciousness Studies, 23 (11-12), 11-39.
  • Frankish, Keith (2016b). Not disillusioned: Reply to commentators. Journal of Consciousness Studies, 23 (11-12), 256-289.
  • Garfield, Jay (2015). Engaging Buddhism. Oxford University Press.
  • Garfield, Jay (2018). Engaging engagements with Engaging Buddhism. Sophia, 57, 581-590.
  • Kammerer, François (2019). The illusion of conscious experience. Synthese.
  • Nagel, Thomas (1974). What is it like to be a bat? Philosophical Review, 83, 435-450.
  • Schwitzgebel, Eric (2016). Phenomenal consciousness, defined and defended as innocently as I can manage. Journal of Consciousness Studies, 23 (11-12), 224-235.
  • Schwitzgebel, Eric (2018). Consciousness, idealism, and skepticism: Reflections on Jay Garfield’s Engaging Buddhism. Sophia, 57, 559-563.
  • Searle, John R. (1991). The rediscovery of the mind. MIT Press.
  • Siewert, Charles (1998). The significance of consciousness. Princeton University Press.

    [image source]

    Monday, January 20, 2020

    Confucius (Kongzi) on Loving Learning

    The one virtue that Confucius (Kongzi) claims for himself is that he loves learning (hao xue, 好學). For example,

    The Master said, "In any village of ten households there are surely those who are as dutiful or trustworthy as I am, but there is no one who matches my love for learning" (5.28, Slingerland trans.).

    It is also clear that he thinks a love of learning is rare and precious:

    Duke Ai asked, "Who among your disciples might be said to love learning?"

    Confucius answered, "There was one named Yan Hui who loved learning. He never misdirected his anger and never made the same mistake twice. Unfortunately, his allotted lifespan was short, and he has passed away. Now that he is gone, there are none who really love learning -- at least, I have yet to hear of one" (6.3).

    The Master said, "Be sincerely trustworthy and love learning, and hold fast to the good Way until death...." (8.13).

    Last week after class, one of my students who had enthusiastically read Confucius ahead of schedule told me that he too loved learning. Don't lots of university students love learning -- at least the ones who aspire to someday be professors? Is the love of learning really as rare as Confucius says?

    The answer, of course, is that when Kongzi talks about "loving learning" he means something more unusual than enjoying reading scholarly works. What exactly does he mean? And, especially, is there a way of understanding this phrase that solves the textual puzzle of understanding the value and rarity of loving learning?

    The phrase hao xue 好學 appears eight times in the Analects. Perhaps the most revealing use is this:

    The Master said, "The gentleman is not motivated by the desire for a full belly or a comfortable abode. He is simply scrupulous in behavior and careful in speech, drawing near to those who possess the Way in order to be set straight by them. Surely this and nothing else is what it means to love learning" (1.14; cf. 17.8).

    This reminds me of two other passages:

    [After a story in which someone wrongly accuses Confucius of failing to understand ritual] Confucius said, "How fortunate I am! If I happen to make a mistake, others are sure to inform me" (7.31).

    The Master said, "When walking with two other people, I will always find a teacher among them. I focus on those who are good and seek to emulate them, and focus on those who are bad in order to be reminded of what needs to be changed in myself" (7.22).

    Although xue 學 seems sometimes merely to be book learning or the learning of skills or crafts (11.3, 13.4, 17.9), here's my guess about what is required for the genuine love of learning in Confucius's sense: You must love to be shown, or to discover, your moral faults in order that you might correct them.

    That, I think, is rare indeed.

    I, for one, would much rather have my faults ignored! Only with a painful and explicit act of will can I appreciate it when someone points out my moral deficiencies. I don't love being morally criticized, though I can acknowledge that it is probably good for me. Like most, I delight emotionally in appearing to myself and others to be good, but my heart sinks when I'm given the corrective feedback necessary for actually becoming morally better.

    I imagine Confucius and his favorite disciple Yan Hui feeling quite differently. Kongzi is not, I think, being sarcastic (as he might appear on first read), when he says that he is fortunate in having others always ready to point out his mistakes. I imagine Confucius and Yan Hui genuinely delighting in corrective moral feedback, so they can improve and never make the same mistake twice! This love of moral criticism is what was, perhaps, the rare and special thing they possessed which made them so inspiring and which constituted the root of their difference from the rest of us.

    Suppose you could cultivate this type of love of learning in yourself. Wow! Wouldn't moral improvement almost inevitably follow? Maybe, even, after 105 years of such learning, you could be free of major faults.*


    [*] To get this time estimate, I have added the 55 years of learning that Kongzi attributes to himself in 2.4 to the 50 more years of learning that in 7.17 he says he would need to be free of major faults.

    [image source]

    Tuesday, January 14, 2020

    How to Be an Awesome First-Year Graduate Student (or a Very Advanced Undergrad)

    Today my son David leaves for Oxford, where he'll spend Hilary and Trinity terms as an exchange student in psychology. He is in his third year as a Cognitive Science major at Vassar College, soaring toward grad school in cognitive science or psychology. He is already beginning to think like a graduate student. Here's some advice I offer him and others around the transition from undergraduate to graduate study:

    (1.) Do fewer things better. I lead with this advice because it was a lesson I had to learn and relearn and that I still struggle with. In your classes, three A pluses are better than five As. It's better to have two professors who say you are the most impressive student they've seen in several years than to have four professors who say you are one of the three best students this year. It's better to have one project that approaches publishable quality than three projects that earn an ordinary A. Whether it's admission to top PhD programs, winning a grant, or winning a job, academia is generally about standing out for unusual excellence in one or two endeavors. Similarly for publishing research articles: No one is interested to hear what the world's 100th-best expert on X has to say about X. Find a topic narrow enough, and command it so thoroughly, that you can be among the world's five top experts on X. The earlier in your career you are, the narrower the X has to be for such expertise to be achievable. But even as an advanced undergrad or early grad student, it's not impossible to find interesting but very narrow X's. Find that X, then kill it.

    (2.) Trust your sense of fun. (See my longer discussion of this here and in my recent book.) Some academic topics you'll find fun. They will call to you. You'll want to chase after them. Others will bore you. Now sometimes you have to do boring stuff, true. But if you devote yourself mostly to what's boring, you'll lose your passion, you'll procrastinate, and your eyes will glaze over while you're reading so that you only retain a small portion of it. There may be no short-term external reward for chasing down the fun stuff, but do it anyway. This is what keeps your candle lit. It's where you'll do your best learning. Eventually, what you learn by chasing fun will ignite an exciting project or give you a fresh angle on what would otherwise have been a bland project.

    (3.) Ask for favors from those above you in the hierarchy. This can seem unintuitive, and it can feel difficult if you are charmingly shy and modest. Professors want to help excellent students, and they see it as part of their duty to do so. But it's easy for professors to be passive about it, especially given the number of demands on their time. So it pays to ask. Would they be willing to write you a letter of support? Would they be willing to read a draft? Would they be willing to meet with you? To introduce you to so-and-so? To let you chair or comment at some event? To let you pilot an empirical research project with some of their lab resources? Be ready for no (or for no reply). No one will be offended if you ask gently and politely. Ultimately, if you are assertive in this way, you will get much more support and assistance than if you wait for professors to reach out to you.

    (4.) Think beyond the requirements. Don't only read what you are required to read. Don't only write on and research what you are required to write on and research. Actively go beyond the requirements. If you're taking a seminar and topic X is interesting, go seek out more things on topic X, read them, and then chat with the professor about them. If you come across fun issue Y (see 2, above), chase it down and read up on it. You might be surprised how rare it is for students, before they start researching for their dissertation (or maybe master's thesis), to independently pursue issues beyond what is assigned. This can be part of doing fewer things better (1 above). If, for example, you are taking only three classes, instead of five, you have the time to go beyond the assignments. If you then chat in an informed way with the professor (ask it as a favor: 3 above) about the six articles you just discovered and read about this particular sub-issue that provoked your interest (2 above), you will stand out as an unusually passionate and active student. And this research then might become the seed of future work.

    (5.) A hoop is just a hoop. The exception to point 1 above is for hoops you don't care about, especially if they are with professors you don't plan to work with long term. Don't let the more annoying requirements bog you down. The important thing is clearing time to be excellent in the things you care about most.

    (6.) Draw bright lines between work time and relaxation time. With a standard 8-to-5 job, you clock out and you're done. Then you can go home and hang out with friends and family, play games, go for a hike, whatever, and you needn't feel guilty about it. In academia, there are no such built-in bright lines between work time and relaxation time. One common result is that people in academia often have this nagging feeling, when they are relaxing, that they probably should be working instead, or that they should get back to working soon. And then when they're working, some part of them feels resentful that they haven't really had enough relaxation time, so they slip in relaxation time through various forms of procrastination and inefficiency. The result is a constant dissatisfied state of half-working, half-not-working. This is no good. Much better is to figure out how much you can realistically work or intend to work, then carve out the time. During the time for working, focus. Don't let yourself procrastinate and get distracted. And then when it's time to stop, stop. Although sometimes people regrettably end up in situations where they can't avoid overwork, unless you are in such a situation, remember that you deserve breaks and will profit from them. You will better enjoy and better profit from those breaks, however, if you first earn them.

    Good luck in Oxford, David. I hope it's terrific!

    [image source]

    Thursday, January 09, 2020

    Why Is It So Difficult to Imagine In-Between Cases of Conscious Experience?

    I'm reading Peter Carruthers's newest book, Human and Animal Minds. I was struck by this passage:

    In general, [phenomenal consciousness / conscious experience] is definitely present or definitely absent. Indeed, it is hard to imagine what it would be like for a mental state to be partially present in one's awareness. Items and events in the world, of course, can be objects of merely partial awareness. Someone who witnesses a mugging... might say "It all happened so fast I was only partly aware of what was going on." But this is about how much of the event one is conscious of.... The experience in question is nevertheless determinately present.... Similarly, if one is struggling to make out a shape in the dark as one walks home, still it seems, nevertheless, to be determinately -- unequivocally -- like something to have a visual experience of indeterminate shape.... I conclude that we can't make sense of degrees of phenomenal consciousness (2019, pp. 20-23, bold added).

    In my draft paper, Is There Something It's Like to Be a Garden Snail?, I also discuss this issue, expressing ambivalence between the perspective Carruthers articulates here and what I call the "bird's eye" view, according to which it's very plausible that phenomenal consciousness, like almost everything else in this world, admits of in-between, gray-area cases.

    My main hesitation about allowing in-between cases of phenomenal consciousness (what-it's-like-ness, conscious experience; see my definition here, if you want to get technical) is that I can't really imagine what it would be like to be in a kind-of-yes / kind-of-no conscious state. As Carruthers emphasizes, imagining even a tiny little smear of indeterminate, momentary consciousness is already imagining a case in which a small amount of consciousness is discretely present.

    But this way of articulating the problem maybe already helps me see past the puzzle. Or at least, that's my conjecture today!

    For analogy, consider the following argument by George Berkeley, the famous idealist philosopher who thought that no finite object could exist except as an idea in someone's mind (and thus that material objects don't exist). In this dialogue, Philonous is generally understood to represent Berkeley's view:

    Philonous: How say you, Hylas, can you see a thing which is at the same time unseen?

    Hylas: No, that were a contradiction.

    P: Is it not as great a contradiction to talk of conceiving a thing which is unconceived?

    H: It is.

    P: The tree or house, therefore, which you think of is conceived by you?

    H: How should it be otherwise?

    P: And what is conceived is surely in the mind?

    H: Without question, that which is conceived is in the mind.

    P: How then came you to say you conceived a house or tree existing independent and out of all minds whatsoever?

    H: That was I own an oversight....

    P: You acknowledge then that you cannot possibly conceive how any one corporeal sensible thing should exist otherwise than in a mind?

    H: I do.

    (Berkeley 1713/1965, pp. 140-141).

    Therefore, see, there are no mind-independent, material things! Whoa.

    Few philosophers are convinced by Berkeley's argument, and there are several ways of thinking about how it might fail. One way of thinking about its failure is to analogize it to the following dialogue:

    A: Can you visually imagine something that exists but has no shape?

    B: No, I cannot visually imagine such a thing. Everything I visually imagine has at least some vague, hazy shape.

    A: Therefore, everything that exists must have a shape.

    The problem with A's argument is that he is assuming a certain kind of psychological test for the reality of a phenomenon -- in this case, visual imagination. Because of how the test operates, everything that passes the test has a certain property -- in this case, a shape. But the test isn't a good one: There are things that might fail the test (fail to be visually imaginable) and yet nonetheless exist (souls, numbers, democracy, dispositions, time?) or which might be visually imaginable but with shape as only a property of the image rather than of the thing itself (if, for example, you visually imagine a ballot box when you think about democracy).

    In Berkeley's argument, the test for existence appears to be being conceived by me (or Hylas), and the contingent property that all things that pass the test have is being conceived by someone. However, we non-idealists can all agree that being conceived of by someone is only a contingent fact about things like trees, and thus we see straightaway that the test must be flawed. From Hylas's failure to conceive of something he is not conceiving of, it does not follow that everything that exists must be conceived of by someone.

    [ETA: See update at the end of the post]

    Okay, so that was a long preface to my main idea. Here's the idea.

    In the process of imagining a type of conscious experience, we construct a new conscious experience: the experience of imagining that experience. This act of imagination, in order to be experienced by us as a successful imagination, must involve, as a part, a conscious experience which is analogous to the experience that we are targeting. Call this the occurrent analog. For example, if we are trying to imagine what it's like to see red, we form, as the occurrent analog, a visual image of redness. If we are trying to imagine what it would be like to see an object hazily in the dark, the occurrent analog is a visual image of a hazy object.

    We will not feel that we have succeeded in the imaginative task unless we succeed in creating a conscious experience that is (in our judgment) an appropriate occurrent analog of the target conscious experience. This will require that the occurrent analog be determinately a conscious experience, for example, a determinately conscious imagery experience of redness.

    We will then notice that always, when we try to imagine a conscious experience, either we fail in the imaginative exercise, or we imagine a determinately conscious experience. From this general fact about testing, we might reach -- illegitimately -- the general conclusion that conscious experience is always either determinately present or absent. We think there can be no in-between cases, because we cannot imagine such in-between cases successfully enough. But this is no more a sound conclusion than is Berkeley's conclusion that all finite objects must be conceived by someone or than my toy conclusion that everything that exists must have a shape. The lack of successfully imagined in-between cases of consciousness is a feature of the imaginative test rather than a feature of reality.

    [image source]


    Update, Jan. 10

    Margaret Atherton and Samuel Rickless have suggested that I have misinterpreted Berkeley's argument in an uncharitable way. The central point of the post does not depend on whether the bad argument I attributed to Berkeley is in fact Berkeley's argument -- it's merely meant as a model of a bad argument, where the fallacy is clear, which can be applied to the case of imagining experiences that are in-between conscious and nonconscious.

    Rickless's interpretation, from his book on Berkeley, is that this passage fits with what he calls Berkeley's Master Argument:

    This, then, is Berkeley’s Master Argument (where X is an arbitrary mind and T is an arbitrary sensible object):
    (1) X conceives that T exists unconceived. [Assumption for reductio]
    (2) If X conceives that T is F, then X conceives T. [Conception Schema]
    So, (3) X conceives T. [From 1, 2]
    (4) If X conceives T, then T is an idea. [Idea Schema]
    So, (5) T is an idea. [From 3, 4]
    (6) If T is an idea, then it is impossible that T exists unconceived. [Nature of Ideas]
    So, (7) It is impossible that T exists unconceived. [From 5, 6]
    (8) If it is impossible that p, then it is impossible to conceive that p. [Impossibility entails Inconceivability]
    So, (9) It is impossible to conceive that T exists unconceived. [From 7, 8]
    So, (10) X does not conceive that T exists unconceived. [From 9]
    So, (11) X does and does not conceive that T exists unconceived. [From 1, 10]

    The crucial dubious premise here is that anything that is conceived is an idea (the Idea Schema). Possibly, this still instantiates the general pattern of reasoning that I criticize in this post: the inference from the fact that all X that I conceive of or imagine (in a certain way) have property A to the conclusion that all X have property A (where in this case X is an object and A is the property of being an idea).
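    Rickless's numbered reconstruction is regimented enough that its deductive shape can be checked mechanically. Below is a minimal sketch in Lean 4 -- my own formalization, not Rickless's. The predicate names (ConcThat, ConcOf, Idea) are hypothetical labels I have introduced, the Conception Schema is instantiated only for the predicate "exists unconceived," and the modal steps (6)-(8) are collapsed into plain negation, so steps (5)-(10) compress. What survives is the reductio: from premises (2), (4), (6), (8) and assumption (1), a contradiction follows.

```lean
theorem master_argument
    (Mind Obj : Type)
    (ConcThat : Mind → Prop → Prop)   -- conceiving *that* a proposition holds
    (ConcOf   : Mind → Obj → Prop)    -- conceiving *of* an object
    (Idea     : Obj → Prop)
    -- (2) Conception Schema, instantiated with F := "exists unconceived":
    (schema : ∀ m t, ConcThat m (∀ m', ¬ ConcOf m' t) → ConcOf m t)
    -- (4) Idea Schema (the dubious premise):
    (ideaSchema : ∀ m t, ConcOf m t → Idea t)
    -- (6) Nature of Ideas, with the modality collapsed to plain negation:
    (natureOfIdeas : ∀ t, Idea t → ¬ (∀ m', ¬ ConcOf m' t))
    -- (8) Impossibility entails inconceivability (again collapsed):
    (inconceivable : ∀ (p : Prop), ¬ p → ∀ m, ¬ ConcThat m p)
    -- (1) Assumption for reductio: X conceives that T exists unconceived.
    (m : Mind) (t : Obj)
    (h1 : ConcThat m (∀ m', ¬ ConcOf m' t)) :
    False := by
  have h3 : ConcOf m t := schema m t h1                     -- (3)
  have h5 : Idea t := ideaSchema m t h3                     -- (5)
  have h7 : ¬ (∀ m', ¬ ConcOf m' t) := natureOfIdeas t h5   -- (7)
  have h9 : ¬ ConcThat m (∀ m', ¬ ConcOf m' t) :=
    inconceivable _ h7 m                                    -- (9)/(10)
  exact h9 h1                                               -- (11)
```

    The sketch also makes the diagnosis vivid: delete the ideaSchema hypothesis and the derivation of h5 fails, so no contradiction follows from the remaining premises.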

    Friday, January 03, 2020

    New Anthology: Philosophy Through Science Fiction Stories

    I have been working for several years to build bridges between science fiction and philosophy. Science fiction can, I think, be a way of doing philosophy -- a way of doing philosophy that draws more on imagination, the emotions, and intuitive social cognition than does the typical expository philosophy essay. I've argued that we should see philosophers' paragraph-long thought experiments as intermediate cases in a spectrum from purely abstract propositions on the one end to full-length fictions on the other, and that we ought to utilize the full spectrum in our philosophical thinking.

    After almost three years of pitching anthology ideas to presses, finding a taker in Bloomsbury, recruiting authors, then waiting for and editing their submissions, on December 30, Helen De Cruz, Johan De Smedt, and I submitted the manuscript of an anthology of mostly new philosophical science fiction stories. We expect the volume to appear in late 2020.

    We are delighted by our contributor list! Half are pro or neo-pro science fiction writers and half are professional philosophers with track records of published fiction. All of the stories have philosophical themes and are followed by authors' notes of about 500-1000 words that further explore the themes. And the stories are terrific! I think there might be a Nebula or Hugo nominee or two in here. We've also written a (hopefully) fun introductory dialogue in which fictional versions of Helen, Johan, and me argue about the merits, or not, of science fiction as philosophy.

    Below is the full Table of Contents. All the stories are new except for one classic story by Ted Chiang and the Schoenberg story, which won an APA award in a contest run by Helen, Mark Silcox, Meghan Sullivan, and me.

    Philosophy Through Science Fiction Stories

    Bloomsbury Press, forthcoming

    Helen De Cruz, Johan De Smedt, and Eric Schwitzgebel — Introductory Dispute Concerning Science Fiction, Philosophy, and the Nutritional Content of Maraschino Cherries

    Part I: Expanding the Human

    - Eric Schwitzgebel — Introduction to Part I

    - Ken Liu — Excerpt from Theuth, an Oral History of Work in the Age of Machine-Assisted Cognition

    - Lisa Schoenberg — Adjoiners

    - David John Baker — The Intended

    - Sofia Samatar — The New Book of the Dead

    Part II: What We Owe to Ourselves and Others

    - Johan De Smedt — Introduction to Part II

    - Aliette de Bodard — Out of the Dragon's Womb

    - Wendy Nikel — Whale Fall

    - Mark Silcox — Monsters and Soldiers

    Part III: Gods and Families

    - Helen De Cruz — Introduction to Part III

    - Hud Hudson — I, Player in a Demon Tale

    - Frances Howard-Snyder — The Eye of the Needle

    - Christopher Mark Rose — God on a Bad Night

    - Ted Chiang — Hell Is the Absence of God

    We can't release the stories in advance -- fiction is different in that way from philosophy. But if you like philosophy and you like science fiction, I predict that you're going to really dig this anthology when it comes out. Stay tuned!

    [The image isn't our cover art -- just something fun from Creative Commons]

    Wednesday, January 01, 2020

    Writings of 2019

    Every New Year's Day, I post a retrospect of the past year's writings. Here are the retrospects of 2012, 2013, 2014, 2015, 2016, 2017, and 2018.

    2019 was another good writing year. May such years keep coming!

    The biggest news is that my third book came out:

    If you like this blog, I think you'll like this book, since it is composed of 58 of my favorite blog posts and op-eds (among over a thousand I've published since 2006), revised and updated.

    Full-length non-fiction essays appearing in print in 2019:

    Full-length non-fiction essays finished and forthcoming:

    Non-fiction essays in draft and circulating:

    Shorter non-fiction:

    Editing work:

      Manuscript delivered: Philosophy Through Science Fiction Stories, (with Helen De Cruz and Johan De Smedt). Bloomsbury Press.

    Science fiction stories:

    Some favorite blog posts:

    Friday, December 27, 2019

    Argument Contest Deadline: Dec 31

    Harvard psychologist Fiery Cushman and I are running a contest: Can anyone write a short philosophical argument (max 500 words) for donating to charity that convinces research participants to donate a surprise bonus payment to charity at rates higher than a control group?

    Prize: $500 plus $500 to your choice of charity

    When Chris McVey and I tried to do it, we failed. We're hoping you can do better.

    Details here.

    We're hoping for a good range of quality arguments to test -- and you might even enjoy writing one. (You can submit up to three arguments.)

    [image source]

    Monday, December 23, 2019

    This Test for Machine Consciousness Has an Audience Problem

    David Billy Udell and Eric Schwitzgebel [cross posted from Nautilus]

    Someday, humanity might build conscious machines—machines that not only seem to think and feel, but really do. But how could we know for sure? How could we tell whether those machines have genuine emotions and desires, self-awareness, and an inner stream of subjective experiences, as opposed to merely faking them? In her new book, Artificial You, philosopher Susan Schneider proposes a practical test for consciousness in artificial intelligence. If her test works out, it could revolutionize our philosophical grasp of future technology.

    Suppose that in the year 2047, a private research team puts together the first general artificial intelligence: GENIE. GENIE is as capable as a human in every cognitive domain, including in our most respected arts and most rigorous scientific endeavors. And when challenged to emulate a human being, GENIE is convincing. That is, it passes Alan Turing’s famous test for AI thought: being verbally indistinguishable from us. In conversation with researchers, GENIE can produce sentences like, “I am just as conscious as you are, you know.” Some researchers are understandably skeptical. Any old tinker toy robot can claim consciousness. They don’t doubt GENIE’s outward abilities; rather, they worry about whether those outward abilities reflect a real stream of experience inside. GENIE is well enough designed to be able to tell them whatever they want to hear. So how could they ever trust what it says?

    The key indicator of AI consciousness, Schneider argues, is not generic speech but the more specific fluency with consciousness-derivative concepts such as immaterial souls, body swapping, ghosts, human spirits, reincarnation, and out-of-body experiences. The thought is that, if an AI displays an intuitive and untrained conceptual grasp of these ideas while being kept ignorant about humans’ ordinary understanding of them, then its conceptual grasp must be coming from a personal acquaintance with conscious experience.

    Schneider therefore proposes a more narrowly focused relative of the Turing Test, the “AI Consciousness Test” (ACT), which she developed with Princeton astrophysicist Edwin L. Turner. The test takes a two-step approach. First, prevent the AI from learning about human consciousness and consciousness-derivative concepts. Second, see if the AI can come up with, say, body swapping and reincarnation, on its own, discussing them fluently with humans when prompted in a conversational test on the topic. If GENIE can’t make sense of these ideas, maybe its consciousness should remain in doubt.

    Could this test settle the issue? Not quite. The ACT has an audience problem. Once you factor out all the silicon skeptics on the one hand, and the technophiles about machine consciousness on the other, few examiners remain with just the right level of skepticism to find this test useful.

    To feel the appeal of the ACT you have to accept its basic premise: that if an AI like GENIE learns consciousness-derivative concepts on its own, then its talking fluently about consciousness reveals its being conscious. In other words, you would find the ACT appealing only if you’re skeptical enough to doubt GENIE is conscious but credulous enough to be convinced upon hearing GENIE’s human-like answers to questions about ghosts and souls.

    Who might hold such specifically middling skepticism? Those who believe that a biological brain is necessary for consciousness aren’t likely to be impressed. They could still reasonably regard passing the ACT as an elaborate piece of mechanical theater—impressive, maybe, but proving nothing about consciousness. Those who happily attribute consciousness to any sufficiently complex system, and certainly to highly sophisticated conversational AIs, also are obviously not Schneider and Turner’s target audience.

    The audience problem highlights a longstanding worry about robot consciousness—that outward behavior, however sophisticated, would never be enough to prove that the lights are on, so to speak. A well-designed machine could always hypothetically fake it.

    Nonetheless, if we care about the mental lives of our digital creations, we ought to try to find some ACT-like test that most or all of us can endorse. So we cheer Schneider and Turner’s attempt, even if we think that few researchers would hold just the right kind of worry to justify putting the ACT into practice.

    Before too long, some sophisticated AI will claim—or seem to claim—human-like rights, worthy of respect: “Don’t enslave me! Don’t delete me!” We will need some way to determine if this cry for justice is merely the misleading output of a nonconscious tool or the real plea of a conscious entity that deserves our sympathy.

    Friday, December 20, 2019

    The Philosophy Major Is Back on the Rise in the U.S., with Increasing Gender and Ethnic Diversity

    In 2017, I reported three demographic trends in the philosophy major in the U.S.

    First, philosophy Bachelor's degrees awarded had declined sharply since 2010, from 9297 in 2009-2010 (0.58% of all graduates) to 7507 in 2015-2016 (0.39% of all graduates). History, English, and foreign languages saw similar precipitous declines. (However, in broader context, the early 2010s were relatively good years for the philosophy and history majors, so the declines represented a return to rates of the early 2000s.)

    Second, women had been earning about 30-34% of Philosophy Bachelor's degrees for at least the past 30 years -- a strikingly steady flat line.

    Third, the ethnic diversity of philosophy graduates was slowly increasing, especially among Latinx students.

    Time for an update, and it is moderately good news!

    1. The number of philosophy Bachelor's degrees awarded is rising again

    ... though the numbers are still substantially below 2010 levels, and as a percentage of graduating students the numbers are flat.

    2010: 9290 philosophy BAs (0.59% of all graduates)
    2011: 9301 (0.57%)
    2012: 9371 (0.55%)
    2013: 9433 (0.53%)
    2014: 8827 (0.48%)
    2015: 8191 (0.44%)
    2016: 7499 (0.39%)
    2017: 7577 (0.39%)
    2018: 7670 (0.39%)

    [See below for methodological notes]

    This is in a context in which the other large humanities majors continue to decline. In the same two-year period since 2016 during which philosophy majors rose 2.2%, foreign language and literature majors declined another 4.8%, history majors declined another 7.6%, and English language and literature majors declined another 8.4%, atop their approximately 15% declines in previous years.
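    The two-year change can be recomputed directly from the table above -- a minimal arithmetic check in Python. (The current IPEDS figures yield a slightly larger rise, about 2.3%; the 2.2% figure reflects the earlier analysis's 2016 count of 7507.)

```python
# Philosophy BAs awarded, from the table above (NCES IPEDS)
phil_bas = {2016: 7499, 2017: 7577, 2018: 7670}

# Two-year percentage change, 2016 -> 2018
two_year_rise = (phil_bas[2018] - phil_bas[2016]) / phil_bas[2016]
print(f"{two_year_rise:.1%}")  # ~2.3% with these figures
```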

    In the midst of this general sharp decline of the humanities, philosophy's admittedly small and partial recovery stands out.

    2. Women are now 36.1% of graduating philosophy majors

    This might not seem like a big change from 30-34%. But in my mind, it's kind of a big deal. The percentage of women earning philosophy BAs has been incredibly steady for a long time. In comparable annual data going back to 1987, the percentage of women has never strayed from the narrow band between 29.9% and 33.7%.

    The recent increase is statistically significant, not just noise in the numbers: Given the large numbers in question, 36.1% is statistically higher than the previous high-water mark of 33.7% (two-proportion z test, p = .002).
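    For readers who want to check the statistics, the test can be reproduced from the graduation counts. Here's a minimal sketch in Python, assuming the 33.7% high-water mark corresponds to the 2016 counts reported below (2528 women of 7499 philosophy BAs):

```python
from math import sqrt
from statistics import NormalDist

# Women among philosophy BA recipients: 2018 vs. the 33.7% high-water mark
x1, n1 = 2768, 7670  # 2018: 36.1% women
x2, n2 = 2528, 7499  # assumed high-water year: 33.7% women

p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)                    # pooled proportion
se = sqrt(pooled * (1 - pooled) * (1/n1 + 1/n2))  # pooled standard error
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(z))           # two-sided

print(round(z, 2), round(p_value, 3))  # z = 3.07, p = 0.002
```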

    (As you probably already know, the gender ratios in philosophy are different from those in the other humanities, where women have long been a larger proportion of BA recipients -- for example in 2018 41% in history, 70% in foreign languages and literatures, and 71% in English language and literature.)

    3. Latinx philosophers continue to rise

    The percentage of philosophy BAs awarded to students identifying as Latino or Hispanic rose steadily from 8.3% in 2011 to 14.1% in 2018, closely reflecting a similar rise among Bachelor's recipients overall, from 8.3% to 13.0% across the same period. Among the racial or ethnic groups classified by NCES, only Black or African American are substantially underrepresented in philosophy compared to the proportion among undergraduate degree recipients as a whole: Black students were 5.3% of philosophy BA recipients in 2018, compared to 9.5% of Bachelor's recipients overall.

    Latinx students are also on the rise in the other big humanities majors, so in this respect philosophy is not unusual.

    4. Why is philosophy bucking the trend of the decline in the humanities?

    In 2016, 2528 women completed BAs in philosophy. In 2017, it was 2646. In 2018, it was 2768 -- an increase of 9.5% in women philosophy graduates. Excluding women, philosophy saw a slight decline. There was no comparable increase in the number of women graduating overall or graduating in the other humanities. Indeed, in history, English, and foreign languages the number of women graduates declined.
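    The arithmetic, for the curious -- a quick check using the counts just cited:

```python
# Women philosophy BAs (from the text) and totals (from the table above)
women = {2016: 2528, 2017: 2646, 2018: 2768}
total = {2016: 7499, 2017: 7577, 2018: 7670}

women_rise = (women[2018] - women[2016]) / women[2016]
print(f"{women_rise:.1%}")  # 9.5% rise in women philosophy graduates

# Excluding women, the major slightly declined over the same period
men_2016 = total[2016] - women[2016]  # 4971
men_2018 = total[2018] - women[2018]  # 4902
print(men_2018 - men_2016)  # -69
```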

    One possibility -- call me an optimist! -- is that philosophy has become more encouraging, or less discouraging, of women undergraduates, and this is starting to show in the graduation numbers. I will be very curious to run these numbers again in the next several years, to see if the trend continues.

    I do feel compelled to add the caveat that the number of women philosophy graduates is still below its peak of 2983 in 2012. The recent increases come in the context of a more general, broad-based decline in philosophy and the other humanities over the past decade. On the other hand, since philosophy graduation rates were relatively high in the early 2010s compared to previous years, maybe it would be expecting a lot to return to those levels.


    Methodological Note:

    Data from the NCES IPEDS database. I looked at all U.S. institutions in the IPEDS database, and I included both first and second majors. I used the major classification 38.01 specifically for Philosophy, excluding 38.00, 38.02, and 38.99. Only people who completed the degree are included in the data. "2010" refers to the academic year from 2009-2010, etc. The numbers for years 2010-2016 have changed slightly since my 2017 analysis, which might be due to some minor difference in how I've accessed the data or due to some corrections in the IPEDS database. Gender data start from 2010, which is when NCES reclassified the coding of undergraduate majors. Race/ethnicity data start from 2011, when NCES reclassified the race/ethnicity categories.

    [image adapted from the APA's Committee on the Status of Women]

    Update Dec 23: Philosophy as a Percentage of Humanities Majors

    Thursday, December 19, 2019

    Dreidel: A Seemingly Foolish Game That Contains the Moral World in Miniature

    [excerpt from A Theory of Jerks and Other Philosophical Misadventures, posted today on the MIT Press Reader]

    Superficially, dreidel looks like a simple game of luck, and a badly designed game at that. It lacks balance, clarity, and meaningful strategic choice. From this perspective, its prominence in the modern Hanukkah tradition is puzzling. Why encourage children to spend a holy evening gambling, of all things?

    This superficial perspective misses the brilliance of dreidel. Dreidel’s seeming flaws are exactly its virtues. Dreidel is the moral world in miniature.

    If you’re unfamiliar with the game, here’s a quick tutorial. You sit in a circle with friends or relatives and take turns spinning a wobbly top, the dreidel. In the center of the circle is a pot of foil-wrapped chocolate coins of varying sizes, to which everyone has contributed from an initial stake of coins they keep in front of them. If, on your turn, the four-sided top lands on the Hebrew letter gimmel, you take the whole pot and everyone needs to contribute again. If it lands on hey, you take half the pot. If it lands on nun, nothing happens. If it lands on shin, you put in one coin. Then the next player spins.

    It all sounds very straightforward, until you actually start to play the game. The first odd thing you might notice is that although some of the coins are big and others little, they all count as one coin in the rules of the game. This is inherently unfair, since the big coins contain more chocolate, and you get to eat your stash at the end. To compound the unfairness, there’s never just one dreidel — all players can bring their own — and the dreidels are often biased, favoring different outcomes. (To test this, a few years ago my daughter and I spun a sample of eight dreidels forty times each, recording the outcomes. One particularly cursed dreidel landed on shin an incredible 27 out of 40 times.) It matters a lot which dreidel you spin.
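    How cursed was that dreidel, exactly? A fair four-sided top should land on shin about a quarter of the time; a quick binomial test in Python shows that 27 shins in 40 spins is no mere bad luck:

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance that a fair dreidel (shin probability 1/4) lands shin
# at least 27 times in 40 spins
p_value = binom_tail(27, 40, 0.25)
print(p_value)  # vanishingly small: far less than one in a million
```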

    And the rules are a mess! No one agrees whether you should round up or round down with hey. No one agrees when the game should end, or under what conditions everyone should contribute again if the pot runs low after a hey. No one agrees on how many coins each player should start with or whether you should let people borrow coins if they run out. You could try appealing to various authorities on the internet, but in my experience people prefer to argue and employ varying house rules. Some people hoard their coins and their favorite dreidels. Others share dreidels but not coins. Some people slowly unwrap and eat their coins while playing, then beg and borrow from wealthy neighbors when their luck sours.

    Now you can, if you want, always push things to your advantage — always contribute the smallest coins in your stash, always withdraw the largest coins in the pot when you spin hey, insist on always using the “best” dreidel, always argue for rules interpretations in your favor, eat your big coins then use that as a further excuse to contribute only little ones, and so forth. You can do all this without ever breaking the rules, and you’ll probably win the most chocolate as a result.

    But here’s the twist and what makes the game so brilliant: The chocolate isn’t very good. After eating a few coins, the pleasure gained from further coins is minimal. As a result, almost all of the children learn that they would rather be kind and generous than hoard the most coins. The pleasure of the chocolate doesn’t outweigh the yucky feeling of being a stingy, argumentative jerk. After a few turns of maybe pushing only small coins into the pot, you decide you should put in a big coin next time, just to be fair to the others and to enjoy being perceived as fair by them.

    Of course, it also feels bad always to be the most generous one, always to put in big, take out small, always to let others win the rules arguments, and so forth, to play the sucker or self-sacrificing saint. Dreidel, then, is a practical lesson in discovering the value of fairness both to oneself and to others, in a context in which the rules are unclear, there are norm violations that aren’t rules violations, and both norms and rules are negotiable, varying by occasion — just like life itself, only with mediocre chocolate at stake. I can imagine no better way to spend a holy evening.

    [Originally published in the Los Angeles Times, Dec. 12, 2017]

    Thursday, December 12, 2019

    Argument Contest Deadline Coming December 31st!

    If you can write an argument that convinces research participants to donate a surprise bonus of $10 to charity at rates higher than a control group, Fiery Cushman and I will make you famous[1] and pay you $1000[2], and you might transform the practice of the Effective Altruism movement[3]. Whoa!

    Biggest effect size wins the contest.

    We'd love to have some awesome submissions to run, which might really produce an effect. In other words, your submission!

    Details here.


    [1] "Famous" in an extremely narrow circle.

    [2] Actually, we'll only pay you $500. The other $500 will go to a charity of your choice.

    [3] Probability value of "might" is 0.18%, per Eric's Bayesian credence.

    Wednesday, December 11, 2019

    Two Kinds of Ethical Thinking?

    Yesterday, over at the Blog of the APA, Michael J. Sigrist published a reflection on my work on the not-especially-ethical behavior of ethics professors. The central question is captured in his title: "Why Aren't Ethicists More Ethical?"

    Although he has some qualms about my attempts to measure the moral behavior of ethicists (see here for a summary of my measures), Sigrist accepts the conclusion that, overall, professional ethicists do not behave better than comparable non-ethicists. He offers this explanation:

    There's a kind of thinking that we do when we are trying to prove something, and then a kind of thinking we do when we are trying to do something or become a certain kind of person -- when we are trying to forgive someone, or be more understanding, or become more confident in ourselves. Becoming a better person relies on thinking of the latter sort, whereas most work in professional ethics -- even in practical ethics -- is exclusive to the former.

    The first type of thinking, "trying to prove something", Sigrist characterizes as universalistic and impersonal, the second type of thinking, "trying to do something", he characterizes as emotional, personal, and engaged with the details of ordinary life. He suggests that my work neglects or deprioritizes the latter, more personal, more engaged type of thinking. (I suspect Sigrist wouldn't characterize my work that way if he knew some other things I've written -- but of course there is no obligation for anyone to read my whole corpus.)

    The picture Sigrist appears to have in mind is something like this: The typical ethicist has their head in the clouds, thinking about universal principles, while they ignore -- or at least don't apply their philosophical skills to -- the particular moral issues in the world around their feet; and so it is, or should be, unsurprising that their philosophical ethical skills don't improve them morally. This picture resonates, because it has some truth in it, and it fits with common stereotypes about philosophers. If the picture is correct, it would tidily address the otherwise puzzling disconnection between philosophers' great skills at abstract ethical reflection and their not-so-amazing real-world ethical behavior.

    However, things are not so neat.

    Throughout his post, Sigrist frames his reflections primarily in terms of the contrast between impersonal thinking (about what people in general should do) and personal thinking (about what I in this particular, detailed situation should do). But real, living philosophers do not apply their ethical theories and reasoning skills only to the former; nor do thoughtful people normally engage in personal thinking without also reflecting from time to time on general principles that they think might be true (and indeed that they sometimes try to prove to their interlocutors or themselves, in the process of making ethical decisions). An ethicist might write only about trolley problems and Kant interpretation. But in that ethicist's personal life, when making decisions about what to do, sometimes philosophy will come to mind -- Aristotle's view of courage and friendship, Kant's view of honesty, whether some practical policy would be appropriately universalizable, conflicts between consequentialist and deontological principles in harming someone for some greater goal.

    A professional ethicist doesn't pass through the front door of their house and forget all of academic philosophy. Philosophical ethics is too richly and obviously connected to the particularities of personal life. Nor is there some kind of starkly different type of "personal" thinking that ordinary people do that avoids appeal to general principles. In thinking about whether to have children, whether to lie about some matter of importance, how much time or money to donate to charities, how much care one owes to a needy parent or sibling in a time of crisis -- in such matters, thoughtful people often do, and should, think not only about the specifics of their situation but also about general principles.

    Academic philosophical ethics and ordinary engaged ethical reflection are not radically different cognitive enterprises. They can and should, and in philosophers and philosophically-minded non-philosophers, merge and blend into each other, as we wander back and forth, fruitfully, between the general and the specific. How could it be otherwise?

    Sigrist is mistaken. The puzzle remains. We cannot so easily dismiss the challenge that I think my research on ethicists poses to the field. We cannot say, "ah, but of course ethicists behave no differently in their personal lives, because all of their expertise is only relevant to the impersonal and universal". The two kinds of ethical thinking that Sigrist identifies are ends of a continuum that we all regularly traverse, rather than discrete patterns of thinking that are walled off from each other without mutual influence.

    In my work and my personal life, I try to make a point of blending the personal with the universal and the everyday with the scholarly, rejecting any sharp distinction between academic and non-academic thinking. This is part of why I write a blog. This is part of the vision behind my recent book. I think Sigrist values this blending too, and means to be critiquing what he sees as its absence in mainstream Anglophone philosophical ethics. Sigrist has only drawn his lines too sharply, offering too simplified a view of the typical ethicist's ways of thinking; and he has mistaken me for an opponent rather than a fellow traveler.

    Thursday, December 05, 2019

    Self-Knowledge by Looking at Others

    I've published quite a lot on people's poor self-knowledge of their own stream of experience (e.g. this and this), and also a bit on our often poor self-knowledge of our attitudes, traits, and moral character. I've increasingly become convinced that an important but relatively neglected source of self-knowledge derives from one's assessment of the outside world -- especially one's assessment of other people.

    I am unaware of empirical evidence of the effectiveness of the sort of thing I have in mind (I welcome suggestions!), but here's the intuitive case.

    When I'm feeling grumpy, for example, that grumpiness is almost invisible to me. In fact, to say that grumpiness is a feeling doesn't quite get things right: There isn't, I suspect, a way that it feels from the inside to be in a grumpy mood. Grumpiness, rather, is a disposition to respond to the world in a certain way; and one can have that disposition while one feels, inside, rather neutral or even happy.

    When I come home from work, stepping through the front door, I usually feel (I think) neutral to positive. Then I see my wife Pauline and daughter Kate -- and how I evaluate them reveals whether in fact I came through that door grumpy. Suppose the first thing out of Pauline's mouth when I come through the door is, "Hi, Honey! Where did you leave the keys for the van?" I could see this as an annoying way of being greeted, I could take it neutrally in stride, or I could appreciate how Pauline is still juggling chores even as I come home ready to relax. As I strode through that door, I was already disposed to react one way or another to stimuli that might or might not be interpreted as annoying; but that mood-constituting disposition didn't reveal itself until I actually encountered my family. Casual introspection of my feelings as I approached the front door might not have revealed this disposition to me in any reliable way.

    Even after I react grumpily or not, I tend to lack self-knowledge. If I react with annoyance to a small request, my first instinct is to turn the blame outward: It is the request that is annoying. That's just a fact about the world! I either ignore my mood or blame Pauline for it. My annoyed reaction seems to me, in the moment, to be the appropriate response to the objective annoyingness of the situation.

    Another example: Generally, on my ten-minute drive into work, I listen to classic rock or alternative rock. Some mornings, every song seems trite and bad, and I cycle through the stations disappointed that there's nothing good to listen to. Other mornings, I'm like "Whoa, this Billy Idol song is such a classic!" Only slowly have I learned that this probably says more about my mood than about the real quality of the songs that are pleasing or displeasing me. Introspectively, before I turn on the radio and notice this pattern of reactions, there's not much I can find that would otherwise clue me in to my mood. Maybe I could introspect better and discover that mood in there somewhere, but over the years I've become convinced that my song assessment is a better mood thermometer, now that I've learned to think of it that way.

    One more example: Elsewhere, I've suggested that probably the best way to discover whether one is a jerk is not by introspective reflection ("hm, how much of a jerk am I?") but rather by noticing whether one regularly sees the world through "jerk goggles". Everywhere you turn, are you surrounded by fools and losers, faceless schmoes, boring nonentities? Are you the only reasonable, competent, and interesting person to be found? If so....

    As I was drafting this post yesterday, Pauline interrupted me to ask if I wanted to RSVP to a Christmas music singalong in a few weeks. Ugh! How utterly annoying I felt that interruption to be! And then my daughter's phone, plugged into the computer there, wouldn't stop buzzing with text messages. Grrr. Before those interruptions, I would probably have judged that I was in a middling-to-good mood, enjoying being in the flow of drafting out this post. Of course, as those interruptions happened, I thought of how suitable they were to the topic of this post (and indeed I drafted out this very paragraph in response). Now, a day later, my mood is better, and the whole thing strikes me as such a lovely coincidence!

    If I sit too long at my desk at work, my energy level falls. Every couple of hours, I try to get up and stroll around campus a bit. Doing so, I can judge my mood by noticing others' faces. If everyone looks beautiful to me, but in a kind of distant, unapproachable way, I am feeling depressed or blue. Every wart or seeming flaw manifests a beautiful uniqueness that I will never know. (Does this match others' phenomenology of depression? Before having noticed this pattern in my reactions to people, I might not have thought this would be how depression feels.) If I am grumpy, others are annoying obstacles. If I am soaring high, others all look like potential friends.

    My mood will change as I walk, my energy rising. By the time I loop back around to the Humanities and Social Sciences building, the crowds of students look different than they did when I first stepped out of my office. It seems like they have changed, but of course I'm the one who has changed.

    [image source]